The training dataset is said to "consist of a stream of data collected over 1000 hours of simulation, divided into a hundred of 10-hour long independent cycles." Is it the case that each of the 100 cycles was generated using different parameters, and that within each cycle there are 10 independent 1-hour simulations using the same parameters? Or does each 10-hour cycle instead represent a single continuous stream of data, covering, for example, 12:00-22:00?

I assume it is the first, so that within each cycle a contestant can estimate a parameter vector for his or her particular model 10 times, once for each of the 1-hour periods. These 10 estimates of the parameter vector should then be fairly consistent within each cycle, but differ among the 100 cycles.

Thanks.

Kem