Hello Marcin et al.,
Try as I might, I couldn't win the basic track; I topped out at 0.74 across numerous, quite different best attempts
(congratulations to the eventual winners).
I have been developing a novel ensembling algorithm, which showed it worked by raising the balanced accuracy from 0.68 (the LOOCV best-single-classifier baseline on each set) to that level on two of the sets, 2 and 4, but the expected gain on the remaining four sets did not materialise.
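For reference, the baseline figure I quote comes from something along these lines (a rough sketch using scikit-learn; the candidate classifiers and function name are illustrative placeholders, not my actual pipeline or the ensembling method itself):

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import balanced_accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def best_single_classifier_baseline(X, y):
    """LOOCV balanced accuracy of the best single classifier on one data set."""
    candidates = {
        "logreg": LogisticRegression(max_iter=1000),
        "svm": SVC(),
        "rf": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    scores = {}
    for name, clf in candidates.items():
        # Leave-one-out predictions for every sample, then balanced accuracy over all folds
        preds = cross_val_predict(clf, X, y, cv=LeaveOneOut())
        scores[name] = balanced_accuracy_score(y, preds)
    best = max(scores, key=scores.get)
    return best, scores[best]

The ensemble is then judged against that per-set baseline score.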
This left me puzzled, and disappointed at having wasted my last attempts on "something new" in the final two days, when I should have kept to the chosen track.
I'm sure some others among the many entrants would like to finalise their methods without the limitation of time (could the 100-submission counter be reset?),
and, when presented with the stated plan ("we'll publish the datasets and evaluation procedure, so everyone will be able to perform the appropriate calculations themselves"), would prefer to keep this an independent test. If implementing a post-challenge submission system, as in the KDD Cup, is not feasible for you, do you have an estimate of when you will publish the key files, so we can see where we went wrong?
In any case, I wish to thank you for a meaningful competition.