Another great post from Bill Dimm.
I recently encountered a marketing piece in which a vendor claimed that their tests showed their predictive coding software performed favorably compared to the software tested in the 2009 TREC Legal Track for Topic 207 (finding Enron emails about fantasy football). I spent some time puzzling over how they could possibly have measured their performance when they didn’t actually participate in TREC 2009.
One might question how meaningful a comparison to 2009 results is, since the TREC participants have probably improved their software over the past six years. Still, how could you do the comparison if you wanted to? The stumbling block is that TREC did not produce a yes/no relevance determination for all of the Enron emails. Rather, they did stratified sampling and estimated recall and prevalence for the participating teams by producing relevance determinations for just a few thousand emails.
Stratified sampling means that the…
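To make the estimation idea concrete, here is a minimal sketch of how stratified sampling yields a prevalence estimate: the population is split into strata, a small sample is judged from each, and the stratum sample proportions are combined with weights proportional to stratum size. The stratum sizes and relevance rates below are invented for illustration and are not TREC 2009 data.

```python
import random

random.seed(0)

# Hypothetical strata: (stratum_size, underlying_fraction_relevant).
# These numbers are illustrative only, not TREC 2009 figures.
strata = [(50000, 0.01), (5000, 0.20), (1000, 0.60)]
sample_per_stratum = 200

total_docs = sum(size for size, _ in strata)

est_prevalence = 0.0
for size, frac_relevant in strata:
    # Judge a small random sample from this stratum (simulated here).
    judged = [random.random() < frac_relevant for _ in range(sample_per_stratum)]
    sample_proportion = sum(judged) / sample_per_stratum
    # Weight the stratum's sample proportion by its share of the population.
    est_prevalence += (size / total_docs) * sample_proportion

est_relevant = est_prevalence * total_docs
print(f"Estimated prevalence: {est_prevalence:.4f}")
print(f"Estimated relevant docs: {est_relevant:.0f}")
```

A recall estimate for a team works the same way: estimate the number of relevant documents the team retrieved (from judged samples of its retrieved set) and divide by the estimated total relevant in the whole collection, which is why only a few thousand judgments can stand in for yes/no calls on every email.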