Tue 15 Sep 2009
Two concepts come up in most standard documentation on information retrieval: precision and recall. Precision tells you whether your result set contains only results that are relevant to the query, and recall tells you whether your result set contains everything that is relevant to the query.
The formula for classical precision is:

    precision = |{relevant documents} ∩ {retrieved documents}| / |{retrieved documents}|
However, I would argue that the classical notion of precision is flawed, in that it doesn't model anything we tend to care about. Rarely are we interested in a binary classification; instead, we want a ranked classification of relevance.
When Google tells you that you have a million results, do you care? No, you skim the first few entries for what it is that you are looking for, unless you are particularly desperate for an answer. So really, you want a metric that models the actual behavior of a search engine user and that level of desperation.
There are two issues with classical precision:
- the denominator of precision goes to infinity as the result set increases in size
- each result is worth the same amount no matter where it appears in the list
The former ensures that a million answers drown out any value from the first screen; the latter ensures that it doesn't matter which results are on the first screen. A more accurate notion of precision suitable for modern search interfaces should model the prioritization of the results, and should allow for a long tail of crap as long as the stuff that people will actually look at is accurate overall.
So how to model user behavior? We can replace the denominator with a partial sum of a geometric series for a probability p < 1, where p models the chance that a user will continue to browse to the next item in the list. Then you can scale the value of the nth summand in the numerator as being worth up to p^n. If you have a ranked training set it is pretty easy to score precision in this fashion.
You retain all of the desirable properties of precision: it maxes out at 100% and it decreases when you give irrelevant results, but now it penalizes irrelevant results more heavily when they appear early in your result list.
The result more accurately models user behavior when faced with a search engine than the classical binary precision metric. The parameter p models the desperation of the user and can vary to fit your problem domain. I personally like p = 50%, because it makes for nice numbers, but it should probably be chosen by sampling, informed by knowledge of the search domain.
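For concreteness, here is a minimal sketch of this scoring in Python. The name and representation are illustrative; it assumes one binary relevance judgment per returned result, in rank order, and the continuation probability p:

    def discounted_precision(relevant, p=0.5):
        """Discounted precision over a ranked result list.

        `relevant` holds one boolean per returned result, in rank order, and
        `p` is the probability that the user keeps browsing past each result.
        The result in slot n is worth up to p**n, and the denominator is the
        partial geometric sum over the returned list, so a list of entirely
        relevant results scores 1.0.
        """
        weights = [p ** n for n in range(len(relevant))]
        numerator = sum(w for w, r in zip(weights, relevant) if r)
        denominator = sum(weights)
        return numerator / denominator if denominator else 0.0

    # With p = 0.5, a [relevant, irrelevant, relevant] page scores
    # (1 + 0.25) / (1 + 0.5 + 0.25) = 0.714...
    print(discounted_precision([True, False, True], p=0.5))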
You can of course embellish this model with a stair-step in the cost function on each page boundary, etc. — any monotone decreasing infinite series that sums to a finite number in the limit should do.
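The same machinery extends to any such weight sequence; a hypothetical stair-step variant that adds an extra penalty at a page boundary of ten results might look like this:

    def discounted_precision_weighted(relevant, weight):
        """Like discounted_precision, but with an arbitrary decreasing
        weight function over rank positions."""
        weights = [weight(n) for n in range(len(relevant))]
        numerator = sum(w for w, r in zip(weights, relevant) if r)
        denominator = sum(weights)
        return numerator / denominator if denominator else 0.0

    # Hypothetical weighting: geometric decay with p = 0.9, halved again for
    # anything that falls onto the second page of ten results.
    paged_weight = lambda n: (0.9 ** n) * (0.5 if n >= 10 else 1.0)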
A similar modification can of course be applied to recall.
I used this approach a couple of years ago to help tune a search engine to good effect. I went to refer someone to this post today and I realized I hadn't posted it in the almost two years since it was written, so here it is, warts and all.
If anyone is familiar with similar approaches in the literature, I'd be grateful for references!
September 15th, 2009 at 6:37 pm
So you’re suggesting \sum_n (p^n)*R(n) where R is a function taking the nth document to a relevance score on the 0..1 interval? That sounds very familiar, though I can’t quite pull up a name or reference for it. In particular it’s similar to measures of effective utility. The (true) utility of a particular state is fixed over time, but the effective utility of an action resulting in that state will be lesser the longer it takes between the action and the resulting payoff (due to lost opportunity cost, random chance of not reaching the goal, etc). This sort of model is used often in game theory and similar approaches to AI and complex systems; so that may be somewhere to start looking.
Another interesting enhancement for this metric is that the R function needn’t select only from {0,1}. One particular direction to take this is to define R(n) dynamically in terms of R(0)..R(n-1) such that, for example, duplicate content is not considered relevant. This would allow for a more information theoretic approach where we want to maximize the diversity of content up front rather than presenting the same content over and over again. This is especially important if we assume the searcher will read from multiple documents, but it also provides some stability if we don’t trust our ability to discern relevance.
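(For concreteness, a graded-relevance variant of the sketch above, with R(n) taking values in [0, 1] rather than {0, 1}, could look like the following; the dynamic, diversity-aware R is omitted since it needs a document-similarity measure.)

    def graded_discounted_precision(relevance, p=0.5):
        """Variant of discounted_precision where each result carries a
        graded relevance score R(n) in [0, 1] instead of a binary
        judgment."""
        weights = [p ** n for n in range(len(relevance))]
        numerator = sum(w * r for w, r in zip(weights, relevance))
        denominator = sum(weights)
        return numerator / denominator if denominator else 0.0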
September 15th, 2009 at 9:34 pm
Wren:
Yeah, you’ve got the metric. I used this once a couple of years back to pretty good effect.
That is a very good point about diversity. The metric I actually used did a much simpler version of what you proposed, in that I just considered a pre-ranked training set. And if you found an item that should be ranked up to, say, 4th in the 3rd slot, you earned 3/4ths of the points, clamped to 1. Ties can then be allowed in the list of expected rankings when the distinctions between them aren't clear. This doesn't address diversity directly, but it does ensure that you can reach 100% precision given perfect ordering.
September 16th, 2009 at 2:52 am
Precision isn’t usually interesting on its own. What’s more important for a ranked search (like Google) is what proportion of the first n results are relevant for ALL n. This is why in real IR papers, you usually see a precision-recall curve.
You will occasionally also see precision-at-n for various n which are multiples of a hypothetical screenful (e.g. n=20).
September 16th, 2009 at 7:10 am
Pseudonym:
I agree, although the precision metric for a given recall level can be calculated using the same machinery.
I’m happy to learn though. References are welcome! I hardly suspect that I came up with anything novel during a 2 hour whiteboard session two years back and a similar amount of time hacking things up in perl. ;)
I just haven’t seen anything similar written up.
September 16th, 2009 at 10:35 am
Check out the work by Dr. John Wilbur at the National Center for Biotechnology Information at the National Library of Medicine. He was writing papers ca. 1991-93 about measuring performance of document searching, mostly dealing with the medical literature in MEDLINE.
What he ended up with was a similar idea, but based on information-theoretic entropy. He called it relevance information.
If we assume R relevant documents in a corpus of D documents, the probability of any given document being relevant is uniformly R/D. If we score and then rank them by some procedure we can then assume the probability is no longer uniform, but decreasing (hopefully) sharply over the rankings. The effectiveness of the scoring scheme is measured by the decrease in entropy over the whole distribution.
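(One possible reading of that description, sketched below purely for illustration; this is a paraphrase, not the paper's actual definition.)

    import math

    def binary_entropy(q):
        """Entropy in bits of a single yes/no relevance judgment with
        probability q of being relevant."""
        if q <= 0.0 or q >= 1.0:
            return 0.0
        return -q * math.log2(q) - (1.0 - q) * math.log2(1.0 - q)

    def relevance_information(rank_probs, num_relevant):
        """Decrease in total entropy when the uniform prior R/D is replaced
        by per-rank relevance probabilities, one per document in rank order.
        Illustrative reading only, not Wilbur's exact formula."""
        D = len(rank_probs)
        prior = D * binary_entropy(num_relevant / D)
        posterior = sum(binary_entropy(q) for q in rank_probs)
        return prior - posterior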
He examined some of the properties of this measure and felt that it captured the best of both precision and recall in one number and was reasonably robust, but AFAIK it never caught on in the literature.
I can't find the original paper, and he seems to have moved away from it in his more recent papers.
(Disclosure — I used to work for him)
Wait — I found it:
An Information Measure of Retrieval Performance (1992)
by W J Wilbur
http://citeseerx.ist.psu.edu/showciting?cid=1837654
September 16th, 2009 at 11:57 am
Jim, that sounds promising. I’ll take a look!
September 16th, 2009 at 1:01 pm
For IR, no one ever measures true precision or recall, precisely because the denominator is so large that you can’t annotate all the docs as relevant or not. As “Pseudonym” said, IR researchers often measure the precision of the top results using measures like precision-at-N (the precision after N documents), or mean average precision (MAP), a mean of precisions-at-N for a sequence of N. Sometimes they measure area under the precision/recall or ROC curves (aka AUC).
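(A quick sketch of the two cutoff-based measures mentioned here, for readers who haven't run into them; names and defaults are illustrative.)

    def precision_at_n(relevant, n):
        """Fraction of the top n results that are relevant."""
        return sum(relevant[:n]) / n

    def average_precision(relevant, total_relevant=None):
        """Average of precision-at-k over the ranks k where a relevant
        result appears; MAP averages this over a set of queries. The
        standard definition divides by the total number of relevant
        documents for the query, which can be passed as total_relevant."""
        hits, running = 0, 0.0
        for k, r in enumerate(relevant, start=1):
            if r:
                hits += 1
                running += hits / k
        denominator = total_relevant if total_relevant is not None else hits
        return running / denominator if denominator else 0.0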
The big web search engines, in particular, are concerned with precision "above the fold" (in the newspaper sense). That is, if you take a default install of IE or Firefox and do a search on Bing, Yahoo, or Google, what's the precision over the results you can see on the screen? The chance to continue browsing is not continuous; basically, that would induce a kink in your probability of going on to the next item.
There are also applications which are highly recall oriented, like curating databases of protein interactions from the literature or intelligence analysis over the news. It’d still fit your model, it’d just give you a different likelihood of looking at another item. We’re particularly interested in precision at 99% recall or 99.9% recall for these situations.
The other major issue to consider is diversity of results. If I send you ten different versions of the same information, it’s not very useful even if they’re all “relevant” in the binary relevant/not relevant sense. The problem is that to measure this idea, you need a notion of relative information contribution of a new result given a set of other results.
What the IR folks call precision is what the epidemiologists call “positive predictive accuracy”. It’s basically the likelihood that you have a condition if you test positive for it, and it’s very useful exactly as stated in that context.
You might want to consult the section of the Wikipedia entry Information Retrieval about performance measures, or the TREC IR Measures overview.
September 16th, 2009 at 5:33 pm
Hello Bob,
Thanks for the references! The TREC IR Measures overview seems to be exactly what I was looking for.