Comments on: Remodeling Precision
http://comonad.com/reader/2009/remodeling-precision/
types, (co)monads, substructural logic

By: Aide Chatfield
Fri, 23 Dec 2011 16:53:22 +0000
http://comonad.com/reader/2009/remodeling-precision/comment-page-1/#comment-91959

Hello, you used to write excellent posts, but the last few have been kinda boring… I miss your great writing. The past few posts are just a little bit off track! Come on!

By: Edward Kmett
Wed, 16 Sep 2009 22:33:06 +0000
http://comonad.com/reader/2009/remodeling-precision/comment-page-1/#comment-11828

Hello Bob,

Thanks for the references! The TREC IR Measures overview seems to be exactly what I was looking for.

By: Bob Carpenter
Wed, 16 Sep 2009 18:01:31 +0000
http://comonad.com/reader/2009/remodeling-precision/comment-page-1/#comment-11812

For IR, no one ever measures true precision or recall, precisely because the denominator is so large that you can’t annotate all the docs as relevant or not. As “Pseudonym” said, IR researchers often measure the precision of the top results using measures like precision-at-N (the precision after N documents), or mean average precision (MAP), a mean of precisions-at-N for a sequence of N. Sometimes they measure area under the precision/recall or ROC curves (aka AUC).
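
As a concrete reference point, here is a small Haskell sketch of precision-at-N and (uninterpolated) average precision over a ranked list of binary relevance judgements. This is an editorial illustration, not part of Bob's comment; the names and conventions are invented for the example.

    -- Precision-at-N: the fraction of the first n ranked results that
    -- are relevant (True = relevant).
    precisionAtN :: Int -> [Bool] -> Double
    precisionAtN n ranked = fromIntegral hits / fromIntegral n
      where hits = length (filter id (take n ranked))

    -- Uninterpolated average precision: the mean of precision-at-k over
    -- every rank k at which a relevant document appears.  (TREC-style AP
    -- divides by the number of relevant documents in the collection;
    -- here we only see the retrieved list.)  MAP averages this over queries.
    averagePrecision :: [Bool] -> Double
    averagePrecision ranked
      | relevant == 0 = 0
      | otherwise     = sum precisions / fromIntegral relevant
      where
        relevant   = length (filter id ranked)
        precisions = [ precisionAtN k ranked
                     | (k, rel) <- zip [1 ..] ranked, rel ]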

The big web search engines, in particular, are concerned with precision “above the fold” (in the newspaper sense). That is, if you take a default install of IE or Firefox and do a search on Bing, Yahoo, or Google, what is the precision over just the results you can see on the screen? The chance of continuing to browse is not continuous in rank; basically, the fold induces a kink in your probability of going on to the next item.

There are also applications which are highly recall-oriented, like curating databases of protein interactions from the literature or intelligence analysis over the news. It’d still fit your model; it’d just give you a different likelihood of looking at another item. We’re particularly interested in precision at 99% recall or 99.9% recall for these situations.

The other major issue to consider is diversity of results. If I send you ten different versions of the same information, it’s not very useful even if they’re all “relevant” in the binary relevant/not relevant sense. The problem is that to measure this idea, you need a notion of relative information contribution of a new result given a set of other results.

What the IR folks call precision is what the epidemiologists call “positive predictive accuracy”. It’s basically the likelihood that you have a condition if you test positive for it, and it’s very useful exactly as stated in that context.
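
(In confusion-matrix terms, as a gloss on the above, with TP, FP, and FN the true-positive, false-positive, and false-negative counts:

    precision = positive predictive value = TP / (TP + FP)
    recall    = sensitivity               = TP / (TP + FN)

so high precision means a positive “test” result is likely to be correct.)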

You might want to consult the performance measures section of the Wikipedia entry on Information Retrieval (http://en.wikipedia.org/wiki/Information_retrieval#Performance_measures), or the TREC IR Measures overview (http://trec.nist.gov/pubs/trec15/appendices/CE.MEASURES06.pdf).

By: Edward Kmett
Wed, 16 Sep 2009 16:57:17 +0000
http://comonad.com/reader/2009/remodeling-precision/comment-page-1/#comment-11810

Jim, that sounds promising. I’ll take a look!

By: Jim F
Wed, 16 Sep 2009 15:35:28 +0000
http://comonad.com/reader/2009/remodeling-precision/comment-page-1/#comment-11805

Check out the work by Dr. John Wilbur at the National Center for Biotechnology Information at the National Library of Medicine. He was writing papers ca. 1991-93 about measuring the performance of document searching, mostly dealing with the medical literature in MEDLINE.

What he ended up with was a similar idea, but based on information-theoretic entropy. He called it relevance information.

If we assume R relevant documents in a corpus of D documents, the probability of any given document being relevant is uniformly R/D. If we score and then rank them by some procedure, we can then assume the probability is no longer uniform but (hopefully) decreases sharply down the ranking. The effectiveness of the scoring scheme is measured by the decrease in entropy over the whole distribution.
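
A rough Haskell rendering of that description (an editorial interpretation for illustration only, not Wilbur's actual formulation): treat each document as a Bernoulli variable and compare the total entropy under the uniform prior R/D with the total entropy of the per-rank relevance probabilities induced by the ranking.

    -- Binary entropy, in bits.
    binaryEntropy :: Double -> Double
    binaryEntropy p
      | p <= 0 || p >= 1 = 0
      | otherwise        = negate (p * logBase 2 p + q * logBase 2 q)
      where q = 1 - p

    -- Entropy decrease in the sense sketched above: d documents, r of
    -- them relevant, so the uniform prior for each document is r/d;
    -- ps are the per-rank relevance probabilities assigned after
    -- scoring and ranking.
    relevanceInformation :: Int -> Int -> [Double] -> Double
    relevanceInformation r d ps =
      fromIntegral d * binaryEntropy prior - sum (map binaryEntropy ps)
      where prior = fromIntegral r / fromIntegral d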

He examined some of the properties of this measure and felt that it captured the best of both precision and recall in one number and was reasonably robust, but AFAIK it never caught on in the literature.

I can’t find the original paper, and he seems to have moved away from it in any of his more recent papers.

(Disclosure — I used to work for him)

Wait — I found it:

An Information Measure of Retrieval Performance (1992)
by W J Wilbur

http://citeseerx.ist.psu.edu/showciting?cid=1837654

By: Edward Kmett
Wed, 16 Sep 2009 12:10:50 +0000
http://comonad.com/reader/2009/remodeling-precision/comment-page-1/#comment-11800

Pseudonym:

I agree, although the precision metric for a given recall level can be calculated using the same machinery.

I’m happy to learn, though. References are welcome! I hardly suspect that I came up with anything novel during a 2-hour whiteboard session two years back and a similar amount of time hacking things up in Perl. ;)

I just haven’t seen anything similar written up.

By: Pseudonym
Wed, 16 Sep 2009 07:52:16 +0000
http://comonad.com/reader/2009/remodeling-precision/comment-page-1/#comment-11791

Precision isn’t usually interesting on its own. What’s more important for a ranked search (like Google) is what proportion of the first n results are relevant for ALL n. This is why in real IR papers, you usually see a precision-recall curve.

You will occasionally also see precision-at-n for various n which are multiples of a hypothetical screenful (e.g. n=20).

By: Edward Kmett
Wed, 16 Sep 2009 02:34:39 +0000
http://comonad.com/reader/2009/remodeling-precision/comment-page-1/#comment-11782

Wren:

Yeah, you’ve got the metric. I used this once a couple of years back to pretty good effect.

That is a very good point about diversity. The metric I actually used did a much simpler version of what you proposed, in that I just considered a pre-ranked training set. And if you found an item that should be ranked, say, 4th sitting in the 3rd slot, you earned 3/4ths of the points, clamped to 1. Ties can then be allowed in the list of expected rankings when the distinctions between them aren’t clear. This doesn’t address diversity directly, but it does ensure that you can reach 100% precision given perfect ordering.
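
One way to read that scoring rule in code (an editorial reconstruction, not the Perl actually used; the averaging at the end is an added normalization so that a perfect ordering scores exactly 1):

    -- The item sitting in slot n, whose expected rank in the pre-ranked
    -- training set is e, earns min 1 (n / e): full credit if it belongs
    -- at or before this slot, a proportional fraction if it belongs later.
    slotScore :: Int -> Int -> Double
    slotScore slot expected =
      min 1 (fromIntegral slot / fromIntegral expected)

    -- Score a whole ranking, given the expected rank of the item placed
    -- in each successive slot.  slotScore 3 4 == 0.75, matching the
    -- example above; a perfect ordering scores 1.
    rankingScore :: [Int] -> Double
    rankingScore expectedRanks =
      sum (zipWith slotScore [1 ..] expectedRanks)
        / fromIntegral (length expectedRanks)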

By: wren ng thornton
Tue, 15 Sep 2009 23:37:50 +0000
http://comonad.com/reader/2009/remodeling-precision/comment-page-1/#comment-11775

So you’re suggesting \sum_n (p^n)*R(n), where R is a function taking the nth document to a relevance score on the 0..1 interval? That sounds very familiar, though I can’t quite pull up a name or reference for it. In particular it’s similar to measures of effective utility. The (true) utility of a particular state is fixed over time, but the effective utility of an action resulting in that state will be smaller the longer it takes between the action and the resulting payoff (due to lost opportunity cost, random chance of not reaching the goal, etc.). This sort of model is used often in game theory and similar approaches to AI and complex systems; so that may be somewhere to start looking.

Another interesting enhancement for this metric is that the R function needn’t select only from {0,1}. One particular direction to take this is to define R(n) dynamically in terms of R(0)..R(n-1) such that, for example, duplicate content is not considered relevant. This would allow for a more information theoretic approach where we want to maximize the diversity of content up front rather than presenting the same content over and over again. This is especially important if we assume the searcher will read from multiple documents, but it also provides some stability if we don’t trust our ability to discern relevance.
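
Put as code (an editorial sketch under the assumptions above; the marginal-relevance function in the second version is a placeholder for whatever notion of contribution you choose, and n is taken to start at 1):

    import Data.List (inits)

    -- \sum_n p^n * R(n) with a static relevance function, given here as
    -- a list of scores in the 0..1 interval indexed by rank.
    discountedRelevance :: Double -> [Double] -> Double
    discountedRelevance p rs = sum (zipWith (*) (iterate (* p) p) rs)

    -- The diversity-aware variant: each document's relevance is judged
    -- relative to the documents already seen.
    discountedMarginal :: Double -> (doc -> [doc] -> Double) -> [doc] -> Double
    discountedMarginal p marginal docs =
      sum [ p ^ n * marginal d seen
          | (n, d, seen) <- zip3 [1 :: Int ..] docs (inits docs) ]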
