Currently, observations are matched against other citizen scientists' observations. I wonder if I can get a tooltip in there to define the terms a little bit.
Basically, it checks all of your observations in an image against all observations made by other citizen scientists on the same image. If all four corners of your observation and another observation are within 10 pixels of each other, we consider that a match -- there's a paper available on how we determined that to be the optimal matching algorithm for this set, compared to a few other choices and parameters.
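For anyone curious, here's a minimal sketch of that corner rule in Python. The `Box` class, function names, and the 10-pixel constant name are my own illustration of the idea as described above, not the project's actual code:

```python
from dataclasses import dataclass
from itertools import product
import math

MATCH_THRESHOLD_PX = 10  # per-corner distance cutoff (assumed name)

@dataclass(frozen=True)
class Box:
    """Axis-aligned bounding box for one observation."""
    left: float
    top: float
    right: float
    bottom: float

    def corners(self):
        return [
            (self.left, self.top),
            (self.right, self.top),
            (self.left, self.bottom),
            (self.right, self.bottom),
        ]

def boxes_match(a: Box, b: Box, threshold: float = MATCH_THRESHOLD_PX) -> bool:
    """True if every corner of `a` is within `threshold` pixels
    of the corresponding corner of `b`."""
    return all(
        math.dist(ca, cb) <= threshold
        for ca, cb in zip(a.corners(), b.corners())
    )

def find_matches(mine: list[Box], others: list[Box]) -> list[tuple[Box, Box]]:
    """All (my box, other box) pairs that satisfy the corner rule."""
    return [(a, b) for a, b in product(mine, others) if boxes_match(a, b)]
```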
We're currently doing analysis on expert observations (made by people Susan et al. have trained specifically), raw citizen scientist observations, and matched citizen science observations. We just submitted another paper on that, but the basic finding is that raw citizen science observations are too variable on the same object to make good machine learning candidates, but when matched together, they're nearly as good as the experts'.
In the future, we're also going to match against expert observations and observations that our machine learning has detected. I'd like to put some nice web pages together so we can make stats, boxes drawn over images, etc. publicly accessible for some nice visualization.
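The box-drawing part is pretty simple to prototype. A rough sketch using Pillow, reusing the hypothetical `Box` class from the earlier snippet (file paths are placeholders):

```python
from PIL import Image, ImageDraw

def draw_boxes(image_path: str, boxes: list[Box], out_path: str) -> None:
    """Overlay each observation's bounding box on the image and save it."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for box in boxes:
        draw.rectangle((box.left, box.top, box.right, box.bottom),
                       outline="red", width=2)
    img.save(out_path)
```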