I had a great conversation with Chris Sherman at Web Search University about search results, which brought home to me an enormous disconnect between the SEO and info pro communities. (Gee, what a surprise!)
He and I were comparing Google results for a particular search technique. Of the first 10 results, the second and sixth hits were spot on. Since it was a somewhat amorphous topic, I expected to have to wade through a lot of material to find what I was looking for. Two good hits out of the first 10 seemed pretty good to me. Chris labelled those results a #fail, since 80% of the results weren't useful.
My response: It's a question of precision vs. recall. Search engine marketers consider results with less than 90% relevance to be a failed search, because the precision wasn't high enough. Info pros, on the other hand, are often more concerned about recall; we want to make sure that we find as many useful resources as possible. For a difficult search, 10% or 20% relevance is just fine.
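To make the distinction concrete, here is a minimal sketch of precision and recall in Python. The document sets and scores are invented for illustration; the relevant set mirrors the anecdote above, where hits landed in the second and sixth positions of the first page.

```python
# Precision: what fraction of the results you retrieved are relevant?
# Recall: what fraction of all relevant documents did you retrieve?
# (Hypothetical documents, chosen to mirror the anecdote above.)

def precision(results, relevant):
    """Fraction of retrieved results that are relevant."""
    hits = sum(1 for r in results if r in relevant)
    return hits / len(results) if results else 0.0

def recall(results, relevant):
    """Fraction of all relevant documents that were retrieved."""
    hits = sum(1 for r in results if r in relevant)
    return hits / len(relevant) if relevant else 0.0

# Suppose 5 relevant documents exist somewhere for this amorphous topic.
relevant = {"doc2", "doc6", "doc14", "doc23", "doc31"}

# First page of 10 results: the second and sixth hits are the good ones.
page_one = ["doc1", "doc2", "doc3", "doc4", "doc5",
            "doc6", "doc7", "doc8", "doc9", "doc10"]

print(precision(page_one, relevant))  # 0.2 -- a "#fail" to a marketer
print(recall(page_one, relevant))     # 0.4 -- 2 of 5 relevant docs found
```

The same page of results scores 20% on precision (the marketer's complaint) while already recovering 40% of everything relevant, which is why the two communities can look at identical results and reach opposite verdicts.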
And this comes back to my ongoing theme that we info pros are far more data-tolerant than most people. Wading through page after page of search results is just fine with us, as long as we find something useful. And we often forget that our clients are far less data-tolerant than we are. One of the biggest challenges for info pros is limiting our results to the depth and breadth that our clients need. We want to keep digging deeper and to send our clients all the fascinating information we found, forgetting that this may be doing them a disservice.
For more on distillation and making information more user-friendly, see my talk on adding value at BatesInfo.com/extras
Hi Ellen,
Great post. This is a typical haunting thought for those concerned with search engines and their quality. I want to share an insight which basically comes down to this surprising fact: a good search engine is more than a great algorithm. Half the work goes into the user interface, facets, autocomplete, the data structure of the index, and all the other features which support users in developing a good query.
Poor quality query + Great Search Algorithm = Poor Results
Great quality query + Poor Search Algorithm = Good Results
Needless to say, if both the quality of the query and the matching algorithms are good, results will probably be optimal.
Am I blaming poor search results quality on ignorant users who couldn't develop an advanced search query like ["poor search * quality" -site:google.com -"marketing"] on their own? No, of course not. It's up to the software to make it easy to develop optimal queries, using suggestions, autocomplete, search facets, etc.
I have always been an avid searcher and loved to do research at the library during my college and grad school days in the late 80s and early 90s. Then I worked for Infoseek, Netscape, and a bunch of web search engine companies in the 90s. I often had the exact same thought as Ellen in her post.
The difference between the synthetic mind of the average search user and the analytic mind of the research pro is, I believe, the following: The average search user would rather receive two relevant hits for his query and nothing else. At least that's what the marketers "monetizing" the search engines with ads at Google and other companies would like to think ;-) He/she has little time and always wishes that the "magic of search" would guess the exact desired result even when his/her query doesn't actually express it very precisely. The pro user is different. He/she wants to use his/her brain to analyze and discover the boundary of the semantic field being searched. Example: a search for "Japanese Raw Fish Cooking" might lead to other words, like "sushi," which will become part of the final, ideal advanced search query, such as ["Japanese Raw Fish Cooking" AND "Sushi"].
This concept is important because now that I work for a company that builds custom search engines, we find customers who want all sorts of precision. With our software we can tune the "search corridor" to be rather broad or rather precise. But beyond this balancing of the conflicting influences of noise (too many irrelevant results) and silence (too few results, or even zero results), what became obvious to me over the years was that, just as in the library, a good search session involves the careful development of one or several ideal queries. One often starts a session with only a vague idea of what the search query should be. That's why I am a big fan of Google's instant search, its suggest tool, faceted search, and other such tools which, in essence, guide the user towards the ideal query. We have also developed such an autocomplete tool, which we even offer as a SaaS tool for ecommerce (Exorbyte Commerce - sorry, shameless plug here).
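The noise-vs-silence trade-off described above can be sketched as a simple score threshold: raise it and you risk silence, lower it and you accept noise. This is only a toy illustration of the idea (the documents and scores are invented, and real engines tune far more than a single cutoff):

```python
# A toy "search corridor": a relevance-score cutoff that trades
# noise (too many irrelevant results) against silence (too few).
# Documents and scores are invented for the example.

scored = [("sushi guide", 0.95), ("sashimi basics", 0.80),
          ("fish market hours", 0.45), ("cooking oil review", 0.10)]

def corridor(scored_results, threshold):
    """Keep only results whose score meets the threshold."""
    return [doc for doc, score in scored_results if score >= threshold]

print(corridor(scored, 0.9))   # narrow corridor -- risk of silence
print(corridor(scored, 0.05))  # wide corridor   -- risk of noise
```

A marketer's engine would ship with the narrow setting; an info pro's engine with the wide one, which is exactly the disagreement in the original post.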
Wow, that was a long comment. I'll just have to post to our blog with a link to yours, I guess.
Cheers,
Dan
Posted by: Dan Nicollet - Exorbyte | April 07, 2011 at 05:59 PM