Competing With Google – What Can Be Exploited?

A few emerging search engines are trying to take on Google at the semantic level, forgetting that Google has already patented a search formula based on natural language. Of course, "natural language" and "semantic search" are not the same thing, but Google remains ahead of the competition because it has something they don't: a natural-language search algorithm.

So far we've seen the algorithm work pretty well. A search for "chocolate" returns mostly websites related to the term, led by Wikipedia, and even the sponsored results are quite accurate. That's a happy case, though. Google is not infallible.

In an article at Search Engine Watch, Erik Qualman shows that Google has weaknesses that can be exploited. He makes a good point analysing the search results for "Hotels in Memphis," where Google's sponsored results include listings for hotels in Chicago. Erik's conclusion follows logically: such results make for a confusing user experience. Someone who types "hotels in Memphis" certainly doesn't want hotels in Chicago.
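To make Erik's point concrete, here is a minimal, hypothetical sketch of the kind of geographic sanity check those sponsored results seem to be missing. The listings, the city list, and the function names are all invented for illustration; this is not Google's actual ad-serving logic, just a naive filter that drops sponsored listings whose location doesn't match the city named in the query.

```python
# Hypothetical sketch: a naive geographic filter for sponsored listings.
# Invented data and names, purely to illustrate the "Memphis vs. Chicago" mismatch.

SPONSORED_LISTINGS = [
    {"title": "Downtown Memphis Hotel Deals", "city": "Memphis"},
    {"title": "Chicago Loop Hotels from $89", "city": "Chicago"},
]

KNOWN_CITIES = {"memphis", "chicago", "nashville"}


def query_city(query):
    """Return the first known city mentioned in the query, or None."""
    for word in query.lower().replace(",", " ").split():
        if word in KNOWN_CITIES:
            return word
    return None


def relevant_sponsored(query, listings):
    """Keep only listings whose city matches the city named in the query."""
    city = query_city(query)
    if city is None:
        return listings  # no location in the query: nothing to filter on
    return [ad for ad in listings if ad["city"].lower() == city]


print(relevant_sponsored("hotels in Memphis", SPONSORED_LISTINGS))
# Only the Memphis listing survives; the Chicago ad is filtered out.
```

Even a check this crude would have kept the Chicago ad off a Memphis results page, which is all Erik's criticism asks for.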

An irrelevant result is the last thing a search engine wants to serve. Google is still the best when it comes to general search queries, but to be perfect it would need to refine the sponsored results too and make them user specific. Google is trying to achieve this through personalized search, and so far the results are not disappointing.

Search Wikia is trying to combine the human factor with semantic technology for even more accurate results. Such developments take time, and there's no way to tell whether they'll succeed. But anything meant to enhance the user experience is worth a try.
