How do Internet search engines work?

HeXp£Øi±

Javed Mostafa, the Victor Yngve Associate Professor of information science and director of the Laboratory of Applied Informatics at Indiana University, Bloomington, explains.

It has been estimated that the amount of textual information accessible via search engines is at least 40 times larger than the digitized content of all the books in the Library of Congress, the world's largest library. Providing access to such a large volume of information is a challenge, yet current search engines do remarkably well at sifting through the content and identifying links relevant to a query.

There is a multitude of information providers on the web. These include the commonly known and publicly available sources such as Google, InfoSeek, NorthernLight and AltaVista, to name a few. A second group of sources--sometimes referred to as the "hidden web"--is much larger than the public web in terms of the amount of information it provides. This latter group includes sources such as Lexis-Nexis, Dialog, Ingenta and LoC. They remain hidden for various reasons: they may not allow other information providers access to their content; they may require a subscription; or they may demand payment for access. This article is concerned with the former group, the publicly available web search services, collectively referred to here as search engines.

Search engines employ various techniques to speed up searches. Some of the common techniques are briefly described below.

Preprocessed Data

One way search engines save time is by preprocessing the content of the web. That is, when a user issues a query, the query is not sent to millions of web sites. Instead, the matching takes place against preprocessed data stored at one site. The preprocessing is carried out with the aid of a software program called a crawler. The crawler is sent out periodically by the database maintainers to collect web pages. A specialized computer program parses the retrieved pages to extract words. These words are then stored, along with links to the corresponding pages, in an index file. Users' queries are matched against this index file, not against other web sites.
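
To make the idea concrete, here is a minimal Python sketch of a crawler building an inverted index; the seed URL, the tag-stripping regex and the word extraction are simplifying assumptions for illustration, not details from the article.

```python
import re
import urllib.request
from collections import defaultdict

# Hypothetical seed list; a real crawler would also follow links it discovers.
SEED_URLS = ["https://example.com/"]

def crawl_and_index(urls):
    """Fetch each page, extract its words and record them in an inverted index."""
    index = defaultdict(set)  # word -> set of URLs whose pages contain it
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="ignore")
        except OSError:
            continue  # skip unreachable pages
        text = re.sub(r"<[^>]+>", " ", html)  # crude removal of HTML tags
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(url)
    return index

if __name__ == "__main__":
    index = crawl_and_index(SEED_URLS)
    # A query is answered from the index file, not by contacting web sites.
    print(index.get("example", set()))
```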

Smart Representation

In this technique, the representation for the index is carefully selected with an eye toward minimizing search time. Information scientists have produced an efficient data structure called a tree that guarantees significantly shorter overall search time than a search conducted against a sequential list. To accommodate searches conducted by many users simultaneously and to eliminate "wait queues," the index is usually duplicated on multiple computers at the search site.
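
As a toy illustration of a tree-based index, the Python sketch below keys a plain binary search tree on index words; the exact structure used by any particular engine is not specified in the article, and production systems use far more elaborate variants.

```python
class Node:
    """One node of a binary search tree keyed on an index word."""
    def __init__(self, word, urls):
        self.word, self.urls = word, urls
        self.left = self.right = None

def insert(root, word, urls):
    """Insert a word and its posting list, keeping the tree ordered."""
    if root is None:
        return Node(word, urls)
    if word < root.word:
        root.left = insert(root.left, word, urls)
    elif word > root.word:
        root.right = insert(root.right, word, urls)
    return root

def search(root, word):
    """Each comparison discards one subtree, so a balanced tree of n words
    is searched in roughly log(n) steps instead of the n steps a list needs."""
    while root is not None:
        if word == root.word:
            return root.urls
        root = root.left if word < root.word else root.right
    return None

root = None
for w in ["window", "blind", "ambition"]:
    root = insert(root, w, {f"https://example.com/{w}"})
print(search(root, "blind"))
```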

Prioritizing Results

The URLs or links produced as a result of searches are usually numerous. But because of the ambiguities of language (for instance, "window blind" versus "blind ambition"), the resulting links are generally not equally relevant to a user's query. To provide quicker access to the most relevant records (and to place them at or near the top), the search algorithm applies various ranking strategies. A common ranking method known as term frequency-inverse document frequency (TF-IDF) considers the distribution of words and their frequencies and generates numerical weights for words signifying their importance in individual documents. Words that are highly frequent (such as 'or,' 'to' or 'with') and that appear in many documents receive substantially less weight than words that are semantically more relevant and appear in relatively few documents.
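
A small worked example of this weighting scheme, using three invented documents, might look like the following Python sketch.

```python
import math
from collections import Counter

# Invented documents for illustration only.
docs = {
    "doc1": "the window blind blocks the light",
    "doc2": "blind ambition drove the campaign",
    "doc3": "the light was too bright for the window",
}

def tfidf(docs):
    """Weight of a word in a document = term frequency * log(N / document frequency)."""
    tokenized = {name: text.split() for name, text in docs.items()}
    n_docs = len(tokenized)
    doc_freq = Counter()
    for words in tokenized.values():
        doc_freq.update(set(words))
    weights = {}
    for name, words in tokenized.items():
        term_freq = Counter(words)
        weights[name] = {w: (term_freq[w] / len(words)) * math.log(n_docs / doc_freq[w])
                         for w in term_freq}
    return weights

weights = tfidf(docs)
print(weights["doc1"]["the"])    # appears in every document -> weight 0
print(weights["doc1"]["blind"])  # appears in fewer documents -> positive weight
```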

In addition to term weighting, web pages can be weighted using other strategies. For example, link analysis considers the nature of each page in terms of its association with other pages: whether it is an authority (many other pages point to it) or a hub (it points to many other pages). The highly successful Google search engine uses link analysis to improve the ranking of its search results.
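
Google's link analysis is based on its PageRank algorithm; the sketch below is a simplified power-iteration version of that idea, applied to a hypothetical three-page link graph, and it ignores dangling pages rather than handling them properly.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Simplified PageRank: each page spreads its score evenly over the pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # dangling pages simply leak rank in this sketch
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical graph: A and C both point to B, so B ends up with the highest rank.
print(pagerank({"A": ["B"], "B": ["C"], "C": ["B"]}))
```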

Context and Distance

In order to identify the most relevant links quickly, certain search engines compare query terms with contextual information, such as recent queries the user has submitted. This technique is sometimes referred to as query caching; it involves collecting the words from recent queries and using them to disambiguate, refine or expand the current query. Another way certain information providers speed up the delivery of search results is by using a distributed delivery model, whereby copies of the index and related content are duplicated and moved to multiple geographic locations to shorten the network distance between users and content. Content providers work with third-party services such as Akamai to implement distributed content delivery.
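
As a rough sketch of the query-context idea (the class name and the expansion rule below are assumptions for illustration, not details from the article), the words of recent queries can be kept in a short history and appended to an ambiguous new query.

```python
from collections import deque

class QueryContext:
    """Keeps a short history of recent queries and uses it to expand the current one."""
    def __init__(self, history_size=5):
        self.recent = deque(maxlen=history_size)

    def expand(self, query):
        """Add words from recent queries that the current query lacks."""
        context_words = {w for q in self.recent for w in q.split()}
        extra = context_words - set(query.split())
        self.recent.append(query)
        return query if not extra else query + " " + " ".join(sorted(extra))

ctx = QueryContext()
ctx.expand("window treatments")
print(ctx.expand("blind"))  # -> "blind treatments window": context favors the household sense
```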

Limitations

There are costs associated with some of the speed-up techniques described above. The separation of the organizations conducting the indexing from the organizations that produce the actual content can lead to so-called rotting links, which point to pages that no longer exist. Alternatively, links to new web content may be missing. Both rotting and missing links can occur because of delays in crawling or re-indexing.

Some crawlers retrieve pages blindly, without attention to either the reputation or the authority of information providers. This process encourages manipulation of indexing for malicious purposes. One common phenomenon is called index-spamming: sites wishing to artificially increase their ranking in search results may place thousands of words in pages using font colors that match the background of the pages. This procedure hides the words from viewers but makes them available to indexes.

Finally, by taking advantage of a feature of web server software, information providers can return different pages for the same request made by different hosts. This has led to page-jacking, whereby a site copies a competitor's page, has it indexed by a search engine host as its own, and directs requests from other hosts for the original page to alternative content or sites.
Source:
http://www.sciam.com/askexpert_question.cfm?articleID=00009562-3FFB-1DA4-815A809EC5880000&catID=3
 