A search engine is a set of computer programs that work together to find specific information, collect and organize it, and make it available to whoever is searching for it.

Most computers have a built-in search engine that lets the user look for specific keywords within the contents of the machine; the results are presented as a list of programs or documents on the computer in which those keywords or phrases were found. Nowadays, however, when people talk about search engines, they usually mean internet search engines, such as Google, which use massive assemblages of computers to search the enormous amount of data on the web. In short, search engines are complex tools whose job is to find data, whether online or offline.

Web search engines perform four main tasks:

1. Crawling
2. Indexing
3. Calculating relevancy and ranking
4. Presenting results

Search engines use automated programs – called spiders, robots, bots, or crawlers – to crawl through the websites on the internet. When a spider reaches a web page, it makes a copy of the page's content and adds it to the search engine's database. It then follows the links on that page, or dispatches other spiders to follow them, and those spiders repeat the process. Spiders scour the vast expanse of the internet around the clock, enabling search engines to amass enormous databases.
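As a rough illustration of this crawl loop, here is a minimal sketch in Python using only the standard library. The breadth-first queue and the in-memory dictionary standing in for the search engine's database are simplifying assumptions; real crawlers add politeness delays, robots.txt handling, and distributed work queues.

```python
# Minimal crawler sketch: copy each page, then follow its links.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects href attributes from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=10):
    """Breadth-first crawl starting from an illustrative seed URL."""
    database = {}                  # url -> raw HTML (the "copy" step)
    queue = deque([seed_url])
    seen = {seed_url}
    while queue and len(database) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue               # skip unreachable pages
        database[url] = html       # add the copy to the database
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:  # follow the links and repeat
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return database
```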

Once the content is in the database, the indexing phase begins. The search engine analyzes and categorizes each web page, relying heavily on the key phrases found on the page, the meta tags that describe it, and the links pointing to it.
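One common data structure behind this phase is an inverted index, which maps each term to the pages that contain it. The sketch below builds one from the kind of `database` the crawler above produces; the simple tokenizer is an illustrative assumption, and real indexers also handle stemming, stop words, and the weighting of titles, meta tags, and links.

```python
import re
from collections import defaultdict

def build_index(database):
    """Map each word to the set of URLs whose content contains it."""
    index = defaultdict(set)
    for url, text in database.items():
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(url)
    return index

# Example: two tiny hypothetical "pages" indexed by keyword.
pages = {
    "http://example.com/a": "Spiders crawl the web",
    "http://example.com/b": "The web is indexed by search engines",
}
index = build_index(pages)
print(index["web"])    # {'http://example.com/a', 'http://example.com/b'}
print(index["crawl"])  # {'http://example.com/a'}
```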

Once the information has been indexed, it is stored in the search engine's database along with instructions on how to retrieve it. A search engine's indexes are akin to gigantic electronic filing cabinets. When a searcher keys in specific keywords or phrases, the search engine checks its database for those keywords, ranks the matching web pages in order of relevance, and presents the ones it deems most pertinent at the top of a customized results list.
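Continuing the sketches above, the filing-cabinet lookup and ranking step might look like the function below. Scoring pages by how often the query terms occur is a deliberately simple stand-in; it is not the relevance formula of any real search engine.

```python
import re

def search(query, index, database):
    """Return matching URLs, most relevant first.

    `index` is an inverted index ({word: set of urls}) and `database`
    maps urls to page text, as in the sketches above.
    """
    terms = re.findall(r"[a-z0-9]+", query.lower())
    if not terms:
        return []
    # Filing-cabinet lookup: keep only pages containing every term.
    matches = set.intersection(*(index.get(t, set()) for t in terms))
    # Rank: pages with more occurrences of the query terms come first.
    def score(url):
        text = database[url].lower()
        return sum(text.count(t) for t in terms)
    return sorted(matches, key=score, reverse=True)
```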

Each search engine uses its own set of rules to determine the relevancy of a web page; Google's algorithms, for example, differ from Yahoo's. That is why the same search can return a different list of results on Google than on Ask or Yahoo. A user can even get different results the next time they search with the same key phrase on the same engine: as they incessantly crawl the internet, spiders take note whenever a web page is altered or content is added, and the search engine's database is updated accordingly. Any change made to a web page can influence that page's relevancy to a specific search.
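To make the point concrete, the toy scorers below rank the same two pages differently: one counts keyword occurrences, the other weights inbound links. Both the page data and both formulas are invented for illustration only; no real engine's algorithm is shown.

```python
# Two toy ranking rules applied to the same hypothetical pages.
pages = {
    "page_a": {"text": "seo seo seo tips", "inbound_links": 1},
    "page_b": {"text": "seo tips and tutorials", "inbound_links": 50},
}

def rank_by_keyword_count(term):
    """Pages mentioning the term most often come first."""
    return sorted(pages, key=lambda p: pages[p]["text"].count(term),
                  reverse=True)

def rank_by_links(term):
    """Matching pages with the most inbound links come first."""
    return sorted((p for p in pages if term in pages[p]["text"]),
                  key=lambda p: pages[p]["inbound_links"], reverse=True)

print(rank_by_keyword_count("seo"))  # ['page_a', 'page_b']
print(rank_by_links("seo"))          # ['page_b', 'page_a']
```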

John Roberts is a self-employed freelance consultant, web developer, and full-time internet marketer from Sydney, Australia. John operates a training course for those who wish to learn SEO. For more information about John, visit http://www.seotrainingkit.com