Essay: Search Engines

There are currently over a billion pages of information on the Internet, on every topic imaginable. The question is: how do you find what you want? Computer algorithms can be written to search the Internet, but most are impractical because they must sacrifice accuracy for coverage. However, a few engines have found interesting ways to deliver high-quality information quickly. Ranking by page value, topic-specific searches, and metasearch engines are three of the most popular approaches because they work smarter, not harder. No commercial search engine makes its algorithm public; if one did, a thousand imitation sites would appear, leaving little or no profit for the developers. Even so, the basic structure can be inferred by testing the results.

The most primitive search is the sequential search, which goes through each element in the list one by one (see the first sketch after the essay). Yet the sheer size of the Web immediately rules out this possibility. Although the sequential method may give the best results, you would likely never see them, owing to the explosive growth rate of the Web. Even the fastest computers would take a long time, and in that time all sorts of new pages would have been created.

Some of the older "spiders," like AltaVista, are designed to literally crawl the Web at random, following links from one page to another. This is accomplished with high-speed servers holding 300 connections open at the same time. These web "spiders" are content-based, meaning they read and categorize the HTML code on each page (see the crawler sketch below). One flaw of this approach is the problem of ambiguity, where a single word can describe two entirely different concepts. Type a few words into the query box and you will be lucky to find anything related to what you are looking for. Query words can be found anywhere on a page and are likely to be taken out of context.

Content-based searches can also be easily manipulated, as the keyword-stuffing sketch below illustrates. Some tactics are very deceptive; for example, "...some automotive websites have stooped to writing 'Buy this car' dozens of times in hidden fonts...a subliminal version of the AAAA automobile listing in the Yellow Pages" (1). The truth is you would never know whether a site does this unless you looked at the code, and most consumers don't look at the code. A less subtle tactic is paying to get to the top. For example, the GoTo engine accepts payment from those who wish to b...

[... middle of paper ...]

"...the meta search engine can have several advantages:
1. It will present users with a more sophisticated interface
2. Make the translation more accurate
3. Get more complete and accurate results
4. Improve source selection and priority decisions" (3).

A simple result-merging scheme of this kind is sketched below. Once again the idea of optimizing the Internet through intelligent software appears. It comes down to designing an algorithm that does not forget what it has learned.

Most people did not foresee the tremendous growth of the Internet in the 1990s. Computer algorithms went from small government programs to every personal computer in the world, starting with the most basic problem solving and ending with the most complex. Today that means sorting through a database that is growing almost exponentially. Plain and simple, the Internet contains a lot of information, and robots work around the clock to dig it all out. The engine...
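
To make the essay's point about sequential search concrete, here is a minimal Python sketch. The page texts are invented for illustration. A linear scan is simple and accurate, but its cost grows with the number of pages, which is exactly why the Web's size rules it out.

```python
# A sequential (linear) search: examine each element one by one.
# Correct but hopeless at Web scale -- the running time grows with
# the size of the collection, and the Web grows faster than any scan.

def sequential_search(pages, query):
    """Return every page whose text contains the query string."""
    hits = []
    for page in pages:          # one comparison per page: O(n)
        if query in page:
            hits.append(page)
    return hits

pages = [
    "buy this car at a great price",
    "history of the yellow pages",
    "car maintenance tips",
]
print(sequential_search(pages, "car"))
# ['buy this car at a great price', 'car maintenance tips']
```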
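The essay's content-based spiders can be sketched the same way. This is a toy model, not AltaVista's actual design: a small in-memory dictionary stands in for real HTTP fetches over hundreds of connections, and the crawler wanders the link graph at random while building an inverted index from each page's HTML. The URLs and page contents are invented.

```python
import random
import re

# A toy, in-memory "web": URL -> (HTML body, outgoing links).
# Real spiders fetched pages over many simultaneous connections;
# this dictionary stands in for the network.
WEB = {
    "a.html": ("<h1>Cars</h1> buy this car today", ["b.html", "c.html"]),
    "b.html": ("<p>Yellow Pages listings</p>", ["a.html"]),
    "c.html": ("<p>Car repair and car parts</p>", []),
}

def crawl(start, limit=10):
    """Follow links from page to page, indexing words as we go."""
    index = {}                      # word -> set of URLs (an inverted index)
    seen, frontier = set(), [start]
    while frontier and len(seen) < limit:
        url = frontier.pop(random.randrange(len(frontier)))  # wander at random
        if url in seen:
            continue
        seen.add(url)
        html, links = WEB[url]
        text = re.sub(r"<[^>]+>", " ", html).lower()   # strip HTML tags
        for word in re.findall(r"[a-z]+", text):
            index.setdefault(word, set()).add(url)
        frontier.extend(links)
    return index

index = crawl("a.html")
print(sorted(index["car"]))   # pages mentioning "car": ['a.html', 'c.html']
```

Note that the index records only that a word appears somewhere on a page; this is precisely the essay's complaint that query words get taken out of context.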
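The hidden-font trick the essay quotes works against exactly this kind of index. A hedged sketch, with invented page texts: if the ranking score is simply how often the query words appear, repeating "buy this car" dozens of times pushes a page to the top whether or not the repeated text is visible.

```python
# Why content-based ranking is easy to game: score pages by raw
# term frequency and a keyword-stuffed page wins automatically.

def term_frequency_score(page_text, query):
    """Count how often each query term occurs in the page."""
    words = page_text.lower().split()
    return sum(words.count(term) for term in query.lower().split())

honest_page = "we review this car fairly and compare prices"
stuffed_page = "car dealership " + "buy this car " * 30  # hidden-font spam

for name, text in [("honest", honest_page), ("stuffed", stuffed_page)]:
    print(name, term_frequency_score(text, "buy car"))
# honest 1
# stuffed 61  -- the stuffed page scores far higher, though it says far less
```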
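The quoted metasearch advantages can also be illustrated. This is one simple merging scheme (reciprocal-rank scoring), not necessarily what any real metasearch engine used; the engine names and result URLs are invented. The idea is that one query is forwarded to several engines and their ranked lists are combined, so pages that several sources agree on float to the top.

```python
# A metasearch engine forwards a query to several engines and
# merges the ranked result lists it gets back.

def merge_results(*ranked_lists):
    """Score each URL by summed reciprocal rank across engines."""
    scores = {}
    for results in ranked_lists:
        for rank, url in enumerate(results, start=1):
            scores[url] = scores.get(url, 0.0) + 1.0 / rank
    # Pages endorsed by several engines accumulate the highest scores.
    return sorted(scores, key=scores.get, reverse=True)

engine_a = ["cars.example/reviews", "deals.example/cars", "spam.example"]
engine_b = ["cars.example/reviews", "deals.example/cars", "parts.example"]

print(merge_results(engine_a, engine_b))
# ['cars.example/reviews', 'deals.example/cars', 'spam.example', 'parts.example']
```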
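Finally, "ranking by page value," which the essay names but (in the surviving text) does not explain, is usually understood as link-based ranking in the style of PageRank: a page is valuable if valuable pages link to it. A minimal power-iteration sketch, with an invented three-page link graph:

```python
# Link-based "page value": repeatedly let each page pass its value
# along its outgoing links until the scores settle.

LINKS = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}

def page_value(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)      # split value over out-links
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

print(page_value(LINKS))  # 'c' ranks highest: both 'a' and 'b' endorse it
```

Unlike raw term frequency, this score depends on what other pages do, which makes it much harder to inflate with hidden text on your own page.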