Many terms are used to label the software programs that index websites; most people call them web crawlers, web robots, webbots, or spiders. Many companies use these programs to retrieve information about your website. They are usually automated and are programmed simply to follow hyperlinks throughout the web. Search engines are the best-known users of spiders, but others use them as well. All in all, crawlers, robots, and spiders account for a significant share of the "website visitors" to a well-established site.
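The core of that hyperlink-following behavior is extracting link targets from a page's HTML. A minimal sketch of that step, using only Python's standard library (the sample page and class name are illustrative, not taken from any particular crawler):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href target of every anchor tag, as a crawler would
    before deciding which pages to visit next."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A tiny hypothetical page to demonstrate the extraction.
page = '<html><body><a href="/about">About</a> <a href="http://example.com/">Home</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/about', 'http://example.com/']
```

A real crawler would fetch each extracted URL in turn, repeat the extraction on the result, and keep a record of pages already visited so it never follows the same link twice.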
The Google robot, known as Googlebot, will visit a well-established site several times a day to check for updates. Of all the spiders that traverse the sites for which we have web log access, Googlebot is by far the most active: on average, about twice as active as the Yahoo! crawler and six times more active than the new msnbot spider. On its website, Google refers to Googlebot as "Google's web crawler" and as "Google's web-crawling robot."
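Crawler visits like these show up in a server's access log as requests whose user-agent string names the robot. A minimal sketch of spotting them, assuming the common Apache "combined" log format, where the user agent is the last quoted field (the sample log line and signature list are illustrative):

```python
import re

# A hypothetical access-log line in Apache "combined" format.
LOG_LINE = ('66.249.66.1 - - [10/Oct/2004:13:55:36 -0700] '
            '"GET /index.html HTTP/1.1" 200 2326 "-" '
            '"Googlebot/2.1 (+http://www.google.com/bot.html)"')

# Substrings assumed to identify the crawlers discussed above.
BOT_SIGNATURES = ("Googlebot", "Yahoo! Slurp", "msnbot")

def crawler_name(line):
    """Return the matching crawler signature, or None for ordinary visitors."""
    match = re.search(r'"([^"]*)"\s*$', line)  # last quoted field = user agent
    if match:
        agent = match.group(1)
        for sig in BOT_SIGNATURES:
            if sig in agent:
                return sig
    return None

print(crawler_name(LOG_LINE))  # Googlebot
```

Counting how often each signature appears in a day's log is enough to compare crawler activity the way the figures above do.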