Automated traffic to Web sites has steadily increased, driven by legitimate search-engine indexing, questionable crawlers and malicious attackers — companies need to know which is which.
To that end, Web-security cloud service Incapsula launched a site on Wednesday for cataloging Web bots, the automated programs that crawl Web sites to index pages, grab competitive price information, gather information on social-networking users, or scan for vulnerabilities. With the site, dubbed BotoPedia, the company is gathering data on the Internet addresses used by Web bots, as well as the user-agent strings and any other identifying information. The catalog will be open, but moderated, in much the same way as Wikipedia, says Marc Gaffan, co-founder and vice president of business development for Incapsula.
“This is essentially trying to take the gray area and classify it to a higher level of granularity, so that Web site operators have got the ability to cherry pick who they want to let in and who they don’t,” Gaffan says.
While many services attempt to identify bots by their user-agent strings, which typically indicate browser information, that signature is too easily changed to be useful, he says. Instead, BotoPedia will include the user-agent string, IP addresses, and other details.
While that is a simple change, it’s an important one, says Bogdan Botezatu, senior threat analyst with security firm BitDefender.
“If you block my spider, I will change its name and come back and crawl your Web server in a few minutes without losing much money or time,” Botezatu says. “But if you block my IP address, then I will have to either change my IP or change my provider or move to a different datacenter.”
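The difference is easy to illustrate. Below is a minimal, hypothetical sketch of how a site might classify incoming requests against a BotoPedia-style catalog entry. The bot name, user-agent substring, and network range are invented placeholders, and the real catalog's format is not described in detail here; the point is simply that a match on the published source network is far harder to fake than a match on the user-agent header alone.

```python
import ipaddress

# Hypothetical catalog entry in the spirit of BotoPedia: a bot is identified by
# the networks it actually crawls from, not just the user-agent it reports.
# The bot name, substring, and network below are invented for illustration.
KNOWN_BOTS = {
    "examplebot": {
        "ua_substring": "ExampleBot",                         # claimed identity
        "networks": [ipaddress.ip_network("192.0.2.0/24")],   # published source range
        "allowed": True,
    },
}

def classify_request(src_ip: str, user_agent: str) -> str:
    """Label a request as allow, block, suspect, or unclassified."""
    ip = ipaddress.ip_address(src_ip)
    for entry in KNOWN_BOTS.values():
        claims_identity = entry["ua_substring"] in user_agent
        from_known_network = any(ip in net for net in entry["networks"])
        if claims_identity and from_known_network:
            return "allow" if entry["allowed"] else "block"
        if claims_identity:
            # The user-agent string is trivially forged; a claimed identity
            # arriving from an unlisted network is the real warning sign.
            return "suspect"
    return "unclassified"

# A request claiming to be ExampleBot from outside its published range:
print(classify_request("203.0.113.7", "Mozilla/5.0 (compatible; ExampleBot/1.0)"))  # suspect
```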
BotoPedia was initially seeded with data on the top 50 bots, but another dozen had been submitted by outside sources by Wednesday evening. While the operators of good Web bots will self-submit, researchers will likely add information on bad bots as well, Gaffan says.
“I do expect a lot of bad bots to get in there, but obviously not by them coming forward,” he says.
The rise of automated Web traffic is playing out against the backdrop of an estimated quadrupling of overall Internet traffic by 2016, according to networking giant Cisco's forecast of future bandwidth demand. Web traffic will grow slightly faster, expanding some five-fold between 2011 and 2016, the company estimates.
Automated traffic is taking an increasing share of the pie. Currently, slightly more than half of the traffic to Web sites comes from bots, according to Incapsula's data. Of all site traffic, 20 percent comes from good page indexers and other desired bots, another 19 percent from intelligence-gathering bots that sites may not want, and the remaining 12 percent from scrapers, comment spammers, and flat-out attacks.
These attacks can include automated SQL injection against backend databases, scraping of user information, or automated attempts at logging in. Overall, sites should expect each Web application to suffer a sustained attack on nearly 120 days of each year, according to a report issued earlier this week by Web security firm Imperva. Companies should prepare for intense automated attacks, the company says.
“The success of the whole mission depends on the defense performance when under attack,” states the report. “Therefore, the defense solutions and procedures should be designed to accommodate attack bursts.”
Increasingly, attackers will cloak themselves in the appearance of legitimacy. By posing as a search engine's indexing bot, attackers can bypass most filters, says Incapsula's Gaffan.
In a study of 1,000 customers, Incapsula found that more than 16 percent encountered Web bots that impersonated Google’s automated crawlers. Because Google search rankings are so important, no site wants to block the company from indexing its pages.
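Rather than blocking anything that calls itself Googlebot, a common defense against this kind of impersonation, and one Google itself has long recommended, is to verify the claim through DNS rather than the header: a reverse lookup on the source IP should land in googlebot.com or google.com, and a forward lookup on that hostname should return the same IP. The sketch below illustrates the idea; it is a simplified check, not a description of Incapsula's service.

```python
import socket

def is_verified_googlebot(src_ip: str, user_agent: str) -> bool:
    """Check a Googlebot claim with a reverse-then-forward DNS lookup."""
    if "Googlebot" not in user_agent:
        return False  # the request does not even claim to be Googlebot
    try:
        # Reverse lookup: a genuine crawler's IP resolves to a Google-owned hostname.
        hostname, _, _ = socket.gethostbyaddr(src_ip)
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        # Forward confirmation: the hostname must resolve back to the same IP,
        # otherwise the PTR record itself could have been forged.
        return src_ip in socket.gethostbyname_ex(hostname)[2]
    except OSError:
        # Lookup failed; treat the claim as unverified rather than trusted.
        return False
```

Because each check costs two DNS lookups, a site doing this in practice would typically cache the verdict per IP rather than querying on every request.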
In the end, Incapsula hopes the online catalog will empower site operators to make better decisions about which automated traffic they allow to crawl their sites and which they block, he says.
“This will give Web site owners a lot of different information and better awareness into who they want to let in,” Gaffan says.
By Robert Lemos, Contributing Writer