Common Crawl – Free Database Of The Entire Web, Competition For Google
We all know Google. It started out as little more than a website indexer, developed a more efficient algorithm for ranking Web pages, and ultimately built its success on crawling the Web: using software that visits every page in order to build up a vast index of online content.
A nonprofit called Common Crawl is now running its own Web crawler and making a giant copy of the Web that it opens up to everyone. Common Crawl supplies a database of over five billion Web pages that anyone can access and analyze, in the hope that it will inspire new research and online services.
“The Web represents, as far as I know, the largest accumulation of knowledge, and there’s so much you can build on top,” says entrepreneur Gilad Elbaz, who founded Common Crawl. “But simply doing the huge amount of work that’s necessary to get at all that information is a large blocker; few organizations … have had the resources to do that.”
Elbaz says he noticed around five years ago that researchers with new ideas about how to use Web data felt compelled to take jobs at Google because it was the only place they could test those ideas. He says Common Crawl’s data will make it easier for novel ideas to gain traction, both in the world of startups and in academic research.
Common Crawl has so far indexed more than five billion pages, adding up to 81 terabytes of data, made available through Amazon's cloud computing service. For about $25, a programmer can set up an account with Amazon and get to work crunching Common Crawl data, says Lisa Green, Common Crawl's director.
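To give a flavor of what "crunching Common Crawl data" involves: the crawl is distributed as archive files in which each fetched page is stored as a record with a small header (target URL, fetch date, content length) followed by the raw content. The sketch below is illustrative only, assuming a simplified WARC-style record; the sample record, field names shown, and the `parse_record` helper are hypothetical stand-ins, not Common Crawl's actual tooling, and real use would involve downloading the archive files from Amazon's cloud first.

```python
# Illustrative sketch: parsing one WARC-style record such as those used
# to store crawled pages. The sample record below is invented for the
# example so that it runs self-contained, without any network access.

SAMPLE_RECORD = (
    "WARC/1.0\r\n"
    "WARC-Type: response\r\n"
    "WARC-Target-URI: http://example.com/\r\n"
    "WARC-Date: 2012-01-25T12:00:00Z\r\n"
    "Content-Length: 13\r\n"
    "\r\n"
    "Hello, crawl!"
)

def parse_record(record: str) -> dict:
    """Split a record into its version line, header fields, and body."""
    header_block, _, body = record.partition("\r\n\r\n")
    lines = header_block.split("\r\n")
    version = lines[0]  # e.g. "WARC/1.0"
    fields = {}
    for line in lines[1:]:
        name, _, value = line.partition(": ")
        fields[name] = value
    return {"version": version, "fields": fields, "body": body}

parsed = parse_record(SAMPLE_RECORD)
print(parsed["fields"]["WARC-Target-URI"])  # → http://example.com/
print(parsed["body"])                       # → Hello, crawl!
```

An analysis job would apply logic like this to millions of records in parallel, which is why Green points to Amazon's cloud as the practical way to work with the data.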
More details at MIT Technology Review.