{{Short description|Distributed computing technique}}
{{more citations needed|date=July 2008}}

'''Distributed web crawling''' is a [[distributed computing]] technique whereby [[Internet]] [[search engines]] employ many computers to [[Search engine indexing|index]] the Internet via [[web crawler|web crawling]]. Such systems may allow users to voluntarily offer their own computing and bandwidth resources towards crawling web pages. Spreading the load of these tasks across many computers avoids the cost that would otherwise be spent on maintaining large computing clusters.

==Types==
Cho<ref name="cho2002parallel"/> and Garcia-Molina studied two types of policies:

===Dynamic assignment===
With this type of policy, a central server assigns new URLs to different crawlers dynamically. This allows the central server to, for instance, dynamically balance the load of each crawler.<ref>{{Cite book |last1=Guerriero |first1=A. |last2=Ragni |first2=F. |last3=Martines |first3=C. |title=2010 IEEE International Conference on Computational Intelligence for Measurement Systems and Applications |chapter=A dynamic URL assignment method for parallel web crawler |date=2010 |chapter-url=https://ieeexplore.ieee.org/document/5611764 |pages=119–123 |doi=10.1109/CIMSA.2010.5611764 |isbn=978-1-4244-7228-4 |s2cid=14817039}}</ref>

With dynamic assignment, the system can typically also add or remove downloader processes. The central server may become the bottleneck, so most of the workload must be transferred to the distributed crawling processes for large crawls.

Shkapenyuk and Suel described two configurations of crawling architectures with dynamic assignment:<ref>{{cite conference |url=http://cis.poly.edu/tr/tr-cis-2001-03.htm |access-date=2015-10-13 |title=Design and implementation of a high-performance distributed web crawler |author1=Shkapenyuk, Vladislav |author2=Suel, Torsten |year=2002 |book-title=Data Engineering, 2002. Proceedings. 18th International Conference on |publisher=IEEE |pages=357–368}}</ref>

* A small crawler configuration, in which there is a central [[Domain Name System|DNS]] resolver, central queues per Web site, and distributed downloaders.
* A large crawler configuration, in which the DNS resolver and the queues are also distributed.

===Static assignment===
With this type of policy, a fixed rule, stated from the beginning of the crawl, defines how to assign new URLs to the crawlers.

For static assignment, a hashing function can be used to transform URLs (or, even better, complete website names) into a number that identifies the crawling process responsible for them.<ref>{{Cite book |last1=Wan |first1=Yuan |last2=Tong |first2=Hengqing |title=2008 IEEE International Conference on Networking, Sensing and Control |chapter=URL Assignment Algorithm of Crawler in Distributed System Based on Hash |date=2008 |chapter-url=http://dx.doi.org/10.1109/icnsc.2008.4525482 |journal=IEEE |pages=1632–1635 |doi=10.1109/icnsc.2008.4525482 |isbn=978-1-4244-1685-1 |s2cid=39188334}}</ref> Because external links lead from Web sites assigned to one crawling process to websites assigned to another, some exchange of URLs between the processes must occur.
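The following minimal sketch illustrates hash-based static assignment together with the batched URL exchange discussed below; the hash function, number of crawling processes, batch size, and helper names are illustrative assumptions rather than details taken from the cited sources.

<syntaxhighlight lang="python">
import hashlib
from collections import defaultdict
from urllib.parse import urlsplit

# Illustrative parameters, not taken from the cited papers.
NUM_CRAWLERS = 8   # number of crawling processes
BATCH_SIZE = 100   # URLs exchanged per batch

def assign_crawler(url: str, num_crawlers: int = NUM_CRAWLERS) -> int:
    """Map a URL to a crawler index by hashing its host name.

    Hashing the complete website name (the host) rather than the full URL
    keeps every page of one site on the same crawling process, so only
    cross-site links need to be exchanged between processes.
    """
    host = urlsplit(url).netloc.lower()
    digest = hashlib.sha1(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_crawlers

def send_batch(owner: int, urls: list) -> None:
    # Stand-in for the inter-process transport (socket, message queue, etc.).
    print(f"sending {len(urls)} URLs to crawler {owner}")

local_frontier = []           # URLs this process will crawl itself
outgoing = defaultdict(list)  # URLs buffered per destination crawler

def route_discovered_url(url: str, my_index: int) -> None:
    """Keep locally owned URLs; buffer cross-site URLs and send them in batches."""
    owner = assign_crawler(url)
    if owner == my_index:
        local_frontier.append(url)
    else:
        outgoing[owner].append(url)
        if len(outgoing[owner]) >= BATCH_SIZE:
            send_batch(owner, outgoing.pop(owner))

# Example: all pages of example.org map to the same crawler index,
# while example.net may map to a different one.
print(assign_crawler("https://example.org/page/1"),
      assign_crawler("https://example.org/page/2"),
      assign_crawler("https://example.net/"))
</syntaxhighlight>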
To reduce the overhead due to the exchange of URLs between crawling processes, the exchange should be done in batch, several URLs at a time, and the most cited URLs in the collection should be known by all crawling processes before the crawl (e.g. using data from a previous crawl).<ref name="cho2002parallel">{{cite conference |url=http://dl.acm.org/citation.cfm?id=511464 |access-date=2015-10-13 |title=Parallel crawlers |author1=Cho, Junghoo |author2=Garcia-Molina, Hector |year=2002 |book-title=Proceedings of the 11th international conference on World Wide Web |publisher=ACM |pages=124–135 |isbn=1-58113-449-5 |doi=10.1145/511446.511464 |url-access=subscription}}</ref>

==Implementations==
As of 2003, most modern commercial search engines used this technique. [[Google]] and [[Yahoo]] use thousands of individual computers to crawl the Web.

Newer projects are attempting to use a less structured, more ''ad hoc'' form of collaboration by enlisting volunteers to join the effort using, in many cases, their home or personal computers. [[LookSmart]] is the largest search engine to use this technique, which powers its [[Grub (search engine)|Grub distributed web-crawling project]]. Wikia (now known as [[Fandom (website)|Fandom]]) acquired Grub from LookSmart in 2007.<ref>{{Cite web |date=2007-07-27 |title=Wikia Acquires Distributed Web Crawler Grub |url=https://techcrunch.com/2007/07/27/wikia-acquires-distributed-web-crawler-grub/ |access-date=2022-10-08 |website=TechCrunch |language=en-US}}</ref>

This solution uses computers that are connected to the [[Internet]] to crawl [[URLs|Internet addresses]] in the background. After downloading crawled web pages, the clients compress them and send them back, together with a status flag (e.g. changed, new, down, redirected), to the central servers. The servers, which manage a large database, send out new URLs to clients for testing.

==Drawbacks==
According to the [[FAQ]] of [[Nutch]], an open-source search engine, the bandwidth savings from distributed web crawling are not significant, since "A successful search engine requires more bandwidth to upload query result pages than its crawler needs to download pages...".<ref>{{Cite web |title=Nutch: faq |url=https://nutch.sourceforge.net/docs/en/faq.html |access-date=2022-10-08 |website=nutch.sourceforge.net}}</ref>

==See also==
*[[Distributed computing]]
*[[Web crawler]]
*[[YaCy]] – P2P web search engine with distributed crawling
*[[Seeks]] – open-source P2P web search

==Sources==
{{reflist}}

==External links==
*[http://www.majestic12.co.uk/ Majestic-12 Distributed Search Engine]
*[https://dial.uclouvain.be/pr/boreal/object/boreal%3A213815/datastream/PDF_01/view UniCrawl: A Practical Geographically Distributed]
*[https://www.zenrows.com/blog/distributed-web-crawling#simple-celery-task Distributed web crawling made easy: system and architecture]

{{Distributed search engines}}
{{Web crawlers}}

[[Category:Applications of distributed computing]]
[[Category:Internet search algorithms]]