Web Crawler Foreign Literature Translation

Foreign Language Material

ABSTRACT

Crawling the web is deceptively simple: the basic algorithm is (a) Fetch a page (b) Parse it to extract all linked URLs (c) For all the URLs not seen before, repeat (a)-(c). However, the size of the web (estimated at over 4 billion pages) and its rate of change (estimated at 1% per week) move this plan from a trivial programming exercise to a serious algorithmic and system design challenge. Indeed, these two factors alone imply that for a reasonably fresh and complete crawl of the web, step (a) must be executed about a thousand times per second, and thus the membership test (c) must be done well over ten thousand times per second against a set too large to store in main memory. This requires a distributed architecture, which further complicates the membership test.

A crucial way to speed up the test is to cache, that is, to store in main memory a (dynamic) subset of the “seen” URLs. The main goal of this paper is to carefully investigate several URL caching techniques for web crawling. We consider both practical algorithms: random replacement, static cache, LRU, and CLOCK, and theoretical limits: clairvoyant caching and infinite cache. We performed about 1,800 simulations using these algorithms with various cache sizes, using actual log data extracted from a massive 33-day web crawl that issued over one billion HTTP requests. Our main conclusion is that caching is very effective - in our setup, a cache of roughly 50,000 entries can achieve a hit rate of almost 80%. Interestingly, this cache size falls at a critical point: a substantially smaller cache is much less effective while a substantially larger cache brings little additional benefit. We conjecture that such critical points are inherent to our problem and venture an explanation for this phenomenon.
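The cache study summarized above can be pictured with a small simulation. The sketch below (Python, written for this translation and not taken from the paper) replays a stream of URLs through a fixed-size LRU cache and reports the fraction of lookups that hit; the crawl_log name and the 50,000-entry cache size are placeholders standing in for the paper's crawl logs and cache sizes.

from collections import OrderedDict

def lru_hit_rate(urls, cache_size):
    """Replay a stream of URLs through an LRU cache and return the hit rate."""
    cache = OrderedDict()              # URL -> None, ordered from least to most recently used
    hits = lookups = 0
    for url in urls:
        lookups += 1
        if url in cache:
            hits += 1
            cache.move_to_end(url)     # refresh recency on a hit
        else:
            cache[url] = None
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict the least recently used URL
    return hits / lookups if lookups else 0.0

# Hypothetical usage: crawl_log would be the sequence of extracted URLs,
# in the order a crawler encounters them.
# print(lru_hit_rate(crawl_log, cache_size=50_000))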

I. INTRODUCTION

A recent Pew Foundation study [31] states that “Search engines have become an indispensable utility for Internet users,” and estimates that as of mid-2002, slightly over 50% of all Americans have used web search to find information. Hence, the technology that powers web search is of enormous practical interest. In this paper, we concentrate on one aspect of the search technology, namely the process of collecting web pages that eventually constitute the search engine corpus.

Search engines collect pages in many ways, among them direct URL submission, paid inclusion, and URL extraction from non-web sources, but the bulk of the corpus is obtained by recursively exploring the web, a process known as crawling or SPIDERing. The basic algorithm is
(a) Fetch a page
(b) Parse it to extract all linked URLs
(c) For all the URLs not seen before, repeat (a)-(c)
Crawling typically starts from a set of seed URLs, made up of URLs obtained by other means as described above and/or made up of URLs collected during previous crawls. Sometimes crawls are started from a single well connected page, or a directory such as , but in this case a relatively large portion of the web (estimated at over 20%) is never reached. See [9] for a discussion of the graph structure of the web that leads to this phenomenon.
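The three steps can be put into a minimal, single-machine sketch. This is an illustration only, not the crawler studied in the paper: it ignores politeness, robots.txt, retries, and distribution, and the requests library plus the regular-expression link extractor are assumptions made to keep the example short.

import re
from collections import deque
from urllib.parse import urljoin

import requests   # assumed HTTP client; any fetch routine would do

LINK_RE = re.compile(r'href="([^"#]+)"')   # crude link extractor, for illustration only

def crawl(seeds, max_pages=100):
    seen = set(seeds)           # the "seen" set behind the membership test in step (c)
    frontier = deque(seeds)     # URLs waiting to be fetched
    while frontier and max_pages > 0:
        url = frontier.popleft()
        try:
            page = requests.get(url, timeout=10).text    # step (a): fetch a page
        except requests.RequestException:
            continue
        max_pages -= 1
        for link in LINK_RE.findall(page):               # step (b): parse out linked URLs
            absolute = urljoin(url, link)
            if absolute not in seen:                      # step (c): only new URLs go back in
                seen.add(absolute)
                frontier.append(absolute)
    return seen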

If we view web pages as nodes in a graph, and hyperlinks as directed edges among these nodes, then crawling becomes a process known in mathematical circles as graph traversal. Various strategies for graph traversal differ in their choice of which node among the nodes not yet explored to explore next. Two standard strategies for graph traversal are Depth First Search (DFS) and Breadth First Search (BFS) - they are easy to implement and taught in many introductory algorithms classes. (See for instance [34].)
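Viewed this way, BFS and DFS differ only in the data structure holding the unexplored frontier. The sketch below makes that concrete on an in-memory adjacency list; the adjacency-list representation is an assumption for the example, since a real crawler discovers edges only as pages are fetched.

from collections import deque

def traverse(graph, start, strategy="bfs"):
    """Visit the nodes of a link graph given as an adjacency list (URL -> linked URLs)."""
    seen = {start}
    frontier = deque([start])
    order = []
    while frontier:
        # The only difference between the two strategies is which end of the
        # frontier is popped: FIFO gives Breadth First Search, LIFO gives Depth First Search.
        node = frontier.popleft() if strategy == "bfs" else frontier.pop()
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return order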

However, crawling the web is not a trivial programming exercise but a serious algorithmic and system design challenge because of the following two factors.
1. The web is very large. Currently, Google [20] claims to have indexed over 3 billion pages. Various studies [13, 27, 28] have indicated that, historically, the web has doubled every 9-12 months.
2. Web pages are changing rapidly. If “change” means “any change,” then about 40% of all web pages change weekly [12]. Even if we consider only pages that change by a third or more, about 7% of all web pages change weekly [17].

These two factors imply that to obtain a reasonably fresh and complete snapshot of the web, a search engine must crawl at least 100 million pages per day. Therefore, step (a) must be executed about 1,000 times per second, and the membership test in step (c) must be done well over ten thousand times per second, against a set of URLs that is too large to store in main memory. In addition, crawlers typically use a distributed architecture to crawl more pages in parallel, which further complicates the membership test: it is possible that the membership question can only be answered by a peer node, not locally.

A crucial way to speed up the membership test is to cache a (dynamic) subset of the “seen” URLs in main memory. The main goal of this paper is to investigate in depth several URL caching techniques.
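The throughput figures follow from simple arithmetic, and the “peer node” remark can be made concrete with one common (but here merely assumed) partitioning scheme: hash each URL's host to an owning crawler node, so a URL extracted on one machine may have to be checked on another. The owner_node and is_seen names and the ask_peer callback are hypothetical, introduced only for this sketch; the paper does not prescribe this design.

import hashlib
from urllib.parse import urlsplit

# Rough arithmetic behind the stated rates (assuming on the order of ten links per page):
#   100,000,000 pages/day / 86,400 s/day ≈ 1,157 fetches per second           (step (a))
#   1,157 fetches/s * ~10 extracted URLs ≈ 11,570 membership tests per second (step (c))

def owner_node(url, num_nodes):
    """Map a URL to the crawler node responsible for its 'seen' test (hash of the host)."""
    host = urlsplit(url).netloc
    digest = hashlib.sha1(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_nodes

def is_seen(url, local_id, num_nodes, local_seen, ask_peer):
    """Answer the membership test either locally or by forwarding it to the owning peer."""
    node = owner_node(url, num_nodes)
    if node == local_id:
        return url in local_seen      # question can be answered locally
    return ask_peer(node, url)        # must be forwarded to a peer node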
