Web crawler strategies for web pages under robot.txt restriction

Bibliographic Details
Title: Web crawler strategies for web pages under robot.txt restriction
Authors: Vyas, Piyush; Chauhan, Akhilesh; Mandge, Tushar; Hardikar, Surbhi
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Artificial Intelligence, Computer Science - Information Retrieval
Description: Today, nearly everyone knows the World Wide Web and uses the Internet daily. This paper introduces how search engines act on the keywords that users enter to find something. A search engine applies different search algorithms to provide relevant results to the net surfer. Net surfers tend to go with the top search results, but how do certain web pages earn higher ranks on search engines, and how does a search engine gather all those web pages into its database? This paper answers these kinds of basic questions. Web crawlers working for search engines and the robot exclusion protocol rules that govern them are also addressed in this research paper. Webmasters use different restriction directives in the robot.txt file to instruct web crawlers; some basic formats of robot.txt are also presented in this paper (a minimal illustrative sketch of such a file appears after this record).
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2308.04689
Accession Number: edsarx.2308.04689
Database: arXiv
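
The abstract above refers to the basic formats webmasters use in robot.txt to instruct crawlers. As a minimal sketch of that mechanism (not taken from the paper), the hypothetical Python example below parses a made-up robot.txt in the common User-agent / Disallow / Allow form and checks which URLs a crawler may fetch, using the standard-library urllib.robotparser module; the rule-file content and the example.com URLs are assumptions for illustration only.

import urllib.robotparser

# Hypothetical robot.txt content in the basic format the abstract mentions:
# a User-agent line followed by Disallow/Allow rules and an optional Crawl-delay.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Allow: /public/
Crawl-delay: 10
"""

# Parse the rules once, then check every candidate URL before fetching it.
parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for url in ("https://example.com/public/page.html",
            "https://example.com/private/data.html"):
    if parser.can_fetch("*", url):
        print("fetch:", url)
    else:
        print("skip (disallowed by robot.txt):", url)

# Crawl-delay, if present, tells a polite crawler how many seconds to wait
# between requests; crawl_delay() returns None when the directive is absent.
print("crawl delay (seconds):", parser.crawl_delay("*"))

In practice a crawler would download robot.txt from the target host (for example via RobotFileParser.set_url() followed by read()) rather than embedding it as a string; the inline text here only keeps the sketch self-contained.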