
        20天學(xué)會(huì)爬蟲之Scrapy框架通用爬蟲CrawlSpider

        來(lái)源:千鋒教育
        發(fā)布人:qyf
        時(shí)間: 2022-09-20 14:48:03 1663656483

In the previous article we covered how to use the Spider class. This time we continue with one of its subclasses: CrawlSpider.

Introducing CrawlSpider

          CrawlSpider其實(shí)是Spider的一個(gè)子類,除了繼承到Spider的特性和功能外,還派生除了其自己獨(dú)有的更加強(qiáng)大的特性和功能。

          比如如果你想爬取知乎或者是簡(jiǎn)書全站的話,CrawlSpider這個(gè)強(qiáng)大的武器就可以爬上用場(chǎng)了,說(shuō)CrawlSpider是為全站爬取而生也不為過(guò)。

          其中最顯著的功能就是”LinkExtractors鏈接提取器“。Spider是所有爬蟲的基類,其設(shè)計(jì)原則只是為了爬取start_url列表中網(wǎng)頁(yè),而從爬取到的網(wǎng)頁(yè)中提取出的url進(jìn)行繼續(xù)的爬取工作使用CrawlSpider更合適。
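Before looking at the source, here is a minimal sketch of what a CrawlSpider looks like in practice. The spider name, site, and selectors are placeholders chosen only for illustration (they are not part of the original article):

    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule

    class ExampleSpider(CrawlSpider):
        name = "example"
        allowed_domains = ["quotes.toscrape.com"]
        start_urls = ["https://quotes.toscrape.com/"]

        rules = (
            # Follow every pagination link and parse each page it leads to.
            Rule(LinkExtractor(allow=r"/page/\d+/"), callback="parse_page", follow=True),
        )

        def parse_page(self, response):
            # Yield a simple item for every followed page.
            yield {"url": response.url, "title": response.css("title::text").get()}

Each Rule pairs a LinkExtractor (which links to follow) with a callback (how to parse them); that pairing is what the rest of this article dissects.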

CrawlSpider source code analysis

Annotated source

    class CrawlSpider(Spider):

        rules = ()

        def __init__(self, *a, **kw):
            super(CrawlSpider, self).__init__(*a, **kw)
            self._compile_rules()

        # parse() is called first to handle the responses returned for start_urls.
        # It hands each response to _parse_response(), with parse_start_url() as the
        # callback and the follow flag set to True, so it yields both items and the
        # Requests to follow.
        def parse(self, response):
            return self._parse_response(response, self.parse_start_url, cb_kwargs={}, follow=True)

        # Handles the responses for start_urls; meant to be overridden.
        def parse_start_url(self, response):
            return []

        def process_results(self, response, results):
            return results

        # Extracts every link in the response that matches any user-defined rule
        # and turns it into a Request object.
        def _requests_to_follow(self, response):
            if not isinstance(response, HtmlResponse):
                return
            seen = set()
            # A link is accepted as soon as it matches any single rule.
            for n, rule in enumerate(self._rules):
                links = [l for l in rule.link_extractor.extract_links(response) if l not in seen]
                # Run the user-supplied process_links over the extracted links.
                if links and rule.process_links:
                    links = rule.process_links(links)
                # Add each link to the seen set, build a Request for it, and set the
                # callback to _response_downloaded().
                for link in links:
                    seen.add(link)
                    # Build the Request; the Rule's own callback is applied later,
                    # via the rule index stored in meta.
                    r = Request(url=link.url, callback=self._response_downloaded)
                    r.meta.update(rule=n, link_text=link.text)
                    # Pass each Request through process_request(). The default is the
                    # identity function, i.e. the Request is returned unchanged.
                    yield rule.process_request(r)

        # Handles responses for links extracted by a rule; returns items and requests.
        def _response_downloaded(self, response):
            rule = self._rules[response.meta['rule']]
            return self._parse_response(response, rule.callback, rule.cb_kwargs, rule.follow)

        # Parses a response with the given callback and yields Request or Item objects.
        def _parse_response(self, response, callback, cb_kwargs, follow=True):
            # First check whether a callback was set (it may be a rule callback or
            # parse_start_url). If so, run it, then pass its output through
            # process_results(), which returns the list of callback results.
            if callback:
                # When called from parse(), the callback typically yields Requests;
                # when called as a rule callback, it typically yields Items.
                cb_res = callback(response, **cb_kwargs) or ()
                cb_res = self.process_results(response, cb_res)
                for requests_or_item in iterate_spider_output(cb_res):
                    yield requests_or_item

            # If following is enabled, extract further Requests using the defined Rules.
            if follow and self._follow_links:
                # Yield every extracted Request.
                for request_or_item in self._requests_to_follow(response):
                    yield request_or_item

        def _compile_rules(self):
            def get_method(method):
                if callable(method):
                    return method
                elif isinstance(method, basestring):
                    return getattr(self, method, None)

            self._rules = [copy.copy(r) for r in self.rules]
            for rule in self._rules:
                rule.callback = get_method(rule.callback)
                rule.process_links = get_method(rule.process_links)
                rule.process_request = get_method(rule.process_request)

        def set_crawler(self, crawler):
            super(CrawlSpider, self).set_crawler(crawler)
            self._follow_links = crawler.settings.getbool('CRAWLSPIDER_FOLLOW_LINKS', True)

CrawlSpider spider attributes

Besides the attributes inherited from Spider (name, allowed_domains, and so on), CrawlSpider adds a new attribute: rules. It is a collection of one or more Rule objects, each of which defines a specific crawling behaviour for the site. If several Rules match the same link, the first one, in the order they are defined in this attribute, is used.

CrawlSpider also provides an overridable method:

parse_start_url(response)

This method is called when the responses for start_urls come back. It parses those initial responses and must return an Item object, a Request object, or an iterable containing either.

Note: when writing crawl rules, avoid using parse as the callback. CrawlSpider uses the parse method to implement its own logic, so overriding parse will break the spider.

In addition, CrawlSpider adds more powerful features of its own, the most notable being the LinkExtractor (link extractor).

          LinkExtractor

          class scrapy.linkextractors.LinkExtractor

          LinkExtractor是從網(wǎng)頁(yè)(scrapy.http.Response)中抽取會(huì)被follow的鏈接的對(duì)象。目的很簡(jiǎn)單: 提取鏈接?每個(gè)LinkExtractor有唯一的公共方法是 extract_links(),它接收一個(gè) Response 對(duì)象,并返回一個(gè) scrapy.link.Link 對(duì)象

          即Link Extractors要實(shí)例化一次,并且 extract_links 方法會(huì)根據(jù)不同的 response 調(diào)用多次提取鏈接?源碼如下:

    class scrapy.linkextractors.LinkExtractor(
        allow = (),              # Only URLs matching these regular expressions are extracted; if empty, everything matches.
        deny = (),               # URLs matching this regular expression (or list of them) are never extracted.
        allow_domains = (),      # Domains whose links may be extracted.
        deny_domains = (),       # Domains whose links are never extracted.
        deny_extensions = None,
        restrict_xpaths = (),    # XPath expressions that, together with allow, restrict where links are taken from.
        tags = ('a','area'),
        attrs = ('href'),
        canonicalize = True,
        unique = True,
        process_value = None
    )

Purpose: extract the links in a response that match the given rules.

Reference: https://scrapy-chs.readthedocs.io/zh_CN/latest/topics/link-extractors.html
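As a quick illustration (not from the original article), a LinkExtractor can be exercised on its own, for example inside scrapy shell, where a response object is already defined; the regular expression below is only a placeholder:

    from scrapy.linkextractors import LinkExtractor

    # Inside `scrapy shell <some URL>`, `response` is already available.
    link_extractor = LinkExtractor(allow=r'/book/\d+\.html')
    for link in link_extractor.extract_links(response):
        # Each result is a scrapy.link.Link with .url and .text attributes.
        print(link.url, link.text)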

The Rule class

A LinkExtractor does the extracting, but the extraction rules themselves are expressed through the Rule class, which is defined as follows:

    class scrapy.contrib.spiders.Rule(link_extractor, callback=None, cb_kwargs=None,
                                      follow=None, process_links=None, process_request=None)

          參數(shù)如下:

          link_extractor:是一個(gè)Link Extractor對(duì)象。其定義了如何從爬取到的頁(yè)面提取鏈接。

          callback:是一個(gè)callable或string(該Spider中同名的函數(shù)將會(huì)被調(diào)用)。從link_extractor中每獲取到鏈接時(shí)將會(huì)調(diào)用該函數(shù)。該回調(diào)函數(shù)接收一個(gè)response作為其第一個(gè)參數(shù),并返回一個(gè)包含Item以及Request對(duì)象(或者這兩者的子類)的列表。

          cb_kwargs:包含傳遞給回調(diào)函數(shù)的參數(shù)(keyword argument)的字典。

          follow:是一個(gè)boolean值,指定了根據(jù)該規(guī)則從response提取的鏈接是否需要跟進(jìn)。如果callback為None,follow默認(rèn)設(shè)置True,否則默認(rèn)False。

          processlinks:是一個(gè)callable或string(該Spider中同名的函數(shù)將會(huì)被調(diào)用)。從linkextrator中獲取到鏈接列表時(shí)將會(huì)調(diào)用該函數(shù)。該方法主要是用來(lái)過(guò)濾。

          processrequest:是一個(gè)callable或string(該spider中同名的函數(shù)都將會(huì)被調(diào)用)。該規(guī)則提取到的每個(gè)request時(shí)都會(huì)調(diào)用該函數(shù)。該函數(shù)必須返回一個(gè)request或者None。用來(lái)過(guò)濾request。

          參考鏈接:https://scrapy-chs.readthedocs.io/zhCN/latest/topics/spiders.html#topics-spiders-ref
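For instance (an illustrative sketch, not from the original article), the follow default differs depending on whether a callback is set, so inside a CrawlSpider subclass you might write:

    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import Rule

    rules = (
        # No callback: the links are used only for navigation, and follow defaults to True.
        Rule(LinkExtractor(allow=r'/category/')),
        # With a callback: follow defaults to False, so set it explicitly to keep crawling deeper.
        Rule(LinkExtractor(allow=r'/detail/\d+'), callback='parse_detail', follow=True),
    )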

A generic crawler example

CrawlSpider's overall crawl flow:

1. The spider fetches the page content for the starting URL.

2. The link extractor extracts links from that page according to its extraction rules.

3. The rule parser parses the responses for the extracted links according to the specified parsing rules.

4. The data parsed in step 3 is packed into items and handed to the pipelines for persistent storage.

          創(chuàng)建CrawlSpider爬蟲項(xiàng)目

          創(chuàng)建scrapy工程:scrapy startproject projectName

          創(chuàng)建爬蟲文件(切換到創(chuàng)建的項(xiàng)目下執(zhí)行):scrapy genspider -t crawl spiderName www.xxx.com

          --此指令對(duì)比以前的指令多了 "-t crawl",表示創(chuàng)建的爬蟲文件是基于CrawlSpider這個(gè)類的,而不再是Spider這個(gè)基類。

          啟動(dòng)爬蟲文件(基于步驟二的路徑執(zhí)行):scrapy crawl crawlDemo

Case study: scraping novels

Testing that the novel listing can be scraped

This case scrapes novels from the 17k.com novel site. Open the home page, choose "分類" (categories), then filter by "已完本" (completed) and "只看免費" (free only), as shown in the screenshot below:

[Screenshot: 17k.com category page filtered to completed, free-only novels]

Link: https://www.17k.com/all/book/200030101.html

Following the steps above, in order:

scrapy startproject seventeen_k

scrapy genspider -t crawl novel www.17k.com

Open the project in PyCharm

and look at novel.py:

    class NovelSpider(CrawlSpider):
        name = 'novel'
        allowed_domains = ['www.17k.com']
        start_urls = ['https://www.17k.com/all/book/2_0_0_0_3_0_1_0_1.html']

        rules = (
            Rule(LinkExtractor(allow=r'//www.17k.com/book/\d+.html', restrict_xpaths=('//td[@class="td3"]')),
                 callback='parse_book', follow=True, process_links="process_booklink"),
        )

        def process_booklink(self, links):
            for index, link in enumerate(links):
                # Keep only the first book link
                if index == 0:
                    print("Limiting to one book:", link.url)
                    yield link
                else:
                    return

        def parse_book(self, response):
            item = {}
            return item

          首先測(cè)試一下是否可以爬取到內(nèi)容,注意rules給出的規(guī)則:

          Rule(allow = LinkExtractor(allow=r'//www.17k.com/book/\d+.html', restrictxpaths=('//td[@class="td3"]')),

          callback='parsebook',follow=True, processlinks="processbooklink")

          在allow中指定了提取鏈接的正則表達(dá)式,相當(dāng)于findall(r'正則內(nèi)容',response.text),在LinkExtractor中添加了參數(shù)restrict_xpaths是為了與正則表達(dá)式搭配使用,更快的定位鏈接。

          callback='parse_item'是指定回調(diào)函數(shù)

          process_links用于處理LinkExtractor匹配到的鏈接的回調(diào)函數(shù)

          然后,配置settings.py里的必要配置后運(yùn)行,即可發(fā)現(xiàn)指定頁(yè)面第一本小說(shuō)URL獲取正常:

[Screenshot: console output showing the first book URL extracted]

          執(zhí)行:scrapy crawl novel ,運(yùn)行結(jié)果:

[Screenshot: scrapy crawl novel output]

          解析小說(shuō)的詳細(xì)信息

          上圖鏈接對(duì)應(yīng)小說(shuō)的詳情頁(yè): https://www.17k.com/book/3352644.html

[Screenshot: the novel's detail page]

          通過(guò)解析書籍的URL的獲取到的響應(yīng),獲取以下數(shù)據(jù):

          catagory(分類),bookname,status,booknums,description,ctime,bookurl,chapter_url

          改寫parse_book函數(shù)內(nèi)容如下:

    import scrapy
    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule


    class NovelSpider(CrawlSpider):
        name = 'novel'
        allowed_domains = ['www.17k.com']
        start_urls = ['https://www.17k.com/all/book/2_0_0_0_3_0_1_0_1.html']

        rules = (
            Rule(LinkExtractor(allow=r'//www.17k.com/book/\d+.html', restrict_xpaths=('//td[@class="td3"]')), callback='parse_book',
                 follow=True, process_links="process_booklink"),
        )

        def process_booklink(self, links):
            for index, link in enumerate(links):
                # Keep only the first book link
                if index == 0:
                    print("Limiting to one book:", link.url)
                    yield link
                else:
                    return

        def parse_book(self, response):
            item = {}
            print("Parsing book_url")
            # Word count
            book_nums = response.xpath('//div[@class="BookData"]/p[2]/em/text()').extract()[0]
            # Book name
            book_name = response.xpath('//div[@class="Info "]/h1/a/text()').extract()[0]
            # Category
            category = response.xpath('//dl[@id="bookInfo"]/dd/div[2]/table//tr[1]/td[2]/a/text()').extract()[0]
            # Description
            description = "".join(response.xpath('//p[@class="intro"]/a/text()').extract())
            # Book URL
            book_url = response.url
            # Chapter list URL
            chapter_url = response.xpath('//dt[@class="read"]/a/@href').extract()[0]
            print(book_nums, book_url, book_name, category, description, chapter_url)
            return item

          打印結(jié)果:

[Screenshot: printed book details]

          解析章節(jié)信息

          通過(guò)解析書籍的URL獲取的響應(yīng)里解析得到每個(gè)小說(shuō)章節(jié)列表頁(yè)的URL,并發(fā)送請(qǐng)求獲得響應(yīng),得到對(duì)應(yīng)小說(shuō)的章節(jié)列表頁(yè),獲取以下數(shù)據(jù):id , title(章節(jié)名稱) content(內(nèi)容),ordernum(序號(hào)),ctime,chapterurl(章節(jié)url),catalog_url(目錄url)

          在novel.py的rules中添加:

    ...
    rules = (
        Rule(LinkExtractor(allow=r'//www.17k.com/book/\d+.html', restrict_xpaths=('//td[@class="td3"]')),
             callback='parse_book',
             follow=True, process_links="process_booklink"),
        # Match the chapter catalog URL
        Rule(LinkExtractor(allow=r'/list/\d+.html',
                           restrict_xpaths=('//dt[@class="read"]')), callback='parse_chapter', follow=True,
             process_links="process_chapterlink"),
    )

    def process_chapterlink(self, links):
        for index, link in enumerate(links):
            # Keep only the first chapter-catalog link
            if index == 0:
                print("Chapter catalog:", link.url)
                yield link
            else:
                return
    ...

[Screenshot: console output showing the chapter catalog URL]

          通過(guò)上圖可以發(fā)現(xiàn)從上一個(gè)鏈接的response中,匹配第二個(gè)rule可以提取到章節(jié)的鏈接,繼續(xù)編寫解析章節(jié)詳情的回調(diào)函數(shù)parse_chapter,代碼如下:

    # Earlier code omitted
    ......

    def parse_chapter(self, response):
        print("Parsing chapter catalog", response.url)  # response.url is the URL the data came from
        # Note: chapter titles and chapter URLs must stay paired one-to-one
        a_tags = response.xpath('//dl[@class="Volume"]/dd/a')
        chapter_list = []
        for index, a in enumerate(a_tags):
            title = a.xpath("./span/text()").extract()[0].strip()
            chapter_url = a.xpath("./@href").extract()[0]
            ordernum = index + 1
            c_time = datetime.datetime.now()
            chapter_url_refer = response.url
            chapter_list.append([title, ordernum, c_time, chapter_url, chapter_url_refer])
        print('Chapter catalog:', chapter_list)

          重新運(yùn)行測(cè)試,發(fā)現(xiàn)數(shù)據(jù)獲取正常!

[Screenshot: console output showing the chapter catalog list]

          獲取章節(jié)詳情

          通過(guò)解析對(duì)應(yīng)小說(shuō)的章節(jié)列表頁(yè)獲取到每一章節(jié)的URL,發(fā)送請(qǐng)求獲得響應(yīng),得到對(duì)應(yīng)章節(jié)的章節(jié)內(nèi)容,同樣添加章節(jié)的rule和回調(diào)函數(shù).完整代碼如下:

    import datetime

    import scrapy
    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule


    class NovelSpider(CrawlSpider):
        name = 'novel'
        allowed_domains = ['www.17k.com']
        start_urls = ['https://www.17k.com/all/book/2_0_0_0_3_0_1_0_1.html']

        rules = (
            Rule(LinkExtractor(allow=r'//www.17k.com/book/\d+.html', restrict_xpaths=('//td[@class="td3"]')),
                 callback='parse_book',
                 follow=True, process_links="process_booklink"),
            # Match the chapter catalog URL
            Rule(LinkExtractor(allow=r'/list/\d+.html',
                               restrict_xpaths=('//dt[@class="read"]')), callback='parse_chapter', follow=True,
                 process_links="process_chapterlink"),
            # Parse the chapter details
            Rule(LinkExtractor(allow=r'/chapter/(\d+)/(\d+).html',
                               restrict_xpaths=('//dl[@class="Volume"]/dd')), callback='get_content',
                 follow=False, process_links="process_chapterDetail"),
        )

        def process_booklink(self, links):
            for index, link in enumerate(links):
                # Keep only the first book link
                if index == 0:
                    print("Limiting to one book:", link.url)
                    yield link
                else:
                    return

        def process_chapterlink(self, links):
            for index, link in enumerate(links):
                # Keep only the first chapter-catalog link
                if index == 0:
                    print("Chapter catalog:", link.url)
                    yield link
                else:
                    return

        def process_chapterDetail(self, links):
            for index, link in enumerate(links):
                # Keep only the first chapter link
                if index == 0:
                    print("Chapter detail:", link.url)
                    yield link
                else:
                    return

        def parse_book(self, response):
            print("Parsing book_url")
            # Word count
            book_nums = response.xpath('//div[@class="BookData"]/p[2]/em/text()').extract()[0]
            # Book name
            book_name = response.xpath('//div[@class="Info "]/h1/a/text()').extract()[0]
            # Category
            category = response.xpath('//dl[@id="bookInfo"]/dd/div[2]/table//tr[1]/td[2]/a/text()').extract()[0]
            # Description
            description = "".join(response.xpath('//p[@class="intro"]/a/text()').extract())
            # Book URL
            book_url = response.url
            # Chapter list URL
            chapter_url = response.xpath('//dt[@class="read"]/a/@href').extract()[0]
            print(book_nums, book_url, book_name, category, description, chapter_url)

        def parse_chapter(self, response):
            print("Parsing chapter catalog", response.url)  # response.url is the URL the data came from
            # Note: chapter titles and chapter URLs must stay paired one-to-one
            a_tags = response.xpath('//dl[@class="Volume"]/dd/a')
            chapter_list = []
            for index, a in enumerate(a_tags):
                title = a.xpath("./span/text()").extract()[0].strip()
                chapter_url = a.xpath("./@href").extract()[0]
                ordernum = index + 1
                c_time = datetime.datetime.now()
                chapter_url_refer = response.url
                chapter_list.append([title, ordernum, c_time, chapter_url, chapter_url_refer])
            print('Chapter catalog:', chapter_list)

        def get_content(self, response):
            content = "".join(response.xpath('//div[@class="readAreaBox content"]/div[@class="p"]/p/text()').extract())
            print(content)

          同樣發(fā)現(xiàn)數(shù)據(jù)是正常的,如下圖:

[Screenshot: console output showing chapter content]

          進(jìn)行數(shù)據(jù)的持久化,寫入Mysql數(shù)據(jù)庫(kù)

          a. 定義結(jié)構(gòu)化字段(items.py文件的編寫):

    import scrapy


    class Seventeen_kItem(scrapy.Item):
        '''Fields for the information parsed from each book URL'''
        # define the fields for your item here like:
        # name = scrapy.Field()
        category = scrapy.Field()
        book_name = scrapy.Field()
        book_nums = scrapy.Field()
        description = scrapy.Field()
        book_url = scrapy.Field()
        chapter_url = scrapy.Field()


    class ChapterItem(scrapy.Item):
        '''Fields for the chapter list parsed from each novel's chapter catalog page'''
        # define the fields for your item here like:
        # name = scrapy.Field()
        chapter_list = scrapy.Field()


    class ContentItem(scrapy.Item):
        '''Fields for the content of a specific chapter'''
        # define the fields for your item here like:
        # name = scrapy.Field()
        content = scrapy.Field()
        chapter_detail_url = scrapy.Field()

b. Write novel.py:

    import datetime

    import scrapy
    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule

    from seventeen_k.items import Seventeen_kItem, ChapterItem, ContentItem


    class NovelSpider(CrawlSpider):
        name = 'novel'
        allowed_domains = ['www.17k.com']
        start_urls = ['https://www.17k.com/all/book/2_0_0_0_3_0_1_0_1.html']

        rules = (
            Rule(LinkExtractor(allow=r'//www.17k.com/book/\d+.html', restrict_xpaths=('//td[@class="td3"]')),
                 callback='parse_book',
                 follow=True, process_links="process_booklink"),
            # Match the chapter catalog URL
            Rule(LinkExtractor(allow=r'/list/\d+.html',
                               restrict_xpaths=('//dt[@class="read"]')), callback='parse_chapter', follow=True,
                 process_links="process_chapterlink"),
            # Parse the chapter details
            Rule(LinkExtractor(allow=r'/chapter/(\d+)/(\d+).html',
                               restrict_xpaths=('//dl[@class="Volume"]/dd')), callback='get_content',
                 follow=False, process_links="process_chapterDetail"),
        )

        def process_booklink(self, links):
            for index, link in enumerate(links):
                # Keep only the first book link
                if index == 0:
                    print("Limiting to one book:", link.url)
                    yield link
                else:
                    return

        def process_chapterlink(self, links):
            for index, link in enumerate(links):
                # Keep only the first chapter-catalog link
                if index == 0:
                    print("Chapter catalog:", link.url)
                    yield link
                else:
                    return

        def process_chapterDetail(self, links):
            for index, link in enumerate(links):
                # Keep only the first chapter link
                if index == 0:
                    print("Chapter detail:", link.url)
                    yield link
                else:
                    return

        def parse_book(self, response):
            print("Parsing book_url")
            # Word count
            book_nums = response.xpath('//div[@class="BookData"]/p[2]/em/text()').extract()[0]
            # Book name
            book_name = response.xpath('//div[@class="Info "]/h1/a/text()').extract()[0]
            # Category
            category = response.xpath('//dl[@id="bookInfo"]/dd/div[2]/table//tr[1]/td[2]/a/text()').extract()[0]
            # Description
            description = "".join(response.xpath('//p[@class="intro"]/a/text()').extract())
            # Book URL
            book_url = response.url
            # Chapter list URL
            chapter_url = response.xpath('//dt[@class="read"]/a/@href').extract()[0]
            # print(book_nums, book_url, book_name, category, description, chapter_url)

            item = Seventeen_kItem()
            item['book_nums'] = book_nums
            item['book_name'] = book_name
            item['category'] = category
            item['description'] = description
            item['book_url'] = book_url
            item['chapter_url'] = chapter_url
            yield item

        def parse_chapter(self, response):
            print("Parsing chapter catalog", response.url)  # response.url is the URL the data came from
            # Note: chapter titles and chapter URLs must stay paired one-to-one
            a_tags = response.xpath('//dl[@class="Volume"]/dd/a')
            chapter_list = []
            for index, a in enumerate(a_tags):
                title = a.xpath("./span/text()").extract()[0].strip()
                chapter_url = a.xpath("./@href").extract()[0]
                ordernum = index + 1
                c_time = datetime.datetime.now()
                chapter_url_refer = response.url
                chapter_list.append([title, ordernum, c_time, chapter_url, chapter_url_refer])
            # print('Chapter catalog:', chapter_list)

            item = ChapterItem()
            item["chapter_list"] = chapter_list
            yield item

        def get_content(self, response):
            content = "".join(response.xpath('//div[@class="readAreaBox content"]/div[@class="p"]/p/text()').extract())
            chapter_detail_url = response.url
            # print(content)

            item = ContentItem()
            item["content"] = content
            item["chapter_detail_url"] = chapter_detail_url
            yield item

c. Write the pipeline (pipelines.py):

    import logging

    import pymysql

    from .items import Seventeen_kItem, ChapterItem, ContentItem

    logger = logging.getLogger(__name__)  # A logger named after this module, used to record errors.


    class Seventeen_kPipeline(object):
        def open_spider(self, spider):
            # Connect to the database
            data_config = spider.settings["DATABASE_CONFIG"]
            if data_config["type"] == "mysql":
                self.conn = pymysql.connect(**data_config["config"])
                self.cursor = self.conn.cursor()

        def process_item(self, item, spider):
            # Write to the database
            if isinstance(item, Seventeen_kItem):
                # Write the book information.
                # (The item defines no author field, so deduplicate on book_name only.)
                sql = "select id from novel where book_name=%s"
                self.cursor.execute(sql, (item["book_name"],))
                if not self.cursor.fetchone():  # fetchone() returns the previous query's row, or None if there is none
                    try:
                        # Only insert when no id was found, i.e. the novel does not exist yet
                        sql = "insert into novel(category,book_name,book_nums,description,book_url,chapter_url)" \
                              "values(%s,%s,%s,%s,%s,%s)"
                        self.cursor.execute(sql, (
                            item["category"],
                            item["book_name"],
                            item["book_nums"],
                            item["description"],
                            item["book_url"],
                            item["chapter_url"],
                        ))
                        self.conn.commit()
                    except Exception as e:  # Catch the exception and log it
                        self.conn.rollback()
                        logger.warning("Novel info error! url=%s %s", item["book_url"], e)
                return item
            elif isinstance(item, ChapterItem):
                # Write the chapter information
                try:
                    sql = "insert into chapter (title,ordernum,c_time,chapter_url,chapter_url_refer)" \
                          "values(%s,%s,%s,%s,%s)"
                    # Note the shape of the item here:
                    # item["chapter_list"] == [(title, ordernum, c_time, chapter_url, chapter_url_refer), ...]
                    chapter_list = item["chapter_list"]
                    self.cursor.executemany(sql, chapter_list)  # executemany() writes several tuples in one call: executemany(sql, [(), ()])
                    self.conn.commit()
                except Exception as e:
                    self.conn.rollback()
                    logger.warning("Chapter info error! %s", e)
                return item
            elif isinstance(item, ContentItem):
                try:
                    sql = "update chapter set content=%s where chapter_url=%s"
                    content = item["content"]
                    chapter_detail_url = item["chapter_detail_url"]
                    self.cursor.execute(sql, (content, chapter_detail_url))
                    self.conn.commit()
                except Exception as e:
                    self.conn.rollback()
                    logger.warning("Chapter content error! url=%s %s", item["chapter_detail_url"], e)
                return item

        def close_spider(self, spider):
            # Close the database connection
            self.cursor.close()
            self.conn.close()

This relies on the following configuration in settings.py:

    DATABASE_CONFIG = {
        "type": "mysql",
        "config": {
            "host": "localhost",
            "port": 3306,
            "user": "root",
            "password": "root",
            "db": "noveldb",
            "charset": "utf8"
        }
    }
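The pipeline also has to be enabled in settings.py for Scrapy to call it. The original article does not show this step, so the module path below is an assumption based on the project name used earlier:

    # settings.py (assumed): register the pipeline class
    ITEM_PIPELINES = {
        "seventeen_k.pipelines.Seventeen_kPipeline": 300,
    }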

          數(shù)據(jù)庫(kù)的表分別為:

          novel表字段有:

          id(自動(dòng)增長(zhǎng)的)

          category

          book_name

          book_nums

          description

          book_url

          chapter_url

          chapter表字段有:

          id

          title

          ordernum

          c_time

          chapter_url

          chapter_url_refer

          conent
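The article never shows the CREATE TABLE statements, so here is a hedged sketch of schemas matching the columns above, created through pymysql; the column types and sizes are assumptions:

    import pymysql

    # Assumed DDL matching the column lists above; adjust types and sizes to taste.
    NOVEL_DDL = """
    create table if not exists novel (
        id int primary key auto_increment,
        category varchar(64),
        book_name varchar(255),
        book_nums varchar(32),
        description text,
        book_url varchar(255),
        chapter_url varchar(255)
    ) default charset=utf8;
    """

    CHAPTER_DDL = """
    create table if not exists chapter (
        id int primary key auto_increment,
        title varchar(255),
        ordernum int,
        c_time datetime,
        chapter_url varchar(255),
        chapter_url_refer varchar(255),
        content longtext
    ) default charset=utf8;
    """

    conn = pymysql.connect(host="localhost", port=3306, user="root",
                           password="root", db="noveldb", charset="utf8")
    with conn.cursor() as cursor:
        cursor.execute(NOVEL_DDL)
        cursor.execute(CHAPTER_DDL)
    conn.commit()
    conn.close()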

To scrape novels from multiple listing pages, add a function that handles start_urls: observe how the URL changes as you page through the listing, build the next page's URL in that function, and send the request back to the engine.

    ......

    page_num = 1

    # Callback for start_urls.
    # Purpose: build the URL of each listing page so we can fetch more than one page of novels.
    def parse_start_url(self, response):
        print(self.page_num, response)
        # Here we can parse the start_urls response, much like the old parse() method.
        # Build the URL of the next page.
        self.page_num += 1
        next_pageurl = 'https://www.17k.com/all/book/2_0_0_0_3_0_1_0_{}.html'.format(self.page_num)
        if self.page_num == 3:
            return
        yield scrapy.Request(next_pageurl)

Note: each of the process_xxx callbacks above keeps only the link with index == 0 (one book, one catalog, one chapter). Relax those limits, for example by using index <= 20 in process_chapterlink, to pull in more chapter information; a sketch follows.
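A hedged sketch of that tweak (the threshold 20 is just the example value mentioned above):

    def process_chapterlink(self, links):
        for index, link in enumerate(links):
            # Keep the first 21 links (index 0..20) instead of only the first one.
            if index <= 20:
                print("Chapter catalog:", link.url)
                yield link
            else:
                return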

[Screenshot: run output]

          ok抓緊時(shí)間測(cè)試一下吧!相信你會(huì)收獲很多!
