Once we have written several spiders in a Scrapy project, how are they discovered, and how are they loaded? That is the job of the spider-loading API. Today we will walk through the spider loading process and how to plug in a custom loader.
SpiderLoader API
This API is in charge of locating and instantiating spiders, and it is implemented by a single class, SpiderLoader:
class scrapy.spiderloader.SpiderLoader
This class is responsible for retrieving and handling the spider classes defined in the project.
A custom spider loader can be used by specifying its path in the SPIDER_LOADER_CLASS project setting, but the custom loader must fully implement the scrapy.interfaces.ISpiderLoader interface in order to run without errors (a minimal sketch follows below).
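For instance, the simplest way to satisfy the interface is to subclass the built-in loader instead of writing every method from scratch. The snippet below is only a sketch and not part of Scrapy itself; the class name LoggingSpiderLoader and the extra log line are invented for illustration.

from scrapy.spiderloader import SpiderLoader

class LoggingSpiderLoader(SpiderLoader):
    """Subclassing the default SpiderLoader keeps the ISpiderLoader
    contract (from_settings, load, list, find_by_request) intact."""

    def load(self, spider_name):
        # hypothetical extra behaviour: log which spider is being requested
        print("loading spider: {}".format(spider_name))
        return super(LoggingSpiderLoader, self).load(spider_name)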
The class provides the following methods.
from_settings(settings)
Scrapy uses this class method to create an instance of the loader. Using the current project settings, it loads the spiders found by recursively walking the modules listed in the SPIDER_MODULES setting, which is generated in the settings file when the project is created and typically looks like ['demo1.spiders'].
Parameters: settings (Settings instance) - project settings
@classmethod
def from_settings(cls, settings):
    return cls(settings)
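As a rough usage sketch (Scrapy normally makes this call for you, but it can also be made by hand inside a project whose settings are reachable via get_project_settings):

from scrapy.utils.project import get_project_settings
from scrapy.spiderloader import SpiderLoader

settings = get_project_settings()                    # reads the project's settings.py
spider_loader = SpiderLoader.from_settings(settings)
# every spider found under SPIDER_MODULES (e.g. ['demo1.spiders']) is now cached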
load(spider_name)
Get the Spider class with the given name. It looks among the previously loaded spiders for one named spider_name and raises a KeyError if none is found.
Parameters: spider_name (str) - spider name
def load(self, spider_name):
    """
    Return the Spider class for the given spider name. If the spider
    name is not found, raise a KeyError.
    """
    try:
        return self._spiders[spider_name]
    except KeyError:
        raise KeyError("Spider not found: {}".format(spider_name))
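A small usage sketch, continuing from the loader instance built above; the name "demo1" is assumed to be the name attribute of one of the project's spiders:

try:
    spider_cls = spider_loader.load("demo1")         # returns the class, not an instance
    print(spider_cls)
except KeyError as e:
    print(e)                                         # "Spider not found: demo1"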
list()
Get the names of the spiders available in the project.
def list(self):
    """
    Return a list with the names of all spiders available in the project.
    """
    return list(self._spiders.keys())
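Again as a sketch with the same loader instance:

for name in spider_loader.list():                    # e.g. ['demo1', 'demo2']
    print(name)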
find_by_request(request)
List the names of the spiders that can handle the given request. It tries to match the request's URL against the spiders' domains.
Parameters: request (Request instance) - the request to query
def find_by_request(self, request):
    """
    Return the list of spider names that can handle the given request.
    """
    return [name for name, cls in self._spiders.items()
            if cls.handles_request(request)]
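A sketch of matching a request against the project's spiders; the URL is invented, and the result depends on each spider's allowed_domains:

from scrapy import Request

req = Request("http://example.com/page")
print(spider_loader.find_by_request(req))            # e.g. ['demo1'], or [] if nothing matches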
Full source code:
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from collections import defaultdict
import traceback
import warnings

from zope.interface import implementer

from scrapy.interfaces import ISpiderLoader
from scrapy.utils.misc import walk_modules
from scrapy.utils.spider import iter_spider_classes


@implementer(ISpiderLoader)
class SpiderLoader(object):
    """
    SpiderLoader is a class which locates and loads spiders
    in a Scrapy project.
    """
    def __init__(self, settings):
        self.spider_modules = settings.getlist('SPIDER_MODULES')
        self.warn_only = settings.getbool('SPIDER_LOADER_WARN_ONLY')
        self._spiders = {}
        self._found = defaultdict(list)
        self._load_all_spiders()

    def _check_name_duplicates(self):
        dupes = ["\n".join("  {cls} named {name!r} (in {module})".format(
                                module=mod, cls=cls, name=name)
                           for (mod, cls) in locations)
                 for name, locations in self._found.items()
                 if len(locations) > 1]
        if dupes:
            msg = ("There are several spiders with the same name:\n\n"
                   "{}\n\n  This can cause unexpected behavior.".format(
                        "\n\n".join(dupes)))
            warnings.warn(msg, UserWarning)

    def _load_spiders(self, module):
        for spcls in iter_spider_classes(module):
            self._found[spcls.name].append((module.__name__, spcls.__name__))
            self._spiders[spcls.name] = spcls

    def _load_all_spiders(self):
        for name in self.spider_modules:
            try:
                for module in walk_modules(name):
                    self._load_spiders(module)
            except ImportError as e:
                if self.warn_only:
                    msg = ("\n{tb}Could not load spiders from module '{modname}'. "
                           "See above traceback for details.".format(
                                modname=name, tb=traceback.format_exc()))
                    warnings.warn(msg, RuntimeWarning)
                else:
                    raise
        self._check_name_duplicates()

    @classmethod
    def from_settings(cls, settings):
        return cls(settings)

    def load(self, spider_name):
        """
        Return the Spider class for the given spider name. If the spider
        name is not found, raise a KeyError.
        """
        try:
            return self._spiders[spider_name]
        except KeyError:
            raise KeyError("Spider not found: {}".format(spider_name))

    def find_by_request(self, request):
        """
        Return the list of spider names that can handle the given request.
        """
        return [name for name, cls in self._spiders.items()
                if cls.handles_request(request)]

    def list(self):
        """
        Return a list with the names of all spiders available in the project.
        """
        return list(self._spiders.keys())
Configuring a custom loader class
Configure SPIDER_LOADER_CLASS in the project's settings file.
Default: 'scrapy.spiderloader.SpiderLoader'. This is the class that will be used to load spiders, and it must implement the SpiderLoader API.
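For example, to plug in the custom loader sketched earlier, the settings file would contain a line like the one below; the module path demo1.loaders is only an assumption, use whichever module actually holds the class:

# settings.py
SPIDER_LOADER_CLASS = 'demo1.loaders.LoggingSpiderLoader'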
In short, this API locates the spiders defined in a project and exposes the operations on them, including loading them, looking up a specific spider, and determining which spiders can handle a given request. In most cases you do not need to write a loader yourself; the built-in implementation is used internally.