Over the past few days I finished studying distributed crawlers, and I found that understanding the scrapy-redis source code really matters for learning how distributed crawling works. Without further ado, straight to the good stuff:
Table of contents
- 1. Creating the project
- 2. Source code walkthrough
  - 2.1 connection.py
  - 2.2 defaults.py
  - 2.3 dupefilter.py
  - 2.4 picklecompat.py
  - 2.5 pipelines.py
  - 2.6 queue.py
  - 2.7 scheduler.py
  - 2.8 spiders.py
  - 2.9 utils.py
1. Creating the project
First, we create a Scrapy project to work with. Run the following command in the terminal to create it:
scrapy startproject <project_name>
Next, we need to copy the scrapy-redis source code into the Scrapy project. The source can be downloaded from:
https://github.com/rmax/scrapy-redis
You can get it either by downloading the zip archive or via git clone; personally I find the familiar zip download the more convenient option.
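If you prefer git, a clone works just as well; a minimal sketch (run it from any convenient directory, then copy the inner scrapy_redis package folder into your project):
git clone https://github.com/rmax/scrapy-redis.git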
2. Source code walkthrough
(figure: structure of the distributed crawler system)
The scrapy-redis source consists mainly of the following files:
connection.py
defaults.py
dupefilter.py
picklecompat.py
pipelines.py
queue.py
scheduler.py
spiders.py
utils.py
2.1 connection.py
This file handles the connection to Redis. Compared with the other files it is the one used most often and arguably the most important: it provides the Redis client instance that the pipelines, queue, and scheduler modules all call into. The annotated connection.py follows:
import six
from scrapy.utils.misc import load_object
from . import defaults
# Shortcut maps 'setting name' -> 'parameter name'.
# Mapping from Scrapy setting names to redis client parameter names
SETTINGS_PARAMS_MAP = {
'REDIS_URL': 'url',
'REDIS_HOST': 'host',
'REDIS_PORT': 'port',
'REDIS_ENCODING': 'encoding',
}
def get_redis_from_settings(settings):
# Get a redis connection instance,
# building the connection parameters from the Scrapy settings
"""Returns a redis client instance from given Scrapy settings object.
This function uses ``get_client`` to instantiate the client and uses
``defaults.REDIS_PARAMS`` global as defaults values for the parameters. You
can override them using the ``REDIS_PARAMS`` setting.
Parameters
----------
settings : Settings
A scrapy settings object. See the supported settings below.
Returns
-------
server
Redis client instance.
Other Parameters
----------------
REDIS_URL : str, optional
Server connection URL.
REDIS_HOST : str, optional
Server host.
REDIS_PORT : str, optional
Server port.
REDIS_ENCODING : str, optional
Data encoding.
REDIS_PARAMS : dict, optional
Additional client parameters.
"""
# Shallow copy, so that mutating params does not change the default REDIS_PARAMS
params = defaults.REDIS_PARAMS.copy()
# Update params with the values from settings
params.update(settings.getdict('REDIS_PARAMS'))
# XXX: Deprecate REDIS_* settings.
# Iterate over the mapping table and pick up the mapped settings
for source, dest in SETTINGS_PARAMS_MAP.items():
# Values set in settings take precedence
val = settings.get(source)
# If the setting is absent, params keeps its default
if val:
params[dest] = val
# Allow ``redis_cls`` to be a path to a class.
if isinstance(params.get('redis_cls'), six.string_types):
params['redis_cls'] = load_object(params['redis_cls'])
return get_redis(**params)
# Backwards compatible alias.
from_settings = get_redis_from_settings
def get_redis(**kwargs):
"""Returns a redis client instance.
Parameters
----------
redis_cls : class, optional
Defaults to ``redis.StrictRedis``.
url : str, optional
If given, ``redis_cls.from_url`` is used to instantiate the class.
**kwargs
Extra parameters to be passed to the ``redis_cls`` class.
Returns
-------
server
Redis client instance.
"""
# If redis_cls is not given, fall back to the default redis client class
redis_cls = kwargs.pop('redis_cls', defaults.REDIS_CLS)
url = kwargs.pop('url', None) # check whether kwargs contains a url
if url:
# Connect to redis via URL; a URL takes precedence when given
return redis_cls.from_url(url, **kwargs)
else:
# Connect to redis using the keyword parameters
return redis_cls(**kwargs)
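To make the two helpers above concrete, here is a minimal sketch of how a client can be obtained either directly or from a Scrapy settings object (the host, port, and URL are placeholder assumptions for a local Redis):
from scrapy.settings import Settings
from scrapy_redis.connection import get_redis, get_redis_from_settings

# Direct connection with keyword parameters (assumed local redis)
server = get_redis(host='127.0.0.1', port=6379)

# Or build the client from a Scrapy settings object;
# REDIS_URL takes precedence over REDIS_HOST / REDIS_PORT
settings = Settings({'REDIS_URL': 'redis://127.0.0.1:6379/0'})
server = get_redis_from_settings(settings)
server.ping()  # raises an exception if the connection is not working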
2.2 defaults.py
import redis
# For standalone use.
# Key used by the standalone duplicate filter (timestamped)
DUPEFILTER_KEY = 'dupefilter:%(timestamp)s'
# Key used to store scraped items; %(spider)s is the spider name
PIPELINE_KEY = '%(spider)s:items'
# Redis client class used to connect to redis
REDIS_CLS = redis.StrictRedis
# Character encoding
REDIS_ENCODING = 'utf-8'
# Sane connection defaults.
# Default connection parameters for the redis client
REDIS_PARAMS = {
'socket_timeout': 30,
'socket_connect_timeout': 30,
'retry_on_timeout': True,
'encoding': REDIS_ENCODING,
}
# Key of the per-spider requests queue
SCHEDULER_QUEUE_KEY = '%(spider)s:requests'
# Default queue class (a priority queue), which decides the order requests enter and leave
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'
# Key under which request fingerprints are stored for deduplication
SCHEDULER_DUPEFILTER_KEY = '%(spider)s:dupefilter'
# Class used to compute and check request fingerprints
SCHEDULER_DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'
# Key where start URLs are read from
START_URLS_KEY = '%(name)s:start_urls'
# Whether the start URLs key is a redis set (True) or a list (False)
START_URLS_AS_SET = False
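Any of these defaults can be overridden from the project's settings.py; a minimal sketch with illustrative values (not recommendations):
# settings.py (illustrative overrides)
REDIS_URL = 'redis://127.0.0.1:6379/0'
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.FifoQueue'  # swap out the default PriorityQueue
REDIS_START_URLS_AS_SET = True  # read start URLs from a set instead of a list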
2.3 dupefilter.py
import logging
import time
from scrapy.dupefilters import BaseDupeFilter
from scrapy.utils.request import request_fingerprint
from . import defaults
from .connection import get_redis_from_settings
logger = logging.getLogger(__name__)
# scrapy-redis deduplication is implemented with a redis set
# TODO: Rename class to RedisDupeFilter.
class RFPDupeFilter(BaseDupeFilter):
"""Redis-based request duplicates filter.
This class can also be used with default Scrapy's scheduler.
"""
logger = logger
def __init__(self, server, key, debug=False):
"""Initialize the duplicates filter.
Parameters
----------
server : redis.StrictRedis
The redis server instance.
key : str
Redis key where to store fingerprints.
debug : bool, optional
Whether to log filtered requests.
"""
# server is the redis client instance; through it we can access the request queue and the fingerprint set stored in redis
self.server = server
self.key = key
self.debug = debug
self.logdupes = True
# Classmethod that builds an instance of this class from the settings
@classmethod
def from_settings(cls, settings):
"""Returns an instance from given settings.
This uses by default the key ``dupefilter:<timestamp>``. When using the
``scrapy_redis.scheduler.Scheduler`` class, this method is not used as
it needs to pass the spider name in the key.
Parameters
----------
settings : scrapy.settings.Settings
Returns
-------
RFPDupeFilter
A RFPDupeFilter instance.
"""
# Get the redis connection instance
server = get_redis_from_settings(settings)
# XXX: This creates one-time key. needed to support to use this
# class as standalone dupefilter with scrapy's default scheduler
# if scrapy passes spider on open() method this wouldn't be needed
# TODO: Use SCRAPY_JOB env as default and fallback to timestamp.
# Key under which fingerprints are stored
key = defaults.DUPEFILTER_KEY % {'timestamp': int(time.time())}
debug = settings.getbool('DUPEFILTER_DEBUG') # defaults to False
# Instantiate this class, forwarding the arguments to __init__
return cls(server, key=key, debug=debug)
@classmethod
def from_crawler(cls, crawler):
"""Returns instance from crawler.
Parameters
----------
crawler : scrapy.crawler.Crawler
Returns
-------
RFPDupeFilter
Instance of RFPDupeFilter.
"""
return cls.from_settings(crawler.settings)
def request_seen(self, request):
"""Returns True if request was already seen.
Parameters
----------
request : scrapy.http.Request
Returns
-------
bool
"""
fp = self.request_fingerprint(request)  # compute the request fingerprint
# This returns the number of values added, zero if already exists.
# Add the fingerprint to a redis set:
# self.server - the redis connection instance
# self.key - the key under which fingerprints are stored
# fp - the fingerprint itself
added = self.server.sadd(self.key, fp)
# added == 0 means the fingerprint already existed, so the request has been seen before
return added == 0
def request_fingerprint(self, request):
"""Returns a fingerprint for a given request.
Parameters
----------
request : scrapy.http.Request
Returns
-------
str
"""
return request_fingerprint(request)
@classmethod
def from_spider(cls, spider):
settings = spider.settings
server = get_redis_from_settings(settings)
dupefilter_key = settings.get("SCHEDULER_DUPEFILTER_KEY", defaults.SCHEDULER_DUPEFILTER_KEY)
key = dupefilter_key % {'spider': spider.name}
debug = settings.getbool('DUPEFILTER_DEBUG')
return cls(server, key=key, debug=debug)
def close(self, reason=''):
# When the crawl finishes, clear the stored fingerprints
"""Delete data on close. Called by Scrapy's scheduler.
Parameters
----------
reason : str, optional
"""
self.clear()
def clear(self):
"""Clears fingerprints data."""
self.server.delete(self.key)
# Log filtered duplicate requests
def log(self, request, spider):
"""Logs given request.
Parameters
----------
request : scrapy.http.Request
spider : scrapy.spiders.Spider
"""
if self.debug:
msg = "Filtered duplicate request: %(request)s"
self.logger.debug(msg, {'request': request}, extra={'spider': spider})
elif self.logdupes:
msg = ("Filtered duplicate request %(request)s"
" - no more duplicates will be shown"
" (see DUPEFILTER_DEBUG to show all duplicates)")
self.logger.debug(msg, {'request': request}, extra={'spider': spider})
self.logdupes = False
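The heart of the dedup logic is the return value of SADD; a quick sketch against a local Redis (host, port, and key below are assumptions):
import redis
from scrapy.http import Request
from scrapy_redis.dupefilter import RFPDupeFilter

server = redis.StrictRedis(host='127.0.0.1', port=6379)  # assumed local redis
df = RFPDupeFilter(server, key='demo:dupefilter')

req = Request('https://example.com')
print(df.request_seen(req))  # False -> first time, fingerprint added to the set
print(df.request_seen(req))  # True  -> SADD returned 0, duplicate
df.clear()                   # delete the fingerprint set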
2.4 picklecompat.py
"""A pickle wrapper module with protocol=-1 by default."""
try:
import cPickle as pickle # PY2
except ImportError:
import pickle # package used on PY3
# loads: deserialize bytes read from redis back into a Python object
def loads(s):
return pickle.loads(s)
# dumps: serialize a Python object into bytes
def dumps(obj):
return pickle.dumps(obj, protocol=-1)
This file implements two functions, loads and dumps, which together form a serializer. Because Redis cannot store complex objects (keys must be strings, and values can only be strings, lists of strings, sets of strings, or hashes), everything we store has to be serialized first. The serializer here is Python's pickle module, a tool that works on both Python 2 and Python 3. It is mainly used by the scheduler to store request objects.
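A minimal round-trip sketch of the serializer (the dict below just stands in for a request converted to a dict):
from scrapy_redis import picklecompat

obj = {'url': 'https://example.com', 'method': 'GET'}  # illustrative stand-in for a request dict
data = picklecompat.dumps(obj)  # bytes, safe to store in redis
assert picklecompat.loads(data) == obj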
2.5 pipelines.py
from scrapy.utils.misc import load_object
from scrapy.utils.serialize import ScrapyJSONEncoder
from twisted.internet.threads import deferToThread
from . import connection, defaults
# The default serializer encodes items as JSON strings
default_serialize = ScrapyJSONEncoder().encode
# Pipeline that serializes scraped items and pushes them into a redis list
class RedisPipeline(object):
"""Pushes serialized item into a redis list/queue
Settings
--------
REDIS_ITEMS_KEY : str
Redis key where to store items.
REDIS_ITEMS_SERIALIZER : str
Object path to serializer function.
"""
def __init__(self, server,
key=defaults.PIPELINE_KEY,
serialize_func=default_serialize):
"""Initialize pipeline.
Parameters
----------
server : StrictRedis
Redis client instance.
key : str
Redis key where to store items.
serialize_func : callable
Items serializer function.
"""
self.server = server
self.key = key
self.serialize = serialize_func
# Classmethod constructor: builds the parameters and the redis connection instance from settings
@classmethod
def from_settings(cls, settings):
# Create the redis connection instance
params = {
'server': connection.from_settings(settings),
}
# If REDIS_ITEMS_KEY is set, use the value from settings
if settings.get('REDIS_ITEMS_KEY'):
params['key'] = settings['REDIS_ITEMS_KEY']
# If a serializer function is configured in settings, it takes precedence
if settings.get('REDIS_ITEMS_SERIALIZER'):
params['serialize_func'] = load_object(
settings['REDIS_ITEMS_SERIALIZER']
)
return cls(**params)
@classmethod
def from_crawler(cls, crawler):
return cls.from_settings(crawler.settings)
# Called automatically for every item the spider yields
def process_item(self, item, spider):
# Store the item in a separate thread, so the next item can be processed before the previous one has finished being written
return deferToThread(self._process_item, item, spider)
# The function that actually performs the storage
def _process_item(self, item, spider):
key = self.item_key(item, spider)  # build the redis key for this item
data = self.serialize(item)  # serialize the item to a string with the configured serializer
self.server.rpush(key, data)  # push the data onto a redis list; self.server is the redis connection instance
return item
# Build the redis key under which items are stored
def item_key(self, item, spider):
"""Returns redis key based on given spider.
Override this function to use a different key depending on the item
and/or spider.
"""
# generate the redis key from the spider's name
return self.key % {'spider': spider.name}
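To use this pipeline, enable it in settings.py like any other item pipeline; a minimal sketch (the priority 300 is just a typical choice, and REDIS_ITEMS_KEY is optional here since it matches the default):
# settings.py (illustrative)
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,
}
REDIS_ITEMS_KEY = '%(spider)s:items'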
2.6 queue.py
from scrapy.utils.reqser import request_to_dict, request_from_dict
from . import picklecompat
class Base(object):
"""Per-spider base queue class"""
def __init__(self, server, spider, key, serializer=None):
"""Initialize per-spider redis queue.
Parameters
----------
server : StrictRedis
Redis client instance.
spider : Spider
Scrapy spider instance.
key: str
Redis key where to put and get messages.
serializer : object
Serializer object with ``loads`` and ``dumps`` methods.
"""
if serializer is None:
# Backward compatibility.
# TODO: deprecate pickle.
serializer = picklecompat
# If the serializer has no loads function, raise an exception;
# this forces any serializer passed in to implement loads
if not hasattr(serializer, 'loads'):
raise TypeError("serializer does not implement 'loads' function: %r"
% serializer)
# Likewise, raise if the serializer does not implement dumps
if not hasattr(serializer, 'dumps'):
raise TypeError("serializer '%s' does not implement 'dumps' function: %r"
% serializer)
# The attributes below are available to every method of the class and its subclasses
self.server = server
self.spider = spider
self.key = key % {'spider': spider.name}
self.serializer = serializer
# Encode a request into a string
def _encode_request(self, request):
"""Encode a request object"""
# Convert the request into a dict
obj = request_to_dict(request, self.spider)
# Serialize the dict into a string and return it
return self.serializer.dumps(obj)
# Decode a previously encoded request back into a Request object
def _decode_request(self, encoded_request):
"""Decode an request previously encoded"""
# Turn the dict back into a Request object, ready to be handed to the downloader
obj = self.serializer.loads(encoded_request)
return request_from_dict(obj, self.spider)
# __len__, push and pop below must be overridden by subclasses, otherwise the queue cannot be used
def __len__(self):
"""Return the length of the queue"""
raise NotImplementedError
def push(self, request):
"""Push a request"""
raise NotImplementedError
def pop(self, timeout=0):
"""Pop a request"""
raise NotImplementedError
# Delete everything stored under self.key
def clear(self):
"""Clear queue/stack"""
self.server.delete(self.key)
# First-in, first-out queue backed by a redis list
class FifoQueue(Base):
"""Per-spider FIFO queue"""
# Return the length of the redis list
def __len__(self):
"""Return the length of the queue"""
return self.server.llen(self.key)
# Push the request onto the head of the list
def push(self, request):
"""Push a request"""
self.server.lpush(self.key, self._encode_request(request))
def pop(self, timeout=0):
"""Pop a request"""
# timeout is usually left at its default of 0 (non-blocking)
if timeout > 0:
# blocking pop from the tail of the list
data = self.server.brpop(self.key, timeout)
if isinstance(data, tuple):
data = data[1]
else:
# non-blocking pop from the tail of the list
data = self.server.rpop(self.key)
if data:
# decode the popped element back into a request and hand it to the downloader
return self._decode_request(data)
# Priority queue: each request is pushed with a score into a sorted set, and the lowest score is popped first
class PriorityQueue(Base):
"""Per-spider priority queue abstraction using redis' sorted set"""
def __len__(self):
"""Return the length of the queue"""
return self.server.zcard(self.key)
def push(self, request):
"""Push a request"""
data = self._encode_request(request)
score = -request.priority
# We don't use zadd method as the order of arguments change depending on
# whether the class is Redis or StrictRedis, and the option of using
# kwargs only accepts strings, not bytes.
# A sorted set is used to implement the priority queue
self.server.execute_command('ZADD', self.key, score, data)
def pop(self, timeout=0):
"""
Pop a request
timeout not support in this queue class
"""
# use atomic range/remove using multi/exec
# pipeline() is a method of self.server that returns a pipeline object,
# which batches the following commands into a single atomic multi/exec block
pipe = self.server.pipeline()
pipe.multi()
# zrange returns the element with the lowest score (the first one in ascending order)
# zremrangebyrank then removes that same element
pipe.zrange(self.key, 0, 0).zremrangebyrank(self.key, 0, 0)
# Execute both commands: the element is removed and returned at the same time.
# results holds the returned element(s); count is 1 or 0 depending on whether anything was removed
results, count = pipe.execute()
if results: # results is truthy if an element was returned
# take the first element of the returned list and decode it
return self._decode_request(results[0])
# Last-in, first-out queue (a stack)
class LifoQueue(Base):
"""Per-spider LIFO queue."""
def __len__(self):
"""Return the length of the stack"""
return self.server.llen(self.key)
def push(self, request):
"""Push a request"""
self.server.lpush(self.key, self._encode_request(request))
def pop(self, timeout=0):
"""Pop a request"""
if timeout > 0:
data = self.server.blpop(self.key, timeout)
if isinstance(data, tuple):
data = data[1]
else:
data = self.server.lpop(self.key)
if data:
return self._decode_request(data)
# TODO: Deprecate the use of these names.
SpiderQueue = FifoQueue
SpiderStack = LifoQueue
SpiderPriorityQueue = PriorityQueue
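Which of these three queues a crawl uses is controlled by the SCHEDULER_QUEUE_CLASS setting; a sketch of the options (pick exactly one, PriorityQueue being the default):
# settings.py (pick one)
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'  # sorted set, ordered by -request.priority (default)
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.FifoQueue'      # redis list, first in first out
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.LifoQueue'      # redis list used as a stack, last in first out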
2.7 scheduler.py
import importlib
import six
from scrapy.utils.misc import load_object
from . import connection, defaults
# TODO: add SCRAPY_JOB support.
# FIXME
class Scheduler(object):
"""Redis-based scheduler
Settings
--------
SCHEDULER_PERSIST : bool (default: False)
Whether to persist or clear redis queue.
SCHEDULER_FLUSH_ON_START : bool (default: False)
Whether to flush redis queue on start.
SCHEDULER_IDLE_BEFORE_CLOSE : int (default: 0)
How many seconds to wait before closing if no message is received.
SCHEDULER_QUEUE_KEY : str
Scheduler redis key.
SCHEDULER_QUEUE_CLASS : str
Scheduler queue class.
SCHEDULER_DUPEFILTER_KEY : str
Scheduler dupefilter redis key.
SCHEDULER_DUPEFILTER_CLASS : str
Scheduler dupefilter class.
SCHEDULER_SERIALIZER : str
Scheduler serializer.
"""
def __init__(self, server,
persist=False,
flush_on_start=False,
queue_key=defaults.SCHEDULER_QUEUE_KEY,
queue_cls=defaults.SCHEDULER_QUEUE_CLASS,
dupefilter_key=defaults.SCHEDULER_DUPEFILTER_KEY,
dupefilter_cls=defaults.SCHEDULER_DUPEFILTER_CLASS,
idle_before_close=0,
serializer=None):
"""Initialize scheduler.
Parameters
----------
server : Redis
The redis server instance.
persist : bool
Whether to flush requests when closing. Default is False.
flush_on_start : bool
Whether to flush requests on start. Default is False.
queue_key : str
Requests queue key.
queue_cls : str
Importable path to the queue class.
dupefilter_key : str
Duplicates filter key.
dupefilter_cls : str
Importable path to the dupefilter class.
idle_before_close : int
Timeout before giving up.
"""
if idle_before_close < 0:
raise TypeError("idle_before_close cannot be negative")
self.server = server
self.persist = persist
self.flush_on_start = flush_on_start
self.queue_key = queue_key
self.queue_cls = queue_cls
self.dupefilter_cls = dupefilter_cls
self.dupefilter_key = dupefilter_key
self.idle_before_close = idle_before_close
self.serializer = serializer
self.stats = None
def __len__(self):
return len(self.queue)
@classmethod
def from_settings(cls, settings): # entry point when building the scheduler from settings
kwargs = {
# whether to keep the redis queue when the spider closes
'persist': settings.getbool('SCHEDULER_PERSIST'),
# whether to flush (empty) the queue when the spider starts
'flush_on_start': settings.getbool('SCHEDULER_FLUSH_ON_START'),
'idle_before_close': settings.getint('SCHEDULER_IDLE_BEFORE_CLOSE'),
}
# If these values are missing, it means we want to use the defaults.
optional = {
# TODO: Use custom prefixes for this settings to note that are
# specific to scrapy-redis.
'queue_key': 'SCHEDULER_QUEUE_KEY',
'queue_cls': 'SCHEDULER_QUEUE_CLASS', # defaults to the priority queue
'dupefilter_key': 'SCHEDULER_DUPEFILTER_KEY', # deduplication key
# We use the default setting name to keep compatibility.
'dupefilter_cls': 'DUPEFILTER_CLASS',
'serializer': 'SCHEDULER_SERIALIZER',
}
for name, setting_name in optional.items():
val = settings.get(setting_name)
if val:
kwargs[name] = val
# Support serializer as a path to a module.
if isinstance(kwargs.get('serializer'), six.string_types):
kwargs['serializer'] = importlib.import_module(kwargs['serializer'])
# the redis connection instance
server = connection.from_settings(settings)
# Ensure the connection is working.
server.ping()
return cls(server=server, **kwargs)
@classmethod
def from_crawler(cls, crawler):
instance = cls.from_settings(crawler.settings)
# FIXME: for now, stats are only supported from this constructor
instance.stats = crawler.stats
return instance
def open(self, spider):
self.spider = spider
try:
self.queue = load_object(self.queue_cls)(
server=self.server,
spider=spider,
key=self.queue_key % {'spider': spider.name},
serializer=self.serializer,
)
except TypeError as e:
raise ValueError("Failed to instantiate queue class '%s': %s",
self.queue_cls, e)
self.df = load_object(self.dupefilter_cls).from_spider(spider)
if self.flush_on_start:
self.flush()
# notice if there are requests already in the queue to resume the crawl
if len(self.queue):
spider.log("Resuming crawl (%d requests scheduled)" % len(self.queue))
def close(self, reason):
if not self.persist:
self.flush()
def flush(self):
self.df.clear()
self.queue.clear()
# Enqueue a request
def enqueue_request(self, request):
# self.df.request_seen(request) returns True if the request has already been seen
# request.dont_filter defaults to False, so 'not request.dont_filter' is normally True
# if filtering is enabled and the request is already known, log it and return False
if not request.dont_filter and self.df.request_seen(request):
self.df.log(request, self.spider)
return False
# stats is usually None here; it is only set via from_crawler
if self.stats:
self.stats.inc_value('scheduler/enqueued/redis', spider=self.spider)
self.queue.push(request)
return True
# Dequeue the next request
def next_request(self):
block_pop_timeout = self.idle_before_close
# pop one request from the queue
request = self.queue.pop(block_pop_timeout)
if request and self.stats:
self.stats.inc_value('scheduler/dequeued/redis', spider=self.spider)
# return the request to the engine, which hands it to the downloader to fetch the page
return request
def has_pending_requests(self):
return len(self) > 0
This file reimplements the Scheduler class to replace Scrapy's default scheduler (scrapy.core.scheduler). The idea is to use one designated Redis instance as the shared storage medium, so that all crawler processes are scheduled from the same place.
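A minimal settings.py sketch for switching a project over to this scheduler (the persistence flag and Redis URL are illustrative choices):
# settings.py (illustrative)
SCHEDULER = 'scrapy_redis.scheduler.Scheduler'
DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'
SCHEDULER_PERSIST = True  # keep the queue and fingerprints in redis between runs
REDIS_URL = 'redis://127.0.0.1:6379/0'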
2.8 spiders.py
from scrapy import signals
from scrapy.exceptions import DontCloseSpider
from scrapy.spiders import Spider, CrawlSpider
from . import connection, defaults
from .utils import bytes_to_str
class RedisMixin(object):
"""Mixin class to implement reading urls from a redis queue."""
redis_key = None # key in redis that holds the start urls
redis_batch_size = None # how many start urls to fetch per batch
redis_encoding = None # character encoding used to decode messages from redis
# Redis client placeholder.
server = None
# Override start_requests so that start requests are read from redis via next_requests
def start_requests(self):
"""Returns a batch of start requests from redis."""
return self.next_requests()
def setup_redis(self, crawler=None):
"""Setup redis connection and idle signal.
This should be called after the spider has set its crawler object.
"""
if self.server is not None:
return
if crawler is None:
# We allow optional crawler argument to keep backwards
# compatibility.
# XXX: Raise a deprecation warning.
crawler = getattr(self, 'crawler', None)
if crawler is None:
raise ValueError("crawler is required")
settings = crawler.settings
if self.redis_key is None:
self.redis_key = settings.get(
'REDIS_START_URLS_KEY', defaults.START_URLS_KEY,
)
self.redis_key = self.redis_key % {'name': self.name}
if not self.redis_key.strip():
raise ValueError("redis_key must not be empty")
if self.redis_batch_size is None:
# TODO: Deprecate this setting (REDIS_START_URLS_BATCH_SIZE).
self.redis_batch_size = settings.getint(
'REDIS_START_URLS_BATCH_SIZE',
settings.getint('CONCURRENT_REQUESTS'),
)
try:
self.redis_batch_size = int(self.redis_batch_size)
except (TypeError, ValueError):
raise ValueError("redis_batch_size must be an integer")
if self.redis_encoding is None:
self.redis_encoding = settings.get('REDIS_ENCODING', defaults.REDIS_ENCODING)
self.logger.info("Reading start URLs from redis key '%(redis_key)s' "
"(batch size: %(redis_batch_size)s, encoding: %(redis_encoding)s",
self.__dict__)
# the redis connection instance
self.server = connection.from_settings(crawler.settings)
# The idle signal is called when the spider has no requests left,
# that's when we will schedule new requests from redis queue
crawler.signals.connect(self.spider_idle, signal=signals.spider_idle)
def next_requests(self):
"""Returns a request to be scheduled or none."""
# By default the start urls key is a redis list; if REDIS_START_URLS_AS_SET is True it is a set
use_set = self.settings.getbool('REDIS_START_URLS_AS_SET', defaults.START_URLS_AS_SET)
# Pick the function used to fetch start urls from redis:
# use_set=False -> self.server.lpop (list type)
# use_set=True  -> self.server.spop (set type)
fetch_one = self.server.spop if use_set else self.server.lpop
# XXX: Do we need to use a timeout here?
found = 0
# TODO: Use redis pipeline execution.
while found < self.redis_batch_size:
# pop one entry from redis (bytes, or None when the queue is empty)
data = fetch_one(self.redis_key)
if not data:
# Queue empty.
break
# the popped url is bytes; make_request_from_data converts it to str for Python 3 compatibility
req = self.make_request_from_data(data)
if req:
yield req # hand the Request over to the engine
found += 1
else:
self.logger.debug("Request not made from data: %r", data)
if found:
self.logger.debug("Read %s requests from '%s'", found, self.redis_key)
# Build a Request instance from the data fetched from redis
def make_request_from_data(self, data):
"""Returns a Request instance from data coming from Redis.
By default, ``data`` is an encoded URL. You can override this method to
provide your own message decoding.
Parameters
----------
data : bytes
Message from redis.
"""
url = bytes_to_str(data, self.redis_encoding)
return self.make_requests_from_url(url)
def schedule_next_requests(self):
"""Schedules a request if available"""
# TODO: While there is capacity, schedule a batch of redis requests.
for req in self.next_requests():
self.crawler.engine.crawl(req, spider=self)
def spider_idle(self):
"""Schedules a request if available, otherwise waits."""
# XXX: Handle a sentinel to close the spider.
self.schedule_next_requests()
raise DontCloseSpider
class RedisSpider(RedisMixin, Spider):
"""Spider that reads urls from redis queue when idle.
Attributes
----------
redis_key : str (default: REDIS_START_URLS_KEY)
Redis key where to fetch start URLs from..
redis_batch_size : int (default: CONCURRENT_REQUESTS)
Number of messages to fetch from redis on each attempt.
redis_encoding : str (default: REDIS_ENCODING)
Encoding to use when decoding messages from redis queue.
Settings
--------
REDIS_START_URLS_KEY : str (default: "<spider.name>:start_urls")
Default Redis key where to fetch start URLs from..
REDIS_START_URLS_BATCH_SIZE : int (deprecated by CONCURRENT_REQUESTS)
Default number of messages to fetch from redis on each attempt.
REDIS_START_URLS_AS_SET : bool (default: False)
Use SET operations to retrieve messages from the redis queue. If False,
the messages are retrieve using the LPOP command.
REDIS_ENCODING : str (default: "utf-8")
Default encoding to use when decoding messages from redis queue.
"""
@classmethod
def from_crawler(self, crawler, *args, **kwargs):
obj = super(RedisSpider, self).from_crawler(crawler, *args, **kwargs)
obj.setup_redis(crawler)
return obj
class RedisCrawlSpider(RedisMixin, CrawlSpider):
"""Spider that reads urls from redis queue when idle.
Attributes
----------
redis_key : str (default: REDIS_START_URLS_KEY)
Redis key where to fetch start URLs from..
redis_batch_size : int (default: CONCURRENT_REQUESTS)
Number of messages to fetch from redis on each attempt.
redis_encoding : str (default: REDIS_ENCODING)
Encoding to use when decoding messages from redis queue.
Settings
--------
REDIS_START_URLS_KEY : str (default: "<spider.name>:start_urls")
Default Redis key where to fetch start URLs from..
REDIS_START_URLS_BATCH_SIZE : int (deprecated by CONCURRENT_REQUESTS)
Default number of messages to fetch from redis on each attempt.
REDIS_START_URLS_AS_SET : bool (default: True)
Use SET operations to retrieve messages from the redis queue.
REDIS_ENCODING : str (default: "utf-8")
Default encoding to use when decoding messages from redis queue.
"""
@classmethod
def from_crawler(self, crawler, *args, **kwargs):
obj = super(RedisCrawlSpider, self).from_crawler(crawler, *args, **kwargs)
obj.setup_redis(crawler)
return obj
This file is the entry point of the distributed spider. First, when the spider is initialized, setup_redis() uses the connection module to establish the connection to redis. Next, next_requests() fetches start URLs from redis; from a handful of start URLs plus a LinkExtractor the spider can discover many new URLs, which go to the scheduler for deduplication and scheduling. When the spider runs out of URLs in the scheduling pool, the spider_idle signal fires, which in turn triggers the spider's next_requests() and reads another batch of URLs from the start URL pool in redis.
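A minimal sketch of how this looks in practice (the spider name, key, and URL are illustrative): define a spider that inherits from RedisSpider, then seed its start URL list from the redis command line.
# myspider.py (illustrative)
from scrapy_redis.spiders import RedisSpider

class MySpider(RedisSpider):
    name = 'myspider'
    redis_key = 'myspider:start_urls'  # same as the default '<name>:start_urls'

    def parse(self, response):
        yield {'url': response.url, 'title': response.css('title::text').get()}

# Seed the queue, e.g. from redis-cli:
#   lpush myspider:start_urls https://example.com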
2.9 utils.py
import six
def bytes_to_str(s, encoding='utf-8'):
"""Returns a str if a bytes object is given."""
# Convert a bytes object to a str (needed on Python 3)
if six.PY3 and isinstance(s, bytes):
return s.decode(encoding)
return s
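A one-line sanity check of this helper:
from scrapy_redis.utils import bytes_to_str
assert bytes_to_str(b'https://example.com') == 'https://example.com'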