
N Ways to Fetch Web Resources with Python 3

1. The simplest way

import urllib.request

response = urllib.request.urlopen('http://python.org/')
html = response.read()
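Note that `read()` returns raw bytes, not text; you need to decode them before doing any string processing. A minimal sketch of the difference, using a `data:` URL (which `urlopen` also accepts) so it runs without network access; in practice you would pass an `http(s)` URL:

```python
import urllib.request

# A data: URL stands in for a real site so this runs offline.
response = urllib.request.urlopen('data:text/plain;charset=utf-8,hello')
html_bytes = response.read()              # read() always returns bytes
html_text = html_bytes.decode('utf-8')    # decode to str before string operations
print(html_text)
```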

2. Using a Request object

req = urllib.request.Request('http://python.org/')
response = urllib.request.urlopen(req)
the_page = response.read()
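A `Request` object bundles the URL, method, headers, and body before anything is sent, which is why the later examples build one first. A small offline sketch of what it carries by default:

```python
from urllib.request import Request

# Inspecting a Request before sending it; nothing here touches the network.
req = Request('http://python.org/')
print(req.full_url)      # the URL the request targets
print(req.get_method())  # 'GET', because no data was attached
```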

3. Sending data (POST)


#! /usr/bin/env python3

import urllib.parse
import urllib.request

url = 'http://localhost/login.php'
user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
values = {
    'act': 'login',
    'login[email]': '[email protected]',
    'login[password]': '123456'
}

# POST data must be bytes, so encode the urlencoded string
data = urllib.parse.urlencode(values).encode('utf8')
req = urllib.request.Request(url, data)
req.add_header('Referer', 'http://www.python.org/')
response = urllib.request.urlopen(req)
the_page = response.read()
print(the_page.decode("utf8"))
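It is worth seeing exactly what `urlencode` produces and why the extra `.encode()` step is needed: `urlencode` returns a `str` in `key=value&...` form (spaces become `+`), while `urlopen` requires the POST body as `bytes`. A minimal offline sketch:

```python
import urllib.parse

values = {'act': 'login', 'q': 'a b'}
encoded = urllib.parse.urlencode(values)   # a str: 'act=login&q=a+b'
data = encoded.encode('utf8')              # urlopen wants bytes for a POST body
print(encoded)
```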


4. Sending data and headers


headers = { 'User-Agent' : user_agent }
req = urllib.request.Request(url, data, headers)
response = urllib.request.urlopen(req)
the_page = response.read()
print(the_page.decode("utf8"))
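One detail to be aware of: `Request` normalizes header names internally (only the first letter stays capitalized), and attaching a body switches the method to POST. An offline sketch of both behaviors; the URL and body here are placeholders:

```python
from urllib.request import Request

headers = {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}
req = Request('http://localhost/login.php', b'act=login', headers)

# Header names are stored normalized, so look them up as 'User-agent'
print(req.get_header('User-agent'))
print(req.get_method())  # 'POST', because data is present
```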


5. HTTP errors


import urllib.request
import urllib.error

req = urllib.request.Request('http://www.python.org/fish.html')
try:
    urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    print(e.code)
    print(e.read().decode("utf8"))
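`HTTPError` is useful because it carries both the status code and the server's error page. You can even construct one by hand to see what a handler receives; a real one only ever comes out of `urlopen`, this is just for offline inspection:

```python
from urllib.error import HTTPError

# Hand-built error object, purely for inspection.
e = HTTPError('http://www.python.org/fish.html', 404, 'Not Found', {}, None)
print(e.code)    # the numeric status
print(e.reason)  # the status message
```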


6. Exception handling, part 1


from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

req = Request("http://twitter.com/")
try:
    response = urlopen(req)
except HTTPError as e:
    print('The server couldn\'t fulfill the request.')
    print('Error code: ', e.code)
except URLError as e:
    print('We failed to reach a server.')
    print('Reason: ', e.reason)
else:
    print("good!")
    print(response.read().decode("utf8"))


7. Exception handling, part 2


from urllib.request import Request, urlopen
from urllib.error import URLError

# Alternative to part 1: catch only URLError (HTTPError is a subclass)
# and distinguish the cases by attribute.
try:
    response = urlopen(Request("http://twitter.com/"))
except URLError as e:
    if hasattr(e, 'reason'):
        print('We failed to reach a server.')
        print('Reason: ', e.reason)
    elif hasattr(e, 'code'):
        print('The server couldn\'t fulfill the request.')
        print('Error code: ', e.code)
else:
    print("good!")


8. HTTP authentication


import urllib.request

# create a password manager
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()

# Add the username and password.
# If we knew the realm, we could use it instead of None.
top_level_url = "https://cms.tetx.com/"
password_mgr.add_password(None, top_level_url, 'yzhang', 'cccddd')

handler = urllib.request.HTTPBasicAuthHandler(password_mgr)

# create "opener" (OpenerDirector instance)
opener = urllib.request.build_opener(handler)

# use the opener to fetch a URL
a_url = "https://cms.tetx.com/"
x = opener.open(a_url)
print(x.read())

# Install the opener.
# Now all calls to urllib.request.urlopen use our opener.
urllib.request.install_opener(opener)
a = urllib.request.urlopen(a_url).read().decode('utf8')
print(a)
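What the auth handler ultimately does, after the server challenges with `401`, is attach an `Authorization` header whose value is `base64("user:password")`. A sketch of that wire format, with placeholder credentials, makes clear why Basic auth is only safe over HTTPS:

```python
import base64

# base64 is an encoding, not encryption - anyone on the wire can decode it.
credentials = base64.b64encode(b'yzhang:cccddd').decode('ascii')
auth_header = 'Basic ' + credentials
print(auth_header)
```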


9. Using a proxy


import urllib.request

# ProxyHandler keys are URL schemes such as 'http' or 'https';
# urllib has no built-in SOCKS support, so a SOCKS5 proxy on
# localhost:1080 would need a third-party library (e.g. PySocks).
proxy_support = urllib.request.ProxyHandler({'http': 'http://localhost:1080'})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)

a = urllib.request.urlopen("http://g.cn").read().decode("utf8")
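When `ProxyHandler` is constructed with no argument, it picks up proxies from the standard `http_proxy` / `https_proxy` environment variables instead. A small sketch, setting the variable in-process just for demonstration:

```python
import os
import urllib.request

# Simulate a shell environment that exports http_proxy.
os.environ['http_proxy'] = 'http://localhost:8080'

# No dict given: the handler reads the environment via getproxies().
handler = urllib.request.ProxyHandler()
print(handler.proxies.get('http'))
```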


10. Timeouts


import socket
import urllib.request

# timeout in seconds
timeout = 2
socket.setdefaulttimeout(timeout)

# this call to urllib.request.urlopen now uses the default timeout
# we have set in the socket module
req = urllib.request.Request('http://twitter.com/')
a = urllib.request.urlopen(req).read()
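Changing the socket-module default affects every socket in the process. A usually better option is `urlopen`'s own `timeout` parameter, which applies to that one call only. A sketch using a `data:` URL so it runs offline (a `data:` fetch never actually waits on a socket, but the parameter is the same for `http(s)` URLs):

```python
import urllib.request

# Per-call timeout in seconds; socket.timeout is raised if it expires.
response = urllib.request.urlopen('data:text/plain,ok', timeout=2)
body = response.read()
print(body)
```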
