
Reserving a Library Seat in Python: Grabbing Library Seats with a Crawler!


[Figure: login.PNG (the login page)]

Here are the Fiddler capture results (the submitted form data and the request headers):

[Figure: form.PNG (the submitted form data)]

The request headers:

[Figure: 捕獲2.PNG (the request headers)]

Next, let me explain what each field in the form means:

1 TextBox1 & TextBox2: the values that follow them are fixed (I honestly have no idea what those values are)

2 ddlDay: the day you want to reserve, "今日" (today) or "明日" (tomorrow)

3 ddlRoom: seven digits in total. The first digit is the campus (1 = East, 2 = Middle, 3 = West); digits 2 to 4 are the room number on the door; the last three digits are always "001". So "3207001" in the screenshot above means study room 207 of the West campus library.

4 txtSeats: the count of reserved seats / free seats
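Given that breakdown, the seven-digit ddlRoom value can be assembled programmatically instead of hard-coded. A minimal sketch; the campus mapping and the make_room_id helper are my own names, not anything the site provides:

```python
# Digit 1: campus (1 = East, 2 = Middle, 3 = West);
# digits 2-4: room number; digits 5-7: always "001".
CAMPUS = {"east": 1, "middle": 2, "west": 3}

def make_room_id(campus: str, room: int) -> str:
    """Compose the ddlRoom value, e.g. ("west", 207) -> "3207001"."""
    return f"{CAMPUS[campus]}{room:03d}001"

print(make_room_id("west", 207))  # -> 3207001
```

The same scheme reproduces every room code visible in the page's dropdown, e.g. make_room_id("east", 401) gives "1401001".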

While writing the code I assumed this site had no anti-crawling measures. Naive of me, because it does! So, as usual, we have to attach a headers dict to every request and masquerade as a browser.

import requests
from bs4 import BeautifulSoup
from PIL import Image  # used by check_show() below
import pytesseract
import time

cookies = {}  # an empty dict that will hold the session cookies

headers = {
    'Connection': 'Keep-Alive',
    'Accept-Language': 'zh-CN,zh;q=0.8',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 SE 2.X MetaSr 1.0',
    'Accept-Encoding': 'gzip, deflate, sdch',
    'Upgrade-Insecure-Requests': '1',
    'Referer': 'http://172.16.47.84/',
    'X-Requested-With': 'XMLHttpRequest',
    'Host': '172.16.47.84',
}

def login(username, password, code):
    url = 'http://172.16.47.84/'
    form = {
        '__VIEWSTATE': '/wEPDwUKMTM4MDI3NzI1MGRkScbX95yvCTaTc5BiKFlgoJSN0gi9TvPi4as2Sed98Ug=',
        '__EVENTVALIDATION': '/wEWBQLYjc7zAwLs0bLrBgLs0fbZDALs0Yq1BQKM54rGBuOROfeko3/nasulQt/v8ihgd8TDoZ4pgdA4rtbVvanV',
        'TextBox1': username,
        'TextBox2': password,
        'TextBox3': code,  # the captcha text
        'Button1': '登入',
    }
    resp = requests.post(url, headers=headers, data=form, cookies=cookies)
    return resp

Oh, and the site also has a captcha, an extremely messy one at that, which makes for a terrible user experience. When I tried to recognize it with pytesseract the success rate was nearly zero. Honestly, even I as a human cannot tell a slanted 0 from an o, or a g from a 9, let alone a machine. Here is the captcha handling:

def get_code():
    url = 'http://172.16.47.84/VerifyCode.aspx?'
    resp = requests.get(url, headers=headers)
    # save the session cookie so later requests share the same session
    cookies['ASP.NET_SessionId'] = resp.cookies.get('ASP.NET_SessionId')
    with open('code.jpg', 'wb') as img:
        img.write(resp.content)

def check_show():
    image = Image.open('code.jpg')  # the original opened the file twice; once is enough
    imgry = image.convert('L')      # convert to grayscale
    imgry.show()                    # display the grayscale captcha
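Grayscale conversion alone rarely helps pytesseract on a noisy captcha; a binarization (thresholding) pass sometimes does. A sketch of the idea on raw pixel values; the threshold of 140 is an untuned guess, and with PIL the equivalent would be imgry.point(lambda p: 255 if p > 140 else 0):

```python
def binarize(pixels, threshold=140):
    """Map grayscale values (0-255) to pure black or white,
    which strips faint background noise before OCR."""
    return [255 if p > threshold else 0 for p in pixels]

print(binarize([10, 150, 255, 139, 141]))  # -> [0, 255, 255, 0, 255]
```

Whether this actually rescues this site's captcha is untested; the idea is simply to hand pytesseract a cleaner two-tone image.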

Once logged in, the next step is picking a seat:

def choose_seat():
    try:
        churl = "http://172.16.47.84/DayNavigation.aspx"
        resp = requests.get(churl, headers=headers, cookies=cookies)
        form_1 = {
            '__VIEWSTATE': '/wEPDwUJMTAxMjgzNTc0D2QWAgIDD2QWBgIFDxBkZBYBZmQCBw8QDxYGHg5EYXRhVmFsdWVGaWVsZAUGc2VhdGlkHg1EYXRhVGV4dEZpZWxkBQhsb2NhdGlvbh4LXyFEYXRhQm91bmRnZBAVCB7kuJzljLrlm77kuabppoboh6rkuaDlrqQ0MDHlrqQe5Lit5Yy65Zu+5Lmm6aaG6Ieq5Lmg5a6kMTAx5a6kHuS4reWMuuWbvuS5pummhuiHquS5oOWupDIwMeWupB7kuK3ljLrlm77kuabppoboh6rkuaDlrqQyMDblrqQe5Lit5Yy65Zu+5Lmm6aaG6Ieq5Lmg5a6kMjEx5a6kHuilv+WMuuWbvuS5pummhuiHquS5oOWupDIwN+WupB7opb/ljLrlm77kuabppoboh6rkuaDlrqQ0MDHlrqQe6KW/5Yy65Zu+5Lmm6aaG6Ieq5Lmg5a6kNDA45a6kFQgHMTQwMTAwMQcyMTAxMDAxBzIyMDEwMDEHMjIwNjAwMQcyMjExMDAxBzMyMDcwMDEHMzQwMTAwMQczNDA4MDAxFCsDCGdnZ2dnZ2dnFgECBWQCCQ8PFgIeBFRleHQFBjc4LzI0NGRkZIJLRlSaEzl+hXYbMzsabmNO5M8kN4OxvGABm9Fy2LY8',
            '__EVENTVALIDATION': "/wEWEwL72v/5BALs0bLrBgLs0fbZDAKPwd2MAQLpu8rPDwKt6srPDwLytq6nDwK6p5TSCwKVpujRCwKVpuzQCwKVpvCRCAKetsSdCgL0p+TRCAL0p5TSCwL0p8CRDwL9h9mtDgKM54rGBgK7q7GGCALWlM+bAudkkeCZYjIbgxcot/UJfplpDini+B9RpiIThosUANI+",
            'TextBox1': "883",
            'TextBox2': "1920",
            "ddlDay": "明日",
            'ddlRoom': "3207001",
            "txtSeats": "",
            "Button1": "手動選座",
        }
        resp_1 = requests.post(churl, headers=headers, data=form_1, cookies=cookies)
    except Exception:
        print("####---------Login failed--------#######")

### Submit the seat reservation

def chseat():
    churl = "http://172.16.47.84/AppSTod.aspx?roomid=3207&hei=834&wd=1920"
    churl_1 = "http://172.16.47.84/Skip.aspx?seatid=3207128"
    form_2 = {
        'roomid': '3207',
        'hei': '883',
        'wd': '1920',
    }
    form_3 = {
        'seatid': '3207128',
    }
    for i in range(0, 2):
        resp_2 = requests.post(churl, headers=headers, data=form_2, cookies=cookies)
        resp_3 = requests.post(churl_1, headers=headers, data=form_3, cookies=cookies)
        soup = BeautifulSoup(resp_3.content, 'lxml')
        scripts = soup.find_all("script")
        if scripts:
            print(scripts[0])
        else:
            print("Grabbing a seat...")
            for i in range(0, 50):
                # pause so we do not hammer the server; 50 tries, tweak as needed
                time.sleep(1)
                resp_2 = requests.post(churl, headers=headers, data=form_2, cookies=cookies)
                resp_3 = requests.post(churl_1, headers=headers, data=form_3, cookies=cookies)
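The fixed loop of 50 requests above can also be factored into a small polling helper, which makes the attempt count and interval easy to tweak. A sketch with invented names (poll, request_once, is_success), demonstrated offline with a stub in place of the real HTTP call:

```python
import time

def poll(request_once, is_success, attempts=50, interval=1.0):
    """Call request_once() up to `attempts` times, pausing `interval`
    seconds between tries, until is_success(response) is true."""
    for _ in range(attempts):
        resp = request_once()
        if is_success(resp):
            return resp
        time.sleep(interval)
    return None

# Offline demo: a stub that "succeeds" on the third call.
calls = iter([None, None, "ok"])
result = poll(lambda: next(calls), lambda r: r == "ok", attempts=5, interval=0)
print(result)  # -> ok
```

In the real script, request_once would wrap the two requests.post calls and is_success would inspect the returned page.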

Detecting whether the grab succeeded relies on the page that pops up after a successful reservation, which tells you "你已經成功預約" ("you have successfully reserved"); that message sits inside a <script> block of the returned page, which is what the code above inspects.
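Building on that, a minimal success check can scan the returned page's <script> blocks for the success phrase. A sketch using only the standard library; the sample HTML below is invented for illustration, not captured from the real site:

```python
import re

SUCCESS = "成功預約"  # the success phrase shown after a reservation

def reserved_ok(html: str) -> bool:
    """Return True if any inline <script> contains the success phrase."""
    scripts = re.findall(r"<script[^>]*>(.*?)</script>", html, re.S)
    return any(SUCCESS in s for s in scripts)

# Invented example of what the popup page might look like:
sample = "<html><script>alert('你已經成功預約');</script></html>"
print(reserved_ok(sample))  # -> True
```

Parsing with BeautifulSoup, as the script already does, works just as well; the regex version only avoids the extra dependency.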

Here are the results:

[Figure: 運作結果.PNG (run output)]

Verifying on the site itself:

[Figure: 驗證結果.PNG (verification result)]

I'm still learning, so please don't hesitate to point things out.