
Weibo Data Crawler: Fetching a Given User ID's Fans and Follows (Part 2)

Note: recently, accessing Weibo data with the requests library has been raising SSL errors, while the same requests made through urllib succeed.
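
If urllib also runs into certificate problems in some environments, one possible workaround (my assumption, not part of the original code) is to pass an explicit SSL context to urlopen:

import ssl
import urllib.request

# Placeholder headers; the real requests below need the full header/cookie set.
headers = {'user-agent': 'Mozilla/5.0'}

# Assumption: skipping certificate verification is acceptable for this kind of
# scraping. It weakens transport security, so use it with care.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

req = urllib.request.Request(url='https://weibo.com/u/7006403277', headers=headers)
html = urllib.request.urlopen(req, timeout=10, context=ctx).read().decode('utf-8')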

Goal: given a specific Weibo user ID, fetch that user's follow and fan lists. Here o_id is the numeric ID in the profile URL (weibo.com/u/<o_id>), while p_id is the page_id required by the follow/fans pages (weibo.com/p/<p_id>/follow).

1. Getting the p_id from the o_id

The user's homepage source embeds the page_id in a CONFIG block, so a regex match is enough to extract the p_id value:

add = urllib.request.Request(url='https://weibo.com/u/%s' % o_id, headers=headers)
r = urllib.request.urlopen(url=add, timeout=10).read().decode('utf-8')
p_id = re.findall(r'CONFIG\[\'page_id\']=\'(\d+)\'', r)[0]
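
Since re.findall returns an empty list when the page layout changes or the cookie has expired, a guarded variant (a sketch, not in the original code) fails with a clearer message than a bare IndexError:

matches = re.findall(r'CONFIG\[\'page_id\']=\'(\d+)\'', r)
if not matches:
    # Likely causes: expired cookie, login redirect, or a page-layout change.
    raise RuntimeError("page_id not found in the homepage source")
p_id = matches[0]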

2. Fetching the user's follow list


Each followed user appears in the page source as an escaped HTML node whose action-data attribute carries a uid and an fnick (nickname) field, so one regex captures both:

follows = re.findall(r'action-type=\\"itemClick\\" action-data=\\"uid=(\d+)&fnick=(.*?)&', r)
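
Because the pattern has two capture groups, re.findall returns a list of (uid, fnick) tuples that can be consumed directly:

# Each element of follows is a (uid, nickname) tuple.
for uid, nick in follows:
    print(uid, nick)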

3. Fetching the user's fan list

The fan list works the same way; the request URL just adds the relate=fans query parameter (see the full code below), and the same regex applies:

fans = re.findall(r'action-type=\\"itemClick\\" action-data=\\"uid=(\d+)&fnick=(.*?)&', r)

The full code is shown below:

import os
import re
import urllib.request


def get_follow_fan(o_id):
    headers = {
        'authority': 'weibo.com',
        'cache-control': 'max-age=0',
        'sec-ch-ua': '"Google Chrome";v="89", "Chromium";v="89", ";Not A Brand";v="99"',
        'sec-ch-ua-mobile': '?0',
        'upgrade-insecure-requests': '1',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36',
        'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'sec-fetch-site': 'same-origin',
        'sec-fetch-mode': 'navigate',
        'sec-fetch-user': '?1',
        'sec-fetch-dest': 'document',
        'referer': 'https://weibo.com/p/1005057006403277/home?from=page_100505&mod=TAB',
        'accept-language': 'zh-CN,zh;q=0.9',
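        # NOTE: this cookie comes from a logged-in session and will expire;
        # replace it with your own before running.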
        'cookie': 'SINAGLOBAL=6305977095079.815.1616397209554; SUBP=0033WrSXqPxfM725Ws9jqgMF55529P9D9Whz61LsS7a7JfSRlmm6Blq55JpX5KMhUgL.Foqf1KnNehn7S0n2dJLoIpYc1K2Ni--ciKn7iKL2i--fiKLsiKLsi--Xi-iFiK.R; ALF=1650453176; SSOLoginState=1618917180; SCF=AgNorTL2GgvK5yfYVqY3knwSwqj5lZNS84d3pueWId7X8UrmumFcK5RzqIA1AeSsRoxjI-uFDhQr8Ls_HmgW3Ig.; SUB=_2A25NesdsDeRhGeBL4loW8CbMzDSIHXVu8b-krDV8PUNbmtAKLWagkW9NRq1zk3DqkxCQtlPWwfMBOeiJ_k-vowk5; _s_tentry=login.sina.com.cn; UOR=,,login.sina.com.cn; Apache=7071940489168.076.1618917200237; ULV=1618917200242:6:2:1:7071940489168.076.1618917200237:1617503411652; wb_view_log_6598708078=1440*9601.5; webim_unReadCount=%7B%22time%22%3A1618924157611%2C%22dm_pub_total%22%3A0%2C%22chat_group_client%22%3A0%2C%22chat_group_notice%22%3A0%2C%22allcountNum%22%3A42%2C%22msgbox%22%3A0%7D',
    }
    # o_id = "7006403277"
    # response = requests.get('https://weibo.com/u/%s' % o_id, headers=headers, params=params)

    add = urllib.request.Request(url='https://weibo.com/u/%s' % o_id, headers=headers)
    r = urllib.request.urlopen(url=add, timeout=10).read().decode('utf-8')


    p_id = re.findall(r'CONFIG\[\'page_id\']=\'(\d+)\'',r)[0]

    os.makedirs("data", exist_ok=True)  # make sure the output directory exists
    f_follow = open("data/" + o_id + "_follow.txt", "w", encoding="utf-8")
    f_fan = open("data/" + o_id + "_fan.txt", "w", encoding="utf-8")
    follow_data = []
    fan_data = []

    for i in range(1, 6):  # crawl the first five pages of each list
        # fetch the followed users on page i
        # response = requests.get('https://weibo.com/p/%s/follow?page=%d' % (p_id, i), headers=headers)

        add = urllib.request.Request(url='https://weibo.com/p/%s/follow?page=%d' % (p_id, i), headers=headers)
        r = urllib.request.urlopen(url=add, timeout=10).read().decode('utf-8')

        follows = re.findall(r'action-type=\\"itemClick\\" action-data=\\"uid=(\d+)&fnick=(.*?)&',r)
        print("關注:")
        print(follows)
        for follow in follows:
            line = follow[0] + " " + follow[1]  # "uid nickname"
            f_follow.write(line)
            f_follow.write("\n")
            follow_data.append(follow)

        # fetch the fan users on page i
        # response = requests.get('https://weibo.com/p/%s/follow?relate=fans&page=%d' % (p_id, i), headers=headers)

        add = urllib.request.Request(url='https://weibo.com/p/%s/follow?relate=fans&page=%d' % (p_id, i), headers=headers)
        r = urllib.request.urlopen(url=add, timeout=10).read().decode('utf-8')

        fans = re.findall(r'action-type=\\"itemClick\\" action-data=\\"uid=(\d+)&fnick=(.*?)&', r)
        print("粉絲:")
        print(fans)
        for fan in fans:
            line = fan[0] + " " + fan[1]  # "uid nickname"
            f_fan.write(line)
            f_fan.write("\n")
            fan_data.append(fan)

    f_follow.close()
    f_fan.close()
    return follow_data, fan_data


if __name__ == '__main__':
    get_follow_fan("7006403277")

Running the script prints each page's follows and fans to the console and writes them to data/<o_id>_follow.txt and data/<o_id>_fan.txt, one "uid nickname" pair per line.
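
As a quick sanity check, the saved list can be read back into (uid, nickname) pairs; a minimal sketch, assuming the example o_id used above:

with open("data/7006403277_follow.txt", encoding="utf-8") as f:
    pairs = [tuple(line.rstrip("\n").split(" ", 1)) for line in f if line.strip()]
print(pairs[:5])  # first five (uid, nickname) tuples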
