
Weibo Data Crawler: Fetching the Fans and Follows of a Given ID (Part 2)

Note: recently, accessing Weibo data with the requests library has been raising SSL errors, while accessing it with the urllib library does not.
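For reference, the urllib-based fetch that replaces the requests call can be as small as the sketch below (fetch_html is a hypothetical helper name; headers stands for the browser headers shown in the full code later):

import urllib.request


def fetch_html(url, headers):
    # Build a request with browser-like headers, then read and decode the body
    req = urllib.request.Request(url=url, headers=headers)
    return urllib.request.urlopen(req, timeout=10).read().decode('utf-8')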

Goal: given a specific Weibo user ID, fetch that user's fans and follows.

1. Obtaining the p_id from the o_id

The structure of the user's homepage is as follows:

[Screenshot: homepage source, where a script block contains CONFIG['page_id']='…']

The p_id value can then be extracted with a regular expression:

add = urllib.request.Request(url='https://weibo.com/u/%s' % o_id, headers=headers)
r = urllib.request.urlopen(url=add, timeout=10).read().decode('utf-8')
p_id = re.findall(r"CONFIG\['page_id'\]='(\d+)'", r)[0]
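Wrapped as a small function, this step might look like the following sketch (get_page_id is my own name for it, and the empty-match guard is an addition; for o_id 7006403277 the extracted page_id is 1005057006403277, which also appears in the referer header of the full code):

import re
import urllib.request


def get_page_id(o_id, headers):
    # The homepage embeds a script line like CONFIG['page_id']='1005057006403277'
    req = urllib.request.Request(url='https://weibo.com/u/%s' % o_id, headers=headers)
    html = urllib.request.urlopen(req, timeout=10).read().decode('utf-8')
    matches = re.findall(r"CONFIG\['page_id'\]='(\d+)'", html)
    if not matches:
        raise ValueError('page_id not found; the cookie may be invalid or expired')
    return matches[0]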
           

2. Fetching the user's follow list


The structure of the user's follow information is as follows:

[Screenshot: escaped list-item markup containing action-data=\"uid=…&fnick=…&…\"]

A regular expression extracts the (uid, nickname) pairs:

follows = re.findall(r'action-type=\\"itemClick\\" action-data=\\"uid=(\d+)&fnick=(.*?)&', r)
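Note that the list markup is delivered inside a JavaScript string, so the quotes arrive backslash-escaped, which is why the pattern contains \\" (a literal backslash followed by a quote) rather than a bare quote. A quick self-contained check on a made-up fragment (the uid and nickname are invented for illustration):

import re

sample = r'action-type=\"itemClick\" action-data=\"uid=1234567890&fnick=example_user&sex=m\"'
pairs = re.findall(r'action-type=\\"itemClick\\" action-data=\\"uid=(\d+)&fnick=(.*?)&', sample)
print(pairs)  # [('1234567890', 'example_user')]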
           

3. Fetching the user's fan list

Likewise, the same regular expression is applied to the fan pages:

fans = re.findall(r'action-type=\\"itemClick\\" action-data=\\"uid=(\d+)&fnick=(.*?)&', r)
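Since only the relate=fans query parameter distinguishes the fan endpoint from the follow endpoint, both can be fetched by one routine. A sketch under that assumption (get_relations is a name introduced here, not part of the original code):

import re
import urllib.request


def get_relations(p_id, headers, relate=None, pages=5):
    # relate=None fetches the follow list; relate='fans' fetches the fan list
    results = []
    for page in range(1, pages + 1):
        if relate:
            url = 'https://weibo.com/p/%s/follow?relate=%s&page=%d' % (p_id, relate, page)
        else:
            url = 'https://weibo.com/p/%s/follow?page=%d' % (p_id, page)
        req = urllib.request.Request(url=url, headers=headers)
        html = urllib.request.urlopen(req, timeout=10).read().decode('utf-8')
        results += re.findall(r'action-type=\\"itemClick\\" action-data=\\"uid=(\d+)&fnick=(.*?)&', html)
    return results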
           

The complete code is shown below:

import re
import urllib.request


def get_follow_fan(o_id):
    headers = {
        'authority': 'weibo.com',
        'cache-control': 'max-age=0',
        'sec-ch-ua': '"Google Chrome";v="89", "Chromium";v="89", ";Not A Brand";v="99"',
        'sec-ch-ua-mobile': '?0',
        'upgrade-insecure-requests': '1',
        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36',
        'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'sec-fetch-site': 'same-origin',
        'sec-fetch-mode': 'navigate',
        'sec-fetch-user': '?1',
        'sec-fetch-dest': 'document',
        'referer': 'https://weibo.com/p/1005057006403277/home?from=page_100505&mod=TAB',
        'accept-language': 'zh-CN,zh;q=0.9',
        # NOTE: use a cookie from your own logged-in Weibo session
        'cookie': 'SINAGLOBAL=6305977095079.815.1616397209554; SUBP=0033WrSXqPxfM725Ws9jqgMF55529P9D9Whz61LsS7a7JfSRlmm6Blq55JpX5KMhUgL.Foqf1KnNehn7S0n2dJLoIpYc1K2Ni--ciKn7iKL2i--fiKLsiKLsi--Xi-iFiK.R; ALF=1650453176; SSOLoginState=1618917180; SCF=AgNorTL2GgvK5yfYVqY3knwSwqj5lZNS84d3pueWId7X8UrmumFcK5RzqIA1AeSsRoxjI-uFDhQr8Ls_HmgW3Ig.; SUB=_2A25NesdsDeRhGeBL4loW8CbMzDSIHXVu8b-krDV8PUNbmtAKLWagkW9NRq1zk3DqkxCQtlPWwfMBOeiJ_k-vowk5; _s_tentry=login.sina.com.cn; UOR=,,login.sina.com.cn; Apache=7071940489168.076.1618917200237; ULV=1618917200242:6:2:1:7071940489168.076.1618917200237:1617503411652; wb_view_log_6598708078=1440*9601.5; webim_unReadCount=%7B%22time%22%3A1618924157611%2C%22dm_pub_total%22%3A0%2C%22chat_group_client%22%3A0%2C%22chat_group_notice%22%3A0%2C%22allcountNum%22%3A42%2C%22msgbox%22%3A0%7D',
    }
    # o_id = "7006403277"
    # requests variant (may raise SSLError, see the note above):
    # response = requests.get('https://weibo.com/u/%s' % o_id, headers=headers)

    # Fetch the homepage and extract page_id from the embedded CONFIG script
    add = urllib.request.Request(url='https://weibo.com/u/%s' % o_id, headers=headers)
    r = urllib.request.urlopen(url=add, timeout=10).read().decode('utf-8')
    p_id = re.findall(r"CONFIG\['page_id'\]='(\d+)'", r)[0]

    f_follow = open("data/" + o_id + "_follow.txt", "w", encoding="utf-8")
    f_fan = open("data/" + o_id + "_fan.txt", "w", encoding="utf-8")
    follow_data = []
    fan_data = []

    for i in range(1, 6):
        # Fetch one page of the follow list
        # requests variant (may raise SSLError):
        # response = requests.get('https://weibo.com/p/%s/follow?page=%d' % (p_id, i), headers=headers)

        add = urllib.request.Request(url='https://weibo.com/p/%s/follow?page=%d' % (p_id, i), headers=headers)
        r = urllib.request.urlopen(url=add, timeout=10).read().decode('utf-8')

        follows = re.findall(r'action-type=\\"itemClick\\" action-data=\\"uid=(\d+)&fnick=(.*?)&', r)
        print("Follows:")
        print(follows)
        for follow in follows:
            line = follow[0] + " " + follow[1]  # "uid nickname"
            f_follow.write(line)
            f_follow.write("\n")
            follow_data.append(follow)

        # Fetch one page of the fan list (same endpoint with relate=fans)
        # requests variant (may raise SSLError):
        # response = requests.get('https://weibo.com/p/%s/follow?relate=fans&page=%d' % (p_id, i), headers=headers)

        add = urllib.request.Request(url='https://weibo.com/p/%s/follow?relate=fans&page=%d' % (p_id, i), headers=headers)
        r = urllib.request.urlopen(url=add, timeout=10).read().decode('utf-8')

        fans = re.findall(r'action-type=\\"itemClick\\" action-data=\\"uid=(\d+)&fnick=(.*?)&', r)
        print("Fans:")
        print(fans)
        for fan in fans:
            line = fan[0] + " " + fan[1]  # "uid nickname"
            f_fan.write(line)
            f_fan.write("\n")
            fan_data.append(fan)

    f_follow.close()
    f_fan.close()
    return follow_data, fan_data


if __name__ == '__main__':
    get_follow_fan("7006403277")
           

The result looks like this:

[Screenshot: the printed follow and fan lists as (uid, nickname) pairs]
