
Scraping and Analyzing the Web's Most Hardcore Zongzi with Python (Source Code Included)


When zongzi come up, most people picture the ordinary kind; whatever the north-south differences in fillings, nobody finds them surprising anymore.

But there is one hardcore kind of zongzi whose flavor Brother Xing has never been able to forget: the zongzi in 《盗墓笔记》 (The Grave Robbers' Chronicles). That kind is best enjoyed paired with a black donkey hoof.

In this article, Brother Xing scrapes the entire 《盗墓笔记》 to find out just how peculiar these zongzi really taste.

1. Scraping the Text

This article scrapes the full text of 《盗墓笔记》 from the novel site

http://www.daomubiji.com/

and saves it locally. Along the way, the requests HTTP library is used to build a simple Python crawler, and the BeautifulSoup HTML-parsing library is used to extract the text from each page.

The scraping code is shown below; you can copy, paste, and run it directly.

# WeChat official account: 一行数据
from bs4 import BeautifulSoup
import requests
import re
 
 
# Collect the link to each book from the site's index page
def get_book_urls(url):
    book_urls = []

    index = requests.get(url)
    soup = BeautifulSoup(index.content.decode("utf8"), 'lxml')
 
    articles = soup.find_all("article", class_='article-content')
    for article in articles:
        links = article.find_all('a', href=re.compile("dao-mu-bi-ji"))
        for link in links:
            book_urls.append(link["href"])
 
    return book_urls
 
 
# Collect the link to each chapter on a book's page
def get_chapter_urls(url):
    chapter_urls = []
    page = requests.get(url)
    soup = BeautifulSoup(page.content.decode("utf8"), "lxml")
 
    articles = soup.find_all("article", class_="excerpt excerpt-c3")
    for article in articles:
        chapter_urls.append(article.a["href"])
 
    return chapter_urls
 
 
# Fetch the title and text of a single chapter
def get_content(url):
    content = ""
    page = requests.get(url)
    soup = BeautifulSoup(page.content.decode("utf8"), "lxml")
 
    title = soup.find_all("h1", class_="article-title")[0].string
    content += ("\n" + title + "\n\n")
 
    articles = soup.find_all("article", class_="article-content")
    for article in articles:
        ps = article.find_all('p')
        for p in ps:
            for string in p.strings:
                content = content + string + "\n"
 
    return content
 
 
# Download the complete 《盗墓笔记》 and save it to a file
def get_article(url):
    book_urls = get_book_urls(url)

    chapter_urls = []
    for book_url in book_urls:
        chapter_urls.extend(get_chapter_urls(book_url))
    print(chapter_urls)

    result = ""

    for chapter_url in chapter_urls:
        content = get_content(chapter_url)
        result += content
        print(content)

    # Append the full text to daomubiji.txt, using text mode with an explicit UTF-8 encoding
    with open("daomubiji.txt", "a", encoding="utf8") as f:
        f.write(result)
 
 
get_article("http://www.daomubiji.com/")      
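
The code above works, but the site can throttle or block rapid-fire requests. As a small optional improvement (my own addition, not part of the original article), each requests.get call could go through a helper that sets a browser-like User-Agent, a request timeout, and a short pause between pages. A minimal sketch, assuming the site serves UTF-8 HTML:

import time
import requests

# Hypothetical helper: fetch one page politely, with a User-Agent header,
# a request timeout, and a short delay so the crawler does not hammer the site.
HEADERS = {"User-Agent": "Mozilla/5.0 (compatible; zongzi-crawler/0.1)"}

def fetch_html(url, delay=1.0, timeout=10):
    time.sleep(delay)                    # be gentle with the server
    resp = requests.get(url, headers=HEADERS, timeout=timeout)
    resp.raise_for_status()              # fail loudly on 4xx/5xx responses
    return resp.content.decode("utf8")   # same decoding the original code uses

Each requests.get(...).content.decode("utf8") in the functions above could then be swapped for fetch_html(url) without changing anything else.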

2. Zongzi Analysis

As I vaguely recall, the book features quite a few varieties of zongzi:

   大粽子 (big zongzi): powerful jiangshi, evil spirits and the like

   老粽子 (old zongzi): a hard-to-handle jiangshi that can undergo corpse transformation

   干粽子 (dried zongzi): a corpse in the tomb that has rotted down to nothing but white bones

   肉粽子 (meat zongzi): a corpse carrying plenty of valuables

   血粽子 (blood zongzi): the zongzi of a blood-corpse tomb, the most dangerous kind

   霉粽子 (moldy zongzi): a corpse carrying corpse poison

   女粽子 (female zongzi): a female corpse that has turned, more dangerous than an ordinary zongzi

Here, every sentence that mentions "粽子" in the first three books is extracted and turned into a word cloud to show what these zongzi "taste" like; as you can see, the four-character tokens paint a sharper picture than the two-character ones.
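
The article does not show that extraction step itself, so here is a minimal sketch of how it could look, assuming the crawler above has already written daomubiji.txt (the sketch reads the whole saved text rather than only the first three books). The sentence-splitting punctuation, the output file name zongzi (which the word-cloud code below reads), and the jieba vocabulary registration are my own choices:

import re
import jieba

# Split the saved novel into rough sentences and keep those mentioning 粽子
with open("daomubiji.txt", "r", encoding="utf8") as f:
    text = f.read()

sentences = re.split(r"[。!?!?\n]", text)
zongzi_sentences = [s.strip() for s in sentences if "粽子" in s]

with open("zongzi", "w", encoding="utf8") as f:
    f.write("\n".join(zongzi_sentences))

# Registering the novel's zongzi terms keeps them intact during segmentation,
# which helps the longer tokens show up in the word cloud
for term in ["大粽子", "老粽子", "干粽子", "肉粽子", "血粽子", "霉粽子", "女粽子"]:
    jieba.add_word(term)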

[Word cloud: two-character segmentation]

[Word cloud: four-character segmentation]

The code is as follows:

import jieba
from wordcloud import WordCloud, ImageColorGenerator
from matplotlib import pyplot as plt
from PIL import Image
import numpy as np

# Read the extracted zongzi sentences
with open('zongzi', 'r', encoding="UTF-8") as file1:
    content = "".join(file1.readlines())

# Full-mode jieba segmentation, keeping tokens of at most three characters
# (changing this length filter gives the four-character variant shown above)
content_after = "/".join(word for word in jieba.cut(content, cut_all=True) if len(word) <= 3)
print(content_after)

# Open the mask image you saved earlier and convert it to a numpy array
images = Image.open("zongzi2.jpg")
maskImages = np.array(images)

# The mask parameter shapes the word cloud to the outline of the image;
# the font path points to a Chinese font on macOS, adjust it for your system
wc = WordCloud(font_path="/Library/Fonts/Songti.ttc",
               background_color="black",
               max_words=1000,
               max_font_size=100,
               width=1500,
               height=1500,
               mask=maskImages).generate(content_after)
plt.imshow(wc)
plt.axis("off")
plt.show()

wc.to_file('wolfcodeTarget3.png')
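
ImageColorGenerator is imported above but never used. If you want the words to take on the colors of the mask image itself, an optional follow-up (my own addition, reusing wc, maskImages, and plt from the block above) could be:

from wordcloud import ImageColorGenerator

# Recolor the finished word cloud with colors sampled from the mask image
image_colors = ImageColorGenerator(maskImages)
plt.imshow(wc.recolor(color_func=image_colors))
plt.axis("off")
plt.show()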
       

3. Final Words

Here's wishing everyone a happy Zongzi (Dragon Boat) Festival!