
News Text Classification Based on TensorFlow + CNN

Notes from October 4, 2018

TensorFlow is Google's deep learning framework; its name combines "tensor" (a multi-dimensional array) and "flow" (the movement of data through a computation graph).

CNN is short for convolutional neural network.

Text classification is a classic task in NLP (natural language processing).

0. Development Environment

Operating system: Win10

tensorflow version: 1.6

tensorboard version: 1.6

python version: 3.6

1. Acknowledgments

This article is the result of the author's study of 《使用卷積神經網絡以及循環神經網絡進行中文文本分類》 (Chinese text classification with convolutional and recurrent neural networks); many thanks to that author.

GitHub link: https://github.com/gaussic/text-classification-cnn-rnn

2. Configuring the Environment

Training a neural network model like this calls for a reasonably powerful machine; with the CPU version of TensorFlow, training takes a long time.

If you have an NVIDIA graphics card, installing the GPU version of TensorFlow speeds computation up considerably (the author observed roughly a 50x speedup).

Installation tutorial link: https://blog.csdn.net/qq_36556893/article/details/79433298

If you do not have an NVIDIA graphics card but do have a Visa credit card, see my other article, 《在谷歌雲伺服器上搭建深度學習平台》 (Setting up a deep learning platform on a Google Cloud server): https://www.jianshu.com/p/893d622d1b5a

3. Downloading and Extracting the Dataset

Dataset download link: https://pan.baidu.com/s/1oLZZF4AHT5X_bzNl2aF2aQ (extraction code: 5sea)

After downloading the archive cnews.zip, extract it into a folder named cnews.

The resulting folder structure is shown in the figure below:

[image: folder structure of the extracted cnews directory]
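
Before running any code, it is worth checking that the extracted files are where the later sections expect them. The snippet below is a small sanity check (this article only reads cnews.train.txt and cnews.vocab.txt; whether the archive contains additional files such as a test split is not assumed here):

import os

# The two files the code in this article reads from the extracted cnews folder.
expected_files = ['cnews.train.txt', 'cnews.vocab.txt']
for name in expected_files:
    path = os.path.join('./cnews', name)
    print('%-16s exists: %s' % (name, os.path.exists(path)))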

4. Complete Code

The code file must be placed in the same directory as the cnews folder (i.e., as a sibling of it).

with open('./cnews/cnews.train.txt', encoding='utf8') as file:
    line_list = [k.strip() for k in file.readlines()]
    train_label_list = [k.split()[0] for k in line_list]  # first field of each line is the label
    train_content_list = [k.split(maxsplit=1)[1] for k in line_list]  # the rest is the news text
with open('./cnews/cnews.vocab.txt', encoding='utf8') as file:
    vocabulary_list = [k.strip() for k in file.readlines()]
word2id_dict = dict((b, a) for a, b in enumerate(vocabulary_list))  # character -> id
content2vector = lambda content: [word2id_dict[word] for word in content if word in word2id_dict]
train_vector_list = [content2vector(content) for content in train_content_list]
vocab_size = 5000  # vocabulary size (should equal len(vocabulary_list))
embedding_dim = 64  # embedding dimension
seq_length = 600  # sequence length
num_classes = 10  # number of classes
num_filters = 256  # number of convolution filters
kernel_size = 5  # convolution kernel size
hidden_dim = 128  # number of neurons in the fully connected layer
dropout_keep_prob = 0.5  # dropout keep probability during training
learning_rate = 1e-3  # learning rate
batch_size = 64  # batch size
import tensorflow.contrib.keras as kr
train_X = kr.preprocessing.sequence.pad_sequences(train_vector_list, seq_length)
from sklearn.preprocessing import LabelEncoder
labelEncoder = LabelEncoder()
train_y = labelEncoder.fit_transform(train_label_list)
train_Y = kr.utils.to_categorical(train_y, num_classes=num_classes)
import tensorflow as tf
tf.reset_default_graph()
X_holder = tf.placeholder(tf.int32, [None, seq_length])
Y_holder = tf.placeholder(tf.float32, [None, num_classes])
keep_prob_holder = tf.placeholder(tf.float32)  # dropout keep probability, fed at run time

embedding = tf.get_variable('embedding', [vocab_size, embedding_dim])
embedding_inputs = tf.nn.embedding_lookup(embedding, X_holder)
conv = tf.layers.conv1d(embedding_inputs, num_filters, kernel_size)
max_pooling = tf.reduce_max(conv, axis=1)  # global max pooling over the sequence dimension
full_connect = tf.layers.dense(max_pooling, hidden_dim)
# keep_prob comes from the placeholder: dropout_keep_prob during training, 1.0 during evaluation
full_connect_dropout = tf.contrib.layers.dropout(full_connect, keep_prob=keep_prob_holder)
full_connect_activate = tf.nn.relu(full_connect_dropout)
softmax_before = tf.layers.dense(full_connect_activate, num_classes)  # logits
predict_Y = tf.nn.softmax(softmax_before)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y_holder, logits=softmax_before)
loss = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate)
train = optimizer.minimize(loss)
isCorrect = tf.equal(tf.argmax(Y_holder, 1), tf.argmax(predict_Y, 1))
accuracy = tf.reduce_mean(tf.cast(isCorrect, tf.float32))

init = tf.global_variables_initializer()
session = tf.Session()
session.run(init)

import random
for i in range(3000):
    selected_index = random.sample(range(len(train_y)), k=batch_size)
    batch_X = train_X[selected_index]
    batch_Y = train_Y[selected_index]
    session.run(train, {X_holder: batch_X, Y_holder: batch_Y,
                        keep_prob_holder: dropout_keep_prob})  # dropout on during training
    step = i + 1
    if step % 100 == 0:
        # Evaluate on 200 randomly drawn training samples.
        selected_index = random.sample(range(len(train_y)), k=200)
        batch_X = train_X[selected_index]
        batch_Y = train_Y[selected_index]
        loss_value, accuracy_value = session.run([loss, accuracy],
            {X_holder: batch_X, Y_holder: batch_Y, keep_prob_holder: 1.0})  # dropout off
        print('step:%d loss:%.4f accuracy:%.4f' % (step, loss_value, accuracy_value))


The output of the code above is as follows (only the first ten lines are shown):

step:100 loss:0.5491 accuracy:0.8650

step:200 loss:0.2495 accuracy:0.9200

step:300 loss:0.1928 accuracy:0.9450

step:400 loss:0.1123 accuracy:0.9700

step:500 loss:0.1183 accuracy:0.9800

step:600 loss:0.0946 accuracy:0.9800

step:700 loss:0.1316 accuracy:0.9600

step:800 loss:0.1455 accuracy:0.9650

step:900 loss:0.1226 accuracy:0.9600

step:1000 loss:0.0686 accuracy:0.9800

5. Data Preparation

(Sections 5 through 8 walk through the complete code from section 4 step by step.)

with open('./cnews/cnews.train.txt', encoding='utf8') as file:
    line_list = [k.strip() for k in file.readlines()]
    train_label_list = [k.split()[0] for k in line_list]  # first field of each line is the label
    train_content_list = [k.split(maxsplit=1)[1] for k in line_list]  # the rest is the news text
with open('./cnews/cnews.vocab.txt', encoding='utf8') as file:
    vocabulary_list = [k.strip() for k in file.readlines()]
word2id_dict = dict((b, a) for a, b in enumerate(vocabulary_list))  # character -> id
content2vector = lambda content: [word2id_dict[word] for word in content if word in word2id_dict]
train_vector_list = [content2vector(content) for content in train_content_list]
vocab_size = 5000  # vocabulary size (should equal len(vocabulary_list))
embedding_dim = 64  # embedding dimension
seq_length = 600  # sequence length
num_classes = 10  # number of classes
num_filters = 256  # number of convolution filters
kernel_size = 5  # convolution kernel size
hidden_dim = 128  # number of neurons in the fully connected layer
dropout_keep_prob = 0.5  # dropout keep probability during training
learning_rate = 1e-3  # learning rate
batch_size = 64  # batch size
import tensorflow.contrib.keras as kr
train_X = kr.preprocessing.sequence.pad_sequences(train_vector_list, seq_length)
from sklearn.preprocessing import LabelEncoder
labelEncoder = LabelEncoder()
train_y = labelEncoder.fit_transform(train_label_list)
train_Y = kr.utils.to_categorical(train_y, num_classes=num_classes)
import tensorflow as tf
tf.reset_default_graph()
X_holder = tf.placeholder(tf.int32, [None, seq_length])
Y_holder = tf.placeholder(tf.float32, [None, num_classes])
keep_prob_holder = tf.placeholder(tf.float32)  # dropout keep probability, fed at run time

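After this step every news item is a fixed-length vector of 600 character ids and every label is a one-hot vector over 10 classes. The following quick check (a small sketch using only the variables defined above) prints the resulting shapes and the category names learned by the LabelEncoder:

print(train_X.shape)          # (number of training samples, 600)
print(train_Y.shape)          # (number of training samples, 10)
print(labelEncoder.classes_)  # the 10 category names, in encoded order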

6. Building the Neural Network

embedding = tf.get_variable('embedding', [vocab_size, embedding_dim])
embedding_inputs = tf.nn.embedding_lookup(embedding, X_holder)
conv = tf.layers.conv1d(embedding_inputs, num_filters, kernel_size)
max_pooling = tf.reduce_max(conv, axis=1)  # global max pooling over the sequence dimension
full_connect = tf.layers.dense(max_pooling, hidden_dim)
# keep_prob comes from the placeholder: dropout_keep_prob during training, 1.0 during evaluation
full_connect_dropout = tf.contrib.layers.dropout(full_connect, keep_prob=keep_prob_holder)
full_connect_activate = tf.nn.relu(full_connect_dropout)
softmax_before = tf.layers.dense(full_connect_activate, num_classes)  # logits
predict_Y = tf.nn.softmax(softmax_before)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y_holder, logits=softmax_before)
loss = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate)
train = optimizer.minimize(loss)
isCorrect = tf.equal(tf.argmax(Y_holder, 1), tf.argmax(predict_Y, 1))
accuracy = tf.reduce_mean(tf.cast(isCorrect, tf.float32))

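To follow how a batch flows through the network, you can print the static shape of each tensor. The commented shapes below are implied by the hyperparameters above: with seq_length=600, kernel_size=5, and conv1d's default 'valid' padding, the convolution output has 600 - 5 + 1 = 596 positions:

print(embedding_inputs.shape)  # (?, 600, 64): one 64-dim vector per character
print(conv.shape)              # (?, 596, 256): 596 positions, 256 filters
print(max_pooling.shape)       # (?, 256): max over all 596 positions (global max pooling)
print(full_connect.shape)      # (?, 128): fully connected layer
print(softmax_before.shape)    # (?, 10): one logit per class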

7. Parameter Initialization

init = tf.global_variables_initializer()  # op that initializes all variables in the graph
session = tf.Session()
session.run(init)

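To see exactly which parameters were just initialized (and will be updated by the optimizer), you can list the graph's trainable variables, as in this short sketch:

# The trainable variables are the embedding table, the conv1d kernel and bias,
# and the kernels and biases of the two dense layers.
for variable in tf.trainable_variables():
    print(variable.name, variable.shape)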

8. Model Training

import random
for i in range(3000):
    selected_index = random.sample(range(len(train_y)), k=batch_size)
    batch_X = train_X[selected_index]
    batch_Y = train_Y[selected_index]
    session.run(train, {X_holder: batch_X, Y_holder: batch_Y,
                        keep_prob_holder: dropout_keep_prob})  # dropout on during training
    step = i + 1
    if step % 100 == 0:
        # Evaluate on 200 randomly drawn training samples.
        selected_index = random.sample(range(len(train_y)), k=200)
        batch_X = train_X[selected_index]
        batch_Y = train_Y[selected_index]
        loss_value, accuracy_value = session.run([loss, accuracy],
            {X_holder: batch_X, Y_holder: batch_Y, keep_prob_holder: 1.0})  # dropout off
        print('step:%d loss:%.4f accuracy:%.4f' % (step, loss_value, accuracy_value))

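After training, the same session can be used for inference. The sketch below is an illustrative example (not part of the original code): it runs predict_Y on the first few training samples and maps the predicted class ids back to category names with the labelEncoder fitted earlier:

import numpy as np

# Predict the categories of the first 5 training samples; dropout is disabled with keep_prob 1.0.
predicted = session.run(predict_Y, {X_holder: train_X[:5], keep_prob_holder: 1.0})
predicted_ids = np.argmax(predicted, axis=1)
print(labelEncoder.inverse_transform(predicted_ids))  # predicted category names
print(train_label_list[:5])                           # true category names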