Text classification with Python,
which can be used to filter spam text.
1. Sampling
2. Manually label the spam messages in the sample texts
3. Build a model on the sample
4. Evaluate the model
5. Predict on new texts
References:
http://scikit-learn.org/stable/user_guide.html
Natural Language Processing with Python (the NLTK book; a Chinese translation is available)
Main steps:
1. Word segmentation
2. Feature-word extraction
3. Build the word-document matrix
4. Attach the class label variable
5. Modeling
6. Evaluation
7. Predict new texts
# Example
#!/usr/bin/env python
# -*- coding:utf-8 -*-
import pandas as pd
import jieba
import jieba.analyse
import nltk
from sklearn.model_selection import train_test_split

#1. Read the data; 'type' is the class label, a 0/1 variable
df = pd.read_csv(r'F:\csv_test.csv', names=['id', 'cont', 'type'])
#2. Keyword extraction
cont = df['cont']
tagall = []
for t in cont:
    # topK=10 keywords per document (value chosen for illustration)
    tags = jieba.analyse.extract_tags(t, topK=10)
    tagall.append(tags)
# Frequency count over all extracted keywords; keep the top 100 as feature words
fdist = nltk.FreqDist(tag for tags in tagall for tag in tags)
fea_words = [w for w, _ in fdist.most_common(100)]
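# Hedged variant: jieba.analyse.extract_tags also accepts withWeight=True, returning
# (keyword, TF-IDF weight) pairs, e.g. jieba.analyse.extract_tags(t, topK=10, withWeight=True);
# those weights could drive a weighted feature matrix instead of the boolean one below.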
#3. Build the word-document matrix
def word_features(content, top_words):
    # One row of the matrix: a boolean per feature word, True if the document contains it
    word_set = set(content)
    features = {}
    for w in top_words:
        features["w_%s" % w] = (w in word_set)
    return features
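# Illustration (hypothetical input): for a document segmented as ['免費', '中獎'] and
# fea_words containing '免費', word_features returns {'w_免費': True, ...},
# with False entries for every feature word absent from the document.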
#4. Combine the feature matrix with the class label
def data_feature(df, fea_words):
    data_set = []
    cont = df['cont']
    for i in range(len(cont)):
        content = jieba.cut(cont[i])   # segment the i-th document
        feat = word_features(content, fea_words)
        category = df.loc[i, 'type']
        data_set.append((feat, category))
    return data_set
data_list = data_feature(df, fea_words)
#5. Build the classification model
# Split into training and test sets (50/50)
train_set, test_set = train_test_split(data_list, test_size=0.5)
# Model: Naive Bayes
classifier = nltk.NaiveBayesClassifier.train(train_set)
# Model: decision tree (alternative; training both would overwrite the first)
#classifier = nltk.DecisionTreeClassifier.train(train_set)
#6. Evaluate the model's accuracy
print(nltk.classify.accuracy(classifier, test_set))
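# Hedged aside, valid for the Naive Bayes model only: NLTK can list the most
# discriminative features as a sanity check:
# classifier.show_most_informative_features(10)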
#7. Output predictions
# new_data is assumed to be a DataFrame of new texts in the same format as df
pre_set = data_feature(new_data, fea_words)
pre_result = []
for feat, _ in pre_set:   # classify on the features, ignoring the label slot
    pre_result.append(classifier.classify(feat))
# Inspect the distribution of predicted classes
for p in set(pre_result):
    print(p, pre_result.count(p))
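# An equivalent one-liner from the standard library:
# collections.Counter(pre_result) gives the same class distribution.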
The feature-word extraction in step 2 can be done with various other methods; steps 3 and 4 can be refined to improve performance; and the modeling in step 5 can use additional classifiers such as logistic regression or SVM (see the sketch below).
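A minimal sketch of those alternatives, assuming the same df with raw text in 'cont' and 0/1 labels in 'type'. TfidfVectorizer, LogisticRegression, and LinearSVC are standard scikit-learn components; the parameter values are illustrative.

# Sketch: TF-IDF features + logistic regression / linear SVM via scikit-learn
import jieba
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Pre-segment the Chinese text so the vectorizer can split on whitespace
texts = [' '.join(jieba.cut(t)) for t in df['cont']]
X = TfidfVectorizer(max_features=1000).fit_transform(texts)  # vocabulary size is illustrative
y = df['type']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)

for model in (LogisticRegression(), LinearSVC()):
    model.fit(X_train, y_train)
    print(type(model).__name__, accuracy_score(y_test, model.predict(X_test)))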