
A Detailed Look at the Decision Tree (DT) Algorithm in Machine Learning, and the ID3 Algorithm for Splitting the Dataset

1. What is a decision tree?

As the name suggests, a decision tree is a tree produced by making a decision at each step based on the available conditions (features).

For example, the figure below shows a decision tree: the value taken at each node decides which branch to follow.

[Figure: an example decision tree]

2. How do we decide, from the available features, which one becomes the first (root) node and which ones come next? This is where the ID3 algorithm comes in.

You can look up ID3 for the details; in short, it is based on information entropy: at every step it uses information gain to find the most suitable feature to place at the root of the current (sub)tree, which gives a tree that better fits the data. The two formulas involved are sketched below.
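For reference, here are the standard definitions ID3 relies on (these formulas are my addition; they are not spelled out in the original post). The entropy of a dataset $D$ with $k$ classes, and the information gain of splitting $D$ on a feature $A$, are:

$$H(D) = -\sum_{i=1}^{k} p_i \log_2 p_i$$

$$\mathrm{Gain}(D, A) = H(D) - \sum_{v \in \mathrm{values}(A)} \frac{|D_v|}{|D|}\, H(D_v)$$

where $p_i$ is the fraction of samples belonging to class $i$, and $D_v$ is the subset of $D$ in which feature $A$ takes the value $v$. At every node, ID3 splits on the feature with the largest gain; calcShannonEnt and chooseBestFeatureToSplit below compute exactly these two quantities.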

3. With the splitting method in hand, the next step is to build the tree. It is constructed recursively; the code is below, so you can work through it yourself.

4. Finally, the tree is built from the input training data, and a test vector is then classified by looking it up in the tree to make a prediction.

from math import log
import operator
# The original script also imported numpy and the book's treePlotter module;
# neither is needed here, since the matplotlib plotting section is omitted below

# Compute the Shannon entropy of a dataset
def calcShannonEnt(dataSet):
    numEntries = len(dataSet)
    labelCounts = {}
    for featVec in dataSet: # count the occurrences of each class label
        currentLabel = featVec[-1]
        if currentLabel not in labelCounts.keys(): labelCounts[currentLabel] = 0
        labelCounts[currentLabel] += 1
    shannonEnt = 0.0
    for key in labelCounts:
        prob = float(labelCounts[key])/numEntries
        shannonEnt -= prob * log(prob,2) #log base 2
    return shannonEnt

# Create the toy dataset: columns are 'no surfacing', 'flippers', and the class label
def createDataSet():
    dataSet = [[1, 1, 'yes'],
               [1, 1, 'yes'],
               [1, 0, 'no'],
               [0, 1, 'no'],
               [0, 1, 'no']]
    labels = ['no surfacing','flippers']
    #change to discrete values
    return dataSet, labels

# Quick test
# dataSet, labels = createDataSet()
# ans = calcShannonEnt(dataSet)
# The higher the entropy, the more mixed the data is
# print(ans)

# Split the dataset on a feature value; no entropy here, just filtering
def splitDataSet(dataSet, axis, value):
    # Parameters: the dataset to split, the column index of the feature to
    # split on, and the value to match in that column; every row whose value
    # in that column equals `value` goes into the returned subset
    # Note: Python passes list references into functions, so modifying the
    # list inside the function would change the caller's list; that is why
    # a new list is declared here
    retDataSet = []
    # each entry in dataSet is itself a list
    for featVec in dataSet:
        # extract the rows matching the feature value
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis]     #chop out axis used for splitting
            reducedFeatVec.extend(featVec[axis+1:])
            retDataSet.append(reducedFeatVec)
    return retDataSet

# Choose the best feature to split on by comparing entropy, using calcShannonEnt and splitDataSet above
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1      #the last column is used for the labels
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0; bestFeature = -1
    for i in range(numFeatures):        #iterate over all the features
        featList = [example[i] for example in dataSet]#create a list of all the examples of this feature
        uniqueVals = set(featList)       #get a set of unique values
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet)/float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy     #calculate the info gain; ie reduction in entropy
        if (infoGain > bestInfoGain):       #compare this to the best gain so far
            bestInfoGain = infoGain         #if better than current best, set to best
            bestFeature = i
    return bestFeature                      #returns an integer
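As a hand-worked check on the toy dataset defined above (the numbers are mine, not from the original post): the base entropy is H(D) = -(2/5)log2(2/5) - (3/5)log2(3/5) ≈ 0.971. Splitting on feature 0 ('no surfacing') leaves a weighted entropy of 0.4·0 + 0.6·0.918 ≈ 0.551, a gain of about 0.420; splitting on feature 1 ('flippers') leaves 0.2·0 + 0.8·1 = 0.8, a gain of about 0.171. So chooseBestFeatureToSplit returns 0.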



# Majority vote: return the class label that occurs most often in classList
def majorityCnt(classList):
    classCount={}
    for vote in classList:
        if vote not in classCount.keys(): classCount[vote] = 0
        classCount[vote] += 1
    sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]


# Recursively build the decision tree -- the ID3 algorithm
def createTree(dataSet,labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]#stop splitting when all of the classes are equal
    if len(dataSet[0]) == 1: #stop splitting when there are no more features in dataSet
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel:{}}
    del(labels[bestFeat])
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]       #copy all of labels, so trees don't mess up existing labels
        myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value),subLabels)
    return myTree

# Dataset-splitting tests
dataSet, labels = createDataSet()
# ans = splitDataSet(dataSet, 0, 0)       # group the rows whose value in column 0 is 0
# print(ans)
# ans = chooseBestFeatureToSplit(dataSet)
# print("The column with the biggest effect on the result is: " + str(ans))

# Build the decision tree; pass a copy of labels, because createTree deletes
# entries from the list it is given and we still need labels for classification below
myTree = createTree(dataSet, labels[:])
# print(myTree)
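# Expected structure for the toy data (the well-known result from the book):
# {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}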

# Plotting the tree with matplotlib
#####################     (omitted)

# Testing the algorithm: classifying with the decision tree
# Classification function for the decision tree
def classify(inputTree,featLabels,testVec):
    firstStr = list(inputTree.keys())[0]    # list() is needed in Python 3, where keys() returns a view
    secondDict = inputTree[firstStr]
    featIndex = featLabels.index(firstStr)
    key = testVec[featIndex]
    valueOfFeat = secondDict[key]
    if isinstance(valueOfFeat, dict):
        classLabel = classify(valueOfFeat, featLabels, testVec)
    else: classLabel = valueOfFeat
    return classLabel
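A quick usage check (my own example, not in the original post; it relies on labels not having been mutated, which the labels[:] copy above takes care of):

# Classify two test vectors by walking them down the toy tree
print(classify(myTree, labels, [1, 0]))    # -> 'no'
print(classify(myTree, labels, [1, 1]))    # -> 'yes'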

# Persist the decision tree with the pickle module
def storeTree(inputTree, filename):
    import pickle
    fw = open(filename, 'wb')    # pickle writes bytes, so open in binary mode
    pickle.dump(inputTree, fw)
    fw.close()


def grabTree(filename):
    import pickle
    fr = open(filename, 'rb')    # read back in binary mode
    return pickle.load(fr)
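And a round trip through a file (the filename is just an illustration):

# Persist the tree to disk and read it back
# storeTree(myTree, 'classifierStorage.txt')
# print(grabTree('classifierStorage.txt'))    # prints the same nested dict as myTree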

           

I originally wanted to use a decision tree to improve handwritten digit recognition, but I haven't gotten it working yet... I'm still not quite sure how to write it... I'll post it later.

Reference: Machine Learning in Action, Peter Harrington
