
Sklearn GridSearchCV Parameter Optimization

Updated 2018/3/16:

I recently ran into a parameter-tuning requirement, which reminded me of the grid search algorithm. It is quite handy, but it has a known drawback: it is slow, because every parameter combination requires retraining the model, so you need to weigh that trade-off yourself. Below is a grid search demo using the random forest algorithm as an example.

The code is as follows; it mainly tunes the forest size (n_estimators), tree depth (max_depth), and class weights:

import pandas as pd
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in sklearn 0.20
from sklearn import metrics

def fea_select():
    # NOTE: train and test are read from the same file here; point these at
    # separate files for a real train/test split
    train_data = pd.read_csv(r'data.csv')
    test_data = pd.read_csv(r'data.csv')
    train_label = train_data['41']   # column names in this dataset are strings
    test_label = test_data['41']
    return train_data, test_data, train_label, test_label
def test_result(train_data, test_data, train_label, test_label):
    # candidate class_weight dicts: sweep the weights of classes 3 and 4
    n = 200
    weight_dis = []
    while n > 0:
        weight_dis.append({0: 1, 1: 3, 2: 1, 3: 1 + 0.01 * n, 4: 0.02 * n})
        n -= 1

    # selected feature columns (string column names)
    sel_cols = ['0', '2', '3', '4', '5', '9',
                '11', '12', '22', '23', '25', '26', '27', '31', '32', '33', '34',
                '35', '36', '37', '38', '39']
    sel_train = train_data[sel_cols]
    sel_test = test_data[sel_cols]

    # oversample the tail of the training set to rebalance the classes
    over_train = sel_train.iloc[1003:, :]
    over_train = over_train.reset_index(drop=True)
    over_label = train_label.iloc[1003:]

    new_train = pd.concat([sel_train, over_train], axis=0)
    new_label = pd.concat([train_label, over_label], axis=0)

    # grid over forest size, tree depth, and class weights
    parameters = {'n_estimators': list(range(50, 500, 20)),
                  'max_depth': list(range(5, 25)),
                  'class_weight': weight_dis}
    rf = RandomForestClassifier()
    gride = GridSearchCV(rf, parameters, scoring='precision_weighted')
    gride.fit(new_train, new_label)
    print(gride.best_score_)
    print(gride.best_params_)
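After `fit` completes, the search object (with the default `refit=True`) retrains the best model on the full training set, so it can be used directly for prediction on held-out data. A minimal runnable sketch on synthetic data — the dataset and the small grid here are stand-ins, not the data from the demo above:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

# synthetic stand-in data
X, y = make_classification(n_samples=300, n_classes=3, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    {'n_estimators': [10, 50], 'max_depth': [3, 8]},
                    scoring='precision_weighted', cv=3)
grid.fit(X_train, y_train)

# refit=True (the default) retrains the best model on all training data,
# so the fitted search object predicts directly
pred = grid.predict(X_test)
print(grid.best_params_)
print(accuracy_score(y_test, pred))
```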

The best cross-validation score and the best parameters are then printed. For parameter optimization you must tell the model a scoring criterion — accuracy, a loss, and so on; this example uses 'precision_weighted'. If you need a custom metric, you can build one with make_scorer, for example:

from sklearn.metrics import make_scorer

def loss_func(y_truth, y_predict):
    # largest absolute difference between truth and prediction
    diff = np.abs(y_truth - y_predict).max()
    return np.log(1 + diff)

loss = make_scorer(loss_func, greater_is_better=False)

Once wrapped with make_scorer, the scorer object itself — scoring=loss, not a string — is passed to GridSearchCV to complete the custom configuration.
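As a concrete illustration, here is a minimal runnable sketch of passing such a scorer object into GridSearchCV; the synthetic data and the tiny parameter grid are stand-ins for illustration only:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV

def loss_func(y_truth, y_predict):
    # largest absolute label difference; smaller is better
    diff = np.abs(y_truth - y_predict).max()
    return np.log(1 + diff)

# greater_is_better=False tells GridSearchCV to minimize this loss
# (internally it negates the returned values)
loss = make_scorer(loss_func, greater_is_better=False)

X, y = make_classification(n_samples=200, random_state=0)
grid = GridSearchCV(RandomForestClassifier(n_estimators=20, random_state=0),
                    {'max_depth': [3, 6]}, scoring=loss, cv=3)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```

Because the scorer negates the loss, the reported best_score_ is less than or equal to zero, and the "best" parameters are those with the smallest loss.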

That covers basic parameter optimization and custom scoring in sklearn — feel free to ask me about anything that is unclear!

Please credit the source when reposting!
