Problem description: https://www.kaggle.com/c/santander-customer-satisfaction
Quick summary: a pile of anonymized features; the label is 0/1; the goal is to maximize AUC (the area under the ROC curve).
First attempt:
Features:
Since the competition is already closed, this was only an exercise, so I simply used a greedy forward search to pick out the better features:
```python
# !!! better to use an RFC or GBC as the clf here,
# !!! because the final prediction models are those two:
# !!! we should select features that suit RFC/GBC, not LR
clf = LogisticRegression(class_weight='balanced', penalty='l2', n_jobs=-1)
selectedFeaInds = GreedyFeatureAdd(clf, trainX, trainY, scoreType="auc", goodFeatures=[], maxFeaNum=150)
joblib.dump(selectedFeaInds, 'modelPersistence/selectedFeaInds.pkl')
#selectedFeaInds = joblib.load('modelPersistence/selectedFeaInds.pkl')
trainX = trainX[:, selectedFeaInds]
testX = testX[:, selectedFeaInds]
print(trainX.shape)
```
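The `GreedyFeatureAdd` helper is not shown in the excerpt. A minimal sketch of what such greedy forward selection typically looks like (the function name matches the call above, but the body, the 3-fold CV, and the synthetic data are my assumptions):

```python
# Hypothetical sketch of a GreedyFeatureAdd-style helper: repeatedly add the
# single feature that yields the largest cross-validated AUC gain, stopping
# when no feature improves the score or maxFeaNum is reached.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def GreedyFeatureAdd(clf, X, y, scoreType="auc", goodFeatures=None, maxFeaNum=10):
    goodFeatures = list(goodFeatures or [])
    bestScore = -np.inf
    while len(goodFeatures) < maxFeaNum:
        scores = []
        for f in range(X.shape[1]):
            if f in goodFeatures:
                continue
            cols = goodFeatures + [f]  # candidate feature set
            score = cross_val_score(clf, X[:, cols], y,
                                    scoring="roc_auc", cv=3).mean()
            scores.append((score, f))
        score, f = max(scores)
        if score <= bestScore:  # no remaining feature improves AUC
            break
        bestScore = score
        goodFeatures.append(f)
    return goodFeatures

X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                           random_state=0)
clf = LogisticRegression(max_iter=1000)
selected = GreedyFeatureAdd(clf, X, y, maxFeaNum=4)
print(selected)
```

This is also why the approach scales badly: each round costs one CV evaluation per remaining feature, so the total cost is roughly O(maxFeaNum × n_features) model fits.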
Model:
I directly used three of the most common model families in sklearn:
```python
trainN = len(trainY)
print("Creating train and test sets for blending...")
# !!! always use a seed for randomized procedures
models = [
    RandomForestClassifier(n_estimators=1999, criterion='gini', n_jobs=-1, random_state=SEED),
    RandomForestClassifier(n_estimators=1999, criterion='entropy', n_jobs=-1, random_state=SEED),
    ExtraTreesClassifier(n_estimators=1999, criterion='gini', n_jobs=-1, random_state=SEED),
    ExtraTreesClassifier(n_estimators=1999, criterion='entropy', n_jobs=-1, random_state=SEED),
    GradientBoostingClassifier(learning_rate=0.1, n_estimators=101, subsample=0.6, max_depth=8, random_state=SEED)
]
# StratifiedKFold is a variation of k-fold that returns stratified folds:
# each fold contains approximately the same percentage of samples of each
# target class as the complete set.
#kfcv = KFold(n_splits=nFold, shuffle=True, random_state=SEED)
kfcv = StratifiedKFold(n_splits=nFold, shuffle=True, random_state=SEED)
dataset_trainBlend = np.zeros((trainN, len(models)))
dataset_testBlend = np.zeros((len(testX), len(models)))
meanAUC = 0.0
for i, model in enumerate(models):
    print("model ", i, "==" * 20)
    dataset_testBlend_j = np.zeros((len(testX), nFold))
    for j, (trainI, testI) in enumerate(kfcv.split(trainX, trainY)):
        print("Fold ", j, "^" * 20)
        model.fit(trainX[trainI], trainY[trainI])
        # out-of-fold predictions become the meta-features for blending
        dataset_trainBlend[testI, i] = model.predict_proba(trainX[testI])[:, 1]
        dataset_testBlend_j[:, j] = model.predict_proba(testX)[:, 1]
    # average this model's nFold test-set predictions into one column
    dataset_testBlend[:, i] = dataset_testBlend_j.mean(axis=1)
```
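The comment about `StratifiedKFold` is easy to verify directly. A small self-contained check (the 10%-positive toy data is my own, chosen to mimic an imbalanced 0/1 target like this competition's):

```python
# Check that StratifiedKFold keeps the positive-class ratio constant per fold,
# which is why it is preferred over plain KFold for an imbalanced 0/1 target.
import numpy as np
from sklearn.model_selection import StratifiedKFold

y = np.array([1] * 10 + [0] * 90)   # 10% positives
X = np.zeros((len(y), 1))           # features are irrelevant for the split
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for trainI, testI in skf.split(X, y):
    print(y[testI].mean())          # each held-out fold keeps ~10% positives
```

With plain `KFold`, a shuffled fold of 20 samples could easily contain 0 or 4 positives instead of exactly 2, which makes per-fold AUC estimates much noisier.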
Final result:
Blend the predictions of all the models:
```python
print("Blending models...")
# !!! if we want to predict real-valued targets instead, use RidgeCV
model = LogisticRegression(n_jobs=-1)
C = np.linspace(0.001, 1.0, 1000)
trainAucList = []
for c in C:
    model.C = c
    model.fit(dataset_trainBlend, trainY)
    trainProba = model.predict_proba(dataset_trainBlend)[:, 1]
    trainAuc = metrics.roc_auc_score(trainY, trainProba)
    trainAucList.append((trainAuc, c))
sortedtrainAucList = sorted(trainAucList)
for trainAuc, c in sortedtrainAucList:
    print("c=%f => trainAuc=%f" % (c, trainAuc))
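One caveat about the loop above: it ranks `C` by *training* AUC on the blend features, which systematically favors weaker regularization and can overfit the meta-model. A hedged alternative sketch, using cross-validated AUC instead (the synthetic data is a stand-in for `dataset_trainBlend`, which is not available here):

```python
# Alternative (not the post's method): choose C by cross-validated AUC on the
# blend features rather than by training AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# stand-in for (dataset_trainBlend, trainY)
Xblend, yblend = make_classification(n_samples=400, n_features=5, random_state=0)

bestC, bestAuc = None, -np.inf
for c in np.logspace(-3, 1, 9):
    auc = cross_val_score(LogisticRegression(C=c, max_iter=1000),
                          Xblend, yblend, scoring="roc_auc", cv=5).mean()
    if auc > bestAuc:
        bestC, bestAuc = c, auc
print(bestC, bestAuc)
```

`LogisticRegressionCV(scoring="roc_auc")` would do the same search in one call.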
```python
model.C = sortedtrainAucList[-1][1]  # 0.05
model.fit(dataset_trainBlend, trainY)
trainProba = model.predict_proba(dataset_trainBlend)[:, 1]
print("train auc: %f" % metrics.roc_auc_score(trainY, trainProba))  # 0.821439
print("model.coef_: ", model.coef_)
print("Predict and saving results...")
submitProba = model.predict_proba(dataset_testBlend)[:, 1]
df = pd.DataFrame(submitProba)
print(df.describe())
SaveFile(submitID, submitProba, fileName="1submit.csv")
```
Normalization:
```python
print("MinMaxScaler predictions to [0,1]...")
mms = preprocessing.MinMaxScaler(feature_range=(0, 1))
# MinMaxScaler expects a 2-D array, so reshape the 1-D probability vector
submitProba = mms.fit_transform(submitProba.reshape(-1, 1)).ravel()
df = pd.DataFrame(submitProba)
print(df.describe())
SaveFile(submitID, submitProba, fileName="1submitScale.csv")
```
Lessons learned from these experiments:
First: brute-force greedy feature search is impractical when there are many features; it is only worth considering for a small feature set (< 200).
Second: among these sklearn models, ExtraTreesClassifier performed the worst; RandomForestClassifier performed well and was fairly fast; GradientBoostingClassifier gave the best results but was very slow (because it cannot be parallelized).
Third: when one model (here GradientBoostingClassifier) is much better than the others, do not blend — especially when the feature space is the same and the classifiers are similar (here all five classifiers are tree-based and trained on the same feature set) — because blending then often drags the overall score below that of the single best model.
Fourth: AUC only cares about the relative ranking of samples, not the magnitude of the scores, so there is no need to normalize the predictions; look up more material on this point if it is unfamiliar.
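The fourth point is easy to demonstrate: AUC depends only on the ordering of the scores, so any strictly monotonic rescaling (such as min-max scaling) leaves it unchanged. A quick check on synthetic data of my own:

```python
# AUC is rank-based: a strictly monotonic transform of the scores
# (min-max scaling here) does not change it.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
proba = rng.random(200) * 0.6 + 0.2 * y  # noisy scores correlated with y
scaled = (proba - proba.min()) / (proba.max() - proba.min())
print(roc_auc_score(y, proba), roc_auc_score(y, scaled))  # identical values
```

So the MinMaxScaler step above changes how the submission file looks, but not its leaderboard AUC.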
More lessons will be added as they come — feedback welcome ^_^