Problem description: https://www.kaggle.com/c/santander-customer-satisfaction
In short: a pile of anonymized features, a 0/1 label, and the goal is to maximize AUC (the area under the ROC curve).
First attempt:
Features:
Since the competition is already closed, this is only for practice, so I simply used a greedy forward search to pick a good feature subset:
```python
from sklearn.linear_model import LogisticRegression
import joblib

# NOTE: better to use an RFC or GBC as the selection clf, because the
# final prediction models are those two -- we should select features
# that suit RFC/GBC, not LR.
clf = LogisticRegression(class_weight='balanced', penalty='l2', n_jobs=-1)
selectedFeaInds = GreedyFeatureAdd(clf, trainX, trainY, scoreType="auc", goodFeatures=[], maxFeaNum=150)
joblib.dump(selectedFeaInds, 'modelPersistence/selectedFeaInds.pkl')
#selectedFeaInds = joblib.load('modelPersistence/selectedFeaInds.pkl')
trainX = trainX[:, selectedFeaInds]
testX = testX[:, selectedFeaInds]
print(trainX.shape)
```
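GreedyFeatureAdd here is a hand-rolled helper, not a sklearn function. A minimal sketch of the idea, greedy forward selection by cross-validated AUC (only the signature comes from the call above; the 3-fold CV and the stop-on-no-improvement rule are one reasonable choice, not necessarily the exact original):

```python
from sklearn.model_selection import cross_val_score

def GreedyFeatureAdd(clf, X, y, scoreType="auc", goodFeatures=None, maxFeaNum=150):
    # scoreType is kept for signature compatibility; only AUC is used here.
    # Greedily add the single feature that most improves CV AUC, stopping
    # when no remaining feature helps or maxFeaNum is reached.
    goodFeatures = list(goodFeatures or [])
    bestScore = 0.0
    while len(goodFeatures) < maxFeaNum:
        candidates = []
        for i in range(X.shape[1]):
            if i in goodFeatures:
                continue
            feats = goodFeatures + [i]
            auc = cross_val_score(clf, X[:, feats], y,
                                  scoring="roc_auc", cv=3).mean()
            candidates.append((auc, i))
        auc, best = max(candidates)
        if auc <= bestScore:   # no remaining feature improves AUC
            break
        bestScore = auc
        goodFeatures.append(best)
        print("added feature %d => cv auc %.6f" % (best, auc))
    return goodFeatures
```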
Models:
I went straight to the most common ensemble models in sklearn (five instances of three model classes):
```python
import numpy as np
from sklearn import metrics
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier)
from sklearn.model_selection import StratifiedKFold

trainN = len(trainY)
print("Creating train and test sets for blending...")
# NOTE: always use a seed for randomized procedures
models = [
    RandomForestClassifier(n_estimators=1999, criterion='gini', n_jobs=-1, random_state=SEED),
    RandomForestClassifier(n_estimators=1999, criterion='entropy', n_jobs=-1, random_state=SEED),
    ExtraTreesClassifier(n_estimators=1999, criterion='gini', n_jobs=-1, random_state=SEED),
    ExtraTreesClassifier(n_estimators=1999, criterion='entropy', n_jobs=-1, random_state=SEED),
    GradientBoostingClassifier(learning_rate=0.1, n_estimators=101, subsample=0.6, max_depth=8, random_state=SEED),
]
# StratifiedKFold is a variation of k-fold which returns stratified folds:
# each fold contains approximately the same percentage of samples of each
# target class as the complete set.
#kfcv = KFold(n_splits=nFold, shuffle=True, random_state=SEED)
kfcv = StratifiedKFold(n_splits=nFold, shuffle=True, random_state=SEED)
folds = list(kfcv.split(trainX, trainY))
dataset_trainBlend = np.zeros((trainN, len(models)))  # out-of-fold meta-features
dataset_testBlend = np.zeros((len(testX), len(models)))
meanAUC = 0.0
for i, model in enumerate(models):
    print("model ", i, "==" * 20)
    dataset_testBlend_j = np.zeros((len(testX), nFold))
    for j, (trainI, testI) in enumerate(folds):
        print("Fold ", j, "^" * 20)
        # standard out-of-fold step: fit on the fold's training part,
        # predict the held-out part and the full test set
        model.fit(trainX[trainI], trainY[trainI])
        dataset_trainBlend[testI, i] = model.predict_proba(trainX[testI])[:, 1]
        dataset_testBlend_j[:, j] = model.predict_proba(testX)[:, 1]
    # average the per-fold test predictions into one meta-feature per model
    dataset_testBlend[:, i] = dataset_testBlend_j.mean(axis=1)
    modelAUC = metrics.roc_auc_score(trainY, dataset_trainBlend[:, i])
    meanAUC += modelAUC / len(models)
    print("model %d out-of-fold auc: %f" % (i, modelAUC))
```
Final result:
Blend the outputs of all the models with a logistic regression, sweeping its regularization strength C:
```python
print("Blending models...")
# NOTE: to blend real-valued predictions instead, use RidgeCV
model = LogisticRegression(n_jobs=-1)
C = np.linspace(0.001, 1.0, 1000)
trainAucList = []
for c in C:
    model.C = c
    model.fit(dataset_trainBlend, trainY)
    trainProba = model.predict_proba(dataset_trainBlend)[:, 1]
    trainAuc = metrics.roc_auc_score(trainY, trainProba)
    trainAucList.append((trainAuc, c))
sortedtrainAucList = sorted(trainAucList)   # ascending by train AUC
for trainAuc, c in sortedtrainAucList:
    print("c=%f => trainAuc=%f" % (c, trainAuc))
```
```python
import pandas as pd

model.C = sortedtrainAucList[-1][1]   # best C from the sweep, 0.05 here
model.fit(dataset_trainBlend, trainY)
trainProba = model.predict_proba(dataset_trainBlend)[:, 1]
print("train auc: %f" % metrics.roc_auc_score(trainY, trainProba))   # 0.821439
print("model.coef_: ", model.coef_)

print("Predict and saving results...")
submitProba = model.predict_proba(dataset_testBlend)[:, 1]
df = pd.DataFrame(submitProba)
print(df.describe())
# SaveFile: my own helper that writes (id, probability) pairs to CSV
SaveFile(submitID, submitProba, fileName="1submit.csv")
```
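As the comment in the blending code says, if the targets were real values you would blend with RidgeCV instead of a logistic regression. A minimal sketch of that variant (hypothetical: same blend matrices, but a real-valued target):

```python
import numpy as np
from sklearn.linear_model import RidgeCV

# hypothetical regression variant: a ridge meta-model whose alpha is
# picked by built-in cross-validation over the given grid
blender = RidgeCV(alphas=np.logspace(-3, 2, 50))
blender.fit(dataset_trainBlend, trainY)   # trainY would be real-valued here
submitPred = blender.predict(dataset_testBlend)
print("chosen alpha:", blender.alpha_)
```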
Normalization:
```python
from sklearn import preprocessing

print("MinMaxScaler predictions to [0,1]...")
mms = preprocessing.MinMaxScaler(feature_range=(0, 1))
# MinMaxScaler expects a 2-D array, hence the reshape/ravel round trip
submitProba = mms.fit_transform(submitProba.reshape(-1, 1)).ravel()
df = pd.DataFrame(submitProba)
print(df.describe())
SaveFile(submitID, submitProba, fileName="1submitScale.csv")
```
Lessons from these experiments:
First: brute-force greedy feature search is not practical when there are many features; with a smaller feature count (<200) it can be worth considering.
Second: among these sklearn models, ExtraTreesClassifier performed worst; RandomForestClassifier performed well and is fairly fast; GradientBoostingClassifier gave the best result but is very slow (it cannot be parallelized).
Third: when one model (here GradientBoostingClassifier) is much better than the rest, don't blend, especially when the feature space is the same and the classifiers are similar (all five classifiers here are tree-based and built on the same feature set), because blending then tends to drag the overall score below the best single model.
Fourth: AUC only cares about the relative ranking of the samples, not the absolute values of the scores, so there is no need to normalize the predictions; if this isn't obvious, the snippet below demonstrates it.
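A quick check of the fourth point: AUC is invariant under any strictly increasing transform of the scores, min-max scaling included, because such transforms preserve the ranking (the toy labels and scores below are made up for illustration):

```python
import numpy as np
from sklearn import metrics

y = np.array([0, 1, 0, 1, 1])
p = np.array([0.1, 0.8, 0.3, 0.6, 0.9])
# strictly increasing transforms preserve the ranking, hence the AUC
print(metrics.roc_auc_score(y, p))                                    # 1.0
print(metrics.roc_auc_score(y, 10 * p - 3))                           # 1.0
print(metrics.roc_auc_score(y, (p - p.min()) / (p.max() - p.min())))  # 1.0
```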
I'll keep posting lessons as I go; stay tuned ^_^