SVR code in MATLAB: matlab解決svr代碼.docx

Comparison of multiple linear regression, BP neural networks, and support vector machines. My own understanding:

Multiple linear regression: a linear combination of several attributes; the weight of each attribute is adjusted repeatedly so that the regression function fits as many of the samples as possible.

BP neural network: uses steepest descent, adjusting the weights and thresholds of the network through backpropagation so that the network's sum of squared errors is minimized.

Support vector machine: it likewise operates on every sample, so that the deviation of all samples from the finally fitted curve is minimized.

Comparison of the algorithms:

BP objective function: J = (1/2) * sum_j (d_j - y_j)^2, where d_j is the desired output and y_j the actual output
BP weight adjustment: delta w_ij = -eta * dJ/dw_ij
SVM objective function: min (1/2)*||w||^2

Support vector machines (SVM) resemble neural networks in that both are learning mechanisms, but unlike neural networks, SVM relies on mathematical methods and optimization techniques.

Comparison of learning efficiency:

Import the data: File -> Import Data

Commonly used parameter-optimization methods:

[train_pca,test_pca] = pcaForSVM(train_data,test_data,97);   % principal component analysis
[bestCVmse,bestc,bestg,ga_option] = gaSVMcgForRegress(train_label,train_pca);   % genetic-algorithm search
[bestmse,bestc,bestg] = SVMcgForRegress(train_label,train_data);   % grid search
cmd = ['-c ',num2str(bestc),' -g ',num2str(bestg),' -s 3 -p 0.01'];

train_label = data(1:50,1);
train_data = data(1:50,2:14);
model = svmtrain(train_label,train_data,'-s 3 -t 2 -c 2.2 -g 2.8 -p 0.01');
test_label = data(51:100,1);
test_data = data(51:100,2:14);
[predict_label,mse,dec_value] = svmpredict(test_label,test_data,model);

Code walkthrough, Part 1: starting from the kernel function, is the model's performance improved when a different kernel type is selected?

1. RBF kernel (-t 2)

Before optimization:

train_label = data(1:50,1);
train_data = data(1:50,2:14);
model = svmtrain(train_label,train_data,'-s 3 -t 2 -c 2.2 -g 2.8 -p 0.01');
[predict_label,mse,dec_value] = svmpredict(train_label,train_data,model);
% The line above compares the model's predictions against the actual
% training values and reports the mean squared error between them.
test_label = data(51:100,1);
test_data = data(51:100,2:14);
[predict_label,mse,dec_value] = svmpredict(test_label,test_data,model);

After optimization:

train_label = data(1:50,1);
train_data = data(1:50,2:14);
[bestmse,bestc,bestg] = SVMcgForRegress(train_label,train_data);   % optimization method: grid search for now
cmd = ['-c ',num2str(bestc),' -g ',num2str(bestg),' -s 3 -t 2 -p 0.01'];
model = svmtrain(train_label,train_data,cmd);
[ptrain,mse,dec_value] = svmpredict(train_label,train_data,model);
figure;   % plot predicted values against actual values
subplot(2,1,1);
plot(train_label,'-o');
hold on;
plot(ptrain,'r-s');
grid on;
legend('original','predict');
title('Train Set Regression Predict by SVM');

2. Polynomial kernel (-t 1)

train_label = data(1:50,1);
train_data = data(1:50,2:14);
[bestmse,bestc,bestg] = SVMcgForRegress(train_label,train_data);
cmd = ['-c ',num2str(bestc),' -g ',num2str(bestg),' -s 3 -t 1 -p 0.01'];
model = svmtrain(train_label,train_data,cmd);
[ptrain,mse] = svmpredict(train_label,train_data,model);
figure;   % plot predicted values against actual values
subplot(2,1,1);
plot(train_label,'-o');
hold on;
plot(ptrain,'r-s');
grid on;
legend('original','predict');
title('Train Set Regression Predict by SVM');

Mean squared error = 14505.6 (regression)
Squared correlation coefficient = 0.349393 (regression)

3. Linear kernel (-t 0, linear: u'*v)

train_label = data(1:50,1);
train_data = data(1:50,2:14);
[bestmse,bestc,bestg] = SVMcgForRegress(train_label,train_data);
cmd = ['-c ',num2str(bestc),' -g ',num2str(bestg),' -s 3 -t 0 -p 0.01'];
model = svmtrain(train_label,train_data,cmd);
[ptrain,mse] = svmpredict(train_label,train_data,model);
figure;   % plot predicted values against actual values
subplot(2,1,1);
plot(train_label,'-o');
hold on;
plot(ptrain,'r-s');
grid on;
legend('original','predict');
title('Train Set Regression Predict by SVM');

Mean squared error = 14537 (regression)
Squared correlation coefficient = 0.389757 (regression)

4. Sigmoid kernel (-t 3, sigmoid: tanh(gamma*u'*v+coef0), the nonlinear activation function of a neuron)

train_label = data(1:50,1);
train_data = data(1:50,2:14);
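The sigmoid experiment follows exactly the same pattern as the three blocks above, with '-t 3' in the options string. Since the four experiments differ only in the -t flag, the whole Part 1 comparison can also be driven by a single loop. Below is a minimal sketch, assuming the libsvm MATLAB interface (svmtrain/svmpredict) and SVMcgForRegress are on the path, and that data is the same 100-by-14 matrix used throughout with the target in column 1; sharing one grid search across all kernels is my simplification of the per-kernel tuning shown above.

% Compare epsilon-SVR test error across libsvm's four kernel types.
% Assumes (not from the original post): libsvm's MATLAB interface and
% SVMcgForRegress are on the path, and `data` is a 100-by-14 matrix
% with the regression target in column 1.
train_label = data(1:50,1);
train_data  = data(1:50,2:14);
test_label  = data(51:100,1);
test_data   = data(51:100,2:14);

% One shared grid search over c and g (the sections above re-tune per kernel).
[bestmse,bestc,bestg] = SVMcgForRegress(train_label,train_data);

kernels = {'linear','polynomial','RBF','sigmoid'};   % libsvm -t 0..3
for t = 0:3
    cmd = ['-s 3 -t ',num2str(t),' -c ',num2str(bestc), ...
           ' -g ',num2str(bestg),' -p 0.01'];
    model = svmtrain(train_label,train_data,cmd);
    % For epsilon-SVR the second output is [accuracy; MSE; squared r].
    [ptest,stats] = svmpredict(test_label,test_data,model);
    fprintf('%s kernel: test MSE = %g, r^2 = %g\n', ...
            kernels{t+1},stats(2),stats(3));
end

Note that the tuned -g value only matters for the kernels that use gamma (-t 1, 2, 3) and is ignored by the linear kernel. If an older Statistics Toolbox svmtrain shadows libsvm's MEX file, the libsvm directory has to come first on the MATLAB path.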