
sopt: A Simple Python Optimization Library

complex_constraints_method: the method used to handle complex constraints. The default is 'loop', meaning that whenever a candidate solution violates the complex constraints, a new random solution is generated until the constraints are satisfied; no other method is supported for now.
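In other words, 'loop' is plain rejection sampling. A minimal sketch of the idea (not sopt's actual code; sample_within_bounds is a hypothetical helper that draws a uniform random point inside the simple bounds):

import random

def sample_within_bounds(lower_bound, upper_bound):
    # Hypothetical helper: draw one uniform random point inside the simple bounds.
    return [random.uniform(lo, hi) for lo, hi in zip(lower_bound, upper_bound)]

def loop_until_feasible(lower_bound, upper_bound, constraints):
    # Rejection sampling: redraw until every constraint c satisfies c(x) <= 0.
    x = sample_within_bounds(lower_bound, upper_bound)
    while any(c(x) > 0 for c in constraints):
        x = sample_within_bounds(lower_bound, upper_bound)
    return x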

Besides the basic random_walk function, RandomWalk also provides a more powerful improved_random_walk function, which has considerably stronger global search ability.

6. Solving objective functions with complex constraints

All of the optimization methods described so far handle variables with only simple bound constraints (of the form \(a \le x_i \le b\)). This section shows how to use them on objective functions subject to complex constraints. The approach is actually very simple; taking GA as an example, the following code optimizes the Rosenbrock function under three complex constraints:

from time import time
from sopt.GA.GA import GA
from sopt.util.functions import *
from sopt.util.ga_config import *
from sopt.util.constraints import *

class TestGA:
    def __init__(self):
        self.func = Rosenbrock
        self.func_type = Rosenbrock_func_type
        self.variables_num = Rosenbrock_variables_num
        self.lower_bound = Rosenbrock_lower_bound
        self.upper_bound = Rosenbrock_upper_bound
        self.cross_rate = 0.8
        self.mutation_rate = 0.1
        self.generations = 300
        self.population_size = 200
        self.binary_code_length = 20
        self.cross_rate_exp = 1
        self.mutation_rate_exp = 1
        self.code_type = code_type.real
        self.cross_code = False
        self.select_method = select_method.proportion
        self.rank_select_probs = None
        self.tournament_num = 2
        self.cross_method = cross_method.uniform
        self.arithmetic_cross_alpha = 0.1
        self.arithmetic_cross_exp = 1
        self.mutation_method = mutation_method.uniform
        self.none_uniform_mutation_rate = 1
        self.complex_constraints = [constraints1, constraints2, constraints3]
        self.complex_constraints_method = complex_constraints_method.penalty
        self.complex_constraints_C = 1e8
        self.M = 1e8
        self.GA = GA(**self.__dict__)

    def test(self):
        start_time = time()
        self.GA.run()
        print("GA costs %.4f seconds!" % (time() - start_time))
        self.GA.save_plot()
        self.GA.show_result()

if __name__ == '__main__':
    TestGA().test()

The output is as follows:

GA costs 1.9957 seconds!

-------------------- GA config is: --------------------

lower_bound:[-2.048, -2.048]

cross_code:False

complex_constraints_method:penalty

mutation_method:uniform

mutation_rate:0.1

mutation_rate_exp:1

cross_rate:0.8

upper_bound:[2.048, 2.048]

arithmetic_cross_exp:1

variables_num:2

generations:300

tournament_num:2

select_method:proportion

func_type:max

complex_constraints_C:100000000.0

cross_method:uniform

complex_constraints:[<function constraints1 at 0x...>, <function constraints2 at 0x...>, <function constraints3 at 0x...>]

func:<function Rosenbrock at 0x...>

none_uniform_mutation_rate:1

cross_rate_exp:1

code_type:real

M:100000000.0

binary_code_length:20

global_generations_step:300

population_size:200

arithmetic_cross_alpha:0.1

-------------------- GA caculation result is: --------------------

global best target generation index/total generations:226/300

global best point:[ 1.7182846 -1.74504313]

global best target:2207.2089435117955

Figure 6: GA solving the Rosenbrock function with three complex constraints

The constraints1, constraints2, and constraints3 above are three predefined constraint functions. They encode, respectively, \(constraints1: x_1^2 + x_2^2 - 3 \le 0\), \(constraints2: x_1 + x_2 \le 0\), and \(constraints3: -2 - x_1 - x_2 \le 0\); each function returns a value that is less than or equal to zero exactly when its constraint is satisfied. Their prototypes are:

def constraints1(x):
    x1 = x[0]
    x2 = x[1]
    return x1**2 + x2**2 - 3

def constraints2(x):
    x1 = x[0]
    x2 = x[1]
    return x1 + x2

def constraints3(x):
    x1 = x[0]
    x2 = x[1]
    return -2 - x1 - x2
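The GA example above uses complex_constraints_method.penalty with complex_constraints_C = 1e8. Given the convention that a constraint function returns a value at most zero when satisfied, a standard exterior-penalty construction looks roughly like the following sketch; this illustrates the general technique, not sopt's exact internals:

def penalize(func, constraints, C, func_type='min'):
    # Exterior penalty: each violated constraint (c(x) > 0) contributes
    # C * c(x) to the objective; feasible points keep their original value.
    def wrapped(x):
        violation = sum(max(0.0, c(x)) for c in constraints)
        if func_type == 'min':
            return func(x) + C * violation  # infeasible points become costly
        return func(x) - C * violation      # for maximization problems
    return wrapped

With a large C such as 1e8, every infeasible point scores worse than any feasible one, so the search is steadily pushed back into the feasible region.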

Looking closely, the only difference between the code above and the original GA example is the added line self.complex_constraints = [constraints1,constraints2,constraints3]. Every other optimizer likewise defines the complex_constraints and complex_constraints_method attributes; simply pass in a list of constraint functions and the method for handling them, and you can optimize an objective with complex constraints. For example, using Random Walk to solve the same Rosenbrock problem with the same three constraints, the code and output are as follows:

from time import time
from sopt.util.functions import *
from sopt.util.constraints import *
from sopt.util.random_walk_config import *
from sopt.Optimizers.RandomWalk import RandomWalk

class TestRandomWalk:
    def __init__(self):
        self.func = Rosenbrock
        self.func_type = Rosenbrock_func_type
        self.variables_num = Rosenbrock_variables_num
        self.lower_bound = Rosenbrock_lower_bound
        self.upper_bound = Rosenbrock_upper_bound
        self.generations = 100
        self.init_step = 10
        self.eps = 1e-4
        self.vectors_num = 10
        self.init_pos = None
        self.complex_constraints = [constraints1, constraints2, constraints3]
        self.complex_constraints_method = complex_constraints_method.loop
        self.RandomWalk = RandomWalk(**self.__dict__)

    def test(self):
        start_time = time()
        self.RandomWalk.random_walk()
        print("random walk costs %.4f seconds!" % (time() - start_time))
        self.RandomWalk.save_plot()
        self.RandomWalk.show_result()

if __name__ == '__main__':
    TestRandomWalk().test()

The output:

Finish 1 random walk!

Finish 2 random walk!

Finish 3 random walk!

Finish 4 random walk!

Finish 5 random walk!

Finish 6 random walk!

Finish 7 random walk!

Finish 8 random walk!

Finish 9 random walk!

Finish 10 random walk!

Finish 11 random walk!

Finish 12 random walk!

Finish 13 random walk!

Finish 14 random walk!

Finish 15 random walk!

Finish 16 random walk!

Finish 17 random walk!

random walk costs 0.1543 seconds!

-------------------- random walk config is: --------------------

eps:0.0001

func_type:max

lower_bound:[-2.048 -2.048]

upper_bound:[2.048 2.048]

init_step:10

vectors_num:10

func:<function Rosenbrock at 0x...>

variables_num:2

walk_nums:17

complex_constraints_method:loop

generations:100

generations_nums:2191

complex_constraints:[<function constraints1 at 0x...>, <function constraints2 at 0x...>, <function constraints3 at 0x...>]

-------------------- random walk caculation result is: --------------------

global best generation index/total generations:2091/2191

global best point is: [-2.41416736 0.41430367]

global best target is: 2942.6882849234585

Figure 7: Random Walk solving the Rosenbrock function with three complex constraints

Notice that Random Walk finds a better optimum than GA here, and runs faster as well. In my experiments, across objectives both with and without complex constraints, the optimizers roughly rank as Random Walk > PSO > GA > SA in solution quality. So when tackling a concrete problem, it is worth trying several of the optimizers and keeping the best result.

7. Gradient-based optimization methods

The methods above, such as GA, PSO, and SA, are all random-search algorithms: their computation does not depend on the concrete form of the objective function and requires no gradient information. More traditional optimization algorithms are gradient-based, such as classic gradient descent (ascent) and its many variants. This section briefly introduces sopt's implementations of GD, Momentum, AdaGrad, RMSProp, and Adam. For the principles behind these gradient-based algorithms, see my earlier post on optimization methods commonly used in deep learning. Also note that the gradient-based algorithms below are generally meant for unconstrained problems; for constrained problems, use one of the optimizers above.
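For reference, these methods build on the standard textbook update rules: plain gradient descent steps against the gradient, \(x_{t+1} = x_t - \eta \nabla f(x_t)\), where \(\eta\) is the learning rate (the lr parameter below), while Momentum accumulates a velocity \(v_{t+1} = \gamma v_t + \eta \nabla f(x_t)\) and updates \(x_{t+1} = x_t - v_{t+1}\); AdaGrad, RMSProp, and Adam additionally rescale the step per coordinate using running statistics of past gradients. Here is a usage example for GradientDescent: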

from time import time
from sopt.util.gradients_config import *
from sopt.util.functions import *
from sopt.Optimizers.Gradients import GradientDescent

class TestGradientDescent:
    def __init__(self):
        self.func = quadratic50
        self.func_type = quadratic50_func_type
        self.variables_num = quadratic50_variables_num
        self.init_variables = None
        self.lr = 1e-3
        self.epochs = 5000
        self.GradientDescent = GradientDescent(**self.__dict__)

    def test(self):
        start_time = time()
        self.GradientDescent.run()
        print("Gradient Descent costs %.4f seconds!" % (time() - start_time))
        self.GradientDescent.save_plot()
        self.GradientDescent.show_result()

if __name__ == '__main__':
    TestGradientDescent().test()

The output is:

Gradient Descent costs 14.3231 seconds!

-------------------- Gradient Descent config is: --------------------

func_type:min

variables_num:50

func:<function quadratic50 at 0x...>

epochs:5000

lr:0.001

-------------------- Gradient Descent caculation result is: --------------------

global best epoch/total epochs:4999/5000

global best point: [ 0.9999524   1.99991045  2.99984898  3.9998496   4.99977767  5.9997246
  6.99967516  7.99964102  8.99958143  9.99951782 10.99947879 11.99944665
 12.99942492 13.99935192 14.99932708 15.99925856 16.99923686 17.99921689
 18.99911527 19.9991255  20.99908968 21.99899699 22.99899622 23.99887832
 24.99883597 25.99885616 26.99881394 27.99869772 28.99869349 29.9986766
 30.99861142 31.99851987 32.998556   33.99849351 34.99845985 35.99836731
 36.99832444 37.99831792 38.99821067 39.99816567 40.99814951 41.99808199
 42.99808161 43.99806655 44.99801207 45.99794449 46.99788003 47.99785468
 48.99780825 49.99771656]

global best target: 1.0000867498727912

Figure 8: GradientDescent solving quadratic50
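A note on the test function: quadratic50 comes from sopt.util.functions, and judging from the printed result (optimum value close to 1, reached near \(x_i = i\)), it is presumably a 50-variable quadratic of roughly the following form. This is an inference from the output above, not a definition confirmed from the sopt source:

def quadratic50_guess(x):
    # Hypothetical reconstruction: minimum value 1.0 at x = (1, 2, ..., 50).
    return 1 + sum((x[i] - (i + 1)) ** 2 for i in range(50))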

Finally, a brief overview of the main parameters of the GradientDescent, Momentum, and related classes. Parameters such as func and variables_num have been explained many times already and are not repeated here; the focus is on the parameters specific to each class.