
Neural Networks Learning Notes

1. Neural Networks

[Figure: the example three-layer neural network, with an input layer, one hidden layer, and an output layer]

We use $n_l$ to denote the number of layers in the network; in this example $n_l = 3$. We label layer $l$ as $L_l$, so $L_1$ is the input layer and $L_{n_l}$ is the output layer. The network in this example has parameters

$$(W, b) = (W^{(1)}, b^{(1)}, W^{(2)}, b^{(2)}),$$

where $W^{(l)}_{ij}$ denotes the weight associated with the connection between unit $j$ in layer $l$ and unit $i$ in layer $l+1$ (note the order of the indices), and $b^{(l)}_i$ is the bias associated with unit $i$ in layer $l+1$. Thus, in this example, $W^{(1)} \in \mathbb{R}^{3 \times 3}$ and $W^{(2)} \in \mathbb{R}^{1 \times 3}$. We also write $s_l$ for the number of units in layer $l$ (not counting the bias unit).

We write $a^{(l)}_i$ for the activation (output value) of unit $i$ in layer $l$. When $l = 1$ we set

$$a^{(1)}_i = x_i,$$

i.e. the $i$-th input value (the $i$-th feature of the input). Given a fixed setting of the parameters $W, b$, the network computes its output according to the hypothesis $h_{W,b}(x)$. Concretely, the computation performed by the example network is:

$$\begin{aligned}
a^{(2)}_1 &= f\big(W^{(1)}_{11} x_1 + W^{(1)}_{12} x_2 + W^{(1)}_{13} x_3 + b^{(1)}_1\big) \\
a^{(2)}_2 &= f\big(W^{(1)}_{21} x_1 + W^{(1)}_{22} x_2 + W^{(1)}_{23} x_3 + b^{(1)}_2\big) \\
a^{(2)}_3 &= f\big(W^{(1)}_{31} x_1 + W^{(1)}_{32} x_2 + W^{(1)}_{33} x_3 + b^{(1)}_3\big) \\
h_{W,b}(x) &= a^{(3)}_1 = f\big(W^{(2)}_{11} a^{(2)}_1 + W^{(2)}_{12} a^{(2)}_2 + W^{(2)}_{13} a^{(2)}_3 + b^{(2)}_1\big)
\end{aligned}$$

We write $z^{(l)}_i$ for the total weighted sum of inputs to unit $i$ in layer $l$ (including the bias term), for example $z^{(2)}_i = \sum_{j=1}^{n} W^{(1)}_{ij} x_j + b^{(1)}_i$, so that $a^{(l)}_i = f\big(z^{(l)}_i\big)$.

If we extend the activation function $f(\cdot)$ to apply element-wise to vectors, i.e. $f([z_1, z_2, z_3]) = [f(z_1), f(z_2), f(z_3)]$, the equations above can be written more compactly as:

$$\begin{aligned}
z^{(2)} &= W^{(1)} x + b^{(1)} \\
a^{(2)} &= f\big(z^{(2)}\big) \\
z^{(3)} &= W^{(2)} a^{(2)} + b^{(2)} \\
h_{W,b}(x) &= a^{(3)} = f\big(z^{(3)}\big)
\end{aligned}$$

We call this computation forward propagation. Recall that we also use $a^{(1)} = x$ to denote the values of the input layer; then, given the activations $a^{(l)}$ of layer $l$, the activations $a^{(l+1)}$ of layer $l+1$ are computed as:

$$\begin{aligned}
z^{(l+1)} &= W^{(l)} a^{(l)} + b^{(l)} \\
a^{(l+1)} &= f\big(z^{(l+1)}\big)
\end{aligned}$$

By organizing the parameters into matrices and using matrix-vector operations, we can take advantage of fast linear-algebra routines to compute the network's output quickly.

Here the function $f : \mathbb{R} \mapsto \mathbb{R}$ is called the activation function. A common choice is the sigmoid function:

$$f(z) = \frac{1}{1 + \exp(-z)}$$

Its derivative is

$$f'(z) = f(z)\,\big(1 - f(z)\big)$$

The hyperbolic tangent ($\tanh$) function is a variant of the sigmoid whose output range is $(-1, 1)$ rather than the sigmoid's $(0, 1)$:

$$f(z) = \tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}}$$

Its derivative is

$$f'(z) = 1 - \big(f(z)\big)^2$$
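Both derivative identities are easy to sanity-check numerically; the short sketch below compares them against centered finite differences (the test points and the step size eps_fd are arbitrary choices, not from the original notes).

% Numerically verify f'(z) = f(z)(1 - f(z)) for the sigmoid and f'(z) = 1 - f(z)^2 for tanh.
z      = linspace(-3, 3, 7);                   % a few test points
eps_fd = 1e-6;                                 % finite-difference step

sig      = @(t) 1 ./ (1 + exp(-t));
d_sig    = sig(z) .* (1 - sig(z));             % analytic derivative
d_sig_fd = (sig(z + eps_fd) - sig(z - eps_fd)) / (2 * eps_fd);

d_tanh    = 1 - tanh(z).^2;                    % analytic derivative
d_tanh_fd = (tanh(z + eps_fd) - tanh(z - eps_fd)) / (2 * eps_fd);

disp(max(abs(d_sig - d_sig_fd)));              % both differences should be tiny (around 1e-10)
disp(max(abs(d_tanh - d_tanh_fd)));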

2. Backpropagation Algorithm

Suppose we have a fixed training set $\{(x^{(1)}, y^{(1)}), \ldots, (x^{(m)}, y^{(m)})\}$ of $m$ examples. We can train the neural network using batch gradient descent. Concretely, for a single example $(x, y)$ the cost function is:

$$J(W, b; x, y) = \frac{1}{2}\,\big\lVert h_{W,b}(x) - y \big\rVert^{2}.$$

This is a (one-half) squared-error cost. Given a training set of $m$ examples, we define the overall cost function as:

$$\begin{aligned}
J(W, b) &= \left[\frac{1}{m}\sum_{i=1}^{m} J\big(W, b; x^{(i)}, y^{(i)}\big)\right] + \frac{\lambda}{2}\sum_{l=1}^{n_l - 1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} \big(W^{(l)}_{ji}\big)^{2} \\
        &= \left[\frac{1}{m}\sum_{i=1}^{m}\left(\frac{1}{2}\,\big\lVert h_{W,b}\big(x^{(i)}\big) - y^{(i)} \big\rVert^{2}\right)\right] + \frac{\lambda}{2}\sum_{l=1}^{n_l - 1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}} \big(W^{(l)}_{ji}\big)^{2}
\end{aligned}$$

The first term in this definition of $J(W, b)$ is an average sum-of-squares error term. The second term is a regularization term (also called a weight-decay term), which tends to decrease the magnitude of the weights and helps prevent overfitting.

Our goal is to minimize $J(W, b)$ as a function of the parameters $W$ and $b$. To train the neural network, we initialize each parameter $W^{(l)}_{ij}$ and each $b^{(l)}_i$ to a small random value near zero (for example, drawn from a $\mathrm{Normal}(0, \epsilon^{2})$ distribution with $\epsilon = 0.01$), rather than setting them all to 0. If all the parameters started from the same value, all the hidden units would end up learning the same function of the input (more precisely, $W^{(1)}_{ij}$ would be the same for every $i$, so that $a^{(2)}_1 = a^{(2)}_2 = a^{(2)}_3 = \ldots$ for any input $x$); the random initialization breaks this symmetry.
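A minimal sketch of this random initialization, using $\epsilon = 0.01$ as suggested above (the layer sizes correspond to the running 3-3-1 example and are otherwise arbitrary):

% Initialize every W^{(l)} and b^{(l)} with small values drawn from Normal(0, epsilon^2)
% so that the hidden units start out different from one another (symmetry breaking).
epsilon    = 0.01;
layer_size = [3 3 1];                                            % s_1, s_2, s_3 for the example network
W = cell(1, numel(layer_size) - 1);
b = cell(1, numel(layer_size) - 1);
for l = 1:numel(layer_size) - 1
    W{l} = epsilon * randn(layer_size(l + 1), layer_size(l));    % W^{(l)} has size s_{l+1} x s_l
    b{l} = epsilon * randn(layer_size(l + 1), 1);                % b^{(l)} (zeros would also work)
end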

Each iteration of gradient descent updates the parameters $W$ and $b$ as follows:

$$\begin{aligned}
W^{(l)}_{ij} &:= W^{(l)}_{ij} - \alpha\,\frac{\partial}{\partial W^{(l)}_{ij}} J(W, b) \\
b^{(l)}_{i}  &:= b^{(l)}_{i}  - \alpha\,\frac{\partial}{\partial b^{(l)}_{i}} J(W, b)
\end{aligned}$$

where $\alpha$ is the learning rate.

The terms $\frac{\partial}{\partial W^{(l)}_{ij}} J(W, b; x, y)$ and $\frac{\partial}{\partial b^{(l)}_{i}} J(W, b; x, y)$ are the partial derivatives of the cost function $J(W, b; x, y)$ for a single example $(x, y)$. Once we can compute these, the partial derivatives of the overall cost function $J(W, b)$ follow as:

$$\begin{aligned}
\frac{\partial}{\partial W^{(l)}_{ij}} J(W, b) &= \left[\frac{1}{m}\sum_{i=1}^{m}\frac{\partial}{\partial W^{(l)}_{ij}} J\big(W, b; x^{(i)}, y^{(i)}\big)\right] + \lambda\, W^{(l)}_{ij} \\
\frac{\partial}{\partial b^{(l)}_{i}} J(W, b) &= \frac{1}{m}\sum_{i=1}^{m}\frac{\partial}{\partial b^{(l)}_{i}} J\big(W, b; x^{(i)}, y^{(i)}\big)
\end{aligned}$$

The idea behind backpropagation is as follows. Given a training example $(x, y)$, we first run a forward pass to compute all the activations in the network, including the output value $h_{W,b}(x)$. Then, for each node $i$ in layer $l$, we compute a "residual" (error term) $\delta^{(l)}_{i}$ that measures how much that node contributed to the error in the final output. For the output nodes we can measure directly the gap between the network's activation and the true target value, and we use that gap to define $\delta^{(n_l)}_{i}$ (recall that layer $n_l$ is the output layer).

The backpropagation algorithm can then be written as the following steps (with $\bullet$ denoting the element-wise product):

1. Perform a feed-forward pass, using the forward-propagation equations to compute the activations of layers $L_2, L_3, \ldots$ up to the output layer $L_{n_l}$.

2. For the output layer (layer $n_l$), compute:

$$\delta^{(n_l)} = -\big(y - a^{(n_l)}\big) \bullet f'\big(z^{(n_l)}\big) \qquad (1)$$

3. For the layers $l = n_l - 1, n_l - 2, n_l - 3, \ldots, 2$, compute:

$$\delta^{(l)} = \Big(\big(W^{(l)}\big)^{T}\,\delta^{(l+1)}\Big) \bullet f'\big(z^{(l)}\big) \qquad (2)$$

4. Compute the desired partial derivatives:

$$\begin{aligned}
\nabla_{W^{(l)}} J(W, b; x, y) &= \delta^{(l+1)}\,\big(a^{(l)}\big)^{T}, \\
\nabla_{b^{(l)}} J(W, b; x, y) &= \delta^{(l+1)}.
\end{aligned} \qquad (3)$$

If $f(z)$ is the sigmoid function, then $f'\big(z^{(l)}_{i}\big) = a^{(l)}_{i}\,\big(1 - a^{(l)}_{i}\big)$, so the derivative can be computed directly from the activations saved during the forward pass.
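Putting steps 1-4 together, a minimal MATLAB sketch of backpropagation for one example $(x, y)$ with sigmoid activations might look like the following; it reuses the W{l}, b{l} cell arrays from the initialization sketch above and is an illustration rather than the toolbox code discussed later.

% Backpropagation for a single example (x, y) with sigmoid activations.
% Assumes W{l}, b{l} as initialized above; a, z and delta hold per-layer values.
f  = @(t) 1 ./ (1 + exp(-t));
nl = numel(W) + 1;                                % number of layers

% Step 1: forward pass
a = cell(1, nl);  z = cell(1, nl);
a{1} = x;
for l = 1:nl - 1
    z{l + 1} = W{l} * a{l} + b{l};
    a{l + 1} = f(z{l + 1});
end

% Step 2: output-layer residual, equation (1); f'(z) = a .* (1 - a) for the sigmoid
delta = cell(1, nl);
delta{nl} = -(y - a{nl}) .* a{nl} .* (1 - a{nl});

% Step 3: residuals of layers nl-1, ..., 2, equation (2)
for l = nl - 1:-1:2
    delta{l} = (W{l}' * delta{l + 1}) .* a{l} .* (1 - a{l});
end

% Step 4: gradients for this single example, equation (3)
gradW = cell(1, nl - 1);  gradb = cell(1, nl - 1);
for l = 1:nl - 1
    gradW{l} = delta{l + 1} * a{l}';
    gradb{l} = delta{l + 1};
end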

We can now write down one iteration of batch gradient descent:

1. For all $l$, set $\Delta W^{(l)} := 0$ and $\Delta b^{(l)} := 0$ (a matrix and a vector of zeros).

2. For $i = 1$ to $m$:

   Use backpropagation to compute $\nabla_{W^{(l)}} J(W, b; x, y)$ and $\nabla_{b^{(l)}} J(W, b; x, y)$.

   Set $\Delta W^{(l)} := \Delta W^{(l)} + \nabla_{W^{(l)}} J(W, b; x, y)$.

   Set $\Delta b^{(l)} := \Delta b^{(l)} + \nabla_{b^{(l)}} J(W, b; x, y)$.

3. Update the parameters:

$$\begin{aligned}
W^{(l)} &:= W^{(l)} - \alpha\left[\left(\frac{1}{m}\,\Delta W^{(l)}\right) + \lambda\, W^{(l)}\right] \\
b^{(l)} &:= b^{(l)} - \alpha\left[\frac{1}{m}\,\Delta b^{(l)}\right]
\end{aligned}$$

We can then repeatedly take such gradient-descent steps to reduce the value of the cost function $J(W, b)$ and so train our neural network.
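For concreteness, here is a sketch of one such iteration in MATLAB, building on the single-example gradients above; the learning rate alpha, the weight decay lambda, the data matrices Xdata/Ydata (examples stored column-wise), and the helper backprop_single are all assumptions made for the illustration.

% One iteration of batch gradient descent (sketch).
alpha = 0.1;  lambda = 1e-4;                  % assumed hyperparameters
m = size(Xdata, 2);                           % number of training examples

% Step 1: zero the accumulators
DeltaW = cell(size(W));  Deltab = cell(size(b));
for l = 1:numel(W)
    DeltaW{l} = zeros(size(W{l}));  Deltab{l} = zeros(size(b{l}));
end

% Step 2: accumulate the per-example gradients over all m examples
for i = 1:m
    % backprop_single is a hypothetical helper wrapping the single-example sketch above
    [gradW, gradb] = backprop_single(W, b, Xdata(:, i), Ydata(:, i));
    for l = 1:numel(W)
        DeltaW{l} = DeltaW{l} + gradW{l};
        Deltab{l} = Deltab{l} + gradb{l};
    end
end

% Step 3: update the parameters (weight decay applies to W only)
for l = 1:numel(W)
    W{l} = W{l} - alpha * ((1 / m) * DeltaW{l} + lambda * W{l});
    b{l} = b{l} - alpha * ((1 / m) * Deltab{l});
end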

Reading the backpropagation code

Code source: Matlab Toolbox for Dimensionality Reduction

function network = backprop(network, X, T, max_iter, noise, lambda)
%BACKPROP Trains a network on a dataset using backpropagation
%
% The function trains the specified network using backpropagation on
% dataset X with targets T for max_iter iterations. The dataset X is an NxD
% matrix, whereas the targets matrix T has size NxM. The function returns 
% the trained network in network.
%The targets T serve as the correct output values used to build the loss function.
%lambda is the weight-decay coefficient, typically set to 1e-4.
%
    if ~exist('max_iter', 'var') || isempty(max_iter)
        max_iter = 10;    % assumed default number of iterations
    end
    if ~exist('noise', 'var') || isempty(noise)
        noise = 0;        % assumed default: no input corruption
    end
    if ~exist('lambda', 'var') || isempty(lambda)
        lambda = 1e-4;    % default weight decay, per the note above
    end

    % Initialize some variables
    n = size(X, 1);                          % number of samples in the dataset X
    no_layers = length(network);             % number of layers in the network
    batch_size = max(round(n / 100), 100);   % take the larger of round(n / 100) and 100 as the batch size

    % Perform the backpropagation
    for iter=1:max_iter
        disp([' - iteration ' num2str(iter) ' of ' num2str(max_iter) '...']);

        % Loop over all batches
        index = randperm(n);
        for batch=1:batch_size:n   % use each batch of samples to update W and bias_upW once (batch-wise gradient descent)

            % Select current batch
            tmpX = X(index(batch:min([batch + batch_size - 1 n])),:);   % take one batch of samples; the final batch may contain fewer than batch_size samples
            tmpT = T(index(batch:min([batch + batch_size - 1 n])),:);

            % Randomly black out some of the input data
            if noise > 0
                tmpX(rand(size(tmpX)) < noise) = 0;
            end   

            % Convert the weights and store them in the network
            v = [];
            for i=1:length(network)
                v = [v; network{i}.W(:); network{i}.bias_upW(:)];   % stack every layer's W and bias_upW into a single column vector
            end

            % Conjugate gradient minimization using 3 linesearches
            [v, fX] = minimize(v, 'backprop_gradient', 3, network, tmpX, tmpT, lambda);
            % [C, dC] = backprop_gradient(v, network, X, targets, lambda) runs the network on X, compares the
            % output with the correct outputs (targets) and builds the loss; lambda is the L2 regularization
            % coefficient on W; C is the loss value and dC its gradient.
            % minimize is conjugate gradient descent (3 line searches) for continuously differentiable
            % multivariate functions: starting from the column vector v, it returns the v that minimizes the
            % loss, and fX holds the loss value of each search (empty if the search was unsuccessful).

            % Deconvert the weights and store them in the network
            ind = 1;   % split the column vector v back into each layer's W and bias_upW
            for i=1:no_layers
                network{i}.W        = reshape(v(ind:ind - 1 + numel(network{i}.W)),        size(network{i}.W));         ind = ind + numel(network{i}.W);
                network{i}.bias_upW = reshape(v(ind:ind - 1 + numel(network{i}.bias_upW)), size(network{i}.bias_upW));  ind = ind + numel(network{i}.bias_upW);
            end

            % Stop upon convergence
            if isempty(fX)
                reconX = run_data_through_autoenc(network, X);
                C = sum((T(:) - reconX(:)) .^ 2) ./ n;
                disp([' - final noisy MSE: ' num2str(C)]);
                return
            end
        end   % finished looping over the batches

        % Estimate the current error
        reconX = run_data_through_autoenc(network, X);
        C = sum((T(:) - reconX(:)) .^ 2) ./ n;          % estimate the current error
        disp([' - current noisy MSE: ' num2str(C)]);    % display the error once per iteration
    end   % reached the maximum number of iterations max_iter
           
function [X, fX, i] = minimize(X, f, length, P1, P2, P3, P4, P5, P6);

% Minimize a continuous differentiable multivariate function. Starting point
% is given by "X" (D by 1), and the function named in the string "f", must
% return a function value and a vector of partial derivatives. The Polack-
% Ribiere flavour of conjugate gradients is used to compute search directions,
% and a line search using quadratic and cubic polynomial approximations and the
% Wolfe-Powell stopping criteria is used together with the slope ratio method
% for guessing initial step sizes. Additionally a bunch of checks are made to
% make sure that exploration is taking place and that extrapolation will not
% be unboundedly large. The "length" gives the length of the run: if it is
% positive, it gives the maximum number of line searches, if negative its
% absolute gives the maximum allowed number of function evaluations. You can
% (optionally) give "length" a second component, which will indicate the
% reduction in function value to be expected in the first line-search (defaults
% to 1.0). The function returns when either its length is up, or if no further
% progress can be made (ie, we are at a minimum, or so close that due to
% numerical problems, we cannot get any closer). If the function terminates
% within a few iterations, it could be an indication that the function value
% and derivatives are not consistent (ie, there may be a bug in the
% implementation of your "f" function). The function returns the found
% solution "X", a vector of function values "fX" indicating the progress made
% and "i" the number of iterations (line searches or function evaluations,
% depending on the sign of "length") used.
%
% Usage: [X, fX, i] = minimize(X, f, length, P1, P2, P3, P4, P5, P6)
%
% See also: checkgrad 
%
% Copyright (C) 2001 and 2002 by Carl Edward Rasmussen. Date 2002-02-13
%
%

% This file is part of the Matlab Toolbox for Dimensionality Reduction.
% The toolbox can be obtained from http://homepage.tudelft.nl/19j49
% You are free to use, change, or redistribute this code in any way you
% want for non-commercial purposes. However, it is appreciated if you 
% maintain the name of the original author.
%
% (C) Laurens van der Maaten, Delft University of Technology


RHO = 0.01;                            % a bunch of constants for line searches
SIG = 0.5;       % RHO and SIG are the constants in the Wolfe-Powell conditions
INT = 0.1;    % don't reevaluate within 0.1 of the limit of the current bracket
EXT = 3.0;                    % extrapolate maximum 3 times the current bracket
MAX = 20;                         % max 20 function evaluations per line search
RATIO = 100;                                    % maximum allowed slope ratio

argstr = [f, '(X'];                      % compose string used to call function
for i = 1:(nargin - 3)
  argstr = [argstr, ',P', int2str(i)];
end
argstr = [argstr, ')'];        % builds the call string: backprop_gradient(X,P1,P2,P3,P4)

if max(size(length)) == 2, red=length(2); length=length(1); else red=1; end
if length>0, S=['Linesearch']; else S=['Function evaluation']; end 

i = 0;                                            % zero the run length counter
ls_failed = 0;                             % no previous line search has failed
fX = [];
[f1 df1] = eval(argstr);                      % get function value and gradient,
                                              % i.e. run backprop_gradient(X, network, tmpX, tmpT, lambda)
i = i + (length<0);                                            % count epochs?!
s = -df1;                                        % search direction is steepest
d1 = -s'*s;                                                 % this is the slope
z1 = red/(1-d1);                                  % initial step is red/(|s|+1)

while i < abs(length)                                      % while not finished
  i = i + (length>0);                                      % count iterations?!

  X0 = X; f0 = f1; df0 = df1;                   % make a copy of current values
  X = X + z1*s;                                             % begin line search
  [f2 df2] = eval(argstr);
  i = i + (length<0);                                          % count epochs?!
  d2 = df2'*s;
  f3 = f1; d3 = d1; z3 = -z1;             % initialize point 3 equal to point 1
  if length>0, M = MAX; else M = min(MAX, -length-i); end
  success = 0; limit = -1;                     % initialize quantities
  while 1
    while ((f2 > f1+z1*RHO*d1) | (d2 > -SIG*d1)) & (M > 0) 
      limit = z1;                                         % tighten the bracket
      if f2 > f1
        z2 = z3 - (0.5*d3*z3*z3)/(d3*z3+f2-f3);                 % quadratic fit
      else
        A = 6*(f2-f3)/z3+3*(d2+d3);                                 % cubic fit
        B = 3*(f3-f2)-z3*(d3+2*d2);
        z2 = (sqrt(B*B-A*d2*z3*z3)-B)/A;       % numerical error possible - ok!
      end
      if isnan(z2) | isinf(z2)
        z2 = z3/2;                  % if we had a numerical problem then bisect
      end
      z2 = max(min(z2, INT*z3),(1-INT)*z3);  % don't accept too close to limits
      z1 = z1 + z2;                                           % update the step
      X = X + z2*s;
      [f2 df2] = eval(argstr);
      M = M - 1; i = i + (length<0);                           % count epochs?!
      d2 = df2'*s;
      z3 = z3-z2;                    % z3 is now relative to the location of z2
    end
    if f2 > f1+z1*RHO*d1 | d2 > -SIG*d1
      break;                                                % this is a failure
    elseif d2 > SIG*d1
      success = 1; break;                                             % success
    elseif M == 0
      break;                                                          % failure
    end
    A = 6*(f2-f3)/z3+3*(d2+d3);                      % make cubic extrapolation
    B = 3*(f3-f2)-z3*(d3+2*d2);
    z2 = -d2*z3*z3/(B+sqrt(B*B-A*d2*z3*z3));        % num. error possible - ok!
    if ~isreal(z2) | isnan(z2) | isinf(z2) | z2 < 0   % num prob or wrong sign?
      if limit < -0.5                               % if we have no upper limit
        z2 = z1 * (EXT-1);                     % extrapolate the maximum amount
      else
        z2 = (limit-z1)/2;                                   % otherwise bisect
      end
    elseif (limit > -0.5) & (z2+z1 > limit)         % extrapolation beyond max?
      z2 = (limit-z1)/2;                                               % bisect
    elseif (limit < -0.5) & (z2+z1 > z1*EXT)       % extrapolation beyond limit
      z2 = z1*(EXT-1.0);                           % set to extrapolation limit
    elseif z2 < -z3*INT
      z2 = -z3*INT;
    elseif (limit > -0.5) & (z2 < (limit-z1)*(1.0-INT))   % too close to limit?
      z2 = (limit-z1)*(1.0-INT);
    end
    f3 = f2; d3 = d2; z3 = -z2;                  % set point 3 equal to point 2
    z1 = z1 + z2; X = X + z2*s;                      % update current estimates
    [f2 df2] = eval(argstr);
    M = M - 1; i = i + (length<0);                             % count epochs?!
    d2 = df2'*s;
  end                                                      % end of line search

  if success                                         % if line search succeeded
    f1 = f2; fX = [fX' f1]';
%     fprintf('%s %6i;  Value %4.6e\r', S, i, f1);
    s = (df2'*df2-df1'*df2)/(df1'*df1)*s - df2;      % Polack-Ribiere direction
    tmp = df1; df1 = df2; df2 = tmp;                         % swap derivatives
    d2 = df1'*s;
    if d2 > 0                                      % new slope must be negative
      s = -df1;                              % otherwise use steepest direction
      d2 = -s'*s;    
    end
    z1 = z1 * min(RATIO, d1/(d2-realmin));          % slope ratio but max RATIO
    d1 = d2;
    ls_failed = 0;                              % this line search did not fail
  else
    X = X0; f1 = f0; df1 = df0;  % restore point from before failed line search
    if ls_failed | i > abs(length)          % line search failed twice in a row
      break;                             % or we ran out of time, so we give up
    end
    tmp = df1; df1 = df2; df2 = tmp;                         % swap derivatives
    s = -df1;                                                    % try steepest
    d1 = -s'*s;
    z1 = 1/(1-d1);                     
    ls_failed = 1;                                    % this line search failed
  end
end

           
function [C, dC] = backprop_gradient(v, network, X, targets, lambda)
%BACKPROP Compute the cost gradient for CG optimization of a neural network
%Builds the loss function and its gradient for the conjugate-gradient optimizer
    % Initialize some variables
    n = size(X, 1);
    no_layers = length(network);
    middle_layer = ceil(no_layers / 2);

    % Deconvert the weights and store them in the network
    ind = 1;
    for i=1:no_layers
        network{i}.W        = reshape(v(ind:ind - 1 + numel(network{i}.W)),        size(network{i}.W));         ind = ind + numel(network{i}.W);
        network{i}.bias_upW = reshape(v(ind:ind - 1 + numel(network{i}.bias_upW)), size(network{i}.bias_upW));  ind = ind + numel(network{i}.bias_upW);
    end

    % Run the data through the network
    %Feed-forward pass to compute the output values of every layer
    activations = cell(1, no_layers + 1);
    activations{1} = [X ones(n, 1)];   % the first layer holds the raw data X, with a column of ones appended for the bias
    for i=1:no_layers
        if i ~= middle_layer && i ~= no_layers
            activations{i + 1} = [1 ./ (1 + exp(-(activations{i} * [network{i}.W; network{i}.bias_upW]))) ones(n, 1)];   % sigmoid activation
        else
            activations{i + 1} = [activations{i} * [network{i}.W; network{i}.bias_upW] ones(n, 1)];   % linear activation (middle and output layers)
        end
    end  

    % Compute value of cost function (= MSE)
    C = (1 / (2 * n)) .* sum(sum((activations{end}(:,1:end - 1) - targets) .^ 2)) + lambda .* sum(v .^ 2);
    % The first term is the mean squared error between the network output (activations) and the correct outputs
    % (targets); the second is a regularization (weight-decay) term that shrinks v to prevent overfitting.
    % Together they correspond to J(W,b) in the text above.
    % Only compute gradient if requested
    if nargout > 1

        % Compute gradients 
        dW = cell(1, no_layers);
        db = cell(1, no_layers);
        Ix = (activations{end}(:,1:end - 1) - targets) ./ n;
        % Residual of the last (output) layer, corresponding to equation (1) above (the output layer here is
        % linear, so there is no f' factor, and the division by n averages over the batch)
        for i=no_layers:-1:1   % walk backwards over the layers, computing each layer's gradient and backpropagating the residual

            % Compute update
            delta = activations{i}' * Ix;   % corresponds to equation (3) above: the next layer's residual times this layer's activations (whose last column is the all-ones bias column)
            dW{i} = delta(1:end - 1,:);
            db{i} = delta(end,:);

            % Backpropagate error
            if i > 1
                if i ~= middle_layer + 1
                    Ix = (Ix * [network{i}.W; network{i}.bias_upW]') .* activations{i} .* (1 - activations{i});   % residual of the earlier layers, corresponding to equation (2) above
                else
                    Ix = Ix * [network{i}.W; network{i}.bias_upW]';
                end
                Ix = Ix(:,1:end - 1);
            end
        end   % finished computing dW and db for every layer

        % Convert gradient information
        %Combine every layer's dW and db into a single column vector dC
        dC = zeros(numel(v), 1);
        ind = 1;
        for i=1:no_layers
            dC(ind:ind - 1 + numel(dW{i})) = dW{i}(:); ind = ind + numel(dW{i});
            dC(ind:ind - 1 + numel(db{i})) = db{i}(:); ind = ind + numel(db{i});
        end
        dC = dC + 2 .* lambda .* v;
    end
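
For orientation, a hypothetical call into this toolbox code might look as follows; the layer sizes, data, and iteration count are made up, and only the fields network{i}.W and network{i}.bias_upW that the functions above actually read are assumed.

% Hypothetical usage sketch: fine-tune a small 10-6-2-6-10 autoencoder (targets T = X).
X    = rand(500, 10);                             % 500 samples, 10 features
dims = [10 6 2 6 10];                             % layer sizes, input to output
net  = cell(1, numel(dims) - 1);
for i = 1:numel(dims) - 1
    net{i}.W        = 0.01 * randn(dims(i), dims(i + 1));
    net{i}.bias_upW = zeros(1, dims(i + 1));
end
net = backprop(net, X, X, 5);                     % 5 iterations, default noise and lambda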
           

References:

UFLDL Tutorial
