1. Vanishing gradients
The gradient of the RNN's error at time step t with respect to W is:
\(\frac{\partial E_t}{\partial W}=\sum_{k=1}^{t}\frac{\partial E_t}{\partial y_t}\frac{\partial y_t}{\partial h_t}\frac{\partial h_t}{\partial h_k}\frac{\partial h_k}{\partial W}\) (Equation 1),
where \(h\) is the output of the hidden nodes, \(y_t\) is the network's output at time t, \(W\) is the hidden-to-hidden weight matrix, and \(\frac{\partial h_t}{\partial h_k}\) is the chain-rule expansion of the derivative over the time interval [k, t]. This interval can be very long, which is what causes the gradient to vanish or explode. Expanding \(\frac{\partial h_t}{\partial h_k}\) over time: \(\frac{\partial h_t}{\partial h_k}=\prod_{j=k+1}^{t}\frac{\partial h_j}{\partial h_{j-1}}=\prod_{j=k+1}^{t}W^T \times diag [\frac{\partial\sigma(h_{j-1})}{\partial h_{j-1}}]\). What exactly is this diag matrix? An example will make it clear. Suppose we want to compute \(\frac{\partial h_5}{\partial h_4}\). Recall how \(h_5\) is obtained in the forward pass: \(h_5=W\sigma(h_4)+W^{hx}x_4\), so \(\frac{\partial h_5}{\partial h_4}=W\frac{\partial \sigma(h_4)}{\partial h_4}\). Note that \(\sigma(h_4)\) and \(h_4\) are both vectors (of dimension D), so \(\frac{\partial \sigma(h_4)}{\partial h_4}\) is a Jacobian matrix: \(\frac{\partial \sigma(h_4)}{\partial h_4}=\) \(\begin{bmatrix} \frac{\partial\sigma_1(h_{41})}{\partial h_{41}}&\cdots&\frac{\partial\sigma_1(h_{41})}{\partial h_{4D}} \\ \vdots&\cdots&\vdots \\ \frac{\partial\sigma_D(h_{4D})}{\partial h_{41}}&\cdots&\frac{\partial\sigma_D(h_{4D})}{\partial h_{4D}}\end{bmatrix}\). Clearly, all off-diagonal entries are 0, because the sigmoid logistic function \(\sigma\) is an element-wise operation.
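A quick numerical check of the claim that this Jacobian is diagonal (a small sketch of my own, not part of the original derivation): compute \(\frac{\partial \sigma(h_4)}{\partial h_4}\) by finite differences and inspect the off-diagonal entries.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

np.random.seed(0)
D = 4
h4 = np.random.randn(D)

# Numerical Jacobian of the element-wise sigmoid at h4.
eps = 1e-6
J = np.zeros((D, D))
for j in range(D):
    e = np.zeros(D)
    e[j] = eps
    J[:, j] = (sigmoid(h4 + e) - sigmoid(h4 - e)) / (2 * eps)

print(np.round(J, 6))   # off-diagonal entries are (numerically) zero
```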
The rest of the derivation of the vanishing/exploding gradient is straightforward, so I will not repeat it here; see Equation (14) onward in http://cs224d.stanford.edu/lecture_notes/LectureNotes4.pdf.
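To see the effect of this repeated factor numerically, here is a minimal sketch (my own illustration, not from the lecture notes): it multiplies \(W^T \times diag[\sigma'(h_{j-1})]\) over many time steps and prints the norm of the accumulated product. The pre-activations are drawn at random rather than taken from a real forward pass, just to isolate the effect of the repeated multiplication by \(W^T\).

```python
import numpy as np

np.random.seed(0)
D, T = 10, 50                        # hidden dimension, number of time steps

def sigmoid_grad(h):
    s = 1.0 / (1.0 + np.exp(-h))
    return s * (1.0 - s)             # element-wise sigma'(h)

def jacobian_product_norm(scale):
    """Norm of prod_j W^T @ diag(sigma'(h_{j-1})) accumulated over T factors."""
    W = scale * np.random.randn(D, D) / np.sqrt(D)
    J = np.eye(D)
    for _ in range(T):
        h_prev = np.random.randn(D)  # stand-in pre-activations
        J = W.T @ np.diag(sigmoid_grad(h_prev)) @ J
    return np.linalg.norm(J)

print("small W:", jacobian_product_norm(0.5))   # shrinks toward 0  -> vanishing
print("large W:", jacobian_product_norm(20.0))  # grows enormously  -> exploding
```

With a small recurrent weight matrix the norm collapses toward 0, and with a large one it blows up over the 50 steps, matching the vanishing/exploding behaviour discussed in the notes.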
2. When weights are shared (tied), the gradient of the tied weight = the sum of the gradients of the individual weights
An example makes this clear: suppose the forward pass is \(y=F[W_1f(W_2x)]\), with the weights \(W_1\) and \(W_2\) tied, and we want the gradient \(\frac{\partial y}{\partial W}\).
Method 1:
First compute the gradient \(\frac{\partial y}{\partial W_1} = F'[]f() \)
Then compute the gradient \(\frac{\partial y}{\partial W_2} = F'[](W_1f'()x) \)
Adding the two gives \(F'[]f()+F'[](W_1f'()x)=F'[](f()+W_1f'()x)\)
Since the weights \(W_1\) and \(W_2\) are tied (call both of them \(W\)), this becomes \(F'[](f()+Wf'()x) = \frac{\partial y}{\partial W} \)
Method 2:
Now take a different route: assume from the start that \(W_1\) and \(W_2\) are tied and differentiate with respect to the shared \(W\) directly:
\(\frac{\partial y}{\partial W} = F'[]\frac{\partial (Wf())}{\partial W} = F'[](f()+W\frac{\partial f()}{\partial W}) = F'[](f()+Wf'()x) \), where the second step uses the product rule.
As you can see, the two methods give the same result. So when a weight is shared, the gradient with respect to the shared weight = the sum of the gradients taken with respect to each individual occurrence of that weight.
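As a sanity check on this conclusion, here is a scalar numerical sketch of my own (choosing \(F = f = \tanh\) purely for illustration): it compares the sum of the two per-occurrence gradients from Method 1 against a finite-difference derivative of \(y(W)\) with the weight tied, as in Method 2.

```python
import numpy as np

# Forward pass y = F[W1 * f(W2 * x)] with F = f = tanh, all quantities scalar.
def forward(W1, W2, x):
    return np.tanh(W1 * np.tanh(W2 * x))

x, W = 0.7, 0.9                        # tied weight: W1 = W2 = W

# Method 1: gradient w.r.t. each occurrence, then summed.
u = np.tanh(W * x)                     # f(W2 * x)
Fp = 1.0 - np.tanh(W * u) ** 2         # F'[W1 * f(W2 * x)]
dW1 = Fp * u                           # dy/dW1 = F'[] f()
dW2 = Fp * W * (1.0 - u ** 2) * x      # dy/dW2 = F'[] W1 f'() x
grad_sum = dW1 + dW2

# Method 2: differentiate y(W) directly with the weight tied (finite differences).
eps = 1e-6
grad_tied = (forward(W + eps, W + eps, x) - forward(W - eps, W - eps, x)) / (2 * eps)

print(grad_sum, grad_tied)             # the two numbers should agree closely
```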
3. How do LSTMs & Gated Recurrent Units avoid vanishing gradients?
To understand this, you will have to go through some math. The most accessible article wrt recurrent gradient problems IMHO is Pascanu's ICML2013 paper [1].
A summary: vanishing/exploding gradients come from the repeated application of the recurrent weight matrix [2]. A spectral radius of the recurrent weight matrix greater than 1 makes exploding gradients possible (it is a necessary condition), while a spectral radius smaller than 1 makes them vanish (a sufficient condition).
Now, if gradients vanish, that does not mean that all gradients vanish. Only some of them do; gradient information that is local in time will still be present. That means you might still have a non-zero gradient, but it will not contain long-term information. That's because some gradient g + 0 is still g. (In Equation 1 above, the terms are summed, so some terms being 0 does not force the whole sum to 0.)
If gradients explode, all of them do. That is because some gradient g + infinity is infinity. (In Equation 1 above, because the terms are summed, a single infinite term makes the whole sum infinite.)
That is the reason why the LSTM does not protect you from exploding gradients: the LSTM still uses recurrent weight matrices (the gates and the candidate \(\tilde{c}(t)\) are computed from \(h(t-1)\) through weight matrices, and \(h(t) = o(t) \circ \tanh(c(t))\)), not only the internal state-to-state connection \(c(t) = f(t) \circ c(t-1) + i(t) \circ \tilde{c}(t)\). Successful LSTM applications typically use gradient clipping.
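For reference, "gradient clipping" here usually means rescaling the whole gradient whenever its norm exceeds a threshold, as proposed in Pascanu's paper; a minimal sketch (the threshold value is chosen arbitrarily):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=5.0):
    """Rescale a list of gradient arrays so that their global L2 norm <= max_norm."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        grads = [g * (max_norm / total_norm) for g in grads]
    return grads

# Example: an "exploded" gradient is scaled back down to the threshold norm.
clipped = clip_by_global_norm([np.full((3, 3), 100.0)])
print(np.sqrt(sum(np.sum(g ** 2) for g in clipped)))   # -> ~5.0
```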
The LSTM does overcome the vanishing gradient problem, though. That is because if you look at the derivative of the internal state at T with respect to the internal state at T-1, there is no repeated weight application: the derivative is simply the value of the forget gate (see the short derivation after the next paragraph). To keep this from becoming zero, the forget gate needs to be initialised properly at the beginning of training.
That makes it clear why the cell states can act as "a wormhole through time": they can bridge long time lags and then (if the time is right) "re-inject" the stored information into the other parts of the net by opening the output gate.
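To spell out the derivative the answer refers to: with the cell update written as above, \(c(t) = f(t)\circ c(t-1) + i(t)\circ \tilde{c}(t)\), and treating the gate activations as constants with respect to \(c(t-1)\) (a common simplification), we get \(\frac{\partial c(t)}{\partial c(t-1)} = diag(f(t))\). The factor multiplied along the time axis is therefore the forget gate itself rather than a recurrent weight matrix, so as long as \(f(t)\) stays close to 1 the product over many time steps does not shrink to 0.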
[1] Pascanu, Razvan, Tomas Mikolov, and Yoshua Bengio. "On the difficulty of training recurrent neural networks." arXiv preprint arXiv:1211.5063 (2012).
[2] It might "vanish" also due to saturating nonlinearities, but that is something that can also happen in shallow nets and can be overcome with more careful weight initialisations.
ref: Recursive Deep Learning for Natural Language Processing and Computer Vision.pdf
CS224D-3-note bp.pdf
To be continued...
Reposted from: https://www.cnblogs.com/congliu/p/4546634.html