Generative and Discriminative Models
Intro
Recently I gave a presentation at work, where I explained how I solved a problem using Conditional Random Fields (CRF). Since CRF was not well known to my colleagues, I had a couple of slides devoted to the theory behind this algorithm. As I prepared the theory slides, I felt a duty to compare CRF with a conceptually similar algorithm, the Hidden Markov Model (HMM). CRF is known to be a discriminative model, and HMM a generative model. I had to refresh my knowledge about this categorisation of supervised machine learning methods, especially generative models. Now I would like to share my understanding of the difference between generative and discriminative models in simple terms.
Generative models are a wide class of machine learning algorithms which make predictions by modelling the joint distribution P(y, x).
Discriminative models are a class of supervised machine learning models which make predictions by estimating the conditional probability P(y|x).
In order to use a generative model, more unknowns must be estimated: one has to estimate the probability of each class and the probability of an observation given a class. These probabilities are used to compute the joint probability, and finally the joint probability can be used as a substitute for the conditional probability to make predictions.
Generative models take more steps than discriminative models to estimate the conditional probability P(y|x).
The discriminative model takes a shorter route: it simply estimates the conditional probability directly.
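The two routes can be sketched side by side. This is a minimal illustration with a made-up two-class problem and one discrete feature; all probability values below are invented for the example:

```python
import numpy as np

# Toy problem: two classes y ∈ {0, 1}, one discrete feature x ∈ {0, 1, 2}.
# All probabilities below are made-up illustrative numbers.

# Generative route: estimate P(y) and P(x|y), then form the joint P(y, x).
p_y = np.array([0.6, 0.4])                  # P(y)
p_x_given_y = np.array([[0.7, 0.2, 0.1],    # P(x | y = 0)
                        [0.1, 0.3, 0.6]])   # P(x | y = 1)
joint = p_y[:, None] * p_x_given_y          # P(y, x) = P(y) * P(x|y)

x = 2
pred_generative = joint[:, x].argmax()      # argmax over y of P(y, x)

# Discriminative route: estimate P(y|x) directly; here we just write it out.
p_y_given_x = joint[:, x] / joint[:, x].sum()
pred_discriminative = p_y_given_x.argmax()

print(pred_generative, pred_discriminative)  # the two routes agree
```

The generative route needed two tables of estimates (P(y) and P(x|y)); the discriminative route only ever needed the last normalised vector.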
There are many pros and cons to each model. I will just note that a generative model can be used to generate new samples, but it requires more data. Given the same amount of data, a discriminative model is often superior to a generative model, but it knows nothing about the dependencies between features, because they are irrelevant for prediction. Therefore a discriminative model cannot generate new samples.
Now let’s take a closer look at the concept of generative models.
Generative model
As I showed earlier, to make predictions, the conditional distribution P(y|x) is enough. But since P(y|x) = P(y, x) / P(x), where P(x) is constant for the given x and all possible y, it is valid to use the joint distribution P(y, x) to make predictions.
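In argmax form, the substitution described above reads:

```latex
\hat{y} \;=\; \arg\max_y P(y \mid x)
        \;=\; \arg\max_y \frac{P(y, x)}{P(x)}
        \;=\; \arg\max_y P(y, x)
```

since P(x) is the same for every candidate y, dividing by it does not change which y wins.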
What is meant by modelling the joint distribution P(y, x) is that for each pair (yᵢ, xᵢ) a probability P(yᵢ, xᵢ) is known (modelled). At the beginning it was a bit difficult for me to understand how this is even possible: the range of possible values of x might be enormous, so it would be unrealistic to assign a probability to each xᵢ, let alone to each pair (yᵢ, xᵢ). How is it supposed to be done?
First: Bayes' theorem! It breaks the computation of the joint probability P(y, x) into the computation of two other probabilities: the probability of a class, P(y), and the probability of an observation given a class, P(x|y).
P(y, x) = P(y) * P(x|y)
What benefit does this give? This way it is at least easier to figure out the probability P(y), because it can be estimated from the dataset by computing class frequencies. P(x|y) is trickier, because usually x is not just one feature but a set of features, x = x₁, …, xₙ, which might have dependencies between each other.
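The easy half, estimating P(y) from class frequencies, is a one-liner. A sketch with a hypothetical list of labels:

```python
from collections import Counter

# Hypothetical labels from a training set; P(y) is just relative frequency.
labels = ["spam", "ham", "ham", "spam", "ham", "ham"]

p_y = {cls: count / len(labels) for cls, count in Counter(labels).items()}
print(p_y)  # {'spam': 0.333..., 'ham': 0.666...}
```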
P(x|y) = Π P(xᵢ|y, x₁, …, xᵢ₋₁, xᵢ₊₁, …, xₙ)
Often the dependencies between the features are not known, especially when they appear in complex constellations (y, x₁, …, xᵢ₋₁, xᵢ₊₁, …, xₙ).
So what should be done to estimate P(x|y)? For this, there is the following trick:
Second: make wild assumptions! Or at least some assumptions that make the estimation of P(x|y) tractable. The Naive Bayes classifier is a perfect example of a generative model with such an assumption, which makes the computation of P(x|y) easier: namely, it assumes independence between the features x₁, …, xₙ.
P(x|y) = Π P(xᵢ|y)
With this relaxation, the estimation of P(x|y) is tractable, because every P(xᵢ|y) can be estimated independently of the other features: by counting frequencies if the feature xᵢ is discrete, or, for example, by fitting a Gaussian distribution if it is continuous.
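Putting the two estimates together, P(y) from class frequencies and each P(xᵢ|y) from per-class frequency counts, gives a minimal Naive Bayes classifier for discrete features. This is only a sketch; the tiny weather-style dataset at the bottom is made up for illustration:

```python
from collections import Counter, defaultdict

def fit(X, y):
    """Estimate P(y) and, under the independence assumption, each P(xi|y)."""
    n = len(y)
    priors = {c: count / n for c, count in Counter(y).items()}  # P(y)
    # counts[c][i] is a Counter of values of feature i among class-c samples
    counts = defaultdict(lambda: defaultdict(Counter))
    for features, label in zip(X, y):
        for i, value in enumerate(features):
            counts[label][i][value] += 1
    # Normalise counts into conditional probabilities P(xi = value | y = c)
    likelihoods = {
        c: {i: {v: cnt / sum(counter.values()) for v, cnt in counter.items()}
            for i, counter in feats.items()}
        for c, feats in counts.items()
    }
    return priors, likelihoods

def predict(priors, likelihoods, x):
    """Pick argmax over classes of P(y) * Π P(xi|y)."""
    best_class, best_score = None, -1.0
    for c, prior in priors.items():
        score = prior
        for i, v in enumerate(x):
            score *= likelihoods[c][i].get(v, 0.0)  # 0 for unseen values
        if score > best_score:
            best_class, best_score = c, score
    return best_class

# Made-up training data: (outlook, temperature) -> play?
X = [("sunny", "hot"), ("sunny", "mild"), ("rainy", "mild"), ("rainy", "hot")]
y = ["no", "no", "yes", "no"]

priors, likelihoods = fit(X, y)
print(predict(priors, likelihoods, ("rainy", "mild")))  # prints: yes
```

Note the independence assumption is exactly what lets `fit` count each feature's values on its own, without ever looking at combinations of features.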
Conclusion
So now you can see that in order to use a generative model, one should be prepared to estimate two types of probabilities, P(y) and P(x|y). A discriminative model, in contrast, estimates the conditional probability P(y|x) directly, which is often more efficient because dependencies between features are not estimated; those relationships do not necessarily contribute to predicting the target variable.
Translated from: https://medium.com/@tanyadembelova/introduction-to-generative-and-discriminative-models-9c9ef152b9af