from:https://blog.csdn.net/on2way/article/details/72773771
Preface
GANs have been the rising star of deep learning over the past two years. This post aims to give an accessible introduction to the classic GAN and share some learning notes. Most existing GAN implementations are written in Python with frameworks such as Torch; here, we will later build a simple GAN in MATLAB so the underlying principle is easy to follow.
The seminal GAN work is a 2014 NIPS paper, Generative Adversarial Nets, which is well worth a careful read.
- A curated collection of papers on the many GAN variants
- A curated collection of code for the many GAN variants
Getting started
The core idea of a GAN is a two-player zero-sum game: the players' payoffs always sum to a constant. Think of arm wrestling over a fixed amount of space: if your arm is stronger you claim a bit more space and I get correspondingly less; if mine is stronger the reverse holds, but one thing is certain, our total space is fixed. That is a two-player game with a constant total payoff.
Carried over to GANs, there are two such players: one is called the generative model (G), the other the discriminative model (D), and each has its own role.
What they have in common:
- Both models can be viewed as black boxes that take an input and produce an output, much like a function: an input-output mapping.
How they differ:
- The generative model acts as a sample generator: given a noise vector as input, it dresses it up into a realistic-looking sample, which is its output.
- The discriminative model acts as a binary classifier (a 0-1 classifier) that judges whether an input sample is real or fake (i.e., whether its output is above or below 0.5).
Here is a figure that I find explains this well:
Before going further, let us be clear about two questions when using a GAN:
- What do we have? In the figure above, all we have is a dataset of real face images collected from the world, and, crucially, we do not even have class labels for those faces; that is, we do not know which face belongs to whom.
- What do we want? That depends on the task, but for the original GAN the goal is simply this: feed in a noise vector and get out a synthetic face image, one realistic enough to pass for a real photo.
Now let us look at what the two models in a GAN are supposed to do. First the discriminative model, the right half of the figure: intuitively it is a plain neural network whose input is an image and whose output is a probability used to judge real versus fake (above 0.5 means real, below means fake); "real" and "fake" here are just thresholds people define on that probability. Next the generative model, which can likewise be viewed as a neural network: its input is a vector of random numbers Z and its output is an image rather than a single value. As the figure shows, there are two datasets: one of real data, which is straightforward, and a second, fake dataset, which is produced by the generator network. With this picture in mind, the goals of a GAN are:
- The discriminator network's goal: to tell whether a given image comes from the real sample set or the fake one. If the network outputs close to 1 for real inputs and close to 0 for fake inputs, it is doing its discriminating job perfectly.
- The generator network's goal: to make its sample-forging ability as strong as possible, so strong that the discriminator network cannot tell its output apart from real samples.
With that understanding, let us see why this is called an adversarial network. The discriminator says: I am strong; show me a sample and I will know whether it came from the real set or the fake set. The generator objects: I am strong too; I will forge a sample that I know is fake, but wrapped up so convincingly that you cannot tell. In terms of output values, this means that when a generated fake sample goes into the discriminator, the discriminator outputs a value close to 0.5, exactly 0.5 in the limit; at that point it can no longer discriminate at all, and we have reached a Nash equilibrium.
This analysis shows that the generator's and discriminator's goals are exactly opposed: one says I can tell them apart, the other says I will make sure you cannot. Hence "adversarial", hence a game. So who wins in the end? That comes down to the designer, i.e., whom we want to win. Since our goal as designers is samples that pass for real, we naturally want the generator to win: we want the generated samples to be so realistic that the discriminator's capacity is no longer enough to separate real from fake.
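The two black boxes described above can be sketched as plain functions. This is a hypothetical shapes-only sketch (one random affine layer each, not a trained network): G maps a 100-dimensional noise vector to a 784-dimensional image, and D maps an image to a single probability.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, W):      # G: noise (100,) -> image (784,)
    return 1.0 / (1.0 + np.exp(-(W @ z)))   # sigmoid keeps pixels in (0, 1)

def discriminator(x, v):  # D: image (784,) -> probability of "real" in (0, 1)
    return 1.0 / (1.0 + np.exp(-(v @ x)))

W = rng.normal(0, 0.01, (784, 100))  # untrained, random weights
v = rng.normal(0, 0.01, 784)

z = rng.normal(0, 1, 100)       # a noise vector
fake = generator(z, W)          # a (bad, untrained) fake image
p_real = discriminator(fake, v) # D's verdict: a single probability
```

Training, described below, is then just a matter of nudging W and v (in reality, full network weights) in opposite directions of the same objective.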
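The equilibrium value of 0.5 can be made precise. A standard result from the original paper (stated here without derivation): for a fixed generator whose samples follow a distribution $p_g$, the optimal discriminator is

$$D^{*}(x) = \frac{p_{data}(x)}{p_{data}(x) + p_g(x)}$$

so once $p_g = p_{data}$, $D^{*}(x) = 1/2$ everywhere, which is exactly the "cannot tell anymore" state described above.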
Understanding it further
Knowing roughly what a GAN is for and how it is designed, the natural next question is how to solve this adversarial problem mathematically, which comes down to how to train such a generative adversarial model. Again, a figure explains it most directly:
Note that the generative model and the discriminative model are two completely independent models, much like two entirely separate neural networks with no connection between them.
The overall recipe for training these two models is: separate, alternating iterative training.
What does that mean? Since there are two networks that cannot easily be trained jointly, we train them in alternation. Let us walk through it step by step.
Suppose we already have a generator network (not necessarily a good one yet). Feeding it a batch of random vectors then yields a batch of fake samples. Since this is not the final generator, it is probably at a disadvantage for now, and its samples may be poor enough for the discriminator to spot immediately as forgeries; never mind that for the moment. We thus have a fake sample set, and we have always had the real sample set. Now we assign labels by hand: since we want the discriminator to output close to 1 on real samples and close to 0 on fakes, we simply declare every real sample's class label to be 1 and every fake sample's label to be 0.

Someone might object: within the real set, Zhang San's face differs from Li Si's face. The answer is to remember what the current task is: we are separating real from fake, not deciding which real sample is labelled Zhang San and which Li Si. Besides, as noted earlier, we do not have identity labels for the real samples anyway.

Back to the procedure: we now have real samples with labels (all 1) and fake samples with labels (all 0). As far as the discriminator alone is concerned, this has become an utterly simple supervised binary classification problem, so we just feed it into the network and train. Suppose that training is done; now consider the generator.
For the generator, recall the goal: to produce samples as realistic as possible. How do you know whether the raw generator's samples are realistic? You send them to the discriminator. So to train the generator we must bring in the discriminator. Why? If we used the generator alone, how would we train it, and where would the error signal come from? Think it through: there is none. But if we chain the discriminator we just trained onto the end of the generator, we know real from fake and thus have an error. Training the generator is therefore really training the chained generator-discriminator network, as the figure shows.

Now consider the samples. We have the original noise vectors Z, hence the fake samples they generate, and here comes the crucial point: we set the labels of all these fake samples to 1; that is, during generator training we pretend the fakes are real. Why? Because only then does the training objective fool the discriminator, and only then do the generated fakes gradually move toward the positive (real) class.

To recap: for generator training we have a sample set (only fakes, no reals) and the corresponding labels (all 1), so we can train. Someone asks: with only one class of samples, what is there to train? Who says one class cannot be trained? An error signal is all we need. Someone else asks: won't the discriminator's parameters also change during this training? Exactly, and this is the key point: when training the chained network, one essential operation is to keep the discriminator's parameters from changing, i.e., disable their updates, and only propagate the error all the way back to the generator part, updating the generator's parameters there. That completes one round of generator training.
Once the generator is trained, we can use the new generator to turn the same noise Z into a new batch of fakes, and after training these fakes should indeed look more realistic. With a new "real and fake" pair of sample sets (really just a new fake set), we can repeat the whole process. This is what we call separate, alternating training. In practice we define an iteration count in advance and stop after alternating that many times. If we then look at the samples generated from the noise Z, we will find they have become quite realistic.
Having seen the whole procedure, don't you find the design of GANs genuinely ingenious? To me the most praiseworthy idea is how the fake samples flip between "fake" and "real" labels during training; that flip is what keeps the game going.
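The alternating recipe above can be sketched end to end on a toy 1D problem. This is a minimal illustration under assumptions of my own, not the MATLAB experiment below: the "real" data, the one-parameter-per-weight affine generator G(z) = a·z + b, the logistic discriminator D(x) = sigmoid(w·x + c), and the hand-derived gradients are all hypothetical stand-ins for the networks in the figure.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

a, b = 1.0, 0.0   # generator G(z) = a*z + b (toy model)
w, c = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(5000):
    # --- D step: real samples labelled 1, current fakes labelled 0 ---
    x_real = rng.normal(3.0, 0.5, batch)          # "real" data ~ N(3, 0.5)
    x_fake = a * rng.normal(0.0, 1.0, batch) + b  # fakes from the current G
    # gradient of binary cross-entropy w.r.t. the logit is sigmoid(s) - label
    g_real = sigmoid(w * x_real + c) - 1.0
    g_fake = sigmoid(w * x_fake + c) - 0.0
    w -= lr * np.mean(g_real * x_real + g_fake * x_fake)
    c -= lr * np.mean(g_real + g_fake)

    # --- G step: fakes relabelled 1; D is frozen (w, c are not updated) ---
    z = rng.normal(0.0, 1.0, batch)
    g_logit = sigmoid(w * (a * z + b) + c) - 1.0  # pretend the label is 1
    a -= lr * np.mean(g_logit * w * z)            # chain rule through frozen D
    b -= lr * np.mean(g_logit * w)

z = rng.normal(0.0, 1.0, 2000)
print(f"fake mean {np.mean(a * z + b):.2f} vs real mean 3.0")
```

Note how the D step uses fakes with label 0 while the G step relabels the fakes as 1 and backpropagates through a frozen D (w and c are read but never updated there), mirroring the two alternating phases described above.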
Going further
The description above should have given most readers a clear picture of the process. Now let us look at the key mathematical formulation in the original paper, starting directly with its objective function:
$$\min_G \max_D V(D,G) = \mathbb{E}_{x\sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z)))]$$
Plainly put, this formula is a minimax optimization problem, corresponding exactly to the two optimization steps described above. It is said that anyone who can see the brilliance of this formula at first glance, with no other background, is a top-tier machine learning expert; a long road lies ahead indeed. It also shows how much meaning this simple formula carries.
Being a minimax optimization, it is not solved in one shot. Matching our earlier analysis, D is optimized first and then G; essentially these are two separate optimization problems, which decompose into the two formulas below:
Optimizing D:
$$\max_D V(D,G) = \mathbb{E}_{x\sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z)))]$$
Optimizing G:
$$\min_G V(D,G) = \mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z)))]$$
Notice that when optimizing D, the discriminator, the generator is not really involved: G(z) simply stands for fake samples already produced. In the first term of D's objective, we want the output for a real input x to be as large as possible, which makes sense because the prediction on real samples should approach 1. For fake samples we want the result as small as possible, i.e., D(G(z)) small, since their label is 0. But then the first term is to be maximized and the second minimized, which looks inconsistent, so the second term is written as 1-D(G(z)), turning it into "larger is better" too; combined, the whole objective is maximized.

Likewise, when optimizing G, the real samples play no part, so the first term is simply dropped. Only fake samples remain, and since we now want their labels to be 1, we want D(G(z)) as large as possible; but to keep the unified 1-D(G(z)) form we instead minimize 1-D(G(z)), which is equivalent and purely a matter of notational consistency. Merging the two optimization problems then gives back the minimax objective we started with.

Looking back at this minimax objective, it contains both the discriminator's optimization and the generator's pass-for-real optimization in one place, a beautiful statement of an elegant theory.
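One practical detail worth flagging, since the experiment below implicitly uses it by labelling fakes as 1: in practice G is often trained by maximizing $\log D(G(z))$ rather than minimizing $\log(1-D(G(z)))$. Early in training, when D confidently rejects the fakes, $\log(1-D(G(z)))$ saturates and gives tiny gradients; the original paper itself suggests this "non-saturating" alternative:

$$\max_G \; \mathbb{E}_{z\sim p_z(z)}\big[\log D(G(z))\big] \quad\text{in place of}\quad \min_G \; \mathbb{E}_{z\sim p_z(z)}\big[\log\big(1-D(G(z))\big)\big]$$

Both push $D(G(z))$ toward 1; they differ only in where the gradient is strong.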
Going further still
People say one strength of GANs is that they automatically learn the distribution of the real data, no matter how complex that distribution is, as long as training goes well. It is worth understanding why people say this.

In traditional machine learning we usually prescribe some model for the data to fit. For example, if we assume the data are Gaussian but with unknown parameters, we posit a Gaussian and use the data to learn its parameters to obtain the final model. Or we define a classifier such as an SVM and force the data through all kinds of high-dimensional mappings until it becomes simple enough for the SVM to separate with ease; the SVM relaxes the mapping somewhat, but it still prescribes a model, namely the kernel map (radial basis functions and so on). In effect it is as if you knew in advance how the data should be mapped, with only the kernel's parameters left to learn. All of these methods directly or indirectly tell the data how it should be mapped; they differ only in the expressiveness of the mapping.

Now look at GANs. The generator can turn noise into a complete realistic data point (say, a face), which means it has captured the mapping from random noise to the distribution of face data; with that rule in hand, generating faces is easy. Did we know that rule beforehand? Clearly not: nobody could write down what distribution connects random noise to faces. It is an extremely complex distribution mapping composed layer by layer, yet the GAN mechanism can learn it. In that sense, a GAN learns the real data distribution.

Another figure from the original paper illustrates this:
This figure shows how the GAN's generator step by step learns to map a uniform distribution onto a normal distribution. The original data x follow a normal distribution; nobody told the generator to use a normal distribution, yet it learned one. Change the distribution of x to anything you like, and the generator can presumably still learn it. This is the power of a GAN's automatic learning of the real data distribution.

Others say a strength of GANs is that they implicitly define a loss function automatically. What this means is that the discriminator learns a good way to judge samples, which is effectively equivalent to learning a good loss function for telling good results from bad. The outer loss function is still human-defined, and the standard choice suffices for most GANs, but the loss the discriminator implicitly learns is hidden inside the network and differs from problem to problem; in that sense the latent loss function is learned automatically.
A small experiment
In this section we experiment with generating MNIST images from random vectors. The MNIST handwritten-digit database should be familiar to everyone. We use MATLAB for a simple implementation so the whole process is easy to follow, with the help of a toolbox:
DeepLearnToolbox; other usage notes for this toolbox are available elsewhere.
The network structure is simple, defined as follows:
Add the toolbox to the path, then run the following code:
clc
clear
%% Build the real training set: 60000 samples, each 1*784 (28*28 flattened)
load mnist_uint8;
train_x = double(train_x(1:60000,:)) / 255;
% Real samples get label 1; generated samples get label 0
train_y = double(ones(size(train_x,1),1));
% normalize
train_x = mapminmax(train_x, 0, 1);
rand('state',0)
%% Build the generator's noise inputs: 60000 samples, each 1*100
test_x = normrnd(0,1,[60000,100]); % standard Gaussian noise
test_x = mapminmax(test_x, 0, 1);
test_y = double(zeros(size(test_x,1),1));    % label 0: fake, used when training D
test_y_rel = double(ones(size(test_x,1),1)); % label 1: used when training G
%%
nn_G_t = nnsetup([100 784]);
nn_G_t.activation_function = 'sigm';
nn_G_t.output = 'sigm';

nn_D = nnsetup([784 100 1]);
nn_D.weightPenaltyL2 = 1e-4;  % L2 weight decay
nn_D.dropoutFraction = 0.5;   % Dropout fraction
nn_D.learningRate = 0.01;     % sigm requires a lower learning rate
nn_D.activation_function = 'sigm';
nn_D.output = 'sigm';

nn_G = nnsetup([100 784 100 1]);
nn_G.weightPenaltyL2 = 1e-4;  % L2 weight decay
nn_G.dropoutFraction = 0.5;   % Dropout fraction
nn_G.learningRate = 0.01;     % sigm requires a lower learning rate
nn_G.activation_function = 'sigm';
nn_G.output = 'sigm';

opts.numepochs = 1;   % Number of full sweeps through data
opts.batchsize = 100; % Take a mean gradient step over this many samples
%%
num = 1000;
tic
for each = 1:1500
    %---------- compute G's output: the fake samples ----------
    for i = 1:length(nn_G_t.W)  % copy the shared generator weights
        nn_G_t.W{i} = nn_G.W{i};
    end
    G_output = nn_G_out(nn_G_t, test_x);
    %---------- train D ----------
    index = randperm(60000);
    train_data_D = [train_x(index(1:num),:);G_output(index(1:num),:)];
    train_y_D = [train_y(index(1:num),:);test_y(index(1:num),:)];
    nn_D = nntrain(nn_D, train_data_D, train_y_D, opts); % train D
    %---------- train G ----------
    for i = 1:length(nn_D.W)  % copy the freshly trained D weights into the tail of G
        nn_G.W{length(nn_G.W)-i+1} = nn_D.W{length(nn_D.W)-i+1};
    end
    % train G: the fake samples are now labelled 1, i.e. treated as real
    nn_G = nntrain(nn_G, test_x(index(1:num),:), test_y_rel(index(1:num),:), opts);
end
toc
for i = 1:length(nn_G_t.W)
    nn_G_t.W{i} = nn_G.W{i};
end
fin_output = nn_G_out(nn_G_t, test_x);
The function nn_G_out is:
function output = nn_G_out(nn, x)
    nn.testing = 1;
    nn = nnff(nn, x, zeros(size(x,1), nn.size(end)));
    nn.testing = 0;
    output = nn.a{end};
end
Looking at this extremely simple program, the part most worth noting is the alternating training in the middle, which I have laid out in three steps:
- Recompute the fake samples (the fakes must be refreshed each round, producing ever more realistic samples);
- Train the D network, a binary-classification neural network;
- Train the G network, a long chained network that is also a binary classifier (though trained only on fake samples), while keeping the D part's parameters fixed during this step.
Tune the parameters a bit and the final output lands in fin_output. Running it several times shows the results at different numbers of runs:
As you can see, the results do look passably like digits.
Experiment summary
Running this simple network, I noticed a few issues:
- The network can fail to converge; it is unstable and hard to train. Reading the original paper, the authors mention these problems too, and when GANs first appeared many people worked on solving them; it is quite interesting to run into them in your own experiments. How do they show up? For example, in one generation the training error may be tiny, then in the next it suddenly shoots up violently, and a few generations later it is tiny again: severe oscillation.
- The network needs tuning before it produces decent results. Different alternation schedules give different results; for instance, training the D network twice and the G network once per generation changes the outcome.
- This is a simple unconditional GAN, so each training run can only produce one kind of result, a single digit among 0-9. To get several kinds of output within one run, you need a conditional GAN.
Final thoughts
GANs have now blossomed into countless variants with many applications; understand the underlying principle first, then expand upward. GANs remain a remarkable idea: they shift existing problems from supervised learning toward unsupervised learning, and unsupervised learning is what prevails in nature, since supervision signals are often simply unavailable. No wonder Yann LeCun praised GANs as the most interesting idea in machine learning in the last ten years.
Bonus
A video walkthrough of this section is available at: http://www.mooc.ai/open/course/301
from: https://deephunt.in/the-gan-zoo-79597dc8c347
Every week, new papers on Generative Adversarial Networks (GAN) are coming out and it’s hard to keep track of them all, not to mention the incredibly creative ways in which researchers are naming these GANs! You can read more about GANs in this Generative Models post by OpenAI or this overview tutorial in KDNuggets.
Explosive growth — All the named GAN variants cumulatively since 2014. Credit: Bruno Gavranović
So, here’s the current and frequently updated list, from what started as a fun activity compiling all named GANs in this format: Name and Source Paper linked to arXiv. Last updated on Feb 23, 2018.
- 3D-ED-GAN — Shape Inpainting using 3D Generative Adversarial Network and Recurrent Convolutional Networks
- 3D-GAN — Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling (github)
- 3D-IWGAN — Improved Adversarial Systems for 3D Object Generation and Reconstruction (github)
- 3D-PhysNet — 3D-PhysNet: Learning the Intuitive Physics of Non-Rigid Object Deformations
- 3D-RecGAN — 3D Object Reconstruction from a Single Depth View with Adversarial Learning (github)
- ABC-GAN — ABC-GAN: Adaptive Blur and Control for improved training stability of Generative Adversarial Networks (github)
- ABC-GAN — GANs for LIFE: Generative Adversarial Networks for Likelihood Free Inference
- AC-GAN — Conditional Image Synthesis With Auxiliary Classifier GANs
- acGAN — Face Aging With Conditional Generative Adversarial Networks
- ACGAN — Coverless Information Hiding Based on Generative adversarial networks
- ACtuAL — ACtuAL: Actor-Critic Under Adversarial Learning
- AdaGAN — AdaGAN: Boosting Generative Models
- Adaptive GAN — Customizing an Adversarial Example Generator with Class-Conditional GANs
- AdvEntuRe — AdvEntuRe: Adversarial Training for Textual Entailment with Knowledge-Guided Examples
- AdvGAN — Generating adversarial examples with adversarial networks
- AE-GAN — AE-GAN: adversarial eliminating with GAN
- AEGAN — Learning Inverse Mapping by Autoencoder based Generative Adversarial Nets
- AF-DCGAN — AF-DCGAN: Amplitude Feature Deep Convolutional GAN for Fingerprint Construction in Indoor Localization System
- AffGAN — Amortised MAP Inference for Image Super-resolution
- AL-CGAN — Learning to Generate Images of Outdoor Scenes from Attributes and Semantic Layouts
- ALI — Adversarially Learned Inference (github)
- AlignGAN — AlignGAN: Learning to Align Cross-Domain Images with Conditional Generative Adversarial Networks
- AlphaGAN — AlphaGAN: Generative adversarial networks for natural image matting
- AM-GAN — Activation Maximization Generative Adversarial Nets
- AmbientGAN — AmbientGAN: Generative models from lossy measurements (github)
- AMC-GAN — Video Prediction with Appearance and Motion Conditions
- AnoGAN — Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery
- APD — Adversarial Distillation of Bayesian Neural Network Posteriors
- APE-GAN — APE-GAN: Adversarial Perturbation Elimination with GAN
- ARAE — Adversarially Regularized Autoencoders for Generating Discrete Structures (github)
- ARDA — Adversarial Representation Learning for Domain Adaptation
- ARIGAN — ARIGAN: Synthetic Arabidopsis Plants using Generative Adversarial Network
- ArtGAN — ArtGAN: Artwork Synthesis with Conditional Categorial GANs
- ASDL-GAN — Automatic Steganographic Distortion Learning Using a Generative Adversarial Network
- ATA-GAN — Attention-Aware Generative Adversarial Networks (ATA-GANs)
- Attention-GAN — Attention-GAN for Object Transfiguration in Wild Images
- AttGAN — Arbitrary Facial Attribute Editing: Only Change What You Want (github)
- AttnGAN — AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks (github)
- AVID — AVID: Adversarial Visual Irregularity Detection
- B-DCGAN — B-DCGAN:Evaluation of Binarized DCGAN for FPGA
- b-GAN — Generative Adversarial Nets from a Density Ratio Estimation Perspective
- BAGAN — BAGAN: Data Augmentation with Balancing GAN
- Bayesian GAN — Deep and Hierarchical Implicit Models
- Bayesian GAN — Bayesian GAN (github)
- BCGAN — Bayesian Conditional Generative Adverserial Networks
- BCGAN — Bidirectional Conditional Generative Adversarial networks
- BEAM — Boltzmann Encoded Adversarial Machines
- BEGAN — BEGAN: Boundary Equilibrium Generative Adversarial Networks
- BGAN — Binary Generative Adversarial Networks for Image Retrieval (github)
- BicycleGAN — Toward Multimodal Image-to-Image Translation (github)
- BiGAN — Adversarial Feature Learning
- BinGAN — BinGAN: Learning Compact Binary Descriptors with a Regularized GAN
- BourGAN — BourGAN: Generative Networks with Metric Embeddings
- BranchGAN — Branched Generative Adversarial Networks for Multi-Scale Image Manifold Learning
- BRE — Improving GAN Training via Binarized Representation Entropy (BRE) Regularization (github)
- BS-GAN — Boundary-Seeking Generative Adversarial Networks
- BWGAN — Banach Wasserstein GAN
- C-GAN — Face Aging with Contextual Generative Adversarial Nets
- C-RNN-GAN — C-RNN-GAN: Continuous recurrent neural networks with adversarial training (github)
- CA-GAN — Composition-aided Sketch-realistic Portrait Generation
- CaloGAN — CaloGAN: Simulating 3D High Energy Particle Showers in Multi-Layer Electromagnetic Calorimeters with Generative Adversarial Networks (github)
- CAN — CAN: Creative Adversarial Networks, Generating Art by Learning About Styles and Deviating from Style Norms
- CapsGAN — CapsGAN: Using Dynamic Routing for Generative Adversarial Networks
- CapsuleGAN — CapsuleGAN: Generative Adversarial Capsule Network
- CatGAN — Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks
- CatGAN — CatGAN: Coupled Adversarial Transfer for Domain Generation
- CausalGAN — CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training
- CC-GAN — Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks (github)
- cd-GAN — Conditional Image-to-Image Translation
- CDcGAN — Simultaneously Color-Depth Super-Resolution with Conditional Generative Adversarial Network
- CE-GAN — Deep Learning for Imbalance Data Classification using Class Expert Generative Adversarial Network
- CFG-GAN — Composite Functional Gradient Learning of Generative Adversarial Models
- CGAN — Conditional Generative Adversarial Nets
- CGAN — Controllable Generative Adversarial Network
- Chekhov GAN — An Online Learning Approach to Generative Adversarial Networks
- ciGAN — Conditional Infilling GANs for Data Augmentation in Mammogram Classification
- CipherGAN — Unsupervised Cipher Cracking Using Discrete GANs
- CM-GAN — CM-GANs: Cross-modal Generative Adversarial Networks for Common Representation Learning
- CoAtt-GAN — Are You Talking to Me? Reasoned Visual Dialog Generation through Adversarial Learning
- CoGAN — Coupled Generative Adversarial Networks
- ComboGAN — ComboGAN: Unrestrained Scalability for Image Domain Translation (github)
- ConceptGAN — Learning Compositional Visual Concepts with Mutual Consistency
- Conditional cycleGAN — Conditional CycleGAN for Attribute Guided Face Image Generation
- constrast-GAN — Generative Semantic Manipulation with Contrasting GAN
- Context-RNN-GAN — Contextual RNN-GANs for Abstract Reasoning Diagram Generation
- CorrGAN — Correlated discrete data generation using adversarial training
- Coulomb GAN — Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields
- Cover-GAN — Generative Steganography with Kerckhoffs’ Principle based on Generative Adversarial Networks
- cowboy — Defending Against Adversarial Attacks by Leveraging an Entire GAN
- CR-GAN — CR-GAN: Learning Complete Representations for Multi-view Generation
- Cramèr GAN — The Cramer Distance as a Solution to Biased Wasserstein Gradients
- Cross-GAN — Crossing Generative Adversarial Networks for Cross-View Person Re-identification
- crVAE-GAN — Channel-Recurrent Variational Autoencoders
- CS-GAN — Improving Neural Machine Translation with Conditional Sequence Generative Adversarial Nets
- CSG — Speech-Driven Expressive Talking Lips with Conditional Sequential Generative Adversarial Networks
- CT-GAN — CT-GAN: Conditional Transformation Generative Adversarial Network for Image Attribute Modification
- CVAE-GAN — CVAE-GAN: Fine-Grained Image Generation through Asymmetric Training
- CycleGAN — Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks (github)
- D-GAN — Differential Generative Adversarial Networks: Synthesizing Non-linear Facial Variations with Limited Number of Training Data
- D-WCGAN — I-vector Transformation Using Conditional Generative Adversarial Networks for Short Utterance Speaker Verification
- D2GAN — Dual Discriminator Generative Adversarial Nets
- D2IA-GAN — Tagging like Humans: Diverse and Distinct Image Annotation
- DA-GAN — DA-GAN: Instance-level Image Translation by Deep Attention Generative Adversarial Networks (with Supplementary Materials)
- DAGAN — Data Augmentation Generative Adversarial Networks
- DAN — Distributional Adversarial Networks
- DBLRGAN — Adversarial Spatio-Temporal Learning for Video Deblurring
- DCGAN — Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (github)
- DE-GAN — Generative Adversarial Networks with Decoder-Encoder Output Noise
- DeblurGAN — DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks (github)
- Defense-GAN — Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (github)
- Defo-Net — Defo-Net: Learning Body Deformation using Generative Adversarial Networks
- DeliGAN — DeLiGAN : Generative Adversarial Networks for Diverse and Limited Data (github)
- DF-GAN — Learning Disentangling and Fusing Networks for Face Completion Under Structured Occlusions
- DialogWAE — DialogWAE: Multimodal Response Generation with Conditional Wasserstein Auto-Encoder
- DiscoGAN — Learning to Discover Cross-Domain Relations with Generative Adversarial Networks
- DistanceGAN — One-Sided Unsupervised Domain Mapping
- DM-GAN — Dual Motion GAN for Future-Flow Embedded Video Prediction
- DMGAN — Disconnected Manifold Learning for Generative Adversarial Networks
- DNA-GAN — DNA-GAN: Learning Disentangled Representations from Multi-Attribute Images
- dp-GAN — Differentially Private Releasing via Deep Generative Model
- DP-GAN — DP-GAN: Diversity-Promoting Generative Adversarial Network for Generating Informative and Diversified Text
- DPGAN — Differentially Private Generative Adversarial Network
- DR-GAN — Representation Learning by Rotating Your Faces
- DRAGAN — How to Train Your DRAGAN (github)
- DRPAN — Discriminative Region Proposal Adversarial Networks for High-Quality Image-to-Image Translation
- DSH-GAN — Deep Semantic Hashing with Generative Adversarial Networks
- DSP-GAN — Depth Structure Preserving Scene Image Generation
- DTLC-GAN — Generative Adversarial Image Synthesis with Decision Tree Latent Controller
- DTN — Unsupervised Cross-Domain Image Generation
- DTR-GAN — DTR-GAN: Dilated Temporal Relational Adversarial Network for Video Summarization
- DualGAN — DualGAN: Unsupervised Dual Learning for Image-to-Image Translation
- Dualing GAN — Dualing GANs
- DVGAN — Human Motion Modeling using DVGANs
- Dynamics Transfer GAN — Dynamics Transfer GAN: Generating Video by Transferring Arbitrary Temporal Dynamics from a Source Video to a Single Target Image
- E-GAN — Evolutionary Generative Adversarial Networks
- EAR — Generative Model for Heterogeneous Inference
- EBGAN — Energy-based Generative Adversarial Network
- ecGAN — eCommerceGAN : A Generative Adversarial Network for E-commerce
- ED//GAN — Stabilizing Training of Generative Adversarial Networks through Regularization
- Editable GAN — Editable Generative Adversarial Networks: Generating and Editing Faces Simultaneously
- EGAN — Enhanced Experience Replay Generation for Efficient Reinforcement Learning
- EL-GAN — EL-GAN: Embedding Loss Driven Generative Adversarial Networks for Lane Detection
- ELEGANT — ELEGANT: Exchanging Latent Encodings with GAN for Transferring Multiple Face Attributes
- EnergyWGAN — Energy-relaxed Wassertein GANs (EnergyWGAN): Towards More Stable and High Resolution Image Generation
- ExGAN — Eye In-Painting with Exemplar Generative Adversarial Networks
- ExposureGAN — Exposure: A White-Box Photo Post-Processing Framework (github)
- ExprGAN — ExprGAN: Facial Expression Editing with Controllable Expression Intensity
- f-CLSWGAN — Feature Generating Networks for Zero-Shot Learning
- f-GAN — f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization
- FairGAN — FairGAN: Fairness-aware Generative Adversarial Networks
- Fairness GAN — Fairness GAN
- FakeGAN — Detecting Deceptive Reviews using Generative Adversarial Networks
- FBGAN — Feedback GAN (FBGAN) for DNA: a Novel Feedback-Loop Architecture for Optimizing Protein Functions
- FBGAN — Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference
- FC-GAN — Fast-converging Conditional Generative Adversarial Networks for Image Synthesis
- FF-GAN — Towards Large-Pose Face Frontalization in the Wild
- FGGAN — Adversarial Learning for Fine-grained Image Search
- Fictitious GAN — Fictitious GAN: Training GANs with Historical Models
- FIGAN — Frame Interpolation with Multi-Scale Deep Loss Functions and Generative Adversarial Networks
- Fila-GAN — Synthesizing Filamentary Structured Images with GANs
- First Order GAN — First Order Generative Adversarial Networks (github)
- Fisher GAN — Fisher GAN
- Flow-GAN — Flow-GAN: Bridging implicit and prescribed learning in generative models
- FrankenGAN — FrankenGAN: Guided Detail Synthesis for Building Mass-Models Using Style-Synchonized GANs
- FSEGAN — Exploring Speech Enhancement with Generative Adversarial Networks for Robust Speech Recognition
- FTGAN — Hierarchical Video Generation from Orthogonal Information: Optical Flow and Texture
- FusedGAN — Semi-supervised FusedGAN for Conditional Image Generation
- FusionGAN — Learning to Fuse Music Genres with Generative Adversarial Dual Learning
- FusionGAN — Generating a Fusion Image: One’s Identity and Another’s Shape
- G2-GAN — Geometry Guided Adversarial Facial Expression Synthesis
- GAAN — Generative Adversarial Autoencoder Networks
- GAF — Generative Adversarial Forests for Better Conditioned Adversarial Learning
- GAGAN — GAGAN: Geometry-Aware Generative Adverserial Networks
- GAIA — Generative adversarial interpolative autoencoding: adversarial training on latent space interpolations encourage convex latent distributions
- GAIN — GAIN: Missing Data Imputation using Generative Adversarial Nets
- GAMN — Generative Adversarial Mapping Networks
- GAN — Generative Adversarial Networks (github)
- GAN Q-learning — GAN Q-learning
- GAN-ATV — A Novel Approach to Artistic Textual Visualization via GAN
- GAN-CLS — Generative Adversarial Text to Image Synthesis (github)
- GAN-RS — Towards Qualitative Advancement of Underwater Machine Vision with Generative Adversarial Networks
- GAN-SD — Virtual-Taobao: Virtualizing Real-world Online Retail Environment for Reinforcement Learning
- GAN-sep — GANs for Biological Image Synthesis (github)
- GAN-VFS — Generative Adversarial Network-based Synthesis of Visible Faces from Polarimetric Thermal Faces
- GAN-Word2Vec — Adversarial Training of Word2Vec for Basket Completion
- GANAX — GANAX: A Unified MIMD-SIMD Acceleration for Generative Adversarial Networks
- GANCS — Deep Generative Adversarial Networks for Compressed Sensing Automates MRI
- GANDI — Guiding the search in continuous state-action spaces by learning an action sampling distribution from off-target samples
- GANG — GANGs: Generative Adversarial Network Games
- GANG — Beyond Local Nash Equilibria for Adversarial Networks
- GANosaic — GANosaic: Mosaic Creation with Generative Texture Manifolds
- GAP — Context-Aware Generative Adversarial Privacy
- GAP — Generative Adversarial Privacy
- GATS — Sample-Efficient Deep RL with Generative Adversarial Tree Search
- GAWWN — Learning What and Where to Draw (github)
- GC-GAN — Geometry-Contrastive Generative Adversarial Network for Facial Expression Synthesis
- GeneGAN — GeneGAN: Learning Object Transfiguration and Attribute Subspace from Unpaired Data (github)
- GeoGAN — Generating Instance Segmentation Annotation by Geometry-guided GAN
- Geometric GAN — Geometric GAN
- GLCA-GAN — Global and Local Consistent Age Generative Adversarial Networks
- GMAN — Generative Multi-Adversarial Networks
- GMM-GAN — Towards Understanding the Dynamics of Generative Adversarial Networks
- GoGAN — Gang of GANs: Generative Adversarial Networks with Maximum Margin Ranking
- GONet — GONet: A Semi-Supervised Deep Learning Approach For Traversability Estimation
- GP-GAN — GP-GAN: Towards Realistic High-Resolution Image Blending(github)
- GP-GAN — GP-GAN: Gender Preserving GAN for Synthesizing Faces from Landmarks
- GPU — A generative adversarial framework for positive-unlabeled classification
- GRAN — Generating images with recurrent adversarial networks (github)
- Graphical-GAN — Graphical Generative Adversarial Networks
- GraspGAN — Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping
- GT-GAN — Deep Graph Translation
- HAN — Chinese Typeface Transformation with Hierarchical Adversarial Network
- HAN — Bidirectional Learning for Robust Neural Networks
- HiGAN — Exploiting Images for Video Recognition with Hierarchical Generative Adversarial Networks
- HP-GAN — HP-GAN: Probabilistic 3D human motion prediction via GAN
- HR-DCGAN — High-Resolution Deep Convolutional Generative Adversarial Networks
- hredGAN — Multi-turn Dialogue Response Generation in an Adversarial Learning framework
- IAN — Neural Photo Editing with Introspective Adversarial Networks (github)
- IcGAN — Invertible Conditional GANs for image editing (github)
- ID-CGAN — Image De-raining Using a Conditional Generative Adversarial Network
- IdCycleGAN — Face Translation between Images and Videos using Identity-aware CycleGAN
- IFcVAEGAN — Conditional Autoencoders with Adversarial Information Factorization
- iGAN — Generative Visual Manipulation on the Natural Image Manifold (github)
- Improved GAN — Improved Techniques for Training GANs (github)
- In2I — In2I : Unsupervised Multi-Image-to-Image Translation Using Generative Adversarial Networks
- InfoGAN — InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets (github)
- IntroVAE — IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis
- IR2VI — IR2VI: Enhanced Night Environmental Perception by Unsupervised Thermal Image Translation
- IRGAN — IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval models
- IRGAN — Generative Adversarial Nets for Information Retrieval: Fundamentals and Advances
- ISGAN — Invisible Steganography via Generative Adversarial Network
- Iterative-GAN — Two Birds with One Stone: Iteratively Learn Facial Attributes with GANs (github)
- IterGAN — IterGANs: Iterative GANs to Learn and Control 3D Object Transformation
- IVE-GAN — IVE-GAN: Invariant Encoding Generative Adversarial Networks
- iVGAN — Towards an Understanding of Our World by GANing Videos in the Wild (github)
- IWGAN — On Unifying Deep Generative Models
- JointGAN — JointGAN: Multi-Domain Joint Distribution Learning with Generative Adversarial Nets
- JR-GAN — JR-GAN: Jacobian Regularization for Generative Adversarial Networks
- KBGAN — KBGAN: Adversarial Learning for Knowledge Graph Embeddings
- KGAN — KGAN: How to Break The Minimax Game in GAN
- l-GAN — Representation Learning and Adversarial Generation of 3D Point Clouds
- LAC-GAN — Grounded Language Understanding for Manipulation Instructions Using GAN-Based Classification
- LAGAN — Learning Particle Physics by Example: Location-Aware Generative Adversarial Networks for Physics Synthesis
- LAPGAN — Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks (github)
- LB-GAN — Load Balanced GANs for Multi-view Face Image Synthesis
- LBT — Learning Implicit Generative Models by Teaching Explicit Ones
- LCC-GAN — Adversarial Learning with Local Coordinate Coding
- LD-GAN — Linear Discriminant Generative Adversarial Networks
- LDAN — Label Denoising Adversarial Network (LDAN) for Inverse Lighting of Face Images
- LeakGAN — Long Text Generation via Adversarial Training with Leaked Information
- LeGAN — Likelihood Estimation for Generative Adversarial Networks
- LGAN — Global versus Localized Generative Adversarial Nets
- Lipizzaner — Towards Distributed Coevolutionary GANs
- LR-GAN — LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation
- LS-GAN — Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities
- LSGAN — Least Squares Generative Adversarial Networks
- M-AAE — Mask-aware Photorealistic Face Attribute Manipulation
- MAD-GAN — Multi-Agent Diverse Generative Adversarial Networks
- MAGAN — MAGAN: Margin Adaptation for Generative Adversarial Networks
- MAGAN — MAGAN: Aligning Biological Manifolds
- MalGAN — Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN
- MaliGAN — Maximum-Likelihood Augmented Discrete Generative Adversarial Networks
- manifold-WGAN — Manifold-valued Image Generation with Wasserstein Adversarial Networks
- MARTA-GAN — Deep Unsupervised Representation Learning for Remote Sensing Images
- MaskGAN — MaskGAN: Better Text Generation via Filling in the ______
- MC-GAN — Multi-Content GAN for Few-Shot Font Style Transfer (github)
- MC-GAN — MC-GAN: Multi-conditional Generative Adversarial Network for Image Synthesis
- McGAN — McGan: Mean and Covariance Feature Matching GAN
- MD-GAN — Learning to Generate Time-Lapse Videos Using Multi-Stage Dynamic Generative Adversarial Networks
- MDGAN — Mode Regularized Generative Adversarial Networks
- MedGAN — Generating Multi-label Discrete Electronic Health Records using Generative Adversarial Networks
- MedGAN — MedGAN: Medical Image Translation using GANs
- MEGAN — MEGAN: Mixture of Experts of Generative Adversarial Networks for Multimodal Image Generation
- MelanoGAN — MelanoGANs: High Resolution Skin Lesion Synthesis with GANs
- memoryGAN — Memorization Precedes Generation: Learning Unsupervised GANs with Memory Networks
- MGAN — Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks (github)
- MGGAN — Multi-Generator Generative Adversarial Nets
- MGGAN — MGGAN: Solving Mode Collapse using Manifold Guided Training
- MIL-GAN — Multimodal Storytelling via Generative Adversarial Imitation Learning
- MIX+GAN — Generalization and Equilibrium in Generative Adversarial Nets (GANs)
- MIXGAN — MIXGAN: Learning Concepts from Different Domains for Mixture Generation
- MLGAN — Metric Learning-based Generative Adversarial Network
- MMC-GAN — A Multimodal Classifier Generative Adversarial Network for Carry and Place Tasks from Ambiguous Language Instructions
- MMD-GAN — MMD GAN: Towards Deeper Understanding of Moment Matching Network (github)
- MMGAN — MMGAN: Manifold Matching Generative Adversarial Network for Generating Images
- MoCoGAN — MoCoGAN: Decomposing Motion and Content for Video Generation (github)
- Modified GAN-CLS — Generate the corresponding Image from Text Description using Modified GAN-CLS Algorithm
- ModularGAN — Modular Generative Adversarial Networks
- MolGAN — MolGAN: An implicit generative model for small molecular graphs
- MPM-GAN — Message Passing Multi-Agent GANs
- MS-GAN — Temporal Coherency based Criteria for Predicting Video Frames using Deep Multi-stage Generative Adversarial Networks
- MTGAN — MTGAN: Speaker Verification through Multitasking Triplet Generative Adversarial Networks
- MuseGAN — MuseGAN: Symbolic-domain Music Generation and Accompaniment with Multi-track Sequential Generative Adversarial Networks
- MV-BiGAN — Multi-view Generative Adversarial Networks
- N2RPP — N2RPP: An Adversarial Network to Rebuild Plantar Pressure for ACLD Patients
- NAN — Understanding Humans in Crowded Scenes: Deep Nested Adversarial Learning and A New Benchmark for Multi-Human Parsing
- NCE-GAN — Dihedral angle prediction using generative adversarial networks
- ND-GAN — Novelty Detection with GAN
- NetGAN — NetGAN: Generating Graphs via Random Walks
- OCAN — One-Class Adversarial Nets for Fraud Detection
- OptionGAN — OptionGAN: Learning Joint Reward-Policy Options using Generative Adversarial Inverse Reinforcement Learning
- ORGAN — Objective-Reinforced Generative Adversarial Networks (ORGAN) for Sequence Generation Models
- ORGAN — 3D Reconstruction of Incomplete Archaeological Objects Using a Generative Adversarial Network
- OT-GAN — Improving GANs Using Optimal Transport
- PacGAN — PacGAN: The power of two samples in generative adversarial networks
- PAN — Perceptual Adversarial Networks for Image-to-Image Transformation
- PassGAN — PassGAN: A Deep Learning Approach for Password Guessing
- PD-WGAN — Primal-Dual Wasserstein GAN
- Perceptual GAN — Perceptual Generative Adversarial Networks for Small Object Detection
- PGAN — Probabilistic Generative Adversarial Networks
- PGD-GAN — Solving Linear Inverse Problems Using GAN Priors: An Algorithm with Provable Guarantees
- PGGAN — Patch-Based Image Inpainting with Generative Adversarial Networks
- PIONEER — Pioneer Networks: Progressively Growing Generative Autoencoder
- Pip-GAN — Pipeline Generative Adversarial Networks for Facial Images Generation with Multiple Attributes
- pix2pix — Image-to-Image Translation with Conditional Adversarial Networks (github)
- pix2pixHD — High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs (github)
- PixelGAN — PixelGAN Autoencoders
- PM-GAN — PM-GANs: Discriminative Representation Learning for Action Recognition Using Partial-modalities (github)
- PN-GAN — Pose-Normalized Image Generation for Person Re-identification
- POGAN — Perceptually Optimized Generative Adversarial Network for Single Image Dehazing
- Pose-GAN — The Pose Knows: Video Forecasting by Generating Pose Futures
- PP-GAN — Privacy-Protective-GAN for Face De-identification
- PPAN — Privacy-Preserving Adversarial Networks
- PPGN — Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space
- PrGAN — 3D Shape Induction from 2D Views of Multiple Objects
- ProGanSR — A Fully Progressive Approach to Single-Image Super-Resolution
- Progressive GAN — Progressive Growing of GANs for Improved Quality, Stability, and Variation (github)
- PS-GAN — Pedestrian-Synthesis-GAN: Generating Pedestrian Data in Real Scene and Beyond
- PSGAN — Learning Texture Manifolds with the Periodic Spatial GAN
- PSGAN — PSGAN: A Generative Adversarial Network for Remote Sensing Image Pan-Sharpening
- PS²-GAN — High-Quality Facial Photo-Sketch Synthesis Using Multi-Adversarial Networks
- RadialGAN — RadialGAN: Leveraging multiple datasets to improve target-specific predictive models using Generative Adversarial Networks
- RaGAN — The relativistic discriminator: a key element missing from standard GAN
- RAN — RAN4IQA: Restorative Adversarial Nets for No-Reference Image Quality Assessment (github)
- RankGAN — Adversarial Ranking for Language Generation
- RCGAN — Real-valued (Medical) Time Series Generation with Recurrent Conditional GANs
- ReConNN — Reconstruction of Simulation-Based Physical Field with Limited Samples by Reconstruction Neural Network
- RefineGAN — Compressed Sensing MRI Reconstruction with Cyclic Loss in Generative Adversarial Networks
- ReGAN — ReGAN: RE[LAX|BAR|INFORCE] based Sequence Generation using GANs (github)
- RegCGAN — Unpaired Multi-Domain Image Generation via Regularized Conditional GANs
- RenderGAN — RenderGAN: Generating Realistic Labeled Data
- Resembled GAN — Resembled Generative Adversarial Networks: Two Domains with Similar Attributes
- ResGAN — Generative Adversarial Network based on Resnet for Conditional Image Restoration
- RNN-WGAN — Language Generation with Recurrent Generative Adversarial Networks without Pre-training (github)
- RoCGAN — Robust Conditional Generative Adversarial Networks
- RPGAN — Stabilizing GAN Training with Multiple Random Projections (github)
- RTT-GAN — Recurrent Topic-Transition GAN for Visual Paragraph Generation
- RWGAN — Relaxed Wasserstein with Applications to GANs
- SAD-GAN — SAD-GAN: Synthetic Autonomous Driving using Generative Adversarial Networks
- SAGA — Generative Adversarial Learning for Spectrum Sensing
- SAGAN — Self-Attention Generative Adversarial Networks
- SalGAN — SalGAN: Visual Saliency Prediction with Generative Adversarial Networks (github)
- sAOG — Deep Structured Generative Models
- SAR-GAN — Generating High Quality Visible Images from SAR Images Using CNNs
- SBADA-GAN — From source to target and back: symmetric bi-directional adaptive GAN
- SCH-GAN — SCH-GAN: Semi-supervised Cross-modal Hashing by Generative Adversarial Network
- SD-GAN — Semantically Decomposing the Latent Spaces of Generative Adversarial Networks
- Sdf-GAN — Sdf-GAN: Semi-supervised Depth Fusion with Multi-scale Adversarial Networks
- SEGAN — SEGAN: Speech Enhancement Generative Adversarial Network
- SeGAN — SeGAN: Segmenting and Generating the Invisible
- SegAN — SegAN: Adversarial Network with Multi-scale L1 Loss for Medical Image Segmentation
- Sem-GAN — Sem-GAN: Semantically-Consistent Image-to-Image Translation
- SeqGAN — SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient (github)
- SeUDA — Semantic-Aware Generative Adversarial Nets for Unsupervised Domain Adaptation in Chest X-ray Segmentation
- SG-GAN — Semantic-aware Grad-GAN for Virtual-to-Real Urban Scene Adaption (github)
- SG-GAN — Sparsely Grouped Multi-task Generative Adversarial Networks for Facial Attribute Manipulation
- SGAN — Texture Synthesis with Spatial Generative Adversarial Networks
- SGAN — Stacked Generative Adversarial Networks (github)
- SGAN — Steganographic Generative Adversarial Networks
- SGAN — SGAN: An Alternative Training of Generative Adversarial Networks
- SGAN — CT Image Enhancement Using Stacked Generative Adversarial Networks and Transfer Learning for Lesion Segmentation Improvement
- sGAN — Generative Adversarial Training for MRA Image Synthesis Using Multi-Contrast MRI
- SiGAN — SiGAN: Siamese Generative Adversarial Network for Identity-Preserving Face Hallucination
- SimGAN — Learning from Simulated and Unsupervised Images through Adversarial Training
- SisGAN — Semantic Image Synthesis via Adversarial Learning
- Sketcher-Refiner GAN — Learning Myelin Content in Multiple Sclerosis from Multimodal MRI through Adversarial Training (github)
- SketchGAN — Adversarial Training For Sketch Retrieval
- SketchyGAN — SketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis
- SL-GAN — Semi-Latent GAN: Learning to generate and modify facial images from attributes
- SN-DCGAN — Generative Adversarial Networks for Unsupervised Object Co-localization
- SN-GAN — Spectral Normalization for Generative Adversarial Networks (github)
- SN-PatchGAN — Free-Form Image Inpainting with Gated Convolution
- Sobolev GAN — Sobolev GAN
- Social GAN — Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks
- Softmax GAN — Softmax GAN
- SoPhie — SoPhie: An Attentive GAN for Predicting Paths Compliant to Social and Physical Constraints
- Spike-GAN — Synthesizing realistic neural population activity patterns using Generative Adversarial Networks
- Splitting GAN — Class-Splitting Generative Adversarial Networks
- SR-CNN-VAE-GAN — Semi-Recurrent CNN-based VAE-GAN for Sequential Data Generation (github)
- SRGAN — Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
- SRPGAN — SRPGAN: Perceptual Generative Adversarial Network for Single Image Super Resolution
- SS-GAN — Semi-supervised Conditional GANs
- ss-InfoGAN — Guiding InfoGAN with Semi-Supervision
- SSGAN — SSGAN: Secure Steganography Based on Generative Adversarial Networks
- SSL-GAN — Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks
- ST-CGAN — Stacked Conditional Generative Adversarial Networks for Jointly Learning Shadow Detection and Shadow Removal
- ST-GAN — Style Transfer Generative Adversarial Networks: Learning to Play Chess Differently
- ST-GAN — ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing
- StackGAN — StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks (github)
- StainGAN — StainGAN: Stain Style Transfer for Digital Histological Images
- StarGAN — StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation (github)
- StarGAN-VC — StarGAN-VC: Non-parallel many-to-many voice conversion with star generative adversarial networks
- SteinGAN — Learning Deep Energy Models: Contrastive Divergence vs. Amortized MLE
- Super-FAN — Super-FAN: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with GANs
- SVSGAN — SVSGAN: Singing Voice Separation via Generative Adversarial Network
- SWGAN — Solving Approximate Wasserstein GANs to Stationarity
- SyncGAN — SyncGAN: Synchronize the Latent Space of Cross-modal Generative Adversarial Networks
- S²GAN — Generative Image Modeling using Style and Structure Adversarial Networks
- table-GAN — Data Synthesis based on Generative Adversarial Networks
- TAC-GAN — TAC-GAN — Text Conditioned Auxiliary Classifier Generative Adversarial Network (github)
- TAN — Outline Colorization through Tandem Adversarial Networks
- tcGAN — Cross-modal Hallucination for Few-shot Fine-grained Recognition
- TD-GAN — Task Driven Generative Modeling for Unsupervised Domain Adaptation: Application to X-ray Image Segmentation
- tempCycleGAN — Improving Surgical Training Phantoms by Hyperrealism: Deep Unpaired Image-to-Image Translation from Real Surgeries
- tempoGAN — tempoGAN: A Temporally Coherent, Volumetric GAN for Super-resolution Fluid Flow
- TequilaGAN — TequilaGAN: How to easily identify GAN samples
- Text2Shape — Text2Shape: Generating Shapes from Natural Language by Learning Joint Embeddings
- textGAN — Generating Text via Adversarial Training
- TextureGAN — TextureGAN: Controlling Deep Image Synthesis with Texture Patches
- TGAN — Temporal Generative Adversarial Nets
- TGAN — Tensorizing Generative Adversarial Nets
- TGAN — Tensor-Generative Adversarial Network with Two-dimensional Sparse Coding: Application to Real-time Indoor Localization
- TGANs-C — To Create What You Tell: Generating Videos from Captions
- tiny-GAN — Analysis of Nonautonomous Adversarial Systems
- TP-GAN — Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis
- Triple-GAN — Triple Generative Adversarial Nets
- tripletGAN — TripletGAN: Training Generative Model with Triplet Loss
- TV-GAN — TV-GAN: Generative Adversarial Network Based Thermal to Visible Face Recognition
- UGACH — Unsupervised Generative Adversarial Cross-modal Hashing
- UGAN — Enhancing Underwater Imagery using Generative Adversarial Networks
- Unim2im — Unsupervised Image-to-Image Translation with Generative Adversarial Networks (github)
- UNIT — Unsupervised Image-to-image Translation Networks (github)
- Unrolled GAN — Unrolled Generative Adversarial Networks (github)
- UT-SCA-GAN — Spatial Image Steganography Based on Generative Adversarial Network
- UV-GAN — UV-GAN: Adversarial Facial UV Map Completion for Pose-invariant Face Recognition
- VA-GAN — Visual Feature Attribution using Wasserstein GANs
- VAC+GAN — Versatile Auxiliary Classifier with Generative Adversarial Network (VAC+GAN), Multi Class Scenarios
- VAE-GAN — Autoencoding beyond pixels using a learned similarity metric
- VariGAN — Multi-View Image Generation from a Single-View
- VAW-GAN — Voice Conversion from Unaligned Corpora using Variational Autoencoding Wasserstein Generative Adversarial Networks
- VEEGAN — VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning (github)
- VGAN — Generating Videos with Scene Dynamics (github)
- VGAN — Generative Adversarial Networks as Variational Training of Energy Based Models (github)
- VGAN — Text Generation Based on Generative Adversarial Nets with Latent Variable
- ViGAN — Image Generation and Editing with Variational Info Generative Adversarial Networks
- VIGAN — VIGAN: Missing View Imputation with Generative Adversarial Networks
- VoiceGAN — Voice Impersonation using Generative Adversarial Networks
- VOS-GAN — VOS-GAN: Adversarial Learning of Visual-Temporal Dynamics for Unsupervised Dense Prediction in Videos
- VRAL — Variance Regularizing Adversarial Learning
- WaterGAN — WaterGAN: Unsupervised Generative Network to Enable Real-time Color Correction of Monocular Underwater Images
- WaveGAN — Synthesizing Audio with Generative Adversarial Networks
- weGAN — Generative Adversarial Nets for Multiple Text Corpora
- WGAN — Wasserstein GAN (github)
- WGAN-CLS — Text to Image Synthesis Using Generative Adversarial Networks
- WGAN-GP — Improved Training of Wasserstein GANs (github)
- WGAN-L1 — Subsampled Turbulence Removal Network
- WS-GAN — Weakly Supervised Generative Adversarial Networks for 3D Reconstruction
- XGAN — XGAN: Unsupervised Image-to-Image Translation for many-to-many Mappings
- ZipNet-GAN — ZipNet-GAN: Inferring Fine-grained Mobile Traffic Patterns via a Generative Adversarial Neural Network
- α-GAN — Variational Approaches for Auto-Encoding Generative Adversarial Networks (github)
- β-GAN — Annealed Generative Adversarial Networks
- Δ-GAN — Triangle Generative Adversarial Networks
Visit the Github repository to add more links via pull requests, or create an issue to let me know about something I missed or to start a discussion. Thanks to all the contributors, especially Emanuele Plebani, Lukas Galke, Peter Waller and Bruno Gavranović.