
Paper Notes (NWR): Graph Auto-Encoder via Neighborhood Wasserstein Reconstruction

Paper Information

Title: Graph Auto-Encoder via Neighborhood Wasserstein Reconstruction

Authors: Mingyue Tang, Carl Yang, Pan Li

Venue: ICLR 2022

Paper: download

Code: download

1 Abstract

  The paper proposes a new graph auto-encoder whose encoder is an ordinary GNN encoder (as in a standard GAE), while the decoder performs feature reconstruction, degree reconstruction, and a neighborhood reconstruction based on the 2-Wasserstein distance.

2 Introduction

  

  [Figure 1 from the paper: example graph whose node pairs (0, 1), (2, 4), (3, 5) are discussed below]

  Two typical GAEs:

    • GAE (Kipf & Welling, 2016) simply reconstructs the link structure, so it cannot distinguish node pairs such as (2, 4) and (3, 5);
    • GraphWave (Donnat et al., 2018), a structure-oriented embedding model, ignores node features and spatial proximity, so it cannot distinguish node pairs such as (0, 1), (2, 4), and (3, 5);

  The new framework proposed in this paper, the Neighborhood Wasserstein Reconstruction Graph Auto-Encoder (NWR-GAE), decomposes the reconstruction loss into three parts: node degree, neighborhood representation distribution, and node features.

  The most important of the three is reconstructing the neighborhood distribution: after $k$ hops of message passing, the information encoded in the representation of node $v$ essentially comes from $v$'s $k$-hop neighborhood (Fig. 2). A good representation of node $v$ should therefore capture the feature information of all nodes in its $k$-hop neighborhood, independently of any downstream task.

  

  [Figure 2 from the paper]

Optimal-transport (OT) losses

  Consider the distance between two distributions: when the two distributions have non-overlapping supports, the $f$-divergence family becomes discontinuous or uninformative, as the following example shows.

  Suppose we have two probability distributions, $P$ and $Q$:

    $\forall(x, y) \in P,\quad x=0 \ \text{ and } \ y \sim U(0,1)$

    $\forall(x, y) \in Q,\quad x=\theta,\ 0 \leq \theta \leq 1 \ \text{ and } \ y \sim U(0,1)$

  

  [Figure: illustration of the distributions $P$ and $Q$]

  When $\theta \neq 0$:

    $D_{K L}(P \| Q) =\sum\limits _{x=0, y \sim U(0,1)} 1 \cdot \log \frac{1}{0}=+\infty$

    $D_{K L}(Q \| P) =\sum\limits_{x=0, y \sim U(0,1)} 1 \cdot \log \frac{1}{0}=+\infty$

    $D_{J S}(P, Q) =\frac{1}{2}\left(\sum\limits_{x=0, y \sim U(0,1)} 1 \cdot \log \frac{1}{1 / 2}+\sum\limits_{x=0, y \sim U(0,1)} 1 \cdot \log \frac{1}{1 / 2}\right)=\log 2$

   

    $W(P, Q) =|\theta| $

  But when $\theta = 0$, the two distributions fully overlap:

    $D_{K L}(P \| Q) =D_{K L}(Q \| P)=D_{J S}(P, Q)=0$

    $W(P, Q) =0=|\theta|$

  The 2-Wasserstein distance from optimal transport (OT) does not suffer from this problem and can be used to measure the distance between the two distributions. A commonly used OT loss based on the 2-Wasserstein distance is given below:

  Definition 2.1. Let $\mathcal{P}$, $\mathcal{Q}$ denote two probability distributions with finite second moment defined on $\mathcal{Z} \subseteq \mathbb{R}^{m}$ . The 2-Wasserstein distance between $\mathcal{P}$ and $\mathcal{Q}$ defined on $\mathcal{Z}$, $\mathcal{Z}^{\prime} \subseteq \mathbb{R}^{m}$ is the solution to the optimal mass transportation problem with $\ell_{2}$ transport cost (Villani, 2008):

    $\mathcal{W}_{2}(\mathcal{P}, \mathcal{Q})=\left(\inf _{\gamma \in \Gamma(\mathcal{P}, \mathcal{Q})} \int_{\mathcal{Z} \times \mathcal{Z}^{\prime}}\left\|Z-Z^{\prime}\right\|_{2}^{2} d \gamma\left(Z, Z^{\prime}\right)\right)^{1 / 2}$

  where $\Gamma(\mathcal{P}, \mathcal{Q})$ contains all joint distributions of $\left(Z, Z^{\prime}\right)$ with marginals $\mathcal{P}$ and $\mathcal{Q}$ respectively.
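  As a sanity check of the example above, the 2-Wasserstein distance between two empirical samples can be computed with an exact OT solver. The following is a minimal, self-contained sketch assuming the POT library (`pip install pot`); it is illustrative only and not part of the paper's code.

# Empirically verify that W2(P, Q) is close to |theta| for the example above.
import numpy as np
import ot  # POT: Python Optimal Transport

theta, n = 0.5, 500
rng = np.random.default_rng(0)
P = np.stack([np.zeros(n), rng.uniform(0, 1, n)], axis=1)        # (x = 0,     y ~ U(0,1))
Q = np.stack([np.full(n, theta), rng.uniform(0, 1, n)], axis=1)  # (x = theta, y ~ U(0,1))

M = ot.dist(P, Q)                 # pairwise squared Euclidean cost matrix
w = np.full(n, 1.0 / n)           # uniform weights on the samples
w2 = np.sqrt(ot.emd2(w, w, M))    # exact OT gives squared W2; take the square root
print(w2)                         # close to |theta| = 0.5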

3 Methods

  Notation:

    • Encoder: $\phi$
    • Decoder: $\psi$

  The encoder can be any message-passing GNN; the decoder is split into three parts: $\psi=\left(\psi_{s}, \psi_{p}, \psi_{d}\right)$.

3.1 Neighborhood reconstruction principle

  This subsection first considers only 1-hop neighborhood reconstruction. Let $H^{(0)}$ denote the initial feature matrix $X$. After one layer of the GNN encoder, the representation $h_{v}^{(1)}$ of each node $v \in V$ is computed from $h_{v}^{(0)}$ and its neighbors' representations $H_{\mathcal{N}_{v}}^{(0)}=\left\{h_{u}^{(0)} \mid u \in \mathcal{N}_{v}\right\}$. The goal is to reconstruct the information contained in $h_{v}^{(0)}$ and $H_{\mathcal{N}_{v}}^{(0)}$, which gives

    $\begin{array}{ll}\underset{\phi, \psi}{\min} & \sum\limits_{v \in V} \mathcal{M}\left(\left(h_{v}^{(0)}, H_{\mathcal{N}_{v}}^{(0)}\right), \psi\left(h_{v}^{(1)}\right)\right)\\ \text { s.t. } & h_{v}^{(1)}=\phi\left(h_{v}^{(0)}, H_{\mathcal{N}_{v}}^{(0)}\right), \forall v \in V\end{array} \quad\quad\quad(2)$

  where $\mathcal{M}(\cdot, \cdot)$ is the reconstruction loss. $\mathcal{M}$ is split into two parts, measuring self-feature reconstruction and neighborhood reconstruction respectively:

    $\mathcal{M}\left(\left(h_{v}^{(0)}, H_{\mathcal{N}_{v}}^{(0)}\right), \psi\left(h_{v}^{(1)}\right)\right)=\mathcal{M}_{s}\left(h_{v}^{(0)}, \psi\left(h_{v}^{(1)}\right)\right)+\mathcal{M}_{n}\left(H_{\mathcal{N}_{v}}^{(0)}, \psi\left(h_{v}^{(1)}\right)\right)\quad\quad\quad(3)$

  For self-feature reconstruction, the feature reconstruction loss $\mathcal{M}_{s}$ is:

    $\mathcal{M}_{s}\left(h_{v}^{(0)}, \psi\left(h_{v}^{(1)}\right)\right)=\left\|h_{v}^{(0)}-\psi_{s}\left(h_{v}^{(1)}\right)\right\|^{2}\quad\quad\quad(4)$

In the code:

  feature_losses = self.feature_loss_func(h0, self.feature_decoder(gij))

Here h0 is the original feature matrix $X$, and gij is the hidden representation produced by the four-layer GCN encoder; feature_decoder, an FNN, then maps gij back to the same dimensionality as $X$.

   $\mathcal{M}_{n}$ is further split into degree reconstruction and neighborhood reconstruction. For node $v$, the neighborhood information is viewed as $d_{v}$ i.i.d. empirical samples drawn from $\mathcal{P}_{v}^{(0)}$, where $\mathcal{P}_{v}^{(0)} \triangleq \frac{1}{d_{v}} \sum\limits_{u \in \mathcal{N}_{v}} \delta_{h_{u}^{(0)}}$. Concretely,

    $\mathcal{M}_{n}\left(H_{\mathcal{N}_{v}}^{(0)}, \psi\left(h_{v}^{(1)}\right)\right)=\left(d_{v}-\psi_{d}\left(h_{v}^{(1)}\right)\right)^{2}+\mathcal{W}_{2}^{2}\left(\mathcal{P}_{v}^{(0)}, \psi_{p}^{(1)}\left(h_{v}^{(1)}\right)\right)\quad\quad\quad(5)$
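  To make Eq. (5) concrete, here is a minimal, hedged sketch of the per-node neighborhood loss. The helper `w2_squared` is a hypothetical function returning the squared 2-Wasserstein distance between two sets of vectors (one possible implementation is sketched at the end of Section 3.2); `degree_decoder` and `dist_decoder` stand in for $\psi_{d}$ and $\psi_{p}$ and are assumptions, not the repo's exact interfaces.

import torch

def neighborhood_loss(h_v, neighbor_feats, degree_decoder, dist_decoder, w2_squared):
    # degree term: squared error between the true degree d_v and the decoded degree
    d_v = torch.tensor(float(neighbor_feats.shape[0]))
    degree_term = (d_v - degree_decoder(h_v)) ** 2
    # distribution term: squared W2 between the empirical neighbor distribution
    # P_v^(0) and the samples decoded from h_v^(1) by psi_p
    generated_neighbors = dist_decoder(h_v)      # (q, feat_dim) generated "neighbors"
    dist_term = w2_squared(neighbor_feats, generated_neighbors)
    return degree_term + dist_term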

Generalizing to k-hop neighborhood reconstruction

  

  [Figure from the paper: generalizing to k-hop neighborhood reconstruction]

  The paper expects $h_{v}^{(k)}$ to directly reconstruct $H_{\mathcal{N}_{v}}^{(i)}$ for $0 \leq i \leq k-1$. Specifically, for each node $v \in V$, the reconstruction loss is

  $\begin{aligned}&\mathcal{M}^{\prime}\left(\left(h_{v}^{(0)},\left\{H_{\mathcal{N}_{v}}^{(i)} \mid 0 \leq i \leq k-1\right\}\right), \psi\left(h_{v}^{(k)}\right)\right)=\mathcal{M}_{s}\left(h_{v}^{(0)}, \psi\left(h_{v}^{(k)}\right)\right)+\sum\limits_{i=0}^{k-1} \mathcal{M}_{n}\left(H_{\mathcal{N}_{v}}^{(i)}, \psi\left(h_{v}^{(k)}\right)\right) \\&=\lambda_{s}\left\|h_{v}^{(0)}-\psi_{s}\left(h_{v}^{(k)}\right)\right\|^{2}+\lambda_{d}\left(d_{v}-\psi_{d}\left(h_{v}^{(k)}\right)\right)^{2}+\sum\limits_{i=0}^{k-1} \mathcal{W}_{2}^{2}\left(\mathcal{P}_{v}^{(i)}, \psi_{p}^{(i)}\left(h_{v}^{(k)}\right)\right)\end{aligned}\quad\quad\quad(6)$

  where $\psi_{s}$ decodes the initial features, $\psi_{d}$ decodes the degree, and $\psi_{p}^{(i)}$, $0 \leq i \leq k-1$, decodes the layer-$i$ neighborhood representation distribution $\mathcal{P}_{v}^{(i)} \triangleq \frac{1}{d_{v}} \sum\limits_{u \in \mathcal{N}_{v}} \delta_{h_{u}^{(i)}}$. $\lambda_{s}$ and $\lambda_{d}$ are non-negative hyperparameters. The full objective of $k$-hop neighborhood reconstruction is therefore

    $\begin{array}{l} \underset{\phi, \psi}{\text{min}}\quad \sum\limits _{v \in V} \mathcal{M}^{\prime}\left(\left(h_{v}^{(0)},\left\{H_{\mathcal{N}_{v}}^{(i-1)} \mid 1 \leq i \leq k\right\}\right), \psi\left(h_{v}^{(k)}\right)\right)\\ \text { s.t. }\quad H^{(i)}=\phi^{(i)}\left(H^{(i-1)}\right), \quad1 \leq i \leq k\end{array}\quad(7)$

  where $\phi=\left\{\phi^{(i)} \mid 1 \leq i \leq k\right\}$ consists of $k$ GNN layers and $\mathcal{M}^{\prime}$ is defined in Eq. (6).

  The formulas in this subsection look intimidating, but the key point is simply that $h_{v}^{(k)}$ is expected to directly reconstruct $H_{\mathcal{N}_{v}}^{(i)}$ for $0 \leq i \leq k-1$; a plainer walkthrough follows below.

3.2 Decoding distributions: decoders $\psi_{p}^{(i)}$, $0 \leq i \leq k-1$

  Note: the figure above actually depicts reconstructing the neighborhood $H_{\mathcal{N}_{v}}^{(k-1)}$ from the layer-$k$ representation $h_{v}^{(k)}$ of node $v$.

  

  [Figure from the paper: the distribution decoder]

    $\begin{array}{l}\psi_{p}^{(i)}\left(h_{v}^{(k)}\right)=\operatorname{FNN}_{p}^{(i)}(\xi),\quad \xi \sim \mathcal{N}\left(\mu_{v}, \Sigma_{v}\right)\\\text { where }\quad \mu_{v}=\operatorname{FNN}_{\mu}\left(h_{v}^{(k)}\right), \quad\Sigma_{v}=\operatorname{diag}\left(\exp \left(\operatorname{FNN}_{\sigma}\left(h_{v}^{(k)}\right)\right)\right)\end{array}\quad(8)$

Let's work through the code:

self.m = torch.distributions.Normal(torch.zeros(sample_size, hidden_dim), torch.ones(sample_size, hidden_dim))   # standard normal used for the reparameterization trick
self.mlp_mean = nn.Linear(hidden_dim, hidden_dim)     # plays the role of FNN_mu in Eq. (8)
self.mlp_sigma = nn.Linear(hidden_dim, hidden_dim)    # plays the role of FNN_sigma in Eq. (8)

sampled_embeddings_list, mark_len_list = self.sample_neighbors(neighbor_indexes, neighbor_dict, to_layer)    # sample 5 neighbors per node from layer k-1; nodes with fewer than 5 neighbors are padded with zero feature vectors
for i, neighbor_embeddings1 in enumerate(sampled_embeddings_list):
    # generate sampled "neighbors" from h^(k)_v via the reparameterization trick
    index = neighbor_indexes[i]
    mask_len1 = mark_len_list[i]
    mean = from_layer[index].repeat(self.sample_size, 1)      # [5,512], layer-k representation of node v, repeated sample_size times
    mean = self.mlp_mean(mean)                                # [5,512], linear map -> mu_v
    sigma = from_layer[index].repeat(self.sample_size, 1)     # [5,512], layer-k representation of node v, repeated sample_size times
    sigma = self.mlp_sigma(sigma)                             # [5,512], linear map -> log sigma_v
    std_z = self.m.sample().to(device)                        # [5,512], entries drawn from the standard normal N(0,1)
    var = mean + sigma.exp() * std_z                          # [5,512], reparameterized Gaussian samples
    nhij = FNN_generator(var, device)                         # [5,512], feed-forward network FNN_p applied to the samples
    generated_neighbors = nhij

  Next, the 2-Wasserstein distance between node $v$'s real neighborhood and the generated neighborhood is computed. I did not fully follow this part of the code; a hedged sketch of one way to compute it is given below.
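  For two empirical distributions with the same number of points and uniform weights, the optimal transport plan is a one-to-one matching, so the squared 2-Wasserstein distance reduces to an assignment problem. The following is a hedged sketch of that reduction (not the authors' exact implementation; in particular, the zero-padding of nodes with fewer than sample_size neighbors is ignored here):

# Squared 2-Wasserstein distance between two equal-size, uniformly weighted point sets.
import numpy as np
from scipy.optimize import linear_sum_assignment

def w2_squared(real_neighbors, generated_neighbors):
    # real_neighbors, generated_neighbors: (q, d) arrays of neighbor embeddings
    cost = ((real_neighbors[:, None, :] - generated_neighbors[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm: optimal pairing
    return cost[rows, cols].mean()             # average squared transport cost = W2^2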

3.3 Further discussion: decoders $\psi_{s}$, $\psi_{d}$ and encoder $\phi$

3.3.1 degree_decoder

  The decoder $\psi_{d}$ that reconstructs the node degree is an FNN whose output is constrained to be non-negative: Eq. (9) uses an exponential, while the released code applies ReLU instead.

    $\psi_{d}(h_{v}^{(k)})=\exp (\mathrm{FNN}_{d}(h_{v}^{(k)}))\quad\quad\quad(9)$

Degree decoder code:

self.degree_decoder = FNN(hidden_dim, hidden_dim, 1, 4)   # FNN_d: maps the node embedding to a single decoded degree

# FNN
class FNN(nn.Module):
    def __init__(self, in_features, hidden, out_features, layer_num):
        super(FNN, self).__init__()
        self.linear1 = MLP(layer_num, in_features, hidden, out_features)   # MLP is a multi-layer perceptron defined elsewhere in the repo
        self.linear2 = nn.Linear(out_features, out_features)
    def forward(self, embedding):
        x = self.linear1(embedding)
        x = self.linear2(F.relu(x))
        return x

def degree_decoding(self, node_embeddings):
    degree_logits = F.relu(self.degree_decoder(node_embeddings))
    return degree_logits      


The degree reconstruction uses an MSE loss:

self.degree_loss_func = nn.MSELoss()      
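  Presumably the targets for this loss are the true node degrees read off the graph. A self-contained, hedged sketch (not the repo's exact code) of the degree reconstruction term in Eq. (5):

import torch
import torch.nn as nn
import dgl

g = dgl.graph(([0, 1, 2, 2], [1, 2, 0, 3]))           # toy graph with 4 nodes
true_degrees = g.in_degrees().float().unsqueeze(-1)   # (4, 1) reconstruction targets
decoded_degrees = torch.relu(torch.randn(4, 1))       # stand-in for psi_d(h_v^(k))

degree_loss = nn.MSELoss()(decoded_degrees, true_degrees)
print(degree_loss)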

3.3.2 feature_decoder

  The decoder $\psi_{s}$ that reconstructs the initial features $h_{v}^{(0)}$ is an FNN.

    $\psi_{s}\left(h_{v}^{(k)}\right)=\mathrm{FNN}_{s}\left(h_{v}^{(k)}\right)\quad\quad\quad(9)$

Feature decoder code:

self.feature_decoder = FNN(hidden_dim, hidden_dim, in_dim, 3)   # FNN_s: maps the node embedding back to the input feature dimension; FNN is the same class as in 3.3.1



The feature reconstruction uses an MSE loss:

self.feature_loss_func = nn.MSELoss()      

3.3.3 Encoder

  The encoder can be a GINConv, GraphConv, or SAGEConv layer. Since the experiments stack several GNN layers, over-smoothing becomes an issue, so PairNorm is used to alleviate it.

    $\left\{h_{v}^{(0)} \mid v \in V\right\}=\operatorname{pair-norm}\left(\left\{x_{v} W \mid v \in V\right\}\right)\quad\quad\quad(10)$

  where $W$ is a learnable parameter matrix.

Encoder code:

if GNN_name == "GCN":
    self.graphconv1 = GraphConv(in_dim, hidden_dim)            #[1433,512]
    self.graphconv2 = GraphConv(hidden_dim, hidden_dim)         #[512,512]
    self.graphconv3 = GraphConv(hidden_dim, hidden_dim)        #[512,512]
    self.graphconv4 = GraphConv(hidden_dim, hidden_dim)       #[512,512]

self.norm = PairNorm(norm_mode, norm_scale)

def forward_encoder(self, g, h):
    # K-layer Encoder
    # Apply graph convolution and activation, pair-norm to avoid trivial solution
    h0 = h
    l1 = self.graphconv1(g, h0)
    l1_norm = torch.relu(self.norm(l1))
    l2 = self.graphconv2(g, l1_norm)
    l2_norm = torch.relu(self.norm(l2))
    l3 = self.graphconv3(g, l2_norm)
    l3_norm = torch.relu(self.norm(l3))
    l4 = self.graphconv4(g, l3_norm)   # 4-layer encoder output
    return l4, l3_norm, l2_norm, l1_norm, h0

class PairNorm(nn.Module):
    ...
    def forward(self, x):
        if self.mode == 'None':
            return x
        col_mean = x.mean(dim=0)
        if self.mode == 'PN':
            x = x - col_mean
            rownorm_mean = (1e-6 + x.pow(2).sum(dim=1).mean()).sqrt()
            x = self.scale * x / rownorm_mean
        if self.mode == 'PN-SI':
            x = x - col_mean
            rownorm_individual = (1e-6 + x.pow(2).sum(dim=1, keepdim=True)).sqrt()
            x = self.scale * x / rownorm_individual
        if self.mode == 'PN-SCS':
            rownorm_individual = (1e-6 + x.pow(2).sum(dim=1, keepdim=True)).sqrt()
            x = self.scale * x / rownorm_individual - col_mean
        return x      


4 Experiments

   We design experiments to evaluate NWR-GAE, focusing on the following research questions:

    • RQ1: How does NWR-GAE perform on structural-role-based synthetic datasets compared with state-of-the-art unsupervised graph embedding baselines?
    • RQ2: How do NWR-GAE and its ablations compare with the baselines on different types of real-world graph datasets?
    • RQ3: How do the main model parameters, such as the embedding dimension $d$ and the sample size $q$, affect NWR-GAE?

4.1 Experimental setup

4.1.1 Datasets

  Synthetic datasets

  Real-world graph Datasets

4.1.2 Baselines

  1) Random walk based (DeepWalk, node2vec)

  2) Structural role based (RolX, struc2vec, GraphWave)

  3) Graph auto-encoder based (GAE, VGAE, ARGVA)

  4) Contrastive learning based (DGI, GraphCL, MVGRL)

4.1.3 Evaluation metrics

  • Homogeneity: based on the conditional entropy of the ground-truth labels given the predicted clusters; high when each cluster contains only members of a single class.
  • Completeness: high when all nodes with the same ground-truth label are assigned to the same cluster.
  • Silhouette score: compares intra-cluster distance with inter-cluster distance (see the scikit-learn sketch below).
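  All three metrics are available off the shelf; a minimal, self-contained sketch with scikit-learn (dummy data stands in for the real embeddings and labels):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import homogeneity_score, completeness_score, silhouette_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 16))        # stand-in for learned node embeddings
labels = rng.integers(0, 4, size=100)          # stand-in for ground-truth labels
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embeddings)

print(homogeneity_score(labels, clusters))     # does each cluster hold a single class?
print(completeness_score(labels, clusters))    # does each class stay in one cluster?
print(silhouette_score(embeddings, clusters))  # intra- vs. inter-cluster distances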

4.2 Performance on synthetic datasets (RQ1)

  

  [Results table from the paper: synthetic datasets]

4.3 Performance on real-world datasets (RQ2)

  

  [Results table from the paper: real-world datasets]

4.4 In-depth analysis of NWR-GAE

  

  [Figure/table from the paper: in-depth analysis of NWR-GAE]

5 Conclusion

  In this work, we address the limitations of existing unsupervised graph representation methods and propose the first model that properly captures the proximity, structural, and feature information of nodes in a graph and encodes them distinguishably in a low-dimensional embedding space. The model is extensively tested on synthetic and real-world benchmark datasets, and the results strongly support its claimed advantages. Since it is general, effective, and conceptually easy to understand, we believe it has the potential to serve as a practical default for unsupervised graph representation learning. In the future, it will be promising to study its applications in different domains, as well as to carefully analyze potential issues such as robustness and privacy.

Work on the causes, and the results will follow. Author: follow me for more paper walkthroughs; when reposting, please credit the original post: https://www.cnblogs.com/BlairGrowing/p/16599030.html