
Set Transformer: A framework for attention-based permutation-invariant neural networks


1. Introduction

Many ML tasks are defined on sets of instances.
→ These tasks do not depend on the order of the elements
→ We need permutation-invariant models for these tasks

2. Major contribution

Set Transformer
• Used self-attention to encode every element in a set.
→ Able to encode pairwise or higher-order interactions.
• The authors introduced an efficient attention scheme inspired by inducing point methods from the sparse Gaussian process literature.
→ Reduced the \mathcal{O}(n^2) computation to \mathcal{O}(nm)
• Used self-attention to aggregate features
→ Beneficial when the problem requires multiple outputs that depend on each other
e.g.) meta-clustering

3. Background

Set-input problems?

Definition: A set of instances is given as the input, and the corresponding target is a label for the entire set.
e.g.) 3D shape recognition, few-shot image classification
Requirements for a model for set-input problems:
• Permutation invariance = order-independent
• Input size invariance
c.f.) Ordinary MLPs or RNNs violate these requirements.
Recent works
[Edwards & Storkey (2017)] and [Zaheer et al. (2017)] proposed set pooling methods.
Framework
1. Each element in a set is independently fed into a feed-forward neural network.
2. The resulting embeddings are then aggregated using a pooling operation (mean, max, sum, ...).
→ This framework is proven to be a universal approximator for any set function.
→ However, it fails to learn complex mappings & interactions between the elements of a set
e.g.) amortized clustering problem
Amortized clustering (from a Google search): reusing previous inference (clustering) results to accelerate the inference (clustering) of a new dataset.

Pooling architecture for sets

Universal representation of permutation invariant functions
\text{net}(\{x_1, ..., x_n\}) = \rho(\text{pool}(\{\phi(x_1), ..., \phi(x_n)\}))
\phi: \text{encoder}; \ \rho: \text{decoder}
The model remains permutation-invariant even if the "encoder" \phi is a stack of permutation-equivariant (order-dependent) layers
e.g.) permutation-equivariant layer - order matters!
f_i(x; \{x_1, ..., x_n\}) = \sigma_i(\lambda x + \gamma \, \text{pool}(\{x_1, ..., x_n\}))
\lambda, \gamma: \text{learnable scalar variables}; \ \sigma(\cdot): \text{non-linear activation function}
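As a concrete illustration, here is a minimal PyTorch sketch of this pooling architecture (the hidden sizes, the two-layer MLPs playing the roles of \phi and \rho, and the mean/max pooling options are illustrative assumptions, not the exact models from the papers):

```python
import torch
import torch.nn as nn

class SetPooling(nn.Module):
    """Set pooling model: net(X) = rho(pool({phi(x_i)}))."""
    def __init__(self, dim_in=2, dim_hidden=64, dim_out=1, pool="mean"):
        super().__init__()
        # phi: applied to each element independently (permutation equivariant)
        self.phi = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.ReLU(),
                                 nn.Linear(dim_hidden, dim_hidden))
        # rho: applied to the pooled set representation
        self.rho = nn.Sequential(nn.Linear(dim_hidden, dim_hidden), nn.ReLU(),
                                 nn.Linear(dim_hidden, dim_out))
        self.pool = pool

    def forward(self, x):                # x: (batch, n, dim_in)
        h = self.phi(x)                  # (batch, n, dim_hidden)
        if self.pool == "mean":
            z = h.mean(dim=1)            # pool over the set dimension
        else:
            z = h.max(dim=1).values
        return self.rho(z)               # (batch, dim_out)
```

Because the pooling collapses the set dimension with a symmetric operation, the output is the same for any ordering of the n elements.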

Attention

A set with n elements = n query vectors, each of dimension d_q → Q \in \mathbb{R}^{n \times d_q}: \text{query matrix}
An attention function \text{Att}(Q, K, V) maps the queries Q to outputs using n_v key-value pairs K \in \mathbb{R}^{n_v \times d_q}, V \in \mathbb{R}^{n_v \times d_v}
\text{Att}(Q, K, V; \omega) = \omega(QK^\top)V
QK^\top: \text{pairwise dot products} → measures how similar each pair of query and key vectors is.
\omega: \text{activation function}
Usually, \omega(\cdot) = \text{softmax}(\cdot / \sqrt{d})
\omega(QK^\top)V: \text{weighted sum of } V
(Figure from the huidea Tistory blog)
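The attention function above translates almost directly into code; the following is a minimal single-head sketch (no masking, no projections), not a full implementation:

```python
import math
import torch

def att(Q, K, V):
    """Att(Q, K, V; omega) = omega(Q K^T) V with omega = softmax(. / sqrt(d))."""
    d = Q.shape[-1]
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d)   # pairwise dot products QK^T
    weights = torch.softmax(scores, dim=-1)           # omega: each row sums to 1
    return weights @ V                                # weighted sum of the values V
```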

Self attention

(Figures from the Google AI Blog)
• Attention:
\text{Attention score} = \langle \text{hidden state of encoder}, \text{hidden state of decoder} \rangle
• Self-attention:
\text{Attention score} = \langle \text{hidden state of encoder}, \text{hidden state of encoder} \rangle
→ Can model the interactions between the input elements

Multi-head self-attention

Instead of computing a single attention function, this method first projects Q, K, V onto h different (d^M_q, d^M_q, d^M_v)-dimensional vectors
\text{Multihead}(Q, K, V; \lambda, \omega) = \text{concat}(O_1, ..., O_h)W^O
\text{where } O_j = \text{Att}(QW^Q_j, KW^K_j, VW^V_j; \omega)
(Figure from "Attention Is All You Need")
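A possible sketch of this multi-head scheme; taking the per-head dimensions as d/h and stacking the h projections into single linear layers are assumptions for simplicity:

```python
import math
import torch
import torch.nn as nn

class Multihead(nn.Module):
    """Multihead(Q, K, V) = concat(O_1, ..., O_h) W^O, O_j = Att(Q W_j^Q, K W_j^K, V W_j^V)."""
    def __init__(self, d, h):
        super().__init__()
        assert d % h == 0, "d must be divisible by the number of heads"
        self.h, self.d_head = h, d // h
        self.w_q = nn.Linear(d, d)   # all h query projections W_j^Q stacked together
        self.w_k = nn.Linear(d, d)
        self.w_v = nn.Linear(d, d)
        self.w_o = nn.Linear(d, d)   # output projection W^O

    def _split(self, x):             # (batch, n, d) -> (batch, h, n, d_head)
        b, n, _ = x.shape
        return x.view(b, n, self.h, self.d_head).transpose(1, 2)

    def forward(self, Q, K, V):
        q = self._split(self.w_q(Q))
        k = self._split(self.w_k(K))
        v = self._split(self.w_v(V))
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)   # (batch, h, n_q, n_k)
        o = torch.softmax(scores, dim=-1) @ v                       # (batch, h, n_q, d_head)
        o = o.transpose(1, 2).reshape(Q.shape[0], Q.shape[1], self.h * self.d_head)
        return self.w_o(o)           # concatenate the heads, then apply W^O
```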

4. Deep-dive

4.1. Permutation equivariant (induced) set attention blocks

4.1.1. Taxonomies

\text{MAB}: \text{Multihead Attention Block}
\text{SAB}: \text{Set Attention Block}
\text{ISAB}: \text{Induced Set Attention Block}
X, Y \in \mathbb{R}^{n \times d}: \text{sets of } d\text{-dimensional vectors} → represented as matrices
\text{rFF}: \text{any row-wise feedforward layer} → processes each instance independently and identically

4.1.2. MAB

\text{MAB}(X, Y) = \text{LayerNorm}(H + \text{rFF}(H))
\text{where } H = \text{LayerNorm}(X + \text{Multihead}(X, Y, Y; \omega))
c.f.) \text{MAB} is the encoder block of the Transformer without positional encoding and dropout
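A sketch of the MAB under these definitions, using PyTorch's nn.MultiheadAttention; the rFF width and the head count are arbitrary choices:

```python
import torch
import torch.nn as nn

class MAB(nn.Module):
    """MAB(X, Y) = LayerNorm(H + rFF(H)), H = LayerNorm(X + Multihead(X, Y, Y))."""
    def __init__(self, d, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, num_heads, batch_first=True)
        self.rff = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.ln1 = nn.LayerNorm(d)
        self.ln2 = nn.LayerNorm(d)

    def forward(self, X, Y):                       # X: (batch, n, d), Y: (batch, m, d)
        H = self.ln1(X + self.attn(X, Y, Y)[0])    # the queries X attend to Y
        return self.ln2(H + self.rff(H))           # row-wise FF + residual + LayerNorm
```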

4.1.3. SAB

A special form of MAB
\text{SAB}(X) = \text{MAB}(X, X)
→ \text{SAB} takes a set and performs self-attention between its elements.
However, a potential problem of using SABs is their quadratic time complexity \mathcal{O}(n^2)
→ The authors introduce the ISAB (Induced Set Attention Block)
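Given the MAB sketch above, the SAB is just a thin wrapper (sketch):

```python
class SAB(nn.Module):
    """SAB(X) = MAB(X, X): every element attends to every other element."""
    def __init__(self, d, num_heads=4):
        super().__init__()
        self.mab = MAB(d, num_heads)   # MAB class from the sketch above

    def forward(self, X):              # X: (batch, n, d)
        return self.mab(X, X)          # O(n^2) pairwise attention
```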

4.1.4. ISAB

Additionally define m d-dimensional vectors I \in \mathbb{R}^{m \times d}: \text{inducing points} ← trainable parameters
\text{ISAB}_m(X) = \text{MAB}(X, C) \in \mathbb{R}^{n \times d}
\text{where } C = \text{MAB}(I, X) \in \mathbb{R}^{m \times d}
1. Transform I into C by attending to the input set.
2. C, the set of transformed inducing points containing information about X, is attended to again by X to finally produce a set of n elements.
→ Similar to a low-rank projection or an autoencoder, but the goal of the ISAB is to obtain good features for the final task.
e.g.) In amortized clustering, the inducing points could be the representations of the clusters.
The time complexity of ISAB is \mathcal{O}(nm) → linear in n!
m: \text{hyperparameter}
Both \text{SAB}(X) and \text{ISAB}_m(X) are permutation equivariant!
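A sketch of the ISAB, reusing the MAB class above; the random initialization of the inducing points is an illustrative assumption:

```python
class ISAB(nn.Module):
    """ISAB_m(X) = MAB(X, C), C = MAB(I, X), with m trainable inducing points I."""
    def __init__(self, d, m, num_heads=4):
        super().__init__()
        self.I = nn.Parameter(torch.randn(1, m, d))           # inducing points, (1, m, d)
        self.mab1 = MAB(d, num_heads)                          # C = MAB(I, X): cost O(mn)
        self.mab2 = MAB(d, num_heads)                          # MAB(X, C):     cost O(nm)

    def forward(self, X):                                      # X: (batch, n, d)
        C = self.mab1(self.I.expand(X.size(0), -1, -1), X)     # (batch, m, d)
        return self.mab2(X, C)                                 # (batch, n, d)
```

Note how X only ever attends to (or is attended by) the m inducing points, never to itself, which is exactly where the \mathcal{O}(nm) cost comes from.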

4.2. Pooling by Multi-head attention

Common aggregation scheme: dimension-wise average or maximum
Instead, the authors propose applying multi-head attention on a learnable set of k seed vectors S \in \mathbb{R}^{k \times d}. Z \in \mathbb{R}^{n \times d} is the set of features constructed by the encoder.
\text{PMA}_k: \text{Pooling by Multihead Attention with } k \text{ seed vectors}
\text{PMA}_k(Z) = \text{MAB}(S, \text{rFF}(Z))
The output of \text{PMA}_k is a set of k items.
In most cases, k = 1.
But for tasks such as amortized clustering, which require k correlated outputs, we need k seed vectors.
To further model the interactions among the k outputs, the authors apply a SAB:
T = \text{SAB}(\text{PMA}_k(Z))
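A sketch of \text{PMA}_k in the same style, again reusing the MAB class; the rFF architecture and the seed initialization are illustrative assumptions:

```python
class PMA(nn.Module):
    """PMA_k(Z) = MAB(S, rFF(Z)) with k learnable seed vectors S."""
    def __init__(self, d, k=1, num_heads=4):
        super().__init__()
        self.S = nn.Parameter(torch.randn(1, k, d))    # seed vectors, (1, k, d)
        self.rff = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.mab = MAB(d, num_heads)

    def forward(self, Z):                              # Z: (batch, n, d)
        S = self.S.expand(Z.size(0), -1, -1)           # broadcast seeds over the batch
        return self.mab(S, self.rff(Z))                # (batch, k, d)
```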

4.3. Overall Architecture

We need to stack multiple SABs (or ISABs) to encode higher-order interactions.
\text{Encoder}: X \mapsto Z \in \mathbb{R}^{n \times d}
\text{Encoder}(X) = \text{SAB}(\text{SAB}(X))
\text{Encoder}(X) = \text{ISAB}_m(\text{ISAB}_m(X))
The time complexities for \ell stacked SABs and ISABs are \mathcal{O}(\ell n^2) and \mathcal{O}(\ell nm), respectively.
The decoder aggregates Z into a single vector or a set of vectors, which is fed into a feed-forward network to produce the final outputs.
\text{Decoder}(Z; \lambda) = \text{rFF}(\text{SAB}(\text{PMA}_k(Z))) \in \mathbb{R}^{k \times d}
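Putting the blocks together, a possible end-to-end sketch looks as follows; the initial linear embedding from the input dimension to the hidden dimension, and the default m, k, and head count, are assumptions for illustration:

```python
class SetTransformer(nn.Module):
    """Encoder(X) = ISAB_m(ISAB_m(X)); Decoder(Z) = rFF(SAB(PMA_k(Z)))."""
    def __init__(self, dim_in, dim_hidden, dim_out, m=32, k=1, num_heads=4):
        super().__init__()
        self.embed = nn.Linear(dim_in, dim_hidden)             # lift inputs to dim_hidden
        self.encoder = nn.Sequential(ISAB(dim_hidden, m, num_heads),
                                     ISAB(dim_hidden, m, num_heads))
        self.pma = PMA(dim_hidden, k, num_heads)
        self.sab = SAB(dim_hidden, num_heads)
        self.out = nn.Linear(dim_hidden, dim_out)              # final rFF

    def forward(self, X):                                      # X: (batch, n, dim_in)
        Z = self.encoder(self.embed(X))                        # (batch, n, dim_hidden)
        return self.out(self.sab(self.pma(Z)))                 # (batch, k, dim_out)
```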

4.4. Analysis

The encoder of the Set Transformer is permutation equivariant.
But the full Set Transformer (encoder followed by the pooling decoder) is permutation invariant.
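A quick sanity check of these two properties, using the sketches above with hypothetical shapes:

```python
import torch

torch.manual_seed(0)
X = torch.randn(2, 10, 3)                      # 2 sets of 10 three-dimensional elements
perm = torch.randperm(10)                      # a random permutation of the elements

model = SetTransformer(dim_in=3, dim_hidden=64, dim_out=1, m=8, k=1)
model.eval()

with torch.no_grad():
    enc = lambda x: model.encoder(model.embed(x))
    # Encoder: permutation EQUIVARIANT -> permuting the input permutes the output rows.
    assert torch.allclose(enc(X)[:, perm], enc(X[:, perm]), atol=1e-5)
    # Full Set Transformer: permutation INVARIANT -> the output is unchanged.
    assert torch.allclose(model(X), model(X[:, perm]), atol=1e-5)
```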

5. Experiments

5.1. Toy problem: Maximum value regression

Motivation: Can the model learn to find and attend to the maximum element?
A model with max-pooling can predict the output perfectly by learning its encoder to be the identity function.
The Set Transformer achieves performance comparable to the max-pooling model.

5.2. Counting unique characters

Motivation: Can the model learn the interactions between objects in a set?
Dataset: Omniglot
Goal: Predict the number of different characters inside the set

5.3. Amortized clustering with mixture of Gaussians

The log-likelihood of a dataset X = \{x_1, ..., x_n\} under a mixture of k Gaussians:
\log p(X; \theta) = \sum^n_{i=1} \log \sum^k_{j=1} \pi_j \, \mathcal{N}(x_i; \mu_j, \text{diag}(\sigma^2_j))
The typical approach is to run the EM algorithm until convergence.
Dataset:
• Synthetic 2D mixtures of Gaussians
• Vectors from a VGG network pretrained on CIFAR-100
Goal: Learn a generic meta-algorithm that directly maps the input set X to the optimal parameters \theta^*(X)
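For reference, a sketch of how this log-likelihood can be computed stably with a log-sum-exp (diagonal covariances; the tensor shapes are assumptions):

```python
import math
import torch

def mog_log_likelihood(X, pi, mu, sigma2):
    """log p(X; theta) = sum_i log sum_j pi_j N(x_i; mu_j, diag(sigma2_j)).

    Assumed shapes: X (n, d), pi (k,), mu (k, d), sigma2 (k, d).
    """
    diff = X.unsqueeze(1) - mu.unsqueeze(0)                       # (n, k, d)
    log_gauss = -0.5 * (diff ** 2 / sigma2
                        + torch.log(sigma2)
                        + math.log(2 * math.pi)).sum(dim=-1)      # (n, k): log N(x_i; mu_j, .)
    return torch.logsumexp(torch.log(pi) + log_gauss, dim=1).sum()  # sum_i log sum_j pi_j N(.)
```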

6. Conclusion

Why I chose this article
(Figure from "Meta-Learning in Neural Networks: A Survey")
(Figure from the Set Transformer paper)