Dataset card (text modality, parquet format, English). Columns:
- venue: string (2 classes)
- paper_content: string (7.54k to 83.7k characters)
- prompt: string (161 to 2.5k characters)
- format: string (5 classes)
- review: string (293 to 9.84k characters)
ICLR
Title: Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View

∗Equal contribution. Work done during internships at Microsoft Research Asia. †Corresponding author.

Abstract
From the intuitive notion of disentanglement, the image variations corresponding to different factors should be distinct from each other, and the disentangled representation should reflect those variations with separate dimensions. To discover the factors and learn disentangled representation, previous methods typically leverage an extra regularization term when learning to generate realistic images. However, the term usually results in a trade-off between disentanglement and generation quality. For generative models pretrained without any disentanglement term, the generated images show semantically meaningful variations when traversing along different directions in the latent space. Based on this observation, we argue that it is possible to mitigate the trade-off by (i) leveraging pretrained generative models with high generation quality, and (ii) focusing on discovering the traversal directions as factors for disentangled representation learning. To achieve this, we propose Disentanglement via Contrast (DisCo), a framework that models the variations based on the target disentangled representations and contrasts the variations to jointly discover disentangled directions and learn disentangled representations. DisCo achieves state-of-the-art disentangled representation learning and distinct direction discovery, given pretrained non-disentangled generative models including GAN, VAE, and Flow. Source code is at https://github.com/xrenaa/DisCo.

1 INTRODUCTION
Disentangled representation learning aims to identify and decompose the underlying explanatory factors hidden in the observed data, which is believed by many to be the only way for AI to fundamentally understand the world (Bengio & LeCun, 2007). To achieve this goal, as shown in Figure 1 (a), we need an encoder and a generator. The encoder extracts representations from images, with each dimension corresponding to one factor individually. The generator (decoder) decodes the change of each factor into a different kind of image variation. With supervision, we can constrain each dimension of the representation to be sensitive only to the image variation caused by changing its corresponding factor. However, this kind of exhaustive supervision is often not available for real-world data.

The typical unsupervised methods are based on a generative model to build the above encoder-generator framework, e.g., VAE (Kingma & Welling, 2014) provides both encoder and generator, and GAN (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2019) provides a generator. During the training of the encoder and generator, to achieve disentangled representation, the typical methods rely on an additional disentanglement regularization term, e.g., the total correlation for VAE-based methods (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or mutual information for InfoGAN-based methods (Chen et al., 2016; Lin et al., 2020). However, these extra terms usually result in a trade-off between disentanglement and generation quality (Burgess et al., 2018; Khrulkov et al., 2021). Furthermore, such unsupervised methods have been proven to admit an infinite number of entangled solutions without introducing inductive bias (Locatello et al., 2019).
Recent works (Shen & Zhou, 2021; Khrulkov et al., 2021; Karras et al., 2019; Härkönen et al., 2020; Voynov & Babenko, 2020) show that, for GANs trained purely for image generation, traversing along different directions in the latent space causes different variations of the generated image. This phenomenon indicates that some disentanglement property is embedded in the latent space of the pretrained GAN. These observations suggest that training the encoder and generator simultaneously may not be the best choice. We provide an alternative route to learning disentangled representation: fix the pretrained generator, then jointly discover the factors in the latent space of the generator and train the encoder to extract disentangled representation, as shown in Figure 1 (b).

From the intuitive notion of disentangled representation, similar image variations should be caused by changing the same factor, and different image variations should be caused by changing different factors. This provides a novel contrastive learning view of disentangled representation learning and inspires us to propose a framework: Disentanglement via Contrast (DisCo). In DisCo, changing a factor is implemented by traversing one discovered direction in the latent space. For discovering the factors, DisCo adopts a typical network module, the Navigator, to provide candidate traversal directions in the latent space (Voynov & Babenko, 2020; Jahanian et al., 2020; Shen et al., 2020). For disentangled representation learning, to model the various image variations, we propose a novel ∆-Contrastor to build a Variation Space where we apply the contrastive loss. In addition to the above architectural innovations, we propose two key techniques for DisCo: (i) an entropy-based domination loss to encourage the encoded representations to be more disentangled, and (ii) a hard negatives flipping strategy for better optimization of the Contrastive Loss.

We evaluate DisCo on three major generative models (GAN, VAE, and Flow) on three popular disentanglement datasets. DisCo achieves state-of-the-art (SOTA) disentanglement performance compared to all previous discovering-based methods and typical (VAE/InfoGAN-based) methods. Furthermore, we evaluate DisCo on the real-world dataset FFHQ (Karras et al., 2019) to demonstrate that it can discover SOTA disentangled directions in the latent space of pretrained generative models.

Our main contributions can be summarized as: (i) To the best of our knowledge, DisCo is the first unified framework for jointly learning disentangled representation and discovering the latent space of pretrained generative models by contrasting image variations. (ii) We propose a novel ∆-Contrastor to model image variations based on the disentangled representations for utilizing Contrastive Learning. (iii) DisCo is an unsupervised and model-agnostic method that endows non-disentangled VAE, GAN, or Flow models with SOTA disentangled representation learning and latent space discovery. (iv) We propose two key techniques for DisCo: an entropy-based domination loss and a hard negatives flipping strategy.

2 RELATED WORK
Typical unsupervised disentanglement. There have been many studies on unsupervised disentangled representation learning based on VAE (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or InfoGAN (Chen et al., 2016; Lin et al., 2020).
These methods achieve disentanglement via an extra regularization, which often sacrifices generation quality (Burgess et al., 2018; Khrulkov et al., 2021). VAE-based methods disentangle the variations by factorizing the aggregated posterior, and InfoGAN-based methods maximize the mutual information between latent factors and related observations. VAE-based methods achieve relatively good disentanglement performance but low-quality generation; InfoGAN-based methods generate relatively high-quality images but disentangle poorly. Our method supplements generative models pretrained without a disentanglement regularization term with contrastive learning in the Variation Space, achieving both high-fidelity image generation and SOTA disentanglement.

Interpretable directions in the latent space. Recently, researchers have been interested in discovering interpretable directions in the latent space of generative models without supervision, especially for GANs (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2020). Based on the fact that the GAN latent space often possesses semantically meaningful directions (Radford et al., 2015; Shen et al., 2020; Jahanian et al., 2020), Voynov & Babenko (2020) propose a regression-based method to explore interpretable directions in the latent space of a pretrained GAN. Subsequent works focus on extracting the directions from a specific layer of GANs. Härkönen et al. (2020) search for important and meaningful directions by performing PCA in the style space of StyleGAN (Karras et al., 2019; 2020). Shen & Zhou (2021) propose to use the singular vectors of the first layer of a generator as the interpretable directions, and Khrulkov et al. (2021) extend this method to the intermediate layers via the Jacobian matrix. All the above methods only discover interpretable directions in the latent space, except for Khrulkov et al. (2021), which also learns disentangled representations of generated images by training an extra encoder in an extra stage. However, none of these methods outperforms the typical disentanglement methods. Our method is the first to jointly learn the disentangled representation and discover the directions in the latent space.

Contrastive Learning. Contrastive Learning has gained popularity due to its effectiveness in representation learning (He et al., 2020; Grill et al., 2020; van den Oord et al., 2018; Hénaff, 2020; Li et al., 2020; Chen et al., 2020). Typically, contrastive approaches bring representations of different views of the same image (positive pairs) closer and push representations of views from different images (negative pairs) apart using instance-level classification with a Contrastive Loss. Recently, Contrastive Learning has been extended to various tasks, such as image translation (Liu et al., 2021; Park et al., 2020) and controllable generation (Deng et al., 2020). In this work, we focus on the variations of representations and achieve SOTA disentanglement with Contrastive Learning in the Variation Space. Contrastive Learning suits disentanglement because: (i) the actual number of disentangled directions is usually unknown, which is similar to Contrastive Learning for retrieval (Le-Khac et al., 2020), and (ii) it works in the representation space directly, without any extra layers for classification or regression.
3 DISENTANGLEMENT VIA CONTRAST

3.1 OVERVIEW OF DISCO
From the contrastive view of the intuitive notion of disentangled representation learning, we propose DisCo, which leverages pretrained generative models to jointly discover the factors embedded as directions in their latent space and learn to extract disentangled representation. The benefits of leveraging a pretrained generative model are two-fold: (i) pretrained models with high-quality image generation are readily available, which is important for reflecting detailed image variations and for downstream tasks like controllable generation; (ii) the factors are embedded in the pretrained model, serving as an inductive bias for unsupervised disentangled representation learning.

DisCo consists of a Navigator, which provides candidate traversal directions in the latent space, and a ∆-Contrastor, which extracts the representation of image variations and builds a Variation Space based on the target disentangled representations. More specifically, the ∆-Contrastor is composed of two shared-weight Disentangling Encoders. The variation between two images is modeled as the difference of their corresponding encoded representations extracted by the Disentangling Encoders. In the Variation Space, by pulling together the variation samples resulting from traversing the same direction and pushing away those resulting from traversing different directions, the Navigator learns to discover disentangled directions as factors, and the Disentangling Encoder learns to extract disentangled representations from images. Thus, traversing along the discovered directions causes distinct image variations, which makes separate dimensions of the disentangled representation respond.

Different from VAE-based or InfoGAN-based methods, our disentangled representations and factors live in two separate spaces, which in practice does not affect the applications. Similar to the typical methods, the Disentangling Encoder can extract disentangled representations from images, and the pretrained generative model with discovered factors can be applied to controllable generation. Moreover, DisCo can be applied to different types of generative models.

Here we provide a detailed workflow of DisCo. As Figure 2 shows, given a pretrained generative model G: Z → I, where Z ∈ R^L denotes the latent space and I denotes the image space, the workflow is: 1) A Navigator A provides a total of D candidate traversal directions in the latent space Z; e.g., in the linear case, A ∈ R^{L×D} is a learnable matrix, and each column is regarded as a candidate direction. 2) Image pairs G(z), G(z') are generated, where z is sampled from Z and z' = z + A(d, ε), with d ∈ {1, ..., D}, ε ∈ R, and A(d, ε) denoting the shift along the d-th direction with scalar ε. 3) The ∆-Contrastor, composed of two shared-weight Disentangling Encoders E, encodes the image pair to a sample v ∈ V as

v(z, d, ε) = |E(G(z + A(d, ε))) − E(G(z))|,   (1)

where V ∈ R^J_+ denotes the Variation Space. We then apply Contrastive Learning in V to optimize the Disentangling Encoder E to extract disentangled representations and simultaneously enable the Navigator A to find the disentangled directions in the latent space Z.

3.2 DESIGN OF DISCO
We present the design details of DisCo, which include: (i) the collection of the query set Q = {q_i}_{i=1}^B, the positive key set K^+ = {k_i^+}_{i=1}^N, and the negative key set K^- = {k_i^-}_{i=1}^M, which are three subsets of the Variation Space V; (ii) the formulation of the Contrastive Loss.
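As a concrete illustration of the workflow above (Eq. 1), the following is a minimal PyTorch-style sketch. The generator G, encoder E, the dimensions, and the shift range are stand-in assumptions for exposition, not the authors' released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

L, D, J = 512, 64, 32  # latent dim, #candidate directions, representation dim (illustrative)

class LinearNavigator(nn.Module):
    """Navigator A: a learnable L x D matrix; each column is a candidate direction."""
    def __init__(self, latent_dim, num_dirs):
        super().__init__()
        self.A = nn.Parameter(torch.randn(latent_dim, num_dirs))

    def forward(self, d, eps):
        cols = F.normalize(self.A, dim=0)        # unit-norm columns
        return eps.unsqueeze(1) * cols[:, d].T   # shift A(d, eps), shape (B, L)

# Stand-in frozen generator and Disentangling Encoder (images flattened for brevity).
G = nn.Linear(L, 3 * 64 * 64)
E = nn.Linear(3 * 64 * 64, J)
for p in G.parameters():
    p.requires_grad_(False)  # the pretrained generator stays fixed

def variation(nav, z, d, eps):
    """Eq. (1): a sample in the Variation Space, unit-normalized as in Sec. 3.2."""
    v = (E(G(z + nav(d, eps))) - E(G(z))).abs()
    return F.normalize(v, dim=1)

nav = LinearNavigator(L, D)
z = torch.randn(8, L)                      # z sampled from Z
d = torch.randint(0, D, (8,))              # direction indices
eps = torch.empty(8).uniform_(-6.0, 6.0)   # shift scalars (range is a guess)
v = variation(nav, z, d, eps)              # (8, J) samples in the Variation Space
```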
According to our goal of contrasting the variations, samples from Q and K^+ share the same traversal direction and should be pulled together, while samples from Q and K^- have different directions and should be pushed away. Recall that each sample v in V is determined as v(z, d, ε). To set up the contrastive learning process, we construct the query samples q_i = v(z_i, d_i, ε_i), the key samples k_i^+ = v(z_i^+, d_i^+, ε_i^+), and the negative samples k_i^- = v(z_i^-, d_i^-, ε_i^-). Specifically, we randomly sample a direction index d̂ from a discrete uniform distribution U{1, D} for {d_i}_{i=1}^B and {d_i^+}_{i=1}^N to guarantee they are the same. We randomly sample {d_i^-}_{i=1}^M from the set of remaining directions U{1, D} \ {d̂}, individually and independently, to cover the rest of the directions in the Navigator A. Note that a discovered direction should be independent of the starting point and the scale of variation, in line with the disentangled factors. Therefore, {z_i}_{i=1}^B, {z_i^+}_{i=1}^N, and {z_i^-}_{i=1}^M are all sampled from the latent space Z, and {ε_i}_{i=1}^B, {ε_i^+}_{i=1}^N, and {ε_i^-}_{i=1}^M are all sampled from a shared continuous uniform distribution U[−ϵ, ϵ], individually and independently. We normalize each sample in Q, K^+, and K^- to a unit vector to eliminate the impact of different shift scalars.

For the design of the Contrastive Loss, a well-known form is InfoNCE (van den Oord et al., 2018):

L_{NCE} = -\frac{1}{B} \sum_{i=1}^{B} \sum_{j=1}^{N} \log \frac{\exp(q_i \cdot k_j^+/\tau)}{\sum_{s=1}^{N+M} \exp(q_i \cdot k_s/\tau)},   (2)

where τ is a temperature hyper-parameter and {k_i}_{i=1}^{N+M} = {k_i^+}_{i=1}^N ∪ {k_i^-}_{i=1}^M. InfoNCE originates from the BCELoss (Gutmann & Hyvärinen, 2010), which has also been used for contrastive learning (Wu et al., 2018; Le-Khac et al., 2020; Mnih & Kavukcuoglu, 2013; Mnih & Teh, 2012). We follow these works and use the BCELoss L_logits to reduce computational cost:

L_{logits} = -\frac{1}{B} \sum_{i=1}^{B} \left( l_i^- + l_i^+ \right),   (3)

l_i^+ = \sum_{j=1}^{N} \log \sigma(q_i \cdot k_j^+/\tau), \quad l_i^- = \sum_{m=1}^{M} \log(1 - \sigma(q_i \cdot k_m^-/\tau)),   (4)

where σ denotes the sigmoid function, l_i^+ is the part for positive samples, and l_i^- the part for negative ones. Note that we use a shared positive set for the B different queries to reduce the computational cost.

3.3 KEY TECHNIQUES FOR DISCO
Entropy-based domination loss. By optimizing the Contrastive Loss, the Navigator A is optimized to find the disentangled directions in the latent space, and the Disentangling Encoder E is optimized to extract disentangled representations from images. To make the encoded representations even more disentangled, i.e., so that only one dimension of the encoded representation responds when traversing along one disentangled direction, we propose an entropy-based domination loss that encourages the corresponding samples in the Variation Space to be one-hot. To implement it, we first compute the mean c of Q and K^+ as

c = \frac{1}{B + N} \left( \sum_{i=1}^{B} q_i + \sum_{i=1}^{N} k_i^+ \right).   (5)

We then compute the probability p_i = \exp(c(i)) / \sum_{j=1}^{J} \exp(c(j)), where c(i) is the i-th element of c and J is the number of dimensions of c. The entropy-based domination loss L_ed is calculated as

L_{ed} = -\frac{1}{J} \sum_{j=1}^{J} p_j \log(p_j).   (6)
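To make Eqs. (3)-(6) concrete, here is a minimal PyTorch-style sketch of the two losses above; the temperature and the shapes are illustrative assumptions, not the paper's tuned values:

```python
import torch
import torch.nn.functional as F

def bce_contrastive_loss(q, k_pos, k_neg, tau=0.1):
    """L_logits of Eqs. (3)-(4). q: (B, J) queries; k_pos: (N, J) shared
    positives; k_neg: (M, J) negatives; all unit-normalized. tau is a guess."""
    l_pos = F.logsigmoid(q @ k_pos.T / tau).sum(dim=1)     # l_i^+
    l_neg = F.logsigmoid(-(q @ k_neg.T) / tau).sum(dim=1)  # l_i^-: log(1 - sigmoid(x))
    return -(l_pos + l_neg).mean()

def entropy_domination_loss(q, k_pos):
    """L_ed of Eqs. (5)-(6): encourages the mean variation vector to be one-hot."""
    c = torch.cat([q, k_pos], dim=0).mean(dim=0)  # Eq. (5)
    p = F.softmax(c, dim=0)                       # p_i = exp(c_i) / sum_j exp(c_j)
    return -(p * p.log()).mean()                  # Eq. (6); mean supplies the 1/J factor
```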
Hard negatives flipping. Since the latent space of generative models is a high-dimensional complex manifold, many different directions carry the same semantic meaning, and such directions produce hard negatives during the optimization of the Contrastive Loss. These hard negatives are different from those in self-supervised representation learning (He et al., 2020; Coskun et al., 2018), where reliable annotations of the samples are available. Here, our hard negatives are more likely to be "false" negatives, so we choose to flip them into positives. Specifically, we use a threshold T to identify the hard negative samples and use their similarity to the queries as their pseudo-labels:

\hat{l}_i^- = \sum_{\alpha_{ij} < T} \log(1 - \sigma(\alpha_{ij})) + \sum_{\alpha_{ij} \ge T} \alpha_{ij} \log(\sigma(\alpha_{ij})),   (7)

where \hat{l}_i^- denotes the modified l_i^- and \alpha_{ij} = q_i \cdot k_j^- / \tau. The modified final BCELoss is therefore:

L_{logits-f} = -\frac{1}{B} \sum_{i=1}^{B} \left( l_i^+ + \hat{l}_i^- \right).   (8)

Full objective. With the above two techniques, the full objective is

L = L_{logits-f} + \lambda L_{ed},   (9)

where λ is the weighting hyper-parameter for the entropy-based domination loss L_ed.
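Continuing the sketch above (it reuses entropy_domination_loss), a possible implementation of the hard negatives flipping of Eq. (7) and the full objective of Eq. (9); the threshold T and weight λ are illustrative defaults, not the paper's values:

```python
def flipped_negative_term(q, k_neg, tau=0.1, T=0.9):
    """\hat{l}_i^- of Eq. (7): negatives with similarity alpha above the
    threshold T are flipped into pseudo-positives weighted by alpha."""
    alpha = q @ k_neg.T / tau                           # (B, M)
    keep = F.logsigmoid(-alpha) * (alpha < T)           # true negatives
    flip = alpha * F.logsigmoid(alpha) * (alpha >= T)   # flipped hard negatives
    return (keep + flip).sum(dim=1)

def disco_objective(q, k_pos, k_neg, tau=0.1, T=0.9, lam=1.0):
    """Eqs. (8)-(9): flipped BCE loss plus the entropy-based domination loss."""
    l_pos = F.logsigmoid(q @ k_pos.T / tau).sum(dim=1)
    l_logits_f = -(l_pos + flipped_negative_term(q, k_neg, tau, T)).mean()
    return l_logits_f + lam * entropy_domination_loss(q, k_pos)
```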
4 EXPERIMENT
In this section, we first follow the well-accepted protocol (Locatello et al., 2019; Khrulkov et al., 2021) to evaluate the learned disentangled representation, which also implicitly reflects the performance of the discovered directions (Lin et al., 2020) (Section 4.1). Second, we follow Li et al. (2021a) to directly evaluate the discovered directions (Section 4.2). Finally, we conduct an ablation study (Section 4.3).

4.1 EVALUATIONS ON DISENTANGLED REPRESENTATION
4.1.1 EXPERIMENTAL SETUP
Datasets. We consider the following popular datasets in the disentanglement area: Shapes3D (Kim & Mnih, 2018) with 6 ground-truth factors, MPI3D (Gondal et al., 2019) with 7 ground-truth factors, and Cars3D (Reed et al., 2015) with 3 ground-truth factors. In the experiments on these datasets, images are resized to 64x64 resolution.

Pretrained generative models. For GAN, we use the StyleGAN2 model (Karras et al., 2020). For VAE, we use a common convolutional structure (Locatello et al., 2019). For Flow, we use Glow (Kingma & Dhariwal, 2018).

Baselines. For the typical disentanglement baselines, we choose FactorVAE (Kim & Mnih, 2018), β-TCVAE (Chen et al., 2018), and InfoGAN-CR (Lin et al., 2020). For discovering-based methods, we consider several recent methods: GANspace (GS) (Härkönen et al., 2020), LatentDiscovery (LD) (Voynov & Babenko, 2020), ClosedForm (CF) (Shen & Zhou, 2021), and DeepSpectral (DS) (Khrulkov et al., 2021). For these methods, we follow Khrulkov et al. (2021) to train an additional encoder to extract disentangled representation. We are the first to extract disentangled representations from pretrained VAE and Flow, so we extend LD to VAE and Flow as a baseline.

Disentanglement metrics. We mainly consider two representative ones: the Mutual Information Gap (MIG) (Chen et al., 2018) and the Disentanglement metric (DCI) (Eastwood & Williams, 2018). MIG requires each factor to be perturbed only by changes of a single dimension of the representation; DCI requires each dimension to encode the information of a single dominant factor. We evaluate disentanglement in terms of both representation and factors. We also provide results for the β-VAE score (Higgins et al., 2017) and the FactorVAE score (Kim & Mnih, 2018) in Appendix B.3.

Randomness. We consider the randomness caused by random seeds and by the strength of the regularization term (Locatello et al., 2019). For random seeds, we follow the same setting as the baselines. Since DisCo does not have a regularization term, we consider the randomness of the pretrained generative models. For all methods, we ensure there are 25 runs, except that Glow has only one run, limited by GPU resources. More details are presented in Appendix A.

4.1.2 EXPERIMENTAL RESULTS
The quantitative results are summarized in Table 1 and Figure 3. More details about the experimental settings and results are presented in Appendices A & C.

DisCo vs. typical baselines. DisCo achieves SOTA performance consistently in terms of MIG and DCI scores, and its variance due to randomness tends to be smaller than that of the typical baselines. We demonstrate that a method extracting disentangled representation from pretrained non-disentangled models can outperform typical disentanglement baselines.

DisCo vs. discovering-based methods. Among the baselines based on discovering pretrained GANs, CF achieves the best performance. DisCo outperforms CF in almost all cases by a large margin. Besides, these baselines need an extra stage (Khrulkov et al., 2021) to obtain disentangled representation, while our Disentangling Encoder extracts it directly.

4.2 EVALUATIONS ON DISCOVERED DIRECTIONS
To evaluate the discovered directions, we compare DisCo on StyleGAN2 with GS, LD, CF, and DS on the real-world dataset FFHQ (Karras et al., 2019) (the disentanglement metrics DCI and MIG above are not available for FFHQ), and adopt the comprehensive Manipulation Disentanglement Score (MDS) (Li et al., 2021a) as the metric. To calculate MDS, we use the 40 CelebaHQ-attribute predictors released by StyleGAN. Among them, we select Young, Smile, Bald, and Blonde Hair, as these attributes have an available predictor and are commonly found by all methods. The results are summarized in Table 3. DisCo shows better overall performance than the other baselines, which supports our assumption that learning disentangled representation benefits latent space discovery. We also provide qualitative comparisons in Figure 4. Finally, we provide an intuitive analysis in Appendix D of why DisCo can find those disentangled directions.

4.3 ABLATION STUDY
In this section, we perform the ablation study of DisCo only on GAN, limited by space. For the experiments, we use the Shapes3D dataset with a fixed random seed.

Choice of latent space. For style-based GANs (Karras et al., 2019; 2020), there is a style space W, the output of the style network (an MLP) whose input is the random latent space Z. As demonstrated in Karras et al. (2019), W is more interpretable than Z. We conduct experiments on W and Z respectively to see how the latent space influences the performance. As shown in Table 4, DisCo on W is better, indicating that the better the latent space is organized, the better disentanglement DisCo can achieve.

Choices of A. Following the setting of Voynov & Babenko (2020), we mainly consider three options for A: a linear operator with all matrix columns having unit length, a linear operator with orthonormal matrix columns, or a nonlinear operator of 3 fully-connected layers (see the sketch below). The results are shown in Table 4. For latent spaces W and Z, A with unit-norm columns achieves nearly the best performance in terms of MIG and DCI scores. Compared to A with orthonormal matrix columns, A with unit-norm columns is more expressive with fewer constraints. Another possible reason is that A is global, not conditioned on the latent code z; a non-linear operator is more suitable for a local navigator A, and such a much more complex local, non-linear setting would require more inductive bias or supervision.
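The three options for A might be set up as follows; this is a speculative sketch (the exact construction in Voynov & Babenko (2020), including the MLP's inputs, may differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, num_dirs = 512, 64  # illustrative sizes

# (1) Linear A with unit-length columns: normalize a free matrix at each use.
A_free = nn.Parameter(torch.randn(latent_dim, num_dirs))
A_unit = F.normalize(A_free, dim=0)

# (2) Linear A with orthonormal columns, via PyTorch's orthogonal parametrization.
A_ortho = nn.utils.parametrizations.orthogonal(
    nn.Linear(num_dirs, latent_dim, bias=False))  # weight kept column-orthonormal

# (3) Non-linear A: a small MLP mapping (one-hot direction, shift) to a latent offset.
A_mlp = nn.Sequential(
    nn.Linear(num_dirs + 1, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, latent_dim),
)
```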
Entropy-based domination loss. Here, we verify the effectiveness of the entropy-based domination loss L_ed for disentanglement. For a desirable disentangled representation, one semantic meaning corresponds to one dimension. As shown in Table 4, L_ed improves the performance by a large margin. We also visualize the Variation Space to further demonstrate the effectiveness of the proposed loss in Figure 5. Adding the domination loss makes the samples in the Variation Space close to one-hot, which is desirable for disentanglement.

Hard negatives flipping. We run DisCo with and without the hard negatives flipping strategy to study its influence. As shown in Table 4, flipping hard negatives improves the disentanglement ability of DisCo. The reason is that the hard negatives have the same semantics as the positive samples; treating them as negatives does not make sense, and flipping them with pseudo-labels makes the optimization of Contrastive Learning easier.

Hyperparameters N & M. We run DisCo with different ratios N : M at a fixed sum of 96, and with different sums N + M at a fixed ratio of 1 : 2, to study their impact. As shown in Figure 6 (a), the best ratio is N : M = 32 : 64 = 1 : 2; the red line (MIG) and blue line (DCI) show that larger or smaller ratios hurt DisCo, indicating that DisCo requires a balance between N and M. As shown in Figure 6 (b), the sum N + M has only a slight impact on DisCo. Other hyperparameters are set empirically; more details are presented in Appendix A.

Contrast vs. Classification. To verify the effectiveness of Contrast, we substitute it with classification by adopting an additional linear layer to recover the corresponding direction index and the shift along this direction. As Table 2 shows, Contrastive Learning outperforms Classification significantly.

Concatenation vs. Variation. We further demonstrate that the Variation Space is crucial for DisCo. Replacing the difference operator with concatenation drops the performance significantly (Table 2), indicating that the encoded representation is not well disentangled. In other words, the disentangled representations of images are achieved by Contrastive Learning in the Variation Space.

4.4 ANALYSIS OF DIFFERENT GENERATIVE MODELS
As shown in Table 1, DisCo generalizes well to different generative models (GAN, VAE, and Flow). DisCo on GAN and VAE achieves relatively good performance, while DisCo on Flow is not as good. The possible reason is similar to the choice of latent space for GAN: we assume the disentangled directions are globally linear and thus use a linear navigator, and in contrast to GAN and VAE, Flow may not conform to this assumption well. Furthermore, Flow has high GPU cost and unstable training, which limits further exploration.

5 CONCLUSION
In this paper, we present an unsupervised and model-agnostic method, DisCo, a Contrastive Learning framework that learns disentangled representation by exploiting pretrained generative models. We propose an entropy-based domination loss and a hard negatives flipping strategy to achieve better disentanglement. DisCo outperforms typical unsupervised disentanglement methods while maintaining high image quality.
We pinpoint a new direction: Contrastive Learning can be applied to extract disentangled representation from pretrained generative models. For some specific complex generative models, the global-linear assumption on disentangled directions in the latent space could be a limitation. For future work, extending DisCo to existing VAE-based disentanglement frameworks is an exciting direction.

A.2 SETTING FOR BASELINES
In this section, we introduce the implementation settings for the baselines (including randomness).

VAE-based methods. We choose FactorVAE and β-TCVAE as the SOTA VAE-based methods, and follow Locatello et al. (2019) to use the same encoder and decoder architectures. For the hyper-parameters, we use the best settings found by grid search. We set the latent dimension of the representation to 10. For FactorVAE, we set the hyperparameter γ to 10; for β-TCVAE, we set β to 6. For the random seeds, considering that our method has 25 runs, we run each model 25 times with different random seeds to make the comparison fair.

InfoGAN-based methods. We choose InfoGAN-CR as a baseline. We use the official implementation (https://github.com/fjxmlzn/InfoGAN-CR) with the best hyperparameter settings found by grid search. For the random seeds, we run 25 times with different random seeds.

Discovering-based methods. We follow Khrulkov et al. (2021) to use the same settings for the following four baselines: LD (GAN), CF, GS, and DS. Similar to DisCo, discovering-based methods do not have a regularization term. Thus, for randomness, we adopt the same strategy as DisCo: we take the top-10 directions for 5 different random seeds for the GAN and 5 different random seeds for the additional encoder that learns disentangled representations.

LD (VAE) & LD (Flow). We follow LD (GAN) with the same settings and substitute the GAN with VAE / Glow. The only exception is the randomness for LD (Flow): we run only one random seed to pretrain the Glow and use one random seed for the encoder.

A.3 MANIPULATION DISENTANGLEMENT SCORE
As noted in Li et al. (2021a), it is difficult to compare performance on discovering the latent space across methods, which often use model-specific hyper-parameters to control the editing strength. Thus, Li et al. (2021a) propose a comprehensive metric, the Manipulation Disentanglement Score (MDS), which takes both the accuracy and the disentanglement of a manipulation into consideration. For more details, please refer to Li et al. (2021a).

A.4 DOMAIN GAP PROBLEM
Note that there exists a domain gap between the generated images of pretrained generative models and real images. However, the good performance on the disentanglement metrics shows that this domain gap has limited influence on DisCo.

A.5 ARCHITECTURE
Here we provide the model architectures used in our work. For the architecture of StyleGAN2, we follow Khrulkov et al. (2021). For the architecture of Glow, we use the open-source implementation at https://github.com/rosinality/glow-pytorch.

B MORE EXPERIMENTS
B.1 MORE QUALITATIVE COMPARISON
We provide some examples for qualitative comparison. We first demonstrate the trade-off problem of the VAE-based methods. As shown in Figure 7, DisCo leverages the pretrained generative model and does not suffer from the trade-off between disentanglement and generation quality. Furthermore, as shown in Figures 8 and 9, VAE-based methods suffer from poor image quality.
When changing one attribute, the results of discovering-based methods tend to also change other attributes. We also provide qualitative comparisons between DisCo and InfoGAN-CR. Note that the latent space of InfoGAN-CR is not aligned with the pretrained StyleGAN2. InfoGAN-CR also suffers from the trade-off problem, and its disentanglement ability is worse than DisCo's. We explain the comparison in the main paper and show more manipulation comparisons here.

B.2 ANALYSIS OF THE LEARNED DISENTANGLED REPRESENTATIONS
We feed images traversing the three most significant factors of Shapes3D (wall color, floor color, and object color) into the Disentangling Encoder and plot the corresponding dimensions of the encoded representations to visualize the learned disentangled space. The location of each point is the disentangled representation of the corresponding image. An ideal result is that all the points form a cube and the color variation is continuous. We consider the three baselines with relatively higher MIG and DCI: CF, DS, and LD. As the figures show (panels, left to right: CF, DS, LD, Ours), the points in the latent spaces of CF and DS are not well organized, and the latent spaces of all three baselines are not well aligned with the axes, especially for LD. DisCo learns a well-aligned and well-organized latent space, which signifies better disentanglement.

B.3 MORE QUANTITATIVE COMPARISON
We provide additional quantitative comparisons in terms of the β-VAE score and the FactorVAE score. DisCo on pretrained GAN is comparable to the discovering-based baselines in terms of these two scores, suggesting some disagreement between them and MIG/DCI. However, note that the qualitative evaluations in Figure 8, Figure 9, and Section B.2 show that the disentanglement ability of DisCo is better than all the baselines on the Shapes3D dataset. We also provide an additional experiment on the Noisy-DSprites dataset, comparing DisCo with β-TCVAE (the best typical method) and CF (the best discovering-based method) in terms of the MIG and DCI metrics.

C LATENT TRAVERSALS
In this section, we visualize the disentangled directions of the latent space discovered by DisCo on each dataset. For Cars3D, Shapes3D, Anime, and MNIST, the image resolution is 64 × 64. For FFHQ, LSUN Cat, and LSUN Church, the image resolution is 256 × 256. Besides StyleGAN2, we also provide results of Spectral Norm GAN (Miyato et al., 2018) (https://github.com/anvoynov/GANLatentDiscovery) on MNIST (LeCun et al., 2010) and Anime Face (Jin et al., 2017) to demonstrate that DisCo generalizes well to other types of GAN.

D AN INTUITIVE ANALYSIS FOR DISCO
DisCo works by contrasting the variations resulting from traversing along the directions provided by the Navigator. Is this sufficient to converge to the disentangled solution? Answering this question is very challenging: to our best knowledge, for unsupervised disentangled representation learning, there is no theoretical constraint sufficient to guarantee convergence to a disentangled solution (Locatello et al., 2019). Here we provide an intuitive analysis of how DisCo finds the disentangled directions in the latent space, supported by our observations on pretrained GANs both quantitatively and qualitatively.
The intuitive analysis consists of two parts: (i) the directions that can be discovered by DisCo have different variation patterns from random directions; (ii) DisCo hardly converges to an entangled solution.

D.1 WHAT KIND OF DIRECTIONS CAN DISCO CONVERGE TO?
First, we visualize the latent space and show that there are variation patterns in the latent space for disentangled factors. We design the following visualization method. Given a pretrained GAN and two directions in the latent space, we traverse along the plane spanned by the two directions to generate a grid of images. The range is large enough to cover all values of these disentangled factors, and the step is small enough to obtain a dense grid. Then, we feed these images into an encoder trained with ground-truth factor labels and obtain a heatmap for each factor (the value is the response of the dimension corresponding to that factor). In this way, we can observe the variation patterns that emerge in the latent space.

We take StyleGAN pretrained on Shapes3D (synthetic) and FFHQ (real-world). For Shapes3D, we take background color and floor color as the two factors (since they refer to different areas of the image, these two factors are disentangled). For FFHQ, we take smile (mouth) and bald (hair) as the two factors (disentangled, as they refer to different areas). We then choose random directions and the directions discovered by DisCo. The results are shown in Figure 27 and Figure 28. We find a clear difference between random directions and the directions discovered by DisCo. This is because DisCo learns the directions by separating the variations resulting from traversing along them. However, not all directions can be separated. For directions whose variations cannot be recognized or clustered by the encoder E, it is nearly impossible for DisCo to converge to them. Conversely, for directions that can be easily recognized and clustered, DisCo converges to them with higher probability. From the following observations, we find that the variation patterns resulting from the directions corresponding to disentangled factors are easily recognized and clustered.

D.2 WHY DOES DISCO HARDLY CONVERGE TO ENTANGLED CASES?
In the previous section, we showed that DisCo can discover the directions with distinct variation patterns and exclude random directions. Here we discuss why DisCo hardly converges to the following entangled case (a trivial solution based on a disentangled one). Suppose there is an entangled direction of factors A and B (A and B change at the same rate when traversing along it) in the latent space of the generative model, and DisCo can separate the variations resulting from the direction of A and the entangled direction. In that case, DisCo would have no additional bias to update these directions to converge to disentangled ones. In the following, for ease of reference, we denote the entangled direction of factors A and B as the "A+B" direction, and the direction of factor A (along which only A changes) as the "A" direction. The reasons why DisCo hardly converges to the case of A and A+B are two-fold: (i) our encoder is a lightweight network (5 CNN layers + 3 FC layers), for which it is nearly impossible to separate the A and A+B directions; (ii) in the latent space of pretrained generative models, the disentangled directions (A, B) are consistent at different locations.
In contrast, the entangled directions (A+B) are not, as shown in Figure 29. We conduct the following experiments to verify these claims.

For (i), we replace our encoder in DisCo with a ResNet-50 and train DisCo from scratch on the Shapes3D dataset. The loss, MIG, and DCI are presented in Table 11. The trivial solution becomes possible when the encoder is powerful enough to fit the A and A+B directions so that they "become orthogonal". With this consideration, in DisCo we adopt a lightweight encoder to avoid this issue.

For (ii), as the sketch in Figure 29 demonstrates, the disentangled directions ("A", blue; "B", green) are consistent, i.e., invariant to the location in the latent space, while the entangled direction ("A+B", red) is not consistent across locations. The fundamental reason is that the directions of the disentangled variations are invariant to the position in the latent space, but the "rate" of the variation is not. E.g., at any point in the latent space, going "up" constantly changes the camera's pose; however, at point a, going "up" with step 1 means rotating 10 degrees, while at point b, going "up" with step 1 means rotating 5 degrees. When the variation "rates" of "A" and "B" differ, the "A+B" directions at different locations are not consistent. Based on these different properties of disentangled and entangled directions in the latent space, DisCo can discover the disentangled directions with the contrastive loss. The contrastive loss can be understood from the clustering view (Wang & Isola, 2020; Li et al., 2021b): the variations from the disentangled directions are more consistent and can be better clustered than the variations from the entangled ones. Thus, DisCo can discover the disentangled directions in the latent space and learn disentangled representations from images. We further provide the following experiments to support this analysis.

D.2.1 QUANTITATIVE EXPERIMENT
We compare the losses of three different settings:
• A: for a navigator with disentangled directions, we fix the navigator and train the encoder until convergence.
• A+B: for a navigator with entangled directions (we initialize the navigator with linear combinations of the disentangled directions), we fix it and train the encoder until convergence.
• A+B → A: after A+B converges, we update both the encoder and the navigator until convergence.

The contrastive loss after convergence is presented in Table 12. The results show that: (i) the disentangled directions (A) lead to lower loss and better performance than the entangled directions (A+B), indicating no trivial solution; (ii) even though the encoder with A+B has converged, when we optimize the navigator, gradients still backpropagate to the navigator, and it converges to A.

D.2.2 QUALITATIVE EXPERIMENT
We visualize the latent space of the GAN in Figure 30 to examine the variation "rate" in the following way: in the latent space, we select two ground-truth disentangled directions, floor color (A) and background color (B), obtained with supervision via InterFaceGAN (Shen et al., 2020). We conduct equally spaced sampling along the two disentangled directions, A (labeled with green color variation) and B (labeled with gradient blue color), and along the composite direction (A+B, labeled with gradient red color), as shown in Figure 30 (a).
Then we generate the images (also including the other images on the grid, as shown in Figure 30 (b)) and feed the images in the bounding boxes into a "ground truth" encoder (trained with ground-truth disentangled factors) to regress the "ground truth" disentangled representations of the images. In Figure 30 (c), the points labeled with green color are well aligned with the x-axis, indicating only floor color changes; the points labeled with blue variation are well aligned with the y-axis, indicating only background color changes. However, the points labeled with red color are NOT aligned with any line, which indicates that the directions of A+B are not consistent. Further, the variation "rate" depends on the latent space location for the two disentangled directions. This observation supports the idea shown in Figure 29: the different properties of disentangled and entangled directions enable DisCo to discover the disentangled directions in the latent space.

E EXTENSION: BRIDGING A PRETRAINED VAE AND A PRETRAINED GAN
Researchers have recently been interested in improving image quality given the disentangled representation produced by typical disentanglement methods. Lee et al. (2020) propose a post-processing stage using a GAN based on disentangled representations learned by VAE-based disentanglement models; this method sacrifices a little generation ability due to an additional constraint. Similarly, Srivastava et al. (2020) propose to use a deep generative model with AdaIN (Huang & Belongie, 2017) as a post-processing stage to improve the reconstruction ability. Following this setting, we can replace the encoder in DisCo (GAN) with an encoder pretrained by VAE-based disentanglement baselines. In this way, we can bridge a pretrained disentangled VAE and a pretrained GAN, as shown in Figure 31. Compared to previous methods, our method can fully utilize state-of-the-art GANs and state-of-the-art VAE-based methods and does not need to train a deep generative model from scratch.

F DISCUSSION ON THE RELATION BETWEEN BCELOSS AND NCELOSS
We present a deeper discussion of the relation between the BCELoss L_logits and the NCELoss L_NCE, connecting to the NCE paper (Gutmann & Hyvärinen, 2010) and the InfoNCE paper (van den Oord et al., 2018). The discussion proceeds as follows: (i) we first formulate a general problem and derive two objectives, L_1 and L_2, with L_1 an upper bound of L_2; (ii) following Gutmann & Hyvärinen (2010), we show that L_1 is aligned with L_BCE under their setting; (iii) following van den Oord et al. (2018), we show that L_2 is aligned with L_NCE under their setting; (iv) we discuss the relation between these objectives and the losses in our paper.

Part I. Assume we have S observations {x_i}_{i=1}^S from a data distribution p(x), each with a label C_i ∈ {0, 1}. We denote the posterior probabilities as p^+(x) = p(x|C = 1) and p^-(x) = p(x|C = 0). We define two objectives as follows:

L_1 = -\sum_{i=1}^{S} \left[ C_i \log P(C_i = 1|x_i) + (1 - C_i) \log P(C_i = 0|x_i) \right],   (10)

and

L_2 = -\sum_{i=1}^{S} C_i \log P(C_i = 1|x_i).   (11)

Since -\sum_{i=1}^{S} (1 - C_i) \log P(C_i = 0|x_i) \ge 0, we have

L_1 \ge L_2.   (12)

L_1 is thus an upper bound of L_2. This is a general formulation of a binary classification problem. In the context of our paper, each observation is a pair x_i : (q, k_i), with q the query and the key k_i drawn either from a positive key set {k_j^+}_{j=1}^N or from a negative key set {k_m^-}_{m=1}^M (i.e., {k_i}_{i=1}^{N+M} = {k_j^+}_{j=1}^N ∪ {k_m^-}_{m=1}^M), where M = S − N.
C_i is assigned as:

C_i = 1 if k_i ∈ {k_j^+}_{j=1}^N, and C_i = 0 if k_i ∈ {k_m^-}_{m=1}^M.   (13)

In our paper, we have h(x) = exp(q · k/τ).

Part II. In this part, following Gutmann & Hyvärinen (2010), we show that L_1 is aligned with L_logits (Equation 3 in the main paper) under their setting. Following Gutmann & Hyvärinen (2010), we assume the prior distribution P(C = 0) = P(C = 1) = 1/2; by the Bayes rule, we have

P(C = 1|x) = \frac{p(x|C = 1)P(C = 1)}{p(x|C = 1)P(C = 1) + p(x|C = 0)P(C = 0)} = \frac{1}{1 + \frac{p^-(x)}{p^+(x)}},   (14)

and P(C = 0|x) = 1 − P(C = 1|x). On the other hand, the general form of the BCELoss is

L_{BCE} = -\sum_{i=1}^{S} \left[ C_i \log \sigma(q \cdot k_i/\tau) + (1 - C_i) \log(1 - \sigma(q \cdot k_i/\tau)) \right],   (15)

where σ(·) is the sigmoid function. We have

\sigma(q \cdot k/\tau) = \frac{1}{1 + \exp(-q \cdot k/\tau)} = \frac{1}{1 + \frac{1}{\exp(q \cdot k/\tau)}} = \frac{1}{1 + \frac{1}{h(x)}}.   (16)

From Theorem 1 of Gutmann & Hyvärinen (2010), we know that when L_BCE is minimized,

h(x) = \frac{p^+(x)}{p^-(x)}.   (17)

Thus, the BCELoss L_BCE is an approximation of the objective L_1.

Part III. Following van den Oord et al. (2018), we show that L_2 is aligned with L_NCE (Equation 2 in the main paper) under their setting. In the typical contrastive setting of van den Oord et al. (2018), there is only one positive sample among {x_i}_{i=1}^S, and the others are negatives. Then, the probability that x_i is sampled from p^+(x) rather than p^-(x) is

P(C_i = 1|x_i) = \frac{p^+(x_i)\prod_{l \ne i} p^-(x_l)}{\sum_{j=1}^{S} p^+(x_j)\prod_{l \ne j} p^-(x_l)} = \frac{p^+(x_i)/p^-(x_i)}{\sum_{j=1}^{S} p^+(x_j)/p^-(x_j)}.   (18)

From van den Oord et al. (2018), we know that when minimizing Equation 11, we have h(x) = exp(q · k/τ) ∝ p^+(x)/p^-(x). In this case, we obtain the form of L_NCE as

L_{NCE} = -\sum_{i=1}^{S} C_i \log \frac{\exp(q \cdot k_i/\tau)}{\sum_{j=1}^{S} \exp(q \cdot k_j/\tau)}.   (19)

L_NCE is an approximation of L_2.

Part IV. Generalizing the contrastive loss to our setting (N positive samples, M negative samples), the BCELoss (Equation 15) can be reformulated as

\hat{L}_{BCE} = -\sum_{j=1}^{N} \log \sigma(q \cdot k_j^+/\tau) - \sum_{m=1}^{M} \log(1 - \sigma(q \cdot k_m^-/\tau)).   (20)

Similarly, the NCELoss (Equation 19) can be reformulated as

\hat{L}_{NCE} = -\sum_{j=1}^{N} \log \frac{\exp(q \cdot k_j^+/\tau)}{\sum_{s=1}^{M+N} \exp(q \cdot k_s/\tau)}.   (21)

\hat{L}_{BCE} is aligned with L_logits (Equation 3 in the main paper), and \hat{L}_{NCE} is aligned with L_NCE (Equation 2 in the main paper). We thus have L_1 (approximated by L_BCE) as the upper bound of L_2 (approximated by L_NCE). Note, however, that the assumptions made in Parts II and III differ: one is P(C = 0) = P(C = 1), the other is that there is only one positive sample and the others are negatives; also, the extension to our setting is a more general case (N positives, M negatives). Nevertheless, they share the same objective: by contrasting positives and negatives, we can use h(x) = exp(q · k/τ) to estimate p^+/p^-. We can think of h(x) as a similarity score: if q and k form a positive pair (they have the same direction in our paper), h(x) should be as large as possible (p^+/p^- > 1), and vice versa. In this way, we learn representations (q, k) that reflect the image variation, i.e., similar variations have a higher score h(x), while different kinds of variation have a lower score. Such meaningful representations in turn help discover the directions in the latent space that carry different kinds of image variation. This is an understanding, from a contrastive learning view, of how our method works.
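To illustrate how both objectives consume the same similarity scores h(x) = exp(q · k/τ), here is a small numeric sketch of Eqs. (20) and (21) on random unit vectors; all sizes and the temperature are arbitrary choices for the demonstration:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
B, N, M, J, tau = 4, 8, 16, 32, 0.1
q = F.normalize(torch.randn(B, J), dim=1)
k_pos = F.normalize(torch.randn(N, J), dim=1)
k_neg = F.normalize(torch.randn(M, J), dim=1)

pos = q @ k_pos.T / tau  # scores for positive keys, (B, N)
neg = q @ k_neg.T / tau  # scores for negative keys, (B, M)

# \hat{L}_BCE of Eq. (20), averaged over the B queries as in Eq. (3).
l_bce = -(F.logsigmoid(pos).sum(1) + F.logsigmoid(-neg).sum(1)).mean()

# \hat{L}_NCE of Eq. (21): each positive scored against the union of all N + M keys.
all_scores = torch.cat([pos, neg], dim=1)  # (B, N + M)
l_nce = -(pos - torch.logsumexp(all_scores, dim=1, keepdim=True)).sum(1).mean()

print(f"BCE-style loss: {l_bce:.3f}  NCE-style loss: {l_nce:.3f}")
```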
1. What is the focus and contribution of the paper on disentangled directions for pretrained models?
2. What are the strengths of the proposed framework, particularly in terms of its model-agnostic nature and ability to mitigate poor generation quality?
3. What are the weaknesses of the approach, especially regarding the requirement for multiple components and hyperparameter tuning?
4. Do you have any concerns about the necessity of certain components or their impact on the overall performance gain?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
This paper presents a framework to model disentangled directions for pretrained models. Such an approach mitigates the poor generation quality that arises when training models with additional regularization terms to force disentanglement. The underlying idea is contrastive: similar image variations are caused by changing the same factors, in contrast to the remaining image variations. The proposed framework is model-agnostic: it can be applied to GANs, VAEs, and flow models.

Review
Strengths:
• The approach does not require any specific training, and there is no fixed generative model type: it can be applied to GANs, VAEs, and Flow models.
• The method significantly outperforms previous models in terms of disentanglement metrics.
• The method is quite stable to random seeds.
• The authors provide a thorough ablation study, report the model accuracy with std due to random seeds, and check the model's sensitivity to the values of the hyperparameter T.

Weaknesses:
• The approach requires many 'tricks' and parts to work: a Navigator, a Contrastor consisting of two weight-sharing encoders, the contrastive approach, and hard negatives flipping. Each component requires its own set of hyperparameters. The overall performance gain is significant, and the necessity is partially covered in the ablation study section. But I wonder whether two encoders with shared weights are needed, or whether a single encoder would suffice. Is it required to tune hyperparameters for every component?
ICLR
Title Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View Abstract From the intuitive notion of disentanglement, the image variations corresponding to different factors should be distinct from each other, and the disentangled representation should reflect those variations with separate dimensions. To discover the factors and learn disentangled representation, previous methods typically leverage an extra regularization term when learning to generate realistic images. However, the term usually results in a trade-off between disentanglement and generation quality. For the generative models pretrained without any disentanglement term, the generated images show semantically meaningful variations when traversing along different directions in the latent space. Based on this observation, we argue that it is possible to mitigate the trade-off by (i) leveraging the pretrained generative models with high generation quality, (ii) focusing on discovering the traversal directions as factors for disentangled representation learning. To achieve this, we propose Disentaglement via Contrast (DisCo) as a framework to model the variations based on the target disentangled representations, and contrast the variations to jointly discover disentangled directions and learn disentangled representations. DisCo achieves the state-of-the-art disentangled representation learning and distinct direction discovering, given pretrained nondisentangled generative models including GAN, VAE, and Flow. Source code is at https://github.com/xrenaa/DisCo. 1 INTRODUCTION Disentangled representation learning aims to identify and decompose the underlying explanatory factors hidden in the observed data, which is believed by many to be the only way to understand the world for AI fundamentally (Bengio & LeCun, 2007). To achieve the goal, as shown in Figure 1 (a), we need an encoder and a generator. The encoder to extract representations from images with each dimension corresponds to one factor individually. The generator (decoder) decodes the changing of each factor into different kinds of image variations. With supervision, we can constrain each dimension of the representation only sensitive to one kind of image variation caused by changing one factor respectively. However, this kind of exhaustive supervision is often not available in real-world data. The typical unsupervised methods are based on a generative model to build the above encoder and generator framework, e.g., VAE (Kingma & Welling, 2014) provides encoder and generator, and GAN (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2019) provides generator. During the training process of the encoder and generator, to achieve disentangled representation, the typical methods rely on an additional disentanglement regularization term, e.g., the total correlation for VAE-based methods (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or mutual information for InfoGAN-based methods (Chen et al., 2016; Lin et al., 2020). ∗Equal contribution. Work done during internships at Microsoft Research Asia. †Corresponding author However, the extra terms usually result in a trade-off between disentanglement and generation quality (Burgess et al., 2018; Khrulkov et al., 2021). Furthermore, those unsupervised methods have been proved to have an infinite number of entangled solutions without introducing inductive bias (Locatello et al., 2019). 
Recent works (Shen & Zhou, 2021; Khrulkov et al., 2021; Karras et al., 2019; Härkönen et al., 2020; Voynov & Babenko, 2020) show that, for GANs purely trained for image generation, traversing along different directions in the latent space causes different variations of the generated image. This phenomenon indicates that there is some disentanglement property embedded in the latent space of the pretrained GAN. The above observations indicate that training the encoder and generator simultaneous may not be the best choice. We provide an alternative route to learn disentangled representation: fix the pretrained generator, jointly discover the factors in the latent space of the generator and train the encoder to extract disentangled representation, as shown in Figure 1(b). From the intuitive notion of disentangled representation, similar image variations should be caused by changing the same factor, and different image variations should be caused by changing different factors. This provide a novel contrastive learning view for disentangled representation learning and inspires us to propose a framework: Disentanglement via Contrast (DisCo) for disentangled representation learning. In DisCo, changing a factor is implemented by traversing one discovered direction in the latent space. For discovering the factors, DisCo adopts a typical network module, Navigator, to provides candidate traversal directions in the latent space (Voynov & Babenko, 2020; Jahanian et al., 2020; Shen et al., 2020). For disentangled representation learning, to model the various image variations, we propose a novel ∆-Contrastor to build a Variation Space where we apply the contrastive loss. In addition to the above architecture innovations, we propose two key techniques for DisCo: (i) an entropy-based domination loss to encourage the encoded representations to be more disentangled, (ii) a hard negatives flipping strategy for better optimization of Contrastive Loss. We evaluate DisCo on three major generative models (GAN, VAE, and Flow) on three popular disentanglement datasets. DisCo achieves the state-of-the-art (SOTA) disentanglement performance compared to all the previous discovering-based methods and typical (VAE/InfoGAN-based) methods. Furthermore, we evaluate DisCo on the real-world dataset FFHQ (Karras et al., 2019) to demonstrate that it can discover SOTA disentangled directions in the latent space of pretrained generative models. Our main contributions can be summarized as: (i) To our best knowledge, DisCo is the first unified framework for jointly learning disentangled representation and discovering the latent space of pretrained generative models by contrasting the image variations. (ii) We propose a novel ∆-Contrastor to model image variations based on the disentangled representations for utilizing Contrastive Learning. (iii) DisCo is an unsupervised and model-agnostic method that endows non-disentangled VAE, GAN, or Flow models with the SOTA disentangled representation learning and latent space discovering. (iv) We propose two key techniques for DisCo: an entropy-based domination loss and a hard negatives flipping strategy. 2 RELATED WORK Typical unsupervised disentanglement. There have been a lot of studies on unsupervised disentangled representation learning based on VAE (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or InfoGAN (Chen et al., 2016; Lin et al., 2020). 
These methods achieve disentanglement via an extra regularization term, which often sacrifices generation quality (Burgess et al., 2018; Khrulkov et al., 2021). VAE-based methods disentangle the variations by factorizing the aggregated posterior, and InfoGAN-based methods maximize the mutual information between latent factors and related observations. VAE-based methods achieve relatively good disentanglement performance but low-quality generation; InfoGAN-based methods have relatively high generation quality but poor disentanglement performance. Our method supplements generative models pretrained without a disentanglement regularization term with contrastive learning in the Variation Space, achieving both high-fidelity image generation and SOTA disentanglement.

Interpretable directions in the latent space. Recently, researchers have been interested in discovering interpretable directions in the latent space of generative models without supervision, especially for GANs (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2020). Based on the fact that the GAN latent space often possesses semantically meaningful directions (Radford et al., 2015; Shen et al., 2020; Jahanian et al., 2020), Voynov & Babenko (2020) propose a regression-based method to explore interpretable directions in the latent space of a pretrained GAN. Subsequent works focus on extracting the directions from a specific layer of GANs. Härkönen et al. (2020) search for important and meaningful directions by performing PCA in the style space of StyleGAN (Karras et al., 2019; 2020). Shen & Zhou (2021) propose to use the singular vectors of the first layer of a generator as the interpretable directions, and Khrulkov et al. (2021) extend this method to the intermediate layers via the Jacobian matrix. All the above methods only discover interpretable directions in the latent space, except for Khrulkov et al. (2021), which also learns disentangled representations of generated images by training an extra encoder in an extra stage. However, none of these methods outperforms the typical disentanglement methods. Our method is the first to jointly learn the disentangled representation and discover the directions in the latent space.

Contrastive Learning. Contrastive Learning has gained popularity due to its effectiveness in representation learning (He et al., 2020; Grill et al., 2020; van den Oord et al., 2018; Hénaff, 2020; Li et al., 2020; Chen et al., 2020). Typically, contrastive approaches bring representations of different views of the same image (positive pairs) closer and push representations of views from different images (negative pairs) apart using instance-level classification with a Contrastive Loss. Recently, Contrastive Learning has been extended to various tasks, such as image translation (Liu et al., 2021; Park et al., 2020) and controllable generation (Deng et al., 2020). In this work, we focus on the variations of representations and achieve SOTA disentanglement with Contrastive Learning in the Variation Space. Contrastive Learning is suitable for disentanglement because: (i) the actual number of disentangled directions is usually unknown, which is similar to Contrastive Learning for retrieval (Le-Khac et al., 2020); (ii) it works in the representation space directly, without any extra layers for classification or regression.
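To make the discovering-based baselines above concrete, the following is a minimal PyTorch sketch of the GANspace-style procedure; the function name `ganspace_directions`, the sample count, and the assumption that `mapping` denotes the style MLP of a style-based GAN are ours for illustration, not taken from the official implementation.

```python
import torch

@torch.no_grad()
def ganspace_directions(mapping, latent_dim=512, num_samples=10_000, num_dirs=10):
    """Sketch: PCA over sampled style vectors w = mapping(z); the principal
    components serve as candidate interpretable directions."""
    z = torch.randn(num_samples, latent_dim)
    w = mapping(z)                          # (num_samples, style_dim)
    w = w - w.mean(dim=0, keepdim=True)     # center before PCA
    # Right singular vectors of the centered data are the PCA directions.
    _, _, vh = torch.linalg.svd(w, full_matrices=False)
    return vh[:num_dirs]                    # (num_dirs, style_dim)
```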
3 DISENTANGLEMENT VIA CONTRAST

3.1 OVERVIEW OF DISCO

From the contrastive view of the intuitive notion of disentangled representation learning, we propose DisCo to leverage pretrained generative models to jointly discover the factors, embedded as directions in the latent space of the generative models, and learn to extract disentangled representation. The benefits of leveraging a pretrained generative model are two-fold: (i) pretrained models with high-quality image generation are readily available, which is important for reflecting detailed image variations and for downstream tasks like controllable generation; (ii) the factors are embedded in the pretrained model, serving as an inductive bias for unsupervised disentangled representation learning.

DisCo consists of a Navigator, which provides candidate traversal directions in the latent space, and a ∆-Contrastor, which extracts the representation of image variations and builds a Variation Space based on the target disentangled representations. More specifically, the ∆-Contrastor is composed of two shared-weight Disentangling Encoders. The variation between two images is modeled as the difference of their corresponding encoded representations extracted by the Disentangling Encoders. In the Variation Space, by pulling together the variation samples resulting from traversing the same direction and pushing away the ones resulting from traversing different directions, the Navigator learns to discover disentangled directions as factors, and the Disentangling Encoder learns to extract disentangled representations from images. Thus, traversing along the discovered directions causes distinct image variations, which makes separate dimensions of the disentangled representation respond.

Different from VAE-based or InfoGAN-based methods, our disentangled representations and factors live in two separate spaces, which in practice does not affect the applications. Similar to the typical methods, the Disentangling Encoder can extract disentangled representations from images, and the pretrained generative model with discovered factors can be applied to controllable generation. Moreover, DisCo can be applied to different types of generative models.

Here we provide a detailed workflow of DisCo. As Figure 2 shows, given a pretrained generative model G: Z → I, where Z ⊂ R^L denotes the latent space and I denotes the image space, the workflow is: 1) A Navigator A provides a total of D candidate traversal directions in the latent space Z; e.g., in the linear case, A ∈ R^{L×D} is a learnable matrix, and each column is regarded as a candidate direction. 2) Image pairs G(z), G(z′) are generated, where z is sampled from Z and z′ = z + A(d, ε), with d ∈ {1, ..., D}, ε ∈ R, and A(d, ε) denoting the shift along the d-th direction with scalar ε. 3) The ∆-Contrastor, composed of two shared-weight Disentangling Encoders E, encodes the image pair into a sample v ∈ V as

$$v(z, d, \varepsilon) = \left| E(G(z + A(d, \varepsilon))) - E(G(z)) \right|, \quad (1)$$

where V ⊂ R^J_+ denotes the Variation Space. We then apply Contrastive Learning in V to optimize the Disentangling Encoder E to extract disentangled representations and simultaneously enable the Navigator A to find the disentangled directions in the latent space Z.

3.2 DESIGN OF DISCO

We present the design details of DisCo, which include: (i) the collection of the query set Q = {q_i}_{i=1}^B, the positive key set K+ = {k_i^+}_{i=1}^N, and the negative key set K− = {k_i^-}_{i=1}^M, which are three subsets of the Variation Space V; (ii) the formulation of the Contrastive Loss.
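Before detailing the loss, here is a minimal PyTorch sketch of the components above; the class and function names are ours for illustration, the generator G and encoder E are assumed given, and only the linear unit-norm Navigator variant is shown (see the ablation in Section 4.3 for alternatives).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Navigator(nn.Module):
    """D learnable candidate directions in an L-dimensional latent space (linear case)."""
    def __init__(self, latent_dim, num_dirs):
        super().__init__()
        self.A = nn.Parameter(torch.randn(latent_dim, num_dirs))

    def forward(self, d, eps):
        """A(d, eps): a shift of scale eps along the d-th (unit-norm) column of A.
        d: (batch,) LongTensor of direction indices; eps: (batch,) shift scalars."""
        dirs = F.normalize(self.A, dim=0)          # columns rescaled to unit length
        return eps.unsqueeze(1) * dirs[:, d].t()   # (batch, L)

def variation_sample(E, G, z, shift):
    """Eq. (1): one sample of the Variation Space V, unit-normalized to remove
    the effect of the shift scale (as described in Section 3.2)."""
    v = (E(G(z + shift)) - E(G(z))).abs()
    return F.normalize(v, dim=1)
```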
According to our goal of contrasting the variations, the samples from Q and K+ share the same traversal direction and should be pulled together, while the samples from Q and K− have different directions and should be pushed away. Recall that each sample v in V is determined as v(z, d, ε). To set up the contrastive learning process, we construct the query samples q_i = v(z_i, d_i, ε_i), the positive key samples k_i^+ = v(z_i^+, d_i^+, ε_i^+), and the negative key samples k_i^- = v(z_i^-, d_i^-, ε_i^-). Specifically, we randomly sample a direction index d̂ from a discrete uniform distribution U{1, D} and use it for all {d_i}_{i=1}^B and {d_i^+}_{i=1}^N to guarantee they are the same. We randomly sample {d_i^-}_{i=1}^M from the set of remaining directions U{1, D} \ {d̂}, individually and independently, to cover the rest of the directions in the Navigator A. Note that a discovered direction should be independent of the starting point and of the scale of variation, which is in line with the disentangled factors. Therefore, {z_i}_{i=1}^B, {z_i^+}_{i=1}^N, {z_i^-}_{i=1}^M are all sampled from the latent space Z, and {ε_i}_{i=1}^B, {ε_i^+}_{i=1}^N, {ε_i^-}_{i=1}^M are all sampled from a shared continuous uniform distribution U[−ϵ, ϵ], individually and independently. We normalize each sample in Q, K+, and K− to a unit vector to eliminate the impact of different shift scalars.

For the design of the Contrastive Loss, a well-known form is InfoNCE (van den Oord et al., 2018):

$$\mathcal{L}_{NCE} = -\frac{1}{B} \sum_{i=1}^{B} \sum_{j=1}^{N} \log \frac{\exp(q_i \cdot k_j^{+}/\tau)}{\sum_{s=1}^{N+M} \exp(q_i \cdot k_s/\tau)}, \quad (2)$$

where τ is a temperature hyper-parameter and {k_i}_{i=1}^{N+M} = {k_i^+}_{i=1}^N ∪ {k_i^-}_{i=1}^M. InfoNCE originates from the BCELoss (Gutmann & Hyvärinen, 2010), which has also been used directly for contrastive learning (Wu et al., 2018; Le-Khac et al., 2020; Mnih & Kavukcuoglu, 2013; Mnih & Teh, 2012). We follow the latter and use the BCELoss L_logits to reduce the computational cost:

$$\mathcal{L}_{logits} = -\frac{1}{B} \sum_{i=1}^{B} \left( l_i^{+} + l_i^{-} \right), \quad (3)$$

$$l_i^{+} = \sum_{j=1}^{N} \log \sigma(q_i \cdot k_j^{+}/\tau), \qquad l_i^{-} = \sum_{m=1}^{M} \log\left(1 - \sigma(q_i \cdot k_m^{-}/\tau)\right), \quad (4)$$

where σ denotes the sigmoid function, l_i^+ denotes the part for positive samples, and l_i^- denotes the part for the negative ones. Note that we use a shared positive set for the B different queries to reduce the computational cost.

3.3 KEY TECHNIQUES FOR DISCO

Entropy-based domination loss. By optimizing the Contrastive Loss, the Navigator A is optimized to find the disentangled directions in the latent space, and the Disentangling Encoder E is optimized to extract disentangled representations from images. To make the encoded representations even more disentangled, i.e., such that traversing along one disentangled direction makes only one dimension of the encoded representation respond, we propose an entropy-based domination loss that encourages the corresponding samples in the Variation Space to be one-hot. To implement it, we first compute the mean c of Q and K+ as

$$c = \frac{1}{B+N} \left( \sum_{i=1}^{B} q_i + \sum_{i=1}^{N} k_i^{+} \right). \quad (5)$$

We then compute the probability p_i = exp(c(i)) / Σ_{j=1}^{J} exp(c(j)), where c(i) is the i-th element of c and J is the number of dimensions of c. The entropy-based domination loss L_ed is

$$\mathcal{L}_{ed} = -\frac{1}{J} \sum_{j=1}^{J} p_j \log(p_j). \quad (6)$$

Hard negatives flipping. Since the latent space of a generative model is a high-dimensional complex manifold, many different directions carry the same semantic meaning. Such directions result in hard negatives during the optimization of the Contrastive Loss. The hard negatives here are different from those in self-supervised representation learning (He et al., 2020; Coskun et al., 2018), where reliable annotations of the samples are available. Here, our hard negatives are more likely to be “false” negatives, so we choose to flip them into positives. Specifically, we use a threshold T to identify the hard negative samples and use their similarity to the queries as pseudo-labels:

$$\hat{l}_i^{-} = \sum_{\alpha_{ij} < T} \log(1 - \sigma(\alpha_{ij})) + \sum_{\alpha_{ij} \ge T} \alpha_{ij} \log \sigma(\alpha_{ij}), \quad (7)$$

where \hat{l}_i^- denotes the modified l_i^- and α_ij = q_i · k_j^-/τ. The modified final BCELoss is:

$$\mathcal{L}_{logits\text{-}f} = -\frac{1}{B} \sum_{i=1}^{B} \left( l_i^{+} + \hat{l}_i^{-} \right). \quad (8)$$

Full objective. With the above two techniques, the full objective is:

$$\mathcal{L} = \mathcal{L}_{logits\text{-}f} + \lambda \mathcal{L}_{ed}, \quad (9)$$

where λ is the weighting hyper-parameter for the entropy-based domination loss L_ed.
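For concreteness, the following is a minimal PyTorch sketch of the full objective in Eqs. (3)-(9), assuming a shared positive set and a shared negative set of unit-normalized variation samples; the default values of tau, T, and lam are illustrative assumptions rather than the paper's tuned hyper-parameters.

```python
import torch
import torch.nn.functional as F

def disco_loss(q, k_pos, k_neg, tau=0.1, T=0.8, lam=1.0):
    """q: (B, J) queries; k_pos: (N, J) positives; k_neg: (M, J) negatives."""
    pos = q @ k_pos.t() / tau                    # (B, N) positive logits
    alpha = q @ k_neg.t() / tau                  # (B, M) negative logits alpha_ij
    l_pos = F.logsigmoid(pos).sum(dim=1)         # positive part of Eq. (4)
    flip = (alpha >= T).float()                  # hard ("false") negatives, Eq. (7)
    l_neg = (F.logsigmoid(-alpha) * (1 - flip)   # log(1 - sigmoid(x)) = logsigmoid(-x)
             + alpha * F.logsigmoid(alpha) * flip).sum(dim=1)
    l_logits_f = -(l_pos + l_neg).mean()         # Eq. (8); reduces to Eq. (3) if no flip
    # Entropy-based domination loss, Eqs. (5)-(6).
    c = torch.cat([q, k_pos], dim=0).mean(dim=0) # mean of Q and K+ in the Variation Space
    p = F.softmax(c, dim=0)
    l_ed = -(p * p.log()).sum() / p.numel()
    return l_logits_f + lam * l_ed               # full objective, Eq. (9)
```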
4 EXPERIMENT

In this section, we first follow the well-accepted protocol (Locatello et al., 2019; Khrulkov et al., 2021) to evaluate the learned disentangled representation, which also implicitly reflects the performance of the discovered directions (Lin et al., 2020) (Section 4.1). Secondly, we follow Li et al. (2021a) to directly evaluate the discovered directions (Section 4.2). Finally, we conduct an ablation study (Section 4.3).

4.1 EVALUATIONS ON DISENTANGLED REPRESENTATION

4.1.1 EXPERIMENTAL SETUP

Datasets. We consider the following popular datasets in the disentanglement area: Shapes3D (Kim & Mnih, 2018) with 6 ground truth factors, MPI3D (Gondal et al., 2019) with 7 ground truth factors, and Cars3D (Reed et al., 2015) with 3 ground truth factors. In the experiments on these datasets, images are resized to 64×64 resolution.

Pretrained generative models. For GAN, we use the StyleGAN2 model (Karras et al., 2020). For VAE, we use a common convolutional structure (Locatello et al., 2019). For Flow, we use Glow (Kingma & Dhariwal, 2018).

Baselines. For the typical disentanglement baselines, we choose FactorVAE (Kim & Mnih, 2018), β-TCVAE (Chen et al., 2018), and InfoGAN-CR (Lin et al., 2020). For discovering-based methods, we consider several recent methods: GANspace (GS) (Härkönen et al., 2020), LatentDiscovery (LD) (Voynov & Babenko, 2020), ClosedForm (CF) (Shen & Zhou, 2021), and DeepSpectral (DS) (Khrulkov et al., 2021). For these methods, we follow Khrulkov et al. (2021) to train an additional encoder to extract disentangled representation. We are the first to extract disentangled representations from pretrained VAE and Flow, so we extend LD to VAE and Flow as a baseline.

Disentanglement metrics. We mainly consider two representative ones: the Mutual Information Gap (MIG) (Chen et al., 2018) and the Disentanglement metric (DCI) (Eastwood & Williams, 2018). MIG requires each factor to be perturbed only by changes of a single dimension of the representation. DCI requires each dimension to encode the information of only a single dominant factor. We thus evaluate disentanglement in terms of both representation and factors. We also provide results for the β-VAE score (Higgins et al., 2017) and FactorVAE score (Kim & Mnih, 2018) in Appendix B.3.
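As a reference for how such scores are computed, below is a small sketch of MIG under common conventions (discretize each code dimension, compute its mutual information with each ground-truth factor, and average the normalized gap between the top two); the binning granularity and function name are our assumptions, not a prescribed evaluation protocol.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mig_score(codes, factors, bins=20):
    """codes: (num_samples, J) continuous representations;
    factors: (num_samples, K) discrete ground-truth factor values."""
    # Discretize each code dimension into histogram bins.
    disc = [np.digitize(c, np.histogram(c, bins)[1][:-1]) for c in codes.T]
    gaps = []
    for k in range(factors.shape[1]):
        f = factors[:, k]
        mi = np.array([mutual_info_score(f, d) for d in disc])
        top2 = np.sort(mi)[-2:]                 # two largest mutual informations
        entropy = mutual_info_score(f, f)       # H(f) = I(f; f)
        gaps.append((top2[1] - top2[0]) / entropy)
    return float(np.mean(gaps))
```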
Randomness. We consider the randomness caused by random seeds and by the strength of the regularization term (Locatello et al., 2019). For random seeds, we follow the same setting as the baselines. Since DisCo does not have a regularization term, we instead consider the randomness of the pretrained generative models. For all methods, we ensure there are 25 runs, except that Glow has only one run, limited by GPU resources. More details are presented in Appendix A.

4.1.2 EXPERIMENTAL RESULTS

The quantitative results are summarized in Table 1 and Figure 3. More details about the experimental settings and results are presented in Appendices A & C.

DisCo vs. typical baselines. DisCo achieves SOTA performance consistently in terms of MIG and DCI scores. The variance due to randomness of DisCo tends to be smaller than that of the typical baselines. We demonstrate that a method extracting disentangled representation from pretrained non-disentangled models can outperform typical disentanglement baselines.

DisCo vs. discovering-based methods. Among the baselines based on discovering pretrained GAN, CF achieves the best performance. DisCo outperforms CF in almost all cases by a large margin. Besides, these baselines need an extra stage (Khrulkov et al., 2021) to get disentangled representation, while our Disentangling Encoder can extract disentangled representation directly.

4.2 EVALUATIONS ON DISCOVERED DIRECTIONS

To evaluate the discovered directions, we compare DisCo on StyleGAN2 with GS, LD, CF, and DS on the real-world dataset FFHQ (Karras et al., 2019), for which the above disentanglement metrics (DCI and MIG) are not available, and adopt the comprehensive Manipulation Disentanglement Score (MDS) (Li et al., 2021a) as the metric. To calculate MDS, we use the 40 CelebaHQ-Attributes predictors released by StyleGAN. Among them, we select Young, Smile, Bald, and Blonde Hair, as these attributes have an available predictor and are commonly found by all methods. The results are summarized in Table 3. DisCo shows better overall performance than the other baselines, which supports our assumption that learning disentangled representation benefits latent space discovering. We also provide qualitative comparisons in Figure 4. Finally, we provide an intuitive analysis in Appendix D of why DisCo can find those disentangled directions.

4.3 ABLATION STUDY

In this section, we perform the ablation study of DisCo only on GAN due to space limits. For these experiments, we use the Shapes3D dataset, and the random seed is fixed.

Choice of latent space. For style-based GANs (Karras et al., 2019; 2020), there is a style space W, the output of the style network (an MLP) whose input is the random latent space Z. As demonstrated in Karras et al. (2019), W is more interpretable than Z. We conduct experiments on W and Z respectively to see how the latent space influences performance. As shown in Table 4, DisCo on W is better, indicating that the better the latent space is organized, the better the disentanglement DisCo can achieve.

Choices of A. Following the setting of Voynov & Babenko (2020), we mainly consider three options for A: a linear operator with all matrix columns having unit length, a linear operator with orthonormal matrix columns, and a nonlinear operator of 3 fully-connected layers. The results are shown in Table 4. For both latent spaces W and Z, A with unit-norm columns achieves nearly the best performance in terms of MIG and DCI scores. Compared to A with orthonormal matrix columns, A with unit-norm columns is more expressive with fewer constraints. Another possible reason is that A is global, not conditioned on the latent code z; a nonlinear operator would be more suitable for a local navigator A.
For such a much more complex local and non-linear setting, more inductive bias or supervision would need to be introduced.

Entropy-based domination loss. Here, we verify the effectiveness of the entropy-based domination loss L_ed for disentanglement. For a desirable disentangled representation, one semantic meaning corresponds to one dimension. As shown in Table 4, L_ed improves the performance by a large margin. We also visualize the Variation Space to further demonstrate the effectiveness of the proposed loss in Figure 5: adding the domination loss pushes the samples in the Variation Space to be one-hot, which is desirable for disentanglement.

Hard negatives flipping. We run DisCo with and without the hard negatives flipping strategy to study its influence. As shown in Table 4, flipping hard negatives improves the disentanglement ability of DisCo. The reason is that the hard negatives have the same semantics as the positive samples, so treating them as negatives does not make sense; flipping them with pseudo-labels makes the optimization of Contrastive Learning easier.

Hyperparameters N & M. We run DisCo with different ratios N : M under a fixed sum of 96, and with different sums N + M under a fixed ratio of 1 : 2, to study their impact. As shown in Figure 6 (a), the best ratio is N : M = 32 : 64 = 1 : 2; the red line (MIG) and blue line (DCI) show that larger or smaller ratios hurt DisCo, indicating that DisCo requires a balance between N and M. As shown in Figure 6 (b), the sum N + M has only a slight impact on DisCo. The other hyperparameters are set empirically; more details are presented in Appendix A.

Contrast vs. Classification. To verify the effectiveness of Contrast, we substitute it with classification by adopting an additional linear layer to recover the corresponding direction index and the shift along this direction. As Table 2 shows, Contrastive Learning outperforms Classification significantly.

Concatenation vs. Variation. We further demonstrate that the Variation Space is crucial for DisCo. Replacing the difference operator with concatenation makes the performance drop significantly (Table 2), indicating that the encoded representation is no longer well disentangled. In other words, the disentangled representations of images are achieved by Contrastive Learning in the Variation Space.

4.4 ANALYSIS OF DIFFERENT GENERATIVE MODELS

As shown in Table 1, DisCo generalizes well to different generative models (GAN, VAE, and Flow). DisCo on GAN and VAE achieves relatively good performance, while DisCo on Flow is not as good. The possible reason is similar to the choice of the latent space of GAN: we assume the disentangled directions are globally linear and thus use a linear navigator; in contrast to GAN and VAE, we suspect that Flow may not conform to this assumption well. Furthermore, Flow suffers from high GPU cost and unstable training, which limits further exploration.

5 CONCLUSION

In this paper, we present an unsupervised and model-agnostic method, DisCo, a Contrastive Learning framework that learns disentangled representation by exploiting pretrained generative models. We propose an entropy-based domination loss and a hard negatives flipping strategy to achieve better disentanglement. DisCo outperforms typical unsupervised disentanglement methods while maintaining high image quality.
We pinpoint a new direction: Contrastive Learning can be effectively applied to extract disentangled representation from pretrained generative models. For some particularly complex generative models, the global linear assumption on disentangled directions in the latent space could be a limitation. For future work, extending DisCo to the existing VAE-based disentanglement framework is an exciting direction.

A.2 SETTINGS FOR BASELINES

In this section, we describe the implementation settings for the baselines (including randomness).

VAE-based methods. We choose FactorVAE and β-TCVAE as the SOTA VAE-based methods and follow Locatello et al. (2019) to use the same encoder and decoder architecture. For the hyper-parameters, we use the best settings found by grid search. We set the latent dimension of the representation to 10. For FactorVAE, we set the hyperparameter γ to 10. For β-TCVAE, we set the hyperparameter β to 6. Considering that our method has 25 runs, we run each model 25 times with different random seeds to make the comparison fair.

InfoGAN-based methods. We choose InfoGAN-CR as a baseline. We use the official implementation (https://github.com/fjxmlzn/InfoGAN-CR) with the best hyperparameter settings found by grid search. We run 25 times with different random seeds.

Discovering-based methods. We follow Khrulkov et al. (2021) to use the same settings for the following four baselines: LD (GAN), CF, GS, and DS. Similar to DisCo, discovering-based methods do not have a regularization term; thus, for randomness, we adopt the same strategy as for DisCo. We take the top-10 directions for 5 different random seeds for the GAN and 5 different random seeds for the additional encoder that learns disentangled representations.

LD (VAE) & LD (Flow). We follow LD (GAN) with the same settings and substitute the GAN with a VAE / Glow. The only exception is the randomness for LD (Flow): we only run one random seed to pretrain the Glow and use one random seed for the encoder.

A.3 MANIPULATION DISENTANGLEMENT SCORE

As discussed in Li et al. (2021a), it is difficult to compare the performance of different methods on discovering the latent space, as they often use model-specific hyper-parameters to control the editing strength. Thus, Li et al. (2021a) propose a comprehensive metric called the Manipulation Disentanglement Score (MDS), which takes both the accuracy and the disentanglement of manipulation into consideration. For more details, please refer to Li et al. (2021a).

A.4 DOMAIN GAP PROBLEM

Note that there exists a domain gap between the generated images of pretrained generative models and real images. However, the good performance on the disentanglement metrics shows that this domain gap has limited influence on DisCo.

A.5 ARCHITECTURE

Here, we provide the model architectures used in our work. For the architecture of StyleGAN2, we follow Khrulkov et al. (2021). For the architecture of Glow, we use the open-source implementation at https://github.com/rosinality/glow-pytorch.

B MORE EXPERIMENTS

B.1 MORE QUALITATIVE COMPARISON

We provide some examples for qualitative comparison. We first demonstrate the trade-off problem of the VAE-based methods. As shown in Figure 7, DisCo leverages the pretrained generative model and does not suffer from the trade-off between disentanglement and generation quality. Furthermore, as shown in Figure 8 and Figure 9, VAE-based methods suffer from poor image quality.
When changing one attribute, the results of discovering-based methods tend to also change other attributes. We also provide qualitative comparisons between DisCo and InfoGAN-CR. Note that the latent space of InfoGAN-CR is not aligned with the pretrained StyleGAN2. InfoGAN-CR also suffers from the trade-off problem, and its disentanglement ability is worse than that of DisCo. We explain the comparison in the main paper and show more manipulation comparisons here.

B.2 ANALYSIS OF THE LEARNED DISENTANGLED REPRESENTATIONS

We feed the images traversing the three most significant factors (wall color, floor color, and object color) of Shapes3D into the Disentangling Encoder and plot the corresponding dimensions of the encoded representations to visualize the learned disentangled space. The location of each point is the disentangled representation of the corresponding image. An ideal result is that all the points form a cube and the color variation is continuous. We consider the three baselines with relatively higher MIG and DCI: CF, DS, and LD. As the figures below show (panels: CF, DS, LD, Ours), the points in the latent spaces of CF and DS are not well organized, and the latent spaces of all three baselines are not well aligned with the axes, especially for LD. DisCo learns a well-aligned and well-organized latent space, which signifies better disentanglement.

B.3 MORE QUANTITATIVE COMPARISON

We provide additional quantitative comparisons in terms of the β-VAE score and FactorVAE score. DisCo on pretrained GAN is comparable to the discovering-based baselines in terms of these two scores, suggesting some disagreement between them and MIG/DCI. However, note that the qualitative evaluations in Figure 8, Figure 9, and Section B.2 show that the disentanglement ability of DisCo is better than that of all the baselines on the Shapes3D dataset. We also provide an additional experiment on the Noisy-DSprites dataset, where we compare DisCo with β-TCVAE (the best typical method) and CF (the best discovering-based method) in terms of the MIG and DCI metrics.

C LATENT TRAVERSALS

In this section, we visualize the disentangled directions of the latent space discovered by DisCo on each dataset. For Cars3D, Shapes3D, Anime, and MNIST, the image resolution is 64×64. For FFHQ, LSUN cat, and LSUN church, the image resolution is 256×256. Besides StyleGAN2, we also provide results of Spectral Norm GAN (Miyato et al., 2018) (using https://github.com/anvoynov/GANLatentDiscovery) on MNIST (LeCun et al., 2010) and Anime Face (Jin et al., 2017) to demonstrate that DisCo generalizes well to other types of GAN.

D AN INTUITIVE ANALYSIS FOR DISCO

DisCo works by contrasting the variations resulting from traversing along the directions provided by the Navigator. Is this sufficient to converge to a disentangled solution? Note that this question is very challenging: to our best knowledge, for unsupervised disentangled representation learning, there is no sufficient theoretical constraint to guarantee convergence to a disentangled solution (Locatello et al., 2019). Here we provide an intuitive analysis of how DisCo finds disentangled directions in the latent space, supported by our observations on pretrained GANs both quantitatively and qualitatively.
The intuitive analysis consists of two parts: (i) the directions that DisCo discovers have variation patterns distinct from those of random directions; (ii) DisCo hardly converges to an entangled solution.

D.1 WHAT KIND OF DIRECTIONS CAN DISCO CONVERGE TO?

First, we visualize the latent space and show that it contains characteristic variation patterns for disentangled factors. We design the following visualization method. Given a pretrained GAN and two directions in the latent space, we traverse along the plane spanned by the two directions to generate a grid of images. The range is large enough to cover all values of the corresponding disentangled factors, and the step is small enough to obtain a dense grid. Then, we feed these images into an encoder trained with ground-truth factor labels and obtain a heatmap for each factor (the value is the response of the dimension corresponding to that factor). In this way, we can observe the variation pattern that emerges in the latent space.

We take StyleGAN pretrained on Shapes3D (synthetic) and FFHQ (real-world). For Shapes3D, we take background color and floor color as the two factors (since they refer to different areas of the image, these two factors are disentangled). For FFHQ, we take smile (mouth) and bald (hair) as the two factors (disentangled for the same reason). We then compare random directions with the directions discovered by DisCo. The results are shown in Figure 27 and Figure 28. We find a clear difference between random directions and directions discovered by DisCo. This is because DisCo learns the directions by separating the variations that result from traversing along them. However, not all directions can be separated. For directions whose variations cannot be recognized or clustered by the encoder E, it is nearly impossible for DisCo to converge to them. Conversely, for directions that can be easily recognized and clustered, DisCo converges to them with higher probability. From the following observations, we find that the variation patterns resulting from directions corresponding to disentangled factors are easily recognized and clustered.
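A minimal sketch of this grid-traversal visualization, assuming a pretrained generator G, a supervised factor encoder E_gt, and two given latent directions; the function name and grid parameters are ours for illustration (batching over the grid is omitted for brevity).

```python
import torch

@torch.no_grad()
def factor_heatmaps(G, E_gt, z0, dir_a, dir_b, steps=21, span=3.0):
    """Traverse the plane spanned by dir_a and dir_b around z0 and record
    each factor's response, giving one (steps x steps) heatmap per factor."""
    ts = torch.linspace(-span, span, steps)
    grid = torch.stack([z0 + a * dir_a + b * dir_b for a in ts for b in ts])
    factors = E_gt(G(grid))                       # (steps**2, num_factors)
    return factors.t().reshape(-1, steps, steps)  # (num_factors, steps, steps)
```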
D.2 WHY DOES DISCO HARDLY CONVERGE TO ENTANGLED CASES?

In the previous section, we showed that DisCo can discover directions with distinct variation patterns and exclude random directions. Here we discuss why DisCo hardly converges to the following entangled case (a trivial solution built on a disentangled one). Suppose there is an entangled direction of factors A and B (A and B change at the same rate when traversing along it) in the latent space, and DisCo can separate the variations resulting from the direction of A and this entangled direction. In that case, DisCo would have no additional bias to update these directions toward disentangled ones. In the following, for ease of reference, we denote the entangled direction of factors A and B as the A+B direction, and the direction of factor A (only A changes when traversing along it) as the A direction.

The reasons why DisCo hardly converges to the A and A+B case are two-fold: (i) Our encoder is a lightweight network (5 CNN layers + 3 FC layers); it is nearly impossible for it to separate the A and A+B directions. (ii) In the latent space of pretrained generative models, the disentangled directions (A, B) are consistent across different locations, whereas the entangled directions (A+B) are not, as shown in Figure 29. We conduct the following experiments to verify these points.

For (i), we replace our encoder in DisCo with a ResNet-50 and train DisCo from scratch on the Shapes3D dataset. The loss, MIG, and DCI are presented in Table 11. The trivial solution becomes possible when the encoder is powerful enough to fit the A and A+B directions so that they “become orthogonal”. With this consideration, in DisCo we adopt a lightweight encoder to avoid this issue.

For (ii), as the sketch in Figure 29 demonstrates, the disentangled directions (“A”, blue; “B”, green) are consistent, i.e., invariant to the location in the latent space, while the entangled directions (“A+B”, red) are not consistent across locations. The fundamental reason is that the directions of the disentangled variations are invariant to the position in the latent space, but the “rate” of the variation is not. E.g., at any point in the latent space, going “up” consistently changes the camera pose; however, at point a, going “up” with step 1 rotates the camera by 10 degrees, while at point b, going “up” with step 1 rotates it by 5 degrees. When the variation “rates” of “A” and “B” differ, the “A+B” directions at different locations are not consistent.

Based on these different properties of disentangled and entangled directions in the latent space, DisCo can discover the disentangled directions with the contrastive loss. The contrastive loss can be understood from the clustering view (Wang & Isola, 2020; Li et al., 2021b): the variations from disentangled directions are more consistent and can be clustered better than the variations from entangled ones. Thus, DisCo can discover the disentangled directions in the latent space and learn disentangled representations from images. We further provide the following experiments to support this analysis.

D.2.1 QUANTITATIVE EXPERIMENT

We compare the losses of three different settings:
• A: For a navigator with disentangled directions, we fix the navigator and train the encoder until convergence.
• A+B: For a navigator with entangled directions (we use a linear combination of the disentangled directions to initialize the navigator), we fix it and train the encoder until convergence.
• A+B → A: After the A+B setting converges, we update both the encoder and the navigator until convergence.

The contrastive loss after convergence is presented in Table 12. The results show that: (i) the disentangled directions (A) lead to a lower loss and better performance than the entangled directions (A+B), indicating that there is no trivial solution; (ii) even though the encoder in the A+B setting has converged, once we optimize the navigator, gradients still backpropagate to it and it converges to A.

D.2.2 QUALITATIVE EXPERIMENT

We visualize the latent space of the GAN in Figure 30 to examine the variation “rate” in the following way: in the latent space, we select two ground-truth disentangled directions, floor color (A) and background color (B), obtained with supervision via InterFaceGAN (Shen et al., 2020), and conduct equally spaced sampling along the two disentangled directions A (labeled with a green color gradient) and B (labeled with a blue color gradient) and the composite direction A+B (labeled with a red color gradient), as shown in Figure 30 (a).
Then we generate the images (including the other images on the grid, as shown in Figure 30 (b)) and feed the images in the bounding boxes into a “ground truth” encoder (trained with ground-truth disentangled factors) to regress the “ground truth” disentangled representations of the images. In Figure 30 (c), the points labeled with green are well aligned with the x-axis, indicating that only the floor color changes, and the points labeled with blue are well aligned with the y-axis, indicating that only the background color changes. However, the points labeled with red are NOT aligned with any line, which indicates that the directions of A+B are not consistent. Further, even for the two disentangled directions, the variation “rate” depends on the location in the latent space. This observation supports the idea illustrated in Figure 29. The different properties of disentangled and entangled directions enable DisCo to discover the disentangled directions in the latent space.

E EXTENSION: BRIDGING A PRETRAINED VAE AND A PRETRAINED GAN

Researchers have recently been interested in improving image quality given the disentangled representation produced by typical disentanglement methods. Lee et al. (2020) propose a post-processing stage using a GAN based on disentangled representations learned by VAE-based disentanglement models; this method sacrifices a little generation ability due to an additional constraint. Similarly, Srivastava et al. (2020) propose a deep generative model with AdaIN (Huang & Belongie, 2017) as a post-processing stage to improve reconstruction. Following this setting, we can replace the encoder in DisCo (GAN) with an encoder pretrained by a VAE-based disentanglement baseline. In this way, we bridge the pretrained disentangled VAE and the pretrained GAN, as shown in Figure 31. Compared to previous methods, our method can fully utilize the state-of-the-art GAN and the state-of-the-art VAE-based method and does not need to train a deep generative model from scratch.

F DISCUSSION ON THE RELATION BETWEEN BCELOSS AND NCELOSS

We present a deeper discussion of the relation between the BCELoss L_logits and the NCELoss L_NCE, related to the NCE paper (Gutmann & Hyvärinen, 2010) and the InfoNCE paper (van den Oord et al., 2018). The discussion proceeds as follows: (i) we first formulate a general problem and obtain two objectives, L1 and L2, where L1 is an upper bound of L2; (ii) following Gutmann & Hyvärinen (2010), we show that L1 is aligned with L_BCE under their setting; (iii) following van den Oord et al. (2018), we show that L2 is aligned with L_NCE under their setting; (iv) we discuss the relation between these objectives and the losses in our paper.

Part I. Assume we have S observations {x_i}_{i=1}^S from a data distribution p(x), each with a label C_i ∈ {0, 1}. We denote the class-conditional densities as p^+(x) = p(x|C = 1) and p^-(x) = p(x|C = 0). We define two objectives:

$$\mathcal{L}_1 = -\sum_{i=1}^{S} \left[ C_i \log P(C_i = 1 \mid x_i) + (1 - C_i) \log P(C_i = 0 \mid x_i) \right], \quad (10)$$

$$\mathcal{L}_2 = -\sum_{i=1}^{S} C_i \log P(C_i = 1 \mid x_i). \quad (11)$$

Since $-\sum_{i=1}^{S} (1 - C_i) \log P(C_i = 0 \mid x_i) \ge 0$, we have

$$\mathcal{L}_1 \ge \mathcal{L}_2, \quad (12)$$

i.e., L1 is an upper bound of L2. This is a general formulation of a binary classification problem. In the context of our paper, each observation is a pair x_i : (q, k_i), with q the query and the key k_i drawn either from a positive key set {k_j^+}_{j=1}^N or from a negative key set {k_m^-}_{m=1}^M (i.e., {k_i}_{i=1}^{N+M} = {k_j^+}_{j=1}^N ∪ {k_m^-}_{m=1}^M), where M = S − N.
C_i is assigned as

$$C_i = \begin{cases} 1, & k_i \in \{k_j^+\}_{j=1}^{N} \\ 0, & k_i \in \{k_m^-\}_{m=1}^{M} \end{cases} \quad (13)$$

and in our paper we have h(x) = exp(q · k/τ).

Part II. Following Gutmann & Hyvärinen (2010), we show that L1 is aligned with L_logits (Equation 3 in the main paper) under their setting. Assuming the prior distribution P(C = 0) = P(C = 1) = 1/2, the Bayes rule gives

$$P(C = 1 \mid x) = \frac{p(x \mid C = 1) P(C = 1)}{p(x \mid C = 1) P(C = 1) + p(x \mid C = 0) P(C = 0)} = \frac{1}{1 + \frac{p^-(x)}{p^+(x)}}, \quad (14)$$

and P(C = 0|x) = 1 − P(C = 1|x). On the other hand, the general form of the BCELoss is

$$\mathcal{L}_{BCE} = -\sum_{i=1}^{S} \left[ C_i \log \sigma(q \cdot k_i / \tau) + (1 - C_i) \log(1 - \sigma(q \cdot k_i / \tau)) \right], \quad (15)$$

where σ(·) is the sigmoid function. We have

$$\sigma(q \cdot k / \tau) = \frac{1}{1 + \exp(-q \cdot k / \tau)} = \frac{1}{1 + \frac{1}{h(x)}}. \quad (16)$$

From Theorem 1 of Gutmann & Hyvärinen (2010), we know that when L_BCE is minimized,

$$h(x) = \frac{p^+(x)}{p^-(x)}. \quad (17)$$

Thus, the BCELoss L_BCE is an approximation of the objective L1.

Part III. Following van den Oord et al. (2018), we show that L2 is aligned with L_NCE (Equation 2 in the main paper) under their setting, i.e., the typical contrastive setting in which exactly one of the S samples is positive and the rest are negative. The probability that x_i is the sample drawn from p^+(x) rather than p^-(x) is

$$P(C_i = 1 \mid x_i) = \frac{p^+(x_i) \prod_{l \neq i} p^-(x_l)}{\sum_{j=1}^{S} p^+(x_j) \prod_{l \neq j} p^-(x_l)} = \frac{\frac{p^+(x_i)}{p^-(x_i)}}{\sum_{j=1}^{S} \frac{p^+(x_j)}{p^-(x_j)}}. \quad (18)$$

From van den Oord et al. (2018), we know that when minimizing Equation 11, we have h(x) = exp(q · k/τ) ∝ p^+(x)/p^-(x). In this case, we obtain the form of L_NCE as

$$\mathcal{L}_{NCE} = -\sum_{i=1}^{S} C_i \log \frac{\exp(q \cdot k_i / \tau)}{\sum_{j=1}^{S} \exp(q \cdot k_j / \tau)}. \quad (19)$$

L_NCE is an approximation of L2.

Part IV. We now generalize the contrastive losses to our setting (N positive samples, M negative samples). The BCELoss (Equation 15) can be reformulated as

$$\hat{\mathcal{L}}_{BCE} = -\sum_{j=1}^{N} \log \sigma(q \cdot k_j^+ / \tau) - \sum_{m=1}^{M} \log(1 - \sigma(q \cdot k_m^- / \tau)). \quad (20)$$

Similarly, the NCELoss (Equation 19) can be reformulated as

$$\hat{\mathcal{L}}_{NCE} = -\sum_{j=1}^{N} \log \frac{\exp(q \cdot k_j^+ / \tau)}{\sum_{s=1}^{M+N} \exp(q \cdot k_s / \tau)}. \quad (21)$$

Here, \hat{L}_BCE is aligned with L_logits (Equation 3 in the main paper), and \hat{L}_NCE is aligned with L_NCE (Equation 2 in the main paper). We thus have L1 (approximated by L_BCE) as the upper bound of L2 (approximated by L_NCE). Note, however, that the assumptions made in Part II and Part III differ: one assumes P(C = 0) = P(C = 1), while the other assumes exactly one positive sample among the candidates; our setting (N positives, M negatives) is a more general case still. Nevertheless, they share the same objective: by contrasting positives against negatives, we can use h(x) = exp(q · k/τ) to estimate p^+/p^-. We can think of h(x) as a similarity score: if q and k form a positive pair (i.e., they correspond to the same direction in our paper), h(x) should be as large as possible (p^+/p^- > 1), and vice versa. In this way, we learn representations (q, k) that reflect the image variation, i.e., similar variations have a high score h(x), while different kinds of variation have a low score. Such meaningful representations in turn help to discover the directions in the latent space that carry different kinds of image variation. This is an understanding, from a contrastive learning view, of how our method works.
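To make the two generalized losses in Eqs. (20)-(21) concrete, here is a small PyTorch sketch computing both for a single query; the shapes and the temperature default are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def bce_and_nce(q, k_pos, k_neg, tau=0.1):
    """q: (J,) query; k_pos: (N, J) positives; k_neg: (M, J) negatives."""
    pos = (k_pos @ q) / tau                                # (N,) positive logits
    neg = (k_neg @ q) / tau                                # (M,) negative logits
    l_bce = -(F.logsigmoid(pos).sum()                      # Eq. (20)
              + F.logsigmoid(-neg).sum())                  # log(1 - sigmoid(x)) = logsigmoid(-x)
    all_logits = torch.cat([pos, neg])                     # (N + M,)
    l_nce = -(pos - torch.logsumexp(all_logits, 0)).sum()  # Eq. (21)
    return l_bce, l_nce
```

Both losses increase h(x) = exp(q · k/τ) for positives relative to negatives; they differ only in whether the negatives enter through independent sigmoid terms or through a shared softmax normalizer.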
1. What is the focus and contribution of the paper on disentanglement?
2. What are the strengths of the proposed approach, particularly in terms of its ability to achieve SOTA results and ensure good generation quality?
3. What are the weaknesses of the paper, especially regarding the proposed method's flaws and the use of outdated metrics?
4. Do you have any concerns or suggestions regarding the computation and use of MIG and DCI for discovering-based methods?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper proposes DisCo, a framework that learns disentangled representations from pretrained entangled generative models. Extensive experimental results show that DisCo outperforms many baselines in both quantitative and qualitative evaluations.

Review
Pros:
- The proposed method is novel and achieves SOTA results in disentanglement while ensuring good generation quality.
- Extensive experiments and ablation studies.
- In general, the paper is well written and easy to read.

Cons:
- There are still some flaws in the proposed method.
- Some details about how to compute MIG and DCI for discovering-based methods are missing.
- MIG and DCI metrics are out-of-date and may not well characterize disentanglement.
ICLR
Title Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View Abstract From the intuitive notion of disentanglement, the image variations corresponding to different factors should be distinct from each other, and the disentangled representation should reflect those variations with separate dimensions. To discover the factors and learn disentangled representation, previous methods typically leverage an extra regularization term when learning to generate realistic images. However, the term usually results in a trade-off between disentanglement and generation quality. For the generative models pretrained without any disentanglement term, the generated images show semantically meaningful variations when traversing along different directions in the latent space. Based on this observation, we argue that it is possible to mitigate the trade-off by (i) leveraging the pretrained generative models with high generation quality, (ii) focusing on discovering the traversal directions as factors for disentangled representation learning. To achieve this, we propose Disentaglement via Contrast (DisCo) as a framework to model the variations based on the target disentangled representations, and contrast the variations to jointly discover disentangled directions and learn disentangled representations. DisCo achieves the state-of-the-art disentangled representation learning and distinct direction discovering, given pretrained nondisentangled generative models including GAN, VAE, and Flow. Source code is at https://github.com/xrenaa/DisCo. 1 INTRODUCTION Disentangled representation learning aims to identify and decompose the underlying explanatory factors hidden in the observed data, which is believed by many to be the only way to understand the world for AI fundamentally (Bengio & LeCun, 2007). To achieve the goal, as shown in Figure 1 (a), we need an encoder and a generator. The encoder to extract representations from images with each dimension corresponds to one factor individually. The generator (decoder) decodes the changing of each factor into different kinds of image variations. With supervision, we can constrain each dimension of the representation only sensitive to one kind of image variation caused by changing one factor respectively. However, this kind of exhaustive supervision is often not available in real-world data. The typical unsupervised methods are based on a generative model to build the above encoder and generator framework, e.g., VAE (Kingma & Welling, 2014) provides encoder and generator, and GAN (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2019) provides generator. During the training process of the encoder and generator, to achieve disentangled representation, the typical methods rely on an additional disentanglement regularization term, e.g., the total correlation for VAE-based methods (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or mutual information for InfoGAN-based methods (Chen et al., 2016; Lin et al., 2020). ∗Equal contribution. Work done during internships at Microsoft Research Asia. †Corresponding author However, the extra terms usually result in a trade-off between disentanglement and generation quality (Burgess et al., 2018; Khrulkov et al., 2021). Furthermore, those unsupervised methods have been proved to have an infinite number of entangled solutions without introducing inductive bias (Locatello et al., 2019). 
Recent works (Shen & Zhou, 2021; Khrulkov et al., 2021; Karras et al., 2019; Härkönen et al., 2020; Voynov & Babenko, 2020) show that, for GANs purely trained for image generation, traversing along different directions in the latent space causes different variations of the generated image. This phenomenon indicates that there is some disentanglement property embedded in the latent space of the pretrained GAN. The above observations indicate that training the encoder and generator simultaneous may not be the best choice. We provide an alternative route to learn disentangled representation: fix the pretrained generator, jointly discover the factors in the latent space of the generator and train the encoder to extract disentangled representation, as shown in Figure 1(b). From the intuitive notion of disentangled representation, similar image variations should be caused by changing the same factor, and different image variations should be caused by changing different factors. This provide a novel contrastive learning view for disentangled representation learning and inspires us to propose a framework: Disentanglement via Contrast (DisCo) for disentangled representation learning. In DisCo, changing a factor is implemented by traversing one discovered direction in the latent space. For discovering the factors, DisCo adopts a typical network module, Navigator, to provides candidate traversal directions in the latent space (Voynov & Babenko, 2020; Jahanian et al., 2020; Shen et al., 2020). For disentangled representation learning, to model the various image variations, we propose a novel ∆-Contrastor to build a Variation Space where we apply the contrastive loss. In addition to the above architecture innovations, we propose two key techniques for DisCo: (i) an entropy-based domination loss to encourage the encoded representations to be more disentangled, (ii) a hard negatives flipping strategy for better optimization of Contrastive Loss. We evaluate DisCo on three major generative models (GAN, VAE, and Flow) on three popular disentanglement datasets. DisCo achieves the state-of-the-art (SOTA) disentanglement performance compared to all the previous discovering-based methods and typical (VAE/InfoGAN-based) methods. Furthermore, we evaluate DisCo on the real-world dataset FFHQ (Karras et al., 2019) to demonstrate that it can discover SOTA disentangled directions in the latent space of pretrained generative models. Our main contributions can be summarized as: (i) To our best knowledge, DisCo is the first unified framework for jointly learning disentangled representation and discovering the latent space of pretrained generative models by contrasting the image variations. (ii) We propose a novel ∆-Contrastor to model image variations based on the disentangled representations for utilizing Contrastive Learning. (iii) DisCo is an unsupervised and model-agnostic method that endows non-disentangled VAE, GAN, or Flow models with the SOTA disentangled representation learning and latent space discovering. (iv) We propose two key techniques for DisCo: an entropy-based domination loss and a hard negatives flipping strategy. 2 RELATED WORK Typical unsupervised disentanglement. There have been a lot of studies on unsupervised disentangled representation learning based on VAE (Higgins et al., 2017; Burgess et al., 2018; Kumar et al., 2017; Kim & Mnih, 2018; Chen et al., 2018) or InfoGAN (Chen et al., 2016; Lin et al., 2020). 
These methods achieve disentanglement via an extra regularization, which often sacrifices the generation quality (Burgess et al., 2018; Khrulkov et al., 2021). VAE-based methods disentangle the variations by factorizing aggregated posterior, and InfoGAN-based methods maximize the mutual information between latent factors and related observations. VAE-based methods achieve relatively good disentanglement performance but have low-quality generation. InfoGAN-based methods have a relatively high quality of generation but poor disentanglement performance. Our method supplements generative models pretrained without disentanglement regularization term with contrastive learning in the Variation Space to achieve both high-fidelity image generation and SOTA disentanglement. Interpretable directions in the latent space. Recently, researchers have been interested in discovering the interpretable directions in the latent space of generative models without supervision, especially for GAN (Goodfellow et al., 2014; Miyato et al., 2018; Karras et al., 2020). Based on the fact that the GAN latent space often possesses semantically meaningful directions (Radford et al., 2015; Shen et al., 2020; Jahanian et al., 2020), Voynov & Babenko (2020) propose a regression-based method to explore interpretable directions in the latent space of a pretrained GAN. The subsequent works focus on extracting the directions from a specific layer of GANs. Härkönen et al. (2020) search for important and meaningful directions by performing PCA in the style space of StyleGAN (Karras et al., 2019; 2020). Shen & Zhou (2021) propose to use the singular vectors of the first layer of a generator as the interpretable directions, and Khrulkov et al. (2021) extend this method to the intermediate layers by Jacobian matrix. All the above methods only discover the interpretable directions in the latent space, except for Khrulkov et al. (2021) which also learns disentangled representation of generated images by training an extra encoder in an extra stage. However, all these methods can not outperform the typical disentanglement methods. Our method is the first to jointly learn the disentangled representation and discover the directions in the latent spaces. Contrastive Learning. Contrastive Learning gains popularity due to its effectiveness in representation learning (He et al., 2020; Grill et al., 2020; van den Oord et al., 2018; Hénaff, 2020; Li et al., 2020; Chen et al., 2020). Typically, contrastive approaches bring representations of different views of the same image (positive pairs) closer, and push representations of views from different images (negative pairs) apart using instance-level classification with Contrastive Loss. Recently, Contrastive Learning is extended to various tasks, such as image translation (Liu et al., 2021; Park et al., 2020) and controllable generation (Deng et al., 2020). In this work, we focus on the variations of representations and achieve SOTA disentanglement with Contrastive Learning in the Variation Space. Contrastive Learning is suitable for disentanglement due to: (i) the actual number of disentangled directions is usually unknown, which is similar to Contrastive Learning for retrieval (Le-Khac et al., 2020), (ii) it works in the representation space directly without any extra layers for classification or regression. 
3 DISENTANGLEMENT VIA CONTRAST 3.1 OVERVIEW OF DISCO From the contrastive view of the intuitive notion of disentangled representation learning, we propose a DisCo to leverage pretrained generative models to jointly discover the factors embedded as directions in the latent space of the generative models and learn to extract disentangled representation. The benefits of leveraging a pretrained generative model are two-fold: (i) the pretrained models with high-quality image generation are readily available, which is important for reflecting detailed image variations and downstream tasks like controllable generation; (ii) the factors are embedded in the pretrained model, severing as an inductive bias for unsupervised disentangled representation learning. DisCo consists of a Navigator to provides candidate traversal directions in the latent space and a ∆-Contrastor to extract the representation of image variations and build a Variation Space based on the target disentangled representations. More specifically, ∆-Contrastor is composed of two sharedweight Disentangling Encoders. The variation between two images is modeled as the difference of their corresponding encoded representations extracted by the Disentangling Encoders. In the Variation Space, by pulling together the variation samples resulted from traversing the same direction and pushing away the ones resulted from traversing different directions, the Navigator learns to discover disentangled directions as factors, and Disentangling Encoder learns to extract disentangled representations from images. Thus, traversing along the discovered directions causes distinct image variations, which causes separated dimensions of disentangled representations respond. Different from VAE-based or InfoGAN-based methods, our disentangled representations and factors are in two separate spaces, which actually does not affect the applications. Similar to the typical methods, the Disentangling Encoder can extract disentangled representations from images, and the pretrained generative model with discovered factors can be applied to controllable generation. Moreover, DisCo can be applied to different types of generative models. Here we provide a detailed workflow of DisCo. As Figure 2 shows, given a pretrained generative model G: Z → I, where Z ∈ RL denotes the latent space, and I denotes the image space, the workflow is: 1) A Navigator A provides a total of D candidate traversal directions in the latent space Z , e.g., in the linear case, A ∈ RL×D is a learnable matrix, and each column is regarded as a candidate direction. 2) Image pairs G(z), G(z′) are generated. z is sampled from Z and z′ = z + A(d, ε), where d ∈ {1, ..., D} and ε ∈ R, and A(d, ε) denotes the shift along the dth direction with ε scalar. 3) The ∆-Contrastor, composed of two shared-weight Disentangling Encoders E, encodes the image pair to a sample v ∈ V as v(z, d, ε) = |E(G(z +A(d, ε)))−E(G(z))| , (1) where V ∈ RJ+ denotes the Variation Space. Then we apply Contrastive Learning in V to optimize the Disentangling Encoder E to extract disentangled representations and simultaneously enable Navigator A to find the disentangled directions in the latent space Z . 3.2 DESIGN OF DISCO We present the design details of DisCo, which include: (i) the collection of query set Q = {qi}Bi=1, positive key set K+ = {k+i }Ni=1 and negative key set K− = {k − i }Mi=1, which are three subsets of the Variation Space V , (ii) the formulation of the Contrastive Loss. 
According to our goal of contrasting the variations, the samples from Q and K+ share the same traversal direction and should be pulled together, while the samples from Q and K− have different directions and should be pushed away. Recall that each sample v in V is determined as v(z, d, ε). To achieve the contrastive learning process, we construct the query sample qi = v(zi, di, εi), the key sample k+i = v(z + i , d + i , ε + i ) and the negative sample k − i = v(z − i , d − i , ε − i ). Specifically, we randomly sample a direction index d̂ from a discrete uniform distribution U{1, D} for {di}Bi=1 and {d+i }Ni=1 to guarantee they are the same. We randomly sample {d − i }Mi=1 from the set of the rest of the directions U{1, D} \ {d̂} individually and independently to cover the rest of directions in Navigator A. Note that the discovered direction should be independent with the starting point and the scale of variation, which is in line with the disentangled factors. Therefore, {zi}Bi=1, {z + i }Ni=1, {z − i }Mi=1 are all sampled from latent space Z , and {εi}Bi=1, {ε + i }Ni=1, {ε − i }Mi=1 are all sampled from a shared continuous uniform distribution U [−ϵ, ϵ] individually and independently. We normalize each sample in Q, K+, and K− to a unit vector to eliminate the impact caused by different shift scalars. For the design of Contrastive Loss, a well-known form of Contrastive Loss is InfoNCE (van den Oord et al., 2018): LNCE = − 1 |B| B∑ i=1 N∑ j=1 log exp(qi · k+j /τ)∑N+M s=1 exp(qi · ks/τ) , (2) where τ is a temperature hyper-parameter and {ki}N+Mi=1 = {k + i }Ni=1 ⋃ {k−i }Mi=1. The InfoNCE is originate from BCELoss (Gutmann & Hyvärinen, 2010). BCELoss has been used to achieve contrastive learning (Wu et al., 2018; Le-Khac et al., 2020; Mnih & Kavukcuoglu, 2013; Mnih & Teh, 2012). We choose to follow them to use BCELoss Llogits for reducing computational cost: Llogits = − 1 |B| B∑ i=1 ( l−i + l + i ) , (3) l+i = N∑ j=1 log σ(qi · k+j /τ), l − i = M∑ m=1 log(1− σ(qi · k−m/τ)), (4) where σ denotes the sigmoid function, l+i denotes the part for positive samples, and l − i denotes the part for the negative ones.Note that we use a shared positive set for B different queries to reduce the computational cost. 3.3 KEY TECHNIQUES FOR DISCO Entropy-based domination loss. By optimizing the Contrastive Loss, Navigator A is optimized to find the disentangled directions in the latent space, and Disentangling Encoder E is optimized to extract disentangled representations from images. To further make the encoded representations more disentangled, i.e., when traversing along one disentangled direction, only one dimension of the encoded representation should respond, we thus propose an entropy-based domination loss to encourage the corresponding samples in the Variation Space to be one-hot. To implement the entropy-based domination loss, we first get the mean c of Q and K+ as c = 1 |B +N | ( B∑ i=1 qi + N∑ i=1 k+i ) . (5) We then compute the probability as pi = exp c(i)/ ∑J j=1 exp c(j), where c(i) is the i-th element of c and J is the number of dimensions of c. The entropy-based domination loss Led is calculated as Led = − 1 J J∑ j=1 pj log(pj). (6) Hard negatives flipping. Since the latent space of the generative models is a high-dimension complex manifold, many different directions carry the same semantic meaning. These directions with the same semantic meaning result in hard negatives during the optimization of Contrastive Loss. 
Hard negatives flipping. Since the latent space of the generative models is a high-dimensional complex manifold, many different directions carry the same semantic meaning, and these directions result in hard negatives during the optimization of the Contrastive Loss. The hard negatives here are different from those in self-supervised representation learning (He et al., 2020; Coskun et al., 2018), where reliable annotations of the samples are available. Here, our hard negatives are more likely to be "false" negatives, so we choose to flip them into positives. Specifically, we use a threshold $T$ to identify the hard negative samples and use their similarity to the queries as pseudo-labels:

$$\hat{l}_i^- = \sum_{\alpha_{ij} < T} \log(1 - \sigma(\alpha_{ij})) + \sum_{\alpha_{ij} \geq T} \alpha_{ij} \log \sigma(\alpha_{ij}), \tag{7}$$

where $\hat{l}_i^-$ denotes the modified $l_i^-$ and $\alpha_{ij} = q_i \cdot k_j^- / \tau$. The modified final BCE loss is therefore

$$L_{logits\text{-}f} = -\frac{1}{|B|} \sum_{i=1}^{B} \left( l_i^+ + \hat{l}_i^- \right). \tag{8}$$

Full objective. With the above two techniques, the full objective is

$$L = L_{logits\text{-}f} + \lambda L_{ed}, \tag{9}$$

where $\lambda$ is the weighting hyper-parameter for the entropy-based domination loss $L_{ed}$.

4 EXPERIMENT

In this section, we first follow the well-accepted protocol (Locatello et al., 2019; Khrulkov et al., 2021) to evaluate the learned disentangled representations, which also implicitly reflects the performance of the discovered directions (Lin et al., 2020) (Section 4.1). Secondly, we follow Li et al. (2021a) to directly evaluate the discovered directions (Section 4.2). Finally, we conduct an ablation study (Section 4.3).

4.1 EVALUATIONS ON DISENTANGLED REPRESENTATION

4.1.1 EXPERIMENTAL SETUP

Datasets. We consider the following popular datasets in the disentanglement area: Shapes3D (Kim & Mnih, 2018) with 6 ground-truth factors, MPI3D (Gondal et al., 2019) with 7 ground-truth factors, and Cars3D (Reed et al., 2015) with 3 ground-truth factors. In the experiments on these datasets, images are resized to 64×64 resolution.

Pretrained generative models. For GAN, we use the StyleGAN2 model (Karras et al., 2020). For VAE, we use a common convolutional architecture (Locatello et al., 2019). For Flow, we use Glow (Kingma & Dhariwal, 2018).

Baselines. For the typical disentanglement baselines, we choose FactorVAE (Kim & Mnih, 2018), β-TCVAE (Chen et al., 2018), and InfoGAN-CR (Lin et al., 2020). For discovering-based methods, we consider several recent methods: GANspace (GS) (Härkönen et al., 2020), LatentDiscovery (LD) (Voynov & Babenko, 2020), ClosedForm (CF) (Shen & Zhou, 2021), and DeepSpectral (DS) (Khrulkov et al., 2021). For these methods, we follow Khrulkov et al. (2021) to train an additional encoder to extract disentangled representations. We are the first to extract disentangled representations from pretrained VAE and Flow models, so we extend LD to VAE and Flow as baselines.

Disentanglement metrics. We mainly consider two representative ones: the Mutual Information Gap (MIG) (Chen et al., 2018) and the Disentanglement metric (DCI) (Eastwood & Williams, 2018). MIG requires each factor to be perturbed only by changes of a single dimension of the representation; DCI requires each dimension to encode the information of only a single dominant factor. We thus evaluate disentanglement in terms of both representations and factors. We also provide results for the β-VAE score (Higgins et al., 2017) and the FactorVAE score (Kim & Mnih, 2018) in Appendix B.3.
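Since MIG is one of the two primary metrics, here is a sketch of how it can be computed, following our reading of Chen et al. (2018); the 20-bin discretization is an assumption, not necessarily the protocol used in these experiments:

```python
import numpy as np
from sklearn.metrics import mutual_info_score

# A sketch of MIG (Chen et al., 2018): for each ground-truth factor, the gap
# between the two largest mutual informations with individual code dimensions,
# normalized by the factor's entropy. Assumes discrete factor labels.
def mig(codes, factors, bins=20):
    """codes: (num_points, J) learned representation; factors: (num_points, K)."""
    digitized = np.stack([np.digitize(c, np.histogram(c, bins)[1][:-1])
                          for c in codes.T])                  # (J, num_points)
    gaps = []
    for k in range(factors.shape[1]):
        f = factors[:, k]
        mis = sorted((mutual_info_score(f, d) for d in digitized), reverse=True)
        h = mutual_info_score(f, f)                           # entropy H(f)
        gaps.append((mis[0] - mis[1]) / h)
    return float(np.mean(gaps))
```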
Randomness. We consider the randomness caused by random seeds and by the strength of the regularization term (Locatello et al., 2019). For random seeds, we follow the same setting as the baselines. Since DisCo does not have a regularization term, we instead consider the randomness of the pretrained generative models. For all methods, we ensure there are 25 runs, except that Glow only has one run, limited by GPU resources. More details are presented in Appendix A.

4.1.2 EXPERIMENTAL RESULTS

The quantitative results are summarized in Table 1 and Figure 3. More details about the experimental settings and results are presented in Appendix A & C.

DisCo vs. typical baselines. DisCo achieves SOTA performance consistently in terms of MIG and DCI scores, and its variance due to randomness tends to be smaller than that of the typical baselines. This demonstrates that a method that extracts disentangled representations from pretrained non-disentangled models can outperform typical disentanglement baselines.

DisCo vs. discovering-based methods. Among the baselines based on discovering directions in pretrained GANs, CF achieves the best performance. DisCo outperforms CF in almost all cases by a large margin. Besides, these baselines need an extra stage (Khrulkov et al., 2021) to obtain disentangled representations, while our Disentangling Encoder can extract them directly.

4.2 EVALUATIONS ON DISCOVERED DIRECTIONS

To evaluate the discovered directions, we compare DisCo on StyleGAN2 with GS, LD, CF, and DS on the real-world dataset FFHQ (Karras et al., 2019), and adopt the comprehensive Manipulation Disentanglement Score (MDS) (Li et al., 2021a) as the metric (the disentanglement metrics above, DCI and MIG, are not available for the FFHQ dataset). To calculate MDS, we use the 40 CelebA-HQ attribute predictors released with StyleGAN. Among them, we select Young, Smile, Bald, and Blonde Hair, as these attributes have an available predictor and are commonly found by all methods. The results are summarized in Table 3. DisCo shows better overall performance than the other baselines, which verifies our assumption that learning disentangled representations benefits latent-space discovery. We also provide qualitative comparisons in Figure 4. Finally, we provide an intuitive analysis in Appendix D of why DisCo can find those disentangled directions.

4.3 ABLATION STUDY

In this section, we perform the ablation study of DisCo only on GAN, limited by space. For these experiments, we use the Shapes3D dataset, and the random seed is fixed.

Choice of latent space. For style-based GANs (Karras et al., 2019; 2020), there is a style space $\mathcal{W}$, the output of the style network (an MLP) whose input is a random latent space $\mathcal{Z}$. As demonstrated in Karras et al. (2019), $\mathcal{W}$ is more interpretable than $\mathcal{Z}$. We conduct experiments on $\mathcal{W}$ and $\mathcal{Z}$ respectively to see how the latent space influences performance. As shown in Table 4, DisCo on $\mathcal{W}$ is better, indicating that the better the latent space is organized, the better the disentanglement DisCo can achieve.

Choices of A. Following the setting of Voynov & Babenko (2020), we mainly consider three options for $A$: a linear operator with all matrix columns having unit length, a linear operator with orthonormal matrix columns, or a nonlinear operator of 3 fully-connected layers. The results are shown in Table 4. For both latent spaces $\mathcal{W}$ and $\mathcal{Z}$, $A$ with unit-norm columns achieves nearly the best performance in terms of MIG and DCI scores. Compared to $A$ with orthonormal columns, $A$ with unit-norm columns is more expressive, with fewer constraints. Another possible reason is that $A$ is global and not conditioned on the latent code $z$; a non-linear operator would be more suitable for a local navigator, and for such a much more complex local and non-linear setting, more inductive bias or supervision should be introduced.
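A minimal sketch of the unit-norm linear navigator favored by this ablation; the class name and parameterization are illustrative, not the authors' implementation:

```python
import torch
import torch.nn as nn

# Illustrative sketch of a linear navigator with unit-norm columns.
class LinearNavigator(nn.Module):
    def __init__(self, latent_dim: int, num_dirs: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(latent_dim, num_dirs))

    def forward(self, d: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
        """Return the shift A(d, eps) for direction indices d and scalars eps."""
        cols = self.weight / self.weight.norm(dim=0, keepdim=True)  # unit-norm columns
        return eps.unsqueeze(-1) * cols.t()[d]                      # (batch, latent_dim)
```

The orthonormal variant would instead re-orthogonalize the columns at each step, trading expressiveness for a stricter constraint.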
Entropy-based domination loss. Here, we verify the effectiveness of the entropy-based domination loss $L_{ed}$ for disentanglement. For a desirable disentangled representation, one semantic meaning corresponds to one dimension. As shown in Table 4, $L_{ed}$ improves the performance by a large margin. We also visualize the Variation Space in Figure 5 to further demonstrate the effectiveness of the proposed loss: adding the domination loss makes the samples in the Variation Space close to one-hot, which is desirable for disentanglement.

Hard negatives flipping. We run DisCo with and without the hard-negatives-flipping strategy to study its influence. As shown in Table 4, flipping hard negatives improves the disentanglement ability of DisCo. The reason is that the hard negatives have the same semantics as the positive samples; treating them as negatives does not make sense, and flipping them with pseudo-labels makes the optimization of Contrastive Learning easier.

Hyperparameters N & M. We run DisCo with different ratios of $N : M$ at a fixed sum of 96, and with different sums $N + M$ at a fixed ratio of $1:2$, to study their impact. As shown in Figure 6 (a), the best ratio is $N : M = 32 : 64 = 1 : 2$; the red line (MIG) and blue line (DCI) show that larger or smaller ratios hurt DisCo, indicating that DisCo requires a balance between $N$ and $M$. As shown in Figure 6 (b), the sum $N + M$ has only a slight impact on DisCo. Other hyperparameters are set empirically, and more details are presented in Appendix A.

Contrast vs. Classification. To verify the effectiveness of Contrast, we substitute it with classification by adopting an additional linear layer to recover the corresponding direction index and the shift along this direction. As Table 2 shows, Contrastive Learning outperforms Classification significantly.

Concatenation vs. Variation. We further demonstrate that the Variation Space is crucial for DisCo. Replacing the difference operator with concatenation makes the performance drop significantly (Table 2), indicating that the encoded representation is not well disentangled. In other words, the disentangled representations of images are achieved by Contrastive Learning in the Variation Space.

4.4 ANALYSIS OF DIFFERENT GENERATIVE MODELS

As shown in Table 1, DisCo generalizes well to different generative models (GAN, VAE, and Flow). DisCo on GAN and VAE achieves relatively good performance, while DisCo on Flow is not as good. The possible reason is similar to the choice of latent space for GAN: we assume the disentangled directions are globally linear and thus use a linear navigator, and in contrast to GAN and VAE, Flow may not conform to this assumption well. Furthermore, Flow suffers from high GPU cost and unstable training, which limited further exploration.

5 CONCLUSION

In this paper, we present an unsupervised and model-agnostic method, DisCo, a Contrastive Learning framework that learns disentangled representations by exploiting pretrained generative models. We propose an entropy-based domination loss and a hard-negatives-flipping strategy to achieve better disentanglement. DisCo outperforms typical unsupervised disentanglement methods while maintaining high image quality.
We pinpoint a new direction: Contrastive Learning can be effectively applied to extract disentangled representations from pretrained generative models. For some specific complex generative models, the global linear assumption on disentangled directions in the latent space could be a limitation. For future work, extending DisCo to the existing VAE-based disentanglement framework is an exciting direction.

A.2 SETTING FOR BASELINES

In this section, we introduce the implementation settings for the baselines (including randomness).

VAE-based methods. We choose FactorVAE and β-TCVAE as the SOTA VAE-based methods and follow Locatello et al. (2019) to use the same encoder and decoder architectures. For the hyper-parameters, we use the best settings found by grid search. We set the latent dimension of the representation to 10. For FactorVAE, we set the hyperparameter γ to 10; for β-TCVAE, we set the hyperparameter β to 6. For the random seeds, considering our method has 25 runs, we run each model 25 times with different random seeds to make the comparison fair.

InfoGAN-based methods. We choose InfoGAN-CR as a baseline. We use the official implementation (https://github.com/fjxmlzn/InfoGAN-CR) with the best hyperparameter settings found by grid search. For the random seeds, we run 25 times with different random seeds.

Discovering-based methods. We follow Khrulkov et al. (2021) to use the same settings for the following four baselines: LD (GAN), CF, GS, and DS. Similar to DisCo, discovering-based methods do not have a regularization term, so for the randomness we adopt the same strategy as DisCo: we take the top-10 directions for 5 different random seeds for the GAN and 5 different random seeds for the additional encoder that learns disentangled representations.

LD (VAE) & LD (Flow). We follow the LD (GAN) settings and substitute the GAN with a VAE / Glow. The only exception is the randomness for LD (Flow): we use one random seed to pretrain the Glow model and one random seed for the encoder.

A.3 MANIPULATION DISENTANGLEMENT SCORE

As noted in Li et al. (2021a), it is difficult to compare latent-space discovery performance across methods, which often use model-specific hyper-parameters to control the editing strength. Thus, Li et al. (2021a) propose a comprehensive metric called the Manipulation Disentanglement Score (MDS), which takes both the accuracy and the disentanglement of a manipulation into consideration. For more details, please refer to Li et al. (2021a).

A.4 DOMAIN GAP PROBLEM

Note that there exists a domain gap between the generated images of pretrained generative models and real images. However, the good performance on the disentanglement metrics shows that this domain gap has limited influence on DisCo.

A.5 ARCHITECTURE

Here, we describe the model architectures used in our work. For the architecture of StyleGAN2, we follow Khrulkov et al. (2021). For the architecture of Glow, we use the open-source implementation at https://github.com/rosinality/glow-pytorch.

B MORE EXPERIMENTS

B.1 MORE QUALITATIVE COMPARISON

We provide some examples for qualitative comparison. We first demonstrate the trade-off problem of the VAE-based methods: as shown in Figure 7, DisCo leverages the pretrained generative model and does not suffer from the trade-off between disentanglement and generation quality. Furthermore, as shown in Figure 8 and Figure 9, VAE-based methods suffer from poor image quality.
When changing one attribute, the results of discovering-based methods tend to also change other attributes. We also provide qualitative comparisons between DisCo and InfoGAN-CR. Note that the latent space of InfoGAN-CR is not aligned with the pretrained StyleGAN2. InfoGAN-CR also suffers from the trade-off problem, and its disentanglement ability is worse than that of DisCo. We explain the comparison in the main paper and show more manipulation comparisons here.

B.2 ANALYSIS OF THE LEARNED DISENTANGLED REPRESENTATIONS

We feed the images traversing the three most significant factors of Shapes3D (wall color, floor color, and object color) into the Disentangling Encoders and plot the corresponding dimensions of the encoded representations to visualize the learned disentangled space. The location of each point is the disentangled representation of the corresponding image. An ideal result is that all the points form a cube and the color variation is continuous. We consider the three baselines with relatively higher MIG and DCI: CF, DS, and LD. As the figure below shows, the points in the latent spaces of CF and DS are not well organized, and the latent spaces of all three baselines are not well aligned with the axes, especially for LD. DisCo learns a well-aligned and well-organized latent space, which signifies better disentanglement.

[Figure: latent-space visualizations for CF, DS, LD, and ours.]

B.3 MORE QUANTITATIVE COMPARISON

We provide additional quantitative comparisons in terms of the β-VAE score and the FactorVAE score. DisCo on pretrained GAN is comparable to the discovering-based baselines on these two scores, suggesting some disagreement between them and MIG/DCI. However, note that the qualitative evaluations in Figure 8, Figure 9, and Section B.2 show that the disentanglement ability of DisCo is better than all the baselines on the Shapes3D dataset. We also provide an additional experiment on the Noisy-DSprites dataset, comparing DisCo with β-TCVAE (the best typical method) and CF (the best discovering-based method) in terms of the MIG and DCI metrics.

C LATENT TRAVERSALS

In this section, we visualize the disentangled directions of the latent space discovered by DisCo on each dataset. For Cars3D, Shapes3D, Anime, and MNIST, the image resolution is 64×64. For FFHQ, LSUN cat, and LSUN church, the image resolution is 256×256. Besides StyleGAN2, we also provide results of Spectral Norm GAN (Miyato et al., 2018; weights from https://github.com/anvoynov/GANLatentDiscovery) on MNIST (LeCun et al., 2010) and Anime Face (Jin et al., 2017) to demonstrate that DisCo generalizes well to other types of GAN.

D AN INTUITIVE ANALYSIS FOR DISCO

DisCo works by contrasting the variations resulting from traversing along the directions provided by the Navigator. Is this sufficient to converge to a disentangled solution? Note that this question is very challenging to answer: to the best of our knowledge, for unsupervised disentangled representation learning, there is no theoretical constraint sufficient to guarantee convergence to a disentangled solution (Locatello et al., 2019). Here we provide an intuitive analysis of how DisCo finds disentangled directions in the latent space, supported by our observations on pretrained GANs both quantitatively and qualitatively.
The intuitive analysis consists of two parts: (i) the directions that can be discovered by DisCo have different variation patterns from random directions; (ii) DisCo hardly converges to an entangled solution.

D.1 WHAT KIND OF DIRECTIONS CAN DISCO CONVERGE TO?

First, we visualize the latent space and show that there are variation patterns in the latent space for disentangled factors. We design the following visualization method, sketched in code after this section. Given a pretrained GAN and two directions in the latent space, we traverse along the plane spanned by the two directions to generate a grid of images. The range is large enough to include all values of these disentangled factors, and the step is small enough to obtain a dense grid. Then, we feed these images into an encoder trained with ground-truth factor labels and obtain a heatmap for each factor (the value at each grid point is the response of the dimension corresponding to that factor). In this way, we can observe the variation patterns that emerge in the latent space.

We take StyleGAN pretrained on Shapes3D (synthetic) and on FFHQ (real-world). For Shapes3D, we take background color and floor color as the two factors (since they refer to different areas of the image, these two factors are disentangled). For FFHQ, we take smile (mouth) and bald (hair) as the two factors (disentangled for the same reason). We then choose random directions and the directions discovered by DisCo. The results are shown in Figure 27 and Figure 28. We find a clear difference between random directions and directions discovered by DisCo. This is because DisCo learns the directions by separating the variations resulting from traversing along them. However, not all directions can be separated. For directions whose variations cannot be recognized or clustered by the encoder $E$, it is nearly impossible for DisCo to converge to them. Conversely, for directions that can be easily recognized and clustered, DisCo converges to them with a higher probability. From the following observations, we find that the variation patterns resulting from the directions corresponding to disentangled factors are easily recognized and clustered.
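A sketch of this grid-based visualization under our own naming; G, the factor predictor, and the traversal range are stand-ins for whatever was actually used:

```python
import torch

# Illustrative sketch of the D.1 visualization: traverse the plane spanned
# by two latent directions and record a factor predictor's response at each
# grid point. G, predictor, and the range/step are assumptions.
def response_heatmap(G, predictor, z0, dir_a, dir_b, steps=21, radius=6.0):
    ticks = torch.linspace(-radius, radius, steps)
    heat = torch.zeros(steps, steps)
    for i, a in enumerate(ticks):
        for j, b in enumerate(ticks):
            img = G(z0 + a * dir_a + b * dir_b)   # one point on the plane
            heat[i, j] = predictor(img)           # e.g. floor-color response
    return heat                                   # plotted like Figs. 27-28
```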
D.2 WHY DISCO HARDLY CONVERGES TO THE ENTANGLED CASES

In the previous section, we showed that DisCo discovers directions with distinct variation patterns and excludes random directions. Here we discuss why DisCo can hardly converge to the following entangled case (a trivial solution built from a disentangled one). Suppose there is an entangled direction of factors A and B (A and B change at the same rate when traversing along it) in the latent space, and DisCo can separate the variations resulting from the direction of A and from the entangled direction. In that case, DisCo would have no additional bias to update these directions to converge to the disentangled ones. In the following, for ease of reference, we denote the entangled direction of factors A and B as the "A+B" direction, and the direction along which only factor A changes as the "A" direction.

The reasons why DisCo hardly converges to the A and A+B case are two-fold: (i) our encoder is a lightweight network (5 CNN layers + 3 FC layers), for which it is nearly impossible to separate the A and A+B directions; (ii) in the latent space of the pretrained generative models, the disentangled directions (A, B) are consistent at different locations, while the entangled directions (A+B) are not, as shown in Figure 29. We conduct the following experiments to verify these two claims.

For (i), we replace the encoder in DisCo with a ResNet-50 and train DisCo from scratch on the Shapes3D dataset. The loss, MIG, and DCI are presented in Table 11. The trivial solution is possible when the encoder is powerful enough to fit the A and A+B directions so that they "become orthogonal"; with this consideration, in DisCo we adopt a lightweight encoder to avoid this issue.

For (ii), as the sketch in Figure 29 demonstrates, the disentangled directions ("A" in blue, "B" in green) are consistent, i.e., invariant to the location in the latent space, while the entangled directions ("A+B" in red) are not consistent across locations. The fundamental reason is that the directions of the disentangled variations are invariant to the position in the latent space, but the "rate" of the variation is not. E.g., at any point in the latent space, going "up" always changes the camera pose; however, at point a, going "up" with step 1 means rotating 10 degrees, while at point b, going "up" with step 1 means rotating 5 degrees. When the variation "rates" of "A" and "B" differ, the "A+B" directions at different locations are not consistent. Based on these different properties of disentangled and entangled directions in the latent space, DisCo can discover the disentangled directions with the contrastive loss. The contrastive loss can be understood from the clustering view (Wang & Isola, 2020; Li et al., 2021b): the variations from the disentangled directions are more consistent and can be better clustered than the variations from the entangled ones. Thus, DisCo can discover the disentangled directions in the latent space and learn disentangled representations from images. We further provide the following experiments to support this analysis.

D.2.1 QUANTITATIVE EXPERIMENT

We compare the losses of three different settings:

• A: for a navigator with disentangled directions, we fix the navigator and train the encoder until convergence.
• A+B: for a navigator with entangled directions (initialized with linear combinations of the disentangled directions), we fix the navigator and train the encoder until convergence.
• A+B → A: after the A+B setting converges, we update both the encoder and the navigator until convergence.

The contrastive loss after convergence is presented in Table 12. The results show that: (i) the disentangled directions (A) lead to lower loss and better performance than the entangled directions (A+B), indicating no trivial solution; (ii) even though the encoder in the A+B setting has converged, once we optimize the navigator, gradients still backpropagate to it and it converges to A.

D.2.2 QUALITATIVE EXPERIMENT

We visualize the latent space of the GAN in Figure 30 to verify the variation "rate" in the following way: in the latent space, we select two ground-truth disentangled directions, floor color (A) and background color (B), obtained by supervision with InterFaceGAN (Shen et al., 2020). We then conduct equally spaced sampling along the two disentangled directions A (labeled with a green color variation) and B (labeled with a gradient blue color) and along the composite direction A+B (labeled with a gradient red color), as shown in Figure 30 (a).
Then we generate the images (also including the other images on the grid, as shown in Figure 30 (b)) and feed the images in the bounding boxes into a "ground-truth" encoder (trained with ground-truth disentangled factors) to regress the "ground-truth" disentangled representations of the images. In Figure 30 (c), the points labeled with green color are well aligned with the x-axis, indicating that only the floor color changes, and the points labeled with the blue variation are well aligned with the y-axis, indicating that only the background color changes. However, the points labeled with red color are NOT aligned with any line, which indicates that the directions of A+B are not consistent. Further, for the two disentangled directions, the variation "rate" depends on the location in the latent space. These observations support the idea illustrated in Figure 29: the different properties of disentangled and entangled directions enable DisCo to discover the disentangled directions in the latent space.

E EXTENSION: BRIDGING THE PRETRAINED VAE AND PRETRAINED GAN

Researchers have recently been interested in improving image quality given the disentangled representations produced by typical disentanglement methods. Lee et al. (2020) propose a post-processing stage using a GAN based on disentangled representations learned by VAE-based disentanglement models; this method sacrifices a little generation ability due to an additional constraint. Similarly, Srivastava et al. (2020) propose a deep generative model with AdaIN (Huang & Belongie, 2017) as a post-processing stage to improve reconstruction ability. Following this setting, we can replace the encoder in DisCo (GAN) with an encoder pretrained by a VAE-based disentanglement baseline. In this way, we bridge the pretrained disentangled VAE and the pretrained GAN, as shown in Figure 31. Compared to previous methods, ours can fully utilize the state-of-the-art GAN and the state-of-the-art VAE-based method and does not need to train a deep generative model from scratch.

F DISCUSSION ON THE RELATION BETWEEN BCELOSS AND NCELOSS

We present a deeper discussion of the relation between the BCE loss $L_{logits}$ and the NCE loss $L_{NCE}$, related to the NCE paper (Gutmann & Hyvärinen, 2010) and the InfoNCE paper (van den Oord et al., 2018). The discussion proceeds as follows: (i) we first formulate a general problem and obtain two objectives, $L_1$ and $L_2$, where $L_1$ is an upper bound of $L_2$; (ii) following Gutmann & Hyvärinen (2010), we show that $L_1$ is aligned with $L_{BCE}$ under their setting; (iii) following van den Oord et al. (2018), we show that $L_2$ is aligned with $L_{NCE}$ under their setting; (iv) we discuss the relation between these objectives and the losses in our paper.

Part I. Assume we have $S$ observations $\{x_i\}_{i=1}^{S}$ from a data distribution $p(x)$, each with a label $C_i \in \{0, 1\}$. We denote the class-conditional densities as $p^+(x) = p(x \mid C = 1)$ and $p^-(x) = p(x \mid C = 0)$. We define two objectives as follows:

$$L_1 = -\sum_{i=1}^{S} \left[ C_i \log P(C_i = 1 \mid x_i) + (1 - C_i) \log P(C_i = 0 \mid x_i) \right], \tag{10}$$

and

$$L_2 = -\sum_{i=1}^{S} C_i \log P(C_i = 1 \mid x_i). \tag{11}$$

Since $-\sum_{i=1}^{S} (1 - C_i) \log P(C_i = 0 \mid x_i) \geq 0$, we have

$$L_1 \geq L_2, \tag{12}$$

i.e., $L_1$ is an upper bound of $L_2$. This is a general formulation of a binary classification problem. In the context of our paper, each observation is a pair $x_i = (q, k_i)$, with $q$ the query and the key $k_i$ drawn either from the positive key set $\{k_j^+\}_{j=1}^{N}$ or from the negative key set $\{k_m^-\}_{m=1}^{M}$ (i.e., $\{k_i\}_{i=1}^{N+M} = \{k_j^+\}_{j=1}^{N} \cup \{k_m^-\}_{m=1}^{M}$), where $M = S - N$.
And $C_i$ is assigned as

$$C_i = \begin{cases} 1, & k_i \in \{k_j^+\}_{j=1}^{N} \\ 0, & k_i \in \{k_m^-\}_{m=1}^{M} \end{cases} \tag{13}$$

In our paper, we have $h(x) = \exp(q \cdot k / \tau)$.

Part II. In this part, following Gutmann & Hyvärinen (2010), we show that $L_1$ is aligned with $L_{logits}$ (Equation 3 in the main paper) under their setting. Following Gutmann & Hyvärinen (2010), we assume the prior distribution $P(C = 0) = P(C = 1) = 1/2$. By Bayes' rule, we have

$$P(C = 1 \mid x) = \frac{p(x \mid C = 1)\, P(C = 1)}{p(x \mid C = 1)\, P(C = 1) + p(x \mid C = 0)\, P(C = 0)} = \frac{1}{1 + \frac{p^-(x)}{p^+(x)}}, \tag{14}$$

and $P(C = 0 \mid x) = 1 - P(C = 1 \mid x)$. On the other hand, the general form of the BCE loss is

$$L_{BCE} = -\sum_{i=1}^{S} \left[ C_i \log \sigma(q \cdot k_i / \tau) + (1 - C_i) \log(1 - \sigma(q \cdot k_i / \tau)) \right], \tag{15}$$

where $\sigma(\cdot)$ is the sigmoid function. We have

$$\sigma(q \cdot k / \tau) = \frac{1}{1 + \exp(-q \cdot k / \tau)} = \frac{1}{1 + \frac{1}{\exp(q \cdot k / \tau)}} = \frac{1}{1 + \frac{1}{h(x)}}. \tag{16}$$

From Theorem 1 of Gutmann & Hyvärinen (2010), we know that when $L_{BCE}$ is minimized, we have

$$h(x) = \frac{p^+(x)}{p^-(x)}. \tag{17}$$

Thus, the BCE loss $L_{BCE}$ is an approximation of the objective $L_1$.

Part III. Following van den Oord et al. (2018), we show that $L_2$ is aligned with $L_{NCE}$ (Equation 2 in the main paper) under their setting. In the typical contrastive setting of van den Oord et al. (2018), we assume there is only one positive sample in $\{x_i\}_{i=1}^{S}$ and the others are negatives. Then, the probability that $x_i$ is sampled from $p^+(x)$ rather than $p^-(x)$ is

$$P(C_i = 1 \mid x_i) = \frac{p^+(x_i) \prod_{l \neq i} p^-(x_l)}{\sum_{j=1}^{S} p^+(x_j) \prod_{l \neq j} p^-(x_l)} = \frac{\frac{p^+(x_i)}{p^-(x_i)}}{\sum_{j=1}^{S} \frac{p^+(x_j)}{p^-(x_j)}}. \tag{18}$$

From van den Oord et al. (2018), we know that when minimizing Equation 11, we have $h(x) = \exp(q \cdot k / \tau) \propto \frac{p^+(x)}{p^-(x)}$. In this case, we obtain the form of $L_{NCE}$ as

$$L_{NCE} = -\sum_{i=1}^{S} C_i \log \frac{\exp(q \cdot k_i / \tau)}{\sum_{j=1}^{S} \exp(q \cdot k_j / \tau)}, \tag{19}$$

so $L_{NCE}$ is an approximation of $L_2$.

Part IV. We now generalize the contrastive loss to our setting ($N$ positive samples, $M$ negative samples). The BCE loss (Equation 15) can be reformulated as

$$\hat{L}_{BCE} = -\sum_{j=1}^{N} \log \sigma(q \cdot k_j^+ / \tau) - \sum_{m=1}^{M} \log(1 - \sigma(q \cdot k_m^- / \tau)). \tag{20}$$

Similarly, the NCE loss (Equation 19) can be reformulated as

$$\hat{L}_{NCE} = -\sum_{j=1}^{N} \log \frac{\exp(q \cdot k_j^+ / \tau)}{\sum_{s=1}^{M+N} \exp(q \cdot k_s / \tau)}. \tag{21}$$

$\hat{L}_{BCE}$ is aligned with $L_{logits}$ (Equation 3 in our main paper), and $\hat{L}_{NCE}$ is aligned with $L_{NCE}$ (Equation 2 in the main paper). We thus have that $L_1$ (approximated by $L_{BCE}$) is an upper bound of $L_2$ (approximated by $L_{NCE}$). Admittedly, the assumptions made in Part II and Part III differ: one assumes $P(C = 0) = P(C = 1)$, while the other assumes a single positive sample with all others negative, and our extension is the more general case ($N$ positives, the rest negatives). However, they share the same objective: by contrasting positives and negatives, we can use $h(x) = \exp(q \cdot k / \tau)$ to estimate $p^+ / p^-$. We can think of $h(x)$ as a similarity score, i.e., if $q$ and $k$ form a positive pair (they share the same direction in our paper), $h(x)$ should be as large as possible ($p^+ / p^- > 1$), and vice versa. In this way, we can learn representations $(q, k)$ that reflect the image variation: similar variations have a higher score $h(x)$, while different kinds of variation have a lower score. This meaningful representation in turn helps discover the latent-space directions carrying different kinds of image variation. This is an understanding, from a contrastive learning view, of how our method works.
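As a toy numerical illustration of the two generalized objectives (Eqs. (20)-(21)), under our own setup of random unit vectors and an assumed temperature:

```python
import torch
import torch.nn.functional as F

# Toy numerical sketch of Eqs. (20)-(21); the setup (random unit vectors,
# tau = 0.1, N = 4, M = 8) is our own illustration, not the paper's.
torch.manual_seed(0)
J, N, M, tau = 16, 4, 8, 0.1
q = F.normalize(torch.randn(J), dim=0)
k_pos = F.normalize(torch.randn(N, J), dim=1)
k_neg = F.normalize(torch.randn(M, J), dim=1)

pos, neg = k_pos @ q / tau, k_neg @ q / tau
bce_hat = -(F.logsigmoid(pos).sum() + F.logsigmoid(-neg).sum())   # Eq. (20)
all_logits = torch.cat([pos, neg])
nce_hat = -(pos - torch.logsumexp(all_logits, dim=0)).sum()       # Eq. (21)
print(float(bce_hat), float(nce_hat))  # both score q against keys via h(x)
```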
1. What is the focus of the paper regarding representation learning? 2. What are the strengths of the proposed method, particularly its simplicity and effectiveness? 3. What are the weaknesses of the paper, such as the lack of explanation for choosing semantically meaningful directions? 4. How does the reviewer assess the method's performance compared to prior works? 5. Do you have any questions or suggestions regarding the paper's content, such as removing certain statements or providing more explanations?
Summary Of The Paper Review
Summary Of The Paper
The paper proposes a novel representation learning technique to disentangle the latent space of pre-trained generative models by discovering semantically meaningful directions in them. The method trains a navigator and a delta-contrastor network, which consists of 2 encoders sharing weights. First, random samples are perturbed along the directions obtained from the navigator. The perturbed vectors are then decoded with the pre-trained generator, then encoded, and the difference between the 2 samples is taken. The output is in the variation space, where a contrastive learning technique clusters together the samples that were perturbed with the same direction.
Review
Good: The idea is very simple and easy to implement. The paper is very well written and easy to understand. There are extensive ablations that show the effect of design choices and hyper-parameters. The qualitative results look very good for disentangling; the proposed method preserves e.g. the identity much better when changing other attributes, like smile or baldness. Quantitatively the proposed method shows better performance than the baselines for many datasets and 3 different kinds of generative model: GAN, VAE and Flow. This is impressive and shows the method's generality.
Bad: Although the paper explained the method well from the perspective of reproducibility, it does not explain why the method should choose semantically meaningful directions. One can imagine a shortcut scenario, where the method learns 0.5a+0.5b and 0.5a-0.5b directions, where a and b are perfect semantically meaningful directions. In principle the training loss could be minimised with this solution as well (?). The reason why this does not happen is because of the implicit biases in the networks (?). But then why is this method performing better than prior works? "(ii) the factors are embedded in the pretrained model, severing as an inductive bias for unsupervised disentangled representation learning." This still allows for the mixed solution 0.5a+0.5b and 0.5a-0.5b. I think the following statement is incorrect, and it should be removed: "A composed of 3 fully-connected layers performs poorly, indicating the disentangled directions of the latent space W of StyleGAN is nearly linear." W is nearly linear because there are good directions in it, and a linear method can perform well in it. The 3-layer network fails for some other reason; in principle it should work at least as well as the linear model, as it has the representation capacity. The method is very sensitive to the ratio between positive and negative samples. Very good tuning is needed, which is shown in the paper for most hyper-parameters. One might think that the gains come from the extensive tuning rather than the proposed idea itself.
Minor: Although the images are resized to 64x64, it would be nice to see full-resolution results with e.g. the StyleGAN2 generator. Or was the generator also retrained with reduced-size images (for faster training, I guess)? Some typos and grammar could be fixed, e.g. "... generative model are two-ford: ..."
ICLR
"Title\nLearning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrasti(...TRUNCATED)
"1. What is the focus of the paper regarding disentanglement learning?\n2. What are the strengths of(...TRUNCATED)
Summary Of The Paper Review
"Summary Of The Paper\nThis paper proposes to learn disentangled representations via contrastive lea(...TRUNCATED)
ICLR
"Title\nProMP: Proximal Meta-Policy Search\nAbstract\nCredit assignment in Meta-reinforcement learni(...TRUNCATED)
"1. What are the differences in gradient calculation between the original MAML and E-MAML?\n2. How d(...TRUNCATED)
Review
"Review\n\nIn this paper, the authors investigate the gradient calculation in the original MAML (Fin(...TRUNCATED)
ICLR
"Title\nProMP: Proximal Meta-Policy Search\nAbstract\nCredit assignment in Meta-reinforcement learni(...TRUNCATED)
"1. What is the focus of the paper regarding meta-reinforcement learning?\n2. What are the strengths(...TRUNCATED)
Review
"Review\nIn this paper, the author proposed an efficient surrogate loss for estimating Hessian in t(...TRUNCATED)
ICLR
"Title\nProMP: Proximal Meta-Policy Search\nAbstract\nCredit assignment in Meta-reinforcement learni(...TRUNCATED)
"1. What are the main contributions and improvements introduced by the paper regarding MAML and E-MA(...TRUNCATED)
Review
"Review\nThe paper first examines the objective function optimized in MAML and E-MAML and interprets(...TRUNCATED)
ICLR
"Title\nPutting Theory to Work: From Learning Bounds to Meta-Learning Algorithms\nAbstract\nMost of (...TRUNCATED)
"1. What is the focus of the reviewed paper regarding meta-learning?\n2. What are the concerns regar(...TRUNCATED)
Review
"Review\n########################################################################## Summary:\nThe pa(...TRUNCATED)
ICLR
"Title\nPutting Theory to Work: From Learning Bounds to Meta-Learning Algorithms\nAbstract\nMost of (...TRUNCATED)
"1. What are the limitations of the paper regarding its application of meta-learning theory in few-s(...TRUNCATED)
Review
"Review\nThe main motivation of this paper is based on the theoretical results of meta-learning. To (...TRUNCATED)
ICLR
"Title\nPutting Theory to Work: From Learning Bounds to Meta-Learning Algorithms\nAbstract\nMost of (...TRUNCATED)
"1. What are the contributions and novel aspects of the paper regarding meta-learning algorithms?\n2(...TRUNCATED)
Review
"Review\nTo improve the practical performance of meta-learning algorithms, this paper proposes two r(...TRUNCATED)

Cleaned Review Dataset for Reviewer2

This is a cleaned version of our dataset and can be directly used for fine-tuning; a minimal loading sketch follows the field list below. The raw data files, including metadata for each paper, are in this directory.

  • venue: venue of the paper;
  • paper_content: content of the paper, divided into sections;
  • prompt: prompt generated for the review based on our PGE pipeline;
  • format: the format of the review;
  • review: human-written review for the paper.
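
A minimal loading sketch for these fields (the split name "train" is an assumption about this parquet configuration):

```python
from datasets import load_dataset

# Minimal sketch; the "train" split name is an assumption about this
# parquet dataset's configuration.
ds = load_dataset("GitBag/Reviewer2_PGE_cleaned", split="train")
row = ds[0]
print(row["venue"], row["format"])   # e.g. "ICLR" and the review format string
print(row["prompt"][:200])           # PGE-generated prompt for the review
print(row["review"][:200])           # the human-written review text
```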

Dataset Sources

We incorporate parts of the PeerRead and NLPeer datasets, along with an up-to-date crawl of ICLR and NeurIPS from OpenReview and the NeurIPS Proceedings.

Citation

If you find this dataset useful in your research, please cite the following paper:

@misc{gao2024reviewer2,
      title={Reviewer2: Optimizing Review Generation Through Prompt Generation}, 
      author={Zhaolin Gao and Kianté Brantley and Thorsten Joachims},
      year={2024},
      eprint={2402.10886},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}