id | paper_text | review
---|---|---
iclr_2018_H1kMMmb0- | Achieving machine intelligence requires a smooth integration of perception and reasoning, yet models developed to date tend to specialize in one or the other; sophisticated manipulation of symbols acquired from rich perceptual spaces has so far proved elusive. Consider a visual arithmetic task, where the goal is to carry out simple arithmetical algorithms on digits presented under natural conditions (e.g. hand-written, placed randomly). We propose a two-tiered architecture for tackling this problem. The lower tier consists of a heterogeneous collection of information processing modules, which can include pre-trained deep neural networks for locating and extracting characters from the image, as well as modules performing symbolic transformations on the representations extracted by perception. The higher tier consists of a controller, trained using reinforcement learning, which coordinates the modules in order to solve the high-level task. For instance, the controller may learn in what contexts to execute the perceptual networks and what symbolic transformations to apply to their outputs. The resulting model is able to solve a variety of tasks in the visual arithmetic domain, and has several advantages over standard, architecturally homogeneous feedforward networks including improved sample efficiency. | Summary: This work is a variant of previous work (Zaremba et al. 2016) that enables the use of (noisy) operators that invoke pre-trained neural networks and is trained with Actor-Critic. In this regard it lacks a bit of originality. The quality of the experimental evaluation is not great. The clarity of the paper could be improved upon but is otherwise fine. The existence of previous work (Zaremba et al. 2016) renders this work (including its contributions) not very significant. Relations to prior work are missing. But let's wait for the rebuttal phase.
Pros
-It is confirmed that noisy operators (in the form of neural networks) can be used on the visual arithmetic task
Cons
-Not very novel
-Experimental evaluation is wanting
The focus of this paper is on integrating perception and reasoning in a single system. This is done by specifying an interface that consists of a set of discrete operations (some of which involve perception) and memory slots. A parameterized policy that can make use of these operations is trained via Actor-Critic to solve some reasoning tasks (arithmetic in this case).
The proposed system is a variant of previous work (Zaremba et al. 2016) on the concept of interfaces, and similarly learns a policy that utilizes such an interface to perform reasoning tasks, such as arithmetic. In fact, the only innovation proposed in this paper is to incorporate some actions that invoke a pre-trained neural network to “read” the symbol from an image, as opposed to parsing the symbol directly. However, there is no reason to expect that this would not function in previous work (Zaremba et al. 2016), even when the network is suboptimal (in which case the operator becomes noisy and the policy should adapt accordingly). Another notable difference is that the proposed system is trained with Actor-Critic as opposed to Q-learning, but this is not further elaborated on by the authors.
The proposed system is evaluated on a visual arithmetic task. The input consists of a 2x2 grid of extended MNIST characters. Each location in the grid then corresponds to the 28 x 28 pixel representation of the digit. Actions include shifting the “fovea” to a different entry of the grid, invoking the digit NN or the operator NN which parses the current grid entry, and some symbolic operations that operate on the memory. The fact that the input is divided into a 2x2 grid severely limits the novelty of this approach compared to previous work (Zaremba et al. 2016). Instead it would have been interesting to randomly spawn digits and operators in a 56 x 56 image and maintain 4 coordinates that specify a variable-sized grid that glimpses a part of the image. This would make the task considerably more difficult, given fixed pre-trained networks. The addition of the salience network is unclear to me in the context of MNIST digits, since any pixel that is greater than 0 is salient? I presume that the LSTM uses this operator to evaluate whether the current entry contains a digit or an operator. If so, wouldn’t simply returning the glimpse be enough?
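To make the kind of interface under discussion concrete, it can be sketched as a small environment whose actions move a fovea over the 2x2 grid, invoke the pre-trained perception networks on the current glimpse, and apply symbolic operations to a memory. This is only an illustrative reconstruction; the operation names, memory layout, and action set are assumptions, not the paper's exact design.

```python
import numpy as np

class VisualArithmeticInterface:
    """Illustrative sketch of a 2x2-grid interface; names and the exact
    action set are assumptions, not the paper's actual design."""

    def __init__(self, image, digit_net, op_net):
        self.image = image          # 56x56 image laid out as a 2x2 grid of 28x28 glimpses
        self.digit_net = digit_net  # pre-trained digit classifier: glimpse -> class scores
        self.op_net = op_net        # pre-trained operator classifier: glimpse -> class scores
        self.fovea = [0, 0]
        self.memory = {"acc": 0, "digit": 0, "op": None}

    def glimpse(self):
        r, c = self.fovea
        return self.image[28 * r:28 * (r + 1), 28 * c:28 * (c + 1)]

    def step(self, action):
        if action == "fovea_right":
            self.fovea[1] = (self.fovea[1] + 1) % 2
        elif action == "fovea_down":
            self.fovea[0] = (self.fovea[0] + 1) % 2
        elif action == "read_digit":      # perceptual module
            self.memory["digit"] = int(np.argmax(self.digit_net(self.glimpse())))
        elif action == "read_op":         # perceptual module
            self.memory["op"] = ["+", "*"][int(np.argmax(self.op_net(self.glimpse())))]
        elif action == "add":             # symbolic transformation on memory
            self.memory["acc"] += self.memory["digit"]
        elif action == "multiply":
            self.memory["acc"] *= self.memory["digit"]
        return self.memory                # observed (in part) by the LSTM controller
```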
In the experiments the proposed system is compared to three CNNs on two different visual arithmetic tasks, one that includes operators as part of the input and one that incorporates operators only in the tasks description. In all cases the proposed method requires fewer samples to achieve the final performance, although given enough samples all of the CNNs will solve the tasks. This is not surprising as this comparison is rather unfair. The proposed system incorporates pre-trained modules, whose training samples are not taken into account. On the other hand the CNNs are trained from scratch and do not start with the capability to recognize digits or operators. Combined with the observation that all CNNs are able to solve the task eventually, there is little insight in the method's performance that can be gained from this comparison.
Although visual arithmetic on a 2x2 grid is a toy task, it would at least be nice to evaluate some of the policies that are learned by the LSTM (as done by Zaremba) to see if some intuition can be recovered from there. Proper evaluation on a more complex environment (or at least one that does not assume discrete grids) is much desired. When increasing the complexity (even if by just increasing the grid size) it would be good to compare to a recurrent method (Pyramid-LSTM, Pixel-RNN) as opposed to a standard CNN as it lacks memory capabilities and is clearly at a disadvantage compared to the LSTM.
Some detailed comments are:
The introduction invokes evidence from neuroscience to argue that the brain is composed of (discrete) modules, without reviewing any of the counter evidence (there may be a lot, given how bold this claim is).
From the introduction it is unclear why the visual arithmetic task is important.
Several statements including the first sentence lack citations.
The contribution section is not giving any credit to Zaremba et al. (2016) whereas this work is at best a variant of that approach.
In the experiment section the role of the saliency detector is unclear.
Experiment details are lacking and should be included.
The related work section could be more focused on the actual contribution being made.
It strikes me as odd that in the discussion the authors propose to make the entire system differentiable, since this goes against the motivation for this work.
Relation to prior work:
p 1: The authors write: "We also borrow the notion of an interface as proposed in Zaremba et al. (2016). An interface is a designed, task-specific machine that mediates the learning agent’s interaction with the external world, providing the agent with a representation (observation and action spaces) which is intended to be more conducive to learning than the raw representations. In this work we formalize an interface as a separate POMDP I with its own state, observation and action spaces."
This interface terminology for POMDPs was actually introduced in:
J. Schmidhuber. Reinforcement learning in Markovian and non-Markovian environments. In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, NIPS'3, pages 500-506. San Mateo, CA: Morgan Kaufmann, 1991.
p 4: authors write: "For the policy πθ, we employ a Long Short-Term Memory (LSTM)"
Do the authors use the (cited) original LSTM of 1997, or do they also use the forget gates (recurrent units with gates) that most people are using now, often called the vanilla LSTM, by Gers et al (2000)?
p 4: authors write: "One obvious point of comparison to the current work is recent research on deep neural networks designed to learn to carry out algorithms on sequences of discrete symbols. Some of these frameworks, including the Differen-tiable Forth Interpreter (Riedel and Rocktäschel, 2016) and TerpreT (Gaunt et al., 2016b), achieve this by explicitly generating code, while others, including the Neural Turing Machine (NTM; Graves et al., 2014), Neural Random-Access Machine (NRAM; Kurach et al., 2015), Neural Programmer (NP; Neelakan- tan et al., 2015), Neural Programmer-Interpreter (NPI; Reed and De Freitas, 2015) and work in Zaremba et al. (2016) on learning algorithms using reinforcement learning, avoid gen- erating code and generally consist of a controller network that learns to perform actions in a (sometimes differentiable) external computational medium in order to carry out an algorithm."
Here the original work should be mentioned, on differentiable neural stack machines:
G.Z. Sun and H.H. Chen and C.L. Giles and Y.C. Lee and D. Chen. Connectionist Pushdown Automata that Learn Context-Free Grammars. IJCNN-90, Lawrence Erlbaum, Hillsdale, N.J., p 577, 1990.
Mozer, Michael C and Das, Sreerupa. A connectionist symbol manipulator that discovers the structure of context-free languages. Advances in Neural Information Processing Systems (NIPS), p 863-863, 1993. |
iclr_2018_B1ydPgTpW | In Chinese societies, superstition is of paramount importance, and vehicle license plates with desirable numbers can fetch very high prices in auctions. Unlike other valuable items, license plates are not allocated an estimated price before auction. I propose that the task of predicting plate prices can be viewed as a natural language processing (NLP) task, as the value depends on the meaning of each individual character on the plate and its semantics. I construct a deep recurrent neural network (RNN) to predict the prices of vehicle license plates in Hong Kong, based on the characters on a plate. I demonstrate the importance of having a deep network and of retraining. Evaluated on 13 years of historical auction prices, the deep RNN's predictions can explain over 80 percent of price variations, outperforming previous models by a significant margin. I also demonstrate how the model can be extended to become a search engine for plates and to provide estimates of the expected price distribution. | Summary: The author takes two pages to describe the data he eventually analyzes - Chinese license plates (sections 1,2), with the aim of predicting auction price based on the "luckiness" of the license plate number. The author mentions other papers that use NNs to predict prices, contrasting them with the proposed model by saying they are usually shallow not deep, and only focus on numerical data not strings. Then the paper goes on to present the model which is just a vanilla RNN, with standard practices like batch normalization and dropout. The proposed pipeline converts each character to an embedding with the only sentence of description being "Each character is converted by a lookup table to a vector representation, known as character embedding." Specifics of the data, RNN training, and the results as well as the stability of the network to hyperparameters are also examined. Finally, the author finds "a feature vector for each plate by summing up the output of the last recurrent layer over time" and then uses knn on these features to find other plates that are grouped together to try to explain how the RNN predicts the prices of the plates. In section 7, the RNN is combined with a handcrafted feature model he criticized in an earlier section for being too simple, to create an ensemble model that predicts the prices marginally better.
Specific Comments on Sections:
Comments: Sec 1,2
In these sections the author makes somewhat odd references to specific economists that seem a little off topic, and, in my opinion, spends a little too much time setting up this specific dataset.
Sec 3
The author does not mention the following reference: "Deep learning for stock prediction using numerical and textual information" by Akita et al., which does incorporate non-numerical information to predict stock prices with deep networks.
Sec 4
What are the characters embedded with? This is important to specify. Is it Word2vec or something else? What does the lookup table consist of? References should be added to the relevant methods.
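For reference, a "lookup table" embedding is usually just a trainable matrix indexed by integer character ids, as in the minimal PyTorch sketch below. Whether the author used such randomly initialized embeddings or pre-trained vectors is exactly the open question; the vocabulary, sizes, and model names here are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

# A character "lookup table": a trainable (vocab_size, embed_dim) matrix
# indexed by integer character ids. Sizes and vocabulary are illustrative.
vocab = {ch: i for i, ch in enumerate("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ ")}
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=32)

plate = "HK 1688"
ids = torch.tensor([[vocab[ch] for ch in plate]])    # shape (1, seq_len)
chars = embedding(ids)                               # shape (1, seq_len, 32)

rnn = nn.RNN(input_size=32, hidden_size=64, batch_first=True)
out, _ = rnn(chars)
price = nn.Linear(64, 1)(out[:, -1])                 # scalar price prediction
```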
Sec 5
I feel like there are many regression models that could have been tried here with word2vec embeddings that would have been an interesting comparison. LSTMs as well could have been a point of comparison.
Sec 6
Nothing too insightful is said about the RNN Model.
Sec 7
The ensembling was a strange extension especially with the Woo model given that the other MLP architecture gave way better results in their table.
Overall: This is a unique NLP problem, and it seems to make a lot of sense to apply an RNN here, considering that word2vec is an RNN. However comparisons are lacking and the paper is not presented very scientifically. The lack of comparisons made it feel like the author cherry picked the RNN to outperform other approaches that obviously would not do well. |
iclr_2018_HkxF5RgC- | Published as a conference paper at ICLR 2018 SPARSE PERSISTENT RNNS: SQUEEZING LARGE RECURRENT NETWORKS ON-CHIP
Recurrent Neural Networks (RNNs) are powerful tools for solving sequence-based problems, but their efficacy and execution time are dependent on the size of the network. Following recent work in simplifying these networks with model pruning and a novel mapping of work onto GPUs, we design an efficient implementation for sparse RNNs. We investigate several optimizations and tradeoffs: Lamport timestamps, wide memory loads, and a bank-aware weight layout. With these optimizations, we achieve speedups of over 6× over the next best algorithm for a hidden layer of size 2304, batch size of 4, and a density of 30%. Further, our technique allows for models of over 5× the size to fit on a GPU for a speedup of 2×, enabling larger networks to help advance the state-of-the-art. We perform case studies on NMT and speech recognition tasks in the appendix, accelerating their recurrent layers by up to 3×. | This paper introduces sparse persistent RNNs, a mechanism to add pruning to the existing work of stashing RNN weights on a chip. The paper describes the use of additional mechanisms for synchronization and memory loading.
The evaluation in the main paper is largely on synthetic workloads (i.e. large layers with artificial sparsity). With evaluation largely over layers instead of applications, I was left wondering whether there is an actual benefit on real workloads. Furthermore, the benefit over dense persistent RNNs for the OpenNMT application (an absolute 0.3-0.5s?) did not appear significant, unless you can convince me otherwise.
Storing weights persistent on chip should give a sharp benefit when all weights fit on the chip. One suggestion I have to strengthen the paper is to claim that due to pruning, now you can support a larger number of methods or method configurations and to provide examples of those.
To summarize, the paper adds the ability to support pruning over persistent RNNs. However, Narang et al. (2017) already explore this idea, although briefly. Furthermore, the gains from the sparsity appear rather limited on real applications. I would encourage the authors to put the NMT evaluation in the main paper (and perhaps add other workloads). Furthermore, a host of techniques are discussed (Lamport timestamps, memory layouts) and implementing them on GPUs is not trivial. However, these are well known and the novelty or even the experience of implementing these on GPUs should be emphasized.
iclr_2018_BkJ3ibb0- | DEFENSE-GAN: PROTECTING CLASSIFIERS AGAINST ADVERSARIAL ATTACKS USING GENERATIVE MODELS
In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images. At inference time, it finds a close output to a given image which does not contain the adversarial changes. This output is then fed to the classifier. Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure. It can also be used as a defense against any attack as it does not assume knowledge of the process for generating the adversarial examples. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies. | The authors describe a new defense mechanism against adversarial attacks on classifiers (e.g., FGSM). They propose utilizing Generative Adversarial Networks (GAN), which are usually used for training generative models for an unknown distribution, but have a natural adversarial interpretation. In particular, a GAN consists of a generator NN G which maps a random vector z to an example x, and a discriminator NN D which seeks to discriminate between examples produced by G and examples drawn from the true distribution. The GAN is trained to minimize the max min loss of D on this discrimination task, thereby producing a G (in the limit) whose outputs are indistinguishable from the true distribution by the best discriminator.
Utilizing a trained GAN, the authors propose the following defense at inference time. Given a sample x (which has been adversarially perturbed), first project x onto the range of G by solving the minimization problem z* = argmin_z ||G(z) - x||_2. This is done by SGD. Then apply any classifier trained on the true distribution on the resulting x* = G(z*).
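To make the projection step concrete, it amounts to a few lines of gradient descent in the latent space. The sketch below is illustrative only; the step counts, learning rate, and number of random restarts are assumptions rather than the paper's exact settings.

```python
import torch

def project_onto_generator(G, x, n_steps=200, lr=0.1, n_restarts=10, latent_dim=100):
    """Sketch of the projection: find z* minimizing ||G(z) - x||^2 by gradient
    descent from several random initial z, keeping the best. Hyperparameter
    names and values are illustrative, not the paper's."""
    best_z, best_loss = None, float("inf")
    for _ in range(n_restarts):
        z = torch.randn(1, latent_dim, requires_grad=True)
        opt = torch.optim.SGD([z], lr=lr)
        for _ in range(n_steps):
            opt.zero_grad()
            loss = ((G(z) - x) ** 2).sum()
            loss.backward()
            opt.step()
        if loss.item() < best_loss:
            best_z, best_loss = z.detach(), loss.item()
    return G(best_z)   # x* = G(z*), fed to the downstream classifier
```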
In the case of existing black-box attacks, the authors argue (convincingly) that the method is both flexible and empirically effective. In particular, the defense can be applied in conjunction with any classifier (including already hardened classifiers), and does not assume any specific attack model. Nevertheless, it appears to be effective against FGSM attacks, and competitive with adversarial training specifically to defend against FGSM.
The authors provide less-convincing evidence that the defense is effective against white-box attacks. In particular, the method is shown to be robust against FGSM, RAND+FGSM, and CW white-box attacks. However, it is not clear to me that the method is invulnerable to novel white-box attacks. In particular, it seems that the attacker can design an x which projects onto some desired x* (using some other method entirely), which then fools the classifier downstream.
Nevertheless, the method is shown to be an effective tool for hardening any classifier against existing black-box attacks (which is arguably of great practical value). It is novel and should generate further research with respect to understanding its vulnerabilities more completely.
Minor Comments:
The sentence starting “Unless otherwise specified…” at the top of page 7 is confusing given the actual contents of Tables 1 and 2, which are clarified only by looking at Table 5 in the appendix. This should be fixed. |
iclr_2018_r1Zi2Mb0- | Neural architecture search (NAS), the task of finding neural architectures automatically, has recently emerged as a promising approach for discovering better models than ones designed by humans alone. However, most success stories are for vision tasks and have been quite limited for text, except for small language modeling datasets. In this paper, we explore NAS for text sequences at scale, by first focusing on the task of language translation and later extending to reading comprehension. We conduct extensive searches over the recurrent cells and attention similarity functions for standard sequence-to-sequence models across two translation tasks, IWSLT English-Vietnamese and WMT German-English. We report challenges in performing cell searches as well as demonstrate initial success on attention searches with translation improvements over strong baselines. In addition, we show that results on attention searches are transferable to reading comprehension on the SQuAD dataset. | This paper proposes a method to find an effective structure of RNNs and attention mechanisms by searching programs over a stack-oriented execution engine.
Although the only new point of this paper appears to be the representation paradigm of each program (a possibly variable-length list of function applications), this could be a flexible framework for finding a function without imposing a prior structure like the one in Fig. 1 (left).
However, the execution engine itself does not look well designed. For example, the authors describe that the engine ignores binary operations that cannot be executed at that point. In my view, such operations should not be included in the set of candidate operations in the first place, i.e., the set of candidates should be constrained directly by the state of the stack.
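A minimal sketch of this suggestion: mask out any operation whose arity exceeds the current stack depth before the controller samples, rather than sampling freely and silently discarding invalid choices. Operation names and arities below are illustrative, not the paper's actual operation set.

```python
import math
import random

# Illustrative operation set: each op consumes `arity` items from the stack.
OP_ARITY = {"push_input": 0, "identity": 1, "tanh": 1, "add": 2, "mul": 2}
OPS = list(OP_ARITY)

def masked_sample(logits, stack_depth, rng=random):
    """Renormalize the controller's distribution over ops that are valid
    given the current stack depth, then sample."""
    weights = [math.exp(l) if OP_ARITY[op] <= stack_depth else 0.0
               for op, l in zip(OPS, logits)]
    total = sum(weights)
    return rng.choices(OPS, weights=[w / total for w in weights], k=1)[0]

# With an empty stack only "push_input" can be chosen:
print(masked_sample([0.1, 0.2, 0.3, 2.0, 2.0], stack_depth=0))
```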
Also, including repeated "identity" operations (among the candidate attention operations) seems to introduce unnecessary redundancy into the search space. The same expressiveness could be achieved by predicting a special token only once at the end of the sequence (namely, an "end-of-sequence" token, just as in usual auto-regressive RNN-based decoder models).
The comparison in the experiments looks unconvincing. The score improvement is slight, even though the authors paid a large computational cost for searching network structures. The conventional method (Zoph & Le, 2017) in row 3 of Table 1 is not comparable with the proposed methods because it was trained on an out-of-domain task (LM) using the conventional (tree-based) search space. The authors should at least show results from applying the conventional search space to the tasks of this paper.
In Table 2, the "our baseline" looks cheap because the dot product is the least attention model in those proposed in past studies.
The catastrophic score drop in rows 5 and 7 of Table 1 looks interesting, but the paper does not provide enough understanding of this phenomenon, which makes the proposed method hard to apply to other tasks.
The same problem exists in the setting of the hyperparameters of the reward functions. According to the footnote, largely different values of \beta are used, which suggests sensitivity to this parameter. The authors should provide some criterion for choosing these hyperparameters.
iclr_2018_Hk-FlMbAZ | In the adversarial-perturbation problem of neural networks, an adversary starts with a neural network model F and a point x that F classifies correctly, and applies a small perturbation to x to produce another point x′ that F classifies incorrectly. In this paper, we propose taking into account the inherent confidence information produced by models when studying adversarial perturbations, where a natural measure of "confidence" is ||F(x)||_∞ (i.e. how confident F is about its prediction?). Motivated by a thought experiment based on the manifold assumption, we propose a "goodness property" of models which states that confident regions of a good model should be well separated. We give formalizations of this property and examine existing robust training objectives in view of them. Interestingly, we find that a recent objective by Madry et al. encourages training a model that satisfies well our formal version of the goodness property, but has a weak control of points that are wrong but with low confidence. However, if Madry et al.'s model is indeed a good solution to their objective, then good and bad points are now distinguishable and we can try to embed uncertain points back to the closest confident region to get (hopefully) correct predictions. We thus propose embedding objectives and algorithms, and perform an empirical study using this method. Our experimental results are encouraging: Madry et al.'s model wrapped with our embedding procedure achieves almost perfect success rate in defending against attacks that the base model fails on, while retaining good generalization behavior. | The authors argue that "good" classifiers naturally represent the classes in a classification as well-separated manifolds, and that adversarial examples are low-confidence examples lying near to one of these manifolds. The authors suggest "fixing" adversarial examples by projecting them back to the manifold, essentially by finding a point near the adversarial example that has high confidence.
There are numerous issues here, which taken together, make the whole story pretty unconvincing.
The term "manifold" is used very sloppily. To be fair, this is unfortunately common in modern machine learning. An actual manifold is a specific mathematical structure with specific properties. In ML, what is generally hypothesized is that the data (often per class) lives "near" to some "low-dimensional" structure. In this paper, even the low-dimensionality isn't used --- the "manifold assumption" is used as a stand-in for "the regions associated with different classes are well-separated." (This is partially discussed in Section 6, where the authors point out correctly that the same defense as used here could be used with a 1-nn model.) This is fine as far as it goes, but the paper refs Basri & Jacobs 2016 multiple times as if it says anything relevant about this paper: Basri & Jacobs is specifically about the ability of deep nets to fit data that falls on (actual, mathematical) manifolds. This reference doesn't add much to the present story.
The essential argument of the paper rests on the "Postulate: (A good model) F is confident on natural points drawn from the manifolds, but has low confidence on points outside of the manifolds."
This postulate is sloppy and speculative. For instance, taken in its strong form, if believe the postulate, then a good model:
1. Can classify all "natural points" from all classes with 100% accuracy.
2. Can detect adversarial points with 100% accuracy because all high-confidence points are correct classifications and all low-confidence points are adversarial.
3. All adversarial examples will be low-confidence.
Point 1 makes it clear that no good model F fully satisfying the postulate exists --- models never achieve 100% accuracy on difficult real-world distributions. But the method for dealing with adversarial examples seems to require Points 2 and 3 to be true.
To be fair, the paper more-or-less admits that how true these points are is not known and is important. Nevertheless, I think this paper comes pretty close to arguing something that I *think* is not true, and doesn't do much to back up its argument. Because of the quality of the writing (generally sloppy), it's hard to tell, but I believe the authors are basically arguing that:
a. You can generally easily detect adversarial points because they are low confidence.
b. If you go through a procedure to find a point near your adversarial point that is high-confidence, you'll get the "correct" (or perhaps "original") class back.
I think b follows from a, but a is extremely suspect. I do not personally work in adversarial examples, and briefly looking at the literature, it seems that most authors *do* focus on how something is classified and not its confidence, but I don't think it's *that* hard to generate high-confidence adversarial examples. Early work by Goodfellow et al. ("Explaining and Harnessing Adversarial Examples", Figure 1) shows an example where the incorrect classification has very high confidence. The present paper only uses Carlini-Wagner attacks. From a read of Carlini-Wagner, it seems they are heavily concerned with finding *minimal* perturbations to achieve a given misclassification; this will of course produce low-confidence adversaries, but I see no reason why this is a general property of all adversarial examples.
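To make this concrete, a targeted iterative attack that ascends the target class's confidence (rather than minimizing the perturbation size) is straightforward to write down. The sketch below is a generic illustration, not something taken from the paper under review, and all hyperparameter values are assumptions.

```python
import torch
import torch.nn.functional as F

def high_confidence_adversary(model, x, target_class, eps=0.1, alpha=0.01, steps=40):
    """Illustrative sketch: iteratively increase the model's confidence in a
    chosen target class within an L-infinity ball of radius eps around x.
    Unlike a minimal-perturbation attack (e.g. Carlini-Wagner), nothing here
    pushes the result towards low confidence."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        target_logprob = F.log_softmax(model(x_adv), dim=1)[:, target_class].sum()
        grad = torch.autograd.grad(target_logprob, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()         # ascend target confidence
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # stay inside the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```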
The experiments are weak. I applaud the authors for mentioning the experiments are very preliminary, but that doesn't make them any less weak.
What are we to make of the one image discussed at the end of Section 5 and shown in Figure 1? The authors note that the original image gives low-confidence for the correct class. (Does this mean that the classifier isn't "good"? Is it evidence against some kind of manifold assumption?) The authors note the adversarial category has significantly higher confidence, and say "in this case, it seems that it is the vagueness of the signals/data that lead to a natural difficulty." But the signals and data are ALWAYS vague. If they weren't, machine learning would be easy. This paper proposes something, looks at a tiny number of examples, and already finds a counterexample to the theory. What's the evidence *for* the theory?
A lot of writing is given over to how this method is "semantic", and I just don't buy it. The connection to manifolds is weak. The basic argument here is really "(1) If our classifiers produce smooth well-separated high-confidence regions, (2) then we can detect adversaries because they're low-confidence, and (3) we can correct adversaries by projecting them back to high-confidence." (1) seems vastly unlikely to me based on all my experience: neural nets often get things wrong, they often get things wrong with high confidence, and when they're right, the confidence is at least sometimes low. The authors use a sloppy postulate about good models and so could perhaps argue I've never seen a good model, but the methods of this paper require a good model. (2) seems to follow logically from (1). (3) is also suspect --- perturbations which are *minimal* can be corrected as this paper does (and Carlini-Wagner attacks are minimal by design), but there's no reason to expect general perturbations to be minimal.
The writing is poor throughout. It's generally readable, but the wordings are often odd, and sometimes so odd it's hard to tell what was meant. For instance, I spent awhile trying to decide whether the authors assumed common classifiers are "good" (according to the postulate) or whether this paper was about a way to *make* classifiers good (I eventually decided the former). |
iclr_2018_HyzbhfWRW | Published as a conference paper at ICLR 2018 LEARN TO PAY ATTENTION
We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parameterised by the score matrices, must alone be used for classification. Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack. | This paper proposes a network with the standard soft-attention mechanism for classification tasks, where the global feature is used to attend on multiple feature maps of local features at different intermediate layers of CNN. The attended features at different feature maps are then used to predict the final classes by either concatenating features or ensembling results from individual attended features. The paper shows that the proposed model outperforms the baseline models in classification and weakly supervised segmentation.
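As described here, the mechanism boils down to scoring each spatial position of an intermediate feature map against the global descriptor and classifying from the resulting convex combination. The sketch below is an illustrative reconstruction; the paper's actual compatibility function (additive vs. dot product) is one of the points questioned further down.

```python
import torch
import torch.nn.functional as F

def attend(local_feats, global_feat):
    """Sketch of global-feature-as-query attention (details are illustrative).
    local_feats: (B, C, H, W) intermediate feature map; global_feat: (B, C)."""
    B, C, H, W = local_feats.shape
    local_flat = local_feats.view(B, C, H * W)                    # (B, C, HW)
    scores = torch.einsum("bc,bcn->bn", global_feat, local_flat)  # dot-product compatibility
    attn = F.softmax(scores, dim=1)                               # convex-combination weights
    attended = torch.einsum("bn,bcn->bc", attn, local_flat)       # (B, C) attended descriptor
    return attended, attn.view(B, H, W)

# The attended descriptors from several layers are then concatenated (or used
# separately in an ensemble) and passed to the final classifier.
```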
Strength:
- It is an interesting idea to use the global feature as a query in the attention mechanism, while classification tasks do not naturally involve a query, unlike other tasks such as visual question answering and image captioning.
- The proposed model shows superior performances over GAP in multiple tasks.
Weakness:
- There are a lot of missing references. There have been many works using the soft-attention mechanism in different applications including visual question answering [A-C], attribute prediction [D], image captioning [E,F] and image segmentation [G]. Only two previous works using soft attention (Bahdanau et al., 2014; Xu et al., 2015) are mentioned in the Introduction, but they are not discussed, while other types of attention models (Mnih et al., 2014; Jaderberg et al., 2015) are discussed more.
- Section 2 lacks discussions about related work but is more dedicated to emphasizing the contribution of the paper.
- The global feature is used as the query vector for the attention calculation. Thus, if the global feature contains information for a wrong class, the attention quality should be poor too. A justification of this issue could improve the paper.
- [H] reports the performance on fine-grained bird classification using a different type of attention mechanism. A comparison with and justification against this method could improve the paper. The accuracy in [H] is almost 10 percentage points higher than that of the proposed model.
- In the segmentation experiments, the models are trained on extremely small images, which is unnatural in segmentation scenarios. Experiments in realistic settings should be included. Moreover, [G] introduces a method using an attention model for segmentation, but the paper does not contain any discussion of it.
Overall, I am concerned that the proposed model is not well discussed in relation to important previous works. I believe that comparisons and discussions with these works could greatly improve the paper.
I also have some questions about the experiments:
- Is there any reason why the concatenation has to be simplified into an addition in Section 3.2? They are not equivalent.
- When generating the fooling images of VGG-att, is the attention module involved, or do you use the same fooling images for both VGG and VGG-att?
Minor comments:
- Fig. 1 -> Fig. 2 in Section 3.1. If not, Fig. 2 is never referred to.
References
[A] Huijuan Xu and Kate Saenko. Ask, attend and answer: Exploring question-guided spatial attention for visual question answering. In ECCV, 2016.
[B] Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image question answering. In CVPR, 2016.
[C] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Deep compositional question answering with neural module networks. In CVPR, 2016.
[D] Paul Hongsuck Seo, Zhe Lin, Scott Cohen, Xiaohui Shen, and Bohyung Han. Hierarchical attention networks. arXiv preprint arXiv:1606.02393, 2016.
[E] Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. Image captioning with semantic attention. In CVPR, 2016.
[F] Jonghwan Mun, Minsu Cho, and Bohyung Han. Text-Guided Attention Model for Image Captioning. AAAI, 2017.
[G] Seunghoon Hong, Junhyuk Oh, Honglak Lee and Bohyung Han, Learning Transferrable Knowledge for Semantic Segmentation with Deep Convolutional Neural Network, In CVPR, 2016.
[H] Max Jaderberg, Karen Simonyan, Andrew Zisserman, Koray Kavukcuoglu, Spatial Transformer Networks, NIPS, 2015 |
iclr_2018_SyhcXjy0Z | The paper proposes and demonstrates a Deep Convolutional Neural Network (DCNN) architecture to identify users with a disguised face attempting a fraudulent ATM transaction. The recent introduction of Disguised Face Identification (DFI) framework Singh et al. (2017) proves the applicability of deep neural networks for this very problem. All the ATMs nowadays incorporate a hidden camera in them and capture the footage of their users. However, it is impossible for the police to track down the impersonators with disguised faces from the ATM footage. The proposed deep convolutional neural network is trained to identify, in real time, whether the user in the captured image is trying to cloak his identity or not. The output of the DCNN is then reported to the ATM to take appropriate steps and prevent the swindler from completing the transaction. The network is trained using a dataset of images captured in similar situations as of an ATM. The comparatively low background clutter in the images enables the network to demonstrate high accuracy in feature extraction and classification for all the different disguises. | As one can see by the title, the originality (application of DCNN) and significance (limited to ATM domain) are very limited. If this is still enough for ICLR, the paper could be okay. However, even so, one can clearly see that the architecture, the depth, the regularization techniques, and the evaluation are clearly behind the state of the art. Especially for this problem domain, drop-out and data augmentation should be investigated.
Only one dataset is used for the evaluation and it seems to be very limited and small. Moreover, it seems that the same subjects (even if in other pictures) may appear in the training set and test set as they were randomly selected. Looking into the reference (to get the details of the dataset - from a workshop of the IEEE International Conference on Computer Vision Workshops (ICCVW) 2017) reveals that it has only 25 subjects and 10 disguises. This makes it even likely that the same subject with the same disguise appears in the training and test set.
A very bad practice, which unfortunately is common among deep learning researchers with a limited pattern recognition background, is that the accuracy on the test set is measured at every timestamp and finally the highest accuracy is reported. As such, you perform an optimization of the parameter #iterations on the test set, making it a validation set and not an independent test set.
Minor issues:
make sure that the capitalization in the references is correct (ATM should be capital, e.g., by putting {ATM} - and many more things). |
iclr_2018_B1ae1lZRb | APPRENTICE: USING KNOWLEDGE DISTILLATION TECHNIQUES TO IMPROVE LOW-PRECISION NETWORK ACCURACY
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems -the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline. | The authors investigate knowledge distillation as a way to learn low precision networks. They propose three training schemes to train a low precision student network from a teacher network. They conduct experiments on ImageNet-1k with variants of ResNets and multiple low precision regimes and compare performance with previous works
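For context, the standard knowledge-distillation objective that such teacher-student schemes build on can be sketched as follows; the exact loss weighting, and how it is combined with low-precision weights across the paper's three schemes, may differ from this generic formulation.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Generic knowledge-distillation loss (temperature T and weight alpha are
    illustrative): hard-label cross-entropy plus a temperature-softened KL term
    that pulls the student towards the teacher's output distribution."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    return alpha * hard + (1.0 - alpha) * soft
```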
Pros:
(+) The paper is well written, the schemes are well explained
(+) Ablations are thorough and comparisons are fair
Cons:
(-) The gap with full precision models is still large
(-) Transferability of the learned low precision models to other tasks is not discussed
The authors tackle a very important problem, the one of learning low precision models without compromising performance. For scheme-A, the authors show the performance of the student network under many low precision regimes and different depths of teacher networks. One observation not discussed by the authors is that the performance of the student network under each low precision regime doesn't improve with deeper teacher networks (see Tables 1, 2 & 3). As a matter of fact, in some scenarios the performance even decreases.
The authors do not discuss the gains of their best low-precision regime in terms of computation and memory.
Finally, the true applications for models with a low memory footprint are not necessarily related to image classification models (e.g. ImageNet-1k). How good are the low-precision models trained by the authors at transferring to other tasks? Is it possible to transfer student-teacher training practices to other tasks? |
iclr_2018_SysEexbRb | CRITICAL POINTS OF LINEAR NEURAL NETWORKS: ANALYTICAL FORMS AND LANDSCAPE PROPERTIES
Due to the success of deep learning to solving a variety of challenging machine learning tasks, there is a rising interest in understanding loss functions for training neural networks from a theoretical aspect. Particularly, the properties of critical points and the landscape around them are of importance to determine the convergence performance of optimization algorithms. In this paper, we provide a necessary and sufficient characterization of the analytical forms for the critical points (as well as global minimizers) of the square loss functions for linear neural networks. We show that the analytical forms of the critical points characterize the values of the corresponding loss functions as well as the necessary and sufficient conditions to achieve global minimum. Furthermore, we exploit the analytical forms of the critical points to characterize the landscape properties for the loss functions of linear neural networks and shallow ReLU networks. One particular conclusion is that: While the loss function of linear networks has no spurious local minimum, the loss function of one-hidden-layer nonlinear networks with ReLU activation function does have local minimum that is not global minimum. | This paper studies the critical points of shallow and deep linear networks. The authors give a (necessary and sufficient) characterization of the form of critical points and use this to derive necessary and sufficient conditions for which critical points are global optima. Essentially this paper revisits a classic paper by Baldi and Hornik (1989) and relaxes a few requires assumptions on the matrices. I have not checked the proofs in detail but the general strategy seems sound. While the exposition of the paper can be improved in my view this is a neat and concise result and merits publication in ICLR. The authors also study the analytic form of critical points of a single-hidden layer ReLU network. However, given the form of the necessary and sufficient conditions the usefulness of of these results is less clear.
Detailed comments:
- I think in the title/abstract/intro the use of "neural nets" is somewhat misleading, as neural nets are typically nonlinear. This paper is mostly about linear networks. While a result has been stated for single-hidden-layer ReLU networks, in my view this particular result is an immediate corollary of the result for linear networks. As I explain further below, given the combinatorial form of the result, the usefulness of this particular extension to ReLU networks is not very clear. I would suggest rewording the title/abstract/intro.
- Theorem 1 is neat, well done!
- Page 4 p_i’s in proposition 1
From my understanding the p_i have been introduced in Theorem 1 but given their prominent role in this proposition they merit a separate definition (and ideally in terms of the A_i directly).
- Theorems 1, prop 1, prop 2, prop 3, Theorem 3, prop 4 and 5
Are these characterizations computable i.e. given X and Y can one run an algorithm to find all the critical points or at least the parameters used in the characterization p_i, V_i etc?
- Theorems 1, prop 1, prop 2, prop 3, Theorem 3, prop 4 and 5
I would recommend a better exposition of why these theorems are useful. What insights does one gain by knowing these theorems? Are there sufficient conditions that are more intuitive or useful? (An insightful sufficient condition is in some cases much more valuable than an unintuitive necessary and sufficient one.)
- Page 5 Theorem 2
Does this theorem have any computational implications? Does it imply that the global optima can be found efficiently, e.g. are saddles strict with a quantifiable bound?
- Page 7, Proposition 6 seems like an immediate consequence of Theorem 1; however, given the combinatorial nature of the K_{I,J}, it is not clear why this theorem is useful. E.g., going back to my earlier comment w.r.t. linear networks: given Y and X, can you find the parameters of this characterization with a computationally efficient algorithm?
iclr_2018_S1Ow_e-Rb | Prior work on speech and audio processing has demonstrated the ability to obtain excellent performance when learning directly from raw audio waveforms using convolutional neural networks (CNNs). However, the exact inner workings of a CNN remain unclear, which hinders further developments and improvements in this direction. In this paper, we theoretically analyze and explain how deep CNNs learn from raw audio waveforms and identify potential limitations of existing network structures. Based on this analysis, we further propose a new network architecture (called SimpleNet), which offers a very simple but concise structure and high model interpretability. | The paper proposes a CNN-based approach for speech processing using raw waveforms as input. An analysis of convolution and pooling layers applied on waveforms is first presented. An architecture called SimpleNet is then presented and evaluated on two speech tasks: emotion recognition and gender classification.
This paper proposes a theoretical analysis of convolution and pooling layers to motivate the SimpleNet architecture. To my understanding, the analysis is flawed (see comments below). The SimpleNet approach is interesting but not sufficiently backed with experimental results. The network analysis is minimal and provides almost no insights. I therefore recommend rejecting the paper.
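For readers less familiar with this setting, the first layer under discussion is simply a 1-D convolution applied to the raw waveform, i.e. a bank of learned FIR filters. The sketch below is illustrative only; the kernel size, stride, and number of filters are assumptions, not SimpleNet's actual configuration.

```python
import torch
import torch.nn as nn

# A 1-D convolution acting directly on the raw waveform, i.e. a learned filterbank.
waveform = torch.randn(8, 1, 16000)            # (batch, 1 channel, 1 s at 16 kHz)
filterbank = nn.Conv1d(in_channels=1, out_channels=40,
                       kernel_size=400, stride=160)   # ~25 ms windows, 10 ms hop
features = torch.relu(filterbank(waveform))    # (8, 40, ~98) time-frequency-like map
pooled = nn.MaxPool1d(kernel_size=2)(features)

# filterbank.weight, of shape (40, 1, 400), can be inspected or plotted as the
# learned filters, or initialized from Mel/Gammatone filters as discussed below.
```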
Detailed comments:
Section 1:
* “Therefore, it remains unknown what actual features CNNs learn from waveform”. This is not true, several works on speech recognition have shown that a convolution layer taking raw speech as input can be seen as a bank of learned filters. For instance in the context of speech recognition, [9] showed that the filters learn phoneme-specific responses, [10] showed that the learned filters are close to Mel filter banks and [7] showed that the learned filters are related to MRASTA features and Gabor filters. The authors should discuss these previous works in the paper.
Section 2:
* Section 2.1 seems unnecessary, I think it’s safe to assume that the Shannon-Nyquist theorem and the definition of convolution are known by the reader.
* Section 2.2.2 & 2.2.3: I don't follow the justification that stacking convolutions are not needed: the example provided is correct if two convolutions are directly stacked without non-linearity, but the conclusion does not hold with a non-linearity and/or a pooling layer between the convolutions: two stacked convolutions with non-linearities are not equivalent to a single convolution. To my understanding, the same problem is present for the pooling layer: the presented conclusion that pooling introduces aliasing is only valid for two directly stacked pooling layers and is not correct for stacked blocks of convolution/pooling/non-linearity.
* Section 2.2.5: The ReLU can be seen as a half-wave rectifier if it is applied directly to the waveform. However, it is usually not the case as it is applied on the output of the convolution and/or pooling layers. Therefore I don’t see the point of this section.
* Section 2.2.6: In this section, the authors discuss the differences between spectrogram-based and waveforms-based approaches, assuming that spectrogram-based approach have fixed filters. But spectrogram can also be used as input to CNNs (i.e. using learned filters) for instance in speech recognition [1] or emotion recognition [11]. Thus the comparison could be more interesting if it was between spectrogram-based and raw waveform-based approaches when the filters are learned in both cases.
Section 3:
* Figure 4 is very interesting, and is in my opinion a stronger motivation for SimpleNet than the analysis presented in Section 2.
* Using known filterbanks such as Mel or Gammatone filters as an initialization point for the convolution layer is not novel and has already been investigated in [7,8,10] in the context of speech recognition.
Section 4:
* On emotion recognition, the results show that the proposed approach is slightly better, but there are some issues: the average recall metric is usually used for this task due to class imbalance (see [1] for instance). Could the authors provide results with this metric? Also, IEMOCAP is a widely used corpus for this task; could the authors provide some baseline performance for comparison (e.g. [11])?
* For gender classification, there is no gain from SimpleNet compared to the baselines. The authors also mention that some utterances have overlapping speech. These utterances are easy to find from the annotations provided with the corpus, so it should be easy to remove them for the train and test set. Overall, in the current form, the results are not convincing.
* Section 4.3: The analysis is minimal: it shows that the filters changed after training (as already presented in Figure 4). I don't completely follow the argument that the filters should focus on low frequencies because they are more informative; one could expect that the filters will specialize, so that some of them focus on high frequencies to model high-frequency events such as consonants or unvoiced events.
It could be very interesting to relate the learned filters to the labels: are some filters learned to model specific emotions? For gender classification, are some filters focusing on the average pitch frequencies of male and female speakers?
* Finally, it would be nice to see if the claims in Section 2 about the fact that only one convolution layer is needed and that stacking pooling layers can hurt the performance are verified experimentally: for instance, experiments with more than one pair of convolution/pooling could be presented.
Minor comments:
* More references for raw waveforms-based approach for speech recognition should be added [3,4,6,7,8,9] in the introduction.
* I don’t understand the first sentence of the paper: “In the field of speech and audio processing, due to the lack of tools to directly process high dimensional data …”. Is this also true for any pattern recognition fields ?
* For the MFCCs reference in 2.2.2, the authors should cite [12].
* Figure 6: Only half of the spectrum should be presented.
References:
[1] H. Lee, P. Pham, Y. Largman, and A. Y. Ng. Unsupervised feature learning for audio classification using convolutional deep belief networks. In Advances in Neural Information Processing Systems 22, pages 1096–1104, 2009.
[2] Schuller, Björn, Stefan Steidl, and Anton Batliner. "The interspeech 2009 emotion challenge." Tenth Annual Conference of the International Speech Communication Association. 2009.
[3] N. Jaitly, G. Hinton, Learning a better representation of speech sound waves using restricted Boltzmann machines, in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011, pp. 5884–5887.
[4] D. Palaz, R. Collobert, and M. Magimai.-Doss. Estimating Phoneme Class Conditional Probabilities from Raw Speech Signal using Convolutional Neural Networks, INTERSPEECH 2013, pages 1766–1770.
[5] Van den Oord, Aaron, Sander Dieleman, and Benjamin Schrauwen. "Deep content-based music recommendation." Advances in neural information processing systems. 2013.
[6] Z. Tuske, P. Golik, R. Schlüter, H. Ney, Acoustic Modeling with Deep Neural Networks Using Raw Time Signal for LVCSR, in: Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), Singapore, 2014, pp. 890–894.
[7] P. Golik, Z. Tuske, R. Schlüter, H. Ney, Convolutional Neural Networks for Acoustic Modeling of Raw Time Signal in LVCSR, in: Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015, pp. 26–30.
[8] Yedid Hoshen and Ron Weiss and Kevin W Wilson, Speech Acoustic Modeling from Raw Multichannel Waveforms, International Conference on Acoustics, Speech, and Signal Processing, 2015.
[9] D. Palaz, M. Magimai-Doss, and R. Collobert. Analysis of CNN-based Speech Recognition System using Raw Speech as Input, INTERSPEECH 2015, pages 11–15.
[10] T. N. Sainath, R. J. Weiss, A. Senior, K. W. Wilson, and O. Vinyals. Learning the Speech Front-end With Raw Waveform CLDNNs. Proceedings of the Annual Conference of the International Speech Communication Association (INTERSPEECH), 2015.
[11] Satt, Aharon & Rozenberg, Shai & Hoory, Ron. (2017). Efficient Emotion Recognition from Speech Using Deep Learning on Spectrograms. 1089-1093. Interspeech 2017.
[12] S. Davis and P. Mermelstein. Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Transactions on Acoustics, Speech and Signal Processing, 28(4):357–366, 1980. |
iclr_2018_SJ60SbW0b | Deep neural networks are able to solve tasks across a variety of domains and modalities of data. Despite many empirical successes, we lack the ability to clearly understand and interpret the learned mechanisms that contribute to such effective behaviors and more critically, failure modes. In this work, we present a general method for visualizing an arbitrary neural network's inner mechanisms and their power and limitations. Our dataset-centric method produces visualizations of how a trained network attends to components of its inputs. The computed "attention masks" support improved interpretability by highlighting which input attributes are critical in determining output. We demonstrate the effectiveness of our framework on a variety of deep neural network architectures in domains from computer vision and natural language processing. The primary contribution of our approach is an interpretable visualization of attention that provides unique insights into the network's underlying decision-making process irrespective of the data modality. | The authors of this paper proposed a data-driven black-box visualization scheme. The paper primarily focuses on neural network models in the experiment section. The proposed method iteratively optimizes learnable masks for each training example to find the most relevant content in the input that was "attended" by the neural network. The authors empirically demonstrated their method on image and text classification tasks.
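The per-example mask optimization can be pictured roughly as follows. This is a generic sketch of input-mask optimization rather than the paper's exact objective, with beta playing the sparsity-weight role questioned further down; step counts, learning rate, and the regularizer are assumptions.

```python
import torch

def learn_attention_mask(model, x, target_class, beta=1e-3, steps=300, lr=0.05):
    """Generic sketch: learn a mask in [0,1] over the input that preserves the
    model's score for the predicted class while a beta-weighted term pushes the
    mask to be small, so only the relevant input regions stay switched on."""
    mask_logits = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mask = torch.sigmoid(mask_logits)
        score = model(mask * x)[:, target_class].sum()
        loss = -score + beta * mask.abs().mean()   # keep prediction, penalize mask area
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_logits).detach()
```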
Strength:
- The paper is well-written and easy to follow.
- The qualitative analysis of the experimental results nicely illustrates how the learned latent attention masks match our intuition about how neural networks make their classification predictions.
Weakness:
- Most of the experiments in the paper are performed on small neural networks and simple datasets. I found the method would be more compelling if the authors could show visualization results on ImageNet models. Besides simple object recognition tasks, other more interesting tasks to test out the proposed visualization method are object detection models like end-to-end Fast R-CNN, video classification models, and image-captioning models. Overall, the current set of experiments is too limited to showcase the effectiveness of the proposed method.
- It is unclear how the hyperparameter is chosen for the proposed method. How does the \beta affect the visualization quality? It would be great to show a range of samples from high to low beta values. Does it require tuning for different visualization samples? Does it vary over different datasets? |
iclr_2018_r1NYjfbR- | Published as a conference paper at ICLR 2018 GENERATIVE NETWORKS AS INVERSE PROBLEMS WITH SCATTERING TRANSFORMS
Generative Adversarial Nets (GANs) and Variational Auto-Encoders (VAEs) provide impressive image generations from Gaussian white noise, but the underlying mathematics are not well understood. We compute deep convolutional network generators by inverting a fixed embedding operator. Therefore, they do not require to be optimized with a discriminator or an encoder. The embedding is Lipschitz continuous to deformations so that generators transform linear interpolations between input white noise vectors into deformations between output images. This embedding is computed with a wavelet Scattering transform. Numerical experiments demonstrate that the resulting Scattering generators have similar properties as GANs or VAEs, without learning a discriminative network or an encoder. | The paper proposes a generative model for images that does no require to learn a discriminator (as in GAN’s) or learned embedding. The proposed generator is obtained by learning an inverse operator for a scattering transform.
The paper is well written and clear. The main contribution of the work is to show that one can design an embedding with some desirable properties and recover, to a good degree, most of the interesting aspects of generative models. However, the model doesn’t seem to be able to produce high quality samples. In my view, having a learned pseudo-inverse for scattering coefficients is interesting in its own right. The authors should show more clearly the generalization capabilities to test samples. Is the network able to invert images that follow the training distribution but are not in the training set?
As the authors point out, the representation is non-invertible. It seems that using an L2 loss in pixel space for training the generator would necessarily lead to blurred reconstructions (and samples), since it produces a point estimate, unless the generator overfits the training data, in which case it would not generalize. The reason is that many images would lie in the level set for a given feature vector, and the generator cannot deterministically disambiguate which one to match.
The sampling method described in Section 3.2 does not suffer from this problem, although as the authors point out, a good initialization is required. Would it make sense to combine the two? Use the generator network to produce a good initial condition and then refine it with the iterative procedure.
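To make the suggestion concrete, the refinement step could look like the following minimal sketch, where phi stands for any differentiable implementation of the (fixed) scattering transform and all names and step counts are placeholders:

```python
import torch

def refine_from_generator(phi, target_coeffs, x_init, steps=200, lr=0.05):
    # Start from the generator's output and descend on the embedding-matching loss.
    x = x_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (phi(x) - target_coeffs).pow(2).sum()   # match the scattering coefficients
        loss.backward()
        opt.step()
    return x.detach()
```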
This property is exploited in the conditional generation setting in:
Bruna, J. et al "Super-resolution with deep convolutional sufficient statistics." arXiv preprint arXiv:1511.05666 (2015).
The samples produced by the model are of poorer quality than those obtained with GAN’s. Clearly the model is assigning mass to regions of the space where there are no valid images (a similar effect to the one suffered by models trained with MLE). Could you please comment on this point?
The title is a bit misleading in my view. “Analyzing GANs” suggests analyzing the model in general, this is, its architecture and training method (e.g. loss functions etc). However the analysis concentrates in the structure of the generator and the particular case of inverting scattering coefficients.
However, I do find very interesting the analysis provided in Section 3.2. The idea of using meaningful intermediate (and stable) targets for the first two layers seems like a very good idea. Are there any practical differences in terms of quality of the results? This might show in more complex datasets.
Could you please provide details on the dimensionality of the scattering representation at different scales? Say, how many coefficients are in S_5?
In Figure 3, it would be good to show some interpolation results for test images as well, to have a visual reference.
The authors mention that considering the network as a memory storage would allow one to better recover known faces from unknown faces. It seems that it would rather be known from unknown images; that is, it is not clear why this method would generalize to a novel image of the same individuals. Also, the memory would be quite rigid, as adding a new image would require adapting the generator.
Other minor points:
Last paragraph of page 1, “Th inverse \Phi…” is missing the ‘e’.
Some references (to figures or citations) seem to be missing, e.g. at the end of page 4, at the beginning of page 5, before equation (6).
Also, some citations should be corrected, for instance, at the end of the first paragraph of Section 3.1:
“… wavelet filters Malat (2016).”
Should be:
“... wavelet filters (Malat, 2016).”
First paragraph of Section 3.3. The word generator is repeated. |
iclr_2018_rJ8rHkWRb | This work introduces a simple network for producing character aware word embeddings. Position agnostic and position aware character embeddings are combined to produce an embedding vector for each word. The learned word representations are shown to be very sparse and facilitate improved results on language modeling tasks, despite using markedly fewer parameters, and without the need to apply dropout. A final experiment suggests that weight sharing contributes to sparsity, increases performance, and prevents overfitting. | This paper presents a new model for composing representations of characters into word embeddings. The starting point of their argument is to include position-specific embeddings of characters rather than just position-independent characters. By adding together position-specific vectors, reasonable results are obtained.
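For concreteness, one plausible reading of this composition is sketched below; the additive combination of a position-agnostic table and a position-aware (character, position) table is my guess at the mechanism, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
chars = "abcdefghijklmnopqrstuvwxyz"
char_emb = {c: rng.normal(size=d) for c in chars}                              # position-agnostic
pos_char_emb = {(c, i): rng.normal(size=d) for c in chars for i in range(20)}  # position-aware

def word_vector(word):
    # Sum per-character vectors, each mixing the position-agnostic embedding of the
    # character with an embedding specific to (character, position).
    return sum(char_emb[c] + pos_char_emb[(c, i)] for i, c in enumerate(word))

v = word_vector("cat")
```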
This is an interesting result, but I have a few recommendations to improve the paper.
1) It is a bit hard to assess since it is not evaluated on standard datasets. There are a number of standard datasets for open vocabulary language modeling. E.g., the MWC corpus (http://k-kawakami.com/research/mwc), or even the Penn Treebank (although it is conventionally modeled in closed vocabulary form).
2) There are many existing models for composing characters into words. In addition to those cited in the paper, see the citations listed below. Comparison with those is crucial in a paper like this.
3) Since the predictions are done at the word type level, it is unclear how the vocabulary set of the corpus is determined, and what is done with OOV word types at test time (while it is possible to condition on them using the technique in the paper, it is not possible to use this technique for generation).
4) The analysis is interesting, but a more intuitive explanation would be to show nearest neighbor plots.
Some missing citations:
Composing characters into words:
dos Santos and Zadrozny. (2014 ICML) http://proceedings.mlr.press/v32/santos14.pdf
Ling et al. (2015 EMNLP) Finding Function in Form. https://arxiv.org/abs/1508.02096
Additionally, using explicit positional features in modeling language has been used:
Vaswani et al. (2017) Attention is all you need https://arxiv.org/abs/1706.03762
and a variety of other sources. |
iclr_2018_Syjha0gAZ | We study the problem of multiset prediction. The goal of multiset prediction is to train a predictor that maps an input to a multiset consisting of multiple items. Unlike existing problems in supervised learning, such as classification, ranking and sequence generation, there is no known order among items in a target multiset, and each item in the multiset may appear more than once, making this problem extremely challenging. In this paper, we propose a novel multiset loss function by viewing this problem from the perspective of sequential decision making. The proposed multiset loss function is empirically evaluated on two families of datasets, one synthetic and the other real, with varying levels of difficulty, against various baseline loss functions including reinforcement learning, sequence, and aggregated distribution matching loss functions. The experiments reveal the effectiveness of the proposed loss function over the others. | Summary:
The paper considers the prediction problem where labels are given as multisets. The authors give a definition of a loss function for multisets and show experimental results. The results show that the proposed methods optimizing the loss function perform better than other alternatives.
Comments:
The problem of predicting multisets looks challenging and interesting. The experimental results look nice. On the other hand, I have several concerns about writing and technical discussions.
First of all, the goal of the problem is not exactly stated. After I read the experimental section, I realized that the goal is to optimize the exact match score (EM) or F1 measure w.r.t. the ground truth multisets. This goal should be explicitly stated in the paper. Now then, the approach of the paper is to design surrogate loss functions to optimize these criteria.
The technical discussions for defining the proposed loss function seem not reliable for the reasons below. Therefore, I do not understand the rationale behind the definition of the proposed loss function:
- An exact definition of the term multiset is not given. If I understand it correctly, a multiset is a “set” of instances allowing duplicated ones.
- There is no definition of Prec or Rec (which look like Precision and Recall) in Remark 1. The definitions appear in the Appendix, and they might not be well-defined. For example, let y, Y be multisets, y=[a, a, a] and Y = [a, b]. Then, by definition, Prec(y,Y)=3/3 =1. Is this what you meant? (Maybe the ill-definedness comes from the lack of a definition of inclusion in a multiset; see the sketch after this list for the multiset-intersection convention I would expect.)
- I cannot follow the proof of Remark 1 since it does not seem to take account of the randomness by the distribution \pi^*.
- I do not understand the definition of the oracle policy exactly. It seems to me that, the oracle policy knows the correct label (multi-set) \calY for each instance x and use it to construct \calY_t. But, this implicit assumption is not explicitly mentioned.
- In (1), (2) and Definition 3, what defines \calY_t? If \calY_t is determined by some “optimal” oracle, you cannot define the loss function in Def. 3 since it is not known a priori. Or, if the learner determines \calY_t, I don’t understand why the oracle policy is optimal since it depends on the learner’s choices.
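To make the Prec/Rec concern in my second point concrete, the convention I would expect uses the multiset intersection, e.g.:

```python
from collections import Counter

def multiset_prec_rec(pred, target):
    # Multiset intersection counts each item min(#pred, #target) times, so
    # duplicates cannot be credited more often than they appear in the target.
    overlap = sum((Counter(pred) & Counter(target)).values())
    return overlap / len(pred), overlap / len(target)

# The example above: y = [a, a, a] predicted against Y = [a, b]
print(multiset_prec_rec(["a", "a", "a"], ["a", "b"]))  # (0.333..., 0.5), not precision 1
```

Under the Appendix definition as I read it, the same example gives Prec = 3/3 = 1, which is why I think inclusion in a multiset needs to be defined explicitly.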
Also, I expect an investigation of theoretical properties of the proposed loss function, e.g., its relationship to EM, F1, or other loss functions. Without understanding the theoretical properties and the rationale, I cannot judge the goodness of the experimental results (which look good, though). In other words, I can only judge the paper from a quantitative view, not from a qualitative perspective.
As a summary, I think the technical contribution of the paper is marginal because of the lack of reliable mathematical discussion or investigation. |
iclr_2018_HJzgZ3JCW | EFFICIENT SPARSE-WINOGRAD CONVOLUTIONAL NEURAL NETWORKS
Convolutional Neural Networks (CNNs) are computationally intensive, which limits their application on mobile devices. Their energy is dominated by the number of multiplies needed to perform the convolutions. Winograd's minimal filtering algorithm (Lavin, 2015) and network pruning (Han et al., 2015) can reduce the operation count, but these two methods cannot be directly combined -applying the Winograd transform fills in the sparsity in both the weights and the activations. We propose two modifications to Winograd-based CNNs to enable these methods to exploit sparsity. First, we move the ReLU operation into the Winograd domain to increase the sparsity of the transformed activations. Second, we prune the weights in the Winograd domain to exploit static weight sparsity. For models on CIFAR-10, CIFAR-100 and ImageNet datasets, our method reduces the number of multiplications by 10.4×, 6.8× and 10.8× respectively with loss of accuracy less than 0.1%, outperforming previous baselines by 2.0×-3.0×. We also show that moving ReLU to the Winograd domain allows more aggressive pruning. | Summary:
The paper presents a modification of the Winograd convolution algorithm that enables a reduction of multiplications in a forward pass of 10.8x almost without loss of accuracy.
This modification combines the reduction of multiplications achieved by the Winograd convolution algorithm with weight pruning in the following way:
- weights are pruned after the Winograd transformation, to prevent the transformation from filling in zeros, thus preserving weight sparsity
- the ReLU activation function associated with the previous layer is applied to the Winograd transform of the input activations, not directly to the spatial-domain activations, also yielding sparse activations
This way sparse multiplication can be performed. Because this yields a network which is not mathematically equivalent to a vanilla or Winograd CNN, the method goes through three stages: dense training, pruning and retraining. The authors highlight that a dimension increase in weights and ReLU activations provides a more powerful representation and that stable dynamic activation densities over layer depths benefit the representational power of ReLU layers.
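For readers unfamiliar with the pipeline, here is a minimal numpy sketch of a single F(2x2, 3x3) Winograd tile with the two proposed modifications: the ReLU is applied to the transformed activations, and the weights are pruned directly in the Winograd domain. The transform matrices are the standard ones from Lavin's algorithm; the pruning threshold is arbitrary and only for illustration.

```python
import numpy as np

# Standard F(2x2, 3x3) Winograd transform matrices (Lavin, 2015)
B_T = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], dtype=float)
G   = np.array([[1, 0, 0], [0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0, 0, 1]], dtype=float)
A_T = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], dtype=float)

g = np.random.randn(3, 3)            # a 3x3 spatial filter, only used to initialise U here
d = np.random.randn(4, 4)            # a 4x4 input tile

U = G @ g @ G.T                      # weights moved to the Winograd domain ...
U *= (np.abs(U) > 0.5)               # ... and pruned there, so the zeros are preserved

V = np.maximum(B_T @ d @ B_T.T, 0)   # ReLU applied to the *transformed* activations

Y = A_T @ (U * V) @ A_T.T            # sparse element-wise product, then inverse transform
```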
Review:
The paper shows good results using the proposed method and the description is easy to follow. I particularly like Figure 1.
I only have a couple of questions/comments:
1) I’m not familiar with the term m-specific (“Matrices B, G and A are m-specific.”) and didn’t find anything that seemed related in a very quick google search. Maybe it would make sense to add at least an informal description.
2) Although small filters are the norm, you could add a note, describing up to what filter sizes this method is applicable. Or is it almost exactly the same as for general Winograd CNNs?
3) I think it would make sense to mention weight and activation quantization in the intro as well (even if you leave a combination with quantization for future work), e.g. Rastegari et al. (2016), Courbariaux et al. (2015) and Lin et al. (2015)
4) Figure 5 caption has a typo: “acrruacy”
References:
Courbariaux, Matthieu, Yoshua Bengio, and Jean-Pierre David. "Binaryconnect: Training deep neural networks with binary weights during propagations." In Advances in Neural Information Processing Systems, pp. 3123-3131. 2015.
Lin, Zhouhan, Matthieu Courbariaux, Roland Memisevic, and Yoshua Bengio. "Neural networks with few multiplications." arXiv preprint arXiv:1510.03009 (2015).
Rastegari, Mohammad, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. "Xnor-net: Imagenet classification using binary convolutional neural networks." In European Conference on Computer Vision, pp. 525-542. Springer International Publishing, 2016. |
iclr_2018_HJYQLb-RW | Generative Adversarial Networks (GANs) have been proposed as an approach to learning generative models. While GANs have demonstrated promising performance on multiple vision tasks, their learning dynamics are not yet well understood, neither in theory nor in practice. In particular, the work in this domain has been focused so far only on understanding the properties of the stationary solutions that this dynamics might converge to, and of the behavior of that dynamics in this solutions' immediate neighborhood. To address this issue, in this work we take a first step towards a principled study of the GAN dynamics itself. To this end, we propose a model that, on one hand, exhibits several of the common problematic convergence behaviors (e.g., vanishing gradient, mode collapse, diverging or oscillatory behavior), but on the other hand, is sufficiently simple to enable rigorous convergence analysis. This methodology enables us to exhibit an interesting phenomena: a GAN with an optimal discriminator provably converges, while guiding the GAN training using only a first order approximation of the discriminator leads to unstable GAN dynamics and mode collapse. This suggests that such usage of the first order approximation of the discriminator, which is a de-facto standard in all the existing GAN dynamics, might be one of the factors that makes GAN training so challenging in practice. Additionally, our convergence result constitutes the first rigorous analysis of a dynamics of a concrete parametric GAN. | The authors proposes to study the impact of GANS in two different settings:
1. at each iteration, train the discriminator to convergence and do one (or a few) gradient steps for updating the generator
2. just do a few gradient steps for the discriminator and the generator
This is done in a very toy example: a one dimensional equally weighted mixture of two Gaussian distributions.
Clarity: the text is reasonably well written, but with some redundancy (e.g. see section 2.1), and quite a few grammatical and mathematical typos here and there (e.g. in Lemma 4.2, $f$ should be $g$; on p7, Rect(0) is actually the empty set, etc.).
Gaining insights into the mechanics of training GANs is indeed important. The authors' main finding is that, in this very particular setting, it seems that training the discriminator to convergence leads to convergence. Indeed, in real settings, people have tried such strategies, for WGANs for example. For standard GANs, if one adds a little bit of noise to the labels, for example, people have also reported good results for such a strategy (although, without label smoothing, this will indeed lead to problems).
Although I have not checked all the mathematical fine details, the approach/proof looks sound (although it is not at all clear to me why the choice of gradient step-sizes does not play a more important role in the stated results). My biggest complaint is that the situation analyzed is so simple (although the convergence proof is far from trivial) that I am not at all convinced that this sheds much light on more realistic examples. Since this is the main meat of the paper (i.e. no methodological innovations), I feel that this is too little an innovation to deserve publication at ICLR 2018.
iclr_2018_rJUYGxbCW | PIXELDEFEND: LEVERAGING GENERATIVE MODELS TO UNDERSTAND AND DEFEND AGAINST ADVERSARIAL EXAMPLES
Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of image classifiers? In this paper, we show empirically that adversarial examples mainly lie in the low probability regions of the training distribution, regardless of attack types and targeted models. Using statistical hypothesis testing, we find that modern neural density models are surprisingly good at detecting imperceptible image perturbations. Based on this discovery, we devised PixelDefend, a new approach that purifies a maliciously perturbed image by moving it back towards the distribution seen in the training data. The purified image is then run through an unmodified classifier, making our method agnostic to both the classifier and the attacking method. As a result, PixelDefend can be used to protect already deployed models and be combined with other model-specific defenses. Experiments show that our method greatly improves resilience across a wide variety of state-of-the-art attacking methods, increasing accuracy on the strongest attack from 63% to 84% for Fashion MNIST and from 32% to 70% for CIFAR-10. | The authors propose to use a generative model of images to detect and defend against adverarial examples. White-box attacks against standard models for image recognition (Resnet and VGG) are considered, and a generative model (a PixelCNN) is trained on the same data as the classifiers. The authors first show that adversarial examples created by the white-box attacks correspond to low likelihood region (according to the pixelCNN), which first gives a classification rule for detecting adversarial examples.
Then, to turn the generative model into a defensive algorithm, the authors propose to preprocess test images by approximately maximizing their likelihood under constraints similar to the attacker's, to "project" adversarial examples back to high-density regions (as estimated by the generative model). As a heuristic method, the authors propose to greedily maximize the likelihood of the incoming images pixel-by-pixel, which is possible because of the specific form of the PixelCNN likelihood in the context of l-infty attacks. An "adaptive" version of the algorithm, in which the preprocessing is used only when the likelihood of an example is below a certain threshold, is also proposed.
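My understanding of the greedy purification, sketched for a single-channel 8-bit image (pixelcnn_logits is a placeholder for the trained PixelCNN's per-pixel conditional logits, and the exact sweep order and channel handling in the paper may differ):

```python
import numpy as np

def purify(x_adv, pixelcnn_logits, eps):
    # x_adv: (H, W) array of integer pixel values in [0, 255]; eps: l_inf budget (e.g. 16)
    x = np.array(x_adv, dtype=np.int64)
    H, W = x.shape
    for i in range(H):                                   # raster-scan order, as in PixelCNN
        for j in range(W):
            logits = pixelcnn_logits(x)[i, j]            # conditioned on already-updated pixels
            lo = max(0, int(x_adv[i, j]) - eps)
            hi = min(255, int(x_adv[i, j]) + eps)
            # greedily pick the most likely pixel value inside the l_inf ball around the input
            x[i, j] = lo + int(np.argmax(logits[lo:hi + 1]))
    return x
```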
Experiments are carried out on Fashion MNIST and CIFAR-10. At a high level, the message is that projecting the image into a high density region is sufficient to correct a significant portion of the mistakes made on adversarial examples. The main result is that this approach based on generative models seems to work even against the strongest attacks.
Overall, the idea proposed in the paper, using a generative model to detect and filter out spurious patterns that can appear in adversarial examples, is rather intuitive. The experimental result that adversarial examples can somehow be corrected by a generative model is also interesting. The design choice of PixelCNN, which allows for a greedy optimization seems reasonable in that setting.
Whereas the paper is an interesting step forward, the paper still doesn't provide definitive arguments in favor of using such approaches in practice. There is a significant loss in accuracy on clean examples (2% on CIFAR-10 for a resnet), and more generally against weaker opponents such as the fast gradient sign. Thus, in reality, the experiments show that the pipeline generative model + classifier is robust against the strongest white box methods for this classifier, but on the other hand these methods do not transfer well to new models. This somewhat weakens the result, since robustness against these methods that do not transfer well is achieved by changing the model. |
iclr_2018_HJr4QJ26W | GANs provide a framework for training generative models which mimic a data distribution. However, in many cases we wish to train a generative model to optimize some auxiliary objective function within the data it generates, such as making more aesthetically pleasing images. In some cases, these objective functions are difficult to evaluate, e.g. they may require human interaction. Here, we develop a system for efficiently training a GAN to increase a generic rate of positive user interactions, for example aesthetic ratings. To do this, we build a model of human behavior in the targeted domain from a relatively small set of interactions, and then use this behavioral model as an auxiliary loss function to improve the generative model. As a proof of concept, we demonstrate that this system is successful at improving positive interaction rates simulated from a variety of objectives, and characterize some factors that affect its performance. | This paper proposes a technique to improve the output of GANs by maximising a separate score that aims to mimic human interactions.
Summary:
The goal of the technique, to involve human interaction in generative processes, is interesting. The proposed addition of a new loss function for this purpose is an obvious choice, not particularly involved. It is unclear to me whether the paper has value in its current form, that is, without experimental results for the task it aims to achieve. It feels too premature for publication.
More comments:
The main problem with this paper is that the proposed system is designed for a human interaction setting, but no such experiment is done or presented. The title is misleading; this may be the direction where the authors of the submission want to go, but the title “.. with human interactions” is clearly misleading. “Model of human interactions” may be more appropriate.
The technical idea of this paper is to introduce a separate score in the GAN training process. This modifies the generator objective. Besides “fooling” the discriminator, the generator's objective is to maximise user interaction with the generated batch of images. This is an interesting objective, but since no interactive experiments are presented in this paper, the rest of the experiments hinge on the definition of “PIR” (positive interaction rate) using a model of human interaction. Instead of real interactions, the submission proposes to maximise the activations of hidden units in a separate neural network. By choosing the hierarchy level and type of filter, the results of the GAN differ.
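For what it is worth, my reading of the modified generator objective is roughly the following; the non-saturating GAN term, the weighting lambda, and all names are assumptions on my part:

```python
import torch
import torch.nn.functional as F

def generator_loss(d_scores_fake, pir_scores, lam=1.0):
    # d_scores_fake: discriminator logits on generated images
    # pir_scores: output of the (fixed) human-interaction model on the same images
    gan_term = F.softplus(-d_scores_fake).mean()   # usual non-saturating generator loss
    pir_term = -pir_scores.mean()                  # reward batches with high predicted PIR
    return gan_term + lam * pir_term
```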
I could not appreciate the results in Figure 2 since I was missing the definition of PIR, how it is drawn in the training setup. Further I found it not surprising that the PIR changes when a highly parameterised model is trained for this task. The PIR value comes from a separate network not directly accessible during training time, nonetheless I would have been surprised to not see an increase. Please comment in the rebuttal and I would appreciate if the details of the synthetic PIR values on the training set could be explained.
- Technically it was a bit unclear to me how the objective is defined. There is a PIR per level and filter (as defined in C4) but in the setup the L_{PIR} was mentioned to be a scalar function, how are the values then summarized? There is a PIR per level and feature defined in C4.
- What does the PIR with the model in Section 3 stand for? Shouldn’t it be something like “uniqueness”, that is, wouldn't how unique an image is within a batch of images be a better indicator? Besides, it was unclear what possibly interesting PIR examples are intended to be.
E.g., the statement at the end of 2.1 is unclear at that point in the document. How is the PIR drawn exactly? What does it represent? Is there a PIR per image? It becomes clear later, but I suggest to revisit this description in a new version.
- Also I suggest to move more details from Section C4 into the main text in Section 3. The high level description in Section 3. |
iclr_2018_S1GDXzb0b | Imitation learning from demonstrations usually relies on learning a policy from trajectories of optimal states and actions. However, in real life expert demonstrations, often the action information is missing and only state trajectories are available. We present a model-based imitation learning method that can learn environment-specific optimal actions only from expert state trajectories. Our proposed method starts with a model-free reinforcement learning algorithm with a heuristic reward signal to sample environment dynamics, which is then used to train the state-transition probability. Subsequently, we learn the optimal actions from expert state trajectories by supervised learning, while back-propagating the error gradients through the modeled environment dynamics. Experimental evaluations show that our proposed method successfully achieves performance similar to (state, action) trajectory-based traditional imitation learning methods even in the absence of action information, with much fewer iterations compared to conventional model-free reinforcement learning methods. We also demonstrate that our method can learn to act from only video demonstrations of expert agent for simple games and can learn to achieve desired performance in less number of iterations. | Model-Based Imitation Learning from State Trajectories
SIGNIFICANCE AND ORIGINALITY:
The authors propose a model-based method for accelerating the learning of a policy
by observing only the state transitions of an expert trace.
This is an important problem in many fields such as robotics where
finding a feasible policy is hard using pure RL methods.
The authors propose a unique two step method to find a high-quality model-based policy.
First: To create the environment model for the model-based learner,
they need a source of state transitions with actions ( St, At, St+1 ).
To generate these samples, they first employ a model-free algorithm.
The model-free algorithm is trained to try to duplicate the expert state at each trajectory.
In continuous domains, the state is not unique … so they build a soft next state predictor
that gives a probability over next states favoring those demonstrated by the expert.
Since the transitions were generated by the agent acting in the environment,
these transitions have both states and actions ( St, At, St+1 ).
These are added to a pool.
The authors argue that the policy found by this model-free learner is
not highly accurate or guaranteed to converge, but presumably is good at
generating transitions relevant to the expert’s policy.
(Perhaps slowly reducing the \sigma in the reward would improve accuracy? See the sketch below for my reading of this reward.)
I guess if expert trace data is sparse, the model-free learner can generate a lot
of transitions which enable it to create accurate dynamics models which in turn
allow it to extract more information out of sparse expert traces?
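My reading of the heuristic reward that the \sigma comment above refers to is a soft match
between the state the agent reaches and the expert's demonstrated states, something like the
sketch below; this is a guess at its shape, not the paper's exact formula.

```python
import numpy as np

def heuristic_reward(next_state, expert_states, sigma=1.0):
    # Soft "did I land near a demonstrated state?" signal; shrinking sigma
    # over training would make the match increasingly strict.
    d2 = np.sum((np.asarray(expert_states) - np.asarray(next_state)) ** 2, axis=-1)
    return float(np.max(np.exp(-d2 / (2.0 * sigma ** 2))))
```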
Second: They then train a model based agent using the collected transitions ( St, At, St+1 ).
They formulate the problem as a maximum likelihood problem with two terms:
an action dynamics model which is learned from local exploration using the learner’s own actions and outcomes
and expert policy model in terms of the actions learned above
that maximizes the probability of the observed expert’s trajectory.
This is a nice clean formulation that integrates the two processes.
I thought the comparison to an encoder - decoder network was interesting.
The authors do a good job of positioning the work in the context of recent work in IML.
It looks like the authors extract position information from flappy bird frames,
so the algorithm is only using images for obstacle reasoning?
QUALITY
The proposed model is described fairly completely and evaluated on
a “reaching" problem and the "flappy bird” game domain.
The evaluation framework is described in enough detail to replicate the results.
Interestingly, the assisted method starts off much higher in the “reacher” task.
Presumably this task is easy to observe the correct actions.
The flappy bird test shows off the difference between unassisted learning (DQN),
model free learning with the heuristic reward (DQN+reward prediction)
and model based learning.
Interestingly, DQN + heuristic reward approaches expert performance
while behavioral cloning never achieves expert performance level even though it has actions.
Why does the model-based method only run to 600 steps and stop before convergence?
Does it not converge to expert level?? If so, this would be useful to know.
There are minor grammatical mistakes that can be corrected.
After equation 5, the authors suggest categorical loss for discrete problems,
but cross-entropy loss might work better. Maybe this is what they meant.
CLARITY
The overall approach and algorithms are described fairly clearly. Some minor typos here and there.
Algorithm 1 does not make clear the relationship between the model learned in step 2 and the algorithms in steps 4 to 6.
I would reverse the order of a few things to align with a right to left ordering principle.
In Figure 1, put the model free transition generator on the left and the model-based sample consumer on the right.
In Figure 3, put the “reacher” test on the left and the “flappy bird” on the right.
PROS AND CONS
Interesting idea for learning quickly from small numbers of samples of expert state trajectories.
Not clear that method converges on all problems.
Not clear that the method is able to extract the state from video — authors had to extract position manually
(this point is more about their deep architecture than the imitation framework they describe -
though perhaps a key argument for the authors is the ability to work with small numbers of
expert samples and still be able to train deep methods ) ??
POST REVIEW SUBMISSION:
The authors make a number of clarifying comments to improve the text and add the reference suggested by another reviewer. |
iclr_2018_Sk0pHeZAW | Deep learning is becoming more widespread in its application due to its power in solving complex classification problems. However, deep learning models often require large memory and energy consumption, which may prevent them from being deployed effectively on embedded platforms, limiting their applications. This work addresses the problem by proposing methods Weight Reduction Quantisation for compressing the memory footprint of the models, including reducing the number of weights and the number of bits to store each weight. Beside, applying with sparsity-inducing regularization, our work focuses on speeding up stochastic variance reduced gradients (SVRG) optimization on non-convex problem. Our method that minibatch SVRG with 1 regularization on non-convex problem has faster and smoother convergence rates than SGD by using adaptive learning rates. Experimental evaluation of our approach uses MNIST and CIFAR-10 datasets on LeNet-300-100 and LeNet-5 models, showing our approach can reduce the memory requirements both in the convolutional and fully connected layers by up to 60× without affecting their test accuracy. | The authors present an l-1 regularized SVRG based training algorithm that is able to force many weights of the network to be 0, hence leading to good compression of the model. The motivation for l-1 regularization is clear as it promotes sparse models, which lead to lower storage overheads during inference. The use of SVRG is motivated by the fact that it can, in some cases, provide faster convergence than SGD.
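For context, the following is roughly what an l1-regularized minibatch SVRG loop looks like; this is a generic prox-SVRG sketch under my own assumptions, not the authors' exact algorithm, which may handle the l1 term and the quantization step differently.

```python
import numpy as np

def soft_threshold(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_svrg_l1(w, grad_fn, data, lr=0.01, lam=1e-4, outer=10, inner=100, batch=32):
    # grad_fn(w, x) returns the gradient of the (unregularized) loss on example x
    n = len(data)
    for _ in range(outer):
        w_snap = w.copy()
        mu = np.mean([grad_fn(w_snap, x) for x in data], axis=0)       # full snapshot gradient
        for _ in range(inner):
            idx = np.random.choice(n, size=batch, replace=False)
            g = np.mean([grad_fn(w, data[i]) - grad_fn(w_snap, data[i]) for i in idx], axis=0) + mu
            w = soft_threshold(w - lr * g, lr * lam)                   # variance-reduced step + l1 prox
    return w
```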
Unfortunately, the authors do not compare with some key literature. For example, there have been several techniques that use sparsity and group sparsity [1,2,3] that lead to the same conclusion as the paper here: models can be significantly sparsified while not affecting the test accuracy of the trained model.
Then, the novelty of the technique presented is also unclear, as essentially the algorithm is simply SVRG with l1 regularization and then some quantization. The experimental evaluation does not strongly support the thesis that the presented algorithm is much better than SGD with l1 regularization. In the presented experiments, the gap between the performance of SGD and SVRG is small (especially in terms of test error), and overall the savings in terms of the number of weights is similar to Deep compression. Hence, it is unclear how the use of SVRG over SGD improves things. Eg in figure 2 the differences in top-1 error of SGD and SVRG, for the same number of weights is very similar (it’s unclear also why Fig 2a uses top-1 and Fig 2b uses top-5 error). I also want to note that all experiments were run on LeNet, and not on state of the art models (eg ResNets).
Finally, the paper is riddled with typos. I attach below some of the ones I found in pages 1 and 2
Overall, although the topic is very interesting, the contribution of this paper is limited, and it is unclear how it compares with other similar techniques that use group sparsity regularization, and whether SVRG offers any significant advantages over l1-SGD.
typos:
“ This work addresses the problem by proposing methods Weight Reduction Quantisation”
-> This work addresses the problem by proposing a Weight Reduction Quantisation method
“Beside, applying with sparsity-inducing regularization”
-> Beside, applying sparsity-inducing regularization
“Our method that minibatch SVRG with l-1 regularization on non-convex problem”
-> Our minibatch SVRG with l-1 regularization method on non-convex problem
“As well as providing,l1 regularization is a powerful compression techniques to penalize some weights to be zero”
-> “l1 regularization is a powerful compression technique that forces some weights to be zero”
The problem 1 can
-> The problem in Eq.(1) can
“it inefficiently encourages weight”
-> “it inefficiently encourages weights”
————
[1] Learning Structured Sparsity in Deep Neural Networks
http://papers.nips.cc/paper/6504-learning-structured-sparsity-in-deep-neural-networks.pdf
[2] Fast ConvNets Using Group-wise Brain Damage
https://arxiv.org/pdf/1506.02515.pdf
[3] Sparse Convolutional Neural Networks
https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Liu_Sparse_Convolutional_Neural_2015_CVPR_paper.pdf |
iclr_2018_B12Js_yRb | LEARNING TO COUNT OBJECTS IN NATURAL IMAGES FOR VISUAL QUESTION ANSWERING
Visual Question Answering (VQA) models have struggled with counting objects in natural images so far. We identify a fundamental problem due to soft attention in these models as a cause. To circumvent this problem, we propose a neural network component that allows robust counting from object proposals. Experiments on a toy task show the effectiveness of this component and we obtain state-of-theart accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced pair metric, the component gives a substantial improvement in counting over a strong baseline by 6.6%. | Summary:
- This paper proposes a hand-designed network architecture on a graph of object proposals to perform soft non-maximum suppression to get object count.
Contribution:
- This paper proposes a new object counting module which operates on a graph of object proposals.
Clarity:
- The paper is well written and clarity is good. Figure 2 & 3 helps the readers understand the core algorithm.
Pros:
- De-duplication modules of inter and intra object edges are interesting.
- The proposed method improves the baseline by 5% on counting questions.
Cons:
- The proposed model is pretty hand-crafted. I would recommend that the authors use something more general, like graph convolutional neural networks (Kipf & Welling, 2017) or gated graph neural networks (Li et al., 2016); see the sketch after this list for the kind of propagation rule I have in mind.
- One major bottleneck of the model is that the proposals are not jointly finetuned. So if the proposals are missing a single object, it cannot really be counted. In short, if the proposals don’t have 100% recall, then the model is trained with a biased loss function which asks it to count all the objects even if some are already missing from the proposals. The paper didn’t study what the recall of the proposals is and how sensitive the threshold is.
- The paper doesn’t study a simple baseline that just does NMS on the proposal domain.
- The paper doesn’t compare experiment numbers with (Chattopadhyay et al., 2017).
- The proposed algorithm doesn’t handle symmetry breaking when two edges are equally confident (in 4.2.2 it basically scales down both edges). This is similar to a density map approach and the problem is that the model doesn’t develop a notion of instance.
- Compared to (Zhou et al., 2017), the proposed model does not improve much on the counting questions.
- Since the authors have mentioned in the related work, it would also be more convincing if they show experimental results on CL
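As mentioned in the first point above, a propagation rule such as the one of Kipf & Welling would be a more general alternative to the hand-designed edge de-duplication. A minimal sketch over the proposal graph (A is a proposal adjacency/similarity matrix, H the proposal features; all names are illustrative):

```python
import numpy as np

def gcn_layer(A, H, W):
    # H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W), the standard GCN propagation rule
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)
```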
Conclusion:
- I feel that the motivation is good, but the proposed model is too hand-crafted. Also, key experiments are missing: 1) NMS baseline 2) Comparison with VQA counting work (Chattopadhyay et al., 2017). Therefore I recommend reject.
References:
- Kipf, T.N., Welling, M., Semi-Supervised Classification with Graph Convolutional Networks. ICLR 2017.
- Li, Y., Tarlow, D., Brockschmidt, M., Zemel, R. Gated Graph Sequence Neural Networks. ICLR 2016.
Update:
Thank you for the rebuttal. The paper is revised and I saw the NMS baseline was added. I understood the reason not to compare with certain related work. The rebuttal is convincing and I decided to increase my rating, because adding the proposed counting module achieves a 5% increase in counting accuracy. However, I am a little worried that the proposed model may be hard to reproduce due to its complexity and therefore choose to give a 6.
iclr_2018_HJJ0w--0W | We present Tensor-Train RNN (TT-RNN), a novel family of neural sequence architectures for multivariate forecasting in environments with nonlinear dynamics. Long-term forecasting in such systems is highly challenging, since there exist long-term temporal dependencies, higher-order correlations and sensitivity to error propagation. Our proposed tensor recurrent architecture addresses these issues by learning the nonlinear dynamics directly using higher order moments and high-order state transition functions. Furthermore, we decompose the higherorder structure using the tensor-train (TT) decomposition to reduce the number of parameters while preserving the model performance. We theoretically establish the approximation properties of Tensor-Train RNNs for general sequence inputs, and such guarantees are not available for usual RNNs. We also demonstrate significant long-term prediction improvements over general RNN and LSTM architectures on a range of simulated environments with nonlinear dynamics, as well on real-world climate and traffic data. | The paper proposes Tensor-Train RNN and Tensor-Train LSTM (TT-RNN/TLSTM), a RNN/LSTM architecture whose hidden unit at time t h_t is computed from the tensor-vector product between a tensor of weights and a concatenation of hidden units from the previous L time steps. The motivation is to incorporate previous hidden states and high-order correlations among them to better predict long-term temporal dependencies for seq2seq problems. To address the issue of the number of parameters growing exponentially in the rank of the tensor, the model uses a low rank decomposition called the ‘tensor-train decomposition’ to make the number of parameters linear in the rank. Some theoretical analysis on the number of hidden units required for a given estimation error, and experimental results have been provided for synthetic and real sequential data.
First of all, the presentation of the method in section 2.1 is confusing and there seem to be various ambiguities in the notation that harm understanding of the method. The tensor-vector product in equation (6) appears problematic. The notation that I think is standard is as follows: given a tensor W \in R^{n_1 \times … \times n_P} and vectors v_p \in R^{n_p}, the tensor-vector product W \times_{p=1}^P v_p = \langle vec(W), \otimes_{p=1}^P v_p \rangle = \sum_{i_1,...,i_P} W_{i_1 \dots i_P} \prod_{p=1}^P v_{p,i_p}. So I’m guessing you want to get rid of the \otimes signs (the kronecker products) in (6) or you want to remove the summation and write W \times_{p=1}^P s_{t-1}. Also \alpha that appears in (6) is never defined. Is it another index? This is confusing because you say W is P-dimensional but have P+1 indices for it including alpha (W_{\alpha i_1 … i_p}). Moreover the dimensionality of W^{hx} x_t in (6) is R^H judging from the notation in page 2, but isn’t the tensor-vector product a scalar? Also am I correct in thinking that s_{t-1} should be [1, h_{t-1}^T, …, h_{t-L}^T], i.e. a vector of length LH+1 rather than a matrix? The notation from page 2 implies that you are using column vectors, so the definition of s_{t-1} makes it appear as an (L+1) by H matrix, which could make the reader interpret s_{t-1;i_1} in (6) as vectors instead of scalars (this is reinforced by the kronecker product between these s_{t-1;i_p}). I had to work this out from the number of parameters (HL+1)^P in section 2.2. The diagram of s_{t-1} in Figure 3 is also confusing, because it isn’t obvious that the unlabelled grey bars are copies of s_{t-1}. Also I notice that the name ‘Tensor Train RNN/LSTM’ has been used in Yang et al., 2017. You probably want to avoid using the same name since the models are different. It would be nice if you could explain in a bit more detail how they are different in the related work section.
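For concreteness, the contraction I believe equation (6) intends, with the output index \alpha written explicitly, is

[ W \times_{p=1}^{P} s_{t-1} ]_{\alpha} = \sum_{i_1, \dots, i_P} W_{\alpha i_1 \dots i_P} \prod_{p=1}^{P} s_{t-1; i_p}, \qquad \alpha = 1, \dots, H,

which would make W a (P+1)-mode tensor (one output mode of size H and P input modes of size HL+1) and the result a vector in R^H. If that is the intended reading, stating it in this form would remove most of the ambiguity.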
Assuming I have understood the method correctly, the idea of using tensor products to incorporate higher order interactions between the hidden states at different times appears sensible. From the theoretical analysis, you claim that 1) smoother f is easier to approximate, and 2) polynomial interactions are more efficient than linear ones. The first point seems fairly self-explanatory and doesn’t seem to require a proof. The second point isn’t so convincing because you have two additive terms on the right hand side of the inequality in Theorem 3.1 (btw I’m guessing you want the inequality to be the other way round): the first term is independent of p, and the second decreases exponentially with p. Your second point would only hold if this first term is reasonably small, but this doesn’t seem obvious to me.
Regarding the experiments, I’m sceptical as to whether a grid search over hyperparameters for TLSTM vs grid search over the same hyperparameters for (M)LSTM provides a fair comparison. You probably want to compare the models given the same number of parameters, since given the same state size, TLSTM will have many more parameters than (M)LSTM. A plot of x-axis: # parameters, y-axis: average RMSE at convergence would be informative. Moreover for figure 8, you probably want to control the time taken for training instead of just comparing validation loss at the same number of steps. I imagine the best performing TLSTM model will have many more parameters and hence take much longer to train than the best performing LSTM model.
Moreover, it seems as though the increased prediction accuracy from LSTM is marginal considering you have 3 more hyperparameters to tune (L,S,P - what was the value of P used for the experiments?) and that tuning them is important to prevent overfitting.
I’m also curious as to how TLSTM compares to hierarchical RNN approaches for modelling long-term dependencies. It will be interesting to compare against models like Stacked LSTM (Graves, 2013), Grid LSTM (Kalchbrenner, 2015) and HM LSTM (Chung, 2017). These models have mostly been evaluated on text, but I don’t see any reason they can’t be extended to sequential forecasting on time series data. Also regularisation techniques such as batch-norm for LSTMs (Cooijmans et al, 2016) and layer-norm (Ba et al, 2016) seem to help a lot for increasing prediction accuracy. Did you investigate these techniques to control overfitting?
Other minor comments on presentation:
For figure 6, the legends are inconsistent with the caption. Also you might want to overlay predictions on top of the ground truth for better comparison and also to save space.
Overall, I think there is vast scope for improvement in presentation and comparisons with other methods, and hence I find the paper not yet ready for publication.
iclr_2018_rk6qdGgCZ | We note that common implementations of adaptive gradient algorithms, such as Adam, limit the potential benefit of weight decay regularization, because the weights do not decay multiplicatively (as would be expected for standard weight decay) but by an additive constant factor. We propose a simple way to resolve this issue by decoupling weight decay and the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam, and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). We also demonstrate that longer optimization runs require smaller weight decay values for optimal results and introduce a normalized variant of weight decay to reduce this dependence. Finally, we propose a version of Adam with warm restarts (AdamWR) that has strong anytime performance while achieving state-ofthe-art results on CIFAR-10 and ImageNet32x32. Our source code will become available after the review process. | At the heart of the paper, there is a single idea: to decouple the weight decay from the number of steps taken by the optimization process (the paragraph at the end of page 2 is the key to the paper). This is an important and largely overlooked area of implementation and most off-the-shelf optimization algorithms, unfortunately, miss this point, too. I think that the proposed implementation should be taken seriously, especially in conjunction with the discussion that has been carried out with the work of Wilson et al., 2017 (https://arxiv.org/abs/1705.08292).
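To make the decoupling concrete: the decay is applied directly to the weights rather than being folded into the gradient that Adam subsequently rescales. A simplified sketch of the two update rules (omitting the schedule multiplier and the normalized decay variant that the paper also introduces):

```python
def adam_l2_step(w, g, m, v, t, lr, wd, b1=0.9, b2=0.999, eps=1e-8):
    g = g + wd * w                          # "L2 regularization": decay enters the gradient
    m = b1 * m + (1 - b1) * g               # ...and is therefore rescaled by the adaptive terms
    v = b2 * v + (1 - b2) * g * g
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    w = w - lr * m_hat / (v_hat ** 0.5 + eps)
    return w, m, v

def adamw_step(w, g, m, v, t, lr, wd, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)
    w = w - lr * m_hat / (v_hat ** 0.5 + eps) - lr * wd * w   # decoupled: weights decay multiplicatively
    return w, m, v
```

Written this way, it is obvious why, in the coupled version, the effective decay depends on the adaptive terms, which is exactly the entanglement the paper removes.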
The introduction does a decent job explaining why it is necessary to pay attention to the norm of the weights as the training progresses within its scope. However, I would like to add a couple more points to the discussion:
- "Optimal weight decay is a function (among other things) of the total number of epochs / batch passes." in principle, it is a function of weight updates. Clearly, it depends on the way the decay process is scheduled. However, there is a bad habit in DL where time is scaled by the number of epochs rather than the number of weight updates which sometimes lead to misleading plots (for instance, when comparing two algorithms with different batch sizes).
- Another ICLR 2018 submission has an interesting take on the norm of the weights and the algorithm (https://openreview.net/forum?id=HkmaTz-0W&noteId=HkmaTz-0W). Figure 3 shows the histograms of SGD/ADAM with and without WD (the *un-fixed* version), and it clearly shows how the landscape appears misleadingly different when one doesn't pay attention to the weight distribution in visualizations.
- In figure 2, it appears that the training process has three phases: an initial decay, a steady progress, and a final decay that is more pronounced in AdamW. This final decay also correlates with the better test error of the proposed method. This third part also seems to correspond to the difference between Adam and AdamW through the way they branch out after following similar curves. One wonders what causes this branching and whether the key to the desired effects is to be found at the bottom of the landscape.
- The paper concludes with "Advani & Saxe (2017) analytically showed that in the limited data regime of deep networks the presence of eigenvalues that are zero forms a frozen subspace in which no learning occurs and thus smaller (e.g., zero) initial weight norms should be used to achieve best generalization results." Related to this there is another ICLR 2018 submission (https://openreview.net/forum?id=rJrTwxbCb), figure 1 shows that the eigenvalues of the Hessian of the loss have zero forms at the bottom of the landscape, not at the beginning. Back to the previous point, maybe that discussion should focus on the second and third phases of the training, not the beginning.
- Finally, it would also be interesting to discuss the relation of the behavior of the weights at the last parts of the training and its connection to pruning.
I'm aware that one can easily go beyond the scope of the paper by adding more material. Therefore, it is not completely reasonable to expect all such possible discussions to take place at once. The paper as it stands is reasonably self-contained and to the point. Just a minor last point that is irrelevant to the content of the work: The slash punctuation mark that is used to indicate 'or' should be used without spaces as in 'epochs/batch'.
Edit: Thanks very much for the updates and refinements. I stand by my original score and would like to indicate my support for this style of empirical work in scientific conferences. |
iclr_2018_rkfOvGbCW | MEMORY-BASED PARAMETER ADAPTATION
Deep neural networks have excelled on a wide range of problems, from vision to language and game playing. Neural networks very gradually incorporate information into weights as they process data, requiring very low learning rates. If the training distribution shifts, the network is slow to adapt, and when it does adapt, it typically performs badly on the training distribution before the shift. Our method, Memory-based Parameter Adaptation, stores examples in memory and then uses a context-based lookup to directly modify the weights of a neural network. Much higher learning rates can be used for this local adaptation, reneging the need for many iterations over similar data before good predictions can be made. As our method is memory-based, it alleviates several shortcomings of neural networks, such as catastrophic forgetting, fast, stable acquisition of new knowledge, learning with an imbalanced class labels, and fast learning during evaluation. We demonstrate this on a range of supervised tasks: large-scale image classification and language modelling. | This article introduces a new method to improve neural network performances on tasks ranging from continual learning (non-stationary target distribution, appearance of new classes, adaptation to new tasks, etc) to better handling of class imbalance, via a hybrid architecture between nearest neighbours and neural net.
After an introduction summarizing their goal, the authors introduce their Memory-based parameter adaptation: this hybrid architecture enriches classical deep architectures with a non-parametric “episodic” memory, which is filled at training time with (possibly learned) encodings of training examples and then polled at inference time to refine the neural network parameters with a few steps of gradient in a direction determined by the closest neighbours in memory to the input being processed. The authors justify this inference-time SGD update with three different interpretations: one linked to Maximum A Posteriori optimization, another to Elastic Weight Consolidation (the current state of the art in continual learning), and one generalising attention mechanisms (although to be honest that latter was more elusive to this reviewer). The mandatory literature review on the abundant recent uses of memory in neural networks is then followed by experiments on continual learning tasks involving permuted MNIST tasks, ImageNET incremental inclusion of classes, ImageNet unbalanced, and two language modeling tasks.
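A rough sketch of the inference-time adaptation as I understand it; the kernel weighting, the number of steps, and the omission of the regularisation term discussed below are simplifications and assumptions on my part:

```python
import torch

def adapt_on_neighbours(net, loss_fn, mem_keys, mem_vals, query_key, k=32, steps=5, lr=0.1):
    # net maps stored keys (embeddings) to predictions; mem_vals are the stored targets.
    # loss_fn should return per-example losses (e.g. CrossEntropyLoss(reduction='none')).
    dists = torch.cdist(query_key[None], mem_keys)[0]
    idx = dists.topk(k, largest=False).indices
    kernel = torch.softmax(-dists[idx], dim=0)            # closer neighbours weigh more
    for _ in range(steps):                                 # a few large, local gradient steps
        preds = net(mem_keys[idx])
        loss = (kernel * loss_fn(preds, mem_vals[idx])).sum()
        grads = torch.autograd.grad(loss, list(net.parameters()))
        with torch.no_grad():
            for p, g in zip(net.parameters(), grads):
                p -= lr * g
    return net
```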
This is an overall very interesting idea, which has the merit of being rather simple in its execution and can be combined with many other methods: it is fully compatible with any optimiser (e.g. ADAM) and can be tacked on top of EWC (which the authors do). The justification is clear, the examples reasonably thorough. It is a very solid paper, which this reviewer believes to be of real interest to the ICLR community.
The following important clarifications from the authors could make it even better:
* Algorithm 1 in its current form seems to imply an infinite memory, which the experiments make clear is not the case. Therefore: how does the algorithm decide what entries to discard when the memory fills up?
* In most non-trivial settings, the parameter $gamma$ of the encoding is learned, and therefore older entries in the memory lose any ability to be compared to more recent encodings. How do the authors handle this obsolescence of the memory, other than the trivial scheme of relying on KNN to only match recent entries?
* Because gamma needs to be “recent”, this means “theta” is also recent: could the authors give a good intuition on how the two sets of parameters can evolve at different enough timescales to really make the episodic memory relevant? Is it anything else than relying on the fact that the lower levels of a neural net converge before the upper levels?
* Table 1: could the authors explain why the pre-trained Parametric (and then Mixture) models have the best AUC in the low-data regime, whereas MbPA was designed very much to be superior in such regimes?
* Paragraph below equation (5), page 3: why not include the regularisation term, whereas the authors just went to great pains to explain it? Rationale? Not including it is also akin to using an improper non-informative prior on theta^x independent of theta, which is quite a strong choice to be made “by default”.
* The extra complexity of choosing the learning rate alpha_M and the number of MbPA steps is worrying this reviewer somewhat. In practice, in Section 4.1 the authors explain using grid search to tune the parameters. Is this reviewer correct in understanding that this search is done across all tasks, as opposed to only the first task? And if so, doesn’t this grid search introduce an information leak by bringing in information from the whole pre-determined set of tasks, therefore undermining the very “continuous learning” aim? How does the algorithm perform if the grid search is done only on the first task?
* Figure 3: the text could clarify that the accuracy is measured across all tasks seen so far. It would be interesting to add a figure (in the Appendix) showing the evolution of the accuracy *per task*, not just the aggregated accuracy.
* In the related works linking neural networks to encoded episodic memory, the authors might want to include the stream of research on HMAX of Anselmi et al 2014 (https://arxiv.org/pdf/1311.4158.pdf) , Leibo et al 2015 (https://arxiv.org/abs/1512.08457), and Blundell et al 2016 (https://arxiv.org/pdf/1606.04460.pdf ).
Minor typos:
* Figure 4: the title of the key says “New/Old” but then the lines read, in order, “Old” then “New” -- it would be nicer to have them in the same order.
* Section 5: missing period between "ephemeral gradient modifications" and "Further".
* Section 4.2, parenthesis should be "perform well across all 1000 classes", not "all 100 classes".
With the above clarifications, this article could become a very notable contribution.
iclr_2018_rJqfKPJ0Z | During the last years, a remarkable breakthrough has been made in AI domain thanks to artificial deep neural networks that achieved a great success in many machine learning tasks in computer vision, natural language processing, speech recognition, malware detection and so on. However, they are highly vulnerable to easily crafted adversarial examples. Many investigations have pointed out this fact and different approaches have been proposed to generate attacks while adding a limited perturbation to the original data. The most robust known method so far is the so called C&W attack [1]. Nonetheless, a countermeasure known as feature squeezing coupled with ensemble defense showed that most of these attacks can be destroyed [6]. In this paper, we present a new method we call Centered Initial Attack (CIA) whose advantage is twofold : first, it insures by construction the maximum perturbation to be smaller than a threshold fixed beforehand, without the clipping process that degrades the quality of attacks. Second, it is robust against recently introduced defenses such as feature squeezing, JPEG encoding and even against a voting ensemble of defenses. While its application is not limited to images, we illustrate this using five of the current best classifiers on ImageNet dataset among which two are adversarialy retrained on purpose to be robust against attacks. With a fixed maximum perturbation of only 1.5% on any pixel, around 80% of attacks (targeted) fool the voting ensemble defense and nearly 100% when the perturbation is only 6%. While this shows how it is difficult to defend against CIA attacks, the last section of the paper gives some guidelines to limit their impact. | This paper presents a reparametrization of the perturbation applied to features in adversarial examples based attacks. It tests this attack variation on against Inception-family classifiers on ImageNet. It shows some experimental robustness to JPEG encoding defense.
Specifically about the method: Instead of perturbing a feature x_i by delta_i, as in other attacks, with delta_i in range [-Delta_i, Delta_i], they propose to perturb x_i^*, which is recentered in the domain of x_i through a heuristic ((x_i ± Delta_i + domain boundary that would be clipped)/2), and they have a similar heuristic for computing a Delta_i^*. Instead of perturbing x_i^* directly by delta_i, they compute the perturbed x as x_i^* + Delta_i^* * g(r_i), so they follow the gradient of the misclassification loss w.r.t. r (instead of delta).
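In code, the reparametrisation amounts to something like the following sketch of this reviewer's reading; the squashing function g is assumed to be tanh, and the exact centering heuristic in the paper may differ slightly:

```python
import numpy as np

def cia_perturb(x, r, eps, lo=0.0, hi=1.0):
    """Centered Initial Attack style perturbation (sketch).

    x:   original features, assumed to live in [lo, hi]
    r:   unconstrained attack variables, optimised by gradient descent
    eps: maximum allowed perturbation per feature
    """
    upper = np.minimum(x + eps, hi)           # boundary that would otherwise be clipped
    lower = np.maximum(x - eps, lo)
    x_star = (upper + lower) / 2.0            # recentered feature x_i^*
    delta_star = (upper - lower) / 2.0        # effective radius Delta_i^*
    return x_star + delta_star * np.tanh(r)   # always inside the domain and within eps of x
```

The attack then optimises the misclassification loss with respect to r, so the perturbation bound holds by construction rather than via clipping.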
+/-:
+ The presentation of the method is clear.
+ ImageNet is a good dataset to benchmark on.
- (!) The (ensemble) white-box attack is effective but the results are not compared to anything else, e.g. it could be compared to (vanilla) FGSM or C&W.
- The other attack demonstrated is actually a grey-box attack: 4 out of the 5 classifiers are known and the attack targets the 5th, but notably all 5 classifiers are Inception-family models.
- The experimental section is a bit sloppy at times (e.g. enumerating more than what is actually done, starting at 3.1.1.).
- The results on their JPEG approximation scheme seem too explorative (early in their development) to be properly compared.
I think that the paper need some more work, in particular to make more convincing experiments that the benefit lies in CIA (baselines comparison), and that it really is robust across these defenses shown in the paper. |
iclr_2018_H1uR4GZRZ | STOCHASTIC ACTIVATION PRUNING FOR ROBUST ADVERSARIAL DEFENSE
Neural networks are known to be vulnerable to adversarial examples. Carefully chosen perturbations to real images, while imperceptible to humans, induce misclassification and threaten the reliability of deep learning systems in the wild. To guard against adversarial examples, we take inspiration from game theory and cast the problem as a minimax zero-sum game between the adversary and the model. In general, for such games, the optimal strategy for both players requires a stochastic policy, also known as a mixed strategy. In this light, we propose Stochastic Activation Pruning (SAP), a mixed strategy for adversarial defense. SAP prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate. We can apply SAP to pretrained networks, including adversarially trained models, without fine-tuning, providing robustness against adversarial examples. Experiments demonstrate that SAP confers robustness against attacks, increasing accuracy and preserving calibration. | This paper investigates a new approach to prevent a given classifier from adversarial examples. The most important contribution is that the proposed algorithm can be applied post-hoc to already trained networks. Hence, the proposed algorithm (Stochastic Activation Pruning) can be combined with algorithms which prevent from adversarial examples during the training.
The proposed algorithm is clearly described. However there are issues in the presentation.
In section 2-3, the problem setting is not suitably introduced.
In particular one sentence that can be misleading:
“Given a classifier, one common way to generate an adversarial example is to perturb the input in direction of the gradient…”
You should explain that given a classifier with stochastic output, the optimal way to generate an adversarial example is to perturb the input proportionally to the gradient. The practical way in which the adversarial examples are generated is not known to the player. An adversary could choose any policy. The only thing the player knows is the best adversarial policy.
In section 4, I do not understand why the adversary uses only the sign and not also the value of the estimated gradient. Does it come from a high variance? If that is the case, you should explain that the optimal policy of the adversary is approximated by the “fast gradient sign method”.
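For reference, the fast gradient sign method mentioned above is the one-step attack

```latex
\tilde{x} \;=\; x + \epsilon \,\operatorname{sign}\!\bigl(\nabla_x L(\theta, x, y)\bigr),
```

which uses only the sign of the (estimated) gradient; making the link to the high-variance argument explicit would clarify the section.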
In comparison to the dropout algorithm, SAP shows improvements in accuracy against adversarial examples. SAP does not perform as well as adversarial training, but SAP can be applied to an already trained network.
Overall, this paper presents a practical method to protect a classifier from adversarial examples, which can be applied in addition to adversarial training. The presentation could be improved.
iclr_2018_r1ayG7WRZ | As machine learning becomes ubiquitous, deployed systems need to be as accurate as they can. As a result, machine learning service providers have a surging need for useful, additional training data that benefits training, without giving up all the details about the trained program. At the same time, data owners would like to trade their data for its value, without having to first give away the data itself before receiving compensation. It is difficult for data providers and model providers to agree on a fair price without first revealing the data or the trained model to the other side. Escrow systems only complicate this further, adding an additional layer of trust required of both parties. Currently, data owners and model owners don't have a fair pricing system that eliminates the need to trust a third party and training the model on the data, which 1) takes a long time to complete, 2) does not guarantee that useful data is paid valuably and that useless data isn't, without trusting in the third party with both the model and the data. Existing improvements to secure the transaction focus heavily on encrypting or approximating the data, such as training on encrypted data, and variants of federated learning. As powerful as the methods appear to be, we show them to be impractical in our use case with real world assumptions for preserving privacy for the data owners when facing black-box models. Thus, a fair pricing scheme that does not rely on secure data encryption and obfuscation is needed before the exchange of data. This paper proposes a novel method for fair pricing using data-model efficacy techniques such as influence functions, model extraction, and model compression methods, thus enabling secure data transactions. We successfully show that without running the data through the model, one can approximate the value of the data; that is, if the data turns out redundant, the pricing is minimal, and if the data leads to proper improvement, its value is properly assessed, without placing strong assumptions on the nature of the model. Future work will be focused on establishing a system with stronger transactional security against adversarial attacks that will reveal details about the model or the data to the other party. | Summary
The paper addresses the issues of fair pricing and secure transactions between model and data providers in the context of real-world machine learning applications.
Major
The paper addresses an important issue regarding the real-world application of machine learning, that is, the transactions between data and model provider and the associated aspects of fairness, pricing, privacy, and security.
The originality and significance of the work reported in this paper are difficult to comprehend. This is largely due to the lack of clarity, in general, and the lack of distinction between what is known and what is proposed. I failed to find any clear description of the proposed approach and any evaluation of the main idea.
Most of the discussions in the paper are difficult to follow due to that many of the statements are vague or unclear. There are some examples of this vagueness illustrated under “minor issues”. Together, the many minor issues contribute to a major communication issue, which significantly reduces readability of the paper. A majority of the references included in the reference section lack some or all of the required meta data.
In my view, the paper is out of scope for ICLR. Neither the CFP overview nor the (non-exhaustive) list of relevant topics suggest otherwise. In very general terms, the paper could of course be characterised as dealing with machine learning implementation/platform/application but the issues discussed are more connected to privacy, security, fair transactions, and pricing.
In summary: although there is no universal rule on how to structure research papers, a more traditional structure (introduction, aim & scope, background, related work, method, results, analysis, conclusions & future work) would most certainly have benefitted the paper through improved clarity and readability. Although some interesting works on adversarial learning, federated learning, and privacy-preserving training are cited in the paper, the review and use of these references did not contribute to a better understanding of the topic or the significance of the contribution in this paper. I was unable to find any support in the paper for the strong general result stated in the abstract (“We successfully show that without running the data through the model, one can approximate the value of the data”).
Minor issues (examples)
- “Models trained only a small scale of data” (missing word)
- “to prevent useful data from not being paid” (unclear meaning)
- “while the company may decline reciprocating gifts such as academic collaboration, while using the data for some other service in the future” (unclear meaning)
- “since any data given up is given up ” (unclear meaning)
- “a user of a centralized service who has given up their data will have trouble telling if their data exchange was fair at all (even if their evaluation was purely psychological)” (unclear meaning)
- “For a generally deployed model, it can take any form. Designing a transaction strategy for each one can be time-consuming and difficult to reason about” (unclear meaning)
- “(et al., 2017)” (unknown reference)
- “Osbert Bastani, Carolyn Kim, and Hamsa Bastani. Interpreting blackbox models via model extraction, 2017” (incomplete reference data)
- “Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding, 2015.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network, 2015.
Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions, 2017.” (Incomplete reference data)
- “H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agera y Arcas. Communication-efficient learning of deep networks from decentralized data. 2016.” (Incomplete reference data)
- “et al. Richard Craid.” (Incorrect author reference style)
- “Ryo Yonetani, Vishnu Naresh Boddeti, Kris M. Kitani, and Yoichi Sato. Privacy-preserving visual learning using doubly permuted homomorphic encryption, 2017.
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization, 2016.” (Incomplete reference data) |
iclr_2018_BJhxcGZCW | Online healthcare services can provide the general public with ubiquitous access to medical knowledge and reduce the information access cost for both individuals and societies. To promote these benefits, it is desired to effectively expand the scale of high-quality yet novel relational medical entity pairs that embody rich medical knowledge in a structured form. To fulfill this goal, we introduce a generative model called Conditional Relationship Variational Autoencoder (CRVAE), which can discover meaningful and novel relational medical entity pairs without the requirement of additional external knowledge. Rather than discriminatively identifying the relationship between two given medical entities in a free-text corpus, we directly model and understand medical relationships from diversely expressed medical entity pairs. The proposed model introduces the generative modeling capacity of variational autoencoder to entity pairs, and has the ability to discover new relational medical entity pairs solely based on the existing entity pairs. Beside entity pairs, relationship-enhanced entity representations are obtained as another appealing benefit of the proposed method. Both quantitative and qualitative evaluations on real-world medical datasets demonstrate the effectiveness of the proposed method in generating relational medical entity pairs that are meaningful and novel. | In the medical context, this paper describes the classic problem of "knowledge base completion" from structured data only (no text). The authors argue for the advantages of a generative VAE approach (but without being convincing). They do not cite the extensive literature on KB completion. They present experimental results on their own data set, evaluating only against simpler baselines of their own VAE approach, not the pre-existing KB methods.
The authors seem unaware of a large literature on "knowledge base completion." E.g. [Bordes, Weston, Collobert, Bengio, AAAI, 2011], [Socher et al 2013 NIPS], [Wang, Wang, Guo 2015 IJCAI], [Gardner, Mitchell 2015 EMNLP], [Lin, Liu, Sun, Liu, Zhu AAAI 2015], [Neelakantan, Roth, McCallum 2015],
The paper claims that operating on pre-structured data only (without using text) is an advantage. I don't find the argument convincing. There are many methods that can operate on pre-structured data only, but also have the ability to incorporate text data when available, e.g. "universal schema" [Riedel et al, 2014].
The paper claims that "discriminative approaches" need to iterate over all possible entity pairs to make predictions. In their generative approach they say they find outputs by "nearest neighbor search." But the same efficient search is possible in many of the classic "discriminatively-trained" KB completion models also.
It is admirable that the authors use an interesting (and to my knowledge novel) data set. But the method should also be evaluated on multiple now-standard data sets, such as FB15K-237 or NELL-995. The method is evaluated only against their own VAE-based alternatives. It should be evaluated against multiple other standard KB completion methods from the literature, such as Jason Weston's Trans-E, Richard Socher's Tensor Neural Nets, and Neelakantan's RNNs. |
iclr_2018_rJ695PxRW | discovering-order-unordered
The assumption that data samples are independently identically distributed is the backbone of many learning algorithms. Nevertheless, datasets often exhibit rich structures in practice, and we argue that there exist some unknown orders within the data instances. Aiming to find such orders, we introduce a novel Generative Markov Network (GMN) which we use to extract the order of data instances automatically. Specifically, we assume that the instances are sampled from a Markov chain. Our goal is to learn the transitional operator of the chain as well as the generation order by maximizing the generation probability under all possible data permutations. One of our key ideas is to use neural networks as a soft lookup table for approximating the possibly huge, but discrete transition matrix. This strategy allows us to amortize the space complexity with a single model and make the transitional operator generalizable to unseen instances. To ensure the learned Markov chain is ergodic, we propose a greedy batch-wise permutation scheme that allows fast training. Empirically, we evaluate the learned Markov chain by showing that GMNs are able to discover orders among data instances and also perform comparably well to state-of-the-art methods on the one-shot recognition benchmark task. | The authors deal with the problem of implicit ordering in a dataset and the challenge of recovering it, i.e. when given a random dataset with no explicit ordering in the samples, the model is able to recover an ordering. They propose to learn a distance-metric-free model that assumes a Markov chain as the generative mechanism of the data and learns not only the transition matrix but also the optimal ordering of the observations.
> Abstract
“Aiming to find such orders, we introduce a novel Generative Markov Network (GMN) which we use to extract the order of data instances automatically. ”
I am not sure what “automatically” refers to here. Do the authors mean that the GMN model does not explicitly assume any ordering in the observed dataset? This needs to be better stated here.
“Aiming to find such orders, we introduce a novel Generative Markov Network (GMN) which we use to extract the order of data instances automatically; given an unordered dataset, it outputs the best -most possible- ordering.”
Most models assume an explicit ordering in the dataset and use it as an integral modelling assumption. Contrary to that, they propose a model where no ordering assumption is made explicitly, but the model itself will recover an ordering if one exists.
> Introduction
The introduction is fairly well structured and the example of the joint locations in different days helps the reader.
In the last paragraph of page 1, “we argue that … a temporal model can generate it.”, the authors present very good examples where ordered observations (ballerina poses, video frames) can be shuffled and then the proposed model can recover a temporal ordering out of them. What I would also like to see here is an example where the recovered ordering is useful in itself. An example where the recovered ordering increases the importance of the inferred solution would be more interesting.
2. Related work
This whole section is not clear how it relates to the proposed model GMN. Rewriting is strongly suggested.
The authors mention Deep Generative models and One-shot learning methods as related work but the way this section is constructed makes it hard for the reader to see the relation. It is important that first the authors discuss the characteristics of GMN that makes it similar to Deep generative models and the one-shot learning models. They should briefly explain the characteristics of DGN and one-shot learning so that the readers see the relationship.
Also, the authors never mention that the architecture they propose is deep.
Regarding the last paragraph of page 2, “Our approach can be categorised … can be computed efficiently.”:
Not sure why the authors assume that the samples can be sampled from an unmixed chain. An unmixed chain can also result in observing data that do not exhibit the real underlying relationships. Also, the authors mention a couple of characteristics of the GMN without really explaining them. What are the explicit and implicit models [1]? This needs more details.
[1] P. J. Diggle and R. J. Gratton. Monte Carlo methods of inference for implicit statistical models. Journal of the Royal Statistical Society. Series B (Methodological), pages 193–227, 1984.
“Second, prior approaches were proposed based on the notion of denoising models. In other words, their goal was generating high-quality images; on the other hand, we aim at discovering orders in datasets.” —>this bit is confusing. Do the authors mean that prior approaches were considering the observed ordering as part of the model assumptions and were just focusing on the denoising?
3. Generative Markov models
First, I would like to draw the attention of the authors to the terminology they use. The states here are not the latent states usually referred to in the literature on Markov chains. The states here are observed and should not be confused with the emissions also usually discussed in the corresponding literature. There are as many states as there are observations, and no differentiation is made for ties. All this is based on my understanding of the model.
In the Equation just before equation (1), on the left hand side, shouldn’t \pi be after the `;’. It’s an average over the possible \pi. We cannot consider the average over \pi when we also want to find the optimal \pi. The sum doesn’t need to be there. Shouldn’t it just be max_{\theta, \pi} log P({s_i}^{n}_{i=1}; \pi, \theta) ?
Equation (1), same. The summation over the possible \pi is confusing. It’s an optimisation problem…
page 4, section 3.1: The discussion about the use of a Neural Net for the construction of the transition matrix needs expansion. It is unclear how the matrix is constructed. Please add more details, e.g. the use of a soft-max non-linear transformation so that the output of the Neural Net can be interpreted as the probabilities of jumping to one of the possible states. In this fashion, we map the input (current state) and transform it into the probability of occupying each state at the next time step.
Why this needs expansion: The construction of the transition matrix is the one that actually plays the role of the distance metric in the related models. More specifically, the choice of the non-linear function that outputs the transition probability is crucial; e.g. a smooth function will output comparable transition probabilities to similar inputs (i.e. similar states).
section 3.2:
My concern about averaging over \pi applies on the equations here too.
“However, without further assumption on the structure of the transitional operator..” —> I think the choice of the nonlinear function in the output node of the NN is actually related to the transition matrix and defines the probabilities. It is a confusing statement to make and the authors need to discuss it more. After all, what is the driving force of the inference? This is a problem/task where the observations are considered under a number of different permutations. As such, the ordering is not fixed and the main driving force regarding the best choice of ordering should come from the architecture of the transition matrix; what kind of transitions does the Neural Net architecture favour? The approach is free of a distance metric, but assumptions are still made that favour specific transitions over others.
“At first, Alg. 1 enumerates all the possible states appearing in the first time step. For each of the following steps, it finds the next state by maximizing the transition probability at the current step, i.e., a local search to find the next state. ” —> local search in the sense that the algorithm chooses as the next state the state with the biggest transition probability (to it) as defined in the Neural Net (transition operator) output? This is a deterministic step, right?
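To make the question concrete, the decoding as this reviewer understands it is the following deterministic procedure, run from each candidate start state and keeping the most probable sequence (a sketch, not the authors' code; the learned transition operator is assumed to be available as a function):

```python
def greedy_order(states, transition_prob, start):
    """Greedily decode an ordering from a learned transition operator (sketch)."""
    order = [start]
    remaining = set(range(len(states))) - {start}
    while remaining:
        cur = order[-1]
        # local search: pick the unvisited state with the largest
        # transition probability from the current state
        nxt = max(remaining, key=lambda j: transition_prob(states[cur], states[j]))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```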
4.1 DISCOVERING ORDERS IN DATASETS
Nice description of the datasets. In the <MSR_SenseCam> the choice of one of the classes needs to be supported. Why? What do the authors expect to happen if a number of instances from different classes are chosen?
4.1.1 IMPLICIT ORDERS IN DATASETS
The explanation of the inferred orderings for the GMN and Nearest Neighbour model is not clear. In figure 2, what forces the GMN to make distinguishable transitions as opposed to the Nearest neighbour approach that prefers to get stuck to similar states? Is it the transition matrix architecture as defined by the neural network?
>> Figure 10: why use of X here? Why not keep being consistent by using s?
*** Do the authors test the model performance on an ordered dataset (after shuffling it…)? Is the model able to recover the order? **
iclr_2018_BJMuY-gRW | We introduce a neural network that represents sentences by composing their words according to induced binary parse trees. We use Tree-LSTM as our composition function, applied along a tree structure found by a fully differentiable natural language chart parser. Our model simultaneously optimises both the composition function and the parser, thus eliminating the need for externally-provided parse trees which are normally required for Tree-LSTM. It can therefore be seen as a tree-based RNN that is unsupervised with respect to the parse trees. As it is fully differentiable, our model is easily trained with an off-the-shelf gradient descent method and backpropagation. We demonstrate that it achieves better performance compared to various supervised Tree-LSTM architectures on a textual entailment task and a reverse dictionary task. Finally, we show how performance can be improved with an attention mechanism which fully exploits the parse chart, by attending over all possible subspans of the sentence. | The paper presents a model titled the "unsupervised tree-LSTM," in which the authors mash up a dynamic-programming chart and a recurrent neural network. As far as I can glean, the topology of the neural network is constructed using the chart of a CKY parser. When combining different constituents, an energy function is computed (equation 6) and the resulting energies are passed through a softmax. The architecture achieves impressive results on two tasks: SNLI and the reverse dictionary of Hill et al. (2016).
Overall, I found the paper deeply uninspired. The authors downplay the similarity of their paper to that of Le and Zuidema (2015), which I did not appreciate. It's true that Le and Zuidema take a parse forest from an existing parser, but it still contains an exponential number of trees, as does the work here. Note that the exposition in Le and Zuidema (2015) discusses the pruned case as well, i.e., a complete parse forest. The authors of this paper simply write "Le and Zuidema (2015) propose a model that takes as input a parse forest from an external parser, in order to deal with uncertainty." I would encourage the authors to revisit Le and Zuidema (2015), especially section 3.2, and consider the technical innovations over the existing work. I believe the primary difference (other than using an LSTM instead of a convnet) is to replace max-pooling with softmax-pooling. Do these two architectural changes matter? The experiments offer no empirical comparison. In short, the insight of having an end-to-end differentiable function based on a dynamic-programming chart is pretty common -- the idea is in the air. The authors provide yet another instantiation of such an approach, but this time with an LSTM.
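For concreteness, the architectural difference I am referring to is between soft and hard pooling over the candidate compositions of a chart cell: with candidate representations h_k and energies e_k,

```latex
h_{\text{cell}} \;=\; \sum_k \frac{\exp(e_k)}{\sum_{k'} \exp(e_{k'})}\, h_k
\qquad \text{versus} \qquad
h_{\text{cell}} \;=\; h_{\arg\max_k e_k}.
```

The former is differentiable everywhere, which is what lets the parser be trained end-to-end with backpropagation; an ablation isolating this change would have been informative.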
The technical exposition is also relatively poor. The authors could have expressed their network using a clean recursion, following the parse chart, but opted not to, and, instead, provided a round-about explanation in English. Thus, despite the strong results, I would not like to see this work in the proceedings, due to the lack of originality and poor technical discussion. If the paper were substantially cleaned-up, I would be willing to increase my rating. |
iclr_2018_BkCV_W-AZ | Deep reinforcement learning algorithms that estimate state and state-action value functions have been shown to be effective in a variety of challenging domains, including learning control strategies from raw image pixels. However, algorithms that estimate state and state-action value functions typically assume a fully observed state and must compensate for partial or non-Markovian observations by using finite-length frame-history observations or recurrent networks. In this work, we propose a new deep reinforcement learning algorithm based on counterfactual regret minimization that iteratively updates an approximation to a cumulative clipped advantage function and is robust to partially observed state. We demonstrate that on several partially observed reinforcement learning tasks, this new class of algorithms can substantially outperform strong baseline methods: on Pong with single-frame observations, and on the challenging Doom (ViZDoom) and Minecraft (Malmö) first-person navigation benchmarks. | This paper presents Advantage-based Regret Minimization, somewhat similar to advantage actor-critic with REINFORCE.
The main focus of the paper seems to be the motivation/justification of this algorithm with connection to the regret minimization literature (and without Markov assumptions).
The claim that ARM is more robust to partially observable domains is supported by experiments where it outperforms DQN.
There are several things to like about this paper:
- The authors do a good job of reviewing/referencing several papers in the field of "regret minimization" that would probably be of interest to the ICLR community + provide non-obvious connections / summaries of these perspectives.
- The issue of partial observability is good to bring up, rather than simply relying on the MDP framework that is often taken as a given in "deep reinforcement learning".
- The experimental results show that ARM outperforms DQN on a suite of deep RL tasks.
However, there are also some negatives:
- Reviewing so much of the CFR-literature in a short paper means that it ends up feeling a little rushed and confused.
- The ultimate algorithm *seems* like it is really quite similar to other policy gradient methods such as A3C, TRPO etc. At a high enough level, these algorithms can be written the same way... there are undoubtedly some key differences in how they behave, but it's not spelled out to the reader and I think the connections can be missed.
- The experiment/motivation I found most compelling was 4.1 (since it clearly matches the issue of partial observability) but we only see results compared to DQN... it feels like you don't put a compelling case for the non-Markovian benefits of ARM vs other policy gradient methods. Yes A3C and TRPO seem like they perform very poorly compared to ARM... but I'm left wondering how/why?
I feel like this paper is in a difficult position of trying to cover a lot of material/experiments in too short a paper.
A lot of the cited literature was also new to me, so it could be that I'm missing something about why this is so interesting.
However, I came away from this paper quite uncertain about the real benefits/differences of ARM versus other similar policy gradient methods... I also didn't feel the experimental evaluations drove a clear message except "ARM did better than all other methods on these experiments"... I'd want to understand how/why and whether we should expect this universally.
The focus on "regret minimization perspectives" didn't really get me too excited...
Overall I would vote against acceptance for this version. |
iclr_2018_B1tC-LT6W | We propose and evaluate new techniques for compressing and speeding up dense matrix multiplications as found in the fully connected and recurrent layers of neural networks for embedded large vocabulary continuous speech recognition (LVCSR). For compression, we introduce and study a trace norm regularization technique for training low rank factored versions of matrix multiplications. Compared to standard low rank training, we show that our method leads to good accuracy versus number of parameter trade-offs and can be used to speed up training of large models. For speedup, we enable faster inference on ARM processors through new open sourced kernels optimized for small batch sizes, resulting in 3x to 7x speed ups over the widely used gemmlowp library. Beyond LVCSR, we expect our techniques and kernels to be more generally applicable to embedded neural networks with large fully connected or recurrent layers. | The authors propose a strategy for compressing RNN acoustic models in order to deploy them for embedded applications. The technique consists of first training a model by constraining its trace norm, which allows it to be well-approximated by a truncated SVD in a second fine-tuning stage. Overall, I think this is interesting work, but I have a few concerns which I’ve listed below:
1. Section 4, which describes the experiments on compressing server-sized acoustic models for embedded recognition, seems a bit “disjoint” from the rest of the paper. I had a number of clarification questions specifically on this section:
- Am I correct that the results in this section do not use the trace-norm regularization at all? It would strengthen the paper significantly if the experiments presented on WSJ in the first section were also conducted on the “internal” task with more data.
- How large are the training/test sets used in these experiments (for test sets, number of words, for training sets, amount of data in hours (is this ~10,000hrs), whether any data augmentation such as multi-style training was done, etc.)
- What are the “tier-1” and “tier-2” models in this section? It would also aid readability if the various models were described more clearly in this section, with an emphasis on structure, output targets, what LMs are used, how are the LMs pruned for the embedded-size models, etc. Also, particularly given that the focus is on embedded speech recognition, of which the acoustic model is one part, I would like a few more details on how decoding was done, etc.
- The details in appendix B are interesting, and I think they should really be a part of the main paper. That being said, the results in Section B.5, as the authors mention, are somewhat preliminary, and I think the paper would be much stronger if the authors can re-run these experiments were models are trained to convergence.
- The paper focuses fairly heavily on speech recognition tasks, and I wonder if it would be more suited to a conference on speech recognition.
2. Could the authors comment on the relative training time of the models with the trace-norm regularizer, L2-regularizer and the unconstrained model in terms of convergence time.
3. Clarification question: For the WSJ experiments was the model decoded without an LM? If no LM was used, then the choice of reporting results in terms of only CER is reasonable, but I think it would be good to also report WERs on the WSJ set in either case.
4. Could the authors indicate the range of values of \lambda_{rec} and \lambda_{nonrec} that were examined in the work? Also, on a related note, in Figure 2, does each point correspond to a specific choice of these regularization parameters?
5. Figure 4: For the models in Figure 4, it would be useful to indicate the starting CER of the stage-1 model before stage-2 training to get a sense of how stage-2 training impacts performance.
6. Although the results on the WSJ set are interesting, I would be curious if the same trends and conclusions can be drawn from a larger dataset -- e.g., the internal dataset that results are reported on later in the paper, or on a set like Switchboard. I think these experiments would strengthen the paper.
7. The experiments in Section 3.2.3 were interesting, since they demonstrate that the model can be warm-started from a model that hasn’t fully converged. Could the authors also indicate the CER of the model used for initialization in addition to the final CER after stage-2 training in Figure 5.
8. In Section 4, the authors mention that quantization could be used to compress models further, although this usually degrades WER by 2--4% relative. I think the authors should consider citing previous works which have examined quantization for embedded speech recognition [1], [2]. In particular, note that [2] describes a technique for training with quantized forward passes which results in models that have smaller performance degradation relative to quantization after training.
References:
[1] Vincent Vanhoucke, Andrew Senior, and Mark Mao, “Improving the speed of neural networks on cpus,” in Deep Learning and Unsupervised Feature Learning Workshop, NIPS, 2011.
[2] Raziel Alvarez, Rohit Prabhavalkar, Anton Bakhtin, “On the efficient representation and execution of deep acoustic models,” Proc. of Interspeech, pp. 2746 -- 2750, 2016.
9. Minor comment: The authors use the term “warmstarting” to refer to the process of training NNs by initializing from a previous model. It would be good to clarify this in the text. |
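Returning to the overall compression scheme: for readers, the stage-2 warm start amounts to a truncated SVD of each trained weight matrix. A minimal sketch (the rank-selection criterion used by the authors may differ):

```python
import numpy as np

def low_rank_init(W, rank):
    """Factor a trained weight matrix W (n x m) into warm-start factors A (n x r), B (r x m)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * np.sqrt(s[:rank])              # scale columns by sqrt of singular values
    B = np.sqrt(s[:rank])[:, None] * Vt[:rank, :]    # scale rows by sqrt of singular values
    return A, B   # A @ B is the best rank-`rank` approximation of W in Frobenius norm
```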
iclr_2018_ryZERzWCZ | A variety of learning objectives have been recently proposed for training generative models. We show that many of them, including InfoGAN, ALI/BiGAN, ALICE, CycleGAN, VAE, β-VAE, adversarial autoencoders, AVB, and InfoVAE, are Lagrangian duals of the same primal optimization problem. This generalization reveals the implicit modeling trade-offs between flexibility and computational requirements being made by these models. Furthermore, we characterize the class of all objectives that can be optimized under certain computational constraints. Finally, we show how this new Lagrangian perspective can explain undesirable behavior of existing methods and provide new principled solutions. | Update after rebuttal
==========
Thanks for your response to my questions. The stated usefulness of the method unfortunately does not answer my worry about the significance. It remains unclear to me how much "real" difference the presented results would make to advance the existing work on generative models. Also, the authors did not promise any major changes in the final version in this direction, which is why I have reduced my score.
I do believe that this work could be useful and should be resubmitted. There are two main things to improve. First, the paper needs more work on improving the clarity. Second, more work needs to be added to show that the paper will make a real difference in advancing/improving existing methods.
==========
Before rebuttal
==========
This paper proposes an optimization problem whose Lagrangian duals contain many existing objective functions for generative models. Using this framework, the paper tries to generalize the optimization problems by defining a computationally-tractable family which can be expressed in terms of existing objective functions.
The paper has interesting elements and the results are original. The main issue is that the significance is unclear. The writing in Section 3 is unclear for me, which further made it challenging to understand the consequences of the theorems presented in that section.
Here is a big-picture question that I would like to know answer for. Do the results of sec 3 help us identify a more useful/computationally tractable model than exiting approaches? Clarification on this will help me evaluate the significance of the paper.
I have three main clarification points. First, what is the importance of the T1, T2, and T3 classes defined in Def. 7, i.e., why are these classes useful in solving some problems? Second, is the opposite relationship in Theorems 1, 2, and 3 true as well, e.g., is every linear combination of beta-ELBO and VMI equivalent to a likelihood-based computable objective of the KL info-encoding family? Is the same true for the other theorems?
Third, the objective of section 3 is to show that "only some choices of lambda lead to a dual with a tractable equivalent form". Could you rewrite the theorems so that they truly reflect this, rather than stating something which only indirectly implies the main claim of the paper?
Some small comments:
- Eq. 4. It might help to define MI to remind readers.
- After Eq. 7, please add a proof (may be in the Appendix). It is not that straightforward to see this. Also, I suppose you are saying Eq. 3 but with f from Eq. 4.
- Line after Eq. 8, D_i is "one" of the following... Is it always the same D_i for all i, or could it be different? Make this clearer to avoid confusion.
- Last line in Para after Eq. 15, "This neutrality corresponds to the observations made in.." It might be useful to add a line explaining that particular "observation"
- Def. 7, the names did not make much sense to me. You can add a line explaining why this name is chosen.
- Def. 8, the last equation is unclear. Does the first equivalence imply the next one?
- Writing in Sec. 3.3 can be improved. e.g., "all linear operations on log prob." is very unclear, "stated computational constraints" which constraints? |
iclr_2018_HyDMX0l0Z | Generative Adversarial Networks (GANs), when trained on large datasets with diverse modes, are known to produce conflated images which do not distinctly belong to any of the modes. We hypothesize that this problem occurs due to the interaction between two facts: (1) For datasets with large variety, it is likely that the modes lie on separate manifolds. (2) The generator (G) is formulated as a continuous function, and the input noise is derived from a connected set, due to which G's output is a connected set. If G covers all modes, then there must be some portion of G's output which connects them. This corresponds to undesirable, conflated images. We develop theoretical arguments to support these intuitions. We propose a novel method to break the second assumption via learnable discontinuities in the latent noise space. Equivalently, it can be viewed as training several generators, thus creating discontinuities in the G function. We also augment the GAN formulation with a classifier C that predicts which noise partition/generator produced the output images, encouraging diversity between each partition/generator. We experiment on MNIST, celebA, STL-10, and a difficult dataset with clearly distinct modes, and show that the noise partitions correspond to different modes of the data distribution, and produce images of superior quality. | Summary:
The paper studies the problem of learning distributions with disconnected support. The paper is very well written, and the analysis is mostly correct, with some important exceptions. However, there are a number of claims that are unverified, and very important baselines are missing. I suggest improving the paper taking into account the following remarks and I will strongly consider improving the score.
Detailed comments:
- The paper is very well written, which is a big plus.
- There are a number of claims in the paper that are not supported by experiments, citations, or a theorem.
- Sections 3.1 - 3.3 can be summarized as "Connected prior + continuous generator => connected support". Thus, to allow for disconnected support, the authors propose to have a discontinuous generator. However, to me it seems that a trivial and important attack on this problem is to allow a simple disconnected prior, such as a mixture of uniforms, or at least an approximately disconnected one (given the superexponential decay of the gaussian pdf) such as a mixture of gaussians, which is very common. The authors fail to mention this obvious alternative, or explore it further, which I think weakens the paper.
- Another standard approach to attacking diverse datasets such as imagenet is adding noise in the intermediate layers of the generator (this was done by EBGAN and the Improved GAN paper by Salimans et al.). It seems to me that this baseline is missing.
- Section 3.4, paragraph 3, "the outputs corresponding to vectors linearly interpolated from z_1 to z_2 show a smooth". Actually, this is known to not perform very well often, indeed the interpolations are done through great circles in z_1 and z_2. See https://www.youtube.com/watch?v=myGAju4L7O8 for example.
- Lemma 1 is correct, but the analysis in the paragraph following it is flat out wrong. The fact that a certain z has high density doesn't imply that the sample g_\theta(z) has high density! You're missing the Jacobian term appearing in the change of variables (the identity is written out after this list for reference). Indeed, it's common to see neural nets spreading apart regions of high probability to the extent that each individual output point has low density (this is due in its totality to the fact that ||\nabla_z g_\theta(z)|| can be big).
- Borrowing from the previous comment, the evidence to support result 5 is insufficient. I think the authors have the right intuition, but no evidence or citation is presented to motivate result 5. Indeed, DCGANs are known to have extremely sharp interpolations, suggesting that small jumps in z lead to large jumps in images, thus having the potential to assign low probability to tunnels.
- A citation, experiment or a theorem is missing showing that the K of a generator is small enough in an experiment with separated manifolds. Until that evidence is presented, section 3.5 is anecdotal.
- The second paragraph of section 3.6 is a very astute observation, but again it is necessary to show some evidence to verify this intuition.
- The authors then propose to partition the prior space by training separate first layers for the generator in a maximally discriminative way, and then at inference time just sampling which layer to use uniformly. It's important to note that this has a problem when the underlying separated manifolds in the data are not equiprobable. For example, if we use N = 2 in CelebRoom but we use 30% faces and 70% bedrooms, I would still expect tunneling due to the fact that one of the linear layers has to cover both faces and bedrooms.
- MNIST is known to be a very poor benchmark for image generation, and it should be avoided.
- I fail to see an improvement in quality on CelebA. It's nice to see some minor form of clustering when using the generator's prediction, but this has been seen in many other algorithms (e.g. ALI) with much better results long before. I have to say also that the official baseline for 64x64 images in wgangp (that I've used several times) gives much better results than the ones presented in this paper: https://github.com/igul222/improved_wgan_training/blob/master/gan_64x64.py .
- The experiments in celebRoom are quite nice, and a good result, but we are still missing a detailed analysis for most of the assumptions and improvements claimed in the paper. It's very hard to make very precise claims about the improvements of this algorithm in such a complex setting without having even studied the standard baselines (e.g. noise at every layer of the generator, which has very public and well established code https://github.com/openai/improved-gan/blob/master/imagenet/generator.py).
- I would like to point a lot of tunneling issues can be seen and studied in toy datasets. The authors may want to consider doing targeted experiments to evaluate their assumptions.
=====================
After the rebuttal I've increased my score. The authors did a great job of addressing some of the concerns. I still think there is more to be done in justifying the approach, in dealing properly with tunneling when we're not in the somewhat artificial case of equiprobable partitions, and primarily in understanding the extent to which tunneling is a problem in current methods. The revision is a step forward in this direction, but a lot still remains to be done. I would like to see simple targeted experiments aimed at testing how much and in what way tunneling is a problem in current methods before I see high-dimensional, non-quantitative experiments.
In the case where the paper gets rejected I would highly recommend the acceptance at the workshop due to the paper raising interesting questions and hinting to a partial solution, even though the paper may not be at a state to be published at a conference venue like ICLR. |
iclr_2018_ryZ3KCy0W | Application of deep learning has been successful in various domains such as image recognition, speech recognition and natural language processing. However, the research on its application in graph mining is still in an early stage. Here we present the first generic deep learning approach to the graph link weight prediction problem based on node embeddings. We evaluate this approach with three different node embedding techniques experimentally and compare its performance with two state-of-the-art non deep learning baseline approaches. Our experiment results suggest that this deep learning approach outperforms the baselines by up to 70% depending on the dataset and embedding technique applied. This approach shows that deep learning can be successfully applied to link weight prediction to improve prediction accuracy. | Although this paper aims at an interesting and important task, the reviewer does not feel it is ready to be published.
Below are some detailed comments:
Pros
- Numerous public datasets are used for the experiments
- Good introductions for some of the existing methods.
Cons
- The novelty is limited. The basic idea of the proposed method is to simply concatenate the embeddings of the two nodes (each passed through its own activation) on both sides of an edge, which is straightforward and produces only marginal improvement over existing methods (the comparison of Figure 1 and Figure 3 would suggest this fact); a minimal sketch of this architecture is given after this list. The optimization algorithm is not novel either.
- Lack of detailed description and analysis for the proposed model S. In Section 5.2, only brief descriptions are given for the proposed approach.
- The selected baseline methods are too weak as competitors, and some important relevant methods are also missing from the comparisons. For the graph embedding learning task, one of the state-of-the-art approaches is Graph Convolutional Networks (GCNs), and GCNs seem to be able to tackle this problem as well. Moreover, the target task of this paper is mathematically identical to the rating prediction problem (if we treat the weight matrix of the graph as the rating matrix, and consider the nodes as users, for example), which can be solved by a classic collaborative filtering method such as matrix factorization. The authors probably need to survey these methods and compare them against the proposed approach.
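The sketch referred to in the first bullet: the described architecture amounts to roughly the following forward pass, where dimensions, activations and parameter names are this reviewer's guesses for illustration:

```python
import numpy as np

def edge_weight(emb_u, emb_v, params):
    """Predict a link weight from the two endpoint embeddings (sketch)."""
    relu = lambda a: np.maximum(a, 0.0)
    h = np.concatenate([relu(params["W1"] @ emb_u),   # each endpoint embedding is
                        relu(params["W2"] @ emb_v)])  # activated separately, then concatenated
    return float(params["w_out"] @ relu(h))           # scalar predicted edge weight
```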
iclr_2018_H139Q_gAW | Convolution Neural Network (CNN) has gained tremendous success in computer vision tasks with its outstanding ability to capture the local latent features. Recently, there has been an increasing interest in extending CNNs to the general spatial domain. Although various types of graph convolution and geometric convolution methods have been proposed, their connections to traditional 2D-convolution are not well-understood. In this paper, we show that depthwise separable convolution is a path to unify the two kinds of convolution methods in one mathematical view, based on which we derive a novel Depthwise Separable Graph Convolution that subsumes existing graph convolution methods as special cases of our formulation. Experiments show that the proposed approach consistently outperforms other graph convolution and geometric convolution baselines on benchmark datasets in multiple domains. | The paper presents a Depthwise Separable Graph Convolution network that aims
at generalizing Depthwise convolutions, which exhibit nice performance in image-related tasks, to the graph domain. In particular, it targets Graph Convolutional Networks.
In the abstract the authors mention that the Depthwise Separable Graph Convolution that they propose is the key to understanding the connections between geometric convolution methods and traditional 2D ones. I am afraid I have to disagree, as the proposed approach does not give any better understanding of what needs to be done and why. It is an efficient way to mimic what has worked so far for the planar domain, but I would not consider it fundamental to "closing the gap".
I feel that the text is often redundant and that it could be simplified a lot. For example the authors state in various parts that DSC does not work on non-Euclidean data. Section 2 should be clearer and used to better explain related approaches to motivate the proposed one.
In fact, the entire motivation, at least for me, never went beyond the simple fact that this happens to be a good way to improve performance. The intuition given is not sufficient to substantiate some of the claims on generality and understanding of graph based DL.
In 3.1, at point (2), the authors mention that DSC filters are learned from the data whereas GC uses a constant matrix. This is not correct, as also reported in equation 2. The matrix U is learned from the data as well.
Equation (4) shows that the proposed approach would weight Q different GC layers. In practical terms this is a linear combination of these graph convolutional layers.
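In other words, as far as this reviewer can tell, the output has the form sketched below, where the mixing weights are learned (in the paper they presumably depend on \Delta_{ij}, which is exactly why its definition matters); this is a hedged reading of Eq. (4), not the authors' exact formula:

```python
import numpy as np

def combined_gc(A_hat, X, U_list, alpha):
    """Weighted sum of Q standard graph-convolution branches (reviewer's reading of Eq. (4)).

    A_hat:  (n, n) normalised adjacency,  X: (n, d) node features,
    U_list: Q learnable (d, d_out) matrices,  alpha: (Q,) mixing weights.
    """
    return sum(a * (A_hat @ X @ U) for a, U in zip(alpha, U_list))
```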
What is not clear is the \Delta_{ij} definition. It is first introduced in 2.3 and described as the relative position of pixel i and pixel j on the image, but then used in the context of a graph in (4). What is the coordinate system used by the authors in this case? This is a very important point that should be made clearer.
Why is the Related Work section at the end? I would put it at the front.
The experiments compare with the recent relevant literature. I think that having a smaller number of parameters is a good thing in this setting as the data is scarce; however, I would like to see a more in-depth comparison with respect to the number of features produced by the model itself. For example GCN has a representation space (latent) much smaller than DSGC.
No statistics over multiple runs are reported, and given the high variance of results on these datasets I would like them to be reported.
I think the separability of the filters in this case brings the right level of simplification to the learning task, however as it also holds for the planar case it is not clear whether this is necessarily the best way forward.
What are the underlying mathematical insights that lead towards selecting separable convolutions?
Overall I found the paper interesting but not ground-breaking. A nice application of the separable principle to GCN. Results are also interesting but should be further verified by multiple runs.
iclr_2018_BJuWrGW0Z | DYNAMIC NEURAL PROGRAM EMBEDDINGS FOR PROGRAM REPAIR
Neural program embeddings have shown much promise recently for a variety of program analysis tasks, including program synthesis, program repair, codecompletion, and fault localization. However, most existing program embeddings are based on syntactic features of programs, such as token sequences or abstract syntax trees. Unlike images and text, a program has well-defined semantics that can be difficult to capture by only considering its syntax (i.e. syntactically similar programs can exhibit vastly different run-time behavior), which makes syntaxbased program embeddings fundamentally limited. We propose a novel semantic program embedding that is learned from program execution traces. Our key insight is that program states expressed as sequential tuples of live variable values not only capture program semantics more precisely, but also offer a more natural fit for Recurrent Neural Networks to model. We evaluate different syntactic and semantic program embeddings on the task of classifying the types of errors that students make in their submissions to an introductory programming class and on the CodeHunt education platform. Our evaluation results show that the semantic program embeddings significantly outperform the syntactic program embeddings based on token sequences and abstract syntax trees. In addition, we augment a search-based program repair system with predictions made from our semantic embedding and demonstrate significantly improved search efficiency. | Summary of paper: The paper proposes an RNN-based neural network architecture for embedding programs, focusing on the semantics of the program rather than the syntax. The application is to predict errors made by students on programming tasks. This is achieved by creating training data based on program traces obtained by instrumenting the program by adding print statements. The neural network is trained using this program traces with an objective for classifying the student error pattern (e.g. list indexing, branching conditions, looping bounds).
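To give a concrete picture of the pipeline, the semantic embedding presumably looks something like the sketch below; the use of a GRU, the fixed-size encoding of each program state, and all names are this reviewer's assumptions rather than the authors' exact architecture:

```python
import torch
import torch.nn as nn

class TraceClassifier(nn.Module):
    """Sketch: classify student error patterns from execution traces."""

    def __init__(self, state_dim, hidden_dim, n_error_classes):
        super().__init__()
        self.rnn = nn.GRU(state_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_error_classes)

    def forward(self, traces):
        # traces: (batch, n_steps, state_dim); each step is a fixed-size encoding
        # of the live variable values recorded at that point of the execution
        _, h = self.rnn(traces)
        return self.out(h[-1])   # logits over error-pattern classes
```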
---
Quality: The experiments compare the three proposed neural network architectures with two syntax-based architectures. It would be good to see a comparison with some techniques from Reed & De Freitas (2015) as this work also focuses on semantics-based embeddings.
Clarity: The paper is clearly written.
Originality: This work doesn't seem that original from an algorithmic point of view, since Reed & De Freitas (2015) and Cai et al. (2017), among others, have considered using execution traces. However, the application to program repair is novel (as far as I know).
Significance: This work can be very useful for an educational platform though a limitation is the need for adding instrumentation print statements by hand.
---
Some questions/comments:
- Do we need to add the print statements for any new programs that the students submit? What if the structure of the submitted program doesn't match the structure of the intended solution and hence adding print statements cannot be automated?
---
References
Cai, J., Shin, R., & Song, D. (2017). Making Neural Programming Architectures Generalize via Recursion. In International Conference on Learning Representations (ICLR). |
iclr_2018_ry0WOxbRZ | Generative adversarial networks (GANs) are a powerful framework for generative tasks. However, they are difficult to train and tend to miss modes of the true data generation process. Although GANs can learn a rich representation of the covered modes of the data in their latent space, the framework misses an inverse mapping from data to this latent space. We propose Invariant Encoding Generative Adversarial Networks (IVE-GANs), a novel GAN framework that introduces such a mapping for individual samples from the data by utilizing features in the data which are invariant to certain transformations. Since the model maps individual samples to the latent space, it naturally encourages the generator to cover all modes. We demonstrate the effectiveness of our approach in terms of generative performance and learning rich representations on several datasets including common benchmark image generation tasks. | This paper presents the IVE-GAN, a model that introduces an encoder to the Generative Adversarial Network (GAN) framework. The model is evaluated qualitatively through samples and reconstructions on a synthetic dataset, MNIST and CelebA.
Summary:
The evaluation is superficial: no quantitative evaluation is presented, and key aspects of the model are not explored. Overall, there just is not enough innovation or substance to warrant publication at this point.
Impact:
The motivation given throughout the introduction -- to add an encoder (inference) network to GANs -- is a bit odd in the light of the existing literature. In addition to the BiGAN/ALI models that were cited, there are a number of (not cited) papers with various ways of combining GANs with VAE encoders to accomplish exactly this. If the goal is to improve reconstructions in ALI, one could simply add a reconstruction (or cycle) penalty to the ALI objective as advocated in the (not cited) ALICE paper (Li et al., 2017 -- "ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching").
The training architecture presented here is novel as far as I know, though I am unconvinced that it represents an optimum in model design space. The model presented in the ALICE paper would seem to be a more elegant solution to the motivation given in this paper.
Model Feature:
The authors should discuss in detail the interaction between the regular GAN pipeline and the introduced variant (with the transformations). Why is the standard GAN objective thrown in? I assume it is to allow you to sample directly from the noise in z (as opposed to z', which is used for reconstruction), but this is not discussed in much detail. The GAN objective and the added IVE objective seem like they will interact in not altogether beneficial ways, with the IVE component pushing to make the distribution in z complicated. This would result in a decrease in sample quality. Does it? Exploration of this aspect of the model should be included in the empirical evaluation.
Also, the transformations added to the proposed IVE pipeline seem to cause the latent variations z' to encode these transformations rather than the natural variations that exist in the dataset. This would seem to make it difficult to encode someone's face and make natural manipulations (such as adjusting the smile) that are not included in these transformations.
Empirical Evaluation:
Comparison to BiGAN/ALI: The authors motivate their work by drawing comparisons to BiGAN/ALI, showing CelebA reconstructions from the ALI paper in the appendix. The comparison is not fair for two reasons: (1) the authors should state that their reconstructions are made at a higher resolution (it appears to be 128x128, which is now standard but was not so when the BiGAN/ALI papers came out; those were sampled at 64x64), and (2) unlike the ALI results, the authors cut the background away from the CelebA faces. This alone could account for the difference between the two models, as IVE-GAN only has to encode the variability extant in faces and hair, whereas ALI additionally had to encode the much greater variability in the background. The failure to control the experimental conditions makes this comparison inappropriate.
There are no quantitative evaluations at all. While many GAN papers do not place an emphasis on quantitative evaluation, at this point I consider the complete lack of such an evaluation to be a weakness of the paper.
Finally, based on just the samples offered in the paper, which is admittedly a fairly weak standard, the model does not seem to be among the state-of-the-art results on CelebA reported in the literature. Given the rapid progress that is being made, I do not feel this should be counted against this particular paper, but the quality of the samples cannot be considered a compelling reason to accept the paper.
Minor comment:
The authors appear to be abusing the ICLR style file by not leaving a blank line between paragraphs. This is annoying and not at all necessary since ICLR does not have a strict page limit.
Figure 1 is not consistent with the model equations (in Eqns. 3). In particular, Figure 1 is missing the standard GAN component of the model.
I assume that the last term in Eqns 3 should have G(z) as opposed to G(z',E(x)). Is that right? |
iclr_2018_By9iRkWA- | Attention models have been intensively studied to improve NLP tasks such as machine comprehension via both question-aware passage attention model and selfmatching attention model. Our research proposes phase conductor (PhaseCond) for attention models in two meaningful ways. First, PhaseCond, an architecture of multi-layered attention models, consists of multiple phases each implementing a stack of attention layers producing passage representations and a stack of inner or outer fusion layers regulating the information flow. Second, we extend and improve the dot-product attention function for PhaseCond by simultaneously encoding multiple question and passage embedding layers from different perspectives. We demonstrate the effectiveness of our proposed model PhaseCond on the SQuAD dataset, showing that our model significantly outperforms both stateof-the-art single-layered and multiple-layered attention models. We deepen our results with new findings via both detailed qualitative analysis and visualized examples showing the dynamic changes through multi-layered attention models. | This paper proposes a new machine comprehension model, which integrates several contributions like different embeddings for gate function and passage representation function, self-attention layers and highway network based fusion layers. The proposed method was evaluated on the SQuAD dataset only, and marginal improvement was observed compared to the baselines.
(1) One concern I have about this paper is the evaluation. The paper only evaluates the proposed method on the SQuAD data against systems submitted in July 2017, and the improvement is not very large. As a result, the results do not suggest significance or generalizability of the proposed method.
(2) The paper gives some ablation tests, like reducing the number of layers and removing the gate-specific question embedding, which help a lot in understanding how the proposed methods contribute to the improvement. However, the results show that the deeper self-attention layers are indeed useful (but still not improving a lot, about 0.7-0.8%). The other proposed components contribute less significantly. As a result, I suggest the authors add more ablation tests regarding (1) replacing the outer-fusion with simple concatenation (it should work for two attention layers); (2) removing the inner-fusion layer and only using the final layer's output, and using residual connections (as many NLP papers do) instead of the more complicated GRU mechanism.
(3) Regarding the ablation in Table 2, my first concern is that the improvement seems small (~0.5%). As a result, I am wondering whether this separated question embedding really brings new information, or whether a similar improvement can be achieved by increasing the size of the LSTM layers. For example, if we use a single shared question embedding but increase the size from 128 to some larger number like 192, can we observe a similar improvement? I suggest the authors try this experiment as well, and I hope the answer is no, as separated input embeddings for gate functions were verified to be useful in some "old" works with syntactic features as gate values, such as "Semantic frame identification with distributed word representations" and "Learning composition models for phrase embeddings" etc.
(4) Please specify which version of the SQuAD leaderboard is used in Table 3. Is it a snapshot of the Jul 14 one? Because this paper does not compare to the state-of-the-art, leaving the leaderboard version unspecified may confuse the other reviewers and readers. By the way, it would be better to also compare to an Oct 2017 snapshot, indicating the position of this work at the submission deadline.
Minor issues:
(1) There are typos in Figure 1 regarding the notations of Question Features and Passage Features.
(2) In Figure 1, I suggest adding an "N \times" symbol to the left of the Q-P Attention Layer and remove the current list of such layers, in order to be consistent to the other parts of the figure.
(3) What is the relation between the "PhaseCond, QPAtt+" in Table 2 and the "PhaseCond" in Table 3? I was assuming that those are the same system but did not see the numbers match each other. |
iclr_2018_Skk3Jm96W | We consider the problem of exploration in meta reinforcement learning. Two new meta reinforcement learning algorithms are suggested: E-MAML and E-RL^2. Results are presented on a novel environment we call 'Krazy World' and a set of maze environments. We show E-MAML and E-RL^2 deliver better performance on tasks where exploration is important. | The paper proposes a trick of extending objective functions to drive exploration in meta-RL on top of two recent so-called meta-RL algorithms, Model-Agnostic Meta-Learning (MAML) and RL^2.
Pros:
+ A quite simple but promising idea to augment exploration in MAML and RL^2 by taking the initial sampling distribution into account.
+ Excellent analysis of learning curves with variances across two different environments. Charts across different random seeds and hyperparameters indicate reproducibility.
Cons/Typos/Suggestions:
- The brief introduction to meta-RL is missing lots of related work - see below.
- Equation (3) and equations on the top of page 4: Mathematically, it looks better to swap \mathrm{d}\tau and \mathrm{d}\bar{\tau}, to obtain a consistent ordering with the double integrals.
- In page 4, last paragraph before Section 5, “However, during backward pass, the future discounted returns for the policy gradient computation will zero out the contributions from exploratory episodes”: I did not fully understand this - please explain better.
- It is not very clear whether the authors use REINFORCE or more advanced approaches such as TRPO/PPO/DDPG to perform the policy gradient updates.
- I'd like to see more detailed hyperparameter settings.
- Figures 10, 11, 12, 13, 14: Too small to see clearly. I would propose to re-arrange the figures in either [2, 2]-layout, or a single column layout, particularly for Figure 14.
- Figures 5, 6, 9: Wouldn't it be better to also use a log scale on the x-axis, for consistent comparison with the curves in the Krazy World experiments?
- It could also be very interesting to benchmark in MuJoCo environments, such as a modified Ant Maze.
Overall, the idea proposed in this paper is interesting. I agree with the authors that a good learner should be able to generalize to new tasks with very few trials compared with learning each task from scratch. This, however, is usually called transfer learning, not meta-learning. As mentioned above, experiments in more complex, continuous control tasks with the MuJoCo simulator might be illuminating.
Relation to prior work:
p 2: Authors write: "Recently, a flurry of new work in Deep Reinforcement Learning has provided the foundations for tackling RL problems that were previously thought intractable. This work includes: 1) Mnih et al. (2015; 2016), which allow for discrete control in complex environments directly from raw images. 2) Schulman et al. (2015); Mnih et al. (2016); Schulman et al. (2017); Lillicrap et al. (2015), which have allowed for high-dimensional continuous control in complex environments from raw state information."
Here it should be mentioned that the first RL for high-dimensional continuous control in complex environments from raw state information was actually published in mid 2013:
(1) Koutnik, J., Cuccu, G., Schmidhuber, J., and Gomez, F. (July 2013). Evolving large-scale neural networks for vision-based reinforcement learning. GECCO 2013, pages 1061-1068, Amsterdam. ACM.
p2: Authors write: "In practice, these methods are often not used due to difficulties with high-dimensional observations, difficulty in implementation on arbitrary domains, and lack of promising results."
Not quite true - RL robots with high-dimensional video inputs and intrinsic motivation learned to explore in 2015:
(2) Kompella, Stollenga, Luciw, Schmidhuber. Continual curiosity-driven skill acquisition from high-dimensional video inputs for humanoid robots. Artificial Intelligence, 2015.
p2: Authors write: "Although this line of work does not explicitly deal with exploration in meta learning, it remains a large source of inspiration for this work."
p2: Authors write: "To the best of our knowledge, there does not exist any literature addressing the topic of exploration in meta RL."
But there is such literature - see the following meta-RL work where exploration is the central issue:
(3) J. Schmidhuber. Exploring the Predictable. In Ghosh, S. Tsutsui, eds., Advances in Evolutionary Computing, p. 579-612, Springer, 2002.
The RL method of this paper is the one from the original meta-RL work:
(4) J. Schmidhuber. On learning how to learn learning strategies. Technical Report FKI-198-94, Fakultät für Informatik, Technische Universität München, November 1994.
Which then led to:
(5) J. Schmidhuber, J. Zhao, N. Schraudolph. Reinforcement learning with self-modifying policies. In S. Thrun and L. Pratt, eds., Learning to learn, Kluwer, pages 293-309, 1997.
p2: "In hierarchical RL, a major focus is on learning primitives that can be reused and strung together. These primitives will frequently enable better exploration, since they’ll often relate to better coverage over state visitation frequencies. Recent work in this direction includes (Vezhnevets et al., 2017; Bacon & Precup, 2015; Tessler et al., 2016; Rusu et al., 2016)."
These are very recent refs - one should cite original work on hierarchical RL including:
J. Schmidhuber. Learning to generate sub-goals for action sequences. In T. Kohonen, K. Mäkisara, O. Simula, and J. Kangas, editors, Artificial Neural Networks, pages 967-972. Elsevier Science Publishers B.V., North-Holland, 1991.
M. B. Ring. Incremental Development of Complex Behaviors through Automatic Construction of Sensory-Motor Hierarchies. Machine Learning: Proceedings of the Eighth International Workshop, L. Birnbaum and G. Collins, 343-347, Morgan Kaufmann, 1991.
M. Wiering and J. Schmidhuber. HQ-Learning. Adaptive Behavior 6(2):219-246, 1997
References to original work on meta-RL are missing. How does the approach of the authors relate to the following approaches?
(6) J. Schmidhuber. Gödel machines: Fully Self-Referential Optimal Universal Self-Improvers. In B. Goertzel and C. Pennachin, eds.: Artificial General Intelligence, p. 119-226, 2006.
(7) J. Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-... hook. Diploma thesis, TUM, 1987.
Papers (4,5) above describe a universal self-referential, self-modifying RL machine. It can implement and run all kinds of learning algorithms on itself, but cannot learn them by gradient descent (because it's RL). Instead it uses what was later called the success-story algorithm (5) to handle all the meta-learning and meta-meta-learning etc.
Ref (7) above also has a universal programming language such that the system can learn to implement and run all kinds of computable learning algorithms, and uses what's now called Genetic Programming (GP), but applied to itself, to recursively evolve better GP methods through meta-GP and meta-meta-GP etc.
Ref (6) is about an optimal way of learning or the initial code of a learning machine through self-modifications, again with a universal programming language such that the system can learn to implement and run all kinds of computable learning algorithms.
General recommendation: Accept, provided the comments are taken into account, and the relation to previous work is established. |
iclr_2018_H1BLjgZCb | Published as a conference paper at ICLR 2018 GENERATING NATURAL ADVERSARIAL EXAMPLES
Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the adversarial scenarios where they fail. However, these malicious perturbations are often unnatural, not semantically meaningful, and not applicable to complicated domains such as language. In this paper, we propose a framework to generate natural and legible adversarial examples that lie on the data manifold, by searching in semantic space of dense and continuous data representation, utilizing the recent advances in generative adversarial networks. We present generated adversaries to demonstrate the potential of the proposed approach for black-box classifiers for a wide range of applications such as image classification, textual entailment, and machine translation. We include experiments to show that the generated adversaries are natural, legible to humans, and useful in evaluating and analyzing black-box classifiers. | Summary:
A method for the creation of semantic adversarial examples is suggested. The 'semantic' property is captured by building a latent space with a mapping from this space to the observable space (generator) and back (inverter). The generator is trained with WGAN optimization. Semantic adversarial examples are then searched for by inverting an example to its semantic encoding and running local search around it in that space. The method is tested on image generation for MNIST and part of the LSUN data, and on the creation of text examples that are adversarial in some sense for inference and translation sentences. It is shown that the distance between the adversarial example and the original example in the latent space is proportional to the accuracy of the inspected classifier.
Page 3: It seems that the search algorithm has an additional parameter: r_0, the size of the area in which the search is initiated. This should be stated explicitly, and the parameter value should be given.
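For concreteness, the kind of search loop I understand the paper to be describing looks roughly like the sketch below (my own paraphrase, with hypothetical generator/inverter/classifier handles; how the search region grows from r_0 is exactly the detail that should be documented):

    import numpy as np

    def natural_adversary(x, classifier, generator, inverter,
                          r0=0.1, step=0.1, n_samples=100, max_rounds=50):
        # Local search in latent space for a generated sample that flips the prediction.
        y = classifier(x)
        z = inverter(x)                       # latent code of the original input
        r_lo, r_hi = 0.0, r0                  # search ring, widened each round
        for _ in range(max_rounds):
            d = np.random.randn(n_samples, z.shape[-1])
            d /= np.linalg.norm(d, axis=1, keepdims=True)
            radii = np.random.uniform(r_lo, r_hi, size=(n_samples, 1))
            best = None
            for z_tilde in z + radii * d:
                x_tilde = generator(z_tilde)
                if classifier(x_tilde) != y:  # black-box prediction changed
                    dist = np.linalg.norm(z_tilde - z)
                    if best is None or dist < best[0]:
                        best = (dist, x_tilde)
            if best is not None:
                return best[1]
            r_lo, r_hi = r_hi, r_hi + step    # no adversary found: widen the ring
        return None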
Page 4:
- the implementation details of the generator, critic and inverter networks are not given in enough detail; instead, the reader is referred to other papers. This makes the paper unclear as a stand-alone document, which is a problem for a paper that is mostly based on experiments and their results: the main networks used are not described.
- the visual examples are interesting, but it seems that the method is able to find good natural adversarial examples only for a weak classifier. In the MNIST case, the examples for the random forest are natural and surprising, but those for LeNet are often not: they often look as if they indeed belong to the other class (the one pointed to by the classifier). In the church-vs-tower case, a relatively weak MLP classifier was used. It would be more instructive to see the results for a better, convolutional classifier.
Page 5:
- the description of the various networks used for text generation is insufficient for understanding:
o The AREA is described in two sentences. It is not clear how this module is built, what loss it was trained to optimize in the first place, and what elements of it are re-used for the current task
o 'inverter' here is used in a sense which is different from that in previous sections of the paper: earlier it denoted the mapping from the output (images) to the underlying latent space, while here it denotes a mapping between two latent spaces.
o It is not clear what the 'four-layer strided CNN' is: its structure, its role in the system. How is it optimized?
o In general: a block diagram showing the relations between all the system's components would be useful, plus details about the structure and optimization of the various modules. It seems that the system here contains 5 modules instead of the three used before (critic, generator and inverter), but this is not clear enough. Also, which modules are pre-trained, which are optimized together, and which are optimized separately is not clear.
o SNLI data should be described: content, size, the task it is used for
Pro:
- A novel idea of producing natural adversary examples with a GAN
- The generated examples are in some cases useful for interpretation and network understanding
- The method enables creation of adversarial examples for block box classifiers
Cons
- The implementation of the idea is basic. Specifically, the search algorithm presented is quite simplistic, and no variations other than plain local search were developed and tested
- The adversarial examples generated for successful, complex classifiers are often not impressive or useful (they are either not semantic, or semantic but correctly classified by the classifier). Hence it is not clear whether the latent space used by the method enables finding interesting adversarial examples for accurate classifiers.
iclr_2018_rJl3yM-Ab | EVIDENCE AGGREGATION FOR ANSWER RE-RANKING IN OPEN-DOMAIN QUESTION ANSWERING
A popular recent approach to answering open-domain questions is to first search for question-related passages and then apply reading comprehension models to extract answers. Existing methods usually extract answers from single passages independently. But some questions require a combination of evidence from across different sources to answer correctly. In this paper, we propose two models which make use of multiple passages to generate their answers. Both use an answer re-ranking approach which reorders the answer candidates generated by an existing state-of-the-art QA model. We propose two methods, namely, strength-based re-ranking and coverage-based re-ranking, to make use of the aggregated evidence from different passages to better determine the answer. | Traditional open-domain QA systems typically have two steps: passage retrieval and aggregating answers extracted from the retrieved passages. This paper essentially follows the same paradigm, but leverages state-of-the-art reading comprehension models for answer extraction and develops neural network models for the aggregation component. Although the idea seems incremental, the experimental results do seem solid. The paper is generally easy to follow, but in several places the presentation can be further improved.
Detailed comments/questions:
1. In Sec. 2.2, the justification for adding H^{aq} and \bar{H}^{aq} is to downweigh the impact of stop word matching. I feel this is a somewhat indirect and less effective design, if avoiding stop words is really the reason. A standard preprocessing step may be better.
2. In Sec. 2.3, it seems that the final score is just the sum of three individual normalized scores. It's not truly a "weighted" combination, where the weights are typically assumed to be tuned.
3. Figure 3: Connecting the dots in the two subfigures on the right does not make sense. Bar charts should be used instead.
4. The end of Sec. 4.2: I feel it's a bad example, as the passage does not really support the answer. The fact that "Sesame Street" got picked is probably just because it's more famous.
5. It'd be interesting to see how traditional IR answer aggregation methods perform, such as simple classifiers or heuristics based on word matching (possibly weighted by TF-IDF) and counting. This would demonstrate the true advantages of leveraging modern NN models.
Pros:
1. Updating a traditional open-domain QA approach with neural models
2. Experiments demonstrate solid positive results
Cons:
1. The idea seems incremental
2. Presentation could be improved |
iclr_2018_SJdCUMZAW | Grasping an object and precisely stacking it on another is a difficult task for traditional robotic control or hand-engineered approaches. Here we examine the problem in simulation and provide techniques aimed at solving it via deep reinforcement learning. We introduce two straightforward extensions to the Deep Deterministic Policy Gradient algorithm (DDPG), which make it significantly more data-efficient and scalable. Our results show that by making extensive use of offpolicy data and replay, it is possible to find high-performance control policies that successfully achieve precise stacking behaviour in > 95% of 1000 randomly initialized configurations. Further, our results on data efficiency hint that it may soon be feasible to train successful stacking policies by collecting interactions on real robots. | The title is too generic and even a bit misleading. Dexterous manipulation usually refers to more complex skills, like in-hand manipulation or using the fingers to turn an object, and not simple pick and place tasks. Reinforcement learning methods are generally aiming to be data-efficient, and the method does not seem designed specifically for dexterous manipulation (which is actually a positive point, as it is more general).
The paper presents two extensions of DDPG: multiple network updates per physical interaction, and asynchronous updates from multiple robots. As the authors themselves state, these contributions are fairly straightforward and largely based on prior work. The authors do evaluate the methods with different parameter settings to see the effects on learning performance.
The simulation environment is fairly basic and seems unrealistic. The hand always starts close to the blocks, which are close together, so the inverse kinematics will be close to linear. The blocks are always oriented in the same direction and they can connect easily with no need to squeeze or wiggle them together. The task seems more difficult from the description in the paper, and the authors should describe the environment in more detail.
Does the robot learn to flip the blocks over such that they can be stacked? The videos show the
blocks turning over accidentally, but then the robot seems to give up. Having the robot learn to turn the blocks would make for a more challenging task and a better policy.
The paper’s third contribution is a recipe for constructing shaped reward functions for composite tasks. The method relies on a predefined task structure (reach-grasp-stack) and is very similar to reward shaping already used in many other reinforcement learning for manipulation papers. A comparison of different methods for defining the rewards and a more formal description of the reward generation procedure would improve the impact of this section. The authors should also consider using tasks with longer sequences of actions, e.g., stacking four blocks.
The fourth and final listed contribution is learning from demonstrated states. Providing the robot with prior knowledge and easier partial tasks will result in faster learning. This result is not surprising. It is not clear though how applicable this approach is for a real robot system. It effectively assumes that the robot can grasp the block and pick it up, such that it can learn the stacking part, while simultaneously still learning how to grasp the block and pick it up. For testing the real robot applicability, the authors should try having the robot learn the task without simulation resets.
What are the actual benefits of using deep learning in this scenario? The authors mention skill representations, such as dynamic motor primitives, which employ significantly more prior knowledge than a deep network. However, as demonstrations of the task are provided, the task is divided into steps, the locations of the objects and finger tips are given, a suitable reward function is provided, and the generalization is only over the object positions, why not train a set of DMPs and optimize them with some additional reinforcement learning? The authors should consider adding a Cartesian DMP policy as a benchmark, as well as discussing the benefits of the proposed approach given the prior knowledge. |
iclr_2018_B1i7ezW0- | We exploit a recently derived inversion scheme for arbitrary deep neural networks to develop a new semi-supervised learning framework that applies to a wide range of systems and problems. The approach reaches current state-of-the-art methods on MNIST and provides reasonable performances on SVHN and CIFAR10. Through the introduced method, residual networks are for the first time applied to semi-supervised tasks. Experiments with one-dimensional signals highlight the generality of the method. Importantly, our approach is simple, efficient, and requires no change in the deep network architecture. | In summary, the paper is based on a recent work Balestriero & Baraniuk 2017 to do semi-supervised learning. In Balestriero & Baraniuk, it is shown that any DNN can be approximated via a linear spline and hence can be inverted to produce the "reconstruction" of the input, which can be naturally used to do unsupervised or semi-supervised learning. This paper proposes to use automatic differentiation to compute the inverse function efficiently. The idea seems interesting. However, I think there are several main drawbacks, detailed as follows:
1. The paper lacks a coherent and complete review of semi-supervised deep learning. Here are some important missing papers, which represent the previous or current state-of-the-art.
[1] Laine S, Aila T. Temporal Ensembling for Semi-Supervised Learning[J]. arXiv preprint arXiv:1610.02242, ICLR 2016.
[2] Li C, Xu K, Zhu J, et al. Triple Generative Adversarial Nets[J]. arXiv preprint arXiv:1703.02291, NIPS 2017.
[3] Dai Z, Yang Z, Yang F, et al. Good Semi-supervised Learning that Requires a Bad GAN[J]. arXiv preprint arXiv:1705.09783, NIPS 2017.
Besides, some papers should be mentioned in the related work, such as Kingma et al. 2014. I'm not an expert on network inversion and am not sure whether the related work for this part is sufficient or not.
2. The motivation is not sufficient and not well supported.
As stated in the introduction, the authors think there are several drawbacks of existing methods, including "training instability, lack of topology generalization and computational complexity." Based on my knowledge, there are two main families of semi-supervised deep learning methods, classified by whether or not they depend on deep generative models. The generative approaches based on VAEs and GANs are time-consuming, but according to my experience, the training of VAE-based methods is stable and the topology generalization ability of such methods is good. Besides, the feed-forward approaches, including [1] mentioned above, are efficient and not too sensitive to the network architecture. Overall, I think the drawbacks mentioned in the paper are not common in existing methods, and I do not see clear benefits of the proposed method. Again, I strongly suggest that the authors provide a complete review of the literature.
Further, please explicitly support your claim via experiments. For instance, the proposed method should be compared with the discriminative approaches including VAT and [1] in terms of the training efficiency. It's not fair to say GAN-based methods require more training time because these methods can do generation and style-class disentanglement while the proposed method cannot.
3. The experimental results are not so convincing.
First, please systematically compare your methods with existing methods on the widely adopted benchmarks including MNIST with 20, 100 labels and SVHN with 500, 1000 labels and CIFAR10 with 4000 labels. It is not safe to say the proposed method is the state-of-the-art by only showing the results in one setting.
Second, please report the results of the proposed method with architectures comparable to those used in previous methods, and state clearly the number of parameters in each model. ResNet is powerful, but previous methods did not use it.
Last, show sensitivity results for the proposed method by tuning alpha and beta. For instance, please show the actual contribution of the proposed reconstruction loss to the classification accuracy, with and without the other losses.
I think the quality of the paper should be further improved by addressing these problems and currently it should be rejected. |
iclr_2018_ry6-G_66b | ACTIVE NEURAL LOCALIZATION
Localization is the problem of estimating the location of an autonomous agent from an observation and a map of the environment. Traditional methods of localization, which filter the belief based on the observations, are sub-optimal in the number of steps required, as they do not decide the actions taken by the agent. We propose "Active Neural Localizer", a fully differentiable neural network that learns to localize accurately and efficiently. The proposed model incorporates ideas of traditional filtering-based localization methods, by using a structured belief of the state with multiplicative interactions to propagate belief, and combines it with a policy model to localize accurately while minimizing the number of steps required for localization. Active Neural Localizer is trained end-to-end with reinforcement learning. We use a variety of simulation environments for our experiments which include random 2D mazes, random mazes in the Doom game engine and a photo-realistic environment in the Unreal game engine. The results on the 2D environments show the effectiveness of the learned policy in an idealistic setting while results on the 3D environments demonstrate the model's capability of learning the policy and perceptual model jointly from raw-pixel based RGB observations. We also show that a model trained on random textures in the Doom environment generalizes well to a photo-realistic office space environment in the Unreal engine. | The paper describes a neural network-based approach to active localization based upon RGB images. The framework employs Bayesian filtering to maintain an estimate of the agent's pose using a convolutional network model for the measurement (perception) function. A convolutional network models the policy that governs the action of the agent. The architecture is trained in an end-to-end manner via reinforcement learning. The architecture is evaluated in 2D and 3D simulated environments of varying complexity and compared favorably to traditional (structured) approaches to passive and active localization.
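For reference, the structured belief update being described is essentially a discrete Bayes filter in which the measurement likelihood comes from a learned perception network; a schematic version (my own, not the paper's architecture) is:

    import numpy as np

    def bayes_filter_step(belief, transition, likelihood_net, observation, action):
        # belief: (num_poses,) probabilities over candidate poses, summing to 1
        # transition(belief, action): predicted belief after executing `action`
        # likelihood_net(observation): learned per-pose likelihood p(o | pose)
        predicted = transition(belief, action)      # prediction (motion) step
        lik = likelihood_net(observation)           # learned perception model
        posterior = lik * predicted                 # multiplicative update
        return posterior / posterior.sum()          # normalize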
As the paper correctly points out, there is a large body of work on map-based localization, but relatively little attention has been paid to decision-theoretic formulations of localization, whereby the agent's actions are chosen in order to improve localization accuracy. More recent work instead focuses on the higher-level objective of navigation, whereby any actions taken in an effort to improve localization are secondary to the navigation objective. The idea of combining learned representations with a structured Bayesian filtering approach is interesting, but its utility could be better motivated. What are the practical benefits to learning the measurement and policy model beyond (i) the temptation to apply neural networks to this problem and (ii) the ability to learn these in an end-to-end fashion? That's not to say that there aren't benefits, but rather that they aren't clearly demonstrated here. Further, the paper seems to assume (as noted below) that there is no measurement uncertainty and, with the exception of the 3D evaluations, no process noise.
The evaluation demonstrates that the proposed method yields estimates that are more accurate according to the proposed metric than the baseline methods, with a significant reduction in computational cost. However, the environments considered are rather small by today's standards and the baseline methods almost 20 years old. Further, the evaluation makes a number of simplifying assumptions, the largest being that the measurements are not subject to noise (the only noise that is present is in the motion for the 3D experiments). This assumption is clearly not valid in practice. Further, it is not clear from the evaluation whether the resulting distribution that is maintained is consistent (e.g., are the estimates over-/under-confident?). This has important implications if the system were to actually be used on a physical system. Further, while the computational requirements at test time are significantly lower than the baselines, the time required for training is likely very large. While this is less of an issue in simulation, it is important for physical deployments. Ideally, the paper would demonstrate performance when transferring a policy trained in simulation to a physical environment (e.g., using diversification, which has proven effective at simulation-to-real transfer).
Comments/Questions:
* The nature of the observation space is not clear.
* Recent related work has focused on learning neural policies for navigation, and any localization-specific actions are secondary to the objective of reaching the goal. It would be interesting to discuss how one would balance the advantages of choosing actions that improve localization with those in the context of a higher-level task (or at least including a cost on actions as with the baseline method of Fox et al.).
* The evaluation that assigns different textures to each wall is unrealistic.
* It is not clear why the space over which the belief is maintained flips as the robot turns and shifts as it moves.
* The 3D evaluation states that a 360 deg view is available. What happens when the agent can only see in one (forward) direction?
* AML includes a cost term in the objective. Did the author(s) experiment with setting this cost to zero?
* The 3D environments rely upon a particular belief size (70 x 70) being suitable for all environments. What would happen if the test environment was larger than those encountered in training?
* The comment that the PoseNet and VidLoc methods "lack a straightforward method to utilize past map data to do localization in a new environment" is unclear.
* The environments that are considered are quite small compared to the domains currently considered in the literature.
* Minor: It might be better to move Section 3 into Section 4 after introducing notation (to avoid redundancy).
* The paper should be proofread for grammatical errors (e.g., "bayesian" --> "Bayesian", "gaussian" --> "Gaussian")
UPDATES FOLLOWING AUTHORS' RESPONSE
(Apologies if this is a duplicate. I added a comment in light of the authors' response, but don't see it and so I am updating my review for completeness).
I appreciate the authors's response to the initial reviews and thank them for addressing several of my comments.
RE: Consistency
My concerns regarding consistency remain. For principled ways of evaluating the consistency of an estimator, see Bar-Shalom "Estimation with Applications to Tracking and Navigation".
RE: Measurement/Process Noise
The fact that the method assumes perfect measurements and, with the exception of the 3D experiments, no process noise is concerning as neither assumptions are valid for physical systems. Indeed, it is this noise in particular that makes localization (and its variants) challenging.
RE: Motivation
The response didn't address my comments about the lack of motivation for the proposed method. Is it largely the temptation of applying an end-to-end neural method to a new problem? The paper should be updated to make the advantages over traditional approaches to active localization clear.
iclr_2018_SJ3dBGZ0Z | Log-linear models are widely used in machine learning, and in particular are ubiquitous in deep learning architectures in the form of the softmax. While exact inference and learning of these requires linear time, it can be done approximately in sub-linear time with strong concentration guarantees. In this work, we present LSH Softmax, a method to perform sub-linear learning and inference of the softmax layer in the deep learning setting. Our method relies on the popular Locality-Sensitive Hashing to build a well-concentrated gradient estimator, using nearest neighbors and uniform samples. We also present an inference scheme in sub-linear time for LSH Softmax using the Gumbel distribution. On language modeling, we show that Recurrent Neural Networks trained with LSH Softmax perform on-par with computing the exact softmax while requiring sub-linear computations. | In this paper, the authors propose a new approximation of the softmax, based on approximate nearest-neighbor search and sampling.
More precisely, they propose to approximate the partition function (which is the bottleneck in computing the softmax and its gradient) by using:
- the top-k classes (retrieved using LSH);
- uniform samples (to account for the tail of the distribution).
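For clarity, my understanding of the resulting estimator is sketched below (this is my own illustration of the top-k-plus-uniform idea, not the authors' code; the exact weighting in the paper may differ):

    import numpy as np

    def approx_log_partition(h, W, topk_idx, num_uniform=64):
        # h: hidden state (d,); W: output embedding matrix (V, d)
        # topk_idx: indices of the (approximate) nearest classes, e.g. from LSH
        V = W.shape[0]
        head = np.exp(W[topk_idx] @ h).sum()             # exact head contribution
        rest = np.setdiff1d(np.arange(V), topk_idx)
        sample = np.random.choice(rest, size=num_uniform, replace=False)
        # scale the uniform sample up to the size of the remaining vocabulary
        tail = len(rest) / num_uniform * np.exp(W[sample] @ h).sum()
        return np.log(head + tail)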
They describe how this technique can be used for learning, by performing sparse updates for the gradient (corresponding to the elements used to compute the partition function) and re-hashing the updated elements of the softmax layer.
In section 5, they show how this method can be implemented on GPU, using standard operations available in neural networks framework such as TensorFlow or PyTorch.
Finally, they compare their approach to importance sampling and negative sampling, using language modeling as a benchmark.
They use 3 standard datasets to perform the evaluations: Penn Treebank, text8 and WikiText-2.
Pros:
- well written and easy to read paper
- interesting theoretical guarantees of the approximation
Cons:
- a bit incremental
- weak empirical evaluations
- no support for the claim of efficient GPU implementation
== Incremental ==
While the theoretical justification of the method is interesting, it is not a contribution of this paper (but of previous work by Mussmann et al.).
In fact, the main contribution of this paper is to show how to apply the technique of Mussmann et al. in the neural network setting.
The main difference with Mussmann et al. is the necessity of re-hashing the updated elements of the softmax at each step.
Other previous works have also proposed to use LSH to speed up computations in neural networks, but they are not discussed in the paper (see the list of missing references below).
== Weak evaluations ==
I believe that the empirical evaluation in Section 6 is a bit weak.
First, there is a large gap between the perplexity obtained using the proposed method and the exact softmax (e.g. 97 vs. 83 on PTB, 115 vs. 95 on WikiText-2).
Thus, I do not believe that the experiments support the claim that the proposed method "perform on-par with computing the exact softmax".
Moreover, these numbers are pretty far from what other papers have reported on these datasets with similar models (I am wondering if the gap would be even larger with SOTA models).
Second, the authors do not report any runtime numbers for their method and the baselines on GPUs.
I believe that it would be more fair to plot the learning curves (Fig. 1) using the runtime instead of the number of epochs.
== Efficient implementation ==
In section 5, the authors claims that their approach can be efficiently implemented on GPUs.
However, several of the operations used by their approach are inefficient, especially when using mini-batches.
The authors state that only step 2 is inefficient, but I also believe that step 3 is (compared to sampling approaches).
Indeed, for their method, each example of a mini-batch uses a different set of elements to approximate the partition function (while for other sampling methods, the same set is used for the whole batch).
Thus a matrix-matrix multiplication is replaced by n matrix-vector multiplications (where n is the batch size).
While these can be performed in parallel, it is much less efficient than a matrix-matrix multiplication.
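To make the point explicit, the difference is roughly the one below (a schematic comparison, not the authors' code):

    import torch

    B, d, V, k = 32, 256, 50000, 512
    h = torch.randn(B, d)        # hidden states for a mini-batch
    W = torch.randn(V, d)        # output embedding matrix

    # Shared candidate set (importance/negative sampling): one matrix-matrix product.
    shared_idx = torch.randint(V, (k,))
    logits_shared = h @ W[shared_idx].t()                              # (B, k)

    # Per-example candidate sets (as with per-example LSH retrieval): a gather
    # followed by a batched matrix-vector product.
    per_example_idx = torch.randint(V, (B, k))
    W_sel = W[per_example_idx]                                         # (B, k, d)
    logits_per_example = torch.bmm(W_sel, h.unsqueeze(2)).squeeze(2)   # (B, k)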
Finally, the only runtime numbers provided by the authors comparing their approach to sampling are for a CPU implementation with a batch size of 1.
This setting is very favorable to their approach, but rather unrealistic for most practical use cases.
== Missing references ==
Scalable and Sustainable Deep Learning via Randomized Hashing
Ryan Spring, Anshumali Shrivastava
A New Unbiased and Efficient Class of LSH-Based Samplers and Estimators for Partition Function Computation in Log-Linear Models
Ryan Spring, Anshumali Shrivastava
Deep networks with large output spaces
Sudheendra Vijayanarasimhan, Jonathon Shlens, Rajat Monga & Jay Yagnik |
iclr_2018_rk6H0ZbRb | It is becoming increasingly clear that many machine learning classifiers are vulnerable to adversarial examples. In attempting to explain the origin of adversarial examples, previous studies have typically focused on the fact that neural networks operate on high dimensional data, they overfit, or they are too linear. Here we show that distributions of logit differences have a universal functional form. This functional form is independent of architecture, dataset, and training protocol; nor does it change during training. This leads to adversarial error having a universal scaling, as a power-law, with respect to the size of the adversarial perturbation. We show that this universality holds for a broad range of datasets (MNIST, CIFAR10, ImageNet, and random data), models (including state-of-the-art deep networks, linear models, adversarially trained networks, and networks trained on randomly shuffled labels), and attacks (FGSM, step l.l., PGD). Motivated by these results, we study the effects of reducing prediction entropy on adversarial robustness. Finally, we study the effect of network architectures on adversarial sensitivity. To do this, we use neural architecture search with reinforcement learning to find adversarially robust architectures on CIFAR10. Our resulting architecture is more robust to white and black box attacks compared to previous attempts. | This work presents an empirical study aiming at improving the understanding of the vulnerability of neural networks to adversarial examples. Paraphrasing the authors, the main observation of the study is that the vulnerability is due to an inherent uncertainty that neural networks have about their predictions ( the difference between the logits). This is consistent across architectures, datasets. Further, the authors note that "the universality is not a result of the specific content of these datasets nor the ability of the model to generalize."
While this empirical study contains valuable information, the conclusions above are factually wrong. This can be shown theoretically via at least two routes. The conclusions also contradict empirical observations that are consistent across several previous studies.
1- Constructive counter-argument: Consider a neural network that always outputs a constant prediction. It (1) is by definition independent of any dataset, (2) generalizes perfectly, and (3) has zero adversarial error, hence contradicting the central statement of the paper.
2- Analysis-based counter-argument: Consider a neural network with one hidden layer and two classes. It is easy to show that the difference between the scores (logits) of the two classes is linear in the operator norm of the hidden weight matrix and linear in the L2-norm of the last weight vector. Therefore, the robustness of the model indeed depends on its capability to generalize because the latter is essentially governed by the geometric margin of the linear separator and the spectral norm of the weight matrix (see [1,2,3]). QED.
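Spelling this out: for f(x) = W_2 \sigma(W_1 x) with output rows w^{(1)}, w^{(2)} and a 1-Lipschitz activation \sigma with \sigma(0)=0 (e.g., ReLU), the logit difference \Delta(x) = f_1(x) - f_2(x) satisfies

|\Delta(x)| \le \|w^{(1)} - w^{(2)}\|_2 \, \|\sigma(W_1 x)\|_2 \le \|w^{(1)} - w^{(2)}\|_2 \, \|W_1\|_{op} \, \|x\|_2,

and for any perturbation \delta,

|\Delta(x+\delta) - \Delta(x)| \le \|w^{(1)} - w^{(2)}\|_2 \, \|W_1\|_{op} \, \|\delta\|_2,

so the smallest \delta that can flip the sign of \Delta obeys \|\delta\|_2 \ge |\Delta(x)| / (\|w^{(1)} - w^{(2)}\|_2 \|W_1\|_{op}): robustness is governed by the margin relative to the weight norms, which are exactly the quantities that control generalization in [1,2,3].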
3- Further, the lack of calibration of neural networks and its causes are well known. Among other things, it is due to the use of building blocks (such as batch-norm [4]), regularization (e.g., weight decay), or the use of softmax+cross-entropy during training. While this is convenient for optimization reasons, it indeed hurts calibration. The authors should try to train a neural network with a large-margin criterion and see if the same phenomenon still holds when they measure the geometric margin. Another alternative is to use a temperature with the softmax [4]. Therefore, the observations of the empirical study cannot be generalized to all neural networks and should be explicitly restricted to neural networks trained with softmax and cross-entropy.
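As a concrete illustration of the temperature option (a minimal sketch in the spirit of [4]; the tensor shapes and optimizer choice are my own assumptions):

    import torch
    import torch.nn.functional as F

    def calibrate_temperature(logits, labels, iters=200, lr=0.01):
        # logits: (N, C) held-out logits; labels: (N,) ground-truth classes.
        # Fit a single temperature T by minimizing the validation NLL.
        log_T = torch.zeros(1, requires_grad=True)
        opt = torch.optim.Adam([log_T], lr=lr)
        for _ in range(iters):
            opt.zero_grad()
            loss = F.cross_entropy(logits / log_T.exp(), labels)
            loss.backward()
            opt.step()
        return log_T.exp().item()   # T > 1 softens over-confident logits

Re-measuring the logit-difference distributions after such a rescaling would directly test whether the reported "universality" is a property of the networks or of the uncalibrated softmax/cross-entropy setup.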
I believe the conclusions of this study are misleading, hence I recommend to reject the paper.
[1] Spectrally Normalized Margin-bounds Margin bounds for neural networks (Bartlett et al., 2017)
[2] Parseval Networks: Improving Robustness to Adversarial Examples (Cisse et al., 2017)
[3] Formal Guarantees on the Robustness of a classifier against adversarial examples (Hein et al., 2017)
[4] On the Calibration of Modern Neural Networks (Guo et al., 2017) |
iclr_2018_SJvrXqvaZ | Asynchronous Advantage Actor Critic (A3C) is an effective Reinforcement Learning (RL) algorithm for a wide range of tasks, such as Atari games and robot control. The agent learns policies and value function through trial-and-error interactions with the environment until converging to an optimal policy. Robustness and stability are critical in RL; however, neural network can be vulnerable to noise from unexpected sources and is not likely to withstand very slight disturbances. We note that agents generated from mild environment using A3C are not able to handle challenging environments. Learning from adversarial examples, we proposed an algorithm called Adversary Robust A3C (AR-A3C) to improve the agent's performance under noisy environments. In this algorithm, an adversarial agent is introduced to the learning process to make it more robust against adversarial disturbances, thereby making it more adaptive to noisy environments. Both simulations and real-world experiments are carried out to illustrate the stability of the proposed algorithm. The AR-A3C algorithm outperforms A3C in both clean and noisy environments. | The authors propose an extension of adversarial reinforcement learning to A3C. The proposed technique is of modest contribution and the experimental results do not provide sufficient validation of the approach.
The authors propose extending A3C to produce more robust policies by training a zero-sum game with two agents: a protagonist and an antagonist. The protagonist is attempting to achieve the given task while the antagonist's goal is for the task to fail.
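For reference, this is the standard RARL-style zero-sum objective (I assume the paper's formulation is essentially this):

\max_{\theta} \min_{\phi} \; \mathbb{E}_{a_t \sim \pi_\theta,\; \bar{a}_t \sim \pi_\phi} \Big[ \sum_t \gamma^t r_t \Big],

where the protagonist \pi_\theta maximizes the return and the antagonist \pi_\phi, whose action enters the dynamics as a disturbance, minimizes the same return.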
The contribution of this work, AR-A3C, is extending adversarial reinforcement learning, namely robust RL (RRL) and robust adversarial RL (RARL), to A3C. In the context of this prior work, the novelty is extending the family of adversarial RL methods. However, the proposed method is still within the same family of methods as demonstrated by RARL.
The authors state that AR-A3C requires half as many rollouts as RARL. However, no empirical comparison between the two methods is performed. The paper only performs analysis against A3C, with no other adversarial baseline, and on only one environment: cartpole. While they show transfer to a real-world cartpole with this technique, there is not sufficient analysis to satisfactorily demonstrate its benefits.
The paper reads well. There are a few notational issues in the paper that should be addressed. The authors mislabel the value function V as the action value, or Q function. The action value function is action dependent where the value function is not. As a much more minor issue, the authors introduce y as the discount factor, which deviates from the standard notation of \gamma without any obvious reason to do so.
Double-blind review was likely compromised by the YouTube video, which was linked to a real-name account instead of an anonymous account.
Overall, the proposed technique is of modest contribution and the experimental results do not provide sufficient validation of the approach. |
iclr_2018_ryacTMZRZ | Computing distances between examples is at the core of many learning algorithms for time series. Consequently, a great deal of work has gone into designing effective time series distance measures. We present Jiffy, a simple and scalable distance metric for multivariate time series. Our approach is to reframe the task as a representation learning problem-rather than design an elaborate distance function, we use a CNN to learn an embedding such that the Euclidean distance is effective. By aggressively max-pooling and downsampling, we are able to construct this embedding using a highly compact neural network. Experiments on a diverse set of multivariate time series datasets show that our approach consistently outperforms existing methods. | This paper presents a solid empirical analysis of a simple idea for learning embeddings of time series: training a convolutional network with a custom pooling layer that generates a fixed size representation to classify time series, then use the fixed size representation for other tasks. The primary innovation is a custom pooling operation that looks at a fraction of a sequence, rather than a fixed window. The experiments are fairly thorough (albeit with some sizable gaps) and show that the proposed approach outperforms DTW, as well as embeddings learned using Siamese networks. On the whole, I like the line of inquiry and the elegant simplicity of the proposed approach, but the paper has some flaws (and there are some gaps in both motivation and the experiments) that led me to assign a lower score. I encourage the authors to address these flaws as much as possible during the review period. If they succeed in doing so, I am willing to raise my score.
QUALITY
I appreciate this line of research in general, but there are some flaws in its motivation and in the design of the experiments. Below I list strengths (+) and weaknesses (-):
+ Time series representation learning is an important problem with a large number of real world applications. Existing solutions are often computationally expensive and complex and fail to generalize to new problems (particularly with irregular sampling, missing values, heterogeneous data types, etc.). The proposed approach is conceptually simple and easy to implement, faster to train than alternative metric learning approaches, and learns representations that admit fast comparisons, e.g., Euclidean distance.
+ The experiments are pretty thorough (albeit with some noteworthy gaps) -- they use multiple benchmark data sets and compare against strong baselines, both traditional (DTW) and deep learning (Siamese networks).
+ The proposed approach performs best on average!
- The custom pooling layer is the most interesting part and warrants additional discussion. In particular, the "naive" approach would be to use global pooling over the full sequence [4]. The authors should advance an argument to motivate %-length pooling and perhaps add a global pooling baseline to the experiments.
- Likewise, the authors need to fully justify the use of channel-wise (vs. multi-channel) convolutions and perhaps include a multi-channel convolution baseline.
- There is something incoherent about training a convolutional network to classify time series, then discarding the classification layer and using the internal representation as input to a 1NN classifier. While this yields an apples-to-apples comparison in the experiments, I am skeptical anyone would do this in practice. Why not simply use the classifier (I am dubious the 1NN would outperform it)? To address this, I recommend the authors do two things: (1) report the accuracy of the learned classifier; (2) discuss the dynamic above -- either admit to the reader that this is a contrived comparison OR provide a convincing argument that someone might use embeddings + KNN classifier instead of the learned classifier. If embeddings + KNN outperforms the learned classifier, that would surprise me, so that would warrant some discussion.
- On a related note, are the learned representations useful for tasks other than the original classification task? This would strengthen the value proposition of this approach. If, however, the learned representations are "overfit" to the classification task (I suspect they are), and if the learned classifier outperforms embeddings + 1NN, then what would I use these representations for?
- I am modestly surprised that this approach outperformed Siamese networks. The authors should report the Siamese architectures -- and how hyperparameters were tuned on all neural nets -- to help convince the reader that the comparison is fair.
- To that end, did the Siamese convolutional network use the same base architecture as the proposed classification network (some convolutions, custom pooling, etc.)? If not, then that experiment should be run to help determine the relative contributions of the custom pooling layer and the loss function.
- Same notes above re: triplet network -- the authors should report results in Table 2 and disclose architecture details.
- A stronger baseline would be a center loss [1] network (which often outperforms triplets); see the note after this list for the loss I have in mind.
- The authors might consider adding at least one standard unsupervised baseline, e.g., a sequence-to-sequence autoencoder [2,3].
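Note on the center loss suggestion above: as a reminder (my paraphrase of [1], not the authors' notation), center loss augments the softmax classification loss with a pull toward per-class feature centers, $L = L_{softmax} + \frac{\lambda}{2}\sum_i \|f(x_i) - c_{y_i}\|_2^2$, where $f(x_i)$ is the embedding of example $i$ and $c_{y_i}$ is a learned (or running-average) center for its class. Since the stated goal here is an embedding in which Euclidean distance is effective, this seems like a particularly natural baseline to include.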
CLARITY
The paper is clearly written for the most part, but there is room for improvement:
- The %-length pooling requires a more detailed explanation, particularly of its motivation. There appears to be a connection to other time series representations that downsample while preserving shape information -- the authors could explore this. Also, they should add a figure with a visual illustration of how it works (and maybe how it differs from global pooling), perhaps using a contrived example.
- How was the %-length pooling implemented? Most deep learning frameworks only provide pooling layers with fixed-length windows, though I suspect it is straightforward to implement variable-width pooling in an imperative framework like PyTorch; see the sketch after this list for the kind of thing I have in mind.
- Figure 1 is not well executed and probably unnecessary. The solid colored volumes do not convey useful information about the structure of the time series or the neural net layers, filters, etc. Apart from the custom pooling layer, the architecture is common and well understood by the community -- thus, the figure can probably be removed.
- The paper needs to fully describe neural net architectures and how hyperparameters were tuned.
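Regarding the %-length pooling implementation question above, here is a minimal sketch of what I imagine it looks like, assuming it means max-pooling over a window whose width is a fixed fraction of the sequence length (this is my guess, not the authors' code; `fraction` is a hypothetical hyperparameter):

import torch
import torch.nn.functional as F

class PercentLengthMaxPool1d(torch.nn.Module):
    # Max-pools over a window sized as a fixed fraction of the input length,
    # so the number of pooled outputs stays roughly constant across lengths.
    def __init__(self, fraction=0.25):
        super().__init__()
        self.fraction = fraction

    def forward(self, x):
        # x has shape (batch, channels, time)
        window = max(1, int(round(self.fraction * x.size(-1))))
        return F.max_pool1d(x, kernel_size=window, stride=window)

Note that torch.nn.AdaptiveMaxPool1d(k) gives essentially the same behavior by choosing the window from the input length, and the global-pooling baseline I ask for above is just the k = 1 special case, so that comparison should be cheap to run.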
ORIGINALITY
The paper scores low on originality. As the authors themselves point out, time series metric learning -- even using deep learning -- is an active area of research. The proposed approach is refreshing in its simplicity (rather than adding additional complexity on top of existing approaches), but it is straightforward -- and I suspect it has been used previously by others in practice, even if it has not been formally studied. Likewise, the proposed %-length pooling is uncommon, but it is not novel per se (dynamic pooling has been used in NLP [5]). Channel-wise convolutional networks have been used for time series classification previously [6].
SIGNIFICANCE
Although I identified several flaws in the paper's motivation and experimental setup, I think it has some very useful findings, at least for machine learning practitioners. Within NLP, there appears to be gradual shift toward using convolutional, instead of recurrent, architectures. I wonder if papers like this one will contribute toward a similar shift in time series analysis. Convolutional architectures are typically much easier and faster to train than RNNs, and the main motivation for RNNs is their ability to deal with variable length sequences. Convolutional architectures that can effectively deal with variable length sequences, as the proposed one appears to do, would be a welcome innovation.
REFERENCES
[1] Wen, et al. A Discriminative Feature Learning Approach for Deep Face Recognition. ECCV 2016.
[2] Fabius and van Amersfoort. Variational Recurrent Auto-Encoders. ICLR 2015 Workshop Track.
[3] Tikhonov and Yamshchikov. Music generation with variational recurrent autoencoder supported by history. arXiv.
[4] Hertel, Phan, and Mertins. Classifying Variable-Length Audio Files with All-Convolutional Networks and Masked Global Pooling.
[5] Kalchbrenner, Grefenstette, and Blunsom. A Convolutional Neural Network for Modelling Sentences. ACL 2014.
[6] Razavian and Sontag. Temporal Convolutional Neural Networks for Diagnosis from Lab Tests. arXiv. |
iclr_2018_rkfbLilAb | We develop a reinforcement learning based search assistant which can assist users through a set of actions and sequence of interactions to enable them realize their intent. Our approach caters to subjective search where the user is seeking digital assets such as images which is fundamentally different from the tasks which have objective and limited search modalities. Labeled conversational data is generally not available in such search tasks and training the agent through human interactions can be time consuming. We propose a stochastic virtual user which impersonates a real user and can be used to sample user behavior efficiently to train the agent which accelerates the bootstrapping of the agent. We develop A3C algorithm based context preserving architecture which enables the agent to provide contextual assistance to the user. We compare the A3C agent with Q-learning and evaluate its performance on average rewards and state values it obtains with the virtual user in validation episodes. Our experiments show that the agent learns to achieve higher rewards and better states. | The paper "IMPROVING SEARCH THROUGH A3C REINFORCEMENT LEARNING BASED CONVERSATIONAL AGENT" proposes to define an agent to guide users in information retrieval tasks. By proposing refinements of the query, categorizations of the results or some other bookmarking actions, the agent is supposed to help the user in achieving his search. The proposed agent is learned via reinforcement learning.
My main concern with this paper is that the experiments are only based on simulated users, as is the case for learning. While this is already questionable for training (though we understand why it is difficult to avoid), it is very problematic that the evaluation does not include anything demonstrating the usability of the approach in a real-world scenario. I have serious doubts about the performance of such an artificially trained approach on real-world search tasks. Also, the experimental section is not sufficiently detailed, which leads to non-reproducible results. Moreover, the authors should have considered baselines (only the two proposed agents are compared, which is clearly not sufficient).
Also, both models have some issues from my point of view. First, the Q-learning method looks very complex: how could we expect to get an accurate model with 10^7 states? No generalization across situations is done here; example trajectories have to be collected for each individual state, which requires a huge amount of data (especially if we think about the number of possible trajectories in such an MDP). The second model is able to generalize across similar situations thanks to the proposed neural architecture. However, I have some concerns about it: why keep the history of actions in the inputs, since it is already captured by the LSTM cell? It is redundant information that might disturb the process. Secondly, the proposed loss looks very heuristic to me; it is difficult to understand what is really optimized here. In particular, the entropy loss looks strange to me. Is it classical? Are there references for such a method of maintaining some exploration ability? I understand the need for exploration, but including it in the loss function reduces the interpretability of the objective (wouldn't it be preferable to use a more classical loss with an epsilon-greedy policy?).
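For reference, the form of entropy regularization I am used to seeing in A3C (following Williams & Peng, and Mnih et al. 2016) adds a bonus term to the policy objective rather than a separate heuristic loss: the per-step policy gradient becomes $\nabla_\theta \log \pi_\theta(a_t \mid s_t)(R_t - V(s_t)) + \beta \nabla_\theta \mathcal{H}(\pi_\theta(\cdot \mid s_t))$, with a small weight $\beta$ on the policy entropy $\mathcal{H}$. If the paper's entropy term is of this standard form, it should simply be cited as such; if it deviates, the deviation should be motivated.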
Other remarks:
- At the beginning of the "varying memory capacity" section, what are "100, 150 and 250"? Time steps? What is the unit? Seconds?
- I did not understand the "Capturing search context at local and global level" section at all
- In the entropy loss formula, the two negation signs could be removed
iclr_2018_HkCnm-bAb | Deep reinforcement learning has achieved many recent successes, but our understanding of its strengths and limitations is hampered by the lack of rich environments in which we can fully characterize optimal behavior, and correspondingly diagnose individual actions against such a characterization. Here we consider a family of combinatorial games, arising from work of Erdos, Selfridge, and Spencer, and we propose their use as environments for evaluating and comparing different approaches to reinforcement learning. These games have a number of appealing features: they are challenging for current learning approaches, but they form (i) a low-dimensional, simply parametrized environment where (ii) there is a linear closed form solution for optimal behavior from any state, and (iii) the difficulty of the game can be tuned by changing environment parameters in an interpretable way. We use these Erdos-Selfridge-Spencer games not only to compare different algorithms, but also to compare approaches based on supervised and reinforcement learning, to analyze the power of multi-agent approaches in improving performance, and to evaluate generalization to environments outside the training set. | The paper presents Erdos-Selfridge-Spencer games as environments for investigating
deep reinforcement learning algorithms. The proposed games are interesting and clearly challenging, but I am not sure what they tell us about the algorithms chosen to test them. There are some clarity issues with the justification and evaluation which undermine the message the authors are trying to make.
In particular, I have the following concerns:
• these games have optimal policies that are expressible as a linear model, meaning that if the architecture or updating of the learning algorithm is such that there is a bias towards exploring these parts of policy space, then they will perform better than more general algorithms. What does this tell us about the relative merits of each approach? The authors could do more to formally motivate these games as "difficult" for any deep learning architecture if possible.
• the authors compare linear models with non-linear models at some point for attacker policies, but it is unclear whether these linear models are able to express the optimal policy. In fact, there is a level of non-determinism in how the attacker policies are encoded which means that an optimal policy cannot be (even up to soft-max) expressed by the agent (as I read things the number of pieces chosen in level l is always chosen uniformly randomly).
• As the authors state, this paper is an empirical evaluation, and the theorems presented are derived from earlier work. There is possibly too much focus on the proofs of these theorems.
• There are a number of ambiguities and errors which make the interpretation (and potential replication) of the experiments difficult. As this is an empirical study, this is the yardstick by which the paper should be judged. In particular, this relates to:
◦ The architecture of each of the tested Deep RL methods.
◦ What is done to select appropriate tuning parameters of the tested Deep RL methods, if anything.
◦ It is unclear whether 'incorrect actions' in the supervised learning evaluations, refer to non-optimal actions, or simply actions that do not preserve the dominance of the defender, e.g. both partitions may have potential >0.5
◦ Fig 4. right looks like a reward signal, but is labelled Proportion correct. The text is not clear enough to be sure which it is.
◦ Fig 4 left and right have 4 methods: rl rewards, rl correct actions, sup rewards, and sup correct actions. The specifics of how these methods are constructed are unclear from the paper.
◦ What parts of the evaluation explore how well these methods are able to represent the states (feature/representation learning) and what parts evaluate the propagation of sparse rewards (the reinforcement learning core)? The authors could be clearer and more targeted with respect to this question.
There is value in this work, but in its current state I do not think it is ready for publication.
# Detailed notes
[p4, end of sec 3] The authors say that the difficulty of the games can be varied with "continuous changes in potential", but the potential is derived from the discrete initial game state, so these values are not continuously varying (even though it is possible to adjust them by non-integer amounts).
[p4, sec 4.1]
"strategy unevenly partitions the occupied levels...with the proportional difference between the two sets being sampled randomly"
What is meant by this? The proportional difference between the two sets is discussed as if it were a continuous property, but it must be chosen from the discrete set of all available partitions. If a partition is chosen uniformly at random from all possible sets A, B (and the potential proportion calculated), then I don't know why it would be written in this way. That suggests that proportions closer to 1:1 are chosen more often than "extreme" partitions, but how? This feels a little under-justified.
"very different states A, B (uneven potential, disjoint occupied levels)"
Are these states really "very different", or at least for the reasons indicated? Later on (Theorem 3) we see how an optimal partition is generated. This chooses a partition where one part contains all pieces in layer (l+1) and above and one part with all pieces in layer (l-1) and below, with layer l being distributed between the two parts. The first part will typically have a slightly lower potential than the other, and all layers other than layer l will be disjoint.
[p6, Fig 4] The right plot y-limits vary between -1 and 1 so it cannot represent a proportion of correct actions. Also, in the text the authors say:
>> The results, shown in Figure 4 are surprising. Reinforcement learning
>> is better at playing the game, but does worse at predicting optimal moves.
I am not sure which plot shows the playing of the game. Is this the right hand plot? In which case are we looking at rewards? In fact, I am a little confused as to what is being shown here. Is "sup rewards" a supervised learning method trained on rewards, or evaluated on rewards, or both? And how is this done. The text is just not clear enough.
[p7 Fig 6 and text] Here the authors are comparing how well agents select the optimal actions as compared to how close they are to the end of the game. This relates to the "surprising" fact that "Reinforcement learning is better at playing the game, but does worse at predicting optimal moves.". I think an important point here is how many training/test examples there are in each bin. If there are more in the range 3-7 moves from the end of the game than there are outside this range, then the supervised learner will naturally concentrate its accuracy in that range, which could explain the aggregate difference without telling us much about behaviour elsewhere in the game.
[p8 proof of theorem 3]
"φ(A l+1 ) < 0.5 and φ(A l ) > 0.5."
Is it true that both these inequalities are strict?
"Since A l only contains pieces from levels K to l + 1"
In fact this should read from levels K to l.
"we can move k < m − n pieces from A l+1 to A l"
Do the authors mean that we can define a partition A, B where A = A_{l+1} plus some (but not all) elements in level l (A_{l}\setminus A_{l+1})?
"...such that the potential of the new set equals 0.5"
It will equal exactly 0.5 as suggested, but the authors could make it more precise as to why (there is a value n+k < l (maybe <=l) such that (n+k)*2^{-(K-l+1)}=0.5 is guaranteed to exist). They should also indicate why this then justifies their proof (namely that phi(S0)-0.5 >= 0.5).
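To make the bookkeeping in this proof easier to follow, here is a tiny helper for checking partition potentials. I am using the per-piece weight 2**-(K - l + 1) implied by the expression quoted just above; depending on the paper's indexing convention this may be off by a factor of two, so treat it as illustrative only.

def potential(levels, K):
    # Potential of a set of pieces, one entry of `levels` per piece.
    # Per-piece weight follows the (n+k)*2^{-(K-l+1)} expression above.
    return sum(2.0 ** -(K - l + 1) for l in levels)

def is_winning_split(A_levels, B_levels, K):
    # The attacker wants both halves of the partition to retain potential >= 0.5.
    return potential(A_levels, K) >= 0.5 and potential(B_levels, K) >= 0.5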
[p8 parameterising action space] A comment: this doesn't give as much control as the authors suggest. Perhaps the agent should also choose the proportion of elements in layer l to assign to set A. For instance, if there are a large number of elements in l, and/or phi(A_{l+1}) is very close to 0.5 (or phi(A_l) is very close to 0.5), then this doesn't give the attacker the opportunity to fine-tune the policy to select very good partitions. It is unclear what level of control agents have under various conditions (K and starting states).
[p9 Fig 8] As the defender's score is functionally determined by the attacker's score, it doesn't help to include it on the plot. It just distracts from the signal.
iclr_2018_ryG6xZ-RZ | DLVM: A MODERN COMPILER INFRASTRUCTURE FOR DEEP LEARNING SYSTEMS
Deep learning software demands reliability and performance. However, many of the existing deep learning frameworks are software libraries that act as an unsafe DSL in Python and a computation graph interpreter. We present DLVM, a design and implementation of a compiler infrastructure with a linear algebra intermediate representation, algorithmic differentiation by adjoint code generation, domainspecific optimizations and a code generator targeting GPU via LLVM. Designed as a modern compiler infrastructure inspired by LLVM, DLVM is more modular and more generic than existing deep learning compiler frameworks, and supports tensor DSLs with high expressivity. With our prototypical staged DSL embedded in Swift, we argue that the DLVM system enables a form of modular, safe and performant frameworks for deep learning. | The success of Deep Learning is, in no small part, due the development of libraries and frameworks which have made building novel models much easier, faster and less error prone and also make taking advantage of modern hardware (such as GPUs) more accessible. This is still a vital area of work, as new types of models and hardware are developed.
This work argues that prior solutions do not take advantage of the fact that a tensor compiler is, essentially, just a compiler. They introduce DLVM (and NNKit) which comprises LLVM based compiler infrastructure and a DSL allowing the use of Swift to describe a typed tensor graph. Unusually, compared to most frameworks, gradients are calculated using source code transformation, which is argued to allow for easier optimization.
This paper is not well-adapted for an ICLR audience, many of whom are not experts in compilers or LLVM. For example, Figure 3 and Table 1 would benefit from being shorter, with more exposition on what the reader should understand and take away from them.
The primary weakness of this work is the lack of careful comparison with existing frameworks. The authors mention several philosophical arguments in favor of their approach, but is there a concrete example of a model which is cumbersome to write in an existing framework but easy here? (e.g., recent libraries such as PyTorch and TF eager can express conditional logic much more simply than previous approaches, and it is easy to communicate why you might use them). Because of this, the work seems likely to be of limited interest to the ICLR audience, most of whom are potential users rather than compiler experts. There is also no benchmarking, which is at odds with the claim that the compiler approach allows easier optimization.
One aspect that seemed under-addressed, and which is often a crucial aspect of a good framework, is how general-purpose code (e.g., for loading data or logging) interacts with the accelerated tensor code.
iclr_2018_SJFM0ZWCb | Unsupervised learning of time series data, also known as temporal clustering, is a challenging problem in machine learning. Here we propose a novel algorithm, Deep Temporal Clustering (DTC), to naturally integrate dimensionality reduction and temporal clustering into a single end-to-end learning framework, fully unsupervised. The algorithm utilizes an autoencoder for temporal dimensionality reduction and a novel temporal clustering layer for cluster assignment. Then it jointly optimizes the clustering objective and the dimensionality reduction objective. Based on requirement and application, the temporal clustering layer can be customized with any temporal similarity metric. Several similarity metrics and state-of-the-art algorithms are considered and compared. To gain insight into temporal features that the network has learned for its clustering, we apply a visualization method that generates a region of interest heatmap for the time series. The viability of the algorithm is demonstrated using time series data from diverse domains, ranging from earthquakes to spacecraft sensor data. In each case, we show that the proposed algorithm outperforms traditional methods. The superior performance is attributed to the fully integrated temporal dimensionality reduction and clustering criterion. | Summary:
The authors proposed an unsupervised time series clustering method built with deep neural networks. The proposed model is equipped with an encoder-decoder and a clustering model. First, the encoder employs a CNN to shorten the time series and extract local temporal features, and the CNN is followed by bidirectional LSTMs to get the encoded representations. A temporal clustering model and a DCNN decoder are applied on the encoded representations and jointly trained. An additional heatmap generator component can be further included in the clustering model. The authors compared the proposed method with hierarchical clustering with 4 different temporal similarity methods on several univariate time series datasets.
Detailed comments:
The problem of unsupervised time series clustering is important and challenging. The idea of utilizing deep learning models to learn encoded representations for clustering is interesting and could be a promising solution.
One potential limitation of the proposed method is that it is only designed for univariate time series of the same temporal length, which limits the usage of this model in practice. In addition, given that the input has fixed length, clustering baselines for static data can be easily applied and should be compared to demonstrate the necessity of temporal clustering.
Some important details are missing or lack explanation. For example, what is the size of each layer and the dimension of the encoded space? How much does the model shorten the input time series, and how is this determined?
How does the model combine the heatmap output (which is a sequence of the same length as the time series) and the clustering output (which is a vector of size K) in Figure 1? The heatmap shown in Figure 3 looks like the negation of the decoded output (i.e., lower value in time series -> higher value in heatmap). How do we interpret the generated heatmap?
From the experimental results, it is difficult to judge which method/metric is the best. For example, in Figure 4, each of the 4 DTC methods achieved the best performance on one or two datasets. Though several datasets are evaluated in the experiments, they are relatively small. Even the largest dataset (PhalangesOutlinesCorrect) has only 2 thousand samples, and the best performance is achieved by one of the baselines, with an AUC score of only 0.586 for binary classification.
Minor suggestion:
In Figure 3, instead of showing the decoded output (reconstruction), it may be more helpful to visualize the encoded time series since the clustering method is applied directly on those encoded representations. |
iclr_2018_SyzKd1bCW | BACKPROPAGATION THROUGH THE VOID: OPTIMIZING CONTROL VARIATES FOR BLACK-BOX GRADIENT ESTIMATION
Gradient-based optimization is the foundation of deep learning and reinforcement learning, but is difficult to apply when the mechanism being optimized is unknown or not differentiable. We introduce a general framework for learning low-variance, unbiased gradient estimators, applicable to black-box functions of discrete or continuous random variables. Our method uses gradients of a surrogate neural network to construct a control variate, which is optimized jointly with the original parameters. We demonstrate this framework for training discrete latent-variable models. We also give an unbiased, action-conditional extension of the advantage actor-critic reinforcement learning algorithm. | This paper suggests a new approach to performing gradient descent for blackbox optimization or training discrete latent variable models. The paper gives a very clear account of existing gradient estimators and finds a way to combine them so as to construct and optimize a differentiable surrogate function. The resulting new gradient estimator is then studied both theoretically and empirically. The empirical study shows the benefits of the new estimator for training discrete variational autoencoders and for performing deep reinforcement learning.
To me, the main strengths of the paper are the very clear account of existing gradient estimators (among other things it helped me understand obscurities of the Q-prop paper) and a nice conceptual idea. The empirical study itself is more limited and the paper suffers from a few mistakes and missing information, but to me the good points are enough to warrant publication of the paper in a good conference like ICLR.
Below are my comments for the authors.
---------------------------------
General, conceptual comments:
When reading (6), it is clear that the framework performs regression of $c_\phi$ towards the unknown $f$ simultaneously with optimization over $c_\phi$.
Taking this perspective, I would be glad to see how the regression part performs with respect to standard least-squares regression, i.e. just using $||f(b)-c_\phi(b)||^2$ as the loss function. You may compare the speed of convergence of $c_\phi$ towards $f$ using (6) and the least-squares error. You may also investigate the role of this regression part in the global g_LAX optimization by studying the evolution of the components of (6).
Related to the above comment, in Algo. 1, you mention "f(.)" as given to the algo. Actually, the algo does not know f itself, otherwise it would not be blackbox optimization. So you may mean different things. In a batch setting, you may give a batch of [x,f(x) (,cost(x)?)] points to the algo. You more probably mean here that you have an "oracle" that, given some x, tells you f(x) on demand. But the way you are sampling x is not specified clearly.
This becomes more striking when you move to reinforcement learning problems, which is my main interest. The RL algorithm itself is not much specified. Does it use a replay buffer (probably not)? Is it on-policy or off-policy (probably on-policy)? What about the exploration policy? I want to know more... Probably you just replace (10) with (11) in A2C, but this is not clearly specified.
In Section 4, can you explain why, in the RL case, you must introduce stochasticity to the inputs? Is this related to the exploration issue (see above)?
Last sentence of conclusion: you are too allusive about the relationship between your learned control variate and the Q-function. I don't get it, and I want to know more...
-----------------------------------
Local comments:
Backpropagation through the void: I don't understand this title. I'm not a native English speaker, so I'm probably missing a reference to something; I would be glad to get it.
Figure 1 right. Caption states variance, but it is log variance. Why does it oscillate so much with RELAX?
Beginning of 3.1: you may state more clearly that optimizing $c_\phi$ the way you do it will also "minimize" the variance, and explain better why ("we require the gradient of the variance of our gradient estimator"...). It took me a while to get it.
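To spell out the argument I eventually reconstructed (standard, but worth stating explicitly in the text): since the estimator $\hat{g}$ is unbiased with respect to $\theta$ for every choice of $\phi$, the mean $\mathbb{E}[\hat{g}]$ does not depend on $\phi$, so componentwise $\frac{\partial}{\partial \phi}\mathrm{Var}[\hat{g}] = \frac{\partial}{\partial \phi}\mathbb{E}[\hat{g}^2] = \mathbb{E}[\frac{\partial \hat{g}^2}{\partial \phi}]$, which can itself be estimated from a single Monte Carlo sample of $\hat{g}$. A sentence along these lines at the beginning of 3.1 would save the reader some time.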
In 3.1.1, a weighting based on $d/d\theta\,\log p(b)$ => shouldn't you write $d/d\theta\,\log p(b|\theta)$ as before?
Figure 2 is mentioned in p.3, it should appear much sooner than p6.
In Figure 2, there is nothing about the REINFORCE PART. Why?
In 3.4 you alternate sums over an infinite horizon and sums over T time steps. You should stick to the T horizon case, as you mention the case T=1 later.
p6 Related work
The link to the work of Salimans 2017 is far from obvious, I would be glad to know more...
Q-prop (Haarnoja et al.,2017): this is not the adequate reference to Q-prop, it should be (Gu et al. 2016), you have it correct later ;)
Figure 3: why do you stop after so few epochs? I wondered how expensive the computation of your estimator is, but since in the RL case you go up to 50 million (or 4 million?) steps, it's probably not the issue. I would be glad to see another horizontal line marking the lowest validation error of your RELAX estimator (so you need to run more epochs).
"ELBO" should be explained here (it is only explained in the appendices).
6.2, Table 1: Best obtained training objective: what does this mean? Should it be small or large? You need to explain better. How much is the modest improvement (rather give relative improvement in the text?)? To me, you should not defer Table 3 to an appendix (nor Table 4).
Figure 4: Any idea why A2C oscillates so much on inverted pendulum? Any idea why the variance starts to decrease after 500 episodes using RELAX? Isn't it related to the combination of regression and optimization, as suggested above?
About Double Inverted Pendulum, Appendix E3 mentions 50 million frames, but the figure shows 4 million steps. Where is the truth?
Why do you give steps for the reward, and episodes for log-variance? The caption mentions "variance (log-scale)", but saying "log-variance" would be more adequate.
p9: the optimal control variate: what is this exactly? How do you compare one control variate to another? This may be explained in Section 2.
GAE (Kimura, 2000). I'm glad you refer to former work (there is a very annoying tendency these days to refer only to very recent papers from a small set of people who do not correctly refer themselves to previous work), but you may nevertheless refer to John Schulman's paper about GAEs anyway... ;)
Appendix E.1 could be reorganized, with a common preamble and then E.1.1 for the one-layer model(s?) and E.1.2 for the two-layer model(s?)
A sensitivity analysis with respect to your hyper-parameters would be welcome; this is true for all empirical studies.
In E2, is the output layer linear? You just say it is not ReLU...
The networks used in E2 are very small (a standard would be 300 and 400 neurons in hidden layers). Do you have a constraint on this?
"As our control variate does not have the same interpretation as the value function of A2C, it was not directly clear how to add reward bootstrapping and other variance reduction techniques common in RL into our model. We leave the task of incorporating these and other variance reduction techniques to future work."
First, this is important, so if this is true I would move this to the main text (not in appendix).
But also, it seems to me that the first sentence of E3 contradicts this, so where is the truth?
{0.01,0.003,0.001} I don't believe you just tried these values. Most probably, you played with other values before deciding to perform grid search on these, right?
The same for 25 in E3.
Globally, your experimental part is rather weak; we would expect a stronger methodology, more experiments also with more difficult benchmarks (half-cheetah and the whole gym zoo ;)), and more detailed analyses of the results, but to me the value of your paper is more didactical and conceptual than experimental, which I really appreciate, so I will support your paper despite these weaknesses.
Good luck! :)
---------------------------------------
Typos:
p5
monte-carlo => Monte(-)Carlo (no - later...)
taylor => Taylor
you should always capitalize Section, equation, table, figure, appendix, ...
gradient decent => descent (twice)
p11: probabalistic
p15 ELU(Djork-... => missing space |
iclr_2018_B1EGg7ZCb | Autonomous vehicles are becoming more common in city transportation. Companies will begin to find a need to teach these vehicles smart city fleet coordination. Currently, simulation based modeling along with hand coded rules dictate the decision making of these autonomous vehicles. We believe that complex intelligent behavior can be learned by these agents through Reinforcement Learning. In this paper, we discuss our work for solving this system by adapting the Deep Q-Learning (DQN) model to the multi-agent setting. Our approach applies deep reinforcement learning by combining convolutional neural networks with DQN to teach agents to fulfill customer demand in an environment that is partially observable to them. We also demonstrate how to utilize transfer learning to teach agents to balance multiple objectives such as navigating to a charging station when its energy level is low. The two evaluations presented show that our solution has shown that we are successfully able to teach agents cooperation policies while balancing multiple objectives. | The main contribution of the paper seems to be the application to this problem, plus minor algorithmic/problem-setting contributions that consist in considering partial observability and to balance multiple objectives. On one hand, fleet management is an interesting and important problem. On the other hand, although the experiments are well designed and illustrative, the approach is only tested in a small 7x7 grid and 2 agents and in a 10x10 grid with 4 agents. In spirit, these simulations are similar to those in the original paper by M. Egorov. Since the main contribution is to use an existing algorithm to tackle a practical application, it would be more interesting to tweak the approach until it is able to tackle a more realistic scenario (mainly larger scale, but also more realistic dynamics with traffic models, real data, etc.).
Simulation results compare MADQN with Dijkstra's algorithm as a baseline, which offers a myopic solution where each agent picks up the closest customer. Again, since the main contribution is to solve a specific problem, it would be worthy to compare with a more extensive benchmark, including state of the art algorithms used for this problem (e.g., heuristics and metaheuristics).
The paper is clear and well written. There are several minor typos and formatting errors (e.g., at the end of Sec. 3.3, the authors mention Figure 3, which seems to be missing, also references [Egorov, Maxim] and [Palmer, Gregory] are bad formatted).
-- Comments and questions to the authors:
1. In the introduction, please, could you add references to what is called "traditional solutions"?
2. Regarding the partial observability, each agent knows the location of all agents, including itself, and the location of all obstacles and charging locations; but it only knows the location of customers that are in its vision range. This assumption seems reasonable if a central station broadcasts all agents' positions and customers are only allowed to stop vehicles in the street, without ever contacting the central station; otherwise, if customers order vehicles in advance (e.g., by calling or using an app), the central station should be able to communicate customer locations too. On the other hand, if no communication with the central station is allowed, then the positions of the other agents may be only partially observable as well. In other words, the proposed partial observability assumption requires some further motivation. Moreover, in Sec. 4.3, it is said that agents can see around them +10 spaces away; however, experiments are run in 7x7 and 10x10 grid worlds, meaning that the agents are able to observe the grid completely.
3. The fact that partial observability helped to alleviate the credit-assignment noise caused by the missing customer penalty might be an artefact of the setting. For instance, since the reward has been designed arbitrarily, it could have been defined as giving a penalty for those missing customers that are at some distance of an agent.
4. Please, could you explain the last sentence of Sec. 4.3 that says "The drawback here is that the agents will not be able to generalize to other unseen maps that may have very different geographies." In particular, how is this sentence related to partial observability? |
iclr_2018_S1q_Cz-Cb | We present a novel approach for training neural machines which incorporates additional supervision on the machine's interpretable components (e.g., neural memory). To cleanly capture the kind of neural machines to which our method applies, we introduce the concept of a differential neural computational machine (∂NCM) and show that several existing architectures (e.g., NTMs, NRAMs) can be instantiated as a ∂NCM and can thus benefit from any amount of additional supervision over their interpretable components. Based on our method, we performed a detailed experimental evaluation with NTM and NRAM machines, showing the approach leads to significantly better convergence and generalization capabilities of the learning phase than standard training using only input-output examples. | Summary
This paper presents differentiable Neural Computational Machines (∂NCM), an abstraction of existing neural abstract machines such as Neural Turing Machines (NTMs) and Neural Random Access Machines (NRAMs). Using this abstraction, the paper proposes loss terms for incorporating supervision on execution traces. Adding supervision on execution traces in ∂NCM improves performance over NTM and NRAM which are trained end-to-end from input/output examples only. The observation that adding additional forms of supervision through execution traces improves generalization may be unsurprising, but from what I understand the main contribution of this paper lies in the abstraction of existing neural abstract machines to ∂NCM. However, this abstraction does not seem to be particularly useful for defining additional losses based on trace information. Despite the generic subtrace loss (Eq 8), there is no shared interface between ∂NCM versions of NTM and NRAM that would allow one to reuse the same subtrace loss in both cases. The different subtrace losses used for NTM and NRAM (Eq 9-11) require detailed knowledge of the underlying components of NTM and NRAM (write vector, tape, register etc.), which questions the value of ∂NCM as an abstraction.
Weaknesses
As explained in the summary, it is not clear to me why the abstraction to NCM is useful if one still needs to define specific subtrace losses for different neural abstract machines.
The approach seems to be very susceptible to the weight of the subtrace loss λ, at least when training NTMs. In my understanding each of the trace supervision information (hints, e.g. the ones listed in Appendix F) provides a sensible inductive bias we would the NTM to incorporate. Are there instances where these biases are noisy, and if not, could we incorporate all of them at the same time despite the susceptibility w.r.t λ?
NTMs and other recent neural abstract machines are often tested on rather toyish algorithmic tasks. I have the impression providing extra supervision in form of execution traces makes these tasks even more toyish. For instance, when providing input-output examples as well as the auxiliary loss in Eq6, what exactly is left to learn? What I like about Neural-Programmer Interpreters and Neural Programmer [1] is that they are tested on less toyish tasks (a computer vision and a question answering task respectively), and I believe the presented method would be more convincing for a more realistic downstream task where hints are noisy (as mentioned on page 5).
Minor Comments
p1: Why is Grefenstette et al. (2015) an extension of NTMs or NRAMs? While they took inspiration from NTMs, their Neural Stack has not much resemblance with this architecture.
p2: What is B exactly? It would be good to give a concrete example at this point. I have the feeling it might even be better to explain NCMs in terms of the communication between κ, π and M first, so starting with what I, O, C, B, Q are before explaining what κ and π are (this is done well for NTM as ∂NCM in the table on page 4). In addition, I think it might be better to explain the Controller before the Processor. Furthermore, Figure 2a should be referenced in the text here.
p4 Eq3: There are two things confusing in these equations. First, w is used as the write vector here, whereas on page 3 this is a weight of the neural network. Secondly, π and κ are defined on page 2 as having an element from W as first argument, which are suddenly omitted on page 4.
p4: The table for NRAM as ∂NCM needs a bit more explanation. Where does {1}=I come from? This is not obvious from Appendix B either.
p3 Fig2/p4 Eq4: Related to the concern regarding the usefulness of the ∂NCM abstraction: While I see how NTMs fit into the NCM abstraction, this is not obvious at all for NRAMs, particularly since in Fig 2c modules are introduced that do not follow the color scheme of κ and π in Fig 2a (ct, at, bt and the registers).
p5: There is related work for incorporating trace supervision into a neural abstract machine that is otherwise trained end-to-end from input-output examples [2].
p5: "loss on example of difficulties" -> "loss on examples of the same difficulty"
p5: Do you have an example for a task and hints from a noisy source?
Citation style: sometimes citations should be in brackets, for example "(Graves et al. 2016)" instead of "Graves et al. (2016)" in the first paragraph of the introduction.
[1] Neelakantan et al. Neural programmer: Inducing latent programs with gradient descent. ICLR. 2015.
[2] Bosnjak et al. Programming with a Differentiable Forth Interpreter. ICML. 2017. |
iclr_2018_HyrCWeWCb | TRUST-PCL: AN OFF-POLICY TRUST REGION METHOD FOR CONTINUOUS CONTROL
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL, which exploits an observation that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. The introduction of relative entropy regularization allows Trust-PCL to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL significantly improves the solution quality and sample efficiency of TRPO. | This paper presents a policy gradient method that employs entropy regularization and entropy constraint at the same time. The entropy regularization on action probability is to encourage the exploration of the policy, while the entropy constraint is to stabilize the gradient.
The major weakness of this paper is the unclear presentation. For example, the algorithm is never fully described, though a handful variants are discussed. How the off-policy version is implemented is missing.
In experiments, why the off-policy version of TRPO is not compared. Comparing the on-policy results, PCL does not show a significant advantage over TRPO. Moreover, the curves of TRPO is so unstable, which is a bit uncommon.
What is the exploration strategy in the experiments? I guess it was softmax probability. However, in many cases, softmax does not perform a good exploration, even if the entropy regularization is added.
Another issue is the discussion of the entropy regularization in the objective function. This regularization, while helping exploration, does change the original objective. When a policy is required to pass through a very narrow tunnel of states, a regularizer that forces a wide action distribution may prevent good performance. Thus it would be more interesting to see experiments on more complex benchmark problems like humanoids.
iclr_2018_HklpCzC6- | Inspired by the combination of feedforward and iterative computations in the visual cortex, and taking advantage of the ability of denoising autoencoders to estimate the score of a joint distribution, we propose a novel approach to iterative inference for capturing and exploiting the complex joint distribution of output variables conditioned on some input variables. This approach is applied to image pixel-wise segmentation, with the estimated conditional score used to perform gradient ascent towards a mode of the estimated conditional distribution. This extends previous work on score estimation by denoising autoencoders to the case of a conditional distribution, with a novel use of a corrupted feedforward predictor replacing Gaussian corruption. An advantage of this approach over more classical ways to perform iterative inference for structured outputs, like conditional random fields (CRFs), is that it is not any more necessary to define an explicit energy function linking the output variables. To keep computations tractable, such energy function parametrizations are typically fairly constrained, involving only a few neighbors of each of the output variables in each clique. We experimentally find that the proposed iterative inference from conditional score estimation by conditional denoising autoencoders performs better than comparable models based on CRFs or those not using any explicit modeling of the conditional joint distribution of outputs. | The paper proposes an image segmentation method which iteratively refines the semantic segmentation mask obtained from a deep net. To this end the authors investigate a denoising auto-encoder (DAE). Its purpose is to provide a semantic segmentation which improves upon its input in terms of the log-likelihood.
More specifically, the authors `propose to condition the autoencoder with an additional input’ (page 1). To this end they use features obtained from the deep net. Instead of training the DAE with ground truth y, the authors found usage of the deep net prediction to yield better results.
The proposed approach is evaluated on the CamVid dataset.
Summary:
——
I think the paper discusses a very interesting topic and presents an elegant approach. A few points are missing which would provide significantly more value to a reader. Specifically, an evaluation on the classical Pascal VOC dataset, details regarding the training protocol of the baseline (which are omitted right now), an assessment regarding stability of the proposed approach (not discussed right now), and a clear focus of the paper on segmentation or conditioning. See comments below for details and other points.
Comments:
——
1. When training the DAE, a combination of squared loss and categorical cross-entropy loss is used. What’s the effect of the squared error loss and would the categorical cross-entropy on its own be sufficient? This question remains open when reading the submission.
2. The proposed approach is evaluated on the CamVid dataset which is used less compared to the standard and larger Pascal VOC dataset. I conjecture that the proposed approach wouldn’t work too well on Pascal VOC. On Pascal VOC, images are distinctly different from each other whereas subsequent frames are similar in CamVid, i.e., the road is always located at the bottom center of the image. The proposed architecture is able to take advantage of this dataset bias, but would fail to do so on Pascal VOC, which has a much more intricate bias. It would be great if the authors could check this hypothesis and report quantitative results similar to Tab. 1 and Fig. 4 for Pascal VOC.
3. The authors mention a grid-search for the stepsize and the number of iterations. What values were selected in the end on the CamVid and hopefully the Pascal VOC dataset?
4. Was the dense CRF applied out of the box, or were its parameters adjusted for good performance on the CamVid validation dataset? While parameters such as the number of iterations and epsilon are tuned for the proposed approach on the CamVid validation set, the submission doesn’t specify whether a similar procedure was performed for the CRF baseline.
5. Fig. 4 seems to indicate that the proposed approach doesn’t converge. Hence an appropriate stepsize and a reasonable number of iterations need to be chosen on a validation set. Choosing those parameters guarantees that the method performs well on average, but individual results could potentially be entirely wrong, particularly if large step sizes are chosen. I suspect this effect to be more pronounced on the Pascal VOC dataset (hence my conjecture in point 2). To further investigate this property, as a reader, I’d be curious to get to know the standard deviation/variance of the accuracy in addition to the mean IoU. Again, it would be great if the authors could check this hypothesis and report those results.
6. I find the experimental section to be slightly disconnected from the initial description. Specifically, the paper `proposes to condition the autoencoder with an additional input’ (page 1). No experiments are conducted to validate this proposal. Hence the main focus of the paper (image segmentation or DAE conditioning) remains vague. If the authors choose to focus on image segmentation, a comparison to the state of the art should be provided on classical datasets such as Pascal VOC; if DAE conditioning is the focus, some experiments in this direction should be included in addition to the Pascal VOC results.
Minor comment:
——
- I find it surprising that the authors choose not to cite some related work on combining deep nets with structured prediction. |
iclr_2018_rJXMpikCZ | GRAPH ATTENTION NETWORKS
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of computationally intensive matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training). | The paper introduces a neural network architecture to operate on graph-structured
data named Graph Attention Networks.
Key components are an attention layer and the possibility to learn how to
weight different nodes in the neighborhood without requiring spectral decompositions
which are costly to be computed.
I found the paper clearly written and very well presented. I want to thank the authors for actively participating in the discussions and for already clarifying many of the details that I was missing.
As also reported in the comments by T. Kipf, I found that comparisons to previous work on attention and on constructions of neural networks for graph data are missing. In particular, MoNet seems a more general framework; using features to compute node similarity is another way to specify the "coordinate system" for convolution. I would argue that in many cases the graph is given and that one would have to exploit its structure rather than the simple first-order neighborhood structure.
using the information in the graph itself. There is no
explicit usage of the graph beyond the selection of the local neighborhood.
In many ways when I first read it I though it would be a modified version of
memory networks (which have not been cited). Sec. 2.1 is basically describing
a way to learn a matrix W so that the attention layer produces the weights to be
used for convolution, or the relative coordinate system, which is to me a
memory network like construction, where the memory is given by the neighborhood.
I find the idea to use the multi-head attention very interesting, but one should
consider the increase in number of parameters in the experimental section.
I agree that the proposed method is computationally efficient but the authors
should keep in mind that parallelizing across all edges involves lot of redundant
copies (e.g. in a distributed system) as the neighborhoods highly overlap, at
least for interesting graphs.
The advantage with respect to methods that try to use LSTMs in this domain in a naive manner is clear; however, the similarity function (attention) in this work could be interpreted as the variable dictating the visit ordering.
The authors seem to emphasize the use of GPUs as the best way to scale their work, but I tend to think that when nodes have varying degrees GPUs would be highly underutilized. The main reason they are widely used now is the structure in the representation of convolutional operations. Also, in the case of sparse data GPUs are not the best alternative.
Experiments are very well described and performed; however, as explained earlier, some comparisons are needed.
An interesting experiment could be to use the attention weights as the adjacency matrix for a GCN.
Overall I liked the paper and the presentation; I think it is a simple yet effective way of dealing with graph-structured data. However, I think that in many interesting cases the graph structure is relevant and cannot be used just to get the neighboring nodes (e.g. in social network analysis).
iclr_2018_H1kG7GZAW | VARIATIONAL INFERENCE OF DISENTANGLED LATENT CONCEPTS FROM UNLABELED OBSERVATIONS
Disentangled representations, where the higher level data generative factors are reflected in disjoint latent dimensions, offer several benefits such as ease of deriving invariant representations, transferability to other tasks, interpretability, etc. We consider the problem of unsupervised learning of disentangled representations from large pool of unlabeled observations, and propose a variational inference based approach to infer disentangled latent factors. We introduce a regularizer on the expectation of the approximate posterior over observed data that encourages the disentanglement. We also propose a new disentanglement metric which is better aligned with the qualitative disentanglement observed in the decoder's output. We empirically observe significant improvement over existing methods in terms of both disentanglement and data likelihood (reconstruction quality). | ########## UPDATED AFTER AUTHOR RESPONSE ##########
Thanks for the good revision and response that addressed most of my concerns. I am bumping up my score.
###############################################
This paper presents a Disentangled Inferred Prior (DIP-VAE) method for learning disentangled features from unlabeled observations following the VAE framework. The basic idea of DIP-VAE is to enforce the covariance of the aggregated posterior q(z) = E_x[q(z | x)] to be close to the identity matrix, as implied by the commonly chosen standard normal prior p(z). The authors propose to moment-match q(z), given that it is hard to minimize the KL divergence between q(z) and p(z). This leads to one additional term in the regular VAE objective (in two parts, on- and off-diagonal). It has a similar effect to beta-VAE (Higgins et al. 2017) but without sacrificing reconstruction quality. Empirically the authors demonstrate that DIP-VAE can effectively learn disentangled features, perform comparably to or better than beta-VAE, and at the same time retain reconstruction quality close to the regular VAE (beta-VAE with beta = 1).
The paper is overall well-written, with minor issues (listed below). I think the idea of enforcing an aggregated (marginalized) posterior q(z) to be close to the standard normal prior p(z) makes sense, as opposed to enforcing each individual posterior q(z|x) to be close to p(z) as the (beta-)VAE objective suggests. I would like to make a connection to some work on understanding the VAE objective (Hoffman & Johnson 2016, ELBO surgery: yet another way to carve up the variational evidence lower bound), where they derived something along the same lines involving an aggregated posterior q(z). In Hoffman & Johnson, it is shown that KL(q(z) || p(z)) is in fact buried in the ELBO, and the inequality gap in Eq (3) is basically a mutual information term between z and n (the index of the data point). Similar observations have led to the development of the VampPrior (Tomczak & Welling 2017, VAE with a VampPrior). Following the derivation in Hoffman & Johnson, DIP-VAE is basically adding a regularization parameter to the KL(q(z) || p(z)) term in the standard ELBO. I think this interpretation is complementary to (and in my opinion, clearer than) the one described in the paper.
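To spell out the identity I have in mind (standard, using the empirical data distribution $p_D(x)$ and the aggregated posterior $q(z) = \mathbb{E}_{p_D(x)}[q(z|x)]$): $\mathbb{E}_{p_D(x)}[\mathrm{KL}(q(z|x)\,\|\,p(z))] = I_q(x; z) + \mathrm{KL}(q(z)\,\|\,p(z))$, where $I_q$ is the mutual information under the joint $p_D(x)q(z|x)$. Beta-VAE scales the whole left-hand side (and hence also penalizes the mutual information, hurting reconstruction), whereas DIP-VAE only adds a penalty aligned with the second term on the right, which is consistent with the reconstruction-quality results reported here.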
My concerns are mostly regarding the empirical studies:
1. One of my main concerns is about the empirical results in Table 1. The disentanglement metric score for beta-VAE is suspiciously low compared to what's reported in Higgins et al., where a 99.23% disentanglement metric score is reported on the 2D shapes dataset. I understand the linear classifier is different, but the difference is still too large to ignore. Hence my current, more neutral review rating.
2. Regarding the correlational plots (the bottom row of Tables 3 and 4), I don't think I can see any clear patterns (especially on CelebA). I wonder what the point of including them here is; if there is a point, please explain it clearly in the paper.
3. Figure 2 is also a little confusing to me. If I understand the procedure correctly, a good disentangled feature would imply smaller correlations with other features (i.e., the numbers in Figure 2 should be smaller for better-disentangled features). However, looking at Figure 2 and many other plots in the appendix, I don't think DIP-VAE has a clear win here. Is my understanding correct? If so, what exactly are you trying to convey in Figure 2?
Minor comments:
1. In Eq (6), I think there are typos in the definition of Cov_q(z)(z); it appears as only the second term of Eq (5).
2. Hyperparameter subsection in section 3: Shouldn’t \lambda_od be larger if the entanglement is mainly reflected in the off-diagonal entries? Why the opposite?
3. Can you elaborate on how a running estimate of Cov_p(x)(\mu(x)) is maintained (following Eq (6))? It is not very clear in the current version of the paper.
4. Can we have error bars in Table 2? Some of the numbers are possibly hitting the error floor.
5. Tables 5 and 6 are not very necessary, unless there is a clear point.
iclr_2018_B17JTOe0- | EMERGENCE OF GRID-LIKE REPRESENTATIONS BY TRAINING RECURRENT NEURAL NETWORKS TO PERFORM SPATIAL LOCALIZATION
Decades of research on the neural code underlying spatial navigation have revealed a diverse set of neural response properties. The Entorhinal Cortex (EC) of the mammalian brain contains a rich set of spatial correlates, including grid cells which encode space using tessellating patterns. However, the mechanisms and functional significance of these spatial representations remain largely mysterious. As a new way to understand these neural representations, we trained recurrent neural networks (RNNs) to perform navigation tasks in 2D arenas based on velocity inputs. Surprisingly, we find that grid-like spatial response patterns emerge in trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells. All these different functional types of neurons have been observed experimentally. The order of the emergence of grid-like and border cells is also consistent with observations from developmental studies. Together, our results suggest that grid cells, border cells and others as observed in EC may be a natural solution for representing space efficiently given the predominant recurrent connections in the neural circuits. | This paper aims at better understanding the functional role of grid cells found in the entorhinal cortex by training an RNN to perform a navigation task.
On the positive side:
This is the first paper, to my knowledge, to show that grid cells arise as a product of the demands of a navigation task. I enjoyed reading the paper, which is in general clearly written. I have a few, mostly cosmetic, complaints, but these can easily be addressed in a revision.
On the negative side:
The manuscript is not written in a way that is suitable for the target ICLR audience, which will, for the most part, include readers who are not experts on the entorhinal cortex and/or spatial navigation.
First, the contributions need to be more clearly spelled out. In particular, the authors tend to take shortcuts in some of their statements. For instance, in the introduction, it is stated that previous attractor-network-type models (which are also recurrent networks) “[...] require hand-crafted and fined tuned connectivity patterns, and the evidence of such specific 2D connectivity patterns has been largely absent.” This statement is problematic for two reasons:
(i) It is rather standard in the field of computational neuroscience to start from reasonable assumptions regarding patterns of neural connectivity and then proceed to show that the resulting network behaves in a sensible way and reproduces neuroscience data. This is not to say that demonstrating that these patterns can arise as a byproduct is not important; on the contrary, these are just two complementary lines of work. In the same vein, it would be silly to dismiss the present work simply because it lacks spikes.
(ii) The authors do not seem to address one of the main criticisms they make of previous work, in particular "[a lack of evidence] of such specific 2D connectivity patterns". My understanding is that one of the main assumptions made in previous work is that of a center-surround pattern of lateral connectivity. I would argue that there is a lot of evidence for local inhibitory connections in the cortex. Somewhat related to this point, it would be insightful to show the pattern of local connections learned in the RNN to see how it differs from the aforementioned pattern of connectivity.
Second, the navigation task used needs to be better justified. Why train a network to predict 2D spatial location from velocity inputs? Why is this a reasonable starting point for studying the emergence of grid cells? It might be obvious to the authors, but it will not be to the ICLR audience. Dead reckoning (i.e., spatial localization from velocity inputs) is of critical ecological relevance for many animals; this needs to be spelled out and a reference needs to be added. As a side note, I would have expected the authors to use actual behavioral data, but instead the network is trained using artificial trajectories based on "modified Brownian motion". This seems like an important assumption of the manuscript, but the issue is brushed off and not discussed. Why is this a reasonable assumption to make? Is there any reference demonstrating that rodent locomotory behavior in a 2D arena is random?
Figure 4 seems kind of strange. I do not understand how the “representative units” are selected and where the “late” selectivity on the far right side in panel a arises if not from “early” units that would have to travel “far” from the left side… Apologies if I am missing something obvious.
I found the study of the effect of regularization to be potentially the most informative for neuroscience but it is only superficially treated. It would have been nice to see a more systematic treatment of the specifics of the regularization needed to get grid cells. |
iclr_2018_rJIgf7bAZ | In the pursuit of increasingly intelligent learning systems, abstraction plays a vital role in enabling sophisticated decisions to be made in complex environments. The options framework provides formalism for such abstraction over sequences of decisions. However most models require that options be given a priori, presumably specified by hand, which is neither efficient, nor scalable. Indeed, it is preferable to learn options directly from interaction with the environment. Despite several efforts, this remains a difficult problem: many approaches require access to a model of the environmental dynamics, and inferred options are often not interpretable, which limits our ability to explain the system behavior for verification or debugging purposes. In this work we develop a novel policy gradient method for the automatic learning of policies with options. This algorithm uses inference methods to simultaneously improve all of the options available to an agent, and thus can be employed in an off-policy manner, without observing option labels. Experimental results show that the options learned are interpretable. Further, we find that the method presented here is more sample efficient than existing methods, leading to faster and more stable learning of policies with options. | The paper presents a new policy gradient technique for learning options. The option index is treated as latent variable and, in order to compute the policy gradient, the option distribution for the current sample is computed by using a forward pass. Hence, a single sample can be used to update all options and not just the option that has been used for this sample.
The idea of the paper is good, but the novelty is limited. As noted by the authors, the idea of using inference for option discovery has already been presented in Daniel2016. Note that the option discovery process in Daniel2016 is not limited to linear sub-policies; only the policy update strategy is. So the main contribution is to use a new policy update strategy, i.e., policy gradients, for inference-based option discovery. That's fine, but it should be stated more clearly in the paper. The paper is also written very well, and the topic is relevant for the ICLR conference.
However, the paper has two main problems:
- The results are not convincing. In most domains, the performance is similar to that of the A3C algorithm (which does not use inference-based option discovery), so the impact of this paper seems limited.
- One of the main assumptions of the algorithm is wrong. The assumption is that rewards from the past are not correlated with actions in the future, conditioned on the state s_t (otherwise we would always have a correlation), which is needed to use the policy gradient theorem. The assumption is only true for MDPs. However, using the option index as a latent variable yields a POMDP, where this assumption does not hold any more. Example: the reward at time step t-1 depends on the action, which in turn depends on the option o_{t-1}. The action at time step t depends on o_t. Hence, there is a strong correlation between reward r_{t-1} and action a_t, as o_{t-1} and o_t are strongly correlated. o_t is not a conditioning variable of the policy, as it is not part of the state; that is why this assumption does not work any more.
Summary: The paper is well written and presents a good extension of inference based option discovery. However, the results are not convincing and there is a crucial issue in the assumptions of the algorithm. |
iclr_2018_SJ-C6JbRW | Published as a conference paper at ICLR 2018 MASTERING THE DUNGEON: GROUNDED LANGUAGE LEARNING BY MECHANICAL TURKER DESCENT
Contrary to most natural language processing research, which makes use of static datasets, humans learn language interactively, grounded in an environment. In this work we propose an interactive learning procedure called Mechanical Turker Descent (MTD) and use it to train agents to execute natural language commands grounded in a fantasy text adventure game. In MTD, Turkers compete to train better agents in the short term, and collaborate by sharing their agents' skills in the long term. This results in a gamified, engaging experience for the Turkers and a better quality teaching signal for the agents compared to static datasets, as the Turkers naturally adapt the training data to the agent's abilities. | TL;DR of paper: Improved human-in-the-loop data collection using crowdsourcing. The basic gist is that on every round, N mechanical turkers will create their own dataset. Each turker gets a copy of a base model which is trained on their own dataset, and each trained model is evaluated on all the other turker datasets. The top-performing models get a cash bonus, incentivizing turkers to provide high quality training data. A new base model is trained on the pooled-together data of all the turkers, and a new round begins. The results indicate an improvement over static data collection.
This idea of HITL dataset creation is interesting, because the competitive aspect incentivizes turkers to produce high quality data. Judging by the feedback given by turkers in the appendix, the workers seem to enjoy the competitive aspect, which would hopefully lead to better data. The results seem to suggest that MTD provides an improvement over non-HITL methods.
The authors repeatedly emphasize the "collaborative" aspect of MTD, saying that the turkers have to collaborate to produce similar dataset distributions, but this is misleading because the turkers don't get to see other datasets. MTD is mostly competitive, and the authors should reduce the emphasis on a stretched definition of collaboration.
One questionable aspect of MTD is that the turkers somehow have to anticipate what the best examples are for the model to train with. That is, the turkers essentially have to perform the example selection process of active learning with relatively little interaction with the training model. While the turkers are provided immediate feedback when the model already correctly classifies the proposed training example, it seems difficult for turkers to anticipate when an example is too hard, because they have no idea about the learning process.
My biggest criticism is that MTD seems more like an NLP paper than an ICLR paper. I gave a 7 because I like the idea, but I wouldn't be upset if the AC recommends submitting to an NLP conference instead.
iclr_2018_ByJbJwxCW | Recent advances in computing technology and sensor design have made it easier to collect longitudinal or time series data from patients, resulting in a gigantic amount of available medical data. Most of the medical time series lack annotations or even when the annotations are available they could be subjective and prone to human errors. Earlier works have developed natural language processing techniques to extract concept annotations and/or clinical narratives from doctor notes. However, these approaches are slow and do not use the accompanying medical time series data. To address this issue, we introduce the problem of concept annotation for the medical time series data, i.e., the task of predicting and localizing medical concepts by using the time series data as input. We propose Relational Multi-Instance Learning (RMIL) -a deep Multi Instance Learning framework based on recurrent neural networks, which uses pooling functions and attention mechanisms for the concept annotation tasks. Empirical results on medical datasets show that our proposed models outperform various multi-instance learning models. | The paper addresses the classification of medical time-series data by formulating the problem as a multi-instance learning (MIL) task, where there is an instance for each timestep of each time series, labels are observed at the time-series level (i.e. for each bag), and the goal is to perform instance-level and series-level (i.e. bag-level) prediction. The main difference from the typical MIL setup is that there is a temporal relationship between the instances in each bag. The authors propose to model this using a recurrent neural network architecture. The aggregation function which maps instance-level labels to bag-level labels is modeled using a pooling layer (this is actually a nice way to describe multi-instance classification assumptions using neural network terminology). An attention mechanism is also used.
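For concreteness, the kind of architecture being described, with instances given by timesteps and the bag-level prediction obtained by pooling instance-level scores, can be sketched as follows (an illustrative sketch with placeholder names, not the authors' exact model):

```python
import torch
import torch.nn as nn

class TimeSeriesMIL(nn.Module):
    """Instance-level scores from an RNN over timesteps, pooled into a
    bag-level (series-level) prediction. Illustrative only."""
    def __init__(self, in_dim, hidden_dim, n_concepts, pooling="max"):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden_dim, batch_first=True)
        self.instance_head = nn.Linear(hidden_dim, n_concepts)
        self.pooling = pooling

    def forward(self, x):                      # x: [batch, time, features]
        h, _ = self.rnn(x)                     # [batch, time, hidden]
        inst_logits = self.instance_head(h)    # instance-level (per-timestep) scores
        if self.pooling == "max":              # "a bag is positive iff >= 1 positive instance"
            bag_logits, _ = inst_logits.max(dim=1)
        else:                                  # mean pooling as a softer aggregation
            bag_logits = inst_logits.mean(dim=1)
        return bag_logits, inst_logits
```

Max pooling here encodes the standard MI assumption; the threshold-based assumption mentioned in the minor comments below generalizes it.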
The proposed time-series MIL problem formulation makes sense. The RNN approach is novel to this setting, if somewhat incremental. One very positive aspect is that results are reported exploring the impact of the choice of recurrent neural network architecture, pooling function, and attention mechanism. Results on a second dataset are reported in the appendix, which greatly increases confidence in the generalizability of the experiments. One or more additional datasets would have helped further solidify the results, although I appreciate that medical datasets are not always easy to obtain. Overall, this is a reasonable paper with no obvious major flaws. The novelty and impact may be greater on the application side than on the methodology side.
Minor suggestions:
-The term "relational multi-instance learning" seems to suggest a greater level of generality than the work actually accomplishes. The proposed methods can only handle time-series / longitudinal dependencies, not arbitrary relational structure. Moreover, multi-instance learning is typically viewed as an intermediary level of structure "in between" propositional learning (i.e. the standard supervised learning setting) and fully relational learning, so the "relational multi-instance learning" terminology sounds a little strange. Cf.:
De Raedt, L. (2008). Logical and relational learning. Springer Science & Business Media.
-Pg 3, a capitalization typo: "the Multi-instance learning framework"
-The equation for the bag classifier on page 4 refers to the threshold-based MI assumption, which should be attributed to the following paper:
Weidmann, N., Frank, E. & Pfahringer, B. 2003. A two-level learning method for generalized multi-instance problems. In Proceedings of the 14th European Conference on Machine Learning, Springer, 468-479.
(See also: J. R. Foulds and E. Frank. A review of multi-instance learning assumptions. Knowledge Engineering Review, 25(1):1-25, 2010. )
- Pg 5, "Table 1" vs "table 1" - be consistent.
-A comparison to other deep learning MIL methods, i.e. those that do not exploit the time-series nature of the problem, would be valuable. I wouldn't be surprised if other reviewers insist on this. |
iclr_2018_SyjjD1WRb | We establish a theoretical link between evolutionary algorithms and variational parameter optimization of probabilistic generative models with binary hidden variables. While the novel approach is independent of the actual generative model, here we use two such models to investigate its applicability and scalability: a noisy-OR Bayes Net (as a standard example of binary data) and Binary Sparse Coding (as a model for continuous data). Learning of probabilistic generative models is first formulated as approximate maximum likelihood optimization using variational expectation maximization (EM). We choose truncated posteriors as variational distributions in which discrete latent states serve as variational parameters. In the variational E-step, the latent states are then optimized according to a tractable free-energy objective. Given a data point, we can show that evolutionary algorithms can be used for the variational optimization loop by (A) considering the bit-vectors of the latent states as genomes of individuals, and by (B) defining the fitness of the individuals as the (log) joint probabilities given by the used generative model. As a proof of concept, we apply the novel evolutionary EM approach to the optimization of the parameters of noisy-OR Bayes nets and binary sparse coding on artificial and real data (natural image patches). Using point mutations and single-point cross-over for the evolutionary algorithm, we find that scalable variational EM algorithms are obtained which efficiently improve the data likelihood. In general we believe that, with the link established here, standard as well as recent results in the field of evolutionary optimization can be leveraged to address the difficult problem of parameter optimization in generative models. | ## Review summary
Overall, the paper makes an interesting effort to tightly integrate
expectation-maximization (EM) training algorithms with evolutionary algorithms
(EA). However, I found the technical description lacking key details and the
experimental comparisons inadequate. There were no comparisons to non-
evolutionary EM algorithms, even though they exist for the models in question.
Furthermore, the suggested approach lacks a principled way to select
and tune key hyperparameters. I think the broad idea of using EA as a substep
within a monotonically improving free energy algorithm could be interesting,
but needs far more experimental justification.
## Pros / Strengths
+ effort to study more than one model family
+ maintaining monotonic improvement in free energy
## Cons / Limitations
- poor technical description and justification of the fitness function
- lack of comparisons to other, non-EA algorithms
- lack of study of hyperparameter sensitivity
## Paper summary
The paper suggests a variant of the EM algorithm for binary hidden variable
models, where the M-step proceeds as usual but the E-step is different in two
ways. First, following work by J. Lucke et al on Truncated Posteriors, the
true posterior over the much larger space of all possible bit vectors is
approximated by a more tractable small population of well-chosen bit vectors,
each with some posterior weight. Second, this set of bit vectors is updated
using an evolutionary/genetic algorithm. This EA is the core contribution,
since the work on Truncated Posteriors has appeared before in the literature.
The overall EM algorithm still maintains monotonic improvement of a free
energy objective.
Two well-known generative models are considered: Noisy-Or models for discrete
datasets and Binary Sparse Coding for continuous datasets. Each has a
previously known, closed-form M-step (given in supplement). The focus is on
the E-step: how to select the H-dimensional bit vector for each data point.
Experiments on artificial bars data and natural image patch datasets compare
several variants of the proposed method, while varying a few EA method
substeps such as selecting parents by fitness or randomly, including crossover
or not, or using generic or specialized mutation rates.
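To make the summary above concrete, the kind of EA-based E-step being described could look roughly like the following sketch (a schematic illustration with placeholder names and arbitrary choices for the substeps; not the paper's algorithm, several details of which are exactly what the comments below ask the authors to clarify):

```python
import numpy as np

def ea_e_step(log_joint, population, n_generations=5, n_parents=5, p_mut=0.05, rng=None):
    """Schematic EA update of the truncated set of latent bit vectors for one
    data point. `population` is a (K, H) array of 0/1 ints and `log_joint(s)`
    plays the role of the fitness, e.g. log p(s, y_n | theta)."""
    rng = rng or np.random.default_rng()
    pop = np.array(population, dtype=int)
    K, H = pop.shape
    for _ in range(n_generations):
        fitness = np.array([log_joint(s) for s in pop])
        parents = pop[np.argsort(fitness)[-n_parents:]]        # fittest individuals as parents
        children = []
        for _ in range(K):
            a, b = parents[rng.integers(n_parents)], parents[rng.integers(n_parents)]
            cut = rng.integers(1, H)                            # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child = child ^ (rng.random(H) < p_mut)             # point mutations (bit flips)
            children.append(child.astype(int))
        candidates = np.vstack([pop, np.array(children)])
        scores = np.array([log_joint(s) for s in candidates])
        pop = candidates[np.argsort(scores)[-K:]]               # keep the K fittest states
    return pop
```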
## Significance
Combining evolutionary algorithms (EA) within EM has been done previously, as
in Martinez and Vitria (Pattern Recog. Letters, 2000) or Pernkopf and
Bouchaffra (IEEE TPAMI, 2005) for mixture models. However, these efforts seem
to use EA in an "outer loop" to refine different runs of EM, while the present
approach uses EA in a substep of a single run of EM. I guess this is
technically different, but it is already well known that any E-step method
which monotonically improves the free energy is a valid algorithm. Thus, the
paper's significance hinges on demonstrating that the particular E step chosen
is better than alternatives. I don't think the paper succeeded very well at
this: there were no comparisons to non-EA algorithms, or to approaches that
use EA in the "outer loop" as above.
## Clarity of Technical Approach
What is \tilde{log P} in Eq. 7? This seems a fundamental expression. Its
plain-text definition is: "the logarithm of the joint probability where
summands that do not depend on the state s have been elided". To me, this
definition is not precise enough for me to reproduce confidently... is it just
log p(s_n, y_n | theta)? I suggest revisions include a clear mathematical
definition. This omission inhibits understanding of this paper's core
contributions.
Why does the fitness expression F defined in Eq. 7 satisfy the necessary
condition for fitness functions in Eq. 6? This choice of fitness function does
not seem intuitive to me. I think revisions are needed to *prove* this fitness
function obeys the comparison property in Eq. 6.
How can we compute the minimization substep in Eq. 7 (min_s \tilde{logP})? Is
this just done by exhaustive search over bit vectors? I think this needs
clarification.
## Quality of Experiments
The experiments are missing a crucial baseline: non-EA algorithms. Currently
only several varieties of EA are compared, so it is impossible to tell if the
suggested EA strategies even improve over non-EA baselines. As a specific
example, previous work already cited in this paper -- Henniges et al (2000) --
has developed a non-EA EM algorithm for Binary Sparse Coding, which already
uses the truncated posterior formulation. Why not compare to this?
The proposed algorithm has many hyperparameters, including number of
generations, number of parents, size of the latent space H, size of the
truncation, etc. The current paper offers little advice about selecting these
values intelligently, but presumably performance is quite sensitive to these
values. I'd like to see some more discussion of this and (ideally) more
experiments to help practitioners know which parameters matter most,
especially in the EA substep.
Runtime analysis is missing as well: Is runtime dominated by the EA step? How
does it compare to non-EA approaches? How large a dataset can the proposed method scale to?
The reader walks away from the current toy bars experiment somewhat confused.
The Noisy-Or experiment did not favor crossover and favored specialized
mutations, while the BSC experiment reached the opposite conclusions. How does
one design an EA for a new dataset, given this knowledge? Do we need to
exhaustively try all different EA substeps, or are there smarter lessons to
learn?
## Detailed comments
Bottom of page 1: I wouldn't say that "variational EM" is an approximation to
EM. Sometimes moving from EM to variational EM can mean we estimate posteriors
(not point estimates) for both local (example-specific) and global parameters.
Instead, the *approximation* comes simply from restricting the solution space
to gain tractability.
Sec. 2: Make clear earlier that hidden var "s" is assumed to be discrete, not
continuous.
After Mutation section: Remind readers that "N_g" is the number of generations.
iclr_2018_HyTrSegCb | Neural conversational models are widely used in applications like personal assistants and chat bots. These models seem to give better performance when operating on word level. However, for fusion languages like French, Russian and Polish, vocabulary size sometimes become infeasible since most of the words have lots of word forms. To reduce vocabulary size we propose a new pipeline for building conversational models: first generate words in standard form and then transform them into a grammatically correct sentence. For this task we propose a neural network architecture that efficiently employs correspondence between standardised and target words and significantly outperforms character-level models while being 2x faster in training and 20% faster at evaluation. The proposed pipeline gives better performance than character-level conversational models according to assessor testing. | In this work, the authors propose a sequence-to-sequence architecture that learns a mapping from a normalized sentence to a grammatically correct sentence. The proposed technique is a simple modification to the standard encoder-decoder paradigm which makes it more efficient and better suited to this task. The authors evaluate their technique using three morphologically rich languages French, Polish and Russian and obtain promising results.
The morphological agreement task would be an interesting contribution of the paper, with wider potential. But one concern that I have is regarding the evaluation metrics used for it. Firstly, word accuracy rate doesn't seem appropriate, as it does not measure morphological agreement. Secondly, sentence accuracy (w.r.t. the sentences from which the normalized sentences are derived) is not indicative of morphological agreement: even "wrong" sentences in the output could be perfectly valid in terms of agreement. A grammatical error rate (fraction of grammatically wrong sentences produced) would probably be a better measure.
Another concern I have is regarding the quality of the baseline: Additional variants of the baseline models should be considered and the best one reported. Specifically, in the conversation task, have the authors considered switching the order of normalized answer and context in the input? Also, the word order of the normalized answer and/or context could be reversed (as is done in sequence-to-sequence translation models).
Also, many experimental details are missing from the draft:
-- What are the sizes of the train/test sets derived from the OpenSubtitles database?
-- Details of the validation sets used to tune the models.
-- In Section 5.4, no details of the question-answer corpus are provided. How many pairs were extracted? How many were used for training and testing?
-- In Section 5.4.1, how many assessors participated in the evaluation and how many questions were evaluated?
-- In some of the tables (e.g. 6, 7, 8) which show example sentences from Polish, Russian and French, please provide some more information in the accompanying text on how to interpret these examples (since most readers may not be familiar with these languages).
Pros:
-- Efficient model
-- Proposed architecture is general enough to be useful for other sequence-to-sequence problems
Cons:
-- Evaluation metrics for the morphological agreement task are unsatisfactory
-- It would appear that the baselines could be improved further using standard techniques |
iclr_2018_r1l4eQW0Z | Published as a conference paper at ICLR 2018 KERNEL IMPLICIT VARIATIONAL INFERENCE
Recent progress in variational inference has paid much attention to the flexibility of variational posteriors. One promising direction is to use implicit distributions, i.e., distributions without tractable densities as the variational posterior. However, existing methods on implicit posteriors still face challenges of noisy estimation and computational infeasibility when applied to models with high-dimensional latent variables. In this paper, we present a new approach named Kernel Implicit Variational Inference that addresses these challenges. As far as we know, for the first time implicit variational inference is successfully applied to Bayesian neural networks, which shows promising results on both regression and classification tasks. | This paper presents Kernel Implicit Variational Inference (KIVI), a novel class of implicit variational distributions. KIVI relies on a kernel approximation to directly estimate the density ratio. Importantly, the optimal kernel approximation in KIVI has closed-form solution, which allows for faster training since it avoids gradient ascent steps that may soon get "outdated" as the optimization over the variational distribution runs. The paper presents experiments on a variety of scenarios to show the performance of KIVI.
To my knowledge, the idea of estimating the density ratio using kernels is novel. I found it interesting, especially since there is a closed-form solution for this estimate. The closed-form solution involves a matrix inversion, but this shouldn't be an issue, as the matrix size is controlled by the number of samples, which is a parameter that the practitioner can choose. I also found the implicit MMNN architecture proposed in Section 4 interesting.
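For readers wondering why the matrix inversion is benign: in closed-form kernel fits of this type, only an n x n Gram matrix needs to be factorized, where n is the number of samples the practitioner chooses to draw. A generic kernel ridge regression sketch illustrating that mechanism (not the paper's exact ratio estimator; names and kernel choice are placeholders):

```python
import numpy as np

def rbf_gram(X, Z, bandwidth=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def kernel_ridge_fit(X, y, lam=1e-3, bandwidth=1.0):
    # Closed form: alpha = (K + lam * I)^{-1} y ; only an n x n linear system,
    # where n = number of samples (a user-chosen parameter).
    K = rbf_gram(X, X, bandwidth)
    n = X.shape[0]
    alpha = np.linalg.solve(K + lam * np.eye(n), y)
    return lambda Xnew: rbf_gram(Xnew, X, bandwidth) @ alpha
```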
The experiments seem convincing too, although I believe the paper could probably be improved by comparing with other implicit VI methods, such as [Liu & Feng], [Tran et al.], or others.
My major criticism of the paper is the quality of the writing. I found quite a few errors on every page, which significantly affects readability. I strongly encourage the authors to carefully review the entire paper and search for typos, grammatical errors, unclear sentences, etc.
Please find below some further comments broken down by section.
Section 1: In the introduction, it is unclear to me what "protect these models" means. Also, in the second paragraph, the authors talk about "often leads to biased inference". The concept of "biased inference" is unclear. Finally, the sentence "the variational posterior we get in this way does not admit a tractable likelihood" makes no sense to me; how can a posterior admit (or not admit) a likelihood?
Section 3: The first paragraph of the KIVI section is also unclear to me. In Section 3.1, it looks like the cost function L(\hat(r)) is different from the loss in Eq. 1, so it should have a different notation. In Eq. 4, it was unclear to me whether L(r) = J(r). Also, it would be nice to include a brief description of why the expectation in Eq. 4 is taken w.r.t. p(z) instead of q(z), for those readers who are less familiar with [Kanamori et al.]. Finally, the motivation behind the "reverse ratio trick" was unclear to me (the trick is clear, but I didn't fully understand why it's needed).
Section 4: The first paragraph of the example can be improved with a brief discussion of why the methods of [Mescheder et al.] and [Song et al.] "are nor applicable". Also, the paragraph above Eq. 11 ("When modeling a matrix...") was unclear to me.
Section 6: In Figure 1(a), I think there must be something wrong, because it is well-known that VI tends to cover one of the modes of the posterior only due to the form of the KL divergence (in contrast to EP, which should look like the curve in the figure). Additionally, Figure 3(a) (and the explanation in the text) was unclear to me. Finally, I disagree with the discussion regarding overfitting in Figure 3(b): that plot doesn't show overfitting because it is a plot of the training loss (and overfitting occurs on test); instead it looks like an optimization issue that makes the bound decrease.
**** EDITS AFTER AUTHORS' REBUTTAL ****
I increased the rating to 7 after reading the revised version. |
iclr_2018_SySpa-Z0Z | Many regularization methods have been proposed to prevent overfitting in neural networks. Recently, a regularization method has been proposed to optimize the variational lower bound of the Information Bottleneck Lagrangian. However, this method cannot be generalized to regular neural network architectures. We present the activation norm penalty that is derived from the information bottleneck principle and is theoretically grounded in a variation dropout framework. Unlike in previous literature, it can be applied to any general neural network. We demonstrate that this penalty can give consistent improvements to different state of the art architectures both in language modeling and image classification. We present analyses on the properties of this penalty and compare it to other methods that also reduce mutual information. | This paper tries to create a mapping between activation norm penalties and information bottleneck framework using variational dropout framework. While I find the path taken interesting, the paper itself is hard to follow, mostly due to constantly flipping notation (cons section below lists some of the issues) and other linguistic errors. In the current form, this work is somewhere between a theoretical paper and an empirical one, however for a theoretical one it lacks strictness, while for empirical one - novelty.
From a theoretical perspective:
The main claim of the paper seems to be (10); however, it is not formalised as any kind of theorem, and so it lacks strictness. Even under the assumption that it is updated and made more strict, a crucial problem is the claim that, after arriving at
tr[ K(X, X) ] - ln( det[ K(X, X) ] ),
dropping the log determinant is in any way justified while keeping the reasoning/derivation of the whole method sound. The authors state, quote: "As for the determinant of the covariance matrix of Gaussian Process, we cannot easily evaluate or derive its gradient, so we do not include it in our computation." This is not a justification for treating the derivation as a proper connection between penalising the activation norm and the information bottleneck idea. Terms like this will emerge in many other models where one assumes diagonal covariance Gaussians; in fact, the easiest way to justify this penalty is simply to say that one introduces a diagonal Gaussian prior over activations, and that's it: a well-justified penalty, easy to connect to many generalisation bound claims. However, in the current approach the connection is simply not proven in the paper.
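For reference, terms of exactly this form appear in the standard Gaussian KL divergence: for a d-dimensional Gaussian with zero mean and covariance K,

KL( N(0, K) || N(0, I) ) = (1/2) ( tr(K) - ln det(K) - d ),

so keeping the trace term while discarding the log determinant changes the objective rather than merely dropping a constant.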
From a practical perspective:
Activation norm penalties are well-known objects that have been used for many years (the L1 activation penalty for at least 6 years now, see "Deep Sparse Rectifier Neural Networks"; various activation penalties, including L2, changes in L2, etc., in Krueger's PhD dissertation). Consequently, for a strong empirical paper I would expect many more baselines, including those proposed by Krueger et al.
Pros:
- empirical improvements shown on two different classes of problems.
- interesting path through variational dropout is taken to show some equivalences
Cons:
- there is no proper proof of the claimed connection between IB and ANP, as it would require the determinant of K(X, X) to be 1.
- the work is not strict enough for a theoretical paper, and does not include enough comparisons for an empirical one
- the paper is full of typing/formatting/math errors and not-well-explained objects, which make it hard to read; to name a few:
* fonts of objects used in equations change throughout the text - there is a \textbf{W} and a normal W, a \textbf{I} and a normal I, and similarly with X, T, etc., without any explanation. I am assuming fonts are assigned randomly and they represent the same objects.
* dydp in (5) is in the wrong integral
* \sigma switches meaning between non-linearity and variance
* what does it mean to define a normal distribution with 0 variance (as in (1))? Did the authors mean an actual "degenerate Gaussian", which does not have a PDF? But then p(y|t) is used in (5), and under such a definition it does not exist; only the CDF does.
* \Sigma_1 in 3.2 is undefined; was r(t) supposed to follow N(0, \Sigma_1) instead of the written N(0, I)?
iclr_2018_B1QRgziT- | Published as a conference paper at ICLR 2018 SPECTRAL NORMALIZATION FOR GENERATIVE ADVERSARIAL NETWORKS
One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_ projection. | This paper borrows the classic idea of spectral regularization, recently applied to deep learning by Yoshida and Miyato (2017) and use it to normalize GAN objectives. The ensuing GAN, coined SN-GAN, essentially ensures the Lipschitz property of the discriminator. This Lipschitz property has already been proposed by recent methods and has showed some success. However, the authors here argue that spectral normalization is more powerful; it allows for models of higher rank (more non-zero singular values) which implies a more powerful discriminator and eventually more accurate generator. This is demonstrated in comparison to weight normalization in Figure 4. The experimental results are very good and give strong support for the proposed normalization.
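For readers less familiar with the mechanism: spectral normalization divides each weight matrix by an estimate of its largest singular value, typically obtained with a single power-iteration step per training update. A minimal sketch of that estimate (my own illustration, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def spectral_normalize(W, u, n_iters=1, eps=1e-12):
    """Return W / sigma_max(W) with sigma_max estimated by power iteration.
    `u` is a persistent random vector carried across training steps."""
    W_mat = W.reshape(W.size(0), -1)             # treat conv kernels as matrices
    for _ in range(n_iters):
        v = F.normalize(W_mat.t() @ u, dim=0, eps=eps)
        u = F.normalize(W_mat @ v, dim=0, eps=eps)
    sigma = u @ (W_mat @ v)                      # estimated largest singular value
    return W / sigma, u
```

Because only the overall scale of the matrix changes, the ratios between singular values, and hence the rank structure discussed in the comments below, are preserved.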
While the main idea is not new to machine learning (or deep learning), to the best of my knowledge it has not been applied to GANs. The paper is overall well written (though see Comment 3 below); it covers the related work well and includes an insightful discussion about the importance of high-rank models. I am recommending acceptance, though I expect to see a more rounded evaluation of the exact mechanism by which SN improves over the state of the art. More details in the comments below.
Comments:
1. One concern about this paper is that it doesn't fully explain why this normalization works better. I found the discussion about rank to be very intuitive; however, this intuition is not fully tested. Figure 4 reports layer spectra for SN and WN. The authors claim that other methods, like (Arjovsky et al. 2017), also suffer from the same rank deficiency. I would like to see the same spectra included.
2. Continuing on the previous point: maybe there is another mechanism at play, beyond just rank, that gives SN its apparent edge? One way to test the rank hypothesis and better explain this method is to run a couple of truncated-SN experiments. What happens if you run your SN but truncate its spectrum after every iteration in order to make it comparable to the rank of WN? Do you get comparable inception scores? Or does SN still win?
3. Section 4 needs some careful editing for language and grammar. |
iclr_2018_r1lfpfZAb | Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models for learning prevalent patterns in natural language. Yet language generated by RNNs often shows several degenerate characteristics that are uncommon in human language; while fluent, RNN language production can be overly generic, repetitive, and even self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive enough to capture the abstract qualities of good generation such as Grice's Maxims. In this paper, we introduce a general learning framework that can construct a decoding objective better suited for generation. Starting with a generatively trained RNN language model, our framework learns to construct a substantially stronger generator by combining several discriminatively trained models that can collectively address the limitations of RNN generation. Human evaluation demonstrates that text generated by the resulting system is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text. | This paper proposes to bring together multiple inductive biases that hope to correct for inconsistencies in sequence decoding. Building on previous works that utilize modified objectives to generate sequences, this work proposes to optimize for the parameters of a pre-defined combination of various sub-objectives. The human evaluation is straight-forward and meaningful to compensate for the well-known inaccuracies of automatic evaluation.
While the paper points out that they introduce multiple inductive biases that are useful to produce human-like sentences, it is not entirely correct that the objective is being learnt as claimed in portions of the paper. I would like this point to be clarified better in the paper.
I think showing results on grounded generation tasks like machine translation or image-captioning would make a stronger case for evaluating relevance. I would like to see comparisons on these tasks.
----
After reading the paper in detail again and the replies, I am downgrading my rating for this paper. While I really like the motivation and the evaluation proposed by this work, I believe that fixing the mismatch between the goals and the actual approach will make for a stronger work.
As pointed out by other reviewers, while the goals and evaluation seem to be more aligned with Gricean maxims, some components of the objective are confusing. For instance, the length penalty encourages longer sentences, violating quantity, manner (be brief), and potentially relevance. Further, the repetition model addresses the issue of RNNs failing to capture long-term contextual dependencies; how much such a modified objective affects models with attention / hierarchical models is not clear from the formulation.
As pointed out in my initial review, evaluation of relevance on the current task is not entirely convincing. A very wide variety of topics are feasible for a given context sentence. Grounded generation like MT / captioning would have been a more convincing evaluation. For example, Wu et al. (and other MT works) use a coverage term, and this might be one of the indicators of relevance.
Finally, I am not entirely convinced by the update regarding "learning the objective". While I agree with the authors that the objective function is being dynamically updated, the qualities of good language are encoded manually using a wide variety of additional objectives, and only the relative importance of each of them is learnt.
iclr_2018_r1Dx7fbCW | Published as a conference paper at ICLR 2018 GENERALIZING ACROSS DOMAINS VIA CROSS-GRADIENT TRAINING
We present CROSSGRAD, a method to use multi-domain training data to learn a classifier that generalizes to new domains. CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain. Most existing domain adaptation methods attempt to erase domain signals using techniques like domain adversarial training. In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains. We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances. CROSSGRAD parallelly trains a label and a domain classifier on examples perturbed by loss gradients of each other's objectives. This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions. Empirical evaluation on three different applications where this setting is natural establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training. | This paper proposed a domain generalization approach by domain-dependent data augmentation. The augmentation is guided by a network that is trained to classify a data point to different domains. Experiments on four datasets verify the effectiveness of the proposed approach.
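For concreteness, the domain-guided perturbation discussed in question 1 below (x_i + Grad_x J_d) can be sketched as follows; this is my reading of the general idea, with placeholder names, not the paper's exact CROSSGRAD procedure:

```python
import torch
import torch.nn.functional as F

def domain_guided_augment(x, d, domain_classifier, eps=1.0):
    """Perturb inputs along the gradient of the domain loss J_d, so the
    perturbed sample looks like it came from a slightly shifted domain
    while its class label is kept unchanged. Illustrative sketch only."""
    x = x.clone().detach().requires_grad_(True)
    J_d = F.cross_entropy(domain_classifier(x), d)
    grad_x, = torch.autograd.grad(J_d, x)
    return (x + eps * grad_x).detach()          # x_i + eps * grad_x J_d
```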
Strengths:
+ The proposed classification model is domain-dependent, as opposed to being domain-invariant. This is new and differs from most existing works on domain adaptation/generalization, to the best of my knowledge.
+ The experiments show that the proposed method outperforms two baselines. However, more related approaches could be included to strengthen the experiments (see below for details).
Weaknesses:
- The paper studies domain generalization and yet fails to position it in the right literature. By a simple search for "domain generalization" on Google Scholar, I found several existing works on this problem and have listed some below. The authors may consider including them in both the related work and the experiments.
Questions:
1. It is intuitive to directly define the data augmentation by x_i+Grad_x J_d. Why is it necessary to instead define it as the inverse transformation G^{-1}(g') and then go through the approximations to derive the final augmentation?
2. Is the CrossGrad training necessary? What if one trains the network in two steps? Step 1: learn G using J_d and a regularization to avoid misclassification over the labels using the original data. Step 2: Learn the classification network (possibly different from G) by the domain-dependent augmentation.
Saeid Motiian, Marco Piccirilli, Donald A. Adjeroh, and Gianfranco Doretto. Unified deep supervised domain adaptation and generalization. In IEEE International Conference on Computer Vision (ICCV), 2017.
Muandet, K., Balduzzi, D. and Schölkopf, B., 2013. Domain generalization via invariant feature representation. In Proceedings of the 30th International Conference on Machine Learning (ICML-13) (pp. 10-18).
Xu, Z., Li, W., Niu, L. and Xu, D., 2014, September. Exploiting low-rank structure from latent domains for domain generalization. In European Conference on Computer Vision (pp. 628-643). Springer, Cham.
Ghifary, M., Bastiaan Kleijn, W., Zhang, M. and Balduzzi, D., 2015. Domain generalization for object recognition with multi-task autoencoders. In Proceedings of the IEEE international conference on computer vision (pp. 2551-2559).
Gan, C., Yang, T. and Gong, B., 2016. Learning attributes equals multi-source domain generalization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 87-97). |
iclr_2018_H11lAfbCW | The learnability of different neural architectures can be characterized directly by computable measures of data complexity. In this paper, we reframe the problem of architecture selection as understanding how data determines the most expressive and generalizable architectures suited to that data, beyond inductive bias. After suggesting algebraic topology as a measure for data complexity, we show that the power of a network to express the topological complexity of a dataset in its decision boundary is a strictly limiting factor in its ability to generalize. We then provide the first empirical characterization of the topological capacity of neural networks. Our empirical analysis shows that at every level of dataset complexity, neural networks exhibit topological phase transitions and stratification. This observation allowed us to connect existing theory to empirically driven conjectures on the choice of architectures for a single hidden layer neural networks. | The authors propose to use the homology of the data as a measurement of the expressibility of a deep neural network. The paper is mostly experimental. The theoretical section (3.1) is only reciting existing theory (Bianchini et al.). Theorem 3.1 is not surprising either: it basically says spaces with different topologies differ at some parts.
As for the experiments, the idea is tested on synthetic and real data. On synthetic data, it is shown that the number of neurons of the network is correlated with the homology it can express. On real data, the tool of persistent homology is applied. It is observed that the data in the final layer do have non-trivial signal in terms of persistent homology.
I do like the general idea of the paper. It has great potential. However, it is rather undercooked. In particular, it could be improved as follows:
* 1) The main message of the paper is unclear to me. Results observed in the synthetic experiments seem to be a confirmation of the known results by Bianchini et al.: the Betti number a network can express is linear in the number of hidden units, h, when the input dimension n is a constant.
To be convinced, I would like to see much stronger experimental evidence: reporting results on a single-layer network only is unsettling. It is known that network expressibility is highly related to depth (Eldan & Shamir 2016). So what about networks with more layers? Is the stratification observation statistically significant? These experiments are possible for synthetic data.
* 2) The usage of persistent homology is not well justified. A major part of the paper is devoted to persistent homology. It is referred to as a robust computation of the homology and is used in the real data experiments. However, persistent homology itself was not originally invented to recover the homology of a fixed space. It was intended to discover homology groups at all different scales (in terms of the function value). Even with the celebrated stability theorem (Cohen-Steiner et al. 2007) and statistical guarantees (Chazal et al. 2015), the relationship between the Vietoris-Rips filtration persistent homology and the homology of the classifier region/boundary is not well established. To make a solid statement, I suggest authors look into the following papers
Homology and robustness of level and interlevel sets
P Bendich, H Edelsbrunner, D Morozov, A Patel, Homology, Homotopy and Applications 15 (1), 51-72, 2013
Herbert Edelsbrunner, Michael Kerber: Alexander Duality for Functions: the Persistent Behavior of Land and Water and Shore. Proceedings of the 28th Annual Symposium on Computational Geometry, pp. 249-258 (SoCG 2012)
There is also existing work on how the homology of a manifold or stratified space can be recovered from its samples. It could be useful, but the settings are different: in this problem, we have samples from the positive/negative regions, rather than from the classification boundary.
Finally, the gap in concepts carries over to the experiments. When the persistent homology of different real datasets is reported, it is unclear how it reflects the actual topology of the classification region/boundary. There is also a significant amount of approximation due to the inherent computational limitations of persistent homology. In particular, LLE and subsampling are used for the computation, and these methods can significantly hurt the persistent homology computation. A much more proper way is via the sparsification approach.
SimBa: An Efficient Tool for Approximating Rips-Filtration Persistence via Simplicial Batch-Collapse
T. K. Dey, D. Shi and Y. Wang. Euro. Symp. Algorithms (ESA) 2016, 35:1--35:16
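For reference, the Vietoris-Rips persistence computations at issue here are typically run on point samples along the following lines (a minimal sketch assuming the ripser Python package; a sparsified construction such as SimBa above would replace this step):

```python
import numpy as np
from ripser import ripser

# X: point samples, e.g. final-layer activations for one class, possibly
# after subsampling / dimensionality reduction as in the paper.
X = np.random.randn(300, 8)                   # placeholder data for illustration only
dgms = ripser(X, maxdim=1)['dgms']            # persistence diagrams for H0 and H1
h1_lifetimes = dgms[1][:, 1] - dgms[1][:, 0]  # bar lengths; long bars = robust 1-cycles
```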
* 3) Finally, to support the main thesis, it is crucial to show that the topological measure reveals information that existing measures do not. Some baseline methods based on other geometric information (e.g., volume and curvature) are quite necessary.
* 4) Important papers about persistent homology in learning could be cited:
Using persistent homology in deep convolutional neural network:
Deep Learning with Topological Signatures
C. Hofer, R. Kwitt, M. Niethammer and A. Uhl, NIPS 2017
Using persistent homology as kernels:
Sliced Wasserstein Kernel for Persistence Diagrams
Mathieu Carrière, Marco Cuturi, Steve Oudot, ICML 2017.
* 5) Minor comments:
Small typos here and there: y axis label of Fig 5, conclusion section. |
iclr_2018_HJJ23bW0b | Published as a conference paper at ICLR 2018 INITIALIZATION MATTERS: ORTHOGONAL PREDICTIVE STATE RECURRENT NEURAL NETWORKS
Learning to predict complex time-series data is a fundamental challenge in a range of disciplines including Machine Learning, Robotics, and Natural Language Processing. Predictive State Recurrent Neural Networks (PSRNNs) (Downey et al., 2017) are a state-of-the-art approach for modeling time-series data which combine the benefits of probabilistic filters and Recurrent Neural Networks into a single model. PSRNNs leverage the concept of Hilbert Space Embeddings of distributions (Smola et al., 2007) to embed predictive states into a Reproducing Kernel Hilbert Space, then estimate, predict, and update these embedded states using Kernel Bayes Rule. Practical implementations of PSRNNs are made possible by the machinery of Random Features, where input features are mapped into a new space where dot products approximate the kernel well. Unfortunately PSRNNs often require a large number of RFs to obtain good results, resulting in large models which are slow to execute and slow to train. Orthogonal Random Features (ORFs) (Yu et al., 2016) is an improvement on RFs which has been shown to decrease the number of RFs required for pointwise kernel approximation. Unfortunately, it is not clear that ORFs can be applied to PSRNNs, as PSRNNs rely on Kernel Ridge Regression as a core component of their learning algorithm, and the theoretical guarantees of ORF do not apply in this setting. In this paper, we extend the theory of ORFs to Kernel Ridge Regression and show that ORFs can be used to obtain Orthogonal PSRNNs (OPSRNNs), which are smaller and faster than PSRNNs. In particular, we show that OPSRNN models clearly outperform LSTMs and furthermore, can achieve accuracy similar to PSRNNs with an order of magnitude smaller number of features needed. | I was very confused by some parts of the paper that are simple copy-past from the paper of Downey et al. which has been accepted for publication in NIPS. In particular, in section 3, several sentences are taken as they are from the Downey et al.’s paper. Some examples :
« provide a compact representation of a dynamical system
by representing state as a set of predictions of features of future observations. »
« a predictive state is defined as… , where… is a vector of features of future observations and ... is a vector of
features of historical observations. The features are selected such that ... determines the distribution
of future observations … Filtering is the process of mapping a predictive state… »
Even the footnote has been copied & pasted: « For convenience we assume that the system is k-observable: that is, the distribution of all future observations
is determined by the distribution of the next k observations. (Note: not by the next k observations
themselves.) At the cost of additional notation, this restriction could easily be lifted. »
« This approach is fast, statistically consistent, and reduces to simple
linear algebra operations. »
Normally, I should have stopped reviewing, but I decided to continue since those parts only concerned the preliminaries.
A key element of PSRNNs is to use kernel ridge regression as an initialization. The main result here is to show that using orthogonal random features approximates the original kernel well compared to the random Fourier features considered in PSRNNs. This result is formally stated and proved in the paper.
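For context, the ORF construction of Yu et al. (2016) replaces the i.i.d. Gaussian projection matrix of random Fourier features with a row-orthogonal matrix whose row norms are redrawn from the appropriate chi distribution. A brief sketch of that standard construction (my own illustration, not the authors' code):

```python
import numpy as np

def orthogonal_random_features(X, n_features, bandwidth=1.0, rng=None):
    """Gaussian-kernel random features with an orthogonal projection matrix,
    as a drop-in replacement for plain random Fourier features."""
    rng = rng or np.random.default_rng()
    d = X.shape[1]
    blocks = []
    for _ in range(int(np.ceil(n_features / d))):
        G = rng.standard_normal((d, d))
        Q, _ = np.linalg.qr(G)                  # orthogonal directions
        S = np.sqrt(rng.chisquare(d, size=d))   # restore Gaussian-like row norms
        blocks.append(S[:, None] * Q)
    W = np.vstack(blocks)[:n_features] / bandwidth
    proj = X @ W.T
    return np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(n_features)
```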
The paper comes with some experiments in order to empirically demonstrate the superiority of orthogonal random features over RFFs. Three data sets are considered (Swimmer, Mocap and Handwriting).
I find the contribution of the paper very limited. The connection to PSRNNs is very tenuous since the main results are about the regression part; Theorems 2 and 3 make no mention of PSRNNs.
Also, the experiments are not very convincing. The datasets are too small, with low-dimensional observations, and I do not find it very fair to consider LSTMs in such settings.
Some minor remarks:
- p3: We use RFs-> RFFs
- p5: ||X||, you mean |X| the size of the dataset
- p12: Eq (9). You need to add « with probability $1-\rho$ » as in Avron's paper.
- p12: the derivation of Eq (10) from Eq (9) needs to be detailed.
I thank the authors for their detailed answers. Some points have been clarified, but others still raise issues. In particular, I continue to think that the contribution is limited. Accordingly, I did not change my scores.
iclr_2018_ByJ7obb0b | Training methods for deep networks are primarily variants on stochastic gradient descent. Techniques that use (approximate) second-order information are rarely used because of the computational cost and noise associated with those approaches in deep learning contexts. However, in this paper, we show how feedforward deep networks exhibit a low-rank derivative structure. This low-rank structure makes it possible to use second-order information without needing approximations and without incurring a significantly greater computational cost than gradient descent. To demonstrate this capability, we implement Cubic Regularization (CR) on a feedforward deep network with stochastic gradient descent and two of its variants. There, we use CR to calculate learning rates on a per-iteration basis while training on the MNIST and CIFAR-10 datasets. CR proved particularly successful in escaping plateau regions of the objective function. We also found that this approach requires less problem-specific information (e.g. an optimal initial learning rate) than other first-order methods in order to perform well. | This paper proposes to set a global step size for gradient-based optimization algorithms such as SGD and Adam using second-order information. Instead of using second-order information to compute the update directly (as is done in e.g. Newton's method), it is used to estimate the change of the objective function in a pre-computed direction. This is computationally much cheaper than a full Newton step because (a) the Hessian does not need to be inverted and (b) vector-Hessian multiplication is only O(#parameters) for a single sample.
There are many issues.
### runtime and computational issues ###
Firstly, the paper does not clearly specify the algorithm it espouses. It states: "once the step direction had been determined, we considered that fixed, took the average of g^T Hg and g^T ∇f over all of the sample points to produce m(α) and then solved for a single α_j value". You should present pseudo-code for this computation and not leave the reader to determine the detailed order of computation for himself. As it stands, it is not only difficult for the reader to infer these details, but also laborious to determine the computational cost per iteration on some network the reader might wish to apply your algorithm to. Since the paper discusses the computational cost of CR only in vague terms, you should at least provide pseudo-code.
Specifically, consider equation (80) at the very end of the appendix and consider the very last term in that equation. It contains d^2v/dwdw. This is a "heavy" term containing the second derivative of the last hidden layer with respect to weights. You do not specify how you compute this term or quantities involving this term. In a ReLU network, this term is zero due to local linearity, but since you claim that your algorithm is applicable to general networks, this term needs to be analyzed further.
While the precise algorithm you suggest is unclear, its purpose is also unclear. You only use the Hessian to compute the g^T Hg terms, i.e. for Hessian-vector multiplication. But it is well-known that Hessian-vector multiplication is "relatively cheap" in deep networks and this fact has been used for several algorithms, e.g. http://www.iro.umontreal.ca/~lisa/pointeurs/ECML2011_CAE.pdf and https://arxiv.org/pdf/1706.04859.pdf. How is your method for computing g^T Hg different and why is it superior?
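To be explicit about the fact I am referring to: a Hessian-vector product costs roughly two gradient evaluations and O(#parameters) memory, with no need to form or invert H. A toy sketch of the finite-difference version is below (my own illustration, not from the paper; in practice one would use Pearlmutter's exact R-operator on top of backprop rather than numerical differences).

```python
import numpy as np

def num_grad(f, w, eps=1e-5):
    # Central-difference gradient; stands in for the backprop gradient.
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

def hvp(f, w, v, eps=1e-4):
    # Hessian-vector product H @ v from two gradient calls, never forming H.
    return (num_grad(f, w + eps * v) - num_grad(f, w - eps * v)) / (2 * eps)

# Check on f(w) = 0.5 w^T A w, whose Hessian is exactly A.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
A = A + A.T
f = lambda w: 0.5 * w @ A @ w
w, v = rng.normal(size=5), rng.normal(size=5)
print(np.allclose(hvp(f, w, v), A @ v, atol=1e-3))  # True
```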
Also note that the low-rank structure of deep gradients is well-known and not a contribution of this paper. See e.g. https://www.usenix.org/system/files/conference/atc17/atc17-zhang.pdf
### Experiments ###
The experiments are very weak. In a network where weights are initialized to sensible values, your algorithm is shown not to improve upon straight SGD. You only demonstrate superior results when the weights are badly initialized. However, there are a very large number of techniques already that avoid the "SGD on ReLU network with bad initial weights" problem. The most well-known are batch normalization, He initialization and Adam but there are many others. I don't think it's a stretch to consider that problem "solved". Your algorithm is not shown to address any other problems, but what's worse is that it doesn't even seem to address that problem well. While your learning curves are better than straight SGD, I suspect they are well below the respective curves for He init or batchnorm. In any case, you would need to compare your algorithm against these state-of-the-art methods if your goal is to overcome bad initializations. Also, in appendix A, you state that CR can't even address weights that were initialized to values that are too large.
You claim that your algorithm helps with "overcoming plateaus". While I have heard the claim that deep network optimization suffers from intermediate plateaus before, I have not seen a paper studying / demonstrating this behavior. I suggest you cite several papers that do this and then replicate the plateau situations that arose in those papers and show that CR overcomes them, instead of resorting to a plateau situation that is essentially artificially induced by intentionally bad hyperparameter choices.
I do not understand why your initial learning rate for SGD in figures 2 and 3 (0.02 and 0.01 respectively) differs so much from the initial learning rate under CR. Aren't you trying to show that CR can find the "correct" learning rate? Wouldn't that suggest that the initial learning rate for SGD should be comparable to the early learning rates chosen by CR? Wouldn't that suggest you should start SGD with a learning rate of around 2 and 0.35 respectively? Since you are annealing the learning rate for SGD, it's going to decline and get close to 0.02 / 0.01 anyway at some point. While this may not be as good as CR or indeed batchnorm or Adam, the blue constant curve you are showing does not seem to be a fair representation of what SGD can do.
You say the minibatch size is 32. For MNIST, this means that 1 epoch is around 1500 iterations. That means your plots only show the first epoch of training. But MNIST does not converge in 1 epoch. You should show the error curve until convergence is reached. Same for CIFAR.
"we are not interested in network performance measures such as accuracy and validation error" I strongly suspect your readers may be interested in those things. You should show validation classification error or at least training classification error in addition to cross-entropy error.
"we will also focus on optimization iteration rather than wall clock time" Again, your readers care more about the latter. You need to show either error curves by clock time or the total time to convergence or supplement your iteration-based graphs with a detailed discussion of how long an iteration takes.
The scope of the experiments is limited because only a single network architecture is considered, and it is not a state-of-the-art architecture (no convolution, no normalization mechanism, no skip connections).
You state that you ran experiments on Adam, Adadelta and Adagrad, but you do not show the Adam results. You say in the text that they were the least favorable for CR. This suggests that you omitted the detailed results because they were unfavorable to you. This is, of course, unacceptable!
### (Un)suitability of ReLU for second-order analysis ###
You claim to use second-order information over the network to set the step size. Unfortunately, ReLU networks do not have second-order information! They are locally linear. All their nonlinearity is contained in non-differentiable region boundaries. While this may lead to the Hessian being cheaper to compute, it means it is not representative of the actual behavior of the network. In fact, the only second-order information that is brought to bear in your experiments is the second-order information of the error function. I am not saying that this particular second-order information could not be useful, but you need to make a distinction in your paper between network second-order info and error function second-order info and make explicit that you only use the latter in your experiments. As far as I know, most second-order papers use either tanh or a smoothed ReLU (such as the smoothed hinge used recently by Koh & Liang (https://arxiv.org/pdf/1703.04730.pdf)) for experiments to overcome the local linearity.
### The \sigma hyperparameter ###
You claim that \sigma is not as important / hard to set as \alpha in SGD or Adam. You state: "We also found that this approach requires less problem-specific information (e.g. an optimal initial learning rate) than other first-order methods in order to perform well." You have not provided sufficient evidence for this claim. You say that \sigma can be chosen by considering powers of 10. In many networks, choosing \alpha by considering powers of 10 is sufficient! Even if powers of 2 are considered for \alpha, this would reduce the search effort only by a factor of log_2(10). Also, what if the range of \sigma values that need to be considered is larger than the range of \alpha values? Then setting \sigma would take more effort.
You do not give precise protocols how you set \sigma and how you set \alpha for non-CR algorithms. This should be clearly specified in Appendix A as it is central to your argument of easing hyperparameter search.
### Minor points ###
- Your introduction could benefit from a few more citations
- "The rank of the weighted sum of low rank components (as occurs with mini-batch sampling) is generally larger than the rank of the summed components, however." I don't understand this. Every sum can be viewed as a weighted sum and vice versa.
- Equation (8) could be motivated a bit better. I know it derives from Taylor's theorem, but it might be good to discuss how Taylor's theorem (and its assumptions) relate to deep networks.
- why the name "cubic regularization"? shouldn't it be something like "quadratic step size tuning"?
The reason I am giving a 2 instead of a 1 is because the core idea behind the algorithm given seems to me to have potential, but the execution is sorely lacking.
A final suggestion: You advertise as one of your algorithm's upsides that it uses exact Hessian information. However, since you only care about the scale of the second-order term and not its direction, I suspect exact calculation is far from necessary and you could get away with very cheap approximations, using for example techniques such as mean field analysis (e.g. http://papers.nips.cc/paper/6322-exponential-expressivity-in-deep-neural-networks-through-transient-chaos.pdf).
iclr_2018_B1e5ef-C- | A COMPRESSED SENSING VIEW OF UNSUPERVISED TEXT EMBEDDINGS, BAG-OF-n-GRAMS, AND LSTMS
Low-dimensional vector embeddings, computed using LSTMs or simpler techniques, are a popular approach for capturing the "meaning" of text and a form of unsupervised learning useful for downstream tasks. However, their power is not theoretically understood. The current paper derives formal understanding by looking at the subcase of linear embedding schemes. Using the theory of compressed sensing we show that representations combining the constituent word vectors are essentially information-preserving linear measurements of Bag-of-n-Grams (BonG) representations of text. This leads to a new theoretical result about LSTMs: low-dimensional embeddings derived from a low-memory LSTM are provably at least as powerful on classification tasks, up to small error, as a linear classifier over BonG vectors, a result that extensive empirical work has thus far been unable to show. Our experiments support these theoretical findings and establish strong, simple, and unsupervised baselines on standard benchmarks that in some cases are state of the art among word-level methods. We also show a surprising new property of embeddings such as GloVe and word2vec: they form a good sensing matrix for text that is more efficient than random matrices, the standard sparse recovery tool, which may explain why they lead to better representations in practice. | My review reflects the compressive sensing perspective more than that of deep learning.
In general, I find many of the observations in this paper interesting. However, this paper is not strong enough as a theory paper; rather, the value lies perhaps in its fresh perspective.
The paper studies text embeddings through the lens of compressive sensing theory. The authors prove that, for the proposed embedding scheme, certain LSTMs with random initialization are at least as good as the linear classifiers; the theorem is almost a direct application of the RIP of random Rademacher matrices. Several simplifying assumptions are introduced, which render the implication of the main theorem vague, but it can serve as a good start for the hardcore statistical learning-theoretical analysis to follow.
The second contribution of the paper is the (empirical) observation that, in terms of sparse recovery of embedded words, the pretrained embeddings are better than random matrices, the latter being the main focus of compressive sensing theory. Partial explanations are provided, again using results in compressive sensing theory. In my personal opinion, the explanations are opaque and unsatisfactory. An alternative route is suggested in my detailed review.
Finally, extensive experiments are conducted and they are in accordance with the theory.
My main criticism regarding this paper is its narrow scope on compressive sensing, and this really undermines the potential contribution in Section 5.
Specifically, the authors consider only Basis Pursuit estimators for sparse recovery, and they use the RIP of design matrices as the main tool to argue what is explainable by compressive sensing and what is not. This seems somewhat tunnel-visioned to me: there are a variety of estimators for sparse recovery problems, and there are much less restrictive conditions than the RIP of the design matrix that guarantee perfect recovery.
In particular, in Section 5, instead of invoking [Donoho & Tanner 2005], I believe that a more plausible approach is through [Chandrasekaran et al. 2012]. There, a simple deterministic condition (the null space property) for successful recovery is proved. It would be of direct interest to check whether such a condition holds for a pretrained embedding (say GloVe) given some BoWs. Furthermore, it is proved in the same paper that Restricted Strong Convexity (RSC) alone is enough to guarantee successful recovery; RIP is not required at all. While, as the authors argue in Section 5.2, it is easy to see that pretrained embeddings can never possess RIP, they do not rule out the possibility of RSC.
Exactly the same comments above apply to many other common estimators (lasso, Dantzig selector, etc.) in compressive sensing which might be more tolerant to noise.
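For concreteness, the estimators I have in mind are the standard ones (with $A$ the sensing/design matrix and $y$ the measurements; these are textbook formulations, not taken from the paper):
$$\text{(Basis Pursuit)}\qquad \min_x \ \|x\|_1 \quad \text{s.t.}\quad Ax = y,$$
$$\text{(Lasso)}\qquad \min_x \ \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda \|x\|_1,$$
$$\text{(Dantzig selector)}\qquad \min_x \ \|x\|_1 \quad \text{s.t.}\quad \|A^\top (Ax - y)\|_\infty \le \lambda.$$
Each comes with its own, and often weaker-than-RIP, sufficient conditions for recovery, which is exactly why restricting the discussion to Basis Pursuit plus RIP feels narrow.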
Several minor comments:
1. Please avoid the use of "information theory", especially "classical information theory", in the current context. These words should be reserved for studies of Channel Capacity/Source Coding à la Shannon. I understand that in recent years people are expanding the realm of information theory, but as compressive sensing is a fascinating field that deserves its own name, there's no need to mention information theory here.
2. In Theorem 4.1, please be specific about how the l2-regularization is chosen.
3. In Section 4.1, please briefly describe why you need to extend previous analysis to the Lipschitz case. I understood the necessity only through reading proofs.
4. Can the authors briefly comment on the two assumptions in Section 4, especially the second one (on n-cooccurrence)? Is this practical?
5. Page 1, there is a typo in the sentence preceding [Radfors et al., 2017].
6. Page 2, first paragraph of related work, the sentence “Our method also closely related to ...” is incomplete.
7. Page 2, second paragraph of related work, “Pagliardini also introduceD a linear ...”
8. Page 9, conclusion, the beginning sentence of the second paragraph is erroneous.
[1] Venkat Chandrasekaran, Benjamin Recht, Pablo A. Parrilo, Alan S. Willsky, “The Convex Geometry of Linear Inverse Problems”, Foundations of Computational Mathematics, 2012. |
iclr_2018_Bkl1uWb0Z | Previous work has demonstrated the benefits of incorporating additional linguistic annotations such as syntactic trees into neural machine translation. However the cost of obtaining those syntactic annotations is expensive for many languages and the quality of unsupervised learning linguistic structures is too poor to be helpful. In this work, we aim to improve neural machine translation via source side dependency syntax but without explicit annotation. We propose a set of models that learn to induce dependency trees on the source side and learn to use that information on the target side. Importantly, we also show that our dependency trees capture important syntactic features of language and improve translation quality on two language pairs En-De and En-Ru. | This paper describes a method to induce source-side dependency structures in service to neural machine translation. The idea of learning soft dependency arcs in tandem with an NMT objective is very similar to recent notions of self-attention (Vaswani et al., 2017, cited) or previous work on latent graph parsing for NMT (Hashimoto and Tsuruoka, 2017, cited). This paper introduces three innovations: (1) they pass the self-attention scores through a matrix-tree theorem transformation to produce marginals over tree-constrained head probabilities; (2) they explicitly specify how the dependencies are to be used, meaning that rather than simply attending over dependency representations with a separate attention, they select a soft word to attend to through the traditional method, and then attend to that word’s soft head (called Shared Attention in the paper); and (3) they gate when attention is used. I feel that the first two ideas are particularly interesting. Unfortunately, the results of the NMT experiments are not particularly compelling, with overall gains over baseline NMT being between 0.6 and 0.8 BLEU. However, they include a useful ablation study that shows fairly clearly that both ideas (1) and (2) contribute equally to their modest gains, and that without them (FA-NMT Shared=No in Table 2), there would be almost no gains at all. Interesting side-experiments investigate their accuracy as a dependency parser, with and without a hard constraint on the system’s latent dependency decisions.
This paper has some very good ideas, and asks questions that are very much worth asking. In particular, the question of whether a tree constraint is useful in self-attention is very worthwhile. Unfortunately, this is mostly a negative result, with gains over “flat attention” being relatively small. I also like the “Shared Attention” - it makes a lot of sense to say that if the “semantic” attention mechanism has picked a particular word, one should also attend to that word’s head; it is not something I would have thought of on my own. The paper is also marred by somewhat weak writing, with a number of disfluencies and awkward phrasings making it somewhat difficult to follow.
In terms of specific criticisms:
I found the motivation section to be somewhat weak. We need a better reason than morphology to want to do source-side dependency parsing. All published error analyses of strong NMT systems (Bentivogli et al, EMNLP 2016; Toral and Sanchez-Cartagena, EACL 2017; Isabelle et al, EMNLP 2017) have shown that morphology is a strength, not a weakness of these systems, and the sorts of head selection problems shown in Figure 1 are, in my experience, handled capably by existing LSTM-based systems.
The paper mentions “significant improvements” in only two places: the introduction and the conclusion. With BLEU score differences being so low, the authors should specify how statistical significance is measured; ideally using a technique that accounts for the variance of random restarts (i.e.: Clark et al, ACL 2011).
Equation (3): I couldn’t find the definition for H anywhere.
Sentence before Equation (5): I believe there is a typo here, “f takes z_i” should be “f takes u_t”.
First sentence of Section 3: please cite the previous work you are talking about in this sentence.
My understanding was that the dependency marginals in p(z_{i,j}=1|x,\phi) in Equation (11) are directly used as \beta_{i,j}. If I’m correct, that’s probably worth spelling out explicitly in Equation (11): \beta_{i,j} = p(z_{i,j}=1|x,\phi) = …
I don’t don’t feel like the clause between equations (17) and (18), “when sharing attention weights from the decoder with the encoder” is a good description of your clever “shared attention” idea. In general, I found this region of the paper, including these two equations and the text between them, very difficult to follow.
Section 4.4: It’s very very good that you compared to “flat attention”, but it’s too bad for everyone cheering for linguistically-informed syntax that the results weren’t better.
Table 5: I had a hard time understanding Table 5 and the corresponding discussion. What are “production percentages”?
Finally, it would have been interesting to include the FA system in the dependency accuracy experiment (Table 4), to see if it made a big difference there. |
iclr_2018_SJiHOSeR- | An objective of pro-activity in dialog systems is to enhance the usability of conversational agents by enabling them to initiate conversation on their own. While dialog systems have become increasingly popular during the last couple of years, current task oriented dialog systems are still mainly reactive and users tend to initiate conversations. In this paper, we propose to introduce the paradigm of contextual bandits as framework for pro-active dialog systems. Contextual bandits have been the model of choice for the problem of reward maximization with partial feedback since they fit well to the task description. As a second contribution, we introduce and explore the notion of memory into this paradigm. We propose two differentiable memory models that act as parts of the parametric reward estimation function. The first one, Convolutional Selective Memory Networks, uses a selection of past interactions as part of the decision support. The second model, called Contextual Attentive Memory Network, implements a differentiable attention mechanism over the past interactions of the agent. The goal is to generalize the classic model of contextual bandits to settings where temporal information needs to be incorporated and leveraged in a learnable manner. Finally, we illustrate the usability and performance of our model for building a pro-active mobile assistant through an extensive set of experiments. | The paper "CONTEXTUAL MEMORY BANDIT FOR PRO-ACTIVE DIALOG ENGAGEMENT" proposes to address the problem of pro-active dialog engagement by means of a bandit framework that selects dialog situations w.r.t. the context of the system. The authors define a neural architecture managing memory by means of a contextual attention mechanism.
My main concern about this paper is that the proposal is not described well enough. A very large amount of technical detail is missing, which prevents the reader from understanding the model (and from reproducing the experiments). The most important gaps concern the exploration policies, which are not described at all even though they are a central point of the paper. The only discussion given w.r.t. the exploration policy is a very general overview of Thompson Sampling; nothing is said about how it is implemented in the case of the proposed model. How is p(\Theta|D) estimated? The authors give p(\Theta|D) as a product between prior and likelihood, but that is not sufficient to obtain p(\Theta|D): the evidence should also be considered (for instance by using variational inference). Also, what is the prior on the parameters? How is r distributed given a, x and \Theta?
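To illustrate the level of detail I would have expected, here is a minimal sketch of Thompson Sampling with a conjugate Bayesian linear reward model (my own illustration with invented dimensions and parameters, not the authors' model: their reward estimator is a neural memory network, for which an approximate posterior, e.g. variational inference or bootstrapped heads, would be needed instead of the closed-form update below).

```python
import numpy as np

class LinearThompsonSampling:
    """Thompson Sampling with reward model r ~ N(theta^T x, noise^2)
    and Gaussian prior theta ~ N(0, (1/lam) I)."""

    def __init__(self, dim, lam=1.0, noise=0.5, seed=0):
        self.B = lam * np.eye(dim)   # posterior precision
        self.f = np.zeros(dim)       # accumulates x * r / noise^2
        self.noise = noise
        self.rng = np.random.default_rng(seed)

    def choose(self, contexts):
        # contexts: one feature vector per candidate action, shape (K, dim).
        cov = np.linalg.inv(self.B)
        cov = (cov + cov.T) / 2       # symmetrise for numerical safety
        theta = self.rng.multivariate_normal(cov @ self.f, cov)  # posterior draw
        return int(np.argmax(contexts @ theta))

    def update(self, x, r):
        self.B += np.outer(x, x) / self.noise ** 2
        self.f += r * x / self.noise ** 2

# Tiny simulated run with a hidden linear reward.
rng = np.random.default_rng(1)
true_theta = rng.normal(size=8)
agent = LinearThompsonSampling(dim=8)
for t in range(200):
    ctx = rng.normal(size=(5, 8))                  # 5 candidate actions
    a = agent.choose(ctx)
    r = ctx[a] @ true_theta + 0.5 * rng.normal()   # noisy reward for the chosen action
    agent.update(ctx[a], r)
```

Spelling out the analogous choices for the proposed model (posterior approximation, prior, reward likelihood) is exactly what is missing from the paper.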
Also, not enough justification is given for the general idea of the model. The authors should give more intuition about the mechanism they propose. Figure 2 should be able to help, but no reference to this figure is given in the text, so it is very difficult to extract any information from it. The authors only (roughly) describe the architecture without justifying their choices.
Finally, the experiments really fail to demonstrate the relevance of the approach, as only questionable artificial data is used. On the one hand, it appears mandatory to me to include some (even minimal) experiments on real data for such a proposal. On the other hand, the simulated data used here cannot serve as evidence to validate the approach, since it is very far from real scenarios: the trajectories do not depend on what is recommended. Granted, only the recommended places reveal a reward, but this does not appear to be a sufficiently realistic scenario to me. Also, far too few baselines are considered: only different versions of the proposal and a random baseline. A classical contextual bandit baseline (such as LinUCB) would have been a minimum.
Other remarks:
- the definition of q is not given
- the user is part of the context x in the bandit section but not afterwards, where it is denoted as u.
- the notion of time window should be more formally described
- How the context is built is not clear in the experiments section
iclr_2018_ByJHuTgA- | ON THE STATE OF THE ART OF EVALUATION IN NEURAL LANGUAGE MODELS
Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset. | The authors perform a comprehensive validation of LSTM-based word and character language models, establishing that recent claims that other structures can consistently outperform the older stacked LSTM architecture result from failure to fully explore the hyperparameter space. Instead, with more thorough hyperparameter search, LSTMs are found to achieve state-of-the-art results on many of these language modeling tasks.
This is a significant result in language modeling and a milestone in deep learning reproducibility research. The paper is clearly motivated and authoritative in its conclusions but it's somewhat lacking in detailed model or experiment descriptions.
Some further points:
- There are several hyperparameters set to the "standard" or "default" value, like Adam's beta parameter and the batch size/BPTT length. Even if it would be prohibitive to include them in the overall hyperparameter search, the community is curious about their effect and it would be interesting to hear if the authors' experience suggests that these choices are indeed reasonably well-justified.
- The description of the model is ambiguous on at least two points. First, it wasn't completely clear to me what the down-projection is (if it's simply projecting down from the LSTM hidden size to the embedding size, it wouldn't represent a hyperparameter the tuner can set, so I'm assuming it's separate and prior to the conventional output projection). Second, the phrase "additive skip connections combining outputs of all layers" has a couple possible interpretations (e.g., skip connections that jump from each layer to the last layer or (my assumption) skip connections between every pair of layers?).
- Fully evaluating the "claims of Collins et al. (2016), that capacities of various cells are very similar and their apparent
differences result from trainability and regularisation" would likely involve adding a fourth cell to the hyperparameter sweep, one whose design is more arbitrary and is neither the result of human nor machine optimization.
- The reformulation of the problem of deciding embedding and hidden sizes into one of allocating a fixed parameter budget towards the embedding and recurrent layers represents a significant conceptual step forward in understanding the causes of variation in model performance.
- The plot in Figure 2 is clear and persuasive, but for reproducibility purposes it would also be nice to see an example set of strong hyperparameters in a table. The history of hyperparameter proposals and their perplexities would also make for a fantastic dataset for exploring the structure of RNN hyperparameter spaces. For instance, it would be helpful for future work to know which hyperparameters' effects are most nearly independent of other hyperparameters.
- The choice between tied and clipped (Sak et al., 2014) LSTM gates, and their comparison to standard untied LSTM gates, is discussed only minimally, although it represents a significant difference between this paper and the most "standard" or "conventional" LSTM implementation (e.g., as provided in optimized GPU libraries). In addition to further discussion on this point, this result also suggests evaluating other recently proposed "minor changes" to the LSTM architecture such as multiplicative LSTM (Krause et al., 2016)
- It would also have been nice to see a comparison between the variational/recurrent dropout parameterization "in which there is further sharing of masks between gates" and the one with "independent noise for the gates," as described in the footnote. There has been some confusion in the literature as to which of these parameterizations is better or more standard; simply justifying the choice of parameterization a little more would also help. |
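To make the distinction concrete, my reading of the two parameterizations is sketched below (a toy numpy illustration of the masks only, with invented sizes; not the authors' code): in both cases the mask is sampled once per sequence and reused at every timestep, and the question is whether the four LSTM gates share one mask or get independent ones.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, n_gates, keep = 20, 4, 4, 0.5   # timesteps, hidden size, LSTM gates, keep prob

# Shared variant: one mask per sequence, reused for every timestep and every gate.
shared = rng.binomial(1, keep, size=(1, 1, H)) / keep
shared = np.broadcast_to(shared, (T, n_gates, H))

# Independent variant: a separate mask per gate, still fixed across timesteps.
per_gate = rng.binomial(1, keep, size=(1, n_gates, H)) / keep
per_gate = np.broadcast_to(per_gate, (T, n_gates, H))

# Either mask would multiply the recurrent input before each gate's pre-activation,
# e.g. gate_preact[t, g] = W_g @ (mask[t, g] * h_prev) + ...
print(shared[0], per_gate[0], sep="\n")
```

Even a sentence in the paper stating which of these two variants is used, and why, would resolve the confusion.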
iclr_2018_Byd-EfWCb | Experimental evidence indicates that simple models outperform complex deep networks on many unsupervised similarity tasks. Introducing the concept of an optimal representation space, we provide a simple theoretical resolution to this apparent paradox. In addition, we present a straightforward procedure that, without any retraining or architectural modifications, allows deep recurrent models to perform equally well (and sometimes better) when compared to shallow models. To validate our analysis, we conduct a set of consistent empirical evaluations and introduce several new sentence embedding models in the process. Even though this work is presented within the context of natural language processing, the insights are readily applicable to other domains that rely on distributed representations for transfer tasks. | ------ updates to review: ---------
I think the paper is much improved. It is much more clear and the experiments are more focused and more closely connected to the earlier content in the paper. Thanks to the authors for trying to address all of my concerns.
I now better understand in what sense the representation space is optimal. I had been thinking (or perhaps "hoping" is a better word) that the term "optimal" implied maximal in terms of some quantifiable measure, but it's more of an empirical "optimality". This makes the paper an empirical paper based on a reasonable and sensible intuition, rather than a theoretical result. This was a little disappointing to me, but I do still think the paper is marginally above the acceptance threshold and have increased my score accordingly.
------ original review below: --------
This paper is about rethinking how to use encoder-decoder architectures for representation learning when the training objective contains a similarity between the decoder output and the encoding of something else. For example, for the skip-thought RNN encoder-decoder that encodes a sentence and decodes neighboring sentences: rather than use the final encoder hidden state as the representation of the sentence, the paper uses some function of the decoder, since the training objective is to maximize each dot product between a decoder hidden state and the embedding of a context word. If dot product (or cosine similarity) is going to be used as the similarity function for the representation, then it makes more sense, the paper argues, to use the decoder hidden state(s) as the representation of the input sentence. The paper considers both averaging and concatenating hidden states. One difficulty here is that the neighboring sentences are typically not available in downstream tasks, so the paper runs the decoder to produce a predicted sentence one word-at-a-time, using the predicted words as inputs to the decoder RNNs. Then those decoder RNN hidden states are used via averaging or concatenation as the representation of a sentence in downstream tasks.
This paper is a source of contributions, but I think in its current form it is not yet ready for publication.
Pros:
I think it makes sense to pay attention to the training objective when deciding how to use the model for downstream tasks.
I like the empirical investigation of combining RNN and BOW encoders and decoders.
The experimental results show that a single encoder-decoder model can be trained and then two different functions of it can be used at test time for different kinds of tasks (RNN-RNN for supervised transfer and RNN-RNN-mean for unsupervised transfer). I think this is an interesting result.
Cons:
I have several concerns. The first relate to the theoretical arguments and their empirical support.
Regarding the theoretical arguments:
First, the paper discusses the notion of an "optimal representation space" and describes the argument as theoretical, but I don't see much of a theoretical argument here.
As far as I can tell, the paper does not formally define its terms or define in what sense the representation space is "optimal". I can only find heuristic statements like those in the paragraph in Sec 3.2 that begins "These observations...". What exactly is meant formally by statements like "any model where the decoder is log-linear with respect to the encoder" or "that distance is optimal with respect to the model’s objective"? It seems like the paper may want to start with formal definitions of an encoder and a decoder, then define what is meant by a "decoder that is log-linear with respect to the encoder", and define what it means for a distance to be optimal with respect to a training objective. That seems necessary in order to provide the foundation to make any theoretical statement about choices for encoders, decoders, and training objectives. I am still not exactly sure what that theoretical statement might look like, but maybe defining the terms would help the authors get started in heading toward the goal of defining a statement to prove.
Second, the paper's theoretical story seems to diverge almost immediately from the choices used in the model and experimental procedure.
For example, in Sec. 3.2, it is stated that cosine similarity "is the appropriate similarity measure in the case of log-linear decoders." But the associated footnote (footnote 2) seems to admit a contradiction here by noting that actually the appropriate similarity measure is dot product: "Evidently, the correct measure is actually the dot product." This is a bit confusing.
It also raises a question: If cosine similarity will be used later for computing similarity, then why not try using cosine similarity in place of dot product in the model? That is, replace "u_w \cdot h_i" in Eq. (2) with "cos(u_w, h_i)". If the paper's story is correct (and if I understand the ideas correctly), training with cosine similarity should work better than training with dot product, because the similarity function used during training is more similar to that used in testing. This seems like a natural experiment to try. Other natural experiments would be to vary both the similarity function used in the model during training and the similarity function used at test time. The authors' claims could be validated if the optimal choices always use the same choice for the training and test-time similarity functions. That is, if Euclidean distance is used during training, then will Euclidean distance be the best choice at test time?
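To spell out the experiment I am suggesting (in my own notation, which may not match the paper's): if the decoder scores context words log-linearly,
$$p(w \mid h_i) = \frac{\exp(u_w^\top h_i)}{\sum_{w'} \exp(u_{w'}^\top h_i)},$$
then the variant I would like to see trained replaces the dot product with the test-time similarity,
$$p(w \mid h_i) = \frac{\exp\big(\cos(u_w, h_i)/\tau\big)}{\sum_{w'} \exp\big(\cos(u_{w'}, h_i)/\tau\big)},$$
where the temperature $\tau$ is my addition, needed because cosine scores are bounded in $[-1, 1]$.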
Another example of the divergence lies in the use of the skip-thought decoder on downstream tasks. Since the decoder hidden states depend on neighboring sentences and these are considered to be unavailable at test time, the paper "unrolls" the decoder for several steps by using it to predict words which are then used as inputs on the next time step. To me, this is a potentially very significant difference between training and testing. Since much of the paper is about reconciling training and testing conditions in terms of the representation space and similarity function, this difference feels like a divergence from the theoretical story. It is only briefly mentioned at the end of Sec. 3.3 and then discussed again later in the experiments section. I think this should be described in more detail in Section 3.3 because it is an important note about how the model will be used in practice.
It would be nice to be able to quantify the impact (of unrolling the decoder with predicted words) by, for example, using the decoder on a downstream evaluation dataset that has neighboring sentences in it. Then the actual neighboring sentences can be used as inputs to the decoder when it is unrolled, which would be closer to the training conditions and we could empirically see the difference. Perhaps there is an evaluation dataset with ordered sentences so that the authors could empirically compare using real vs predicted inputs to the decoder on a downstream task?
The above experiments might help to better connect the experiments section with the theoretical arguments.
Other concerns, including more specific points, are below:
Sec. 2:
When describing inferior performance of RNN-based models on unsupervised sentence similarity tasks, the paper states: "While this shortcoming of SkipThought and RNN-based models in general has been pointed out, to the best of our knowledge, it has never been systematically addressed in the literature before."
The authors may want to check Wieting & Gimpel (2017) (and its related work) which investigates the inferiority of LSTMs compared to word averaging for unsupervised sentence similarity tasks. They found that averaging the encoder hidden states can work better than using the final encoder hidden state; the authors may want to try that as well.
Sec. 3.2:
When describing FastSent, the paper includes "Due to the model's simplicity, it is particularly fast to train and evaluate, yet has shown state-of-the-art performance in unsupervised similarity tasks (Hill et al., 2015)."
I don't think it makes much sense to cite the SimLex-999 paper in this context, as that is a word similarity task and that paper does not include any results of FastSent. Maybe the Hill et al (2016) FastSent citation was meant instead? But in that case, I don't think it is quite accurate to make the claim that FastSent is SOTA on unsupervised similarity tasks. In the original FastSent paper (Hill et al., 2016), FastSent is not as good as CPHRASE or "DictRep BOW+embs" on average across the unsupervised sentence similarity evaluations. FastSent is also not as good as sent2vec from Pagliardini et al (2017) or charagram-phrase from Wieting et al. (2016).
Sec. 3.3:
In describing skip-thought, the paper states: "While computationally complex, it is currently the state-of-the-art model for supervised transfer tasks (Hill et al., 2016)."
I don't think it is accurate to state that skip-thought is still state-of-the-art for supervised transfer tasks, in light of recent work (Conneau et al., 2017; Gan et al., 2017).
Sec. 3.3:
When discussing averaging the decoder hidden states, the paper states: "Intuitively, this corresponds to destroying the word order information the decoder has learned." I'm not sure this strong language can be justified here. Is there any evidence to suggest that averaging the decoder hidden states will destroy word order information? The hidden states may be representing word order information in a way that is robust to averaging, i.e., in a way such that the average of the hidden states can still lead to the reconstruction of the word order.
Sec. 4:
What does it mean to use an RNN encoder and a BOW decoder? This seems to be a strongly-performing setting and competitive with RNN-mean, but I don't know exactly what this means.
Minor things:
Sec. 3.1:
When defining v_w, it would be helpful to make explicit that it's in \mathbb{R}^d.
Sec. 4:
For TREC question type classification, I think the correct citation should be Li & Roth (2002) instead of Vorhees (2002).
Sec. 5:
I think there's a typo in the following sentence: "Our results show that, for example, the raw encoder output for SkipThought (RNN-RNN) achieves strong performance on supervised transfer, whilst its mean decoder output (RNN-mean) achieves strong performance on supervised transfer." I think "unsupervised" was meant in the latter mention.
References:
Conneau, A., Kiela, D., Schwenk, H., Barrault, L., & Bordes, A. (2017). Supervised Learning of Universal Sentence Representations from Natural Language Inference Data. EMNLP.
Gan, Z., Pu, Y., Henao, R., Li, C., He, X., & Carin, L. (2017). Learning generic sentence representations using convolutional neural networks. EMNLP.
Li, X., & Roth, D. (2002). Learning question classifiers. COLING.
Pagliardini, M., Gupta, P., & Jaggi, M. (2017). Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features. arXiv preprint arXiv:1703.02507.
Wieting, J., Bansal, M., Gimpel, K., & Livescu, K. (2016). Charagram: Embedding words and sentences via character n-grams. EMNLP.
Wieting, J., & Gimpel, K. (2017). Revisiting Recurrent Networks for Paraphrastic Sentence Embeddings. ACL. |
iclr_2018_SyBBgXWAZ | Generative models such as Variational Auto Encoders (VAEs) and Generative Adversarial Networks (GANs) are typically trained for a fixed prior distribution in the latent space, such as uniform or Gaussian. After a trained model is obtained, one can sample the Generator in various forms for exploration and understanding, such as interpolating between two samples, sampling in the vicinity of a sample or exploring differences between a pair of samples applied to a third sample. In this paper, we show that the latent space operations used in the literature so far induce a distribution mismatch between the resulting outputs and the prior distribution the model was trained on. To address this, we propose to use distribution matching transport maps to ensure that such latent space operations preserve the prior distribution, while minimally modifying the original operation. Our experimental results validate that the proposed operations give higher quality samples compared to the original operations. | The authors demonstrate experimentally a problem with the way common latent space operations such as linear interpolation are performed for GANs and VAEs. They propose a solution based on matching distributions using optimal transport. Quite heavy machinery to solve a fairly simple problem, but their approach is practical and effective experimentally (though the gain over the simple SLERP heuristic is often marginal). The problem they describe (and so the solution) deserves to be more widely known.
Major comments:
The paper is quite verbose, probably unnecessarily so. Firstly, the authors devote over 2 pages to examples that distribution mismatches can arise in synthetic cases (section 2). This point is well made by a single example (e.g. section 2.2) and the interesting part is that this is also an issue in practice (experimental section). Secondly, the authors spend a lot of space on the precise derivation of the optimal transport map for the uniform distribution. The fact that the optimal transport computation decomposes across dimensions for pointwise operations is very relevant, and the matching of CDFs, but I think a lot of the mathematical detail could be relegated to an appendix, especially the detailed derivation of the particular CDFs.
Minor comments:
It seems worth highlighting that in practice, for the common case of a Gaussian, the proposed method for linear interpolation is just a very simple procedure that might be called "projected linear interpolation", where the generated vector is multiplied by a constant. All the optimal transport theory is nice, but it's helpful to know that this is simple to apply in practice.
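To make the Gaussian case concrete, a quick numpy check of the constant I mean (my own sketch, not the authors' code; as far as I can tell, this variance-matching rescaling is what the transport map reduces to for a standard Gaussian prior):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, t = 100, 10000, 0.5
z0, z1 = rng.normal(size=(n, d)), rng.normal(size=(n, d))

lerp = (1 - t) * z0 + t * z1                   # plain linear interpolation
fixed = lerp / np.sqrt((1 - t) ** 2 + t ** 2)  # rescale to match the N(0, I) prior

print("prior    mean squared coordinate:", (z0 ** 2).mean())     # ~1.0
print("lerp     mean squared coordinate:", (lerp ** 2).mean())   # ~0.5 at t = 0.5
print("rescaled mean squared coordinate:", (fixed ** 2).mean())  # ~1.0 again
```

Stating this explicitly would help practitioners see how little machinery is needed in the Gaussian case.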
Might I suggest a very simple approach to fixing the distribution mismatch issue? Train with a spherical uniform prior. When interpolating, project the linear interpolation back to the sphere. This matches distribution, and has the attractive property that the entire geodesic between two points lies in a region with typical probability density. This would also work for vicinity sampling.
In section 1, overfitting concerns seem like a strange way to motivate the desire for smoothness. Overfitting is relatively easy to compensate for, and investigating the latent space is interesting regardless.
When discussing sampling from VAEs as opposed to GANs, it would be good to mention that one has to sample from p(x | z) not just p(z).
Lots of math typos such as t - 1 should be 1 - t in (2), "V times a times r" instead of "Var" in (3) and "s times i times n" instead of "sin", etc, sqrt(1) * 2 instead of sqrt(12), inconsistent bolding of vectors. Also strange use of blackboard bold Z to mean a vector of random variables instead of the integers.
Could cite an existing source for the fact that most mass for a Gaussian is concentrated on a thin shell (section 2.2), e.g. David MacKay Information Theory, Inference and Learning Algorithms.
At the end of section 2.4, a plot of the final 1D-to-1D optimal transport function (for a few different values of t) for the uniform case would be incredibly helpful.
Section 3 should be a subsection of section 2.
For both SLERP and the proposed method, there's quite a sudden change around the midpoint of the interpolation in Figure 2. It would be interesting to plot more points around the midpoint to see the transition in more detail. (A small inkling that samples from the proposed approach might change fastest qualitatively near the midpoint of the interpolation can perhaps be seen in Figure 1, since the angle is changing fastest there?)
iclr_2018_Skj8Kag0Z | STABILIZING ADVERSARIAL NETS WITH PREDICTION METHODS
Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train. These difficulties come from the fact that optimal weights for adversarial nets correspond to saddle points, and not minimizers, of the loss function. The alternating stochastic gradient methods typically used for such problems do not reliably converge to saddle points, and when convergence does happen it is often highly sensitive to learning rates. We propose a simple modification of stochastic gradient descent that stabilizes adversarial networks. We show, both in theory and practice, that the proposed method reliably converges to saddle points, and is stable with a wider range of training parameters than a non-prediction method. This makes adversarial networks less likely to "collapse," and enables faster training with larger learning rates. | NOTE:
I'm very willing to change my recommendation if I turn out to be wrong
about the issues I'm addressing and if certain parts of the experiments are fixed.
Having said that, I do (think I) have some serious issues:
both with the experimental evaluation and with the theoretical results.
I'm pretty sure about the experimental evaluation and less sure about the theoretical results.
THEORETICAL CLAIMS:
These are the complaints I'm not as sure about:
Theorem 1 assumes that L is convex/concave.
This is not generally true for GANs.
That's fine and it doesn't necessarily make the statement useless, but:
If we are willing to assume that L is convex/concave,
then there already exist other algorithms that will provably converge
to a saddle point (I think). [1] contains an explanation of this.
Given that there are other algorithms with the same theoretical guarantees,
and that those algorithms don't magically make GANs work better,
I am much less convinced about the value of your theorem.
In [0] they show that GANs trained with simultaneous gradient descent are locally asymptotically stable,
even when L is not convex/concave.
This seems like it makes your result a lot less interesting, though perhaps I'm wrong to think this?
Finally, I'm not totally sure you can show that simultaneous gradient descent won't converge
as well under the assumptions you made.
If you actually can't show that, then the theorem *is* useless,
but it's also the thing I've said that I'm the least sure about.
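For what it's worth, on the simplest convex-concave example L(u, v) = uv, plain simultaneous
gradient descent/ascent does spiral away from the saddle point at the origin for any fixed step
size, as a few lines confirm (my own toy example, not from the paper; whether this counts as
"under the assumptions you made" is exactly the part I'm unsure about):

```python
import numpy as np

# min_u max_v L(u, v) = u * v; the saddle point is (0, 0).
# Each simultaneous step multiplies the distance to it by sqrt(1 + lr^2).
u, v, lr = 1.0, 1.0, 0.1
for step in range(201):
    u, v = u - lr * v, v + lr * u   # dL/du = v, dL/dv = u
    if step % 50 == 0:
        print(step, np.hypot(u, v))  # monotonically increasing
```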
EXPERIMENTAL EVALUATION:
Regarding the claims of being able to train with a higher learning rate:
I would consider this a useful contribution if it were shown that (by some measure of GAN 'goodness')
a high goodness was achieved faster because a higher learning rate was used.
Your experiments don't support this claim presently, because you evaluate all the models at the same step.
In fact, it seems like both evaluated Stacked GAN models get worse performance with the higher learning rate.
This calls into question the usefulness of training with a higher learning rate.
The performance is not a huge amount worse though (based on my understanding of Inception Scores),
so if it turns out that you could get that performance
in 1/10th the time then that wouldn't be so bad.
Regarding the experiment with Stacked GANs, the scores you report are lower than what they report [2].
Their reported mean score for joint training is 8.59.
Are the baseline scores you report from an independent reproduction?
Also, the model they have trained uses label information.
Does your model use label information?
Given that your reported improvements are small, it would be nice to know what the proposed mechanism is by
which the score is improved.
With a score of 7.9 and a standard deviation of 0.08, presumably none of the baseline model runs
had 'stability issues', so it doesn't seem like 'more stable training' can be the answer.
Finally, papers making claims about fixing GAN stability should support those claims by solving problems
with GANs that people previously had a hard time solving (due to instability).
I don't believe this is true of CIFAR10 (especially if you're using the class information).
See [3] for an example of a paper that does this by generating 128x128 Imagenet samples with a single generator.
I didn't pay as much attention to the non-GAN experiments because
a) I don't have as much context for evaluating them, because they are a bit non-standard.
b) I had a lot of issues with the GAN experiments already and I don't think the paper should be accepted unless those are addressed.
[0] https://arxiv.org/abs/1706.04156 (Gradient Descent GAN Optimization is Locally Stable)
[1] https://arxiv.org/pdf/1705.07215.pdf (On Convergence and Stability of GANs)
[2] https://arxiv.org/abs/1612.04357 (Stacked GAN)
[3] https://openreview.net/forum?id=B1QRgziT (Spectral Regularization for GANs)
EDIT:
As discussed below, I have slightly raised my score.
I would raise it more if more of my suggestions were implemented (although I'm aware that the authors don't have much (any?) time for this - and that I am partially to blame for that, since I didn't respond that quickly).
I have also slightly raised my confidence.
This is because now I've had more time to think about the paper, and because the authors didn't really address a lot of my criticisms (which to me seems like evidence that some of my criticisms were correct). |
iclr_2018_SJSVuReCZ | Regularization is a big issue for training deep neural networks. In this paper, we propose a new information-theory-based regularization scheme named SHADE for SHAnnon DEcay. The originality of the approach is to define a prior based on conditional entropy, which explicitly decouples the learning of invariant representations in the regularizer and the learning of correlations between inputs and labels in the data fitting term. We explain why this quantity makes our model able to achieve invariance with respect to input variations. We empirically validate the efficiency of our approach to improve classification performances compared to standard regularization schemes on several standard architectures. | Summary:
The paper presents an information theoretic regularizer for deep learning
algorithms. The regularizer aims to enforce compression of the learned
representation while conditioning upon the class label, thus preventing the
learned code from being constant across classes. The presentation of the Z
latent variable used to simplify the calculation of the entropy H(Y|C) is
confusing and needs revision, but otherwise the paper is interesting.
Major Comments:
- The statement that I(X;Y) = I(C;Y) + H(Y|C) relies upon several properties
of Y which are not apparent in the text (namely that Y is a function of X,
so I(X;Y) should be maximal, and Y is a smaller code space than X, so I(X;Y)
should equal H(Y)). If Y is a larger code space than X then it should still be
true, but the logic is more complicated (the identities I have in mind are
spelled out after this list of comments).
- The latent code for Z is unclear. Given the use of ReLUs it seems like Y
will be 0 or positive, and Z will be 0 when Y is 0 and 1 otherwise, so I'm
unclear as to when the value H(Y|Z) will be non-zero. The data is then
partitioned within a batch based on this Z value, and Monte Carlo sampling is
used to estimate the variance of Y conditioned on Z, but it's really unclear
how this behaves as a regularizer, how the z is sampled for each Monte
Carlo run, and how this influences the gradient. The discussion in Appendix C
doesn't mention how the Z values are generated.
- The discussion on how this method differs from the information bottleneck is
odd, as the bottleneck is usually minimising the encoding mutual information
I(X;Y) minus the decoding mutual information I(Y;C). So directly minimising
H(Y|C) is similar to the IB, and also minimising H(Y|C) will affect I(C;Y) as
I(C;Y) = H(Y) - H(Y|C).
- The fine tuning experiments (Section 4.2) contain no details on the
parameters of that tuning (e.g. gradient optimiser, number of epochs,
batch size, learning rates etc).
- Section 4.4 is obvious, and I'd consider it a bug if regularising with label
information performed worse than regularising without label information.
Essentially it's still adding supervision after you've removed the
classification loss, so it's natural that it would perform better. This
experiment could be moved to the appendix without hurting the paper.
- In appendix A an upper bound is given for the reconstruction error in terms
of the conditional entropy. This bound should be related to one of the many
upper bounds (e.g. Hellman & Raviv) for the Bayes rate of a predictor, as
there is a fairly wide literature in this area.
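Spelling out the chain of identities behind my first comment above, under the assumption that Y is
a deterministic function of X (my reading of what the paper needs):
$$I(X;Y) = H(Y) - H(Y|X) = H(Y), \qquad \text{since } H(Y|X) = 0 \text{ when } Y = f(X),$$
$$H(Y) = I(Y;C) + H(Y|C) \quad\Longrightarrow\quad I(X;Y) = I(C;Y) + H(Y|C).$$
If the paper intends Y to be stochastic given X, the first equality no longer holds and the
decomposition needs a different justification.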
Minor Comments:
- The authors do not state what kind of input variations they are trying to
make the model invariant to, and as it applies to CNNs there are multiple
different kinds, many of which are not amenable to a regularization based
system for inducing invariance.
- The authors should remind the reader once that I(X;Y) = H(Y) - H(Y|X) = H(X) -
H(X|Y), as this fact is used multiple times throughout the paper, and it may
not necessarily be known by readers in the deep learning community.
- Computing H(Y|C) does not necessarily require computing c separate
entropies, there are multiple different approaches for computing this
entropy.
- The exposition in section 3 could be improved by saying that H(X|Y) measures
how much the representation compresses the input, with high values meaning
large amounts of compression, as much of X is thrown away when generating Y.
- The figures are difficult to read when printed in grayscale, the graphs
should be made more readable when printed this way (e.g. different symbols,
dashed lines etc).
- There are several typos (e.g. pg 5 "staking" -> "stacking"). |
iclr_2018_ryRh0bb0Z | MULTI-VIEW DATA GENERATION WITHOUT VIEW SUPERVISION
The development of high-dimensional generative models has recently gained a great surge of interest with the introduction of variational auto-encoders and generative adversarial neural networks. Different variants have been proposed where the underlying latent space is structured, for example, based on attributes describing the data to generate. We focus on a particular problem where one aims at generating samples corresponding to a number of objects under various views. We assume that the distribution of the data is driven by two independent latent factors: the content, which represents the intrinsic features of an object, and the view, which stands for the settings of a particular observation of that object. Therefore, we propose a generative model and a conditional variant built on such a disentangled latent space. This approach allows us to generate realistic samples corresponding to various objects in a high variety of views. Unlike many multiview approaches, our model doesn't need any supervision on the views but only on the content. Compared to other conditional generation approaches that are mostly based on binary or categorical attributes, we make no such assumption about the factors of variations. Our model can be used on problems with a huge, potentially infinite, number of categories. We experiment it on four image datasets on which we demonstrate the effectiveness of the model and its ability to generalize. | The paper proposes a GAN-based method for image generation that attempts to separate latent variables describing fixed "content" of objects from latent variables describing properties of "view" (all dynamic properties such as lighting, viewpoint, accessories, etc). The model is further extended for conditional generation and demonstrated on a range of image benchmark data sets.
The core idea is to train the model on pairs of images corresponding to the same content but varying in views, using adversarial training to discriminate such examples from generated pairs. This is a reasonable procedure and it seems to work well, but also conceptually quite straightforward -- this is quite likely how most people working in the field would solve this problem, standard GAN techniques are used for training the generator and discriminator, and the network architecture is directly borrowed from Radford et al. (2015) and not even explained at all in the paper. The conditional variant is less obvious, requiring two kinds of negative images, and again the proposed approach seems technically sound.
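To make the core recipe concrete, here is a minimal sketch of the kind of pair-level adversarial objective described above; the architectures, code dimensions and optimiser settings are my own placeholders, not details taken from the paper.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64 + 16, 784), nn.Tanh())            # generator: [content; view] -> flat image
D = nn.Sequential(nn.Linear(2 * 784, 256), nn.ReLU(), nn.Linear(256, 1))   # discriminator on image pairs
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def step(real_pair):                                   # real_pair: two views of the same object, shape (B, 2*784)
    B = real_pair.size(0)
    c = torch.randn(B, 64)                             # one shared content code per generated pair
    v1, v2 = torch.randn(B, 16), torch.randn(B, 16)    # two independent view codes
    fake_pair = torch.cat([G(torch.cat([c, v1], 1)), G(torch.cat([c, v2], 1))], 1)

    # Discriminator: real pairs (same content, different views) vs. generated pairs.
    d_loss = bce(D(real_pair), torch.ones(B, 1)) + bce(D(fake_pair.detach()), torch.zeros(B, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: produce pairs that the pair-level discriminator accepts.
    g_loss = bce(D(fake_pair), torch.ones(B, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()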
Given the simplicity of the algorithmic choices, the potential novelty of the paper lies more in the problem formulation itself, which considers the question of separating two sets of latent variables from each other in setups where one of them (the "view") can vary from pair to pair in arbitrary manner and no attributes characterising the view are provided. This is an interesting problem setup, but not novel as such and unfortunately the paper does not do a very good job in putting it into the right context. The work is contrasted only against recent GAN-based image generation literature (where covariates for the views are often included) and the aspects related to multi-view learning are described only at the level of general intuition, instead of relating to the existing literature on the topic. The only relevant work cited from this angle is Mathieu et al. (2016), but even that is dismissed lightly by saying it is worse in generative tasks. How about the differences (theoretical and empirical) between the proposed approach and theirs in disentangling the latent variables? One would expect to see more discussion on this, given the importance of this property as motivation for the method.
The generative story using three sets of latent variables, one shared, to describe a pair of objects corresponds to inter-battery factor analysis (IBFA) and is hence very closely related to canonical correlation analysis as well (Tucker "An inter-battery method of factor analysis", Psychometrika, 1958; Klami et al. "Bayesian canonical correlation analysis", JMLR, 2013). Linear CCA naturally would not be sufficient for generative modeling and its non-linear variants (e.g. Wang et al. "Deep variational canonical correlation analysis", arXiv:1610.03454, 2016; Damianou et al. "Manifold relevance determination", ICML, 2012) would not produce visually pleasing generative samples either, but the relationship is so close that these models have even been used for analysing setups identical to yours (e.g. Li et al. "Cross-pose face recognition by canonical correlation analysis", arXiv:1507.08076, 2015) but with goals other than generation. Consequently, the reader would expect to learn something about the relationship between the proposed method and the earlier literature building on the same latent variable formulation. A particularly interesting question would be whether the proposed model actually is a direct GAN-based extension of IBFA, and if not then how does it differ. Use of adversarial training to encourage separation of latent variables is clearly a reasonable idea and quite likely does better job than the earlier solutions (typically based on some sort of group-sparsity assumption in shared-private factorisation) with the possible or even likely exception of Mathieu at al. (2016), and aspects like this should be explicitly discussed to extend the contribution from pure image generation to multi-view literature in general.
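For reference, the inter-battery factor analysis structure I am referring to, in its linear-Gaussian form (my notation): with a shared content code z and view-specific codes z_1, z_2, all standard normal,
\[
x_1 = W_1 z + B_1 z_1 + \epsilon_1, \qquad x_2 = W_2 z + B_2 z_2 + \epsilon_2 .
\]
The proposed model can be read as replacing the linear maps with a shared deep generator and the Gaussian likelihoods with a pair-level adversarial criterion; making this correspondence explicit, and explaining what the adversarial training buys over the usual shared-private factorisations, would considerably strengthen the contribution.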
The empirical experiments are somewhat non-informative, relying heavily on visual comparisons and only satisfying the minimum requirement of demonstrating that the method does its job. The results look aesthetically more pleasing than the baselines, but the reader does not learn much about how the method actually behaves in practice; when does it break down, how sensitive it is to various choices (network structure, learning algorithm, amount of data, how well the content and view can be disentangled from each other, etc.). In other words, the evaluation is a bit lazy somewhat in the same sense as the writing and treatment of related work; the authors implemented the model and ran it on a collection of public data sets, but did not venture further into scientific reporting of the merits and limitations of the approach.
Finally, Table 1 seems to have some min/max values the wrong way around.
Revision of the review in light of the author response:
The authors have adequately addressed my main remarks, and while doing so have improved both the positioning of the paper amongst relevant literature and the somewhat limited empirical comparisons. In particular, the authors now discuss alternative multi-view generative models not based on GANs and the revised paper includes considerably extended set of numerical comparisons that better illustrate the advantage over earlier techniques. I have increased my preliminary rating to account for these improvements. |
iclr_2018_SJd0EAy0b | Many types of relations in physical, biological, social and information systems can be modeled as homogeneous or heterogeneous concept graphs. Hence, learning from and with graph embeddings has drawn a great deal of research interest recently, but only ad hoc solutions have been obtained thus far. In this paper, we conjecture that the one-shot supervised learning mechanism is a bottleneck in improving the performance of the graph embedding learning algorithms, and propose to extend this by introducing a multi-shot unsupervised learning framework. Empirical results on several real-world data sets show that the proposed model consistently and significantly outperforms existing state-of-the-art approaches on knowledge base completion and graph based multi-label classification tasks. | The paper proposes a new method to compute embeddings of multirelational graphs. In particular, the paper proposes so-called E-Cells and R-Cells to answer queries of the form (h,r,?), (?,r,t), and (h,?,t). The proposed method (GEN) is evaluated on standard datasets for link prediction as well as datasets for node classification.
The paper tackles an interesting problem, as learning from graphs via embedding methods has become increasingly important. The experimental results of the proposed model, especially for the node classification tasks, look promising. Unfortunately, the paper makes a number of claims which are not justified or seem to result from misconceptions about related methods. For instance, the abstract labels prior work as "ad hoc solutions" and claims to propose a principled approach. However, I do not see how the proposed method is more principled than previously proposed methods. For instance, methods such as RESCAL, TransE, HolE or ComplEx can be motivated as compositional models that reflect the compositional structure of relational data. Furthermore, RESCAL-like models can be linked to prior research in cognitive science on relational memory [3]. HolE explicitly motivates its modeling through its relation to models for associative memory.
Furthermore, due to their compositional nature, these models are all able to answer the queries considered in the paper (i.e., (h,r,?), (h,?,t), (?,r,t)) and are implicitly trained to do so. The HolE paper discusses this for instance when relating the model to associative memory. For RESCAL, [4] shows how even more complicated queries involving logical connectives and quantification can be answered. It is therefore not clear how the proposed method improves over these models.
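To illustrate what I mean by compositional models answering these queries directly, here is a minimal sketch with a TransE-style score (the embeddings and dimensions are random placeholders, not the ones used in the paper):

import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations, dim = 1000, 20, 50
E = rng.normal(size=(num_entities, dim))    # entity embeddings
R = rng.normal(size=(num_relations, dim))   # relation embeddings

def answer_tail(h, r, k=10):
    # (h, r, ?): rank all candidate tails by the TransE score -||e_h + r_r - e_t||.
    scores = -np.linalg.norm(E[h] + R[r] - E, axis=1)
    return np.argsort(-scores)[:k]

def answer_head(r, t, k=10):
    # (?, r, t): rank all candidate heads the same way.
    scores = -np.linalg.norm(E + R[r] - E[t], axis=1)
    return np.argsort(-scores)[:k]

def answer_relation(h, t, k=5):
    # (h, ?, t): rank all candidate relations.
    scores = -np.linalg.norm(E[h] + R - E[t], axis=1)
    return np.argsort(-scores)[:k]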
With regard to the evaluation: It is nice that the authors provided an evaluation which compares to several SOTA methods. However, it is unclear under which setting these results were obtained. In particular, how were the hyperparameters for each model chosen and which parameter ranges were considered in the grid search? Appendix B.2 in the supplementary seems to specify the parameter setting for GEN, but it is unclear whether the same parameters were chosen for the competing models and whether they were trained with similar methods (e.g., dropout, learning rate decay etc.). The big difference in performance of HolE and ComplEx is also surprising, as they are essentially the same model (e.g. see [1,2]). It is therefore not clear to me which conclusions we can draw from the reported numbers.
Further comments:
- p.3: The statement "This is the actual way we humans learn the meaning of concepts expressed by a statement" requires justification
- p.4: The authors state that the model is trained unsupervised, but eq. 10 clearly uses supervised information in the form of labels.
- p.4: In 3.1, E-cells are responsible for answering queries of the form (h,r,?) and (?, r, t), while Section 3.2 says E-Cells are used to answer (h, ?, t). I assume in the latter case, the task is actually to answer (h,r,?)?
- p.2: Making a closed-world assumption is quite problematic in this context, especially when taking a principled approach. Many graphs such as Freebase are very incomplete and make an explicit open-world assumption.
- The paper uses a unusual definition of one-shot/multi-shot learning, which makes it confusing to read at first. The authors might consider using different terms to improve readability.
- Paper would benefit if the model is presented earlier. GEN Cells are defined only in Section 3.2, but the model is discussed earlier. Reversing the order might improve presentation.
[1] K. Hayashi et al: "On the Equivalence of Holographic and Complex Embeddings for Link Prediction", 2017
[2] T.Trouillon et al: "Complex and holographic embeddings of knowledge graphs: a comparison", 2017
[3] G. Halford et al: "Processing capacity defined by relational complexity: Implications for comparative, developmental, and cognitive psychology", 1998.
[4] D. Krompaß et al: "Querying factorized probabilistic triple databases", 2014 |
iclr_2018_rJrTwxbCb | We study the properties of common loss surfaces through their Hessian matrix. In particular, in the context of deep learning, we empirically show that the spectrum of the Hessian is composed of two parts: (1) the bulk, centered near zero, and (2) outliers away from the bulk. We present numerical evidence and mathematical justifications for the following conjectures laid out in earlier work: Fixing data, increasing the number of parameters merely scales the bulk of the spectrum; fixing the dimension and changing the data (for instance adding more clusters or making the data less separable) only affects the outliers. We believe that our observations have striking implications for non-convex optimization in high dimensions. First, the flatness of such landscapes (which can be measured by the singularity of the Hessian) implies that classical notions of basins of attraction may be quite misleading, and that the discussion of wide/narrow basins may be in need of a new perspective around over-parametrization and redundancy that are able to create large connected components at the bottom of the landscape. Second, the dependence of a small number of large eigenvalues on the data distribution can be linked to the spectrum of the covariance matrix of gradients of model outputs. With this in mind, we may reevaluate the connections within the data-architecture-algorithm framework of a model, hoping that it would shed light on the geometry of high-dimensional and non-convex spaces in modern applications. In particular, we present a case that links the two observations: small and large batch gradient descent appear to converge to different basins of attraction but we show that they are in fact connected through their flat region and so belong to the same basin. | This paper has at its core an interesting, novel, tentative claim, backed up by simple experiments, that small batch gradient descent and large batch gradient descent may converge to points in the same basin of attraction, contrary to the discussion (but not the actual experimental results) of Keskar et al. (2016). In general, there is a pressing need for insight into the qualitative behavior of gradient-based optimization and this area is of immense interest to many machine learning practitioners. Unfortunately the interesting tentative insights are surrounded by many unsubstantiated and only tangentially related theoretical discussions. Overall the paper has the appearance of lacking a sharp focus. This is a shame since I found the core of the paper very interesting and thought provoking.
Major comments:
While the paper has some interesting tentative experimental insights, the relationship between theory and experiment is complicated. The theoretical claims are vague and wide ranging, and are not all individually well supported or even tested by the experiments. Rather than including lots of small potential insights which the authors have had about what may be going on during gradient-based optimization, I'd prefer to see a paper with much tighter focus with a small number of theoretical claims well supported by experiments (it's fine if the experiments are simplistic as here; that's still interesting).
A large amount of the paper hinges on being able to ignore the second term in (6), and this fact is referred to many times, but the theoretical and experimental justification for this claim is very thin.
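For readers, here is my reconstruction of the decomposition in question (I cannot be certain it matches the paper's equations (5)-(6) exactly): writing the loss as an average of l(f_i(w)), where f_i(w) is the network output on example i, the Hessian splits as
\[
\nabla_w^2 L(w) = \frac{1}{N}\sum_{i=1}^{N} \Big[ l''\big(f_i(w)\big)\, \nabla_w f_i(w)\, \nabla_w f_i(w)^{\top} + l'\big(f_i(w)\big)\, \nabla_w^2 f_i(w) \Big],
\]
and the claim under discussion is that the second term, the one involving the curvature of the network itself, can be neglected. That is precisely the step that needs stronger theoretical or experimental support.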
The authors mention overparameterization repeatedly, and it's in the title, but they never define it. It also doesn't appear to take center stage in their experimental investigations (if it is in fact critical to the experiments then it should be made clearer how).
Throughout this paper there is not a clear distinction between eigenvalues being zero and eigenvalues being close to zero, or similarly between the Hessian being singular and ill-conditioned. This distinction is particularly important in the theoretical discussion.
It would be helpful to be clearer about the differences between this work and that presented in Sagun et al. (2016).
Minor comments:
The assumption that the target y is real is at odds with many regression problems and practically all classification. It might be worth generalizing the discussion to multidimensional targets.
It would be good to have some citations to support the claim that often "the number of parameters M is comparable to the number of examples N (if not much larger)". With 1-dimensional targets as considered here, that sounds like a recipe for extreme overfitting and poor generalization. Generically based on counting constraints and free parameters one would expect to be able to fit exactly any dataset of N output values using a model with M free parameters. (With P-dimensional targets the relevant comparison would be M vs N P rather than M vs N).
At the end of intro to section 1, "loss is non-degenerate" should be "Hessian of the loss is non-degenerate"? Also, didn't the paper cited assume at least one negative eigenvalue at any saddle point, rather than non-degeneracy?
In section 1.1, it would be helpful to explain the precise sense in which "overparameterized" is being used. Hopefully it is in the sense that there are more parameters than needed for good performance at the true global minimum (the additional parameters helping with the process of *finding* a good minimum rather than its existence) or in the sense that M -> infinity for N "equal to" infinity. If it is in the sense that M >> N then I'm not sure of the relevance to practical machine learning.
It would be helpful to use a log scale for the plot in Figure 1. The claim that the Hessian is ill-conditioned depends on the condition number, which is impossible to estimate from the plot.
The fact that "wide basins, as opposed to narrow ones, generalize better" is not a new claim of the Keskar et al. paper. I'd argue it's well-known and part of the classical explanation of why maximum likelihood methods overfit and Bayesian ones don't. See for example MacKay, Information Theory Inference and Learning Algorithms.
"It turns out that the Hessian is degenerate at any given point" makes it sound like the result is a theoretical one. As I understand it, the experimental investigation in Sagun et al. (2016) just shows that the Hessian may often be ill-conditioned. As above, more clarity is also needed about whether it is literally degenerate or just approximately so, in which case ill-conditioned is probably a more appropriate word. Ill-conditioned is also more appropriate than singular in "slightly singular but extremely so".
How much data was used for the simple experiments in Figure 1? Infinite data? What data was used?
It would be helpful to spell out the intuition in "Intuitively, this kind of singularity...".
I don't think the decomposition (5) is required to "explain why having more parameters than samples results in degenerate Hessian matrices". Generically one would expect that with 1-dimensional targets, N datapoints and N + Q parameters, there would be a Q-dimensional submanifold of parameter space on which the loss would be zero. Of course there would be a few conditions needed to make this into a precise statement, but no need for assuming the second term is negligible.
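A toy version of this counting argument (my own illustration, not taken from the paper): for linear least squares with N samples and M > N parameters, the Hessian has rank at most N, so at least M - N eigenvalues are exactly zero.

import numpy as np

rng = np.random.default_rng(0)
N, M = 20, 50                              # fewer samples than parameters
X = rng.normal(size=(N, M))
H = X.T @ X / N                            # Hessian of the loss 0.5/N * ||Xw - y||^2 (independent of w)
eigvals = np.linalg.eigvalsh(H)
print((np.abs(eigvals) < 1e-10).sum())     # at least M - N = 30 (numerically) zero eigenvalues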
Is the conventional decomposition of the loss into l o f used for the generalized Gauss Newton that f is a function only of the input to the neural net and the model parameters, but not the target? I could be wrong, but that was always my interpretation.
It's not clear whether the phrase "bottom of the landscape" used several times in the paper refers to the neighborhood of local minima or of global minima.
What is the justification for assuming l'(f(w)) and grad f(w) are not correlated? That seems unlikely to be true in general! Also spell out why this implies the second term can be ignored. I'm a bit skeptical of the claim in general. It's easy to come up with counterexamples. For example take l to be the identity (say f has a relu applied to it to ensure everything is well formed).
"Immediately, this implies that there are at least M - N trivial eigenvalues of the Hessian". Make it clear that trivial here means approximately not exactly zero (in which case a good word would be "small"); this follows since the second term in (5) is only approximately zero. In fact it should be possible to prove there are M - N values which are exactly zero, but that doesn't follow from the argument presented. As above I'd argue this analysis is somewhat beside the point since N should be greater than M in practice to prevent severe overfitting.
In section 3.1, "trivial eigenvalues" should be "non-trivial eigenvalues".
What's the relevance of using PCA on the data in Figure 2 when it comes to analyzing training neural nets? Also, is there any reason 2 classes breaks the trend?
What size of data was used for the experiments to plot figure 2 and figure 3? Infinite?
It's not completely clear what the takeaway is from Figure 3. I presume this is supporting the point that the eigenvalues of the Hessian at convergence consist of a bulk and outliers. This could be stated explicitly. Is there any significance to the fact that the number of clusters is equal to the number of outliers? Is this supporting some broader claim of the paper?
Figure 4, 5, 6 would benefit from being log plots, and make the claim that the bulk has the same shape independent of data much stronger.
The x-axis in Figure 5 is not "ordered counts of eigenvalues" but "index of eigenvalues", and in Figure 6 is not "ratios of eigenvalues" but ratio of the index. In the caption for Figure 6, "scaled by their ratio" is not clear.
I don't follow why Figure 6 confirms that "the effects of the ignored term in the decomposition is small" for negative eigenvalues.
In section 3.3, when saying the variances of the steps are different but the means are similar, it may interesting to note that the variance is often the dominant term and much greater in magnitude than the mean when doing SGD (at least that's what I've experienced).
What's the meaning of "elbow at similar levels"? What's the significance?
In section 4 it is claimed that overparameterization is what "leads to flatness at the bottom of the landscape which is easy to optimize". The bulk-outlier view suggests that adding extra parameters may just add extra dimensions to the flat region, but why is optimizing 100 values in a flat 100-dimensional space easier than optimizing 10 values in a flat 10-dimensional space?
In section 4.1, "fair comparison" is misleading since it depends on perspective. If one cares about compute time then certainly measuring steps rather than epochs would not be a fair comparison!
What's the relevance of the fact that random initial points in high-dimensional spaces are almost always nearly orthogonal (N.B. the "nearly" should be added)? This seems to be assuming something about the mapping from initial point to basin of attraction.
What's the meaning of "extending away from either end points appear to be confirming the sharpness of [the] LB solution"? Is this shown somewhere?
It would be helpful to highlight the key difference to Keskar et al. (which I believe is initializing SB training from LB point rather than from scratch). I presume the claim is that Keskar et al. only found their "inverted camel hump" linear interpolation results due to the random initialization, and that this would also often be observed for, say, two random LB-from-scratch trainings (which may randomly fall into different basins of attraction). If this is the intended point then it would be good to make this explicit.
In "the first terms starts to dominate", to dominate what? The gradient, or the second term in (5)? If the latter, what is the relevance of this?
Why "even" in "Even when the weight space has large flat regions"?
In the last paragraph of section 4.1, it might be worth spelling out that (as I understand it) the idea is that the small batch method finds itself in a poor region to begin with, since the average loss over an SB-noise-sized neighborhood of the LB point is actually not very good, and so there is a non-zero gradient through flat space to a place where the average loss over an SB-noise-sized neighborhood is good.
In section 5, "we see that even large batch methods are able to get to the level where small batch methods go" seems strange. Isn't this of training set loss? Isn't the "level" people care about the test set loss?
In appendix A, the meaning of consecutive in "largest consecutive gap" and "largest consecutive ratio" was not clear to me.
Appendix B is only referred to in a footnote. What is its significance for the main theme of the paper? I'd suggest either making it more prominent or putting it in a separate paper. |
iclr_2018_r1CE9GWR- | Generative Adversarial Networks (GANs) have become a popular method to learn a probability model from data. Many GAN architectures with different optimization metrics have been introduced recently. Instead of proposing yet another architecture, this paper aims to provide an understanding of some of the basic issues surrounding GANs. First, we propose a natural way of specifying the loss function for GANs by drawing a connection with supervised learning. Second, we shed light on the generalization performance of GANs through the analysis of a simple LQG setting: the generator is linear, the loss function is quadratic and the data is drawn from a Gaussian distribution. We show that in this setting: 1) the optimal GAN solution converges to population Principal Component Analysis (PCA) as the number of training samples increases; 2) the number of samples required scales exponentially with the dimension of the data; 3) the number of samples scales almost linearly if the discriminator is constrained to be quadratic. Thus, linear generators and quadratic discriminators provide a good balance for fast learning. | *Paper summary*
The paper considers GANs from a theoretical point of view. The authors approach GANs through the 2-Wasserstein distance and provide several insights for a very specific setting. In my view, the main novel contribution of the paper is to notice the following fact:
(*) It is well known that the 2-Wasserstein distance W2(PY,QY) between multivariate Gaussian PY and its empirical version QY scales as $n^{-2/d}$, i.e. converges very slowly as the dimensionality of the space $d$ increases. In other words, QY is not such a good way to estimate PY in this setting. A somewhat better way is to use a Gaussian distribution PZ with covariance matrix S computed as a sample covariance of QY. In this case W2(PY, PZ) scales as $\sqrt{d/n}$.
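For reference, the closed form behind (*) (a standard result, in my notation): for Gaussians,
\[
W_2^2\big(N(m_1,\Sigma_1), N(m_2,\Sigma_2)\big) = \|m_1 - m_2\|^2 + \mathrm{Tr}\Big(\Sigma_1 + \Sigma_2 - 2\big(\Sigma_1^{1/2}\Sigma_2\Sigma_1^{1/2}\big)^{1/2}\Big),
\]
so fitting PZ only requires estimating the O(d^2) entries of the sample mean and covariance, which is where the parametric $\sqrt{d/n}$-type rate comes from, while the empirical measure QY itself suffers the $n^{-2/d}$ scaling quoted above.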
The paper introduces this observation in a very strange way within the context of GANs. Moreover, I think the final conclusion of the paper (Eq. 19) has a mistake, which makes it hard to see why (*) has any relation to GANs at all.
There are several other results presented in the paper regarding relation between PCA and the 2-Wasserstein minimization for Gaussian distributions (Lemma 1 & Theorem 1). This is indeed an interesting point, however the proof is almost trivial and I am not sure if this provides any significant contribution for the future research.
Overall, I think the paper contains several novel ideas, but its structure requires a *significant* rework and in the current form it is not ready for being published.
*Detailed comments*
In the first part of the paper (Section 2) the authors propose to use the optimal transport distance Wc(PY, g(PX)) between the data distribution PY (or its empirical version QY) and the model as the objective for GAN optimization. This idea is not novel: WGAN [1] proposed (and successfully implemented) to minimize the particular case of W1 distance by going through the dual form, [2] proposed to approach any Wc using auto-encoder reformulation of the primal (and also shoed that [5] is doing exactly W2 minimization), and [3] proposed the same using Sinkhorn algorithm. So this point does not seem to be novel.
The rest of the paper only considers 2-Wasserstein distance with Gaussian PY and Gaussian g(PX) (which I will abbreviate with R), which looks like an extremely limited scenario (and certainly has almost no connection to the applications of GANs).
Section 3 first establishes a relation between PCA and minimizing 2-Wasserstein distance for Gaussian distributions (Lemma 1, Theorem 1). Then the authors show that if R minimizes W2(PY, R) and QR minimizes W2(QY, QR) then the excess loss W2(PY, QR) - W2(PY, R) approaches zero at the rate $n^{-2/d}$ (both for linear and unconstrained generators). This result basically provides an upper bound showing that GANs need exponentially many samples to minimize W2 distance. I don't find these results novel, as they already appeared in [4] with a matching lower bound for the case of Gaussians (Theorem B.1 in Appendix can be modified easily to show this). As the authors note in the conclusion of Section 3, these results have little to do with GANs, as GANs are known to learn quite quickly (which contradicts the theory of Section 3).
Finally, in Section 4 the authors approach the same W2 problem from its dual form and notice that for the LQG model the optimal discriminator is quadratic. Based on this they reformulate the W2 minimization for LQG as the constrained optimization with respect to p.d. matrix A (Eq 16). The same conclusion does not work unfortunately for W2(QY, R), which is the real training objective of GANs. Theorem 3 shows that nevertheless, if we still constrain the discriminator in the dual form of W2(QY, R) to be quadratic, the resulting solution QR* performs the empirical PCA of Pn.
This leads to the final conclusion of the paper, which I think contains a mistake. In Eq 19 the first equation, according to the definitions of the authors, reads
\[
W2(PY, QR) = W2(PY, PZ), (**)
\]
where QR is trained to minimize min_R W2(QY, R) and PZ is as defined in (*) in the beginning of these notes.
However, PZ is not the solution of min_R W2(QY, R) as the authors notice in the 2nd paragraph of page 8. Thus (**) is not true (at least, it is not proved in the current version of the text). PZ is a solution of min_R W2(QY, R) *where the discriminator is constrained to be quadratic*. This mismatch is especially strange, given the authors emphasize in the introduction that they provide bounds on divergences which are the same as used during the training (see 2nd paragraph on page 2) --- here the bound is on W2, but the empirical GAN actually does a regularized training (with constrained discriminator).
Finally, I don't think the experiments provide any convincing insights, because the authors use W1-minimization to illustrate properties of the W2. Essentially the authors say "we don't have a way to perform W2 minimization, so we rather do the W1 minimization and assume that these two are kind of similar".
* Other comments *
(1) Discussion in Section 2.1 seems to never play a role in the paper.
(2) Page 4: in p-Wasserstein distance, ||.|| does not need to be a Euclidean metric. It can be any metric.
(3) Lemma 2 seems to repeat the result from (Canas and Rosasco, 2012) as later cited by authors on page 7?
(4) It is not obvious how Theorem 2 translates to the excess loss.
(5) Section 4. I am wondering how exactly the authors are going to compute the conjugate of the discriminator, given the discriminator most likely is a deep neural network?
[1] Arjovsky et al., Wasserstein GAN, 2017
[2] Bousquet et al, From optimal transport to generative modeling: the VEGAN cookbook, 2017
[3] Genevay et al., Learning Generative Models with Sinkhorn Divergences, 2017
[4] Arora et al, Generalization and equilibrium in GANs, 2017
[5] Makhazani et al., Adversarial Autoencoders, 2015 |
iclr_2018_B1al7jg0b | Published as a conference paper at ICLR 2018 OVERCOMING CATASTROPHIC INTERFERENCE USING CONCEPTOR-AIDED BACKPROPAGATION
Catastrophic interference has been a major roadblock in the research of continual learning. Here we propose a variant of the back-propagation algorithm, "conceptor-aided backprop" (CAB), in which gradients are shielded by conceptors against degradation of previously learned tasks. Conceptors have their origin in reservoir computing, where they have been previously shown to overcome catastrophic forgetting. CAB extends these results to deep feedforward networks. On the disjoint and permuted MNIST tasks, CAB outperforms two other methods for coping with catastrophic interference that have recently been proposed. | [Reviewed on January 12th]
This article applies the notion of “conceptors” -- a form of regulariser introduced by the same author a few years ago, exhibiting appealing boolean logic pseudo-operations -- to prevent forgetting in continual learning, more precisely in the training of neural networks on sequential tasks. It proposes itself as an improvement over the main recent development of the field, namely Elastic Weight Consolidation. After a brief and clear introduction to conceptors and their application to ridge regression, the authors explain how to inject conceptors into Stochastic Gradient Descent and finally, the real innovation of the paper, into Backpropagation. A section of experiments on variants of MNIST commonly used for continual learning follows.
Continual learning in neural networks is a hot topic, and this article contributes a very interesting idea. The notion of conceptors is appealing in this particular use for its interpretation in terms of regularizer and in terms of Boolean logic. The numeric examples, although quite toy, provide a clear illustration.
A few things are still missing to back the strong claims of this paper:
* Some considerations of the computational costs: the reliance on the full NxN correlation matrix R makes me fear it might be costly, as it is computed for every layer of the network, with N the number of units in that layer. This is of course much lighter than if it were the covariance matrix of all the weights, which would be daunting, but it still deserves to be addressed, if only with wall-time measurements (a rough sketch is given after these points).
* It could also be welcome to use a more grounded vocabulary, e.g. on p.2 “Figure 1 shows examples of conceptors computer from three clouds of sample state points coming from a hypothetical 3-neuron recurrent network that was drive with input signals from three difference sources” could be much more simply said as “Figure 1 shows the ellipses corresponding to three sets of R^3 points”. Being less grandiose would make the value of this article nicely on its own.
* Some examples beyond the contrived MNIST toy examples would be welcome. For example, the main method this article is compared to (EWC) had a very strong section on reinforcement learning examples in the Atari framework, not only as an illustration but also as a motivation. I realise not everyone has the computational or engineering resources to try extensively on multiple benchmarks from classification to reinforcement learning. Nevertheless, without going to that extreme, it might be worth adding an extra demo on something bigger than MNIST. The authors transparently explain in their answer that they do not (yet!) belong to the deep learning community and hope to find some collaborations to pursue this further. If I may make a suggestion, I think their work would have much stronger impact by doing it the reverse way: first finding the collaboration, then adding these extra empirical results, which then leads to a bigger-impact publication.
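As a rough illustration of the computational cost raised in the first point above, here is a minimal sketch of computing one conceptor from a batch of layer activations, following the standard definition C = R(R + alpha^{-2} I)^{-1}; the sizes are placeholders of mine.

import numpy as np
import time

batch, N = 256, 4096                       # N = number of units in the layer (placeholder)
A = np.random.randn(batch, N)              # activations collected on the current task
alpha = 4.0

t0 = time.time()
R = A.T @ A / batch                        # N x N correlation matrix
C = R @ np.linalg.inv(R + alpha ** -2 * np.eye(N))   # conceptor: an O(N^3) solve per layer
print(f"{time.time() - t0:.2f}s for one {N}x{N} conceptor")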
The later point would normally make me attribute a score of "6: Marginally above acceptance threshold" by current DL community standards, but because there is such a pressing need for methods to tackle this problem, and because this article can generate thinking along new lines about this, I give it a 7 : Good paper, accept. |
iclr_2018_HJSA_e1AW | Optimization algorithms for training deep models not only affect the convergence rate and stability of the training process, but are also highly related to the generalization performance of trained models. While adaptive algorithms, such as Adam and RMSprop, have shown better optimization performance than stochastic gradient descent (SGD) in many scenarios, they often lead to worse generalization performance than SGD, when used for training deep neural networks (DNNs). In this work, we identify two problems regarding the direction and step size for updating the weight vectors of hidden units, which may degrade the generalization performance of Adam. As a solution, we propose the normalized direction-preserving Adam (ND-Adam) algorithm, which controls the update direction and step size more precisely, and thus bridges the generalization gap between Adam and SGD. Following a similar rationale, we further improve the generalization performance in classification tasks by regularizing the softmax logits. By bridging the gap between SGD and Adam, we also shed some light on why certain optimization algorithms generalize better than others. | Method:
The paper is missing analysis of some important related works such as
"Beyond convexity: Stochastic quasi-convex optimization" by E. Hazan et al. (2015)
where Stochastic Normalized Gradient Descent (SNGD) was proposed.
Then, normalized gradient versions of AdaGrad and Adam were proposed in
"Normalized Gradient with Adaptive Stepsize Method for Deep
Neural Network Training" by A. W. Yu et al. (2017).
Another work which I find to be relevant is
"Follow the Signs for Robust Stochastic Optimization" by L. Balles and P. Hennig (2017).
From my personal experiments, restricting w_i to have L2 norm of 1, i.e., to be +-1
leads to worse generalization. One reason for this is that weight decay is not
really functioning since it cannot move w_i to 0 or make its amplitude any smaller.
Please correct me if I misunderstand something here.
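A small sketch of the effect I am describing (my own illustration): if each incoming weight vector is renormalised to unit L2 norm after every update, the weight-decay gradient (which is purely radial, lambda * w) is cancelled by the renormalisation, so the penalty cannot shrink the weights.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)
w /= np.linalg.norm(w)                 # unit-norm constraint on the weight vector
lam, lr = 0.0005, 0.1

for _ in range(1000):
    grad_decay = lam * w               # gradient of 0.5 * lam * ||w||^2
    w = w - lr * grad_decay            # weight-decay step only (no data gradient)
    w /= np.linalg.norm(w)             # project back onto the unit sphere

print(np.linalg.norm(w))               # still 1.0: the decay term had no effect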
The presence of +-1 weights moves us to the area of low-precision NNs,
or more specifically, NNs with binary / binarized weights as in
"BinaryConnect: Training Deep Neural Networks with
binary weights during propagations" by M. Courbariaux et al. (2015)
and
"Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1" by M. Courbariaux et al. (2016).
Regarding
"Moreover, the magnitude of each update does not depend on themagnitude of the gradient. Thus, ND-Adam is more robust to improper initialization, and vanishing or exploding gradients."
If the magnitude of each update does not depend on the magnitude of the gradient, then the algorithm heavily depends on the learning rate. Otherwise, it does not have any means to approach the optimum in a reasonable number of steps *when* it is initialized very / unreasonably far from it. The claim of your second sentence is not supported by the paper.
Evaluation:
I am not confident that the presented experimental validation is fair. First, the original WRN paper and many other papers with ResNets used weight decay of 0.0005 and not 0.001 or 0.002 as used for SGD in this paper. It is unclear why this setting was changed. One could just use \alpha_0 = 0.05 and \lambda = 0.0005.
Then, I don't see why the authors use WRN-22-7.5, which is different from the WRN-28-10 suggested in the original study and used in several follow-up works. The difference between WRN-22-7.5 and WRN-28-10 is unlikely to be significant,
the former might have only about 2 times fewer parameters, which should barely change the final validation errors. However, the use of WRN-22-7.5 makes it impossible to easily compare the presented results to the results of Zagoruyko who had 3.8\% with WRN-28-10. I believe that the use of the setup of Zagoruyko for WRN-22-7.5 would give much better results than the 4.5\% and 4.49\% shown for SGD and likely better than the 4.14\% shown for ND-Adam. I note that the use of WRN-22-7.5 is unlikely to be due to the used hardware because later in the paper the authors refer to WRN-34-7.5.
My intuition is that the proposed ND-Adam moves the algorithm back to SGD but with potentially harmful constraints of w_i=+-1. Even the values of \alpha^v_0 found for ND-Adam (e.g., \alpha^v_0=0.05 in Figure 1B) are in line of what would be optimal values of \alpha_0 for SGD.
I find it uncomfortable that BN-Softmax is introduced here to support the use of an optimization algorithm, moreover, that the values of \gamma_c are different for CIFAR-10 and CIFAR-100. I wonder if the proposed values are optimal (and therefore selected) for all three tested algorithms or only for Adam-ND. I expect that hyperparameters of SGD and Adam would also need to be revised to account for BN-Softmax. |
iclr_2018_Bym0cU1CZ | Conventional methods model open domain dialogue generation as a black box through end-to-end learning from large scale conversation data. In this work, we make the first step to open the black box by introducing dialogue acts into open domain dialogue generation. The dialogue acts are generally designed and reveal how people engage in social chat. Inspired by analysis on real data, we propose jointly modeling dialogue act selection and response generation, and perform learning with human-human conversations tagged with a dialogue act classifier and a reinforcement approach to further optimizing the model for long-term conversation. With the dialogue acts, we not only achieve significant improvement over state-of-the-art methods on response quality for given contexts and long-term conversation in both machine-machine simulation and human-machine conversation, but also are capable of explaining why such achievements can be made. | The authors use a distant supervision technique to add dialogue act tags as a conditioning factor for generating responses in open-domain dialogues. In their evaluations, this approach, and one that additionally uses policy gradient RL with discourse-level objectives to fine-tune the dialogue act predictions, outperform past models for human-scored response quality and conversation engagement.
While this is a fairly straightforward idea with a long history, the authors claim to be the first to use dialogue act prediction for open-domain (rather than task-driven) dialogue. If that claim to originality is not contested, and the authors provide additional assurances to confirm the correctness of the implementations used for baseline models, this article fills an important gap in open-domain dialogue research and suggests a fruitful future for structured prediction in deep learning-based dialogue systems.
Some points:
1. The introduction uses "scalability" throughout to mean something closer to "ability to generalize." Consider revising the wording here.
2. The dialogue act tag set used in the paper is not original to Ivanovic (2005) but derives, with modifications, from the tag set constructed for the DAMSL project (Jurafsky et al., 1997; Stolcke et al., 2000). It's probably worth citing some of this early work that pioneered the use of dialogue acts in NLP, since they discuss motivations for building DA corpora.
3. In Section 2.1, the authors don't explicitly mention existing DA-annotated corpora or discuss specifically why they are not sufficient (is there e.g. a dataset that would be ideal for the purposes of this paper except that it isn't large enough?)
4. The authors appear to consider only one option (selecting the top predicted dialogue act, then conditioning the response generator on this DA) among many for inference-time search over the joint DA-response space. A more comprehensive search strategy (e.g. selecting the top K dialogue acts, then evaluating several responses for each DA) might lead to higher response diversity; see the sketch after this list.
5. The description of the RL approach in Section 3.2 was fairly terse and included a number of ad-hoc choices. If these choices (like the dialogue termination conditions) are motivated by previous work, they should be cited. Examples (perhaps in the appendix) might also be helpful for the reader to understand that the chosen termination conditions or relevance metrics are reasonable.
6. The comparison against previous work is missing some assurances I'd like to see. While directly citing the codebases you used or built off of is fantastic, it's also important to give the reader confidence that the implementations you're comparing to are the same as those used in the original papers, such as by mentioning that you can replicate or confirm quantitative results from the papers you're comparing to. Without that there could always be the chance that something is missing from the implementation of e.g. RL-S2S that you're using for comparison.
7. Table 5 is not described in the main text, so it isn't clear what the different potential outputs of e.g. the RL-DAGM system result from (my guess: conditioning the response generation on the top 3 predicted dialogue acts?)
8. A simple way to improve the paper's clarity for readers would be to break up some of the very long paragraphs, especially in later sections. It's fine if that pushes the paper somewhat over the 8th page.
9. A consistent focus on human evaluation, as found in this paper, is probably the right approach for contemporary dialogue research.
10. The examples provided in the appendix are great. It would be helpful to have confirmation that they were selected randomly (rather than cherry-picked).
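As a concrete version of the search strategy suggested in point 4, here is a sketch under my own assumptions about the model interface; predict_da_probs, generate and score_response are hypothetical functions, not names from the paper.

def search_joint(context, predict_da_probs, generate, score_response, k_da=3, n_per_da=5):
    # Rank (dialogue act, response) candidates instead of committing to the single top DA.
    da_probs = predict_da_probs(context)                        # dict: dialogue act -> probability
    top_das = sorted(da_probs, key=da_probs.get, reverse=True)[:k_da]
    candidates = []
    for da in top_das:
        for response in generate(context, da, num_samples=n_per_da):
            # Combine DA confidence with a response-level score (e.g. length-normalised log-likelihood).
            candidates.append((da_probs[da] * score_response(context, da, response), da, response))
    return max(candidates)                                       # best (score, DA, response) triple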
iclr_2018_rkpoTaxA- | Published as a conference paper at ICLR 2018 SELF-ENSEMBLING FOR VISUAL DOMAIN ADAPTATION
This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen & Valpola (2017)) of temporal ensembling (Laine & Aila (2017)), a technique that achieved state of the art results in the area of semi-supervised learning. We introduce a number of modifications to their approach for challenging domain adaptation scenarios and evaluate its effectiveness. Our approach achieves state of the art results in a variety of benchmarks, including our winning entry in the VISDA-2017 visual domain adaptation challenge. In small image benchmarks, our algorithm not only outperforms prior art, but can also achieve accuracy that is close to that of a classifier trained in a supervised fashion. | This paper presents a domain adaptation algorithm based on the self-ensembling method proposed by [Tarvainen & Valpola, 2017]. The main idea is to enforce the agreement between the predictions of the teacher and the student classifiers on the target domain samples while training the student to perform well on the source domain. The teacher network is simply an exponential moving average of different versions of the student network over time.
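For concreteness, the core of the approach as I understand it, in PyTorch-like pseudocode; the augmentation, confidence threshold and loss weight are placeholders of mine, not the values used in the paper.

import torch
import torch.nn.functional as F

def update_teacher(student, teacher, ema_decay=0.99):
    # Teacher weights are an exponential moving average of the student weights.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.data.mul_(ema_decay).add_(p_s.data, alpha=1 - ema_decay)

def training_step(student, teacher, x_src, y_src, x_tgt, augment, conf_thresh=0.9, w_unsup=3.0):
    sup_loss = F.cross_entropy(student(augment(x_src)), y_src)       # labelled source batch
    with torch.no_grad():
        p_teacher = F.softmax(teacher(augment(x_tgt)), dim=1)        # teacher prediction on target batch
    p_student = F.softmax(student(augment(x_tgt)), dim=1)            # independently augmented student pass
    mask = (p_teacher.max(dim=1).values > conf_thresh).float()       # confidence thresholding
    consistency = ((p_student - p_teacher) ** 2).sum(dim=1)          # squared-difference agreement term
    return sup_loss + w_unsup * (mask * consistency).mean()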
Pros:
+ The paper is well-written and easy to read
+ The proposed method is a natural extension of the mean teacher semi-supervised learning model by [Tarvainen & Valpola, 2017]
+ The model achieves state-of-the-art results on a range of visual domain adaptation benchmarks (including top performance in the VisDA17 challenge)
Cons:
- The model is tailored to the image domain as it makes heavy use of the data augmentation. That restricts its applicability quite significantly. I’m also very interested to know how the proposed method works when no augmentation is employed (for fair comparison with some of the entries in Table 1).
- I’m not particularly fond of the engineering tricks like confidence thresholding and the class balance loss. They seem to be essential for good performance and thus, in my opinion, reduce the value of the main idea.
- Related to the previous point, the final VisDA17 model seems to be engineered too heavily to work well on a particular dataset. I’m not sure if it provides many interesting insights for the scientific community at large.
In my opinion, it’s a borderline paper. While the best reported quantitative results are quite good, it seems that achieving those requires a significant engineering effort beyond just applying the self-ensembling idea.
Notes:
* The paper somewhat breaks the anonymity of the authors by mentioning the “winning entry in the VISDA-2017”. Maybe it’s not a big issue but in my opinion it’s better to remove references to the competition entry.
* Page 2, 2.1, line 2, typo: “stanrdard” -> “standard”
Post-rebuttal revision:
After reading the authors' response to my review, I decided to increase the score by 2 points. I appreciate the improvements that were made to the paper but still feel that this work a bit too engineering-heavy, and the title does not fully reflect what's going on in the full pipeline. |
iclr_2018_HJewuJWCZ | Published as a conference paper at ICLR 2018 LEARNING TO TEACH
Teaching plays a very important role in our society, by spreading human knowledge and educating our next generations. A good teacher will select appropriate teaching materials, impact suitable methodologies, and set up targeted examinations, according to the learning behaviors of the students. In the field of artificial intelligence, however, one has not fully explored the role of teaching, and pays most attention to machine learning. In this paper, we argue that equal attention, if not more, should be paid to teaching, and furthermore, an optimization framework (instead of heuristics) should be used to obtain good teaching strategies. We call this approach "learning to teach". In the approach, two intelligent agents interact with each other: a student model (which corresponds to the learner in traditional machine learning algorithms), and a teacher model (which determines the appropriate data, loss function, and hypothesis space to facilitate the training of the student model). The teacher model leverages the feedback from the student model to optimize its own teaching strategies by means of reinforcement learning, so as to achieve teacher-student co-evolution. To demonstrate the practical value of our proposed approach, we take the training of deep neural networks (DNN) as an example, and show that by using the learning to teach techniques, we are able to use much less training data and fewer iterations to achieve almost the same accuracy for different kinds of DNN models (e.g., multi-layer perceptron, convolutional neural networks and recurrent neural networks) under various machine learning tasks (e.g., image classification and text understanding). | This paper focuses on the problem of "machine teaching", i.e., how to select a good strategy to select training data points to pass to a machine learning algorithm, for faster learning. The proposed approach leverages reinforcement learning by defining the reward as how fast the learner learns, and uses policy gradient to update the teacher parameters. I find the definition of the "state" in this case very interesting. The experimental results seem to show that such a learned teacher strategy makes machine learning algorithms learn faster.
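A minimal sketch of the training loop I understand the paper to propose; the state-feature dimension, reward definition and helper functions (train_student_on, eval_accuracy) are simplified placeholders of mine.

import torch

teacher = torch.nn.Linear(25, 1)                    # scores each candidate example from its state features
opt_teacher = torch.optim.Adam(teacher.parameters(), lr=1e-3)

def teach_one_episode(student, train_student_on, eval_accuracy, batches, threshold=0.9):
    log_probs, steps = [], 0
    for features, (x, y) in batches:                # features: (batch_size, 25) state vector per example
        probs = torch.sigmoid(teacher(features)).squeeze(1)
        keep = torch.bernoulli(probs)               # stochastically filter the batch
        log_probs.append((keep * probs.clamp(1e-6, 1).log()
                          + (1 - keep) * (1 - probs).clamp(1e-6, 1).log()).sum())
        train_student_on(x[keep.bool()], y[keep.bool()])
        steps += 1
        if eval_accuracy(student) >= threshold:     # reward: reach the accuracy threshold in few steps
            break
    reward = -float(steps)
    loss = -reward * torch.stack(log_probs).sum()   # REINFORCE on the teacher's filtering decisions
    opt_teacher.zero_grad(); loss.backward(); opt_teacher.step()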
Overall I think that this paper is decent. The angle the authors took is interesting (essentially replacing one level of the bi-level optimization problem in machine teaching works with a reinforcement learning setup). The problem formulation is mostly reasonable, and the evaluation seems quite convincing. The paper is well-written: I enjoyed the mathematical formulation (Section 3). The authors did a good job of using different experiments (filtration number analysis, and teaching both the same architecture and a different architecture) to intuitively explain what their method actually does.
At the same time, though, I see several important issues that need to be addressed if this paper is to be accepted. Details below.
1. As much as I enjoyed reading Section 3, it is very redundant. In some cases it is good to outline a powerful and generic framework (like the authors did here with defining "teaching" in a very broad sense, including selecting good loss functions and hypothesis spaces) and then explain that the current work focuses on one aspect (selecting training data points). However, I do not see that being the case here. In my opinion, selecting good loss functions and hypothesis spaces are much harder problems than data teaching - except maybe when one uses a pre-defined set of possible loss functions and selects from it. But that is not very interesting (if you can propose new loss functions, that would be way cooler). I also do not see how to define an intuitive set of "states" in that case. Therefore, I think this section should be shortened. I also think that the authors should not discuss the general framework and rather focus on "data teaching", which is the only focus of the current paper. The abstract and introduction should also be modified accordingly to more honestly reflect the current contributions.
2. The authors should do a better job at explaining the details of the state definition, especially the student model features and the combination of data and current learner model.
3. There is only one definition of the reward - related to batch number when the accuracy first exceeds a threshold. Is accuracy stable, can it drop back down below the threshold in the next epoch? The accuracy on a held-out test set is not guaranteed to be monotonically increasing, right? Is this a problem in practice (it seems to happen on your curves)? What about other potential reward definitions? And what would they potentially lead to?
4. Experimental results are averaged over 5 repeated runs - a bit too small in my opinion.
5. Can the authors show convergence of the teacher parameter \theta? I think it is important to see how fast the teacher model converges, too.
6. In some of your experiments, every training method converges to the same accuracy after enough training (Fig.2b), while in others, not quite (Fig. 2a and 2c). Why is this the case? Does it mean that you have not run enough iterations for the baseline methods? My intuition is that if the learner algorithm is convex, then ultimately they will all get to the same accuracy level, so the task is just to get there quicker. I understand that since the learner algorithm is an NN, this is not the case - but more explanation is necessary here - does your method also reduces the empirical possibility to get stuck in local minima?
7. More explanation is needed towards Fig.4c. In this case, using a teacher model trained on a harder task (CIFAR10) leads to much improved student training on a simpler task (MNIST). Why?
8. Although in terms of "effective training data points" the proposed method outperforms the other methods, in terms of time (Fig.5) the difference between it and say, NoTeach, is not that significant (especially at very high desired accuracy). More explanation needed here.
Read the rebuttal and revision and slightly increased my rating. |
iclr_2018_BkN_r2lR- | Published as a conference paper at ICLR 2018 IDENTIFYING ANALOGIES ACROSS DOMAINS
Identifying analogies across domains without supervision is an important task for artificial intelligence. Recent advances in cross domain image mapping have concentrated on translating images across domains. Although the progress made is impressive, the visual fidelity many times does not suffice for identifying the matching sample from the other domain. In this paper, we tackle this very task of finding exact analogies between datasets i.e. for every image from domain A find an analogous image in domain B. We present a matching-by-synthesis approach: AN-GAN, and show that it outperforms current techniques. We further show that the cross-domain mapping task can be broken into two parts: domain alignment and learning the mapping function. The tasks can be iteratively solved, and as the alignment is improved, the unsupervised translation function reaches quality comparable to full supervision. | This paper adds an interesting twist on top of recent unpaired image translation work. A domain-level translation function is jointly optimized with an instance-level matching objective. This yields the ability to extract corresponding image pairs out of two unpaired datasets, and also to potentially refine unpaired translation by subsequently training a paired translation function on the discovered matches. I think this is a promising direction, but the current paper has unconvincing results, and it’s not clear if the method is really solving an important problem yet.
My main criticism is with the experiments and results. The experiments focus almost entirely on the setting where there actually exist exact matches between the two image sets. Even the partial matching experiments in Section 4.1.2 only quantify performance on the images that have exact matches. This is a major limitation since the compelling use cases of the method are in scenarios where we do not have exact matches. It feels rather contrived to focus so much on the datasets with exact matches since, 1) these datasets actually come as paired data and, in actual practice, supervised translation can be run directly, 2) it’s hard to imagine datasets that have exact but unknown matches (I welcome the authors to put forward some such scenarios), 3) when exact matches exist, simpler methods may be sufficient, such as matching edges. There is no comparison to any such simple baselines.
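To illustrate, a baseline of that flavour could be as simple as the following sketch (my own code with placeholder names; nothing here comes from the paper): match each image in domain A to its nearest neighbour in domain B in edge-map space.

```python
# Hypothetical baseline sketch: match images across two sets by nearest
# neighbours between their edge maps. Thresholds and names are my own choices.
import numpy as np
from scipy.ndimage import sobel

def edge_map(img):
    """Gradient-magnitude edge map for a 2-D grayscale image, normalised to [0, 1]."""
    gx = sobel(img.astype(np.float64), axis=0)
    gy = sobel(img.astype(np.float64), axis=1)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)

def match_by_edges(set_a, set_b):
    """For every image in set_a, return the index of the closest image in set_b,
    measured by squared L2 distance between flattened edge maps."""
    feats_a = np.stack([edge_map(x).ravel() for x in set_a])
    feats_b = np.stack([edge_map(x).ravel() for x in set_b])
    # Pairwise squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = (feats_a ** 2).sum(1, keepdims=True) + (feats_b ** 2).sum(1) \
         - 2.0 * (feats_a @ feats_b.T)
    return d2.argmin(axis=1)
```

If AN-GAN cannot clearly beat something of this flavour on the exact-match datasets, the benefit of matching-by-synthesis is hard to judge.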
I think finding analogies that are not exact matches is much more compelling. Quantifying performance in this case may be hard, and the current paper only offers a few qualitative results. I’d like to see far more results, and some attempt at a metric. One option would be to run user studies where humans judge the quality of the matches. The results shown in Figure 2 don’t convince me, not just because they are qualitative and few, but also because I’m not sure I even agree that the proposed method is producing better results: for example, the DiscoGAN results have some artifacts but capture the texture better in row 3.
I was also not convinced by the supervised second step in Section 4.3. Given that the first step achieves 97% alignment accuracy, it's no surprise that running an off-the-shelf supervised method on top of this will match the performance of running on 100% correct data. In other words, this section does not really add much new information beyond what we could already infer given that the first stage alignment was so successful.
What I think would be really interesting is if the method can improve performance on datasets that actually do not have ground truth exact matches. For example, the shoes and handbags dataset or even better, domain adaptation datasets like sim to real.
I’d like to see more discussion of why the second-stage supervised problem is beneficial. Would it not be sufficient to alternate the alpha and T updates enough times, until alpha is one-hot and T is effectively being trained against a supervised objective (Equation 7)?
Minor comments:
1. In the intro, it would be useful to have a clear definition of “analogy” for the present context.
2. Page 2: a link should be provided for the Putin example, as it is not actually in Zhu et al. 2017.
3. Page 3: “Weakly Supervised Mapping” — I wouldn’t call this weakly supervised. Rather, I’d say it’s just another constraint / prior, similar to cycle-consistency, which was referred to under the “Unsupervised” section.
4. Page 4 and throughout: It’s hard to follow which variables are being optimized over when. For example, in Eqn. 7, it would be clearer to write out the min over optimization variables.
5. Page 6: The Maps dataset was introduced in Isola et al. 2017, not Zhu et al. 2017.
6. Page 7: The following sentence is confusing and should be clarified: “This shows that the distribution matching is able to map source images that are semantically similar in the target domain.”
7. Page 7: “This shows that a good initialization is important for this task.” — Isn’t this more than initialization? Rather, removing the distributional and cycle constraints changes the overall objective being optimized.
8. In Figure 2, are the outputs the matched training images, or are they outputs of the translation function?
9. Throughout the paper, some citations are missing enclosing parentheses. |
iclr_2018_SkxqZngC- | Topic modeling of text documents is one of the most important tasks in representation learning. In this work, we propose iTM-VAE, which is a Bayesian nonparametric (BNP) topic model with variational auto-encoders. On one hand, as a BNP topic model, iTM-VAE potentially has infinite topics and can adapt the topic number to the data automatically. On the other hand, different from other BNP topic models, the inference of iTM-VAE is modeled by neural networks, which have rich representation capacity and can be computed in a simple feed-forward manner. Two variants of iTM-VAE are also proposed in this paper: iTM-VAE-Prod models the generative process in a products-of-experts fashion for better performance, and iTM-VAE-G places a prior over the concentration parameter such that the model can adapt a suitable concentration parameter to the data automatically. Experimental results on the 20News and Reuters RCV1-V2 datasets show that the proposed models outperform the state of the art in terms of perplexity, topic coherence and document retrieval tasks. Moreover, the ability to adapt the concentration parameter to the data is also confirmed by experiments. | "topic modeling of text documents one of most important tasks"
Does this claim have any backing?
"inference of HDP is more complicated and not easy to be applied to new models" Really an artifact of the misguided nature of earlier work. The posterior for the $\vec\pi$ of a elements of DP or HDP can be made a Dirichlet, made finite by keeping a "remainder" term and appropriate augmentation. Hughes, Kim and Sudderth (2015) have avoided stick-breaking and CRPs altogether, as have others in earlier work. Extensive models building on simple HDP doing all sorts of things have been developed.
Variational stick-breaking methods never seemed to have worked well. I suspect you could achieve better results by replacing them as well, but you would have to replace the tree of betas and extend your Kumaraswamy distribution, so it may not work. Anyway, perhaps an avenue for future work.
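For concreteness, the construction I have in mind is the usual stick-breaking one with a Kumaraswamy variational posterior on the stick fractions (my notation, not necessarily the paper's):

```latex
% Stick-breaking weights with a Kumaraswamy variational posterior (sketch).
% Under a finite truncation, the last stick absorbs the remaining mass.
\begin{align}
  v_k &\sim \mathrm{Kumaraswamy}(a_k, b_k), \qquad k = 1, 2, \dots \\
  \pi_k &= v_k \prod_{j<k} (1 - v_j).
\end{align}
```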
"infinite topic models" I've always taken the view that the use of the word "infinite" in machine learning is a kind of NIPSian machismo. In HDP-LDA at least, the major benefit in model performance comes from fitting what you call $\vec\pi$, which is uniform in vanilla LDA, and note that the number of topics "found" by a HDP-LDA sampler can be made to vary quite widely by varying what you call $\alpha$, so any statement about the "right" number of topics is questionable. So the claim in 3rd paragraph of Section 2, "superior" and "self-determined topic number" I'd say are misguided. Plenty of experimental work to support this.
In Related Work, you seem to mention only the HDP for non-parametric topic models. More work exists, for instance using Pitman-Yor distributions for modelling words, and Gibbs samplers that are efficient and don't rely on the memory-hungry HCRP.
Good to see a prior is placed on the concentration parameter. Very important and not well done in the community, usually.
ADDED: Originally done by Teh et al. for HDP-LDA, and subsequently done by several, including Kim et al. 2016. Others stress the importance of this. You need to cite at least Teh et al. in 5.4 to show this isn't new and the importance is well known.
The Prod version is a very nice idea. Great results. This looks original, but I'm not expert enough in the huge masses of new deep neural network research popping up.
You've upped the standard a bit by doing good experimental work. Oftentimes this is done poorly and one is left wondering. A lot of effort went into this.
ADDED: I usually like to see more data sets experimented with.
What code is used for HDP-LDA? Teh's original Matlab HCRP sampler does pretty well because at least he samples hyperparameters and can scale to 100k documents (yes, I tried). The comparison with LDA makes me suspicious. For instance, on 20News, a good non-parametric LDA will find well over 400 topics and roundly beat LDA on just 50 or 200. If reporting LDA, or HDP-LDA, it should be standard to do hyperparameter fitting and you need to mention what you did as this makes a big difference.
ADDED: 20News results are still poor for HDP, but it's probably the implementation used ... their online variational algorithm only has advantages for large data sets.
Pros:
* interesting new prod model with good results
* alternative "deep" approach to an HDP-LDA model
* good(-ish) experimental work
Cons:
* could do with a competitive non-parametric LDA implementation
ADDED: good review responses generally |
iclr_2018_SkF2D7g0b | Existing black-box attacks on deep neural networks (DNNs) so far have largely focused on transferability, where an adversarial instance generated for a locally trained model can "transfer" to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query access to the target model's class probabilities, which do not rely on transferability. We also propose strategies to decouple the number of queries required to generate each adversarial sample from the dimensionality of the input. An iterative variant of our attack achieves close to 100% adversarial success rates for both targeted and untargeted attacks on DNNs. We carry out extensive experiments for a thorough comparative evaluation of black-box attacks and show that the proposed Gradient Estimation attacks outperform all transferability-based black-box attacks we tested on both MNIST and CIFAR-10 datasets, achieving adversarial success rates similar to well-known, state-of-the-art white-box attacks. We also apply the Gradient Estimation attacks successfully against a real-world content moderation classifier hosted by Clarifai. Furthermore, we evaluate black-box attacks against state-of-the-art defenses. We show that the Gradient Estimation attacks are very effective even against these defenses. | The authors consider new attacks for generating adversarial samples against neural networks. In particular, they are interested in approximating gradient-based white-box attacks such as FGSM in a black-box setting by estimating gradients from queries to the classifier. They assume that the attacker is able to query, for any example x, the vector of probabilities p(x) corresponding to each class.
Given such query access it’s trivial to estimate the gradients of p using finite differences. As a consequence one can implement FGSM using these estimates assuming cross-entropy loss, as well as a logit-based loss. They consider both iterative and single-step FGSM attacks in the targeted (i.e. the adversary’s goal is to switch the example’s label to a specific alternative label) and un-targeted settings (any mislabelling is a success). They compare themselves to transfer black-box attacks, where the adversary trains a proxy model and generates the adversarial sample by running a white-box attack on that model. For a number of target classifiers on both MNIST and CIFAR-10, they show that these attacks outperform the transfer-based attacks, and are comparable to white-box attacks, while maintaining low distortion on the attack samples.
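To make this concrete, here is a minimal sketch of the coordinate-wise finite-difference estimate plugged into single-step FGSM (the query_probs interface, the cross-entropy loss and the step sizes are my own placeholders, not the paper's exact setup):

```python
import numpy as np

def estimate_gradient(x, y, query_probs, delta=1e-2):
    """Two-sided finite-difference estimate of d(loss)/dx, one coordinate at a time.
    query_probs(x) is assumed to return the model's class-probability vector for x;
    the loss here is cross-entropy on the true label y."""
    x = np.asarray(x, dtype=np.float64)
    grad = np.zeros(x.size)
    for i in range(x.size):
        e = np.zeros(x.size)
        e[i] = delta
        p_plus = query_probs((x.ravel() + e).reshape(x.shape))[y]
        p_minus = query_probs((x.ravel() - e).reshape(x.shape))[y]
        grad[i] = (-np.log(p_plus + 1e-12) + np.log(p_minus + 1e-12)) / (2.0 * delta)
    return grad.reshape(x.shape)   # costs 2 * x.size queries per example

def fgsm_black_box(x, y, query_probs, eps=0.3):
    """Single-step untargeted FGSM using the estimated gradient; pixels assumed in [0, 1]."""
    g = estimate_gradient(x, y, query_probs)
    return np.clip(np.asarray(x, dtype=np.float64) + eps * np.sign(g), 0.0, 1.0)
```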
One drawback of estimating gradients using finite differences is that the number of queries required scales with the dimensionality of the examples, which can be prohibitive in the case of images. They therefore describe two practical approaches for query reduction — one based on random feature grouping, and the other on PCA (which requires access to training data). They once again demonstrate the effectiveness of these methods across a number of models and datasets, including models deploying adversarially trained defenses.
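As I understand the random-grouping idea (a hedged sketch; the paper's exact estimator may differ), one directional derivative is estimated per random block of coordinates and shared across the block, so the query count scales with the number of groups rather than the input dimension:

```python
import numpy as np

def grouped_gradient_estimate(x, y, query_probs, num_groups=64, delta=1e-2):
    """Approximate gradient using one two-sided finite difference per random group
    of coordinates: 2 * num_groups queries instead of 2 * dim(x).
    Assumes num_groups <= x.size."""
    x = np.asarray(x, dtype=np.float64)
    rng = np.random.default_rng(0)
    groups = np.array_split(rng.permutation(x.size), num_groups)
    grad = np.zeros(x.size)
    for g in groups:
        v = np.zeros(x.size)
        v[g] = 1.0
        v /= np.linalg.norm(v)                       # unit direction for this block
        f_plus = -np.log(query_probs((x.ravel() + delta * v).reshape(x.shape))[y] + 1e-12)
        f_minus = -np.log(query_probs((x.ravel() - delta * v).reshape(x.shape))[y] + 1e-12)
        grad[g] = (f_plus - f_minus) / (2.0 * delta)  # shared estimate within the block
    return grad.reshape(x.shape)
```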
Finally, they demonstrate compelling real-world deployment against Clarifai classification models designed to flag “Not Safe for Work” content.
Overall, the paper provides a very thorough experimental examination of a practical black-box attack that can be deployed against real-world systems. While there are some similarities with Chen et al. with respect to utilizing finite-differences to estimate gradients, I believe the work is still valuable for its very thorough experimental verification, as well as the practicality of their methods. The authors may want to be more explicit about their claim in the Related Work section that the running time of their attack is “40x” less than that of Chen et al. While this is believable, there is no running time comparison in the body of the paper. |
iclr_2018_rJbs5gbRW | Modern neural network architectures take advantage of increasingly deep layers and various advances in their structure to achieve better performance. While traditional explicit regularization techniques like dropout, weight decay, and data augmentation are still being used in these new models, little about the regularization and generalization effects of these new structures has been studied. Besides being deeper than their predecessors, could newer architectures like ResNet and DenseNet also benefit from their structures' implicit regularization properties? In this work, we investigate the skip connection's effect on a network's generalization features. Through experiments, we show that certain neural network architectures contribute to their generalization abilities. Specifically, we study the effect that low-level features have on generalization performance when they are introduced to deeper layers in DenseNet, ResNet, as well as networks with 'skip connections'. We show that these low-level representations do help with generalization in multiple settings when both the quality and quantity of training data are decreased. | The paper studies the effect of different network structures (plain CNN, ResNet and DenseNet). This is an interesting line of research to pursue; however, it gives the impression that a large amount of recent work in this direction has not been considered by the authors. The paper contains ONLY 4 references.
Some references that might be useful to consider in the paper:
- K. Greff et. al. Highway and Residual Networks learn Unrolled Iterative Estimation.
- C. Zang et. al. UNDERSTANDING DEEP LEARNING REQUIRES RETHINKING GENERALIZATION
- Q. Liao el. al. Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex
- A. Veit et. al. Residual Networks Behave Like Ensembles of Relatively Shallow Networks
- K. He at. Al Identity Mappings in Deep Residual Networks
The writing and the structure of the paper could be significantly improved. From the paper, it is difficult to understand the contributions. Of the ones listed in Section 1, it seems that most of the contributions were already shown in the original ResNet and DenseNet papers. Given the questionable contribution and the lack of relevant citations, it is difficult to recommend acceptance of the paper.
Other issues:
Section 2: “Skip connection …. overcome the overfitting”, could the authors comment on this a bit more or point to relevant citation?
Section 2: “We increase the number of skip connections from 0 to 28”, it is not clear to me how this is done.
Section 3.1.1 “deep Linear model”: what do the authors mean by this? Multiple layers without a nonlinearity? Is it the same as Cascade Net?
Section 3.2 From the data description, it is not clear how the training data was obtained. Could the authors provide more details on this?
Section 3.2 “…, only 3 of them are chosen to be displayed…”: how was the selection process done?
Section 3.2 “Instead of showing every layer’s output we exhibit the 3th, 5th, 7th, 9th, 11th, 13th and the final layer’s output”: according to this description we should be able to see 7 columns in Fig. 7, but the figure does not correspond to it.
Section 4 “This paper investigates how skip connections works in vision tasks…”: I do not find experiments with vision datasets in the paper. In order to claim this, I would encourage the authors to run tests on a CV benchmark dataset (e.g. ImageNet).
iclr_2018_HJjvxl-Cb | Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy; that is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning or on-policy policy gradient methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds. | This paper proposes a soft actor-critic method aiming at lowering sample complexity and achieving a new convergence guarantee. However, the current paper has some correctness issues, is missing some related work and lacks a clear statement of innovation.
The first issue is that augmenting the reward by adding an entropy term to the original RL objective is not clearly innovative. The connections to, and improvements upon, other approaches need to be made clearer. In particular, the connection to the work by Haarnoja is unclear. There is this statement: “Although the soft Q-learning algorithm proposed by Haarnoja et al. (2017) has a value function and actor network, it is not a true actor-critic algorithm: the Q- function is estimating the optimal Q-function, and the actor does not directly affect the Q-function except through the data distribution. Hence, Haarnoja et al. (2017) motivates the actor network as an approximate sampler, rather than the actor in an actor-critic algorithm. Crucially, the convergence of this method hinges on how well this sampler approximates the true posterior. In contrast, we prove that our method converges to the optimal policy from a given policy class, regardless of the policy parameterization.” The last sentence suggests that the key difference is that any policy parameterization can be used, which makes the previous sentences less clear. Is the key extension then the proof, and hence the use of the projection with the KL-divergence?
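For reference, the entropy-augmented objective being discussed has the familiar maximum-entropy form (my notation; α is a temperature trading off reward against entropy):

```latex
% Maximum-entropy RL objective: standard reward plus a policy-entropy bonus.
\begin{equation}
  J(\pi) \;=\; \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
  \Big[ r(s_t, a_t) \;+\; \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big].
\end{equation}
```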
Further, there is a missing connection to the paper “Guided policy search” by Levine and Koltun. Though it is a different framework, it clearly mentions using the augmented reward to learn the sub-optimal policies (for differential dynamic programming). The DDPG paper mentioned that DDPG can also be used within the GPS framework. That work is different, but a discussion of the connections should nonetheless be included.
If the key novelty in this work is an extension of the theory, to allow any policy parameterization, together with empirical results demonstrating improved performance over Haarnoja et al., then there appear to be correctness issues in both, as laid out below.
The key novelty in the theory seems to be to use a projection onto the space of policies, using a KL divergence. There are, however, currently too many unclear or misspecified steps to verify correctness.
1. The definition of pinew in Equation (6) is for one specific state s_t; shouldn’t this be across all states? If it is for one state, then E_{pinew} makes sense, since pi is only specified as a conditional distribution (not a joint distribution); if it is supposed to be expected value across all states, then what is E_{pinew}? Is it with the stationary distribution of pinew?
2. The proof for Lemma 1 is hard to follow, because Z is not defined. I was mostly able to guess, based on Haarnoja et al., but the step between (18) and (19), where E_{pinew} [Zpiold] - E_{piold} [Zpiold] is dropped, is unclear to me. Zpiold does not depend on actions, so if the expectation is only w.r.t. the action, then it cancels. This goes back to point 1, where it wouldn’t make much sense for the KL to only depend on actions. In fact, if pinew has to be computed separately for each state, then we are really back to tabular policies. (The form of the projection and of Z that I am assuming is written out after this list.)
3. “There is no need in principle to include a separate function approximator for the state value, since it is related to the Q-function and policy according to Qθ (st , at ) − log πφ (at |st )” This is not true, since you rely on the fact that you have separate network parameters to get an unbiased gradient estimate in (10).
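For completeness, the per-state projection and partition function that I am assuming in points 1 and 2, reconstructed from Haarnoja et al. rather than quoted from the paper, are:

```latex
% Soft policy improvement step as I read Eq. (6): project the Boltzmann
% distribution induced by the old soft Q-function back onto the policy class Pi.
\begin{equation}
  \pi_{\mathrm{new}}(\cdot \mid s_t) = \arg\min_{\pi' \in \Pi}
  D_{\mathrm{KL}}\!\left( \pi'(\cdot \mid s_t) \,\middle\|\,
  \frac{\exp\!\big(Q^{\pi_{\mathrm{old}}}(s_t, \cdot)\big)}{Z^{\pi_{\mathrm{old}}}(s_t)} \right),
  \qquad
  Z^{\pi_{\mathrm{old}}}(s_t) = \int_{\mathcal{A}} \exp\!\big(Q^{\pi_{\mathrm{old}}}(s_t, a)\big)\, da .
\end{equation}
```

If this reading is correct, the expectation inside the KL is over actions only, which is exactly why point 1 asks how states enter.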
The experimental results also appear to have some correctness issues.
1. The claim that the algorithm does better is also difficult to gauge because the graphs are unclear. In particular, it is not explained why the lines end early. How were multiple gradients incorporated into the update? Did you wait 4 (or 16) steps before computing a gradient update? This might explain why the lines end early, but it then invalidates how the lines are drawn. Rather, the lines should be extended, with each point plotted every 4 (or 16) steps. Doing this would remove the impression that the lines with 4 or 16 learn faster (when really they are just plotted on a different x-axis).
2. There is a lack of experimental details. This includes missing details about neural network architectures used by each algorithm, parameter tuning details, how multiple gradients are used, etc. This omission makes the experiments not reproducible.
3. Although DDPG is claimed to be very sensitive to parameter changes, and the proposed algorithm to be more stable, no parameter sensitivity results are shown.
Minor comments:
1. Graph font is much too small.
2. Typo in (10), should be V(s_{t+1})
3. Because the proof of Lemma 1 is so straightforward (just redefining reward), it would be better to actually spell it out, give a definition of entropy, etc. |
iclr_2018_S1fduCl0b | Lifelong learning is the problem of learning multiple consecutive tasks in a sequential manner where knowledge gained from previous tasks is retained and used for future learning. It is essential towards the development of intelligent machines that can adapt to their surroundings. In this work we focus on a lifelong learning approach to generative modeling where we continuously incorporate newly observed streaming distributions into our learnt model. We do so through a student-teacher architecture which allows us to learn and preserve all the distributions seen so far without the need to retain the past data nor the past models. Through the introduction of a novel cross-model regularizer, the student model leverages the information learnt by the teacher, which acts as a summary of everything seen till now. The regularizer has the additional benefit of reducing the effect of catastrophic interference that appears when we learn over streaming data. We demonstrate its efficacy on streaming distributions as well as its ability to learn a common latent representation across a complex transfer learning scenario. | The paper proposed a teacher-student framework and a modified objective function to adapt VAE training to streaming data setting. The qualitative experimental result shows that the learned model can generate reasonable-looking samples. I'm not sure about what conclusion to make from the numerical result, as the test negative ELBO actually increased after decreasing initially. Why did it increase?
The modified objective function is a little ad hoc, and it's unclear how to relate the overall objective function to Bayesian posterior inference (what exactly is the posterior that the encoder tries to approximate?). There is a term in the objective function that is specific to synthetic data. Does that imply that the objective function is different depending on whether the data is synthetic or real? What is the motivation/justification for choosing KL(Q_student||Q_teacher) as the regularisation instead of the other way around? Would that make a difference in the quality of the learned model? If not, wouldn't KL(Q_teacher||Q_student) result in a reduction in the variance of the gradients and therefore be a better choice?
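To ground the question about the direction of the KL term, here is a minimal sketch of the kind of cross-model regulariser I am referring to, assuming diagonal Gaussian encoders; the encode/decode interface and the weight lam are my own placeholders, not the paper's:

```python
import torch
import torch.nn.functional as F

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over dims."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=-1)

def student_loss(x, student, teacher, lam=1.0):
    """Sketch of a student ELBO with an extra KL(Q_student || Q_teacher) consistency
    term; swapping the arguments of the last gaussian_kl call gives the 'other
    direction' asked about above. `student`/`teacher` expose hypothetical
    encode(x) -> (mu, logvar) and decode(z) -> Bernoulli means."""
    mu_s, logvar_s = student.encode(x)
    with torch.no_grad():                                         # the teacher is frozen
        mu_t, logvar_t = teacher.encode(x)
    z = mu_s + torch.randn_like(mu_s) * (0.5 * logvar_s).exp()    # reparameterisation
    recon = student.decode(z)
    rec = F.binary_cross_entropy(recon, x, reduction='none').sum(dim=-1)
    prior_kl = gaussian_kl(mu_s, logvar_s,
                           torch.zeros_like(mu_s), torch.zeros_like(logvar_s))
    cross_kl = gaussian_kl(mu_s, logvar_s, mu_t, logvar_t)
    return (rec + prior_kl + lam * cross_kl).mean()
```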
Details on the minimum number of real samples per interval needed for the model to be able to learn are also missing. Also, how many synthetic samples per real sample are needed? How are the updates with respect to synthetic samples scheduled? Given an infinite amount of streaming data with a fixed number of classes/underlying distributions and a fixed interval length, with the class of each interval sampled (uniformly) at random, will the model/algorithm converge? Is there a minimum number of real examples that the student learner needs to see before it can be turned into a teacher?
Another question: how is the number of latent categories J of the latent discrete distribution chosen?
Quality: The numerical experiment doesn't really compare to any other streaming benchmark and is a little unsatisfying. Without a streaming benchmark or a realistic motivating example in which the proposed scheme makes a significant difference, it's difficult to judge the contribution of this work.
Clarity: The manuscript is reasonably well-written. (minor: Paragraph 2, section 5, 'in principle' instead of 'in principal')
Originality: Average. The student-teacher framework by itself isn't novel. The modifications to the objective function appears to be novel as far as I am aware, but it doesn't require much special insights.
Significance: Below average. I think it will be very helpful if the authors can include a realistic motivating example where lifelong unsupervised learning is critical, and demonstrate that the proposed scheme makes a difference in the example. |
iclr_2018_B1ZZTfZAW | Generative Adversarial Networks (GANs) have shown remarkable success as a framework for training models to produce realistic-looking data. In this work, we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data. RGANs make use of recurrent neural networks (RNNs) in the generator and the discriminator. In the case of RCGANs, both of these RNNs are conditioned on auxiliary information. We demonstrate our models in a set of toy datasets, where we show visually and quantitatively (using sample likelihood and maximum mean discrepancy) that they can successfully generate realistic time-series. We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset, and evaluate on a real test set the performance of a model trained on the synthetic data, and vice-versa. We illustrate with these metrics that RCGANs can generate time-series data useful for supervised training, with only minor degradation in performance on real test data. This is demonstrated on digit classification from 'serialised' MNIST and by training an early warning system on a medical dataset of 17,000 patients from an intensive care unit. We further discuss and analyse the privacy concerns that may arise when using RCGANs to generate realistic synthetic medical time series data, and demonstrate results from differentially private training of the RCGAN. | The authors propose to use synthetic data generated by GANs as a replacement for personally identifiable data in training ML models for privacy-sensitive applications such as medicine. In particular it demonstrates adversarial training of a recurrent generator for an ICU monitoring multidimensional time series, proposes to evaluate such models by the performance (on real data) of supervised classifiers trained on the synthetic data ("TSTR"), and empirically analyzes the privacy implications of training and using such a model.
This paper touches on many interesting issues -- deep/recurrent models of time series, privacy-respecting ML, adaptation from simulated to real-world domains. But it is somewhat unfocused and does not seem make a clear contribution to any of these.
The recurrent GAN architecture does not appear particularly novel --- the authors note that similar architectures have been used for discrete tasks such as language modeling (and fail to note work that uses convolutional or recurrent generators for video prediction, a more relevant continuous task, see e.g. http://carlvondrick.com/tinyvideo/, or autoregressive approaches to deep models of time series, e.g. WaveNet https://arxiv.org/abs/1609.03499) and there is no obvious new architectural innovation.
I also find it difficult to assess whether the proposed model is actually generating reasonable time series. It may be true that "one plot showing synthetic ICU data would not provide enough information to evaluate its actual similarity to the real data" because it could not rule out the case that the model has captured the marginal distribution in each dimension but not the joint structure. However, producing marginal distributions that look reasonable is at least a *necessary* condition, and without seeing those plots it is hard to rule out that the model may be producing highly unrealistic samples.
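Checking the marginals is cheap; something along the following lines (my own sketch) would already expose grossly unrealistic per-channel behaviour, and the same flattened arrays feed directly into an MMD estimate of the kind the paper reports:

```python
import numpy as np

def marginal_summaries(real, synth):
    """Compare per-channel marginals of real vs. synthetic series.
    Both arrays have shape (num_samples, seq_len, num_channels)."""
    r = real.reshape(-1, real.shape[-1])
    s = synth.reshape(-1, synth.shape[-1])
    return {"real_mean": r.mean(0), "synth_mean": s.mean(0),
            "real_std": r.std(0), "synth_std": s.std(0)}

def mmd2_rbf(x, y, sigma=1.0):
    """Biased estimate of squared MMD with an RBF kernel between two sample sets
    of flattened series, shapes (n, d) and (m, d). O(n*m*d) memory, fine for a check."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()
```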
The basic privacy paradigm proposed seems to be:
1. train a GAN using private data
2. generate new synthetic data, assume this data does not leak private information
3. train a supervised classifier on the synthetic data
so that the GAN training-sampling loop basically functions as an anonymization procedure. For this to pan out, we'd need to see that the GAN samples are a) useful for a range of supervised tasks, and b) do not leak private information. But the results in Table 2 show that the TSTR results are quite a lot worse than real data in most cases, and it's not obvious that the small set of tasks evaluated are representative of all tasks people might care about. The attempts to demonstrate empirically that the GAN does not memorize training data aren't particularly convincing; this is an adversarial setting so the fact that a *particular* test doesn't reveal private data doesn't imply that a determined attacker wouldn't succeed. In this vein, the experiments with DP-SGD are more interesting, although a more direct comparison would be helpful (it is frustrating to flip back and forth between Tables 2 and 3 in an attempt to tease out relative performance) and it is not clear how the settings (ε = 0.5 and δ ≤ 9.8 × 10^−3) were selected or whether they provide a useful level of privacy. That said, I agree this is an interesting avenue for future work.
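For clarity, the TSTR protocol in step 3 amounts to something like the following sketch, with a generic scikit-learn classifier standing in for the downstream early-warning model and a binary label assumed:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def tstr_score(synth_X, synth_y, real_X_test, real_y_test):
    """Train a downstream classifier only on GAN-generated (synthetic) data and
    report its performance on held-out real data ('train on synthetic, test on real')."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(synth_X, synth_y)                      # fit on synthetic data only
    scores = clf.predict_proba(real_X_test)[:, 1]
    return roc_auc_score(real_y_test, scores)      # evaluated on real patients
```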
Finally it's worth noting that discarding patients with missing data is unlikely to be innocuous for ICU applications; data are quite often not missing at random (e.g., a patient going into a seizure may dislocate a sensor). It appears that the analysis in this paper threw out more than 90% of the patients in their original dataset, which would present serious concerns in using the resulting synthetic data to represent the population at large. One could imagine coding missing data in various ways (e.g. asking the generator to produce a missingness pattern as well as a time series and allowing the discriminator to access only the masked time series, or explicitly building a latent variable model) and some sort of principled approach to missing data seems crucial for meaningful results on this application. |
iclr_2018_SyAbZb-0Z | Deep learning models require extensive architecture design exploration and hyperparameter optimization to perform well on a given task. The exploration of the model design space is often made by a human expert, and optimized using a combination of grid search and search heuristics over a large space of possible choices. Neural Architecture Search (NAS) is a Reinforcement Learning approach that has been proposed to automate architecture design. NAS has been successfully applied to generate Neural Networks that rival the best human-designed architectures. However, NAS requires sampling, constructing, and training hundreds to thousands of models to achieve well-performing architectures. This procedure needs to be executed from scratch for each new task. The application of NAS to a wide set of tasks currently lacks a way to transfer generalizable knowledge across tasks. In this paper, we present the Multitask Neural Model Search (MNMS) controller. Our goal is to learn a generalizable framework that can condition model construction on successful model searches for previously seen tasks, thus significantly speeding up the search for new tasks. We demonstrate that MNMS can conduct an automated architecture search for multiple tasks simultaneously while still learning well-performing, specialized models for each task. We then show that pre-trained MNMS controllers can transfer learning to new tasks. By leveraging knowledge from previous searches, we find that pre-trained MNMS models start from a better location in the search space and reduce search time on unseen tasks, while still discovering models that outperform published human-designed models. | The paper proposes an extension of the Neural Architecture Search approach, in which a single RNN controller is trained with RL to select hyperparameters for child networks that must perform different tasks. The architecture includes the notion of a "task embedding", that helps the controller keeping track of similarity between tasks, to facilitate transfer across related tasks.
The paper is very well written, and based on a simple but interesting idea. It also deals with core issues in current machine learning.
On the negative side, there is just one experiment, and it is somewhat limited. In the experiment, the proposed model is trained on two very different tasks (English sentiment analysis and Spanish language detection), and then asked to generalize to another English sentiment analysis task and to a Spanish sentiment analysis task. The models converge faster to high accuracy in the proposed transfer learning setup than when trained one a single task with the same architecture search strategy. Moreover, the task embedding for the new English task is closer to that of the training English task, and the same for the training/test Spanish tasks.
My main concern with the experiment is that the approach is only tested in a setup in which there is a huge difference between two classes of tasks (English vs Spanish), so the model doesn't need to learn very sophisticated task embeddings to group the tasks correctly for transfer. It would be good to see other experiments where there is less of a trivial structure distinguishing tasks, to check if transfer helps.
Also, I find it surprising that the Corpus Cine sentiment task embedding is not correlated at all with the SST sentiment task. If the controller is really learning something interesting about the nature of the tasks, I would have expected a differential effect, such that IMDB is only correlated with SST, but Corpus Cine is correlated to both the Spanish language identification task and SST. Perhaps, this is worth some discussion.
Finally, it's not clear to me why the multitask architecture was used in the experiment even when no multi-task pre-training was conducted: shouldn't the simple neural architecture search method be used in this case?
Minor points:
"diffferentiated": different?
"outputted actions": output actions
"the aim of increase the training stability": the aim of increasing training stability
Insert references for Polyak averaging and Savitzky-Golay filtering.
Figure 3: specify that the Socher 2013 result is for SST
Figure 4: does LSS stand for SST?
I'm confused by Fig. 6: why aren't the diagonal values 100%?
MNMS is referred to as MNAS in Figure 5.
For architecture search, the neuroevolution literature should also be cited (https://www.oreilly.com/ideas/neuroevolution-a-different-kind-of-deep-learning). |