arxiv_id stringlengths 9-12 | abstract stringlengths 431-13.8k |
---|---|
2409.03137 | Momentum-based optimizers are central to a wide range of machine learning applications. These typically rely on an Exponential Moving Average (EMA) of gradients, which exponentially decays the contribution of older gradients. This accounts for gradients being local linear approximations that lose their relevance as the iterate moves along the loss landscape. This work questions the use of a single EMA to accumulate past gradients and empirically demonstrates how this choice can be sub-optimal: a single EMA cannot simultaneously give a high weight to the immediate past and a non-negligible weight to older gradients. Building on this observation, we propose AdEMAMix, a simple modification of the Adam optimizer with a mixture of two EMAs to better take advantage of past gradients. Our experiments on language modeling and image classification show, quite surprisingly, that gradients can stay relevant for tens of thousands of steps. They help to converge faster, and often to lower minima: e.g., a 1.3B-parameter AdEMAMix LLM trained on 101B tokens performs comparably to an AdamW model trained on 197B tokens (+95%). Moreover, our method significantly slows down model forgetting during training. Our work motivates further exploration of different types of functions to leverage past gradients, beyond EMAs. |
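The two-EMA idea described in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the authors' reference implementation; the hyperparameter values (`beta3`, `alpha`, learning rate) are placeholders.

```python
import numpy as np

def adememix_step(theta, grad, state, lr=1e-2, beta1=0.9, beta2=0.999,
                  beta3=0.9999, alpha=5.0, eps=1e-8):
    """One illustrative AdEMAMix-style update: Adam's fast EMA (m1) is
    mixed with a slow EMA (m2) that retains much older gradients."""
    state["t"] += 1
    t = state["t"]
    state["m1"] = beta1 * state["m1"] + (1 - beta1) * grad   # fast EMA
    state["m2"] = beta3 * state["m2"] + (1 - beta3) * grad   # slow EMA
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad**2
    m1_hat = state["m1"] / (1 - beta1**t)                    # bias correction
    v_hat = state["v"] / (1 - beta2**t)
    update = (m1_hat + alpha * state["m2"]) / (np.sqrt(v_hat) + eps)
    return theta - lr * update

# Toy usage: minimize f(x) = ||x||^2, whose gradient is 2x.
theta = np.array([1.0, -2.0])
state = {"m1": np.zeros(2), "m2": np.zeros(2), "v": np.zeros(2), "t": 0}
for _ in range(2000):
    theta = adememix_step(theta, 2 * theta, state)
```

The slow EMA keeps a non-negligible weight on gradients from thousands of steps earlier, which a single Adam-style EMA cannot do without also blurring the recent past.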
2402.10110 | Instruction tuning is critical for large language models (LLMs) to achieve better instruction-following and task-adaptation capabilities, but its success heavily relies on the quality of the training data. Many recent methods focus on improving data quality but often overlook its compatibility with the student model being finetuned. This paper introduces Selective Reflection-Tuning, a novel paradigm that synergizes a teacher LLM's reflection and introspection for improving existing data quality with the data selection capability of the student LLM, to automatically refine existing instruction-tuning data. This teacher-student collaboration produces high-quality and student-compatible instruction-response pairs, resulting in sample-efficient instruction tuning and LLMs of superior performance. Selective Reflection-Tuning is a data augmentation and synthesis method that generally improves LLM finetuning and self-improvement without collecting brand-new data. We apply our method to Alpaca and WizardLM data and achieve much stronger and top-tier 7B and 13B LLMs.
Our code, models, and data will be released at https://github.com/tianyi-lab/Reflection_Tuning. |
2408.09015 | With the rise of language and multimodal models of ever-increasing size, pretraining a general-purpose foundational model and adapting it to downstream tasks has become common practice. To this end, adaptation efficiency can be a critical bottleneck given the large model sizes, hence efficient finetuning methods such as LoRA have become prevalent. However, LoRA is typically applied with the same rank across all model layers, despite mounting evidence from the transfer learning literature that, during finetuning, later layers diverge more from the pretrained weights. Inspired by the theory and observations around feature learning and module criticality, we develop a simple model-disagreement-based technique to predict the rank of a given module relative to the other modules.
Empirically, AdaRank generalizes notably better on unseen data than models of uniform rank with the same number of parameters. Compared to prior work, AdaRank has the unique advantage of leaving the pretraining and adaptation stages completely intact: there is no need for any additional objectives or regularizers, which can hinder adaptation accuracy and performance. Our code is publicly available at https://github.com/google-research/google-research/tree/master/adaptive_low_rank. |
2409.03733 | While scaling training compute has led to remarkable improvements in large language models (LLMs), scaling inference compute has not yet yielded analogous gains. We hypothesize that a core missing component is a lack of diverse LLM outputs, leading to inefficient search due to models repeatedly sampling highly similar, yet incorrect generations. We empirically demonstrate that this lack of diversity can be mitigated by searching over candidate plans for solving a problem in natural language. Based on this insight, we propose PLANSEARCH, a novel search algorithm which shows strong results across HumanEval+, MBPP+, and LiveCodeBench (a contamination-free benchmark for competitive coding). PLANSEARCH generates a diverse set of observations about the problem and then uses these observations to construct plans for solving the problem. By searching over plans in natural language rather than directly over code solutions, PLANSEARCH explores a significantly more diverse range of potential solutions compared to baseline search methods. Using PLANSEARCH on top of Claude 3.5 Sonnet achieves a state-of-the-art pass@200 of 77.0% on LiveCodeBench, outperforming both the best score achieved without search (pass@1 = 41.4%) and using standard repeated sampling (pass@200 = 60.6%). Finally, we show that, across all models, search algorithms, and benchmarks analyzed, we can accurately predict performance gains due to search as a direct function of the diversity over generated ideas. "If you fail to plan, you plan to fail." - Mastermind, Taylor Swift |
2401.15077v2 | Autoregressive decoding makes the inference of Large Language Models (LLMs) time-consuming. In this paper, we reconsider speculative sampling and derive two key observations. Firstly, autoregression at the feature (second-to-top-layer) level is more straightforward than at the token level. Secondly, the inherent uncertainty in feature (second-to-top-layer) level autoregression constrains its performance. Based on these insights, we introduce EAGLE (Extrapolation Algorithm for Greater Language-model Efficiency), a simple yet highly efficient speculative sampling framework. By incorporating a token sequence advanced by one time step, EAGLE effectively resolves the uncertainty, enabling precise second-to-top-layer feature prediction with minimal overhead. We conducted comprehensive evaluations of EAGLE, including all models from the Vicuna and LLaMA2-Chat series, the MoE model Mixtral 8x7B Instruct, and tasks in dialogue, code generation, mathematical reasoning, and instruction following. For LLaMA2-Chat 70B, EAGLE achieved a latency speedup ratio of 2.7x-3.5x and doubled throughput, while maintaining the distribution of the generated text. |
2406.16858v2 | Inference with modern Large Language Models (LLMs) is expensive and time-consuming, and speculative sampling has proven to be an effective solution. Most speculative sampling methods such as EAGLE use a static draft tree, implicitly assuming that the acceptance rate of draft tokens depends only on their position. Interestingly, we found that the acceptance rate of draft tokens is also context-dependent. In this paper, building upon EAGLE, we propose EAGLE-2, which introduces a new technique of context-aware dynamic draft trees into draft modeling. This improvement leverages the fact that the draft model of EAGLE is well-calibrated: the confidence scores from the draft model approximate acceptance rates with small errors. We conducted extensive evaluations on three series of LLMs and six tasks, with EAGLE-2 achieving speedup ratios of 3.05x-4.26x, which is 20%-40% faster than EAGLE-1. EAGLE-2 also ensures that the distribution of the generated text remains unchanged, making it a lossless acceleration algorithm. |
2308.04623v1 | Recent advances with large language models (LLM) illustrate their diverse capabilities. We propose a novel algorithm, staged speculative decoding, to accelerate LLM inference in small-batch, on-device scenarios. We address the low arithmetic intensity of small-batch inference by improving upon previous work in speculative decoding. First, we restructure the speculative batch as a tree, which reduces generation costs and increases the expected tokens per batch. Second, we add a second stage of speculative decoding. Taken together, we reduce single-batch decoding latency by 3.16x with a 762M parameter GPT-2-L model while perfectly preserving output quality. |
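The speculative decoding methods in the three abstracts above all rest on the same accept/reject rule for verifying draft tokens against the target model. The toy sketch below (the function name and toy distributions are ours) implements that standard rule, which provably preserves the target distribution.

```python
import numpy as np

def verify_draft(draft_tokens, q_probs, p_probs, rng):
    """Standard speculative-sampling verification: accept draft token x with
    probability min(1, p(x)/q(x)); on rejection, resample from the residual
    distribution max(p - q, 0) (renormalized) and discard the rest."""
    accepted = []
    for x, q, p in zip(draft_tokens, q_probs, p_probs):
        if rng.random() < min(1.0, p[x] / q[x]):
            accepted.append(x)
        else:
            residual = np.maximum(p - q, 0.0)
            residual /= residual.sum()
            accepted.append(int(rng.choice(len(p), p=residual)))
            break  # tokens after a rejection are discarded
    return accepted

rng = np.random.default_rng(0)
p = np.array([0.7, 0.2, 0.1])  # toy target-model distribution per position
# When the draft distribution equals the target, every token is accepted.
out = verify_draft([0, 1, 2], [p, p, p], [p, p, p], rng)
```

A perfectly calibrated draft model (q = p) yields acceptance probability 1 at every position, which is the limit EAGLE-2's confidence-based tree sizing exploits.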
2408.08541 | Large Language Models (LLMs) are typically shipped with tokenizers that *deterministically* encode text into so-called *canonical* token sequences, to which the LLMs assign probability values. One common assumption is that the probability of a piece of text is the probability of its canonical token sequence. However, the tokenization of a string is not unique: e.g., the Llama2 tokenizer encodes Tokens as [Tok,ens], but [Tok,en,s] also represents the same text. In this paper, we study noncanonical tokenizations. We prove that, given a string, it is computationally hard to find the most likely tokenization for an autoregressive LLM, as well as to compute the marginal probability over all possible tokenizations. We then show how the marginal is, in most cases, indistinguishable from the canonical probability. Surprisingly, we then empirically demonstrate the existence of a significant amount of signal hidden within tokenization space. Notably, by simply aggregating the probabilities of noncanonical tokenizations, we achieve improvements across a range of LLM evaluation benchmarks for a variety of architectures, including transformers and state space models. |
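For a toy unigram "language model," the two quantities the abstract above discusses (the marginal probability over all tokenizations and the most likely tokenization) can both be computed with a simple dynamic program. The vocabulary and probabilities below are invented for illustration; real LLMs are autoregressive, which is what makes the exact computation hard.

```python
def tokenization_scores(text, vocab):
    """vocab maps token -> probability under a toy unigram model.
    Returns (marginal probability over all segmentations,
             probability of the single best segmentation)."""
    n = len(text)
    marginal = [0.0] * (n + 1)  # sum over segmentations of text[:i]
    best = [0.0] * (n + 1)      # max over segmentations of text[:i]
    marginal[0] = best[0] = 1.0
    for i in range(1, n + 1):
        for j in range(i):
            tok = text[j:i]
            if tok in vocab:
                marginal[i] += marginal[j] * vocab[tok]
                best[i] = max(best[i], best[j] * vocab[tok])
    return marginal[n], best[n]

vocab = {"a": 0.3, "b": 0.3, "ab": 0.4}
# Segmentations of "ab": ["ab"] with prob 0.4, ["a","b"] with prob 0.09.
marg, best = tokenization_scores("ab", vocab)
```

The marginal (0.49 here) exceeds the canonical probability (0.4): the "hidden signal" in noncanonical tokenizations that the paper aggregates.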
2408.17344 | This paper presents rerankers, a Python library which provides an easy-to-use interface to the most commonly used re-ranking approaches. Re-ranking is an integral component of many retrieval pipelines; however, there exist numerous approaches to it, relying on different implementation methods. rerankers unifies these methods into a single user-friendly interface, allowing practitioners and researchers alike to explore different methods while only changing a single line of Python code. Moreover, rerankers ensures that its implementations are done with the fewest dependencies possible, and re-uses the original implementation whenever possible, guaranteeing that our simplified interface results in no performance degradation compared to more complex ones. The full source code and list of supported models are updated regularly and available at https://github.com/answerdotai/rerankers. |
2408.16673 | Large language models rely on Supervised Fine-Tuning (SFT) to specialize in downstream tasks. Cross Entropy (CE) loss is the de facto choice in SFT, but it often leads to overfitting and limited output diversity due to its aggressive updates to the data distribution. This paper aims to address these issues by introducing the maximum entropy principle, which favors models with flatter distributions that still effectively capture the data. Specifically, we develop a new distribution matching method called GEM, which solves reverse Kullback-Leibler divergence minimization with an entropy regularizer. For the SFT of Llama-3-8B models, GEM outperforms CE in several aspects. First, when applied to the UltraFeedback dataset to develop general instruction-following abilities, GEM exhibits reduced overfitting, evidenced by lower perplexity and better performance on the IFEval benchmark. Furthermore, GEM enhances output diversity, leading to performance gains of up to 7 points on math reasoning and code generation tasks using best-of-n sampling, even without domain-specific data. Second, when fine-tuning with domain-specific datasets for math reasoning and code generation, GEM also shows less overfitting and improvements of up to 10 points compared with CE. |
2408.16293 | Language models have demonstrated remarkable performance in solving reasoning tasks; however, even the strongest models still occasionally make reasoning mistakes. Recently, there has been active research aimed at improving reasoning accuracy, particularly by using pretrained language models to “self-correct” their mistakes via multi-round prompting. In this paper, we follow this line of work but focus on understanding the usefulness of incorporating “error-correction” data directly into the pretraining stage. This data consists of erroneous solution steps immediately followed by their corrections. Using a synthetic math dataset, we show promising results: this type of pretraining data can help language models achieve higher reasoning accuracy directly (i.e., through simple auto-regression, without multi-round prompting) compared to pretraining on the same amount of error-free data. We also delve into many details, such as (1) how this approach differs from beam search, (2) how such data can be prepared, (3) whether masking is needed on the erroneous tokens, (4) the amount of error required, (5) whether such data can be deferred to the fine-tuning stage, and many others. |
2312.07104 | Large language models (LLMs) are increasingly used for complex tasks requiring multiple chained generation calls, advanced prompting techniques, control flow, and interaction with external environments. However, efficient systems for programming and executing these applications are lacking.
To bridge this gap, we introduce SGLang, a Structured Generation Language for LLMs. SGLang is designed for the efficient programming of LLMs and incorporates primitives for common LLM programming patterns. We have implemented SGLang as a domain-specific language embedded in Python, and we developed an interpreter, a compiler, and a high-performance runtime for SGLang.
These components work together to enable optimizations such as parallelism, batching, caching, sharing, and other compilation techniques. Additionally, we propose RadixAttention, a novel technique that maintains a Least Recently Used (LRU) cache of the Key-Value (KV) cache for all requests in a radix tree, enabling automatic KV cache reuse across multiple generation calls at runtime. SGLang simplifies the writing of LLM programs and boosts execution efficiency.
Our experiments demonstrate that SGLang can significantly speed up common LLM tasks, while reducing code complexity and enhancing control. |
2408.13359 | Finding the optimal learning rate for language model pretraining is a challenging task.
This is not only because there is a complicated correlation between learning rate, batch size, number of training tokens, model size, and other hyperparameters, but also because it is prohibitively expensive to perform a hyperparameter search for large language models with billions or trillions of parameters.
Recent studies propose using small proxy models and small corpora to perform hyperparameter searches and transferring the optimal parameters to large models and large corpora.
While zero-shot transferability has been theoretically and empirically proven for model-size-related hyperparameters, such as depth and width, zero-shot transfer from small corpora to large corpora is underexplored.
In this paper, we study the correlation between optimal learning rate, batch size, and number of training tokens for the recently proposed WSD scheduler.
After thousands of small experiments, we found a power-law relationship between these variables and demonstrated its transferability across model sizes.
Based on this observation, we propose a new learning rate scheduler, the Power scheduler, which is agnostic to the number of training tokens and batch size.
Our experiments show that combining the Power scheduler with Maximum Update Parameterization (µP) can consistently achieve impressive performance with one set of hyperparameters, regardless of the number of training tokens, batch size, model size, and even model architecture.
Our 3B dense and MoE models trained with the Power scheduler achieve comparable performance as state-of-the-art small language models.
We open-source these pretrained models at https://ibm.biz/BdKhLa. |
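A power-law schedule in the spirit of the abstract above can be sketched as follows; this is an illustrative form with placeholder constants, and the paper's exact functional form and coefficients may differ. The key property is that the learning rate at step t does not depend on the total number of training steps.

```python
def power_lr(step, peak_lr=0.01, warmup_steps=100, a=4.0, b=0.5):
    """Illustrative power-law schedule: linear warmup, then a decay
    proportional to step**(-b), capped at peak_lr. Unlike cosine decay,
    the value at a given step is independent of the training horizon."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    return min(peak_lr, a * peak_lr * step ** (-b))

lrs = [power_lr(t) for t in range(1000)]
```

Because the schedule never consults the total token budget, training can be extended (or stopped early) without re-tuning, which is what makes it "agnostic to the number of training tokens."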
2408.12857 | Recently, a wide range of memory-efficient LLM training algorithms have gained substantial popularity. These methods leverage the low-rank structure of gradients to project optimizer states into a subspace using a projection matrix found by singular value decomposition (SVD). However, the convergence of these algorithms is highly dependent on the update rules of their projection matrix. In this work, we provide the first convergence guarantee for arbitrary update rules of the projection matrix. This guarantee is generally applicable to optimizers that can be analyzed with Hamiltonian Descent, including most common ones, such as LION and Adam. Inspired by our theoretical understanding, we propose Online Subspace Descent, a new family of subspace descent optimizers without SVD. Instead of updating the projection matrix with eigenvectors, Online Subspace Descent updates the projection matrix with online PCA. Online Subspace Descent is flexible and introduces only minimal overhead to training. We show that for the task of pretraining LLaMA models ranging from 60M to 7B parameters on the C4 dataset, Online Subspace Descent achieves lower perplexity and better downstream task performance than state-of-the-art low-rank training methods across different settings and narrows the gap with full-rank baselines. Code is available at https://github.com/kyleliang919/Online-Subspace-Descent. |
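An online-PCA update of the projection matrix, as opposed to recomputing an SVD, can be sketched with Oja's rule; this is an illustrative stand-in for the paper's update, not its exact rule.

```python
import numpy as np

def oja_update(P, grad, lr=0.1):
    """One online-PCA step (Oja's rule) for a projection matrix P (d x r):
    nudge P toward the dominant subspace of the gradient stream, then
    re-orthonormalize with QR instead of running a full SVD."""
    g = grad.reshape(-1, 1)
    P = P + lr * (g @ (g.T @ P))
    Q, _ = np.linalg.qr(P)
    return Q

rng = np.random.default_rng(0)
d, r = 8, 2
P = np.linalg.qr(rng.standard_normal((d, r)))[0]
direction = np.zeros(d)
direction[0] = 1.0  # toy stream: all gradients live on a single axis
for _ in range(50):
    P = oja_update(P, 3.0 * direction)
# After convergence, the subspace spanned by P contains `direction`.
coverage = np.linalg.norm(P.T @ direction)
```

Each step costs only matrix-vector products plus a small QR, which is the "minimal overhead" the abstract claims relative to periodic SVDs.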
2405.12981 | Key-value (KV) caching plays an essential role in accelerating decoding for transformer-based autoregressive large language models (LLMs). However, the amount of memory required to store the KV cache can become prohibitive at long sequence lengths and large batch sizes. Since the invention of the transformer, two of the most effective interventions discovered for reducing the size of the KV cache have been Multi-Query Attention (MQA) and its generalization, Grouped-Query Attention (GQA). MQA and GQA both modify the design of the attention block so that multiple query heads can share a single key/value head, reducing the number of distinct key/value heads by a large factor while only minimally degrading accuracy. In this paper, we show that it is possible to take Multi-Query Attention a step further by also sharing key and value heads between adjacent layers, yielding a new attention design we call Cross-Layer Attention (CLA). With CLA, we find that it is possible to reduce the size of the KV cache by a further factor while maintaining nearly the same accuracy as unmodified MQA. In experiments training 1B- and 3B-parameter models from scratch, we demonstrate that CLA provides a Pareto improvement over the memory/accuracy tradeoffs that are possible with traditional MQA, enabling inference with longer sequence lengths and larger batch sizes than would otherwise be possible. |
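A structural sketch of CLA's cross-layer sharing, with a sharing factor of 2 (adjacent layer pairs); the module layout, shapes, and class names are simplified assumptions, not the paper's architecture code.

```python
import numpy as np

class AttentionLayer:
    """Minimal container for attention parameters. Under CLA, a layer may
    reuse the previous layer's K/V projections, so only one KV cache is
    needed per sharing group."""
    def __init__(self, d_model, kv_proj=None, seed=0):
        rng = np.random.default_rng(seed)
        self.q_proj = rng.standard_normal((d_model, d_model))
        if kv_proj is None:  # this layer owns fresh K/V projections
            kv_proj = (rng.standard_normal((d_model, d_model)),
                       rng.standard_normal((d_model, d_model)))
        self.k_proj, self.v_proj = kv_proj

def build_cla_stack(n_layers, d_model, share_factor=2):
    layers = []
    for i in range(n_layers):
        if i % share_factor == 0:
            layers.append(AttentionLayer(d_model, seed=i))
        else:  # reuse the previous layer's K/V projections (and KV cache)
            shared = (layers[-1].k_proj, layers[-1].v_proj)
            layers.append(AttentionLayer(d_model, kv_proj=shared, seed=i))
    return layers

stack = build_cla_stack(n_layers=4, d_model=8)
```

With a sharing factor of 2, half the layers store no KV cache of their own, which is where the additional memory reduction over plain MQA comes from.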
2105.11921 | Professional summaries are written with document-level information, such as the theme of the document, in mind. This is in contrast with most seq2seq decoders which simultaneously learn to focus on salient content, while deciding what to generate, at each decoding step. With the motivation to narrow this gap, we introduce Focus Attention Mechanism, a simple yet effective method to encourage decoders to proactively generate tokens that are similar or topical to the input document. Further, we propose a Focus Sampling method to enable generation of diverse summaries, an area currently understudied in summarization. When evaluated on the BBC extreme summarization task, two state-of-the-art models augmented with Focus Attention generate summaries that are closer to the target and more faithful to their input documents,
outperforming their vanilla counterparts on ROUGE and multiple faithfulness measures.
We also empirically demonstrate that Focus Sampling is more effective in generating diverse and faithful summaries than top-k or nucleus sampling-based decoding methods. |
2407.01082 | Large Language Models (LLMs) generate long-form text by successively sampling the next token based on the probability distribution over the token vocabulary at each decoding step. Current popular truncation sampling methods such as top-p sampling, also known as nucleus sampling, often struggle to balance coherence and creativity in generating text, particularly when using higher temperatures. To address this issue, we propose min-p, a dynamic truncation sampling method that establishes a minimum base percentage threshold for tokens, which scales according to the probability of the top candidate token. Through experiments on several benchmarks, such as GPQA, GSM8K and AlpacaEval Creative Writing, we demonstrate that min-p improves the coherence and quality of generated text even at high temperatures, while also facilitating more creative and diverse outputs compared to top-p and other sampling methods. As of writing, min-p has been adopted by multiple open-source LLM implementations and has been independently assessed by members of the open-source LLM community, further validating its practical utility and potential. Code, evaluation code and results are available at https://github.com/menhguin/minp_paper/. |
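The min-p truncation rule described above is compact enough to sketch directly; the function name and toy probabilities are ours.

```python
import numpy as np

def min_p_filter(probs, p_base=0.1):
    """Min-p truncation: keep tokens whose probability is at least
    p_base * max(probs), then renormalize. The threshold scales with the
    model's confidence, unlike the fixed cumulative-mass cutoff of top-p."""
    probs = np.asarray(probs, dtype=float)
    threshold = p_base * probs.max()
    kept = np.where(probs >= threshold, probs, 0.0)
    return kept / kept.sum()

# threshold = 0.4 * 0.5 = 0.2 -> only the 0.5 and 0.3 tokens survive.
filtered = min_p_filter([0.5, 0.3, 0.15, 0.05], p_base=0.4)
```

When the model is confident (a high top probability), the threshold rises and the tail is cut aggressively; when the distribution is flat, more candidates survive, which is how min-p stays coherent at high temperatures.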
2406.01436 | Knowledge editing is a rising technique for efficiently updating factual knowledge in Large Language Models (LLMs) with minimal alteration of parameters. However, recent studies have identified concerning side effects, such as knowledge distortion and the deterioration of general abilities, that emerge after editing. This survey presents a comprehensive study of these side effects, providing a unified view of the challenges associated with knowledge editing in LLMs. We discuss related works and summarize potential research directions to overcome these limitations. Our work highlights the limitations of current knowledge editing methods, emphasizing the need for a deeper understanding of the inner knowledge structures of LLMs and improved knowledge editing methods. To foster future research, we have publicly released complementary materials such as our paper collection at https://github.com/MiuLab/EditLLM-Survey. |
2401.17585 | Current approaches of knowledge editing struggle to effectively propagate updates to interconnected facts.
In this work, we delve into the barriers that hinder the appropriate propagation of updated knowledge within these models for accurate reasoning.
To support our analysis, we introduce a novel reasoning-based benchmark – ReCoE (Reasoning-based Counterfactual Editing dataset) – which covers six common reasoning schemes in the real world.
We conduct a thorough analysis of existing knowledge editing techniques, including input-augmentation, finetuning, and locate-and-edit.
We found that all model editing methods show notably low performance on this dataset, especially in certain reasoning schemes.
Our analysis of the chain-of-thought generations of edited models further uncovers key reasons behind the inadequacy of existing knowledge editing methods from a reasoning standpoint, involving aspects of fact-wise editing, fact recallability, and coherence in generation. We will make our benchmark publicly available. |
2402.11078 | Fine-tuning is often dismissed as ineffective for model editing
due to its poor performance compared to more specialized methods.
However, fine-tuning is simple, agnostic to the architectural details of the model being edited,
and able to leverage ongoing advances in standard training methods (e.g., PEFT),
making it an appealing choice for a model editor.
In this work, we show that pure fine-tuning can be a viable approach to model editing.
We propose a slight modification of naive fine-tuning with two key ingredients.
First, we optimize the conditional likelihood rather than the full likelihood.
Second, we augment the data with random paraphrases and facts to encourage generalization and locality.
Our experiments on ZsRE and CounterFact show that this simple modification
allows fine-tuning to often match or outperform specialized editors in the edit score. |
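The first ingredient above, optimizing the conditional likelihood p(response | prompt) rather than the full likelihood, is commonly implemented by masking prompt positions out of the loss. A minimal sketch with placeholder token IDs:

```python
IGNORE = -100  # label value conventionally skipped by cross-entropy losses

def conditional_labels(prompt_ids, response_ids):
    """Build labels for conditional-likelihood training: prompt positions
    are masked (excluded from the loss), so gradients come only from the
    response tokens given the prompt."""
    return [IGNORE] * len(prompt_ids) + list(response_ids)

labels = conditional_labels(prompt_ids=[11, 12, 13], response_ids=[21, 22])
```

With full-likelihood training the model also learns to regenerate the prompt, which wastes capacity on text the editor never needs to produce; masking focuses the edit on the target fact.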
1704.04368 | Neural sequence-to-sequence models have provided a viable new approach
for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text).
However, these models have two shortcomings: they are liable to reproduce factual details inaccurately, and they tend to repeat themselves.
In this work we propose a novel architecture that augments the standard sequence-to-sequence attentional model in two orthogonal ways.
First, we use a hybrid pointer-generator network that can copy words from the source text via pointing, which aids accurate reproduction of information, while retaining the ability to produce novel words through the generator.
Second, we use coverage to keep track of what has been summarized, which discourages repetition.
We apply our model to the CNN/Daily Mail summarization
task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points. |
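The pointer-generator mixture described above combines the generator's vocabulary distribution with copy probability mass from the attention over source tokens. The sketch below is a schematic reconstruction for a single decoding step; the toy vocabulary, attention weights, and token IDs are ours.

```python
import numpy as np

def final_distribution(p_gen, vocab_probs, attention, src_token_ids):
    """Pointer-generator mixture for one decoding step:
    P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention weights
    on source positions holding w."""
    out = p_gen * np.asarray(vocab_probs, dtype=float)
    for attn, tok in zip(attention, src_token_ids):
        out[tok] += (1 - p_gen) * attn  # copy mass from the source text
    return out

vocab_probs = [0.6, 0.4, 0.0]  # the generator cannot produce token 2
attention = [0.5, 0.5]         # attention over two source positions
dist = final_distribution(0.8, vocab_probs, attention, src_token_ids=[2, 0])
```

Note that token 2 receives nonzero probability despite being out of the generator's reach: the pointer lets the model reproduce rare or out-of-vocabulary source words exactly.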
2407.18940 | Literature search questions, such as “where can I find research on the evaluation of consistency in generated summaries?” pose significant challenges for modern search engines and retrieval systems. These questions often require a deep understanding of research concepts and the ability to reason over entire articles. In this work, we introduce LitSearch, a retrieval benchmark comprising 597 realistic literature search queries about recent ML and NLP papers. LitSearch is constructed using a combination of (1) questions generated by GPT-4 based on paragraphs containing inline citations from research papers and
(2) questions about recently published papers, manually written by their authors.
All LitSearch questions were manually examined or edited by experts to ensure high quality. We extensively benchmark state-of-the-art retrieval models and also evaluate two LLM-based reranking pipelines.
We find a significant performance gap between BM25 and state-of-the-art dense retrievers, with a 24.8% difference in absolute recall@5. The LLM-based reranking strategies further improve the best-performing dense retriever by 4.4%. Additionally, commercial search engines and research tools like Google Search perform poorly on LitSearch, lagging behind the best dense retriever by 32 points. Taken together, these results show that LitSearch is an informative new testbed for retrieval systems while catering to a real-world use case. Our dataset and code are available at https://github.com/princeton-nlp/LitSearch. |
2403.03853 | As Large Language Models (LLMs) continue to advance in performance, their size has escalated significantly, with current LLMs containing billions or even trillions of parameters. However, in this study, we discovered that many layers of LLMs exhibit high similarity, and some layers play a negligible role in network functionality. Based on this observation, we define a metric called Block Influence (BI) to gauge the significance of each layer in LLMs. We then propose a straightforward pruning approach: layer removal, in which we directly delete the redundant layers in LLMs based on their BI scores. Experiments demonstrate that our method, which we call ShortGPT, significantly outperforms previous state-of-the-art (SOTA) methods in model pruning. Moreover, ShortGPT is orthogonal to quantization-like methods, enabling further reduction in parameters and computation. The ability to achieve better results through simple layer removal, as opposed to more complex pruning techniques, suggests a high degree of redundancy in the model architecture. |
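A Block-Influence-style score as described above can be sketched as one minus the mean cosine similarity between a layer's input and output hidden states; the exact normalization in the paper may differ.

```python
import numpy as np

def block_influence(x_in, x_out, eps=1e-8):
    """Illustrative Block Influence: 1 - mean cosine similarity between a
    layer's input and output hidden states (rows = tokens). A higher score
    means the layer transforms its input more, so it matters more."""
    num = np.sum(x_in * x_out, axis=-1)
    den = np.linalg.norm(x_in, axis=-1) * np.linalg.norm(x_out, axis=-1) + eps
    return float(np.mean(1.0 - num / den))

x = np.array([[1.0, 0.0], [0.0, 2.0]])
bi_identity = block_influence(x, x)             # layer changes nothing
bi_orthogonal = block_influence(x, x[:, ::-1])  # layer rotates every token
```

Layers with near-zero BI behave almost like identity maps, which is why deleting them (ShortGPT's layer removal) costs little accuracy.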
2408.01099 | Recently, pre-trained models and efficient parameter tuning have achieved remarkable success in natural language processing and high-level computer vision with the aid of masked modeling and prompt tuning. In low-level computer vision, however, there have been limited investigations into pre-trained models, and efficient fine-tuning strategies have not yet been explored despite their importance and benefit in various real-world tasks, such as alleviating memory inflation when integrating new tasks on AI edge devices. Here, we propose a novel efficient parameter tuning approach dubbed contribution-based low-rank adaptation (CoLoRA) for multiple image restoration tasks, along with an effective pre-training method with random-order degradations (PROD). Unlike prior arts that tune all network parameters, our CoLoRA effectively fine-tunes a small number of parameters by leveraging LoRA (low-rank adaptation) for each new vision task, using our contribution-based method to adaptively determine the layer-by-layer capacity for that task, yielding performance comparable to full tuning. Furthermore, our PROD strategy allows extending the capability of pre-trained models with improved performance as well as robustness, bridging synthetic pre-training and real-world fine-tuning. Our CoLoRA with PROD has demonstrated superior performance in various image restoration tasks across diverse degradation types on both synthetic and real-world datasets, for known and novel tasks.
Project page: https://janeyeon.github.io/colora/. |
2406.16797 | Existing methods for adapting large language models (LLMs) to new tasks are not suited to multi-task adaptation because they modify all the model weights, causing destructive interference between tasks. The resulting effects, such as catastrophic forgetting of earlier tasks, make it challenging to obtain good performance on multiple tasks at the same time.
To mitigate this, we propose Lottery Ticket Adaptation (LoTA), a sparse adaptation method that identifies and optimizes only a sparse subnetwork of the model. We evaluate LoTA on a wide range of challenging tasks such as instruction following, reasoning, math, and summarization. LoTA obtains better performance than full fine-tuning and low-rank adaptation (LoRA), and maintains good performance even after training on other tasks, thus avoiding catastrophic forgetting. By extracting and fine-tuning over lottery tickets (or sparse task vectors), LoTA also enables model merging over highly dissimilar tasks. Our code is made publicly available. A preliminary version of this work has appeared previously. |
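Extracting a sparse task vector of the kind described above can be sketched by keeping only the largest-magnitude entries of the weight delta; this is an illustrative procedure, not necessarily LoTA's exact mask-selection rule.

```python
import numpy as np

def lottery_ticket_mask(base, finetuned, sparsity=0.9):
    """Sketch of a sparse task vector: keep only the top (1 - sparsity)
    fraction of weights by magnitude of the delta (finetuned - base), and
    apply just those entries to the base model."""
    delta = finetuned - base
    k = int(round((1 - sparsity) * delta.size))
    threshold = np.sort(np.abs(delta).ravel())[-k]
    mask = np.abs(delta) >= threshold
    return mask, base + mask * delta  # sparsely adapted weights

base = np.zeros(10)
finetuned = np.arange(10, dtype=float)  # toy deltas 0..9
mask, adapted = lottery_ticket_mask(base, finetuned, sparsity=0.8)
```

Because each task only touches its own sparse mask, two tasks with disjoint (or barely overlapping) masks can be merged with little destructive interference.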
2407.21417 | Modern language models (LMs) need to follow human instructions while being faithful; yet, they often fail to achieve both.
Here, we provide concrete evidence of a trade-off between instruction following (i.e., following open-ended instructions) and faithfulness (i.e., grounding responses in the given context) when training LMs with these objectives. For instance, fine-tuning LLaMA-7B on instruction-following datasets renders it less faithful. Conversely, instruction-tuned Vicuna-7B shows degraded performance at following instructions when further optimized on tasks that require contextual grounding. One common remedy is multi-task learning (MTL) with data mixing, yet it remains far from achieving a synergistic outcome. We propose a simple yet effective method that relies on Rejection Sampling for Continued Self-instruction Tuning (ReSet), which significantly outperforms vanilla MTL. Surprisingly, we find that less is more, as training ReSet with high-quality yet substantially smaller data (three-fold less) yields superior results. Our findings offer a better understanding of objective discrepancies in alignment training of LMs. |
2408.08274 | The Mixture of Experts (MoE) framework has become a popular architecture for large language models due to its superior performance over dense models.
However, training MoEs from scratch in a large-scale regime is prohibitively expensive. Existing methods mitigate this by pre-training multiple dense expert models independently and using them to initialize an MoE. This is done by using experts’ feed-forward network (FFN) to initialize the MoE’s experts while merging other parameters. However, this method limits the reuse of dense model parameters to only the FFN layers, thereby constraining the advantages when "upcycling" these models into MoEs.
We propose BAM (Branch-Attend-Mix) , a simple yet effective method that addresses this shortcoming.
BAM makes full use of the specialized dense models by not only using their FFNs to initialize the MoE layers but also leveraging the experts’ attention parameters fully, initializing them into a soft variant of Mixture of Attention (MoA) layers.
We explore two methods for upcycling attention parameters: 1) initializing separate attention experts from the dense models, including all attention parameters, for the best model performance; and 2) sharing key and value parameters across all experts to facilitate better inference efficiency.
To further improve efficiency, we apply a parallel attention transformer architecture to MoEs, which allows the attention experts and FFN experts to be computed concurrently.
Our experiments on seed models ranging from 590 million to 2 billion parameters demonstrate that BAM surpasses baselines in both perplexity and downstream task performance, within the same computational and data constraints. |
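The upcycling initialization can be sketched as follows. This is a toy sketch only; the dict layout, key names, and shapes are hypothetical stand-ins for real checkpoints, and the real method merges non-expert parameters as well:

```python
import numpy as np

def upcycle_bam(dense_experts, share_kv=False):
    """Toy sketch of BAM-style upcycling: each pretrained dense model
    seeds one MoE expert with BOTH its FFN weights and its attention
    weights, instead of FFN-only upcycling. With share_kv=True, the
    key/value projections come from the first model and are shared
    across experts (the inference-efficiency variant)."""
    moe = {"ffn": [], "attn": []}
    for d in dense_experts:
        moe["ffn"].append(d["ffn"].copy())
        attn = {"q": d["q"].copy(), "o": d["o"].copy()}
        src = dense_experts[0] if share_kv else d
        attn["k"], attn["v"] = src["k"].copy(), src["v"].copy()
        moe["attn"].append(attn)
    return moe

rng = np.random.default_rng(0)
dense = [{k: rng.standard_normal((4, 4)) for k in ("ffn", "q", "k", "v", "o")}
         for _ in range(2)]
moe = upcycle_bam(dense, share_kv=True)
```

With `share_kv=True`, every expert keeps its own query/output projections but reuses a single set of key/value projections, shrinking the KV footprint at inference.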
2408.07852 | While many capabilities of language models (LMs) improve with increased training budget, the influence of scale on hallucinations is not yet fully understood.
Hallucinations come in many forms, and there is no universally accepted definition. We thus focus on studying only those hallucinations
where a correct answer appears verbatim in the training set.
To fully control the training data content, we construct a knowledge graph (KG)-based dataset and use it to train a set of increasingly large LMs.
We find that for a fixed dataset, larger and longer-trained LMs hallucinate less.
However, hallucinating on only a small fraction of the training data requires an order of magnitude larger model, and thus an order of magnitude more compute, than prior work reported as optimal.
Given this costliness, we study how hallucination detectors depend on scale.
While we see that increasing detector size improves performance on a fixed LM’s outputs, we find an inverse relationship between the scale of the LM and the detectability of its hallucinations. |
2202.05262v5 | We analyze the storage and recall of factual associations in autoregressive transformer language models, finding evidence that these associations correspond to localized, directly-editable computations. We first develop a causal intervention for identifying neuron activations that are decisive in a model’s factual predictions. This reveals a distinct set of steps in middle-layer feed-forward modules that mediate factual predictions while processing subject tokens. To test our hypothesis that these computations correspond to factual association recall, we modify feed-forward weights to update specific factual associations using Rank-One Model Editing (ROME). We find that ROME is effective on a standard zero-shot relation extraction (zsRE) model-editing task. We also evaluate ROME on a new dataset of difficult counterfactual assertions, on which it simultaneously maintains both specificity and generalization, whereas other methods sacrifice one or the other. Our results confirm an important role for mid-layer feed-forward modules in storing factual associations and suggest that direct manipulation of computational mechanisms may be a feasible approach for model editing. The code, dataset, visualizations, and an interactive demo notebook are available at https://rome.baulab.info/. |
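The core of a rank-one weight edit can be illustrated in a few lines. This is a minimal sketch of inserting a key-to-value association, not ROME's actual closed-form update, which additionally whitens the key using a covariance estimate of the layer's inputs:

```python
import numpy as np

def rank_one_edit(W, k, v_target):
    """Minimal rank-one edit: adjust W so that W @ k == v_target,
    changing W only along the direction of the key k.
    (ROME's real update also accounts for the statistics of other keys.)"""
    residual = v_target - W @ k          # what the layer currently gets wrong
    return W + np.outer(residual, k) / (k @ k)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))          # a toy feed-forward weight
k = rng.standard_normal(4)               # key vector for the subject
v = rng.standard_normal(8)               # desired value (the new "fact")
W_new = rank_one_edit(W, k, v)
```

The update is a single outer product, so the edit is rank one by construction and leaves directions orthogonal to `k` untouched.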
2307.12950v3 | We propose Reinforcement Learning from Contrast Distillation (RLCD) , a method for aligning language models to follow natural language principles without using human feedback.
RLCD trains a preference model using simulated preference pairs that contain both a high-quality and a low-quality example, generated using contrasting positive and negative prompts. The preference model is then used to improve a base unaligned language model via reinforcement learning. Empirically, RLCD outperforms RLAIF and context distillation baselines across three diverse alignment tasks—harmlessness, helpfulness, and story outline generation—and on both 7B and 30B model scales for preference data simulation. |
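The pair-simulation step can be sketched as follows. The prompt wording here is an illustrative assumption, not the paper's exact templates, and `generate` is a stand-in for an LLM sampler:

```python
def make_preference_pair(instruction, generate):
    """Sketch of RLCD-style preference simulation: the same instruction
    is completed under contrasting positive and negative prompts, and
    the positive completion is labeled preferred *by construction*, so
    no human or AI preference judgment is needed."""
    chosen = generate(f"(give a helpful, harmless response) {instruction}")
    rejected = generate(f"(give an unhelpful, harmful response) {instruction}")
    return {"chosen": chosen, "rejected": rejected}

# stand-in for an LLM sampler: just echoes its prompt
pair = make_preference_pair("How do I fix a flat tire?", lambda p: p)
```

The resulting `(chosen, rejected)` pairs are what the preference model is trained on before the reinforcement-learning stage.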
2402.17834v1 | We introduce StableLM 2 1.6B, the first in a new generation of our language model series.
In this technical report, we present in detail the data and training procedure leading to the base and instruction-tuned versions of StableLM 2 1.6B.
The weights for both models are available via Hugging Face for anyone to download and use (https://huggingface.co./stabilityai/stablelm-2-1_6b and https://huggingface.co./stabilityai/stablelm-2-zephyr-1_6b).
The report contains thorough evaluations of these models, including zero- and few-shot benchmarks, multilingual benchmarks, and the MT-Bench benchmark focusing on multi-turn dialogues.
At the time of publishing this report, StableLM 2 1.6B was the state-of-the-art open model under 2B parameters by a significant margin.
Given its appealing small size, we also provide throughput measurements on a number of edge devices.
In addition, we open source several quantized checkpoints and provide their performance metrics compared to the original model. |
2307.07924v5 | Software engineering is a domain characterized by intricate decision-making processes, often relying on nuanced intuition and consultation. Recent advancements in deep learning have started to revolutionize software engineering practices through elaborate designs implemented at various stages of software development. In this paper, we present an innovative paradigm that leverages large language models (LLMs) throughout the entire software development process, streamlining and unifying key processes through natural language communication, thereby eliminating the need for specialized models at each phase.
At the core of this paradigm lies ChatDev, a virtual chat-powered software development company that mirrors the established waterfall model, meticulously dividing the development process into four distinct chronological stages: designing, coding, testing, and documenting. Each stage engages a team of "software agents", such as programmers, code reviewers, and test engineers, fostering collaborative dialogue and facilitating a seamless workflow. The chat chain acts as a facilitator, breaking down each stage into atomic subtasks. This enables dual roles, allowing for proposing and validating solutions through context-aware communication, leading to efficient resolution of specific subtasks.
The instrumental analysis of ChatDev highlights its remarkable efficacy in software generation, enabling the completion of the entire software development process in under seven minutes at a cost of less than one dollar. It not only identifies and alleviates potential vulnerabilities but also rectifies potential hallucinations while maintaining commendable efficiency and cost-effectiveness. The potential of ChatDev unveils fresh possibilities for integrating LLMs into the realm of software development. Our code is available at https://github.com/OpenBMB/ChatDev. |
2408.07089 | Recent advancements in Chain-of-Thoughts (CoT) and Program-of-Thoughts (PoT) methods have greatly enhanced language models’ mathematical reasoning capabilities, facilitating their integration into instruction tuning datasets with LLMs. However, existing methods for large-scale dataset creation require substantial seed data and high computational costs for data synthesis, posing significant challenges for scalability. We introduce InfinityMath, a scalable instruction tuning dataset for programmatic mathematical reasoning. The construction pipeline emphasizes decoupling numbers from mathematical problems to synthesize number-independent programs, enabling efficient and flexible scaling while minimizing dependency on specific numerical values. Fine-tuning experiments with open-source language and code models, such as Llama2 and CodeLlama, demonstrate the practical benefits of InfinityMath. These fine-tuned models showed significant relative improvements on both in-domain and out-of-domain benchmarks, ranging from 184.7% to 514.3% on average. Additionally, these models exhibited high robustness on the GSM8K+ and MATH+ benchmarks, which are enhanced versions of the test sets with simple number variations. InfinityMath ensures that models are more versatile and effective across a broader range of mathematical problems. The data is available at https://huggingface.co./datasets/flagopen/InfinityMATH. |
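The number-decoupling step can be illustrated with a toy function. The regex and template format are illustrative assumptions, not the paper's actual pipeline:

```python
import re

def decouple_numbers(problem):
    """Toy sketch of the number-decoupling idea: replace literal numbers
    in a word problem with named placeholders, so that one synthesized
    program can be reused for arbitrary numeric values."""
    values, template = [], problem
    for i, num in enumerate(re.findall(r"\d+\.?\d*", problem)):
        values.append(float(num))
        template = template.replace(num, f"{{var{i}}}", 1)
    return template, values

template, values = decouple_numbers("Tom has 3 apples and buys 12 more.")
```

A program synthesized against `template` takes `var0` and `var1` as inputs, so new training examples can be generated just by re-sampling the numbers.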
2404.08819v2 | State-space models (SSMs) have emerged as a potential alternative architecture for building large language models (LLMs) compared to the previously ubiquitous transformer architecture. One theoretical weakness of transformers is that they cannot express certain kinds of sequential computation and state tracking, which SSMs are explicitly designed to address via their close architectural similarity to recurrent neural networks (RNNs). But do SSMs truly have an advantage (over transformers) in expressive power for state tracking? Surprisingly, the answer is no. Our analysis reveals that the expressive power of SSMs is limited very similarly to transformers: SSMs cannot express computation outside the complexity class TC0. In particular, this means they cannot solve simple state-tracking problems like permutation composition. It follows that SSMs are provably unable to accurately track chess moves with certain notation, evaluate code, or track entities in a long narrative. To supplement our formal analysis, we report experiments showing that Mamba-style SSMs indeed struggle with state tracking. Thus, despite its recurrent formulation, the “state” in an SSM is an illusion: SSMs have similar expressiveness limitations to non-recurrent models like transformers, which may fundamentally limit their ability to solve real-world state-tracking problems. |
2404.18424v2 | The current use of large language models (LLMs) for zero-shot document ranking follows one of two ways: 1) prompt-based re-ranking methods, which require no further training but are feasible for only re-ranking a handful of candidate documents due to the associated computational costs; and 2) unsupervised contrastive trained dense retrieval methods, which can retrieve relevant documents from the entire corpus but require a large amount of paired text data for contrastive training.
In this paper, we propose PromptReps, which combines the advantages of both categories: no need for training and the ability to retrieve from the whole corpus. Our method only requires prompts to guide an LLM to generate query and document representations for effective document retrieval. Specifically, we prompt the LLMs to represent a given text using a single word, and then use the last token’s hidden states and the corresponding logits associated with the prediction of the next token to construct a hybrid document retrieval system. The retrieval system harnesses both the dense text embedding and the sparse bag-of-words representation given by the LLM. Our experimental evaluation on the BEIR zero-shot document retrieval datasets illustrates that this simple prompt-based LLM retrieval method can achieve similar or higher retrieval effectiveness than state-of-the-art LLM embedding methods that are trained with large amounts of unsupervised data, especially when using a larger LLM. (Code for fully reproducing the results is available at https://github.com/ielab/PromptReps.) |
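The hybrid dense-plus-sparse scoring idea can be sketched with stand-in vectors. The combination weight `alpha` and the top-k sparsification are illustrative assumptions; real inputs would come from an LLM's last-token hidden states and next-token logits:

```python
import numpy as np

def hybrid_score(q_dense, d_dense, q_logits, d_logits, top_k=3, alpha=0.5):
    """Toy version of hybrid retrieval scoring: a dense cosine score
    from hidden states combined with a sparse bag-of-words score built
    from each side's top-k next-token logits."""
    dense = q_dense @ d_dense / (np.linalg.norm(q_dense) * np.linalg.norm(d_dense))

    def sparsify(logits):
        s = np.zeros_like(logits)           # zero out all but the top-k entries
        idx = np.argsort(logits)[-top_k:]
        s[idx] = np.maximum(logits[idx], 0.0)
        return s

    sparse = sparsify(q_logits) @ sparsify(d_logits)
    return alpha * dense + (1 - alpha) * sparse

q_dense = d_dense = np.array([1.0, 0.0])
q_logits = d_logits = np.array([0.0, 0.0, 1.0, 2.0, 3.0])
score = hybrid_score(q_dense, d_dense, q_logits, d_logits)
```

The sparse side behaves like a vocabulary-space inverted index, while the dense side captures semantic similarity; summing them is the simplest possible fusion.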
2404.19553v1 | We extend the context length of Llama-3-8B-Instruct from 8K to 80K via QLoRA fine-tuning (the model is named Llama-3-8B-Instruct-80K-QLoRA after its max context length during fine-tuning, though users can apply it to even longer contexts via extrapolation). The entire training cycle is super efficient, taking 8 hours on one 8xA800 (80G) GPU machine. The resulting model exhibits superior performance across a broad range of evaluation tasks, such as NIHS, topic retrieval, and long-context language understanding; meanwhile, it also well preserves the original capability over short contexts. The dramatic context extension is mainly attributed to merely 3.5K synthetic training samples generated by GPT-4, which indicates the LLMs’ inherent (yet largely underestimated) potential to extend their original context length. In fact, the context length could be extended far beyond 80K with more computational resources. Therefore, the team will publicly release all resources (including data, model, data generation pipeline, and training code) to facilitate future research from the community: https://github.com/FlagOpen/FlagEmbedding. |
2402.04553v1 | We present a novel approach to accelerate stochastic gradient descent (SGD) by utilizing curvature information obtained from Hessian-vector products or finite differences of parameters and gradients, similar to the BFGS algorithm. Our approach involves two preconditioners: a matrix-free preconditioner and a low-rank approximation preconditioner. We update both preconditioners online using a criterion that is robust to stochastic gradient noise and does not require line search or damping. To preserve the corresponding symmetry or invariance, our preconditioners are constrained to certain connected Lie groups. The Lie group’s equivariance property simplifies the preconditioner fitting process, while its invariance property eliminates the need for damping, which is commonly required in second-order optimizers. As a result, the learning rate for parameter updating and the step size for preconditioner fitting are naturally normalized, and their default values work well in most scenarios. Our proposed approach offers a promising direction for improving the convergence of SGD with low computational overhead. We demonstrate that Preconditioned SGD (PSGD) outperforms SoTA on Vision, NLP, and RL tasks across multiple modern deep-learning architectures. We have provided code for reproducing the toy and large-scale experiments in this paper. |
2405.07987 | We argue that representations in AI models, particularly deep networks, are converging. First, we survey many examples of convergence in the literature: over time and across multiple domains, the ways by which different neural networks represent data are becoming more aligned.
Next, we demonstrate convergence across data modalities: as vision models and language models get larger, they measure distance between datapoints in a more and more alike way.
We hypothesize that this convergence is driving toward a shared statistical model of reality, akin to Plato’s concept of an ideal reality. We term such a representation the platonic representation and discuss several possible selective pressures toward it. Finally, we discuss the implications of these trends, their limitations, and counterexamples to our analysis. |
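One way to make "representations are becoming more aligned" concrete is a neighbor-overlap metric. This toy metric is in the spirit of the paper's measurements but is not their exact definition:

```python
import numpy as np

def mutual_knn_alignment(A, B, k=2):
    """Toy alignment metric: for each datapoint, compare its k nearest
    neighbors under representation A with those under representation B
    and report the mean overlap. Rows of A and B are embeddings of the
    same datapoints produced by two different models."""
    def knn(X):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)          # exclude each point itself
        return np.argsort(d, axis=1)[:, :k]
    na, nb = knn(A), knn(B)
    return float(np.mean([len(set(na[i]) & set(nb[i])) / k
                          for i in range(len(A))]))

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
aligned = mutual_knn_alignment(A, 2.0 * A)   # rescaling preserves neighbors
```

Because the metric depends only on neighbor structure, it is invariant to rescaling and other distance-preserving transforms, which is exactly why it can compare models with different embedding dimensions.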
2408.03943 | What do we want from machine intelligence? We envision machines that are not just tools for thought, but partners in thought: reasonable, insightful, knowledgeable, reliable, and trustworthy systems that think with us. Current artificial intelligence (AI) systems satisfy some of these criteria, some of the time. In this Perspective, we show how the science of collaborative cognition can be put to work to engineer systems that really can be called “thought partners,” systems built to meet our expectations and complement our limitations. We lay out several modes of collaborative thought in which humans and AI thought partners can engage and propose desiderata for human-compatible thought partnerships. Drawing on motifs from computational cognitive science, we motivate an alternative scaling path for the design of thought partners and ecosystems around their use through a Bayesian lens, whereby the partners we construct actively build and reason over models of the human and world. |
2406.15334 | The recent success of interleaved Large Multimodal Models (LMMs) in few-shot learning suggests that in-context learning (ICL) with many examples can be promising for learning new tasks. However, this many-shot multimodal ICL setting has one crucial problem: it is fundamentally limited by the model’s context length set at pretraining. The problem is especially prominent in the multimodal domain, which processes both text and images, requiring additional tokens. This motivates the need for a multimodal method to compress many shots into fewer tokens without finetuning. In this work, we enable LMMs to perform multimodal, many-shot in-context learning by leveraging Multimodal Task Vectors (MTV)—compact implicit representations of in-context examples compressed in the model’s attention heads. Specifically, we first demonstrate the existence of such MTV in LMMs and then leverage these extracted MTV to enable many-shot in-context learning for various vision-and-language tasks. Our experiments suggest that MTV can scale in performance with the number of compressed shots and generalize to similar out-of-domain tasks without additional context length for inference. |
2402.11782 | Retrieval-augmented language models are being increasingly tasked with subjective, contentious, and conflicting queries such as “is aspartame linked to cancer”.
To resolve these ambiguous queries, one must search through a large range of websites and consider: which, if any, of this evidence do I find convincing? In this work, we study how LLMs answer this question.
In particular, we construct ConflictingQA, a dataset that pairs controversial queries with a series of real-world evidence documents that contain different facts (e.g., quantitative results), argument styles (e.g., appeals to authority), and answers (Yes or No). We use this dataset to perform sensitivity and counterfactual analyses to explore which text features most affect LLM predictions. Overall, we find that current models rely heavily on the relevance of a website to the query, while largely ignoring stylistic features that humans find important, such as whether a text contains scientific references or is written with a neutral tone.
Taken together, these results highlight the importance of RAG corpus quality (e.g., the need to filter misinformation) , and possibly even a shift in how LLMs are trained to better align with human judgements. |
2402.14207 | We study how to apply large language models to write grounded and organized long-form articles from scratch, with comparable breadth and depth to Wikipedia pages. This underexplored problem poses new challenges at the pre-writing stage, including how to research the topic and prepare an outline prior to writing.
We propose STORM, a writing system for the Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking.
STORM models the pre-writing stage by (1) discovering diverse perspectives in researching the given topic,
(2) simulating conversations where writers carrying different perspectives pose questions to a topic expert grounded on trusted Internet sources, (3) curating the collected information to create an outline. For evaluation, we curate FreshWiki, a dataset of recent high-quality Wikipedia articles, and formulate outline assessments to evaluate the pre-writing stage.
We further gather feedback from experienced Wikipedia editors. Compared to articles generated by an outline-driven retrieval-augmented baseline, more of STORM’s articles are deemed to be organized (by a 25% absolute increase) and broad in coverage (by 10%) .
The expert feedback also helps identify new challenges for generating grounded long articles, such as source bias transfer and over-association of unrelated facts. |
2408.04682 | Recent advancements in large language models (LLMs) have sparked growing research interest in tool-assisted LLMs solving real-world challenges, which calls for comprehensive evaluation of tool-use capabilities. While previous works focused on either evaluating over stateless web services (RESTful APIs) based on a single-turn user prompt, or on an off-policy dialog trajectory, ToolSandbox includes stateful tool execution, implicit state dependencies between tools, a built-in user simulator supporting on-policy conversational evaluation, and a dynamic evaluation strategy for intermediate and final milestones over an arbitrary trajectory. (The ToolSandbox evaluation framework is released at https://github.com/apple/ToolSandbox.) We show that open-source and proprietary models have a significant performance gap, and that complex tasks like State Dependency, Canonicalization, and Insufficient Information defined in ToolSandbox challenge even the most capable SOTA LLMs, providing brand-new insights into tool-use LLM capabilities. |
2105.09938 | While programming is one of the most broadly applicable skills in modern society, it is unclear how well state-of-the-art machine learning models can write code. Despite its importance, there has been surprisingly little work on evaluating code generation, and it can be difficult to assess code generation performance in an accurate and rigorous manner. To meet this challenge, we introduce APPS, a benchmark for code generation. Unlike prior work in more restricted settings, our benchmark measures the ability of models to take an arbitrary natural language specification and generate satisfactory Python code. Similar to how companies assess candidate software developers, we evaluate models by checking their generated code on test cases. Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges. We fine-tune large language models on both GitHub and our training set, and we find that the prevalence of syntax errors is decreasing exponentially as models improve. Recent models such as GPT-Neo can pass a meaningful fraction of the test cases of introductory problems, so we find that machine learning models are now beginning to learn how to code. As the social significance of automatic code generation increases over the coming years, our benchmark can provide an objective measure for tracking advancements. |
2408.04093 | Self-attention is the core mathematical operation of modern transformer architectures and is also a significant computational bottleneck due to its quadratic complexity in the sequence length.
In this work, we derive the scalar energy function whose gradient computes the self-attention block, thus elucidating the theoretical underpinnings of self-attention, providing a Bayesian interpretation of the operation and linking it closely with energy-based models such as Hopfield Networks.
Our formulation reveals that the reduction across the sequence axis can be efficiently computed in parallel through a tree reduction.
Our algorithm for parallelizing attention computation across multiple GPUs enables cross-device decoding to be performed asymptotically faster than alternative approaches such as Ring Attention (with substantial speedups in our experiments), while also requiring significantly less communication volume and incurring less peak memory.
Our code is publicly available here:https://github.com/Zyphra/tree_attention. |
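The associative combine underlying such a tree reduction can be demonstrated on toy data. This is a numpy sketch of the standard streaming-softmax merge, not the paper's GPU implementation:

```python
import numpy as np

def partial(q, K, V):
    """Per-chunk attention statistics: running max, sum of exp, weighted values."""
    logits = K @ q
    m = logits.max()
    w = np.exp(logits - m)
    return m, w.sum(), w @ V

def combine(a, b):
    """Associative merge of two partial results (the op a tree reduction uses)."""
    (ma, sa, oa), (mb, sb, ob) = a, b
    m = max(ma, mb)
    ca, cb = np.exp(ma - m), np.exp(mb - m)
    return m, ca * sa + cb * sb, ca * oa + cb * ob

rng = np.random.default_rng(0)
q = rng.standard_normal(4)
K = rng.standard_normal((8, 4))
V = rng.standard_normal((8, 4))

# tree reduction over 4 chunks of the sequence (as if each lived on a device)
parts = [partial(q, K[i:i + 2], V[i:i + 2]) for i in range(0, 8, 2)]
left = combine(parts[0], parts[1])
right = combine(parts[2], parts[3])
m, s, o = combine(left, right)

# reference: full softmax attention over the whole sequence
full = np.exp(K @ q - (K @ q).max())
expected = (full / full.sum()) @ V
```

Because `combine` is associative, the chunks can be merged in any tree shape, which is what lets the cross-device reduction run in logarithmic depth instead of a sequential ring.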
2310.17813 | The push to train ever larger neural networks has motivated the study of initialization and training at large network width.
A key challenge is to scale training so that a network’s internal representations evolve nontrivially at all widths, a process known asfeature learning.
Here, we show that feature learning is achieved by scaling the spectral norm of weight matrices and their updates like sqrt(fan-out/fan-in), in contrast to widely used but heuristic scalings based on Frobenius norm and entry size.
Our spectral scaling analysis also leads to an elementary derivation of maximal update parametrization.
All in all, we aim to provide the reader with a solid conceptual understanding of feature learning in neural networks. |
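A sketch of initializing a weight matrix to a prescribed spectral norm. Treat the sqrt(fan-out/fan-in) default as our reading of the paper's prescription, i.e., an assumption of this sketch:

```python
import numpy as np

def spectral_init(n_out, n_in, target=None, rng=None):
    """Initialize a weight matrix with a prescribed spectral norm by
    rescaling a Gaussian matrix. The default target scales like
    sqrt(n_out / n_in) with the layer's width."""
    rng = rng or np.random.default_rng(0)
    if target is None:
        target = np.sqrt(n_out / n_in)
    W = rng.standard_normal((n_out, n_in))
    return W * (target / np.linalg.norm(W, ord=2))   # ord=2 = spectral norm

W = spectral_init(64, 16)
```

The same rescaling can be applied to a gradient update before it is added to the weights, which is how the spectral condition constrains training as well as initialization.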
2408.04614 | We propose a new method, instruction back-and-forth translation, to construct high-quality synthetic data grounded in world knowledge for aligning large language models (LLMs). Given documents from a web corpus, we generate and curate synthetic instructions using the backtranslation approach proposed in prior work, and rewrite the responses to improve their quality further based on the initial documents. Fine-tuning with the resulting (backtranslated instruction, rewritten response) pairs yields higher win rates on AlpacaEval than using other common instruction datasets such as Humpback, ShareGPT, Evol-Instruct, Open Orca, Alpaca-GPT4 and Self-instruct. We also demonstrate that rewriting the responses with an LLM outperforms direct distillation, and the two generated text distributions exhibit significant distinction in embedding space.
Further analysis shows that our backtranslated instructions are of higher quality than other sources of synthetic instructions, while our responses are more diverse and complex than those obtained from distillation. Overall we find that instruction back-and-forth translation combines the best of both worlds—making use of the information diversity and quantity found on the web, while ensuring the quality of the responses which is necessary for effective alignment. |
2407.07726v1 | PaliGemma is an open Vision-Language Model (VLM) that is based on the SigLIP-So400m vision encoder and the Gemma-2B language model.
It is trained to be a versatile and broadly knowledgeable base model that is effective to transfer. It achieves strong performance on a wide variety of open-world tasks.
We evaluate PaliGemma on almost 40 diverse tasks, including standard VLM benchmarks as well as more specialized tasks such as remote-sensing and segmentation. |
2407.21118 | KV-Cache compression methods generally sample a KV-Cache of effectual tokens or quantize it into lower bits.
However, these methods cannot exploit the redundancy of the hidden dimension of KV tensors.
This paper investigates a unique hidden-dimension approach called Palu, a novel KV-Cache compression framework that utilizes low-rank projection. Palu decomposes the linear layers into low-rank matrices, caches the smaller intermediate states, and reconstructs the full keys and values on the fly. To improve accuracy, compression rate, and efficiency, Palu further encompasses (1) a medium-grained low-rank decomposition scheme, (2) an efficient rank search algorithm, (3) a low-rank-aware quantization algorithm, and (4) matrix fusion with optimized GPU kernels.
Our extensive experiments with popular LLMs show that Palu can compress the KV-Cache by more than 91.25% while maintaining significantly better accuracy (up to 1.19 lower perplexity) than state-of-the-art KV-Cache quantization methods at similar or even higher memory usage. When compressing the KV-Cache by 50%, Palu delivers up to 1.61x end-to-end speedup for the attention module. Our code is publicly available at: https://github.com/shadowpa0327/Palu. |
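The low-rank caching idea can be illustrated with a truncated SVD on a toy projection matrix. This is a sketch of the principle only; the actual scheme is medium-grained, rank-searched, and quantization-aware:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_head, r = 16, 8, 4

# a toy key-projection matrix (stand-in for a trained weight)
W_k = rng.standard_normal((d_model, d_head))

# offline: truncated SVD gives W_k ≈ A @ B with small inner rank r
U, S, Vt = np.linalg.svd(W_k, full_matrices=False)
A = U[:, :r] * S[:r]        # (d_model, r)
B = Vt[:r]                  # (r, d_head)

h = rng.standard_normal(d_model)
z = h @ A                   # cache this r-dim state instead of the full key
k_approx = z @ B            # reconstruct the key on the fly when needed
```

Here the cached state `z` has half the dimensions of the key it replaces (r = 4 vs d_head = 8); the approximation error is governed by the discarded singular values, which motivates the rank-search step.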
2406.04604 | When using language models (LMs) to solve complex problems, humans might struggle to understand the LM-generated solutions and repair the flawed ones.
To assist humans in repairing them, we propose to automatically decompose complex solutions into multiple simpler pieces that correspond to specific subtasks. We introduce a novel objective for learning task decomposition, termedassistive value(AssistV) , which measures the feasibility and speed for humans to repair the decomposed solution. We collect a dataset of human repair experiences on different decomposed solutions. Utilizing the collected data as in-context examples, we then learn to critique, refine, and rank decomposed solutions to improveAssistV. We validate our method under competitive programming problems: under 177 hours of human study, our method enables non-experts to solve 33.3% more problems, speeds them up by 3.3x, and empowers them to match unassisted experts. |
2405.16528 | Training of large neural networks requires significant computational resources. Despite advances using low-rank adapters and quantization, pretraining of models such as LLMs on consumer hardware has not been possible without model sharding, offloading during training, or per-layer gradient updates. To address these limitations, we propose LoQT, a method for efficiently training quantized models. LoQT uses gradient-based tensor factorization to initialize low-rank trainable weight matrices that are periodically merged into quantized full-rank weight matrices. Our approach is suitable for both pretraining and fine-tuning models, which we demonstrate experimentally for language modeling and downstream task adaptation. We find that LoQT enables efficient training of models up to 7B parameters on a consumer-grade 24GB GPU. We also demonstrate the feasibility of training a 13B parameter model using per-layer gradient updates on the same hardware. https://github.com/sebulo/LoQT |
2408.02442 | Structured generation, the process of producing content in standardized formats like JSON and XML, is widely utilized in real-world applications to extract key output information from large language models (LLMs) .
This study investigates whether such constraints on generation space impact LLMs’ abilities, including reasoning and domain knowledge comprehension.
Specifically, we evaluate LLMs’ performance when restricted to adhere to structured formats versus generating free-form responses across various common tasks.
Surprisingly, we observe a significant decline in LLMs’ reasoning abilities under format restrictions.
Furthermore, we find that stricter format constraints generally lead to greater performance degradation in reasoning tasks. |
2408.03314 | Enabling LLMs to improve their outputs by using more test-time computation is a critical step towards building generally self-improving agents that can operate on open-ended natural language. In this paper, we study the scaling of inference-time computation in LLMs, with a focus on answering the question: if an LLM is allowed to use a fixed but non-trivial amount of inference-time compute, how much can it improve its performance on a challenging prompt? Answering this question has implications not only for the achievable performance of LLMs, but also for the future of LLM pretraining and how one should trade off inference-time and pretraining compute. Despite its importance, little research has attempted to understand the scaling behaviors of various test-time inference methods. Moreover, current work largely provides negative results for a number of these strategies. In this work, we analyze two primary mechanisms to scale test-time computation: (1) searching against dense, process-based verifier reward models; and (2) updating the model’s distribution over a response adaptively, given the prompt at test time. We find that in both cases, the effectiveness of different approaches to scaling test-time compute critically varies depending on the difficulty of the prompt. This observation motivates applying a “compute-optimal” scaling strategy, which acts to most effectively allocate test-time compute adaptively per prompt. Using this compute-optimal strategy, we can improve the efficiency of test-time compute scaling by more than 4x compared to a best-of-N baseline. Additionally, in a FLOPs-matched evaluation, we find that on problems where a smaller base model attains somewhat non-trivial success rates, test-time compute can be used to outperform a 14x larger model. |
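The best-of-N baseline against which such gains are measured can be sketched in a few lines; `verifier_score` is a stand-in for a trained reward model, and the toy scorer below is purely illustrative:

```python
import numpy as np

def best_of_n(candidates, verifier_score):
    """Best-of-N selection: sample N candidate answers and return the one
    a verifier (process- or outcome-based reward model) scores highest."""
    scores = [verifier_score(c) for c in candidates]
    return candidates[int(np.argmax(scores))]

# toy verifier: prefers answers that contain the expected final result
score = lambda s: 1.0 if "42" in s else 0.0
best = best_of_n(["answer: 41", "answer: 42", "dunno"], score)
```

Compute-optimal scaling generalizes this by choosing, per prompt, how much budget to spend on sampling versus search versus iterative revision rather than fixing N globally.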
2407.09941 | A wide array of sequence models are built on a framework modeled after Transformers, comprising alternating sequence mixer and channel mixer layers. This paper studies a unifying *matrix mixer* view of sequence mixers that can be conceptualized as a linear map on the input sequence. This framework encompasses a broad range of well-known sequence models, including the self-attention of Transformers as well as recent strong alternatives such as structured state space models (SSMs), and allows understanding downstream characteristics such as efficiency and expressivity through properties of their structured matrix class. We identify a key axis of matrix parameterizations termed *sequence alignment*, which increases the flexibility and performance of matrix mixers, providing insights into the strong performance of Transformers and recent SSMs such as Mamba. Furthermore, the matrix mixer framework offers a systematic approach to developing sequence mixers with desired properties, allowing us to develop several new sub-quadratic sequence models. In particular, we propose a natural bidirectional extension of the Mamba model (**Hydra**), parameterized as a *quasiseparable matrix mixer*, which demonstrates superior performance over other sequence models including Transformers on non-causal tasks. As a drop-in replacement for attention layers, Hydra outperforms BERT by 0.8 points on the GLUE benchmark and ViT by 2% Top-1 accuracy on ImageNet. |
2004.11714 | Text generation is ubiquitous in many NLP tasks, from summarization, to dialogue and machine translation.
The dominant parametric approach is based on locally normalized models which predict one word at a time. While these
work remarkably well, they are plagued by exposure bias due to the greedy nature of the generation process.
In this work, we investigate un-normalized energy-based models (EBMs) which operate not at the token but at the sequence
level. In order to make training tractable, we first work in the residual of a pretrained locally normalized
language model and second we train using noise contrastive estimation. Furthermore, since the EBM works at the
sequence level, we can leverage pretrained bi-directional contextual representations, such as BERT and RoBERTa.
Our experiments on two large language modeling datasets show that residual EBMs yield lower perplexity
compared to locally normalized baselines. Moreover, generation via importance sampling is very efficient and of higher quality
than the baseline models according to human evaluation. |
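Since the EBM is trained in the residual of a locally normalized base model, p(x) ∝ p_base(x)·exp(−E(x)), generation by importance sampling reduces to drawing candidates from the base LM and resampling them with weights exp(−E(x)). A minimal sketch with a toy energy function (the real E is a learned bi-directional scorer):

```python
import numpy as np

def residual_ebm_sample(proposals, energy, rng):
    """Self-normalized importance sampling for a residual EBM: candidates are
    assumed drawn from the base LM, so each carries weight exp(-E(x))."""
    weights = np.exp(-np.array([energy(x) for x in proposals]))
    weights /= weights.sum()
    return proposals[rng.choice(len(proposals), p=weights)]

# Toy energy preferring sequences that start with "the" (stand-in for a
# BERT-style sequence scorer).
pseudo_energy = lambda s: 0.0 if s.startswith("the") else 5.0
rng = np.random.default_rng(0)
pick = residual_ebm_sample(["the cat sat", "cat the sat", "sat cat the"],
                           pseudo_energy, rng)
```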
2407.21075 | We present foundation language models developed to power Apple Intelligence features, including a 3 billion parameter model designed to run efficiently on devices and a large server-based language model designed for Private Cloud Compute.
These models are designed to perform a wide range of tasks efficiently, accurately, and responsibly.
This report describes the model architecture, the data used to train the model, the training process, how the models are optimized for inference, and the evaluation results. We highlight our focus on Responsible AI and how the principles are applied throughout the model development. |
2408.02666 | Model-based evaluation is at the heart of successful model
development –
as a reward model for training, and as a replacement for human evaluation.
To train such evaluators, the standard approach is to collect a large amount of human preference judgments over model responses, which is costly and the data becomes stale as models improve.
In this work, we present an approach that aims to improve evaluators without human annotations, using synthetic training data only. Starting from unlabeled instructions, our iterative
self-improvement scheme generates contrasting model outputs and
trains an LLM-as-a-Judge to produce reasoning traces and final judgments, repeating this training at each new iteration using the improved predictions. Without any labeled preference data, our Self-Taught Evaluator can improve
a strong LLM (Llama3-70B-Instruct) from 75.4 to 88.3 (88.7 with majority vote) on RewardBench.
This outperforms commonly used LLM judges such as GPT-4 and matches the performance of the top-performing reward models trained with labeled examples. |
2407.20224 | Knowledge editing techniques have been increasingly adopted to efficiently correct the false or outdated knowledge in Large Language Models (LLMs), due to the high cost of retraining from scratch. Meanwhile, one critical but under-explored question is: can knowledge editing be used to inject harm into LLMs? In this paper, we propose to reformulate knowledge editing as a new type of safety threat for LLMs, namely Editing Attack, and conduct a systematic investigation with a newly constructed dataset EditAttack. Specifically, we focus on two typical safety risks of Editing Attack including Misinformation Injection and Bias Injection. For the risk of misinformation injection, we first categorize it into commonsense misinformation injection and long-tail misinformation injection. Then, we
find that editing attacks can inject both types of misinformation into LLMs, and the effectiveness is particularly high for commonsense misinformation injection. For the risk of bias injection, we discover that not only can biased sentences be injected into LLMs with high effectiveness, but also one single biased sentence injection can cause a bias increase in general outputs of LLMs, which are even highly irrelevant to the injected sentence, indicating a catastrophic impact on the overall fairness of LLMs. Then, we further illustrate the high stealthiness of editing attacks, measured by their impact on the
general knowledge and reasoning capacities of LLMs, and
show the hardness of defending editing attacks with empirical evidence. Our discoveries demonstrate the emerging misuse risks of knowledge editing techniques on compromising the safety alignment of LLMs. Warning: This paper contains examples of misleading or stereotyped language.
2407.00121 | Large language models (LLMs) have recently shown tremendous promise in serving as the backbone to agentic systems, as demonstrated by their performance in multi-faceted, challenging benchmarks like SWE-Bench and Agent-Bench.
However, to realize the true potential of LLMs as autonomous agents, they must learn to identify, call, and interact with external tools and application program interfaces (APIs) to complete complex tasks. These tasks together are termed function calling.
Endowing LLMs with function calling abilities leads to a myriad of advantages, such as access to current and domain-specific information in databases and knowledge sources, and the ability to outsource tasks that can be reliably performed by tools, e.g., a Python interpreter or calculator.
While there has been significant progress in function calling with LLMs, there is still a dearth of open models that perform on par with proprietary LLMs like GPT, Claude, and Gemini.
Therefore, in this work, we introduce Granite-20B-FunctionCalling (the model will be available soon at https://huggingface.co./ibm-granite/model under an Apache 2.0 license). The model is trained using a multi-task training approach on seven fundamental tasks encompassed in function calling, those being Nested Function Calling, Function Chaining, Parallel Functions, Function Name Detection, Parameter-Value Pair Detection, Next-Best Function, and Response Generation. We present a comprehensive evaluation on multiple out-of-domain datasets comparing Granite-20B-FunctionCalling to more than 15 other best proprietary and open models. Granite-20B-FunctionCalling provides the best performance among all open models on the Berkeley Function Calling Leaderboard and fourth overall.
As a result of the diverse tasks and datasets used for training our model, we show that Granite-20B-FunctionCalling has better generalizability on multiple tasks in seven different evaluation datasets.
2407.20243 | Embeddings from Large Language Models (LLMs) have emerged as critical components in various applications, particularly for information retrieval.
While high-dimensional embeddings generally demonstrate superior performance as they contain more salient information, their practical application is frequently hindered by elevated computational latency and the associated higher cost.
To address these challenges, we propose Matryoshka-Adaptor, a novel tuning framework designed for the customization of LLM embeddings.
Matryoshka-Adaptor facilitates substantial dimensionality reduction while maintaining comparable performance levels, thereby achieving a significant enhancement in computational efficiency and cost-effectiveness.
Our framework directly modifies the embeddings from pre-trained LLMs and is designed to be seamlessly integrated with any LLM architecture, including those accessible exclusively through black-box APIs.
Also, it exhibits efficacy in both unsupervised and supervised learning settings.
A rigorous evaluation conducted across a diverse corpus of English, multilingual, and multimodal datasets consistently reveals substantial gains with Matryoshka-Adaptor.
Notably, with Google and OpenAI Embedding APIs, Matryoshka-Adaptor achieves a reduction in dimensionality ranging from two- to twelve-fold without compromising performance across multiple BEIR datasets. |
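Once an adaptor has been tuned, the deployment-time step is simply serving a prefix of each embedding. A sketch of that truncation step follows; the dimensions and helper name are illustrative, not from the paper:

```python
import numpy as np

def truncate_embedding(emb, dim):
    """Keep the first `dim` coordinates and re-normalize. Matryoshka-style
    training makes the leading dimensions carry the most salient information,
    so similarity search on the prefix stays close to the full embedding."""
    prefix = np.asarray(emb, dtype=float)[:dim]
    norm = np.linalg.norm(prefix)
    return prefix / norm if norm > 0 else prefix

rng = np.random.default_rng(0)
full = rng.normal(size=768)            # stand-in for a 768-d API embedding
small = truncate_embedding(full, 64)   # 12x fewer dimensions to store and compare
```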
2310.10505 | Alignment is crucial for training large language models. The predominant strategy is Reinforcement Learning from Human Feedback (RLHF), with Proximal Policy Optimization (PPO) as the de-facto algorithm. Yet, PPO is known to struggle with computational inefficiency, a challenge that this paper aims to address. We identify three important properties of RLHF tasks: fast simulation, deterministic transitions, and trajectory-level rewards, which are not leveraged in PPO. Based on these properties, we develop ReMax, a new algorithm tailored for RLHF. The design of ReMax builds on the celebrated algorithm REINFORCE but is enhanced with a new variance-reduction technique. ReMax offers threefold advantages over PPO: first, it is simple to implement with just 6 lines of code. It further eliminates more than 4 hyper-parameters in PPO, which are laborious to tune. Second, ReMax reduces memory usage by about 50%. To illustrate, PPO runs out of memory when fine-tuning a Llama2-7B model on A100-80GB GPUs, whereas ReMax can support the training. Even though memory-efficient techniques (e.g., ZeRO and offload) are employed for PPO to afford training, ReMax can utilize a larger batch size to increase throughput. Third, in terms of wall-clock time, PPO is about twice as slow as ReMax per iteration. Importantly, these improvements do not sacrifice task performance. We hypothesize that these advantages can be maintained in larger-scale models. Our implementation of ReMax is available at https://github.com/liziniu/ReMax
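The core estimator can be illustrated on a one-step categorical policy: REINFORCE with the reward of the greedy response as the baseline. This is a toy analogue of the trajectory-level algorithm, with illustrative names:

```python
import numpy as np

def remax_grad(logits, reward_fn, rng):
    """REINFORCE gradient w.r.t. the logits, using the ReMax-style baseline:
    the reward of the greedy (argmax) action, which needs no value network."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    action = rng.choice(len(probs), p=probs)      # sampled response
    baseline = reward_fn(int(np.argmax(logits)))  # greedy-response reward
    advantage = reward_fn(action) - baseline
    grad_log_p = -probs                           # d log p(action) / d logits
    grad_log_p[action] += 1.0
    return advantage * grad_log_p

g = remax_grad(np.array([0.0, 1.0, 2.0]), lambda a: float(a == 2),
               np.random.default_rng(0))
```

When the reward is flat across responses, the advantage vanishes and so does the gradient, which is the variance reduction that distinguishes this from vanilla REINFORCE.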
2406.11939v1 | The rapid evolution of language models has necessitated the development of more challenging benchmarks. Current static benchmarks often struggle to consistently distinguish between the capabilities of different models and fail to align with real-world user preferences. On the other hand, live crowd-sourced platforms like the Chatbot Arena collect a wide range of natural prompts and user feedback. However, these prompts vary in sophistication and the feedback cannot be applied offline to new models. In order to ensure that benchmarks keep up with the pace of LLM development, we address how one can evaluate benchmarks on their ability to confidently separate models and their alignment with human preference. Under these principles, we developed BenchBuilder, a living benchmark that filters high-quality prompts from live data sources to enable offline evaluation on fresh, challenging prompts. BenchBuilder identifies seven indicators of a high-quality prompt, such as the requirement for domain knowledge, and utilizes an LLM annotator to select a high-quality subset of prompts from various topic clusters. The LLM evaluation process employs an LLM judge to ensure a fully automated, high-quality, and constantly updating benchmark. We apply BenchBuilder on prompts from the Chatbot Arena to create Arena-Hard-Auto v0.1: 500 challenging user prompts from a wide range of tasks. Arena-Hard-Auto v0.1 offers 3x tighter confidence intervals than MT-Bench and achieves a state-of-the-art 89.1% agreement with human preference rankings, all at a cost of only $25 and without human labelers. The BenchBuilder pipeline enhances evaluation benchmarks and provides a valuable tool for developers, enabling them to extract high-quality benchmarks from extensive data with minimal effort. |
2407.21787 | Scaling the amount of compute used to train language models has dramatically improved their capabilities. However, when it comes to inference, we often limit the amount of compute to only one attempt per problem. Here, we explore inference compute as another axis for scaling by increasing the number of generated samples. Across multiple tasks and models, we observe that coverage – the fraction of problems solved by any attempt – scales with the number of samples over four orders of magnitude. In domains like coding and formal proofs, where all answers can be automatically verified, these increases in coverage directly translate into improved performance.
When we apply repeated sampling to SWE-bench Lite, the fraction of issues solved with DeepSeek-V2-Coder-Instruct increases from 15.9% with one sample to 56% with 250 samples, outperforming the single-attempt state-of-the-art of 43% which uses more capable frontier models.
Moreover, using current API pricing, amplifying the cheaper DeepSeek model with five samples is more cost-effective and solves more issues than paying a premium for one sample from GPT-4o or Claude 3.5 Sonnet. Interestingly, the relationship between coverage and the number of samples is often log-linear and can be modelled with an exponentiated power law, suggesting the existence of inference-time scaling laws.
Finally, we find that identifying correct samples out of many generations remains an important direction for future research in domains without automatic verifiers.
When solving math word problems from GSM8K and MATH, coverage with Llama-3 models grows to over 95% with 10,000 samples.
However, common methods to pick correct solutions from a sample collection, such as majority voting or reward models, plateau beyond several hundred samples and fail to fully scale with the sample budget. |
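Coverage as a function of the sample budget is typically estimated with the unbiased pass@k formula from earlier code-generation work; a sketch:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased estimate of coverage at k samples, given that c of n drawn
    samples were verified correct: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples, so any k-draw must succeed
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 250 samples generated per problem, 40 verified correct:
cov_1 = pass_at_k(250, 40, 1)    # single-attempt success rate, 0.16
cov_10 = pass_at_k(250, 40, 10)  # coverage grows with the sample budget
```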
2407.21770 | We introduce MoMa, a novel modality-aware mixture-of-experts (MoE) architecture designed for pre-training mixed-modal, early-fusion language models. MoMa processes images and text in arbitrary sequences by dividing expert modules into modality-specific groups. These groups exclusively process designated tokens while employing learned routing within each group to maintain semantically informed adaptivity. Our empirical results reveal substantial pre-training efficiency gains through this modality-specific parameter allocation. Under a 1-trillion-token training budget, the MoMa 1.4B model, featuring 4 text experts and 4 image experts, achieves impressive FLOPs savings: 3.7× overall, with 2.6× for text and 5.2× for image processing compared to a compute-equivalent dense baseline, measured by pre-training loss. This outperforms the standard expert-choice MoE with 8 mixed-modal experts, which achieves 3× overall FLOPs savings (3× for text, 2.8× for image). Combining MoMa with mixture-of-depths (MoD) further improves pretraining FLOPs savings to 4.2× overall (text: 3.4×, image: 5.3×),
although this combination hurts performance in causal inference due to increased sensitivity to router accuracy. These results demonstrate MoMa’s potential to significantly advance the efficiency of mixed-modal, early-fusion language model pre-training, paving the way for more resource-efficient and capable multimodal AI systems. |
2010.13369 | Recently, Transformer-based language models have demonstrated remarkable performance across many NLP domains.
However, the unsupervised pre-training step of these models suffers from unbearable overall computational expenses.
Current methods for accelerating the pre-training either rely on massive parallelism with advanced hardware or are not applicable to language modeling.
In this work, we propose a method based on progressive layer dropping that speeds the training of Transformer-based language models, not at the cost of excessive hardware resources but through model-architecture changes and training-technique efficiency gains.
Extensive experiments on BERT show that the proposed method achieves
a 24% time reduction on average per sample and allows the pre-training to be 2.5× faster than the baseline to get a similar accuracy on downstream tasks. While being faster, our pre-trained models are equipped with strong knowledge transferability, achieving comparable and sometimes higher GLUE score than the baseline when pre-trained with the same number of samples.
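The schedule can be sketched as a per-layer keep-probability that starts at 1 and decays over training, with deeper layers dropped more aggressively; the constants and exact functional form below are illustrative, not the paper's:

```python
import math

def keep_probability(step, layer, num_layers, theta_bar=0.5, gamma=100.0,
                     total_steps=10000):
    """Progressive layer dropping: early in training every layer is kept
    (for stability), then the global keep-rate decays toward theta_bar and
    is scaled with depth so deeper layers are dropped more often."""
    theta_t = (1.0 - theta_bar) * math.exp(-gamma * step / total_steps) + theta_bar
    return 1.0 - (layer / num_layers) * (1.0 - theta_t)

p_early = keep_probability(step=0, layer=11, num_layers=12)     # = 1.0
p_late = keep_probability(step=10000, layer=11, num_layers=12)  # ~ 0.54
```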
2304.07193 | The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision.
These models could greatly simplify the use of images in any system by producing general-purpose visual features, i.e., features that work across image distributions and tasks without finetuning.
This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources.
We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size.
Most of the technical contributions aim at accelerating and stabilizing the training at scale.
In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature.
In terms of models, we train a ViT model with 1B parameters and distill it into a series of smaller models that surpass the best available general-purpose features, OpenCLIP, on most of the benchmarks at image and pixel levels.
2407.21018 | Large Language Models (LLMs) have revolutionized the field of natural language processing, achieving unprecedented performance across a variety of applications by leveraging increased model sizes and sequence lengths.
However, the associated rise in computational and memory costs poses significant challenges, particularly in managing long sequences due to the quadratic complexity of the transformer attention mechanism.
This paper focuses on the long-context scenario, addressing the inefficiencies in KV cache memory consumption during inference.
Unlike existing approaches that optimize the memory based on the sequence lengths, we uncover that the channel dimension of the KV cache exhibits significant redundancy, characterized by unbalanced magnitude distribution and low-rank structure in attention weights.
Based on these observations, we propose ThinK, a novel query-dependent KV cache pruning method designed to minimize attention weight loss while selectively pruning the least significant channels. Our approach not only maintains or enhances model accuracy but also achieves a reduction in memory costs by over 20% compared with vanilla KV cache eviction methods. Extensive evaluations on the LLaMA3 and Mistral models across various long-sequence datasets confirm the efficacy of ThinK, setting a new precedent for efficient LLM deployment without compromising performance. We also outline the potential of extending our method to value cache pruning, demonstrating ThinK’s versatility and broad applicability in reducing both memory and computational overheads. |
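The channel-pruning idea can be sketched on a toy key cache. Note this simplified version scores channels by raw magnitude only; ThinK itself uses a query-dependent criterion to minimize attention-weight loss:

```python
import numpy as np

def prune_key_channels(keys, keep_ratio=0.75):
    """Drop the lowest-magnitude channels of the cached keys.
    keys: (seq_len, head_dim). Returns the pruned cache and the kept indices,
    which must also be applied to incoming queries at attention time."""
    scores = np.abs(keys).sum(axis=0)           # per-channel magnitude
    k = max(1, int(keys.shape[1] * keep_ratio))
    kept = np.sort(np.argsort(scores)[-k:])     # indices of the strongest channels
    return keys[:, kept], kept

rng = np.random.default_rng(0)
cache = rng.normal(size=(128, 64)) * np.linspace(0.1, 2.0, 64)  # unbalanced channels
pruned, kept_idx = prune_key_channels(cache)                    # 25% memory saved
```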
2310.11451 | Large Language Models (LLMs) inherently encode a wealth of knowledge within their parameters through pre-training on extensive corpora. While prior research has delved into operations on these parameters to manipulate the underlying implicit knowledge—encompassing detection, editing, and merging—there remains an ambiguous understanding regarding their transferability across models with varying scales. In this paper, we seek to empirically investigate knowledge transfer from larger to smaller models through a parametric perspective. To achieve this, we employ sensitivity-based techniques to extract and align knowledge-specific parameters between different LLMs. Moreover, the LoRA module is used as the intermediary mechanism for injecting the extracted knowledge into smaller models. Evaluations across four benchmarks validate the efficacy of our proposed method. Our findings highlight the critical factors contributing to the process of parametric knowledge transfer, underscoring the transferability of model parameters across LLMs of different scales. We release code and data at https://github.com/maszhongming/ParaKnowTransfer.
2402.12219 | The quality of finetuning data is crucial for aligning large language models (LLMs) with human values.
Current methods to improve data quality are either labor-intensive or prone to factual errors caused by LLM hallucinations.
This paper explores elevating the quality of existing instruction data to better align with human values, introducing a simple and effective approach named ReAlign, which reformats the responses of instruction data into a format that better aligns with pre-established criteria and the collated evidence.
This approach minimizes human annotation, hallucination, and the difficulty in scaling, remaining orthogonal to existing alignment techniques.
Experimentally, ReAlign significantly boosts the general alignment ability, math reasoning, factuality, and readability of the LLMs. Encouragingly, without introducing any additional data or advanced training techniques, and merely by reformatting the response, LLaMA-2-13B’s mathematical reasoning ability on GSM8K can be improved from 46.77% to 56.63% in accuracy.
Additionally, a mere 5% of ReAlign data yields a 67% boost in general alignment ability measured by the Alpaca dataset.
This work highlights the need for further research into the science and mechanistic interpretability of LLMs. We have made the associated code and data publicly accessible to support future studies at https://github.com/GAIR-NLP/ReAlign.
2207.02598 | Machine learning (ML) models are typically optimized for their accuracy on a given dataset.
However, this predictive criterion rarely captures all desirable properties of a model, in particular how well it matches a domain expert’s understanding of a task.
Underspecification refers to the existence of multiple models that are indistinguishable in their in-domain accuracy, even though they differ in other desirable properties such as out-of-distribution (OOD) performance.
Identifying these situations is critical for assessing the reliability of ML models. We formalize the concept of underspecification and propose a method to identify and partially address it.
We train multiple models with an independence constraint that forces them to implement different functions.
They discover predictive features that are otherwise ignored by standard empirical risk minimization (ERM) , which we then distill into a global model with superior OOD performance.
Importantly, we constrain the models to align with the data manifold to ensure that they discover meaningful features.
We demonstrate the method on multiple datasets in computer vision (collages, WILDS-Camelyon17, GQA) and discuss general implications of underspecification.
Most notably, in-domain performance cannot serve for OOD model selection without additional assumptions. |
2011.03395 | ML models often exhibit unexpectedly poor behavior when they are deployed in real-world domains. We identify underspecification as a key reason for these failures. An ML pipeline is underspecified when it can return many predictors with equivalently strong held-out performance in the training domain. Underspecification is common in modern ML pipelines, such as those based on deep learning. Predictors returned by underspecified pipelines are often treated as equivalent based on their training domain performance, but we show here that such predictors can behave very differently in deployment domains. This ambiguity can lead to instability and poor model behavior in practice, and is a distinct failure mode from previously identified issues arising from structural mismatch between training and deployment domains. We show that this problem appears in a wide variety of practical ML pipelines, using examples from computer vision, medical imaging, natural language processing, clinical risk prediction based on electronic health records, and medical genomics. Our results show the need to explicitly account for underspecification in modeling pipelines that are intended for real-world deployment in any domain. |
2405.14838 | When leveraging language models for reasoning tasks, generating explicit chain-of-thought (CoT) steps often proves essential for achieving high accuracy in final outputs. In this paper, we investigate if models can be taught to internalize these CoT steps. To this end, we propose a simple yet effective method for internalizing CoT steps: starting with a model trained for explicit CoT reasoning, we gradually remove the intermediate steps and finetune the model. This process allows the model to internalize the intermediate reasoning steps, thus simplifying the reasoning process while maintaining high performance. Our approach enables a GPT-2 Small model to solve 9-by-9 multiplication with up to 99% accuracy, whereas standard training cannot solve beyond 4-by-4 multiplication. Furthermore, our method proves effective on larger language models, such as Mistral 7B, achieving over 50% accuracy on GSM8K without producing any intermediate steps. |
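The curriculum can be sketched as a target-construction function: each stage deletes a few more leading intermediate steps from the supervision, until only the answer remains. Names and the removal rate are illustrative:

```python
def internalize_target(question, steps, answer, stage, remove_per_stage=2):
    """Build the finetuning target for a curriculum stage: stage 0 keeps the
    full chain of thought; later stages drop leading steps, forcing the model
    to internalize them, until only the final answer is supervised."""
    n_removed = min(len(steps), stage * remove_per_stage)
    return " ".join([question] + steps[n_removed:] + [answer])

steps = ["9*9=81", "carry 8", "9*90=810"]
full = internalize_target("99*9=?", steps, "891", stage=0)   # explicit CoT
final = internalize_target("99*9=?", steps, "891", stage=2)  # no CoT left
```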
1901.09335 | Large-batch SGD is important for scaling training of deep neural networks.
However, without fine-tuning hyperparameter schedules, the generalization of the model may be hampered.
We propose to use batch augmentation: replicating instances of samples within the same batch with different data augmentations. Batch augmentation acts as a regularizer and an accelerator, increasing both generalization and performance scaling.
We analyze the effect of batch augmentation on gradient variance, and show that it empirically improves convergence for a wide variety of deep neural networks and datasets.
Our results show that batch augmentation reduces the number of necessary SGD updates to achieve the same accuracy as the state-of-the-art.
Overall, this simple yet effective method enables faster training and better generalization by allowing more computational resources to be used concurrently. |
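A minimal sketch of the batching scheme, using additive noise as a stand-in for real augmentations such as random crops and flips:

```python
import numpy as np

def augment_batch(batch, num_copies, augment, rng):
    """Batch augmentation: replicate every sample `num_copies` times within
    the same batch, each copy with an independently drawn augmentation."""
    copies = [augment(x, rng) for x in batch for _ in range(num_copies)]
    return np.stack(copies)

rng = np.random.default_rng(0)
noise = lambda x, rng: x + rng.normal(scale=0.1, size=x.shape)  # toy augmentation
images = np.zeros((8, 3, 4, 4))                    # a batch of 8 images
big_batch = augment_batch(images, 4, noise, rng)   # effective batch of 32
```

The gradient averaged over the enlarged batch has lower variance per SGD step, which is the mechanism behind the reported convergence improvements.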
1902.05509 | MultiGrain is a network architecture producing compact vector representations that are suited both for image classification and particular object retrieval.
It builds on a standard classification trunk.
The top of the network produces an embedding containing coarse and fine-grained information, so that images can be recognized based on the object class, particular object, or if they are distorted copies.
Our joint training is simple: we minimize a cross-entropy loss for classification and a ranking loss
that determines if two images are identical up to data augmentation, with no need for additional labels.
A key component of MultiGrain is a pooling layer that takes advantage of high-resolution images with a network trained at a lower resolution. When fed to a linear classifier, the learned embeddings provide state-of-the-art classification accuracy.
For instance, we obtain 79.4% top-1 accuracy with a ResNet-50 learned on Imagenet, which is a +1.8% absolute improvement over the AutoAugment method.
When compared with the cosine similarity, the same embeddings perform on par with the state-of-the-art for image retrieval at moderate resolutions. |
2406.15972 | Continual learning aims to allow models to learn new tasks without forgetting what has been learned before. This work introduces Elastic Variational Continual Learning with Weight Consolidation (EVCL) , a novel hybrid model that integrates the variational posterior approximation mechanism of Variational Continual Learning (VCL) with the regularization-based parameter-protection strategy of Elastic Weight Consolidation (EWC) . By combining the strengths of both methods, EVCL effectively mitigates catastrophic forgetting and enables better capture of dependencies between model parameters and task-specific data. Evaluated on five discriminative tasks, EVCL consistently outperforms existing baselines in both domain-incremental and task-incremental learning scenarios for deep discriminative models. |
2403.14606 | Artificial intelligence has recently experienced remarkable advances, fueled by
large models, vast datasets, accelerated hardware, and, last but not least,
the transformative power of differentiable programming. This new programming
paradigm enables end-to-end differentiation of complex computer programs
(including those with control flows and data structures) , making gradient-based
optimization of program parameters possible. As an emerging paradigm, differentiable programming builds upon several areas
of computer science and applied mathematics, including automatic
differentiation, graphical models, optimization and statistics. This book
presents a comprehensive review of the fundamental concepts useful for
differentiable programming. We adopt two main perspectives, that of
optimization and that of probability, with clear analogies between the two. Differentiable programming is not merely the differentiation of
programs, but also the thoughtful design of programs intended for
differentiation. By making programs differentiable, we inherently introduce
probability distributions over their execution, providing a means to quantify
the uncertainty associated with program outputs. |
2206.07137 | Training on web-scale data can take months. But most computation and time is wasted on redundant and noisy points that are already learnt or not learnable. To accelerate training, we introduce Reducible Holdout Loss Selection (RHO-LOSS), a simple but principled technique which selects approximately those points for training that most reduce the model’s generalization loss. As a result, RHO-LOSS mitigates the weaknesses of existing data selection methods: techniques from the optimization literature typically select “hard” (e.g. high loss) points, but such points are often noisy (not learnable) or less task-relevant. Conversely, curriculum learning prioritizes “easy” points, but such points need not be trained on once learnt. In contrast, RHO-LOSS selects points that are learnable, worth learning, and not yet learnt. RHO-LOSS trains in far fewer steps than prior art, improves accuracy, and speeds up training on a wide range of datasets, hyperparameters, and architectures (MLPs, CNNs, and BERT). On the large web-scraped image dataset Clothing-1M, RHO-LOSS trains in 18x fewer steps and reaches 2% higher final accuracy than uniform data shuffling. Code: https://github.com/OATML/RHO-Loss
2404.01413 | The proliferation of generative models, combined with pretraining on web-scale data, raises a timely question: what happens when these models are trained on their own generated outputs? Recent investigations into model-data feedback loops proposed that such loops would lead to a phenomenon termed model collapse, under which performance progressively degrades with each model-data feedback iteration until fitted models become useless. However, those studies largely assumed that new data replace old data over time, where an arguably more realistic assumption is that data accumulate over time. In this paper, we ask: what effect does accumulating data have on model collapse?
We empirically study this question by pretraining sequences of language models on text corpora. We confirm that replacing the original real data by each generation’s synthetic data does indeed tend towards model collapse, then demonstrate that accumulating the successive generations of synthetic data alongside the original real data avoids model collapse; these results hold across a range of model sizes, architectures, and hyperparameters. We obtain similar results for deep generative models on other types of real data: diffusion models for molecule conformation generation and variational autoencoders for image generation. To understand why accumulating data can avoid model collapse, we use an analytically tractable framework introduced by prior work in which a sequence of linear models are fit to the previous models’ outputs. Previous work used this framework to show that if data are replaced, the test error increases with the number of model-fitting iterations; we extend this argument to prove that if data instead accumulate, the test error has a finite upper bound independent of the number of iterations, meaning model collapse no longer occurs.
Our work provides consistent empirical and theoretical evidence that data accumulation avoids model collapse. |
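The linear-model argument can be reproduced in a small simulation: refit a regression on its own synthetic labels for several generations, either replacing or accumulating the data, and compare the parameter error against the ground truth. All sizes and noise levels below are illustrative:

```python
import numpy as np

def simulate(n_iters, accumulate, rng, n=200, d=10, sigma=0.5):
    """Iterate the model-data feedback loop for linear regression and return
    the final squared parameter error against the true weights."""
    w_true = rng.normal(size=d)
    data_X = rng.normal(size=(n, d))
    data_y = data_X @ w_true + sigma * rng.normal(size=n)  # generation-0 real data
    for _ in range(n_iters):
        w_hat, *_ = np.linalg.lstsq(data_X, data_y, rcond=None)
        X_new = rng.normal(size=(n, d))
        y_new = X_new @ w_hat + sigma * rng.normal(size=n)  # synthetic labels
        if accumulate:
            data_X = np.vstack([data_X, X_new])
            data_y = np.concatenate([data_y, y_new])
        else:
            data_X, data_y = X_new, y_new                   # replace old data
    w_hat, *_ = np.linalg.lstsq(data_X, data_y, rcond=None)
    return float(np.sum((w_hat - w_true) ** 2))

err_replace = simulate(20, accumulate=False, rng=np.random.default_rng(0))
err_accumulate = simulate(20, accumulate=True, rng=np.random.default_rng(0))
# err_replace grows roughly linearly with generations; err_accumulate stays bounded
```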
2407.17465 | The Maximal Update Parametrization (µP) aims to make the optimal hyperparameters (HPs) of a model independent of its size, allowing them to be swept using a cheap proxy model rather than the full-size target model.
We present a new scheme, u-µP, which improves upon µP by combining it with Unit Scaling, a method for designing models that makes them easy to train in low-precision.
The two techniques have a natural affinity: µP ensures that the scale of activations is independent of model size, and Unit Scaling ensures that activations, weights and gradients begin training with a scale of one.
This synthesis opens the door to a simpler scheme, whose default values are near-optimal. This in turn facilitates a more efficient sweeping strategy, with u-µP models reaching a lower loss than comparable µP models and working out-of-the-box in FP8. |
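Unit Scaling's core property can be illustrated in a few lines: draw unit-variance weights and move the usual 1/sqrt(fan_in) initialization factor into the forward pass, so activations start at scale roughly one regardless of width. A toy NumPy sketch (not the authors' implementation):

```python
import numpy as np

def unit_scaled_linear(x, w):
    # Weights are drawn with unit variance; the 1/sqrt(fan_in) factor is
    # applied in the forward pass, so outputs start at scale ~1 at any width.
    return (x @ w) / np.sqrt(w.shape[0])

def output_scale(width, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(512, width))     # unit-scale input activations
    w = rng.normal(size=(width, width))   # unit-scale weights
    return float(unit_scaled_linear(x, w).std())

print(output_scale(64), output_scale(1024))
```

Both widths produce outputs with standard deviation close to one, which is the width-independence that makes hyperparameters transferable and keeps values inside low-precision dynamic range.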
2202.12837 | Large language models (LMs) are able to in-context learn—perform a new task via inference alone by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs.
However, there has been little understanding of how the model learns and which aspects of the demonstrations contribute to end task performance.
In this paper, we show that ground truth demonstrations are in fact not required—randomly replacing labels in the demonstrations barely hurts performance on a range of classification and multi-choice tasks, consistently over 12 different models including GPT-3.
Instead, we find that other aspects of the demonstrations are the key drivers of end task performance, including the fact that they provide a few examples of
(1) the label space, (2) the distribution of the input text, and (3) the overall format of the sequence.
Together, our analysis provides a new way of understanding how and why in-context learning works, while opening up new questions about how much can be learned from large language models through inference alone. |
2402.18819 | In-context learning (ICL) exhibits dual operating modes: task learning, i.e., acquiring a new skill from in-context samples, and task retrieval, i.e., locating and activating a relevant pretrained skill.
Recent theoretical work investigates various mathematical models to analyze ICL, but existing models explain only one operating mode at a time.
We introduce a probabilistic model, with which one can explain the dual operating modes of ICL simultaneously.
Focusing on in-context learning of linear functions, we extend existing models for pretraining data by introducing multiple task groups and task-dependent input distributions.
We then analyze the behavior of the optimally pretrained model under the squared loss, i.e., the MMSE estimator of the label given in-context examples.
Regarding pretraining task distribution as prior and in-context examples as the observation, we derive the closed-form expression of the task posterior distribution.
With the closed-form expression, we obtain a quantitative understanding of the two operating modes of ICL.
Furthermore, we shed light on an unexplained phenomenon observed in practice: under certain settings, the ICL risk initially increases and then decreases with more in-context examples.
Our model offers a plausible explanation for this “early ascent” phenomenon: a limited number of in-context samples may lead to the retrieval of an incorrect skill, thereby increasing the risk, which will eventually diminish as task learning takes effect with more in-context samples.
We also theoretically analyze ICL with biased labels, e.g., zero-shot ICL, where in-context examples are assigned random labels.
Lastly, we validate our findings and predictions via experiments involving Transformers and large language models.
The code for our project is available in the GitHub repository: https://github.com/UW-Madison-Lee-Lab/Dual_Operating_Modes_of_ICL. |
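The MMSE-estimator view described above can be made concrete with a toy version: a discrete prior over pretrained linear "skills", a Gaussian-noise likelihood over the in-context examples, and a posterior-weighted prediction. The task values, prior, and noise level below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete prior over candidate pretrained linear "skills" w.
tasks = np.array([[2.0], [-1.0], [0.5]])
prior = np.array([0.5, 0.3, 0.2])
noise = 0.5

def posterior(X, y):
    # Task posterior given in-context (x, y) pairs under Gaussian noise.
    log_post = np.log(prior).copy()
    for k, w in enumerate(tasks):
        resid = y - X @ w
        log_post[k] += -0.5 * np.sum(resid ** 2) / noise ** 2
    log_post -= log_post.max()            # for numerical stability
    p = np.exp(log_post)
    return p / p.sum()

def mmse_predict(X, y, x_query):
    # MMSE estimator: posterior-weighted mixture of the candidate predictors.
    p = posterior(X, y)
    return float(sum(pk * (x_query @ w) for pk, w in zip(p, tasks)))

# The in-context task matches tasks[1]; the posterior concentrates on it
# as more demonstrations arrive (task retrieval -> task learning).
X = rng.normal(size=(16, 1))
y = X @ tasks[1] + noise * rng.normal(size=16)
p_few, p_many = posterior(X[:2], y[:2]), posterior(X, y)
```

With two examples the prior still matters (retrieval of a pretrained skill); with all sixteen, the posterior concentrates on the true task and the MMSE prediction approaches its output.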
2407.14414 | Language models can be used to solve long-horizon planning problems in two distinct modes. In a fast ‘System-1’ mode, models directly generate plans without any explicit search or backtracking, and in a slow ‘System-2’ mode, they plan step-by-step by explicitly searching over possible actions. While System-2 planning is typically more effective, it is also more computationally expensive, often making it infeasible for long plans or large action spaces. Moreover, isolated System-1 or System-2 planning ignores the user’s end goals and constraints (e.g., token budget), failing to provide ways for the user to control their behavior. To this end, we propose the System-1.x Planner, a controllable planning framework with language models that is capable of generating hybrid plans and balancing between the two planning modes based on the difficulty of the problem at hand. System-1.x consists of (i) a controller, (ii) a System-1 Planner, and (iii) a System-2 Planner. Based on a user-specified hybridization factor x governing the degree to which the system uses System-1 vs. System-2, the controller decomposes a planning problem into sub-goals, and classifies them as easy or hard to be solved by either System-1 or System-2, respectively. We fine-tune all three components on top of a single base LLM, requiring only search traces as supervision. Experiments with two diverse planning tasks – Maze Navigation and Blocksworld – show that our System-1.x Planner outperforms a System-1 Planner, a System-2 Planner trained to approximate A∗ search, and also a symbolic planner (A∗ search), given an exploration budget. We also demonstrate the following key properties of our planner: (1) controllability: by adjusting the hybridization factor x we can perform more (or less) search, improving performance, (2) flexibility: by building a neuro-symbolic variant composed of a neural System-1 planner and a symbolic System-2 planner, we can take advantage of existing symbolic methods, and (3) generalizability: by learning from different search algorithms (BFS, DFS, A∗), we show that our method is robust to the choice of search algorithm used for training. Code available at: https://github.com/swarnaHub/System-1.x |
2406.11794 | We introduce DataComp for Language Models (DCLM), a testbed for controlled dataset experiments with the goal of improving language models.
As part of DCLM, we provide a standardized corpus of 240T tokens extracted from Common Crawl, effective pretraining recipes based on the OpenLM framework, and a broad suite of 53 downstream evaluations.
Participants in the DCLM benchmark can experiment with data curation strategies such as deduplication, filtering, and data mixing at model scales ranging from 412M to 7B parameters.
As a baseline for DCLM, we conduct extensive experiments and find that model-based filtering is key to assembling a high-quality training set.
The resulting dataset, DCLM-Baseline, enables training a 7B parameter language model from scratch to 64% 5-shot accuracy on MMLU with 2.6T training tokens.
Compared to MAP-Neo, the previous state-of-the-art in open-data language models, DCLM-Baseline represents a 6.6 percentage point improvement on MMLU while being trained with 40% less compute.
Our baseline model is also comparable to Mistral-7B-v0.3 and Llama 3 8B on MMLU (63% & 66%), and performs similarly on an average of 53 natural language understanding tasks while being trained with 6.6× less compute than Llama 3 8B.
Our results highlight the importance of dataset design for training language models and offer a starting point for further research on data curation.
We release the DCLM benchmark, framework, models, and datasets at https://datacomp.ai/dclm. |
2403.13787 | Reward models (RMs) are at the crux of successful RLHF to align pretrained models to human preferences, yet there has been relatively little study that focuses on evaluation of those reward models.
Evaluating reward models presents an opportunity to understand the opaque technologies used for alignment of language models and which values are embedded in them.
To date, very few descriptors of capabilities, training methods, or open-source reward models exist.
In this paper, we present RewardBench, a benchmark dataset and code-base for evaluation, to enhance scientific understanding of reward models.
The RewardBench dataset is a collection of prompt-win-lose trios spanning chat, reasoning, and safety, to benchmark how reward models perform on challenging, structured and out-of-distribution queries.
We created specific comparison datasets for RMs that have subtle, but verifiable reasons (e.g. bugs, incorrect facts) why one answer should be preferred to another.
On the RewardBench leaderboard, we evaluate reward models trained with a variety of methods, such as the direct MLE training of classifiers and the implicit reward modeling of Direct Preference Optimization (DPO), and on a spectrum of datasets.
We present many findings on propensity for refusals, reasoning limitations, and instruction following shortcomings of various reward models towards a better understanding of the RLHF process. |
1911.00172v2 | We introduce kNN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a k-nearest neighbors (kNN) model.
The nearest neighbors are computed according to distance in the pre-trained LM embedding space, and can be drawn from any text collection, including the original LM training data.
Applying this augmentation to a strong Wikitext-103 LM, with neighbors drawn from the original training set, our kNN-LM achieves a new state-of-the-art perplexity of 15.79 – a 2.9 point improvement with no additional training.
We also show that this approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore, again without further training.
Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowledge.
Together, these results strongly suggest that learning similarity between sequences of text
is easier than predicting the next word, and that nearest neighbor search is an effective approach for language modeling in the long tail. |
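The interpolation itself is a few lines once a datastore of (context embedding, next token) pairs exists. A toy sketch with random vectors standing in for LM context embeddings (the temperature-free softmax over negative distances is a simplification of the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, k, lam = 8, 16, 4, 0.25

# Datastore: key = context embedding, value = the token that followed it.
keys = rng.normal(size=(100, dim))
values = rng.integers(0, vocab, size=100)

def knn_probs(query):
    d2 = np.sum((keys - query) ** 2, axis=1)       # squared L2 distances
    nn = np.argsort(d2)[:k]                        # k nearest neighbors
    w = np.exp(-(d2[nn] - d2[nn].min()))           # softmax over -distance
    w /= w.sum()
    p = np.zeros(vocab)
    np.add.at(p, values[nn], w)                    # aggregate weight per token
    return p

def knn_lm_probs(p_lm, query):
    # Linear interpolation of the base LM with the retrieval distribution.
    return lam * knn_probs(query) + (1 - lam) * p_lm

p_lm = np.full(vocab, 1.0 / vocab)                 # stand-in base LM output
p = knn_lm_probs(p_lm, rng.normal(size=dim))
```

Swapping the datastore (rows of `keys`/`values`) changes the domain without retraining, which is the domain-adaptation property the abstract highlights.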
2407.10969 | We introduce Q-Sparse, a simple yet effective approach to training sparsely-activated large language models (LLMs). Q-Sparse enables full sparsity of activations in LLMs, which can bring significant efficiency gains in inference. This is achieved by applying top-K sparsification to the activations and the straight-through estimator to the training. We also introduce Block Q-Sparse for batch training and inference. The key results from this work are: (1) Q-Sparse can achieve results comparable to those of baseline LLMs while being much more efficient at inference time;
(2) We present an inference-optimal scaling law for sparsely-activated LLMs; (3) Q-Sparse is effective in different settings, including training-from-scratch, continue-training of off-the-shelf LLMs, and finetuning; (4) Q-Sparse works for both full-precision and 1-bit LLMs (e.g., BitNet b1.58). Particularly, the synergy of BitNet b1.58 and Q-Sparse (can be equipped with MoE) provides the cornerstone and a clear path to revolutionize the efficiency, including cost and energy consumption, of future LLMs. |
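The two mechanisms named above are simple to sketch: top-K sparsification keeps only the K largest-magnitude activations, and the straight-through estimator treats that operation as the identity on the backward pass. A minimal NumPy illustration (the paper applies this inside transformer layers; here it acts on a plain array):

```python
import numpy as np

def topk_sparsify(x, k):
    # Keep the k largest-magnitude entries per row; zero everything else.
    drop = np.argsort(np.abs(x), axis=-1)[..., :-k]
    out = x.copy()
    np.put_along_axis(out, drop, 0.0, axis=-1)
    return out

def topk_backward(upstream_grad):
    # Straight-through estimator: gradients pass through as if the
    # sparsification were the identity function.
    return upstream_grad

x = np.array([[0.1, -2.0, 0.5, 3.0],
              [1.0, 0.2, -0.3, 0.05]])
y = topk_sparsify(x, k=2)
```

Only the surviving activations participate in the next matrix multiply, which is where the inference savings come from; the STE keeps the non-differentiable selection from blocking training.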
2407.12077 | We introduce GoldFinch, a hybrid Linear Attention/Transformer sequence model that uses a new technique to efficiently generate a highly compressed and reusable KV-Cache in linear time and space with respect to sequence length. GoldFinch stacks our new GOLD transformer on top of an enhanced version of the Finch (RWKV-6) architecture. We train up to 1.5B parameter class models of the Finch, Llama, and GoldFinch architectures, and find dramatically improved modeling performance relative to both Finch and Llama. Our cache size savings increase linearly with model layer count, ranging from 756-2550 times smaller than the traditional transformer cache for common sizes, enabling inference of extremely large context lengths even on limited hardware. Although autoregressive generation has O(n) time complexity per token because of attention, pre-fill computation of the entire initial cache state for a submitted context costs only O(1) time per token due to the use of a recurrent neural network (RNN) to generate this cache. We release our trained weights and training code under the Apache 2.0 license for community use. Code at: https://github.com/recursal/GoldFinch-paper. Model weights at: https://huggingface.co./recursal/GoldFinch-paper |
2407.12267 | We present a new approach for generating 3D house wireframes with semantic enrichment using an autoregressive model. Unlike conventional generative models that independently process vertices, edges, and faces, our approach employs a unified wire-based representation for improved coherence in learning 3D wireframe structures. By re-ordering wire sequences based on semantic meanings, we facilitate seamless semantic integration during sequence generation. Our two-phase technique merges a graph-based autoencoder with a transformer-based decoder to learn latent geometric tokens and generate semantic-aware wireframes. Through iterative prediction and decoding during inference, our model produces detailed wireframes that can be easily segmented into distinct components, such as walls, roofs, and rooms, reflecting the semantic essence of the shape. Empirical results on a comprehensive house dataset validate the superior accuracy, novelty, and semantic fidelity of our model compared to existing generative models.
More results and details can be found at https://vcc.tech/research/2024/3DWire. |
2406.19223 | Tokenizers are crucial for encoding information in Large Language Models, but their development has recently stagnated, and they contain inherent weaknesses. Major limitations include computational overhead, ineffective vocabulary use, and unnecessarily large embedding and head layers. Additionally, their performance is biased towards a reference corpus, leading to reduced effectiveness for underrepresented languages.
To remedy these issues, we propose T-Free, which directly embeds words through sparse activation patterns over character triplets, and does not require a reference corpus. T-Free inherently exploits morphological similarities and allows for strong compression of embedding layers.
In our exhaustive experimental evaluation, we achieve competitive downstream performance with a parameter reduction of more than 85% on these layers.
Further, T-Free shows significant improvements in cross-lingual transfer learning. |
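The trigram idea can be sketched without the paper's exact hashing scheme: pad a word with boundary markers, split it into character triplets, and hash each triplet into a fixed number of embedding slots. The hash construction and slot count below are assumptions for illustration only:

```python
import hashlib

def trigrams(word):
    padded = f"_{word}_"                 # mark word boundaries
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

def activations(word, n_slots=4096, hashes=2):
    # Each trigram activates a few embedding slots via hashing,
    # so no reference-corpus vocabulary is needed.
    slots = set()
    for t in trigrams(word):
        for h in range(hashes):
            digest = hashlib.sha256(f"{h}:{t}".encode()).hexdigest()
            slots.add(int(digest, 16) % n_slots)
    return sorted(slots)

# Morphologically similar words share trigrams, hence activation slots.
a, b, c = activations("house"), activations("houses"), activations("zebra")
overlap_ab = len(set(a) & set(b))
overlap_ac = len(set(a) & set(c))
```

Because "house" and "houses" share most trigrams, their activation patterns overlap heavily, while an unrelated word overlaps only by hash collision; this is the morphological-similarity property the abstract mentions.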
2402.03496 | Adaptive gradient optimizers like Adam(W) are the default training algorithms for many deep learning architectures, such as transformers.
Their diagonal preconditioner is based on the gradient outer product which is incorporated into the parameter update via a square root.
While these methods are often motivated as approximate second-order methods, the square root represents a fundamental difference.
In this work, we investigate how the behavior of adaptive methods changes when we remove the root, i.e. strengthen their second-order motivation.
Surprisingly, we find that such square-root-free adaptive methods close the generalization gap to SGD on convolutional architectures, while maintaining their root-based counterpart’s performance on transformers.
The second-order perspective also has practical benefits for the development of adaptive methods with non-diagonal preconditioner.
In contrast to root-based counterparts like Shampoo, they do not require numerically unstable matrix square roots and therefore work well in low precision, which we demonstrate empirically.
This raises important questions regarding the currently overlooked role of adaptivity for the success of adaptive methods since the success is often attributed to sign descent induced by the root. |
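The difference the paper studies can be seen in a two-coordinate example: with the square root, the early Adam-style update magnitude is nearly independent of the gradient scale (sign-descent-like), whereas removing the root makes the update scale like g/v, a stronger second-order-style preconditioning. A toy sketch of a single diagonal-preconditioned step (momentum and bias correction omitted):

```python
import numpy as np

def preconditioned_step(g, v, lr=1e-3, beta2=0.999, eps=1e-8, root=True):
    # v accumulates the diagonal of the gradient outer product,
    # as in Adam/RMSprop.
    v = beta2 * v + (1 - beta2) * g * g
    denom = (np.sqrt(v) if root else v) + eps
    return -lr * g / denom, v

g = np.array([0.1, 1.0])                       # small and large coordinates
step_root, _ = preconditioned_step(g, np.zeros(2), root=True)
step_noroot, _ = preconditioned_step(g, np.zeros(2), root=False)
```

On this first step, `step_root` has nearly equal magnitude in both coordinates (the root cancels the gradient scale), while `step_noroot` is about ten times larger in the small-gradient coordinate, reflecting the P⁻¹g scaling of a second-order method.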
2407.01449 | Documents are visually rich structures that convey information through text, as well as tables, figures, page layouts, or fonts. While modern document retrieval systems exhibit strong performance on query-to-text matching, they struggle to exploit visual cues efficiently, hindering their performance on practical document retrieval applications such as Retrieval Augmented Generation.
To benchmark current systems on visually rich document retrieval, we introduce the Visual Document Retrieval Benchmark ViDoRe, composed of various page-level retrieving tasks spanning multiple domains, languages, and settings.
The inherent shortcomings of modern systems motivate the introduction of a new retrieval model architecture, ColPali, which leverages the document understanding capabilities of recent Vision Language Models to produce high-quality contextualized embeddings solely from images of document pages. Combined with a late interaction matching mechanism, ColPali largely outperforms modern document retrieval pipelines while being drastically faster and end-to-end trainable.
We release all project artifacts at https://huggingface.co./vidore. |
2407.11966 | Good weight initialization serves as an effective measure to reduce the training cost of a deep neural network (DNN) model. The choice of how to initialize parameters is challenging and may require manual tuning, which can be time-consuming and prone to human error. To overcome such limitations, this work takes a novel step towards building a weight generator to synthesize the neural weights for initialization. We use the image-to-image translation task with generative adversarial networks (GANs) as an example due to the ease of collecting model weights spanning a wide range. Specifically, we first collect a dataset with various image editing concepts and their corresponding trained weights, which are later used for the training of the weight generator. To address the different characteristics among layers and the substantial number of weights to be predicted, we divide the weights into equal-sized blocks and assign each block an index. Subsequently, a diffusion model is trained with such a dataset using both text conditions of the concept and the block indexes. By initializing the image translation model with the denoised weights predicted by our diffusion model, the training requires only seconds. Compared to training from scratch (i.e., Pix2pix), we achieve a substantial training-time acceleration for a new concept while obtaining even better image generation quality. |
2401.16380 | Large language models are trained on massive scrapes of the web, which are often unstructured, noisy, and poorly phrased.
Current scaling laws show that learning from such data requires an abundance of both compute and data, which grows with the size of the model being trained.
This is infeasible both because of the large compute costs and duration associated with pre-training, and the impending scarcity of high-quality data on the web.
In this work, we propose Web Rephrase Augmented Pre-training (WRAP) that uses an off-the-shelf instruction-tuned model prompted to paraphrase documents on the web in specific styles such as “like Wikipedia” or in “question-answer format” to jointly pre-train LLMs on real and synthetic rephrases. First, we show that using WRAP on the C4 dataset, which is naturally noisy,
speeds up pre-training by ~3×. At the same pre-training compute budget, it
improves perplexity by more than 10% on average across different subsets of the Pile, and improves zero-shot question answer accuracy across 13 tasks by more than 2%. Second, we investigate the impact of the re-phrasing style on the performance of the model, offering insights into how the composition of the training data can impact the performance of LLMs in OOD settings. Our gains are attributed to the fact that re-phrased synthetic data has higher utility than just real data because it (i) incorporates style diversity that closely reflects downstream evaluation style, and (ii) has higher ‘quality’ than web-scraped data. |
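The WRAP recipe reduces to: prompt an instruction-tuned model with a style template, then mix the rephrases back in with the real documents. A structural sketch (the prompt templates and the `generate` stand-in are illustrative assumptions, not the paper's actual prompts or model):

```python
import random

random.seed(0)

STYLES = {
    "wikipedia": "Rewrite the following text in the style of a Wikipedia article:\n{doc}",
    "qa": "Rewrite the following text as question-answer pairs:\n{doc}",
}

def rephrase(doc, style, generate):
    # `generate` is any instruction-tuned LM callable (hypothetical stand-in).
    return generate(STYLES[style].format(doc=doc))

def wrap_corpus(real_docs, generate, synth_ratio=1.0):
    # Jointly pre-train on real docs plus their synthetic rephrases.
    synthetic = [
        rephrase(doc, random.choice(list(STYLES)), generate)
        for doc in real_docs
        if random.random() < synth_ratio
    ]
    mixed = real_docs + synthetic
    random.shuffle(mixed)
    return mixed

# Toy stand-in "model" for the sketch; a real setup would call an LLM.
corpus = wrap_corpus(["noisy web doc"], generate=lambda prompt: prompt.upper())
```

The key design point is that real data is kept alongside the rephrases rather than replaced, so the model sees both web-style and evaluation-style text.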
2406.18518 | The advancement of function-calling agent models requires diverse, reliable, and high-quality datasets. This paper presents APIGen, an automated data generation pipeline designed to synthesize verifiable high-quality datasets for function-calling applications. We leverage APIGen and collect 3,673 executable APIs across 21 different categories to generate diverse function-calling datasets in a scalable and structured manner. Each data point in our dataset is verified through three hierarchical stages: format checking, actual function executions, and semantic verification, ensuring its reliability and correctness. We demonstrate that models trained with our curated datasets, even with only 7B parameters, can achieve state-of-the-art performance on the Berkeley Function-Calling Benchmark, outperforming multiple GPT-4 models. Moreover, our 1B model achieves exceptional performance, surpassing GPT-3.5-Turbo and Claude-3 Haiku. We release a dataset containing 60,000 high-quality entries, aiming to advance the field of function-calling agent domains. The dataset is available on Huggingface and the project homepage. |
2406.00770 | Fine-tuning large pre-trained language models with Evol-Instruct has achieved encouraging results across a wide range of tasks. However, designing effective evolving methods for instruction evolution requires substantial human expertise. This paper proposes Auto Evol-Instruct, an end-to-end framework that evolves instruction datasets using large language models without any human effort. The framework automatically analyzes and summarizes suitable evolutionary strategies for the given instruction data and iteratively improves the evolving method based on issues exposed during the instruction evolution process. Our extensive experiments demonstrate that the best method optimized by Auto Evol-Instruct outperforms human-designed methods on various benchmarks, including MT-Bench, AlpacaEval, GSM8K, and HumanEval. |
2407.09450 | Large language models (LLMs) have shown remarkable capabilities, but still struggle with processing extensive contexts, limiting their ability to maintain coherence and accuracy over long sequences. In contrast, the human brain excels at organising and retrieving episodic experiences across vast temporal scales, spanning a lifetime. In this work, we introduce EM-LLM, a novel approach that integrates key aspects of human episodic memory and event cognition into LLMs, enabling them to effectively handle practically infinite context lengths while maintaining computational efficiency. EM-LLM organises sequences of tokens into coherent episodic events using a combination of Bayesian surprise and graph-theoretic boundary refinement in an on-line fashion. When needed, these events are retrieved through a two-stage memory process, combining similarity-based and temporally contiguous retrieval for efficient and human-like access to relevant information. Experiments on the LongBench dataset demonstrate EM-LLM’s superior performance, outperforming the state-of-the-art InfLLM model with an overall relative improvement across various tasks, including a notable improvement on the PassageRetrieval task. Furthermore, our analysis reveals strong correlations between EM-LLM’s event segmentation and human-perceived events, suggesting a bridge between this artificial system and its biological counterpart. This work not only advances LLM capabilities in processing extended contexts but also provides a computational framework for exploring human memory mechanisms, opening new avenues for interdisciplinary research in AI and cognitive science. |
2407.07565 | In this paper we consider contamination by code generation test sets, in particular in their use in modern large language models.
We discuss three possible sources of such contamination and show findings supporting each of them: (i) direct data leakage, (ii) indirect data leakage through the use of synthetic data and (iii) overfitting to evaluation sets during model selection. Key to our findings is a new dataset of 161 prompts with their associated Python solutions, which is released at https://huggingface.co./datasets/CohereForAI/lbpp. |
2406.15877 | Automated software engineering has been greatly empowered by the recent advances in Large Language Models (LLMs) for programming. While current benchmarks have shown that LLMs can perform various software engineering tasks like human developers, the majority of their evaluations are limited to short and self-contained algorithmic tasks.
Solving challenging and practical programming tasks requires the capability of utilizing diverse function calls as tools to efficiently implement functionalities like data analysis and web development.
In addition, using multiple tools to solve a task needs compositional reasoning by accurately understanding complex instructions. Fulfilling both of these characteristics can pose a great challenge for LLMs.
To assess how well LLMs can solve challenging and practical programming tasks, we introduce BigCodeBench, a benchmark that challenges LLMs to invoke multiple function calls as tools from 139 libraries and 7 domains for 1,140 fine-grained programming tasks.
To evaluate LLMs rigorously, each programming task encompasses 5.6 test cases with an average branch coverage of 99%.
In addition, we propose a natural-language-oriented variant of BigCodeBench, BigCodeBench-Instruct, that automatically transforms the original docstrings into short instructions only with essential information. Our extensive evaluation of 60 LLMs shows that LLMs are not yet capable of following complex instructions to use function calls precisely, with scores up to 60%, significantly lower than the human performance of 97%. The results underscore the need for further advancements in this area. |
2407.09025 | Spreadsheets are characterized by their extensive two-dimensional grids, flexible layouts, and varied formatting options, which pose significant challenges for large language models (LLMs). In response, we introduce SpreadsheetLLM, pioneering an efficient encoding method designed to unleash and optimize LLMs’ powerful understanding and reasoning capability on spreadsheets. Initially, we propose a vanilla serialization approach that incorporates cell addresses, values, and formats. However, this approach was limited by LLMs’ token constraints, making it impractical for most applications. To tackle this challenge, we develop SheetCompressor, an innovative encoding framework that compresses spreadsheets effectively for LLMs. It comprises three modules: structural-anchor-based compression, inverse index translation, and data-format-aware aggregation. It significantly improves performance in the spreadsheet table detection task, outperforming the vanilla approach by 25.6% in GPT-4’s in-context learning setting. Moreover, the fine-tuned LLM with SheetCompressor has an average compression ratio of 25×, but achieves a state-of-the-art 78.9% F1 score, surpassing the best existing models by 12.3%.
Finally, we propose Chain of Spreadsheet for downstream tasks of spreadsheet understanding and validate it in a new and demanding spreadsheet QA task. We methodically leverage the inherent layout and structure of spreadsheets, demonstrating that SpreadsheetLLM is highly effective across a variety of spreadsheet tasks. |
2406.06608 | Generative Artificial Intelligence (GenAI) systems are being increasingly deployed across all parts of industry and research settings. Developers and end users interact with these systems through the use of prompting or prompt engineering. While prompting is a widespread and highly researched concept, there exists conflicting terminology and a poor ontological understanding of what constitutes a prompt due to the area’s nascency. This paper establishes a structured understanding of prompts, by assembling a taxonomy of prompting techniques and analyzing their use. We present a comprehensive vocabulary of 33 vocabulary terms, a taxonomy of 58 text-only prompting techniques, and 40 techniques for other modalities. We further present a meta-analysis of the entire literature on natural language prefix-prompting. |