---
tags:
- merge
- mergekit
- Maths
- Mistral
base_model:
- mlabonne/OmniBeagle-7B
- WizardLM/WizardMath-7B-V1.1
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: Pearl-7B-slerp
  results:
  - task:
      type: text-generation
    metrics:
    - name: Average
      type: Average
      value: 72.75
    - name: ARC
      type: ARC
      value: 68.00
    - name: GSM8K
      type: GSM8K
      value: 73.62
    - name: Winogrande
      type: Winogrande
      value: 68.00
    - name: TruthfulQA
      type: TruthfulQA
      value: 62.35
    - name: HellaSwag
      type: HellaSwag
      value: 87.16
    source:
      name: Open LLM Leaderboard
      url: https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard
---
# Pearl-7B-slerp, an extraordinary 7B model for maths

**03-22-2024 - To date, louisbrulenaudet/Pearl-34B-ties is the "Best 🤝 base merges and moerges model of around 30B" on the Open LLM Leaderboard.**

Pearl-7B-slerp is a merge of the following models:
* [mlabonne/OmniBeagle-7B](https://huggingface.co./mlabonne/OmniBeagle-7B)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co./WizardLM/WizardMath-7B-V1.1)

### Evaluation

The evaluation was performed using the HuggingFace Open LLM Leaderboard.

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | Params (B) |
|---|---|---|---|---|---|---|---|---|
| **louisbrulenaudet/Pearl-7B-slerp** | **72.75** | 68.00 | 87.16 | 64.04 | 62.35 | 81.29 | **73.62** | 7.24 |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | 72.62 | 70.22 | 87.63 | 71.16 | 64.58 | 81.37 | 60.73 | 46.7 |
| microsoft/phi-2 | 61.33 | 61.09 | 75.11 | 58.11 | 44.47 | 74.35 | 54.81 | 2.78 |
| microsoft/Orca-2-13b | 58.64 | 60.67 | 79.81 | 60.37 | 56.41 | 76.64 | 17.97 | 13 |
| mistralai/Mistral-7B-Instruct-v0.1 | 54.96 | 54.52 | 75.63 | 55.38 | 56.28 | 73.72 | 14.25 | 7.24 |
| meta-llama/Llama-2-7b-hf | 50.97 | 53.07 | 78.59 | 46.87 | 38.76 | 74.03 | 14.48 | 6.74 |

Spherical Linear Interpolation (SLERP) interpolates between two vectors at a constant rate of change while preserving the geometric properties of the spherical space in which they lie. It is preferred over plain linear interpolation for two main reasons: linear interpolation in high-dimensional spaces tends to shrink the magnitude of the interpolated vector, reducing the scale of the weights, and in many cases the change in the weights' direction carries more meaningful information (such as feature learning and representation) than the change in their magnitude.

$$\operatorname{slerp}(p_0, p_1; t) = \frac{\sin[(1-t)\Omega]}{\sin\Omega}\, p_0 + \frac{\sin[t\Omega]}{\sin\Omega}\, p_1$$

The implementation of SLERP involves the following steps:
- Normalize the input vectors to unit length so that they represent directions rather than magnitudes.
- Compute the angle between the vectors from their dot product.
- If the vectors are nearly collinear, fall back to linear interpolation for efficiency. Otherwise, compute scale factors from the interpolation factor t (where t=0 yields 100% of the first vector and t=1 yields 100% of the second) and from the angle between the vectors.
- Weigh the original vectors with these factors and sum them to obtain the interpolated vector.

In essence, SLERP provides a robust mechanism for interpolating between model weights, preserving directional information and mitigating the issues linear interpolation causes in high-dimensional spaces.
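For illustration, here is a minimal PyTorch sketch of these steps for a single pair of weight tensors. The function name, the collinearity threshold, and the epsilon guard are assumptions made for readability; the actual merge is performed by mergekit with the configuration shown in the next section.

```python
import torch


def slerp(p0: torch.Tensor, p1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns p0, t=1 returns p1; nearly collinear inputs fall back
    to plain linear interpolation, as described above.
    """
    # Flatten so the dot product measures the angle between the full
    # weight vectors, regardless of the tensors' original shapes.
    v0, v1 = p0.flatten().float(), p1.flatten().float()

    # Step 1: normalize to unit length (directions, not magnitudes).
    v0 = v0 / (v0.norm() + eps)
    v1 = v1 / (v1.norm() + eps)

    # Step 2: angle Omega between the two directions.
    dot = torch.clamp(torch.dot(v0, v1), -1.0, 1.0)

    # Step 3a: nearly collinear -> linear interpolation is cheaper and stable
    # (the 0.9995 threshold is an illustrative assumption, not from the card).
    if dot.abs() > 0.9995:
        return (1.0 - t) * p0 + t * p1

    # Step 3b: scale factors from t and Omega, per the formula above.
    omega = torch.acos(dot)
    sin_omega = torch.sin(omega)
    s0 = torch.sin((1.0 - t) * omega) / sin_omega
    s1 = torch.sin(t * omega) / sin_omega

    # Step 4: weigh the original tensors and sum.
    return (s0 * p0 + s1 * p1).to(p0.dtype)
```

In the actual merge, this interpolation is applied layer by layer, with the `t` schedule in the configuration below controlling the blend for the self-attention and MLP weights across the layer range.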
## Configuration

```yaml
slices:
  - sources:
      - model: mlabonne/OmniBeagle-7B
        layer_range: [0, 32]
      - model: WizardLM/WizardMath-7B-V1.1
        layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/OmniBeagle-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

## Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "louisbrulenaudet/Pearl-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```

## Citing & Authors

If you use this code in your research, please use the following BibTeX entry.

```BibTeX
@misc{louisbrulenaudet2023,
  author = {Louis Brulé Naudet},
  title = {Pearl-7B-slerp, an extraordinary 7B model for maths},
  year = {2023},
  howpublished = {\url{https://huggingface.co./louisbrulenaudet/Pearl-7B-slerp}},
}
```

## Feedback

If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com).