---
base_model: Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant_16bit
language:
- en
- es
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
datasets:
- iamtarun/python_code_instructions_18k_alpaca
- jtatman/python-code-dataset-500k
- flytech/python-codes-25k
- Vezora/Tested-143k-Python-Alpaca
- codefuse-ai/CodeExercise-Python-27k
- Vezora/Tested-22k-Python-Alpaca
- mlabonne/Evol-Instruct-Python-26k
library_name: adapter-transformers
---

# Uploaded model

- **Developed by:** [Agnuxo](https://github.com/Agnuxo1)
- **License:** apache-2.0
- **Finetuned from model:** Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal

This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

## Benchmark Results

This model has been fine-tuned for various tasks.

- **Model size:** 3,821,079,552 parameters
- **Required memory:** 14.23 GB

For more details, visit my [GitHub](https://github.com/Agnuxo1).

Thanks for your interest in this model!
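
## Usage

A minimal inference sketch with the `transformers` library. The repo id below is illustrative (it reuses the base-model path from this card's metadata); substitute the actual Hub id of this model.

```python
# Minimal usage sketch, assuming the fine-tuned weights are published on the
# Hugging Face Hub. The repo id is a placeholder taken from the card metadata.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant_16bit"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # the card lists ~14 GB of required memory
    device_map="auto",
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```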
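
Since the card also tags a GGUF build, here is a sketch using the `llama-cpp-python` package; the local file name is hypothetical and depends on which quantization you download.

```python
# Sketch for running a GGUF quantization locally with llama-cpp-python.
# The model_path is a hypothetical file name for a downloaded GGUF build.
from llama_cpp import Llama

llm = Llama(model_path="./model-q4_k_m.gguf", n_ctx=4096)
out = llm(
    "Write a Python function that checks whether a string is a palindrome.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```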