
Code-Mistral-7B

This model is trained on a refined version of my dataset Code-290k-ShareGPT. Besides this, it is trained on the following datasets:

Code-Feedback

orca-math-word-problems-200k

Openhermes

The idea was to check how this model performs with both code and maths datasets. The model is very good at coding. Maths is still hit and miss, but you can test the model yourself.

This model is trained on massive datasets, so the results are very good. I have used the ChatML prompt format.

Kindly note that this is a QLoRA version, a rare exception.

Training: The entire dataset was trained on 4 x A100 80GB GPUs. Training for 3 epochs took almost 33 hours. The Axolotl codebase was used for training. The base model is Mistral.
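
The exact training hyperparameters are not published here, and the actual run used Axolotl, but for illustration, here is a hedged sketch of what a comparable QLoRA setup looks like with the peft and bitsandbytes libraries. All hyperparameters below (rank, alpha, target modules, etc.) are illustrative assumptions, not the ones used for this model:

```python
# Illustrative QLoRA setup; hyperparameters are assumptions, not the actual config.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit NF4, the quantization scheme QLoRA uses.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach trainable low-rank adapters to the attention projections.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights are trained
```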

Example Prompt: This model uses the ChatML prompt format.

```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

You can modify the above prompt as per your requirements.
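
For reference, here is a minimal inference sketch using the Hugging Face transformers library. It assumes you are loading full-precision Mistral-based weights; the repo id and the user question below are placeholder assumptions, and a quantized variant such as this bpw3.7 repository would need a matching quantized loader instead. The sketch simply fills in the ChatML template shown above:

```python
# Minimal ChatML inference sketch; repo id and question are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/Code-Mistral-7B"  # assumed repo id for the full-precision model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Fill the ChatML template exactly as shown above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a C++ function that reverses a string.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```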

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI and model development.

Thank you for your love & support.

Example Output

Screenshots of example outputs: C++, Error Resolving, Matrices, and Machine Learning.

