For the electrical engineering community

A unique, deployable, and efficient 2.7-billion-parameter model for the field of electrical engineering. This repo contains the adapters from the LoRA fine-tuning of the phi-2 model from Microsoft. It was trained on the STEM-AI-mtl/Electrical-engineering dataset combined with garage-bAInd/Open-Platypus.

  • Developed by: STEM.AI
  • Model type: Q&A and code generation
  • Language(s) (NLP): English
  • Finetuned from model: microsoft/phi-2

Direct Use

Q&A related to electrical engineering and the KiCad software. Generation of Python code in general, and for KiCad's scripting console.

Refer to the microsoft/phi-2 model card for the recommended prompt format.
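The phi-2 model card recommends a simple "Instruct:/Output:" style for QA prompts. A minimal helper sketch (the function name is ours, not part of this repo):

```python
def build_prompt(question: str) -> str:
    """Format a question in phi-2's recommended QA prompt style."""
    return f"Instruct: {question}\nOutput:"

# Example:
# build_prompt("What does a pull-up resistor do?")
# → "Instruct: What does a pull-up resistor do?\nOutput:"
```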

Inference script

  • Standard
  • GPTQ format
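A minimal loading sketch, assuming the `transformers` and `peft` libraries (plus `accelerate` for `device_map="auto"`): the base phi-2 model is loaded first, then the LoRA adapters from this repo are attached. The heavy imports live inside the function so the constants can be inspected without those dependencies installed.

```python
BASE_MODEL = "microsoft/phi-2"
ADAPTER_REPO = "STEM-AI-mtl/phi-2-electrical-engineering"

def generate(question: str, max_new_tokens: int = 256) -> str:
    """Load the base model, attach the LoRA adapters, and answer a question.

    Imports are deferred so this module can be imported without
    transformers/peft present.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    model = AutoModelForCausalLM.from_pretrained(
        BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
    )
    # Attach the fine-tuned LoRA adapters from this repository.
    model = PeftModel.from_pretrained(model, ADAPTER_REPO)

    prompt = f"Instruct: {question}\nOutput:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Calling `generate("Explain the purpose of a decoupling capacitor.")` downloads both the base model and the adapters on first use.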

Training Details

Training Data

Dataset related to electrical engineering: STEM-AI-mtl/Electrical-engineering. It is composed of queries: 65% about general electrical engineering, 25% about KiCad (EDA software), and 10% about Python code for KiCad's scripting console.

In addition, a dataset related to STEM and NLP: garage-bAInd/Open-Platypus

Training Procedure

LoRA script

A LoRA PEFT fine-tuning was performed on a 48 GB Nvidia A40 GPU.
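The card does not publish the LoRA hyperparameters used. As an illustration only, a typical configuration for a phi-2 LoRA run might look like the dict below; every value here is an assumption, and the keys map directly onto the fields of `peft.LoraConfig`.

```python
# Illustrative LoRA settings (the actual training values are not published
# in this card); the keys correspond to peft.LoraConfig fields.
lora_config = {
    "r": 16,               # rank of the low-rank update matrices (assumed)
    "lora_alpha": 32,      # scaling factor applied to the update (assumed)
    "lora_dropout": 0.05,  # dropout on the LoRA layers during training (assumed)
    # phi-2 attention/MLP projection names (assumed, depends on transformers version)
    "target_modules": ["q_proj", "k_proj", "v_proj", "dense"],
    "task_type": "CAUSAL_LM",
}
```

A higher rank `r` gives the adapters more capacity at the cost of more trainable parameters; `lora_alpha / r` sets the effective scale of the learned update.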

Model Card Authors

STEM.AI: [email protected]
William Harbec

Safetensors

  • Model size: 2.78B params
  • Tensor type: FP16
