
Model Card for qa-expert-7B-V1.0

This model handles multi-hop question answering by splitting a multi-hop question into a sequence of single-hop questions, answering each of them, and then summarizing the collected information to produce the final answer.
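For intuition, the decompose-answer-summarize pipeline described above can be sketched as follows. Note that the helper names `decompose`, `answer_single`, and `summarize` are hypothetical placeholders to illustrate the idea, not this library's API:

```python
def answer_multi_hop(question, decompose, answer_single, summarize):
    """Answer a multi-hop question by decomposing it into single-hop steps."""
    # 1. Split the multi-hop question into single-hop sub-questions.
    sub_questions = decompose(question)
    # 2. Answer each sub-question independently (e.g. retrieval + single-hop QA).
    facts = [answer_single(q) for q in sub_questions]
    # 3. Summarize the gathered facts into the final answer.
    return summarize(question, facts)
```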

Model Details

This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the dataset: khaimaitien/qa-expert-multi-hop-qa-V1.0

You can get more information about how to use/train the model from this repo: https://github.com/khaimt/qa_expert


How to Get Started with the Model

First, you need to clone the repo: https://github.com/khaimt/qa_expert

Then install the requirements:

pip install -r requirements.txt

Here is the example code:

from qa_expert import get_inference_model, InferenceType

def retrieve(query: str) -> str:
    # You need to implement this retrieval function: it takes a query and
    # returns a string containing the relevant context.
    # It can be treated as the function to call in OpenAI-style function calling.
    context = ...  # e.g. look up the query in your search index or vector store
    return context

model_inference = get_inference_model(InferenceType.hf, "khaimaitien/qa-expert-7B-V1.0")
question = "..."  # your multi-hop question
answer, messages = model_inference.generate_answer(question, retrieve)
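To try the model end to end, you need a concrete `retrieve` function. Here is a minimal sketch over a hypothetical in-memory corpus, scoring passages by keyword overlap; a real system would query a search index or vector store instead:

```python
def retrieve(query: str) -> str:
    # Toy in-memory corpus (illustrative only); in practice, query a
    # search index or vector store here.
    corpus = [
        "Paris is the capital of France.",
        "The Eiffel Tower is located in Paris.",
        "Mount Everest is the highest mountain on Earth.",
    ]
    query_words = set(query.lower().split())
    # Score each passage by how many words it shares with the query.
    def overlap(passage: str) -> int:
        return len(query_words & set(passage.lower().split()))
    # Return the best-matching passage as the context string.
    return max(corpus, key=overlap)
```

Any function with the signature `(query: str) -> str` works, so you can swap in BM25, dense retrieval, or a web search without changing the rest of the example.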
Model size: 7.24B params (Safetensors, BF16)