context length higher than 100K
#13
by Iamexperimenting - opened
Hi @rishdotblog, what approach would you recommend if my context length exceeds 100K? Also, if I host this model, it will take longer to generate the output, right? Because I need to pass the entire database schema along with the question as a prompt to the model.
Yup, that's correct. For very large schemas, you can either fine-tune the model on your own schema, or use a pruning function to prune the schema before sending it to the model.
rishdotblog changed discussion status to closed
@wongjingping, could you please confirm whether this should be domain-based fine-tuning or chat-based fine-tuning?
Could you please provide an example of a pruning function?
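The thread doesn't include an actual pruning function, but here is a minimal sketch of the idea: keep only the tables whose name or column names overlap with keywords in the question, so the prompt fits in the context window. The `prune_schema` function, its schema representation (a dict of table name to column names), and the keyword-overlap heuristic are all assumptions for illustration, not the approach the maintainers use; a production version might instead rank tables with embeddings.

```python
def prune_schema(schema: dict[str, list[str]], question: str) -> dict[str, list[str]]:
    """Keep only tables likely relevant to the question.

    `schema` maps table name -> list of column names (a simplified,
    hypothetical representation of the database schema).
    A table is kept if its name, a column name, or any snake_case
    part of those identifiers appears as a word in the question.
    """
    # Lowercase the question and strip basic punctuation to get keywords.
    words = {w.strip("?,.!").lower() for w in question.split()}

    pruned: dict[str, list[str]] = {}
    for table, columns in schema.items():
        # Collect candidate tokens: full identifiers plus snake_case parts.
        tokens = {table.lower()} | {c.lower() for c in columns}
        for ident in [table, *columns]:
            tokens.update(part.lower() for part in ident.split("_"))
        if tokens & words:
            pruned[table] = columns
    return pruned


schema = {
    "users": ["id", "name", "signup_date"],
    "orders": ["id", "user_id", "total"],
}
print(prune_schema(schema, "How many users signed up last month?"))
# Only the `users` table survives; `orders` shares no keyword with the question.
```

The pruned schema (rather than the full DDL) is then serialized into the prompt, which both shortens generation time and keeps large schemas under the context limit.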