Llama-3.1-8B-Instruct-Legal-NLI

This model is a fine-tuned version of Meta's Llama-3.1-8B, trained specifically for Legal Natural Language Inference (NLI). Given a legal premise and a hypothesis, it classifies their relationship as Entailed, Contradicted, or Neutral. The model was trained on the LegalLens NLI Shared Task dataset.

Model Details

  • Base Model: meta-llama/Meta-Llama-3.1-8B
  • Task: Legal Natural Language Inference
  • Training Method: QLoRA fine-tuning
  • Training Dataset: LegalLensNLI-SharedTask
  • Languages: English
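
Example Usage

The snippet below is a minimal inference sketch using the Transformers library. The repository id is taken from this card, but the prompt wording is an assumption and should be adapted to the instruction template actually used during fine-tuning.

```python
# Minimal inference sketch; the prompt format is an assumption, not the documented template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "khalidrajan/Llama-3.1-8B-Instruct-Legal-NLI"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

premise = "The settlement class includes all persons who purchased the product after January 2020."
hypothesis = "Someone who bought the product in 2019 is a member of the settlement class."

# Assumed instruction-style prompt asking for a single-word NLI label.
prompt = (
    "Determine the relationship between the legal premise and the hypothesis. "
    "Answer with one word: Entailed, Contradicted, or Neutral.\n\n"
    f"Premise: {premise}\nHypothesis: {hypothesis}\nAnswer:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=5, do_sample=False)

# Decode only the newly generated tokens (the predicted label).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True).strip())
```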

Performance

The model achieves strong performance on the evaluation set:

  • Accuracy: 86.1%
  • Macro F1 Score: 85.8%
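
As a hedged sketch, these metrics can be reproduced with scikit-learn once the model's one-word predictions have been collected for the evaluation split; the exact evaluation script is not part of this card, and the lists below are placeholders.

```python
# Metric computation sketch; y_true and y_pred are placeholder labels, not the real eval data.
from sklearn.metrics import accuracy_score, f1_score

labels = ["Entailed", "Contradicted", "Neutral"]
y_true = ["Entailed", "Neutral", "Contradicted"]   # gold labels (placeholder)
y_pred = ["Entailed", "Neutral", "Neutral"]        # model predictions (placeholder)

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Macro F1:", f1_score(y_true, y_pred, labels=labels, average="macro"))
```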

Training Details

The model was trained using the following configuration:

  • LoRA Config:

    • Alpha: 32
    • Rank: 16
    • Dropout: 0.05
    • Target Modules: ['down_proj', 'gate_proj', 'o_proj', 'v_proj', 'up_proj', 'q_proj', 'k_proj']
  • Training Parameters:

    • Learning Rate: 2e-4
    • Epochs: 30
    • Batch Size: 1
    • Gradient Accumulation Steps: 4
    • Max Sequence Length: 512
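
The sketch below shows how this configuration could map onto PEFT and Transformers objects. Only the hyperparameters listed above are documented; the 4-bit quantization settings and the output directory are assumptions.

```python
# QLoRA configuration sketch mirroring the hyperparameters above.
# The 4-bit quantization settings are assumptions typical of QLoRA setups.
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama31-legal-nli-qlora",  # hypothetical output path
    learning_rate=2e-4,
    num_train_epochs=30,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
)

# The base model would be loaded in 4-bit and wrapped with the LoRA adapter, e.g.:
# model = AutoModelForCausalLM.from_pretrained(
#     "meta-llama/Meta-Llama-3.1-8B", quantization_config=bnb_config, device_map="auto")
# model = get_peft_model(model, lora_config)
# The max sequence length of 512 would be passed to the trainer (e.g., trl's SFTTrainer).
```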

Intended Use

This model is designed for:

  • Legal document analysis
  • Understanding relationships between legal statements
  • Automated legal reasoning tasks
  • Legal compliance verification

Limitations

  • Limited to English legal text
  • Performance may vary on legal domains not represented in the training data
  • Should not be used as the sole basis for decisions in legal matters
  • Requires legal expertise for proper interpretation of results