---
language:
- en
license: other
license_name: open-australian-legal-corpus
license_link: https://huggingface.co./datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md
tags:
- law
- legal
- australia
- embeddings
annotations_creators:
- no-annotation
language_creators:
- found
language_details: en-AU, en-GB
pretty_name: Open Australian Legal Embeddings
size_categories:
- 1M<n<10M
source_datasets:
- umarbutler/open-australian-legal-corpus
task_categories:
- text-retrieval
task_ids:
- document-retrieval
viewer: true
dataset_info:
  features:
  - name: version_id
    dtype: string
  - name: type
    dtype: string
  - name: jurisdiction
    dtype: string
  - name: source
    dtype: string
  - name: citation
    dtype: string
  - name: url
    dtype: string
  - name: is_last_chunk
    dtype: bool
  - name: text
    dtype: string
  - name: embedding
    list: float32
  config_name: train
  splits:
  - name: train
    num_bytes: 28500857221
    num_examples: 5208238
  download_size: 45586801753
  dataset_size: 28500857221
---
<!-- To update the above `dataset_info` section, please run the following command: `datasets-cli test open_australian_legal_embeddings.py --save_info --all_configs`. -->

# **Open Australian Legal Embeddings ‍⚖️**
<a href="https://huggingface.co./datasets/umarbutler/open-australian-legal-embeddings" alt="Release"><img src="https://img.shields.io/badge/release-v1.0.0-green"></a>

The Open Australian Legal Embeddings are the first open-source embeddings of Australian legislative and judicial documents.

Generated from the largest open database of Australian law, the [Open Australian Legal Corpus](https://huggingface.co./datasets/umarbutler/open-australian-legal-corpus), the Embeddings consist of roughly 5.2 million 384-dimensional vectors produced by [`BAAI/bge-small-en-v1.5`](https://huggingface.co./BAAI/bge-small-en-v1.5).

The Embeddings open the door to a wide range of possibilities in the field of Australian legal AI, including the development of document classifiers, search engines and chatbots.

To ensure their accessibility to as wide an audience as possible, the Embeddings are distributed under the same licence as the [Open Australian Legal Corpus](https://huggingface.co./datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md).

## Usage 👩‍💻
The code snippet below illustrates how the Embeddings may be loaded and queried via the [Hugging Face Datasets](https://huggingface.co./docs/datasets/index), [Sentence Transformers](https://www.sbert.net/) and [scikit-learn](https://scikit-learn.org/) Python libraries:
```python
import itertools
import sklearn.metrics.pairwise

from datasets import load_dataset
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('BAAI/bge-small-en-v1.5')
instruction = 'Represent this sentence for searching relevant passages: '

oale = load_dataset('umarbutler/open-australian-legal-embeddings', split='train', streaming=True) # Set `streaming` to `False` if you wish to load the entire dataset into memory (unadvised unless you have at least 64 GB of RAM).

# Take the first 100,000 embeddings.
sample = list(itertools.islice(oale, 100000))

# Embed a query.
query = model.encode(instruction + 'Who is the Governor-General of Australia?', normalize_embeddings=True)

# Identify the most similar embedding to the query.
similarities = sklearn.metrics.pairwise.cosine_similarity([query], [chunk['embedding'] for chunk in sample])
most_similar_index = similarities.argmax()
most_similar = sample[most_similar_index]

# Print the most similar text.
print(most_similar['text'])
```

To speed up the loading of the Embeddings, you may wish to install [`orjson`](https://github.com/ijl/orjson).
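It can be installed from PyPI:
```bash
pip install orjson
```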

## Structure 🗂️
The Embeddings are stored in [`data/embeddings.jsonl`](https://huggingface.co./datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/embeddings.jsonl), a JSON Lines file in which each line is a list of 384 32-bit floating-point numbers. Associated metadata is stored in [`data/metadatas.jsonl`](https://huggingface.co./datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/metadatas.jsonl) and the corresponding texts are located in [`data/texts.jsonl`](https://huggingface.co./datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/texts.jsonl).

The metadata fields are the same as those used by the [Open Australian Legal Corpus](https://huggingface.co./datasets/umarbutler/open-australian-legal-corpus#structure-%F0%9F%97%82%EF%B8%8F), barring the `text` field, which was removed, and with the addition of the `is_last_chunk` key, a boolean flag indicating whether a text is the last chunk of its document (used to detect and remove corrupted documents when creating and updating the Embeddings).
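
If you prefer to work with the raw files directly, they can be read in lockstep, assuming the three files are line-aligned (that is, the nth line of each file describes the same chunk) and that each line of `texts.jsonl` is a json-encoded string. A minimal sketch using [`orjson`](https://github.com/ijl/orjson):
```python
import orjson

# Read the three line-aligned files in lockstep (sketch; assumes they have been
# downloaded to a local `data/` directory).
with open('data/embeddings.jsonl', 'rb') as embeddings_file, \
     open('data/metadatas.jsonl', 'rb') as metadatas_file, \
     open('data/texts.jsonl', 'rb') as texts_file:
    for embedding_line, metadata_line, text_line in zip(embeddings_file, metadatas_file, texts_file):
        embedding = orjson.loads(embedding_line) # A list of 384 floats.
        metadata = orjson.loads(metadata_line) # A dict of the metadata fields described above.
        text = orjson.loads(text_line) # The chunk's text.
        break # Inspect just the first chunk for the sake of the example.
```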

## Creation 🧪
All documents in the [Open Australian Legal Corpus](https://huggingface.co./datasets/umarbutler/open-australian-legal-corpus#statistics-%F0%9F%93%8A) were split into semantically meaningful chunks of up to 512 tokens (as determined by [`bge-small-en-v1.5`](https://huggingface.co./BAAI/bge-small-en-v1.5)'s tokeniser) with the [`semchunk`](https://github.com/umarbutler/semchunk) Python library. Each chunk was prefixed with a header recording its document's title, jurisdiction and type in the following format:
```perl
Title: {title}
Jurisdiction: {jurisdiction}
Type: {type}
{text}
```

When inserted into the above header, the names of jurisdictions were capitalised and stripped of hyphens. The `commonwealth` jurisdiction was also renamed to 'Commonwealth of Australia'. In the case of types, `primary_legislation` became 'Act', `secondary_legislation` became 'Regulation', `bill` became 'Bill' and `decision` became 'Judgment'.
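
Roughly, the header construction described above might look like the following sketch (the helper and mapping names are hypothetical, not the actual creator code, and the hyphen-stripping rule is an assumption about how raw jurisdiction names were formatted):
```python
# Hypothetical helper illustrating the header format and renamings described above.
JURISDICTION_RENAMES = {'commonwealth': 'Commonwealth of Australia'}
TYPE_RENAMES = {
    'primary_legislation': 'Act',
    'secondary_legislation': 'Regulation',
    'bill': 'Bill',
    'decision': 'Judgment',
}

def build_header(title: str, jurisdiction: str, type: str) -> str:
    jurisdiction = JURISDICTION_RENAMES.get(jurisdiction, jurisdiction.replace('-', ' ').title())
    type = TYPE_RENAMES.get(type, type)

    return f'Title: {title}\nJurisdiction: {jurisdiction}\nType: {type}\n'
```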

The chunks were then vectorised by [`bge-small-en-v1.5`](https://huggingface.co./BAAI/bge-small-en-v1.5) on a single GeForce RTX 2080 Ti with a batch size of 32 via the [`SentenceTransformers`](https://www.sbert.net/) library.
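
In outline, the vectorisation step would have resembled the sketch below (`chunks` is assumed to be the list of header-prefixed chunk texts, and normalising the embeddings mirrors the query encoding shown in the usage example above rather than being documented here):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('BAAI/bge-small-en-v1.5', device='cuda')

# `chunks` is assumed to be a list of header-prefixed chunk texts.
embeddings = model.encode(chunks, batch_size=32, normalize_embeddings=True, show_progress_bar=True)
```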

The resulting embeddings were serialised as json-encoded lists of floats by [`orjson`](https://github.com/ijl/orjson) and stored in [`data/embeddings.jsonl`](https://huggingface.co./datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/embeddings.jsonl). The corresponding metadata and texts (with their headers removed) were saved to [`data/metadatas.jsonl`](https://huggingface.co./datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/metadatas.jsonl) and [`data/texts.jsonl`](https://huggingface.co./datasets/umarbutler/open-australian-legal-embeddings/blob/main/data/texts.jsonl), respectively.
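
The serialisation might have looked roughly like this (a sketch only; it assumes the embeddings are available as a numpy array and writes one json array of floats per line):
```python
import orjson

# Write one json-encoded embedding per line (sketch; `embeddings` is assumed to
# be a numpy array of shape (num_chunks, 384)).
with open('data/embeddings.jsonl', 'wb') as file:
    for embedding in embeddings:
        file.write(orjson.dumps(embedding, option=orjson.OPT_SERIALIZE_NUMPY) + b'\n')
```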

The code used to create and update the Embeddings may be found [here](https://github.com/umarbutler/open-australian-legal-embeddings-creator).

## Changelog 🔄
All notable changes to the Embeddings are documented in their [Changelog 🔄](https://huggingface.co./datasets/umarbutler/open-australian-legal-embeddings/blob/main/CHANGELOG.md).

This project adheres to [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) and [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## Licence 📜
The Embeddings are distributed under the same licence as the [Open Australian Legal Corpus](https://huggingface.co./datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md).

## Citation 🔖
If you've relied on the Embeddings for your work, please cite:
```latex
@misc{butler-2023-open-australian-legal-embeddings,
    author = {Butler, Umar},
    year = {2023},
    title = {Open Australian Legal Embeddings},
    publisher = {Hugging Face},
    version = {1.0.0},
    doi = {10.57967/hf/1347},
    url = {https://huggingface.co./datasets/umarbutler/open-australian-legal-embeddings}
}
```

## Acknowledgements 🙏
In the spirit of reconciliation, the author acknowledges the Traditional Custodians of Country throughout Australia and their connections to land, sea and community. He pays his respect to their Elders past and present and extends that respect to all Aboriginal and Torres Strait Islander peoples today.

The author thanks the creators of the many Python libraries relied upon in the creation of the Embeddings.

Finally, the author is eternally grateful for the endless support of his wife and her willingness to put up with many a late night spent writing code and quashing bugs.