Polars
Polars is a fast DataFrame library written in Rust with Arrow as its foundation.
💡 Learn more about how to get the dataset URLs in the List Parquet files guide.
Let's start by grabbing the URLs to the train split of the tasksource/blog_authorship_corpus dataset from the dataset viewer API:
import requests
r = requests.get("https://datasets-server.huggingface.co/parquet?dataset=tasksource/blog_authorship_corpus")
j = r.json()
urls = [f['url'] for f in j['parquet_files'] if f['split'] == 'train']
urls
['https://huggingface.co./datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet',
 'https://huggingface.co./datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0001.parquet']
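The same response lists the files for every config and split of the dataset, so if you need more than the train split it can help to index the URLs up front. A small sketch (the config, split, and url fields come from the same /parquet response):

from collections import defaultdict

# Index the Parquet URLs by (config, split) so any split is easy to grab.
splits = defaultdict(list)
for f in j["parquet_files"]:
    splits[(f["config"], f["split"])].append(f["url"])

print(sorted(splits.keys()))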
To read from a single Parquet file, use the read_parquet
function to read it into a DataFrame and then execute your query:
import polars as pl

df = (
    pl.read_parquet("https://huggingface.co./datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet")
    .group_by("sign")
    .agg(
        [
            pl.count(),  # number of blog posts per sign
            pl.col("text").str.len_chars().mean().alias("avg_blog_length")
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
print(df)
shape: (5, 3)
┌───────────┬───────┬─────────────────┐
│ sign      ┆ count ┆ avg_blog_length │
│ ---       ┆ ---   ┆ ---             │
│ str       ┆ u32   ┆ f64             │
╞═══════════╪═══════╪═════════════════╡
│ Cancer    ┆ 38956 ┆ 1206.521203     │
│ Leo       ┆ 35487 ┆ 1180.067377     │
│ Aquarius  ┆ 32723 ┆ 1152.113682     │
│ Virgo     ┆ 36189 ┆ 1117.198209     │
│ Capricorn ┆ 31825 ┆ 1102.397361     │
└───────────┴───────┴─────────────────┘
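The query above only touches the sign and text columns, so you can ask read_parquet to materialize just those two via its columns parameter. A minimal sketch (depending on how Polars fetches the remote file, this may or may not avoid downloading the unused columns):

import polars as pl

url = "https://huggingface.co./datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet"

# Read only the two columns the aggregation needs.
df = pl.read_parquet(url, columns=["sign", "text"])
print(df.columns)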
To read multiple Parquet files - for example, if the dataset is sharded - you'll need to use the concat function to concatenate the files into a single DataFrame:
import polars as pl

df = (
    pl.concat([pl.read_parquet(url) for url in urls])
    .group_by("sign")
    .agg(
        [
            pl.count(),
            pl.col("text").str.len_chars().mean().alias("avg_blog_length")
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
print(df)
shape: (5, 3)
┌──────────┬───────┬─────────────────┐
│ sign     ┆ count ┆ avg_blog_length │
│ ---      ┆ ---   ┆ ---             │
│ str      ┆ u32   ┆ f64             │
╞══════════╪═══════╪═════════════════╡
│ Aquarius ┆ 49687 ┆ 1191.417212     │
│ Leo      ┆ 53811 ┆ 1183.878222     │
│ Cancer   ┆ 65048 ┆ 1158.969161     │
│ Gemini   ┆ 51985 ┆ 1156.069308     │
│ Virgo    ┆ 60399 ┆ 1140.958443     │
└──────────┴───────┴─────────────────┘
Lazy API
Polars offers a lazy API that is more performant and memory-efficient for large Parquet files. The LazyFrame API keeps track of what you want to do, and it'll only execute the entire query when you're ready. This way, the lazy API doesn't load everything into RAM beforehand, and it allows you to work with datasets larger than your available RAM.
To lazily read a Parquet file, use the scan_parquet
function instead. Then, execute the entire query with the collect
function:
import polars as pl

q = (
    pl.scan_parquet("https://huggingface.co./datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet")
    .group_by("sign")
    .agg(
        [
            pl.count(),
            pl.col("text").str.len_chars().mean().alias("avg_blog_length")
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
df = q.collect()
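The lazy API also composes with sharded datasets: pl.concat accepts LazyFrames as well, so you can scan every shard and still defer all work until collect. A sketch mirroring the eager multi-file example above, assuming the urls list from the first step:

import polars as pl

q = (
    # Scan (not read) each shard, then concatenate the lazy frames.
    pl.concat([pl.scan_parquet(url) for url in urls])
    .group_by("sign")
    .agg(
        [
            pl.count(),
            pl.col("text").str.len_chars().mean().alias("avg_blog_length")
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
df = q.collect()
print(df)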