Schema of the rows below (column name and inferred type):

| column | type |
|---|---|
| url | stringlengths (61–61) |
| repository_url | stringclasses (1 value) |
| labels_url | stringlengths (75–75) |
| comments_url | stringlengths (70–70) |
| events_url | stringlengths (68–68) |
| html_url | stringlengths (49–51) |
| id | int64 (1.14B–1.87B) |
| node_id | stringlengths (18–19) |
| number | int64 (3.74k–6.19k) |
| title | stringlengths (1–290) |
| user | dict |
| labels | list |
| state | stringclasses (2 values) |
| locked | bool (1 class) |
| assignee | dict |
| assignees | list |
| milestone | dict |
| comments | sequence |
| created_at | timestamp[s] |
| updated_at | timestamp[s] |
| closed_at | timestamp[s] |
| author_association | stringclasses (3 values) |
| active_lock_reason | null |
| body | stringlengths (2–33.9k) |
| reactions | dict |
| timeline_url | stringlengths (70–70) |
| performed_via_github_app | null |
| state_reason | stringclasses (3 values) |
| draft | bool (2 classes) |
| pull_request | dict |
| is_pull_request | bool (2 classes) |
https://api.github.com/repos/huggingface/datasets/issues/6193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6193/comments
https://api.github.com/repos/huggingface/datasets/issues/6193/events
https://github.com/huggingface/datasets/issues/6193
1,872,285,153
I_kwDODunzps5vmM3h
6,193
Dataset loading script method does not work with .pyc file
{ "login": "riteshkumarumassedu", "id": 43389071, "node_id": "MDQ6VXNlcjQzMzg5MDcx", "avatar_url": "https://avatars.githubusercontent.com/u/43389071?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riteshkumarumassedu", "html_url": "https://github.com/riteshkumarumassedu", "followers_url": "https://api.github.com/users/riteshkumarumassedu/followers", "following_url": "https://api.github.com/users/riteshkumarumassedu/following{/other_user}", "gists_url": "https://api.github.com/users/riteshkumarumassedu/gists{/gist_id}", "starred_url": "https://api.github.com/users/riteshkumarumassedu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riteshkumarumassedu/subscriptions", "organizations_url": "https://api.github.com/users/riteshkumarumassedu/orgs", "repos_url": "https://api.github.com/users/riteshkumarumassedu/repos", "events_url": "https://api.github.com/users/riteshkumarumassedu/events{/privacy}", "received_events_url": "https://api.github.com/users/riteshkumarumassedu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-29T19:35:06
2023-08-29T19:35:06
null
NONE
null
### Describe the bug The Hugging Face `datasets` library specifically looks for a `.py` file when loading a dataset via the loading-script approach; it does not work with a `.pyc` file. This becomes an issue when deploying in production environments where we are restricted to using only `.pyc` files. Is there any workaround for this? ### Steps to reproduce the bug 1. Create a dataset loading script to read the custom data. 2. Compile the code to make sure a `.pyc` file is created. 3. Delete the loading script and re-run the code. Normally, Python would make use of the compiled `.pyc` file; in this case, however, the library errors out with a message that it is unable to find the loading script. ### Expected behavior The code should make use of the `.pyc` file and run without any error. ### Environment info NA
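For context, a minimal sketch of the loading-script call path the report refers to (the directory and file names are hypothetical):

```python
from datasets import load_dataset

# Hypothetical layout: my_dataset/my_dataset.py defines the loading script.
# `datasets` resolves the directory to a .py script of the same name, so if
# only the compiled my_dataset/my_dataset.pyc is left on disk, loading fails
# with "Couldn't find a dataset script at .../my_dataset.py".
ds = load_dataset("path/to/my_dataset")
```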
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6193/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6192/comments
https://api.github.com/repos/huggingface/datasets/issues/6192/events
https://github.com/huggingface/datasets/pull/6192
1,871,911,640
PR_kwDODunzps5ZDGnI
6,192
Set minimal fsspec version requirement to 2023.1.0
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005972 / 0.011353 (-0.005381) | 0.003636 / 0.011008 (-0.007372) | 0.080254 / 0.038508 (0.041746) | 0.059564 / 0.023109 (0.036455) | 0.310615 / 0.275898 (0.034717) | 0.359307 / 0.323480 (0.035827) | 0.003408 / 0.007986 (-0.004578) | 0.002941 / 0.004328 (-0.001388) | 0.063699 / 0.004250 (0.059449) | 0.046072 / 0.037052 (0.009020) | 0.318670 / 0.258489 (0.060181) | 0.369677 / 0.293841 (0.075836) | 0.026995 / 0.128546 (-0.101552) | 0.007954 / 0.075646 (-0.067693) | 0.261667 / 0.419271 (-0.157604) | 0.045167 / 0.043533 (0.001634) | 0.314276 / 0.255139 (0.059137) | 0.348871 / 0.283200 (0.065672) | 0.021748 / 0.141683 (-0.119935) | 1.438598 / 1.452155 (-0.013557) | 1.530119 / 1.492716 (0.037403) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196894 / 0.018006 (0.178888) | 0.445757 / 0.000490 (0.445267) | 0.002842 / 0.000200 (0.002642) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024923 / 0.037411 (-0.012488) | 0.075186 / 0.014526 (0.060661) | 0.087193 / 0.176557 (-0.089364) | 0.147496 / 0.737135 (-0.589639) | 0.087083 / 0.296338 (-0.209255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.423545 / 0.215209 (0.208336) | 4.187927 / 2.077655 (2.110273) | 2.008656 / 1.504120 (0.504536) | 1.791313 / 1.541195 (0.250119) | 1.849836 / 1.468490 
(0.381346) | 0.499458 / 4.584777 (-4.085318) | 2.983206 / 3.745712 (-0.762506) | 2.801005 / 5.269862 (-2.468856) | 1.886207 / 4.565676 (-2.679469) | 0.057343 / 0.424275 (-0.366932) | 0.006666 / 0.007607 (-0.000941) | 0.483948 / 0.226044 (0.257904) | 4.874818 / 2.268929 (2.605890) | 2.439393 / 55.444624 (-53.005231) | 2.049861 / 6.876477 (-4.826616) | 2.217050 / 2.142072 (0.074977) | 0.589760 / 4.805227 (-4.215467) | 0.125298 / 6.500664 (-6.375366) | 0.061123 / 0.075469 (-0.014347) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.234721 / 1.841788 (-0.607067) | 18.193756 / 8.074308 (10.119448) | 13.682835 / 10.191392 (3.491443) | 0.129345 / 0.680424 (-0.551078) | 0.016589 / 0.534201 (-0.517612) | 0.332355 / 0.579283 (-0.246928) | 0.358408 / 0.434364 (-0.075955) | 0.382044 / 0.540337 (-0.158293) | 0.535403 / 1.386936 (-0.851533) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006193 / 0.011353 (-0.005160) | 0.003674 / 0.011008 (-0.007335) | 0.062481 / 0.038508 (0.023973) | 0.062096 / 0.023109 (0.038987) | 0.449592 / 0.275898 (0.173694) | 0.479245 / 0.323480 (0.155765) | 0.004793 / 0.007986 (-0.003193) | 0.002896 / 0.004328 (-0.001433) | 0.062887 / 0.004250 (0.058636) | 0.050049 / 0.037052 (0.012997) | 0.454940 / 0.258489 (0.196451) | 0.486115 / 0.293841 (0.192274) | 0.028585 / 0.128546 (-0.099961) | 0.007954 / 0.075646 (-0.067692) | 0.067744 / 0.419271 (-0.351528) | 0.040473 / 0.043533 (-0.003060) | 0.448408 / 0.255139 (0.193269) | 0.472423 / 0.283200 (0.189223) | 0.020549 / 0.141683 (-0.121133) | 1.563618 / 1.452155 (0.111463) | 1.520149 / 1.492716 (0.027432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226604 / 0.018006 (0.208598) | 0.417615 / 0.000490 (0.417126) | 0.003386 / 0.000200 (0.003186) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027264 / 0.037411 (-0.010147) | 0.081709 / 0.014526 (0.067184) | 0.091793 / 0.176557 (-0.084763) | 0.145559 / 0.737135 (-0.591576) | 0.091869 / 0.296338 (-0.204469) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462917 / 0.215209 (0.247708) | 4.629512 / 2.077655 (2.551857) | 2.555715 / 1.504120 (1.051595) | 2.388064 / 1.541195 (0.846870) | 2.458320 / 1.468490 (0.989830) | 0.511615 / 4.584777 (-4.073162) | 3.124566 / 3.745712 (-0.621146) | 2.839190 / 5.269862 (-2.430672) | 1.894551 / 4.565676 (-2.671126) | 0.059565 / 0.424275 (-0.364710) | 0.006481 / 0.007607 (-0.001126) | 0.532023 / 0.226044 (0.305979) | 5.361507 / 2.268929 (3.092579) | 2.982594 / 55.444624 (-52.462031) | 2.644870 / 6.876477 (-4.231606) | 2.831476 / 2.142072 (0.689404) | 0.607381 / 4.805227 (-4.197846) | 0.126067 / 6.500664 (-6.374597) | 0.062130 / 0.075469 (-0.013339) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350442 / 1.841788 (-0.491345) | 18.829553 / 8.074308 (10.755245) | 14.796701 / 10.191392 (4.605309) | 0.145393 / 0.680424 (-0.535031) | 0.018218 / 0.534201 (-0.515983) | 0.335500 / 0.579283 (-0.243783) | 0.359190 / 0.434364 (-0.075174) | 0.388377 / 0.540337 (-0.151960) | 0.534994 / 1.386936 (-0.851942) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ff7629eb72f499d841d64aa03f97e0b1707d1cc7 \"CML watermark\")\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6192). All of your documentation changes will be reflected on that endpoint." ]
2023-08-29T15:23:41
2023-08-29T15:31:58
null
CONTRIBUTOR
null
Fix https://github.com/huggingface/datasets/issues/6141. Colab installs fsspec 2023.6.0, so we should be good 🙂
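A quick sanity check of the new floor, as a sketch (assuming `packaging` is available in the environment):

```python
import fsspec
from packaging import version

# Fail early if the installed fsspec predates the new minimum from this PR.
assert version.parse(fsspec.__version__) >= version.parse("2023.1.0"), fsspec.__version__
```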
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6192/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6192", "html_url": "https://github.com/huggingface/datasets/pull/6192", "diff_url": "https://github.com/huggingface/datasets/pull/6192.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6192.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6191
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6191/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6191/comments
https://api.github.com/repos/huggingface/datasets/issues/6191/events
https://github.com/huggingface/datasets/pull/6191
1,871,634,840
PR_kwDODunzps5ZCKmv
6,191
Add missing `revision` argument
{ "login": "qgallouedec", "id": 45557362, "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qgallouedec", "html_url": "https://github.com/qgallouedec", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "repos_url": "https://api.github.com/users/qgallouedec/repos", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6191). All of your documentation changes will be reflected on that endpoint." ]
2023-08-29T13:05:04
2023-08-29T13:30:30
null
CONTRIBUTOR
null
I've noticed that when you're not working on the main branch, there are sometimes errors in the files returned. After some investigation, I realized that the revision was not properly passed everywhere. This PR proposes a fix.
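A sketch of the affected call pattern (the repo and branch names are hypothetical): every underlying Hub file request made for this call should carry the same `revision`; when it is dropped somewhere along the way, files from the default branch come back instead.

```python
from datasets import load_dataset

# Hypothetical repo/branch: all files should be fetched from the "dev"
# revision, not just the ones resolved by the first request.
ds = load_dataset("user/my_dataset", revision="dev")
```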
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6191/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6191", "html_url": "https://github.com/huggingface/datasets/pull/6191", "diff_url": "https://github.com/huggingface/datasets/pull/6191.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6191.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6190/comments
https://api.github.com/repos/huggingface/datasets/issues/6190/events
https://github.com/huggingface/datasets/issues/6190
1,871,582,175
I_kwDODunzps5vjhPf
6,190
`Invalid user token` even when correct user token is passed!
{ "login": "Vaibhavs10", "id": 18682411, "node_id": "MDQ6VXNlcjE4NjgyNDEx", "avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Vaibhavs10", "html_url": "https://github.com/Vaibhavs10", "followers_url": "https://api.github.com/users/Vaibhavs10/followers", "following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}", "gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}", "starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions", "organizations_url": "https://api.github.com/users/Vaibhavs10/orgs", "repos_url": "https://api.github.com/users/Vaibhavs10/repos", "events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}", "received_events_url": "https://api.github.com/users/Vaibhavs10/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This is because `download_config.use_auth_token` is deprecated - you should use `download_config.token` instead", "Works! Thanks for the quick fix! <3" ]
2023-08-29T12:37:03
2023-08-29T13:01:10
2023-08-29T13:01:09
MEMBER
null
### Describe the bug I'm working on a dataset which comprises other datasets on the Hub. URL: https://huggingface.co./datasets/open-asr-leaderboard/datasets-test-only Note: Some of the sub-datasets in this meta-dataset require explicit access. All the other datasets work fine, except `common_voice`. ### Steps to reproduce the bug https://github.com/Vaibhavs10/scratchpad/blob/main/cv_datasets_bug_repro.ipynb ### Expected behavior It should work if the provided access token is valid (as it does for all the other datasets). ### Environment info datasets version -> 2.14.4
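A sketch of the fix pointed out in the comment above, using the non-deprecated field (the token value and dataset id are placeholders):

```python
from datasets import DownloadConfig, load_dataset

# Per the comment above: set `token` on DownloadConfig rather than the
# deprecated `use_auth_token`.
download_config = DownloadConfig(token="hf_xxx")  # placeholder token
ds = load_dataset("gated/dataset-id", download_config=download_config)  # placeholder id
```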
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6190/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6189
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6189/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6189/comments
https://api.github.com/repos/huggingface/datasets/issues/6189/events
https://github.com/huggingface/datasets/pull/6189
1,871,569,855
PR_kwDODunzps5ZB8Z9
6,189
Don't alter input in Features.from_dict
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006166 / 0.011353 (-0.005187) | 0.003643 / 0.011008 (-0.007365) | 0.080966 / 0.038508 (0.042458) | 0.060538 / 0.023109 (0.037429) | 0.309205 / 0.275898 (0.033307) | 0.351007 / 0.323480 (0.027527) | 0.003592 / 0.007986 (-0.004393) | 0.002880 / 0.004328 (-0.001448) | 0.062957 / 0.004250 (0.058707) | 0.049015 / 0.037052 (0.011963) | 0.309436 / 0.258489 (0.050947) | 0.362695 / 0.293841 (0.068854) | 0.027818 / 0.128546 (-0.100728) | 0.008030 / 0.075646 (-0.067616) | 0.262678 / 0.419271 (-0.156594) | 0.046024 / 0.043533 (0.002491) | 0.316246 / 0.255139 (0.061107) | 0.337454 / 0.283200 (0.054254) | 0.022529 / 0.141683 (-0.119154) | 1.432492 / 1.452155 (-0.019662) | 1.499646 / 1.492716 (0.006929) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190931 / 0.018006 (0.172925) | 0.428053 / 0.000490 (0.427564) | 0.002839 / 0.000200 (0.002639) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024042 / 0.037411 (-0.013370) | 0.073952 / 0.014526 (0.059426) | 0.905973 / 0.176557 (0.729417) | 0.177767 / 0.737135 (-0.559368) | 0.125779 / 0.296338 (-0.170559) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398997 / 0.215209 (0.183788) | 3.959575 / 2.077655 (1.881920) | 
1.907038 / 1.504120 (0.402918) | 1.732908 / 1.541195 (0.191713) | 1.757038 / 1.468490 (0.288548) | 0.495917 / 4.584777 (-4.088860) | 3.021437 / 3.745712 (-0.724275) | 2.793960 / 5.269862 (-2.475901) | 1.827753 / 4.565676 (-2.737923) | 0.057143 / 0.424275 (-0.367132) | 0.006583 / 0.007607 (-0.001024) | 0.469402 / 0.226044 (0.243357) | 4.685623 / 2.268929 (2.416695) | 2.325200 / 55.444624 (-53.119424) | 1.985559 / 6.876477 (-4.890918) | 2.151208 / 2.142072 (0.009136) | 0.589498 / 4.805227 (-4.215730) | 0.125433 / 6.500664 (-6.375231) | 0.060834 / 0.075469 (-0.014636) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228217 / 1.841788 (-0.613571) | 18.076089 / 8.074308 (10.001780) | 13.814460 / 10.191392 (3.623068) | 0.144674 / 0.680424 (-0.535750) | 0.016749 / 0.534201 (-0.517452) | 0.332839 / 0.579283 (-0.246444) | 0.357211 / 0.434364 (-0.077153) | 0.380367 / 0.540337 (-0.159971) | 0.531177 / 1.386936 (-0.855759) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006006 / 0.011353 (-0.005347) | 0.003552 / 0.011008 (-0.007456) | 0.061822 / 0.038508 (0.023313) | 0.057724 / 0.023109 (0.034615) | 0.462326 / 0.275898 (0.186428) | 0.492842 / 0.323480 (0.169362) | 0.004833 / 0.007986 (-0.003152) | 0.002847 / 0.004328 (-0.001481) | 0.062278 / 0.004250 (0.058028) | 0.046754 / 0.037052 (0.009702) | 0.464185 / 0.258489 (0.205696) | 0.496416 / 0.293841 (0.202576) | 0.028949 / 0.128546 (-0.099597) | 0.008038 / 0.075646 (-0.067608) | 0.067572 / 0.419271 (-0.351700) | 0.041176 / 0.043533 (-0.002356) | 0.460047 / 0.255139 (0.204908) | 0.482728 / 0.283200 (0.199528) | 0.020047 / 0.141683 (-0.121635) | 1.455958 / 1.452155 (0.003804) | 1.525730 / 1.492716 (0.033014) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283643 / 0.018006 (0.265637) | 0.443046 / 0.000490 (0.442556) | 0.041019 / 0.000200 (0.040819) | 0.000340 / 0.000054 (0.000286) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026229 / 0.037411 (-0.011182) | 0.081498 / 0.014526 (0.066972) | 0.091412 / 0.176557 (-0.085145) | 0.146621 / 0.737135 (-0.590514) | 0.092113 / 0.296338 (-0.204225) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463525 / 0.215209 (0.248315) | 4.629852 / 2.077655 (2.552198) | 2.564831 / 1.504120 (1.060711) | 2.386976 / 1.541195 (0.845781) | 2.457757 / 1.468490 (0.989266) | 0.507317 / 4.584777 (-4.077460) | 3.142418 / 3.745712 (-0.603294) | 2.851642 / 5.269862 (-2.418219) | 1.894444 / 4.565676 (-2.671233) | 0.058495 / 0.424275 (-0.365780) | 0.006453 / 0.007607 (-0.001154) | 0.545363 / 0.226044 (0.319319) | 5.448092 / 2.268929 (3.179164) | 2.996328 / 55.444624 (-52.448296) | 2.664666 / 6.876477 (-4.211811) | 2.832247 / 2.142072 (0.690174) | 0.597631 / 4.805227 (-4.207596) | 0.126101 / 6.500664 (-6.374563) | 0.062573 / 0.075469 (-0.012896) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.366502 / 1.841788 (-0.475286) | 18.872990 / 8.074308 (10.798682) | 14.892114 / 10.191392 (4.700722) | 0.146668 / 0.680424 (-0.533756) | 0.017876 / 0.534201 (-0.516325) | 0.338490 / 0.579283 (-0.240793) | 0.357471 / 0.434364 (-0.076893) | 0.398730 / 0.540337 (-0.141608) | 0.542464 / 1.386936 (-0.844472) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a6ff3e846d86814fa6962326e9346a4f1f1e8a80 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009132 / 0.011353 (-0.002221) | 0.005796 / 0.011008 (-0.005212) | 0.119495 / 0.038508 (0.080987) | 0.081708 / 0.023109 (0.058599) | 0.432940 / 0.275898 (0.157042) | 0.466793 / 0.323480 (0.143313) | 0.006464 / 0.007986 (-0.001521) | 0.004308 / 0.004328 (-0.000021) | 0.086344 / 0.004250 (0.082093) | 0.065987 / 0.037052 (0.028935) | 0.445213 / 0.258489 (0.186724) | 0.482405 / 0.293841 (0.188564) | 0.053553 / 0.128546 (-0.074993) | 0.015320 / 0.075646 (-0.060326) | 0.455669 / 0.419271 (0.036397) | 0.071619 / 0.043533 (0.028086) | 0.434843 / 0.255139 (0.179704) | 0.503224 / 0.283200 (0.220025) | 0.038280 / 0.141683 (-0.103403) | 1.901877 / 1.452155 (0.449722) | 2.040406 / 1.492716 (0.547690) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268275 / 0.018006 (0.250269) | 0.622795 / 0.000490 (0.622305) | 0.004572 / 0.000200 (0.004372) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032514 / 0.037411 (-0.004898) | 0.100619 / 0.014526 (0.086093) | 0.118407 / 0.176557 (-0.058149) | 0.190311 / 0.737135 (-0.546824) | 0.117160 / 0.296338 (-0.179178) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629836 / 0.215209 (0.414627) | 6.236124 / 2.077655 (4.158470) | 2.750775 / 1.504120 (1.246655) | 2.380111 / 1.541195 (0.838916) | 2.487279 / 1.468490 (1.018789) | 0.849568 / 4.584777 (-3.735209) | 5.571308 / 3.745712 (1.825596) | 4.934114 / 5.269862 (-0.335747) | 3.205478 / 4.565676 (-1.360198) | 0.104804 / 0.424275 (-0.319471) | 0.009856 / 0.007607 (0.002248) | 0.753352 / 0.226044 (0.527308) | 7.523482 / 2.268929 (5.254554) | 3.660088 / 55.444624 (-51.784537) | 2.726493 / 6.876477 (-4.149984) | 3.011344 / 2.142072 (0.869271) | 1.093410 / 4.805227 (-3.711817) | 0.229758 / 6.500664 (-6.270906) | 0.081516 / 0.075469 (0.006047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.700199 / 1.841788 (-0.141588) | 25.238736 / 8.074308 (17.164428) | 23.188131 / 10.191392 (12.996739) | 0.257862 / 0.680424 (-0.422562) | 0.028885 / 0.534201 (-0.505316) | 0.510693 / 0.579283 (-0.068590) | 0.648474 / 0.434364 (0.214110) | 0.576314 / 0.540337 
(0.035976) | 0.800606 / 1.386936 (-0.586330) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009426 / 0.011353 (-0.001927) | 0.006205 / 0.011008 (-0.004803) | 0.083947 / 0.038508 (0.045438) | 0.089164 / 0.023109 (0.066055) | 0.540500 / 0.275898 (0.264602) | 0.578825 / 0.323480 (0.255345) | 0.006792 / 0.007986 (-0.001194) | 0.005125 / 0.004328 (0.000797) | 0.083284 / 0.004250 (0.079034) | 0.067539 / 0.037052 (0.030487) | 0.544330 / 0.258489 (0.285841) | 0.593836 / 0.293841 (0.299995) | 0.050647 / 0.128546 (-0.077899) | 0.014688 / 0.075646 (-0.060959) | 0.095977 / 0.419271 (-0.323295) | 0.062326 / 0.043533 (0.018793) | 0.536096 / 0.255139 (0.280957) | 0.578691 / 0.283200 (0.295492) | 0.035488 / 0.141683 (-0.106194) | 1.911145 / 1.452155 (0.458990) | 1.977647 / 1.492716 (0.484931) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.368365 / 0.018006 (0.350359) | 0.609836 / 0.000490 (0.609346) | 0.054720 / 0.000200 (0.054520) | 0.000465 / 0.000054 (0.000411) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036057 / 0.037411 (-0.001355) | 0.126434 / 0.014526 (0.111908) | 0.124740 / 0.176557 (-0.051817) | 0.198907 / 0.737135 (-0.538228) | 0.138201 / 0.296338 (-0.158137) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.684814 / 0.215209 (0.469605) | 6.738182 / 2.077655 (4.660527) | 3.231054 / 1.504120 (1.726934) | 2.889550 / 1.541195 (1.348355) | 2.933985 / 
1.468490 (1.465495) | 0.867176 / 4.584777 (-3.717601) | 5.465475 / 3.745712 (1.719763) | 4.928370 / 5.269862 (-0.341492) | 3.126382 / 4.565676 (-1.439294) | 0.129673 / 0.424275 (-0.294603) | 0.009755 / 0.007607 (0.002148) | 0.797860 / 0.226044 (0.571816) | 8.003178 / 2.268929 (5.734250) | 4.081658 / 55.444624 (-51.362966) | 3.303837 / 6.876477 (-3.572640) | 3.574577 / 2.142072 (1.432505) | 1.064674 / 4.805227 (-3.740554) | 0.232894 / 6.500664 (-6.267770) | 0.082298 / 0.075469 (0.006829) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.858701 / 1.841788 (0.016913) | 25.839794 / 8.074308 (17.765485) | 24.291425 / 10.191392 (14.100033) | 0.250181 / 0.680424 (-0.430243) | 0.034479 / 0.534201 (-0.499722) | 0.540754 / 0.579283 (-0.038529) | 0.615996 / 0.434364 (0.181632) | 0.631499 / 0.540337 (0.091161) | 0.838719 / 1.386936 (-0.548217) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0b6bb2f0e7a460d4ed04855eafe1184a7ce7c09c \"CML watermark\")\n" ]
2023-08-29T12:29:47
2023-08-29T13:04:59
2023-08-29T12:52:48
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6189/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6189", "html_url": "https://github.com/huggingface/datasets/pull/6189", "diff_url": "https://github.com/huggingface/datasets/pull/6189.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6189.patch", "merged_at": "2023-08-29T12:52:48" }
true
https://api.github.com/repos/huggingface/datasets/issues/6188
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6188/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6188/comments
https://api.github.com/repos/huggingface/datasets/issues/6188/events
https://github.com/huggingface/datasets/issues/6188
1,870,987,640
I_kwDODunzps5vhQF4
6,188
[Feature Request] Check the length of batch before writing so that empty batch is allowed
{ "login": "namespace-Pt", "id": 61188463, "node_id": "MDQ6VXNlcjYxMTg4NDYz", "avatar_url": "https://avatars.githubusercontent.com/u/61188463?v=4", "gravatar_id": "", "url": "https://api.github.com/users/namespace-Pt", "html_url": "https://github.com/namespace-Pt", "followers_url": "https://api.github.com/users/namespace-Pt/followers", "following_url": "https://api.github.com/users/namespace-Pt/following{/other_user}", "gists_url": "https://api.github.com/users/namespace-Pt/gists{/gist_id}", "starred_url": "https://api.github.com/users/namespace-Pt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/namespace-Pt/subscriptions", "organizations_url": "https://api.github.com/users/namespace-Pt/orgs", "repos_url": "https://api.github.com/users/namespace-Pt/repos", "events_url": "https://api.github.com/users/namespace-Pt/events{/privacy}", "received_events_url": "https://api.github.com/users/namespace-Pt/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-29T06:37:34
2023-08-29T06:37:34
null
NONE
null
### Use Case I use `dataset.map(process_fn, batched=True)` to process the dataset, with data **augmentations or filtering**. However, when all examples within a batch are filtered out, i.e. **an empty batch is returned**, the following error is thrown: ``` ValueError: Schema and number of arrays unequal ``` This is because the empty batch does not comply with the schema of the other batches. I think an empty batch should be allowed to facilitate coding (one should not need to assign an empty list manually for all keys). A simple fix is to check the length of `batch` before writing: ``` if len(batch): writer.write_batch(batch) ``` instead of https://github.com/huggingface/datasets/blob/74d60213dcbd7c99484c62ce1d3dfd90a1df0770/src/datasets/arrow_dataset.py#L3493
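Until such a check lands, a workaround sketch in user code (the filtering predicate is illustrative): return every key with a possibly empty list, so the batch always matches the schema.

```python
def process_fn(batch):
    # Illustrative predicate: drop examples with empty text.
    keep = [i for i, text in enumerate(batch["text"]) if text]
    # Return every key, even when nothing survives the filter, so the
    # written batch always matches the schema of the other batches.
    return {key: [values[i] for i in keep] for key, values in batch.items()}
```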
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6188/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6187
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6187/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6187/comments
https://api.github.com/repos/huggingface/datasets/issues/6187/events
https://github.com/huggingface/datasets/issues/6187
1,870,936,143
I_kwDODunzps5vhDhP
6,187
Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory
{ "login": "andysingal", "id": 20493493, "node_id": "MDQ6VXNlcjIwNDkzNDkz", "avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andysingal", "html_url": "https://github.com/andysingal", "followers_url": "https://api.github.com/users/andysingal/followers", "following_url": "https://api.github.com/users/andysingal/following{/other_user}", "gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}", "starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andysingal/subscriptions", "organizations_url": "https://api.github.com/users/andysingal/orgs", "repos_url": "https://api.github.com/users/andysingal/repos", "events_url": "https://api.github.com/users/andysingal/events{/privacy}", "received_events_url": "https://api.github.com/users/andysingal/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! You can load this dataset with:\r\n```python\r\ndata_files = {\r\n \"train\": \"/content/PUBHEALTH/train.tsv\",\r\n \"validation\": \"/content/PUBHEALTH/dev.tsv\",\r\n \"test\": \"/content/PUBHEALTH/test.tsv\",\r\n}\r\n\r\ntsv_datasets_reloaded = load_dataset(\"csv\", data_files=data_files, sep=\"\\t\")\r\n```\r\n\r\nTo support your `load_dataset` call, defining aliases for the packaged builders, as suggested in https://github.com/huggingface/datasets/issues/5625, must be implemented. We can consider adding this feature if more people request it.\r\n \r\n(Also answered on the Discord [here](https://discord.com/channels/879548962464493619/1145956791134470224/1146071491260186744))" ]
2023-08-29T05:49:56
2023-08-29T16:21:45
null
NONE
null
### Describe the bug Loading local TSV files with `load_dataset("tsv", data_files=...)` raises `FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory`, because `tsv` is neither a local loading script nor a dataset on the Hub (full traceback under the steps below). ### Steps to reproduce the bug ``` data_files = { "train": "/content/PUBHEALTH/train.tsv", "validation": "/content/PUBHEALTH/dev.tsv", "test": "/content/PUBHEALTH/test.tsv", } tsv_datasets_reloaded = load_dataset("tsv", data_files=data_files) tsv_datasets_reloaded ``` ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-48-6a7b3e847019> in <cell line: 7>() 5 } 6 ----> 7 csv_datasets_reloaded = load_dataset("tsv", data_files=data_files) 8 csv_datasets_reloaded 2 frames /usr/local/lib/python3.10/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1489 raise e1 from None 1490 if isinstance(e1, FileNotFoundError): -> 1491 raise FileNotFoundError( 1492 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1493 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Dataset 'tsv' doesn't exist on the Hub ``` ### Expected behavior load the data, push to hub ### Environment info jupyter notebook RTX 3090
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6187/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6186
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6186/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6186/comments
https://api.github.com/repos/huggingface/datasets/issues/6186/events
https://github.com/huggingface/datasets/issues/6186
1,869,431,457
I_kwDODunzps5vbUKh
6,186
Feature request: add code example of multi-GPU processing
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" }, { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "That'd be a great idea! @mariosasko or @lhoestq, would it be possible to fix the code snippet or do you have another suggested way for doing this?" ]
2023-08-28T10:00:59
2023-08-29T17:39:03
null
CONTRIBUTOR
null
### Feature request Would be great to add a code example of how to do multi-GPU processing with 🤗 Datasets in the documentation. cc @stevhliu Currently the docs has a small [section](https://huggingface.co./docs/datasets/v2.3.2/en/process#map) on this saying "your big GPU call goes here", however it didn't work for me out-of-the-box. Let's say you have a PyTorch model that can do translation, and you have multiple GPUs. In that case, you'd like to duplicate the model on each GPU, each processing (translating) a chunk of the data in parallel. Here's how I tried to do that: ``` from datasets import load_dataset from transformers import AutoModelForSeq2SeqLM, AutoTokenizer from multiprocess import set_start_method import torch import os dataset = load_dataset("mlfoundations/datacomp_small") tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M") model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M") # put model on each available GPU # also, should I do it like this or use nn.DataParallel? model.to("cuda:0") model.to("cuda:1") set_start_method("spawn") def translate_captions(batch, rank): os.environ["CUDA_VISIBLE_DEVICES"] = str(rank % torch.cuda.device_count()) texts = batch["text"] inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(model.device) translated_tokens = model.generate( **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"], max_length=30 ) translated_texts = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True) batch["translated_text"] = translated_texts return batch updated_dataset = dataset.map(translate_captions, with_rank=True, num_proc=2, batched=True, batch_size=256) ``` I've personally tried running this script on a machine with 2 A100 GPUs. ## Error 1 Running the code snippet above from the terminal (python script.py) resulted in the following error: ``` Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 116, in spawn_main exitcode = _main(fd, parent_sentinel) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 125, in _main prepare(preparation_data) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 236, in prepare _fixup_main_from_path(data['init_main_from_path']) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 287, in _fixup_main_from_path main_content = runpy.run_path(main_path, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 289, in run_path return _run_module_code(code, init_globals, run_name, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 96, in _run_module_code _run_code(code, mod_globals, init_globals, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/niels/python_projects/datacomp/datasets_multi_gpu.py", line 16, in <module> set_start_method("spawn") File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 247, in set_start_method raise RuntimeError('context has already been set') RuntimeError: context has already been set ``` ## Error 2 Then, based on [this Stackoverflow answer](https://stackoverflow.com/a/71616344/7762882), I put the `set_start_method("spawn")` section in a try: catch block. 
This resulted in the following error: ``` File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/dataset_dict.py", line 817, in <dictcomp> k: dataset.map( File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2926, in map with Pool(nb_of_missing_shards, initargs=initargs, initializer=initializer) as pool: File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 119, in Pool return Pool(processes, initializer, initargs, maxtasksperchild, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 215, in __init__ self._repopulate_pool() File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 306, in _repopulate_pool return self._repopulate_pool_static(self._ctx, self.Process, File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 329, in _repopulate_pool_static w.start() File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/process.py", line 121, in start self._popen = self._Popen(self) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 288, in _Popen return Popen(process_obj) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_spawn_posix.py", line 42, in _launch prep_data = spawn.get_preparation_data(process_obj._name) File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 154, in get_preparation_data _check_not_importing_main() File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 134, in _check_not_importing_main raise RuntimeError(''' RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. ``` So then I put the last line under a `if __name__ == '__main__':` block. Then the code snippet seemed to work, but it seemed that it's only leveraging a single GPU (based on monitoring `nvidia-smi`): ``` Mon Aug 28 12:19:24 2023 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 515.65.01 Driver Version: 515.65.01 CUDA Version: 11.7 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 NVIDIA A100-SXM... On | 00000000:01:00.0 Off | 0 | | N/A 55C P0 76W / 275W | 8747MiB / 81920MiB | 0% Default | | | | Disabled | +-------------------------------+----------------------+----------------------+ | 1 NVIDIA A100-SXM... 
On | 00000000:47:00.0 Off | 0 | | N/A 67C P0 274W / 275W | 59835MiB / 81920MiB | 100% Default | | | | Disabled | ``` Both GPUs should show roughly equal utilization, but I've consistently seen the last GPU carry far more load than the others. This made me suspect that `os.environ["CUDA_VISIBLE_DEVICES"] = str(rank % torch.cuda.device_count())` has no effect here: `CUDA_VISIBLE_DEVICES` is only read when CUDA is first initialized, and in this script PyTorch has already initialized CUDA (via the `model.to` calls) before the mapped function runs. ### Motivation It would be great to clarify how to do multi-GPU data processing. ### Your contribution If my code snippet can be fixed, I can contribute it to the docs :)
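For anyone hitting the same wall, here is a minimal sketch of the pattern the three errors above point to: guard the entry point, call `set_start_method("spawn")` exactly once inside that guard, and move the model to the device selected by the worker's rank instead of setting `CUDA_VISIBLE_DEVICES`. It reuses the checkpoint and dataset names from the snippet above and assumes one worker per GPU; treat it as a sketch, not a docs-ready version.

```python
import torch
from multiprocess import set_start_method
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Loaded at module level so each spawned worker re-imports the module and
# gets its own copy of the tokenizer and model.
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")


def translate_captions(batch, rank):
    # Pick the device from the worker rank; CUDA_VISIBLE_DEVICES is ignored
    # once CUDA has already been initialized in the process.
    device = f"cuda:{rank % torch.cuda.device_count()}"
    model.to(device)
    inputs = tokenizer(
        batch["text"], padding=True, truncation=True, return_tensors="pt"
    ).to(device)
    translated_tokens = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"],
        max_length=30,
    )
    batch["translated_text"] = tokenizer.batch_decode(
        translated_tokens, skip_special_tokens=True
    )
    return batch


if __name__ == "__main__":
    set_start_method("spawn")  # must run exactly once, inside the main guard
    dataset = load_dataset("mlfoundations/datacomp_small")
    updated_dataset = dataset.map(
        translate_captions,
        with_rank=True,
        num_proc=torch.cuda.device_count(),  # one worker per GPU
        batched=True,
        batch_size=256,
    )
```

With this layout each spawned worker re-creates its own model and moves it to the rank-selected device, so `nvidia-smi` should show all GPUs loaded rather than only the last one.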
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6186/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6185/comments
https://api.github.com/repos/huggingface/datasets/issues/6185/events
https://github.com/huggingface/datasets/issues/6185
1,868,077,748
I_kwDODunzps5vWJq0
6,185
Error in saving the PIL image into *.arrow files using datasets.arrow_writer
{ "login": "HaozheZhao", "id": 14247682, "node_id": "MDQ6VXNlcjE0MjQ3Njgy", "avatar_url": "https://avatars.githubusercontent.com/u/14247682?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HaozheZhao", "html_url": "https://github.com/HaozheZhao", "followers_url": "https://api.github.com/users/HaozheZhao/followers", "following_url": "https://api.github.com/users/HaozheZhao/following{/other_user}", "gists_url": "https://api.github.com/users/HaozheZhao/gists{/gist_id}", "starred_url": "https://api.github.com/users/HaozheZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HaozheZhao/subscriptions", "organizations_url": "https://api.github.com/users/HaozheZhao/orgs", "repos_url": "https://api.github.com/users/HaozheZhao/repos", "events_url": "https://api.github.com/users/HaozheZhao/events{/privacy}", "received_events_url": "https://api.github.com/users/HaozheZhao/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "You can cast the `input_image` column to the `Image` type to fix the issue:\r\n```python\r\nds.cast_column(\"input_image\", datasets.Image())\r\n```" ]
2023-08-26T12:15:57
2023-08-29T14:49:58
null
NONE
null
### Describe the bug I am using the ArrowWriter from datasets.arrow_writer to save a JSON-style dictionary as an Arrow file. The dictionary contains a feature called "image", which is a list of PIL.Image objects. I save it using the following script: ``` def save_to_arrow(path,temp): with ArrowWriter(path=path,writer_batch_size=20) as writer: writer.write_batch(temp) writer.finalize() ``` However, when I attempt to restore the dataset with the ```Dataset.from_file(path)``` function, there seems to be an issue with the PIL.Image objects in the dataset. The list of PIL.Images appears as follows rather than as normal PIL.Image objects: ![1693051705440](https://github.com/huggingface/datasets/assets/14247682/03b204c2-d0fa-4d19-beff-6f4d7b83c848) ### Steps to reproduce the bug 1. Store the JSON data as Arrow files: ``` def save_to_arrow(path,temp): with ArrowWriter(path=path,writer_batch_size=20) as writer: writer.write_batch(temp) writer.finalize() save_to_arrow( path, json_file ) ``` 2. Try to load the Arrow file into a Dataset object using ```Dataset.from_file(path)``` ### Expected behavior The "image" feature should be saved as a list of PIL.Image objects in the Arrow file, and the dataset should be restorable from that file. ### Environment info - `datasets` version: 2.12.0 - Platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.17 - Python version: 3.8.17 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.4.4
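A runnable sketch of the fix suggested in the comments above: cast the column to the `Image` feature type after reloading, so the stored bytes are decoded back into PIL images. The column name `image` and the file path are taken from the description; adjust them to your data.

```python
import datasets
from datasets import Dataset

path = "data.arrow"  # placeholder: the file written with ArrowWriter above
ds = Dataset.from_file(path)

# If each row holds a single image:
ds = ds.cast_column("image", datasets.Image())
# If each row holds a *list* of images, cast to a sequence instead:
# ds = ds.cast_column("image", datasets.Sequence(datasets.Image()))

print(ds[0]["image"])  # now a PIL.Image.Image again
```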
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6185/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6184
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6184/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6184/comments
https://api.github.com/repos/huggingface/datasets/issues/6184/events
https://github.com/huggingface/datasets/issues/6184
1,867,766,143
I_kwDODunzps5vU9l_
6,184
Map cache does not detect function changes in another module
{ "login": "jonathanasdf", "id": 511073, "node_id": "MDQ6VXNlcjUxMTA3Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonathanasdf", "html_url": "https://github.com/jonathanasdf", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
[ "This issue is a duplicate of https://github.com/huggingface/datasets/issues/3297. This is a limitation of `dill`, a package we use for caching (non-`__main__` module objects are serialized by reference). You can find more info about it here: https://github.com/uqfoundation/dill/issues/424.\r\n\r\nIn your case, moving \r\n```\r\ndata = datasets.load_dataset('json', data_files=['/tmp/test.json'], split='train')\r\ndata = data.map(transform)\r\n``` \r\nto `test.py` and setting `transform.__module__ = None` at the end of `dataset.py` should fix the issue.", "I understand this may be a limitation of an upstream tool, but for a user for datasets this is very annoying, as when you have dozens of different datasets with different preprocessing functions you can't really move them all into the same file. It may be worth seeing if there is a way to specialize the dependency (eg. subclass it) and enforce behaviors that makes sense for your product.\r\n\r\nI was able to work around this for now by setting `__module__ = None`. If such workarounds are required for now it may be better to document it somewhere than a single obscure issue from a long time ago.\r\n\r\nAs this is a duplicate issue I'm closing it.\r\n\r\nI have another issue with the cache https://github.com/huggingface/datasets/issues/6179 can you take a look?" ]
2023-08-25T22:59:14
2023-08-29T20:57:07
2023-08-29T20:56:49
NONE
null
```python # dataset.py import os import datasets if not os.path.exists('/tmp/test.json'): with open('/tmp/test.json', 'w') as file: file.write('[{"text": "hello"}]') def transform(example): text = example['text'] # text += ' world' return {'text': text} data = datasets.load_dataset('json', data_files=['/tmp/test.json'], split='train') data = data.map(transform) ``` ```python # test.py import dataset print(next(iter(dataset.data))) ``` Initialize cache ``` python3 test.py # {'text': 'hello'} ``` Edit dataset.py and uncomment the commented line, run again ``` python3 test.py # {'text': 'hello'} # expected: {'text': 'hello world'} ``` Clear cache and run again ``` rm -rf ~/.cache/huggingface/datasets/* python3 test.py # {'text': 'hello world'} ``` If instead the two files are combined, then changes to the function are detected correctly. But it's expected when working on any realistic codebase that things will be modularized into separate files.
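For reference, the workaround from the comments above as a two-file sketch. The key line is `transform.__module__ = None`: `dill` serializes functions defined outside `__main__` by reference, so the map cache never sees edits to them; clearing `__module__` forces serialization (and hashing) by value.

```python
# dataset.py
import os

if not os.path.exists("/tmp/test.json"):
    with open("/tmp/test.json", "w") as file:
        file.write('[{"text": "hello"}]')


def transform(example):
    return {"text": example["text"] + " world"}


# Force dill to hash the function body instead of its import path,
# so edits to transform() invalidate the map cache.
transform.__module__ = None
```

```python
# test.py — the load/map calls move here, out of the function's module
import datasets

import dataset

data = datasets.load_dataset("json", data_files=["/tmp/test.json"], split="train")
data = data.map(dataset.transform)
print(next(iter(data)))
```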
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6184/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6183/comments
https://api.github.com/repos/huggingface/datasets/issues/6183/events
https://github.com/huggingface/datasets/issues/6183
1,867,743,276
I_kwDODunzps5vU4As
6,183
Load dataset with non-existent file
{ "login": "freQuensy23-coder", "id": 64750224, "node_id": "MDQ6VXNlcjY0NzUwMjI0", "avatar_url": "https://avatars.githubusercontent.com/u/64750224?v=4", "gravatar_id": "", "url": "https://api.github.com/users/freQuensy23-coder", "html_url": "https://github.com/freQuensy23-coder", "followers_url": "https://api.github.com/users/freQuensy23-coder/followers", "following_url": "https://api.github.com/users/freQuensy23-coder/following{/other_user}", "gists_url": "https://api.github.com/users/freQuensy23-coder/gists{/gist_id}", "starred_url": "https://api.github.com/users/freQuensy23-coder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/freQuensy23-coder/subscriptions", "organizations_url": "https://api.github.com/users/freQuensy23-coder/orgs", "repos_url": "https://api.github.com/users/freQuensy23-coder/repos", "events_url": "https://api.github.com/users/freQuensy23-coder/events{/privacy}", "received_events_url": "https://api.github.com/users/freQuensy23-coder/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Same problem", "This was fixed in https://github.com/huggingface/datasets/pull/6155, which will be included in the next release (or you can install `datasets` from source to use it immediately)." ]
2023-08-25T22:21:22
2023-08-29T13:26:22
2023-08-29T13:26:22
NONE
null
### Describe the bug When load a dataset from datasets and pass a wrong path to json with the data, error message does not contain something abount "wrong path" or "file do not exist" - ```SchemaInferenceError: Please pass `features` or at least one example when writing data``` ### Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('json', data_files='/home/alexey/unreal_file.json') ``` ### Expected behavior Raise os FileNotFound error or custom error with informative message ### Environment info ``` # packages in environment at /home/alexey/.conda/envs/alex_LoRA: # # Name Version Build Channel _libgcc_mutex 0.1 main _openmp_mutex 5.1 1_gnu accelerate 0.21.0 pypi_0 pypi aiohttp 3.8.5 pypi_0 pypi aiosignal 1.3.1 pypi_0 pypi antlr4-python3-runtime 4.9.3 pypi_0 pypi appdirs 1.4.4 pypi_0 pypi asttokens 2.0.5 pyhd3eb1b0_0 async-timeout 4.0.3 pypi_0 pypi attrs 23.1.0 pypi_0 pypi backcall 0.2.0 pyhd3eb1b0_0 bitsandbytes 0.41.1 pypi_0 pypi bzip2 1.0.8 h7b6447c_0 ca-certificates 2023.05.30 h06a4308_0 certifi 2023.7.22 pypi_0 pypi charset-normalizer 3.2.0 pypi_0 pypi click 8.1.6 pypi_0 pypi cmake 3.27.2 pypi_0 pypi comm 0.1.2 py310h06a4308_0 contourpy 1.1.0 pypi_0 pypi cycler 0.11.0 pypi_0 pypi datasets 2.14.4 pypi_0 pypi debugpy 1.6.7 py310h6a678d5_0 decorator 5.1.1 pyhd3eb1b0_0 dill 0.3.7 pypi_0 pypi docker-pycreds 0.4.0 pypi_0 pypi executing 0.8.3 pyhd3eb1b0_0 filelock 3.12.2 pypi_0 pypi fire 0.5.0 pypi_0 pypi fonttools 4.42.0 pypi_0 pypi frozenlist 1.4.0 pypi_0 pypi fsspec 2023.6.0 pypi_0 pypi gitdb 4.0.10 pypi_0 pypi gitpython 3.1.32 pypi_0 pypi huggingface-hub 0.16.4 pypi_0 pypi idna 3.4 pypi_0 pypi ipykernel 6.25.0 py310h2f386ee_0 ipython 8.12.2 py310h06a4308_0 ipython-genutils 0.2.0 pypi_0 pypi ipywidgets 8.0.4 py310h06a4308_0 jedi 0.18.1 py310h06a4308_1 jinja2 3.1.2 pypi_0 pypi jsonschema 4.19.0 pypi_0 pypi jsonschema-specifications 2023.7.1 pypi_0 pypi jupyter_client 8.1.0 py310h06a4308_0 jupyter_core 5.3.0 py310h06a4308_0 jupyterlab_widgets 3.0.5 py310h06a4308_0 kiwisolver 1.4.4 pypi_0 pypi ld_impl_linux-64 2.38 h1181459_1 libffi 3.3 he6710b0_2 libgcc-ng 11.2.0 h1234567_1 libgomp 11.2.0 h1234567_1 libsodium 1.0.18 h7b6447c_0 libstdcxx-ng 11.2.0 h1234567_1 libuuid 1.41.5 h5eee18b_0 lightning-utilities 0.9.0 pypi_0 pypi lit 16.0.6 pypi_0 pypi markupsafe 2.1.3 pypi_0 pypi matplotlib 3.7.2 pypi_0 pypi matplotlib-inline 0.1.6 py310h06a4308_0 mpmath 1.3.0 pypi_0 pypi multidict 6.0.4 pypi_0 pypi multiprocess 0.70.15 pypi_0 pypi nbformat 4.2.0 pypi_0 pypi ncurses 6.4 h6a678d5_0 nest-asyncio 1.5.6 py310h06a4308_0 networkx 3.1 pypi_0 pypi numpy 1.25.2 pypi_0 pypi nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi nvidia-cuda-cupti-cu11 11.7.101 pypi_0 pypi nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi nvidia-curand-cu11 10.2.10.91 pypi_0 pypi nvidia-cusolver-cu11 11.4.0.1 pypi_0 pypi nvidia-cusparse-cu11 11.7.4.91 pypi_0 pypi nvidia-nccl-cu11 2.14.3 pypi_0 pypi nvidia-nvtx-cu11 11.7.91 pypi_0 pypi omegaconf 2.3.0 pypi_0 pypi openssl 1.1.1v h7f8727e_0 packaging 23.0 py310h06a4308_0 pandas 2.0.3 pypi_0 pypi parso 0.8.3 pyhd3eb1b0_0 pathtools 0.1.2 pypi_0 pypi peft 0.4.0 pypi_0 pypi pexpect 4.8.0 pyhd3eb1b0_3 pickleshare 0.7.5 pyhd3eb1b0_1003 pillow 10.0.0 pypi_0 pypi pip 23.2.1 py310h06a4308_0 platformdirs 2.5.2 py310h06a4308_0 plotly 5.16.1 pypi_0 pypi prompt-toolkit 3.0.36 py310h06a4308_0 protobuf 4.24.0 pypi_0 pypi psutil 5.9.0 py310h5eee18b_0 ptyprocess 
0.7.0 pyhd3eb1b0_2 pure_eval 0.2.2 pyhd3eb1b0_0 pyarrow 12.0.1 pypi_0 pypi pygments 2.15.1 py310h06a4308_1 pyparsing 3.0.9 pypi_0 pypi python 3.10.0 h12debd9_5 python-dateutil 2.8.2 pyhd3eb1b0_0 pytorch-lightning 2.0.6 pypi_0 pypi pytz 2023.3 pypi_0 pypi pyyaml 6.0.1 pypi_0 pypi pyzmq 25.1.0 py310h6a678d5_0 readline 8.2 h5eee18b_0 referencing 0.30.2 pypi_0 pypi regex 2023.8.8 pypi_0 pypi requests 2.31.0 pypi_0 pypi rpds-py 0.9.2 pypi_0 pypi safetensors 0.3.2 pypi_0 pypi scipy 1.11.1 pypi_0 pypi sentencepiece 0.1.99 pypi_0 pypi sentry-sdk 1.29.2 pypi_0 pypi setproctitle 1.3.2 pypi_0 pypi setuptools 68.0.0 py310h06a4308_0 six 1.16.0 pyhd3eb1b0_1 smmap 5.0.0 pypi_0 pypi sqlite 3.41.2 h5eee18b_0 stack_data 0.2.0 pyhd3eb1b0_0 sympy 1.12 pypi_0 pypi tenacity 8.2.3 pypi_0 pypi termcolor 2.3.0 pypi_0 pypi tk 8.6.12 h1ccaba5_0 tokenizers 0.13.3 pypi_0 pypi torch 2.0.1 pypi_0 pypi torchmetrics 1.0.3 pypi_0 pypi tornado 6.3.2 py310h5eee18b_0 tqdm 4.66.1 pypi_0 pypi traitlets 5.7.1 py310h06a4308_0 transformers 4.31.0 pypi_0 pypi triton 2.0.0 pypi_0 pypi typing-extensions 4.7.1 pypi_0 pypi tzdata 2023.3 pypi_0 pypi urllib3 2.0.4 pypi_0 pypi wandb 0.15.8 pypi_0 pypi wcwidth 0.2.5 pyhd3eb1b0_0 wheel 0.38.4 py310h06a4308_0 widgetsnbextension 4.0.5 py310h06a4308_0 xxhash 3.3.0 pypi_0 pypi xz 5.4.2 h5eee18b_0 yarl 1.9.2 pypi_0 pypi zeromq 4.3.4 h2531618_0 zlib 1.2.13 h5eee18b_0 active environment : None user config file : /home/alexey/.condarc populated config files : conda version : 23.1.0 conda-build version : 3.22.0 python version : 3.9.13.final.0 virtual packages : __archspec=1=x86_64 __cuda=12.0=0 __glibc=2.35=0 __linux=5.19.0=0 __unix=0=0 base environment : /opt/anaconda/anaconda3 (read only) conda av data dir : /opt/anaconda/anaconda3/etc/conda conda av metadata url : None channel URLs : https://repo.anaconda.com/pkgs/main/linux-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/linux-64 https://repo.anaconda.com/pkgs/r/noarch package cache : /opt/anaconda/anaconda3/pkgs /home/alexey/.conda/pkgs envs directories : /home/alexey/.conda/envs /opt/anaconda/anaconda3/envs platform : linux-64 user-agent : conda/23.1.0 requests/2.31.0 CPython/3.9.13 Linux/5.19.0-46-generic ubuntu/22.04.2 glibc/2.35 UID:GID : 1009:1009 netrc file : /home/alexey/.netrc offline mode : False ```
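Until a release containing the fix mentioned in the comments is out, a simple guard produces the expected error. A minimal sketch using the path from the repro:

```python
import os

from datasets import load_dataset

data_file = "/home/alexey/unreal_file.json"
if not os.path.isfile(data_file):
    raise FileNotFoundError(f"No such data file: {data_file}")

dataset = load_dataset("json", data_files=data_file)
```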
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6183/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6183/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6182
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6182/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6182/comments
https://api.github.com/repos/huggingface/datasets/issues/6182/events
https://github.com/huggingface/datasets/issues/6182
1,867,203,131
I_kwDODunzps5vS0I7
6,182
Loading Meteor metric in HF evaluate module crashes due to datasets import issue
{ "login": "dsashulya", "id": 42322648, "node_id": "MDQ6VXNlcjQyMzIyNjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/42322648?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dsashulya", "html_url": "https://github.com/dsashulya", "followers_url": "https://api.github.com/users/dsashulya/followers", "following_url": "https://api.github.com/users/dsashulya/following{/other_user}", "gists_url": "https://api.github.com/users/dsashulya/gists{/gist_id}", "starred_url": "https://api.github.com/users/dsashulya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dsashulya/subscriptions", "organizations_url": "https://api.github.com/users/dsashulya/orgs", "repos_url": "https://api.github.com/users/dsashulya/repos", "events_url": "https://api.github.com/users/dsashulya/events{/privacy}", "received_events_url": "https://api.github.com/users/dsashulya/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Our minimal Python version requirement is 3.8, so we dropped `importlib_metadata`. \r\n\r\nFeel free to open a PR in the `evaluate` repo to replace the problematic import with\r\n```python\r\nif PY_VERSION < version.parse(\"3.8\"):\r\n import importlib_metadata\r\nelse:\r\n import importlib.metadata as importlib_metadata\r\n```" ]
2023-08-25T14:54:06
2023-08-25T17:36:33
null
NONE
null
### Describe the bug When using python3.9 and ```evaluate``` module loading Meteor metric crashes at a non-existent import from ```datasets.config``` in ```datasets v2.14``` ### Steps to reproduce the bug ``` from evaluate import load meteor = load("meteor") ``` produces the following error: ``` from datasets.config import importlib_metadata, version ImportError: cannot import name 'importlib_metadata' from 'datasets.config' (<path_to_project>/venv/lib/python3.9/site-packages/datasets/config.py) ``` ### Expected behavior ```datasets``` of v2.10 has the following workaround in ```config.py```: ``` if PY_VERSION < version.parse("3.8"): import importlib_metadata else: import importlib.metadata as importlib_metadata ``` However, it's absent in v2.14 which might be the cause of the issue. ### Environment info - `datasets` version: 2.14.4 - Platform: macOS-13.5-arm64-arm-64bit - Python version: 3.9.6 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3 - Evaluate version: 0.4.0
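Until `evaluate` is patched as suggested above, one possible stopgap (an untested sketch, assuming only the `importlib_metadata` alias was dropped from `datasets.config`) is to restore the alias before loading the metric, since the metric script runs its imports at load time:

```python
import importlib.metadata as importlib_metadata  # Python >= 3.8

import datasets.config

# Restore the alias that the meteor metric script expects to import.
if not hasattr(datasets.config, "importlib_metadata"):
    datasets.config.importlib_metadata = importlib_metadata

from evaluate import load

meteor = load("meteor")
```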
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6182/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6182/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6181
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6181/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6181/comments
https://api.github.com/repos/huggingface/datasets/issues/6181/events
https://github.com/huggingface/datasets/pull/6181
1,867,035,522
PR_kwDODunzps5Yy2VO
6,181
Fix import in `image_load` doc
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009072 / 0.011353 (-0.002281) | 0.006088 / 0.011008 (-0.004920) | 0.134520 / 0.038508 (0.096011) | 0.074935 / 0.023109 (0.051826) | 0.480364 / 0.275898 (0.204466) | 0.568943 / 0.323480 (0.245464) | 0.006821 / 0.007986 (-0.001164) | 0.004941 / 0.004328 (0.000612) | 0.083274 / 0.004250 (0.079023) | 0.061080 / 0.037052 (0.024028) | 0.478960 / 0.258489 (0.220471) | 0.542720 / 0.293841 (0.248879) | 0.058023 / 0.128546 (-0.070524) | 0.020120 / 0.075646 (-0.055526) | 0.492680 / 0.419271 (0.073409) | 0.079118 / 0.043533 (0.035585) | 0.425087 / 0.255139 (0.169948) | 0.603228 / 0.283200 (0.320028) | 0.044102 / 0.141683 (-0.097581) | 2.138848 / 1.452155 (0.686693) | 2.454418 / 1.492716 (0.961702) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255745 / 0.018006 (0.237738) | 0.587559 / 0.000490 (0.587069) | 0.006872 / 0.000200 (0.006672) | 0.000111 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038480 / 0.037411 (0.001069) | 0.115479 / 0.014526 (0.100953) | 0.138395 / 0.176557 (-0.038161) | 0.218007 / 0.737135 (-0.519129) | 0.128866 / 0.296338 (-0.167472) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.756089 / 0.215209 (0.540880) | 7.754631 / 2.077655 (5.676976) | 3.615716 / 
1.504120 (2.111596) | 2.994327 / 1.541195 (1.453132) | 3.196169 / 1.468490 (1.727679) | 1.066937 / 4.584777 (-3.517840) | 6.079595 / 3.745712 (2.333883) | 5.455523 / 5.269862 (0.185661) | 3.559036 / 4.565676 (-1.006640) | 0.113044 / 0.424275 (-0.311231) | 0.011401 / 0.007607 (0.003794) | 0.961475 / 0.226044 (0.735430) | 8.664226 / 2.268929 (6.395298) | 4.203804 / 55.444624 (-51.240821) | 3.122437 / 6.876477 (-3.754039) | 3.549168 / 2.142072 (1.407095) | 1.213035 / 4.805227 (-3.592193) | 0.274725 / 6.500664 (-6.225939) | 0.094499 / 0.075469 (0.019030) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.770299 / 1.841788 (-0.071489) | 27.644591 / 8.074308 (19.570283) | 23.239529 / 10.191392 (13.048137) | 0.270185 / 0.680424 (-0.410238) | 0.033563 / 0.534201 (-0.500638) | 0.588301 / 0.579283 (0.009018) | 0.658746 / 0.434364 (0.224382) | 0.644476 / 0.540337 (0.104139) | 0.834314 / 1.386936 (-0.552622) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011021 / 0.011353 (-0.000332) | 0.006719 / 0.011008 (-0.004289) | 0.087669 / 0.038508 (0.049161) | 0.088905 / 0.023109 (0.065796) | 0.594230 / 0.275898 (0.318332) | 0.620929 / 0.323480 (0.297449) | 0.006776 / 0.007986 (-0.001210) | 0.004725 / 0.004328 (0.000396) | 0.082006 / 0.004250 (0.077756) | 0.072164 / 0.037052 (0.035111) | 0.604489 / 0.258489 (0.346000) | 0.598520 / 0.293841 (0.304679) | 0.057534 / 0.128546 (-0.071013) | 0.016799 / 0.075646 (-0.058847) | 0.115029 / 0.419271 (-0.304243) | 0.070013 / 0.043533 (0.026481) | 0.561773 / 0.255139 (0.306634) | 0.624097 / 0.283200 (0.340897) | 0.043518 / 0.141683 (-0.098164) | 2.017089 / 1.452155 (0.564934) | 2.188159 / 1.492716 (0.695443) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.386476 / 0.018006 (0.368469) | 0.633195 / 0.000490 (0.632705) | 0.028469 / 0.000200 (0.028269) | 0.000159 / 0.000054 (0.000104) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| 
metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.040020 / 0.037411 (0.002609) | 0.112927 / 0.014526 (0.098402) | 0.143663 / 0.176557 (-0.032894) | 0.205931 / 0.737135 (-0.531204) | 0.177814 / 0.296338 (-0.118524) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.711542 / 0.215209 (0.496333) | 7.518535 / 2.077655 (5.440880) | 3.714930 / 1.504120 (2.210810) | 3.031999 / 1.541195 (1.490804) | 3.328497 / 1.468490 (1.860006) | 0.858912 / 4.584777 (-3.725865) | 6.108384 / 3.745712 (2.362672) | 5.184329 / 5.269862 (-0.085532) | 3.622589 / 4.565676 (-0.943087) | 0.096933 / 0.424275 (-0.327342) | 0.008727 / 0.007607 (0.001120) | 0.830102 / 0.226044 (0.604057) | 8.331959 / 2.268929 (6.063030) | 4.165106 / 55.444624 (-51.279519) | 3.477003 / 6.876477 (-3.399474) | 3.794225 / 2.142072 (1.652153) | 1.237667 / 4.805227 (-3.567561) | 0.233731 / 6.500664 (-6.266933) | 0.076682 / 0.075469 (0.001213) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.944813 / 1.841788 (0.103026) | 27.666997 / 8.074308 (19.592689) | 24.562677 / 10.191392 (14.371285) | 0.279320 / 0.680424 (-0.401104) | 0.037802 / 0.534201 (-0.496399) | 0.553579 / 0.579283 (-0.025704) | 0.718229 / 0.434364 (0.283865) | 0.623456 / 0.540337 (0.083118) | 0.856777 / 1.386936 (-0.530159) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4c2a9d31d5e720e85976af8b457d45755a7e6911 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | 
write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007716 / 0.011353 (-0.003637) | 0.004624 / 0.011008 (-0.006384) | 0.099987 / 0.038508 (0.061479) | 0.082651 / 0.023109 (0.059542) | 0.376277 / 0.275898 (0.100379) | 0.401210 / 0.323480 (0.077730) | 0.004528 / 0.007986 (-0.003458) | 0.003763 / 0.004328 (-0.000566) | 0.076274 / 0.004250 (0.072024) | 0.062933 / 0.037052 (0.025881) | 0.393881 / 0.258489 (0.135392) | 0.431695 / 0.293841 (0.137854) | 0.036795 / 0.128546 (-0.091752) | 0.009935 / 0.075646 (-0.065712) | 0.343638 / 0.419271 (-0.075634) | 0.061456 / 0.043533 (0.017923) | 0.372235 / 0.255139 (0.117096) | 0.412994 / 0.283200 (0.129794) | 0.027993 / 0.141683 (-0.113690) | 1.798018 / 1.452155 (0.345863) | 1.898502 / 1.492716 (0.405786) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237330 / 0.018006 (0.219324) | 0.494956 / 0.000490 (0.494467) | 0.003543 / 0.000200 (0.003343) | 0.000113 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034084 / 0.037411 (-0.003327) | 0.093407 / 0.014526 (0.078881) | 0.108378 / 0.176557 (-0.068179) | 0.177016 / 0.737135 (-0.560119) | 0.108622 / 0.296338 (-0.187716) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456449 / 0.215209 (0.241240) | 4.522405 / 2.077655 (2.444750) | 2.206564 / 1.504120 (0.702444) | 1.994185 / 1.541195 (0.452990) | 2.083785 / 1.468490 (0.615295) | 0.563352 / 4.584777 (-4.021425) | 4.207295 / 3.745712 (0.461583) | 3.783061 / 5.269862 (-1.486800) | 2.372874 / 4.565676 (-2.192802) | 0.066907 / 0.424275 (-0.357368) | 0.009013 / 0.007607 (0.001406) | 0.537852 / 0.226044 (0.311808) | 5.349928 / 2.268929 (3.081000) | 2.759409 / 55.444624 (-52.685215) | 2.345972 / 6.876477 (-4.530505) | 2.630559 / 2.142072 (0.488486) | 0.681134 / 4.805227 (-4.124093) | 0.157898 / 6.500664 (-6.342766) | 0.071638 / 0.075469 (-0.003831) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.470730 / 1.841788 (-0.371058) | 22.479252 / 8.074308 (14.404944) | 16.543080 / 10.191392 (6.351688) | 0.191943 / 0.680424 (-0.488481) | 0.021641 / 0.534201 (-0.512560) | 0.467571 / 0.579283 (-0.111712) | 0.486728 / 0.434364 (0.052364) | 0.543359 / 0.540337 (0.003021) | 0.733968 / 1.386936 (-0.652968) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008135 / 0.011353 (-0.003218) | 0.004662 / 0.011008 (-0.006347) | 0.077218 / 0.038508 (0.038710) | 0.092220 / 0.023109 (0.069111) | 0.481219 / 0.275898 (0.205321) | 0.530373 / 0.323480 (0.206893) | 0.006418 / 0.007986 (-0.001568) | 0.003924 / 0.004328 (-0.000404) | 0.076681 / 0.004250 (0.072431) | 0.068693 / 0.037052 (0.031641) | 0.491938 / 0.258489 (0.233449) | 0.540501 / 0.293841 (0.246660) | 0.038106 / 0.128546 (-0.090441) | 0.010035 / 0.075646 (-0.065611) | 0.084502 / 0.419271 (-0.334769) | 0.057234 / 0.043533 (0.013701) | 0.483239 / 0.255139 (0.228100) | 0.510026 / 0.283200 (0.226826) | 0.028770 / 0.141683 (-0.112913) | 1.854937 / 1.452155 (0.402783) | 1.948268 / 1.492716 (0.455552) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.380192 / 0.018006 (0.362186) | 0.523318 / 0.000490 (0.522828) | 0.051153 / 0.000200 (0.050953) | 0.000691 / 0.000054 (0.000637) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036838 / 0.037411 (-0.000573) | 0.109202 / 0.014526 (0.094676) | 0.124110 / 0.176557 (-0.052446) | 0.186717 / 0.737135 (-0.550419) | 0.124088 / 0.296338 (-0.172250) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506411 / 0.215209 (0.291202) | 5.045421 / 2.077655 (2.967766) | 2.711911 / 1.504120 (1.207791) | 2.531668 / 1.541195 (0.990474) | 2.635680 / 1.468490 (1.167190) | 0.578395 / 4.584777 
(-4.006382) | 4.206891 / 3.745712 (0.461178) | 3.851063 / 5.269862 (-1.418799) | 2.388327 / 4.565676 (-2.177350) | 0.068041 / 0.424275 (-0.356234) | 0.008769 / 0.007607 (0.001162) | 0.594170 / 0.226044 (0.368125) | 5.953138 / 2.268929 (3.684210) | 3.290586 / 55.444624 (-52.154038) | 2.877086 / 6.876477 (-3.999390) | 3.138600 / 2.142072 (0.996528) | 0.686393 / 4.805227 (-4.118834) | 0.156541 / 6.500664 (-6.344123) | 0.071514 / 0.075469 (-0.003955) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.613514 / 1.841788 (-0.228274) | 23.593185 / 8.074308 (15.518877) | 17.146647 / 10.191392 (6.955255) | 0.177230 / 0.680424 (-0.503193) | 0.023661 / 0.534201 (-0.510540) | 0.472367 / 0.579283 (-0.106916) | 0.484614 / 0.434364 (0.050250) | 0.547150 / 0.540337 (0.006813) | 0.843726 / 1.386936 (-0.543210) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#dba64cd381bfe384cb64ab9826f6054a0f1df1ff \"CML watermark\")\n" ]
2023-08-25T13:12:19
2023-08-25T16:12:46
2023-08-25T16:02:24
CONTRIBUTOR
null
Reported on [Discord](https://discord.com/channels/879548962464493619/1144295822209581168/1144295822209581168)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6181/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6181", "html_url": "https://github.com/huggingface/datasets/pull/6181", "diff_url": "https://github.com/huggingface/datasets/pull/6181.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6181.patch", "merged_at": "2023-08-25T16:02:24" }
true
https://api.github.com/repos/huggingface/datasets/issues/6180
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6180/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6180/comments
https://api.github.com/repos/huggingface/datasets/issues/6180/events
https://github.com/huggingface/datasets/pull/6180
1,867,032,578
PR_kwDODunzps5Yy1r-
6,180
Use `hf-internal-testing` repos for hosting test dataset repos
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006505 / 0.011353 (-0.004847) | 0.003950 / 0.011008 (-0.007058) | 0.084554 / 0.038508 (0.046046) | 0.074376 / 0.023109 (0.051267) | 0.350184 / 0.275898 (0.074286) | 0.380704 / 0.323480 (0.057224) | 0.004011 / 0.007986 (-0.003975) | 0.003890 / 0.004328 (-0.000438) | 0.065483 / 0.004250 (0.061232) | 0.054912 / 0.037052 (0.017860) | 0.359586 / 0.258489 (0.101097) | 0.403360 / 0.293841 (0.109519) | 0.030614 / 0.128546 (-0.097932) | 0.008530 / 0.075646 (-0.067117) | 0.288220 / 0.419271 (-0.131052) | 0.052270 / 0.043533 (0.008737) | 0.352557 / 0.255139 (0.097418) | 0.380509 / 0.283200 (0.097309) | 0.025513 / 0.141683 (-0.116170) | 1.488469 / 1.452155 (0.036315) | 1.559182 / 1.492716 (0.066466) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266163 / 0.018006 (0.248157) | 0.596345 / 0.000490 (0.595855) | 0.004368 / 0.000200 (0.004168) | 0.000211 / 0.000054 (0.000156) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027137 / 0.037411 (-0.010274) | 0.082251 / 0.014526 (0.067725) | 0.094745 / 0.176557 (-0.081812) | 0.148756 / 0.737135 (-0.588379) | 0.094580 / 0.296338 (-0.201758) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383506 / 0.215209 (0.168297) | 3.823147 / 2.077655 (1.745493) | 
1.859627 / 1.504120 (0.355507) | 1.687969 / 1.541195 (0.146775) | 1.720786 / 1.468490 (0.252296) | 0.476552 / 4.584777 (-4.108225) | 3.539558 / 3.745712 (-0.206154) | 3.209032 / 5.269862 (-2.060830) | 1.999643 / 4.565676 (-2.566034) | 0.056484 / 0.424275 (-0.367791) | 0.007443 / 0.007607 (-0.000164) | 0.456089 / 0.226044 (0.230044) | 4.562522 / 2.268929 (2.293593) | 2.348286 / 55.444624 (-53.096338) | 1.984323 / 6.876477 (-4.892154) | 2.148988 / 2.142072 (0.006915) | 0.570761 / 4.805227 (-4.234466) | 0.131439 / 6.500664 (-6.369225) | 0.059752 / 0.075469 (-0.015717) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276803 / 1.841788 (-0.564985) | 19.406812 / 8.074308 (11.332504) | 13.979088 / 10.191392 (3.787696) | 0.157418 / 0.680424 (-0.523006) | 0.018051 / 0.534201 (-0.516150) | 0.392307 / 0.579283 (-0.186976) | 0.406603 / 0.434364 (-0.027760) | 0.458450 / 0.540337 (-0.081888) | 0.622569 / 1.386936 (-0.764367) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006552 / 0.011353 (-0.004800) | 0.004060 / 0.011008 (-0.006948) | 0.063522 / 0.038508 (0.025014) | 0.072537 / 0.023109 (0.049428) | 0.398452 / 0.275898 (0.122554) | 0.422059 / 0.323480 (0.098579) | 0.005577 / 0.007986 (-0.002409) | 0.003413 / 0.004328 (-0.000916) | 0.064095 / 0.004250 (0.059845) | 0.056883 / 0.037052 (0.019831) | 0.407119 / 0.258489 (0.148630) | 0.435889 / 0.293841 (0.142048) | 0.031549 / 0.128546 (-0.096998) | 0.008418 / 0.075646 (-0.067228) | 0.070315 / 0.419271 (-0.348957) | 0.047828 / 0.043533 (0.004295) | 0.398705 / 0.255139 (0.143566) | 0.416986 / 0.283200 (0.133786) | 0.022304 / 0.141683 (-0.119379) | 1.512597 / 1.452155 (0.060442) | 1.570588 / 1.492716 (0.077871) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.295100 / 0.018006 (0.277094) | 0.541883 / 0.000490 (0.541393) | 0.007375 / 0.000200 (0.007175) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030877 / 0.037411 (-0.006534) | 0.090807 / 0.014526 (0.076281) | 0.106155 / 0.176557 (-0.070402) | 0.155546 / 0.737135 (-0.581589) | 0.103847 / 0.296338 (-0.192492) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441176 / 0.215209 (0.225967) | 4.401025 / 2.077655 (2.323371) | 2.394764 / 1.504120 (0.890644) | 2.226434 / 1.541195 (0.685239) | 2.247248 / 1.468490 (0.778758) | 0.489149 / 4.584777 (-4.095628) | 3.642468 / 3.745712 (-0.103244) | 3.235597 / 5.269862 (-2.034265) | 1.992660 / 4.565676 (-2.573016) | 0.057457 / 0.424275 (-0.366818) | 0.007192 / 0.007607 (-0.000415) | 0.515978 / 0.226044 (0.289934) | 5.147728 / 2.268929 (2.878800) | 2.837394 / 55.444624 (-52.607230) | 2.505753 / 6.876477 (-4.370723) | 2.653090 / 2.142072 (0.511018) | 0.583274 / 4.805227 (-4.221954) | 0.132116 / 6.500664 (-6.368548) | 0.058794 / 0.075469 (-0.016675) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.331630 / 1.841788 (-0.510158) | 20.056890 / 8.074308 (11.982582) | 14.950561 / 10.191392 (4.759169) | 0.165449 / 0.680424 (-0.514975) | 0.020161 / 0.534201 (-0.514040) | 0.395791 / 0.579283 (-0.183492) | 0.415631 / 0.434364 (-0.018733) | 0.474440 / 0.540337 (-0.065898) | 0.643060 / 1.386936 (-0.743876) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#712185ed5e9cb3ff6d6528b4528882d51935f334 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007440 / 0.011353 (-0.003913) | 0.004456 / 0.011008 (-0.006552) | 0.099498 / 0.038508 (0.060990) | 0.077579 / 0.023109 (0.054470) | 0.374934 / 0.275898 (0.099036) | 0.409590 / 0.323480 (0.086110) | 0.005876 / 0.007986 (-0.002110) | 0.003642 / 0.004328 (-0.000687) | 0.076781 / 0.004250 (0.072531) | 0.060185 / 0.037052 (0.023133) | 0.374762 / 0.258489 (0.116273) | 0.445608 / 0.293841 (0.151767) | 0.036557 / 0.128546 (-0.091990) | 0.009941 / 0.075646 (-0.065706) | 0.345214 / 0.419271 (-0.074058) | 0.061912 / 0.043533 (0.018379) | 0.378346 / 0.255139 (0.123207) | 0.415275 / 0.283200 (0.132076) | 0.027396 / 0.141683 (-0.114287) | 1.776602 / 1.452155 (0.324447) | 1.827683 / 1.492716 (0.334967) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235227 / 0.018006 (0.217221) | 0.491846 / 0.000490 (0.491356) | 0.004987 / 0.000200 (0.004787) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032517 / 0.037411 (-0.004894) | 0.099217 / 0.014526 (0.084691) | 0.109749 / 0.176557 (-0.066807) | 0.176190 / 0.737135 (-0.560946) | 0.109868 / 0.296338 (-0.186471) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455188 / 0.215209 (0.239979) | 4.560143 / 2.077655 (2.482489) | 2.249928 / 1.504120 (0.745809) | 2.032808 / 1.541195 (0.491614) | 2.090096 / 1.468490 (0.621605) | 0.567813 / 4.584777 (-4.016964) | 4.338299 / 3.745712 (0.592587) | 3.701589 / 5.269862 (-1.568273) | 2.404805 / 4.565676 (-2.160871) | 0.067931 / 0.424275 (-0.356344) | 0.009011 / 0.007607 (0.001404) | 0.542565 / 0.226044 (0.316521) | 5.406578 / 2.268929 (3.137650) | 2.773508 / 55.444624 (-52.671116) | 2.402926 / 6.876477 (-4.473550) | 2.679318 / 2.142072 (0.537246) | 0.683781 / 4.805227 (-4.121446) | 0.155970 / 6.500664 (-6.344694) | 0.070108 / 0.075469 (-0.005361) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541583 / 1.841788 (-0.300205) | 21.592562 / 8.074308 (13.518254) | 16.426868 / 10.191392 (6.235476) | 0.168618 / 0.680424 (-0.511806) | 0.021560 / 0.534201 (-0.512641) | 0.467062 / 0.579283 (-0.112221) | 0.479968 / 0.434364 (0.045604) | 0.540747 / 0.540337 
(0.000410) | 0.775502 / 1.386936 (-0.611434) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008632 / 0.011353 (-0.002721) | 0.004523 / 0.011008 (-0.006485) | 0.075814 / 0.038508 (0.037306) | 0.087096 / 0.023109 (0.063987) | 0.482136 / 0.275898 (0.206238) | 0.529969 / 0.323480 (0.206489) | 0.006882 / 0.007986 (-0.001103) | 0.003720 / 0.004328 (-0.000609) | 0.076232 / 0.004250 (0.071981) | 0.069307 / 0.037052 (0.032254) | 0.491554 / 0.258489 (0.233065) | 0.528989 / 0.293841 (0.235148) | 0.042219 / 0.128546 (-0.086327) | 0.009717 / 0.075646 (-0.065929) | 0.103006 / 0.419271 (-0.316266) | 0.060377 / 0.043533 (0.016844) | 0.484454 / 0.255139 (0.229315) | 0.536072 / 0.283200 (0.252872) | 0.027482 / 0.141683 (-0.114201) | 1.844677 / 1.452155 (0.392522) | 2.001800 / 1.492716 (0.509083) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252367 / 0.018006 (0.234361) | 0.483601 / 0.000490 (0.483111) | 0.007445 / 0.000200 (0.007245) | 0.000163 / 0.000054 (0.000108) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036463 / 0.037411 (-0.000948) | 0.108837 / 0.014526 (0.094311) | 0.122256 / 0.176557 (-0.054300) | 0.186455 / 0.737135 (-0.550681) | 0.122270 / 0.296338 (-0.174069) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506291 / 0.215209 (0.291082) | 5.038044 / 2.077655 (2.960389) | 2.751017 / 1.504120 (1.246897) | 2.553655 / 1.541195 (1.012460) | 2.612724 / 
1.468490 (1.144234) | 0.581755 / 4.584777 (-4.003022) | 4.376012 / 3.745712 (0.630300) | 3.749755 / 5.269862 (-1.520107) | 2.394059 / 4.565676 (-2.171618) | 0.068727 / 0.424275 (-0.355548) | 0.008714 / 0.007607 (0.001107) | 0.607371 / 0.226044 (0.381326) | 6.062053 / 2.268929 (3.793125) | 3.278378 / 55.444624 (-52.166247) | 2.866417 / 6.876477 (-4.010060) | 3.056150 / 2.142072 (0.914077) | 0.695090 / 4.805227 (-4.110137) | 0.155274 / 6.500664 (-6.345390) | 0.071106 / 0.075469 (-0.004363) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.584552 / 1.841788 (-0.257236) | 23.092569 / 8.074308 (15.018260) | 17.381905 / 10.191392 (7.190513) | 0.206535 / 0.680424 (-0.473888) | 0.025401 / 0.534201 (-0.508800) | 0.514297 / 0.579283 (-0.064986) | 0.507487 / 0.434364 (0.073123) | 0.566883 / 0.540337 (0.026545) | 0.811074 / 1.386936 (-0.575862) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5fb01295bff860f09a4c466e745f3840f851efdc \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008400 / 0.011353 (-0.002953) | 0.004872 / 0.011008 (-0.006136) | 0.104434 / 0.038508 (0.065926) | 0.074411 / 0.023109 (0.051302) | 0.395970 / 0.275898 (0.120072) | 0.431661 / 0.323480 (0.108181) | 0.005365 / 0.007986 (-0.002621) | 0.005495 / 0.004328 (0.001167) | 0.081255 / 0.004250 (0.077004) | 0.057141 / 0.037052 (0.020089) | 0.397242 / 0.258489 (0.138753) | 0.456052 / 0.293841 (0.162211) | 0.048362 / 0.128546 (-0.080184) | 0.014077 / 0.075646 (-0.061569) | 0.351128 / 0.419271 (-0.068143) | 0.067842 / 0.043533 (0.024309) | 0.372820 / 0.255139 (0.117681) | 0.407917 / 0.283200 (0.124717) | 0.037707 / 0.141683 (-0.103976) | 1.677136 / 1.452155 (0.224981) | 1.764614 / 1.492716 (0.271897) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269850 / 0.018006 (0.251844) | 0.601458 / 0.000490 (0.600969) | 0.006500 / 0.000200 
(0.006300) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030340 / 0.037411 (-0.007072) | 0.098041 / 0.014526 (0.083515) | 0.107270 / 0.176557 (-0.069287) | 0.173502 / 0.737135 (-0.563633) | 0.113296 / 0.296338 (-0.183043) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.575788 / 0.215209 (0.360579) | 5.723878 / 2.077655 (3.646223) | 2.326339 / 1.504120 (0.822219) | 2.130667 / 1.541195 (0.589472) | 2.080885 / 1.468490 (0.612395) | 0.800936 / 4.584777 (-3.783841) | 5.227888 / 3.745712 (1.482176) | 4.592647 / 5.269862 (-0.677214) | 2.935765 / 4.565676 (-1.629911) | 0.095909 / 0.424275 (-0.328367) | 0.008763 / 0.007607 (0.001156) | 0.697362 / 0.226044 (0.471318) | 6.968105 / 2.268929 (4.699176) | 3.129070 / 55.444624 (-52.315554) | 2.554818 / 6.876477 (-4.321658) | 2.776005 / 2.142072 (0.633933) | 1.017064 / 4.805227 (-3.788163) | 0.211552 / 6.500664 (-6.289112) | 0.072132 / 0.075469 (-0.003338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517072 / 1.841788 (-0.324716) | 23.737742 / 8.074308 (15.663433) | 22.236447 / 10.191392 (12.045055) | 0.235408 / 0.680424 (-0.445016) | 0.031889 / 0.534201 (-0.502312) | 0.458997 / 0.579283 (-0.120286) | 0.610513 / 0.434364 (0.176149) | 0.536508 / 0.540337 (-0.003830) | 0.750137 / 1.386936 (-0.636799) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008696 / 0.011353 (-0.002657) | 0.005374 / 0.011008 (-0.005634) | 0.077974 / 0.038508 (0.039466) | 0.083471 / 0.023109 (0.060362) | 0.498890 / 0.275898 (0.222992) | 0.517570 / 0.323480 (0.194090) | 0.006523 / 0.007986 (-0.001462) | 0.004315 / 0.004328 (-0.000013) | 0.082262 / 0.004250 (0.078012) | 0.064828 / 0.037052 (0.027776) | 0.473101 / 0.258489 (0.214612) | 0.534172 / 0.293841 (0.240331) | 0.051884 / 0.128546 (-0.076662) | 0.015191 / 0.075646 (-0.060455) | 0.084307 / 0.419271 (-0.334965) | 0.066050 / 0.043533 (0.022517) | 0.518007 / 0.255139 (0.262868) | 0.511145 / 0.283200 (0.227946) | 0.045302 / 0.141683 (-0.096381) | 1.670973 / 1.452155 (0.218818) | 1.829225 / 1.492716 (0.336509) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.436537 / 0.018006 (0.418531) | 0.608380 / 0.000490 (0.607890) | 0.075211 / 0.000200 (0.075011) | 0.000733 / 0.000054 (0.000679) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039117 / 0.037411 (0.001706) | 0.103525 / 0.014526 (0.088999) | 0.124413 / 0.176557 (-0.052144) | 0.192352 / 0.737135 (-0.544783) | 0.120379 / 0.296338 (-0.175959) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.673338 / 0.215209 (0.458129) | 6.799435 / 2.077655 (4.721780) | 3.600913 / 1.504120 (2.096793) | 2.881008 / 1.541195 (1.339814) | 2.667154 / 1.468490 (1.198664) | 0.868775 / 4.584777 (-3.716002) | 5.517063 / 3.745712 (1.771351) | 4.646706 / 5.269862 (-0.623156) | 2.914825 / 4.565676 (-1.650852) | 0.098784 / 0.424275 (-0.325491) | 0.011504 / 0.007607 (0.003897) | 0.724233 / 0.226044 (0.498188) | 7.311045 / 2.268929 (5.042117) | 3.685490 / 55.444624 (-51.759135) | 2.892360 / 6.876477 (-3.984117) | 3.253189 / 2.142072 (1.111117) | 0.983065 / 4.805227 (-3.822162) | 0.201097 / 6.500664 (-6.299567) | 0.068020 / 0.075469 (-0.007450) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.793904 / 1.841788 (-0.047884) | 24.451356 / 8.074308 (16.377048) | 21.697191 / 10.191392 (11.505799) | 0.228545 / 0.680424 (-0.451879) | 0.034600 / 0.534201 (-0.499601) | 0.483253 / 0.579283 (-0.096030) | 0.615103 / 0.434364 (0.180739) | 0.564600 / 0.540337 (0.024262) | 0.799688 / 1.386936 (-0.587248) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#74d60213dcbd7c99484c62ce1d3dfd90a1df0770 \"CML watermark\")\n" ]
2023-08-25T13:10:26
2023-08-25T16:58:02
2023-08-25T16:46:22
CONTRIBUTOR
null
Use `hf-internal-testing` for hosting instead of the maintainers' dataset repos.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6180/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6180", "html_url": "https://github.com/huggingface/datasets/pull/6180", "diff_url": "https://github.com/huggingface/datasets/pull/6180.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6180.patch", "merged_at": "2023-08-25T16:46:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/6179
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6179/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6179/comments
https://api.github.com/repos/huggingface/datasets/issues/6179/events
https://github.com/huggingface/datasets/issues/6179
1,867,009,016
I_kwDODunzps5vSEv4
6,179
Map cache with tokenizer
{ "login": "jonathanasdf", "id": 511073, "node_id": "MDQ6VXNlcjUxMTA3Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonathanasdf", "html_url": "https://github.com/jonathanasdf", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "https://github.com/huggingface/datasets/issues/5147 may be a solution, by passing in the tokenizer in a fn_kwargs and ignoring it in the fingerprint calculations", "I have a similar issue. I was using a Jupyter Notebook and every time I call the map function it performs tokenization from scratch again although the cache files of last run still exists. \r\n\r\nI ran with 20 processes and now in the cache folder there are two groups of cached results of tokenized dataset:\r\n\r\n```\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:56:46 2023 cache-1982fea76aa54a13_00001_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 13:02:08 2023 cache-1982fea76aa54a13_00004_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:56:40 2023 cache-1982fea76aa54a13_00005_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Sat Aug 26 12:50:59 2023 cache-1982fea76aa54a13_00006_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:57:37 2023 cache-1982fea76aa54a13_00007_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:57:31 2023 cache-1982fea76aa54a13_00008_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:59:47 2023 cache-1982fea76aa54a13_00010_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Sat Aug 26 12:59:44 2023 cache-1982fea76aa54a13_00011_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Sat Aug 26 12:55:24 2023 cache-1982fea76aa54a13_00012_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Sat Aug 26 12:56:21 2023 cache-1982fea76aa54a13_00013_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:57:24 2023 cache-1982fea76aa54a13_00014_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 13:00:48 2023 cache-1982fea76aa54a13_00015_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:56:56 2023 cache-1982fea76aa54a13_00017_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:56:54 2023 cache-1982fea76aa54a13_00018_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Sat Aug 26 12:57:27 2023 cache-1982fea76aa54a13_00019_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:15:40 2023 cache-454431f643cdc5e8_00000_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:16:46 2023 cache-454431f643cdc5e8_00001_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:14:53 2023 cache-454431f643cdc5e8_00002_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:13:10 2023 cache-454431f643cdc5e8_00003_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:13:04 2023 cache-454431f643cdc5e8_00004_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:16:42 2023 cache-454431f643cdc5e8_00005_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Wed Aug 23 19:01:29 2023 cache-454431f643cdc5e8_00006_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:16:41 2023 cache-454431f643cdc5e8_00007_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:14:04 2023 cache-454431f643cdc5e8_00008_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:17:41 2023 cache-454431f643cdc5e8_00009_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:17:06 2023 cache-454431f643cdc5e8_00010_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Wed Aug 23 19:17:16 2023 
cache-454431f643cdc5e8_00011_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Wed Aug 23 19:15:13 2023 cache-454431f643cdc5e8_00012_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 241 MB Wed Aug 23 19:16:01 2023 cache-454431f643cdc5e8_00013_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:16:35 2023 cache-454431f643cdc5e8_00014_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:16:20 2023 cache-454431f643cdc5e8_00015_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:14:48 2023 cache-454431f643cdc5e8_00016_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 18:59:32 2023 cache-454431f643cdc5e8_00017_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:17:58 2023 cache-454431f643cdc5e8_00018_of_00020.arrow\r\n.rw-r--r-- fad3ew bii_dsc_community 240 MB Wed Aug 23 19:15:25 2023 cache-454431f643cdc5e8_00019_of_00020.arrow\r\n```\r\n\r\ncan we specify the cache file for map so that it won't redo everything again?", "@Luosuu [map](https://huggingface.co./docs/datasets/v2.14.4/en/package_reference/main_classes#datasets.Dataset.map) has cache_file_name parameter\r\n\r\nIn my case, I do want the cache to detect when the map function changes, so I can't pass a constant cache file name." ]
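Both workarounds floated in this thread correspond to real parameters of `Dataset.map` (`cache_file_name` and `new_fingerprint`). A minimal sketch, assuming a placeholder dataset (`imdb`) and the locally saved tokenizer from this issue:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tok")  # "tok" saved as in the issue body

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

ds = load_dataset("imdb", split="train")  # placeholder dataset

# Option 1: pin the cache file so later sessions reuse it regardless of the hash.
ds_tok = ds.map(tokenize, batched=True, cache_file_name="/tmp/tokenized.arrow")

# Option 2: supply a fingerprint you control instead of the pickled-tokenizer hash;
# change the string whenever the tokenization logic actually changes.
ds_tok = ds.map(tokenize, batched=True, new_fingerprint="bert-base-uncased-tok-v1")
```

Option 2 keeps cache invalidation in the caller's hands, which only partially addresses the "I can't pass a constant cache file name" concern: the fingerprint must be bumped manually, or derived from stable inputs as sketched after the issue body below.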
2023-08-25T12:55:18
2023-08-26T22:08:07
null
NONE
null
Similar issue to https://github.com/huggingface/datasets/issues/5985, but across different sessions rather than two calls in the same session. Unlike that issue, explicitly calling tokenizer(my_args) before the map() doesn't help, because the tokenizer was created with a different hash to begin with... Setup: ``` from transformers import AutoTokenizer AutoTokenizer.from_pretrained('bert-base-uncased').save_pretrained("tok") ``` This prints a different value each time: ``` from transformers import AutoTokenizer from datasets.utils.py_utils import dumps # Huggingface datasets print(hash(dumps(AutoTokenizer.from_pretrained("tok")))) ```
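Since `hash(dumps(...))` above is session-dependent, one workaround is to derive the fingerprint from the tokenizer's files on disk, which do not change between sessions. This is a hypothetical helper, not part of `datasets`:

```python
import hashlib
import os

def stable_tokenizer_fingerprint(tok_dir: str) -> str:
    """Hash the tokenizer's saved files; yields the same value in every session."""
    h = hashlib.sha256()
    for name in sorted(os.listdir(tok_dir)):  # sort for a deterministic order
        with open(os.path.join(tok_dir, name), "rb") as f:
            h.update(name.encode("utf-8"))
            h.update(f.read())
    return h.hexdigest()[:16]  # datasets fingerprints are short hex strings

print(stable_tokenizer_fingerprint("tok"))  # stable across interpreter restarts
```

The result can then be passed as `new_fingerprint` to `map`, so the cache key tracks the tokenizer files rather than the pickled object.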
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6179/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6179/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6178
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6178/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6178/comments
https://api.github.com/repos/huggingface/datasets/issues/6178/events
https://github.com/huggingface/datasets/issues/6178
1,866,610,102
I_kwDODunzps5vQjW2
6,178
'import datasets' throws "invalid syntax error"
{ "login": "elia-ashraf", "id": 128580829, "node_id": "U_kgDOB6n83Q", "avatar_url": "https://avatars.githubusercontent.com/u/128580829?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elia-ashraf", "html_url": "https://github.com/elia-ashraf", "followers_url": "https://api.github.com/users/elia-ashraf/followers", "following_url": "https://api.github.com/users/elia-ashraf/following{/other_user}", "gists_url": "https://api.github.com/users/elia-ashraf/gists{/gist_id}", "starred_url": "https://api.github.com/users/elia-ashraf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elia-ashraf/subscriptions", "organizations_url": "https://api.github.com/users/elia-ashraf/orgs", "repos_url": "https://api.github.com/users/elia-ashraf/repos", "events_url": "https://api.github.com/users/elia-ashraf/events{/privacy}", "received_events_url": "https://api.github.com/users/elia-ashraf/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "This seems to be related to your environment and not the `datasets` code (e.g., this could happen when exposing the Python 3.9 site packages to a lower Python version (interpreter))" ]
2023-08-25T08:35:14
2023-08-29T14:57:17
null
NONE
null
### Describe the bug Hi, I have been trying to import the datasets library but I keep getting this error. `Traceback (most recent call last): File /opt/local/jupyterhub/lib64/python3.9/site-packages/IPython/core/interactiveshell.py:3508 in run_code exec(code_obj, self.user_global_ns, self.user_ns) Cell In[2], line 1 import datasets File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/__init__.py:22 from .arrow_dataset import Dataset File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/arrow_dataset.py:67 from .arrow_writer import ArrowWriter, OptimizedTypedSequence File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/arrow_writer.py:27 from .features import Features, Image, Value File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/features/__init__.py:17 from .audio import Audio File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/features/audio.py:11 from ..download.streaming_download_manager import xopen, xsplitext File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/download/__init__.py:10 from .streaming_download_manager import StreamingDownloadManager File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/download/streaming_download_manager.py:18 from aiohttp.client_exceptions import ClientError File /opt/local/jupyterhub/lib64/python3.9/site-packages/aiohttp/__init__.py:7 from .connector import * # noqa File /opt/local/jupyterhub/lib64/python3.9/site-packages/aiohttp/connector.py:12 from .client import ClientRequest File /opt/local/jupyterhub/lib64/python3.9/site-packages/aiohttp/client.py:144 yield from asyncio.async(resp.release(), loop=loop) ^ SyntaxError: invalid syntax` I have simply used these commands: `import datasets` and `from datasets import load_dataset` ### Environment info The library has been installed on a virtual machine on JupyterHub. Although I have used this library multiple times (on the same VM) before, to train/test ASR or other ML models, I had never encountered this error.
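The last frame is telling: `yield from asyncio.async(...)` only appears in very old aiohttp releases, and `async` has been a reserved keyword since Python 3.7, so such a copy cannot even be parsed under Python 3.9. A minimal diagnostic sketch, assuming `pip` is available in the same environment:

```python
import subprocess
import sys

# Query pip from the interpreter running the notebook; importing aiohttp
# directly would just re-raise the SyntaxError at parse time.
subprocess.run([sys.executable, "-m", "pip", "show", "aiohttp"], check=False)

# If the reported version predates the aiohttp 3.x line, upgrading in that
# same environment is the usual fix:
#   python -m pip install --upgrade aiohttp
```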
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6178/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6177
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6177/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6177/comments
https://api.github.com/repos/huggingface/datasets/issues/6177/events
https://github.com/huggingface/datasets/pull/6177
1,865,490,962
PR_kwDODunzps5Ytky-
6,177
Use object detection images from `huggingface/documentation-images`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005847 / 0.011353 (-0.005506) | 0.003488 / 0.011008 (-0.007521) | 0.079545 / 0.038508 (0.041037) | 0.055114 / 0.023109 (0.032005) | 0.312694 / 0.275898 (0.036796) | 0.338808 / 0.323480 (0.015329) | 0.004573 / 0.007986 (-0.003413) | 0.002818 / 0.004328 (-0.001510) | 0.062102 / 0.004250 (0.057852) | 0.044072 / 0.037052 (0.007019) | 0.317682 / 0.258489 (0.059192) | 0.354139 / 0.293841 (0.060298) | 0.026905 / 0.128546 (-0.101641) | 0.007990 / 0.075646 (-0.067656) | 0.260071 / 0.419271 (-0.159201) | 0.043658 / 0.043533 (0.000125) | 0.313828 / 0.255139 (0.058689) | 0.339678 / 0.283200 (0.056478) | 0.020076 / 0.141683 (-0.121607) | 1.446321 / 1.452155 (-0.005834) | 1.527046 / 1.492716 (0.034330) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197801 / 0.018006 (0.179795) | 0.432874 / 0.000490 (0.432385) | 0.004093 / 0.000200 (0.003893) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023505 / 0.037411 (-0.013906) | 0.072377 / 0.014526 (0.057852) | 0.081058 / 0.176557 (-0.095498) | 0.141628 / 0.737135 (-0.595507) | 0.081622 / 0.296338 (-0.214716) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395005 / 0.215209 (0.179795) | 3.949006 / 2.077655 (1.871352) | 
1.934028 / 1.504120 (0.429908) | 1.756065 / 1.541195 (0.214871) | 1.778719 / 1.468490 (0.310229) | 0.501279 / 4.584777 (-4.083498) | 3.032120 / 3.745712 (-0.713592) | 2.859751 / 5.269862 (-2.410110) | 1.885924 / 4.565676 (-2.679753) | 0.057236 / 0.424275 (-0.367039) | 0.006704 / 0.007607 (-0.000903) | 0.465794 / 0.226044 (0.239750) | 4.648622 / 2.268929 (2.379694) | 2.345649 / 55.444624 (-53.098975) | 1.981122 / 6.876477 (-4.895355) | 2.148235 / 2.142072 (0.006163) | 0.591466 / 4.805227 (-4.213761) | 0.125262 / 6.500664 (-6.375402) | 0.061305 / 0.075469 (-0.014164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243932 / 1.841788 (-0.597856) | 17.912110 / 8.074308 (9.837802) | 13.662097 / 10.191392 (3.470705) | 0.148051 / 0.680424 (-0.532373) | 0.016778 / 0.534201 (-0.517423) | 0.340342 / 0.579283 (-0.238941) | 0.351720 / 0.434364 (-0.082644) | 0.377837 / 0.540337 (-0.162501) | 0.521163 / 1.386936 (-0.865774) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006011 / 0.011353 (-0.005342) | 0.003549 / 0.011008 (-0.007459) | 0.063579 / 0.038508 (0.025071) | 0.056196 / 0.023109 (0.033087) | 0.448879 / 0.275898 (0.172981) | 0.491542 / 0.323480 (0.168062) | 0.004597 / 0.007986 (-0.003389) | 0.002790 / 0.004328 (-0.001539) | 0.063257 / 0.004250 (0.059006) | 0.045653 / 0.037052 (0.008600) | 0.459714 / 0.258489 (0.201225) | 0.491371 / 0.293841 (0.197530) | 0.028124 / 0.128546 (-0.100422) | 0.008016 / 0.075646 (-0.067630) | 0.069418 / 0.419271 (-0.349853) | 0.040393 / 0.043533 (-0.003140) | 0.450978 / 0.255139 (0.195839) | 0.472075 / 0.283200 (0.188875) | 0.020006 / 0.141683 (-0.121677) | 1.451946 / 1.452155 (-0.000209) | 1.513557 / 1.492716 (0.020840) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225416 / 0.018006 (0.207410) | 0.412287 / 0.000490 (0.411797) | 0.004075 / 0.000200 (0.003875) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025949 / 0.037411 (-0.011463) | 0.080633 / 0.014526 (0.066108) | 0.089960 / 0.176557 (-0.086597) | 0.144530 / 0.737135 (-0.592606) | 0.091427 / 0.296338 (-0.204911) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462311 / 0.215209 (0.247102) | 4.605063 / 2.077655 (2.527408) | 2.541083 / 1.504120 (1.036963) | 2.356341 / 1.541195 (0.815147) | 2.389824 / 1.468490 (0.921334) | 0.507397 / 4.584777 (-4.077380) | 3.079023 / 3.745712 (-0.666689) | 2.792025 / 5.269862 (-2.477837) | 1.846931 / 4.565676 (-2.718746) | 0.058422 / 0.424275 (-0.365853) | 0.006409 / 0.007607 (-0.001199) | 0.530648 / 0.226044 (0.304604) | 5.321030 / 2.268929 (3.052101) | 2.978335 / 55.444624 (-52.466289) | 2.641188 / 6.876477 (-4.235288) | 2.780450 / 2.142072 (0.638378) | 0.593864 / 4.805227 (-4.211363) | 0.125394 / 6.500664 (-6.375270) | 0.061432 / 0.075469 (-0.014037) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.337142 / 1.841788 (-0.504646) | 18.841575 / 8.074308 (10.767267) | 14.678622 / 10.191392 (4.487230) | 0.144491 / 0.680424 (-0.535933) | 0.018145 / 0.534201 (-0.516056) | 0.339376 / 0.579283 (-0.239907) | 0.339129 / 0.434364 (-0.095235) | 0.394842 / 0.540337 (-0.145495) | 0.547924 / 1.386936 (-0.839012) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#57af0ab30796df59d28bf933e756ffbe5f34db1e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006478 / 0.011353 (-0.004875) | 0.003845 / 0.011008 (-0.007163) | 0.084179 / 0.038508 (0.045671) | 0.071327 / 0.023109 (0.048217) | 0.315206 / 0.275898 (0.039308) | 0.353477 / 0.323480 (0.029997) | 0.005267 / 0.007986 (-0.002719) | 0.003282 / 0.004328 (-0.001046) | 0.064062 / 0.004250 (0.059811) | 0.051940 / 0.037052 (0.014888) | 0.332004 / 0.258489 (0.073515) | 0.363199 / 0.293841 (0.069358) | 0.030546 / 0.128546 (-0.098000) | 0.008453 / 0.075646 (-0.067193) | 0.287636 / 0.419271 (-0.131636) | 0.051999 / 0.043533 (0.008466) | 0.325220 / 0.255139 (0.070081) | 0.355324 / 0.283200 (0.072125) | 0.023417 / 0.141683 (-0.118266) | 1.473370 / 1.452155 (0.021215) | 1.596903 / 1.492716 (0.104186) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212645 / 0.018006 (0.194638) | 0.463766 / 0.000490 (0.463276) | 0.002834 / 0.000200 (0.002634) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028424 / 0.037411 (-0.008987) | 0.082188 / 0.014526 (0.067662) | 0.777186 / 0.176557 (0.600629) | 0.218290 / 0.737135 (-0.518845) | 0.099098 / 0.296338 (-0.197240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.387138 / 0.215209 (0.171929) | 3.845655 / 2.077655 (1.768000) | 1.929812 / 1.504120 (0.425692) | 1.718263 / 1.541195 (0.177069) | 1.760933 / 1.468490 (0.292443) | 0.475171 / 4.584777 (-4.109606) | 3.523366 / 3.745712 (-0.222346) | 3.167322 / 5.269862 (-2.102540) | 1.975164 / 4.565676 (-2.590513) | 0.056106 / 0.424275 (-0.368169) | 0.007448 / 0.007607 (-0.000159) | 0.459824 / 0.226044 (0.233779) | 4.590566 / 2.268929 (2.321638) | 2.377968 / 55.444624 (-53.066656) | 2.034052 / 6.876477 (-4.842425) | 2.224976 / 2.142072 (0.082904) | 0.575901 / 4.805227 (-4.229326) | 0.131546 / 6.500664 (-6.369118) | 0.059266 / 0.075469 (-0.016203) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254783 / 1.841788 (-0.587005) | 19.497795 / 8.074308 (11.423487) | 13.937672 / 10.191392 (3.746280) | 0.164092 / 0.680424 (-0.516332) | 0.017915 / 0.534201 (-0.516286) | 0.391430 / 0.579283 (-0.187853) | 0.403681 / 0.434364 (-0.030683) | 0.457711 / 0.540337 
(-0.082626) | 0.620395 / 1.386936 (-0.766541) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006793 / 0.011353 (-0.004560) | 0.004101 / 0.011008 (-0.006907) | 0.064780 / 0.038508 (0.026272) | 0.071087 / 0.023109 (0.047977) | 0.401963 / 0.275898 (0.126065) | 0.433085 / 0.323480 (0.109605) | 0.005348 / 0.007986 (-0.002638) | 0.003289 / 0.004328 (-0.001039) | 0.065209 / 0.004250 (0.060958) | 0.054202 / 0.037052 (0.017150) | 0.405629 / 0.258489 (0.147140) | 0.440326 / 0.293841 (0.146485) | 0.032283 / 0.128546 (-0.096263) | 0.008510 / 0.075646 (-0.067137) | 0.071144 / 0.419271 (-0.348127) | 0.047414 / 0.043533 (0.003881) | 0.402065 / 0.255139 (0.146926) | 0.421217 / 0.283200 (0.138017) | 0.021924 / 0.141683 (-0.119759) | 1.490067 / 1.452155 (0.037913) | 1.539134 / 1.492716 (0.046417) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280072 / 0.018006 (0.262066) | 0.456130 / 0.000490 (0.455641) | 0.020926 / 0.000200 (0.020726) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032040 / 0.037411 (-0.005371) | 0.092343 / 0.014526 (0.077817) | 0.104866 / 0.176557 (-0.071690) | 0.156631 / 0.737135 (-0.580505) | 0.107203 / 0.296338 (-0.189136) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426268 / 0.215209 (0.211059) | 4.255539 / 2.077655 (2.177884) | 2.285077 / 1.504120 (0.780957) | 2.114277 / 1.541195 (0.573083) | 2.159242 
/ 1.468490 (0.690752) | 0.489421 / 4.584777 (-4.095356) | 3.630797 / 3.745712 (-0.114915) | 3.205238 / 5.269862 (-2.064624) | 1.985846 / 4.565676 (-2.579830) | 0.057436 / 0.424275 (-0.366839) | 0.007154 / 0.007607 (-0.000454) | 0.507294 / 0.226044 (0.281250) | 5.050105 / 2.268929 (2.781176) | 2.750474 / 55.444624 (-52.694151) | 2.404116 / 6.876477 (-4.472360) | 2.576483 / 2.142072 (0.434411) | 0.584909 / 4.805227 (-4.220318) | 0.130695 / 6.500664 (-6.369969) | 0.059743 / 0.075469 (-0.015726) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352702 / 1.841788 (-0.489086) | 19.687944 / 8.074308 (11.613636) | 14.991847 / 10.191392 (4.800455) | 0.185164 / 0.680424 (-0.495260) | 0.020314 / 0.534201 (-0.513887) | 0.395162 / 0.579283 (-0.184121) | 0.408917 / 0.434364 (-0.025447) | 0.467049 / 0.540337 (-0.073288) | 0.649209 / 1.386936 (-0.737727) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#885518608ceab83b7ed8ceba7a0b72bc68096026 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006142 / 0.011353 (-0.005211) | 0.003621 / 0.011008 (-0.007387) | 0.079880 / 0.038508 (0.041372) | 0.059283 / 0.023109 (0.036173) | 0.310971 / 0.275898 (0.035072) | 0.351620 / 0.323480 (0.028140) | 0.003453 / 0.007986 (-0.004532) | 0.003785 / 0.004328 (-0.000543) | 0.062395 / 0.004250 (0.058145) | 0.047614 / 0.037052 (0.010562) | 0.312688 / 0.258489 (0.054199) | 0.363762 / 0.293841 (0.069921) | 0.027051 / 0.128546 (-0.101495) | 0.007920 / 0.075646 (-0.067726) | 0.261080 / 0.419271 (-0.158192) | 0.044476 / 0.043533 (0.000943) | 0.312615 / 0.255139 (0.057476) | 0.343672 / 0.283200 (0.060472) | 0.022723 / 0.141683 (-0.118960) | 1.441449 / 1.452155 (-0.010706) | 1.509253 / 1.492716 (0.016536) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193171 / 0.018006 (0.175165) | 0.434771 / 0.000490 (0.434281) | 0.003114 / 
0.000200 (0.002914) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024209 / 0.037411 (-0.013203) | 0.073891 / 0.014526 (0.059365) | 0.083497 / 0.176557 (-0.093060) | 0.144962 / 0.737135 (-0.592173) | 0.084594 / 0.296338 (-0.211745) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392512 / 0.215209 (0.177303) | 3.912692 / 2.077655 (1.835037) | 1.914010 / 1.504120 (0.409890) | 1.743827 / 1.541195 (0.202632) | 1.829244 / 1.468490 (0.360753) | 0.497740 / 4.584777 (-4.087037) | 2.979222 / 3.745712 (-0.766490) | 2.849786 / 5.269862 (-2.420076) | 1.874411 / 4.565676 (-2.691265) | 0.057270 / 0.424275 (-0.367005) | 0.006673 / 0.007607 (-0.000934) | 0.460724 / 0.226044 (0.234679) | 4.600617 / 2.268929 (2.331689) | 2.333178 / 55.444624 (-53.111446) | 1.999902 / 6.876477 (-4.876575) | 2.170600 / 2.142072 (0.028528) | 0.587716 / 4.805227 (-4.217511) | 0.126374 / 6.500664 (-6.374290) | 0.061926 / 0.075469 (-0.013543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.229767 / 1.841788 (-0.612021) | 18.494462 / 8.074308 (10.420154) | 13.799801 / 10.191392 (3.608409) | 0.137952 / 0.680424 (-0.542472) | 0.017037 / 0.534201 (-0.517164) | 0.333252 / 0.579283 (-0.246031) | 0.357276 / 0.434364 (-0.077088) | 0.380069 / 0.540337 (-0.160268) | 0.526968 / 1.386936 (-0.859968) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006185 / 0.011353 (-0.005168) | 0.003595 / 0.011008 (-0.007413) | 0.063371 / 0.038508 (0.024863) | 0.060461 / 0.023109 (0.037351) | 0.455016 / 0.275898 (0.179118) | 0.490505 / 0.323480 (0.167026) | 0.004738 / 0.007986 (-0.003247) | 0.002852 / 0.004328 (-0.001477) | 0.064161 / 0.004250 (0.059910) | 0.047411 / 0.037052 (0.010359) | 0.453815 / 0.258489 (0.195326) | 0.485354 / 0.293841 (0.191513) | 0.028358 / 0.128546 (-0.100188) | 0.008101 / 0.075646 (-0.067545) | 0.068399 / 0.419271 (-0.350873) | 0.040928 / 0.043533 (-0.002605) | 0.462263 / 0.255139 (0.207124) | 0.478773 / 0.283200 (0.195574) | 0.019787 / 0.141683 (-0.121896) | 1.475798 / 1.452155 (0.023643) | 1.563890 / 1.492716 (0.071174) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239701 / 0.018006 (0.221695) | 0.417442 / 0.000490 (0.416953) | 0.005895 / 0.000200 (0.005695) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026155 / 0.037411 (-0.011256) | 0.081264 / 0.014526 (0.066738) | 0.089734 / 0.176557 (-0.086822) | 0.143965 / 0.737135 (-0.593171) | 0.092156 / 0.296338 (-0.204182) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456420 / 0.215209 (0.241211) | 4.545675 / 2.077655 (2.468020) | 2.477141 / 1.504120 (0.973022) | 2.295142 / 1.541195 (0.753947) | 2.349525 / 1.468490 (0.881035) | 0.502485 / 4.584777 (-4.082292) | 3.072347 / 3.745712 (-0.673365) | 2.798565 / 5.269862 (-2.471296) | 1.849030 / 4.565676 (-2.716647) | 0.057789 / 0.424275 (-0.366487) | 0.006436 / 0.007607 (-0.001172) | 0.529648 / 0.226044 (0.303604) | 5.285670 / 2.268929 (3.016741) | 2.954964 / 55.444624 (-52.489660) | 2.593161 / 6.876477 (-4.283316) | 2.735254 / 2.142072 (0.593181) | 0.587635 / 4.805227 (-4.217592) | 0.124732 / 6.500664 (-6.375932) | 0.060999 / 0.075469 (-0.014470) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354957 / 1.841788 (-0.486831) | 18.803998 / 8.074308 (10.729690) | 14.902712 / 10.191392 (4.711320) | 0.146729 / 0.680424 (-0.533695) | 0.017989 / 0.534201 (-0.516212) | 0.333633 / 0.579283 (-0.245650) | 0.347685 / 0.434364 (-0.086679) | 0.386497 / 0.540337 (-0.153840) | 0.590885 / 1.386936 (-0.796051) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#392d8a46f4da066408785281d9b87760f7273254 \"CML watermark\")\n" ]
2023-08-24T16:16:09
2023-08-25T16:30:00
2023-08-25T16:21:17
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6177/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6177", "html_url": "https://github.com/huggingface/datasets/pull/6177", "diff_url": "https://github.com/huggingface/datasets/pull/6177.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6177.patch", "merged_at": "2023-08-25T16:21:17" }
true
https://api.github.com/repos/huggingface/datasets/issues/6176
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6176/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6176/comments
https://api.github.com/repos/huggingface/datasets/issues/6176/events
https://github.com/huggingface/datasets/issues/6176
1,864,436,408
I_kwDODunzps5vIQq4
6,176
How to limit the size of a memory-mapped file?
{ "login": "williamium3000", "id": 47763855, "node_id": "MDQ6VXNlcjQ3NzYzODU1", "avatar_url": "https://avatars.githubusercontent.com/u/47763855?v=4", "gravatar_id": "", "url": "https://api.github.com/users/williamium3000", "html_url": "https://github.com/williamium3000", "followers_url": "https://api.github.com/users/williamium3000/followers", "following_url": "https://api.github.com/users/williamium3000/following{/other_user}", "gists_url": "https://api.github.com/users/williamium3000/gists{/gist_id}", "starred_url": "https://api.github.com/users/williamium3000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/williamium3000/subscriptions", "organizations_url": "https://api.github.com/users/williamium3000/orgs", "repos_url": "https://api.github.com/users/williamium3000/repos", "events_url": "https://api.github.com/users/williamium3000/events{/privacy}", "received_events_url": "https://api.github.com/users/williamium3000/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! Can you share the error this reproducer throws in your environment? `streaming=True` streams the dataset as it's iterated over without creating a memory-map file.", "The trace of the error. Streaming works but is slower.\r\n```\r\nRoot Cause (first observed failure):\r\n[0]:\r\n time : 2023-08-24_06:06:01\r\n host : compute-126.cm.cluster\r\n rank : 0 (local_rank: 0)\r\n exitcode : 1 (pid: 48442)\r\n error_file: /tmp/torchelastic_4fqzcuuz/none_rx2470jl/attempt_0/0/error.json\r\n traceback : Traceback (most recent call last):\r\n File \"/users/yli7/.conda/envs/pytorch2.0/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 346, in wrapper\r\n return f(*args, **kwargs)\r\n File \"Pretrain.py\", line 214, in main\r\n pair_dataset, c4_dataset = create_dataset('pretrain', config)\r\n File \"/dcs05/qiao/data/william/project/DaVinci/dataset/__init__.py\", line 109, in create_dataset\r\n c4_dataset = load_dataset(\"c4\", \"en\", split=\"train\").to_iterable_dataset(num_shards=1024).map(pre_caption_huggingface)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/load.py\", line 1810, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1145, in as_dataset\r\n datasets = map_nested(\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 436, in map_nested\r\n return function(data_struct)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1175, in _build_single_dataset\r\n ds = self._as_dataset(\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1246, in _as_dataset\r\n dataset_kwargs = ArrowReader(cache_dir, self.info).read(\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 244, in read\r\n return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 265, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 200, in _read_files\r\n pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 336, in _get_table_from_filename\r\n table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 357, in read_table\r\n return table_cls.from_file(filename)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/table.py\", line 1059, in from_file\r\n table = _memory_mapped_arrow_table_from_file(filename)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/table.py\", line 65, in _memory_mapped_arrow_table_from_file\r\n opened_stream = _memory_mapped_record_batch_reader_from_file(filename)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/table.py\", line 50, in _memory_mapped_record_batch_reader_from_file\r\n memory_mapped_stream = pa.memory_map(filename)\r\n File \"pyarrow/io.pxi\", line 1009, in pyarrow.lib.memory_map\r\n File \"pyarrow/io.pxi\", line 956, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow/error.pxi\", line 144, in 
pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 115, in pyarrow.lib.check_status\r\n OSError: Memory mapping file failed: Cannot allocate memory\r\n```", "This issue has previously been reported here: https://github.com/huggingface/datasets/issues/5710. Reporting it in the Arrow repo makes more sense as they have control over memory mapping.\r\n\r\nPS: this is the API to reduce the size of the generated Arrow file:\r\n```python\r\nfrom datasets import load_dataset_builder\r\nbuilder = load_dataset_builder(\"c4\", \"en\")\r\nbuilder.download_and_prepare(max_shard_size=\"5GB\")\r\ndataset = builder.as_dataset()\r\n```\r\n\r\nIf this resolves the issue, we can consider exposing `max_shard_size` in `load_dataset`.", "Thanks for the response. The problem seems not resolved. The memory I allocated to the environment is 64G and the following error still occurs\r\n`Python 3.8.16 (default, Jun 12 2023, 18:09:05) \r\n[GCC 11.2.0] :: Anaconda, Inc. on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from datasets import load_dataset_builder\r\n>>> builder = load_dataset_builder(\"c4\", \"en\")\r\n>>> builder.download_and_prepare(max_shard_size=\"5GB\")\r\nFound cached dataset c4 (/users/yli7/.cache/huggingface/datasets/c4/en/0.0.0/df532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01)\r\n>>> dataset = builder.as_dataset()\r\n 0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1145, in as_dataset\r\n datasets = map_nested(\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 444, in map_nested\r\n mapped = [\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 445, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True, None))\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 347, in _single_map_nested\r\n return function(data_struct)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1175, in _build_single_dataset\r\n ds = self._as_dataset(\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/builder.py\", line 1246, in _as_dataset\r\n dataset_kwargs = ArrowReader(cache_dir, self.info).read(\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 244, in read\r\n return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 265, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 200, in _read_files\r\n pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 336, in _get_table_from_filename\r\n table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/arrow_reader.py\", line 357, in read_table\r\n return table_cls.from_file(filename)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/table.py\", line 1059, in from_file\r\n table = _memory_mapped_arrow_table_from_file(filename)\r\n File 
\"/users/yli7/.local/lib/python3.8/site-packages/datasets/table.py\", line 65, in _memory_mapped_arrow_table_from_file\r\n opened_stream = _memory_mapped_record_batch_reader_from_file(filename)\r\n File \"/users/yli7/.local/lib/python3.8/site-packages/datasets/table.py\", line 50, in _memory_mapped_record_batch_reader_from_file\r\n memory_mapped_stream = pa.memory_map(filename)\r\n File \"pyarrow/io.pxi\", line 1009, in pyarrow.lib.memory_map\r\n File \"pyarrow/io.pxi\", line 956, in pyarrow.lib.MemoryMappedFile._open\r\n File \"pyarrow/error.pxi\", line 144, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 115, in pyarrow.lib.check_status\r\nOSError: Memory mapping file failed: Cannot allocate memory`" ]
2023-08-24T05:33:45
2023-08-26T05:09:56
null
NONE
null
### Describe the bug Hugging Face datasets use memory-mapped files to map large datasets into memory for fast access. However, it seems that the library will occupy all available memory for memory-mapped files. This is troublesome because our cluster only grants each job a small portion of the machine's memory (once the job exceeds that limit, no more memory can be allocated), yet when the dataset checks the total memory, the whole machine is taken into account, so the library tries to allocate more memory than the job is allowed. Is there a way to explicitly limit the size of the memory-mapped file? ### Steps to reproduce the bug python >>> from datasets import load_dataset >>> dataset = load_dataset("c4", "en", streaming=True) ### Expected behavior In a normal environment this causes no problem. However, when the system allocates only a portion of the memory to the program and the dataset checks the total memory, all of the machine's memory is taken into account, which makes the library try to allocate more memory than allowed. ### Environment info Linux cluster with SGE (Sun Grid Engine)
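For illustration only, a minimal sketch of the streaming workaround suggested in the thread above, which avoids creating an Arrow memory map entirely (the `split="train"` argument and the `url` field name are specific to C4 and are assumptions for other datasets):

```python
from datasets import load_dataset

# Streaming mode fetches shards lazily while you iterate, so no Arrow file
# is memory-mapped and the job's memory allotment is not exceeded.
ds = load_dataset("c4", "en", split="train", streaming=True)
for i, example in enumerate(ds):
    print(example["url"])
    if i == 2:  # peek at a few records only
        break
```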
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6176/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6176/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6175
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6175/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6175/comments
https://api.github.com/repos/huggingface/datasets/issues/6175/events
https://github.com/huggingface/datasets/pull/6175
1,863,592,678
PR_kwDODunzps5YnKlx
6,175
PyArrow 13 CI fixes
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006095 / 0.011353 (-0.005258) | 0.003580 / 0.011008 (-0.007429) | 0.080146 / 0.038508 (0.041638) | 0.063445 / 0.023109 (0.040336) | 0.321930 / 0.275898 (0.046032) | 0.397933 / 0.323480 (0.074453) | 0.003455 / 0.007986 (-0.004531) | 0.002856 / 0.004328 (-0.001472) | 0.062938 / 0.004250 (0.058687) | 0.048896 / 0.037052 (0.011843) | 0.333070 / 0.258489 (0.074581) | 0.404485 / 0.293841 (0.110644) | 0.027156 / 0.128546 (-0.101390) | 0.007974 / 0.075646 (-0.067672) | 0.261505 / 0.419271 (-0.157766) | 0.045328 / 0.043533 (0.001795) | 0.311203 / 0.255139 (0.056064) | 0.390006 / 0.283200 (0.106806) | 0.023650 / 0.141683 (-0.118033) | 1.468856 / 1.452155 (0.016701) | 1.503867 / 1.492716 (0.011151) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202110 / 0.018006 (0.184103) | 0.436433 / 0.000490 (0.435944) | 0.002278 / 0.000200 (0.002078) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024575 / 0.037411 (-0.012836) | 0.073005 / 0.014526 (0.058479) | 0.083609 / 0.176557 (-0.092947) | 0.144881 / 0.737135 (-0.592254) | 0.083495 / 0.296338 (-0.212844) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398911 / 0.215209 (0.183702) | 3.994035 / 2.077655 (1.916381) | 2.056768 / 1.504120 (0.552649) | 1.913242 / 1.541195 (0.372047) | 1.932934 / 1.468490 
(0.464444) | 0.498953 / 4.584777 (-4.085824) | 3.031107 / 3.745712 (-0.714605) | 2.817165 / 5.269862 (-2.452696) | 1.858886 / 4.565676 (-2.706790) | 0.056977 / 0.424275 (-0.367299) | 0.006634 / 0.007607 (-0.000973) | 0.472580 / 0.226044 (0.246536) | 4.738301 / 2.268929 (2.469372) | 2.373938 / 55.444624 (-53.070686) | 2.021057 / 6.876477 (-4.855420) | 2.195419 / 2.142072 (0.053346) | 0.585182 / 4.805227 (-4.220045) | 0.124260 / 6.500664 (-6.376405) | 0.060250 / 0.075469 (-0.015219) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.227350 / 1.841788 (-0.614438) | 18.496525 / 8.074308 (10.422216) | 13.946658 / 10.191392 (3.755266) | 0.140024 / 0.680424 (-0.540399) | 0.017077 / 0.534201 (-0.517124) | 0.334415 / 0.579283 (-0.244868) | 0.351118 / 0.434364 (-0.083246) | 0.379556 / 0.540337 (-0.160782) | 0.525064 / 1.386936 (-0.861872) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006176 / 0.011353 (-0.005177) | 0.003648 / 0.011008 (-0.007360) | 0.063461 / 0.038508 (0.024953) | 0.062770 / 0.023109 (0.039660) | 0.448786 / 0.275898 (0.172888) | 0.486490 / 0.323480 (0.163010) | 0.005527 / 0.007986 (-0.002458) | 0.002860 / 0.004328 (-0.001469) | 0.063803 / 0.004250 (0.059553) | 0.049657 / 0.037052 (0.012604) | 0.449625 / 0.258489 (0.191136) | 0.489378 / 0.293841 (0.195537) | 0.028406 / 0.128546 (-0.100140) | 0.008062 / 0.075646 (-0.067584) | 0.068417 / 0.419271 (-0.350854) | 0.040854 / 0.043533 (-0.002678) | 0.461670 / 0.255139 (0.206531) | 0.481622 / 0.283200 (0.198423) | 0.021018 / 0.141683 (-0.120665) | 1.450328 / 1.452155 (-0.001826) | 1.501283 / 1.492716 (0.008567) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269824 / 0.018006 (0.251817) | 0.412296 / 0.000490 (0.411807) | 0.039582 / 0.000200 (0.039382) | 0.000266 / 0.000054 (0.000211) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026436 / 0.037411 (-0.010976) | 0.080633 / 0.014526 (0.066107) | 0.089786 / 0.176557 (-0.086770) | 0.145020 / 0.737135 (-0.592115) | 0.092327 / 0.296338 (-0.204012) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.464349 / 0.215209 (0.249140) | 4.630631 / 2.077655 (2.552976) | 2.560527 / 1.504120 (1.056407) | 2.374195 / 1.541195 (0.833000) | 2.424774 / 1.468490 (0.956284) | 0.510428 / 4.584777 (-4.074349) | 3.099805 / 3.745712 (-0.645907) | 2.781096 / 5.269862 (-2.488765) | 1.854276 / 4.565676 (-2.711400) | 0.058102 / 0.424275 (-0.366173) | 0.006365 / 0.007607 (-0.001242) | 0.534082 / 0.226044 (0.308038) | 5.355003 / 2.268929 (3.086074) | 3.012546 / 55.444624 (-52.432078) | 2.665222 / 6.876477 (-4.211255) | 2.821014 / 2.142072 (0.678942) | 0.597733 / 4.805227 (-4.207494) | 0.125433 / 6.500664 (-6.375231) | 0.060802 / 0.075469 (-0.014667) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345699 / 1.841788 (-0.496088) | 18.836083 / 8.074308 (10.761774) | 14.895458 / 10.191392 (4.704066) | 0.146843 / 0.680424 (-0.533581) | 0.018082 / 0.534201 (-0.516119) | 0.335729 / 0.579283 (-0.243554) | 0.351013 / 0.434364 (-0.083351) | 0.388435 / 0.540337 (-0.151902) | 0.543826 / 1.386936 (-0.843110) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d0c7e8c4808a1fb6ee7234b4caa25aa9fcfdc88f \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006593 / 0.011353 (-0.004760) | 0.004089 / 0.011008 (-0.006919) | 0.084753 / 0.038508 (0.046245) | 0.079899 / 0.023109 (0.056790) | 0.311528 / 0.275898 (0.035630) | 0.349722 / 0.323480 (0.026243) | 0.004288 / 0.007986 (-0.003698) | 0.004552 / 0.004328 (0.000224) | 0.065896 / 0.004250 (0.061646) | 0.053813 / 0.037052 (0.016760) | 0.316958 / 0.258489 (0.058469) | 0.367011 / 0.293841 (0.073170) | 0.031082 / 0.128546 (-0.097464) | 0.008684 / 0.075646 (-0.066963) | 0.288003 / 0.419271 (-0.131268) | 0.052560 / 0.043533 (0.009027) | 0.305589 / 0.255139 (0.050450) | 0.349656 / 0.283200 (0.066457) | 0.023857 / 0.141683 (-0.117826) | 1.462360 / 1.452155 (0.010205) | 1.568170 / 1.492716 (0.075454) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272342 / 0.018006 (0.254336) | 0.585108 / 0.000490 (0.584618) | 0.003427 / 0.000200 (0.003227) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030347 / 0.037411 (-0.007064) | 0.086325 / 0.014526 (0.071799) | 0.100958 / 0.176557 (-0.075598) | 0.156534 / 0.737135 (-0.580601) | 0.102506 / 0.296338 (-0.193832) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406625 / 0.215209 (0.191416) | 4.065957 / 2.077655 (1.988302) | 2.075867 / 1.504120 (0.571747) | 1.914390 / 1.541195 (0.373196) | 2.013321 / 1.468490 (0.544831) | 0.486832 / 4.584777 (-4.097945) | 3.545940 / 3.745712 (-0.199772) | 3.323226 / 5.269862 (-1.946635) | 2.067742 / 4.565676 (-2.497934) | 0.057884 / 0.424275 (-0.366391) | 0.007751 / 0.007607 (0.000144) | 0.484923 / 0.226044 (0.258878) | 4.844885 / 2.268929 (2.575956) | 2.569828 / 55.444624 (-52.874796) | 2.224058 / 6.876477 (-4.652419) | 2.485587 / 2.142072 (0.343515) | 0.584311 / 4.805227 (-4.220916) | 0.134984 / 6.500664 (-6.365680) | 0.062164 / 0.075469 (-0.013305) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247182 / 1.841788 (-0.594605) | 20.107500 / 8.074308 (12.033192) | 14.194444 / 10.191392 (4.003052) | 0.147134 / 0.680424 (-0.533290) | 0.018062 / 0.534201 (-0.516138) | 0.392029 / 0.579283 (-0.187254) | 0.402991 / 0.434364 (-0.031373) | 0.457600 / 0.540337 (-0.082737) | 0.632553 / 
1.386936 (-0.754383) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006920 / 0.011353 (-0.004433) | 0.004257 / 0.011008 (-0.006751) | 0.065233 / 0.038508 (0.026725) | 0.078151 / 0.023109 (0.055042) | 0.389141 / 0.275898 (0.113243) | 0.431518 / 0.323480 (0.108038) | 0.005752 / 0.007986 (-0.002234) | 0.003584 / 0.004328 (-0.000745) | 0.065173 / 0.004250 (0.060922) | 0.059113 / 0.037052 (0.022060) | 0.398225 / 0.258489 (0.139736) | 0.430980 / 0.293841 (0.137139) | 0.032802 / 0.128546 (-0.095744) | 0.008702 / 0.075646 (-0.066945) | 0.071345 / 0.419271 (-0.347926) | 0.048269 / 0.043533 (0.004736) | 0.389264 / 0.255139 (0.134125) | 0.416008 / 0.283200 (0.132809) | 0.024845 / 0.141683 (-0.116838) | 1.499100 / 1.452155 (0.046945) | 1.576397 / 1.492716 (0.083681) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296674 / 0.018006 (0.278668) | 0.540108 / 0.000490 (0.539619) | 0.004293 / 0.000200 (0.004093) | 0.000151 / 0.000054 (0.000096) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034108 / 0.037411 (-0.003303) | 0.092747 / 0.014526 (0.078221) | 0.112203 / 0.176557 (-0.064354) | 0.162728 / 0.737135 (-0.574407) | 0.109955 / 0.296338 (-0.186383) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432006 / 0.215209 (0.216797) | 4.297591 / 2.077655 (2.219937) | 2.379645 / 1.504120 (0.875525) | 2.218680 / 1.541195 (0.677485) | 2.314608 / 1.468490 (0.846117) | 
0.495562 / 4.584777 (-4.089215) | 3.589787 / 3.745712 (-0.155925) | 3.349593 / 5.269862 (-1.920268) | 2.119893 / 4.565676 (-2.445783) | 0.057976 / 0.424275 (-0.366299) | 0.007612 / 0.007607 (0.000005) | 0.509422 / 0.226044 (0.283378) | 5.101444 / 2.268929 (2.832515) | 2.794532 / 55.444624 (-52.650092) | 2.459033 / 6.876477 (-4.417444) | 2.714424 / 2.142072 (0.572352) | 0.588444 / 4.805227 (-4.216784) | 0.135763 / 6.500664 (-6.364901) | 0.062593 / 0.075469 (-0.012876) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.361415 / 1.841788 (-0.480372) | 20.940684 / 8.074308 (12.866376) | 15.161364 / 10.191392 (4.969972) | 0.154243 / 0.680424 (-0.526181) | 0.020305 / 0.534201 (-0.513896) | 0.397438 / 0.579283 (-0.181845) | 0.415047 / 0.434364 (-0.019317) | 0.473250 / 0.540337 (-0.067088) | 0.740681 / 1.386936 (-0.646255) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6e84937af4f24194bf61f09244ebef6528fb7c4c \"CML watermark\")\n" ]
2023-08-23T15:45:53
2023-08-25T13:15:59
2023-08-25T13:06:52
CONTRIBUTOR
null
Fixes: * bumps the PyArrow version check in `cast_array_to_feature` to avoid the offset bug (still not fixed upstream) * aligns the Pandas formatting tests with the NumPy ones (the current test fails due to https://github.com/apache/arrow/pull/35656, which requires `.to_pandas(coerce_temporal_nanoseconds=True)` to always return `datetime64[ns]` objects) Fix #6173
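A minimal sketch of the Pandas-conversion change referenced above (assumes PyArrow >= 13, where `coerce_temporal_nanoseconds` was introduced; on older versions nanosecond precision was the default and the keyword does not exist):

```python
import pyarrow as pa

# PyArrow 13 keeps the stored (microsecond) precision when converting to
# Pandas; the keyword below coerces timestamps back to nanoseconds.
table = pa.table({"ts": pa.array([0], type=pa.timestamp("us", tz="UTC"))})
series = table.to_pandas(coerce_temporal_nanoseconds=True)["ts"]
print(series.dtype)  # datetime64[ns, UTC]
```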
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6175/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6175", "html_url": "https://github.com/huggingface/datasets/pull/6175", "diff_url": "https://github.com/huggingface/datasets/pull/6175.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6175.patch", "merged_at": "2023-08-25T13:06:52" }
true
https://api.github.com/repos/huggingface/datasets/issues/6173
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6173/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6173/comments
https://api.github.com/repos/huggingface/datasets/issues/6173/events
https://github.com/huggingface/datasets/issues/6173
1,863,422,065
I_kwDODunzps5vEZBx
6,173
Fix CI for pyarrow 13.0.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2023-08-23T14:11:20
2023-08-25T13:06:53
2023-08-25T13:06:53
MEMBER
null
pyarrow 13.0.0 just came out ``` FAILED tests/test_formatting.py::ArrowExtractorTest::test_pandas_extractor - AssertionError: Attributes of Series are different Attribute "dtype" are different [left]: datetime64[us, UTC] [right]: datetime64[ns, UTC] ``` ``` FAILED tests/test_table.py::test_cast_sliced_fixed_size_array_to_features - TypeError: Couldn't cast array of type fixed_size_list<item: int32>[3] to Sequence(feature=Value(dtype='int64', id=None), length=3, id=None) ``` e.g. in https://github.com/huggingface/datasets/actions/runs/5952253963/job/16143847230 The first error may be related to https://github.com/apache/arrow/issues/33321; the second one may be because `feature.length * len(array) == len(array_values)` is no longer satisfied somehow?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6173/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/6173/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6172
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6172/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6172/comments
https://api.github.com/repos/huggingface/datasets/issues/6172/events
https://github.com/huggingface/datasets/issues/6172
1,863,318,027
I_kwDODunzps5vD_oL
6,172
Make Dataset streaming queries retryable
{ "login": "rojagtap", "id": 42299342, "node_id": "MDQ6VXNlcjQyMjk5MzQy", "avatar_url": "https://avatars.githubusercontent.com/u/42299342?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rojagtap", "html_url": "https://github.com/rojagtap", "followers_url": "https://api.github.com/users/rojagtap/followers", "following_url": "https://api.github.com/users/rojagtap/following{/other_user}", "gists_url": "https://api.github.com/users/rojagtap/gists{/gist_id}", "starred_url": "https://api.github.com/users/rojagtap/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rojagtap/subscriptions", "organizations_url": "https://api.github.com/users/rojagtap/orgs", "repos_url": "https://api.github.com/users/rojagtap/repos", "events_url": "https://api.github.com/users/rojagtap/events{/privacy}", "received_events_url": "https://api.github.com/users/rojagtap/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi! The streaming mode also retries requests - `datasets.config.STREAMING_READ_MAX_RETRIES` (20 sec by default) controls the number of retries and `datasets.config.STREAMING_READ_RETRY_INTERVAL` (5 sec) the sleep time between retries.\r\n\r\n> At step 1800 I got a 504 HTTP status code error from Huggingface hub for my pytorch dataloader\r\n\r\nA minor Hub outage that we experienced yesterday could be the cause." ]
2023-08-23T13:15:38
2023-08-24T14:29:27
null
NONE
null
### Feature request Streaming datasets, as intended, do not load the entire dataset into memory or onto disk. However, while querying the next data chunk from the remote, the service may occasionally be down, or other issues may cause the query to fail. In such a scenario, it would be nice to make these queries retryable (perhaps with a backoff strategy). ### Motivation I was working on a model that checkpoints every 1000 steps. At step 1800 I got a 504 HTTP status code error from Huggingface hub for my pytorch `dataloader`. Given the size of my model and data, it took around 2 hours to reach step 1800, and it will now take about an hour to recover the lost 800 steps. A retryable querying strategy would avoid this. ### Your contribution It would be best if someone with experience in this area took this up, as it would require some testing.
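A minimal sketch of raising the built-in retry knobs named in the reply above (the attribute names are taken from that reply; the defaults noted below are assumptions that may differ across `datasets` versions):

```python
import datasets.config

# More retries and a longer pause between them for flaky connections to the Hub.
datasets.config.STREAMING_READ_MAX_RETRIES = 40     # number of retries (default: 20)
datasets.config.STREAMING_READ_RETRY_INTERVAL = 10  # seconds between retries (default: 5)
```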
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6172/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6171
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6171/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6171/comments
https://api.github.com/repos/huggingface/datasets/issues/6171/events
https://github.com/huggingface/datasets/pull/6171
1,862,922,767
PR_kwDODunzps5Yk4AS
6,171
Fix typo in about_mapstyle_vs_iterable.mdx
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6171). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009315 / 0.011353 (-0.002038) | 0.004931 / 0.011008 (-0.006077) | 0.100534 / 0.038508 (0.062026) | 0.089270 / 0.023109 (0.066161) | 0.394995 / 0.275898 (0.119097) | 0.440244 / 0.323480 (0.116764) | 0.006026 / 0.007986 (-0.001959) | 0.004252 / 0.004328 (-0.000077) | 0.078828 / 0.004250 (0.074577) | 0.066770 / 0.037052 (0.029718) | 0.411152 / 0.258489 (0.152663) | 0.445616 / 0.293841 (0.151775) | 0.048344 / 0.128546 (-0.080203) | 0.013700 / 0.075646 (-0.061946) | 0.361205 / 0.419271 (-0.058066) | 0.072085 / 0.043533 (0.028552) | 0.399173 / 0.255139 (0.144034) | 0.439334 / 0.283200 (0.156134) | 0.035815 / 0.141683 (-0.105868) | 1.779023 / 1.452155 (0.326868) | 1.865099 / 1.492716 (0.372383) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275978 / 0.018006 (0.257972) | 0.588850 / 0.000490 (0.588360) | 0.004953 / 0.000200 (0.004754) | 0.000109 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031329 / 0.037411 (-0.006082) | 0.095435 / 0.014526 (0.080910) | 0.111182 / 0.176557 (-0.065375) | 0.177692 / 0.737135 (-0.559444) | 0.113345 / 0.296338 (-0.182993) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.577882 / 0.215209 (0.362673) | 5.865872 / 2.077655 (3.788217) | 2.664218 / 1.504120 (1.160098) | 2.383354 / 1.541195 (0.842159) | 2.336821 / 1.468490 (0.868331) | 0.834585 / 4.584777 (-3.750192) | 5.418720 / 3.745712 (1.673008) | 4.551790 / 5.269862 (-0.718072) | 2.921874 / 4.565676 (-1.643803) | 0.095738 / 0.424275 (-0.328537) | 0.009625 / 0.007607 (0.002018) | 0.688317 / 0.226044 (0.462273) | 6.831826 / 2.268929 (4.562897) | 3.482607 / 55.444624 (-51.962017) | 2.633482 / 6.876477 (-4.242995) | 2.878786 / 2.142072 (0.736714) | 0.971615 / 4.805227 (-3.833613) | 0.208661 / 6.500664 (-6.292003) | 0.080271 / 0.075469 (0.004802) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.661193 / 1.841788 (-0.180594) | 24.223041 / 8.074308 (16.148733) | 21.621791 / 10.191392 (11.430399) | 0.243809 / 0.680424 (-0.436614) | 0.031630 / 0.534201 (-0.502571) | 0.501408 / 0.579283 (-0.077875) | 0.600002 / 0.434364 (0.165638) | 0.572066 / 0.540337 (0.031728) | 0.791992 / 1.386936 (-0.594944) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009410 / 0.011353 (-0.001943) | 0.005255 / 0.011008 (-0.005753) | 0.079202 / 0.038508 (0.040693) | 0.078973 / 0.023109 (0.055863) | 0.557416 / 0.275898 (0.281518) | 0.560417 / 0.323480 (0.236937) | 0.007066 / 0.007986 (-0.000920) | 0.004560 / 0.004328 (0.000232) | 0.080359 / 0.004250 (0.076109) | 0.060071 / 0.037052 (0.023019) | 0.538441 / 0.258489 (0.279952) | 0.592486 / 0.293841 (0.298645) | 0.053221 / 0.128546 (-0.075325) | 0.014056 / 0.075646 (-0.061591) | 0.094084 / 0.419271 (-0.325188) | 0.066721 / 0.043533 (0.023188) | 0.521873 / 0.255139 (0.266734) | 0.579637 / 0.283200 (0.296437) | 0.041476 / 0.141683 (-0.100206) | 1.829681 / 1.452155 (0.377527) | 1.948418 / 1.492716 (0.455702) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.347594 / 0.018006 (0.329588) | 0.606906 / 0.000490 (0.606417) | 0.035413 / 0.000200 
(0.035213) | 0.000371 / 0.000054 (0.000317) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031987 / 0.037411 (-0.005425) | 0.096985 / 0.014526 (0.082459) | 0.109275 / 0.176557 (-0.067282) | 0.175340 / 0.737135 (-0.561795) | 0.110763 / 0.296338 (-0.185575) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.634823 / 0.215209 (0.419614) | 6.527172 / 2.077655 (4.449517) | 3.135709 / 1.504120 (1.631589) | 2.634357 / 1.541195 (1.093162) | 2.670583 / 1.468490 (1.202093) | 0.888686 / 4.584777 (-3.696091) | 5.382289 / 3.745712 (1.636577) | 4.701189 / 5.269862 (-0.568673) | 3.161290 / 4.565676 (-1.404386) | 0.112414 / 0.424275 (-0.311861) | 0.009443 / 0.007607 (0.001836) | 0.774703 / 0.226044 (0.548658) | 7.905334 / 2.268929 (5.636405) | 3.689548 / 55.444624 (-51.755076) | 3.087263 / 6.876477 (-3.789214) | 3.366568 / 2.142072 (1.224496) | 1.185951 / 4.805227 (-3.619277) | 0.248638 / 6.500664 (-6.252026) | 0.104598 / 0.075469 (0.029129) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.820667 / 1.841788 (-0.021120) | 24.536703 / 8.074308 (16.462395) | 23.083964 / 10.191392 (12.892572) | 0.252897 / 0.680424 (-0.427527) | 0.032954 / 0.534201 (-0.501247) | 0.482467 / 0.579283 (-0.096816) | 0.602247 / 0.434364 (0.167883) | 0.600563 / 0.540337 (0.060225) | 0.824013 / 1.386936 (-0.562923) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c07a54ed4d570c5842d7bbe467025805be16ef51 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009242 / 0.011353 (-0.002111) | 0.005244 / 0.011008 (-0.005764) | 0.112678 / 0.038508 (0.074170) | 0.089176 / 0.023109 (0.066067) | 0.405823 / 0.275898 (0.129925) | 0.465703 / 0.323480 (0.142223) | 0.005227 / 0.007986 (-0.002758) | 0.004296 / 0.004328 (-0.000032) | 0.082961 / 0.004250 (0.078711) | 0.063144 / 0.037052 (0.026092) | 0.422369 / 0.258489 (0.163880) | 0.478185 / 0.293841 (0.184344) | 0.049770 / 0.128546 (-0.078776) | 0.016561 / 0.075646 (-0.059086) | 0.380172 / 0.419271 (-0.039100) | 0.068698 / 0.043533 (0.025165) | 0.397773 / 0.255139 (0.142634) | 0.461284 / 0.283200 (0.178084) | 0.036907 / 0.141683 (-0.104775) | 1.828017 / 1.452155 (0.375862) | 2.028385 / 1.492716 (0.535669) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291245 / 0.018006 (0.273239) | 0.605519 / 0.000490 (0.605030) | 0.003790 / 0.000200 (0.003590) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029269 / 0.037411 (-0.008142) | 0.087014 / 0.014526 (0.072488) | 0.116984 / 0.176557 (-0.059573) | 0.170644 / 0.737135 (-0.566491) | 0.109011 / 0.296338 (-0.187328) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.603045 / 0.215209 (0.387836) | 6.125308 / 2.077655 (4.047653) | 2.637127 / 1.504120 (1.133007) | 2.468636 / 1.541195 (0.927441) | 2.383773 / 1.468490 (0.915283) | 0.838139 / 4.584777 (-3.746638) | 5.355777 / 3.745712 (1.610065) | 4.753015 / 5.269862 (-0.516846) | 3.097486 / 4.565676 (-1.468191) | 0.094749 / 0.424275 (-0.329526) | 0.009040 / 0.007607 (0.001433) | 0.699987 / 0.226044 (0.473942) | 7.111671 / 2.268929 (4.842742) | 3.297798 / 55.444624 (-52.146827) | 2.614578 / 6.876477 (-4.261898) | 2.927717 / 2.142072 (0.785645) | 1.037292 / 4.805227 (-3.767935) | 0.218025 / 6.500664 (-6.282639) | 0.086306 / 0.075469 (0.010836) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.645146 / 1.841788 (-0.196642) | 24.191875 / 8.074308 (16.117567) | 21.844371 / 10.191392 (11.652979) | 0.245369 / 0.680424 (-0.435055) | 0.031776 / 0.534201 (-0.502425) | 0.465634 / 0.579283 (-0.113649) | 0.565498 / 
0.434364 (0.131134) | 0.497409 / 0.540337 (-0.042929) | 0.748048 / 1.386936 (-0.638889) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009239 / 0.011353 (-0.002114) | 0.005345 / 0.011008 (-0.005663) | 0.072732 / 0.038508 (0.034224) | 0.099880 / 0.023109 (0.076770) | 0.466933 / 0.275898 (0.191035) | 0.471730 / 0.323480 (0.148250) | 0.006164 / 0.007986 (-0.001821) | 0.004486 / 0.004328 (0.000158) | 0.075475 / 0.004250 (0.071224) | 0.068291 / 0.037052 (0.031238) | 0.465925 / 0.258489 (0.207436) | 0.469198 / 0.293841 (0.175357) | 0.047304 / 0.128546 (-0.081242) | 0.013368 / 0.075646 (-0.062278) | 0.083563 / 0.419271 (-0.335708) | 0.063204 / 0.043533 (0.019671) | 0.457422 / 0.255139 (0.202283) | 0.478793 / 0.283200 (0.195593) | 0.036120 / 0.141683 (-0.105563) | 1.841209 / 1.452155 (0.389054) | 1.955984 / 1.492716 (0.463267) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.369160 / 0.018006 (0.351154) | 0.607140 / 0.000490 (0.606650) | 0.047253 / 0.000200 (0.047054) | 0.000475 / 0.000054 (0.000420) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.040226 / 0.037411 (0.002815) | 0.107361 / 0.014526 (0.092835) | 0.122424 / 0.176557 (-0.054133) | 0.186447 / 0.737135 (-0.550688) | 0.127060 / 0.296338 (-0.169279) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.706737 / 0.215209 (0.491528) | 6.791287 / 2.077655 (4.713632) | 3.194471 / 1.504120 (1.690352) | 
2.928145 / 1.541195 (1.386950) | 2.829078 / 1.468490 (1.360588) | 0.929797 / 4.584777 (-3.654980) | 5.484638 / 3.745712 (1.738926) | 4.841570 / 5.269862 (-0.428292) | 2.995247 / 4.565676 (-1.570430) | 0.104709 / 0.424275 (-0.319566) | 0.009543 / 0.007607 (0.001936) | 0.817605 / 0.226044 (0.591561) | 7.879234 / 2.268929 (5.610305) | 3.838073 / 55.444624 (-51.606551) | 3.189728 / 6.876477 (-3.686749) | 3.483775 / 2.142072 (1.341703) | 1.092823 / 4.805227 (-3.712404) | 0.227660 / 6.500664 (-6.273004) | 0.082452 / 0.075469 (0.006983) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.750413 / 1.841788 (-0.091374) | 27.078082 / 8.074308 (19.003774) | 23.968038 / 10.191392 (13.776646) | 0.248065 / 0.680424 (-0.432359) | 0.029961 / 0.534201 (-0.504240) | 0.508630 / 0.579283 (-0.070653) | 0.608707 / 0.434364 (0.174343) | 0.611062 / 0.540337 (0.070725) | 0.830797 / 1.386936 (-0.556139) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9d793220dd8cbaa099a3928c2132c94c9f7453bc \"CML watermark\")\n" ]
2023-08-23T09:21:11
2023-08-23T09:32:59
2023-08-23T09:21:19
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6171/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6171", "html_url": "https://github.com/huggingface/datasets/pull/6171", "diff_url": "https://github.com/huggingface/datasets/pull/6171.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6171.patch", "merged_at": "2023-08-23T09:21:19" }
true
https://api.github.com/repos/huggingface/datasets/issues/6170
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6170/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6170/comments
https://api.github.com/repos/huggingface/datasets/issues/6170/events
https://github.com/huggingface/datasets/pull/6170
1,862,705,731
PR_kwDODunzps5YkJOV
6,170
feat: Return the name of the currently loaded file
{ "login": "Amitesh-Patel", "id": 124021133, "node_id": "U_kgDOB2RpjQ", "avatar_url": "https://avatars.githubusercontent.com/u/124021133?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Amitesh-Patel", "html_url": "https://github.com/Amitesh-Patel", "followers_url": "https://api.github.com/users/Amitesh-Patel/followers", "following_url": "https://api.github.com/users/Amitesh-Patel/following{/other_user}", "gists_url": "https://api.github.com/users/Amitesh-Patel/gists{/gist_id}", "starred_url": "https://api.github.com/users/Amitesh-Patel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Amitesh-Patel/subscriptions", "organizations_url": "https://api.github.com/users/Amitesh-Patel/orgs", "repos_url": "https://api.github.com/users/Amitesh-Patel/repos", "events_url": "https://api.github.com/users/Amitesh-Patel/events{/privacy}", "received_events_url": "https://api.github.com/users/Amitesh-Patel/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Your change adds a new element in the key used to avoid duplicates when generating the examples of a dataset. I don't think it fixes the issue you're trying to solve." ]
2023-08-23T07:08:17
2023-08-29T12:41:05
null
NONE
null
Added an optional parameter `return_file_name` to the `load_dataset` function. When it is set to `True`, the function includes the name of the file that each line was read from as a feature in the returned output. I added this here: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/json/json.py#L92. Fixes #5806
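For illustration, a hypothetical usage sketch of the proposed parameter (this PR is not merged, so the flag and the name of the emitted feature, `file_name` below, are assumptions that may change during review):

```python
from datasets import load_dataset

# Two local JSON Lines files; the proposed flag tags each example with its source file.
ds = load_dataset(
    "json",
    data_files=["a.jsonl", "b.jsonl"],
    return_file_name=True,  # proposed in this PR, not yet merged
)
print(ds["train"][0]["file_name"])  # assumed name of the emitted feature
```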
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6170/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6170", "html_url": "https://github.com/huggingface/datasets/pull/6170", "diff_url": "https://github.com/huggingface/datasets/pull/6170.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6170.patch", "merged_at": null }
true
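A minimal sketch of how the `return_file_name` flag proposed in the PR above might be used, assuming the parameter is exposed through `load_dataset` as described. The flag is not part of any released version of `datasets`, the data file names are placeholders, and the `file_name` feature name is an assumption:

```python
from datasets import load_dataset

# Hypothetical usage of the proposed flag: each example would carry the
# name of the source file it was read from as an extra feature.
dataset = load_dataset(
    "json",
    data_files=["part-0.json", "part-1.json"],  # placeholder file names
    split="train",
    return_file_name=True,  # proposed in this PR, not released API
)
print(dataset[0]["file_name"])  # assumed name of the added feature
```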
https://api.github.com/repos/huggingface/datasets/issues/6169
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6169/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6169/comments
https://api.github.com/repos/huggingface/datasets/issues/6169/events
https://github.com/huggingface/datasets/issues/6169
1,862,360,199
I_kwDODunzps5vAVyH
6,169
Configurations in yaml not working
{ "login": "tsor13", "id": 45085098, "node_id": "MDQ6VXNlcjQ1MDg1MDk4", "avatar_url": "https://avatars.githubusercontent.com/u/45085098?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tsor13", "html_url": "https://github.com/tsor13", "followers_url": "https://api.github.com/users/tsor13/followers", "following_url": "https://api.github.com/users/tsor13/following{/other_user}", "gists_url": "https://api.github.com/users/tsor13/gists{/gist_id}", "starred_url": "https://api.github.com/users/tsor13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tsor13/subscriptions", "organizations_url": "https://api.github.com/users/tsor13/orgs", "repos_url": "https://api.github.com/users/tsor13/repos", "events_url": "https://api.github.com/users/tsor13/events{/privacy}", "received_events_url": "https://api.github.com/users/tsor13/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Unfortunately, I cannot reproduce this behavior on my machine or Colab - the reproducer returns `['main_data', 'additional_data']` as expected.", "Thank you for looking into this, Mario. Is this on [my repository](https://huggingface.co./datasets/tsor13/test), or on another one that you have reproduced? Would you mind pointing me to it if so?", "Whoa, in colab I received the correct behavior using my dataset. It must have something to do with my local copy of `datasets` (which again just failed).\r\n\r\nI've tried uninstalling/reinstnalling to no avail", "hi @tsor13 , I haven't been able to reproduce your issue on `tsor13/test` dataset locally either. reinstalling doesn't help?" ]
2023-08-23T00:13:22
2023-08-23T15:35:31
null
NONE
null
### Dataset configurations cannot be created in YAML/README

Hello! I'm trying to follow the docs in order to create structure in my dataset, as added in #5331: https://github.com/huggingface/datasets/blob/8b8e6ee067eb74e7965ca2a6768f15f9398cb7c8/docs/source/repository_structure.mdx#L110-L118

I have the exact example in my config file for [my data repo](https://huggingface.co./datasets/tsor13/test):
```
configs:
- config_name: main_data
  data_files: "main_data.csv"
- config_name: additional_data
  data_files: "additional_data.csv"
```
Yet, I'm unable to load different configurations:
```
from datasets import get_dataset_config_names
get_dataset_config_names('tsor13/test', use_auth_token=True)
```
returns a single config name, `['tsor13--test']`.

Does anyone have any insights? @polinaeterna thank you for adding this feature, it is super useful. Do you happen to have any ideas?

### Steps to reproduce the bug
```
from datasets import get_dataset_config_names
get_dataset_config_names('tsor13/test')
```

### Expected behavior

I would expect there to be two configurations, `main_data` and `additional_data`. However, only `['tsor13--test']` is returned. (A sketch of the expected behavior follows this record.)

### Environment info

- `datasets` version: 2.14.4
- Platform: macOS-13.4-arm64-arm-64bit
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6169/timeline
null
null
null
null
false
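For reference, a sketch of the behavior the issue author expects, based on the repository-structure docs cited above. The repo id comes from the issue itself; the printed output shows the documented behavior, not something reproduced here:

```python
from datasets import get_dataset_config_names, load_dataset

# With a README whose YAML defines two configs (main_data, additional_data),
# the expected result is both config names, not a single fallback name:
configs = get_dataset_config_names("tsor13/test")
print(configs)  # expected: ['main_data', 'additional_data']

# Each configuration can then be loaded by name:
additional = load_dataset("tsor13/test", "additional_data")
```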
https://api.github.com/repos/huggingface/datasets/issues/6168
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6168/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6168/comments
https://api.github.com/repos/huggingface/datasets/issues/6168/events
https://github.com/huggingface/datasets/pull/6168
1,861,867,274
PR_kwDODunzps5YhT7Y
6,168
Fix ArrayXD YAML conversion
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6168). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009350 / 0.011353 (-0.002003) | 0.005658 / 0.011008 (-0.005350) | 0.123173 / 0.038508 (0.084664) | 0.096354 / 0.023109 (0.073244) | 0.464398 / 0.275898 (0.188500) | 0.544455 / 0.323480 (0.220975) | 0.007337 / 0.007986 (-0.000648) | 0.004424 / 0.004328 (0.000096) | 0.089715 / 0.004250 (0.085465) | 0.072462 / 0.037052 (0.035410) | 0.460601 / 0.258489 (0.202112) | 0.544384 / 0.293841 (0.250543) | 0.052994 / 0.128546 (-0.075552) | 0.014459 / 0.075646 (-0.061187) | 0.464368 / 0.419271 (0.045096) | 0.072889 / 0.043533 (0.029356) | 0.471387 / 0.255139 (0.216248) | 0.560982 / 0.283200 (0.277783) | 0.041398 / 0.141683 (-0.100285) | 1.964688 / 1.452155 (0.512533) | 2.240727 / 1.492716 (0.748011) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.308524 / 0.018006 (0.290518) | 0.669306 / 0.000490 (0.668816) | 0.006644 / 0.000200 (0.006444) | 0.000108 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037395 / 0.037411 (-0.000016) | 0.111303 / 0.014526 (0.096777) | 0.158988 / 0.176557 (-0.017569) | 0.236155 / 0.737135 (-0.500980) | 0.134775 / 0.296338 (-0.161564) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.648830 / 0.215209 (0.433621) | 6.614794 / 2.077655 (4.537139) | 2.867526 / 1.504120 (1.363407) | 2.472967 / 1.541195 (0.931772) | 2.488419 / 1.468490 (1.019929) | 0.915785 / 4.584777 (-3.668992) | 6.010754 / 3.745712 (2.265042) | 5.468873 / 5.269862 (0.199011) | 3.446535 / 4.565676 (-1.119141) | 0.118592 / 0.424275 (-0.305684) | 0.012005 / 0.007607 (0.004398) | 0.808467 / 0.226044 (0.582423) | 8.152122 / 2.268929 (5.883193) | 3.751282 / 55.444624 (-51.693342) | 3.009569 / 6.876477 (-3.866908) | 3.282613 / 2.142072 (1.140540) | 1.152727 / 4.805227 (-3.652500) | 0.240224 / 6.500664 (-6.260440) | 0.097871 / 0.075469 (0.022402) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.824944 / 1.841788 (-0.016843) | 27.840842 / 8.074308 (19.766533) | 24.368669 / 10.191392 (14.177277) | 0.260621 / 0.680424 (-0.419803) | 0.033730 / 0.534201 (-0.500471) | 0.552494 / 0.579283 (-0.026789) | 0.666921 / 0.434364 (0.232557) | 0.648812 / 0.540337 (0.108475) | 0.912602 / 1.386936 (-0.474334) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011688 / 0.011353 (0.000335) | 0.005794 / 0.011008 (-0.005215) | 0.093466 / 0.038508 (0.054958) | 0.102583 / 0.023109 (0.079474) | 0.593572 / 0.275898 (0.317674) | 0.614351 / 0.323480 (0.290871) | 0.007006 / 0.007986 (-0.000980) | 0.005557 / 0.004328 (0.001229) | 0.087779 / 0.004250 (0.083529) | 0.072639 / 0.037052 (0.035586) | 0.577464 / 0.258489 (0.318975) | 0.628240 / 0.293841 (0.334399) | 0.053876 / 0.128546 (-0.074670) | 0.015383 / 0.075646 (-0.060263) | 0.110633 / 0.419271 (-0.308639) | 0.067467 / 0.043533 (0.023934) | 0.613457 / 0.255139 (0.358318) | 0.604939 / 0.283200 (0.321739) | 0.041738 / 0.141683 (-0.099945) | 1.967167 / 1.452155 (0.515012) | 2.121009 / 1.492716 (0.628293) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.449937 / 0.018006 (0.431930) | 0.694410 / 0.000490 (0.693921) | 0.064051 / 0.000200 
(0.063851) | 0.000810 / 0.000054 (0.000756) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.045138 / 0.037411 (0.007727) | 0.116831 / 0.014526 (0.102306) | 0.131906 / 0.176557 (-0.044651) | 0.202421 / 0.737135 (-0.534714) | 0.132568 / 0.296338 (-0.163770) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.698046 / 0.215209 (0.482837) | 7.112591 / 2.077655 (5.034936) | 3.332679 / 1.504120 (1.828559) | 2.946384 / 1.541195 (1.405189) | 3.074484 / 1.468490 (1.605994) | 0.970917 / 4.584777 (-3.613859) | 6.143506 / 3.745712 (2.397794) | 5.572496 / 5.269862 (0.302634) | 3.602673 / 4.565676 (-0.963004) | 0.115068 / 0.424275 (-0.309207) | 0.009971 / 0.007607 (0.002364) | 0.891090 / 0.226044 (0.665046) | 8.761788 / 2.268929 (6.492859) | 4.362685 / 55.444624 (-51.081939) | 3.612893 / 6.876477 (-3.263583) | 3.797948 / 2.142072 (1.655876) | 1.202890 / 4.805227 (-3.602337) | 0.238120 / 6.500664 (-6.262544) | 0.095612 / 0.075469 (0.020143) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.958880 / 1.841788 (0.117092) | 28.216454 / 8.074308 (20.142146) | 25.361424 / 10.191392 (15.170032) | 0.308203 / 0.680424 (-0.372221) | 0.032903 / 0.534201 (-0.501298) | 0.539714 / 0.579283 (-0.039569) | 0.688278 / 0.434364 (0.253914) | 0.644818 / 0.540337 (0.104481) | 0.905694 / 1.386936 (-0.481242) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a5289345e5b23548fee680a0bbc047c0b9a5ee8c \"CML watermark\")\n", "Maybe convert all the tuples by default instead of hardcoding a logic specific to ArrayXD ?" ]
2023-08-22T17:02:54
2023-08-29T12:42:32
null
CONTRIBUTOR
null
Replace the `shape` tuple with a list in the `ArrayXD` YAML conversion (a minimal illustration follows this record). Fixes #6112
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6168/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6168/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6168", "html_url": "https://github.com/huggingface/datasets/pull/6168", "diff_url": "https://github.com/huggingface/datasets/pull/6168.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6168.patch", "merged_at": null }
true
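A hypothetical helper illustrating the idea behind the fix: YAML has no tuple type, so a tuple-valued `shape` must be converted to a list before serialization in order to survive a dump/load round trip. This is an illustration only, not the actual code changed by the PR:

```python
def tuples_to_lists(obj):
    # Recursively convert tuples (e.g. an ArrayXD "shape") to lists so
    # the metadata round-trips through YAML, which only has sequences.
    if isinstance(obj, dict):
        return {k: tuples_to_lists(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [tuples_to_lists(v) for v in obj]
    return obj

print(tuples_to_lists({"dtype": "float32", "shape": (28, 28)}))
# {'dtype': 'float32', 'shape': [28, 28]}
```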
https://api.github.com/repos/huggingface/datasets/issues/6167
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6167/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6167/comments
https://api.github.com/repos/huggingface/datasets/issues/6167/events
https://github.com/huggingface/datasets/pull/6167
1,861,474,327
PR_kwDODunzps5Yf9-t
6,167
Allow hyphen in split name
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007342 / 0.011353 (-0.004011) | 0.004586 / 0.011008 (-0.006422) | 0.100430 / 0.038508 (0.061922) | 0.081053 / 0.023109 (0.057944) | 0.368130 / 0.275898 (0.092232) | 0.402852 / 0.323480 (0.079372) | 0.004504 / 0.007986 (-0.003482) | 0.003824 / 0.004328 (-0.000505) | 0.075326 / 0.004250 (0.071076) | 0.063329 / 0.037052 (0.026277) | 0.372837 / 0.258489 (0.114348) | 0.437857 / 0.293841 (0.144017) | 0.035512 / 0.128546 (-0.093034) | 0.009756 / 0.075646 (-0.065890) | 0.341035 / 0.419271 (-0.078236) | 0.060503 / 0.043533 (0.016970) | 0.362555 / 0.255139 (0.107416) | 0.409216 / 0.283200 (0.126017) | 0.030093 / 0.141683 (-0.111590) | 1.751550 / 1.452155 (0.299395) | 1.848676 / 1.492716 (0.355959) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229448 / 0.018006 (0.211442) | 0.500300 / 0.000490 (0.499811) | 0.005195 / 0.000200 (0.004995) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031753 / 0.037411 (-0.005658) | 0.096075 / 0.014526 (0.081549) | 0.111476 / 0.176557 (-0.065081) | 0.179236 / 0.737135 (-0.557899) | 0.113599 / 0.296338 (-0.182739) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472817 / 0.215209 (0.257608) | 4.715029 / 2.077655 (2.637374) | 
2.417934 / 1.504120 (0.913814) | 2.235014 / 1.541195 (0.693819) | 2.323588 / 1.468490 (0.855098) | 0.553751 / 4.584777 (-4.031026) | 4.153467 / 3.745712 (0.407755) | 3.858836 / 5.269862 (-1.411025) | 2.377499 / 4.565676 (-2.188178) | 0.066528 / 0.424275 (-0.357747) | 0.008979 / 0.007607 (0.001372) | 0.561076 / 0.226044 (0.335032) | 5.609817 / 2.268929 (3.340888) | 3.011098 / 55.444624 (-52.433526) | 2.594162 / 6.876477 (-4.282314) | 2.863597 / 2.142072 (0.721525) | 0.681135 / 4.805227 (-4.124092) | 0.158863 / 6.500664 (-6.341801) | 0.072551 / 0.075469 (-0.002918) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.492230 / 1.841788 (-0.349558) | 23.028828 / 8.074308 (14.954519) | 16.663265 / 10.191392 (6.471873) | 0.173146 / 0.680424 (-0.507278) | 0.021635 / 0.534201 (-0.512566) | 0.478919 / 0.579283 (-0.100364) | 0.472908 / 0.434364 (0.038544) | 0.547248 / 0.540337 (0.006910) | 0.770288 / 1.386936 (-0.616648) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007728 / 0.011353 (-0.003625) | 0.004477 / 0.011008 (-0.006531) | 0.074858 / 0.038508 (0.036350) | 0.084266 / 0.023109 (0.061157) | 0.420280 / 0.275898 (0.144382) | 0.466835 / 0.323480 (0.143356) | 0.005980 / 0.007986 (-0.002006) | 0.003600 / 0.004328 (-0.000729) | 0.074941 / 0.004250 (0.070691) | 0.066414 / 0.037052 (0.029361) | 0.425949 / 0.258489 (0.167460) | 0.473236 / 0.293841 (0.179395) | 0.037213 / 0.128546 (-0.091333) | 0.009743 / 0.075646 (-0.065903) | 0.083758 / 0.419271 (-0.335513) | 0.057916 / 0.043533 (0.014383) | 0.423031 / 0.255139 (0.167892) | 0.451107 / 0.283200 (0.167907) | 0.028577 / 0.141683 (-0.113106) | 1.810509 / 1.452155 (0.358354) | 1.875579 / 1.492716 (0.382863) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296052 / 0.018006 (0.278046) | 0.496618 / 0.000490 (0.496128) | 0.028667 / 0.000200 (0.028467) | 0.000140 / 0.000054 (0.000086) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036694 / 0.037411 (-0.000717) | 0.110873 / 0.014526 (0.096347) | 0.126550 / 0.176557 (-0.050007) | 0.182924 / 0.737135 (-0.554212) | 0.123793 / 0.296338 (-0.172545) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.509881 / 0.215209 (0.294672) | 5.067402 / 2.077655 (2.989747) | 2.696028 / 1.504120 (1.191908) | 2.489861 / 1.541195 (0.948666) | 2.563400 / 1.468490 (1.094910) | 0.571184 / 4.584777 (-4.013593) | 4.154231 / 3.745712 (0.408519) | 3.891004 / 5.269862 (-1.378858) | 2.435290 / 4.565676 (-2.130387) | 0.065825 / 0.424275 (-0.358450) | 0.008460 / 0.007607 (0.000853) | 0.597579 / 0.226044 (0.371534) | 5.914954 / 2.268929 (3.646025) | 3.219305 / 55.444624 (-52.225319) | 2.843548 / 6.876477 (-4.032929) | 3.070300 / 2.142072 (0.928228) | 0.686018 / 4.805227 (-4.119209) | 0.160077 / 6.500664 (-6.340587) | 0.074058 / 0.075469 (-0.001411) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.598748 / 1.841788 (-0.243039) | 23.475685 / 8.074308 (15.401377) | 17.257831 / 10.191392 (7.066439) | 0.176539 / 0.680424 (-0.503885) | 0.021969 / 0.534201 (-0.512232) | 0.473565 / 0.579283 (-0.105718) | 0.465471 / 0.434364 (0.031107) | 0.567107 / 0.540337 (0.026769) | 0.783757 / 1.386936 (-0.603179) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2f6bb450b4a3065a7d5fc50ea67711082749a337 \"CML watermark\")\n", "Note that the https://github.com/huggingface/datasets-server/ explicitly relies on the fact that a split cannot contain a hyphen. cc @lhoestq ", "We can't enable this that easily unfortunately because it could make arrow file names ambiguous in the cache.\r\n\r\ne.g. dataset_name-train-0000-of-0008.arrow", "Oh, this would indeed make the caching for the multi-proc case ambiguous. Implementing this is only worth it if we get more requests, so I'm closing this PR for now." ]
2023-08-22T13:30:59
2023-08-22T15:39:24
2023-08-22T15:38:53
CONTRIBUTOR
null
To fix https://discuss.huggingface.co/t/error-when-setting-up-the-dataset-viewer-streamingrowserror/51276. A short illustration of the cached-file-name ambiguity discussed in the comments follows this record.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6167/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6167", "html_url": "https://github.com/huggingface/datasets/pull/6167", "diff_url": "https://github.com/huggingface/datasets/pull/6167.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6167.patch", "merged_at": null }
true
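A small illustration, with made-up file names, of the cache ambiguity raised in the review comments: cached arrow shard names encode the split name with hyphen separators, so a hyphen inside the split name cannot be parsed back unambiguously:

```python
# Cached shard names follow roughly "<dataset>-<split>-<shard>-of-<num>.arrow".
# If split names may contain hyphens, two different splits can produce
# file names that parse identically:
cached = "demo-train-dev-00000-of-00008.arrow"
dataset_name, split, *shard = cached[: -len(".arrow")].split("-")
print(split)  # 'train' -- but the split could just as well be 'train-dev'
print(shard)  # ['dev', '00000', 'of', '00008']
```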
https://api.github.com/repos/huggingface/datasets/issues/6166
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6166/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6166/comments
https://api.github.com/repos/huggingface/datasets/issues/6166/events
https://github.com/huggingface/datasets/pull/6166
1,861,259,055
PR_kwDODunzps5YfOt0
6,166
Document BUILDER_CONFIG_CLASS
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009036 / 0.011353 (-0.002317) | 0.004564 / 0.011008 (-0.006444) | 0.114958 / 0.038508 (0.076449) | 0.087329 / 0.023109 (0.064220) | 0.440111 / 0.275898 (0.164213) | 0.486056 / 0.323480 (0.162576) | 0.006580 / 0.007986 (-0.001406) | 0.004257 / 0.004328 (-0.000072) | 0.093458 / 0.004250 (0.089208) | 0.063380 / 0.037052 (0.026328) | 0.469455 / 0.258489 (0.210966) | 0.521630 / 0.293841 (0.227790) | 0.053496 / 0.128546 (-0.075050) | 0.013466 / 0.075646 (-0.062181) | 0.361629 / 0.419271 (-0.057642) | 0.068095 / 0.043533 (0.024562) | 0.472440 / 0.255139 (0.217301) | 0.508682 / 0.283200 (0.225483) | 0.034648 / 0.141683 (-0.107035) | 1.820117 / 1.452155 (0.367962) | 1.933448 / 1.492716 (0.440732) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.276543 / 0.018006 (0.258537) | 0.563380 / 0.000490 (0.562890) | 0.005345 / 0.000200 (0.005146) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029230 / 0.037411 (-0.008181) | 0.095613 / 0.014526 (0.081087) | 0.106178 / 0.176557 (-0.070378) | 0.181095 / 0.737135 (-0.556040) | 0.107789 / 0.296338 (-0.188550) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.612051 / 0.215209 (0.396842) | 6.065008 / 2.077655 (3.987353) | 
2.720911 / 1.504120 (1.216791) | 2.495218 / 1.541195 (0.954023) | 2.423351 / 1.468490 (0.954860) | 0.835571 / 4.584777 (-3.749205) | 5.438230 / 3.745712 (1.692518) | 4.550301 / 5.269862 (-0.719561) | 2.919889 / 4.565676 (-1.645788) | 0.097748 / 0.424275 (-0.326527) | 0.009285 / 0.007607 (0.001678) | 0.741968 / 0.226044 (0.515923) | 7.285394 / 2.268929 (5.016466) | 3.433634 / 55.444624 (-52.010991) | 2.680823 / 6.876477 (-4.195654) | 2.931149 / 2.142072 (0.789076) | 1.012852 / 4.805227 (-3.792375) | 0.224899 / 6.500664 (-6.275765) | 0.089411 / 0.075469 (0.013942) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.622759 / 1.841788 (-0.219029) | 23.690030 / 8.074308 (15.615721) | 21.034451 / 10.191392 (10.843059) | 0.241504 / 0.680424 (-0.438920) | 0.030109 / 0.534201 (-0.504092) | 0.472536 / 0.579283 (-0.106747) | 0.631396 / 0.434364 (0.197032) | 0.598997 / 0.540337 (0.058659) | 0.798680 / 1.386936 (-0.588256) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008696 / 0.011353 (-0.002657) | 0.005032 / 0.011008 (-0.005977) | 0.087369 / 0.038508 (0.048861) | 0.078105 / 0.023109 (0.054996) | 0.464861 / 0.275898 (0.188963) | 0.509620 / 0.323480 (0.186140) | 0.006399 / 0.007986 (-0.001587) | 0.004276 / 0.004328 (-0.000052) | 0.081643 / 0.004250 (0.077393) | 0.062560 / 0.037052 (0.025508) | 0.495377 / 0.258489 (0.236888) | 0.484885 / 0.293841 (0.191044) | 0.054354 / 0.128546 (-0.074193) | 0.013851 / 0.075646 (-0.061795) | 0.089531 / 0.419271 (-0.329740) | 0.068732 / 0.043533 (0.025199) | 0.455842 / 0.255139 (0.200703) | 0.528775 / 0.283200 (0.245575) | 0.039646 / 0.141683 (-0.102037) | 1.733600 / 1.452155 (0.281445) | 1.879074 / 1.492716 (0.386358) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.369616 / 0.018006 (0.351610) | 0.607426 / 0.000490 (0.606936) | 0.055540 / 0.000200 (0.055341) | 0.000543 / 0.000054 (0.000488) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036026 / 0.037411 (-0.001385) | 0.103968 / 0.014526 (0.089442) | 0.114852 / 0.176557 (-0.061705) | 0.187313 / 0.737135 (-0.549822) | 0.116839 / 0.296338 (-0.179500) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.614018 / 0.215209 (0.398809) | 6.139914 / 2.077655 (4.062259) | 2.826246 / 1.504120 (1.322126) | 2.524133 / 1.541195 (0.982938) | 2.606981 / 1.468490 (1.138491) | 0.844604 / 4.584777 (-3.740173) | 5.537178 / 3.745712 (1.791465) | 4.594624 / 5.269862 (-0.675237) | 3.032145 / 4.565676 (-1.533532) | 0.094771 / 0.424275 (-0.329504) | 0.008132 / 0.007607 (0.000525) | 0.714287 / 0.226044 (0.488242) | 7.296733 / 2.268929 (5.027804) | 3.698066 / 55.444624 (-51.746558) | 2.862781 / 6.876477 (-4.013696) | 3.114502 / 2.142072 (0.972429) | 0.986612 / 4.805227 (-3.818616) | 0.214438 / 6.500664 (-6.286226) | 0.076201 / 0.075469 (0.000732) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.747728 / 1.841788 (-0.094060) | 24.159845 / 8.074308 (16.085537) | 23.553485 / 10.191392 (13.362093) | 0.248387 / 0.680424 (-0.432037) | 0.029850 / 0.534201 (-0.504351) | 0.526416 / 0.579283 (-0.052867) | 0.625681 / 0.434364 (0.191317) | 0.619690 / 0.540337 (0.079352) | 0.827485 / 1.386936 (-0.559451) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#75639f9064dab9549add79fd5ee7de2a4429992c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006728 / 0.011353 (-0.004625) | 0.003960 / 0.011008 (-0.007048) | 0.085569 / 0.038508 (0.047061) | 0.077463 / 0.023109 (0.054354) | 0.343112 / 0.275898 (0.067214) | 0.379128 / 0.323480 (0.055648) | 0.004087 / 0.007986 (-0.003899) | 0.003357 / 0.004328 (-0.000972) | 0.065570 / 0.004250 (0.061320) | 0.056259 / 0.037052 (0.019207) | 0.368595 / 0.258489 (0.110106) | 0.402672 / 0.293841 (0.108831) | 0.030946 / 0.128546 (-0.097600) | 0.008509 / 0.075646 (-0.067137) | 0.288552 / 0.419271 (-0.130719) | 0.052134 / 0.043533 (0.008601) | 0.344653 / 0.255139 (0.089514) | 0.374199 / 0.283200 (0.090999) | 0.026251 / 0.141683 (-0.115432) | 1.488258 / 1.452155 (0.036103) | 1.567119 / 1.492716 (0.074402) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218740 / 0.018006 (0.200734) | 0.465483 / 0.000490 (0.464994) | 0.003959 / 0.000200 (0.003759) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029860 / 0.037411 (-0.007551) | 0.087968 / 0.014526 (0.073442) | 0.098257 / 0.176557 (-0.078299) | 0.155478 / 0.737135 (-0.581657) | 0.100696 / 0.296338 (-0.195642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384642 / 0.215209 (0.169432) | 3.821692 / 2.077655 (1.744038) | 1.838012 / 1.504120 (0.333892) | 1.677554 / 1.541195 (0.136360) | 1.764284 / 1.468490 (0.295794) | 0.487512 / 4.584777 (-4.097265) | 3.614572 / 3.745712 (-0.131141) | 3.300740 / 5.269862 (-1.969122) | 2.079044 / 4.565676 (-2.486632) | 0.057392 / 0.424275 (-0.366883) | 0.007642 / 0.007607 (0.000035) | 0.456161 / 0.226044 (0.230117) | 4.554124 / 2.268929 (2.285196) | 2.319288 / 55.444624 (-53.125336) | 1.972024 / 6.876477 (-4.904452) | 2.210598 / 2.142072 (0.068526) | 0.588442 / 4.805227 (-4.216785) | 0.134474 / 6.500664 (-6.366191) | 0.062682 / 0.075469 (-0.012787) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243548 / 1.841788 (-0.598239) | 20.267230 / 8.074308 (12.192922) | 14.872096 / 10.191392 (4.680704) | 0.165164 / 0.680424 (-0.515260) | 0.018985 / 0.534201 (-0.515216) | 0.394526 / 0.579283 (-0.184757) | 0.413918 / 0.434364 (-0.020446) | 0.467130 / 0.540337 
(-0.073208) | 0.627055 / 1.386936 (-0.759881) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006940 / 0.011353 (-0.004412) | 0.004203 / 0.011008 (-0.006805) | 0.065828 / 0.038508 (0.027320) | 0.076604 / 0.023109 (0.053495) | 0.401781 / 0.275898 (0.125883) | 0.434838 / 0.323480 (0.111358) | 0.005626 / 0.007986 (-0.002359) | 0.003409 / 0.004328 (-0.000920) | 0.064702 / 0.004250 (0.060452) | 0.057525 / 0.037052 (0.020473) | 0.405032 / 0.258489 (0.146543) | 0.440906 / 0.293841 (0.147065) | 0.032713 / 0.128546 (-0.095833) | 0.008723 / 0.075646 (-0.066923) | 0.071448 / 0.419271 (-0.347823) | 0.048186 / 0.043533 (0.004653) | 0.403950 / 0.255139 (0.148811) | 0.419506 / 0.283200 (0.136307) | 0.023532 / 0.141683 (-0.118150) | 1.496435 / 1.452155 (0.044280) | 1.567236 / 1.492716 (0.074519) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229194 / 0.018006 (0.211188) | 0.451363 / 0.000490 (0.450873) | 0.003651 / 0.000200 (0.003451) | 0.000108 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033674 / 0.037411 (-0.003737) | 0.097521 / 0.014526 (0.082995) | 0.108806 / 0.176557 (-0.067751) | 0.161002 / 0.737135 (-0.576133) | 0.108594 / 0.296338 (-0.187745) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436638 / 0.215209 (0.221429) | 4.348844 / 2.077655 (2.271189) | 2.341737 / 1.504120 (0.837617) | 2.195850 / 1.541195 (0.654656) | 2.332147 
/ 1.468490 (0.863657) | 0.496180 / 4.584777 (-4.088597) | 3.680987 / 3.745712 (-0.064725) | 3.332203 / 5.269862 (-1.937659) | 2.099541 / 4.565676 (-2.466136) | 0.058629 / 0.424275 (-0.365646) | 0.007363 / 0.007607 (-0.000245) | 0.517658 / 0.226044 (0.291614) | 5.175321 / 2.268929 (2.906392) | 2.858660 / 55.444624 (-52.585964) | 2.540557 / 6.876477 (-4.335920) | 2.755360 / 2.142072 (0.613288) | 0.595488 / 4.805227 (-4.209739) | 0.134265 / 6.500664 (-6.366399) | 0.062033 / 0.075469 (-0.013436) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.389950 / 1.841788 (-0.451838) | 20.800274 / 8.074308 (12.725966) | 15.314531 / 10.191392 (5.123139) | 0.166822 / 0.680424 (-0.513602) | 0.021099 / 0.534201 (-0.513102) | 0.400388 / 0.579283 (-0.178895) | 0.419981 / 0.434364 (-0.014383) | 0.474259 / 0.540337 (-0.066078) | 0.731678 / 1.386936 (-0.655258) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4566827557acbeba0d4cb66449bb70367e341b05 \"CML watermark\")\n" ]
2023-08-22T11:27:41
2023-08-23T14:01:25
2023-08-23T13:52:36
MEMBER
null
Related to https://github.com/huggingface/datasets/issues/6130. A usage sketch of `BUILDER_CONFIG_CLASS` follows this record.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6166/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6166", "html_url": "https://github.com/huggingface/datasets/pull/6166", "diff_url": "https://github.com/huggingface/datasets/pull/6166.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6166.patch", "merged_at": "2023-08-23T13:52:36" }
true
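For context, a sketch of what `BUILDER_CONFIG_CLASS` does in a dataset loading script. The attribute itself is real `datasets` builder API; the config class and its extra `language` field are hypothetical:

```python
from dataclasses import dataclass

import datasets


@dataclass
class DemoConfig(datasets.BuilderConfig):
    # Hypothetical extra parameter on top of the base BuilderConfig fields
    language: str = "en"


class Demo(datasets.GeneratorBasedBuilder):
    # BUILDER_CONFIG_CLASS tells the builder which config class to
    # instantiate from the kwargs passed to load_dataset(...).
    BUILDER_CONFIG_CLASS = DemoConfig
    BUILDER_CONFIGS = [DemoConfig(name="default", language="en")]
```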
https://api.github.com/repos/huggingface/datasets/issues/6165
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6165/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6165/comments
https://api.github.com/repos/huggingface/datasets/issues/6165/events
https://github.com/huggingface/datasets/pull/6165
1,861,124,284
PR_kwDODunzps5YexBL
6,165
Fix multiprocessing with spawn in iterable datasets
{ "login": "Hubert-Bonisseur", "id": 48770768, "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hubert-Bonisseur", "html_url": "https://github.com/Hubert-Bonisseur", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq \r\nA test is failing, but I don't think it is due to my changes", "Good catch ! Could you add a test to make sure transformed IterableDataset objects are still picklable ?\r\n\r\nSomething like `test_pickle_after_many_transforms` in in `test_iterable_dataset.py` that does a bunch or rename, map, take on a dataset and checks that the dataset can be pickled at the end and the reloaded dataset returns the same elements", "@lhoestq \r\nI added the test and fixed one last method", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006537 / 0.011353 (-0.004816) | 0.003960 / 0.011008 (-0.007048) | 0.085135 / 0.038508 (0.046627) | 0.079271 / 0.023109 (0.056162) | 0.383743 / 0.275898 (0.107845) | 0.414622 / 0.323480 (0.091143) | 0.004202 / 0.007986 (-0.003784) | 0.003537 / 0.004328 (-0.000791) | 0.065758 / 0.004250 (0.061508) | 0.054225 / 0.037052 (0.017173) | 0.395715 / 0.258489 (0.137226) | 0.438985 / 0.293841 (0.145144) | 0.030590 / 0.128546 (-0.097956) | 0.008754 / 0.075646 (-0.066892) | 0.288415 / 0.419271 (-0.130857) | 0.051863 / 0.043533 (0.008330) | 0.382501 / 0.255139 (0.127363) | 0.414428 / 0.283200 (0.131228) | 0.024084 / 0.141683 (-0.117599) | 1.478726 / 1.452155 (0.026572) | 1.544763 / 1.492716 (0.052047) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285143 / 0.018006 (0.267136) | 0.603859 / 0.000490 (0.603369) | 0.004330 / 0.000200 (0.004131) | 0.000108 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027856 / 0.037411 (-0.009555) | 0.081963 / 0.014526 (0.067437) | 0.104106 / 0.176557 (-0.072451) | 0.151378 / 0.737135 (-0.585757) | 0.096476 / 0.296338 (-0.199862) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | 
read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402938 / 0.215209 (0.187729) | 4.042312 / 2.077655 (1.964657) | 2.068421 / 1.504120 (0.564301) | 1.877870 / 1.541195 (0.336675) | 1.947643 / 1.468490 (0.479153) | 0.482031 / 4.584777 (-4.102746) | 3.554747 / 3.745712 (-0.190965) | 3.307811 / 5.269862 (-1.962050) | 2.082886 / 4.565676 (-2.482791) | 0.056853 / 0.424275 (-0.367422) | 0.007535 / 0.007607 (-0.000072) | 0.483694 / 0.226044 (0.257649) | 4.827906 / 2.268929 (2.558978) | 2.567572 / 55.444624 (-52.877052) | 2.167206 / 6.876477 (-4.709271) | 2.414442 / 2.142072 (0.272369) | 0.579472 / 4.805227 (-4.225755) | 0.132976 / 6.500664 (-6.367688) | 0.059315 / 0.075469 (-0.016154) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260086 / 1.841788 (-0.581702) | 19.438297 / 8.074308 (11.363989) | 14.188161 / 10.191392 (3.996769) | 0.168534 / 0.680424 (-0.511890) | 0.018070 / 0.534201 (-0.516131) | 0.394241 / 0.579283 (-0.185043) | 0.411057 / 0.434364 (-0.023307) | 0.461123 / 0.540337 (-0.079215) | 0.626844 / 1.386936 (-0.760092) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006896 / 0.011353 (-0.004457) | 0.004207 / 0.011008 (-0.006801) | 0.064981 / 0.038508 (0.026473) | 0.080261 / 0.023109 (0.057152) | 0.399403 / 0.275898 (0.123505) | 0.433099 / 0.323480 (0.109619) | 0.005697 / 0.007986 (-0.002288) | 0.003601 / 0.004328 (-0.000728) | 0.065924 / 0.004250 (0.061673) | 0.058868 / 0.037052 (0.021815) | 0.403705 / 0.258489 (0.145216) | 0.439218 / 0.293841 (0.145377) | 0.032789 / 0.128546 (-0.095757) | 0.008675 / 0.075646 (-0.066971) | 0.071217 / 0.419271 (-0.348055) | 0.048487 / 0.043533 (0.004954) | 0.399878 / 0.255139 (0.144739) | 0.412816 / 0.283200 
(0.129616) | 0.023905 / 0.141683 (-0.117778) | 1.541402 / 1.452155 (0.089247) | 1.588080 / 1.492716 (0.095364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.322863 / 0.018006 (0.304856) | 0.530291 / 0.000490 (0.529802) | 0.004862 / 0.000200 (0.004662) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032697 / 0.037411 (-0.004715) | 0.092416 / 0.014526 (0.077891) | 0.107355 / 0.176557 (-0.069201) | 0.160217 / 0.737135 (-0.576918) | 0.109286 / 0.296338 (-0.187052) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437375 / 0.215209 (0.222166) | 4.362644 / 2.077655 (2.284990) | 2.335404 / 1.504120 (0.831284) | 2.173215 / 1.541195 (0.632020) | 2.254061 / 1.468490 (0.785571) | 0.493906 / 4.584777 (-4.090871) | 3.609025 / 3.745712 (-0.136687) | 3.352380 / 5.269862 (-1.917481) | 2.074185 / 4.565676 (-2.491492) | 0.057863 / 0.424275 (-0.366412) | 0.007297 / 0.007607 (-0.000310) | 0.512464 / 0.226044 (0.286420) | 5.135921 / 2.268929 (2.866993) | 2.788889 / 55.444624 (-52.655736) | 2.479097 / 6.876477 (-4.397379) | 2.717848 / 2.142072 (0.575776) | 0.590442 / 4.805227 (-4.214785) | 0.133721 / 6.500664 (-6.366943) | 0.061491 / 0.075469 (-0.013978) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.429564 / 1.841788 (-0.412224) | 20.628733 / 8.074308 (12.554425) | 15.299571 / 10.191392 (5.108179) | 0.171032 / 0.680424 (-0.509392) | 0.019995 / 0.534201 (-0.514206) | 0.401283 / 0.579283 (-0.178000) | 0.416504 / 0.434364 (-0.017860) | 0.471219 / 0.540337 (-0.069118) | 0.641299 / 1.386936 (-0.745637) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5503e7beb5a31926aec03c6c9b24813f9f83cd7b \"CML watermark\")\n" ]
2023-08-22T10:07:23
2023-08-29T13:27:14
2023-08-29T13:18:11
CONTRIBUTOR
null
The "Spawn" method is preferred when multiprocessing on macOS or Windows systems, instead of the "Fork" method on linux systems. This causes some methods of Iterable Datasets to break when using a dataloader with more than 0 workers. I fixed the issue by replacing lambda and local methods which are not pickle-able. See the example below: ```python from datasets import load_dataset from torch.utils.data import DataLoader if __name__ == "__main__": dataset = load_dataset("lhoestq/demo1", split="train") dataset = dataset.to_iterable_dataset(num_shards=3) dataset = dataset.remove_columns(["package_name"]) dataset = dataset.rename_columns({ "review": "review1" }) dataset = dataset.rename_column("date", "date1") for sample in DataLoader(dataset, batch_size=None, num_workers=3): print(sample) ``` To notice the fix on a linux system, adding these lines should do the trick: ```python import multiprocessing multiprocessing.set_start_method('spawn') ``` I also removed what looks like code duplication between rename_colums and rename_column
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6165/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6165", "html_url": "https://github.com/huggingface/datasets/pull/6165", "diff_url": "https://github.com/huggingface/datasets/pull/6165.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6165.patch", "merged_at": "2023-08-29T13:18:11" }
true
https://api.github.com/repos/huggingface/datasets/issues/6164
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6164/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6164/comments
https://api.github.com/repos/huggingface/datasets/issues/6164/events
https://github.com/huggingface/datasets/pull/6164
1,859,560,007
PR_kwDODunzps5YZZAJ
6,164
Fix: Missing a MetadataConfigs init when the repo has a `datasets_info.json` but no README
{ "login": "clefourrier", "id": 22726840, "node_id": "MDQ6VXNlcjIyNzI2ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clefourrier", "html_url": "https://github.com/clefourrier", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "repos_url": "https://api.github.com/users/clefourrier/repos", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006874 / 0.011353 (-0.004479) | 0.004276 / 0.011008 (-0.006732) | 0.085198 / 0.038508 (0.046690) | 0.084281 / 0.023109 (0.061171) | 0.344767 / 0.275898 (0.068869) | 0.377798 / 0.323480 (0.054318) | 0.005656 / 0.007986 (-0.002330) | 0.003601 / 0.004328 (-0.000727) | 0.065486 / 0.004250 (0.061235) | 0.056191 / 0.037052 (0.019139) | 0.351412 / 0.258489 (0.092923) | 0.398591 / 0.293841 (0.104750) | 0.031662 / 0.128546 (-0.096884) | 0.008901 / 0.075646 (-0.066745) | 0.290423 / 0.419271 (-0.128849) | 0.053793 / 0.043533 (0.010260) | 0.347968 / 0.255139 (0.092829) | 0.376978 / 0.283200 (0.093778) | 0.026745 / 0.141683 (-0.114938) | 1.514119 / 1.452155 (0.061964) | 1.580920 / 1.492716 (0.088203) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273648 / 0.018006 (0.255642) | 0.575176 / 0.000490 (0.574686) | 0.003557 / 0.000200 (0.003357) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031714 / 0.037411 (-0.005697) | 0.089166 / 0.014526 (0.074640) | 0.101525 / 0.176557 (-0.075032) | 0.161855 / 0.737135 (-0.575281) | 0.101391 / 0.296338 (-0.194947) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.380947 / 0.215209 (0.165738) | 3.800527 / 2.077655 (1.722873) | 
1.820789 / 1.504120 (0.316669) | 1.657327 / 1.541195 (0.116132) | 1.776242 / 1.468490 (0.307752) | 0.486954 / 4.584777 (-4.097823) | 3.688340 / 3.745712 (-0.057372) | 3.354453 / 5.269862 (-1.915409) | 2.119995 / 4.565676 (-2.445682) | 0.057446 / 0.424275 (-0.366829) | 0.007752 / 0.007607 (0.000145) | 0.461907 / 0.226044 (0.235862) | 4.617870 / 2.268929 (2.348942) | 2.337025 / 55.444624 (-53.107599) | 1.964770 / 6.876477 (-4.911707) | 2.252066 / 2.142072 (0.109993) | 0.591585 / 4.805227 (-4.213642) | 0.134655 / 6.500664 (-6.366009) | 0.060646 / 0.075469 (-0.014823) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263271 / 1.841788 (-0.578517) | 20.822286 / 8.074308 (12.747978) | 14.710256 / 10.191392 (4.518864) | 0.167285 / 0.680424 (-0.513139) | 0.018302 / 0.534201 (-0.515899) | 0.401023 / 0.579283 (-0.178260) | 0.428956 / 0.434364 (-0.005407) | 0.466120 / 0.540337 (-0.074218) | 0.637868 / 1.386936 (-0.749069) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007174 / 0.011353 (-0.004179) | 0.004418 / 0.011008 (-0.006590) | 0.065731 / 0.038508 (0.027223) | 0.090457 / 0.023109 (0.067348) | 0.387306 / 0.275898 (0.111408) | 0.427178 / 0.323480 (0.103698) | 0.005699 / 0.007986 (-0.002286) | 0.003662 / 0.004328 (-0.000666) | 0.066190 / 0.004250 (0.061940) | 0.062860 / 0.037052 (0.025808) | 0.388855 / 0.258489 (0.130366) | 0.427853 / 0.293841 (0.134012) | 0.032770 / 0.128546 (-0.095776) | 0.008780 / 0.075646 (-0.066866) | 0.071156 / 0.419271 (-0.348116) | 0.050174 / 0.043533 (0.006641) | 0.385254 / 0.255139 (0.130115) | 0.405069 / 0.283200 (0.121869) | 0.025561 / 0.141683 (-0.116122) | 1.506907 / 1.452155 (0.054752) | 1.543270 / 1.492716 (0.050554) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.304651 / 0.018006 (0.286645) | 0.577269 / 0.000490 (0.576780) | 0.004479 / 0.000200 (0.004279) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034070 / 0.037411 (-0.003341) | 0.097664 / 0.014526 (0.083138) | 0.106969 / 0.176557 (-0.069588) | 0.163093 / 0.737135 (-0.574043) | 0.109384 / 0.296338 (-0.186955) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414823 / 0.215209 (0.199614) | 4.148390 / 2.077655 (2.070735) | 2.114038 / 1.504120 (0.609918) | 1.959316 / 1.541195 (0.418121) | 2.098138 / 1.468490 (0.629648) | 0.486338 / 4.584777 (-4.098439) | 3.642850 / 3.745712 (-0.102863) | 3.458311 / 5.269862 (-1.811551) | 2.185662 / 4.565676 (-2.380014) | 0.057555 / 0.424275 (-0.366720) | 0.007522 / 0.007607 (-0.000085) | 0.497975 / 0.226044 (0.271931) | 4.971528 / 2.268929 (2.702600) | 2.614087 / 55.444624 (-52.830537) | 2.288406 / 6.876477 (-4.588070) | 2.564067 / 2.142072 (0.421995) | 0.582248 / 4.805227 (-4.222979) | 0.134931 / 6.500664 (-6.365733) | 0.062689 / 0.075469 (-0.012780) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.343331 / 1.841788 (-0.498457) | 21.398950 / 8.074308 (13.324642) | 14.620971 / 10.191392 (4.429579) | 0.169779 / 0.680424 (-0.510644) | 0.018683 / 0.534201 (-0.515518) | 0.396152 / 0.579283 (-0.183131) | 0.409596 / 0.434364 (-0.024768) | 0.482875 / 0.540337 (-0.057463) | 0.659977 / 1.386936 (-0.726959) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1fd2234b8c802d47db5a5aa939148f98c9c49350 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006662 / 0.011353 (-0.004691) | 0.003959 / 0.011008 (-0.007049) | 0.084447 / 0.038508 (0.045939) | 0.070267 / 0.023109 (0.047158) | 0.310301 / 0.275898 (0.034403) | 0.339866 / 0.323480 (0.016386) | 0.004008 / 0.007986 (-0.003977) | 0.003270 / 0.004328 (-0.001058) | 0.064997 / 0.004250 (0.060746) | 0.053151 / 0.037052 (0.016099) | 0.327867 / 0.258489 (0.069378) | 0.368560 / 0.293841 (0.074719) | 0.031436 / 0.128546 (-0.097111) | 0.008547 / 0.075646 (-0.067099) | 0.288513 / 0.419271 (-0.130758) | 0.051833 / 0.043533 (0.008300) | 0.312660 / 0.255139 (0.057521) | 0.347180 / 0.283200 (0.063980) | 0.024982 / 0.141683 (-0.116701) | 1.472487 / 1.452155 (0.020333) | 1.550138 / 1.492716 (0.057422) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208443 / 0.018006 (0.190437) | 0.451927 / 0.000490 (0.451437) | 0.004452 / 0.000200 (0.004252) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029164 / 0.037411 (-0.008247) | 0.085801 / 0.014526 (0.071275) | 0.096229 / 0.176557 (-0.080327) | 0.153063 / 0.737135 (-0.584072) | 0.097712 / 0.296338 (-0.198626) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383969 / 0.215209 (0.168760) | 3.829216 / 2.077655 (1.751561) | 1.854466 / 1.504120 (0.350346) | 1.684149 / 1.541195 (0.142954) | 1.759422 / 1.468490 (0.290932) | 0.480229 / 4.584777 (-4.104548) | 3.653363 / 3.745712 (-0.092349) | 3.264456 / 5.269862 (-2.005406) | 2.020579 / 4.565676 (-2.545097) | 0.056920 / 0.424275 (-0.367355) | 0.007625 / 0.007607 (0.000018) | 0.458559 / 0.226044 (0.232515) | 4.580288 / 2.268929 (2.311359) | 2.353783 / 55.444624 (-53.090841) | 1.967223 / 6.876477 (-4.909253) | 2.182707 / 2.142072 (0.040634) | 0.631341 / 4.805227 (-4.173886) | 0.141656 / 6.500664 (-6.359008) | 0.059918 / 0.075469 (-0.015551) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.279635 / 1.841788 (-0.562153) | 19.725763 / 8.074308 (11.651455) | 14.477946 / 10.191392 (4.286554) | 0.164360 / 0.680424 (-0.516064) | 0.018286 / 0.534201 (-0.515915) | 0.394935 / 0.579283 (-0.184348) | 0.419638 / 0.434364 (-0.014726) | 0.460366 / 0.540337 
(-0.079972) | 0.636876 / 1.386936 (-0.750060) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006568 / 0.011353 (-0.004785) | 0.004270 / 0.011008 (-0.006738) | 0.065522 / 0.038508 (0.027014) | 0.071597 / 0.023109 (0.048487) | 0.394929 / 0.275898 (0.119031) | 0.427548 / 0.323480 (0.104068) | 0.005320 / 0.007986 (-0.002665) | 0.003366 / 0.004328 (-0.000962) | 0.065780 / 0.004250 (0.061530) | 0.055390 / 0.037052 (0.018338) | 0.397950 / 0.258489 (0.139461) | 0.435800 / 0.293841 (0.141959) | 0.031816 / 0.128546 (-0.096730) | 0.008555 / 0.075646 (-0.067091) | 0.072110 / 0.419271 (-0.347161) | 0.049077 / 0.043533 (0.005544) | 0.390065 / 0.255139 (0.134926) | 0.410294 / 0.283200 (0.127094) | 0.023389 / 0.141683 (-0.118294) | 1.491491 / 1.452155 (0.039336) | 1.551057 / 1.492716 (0.058341) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243869 / 0.018006 (0.225862) | 0.451961 / 0.000490 (0.451471) | 0.019834 / 0.000200 (0.019634) | 0.000114 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031031 / 0.037411 (-0.006380) | 0.088189 / 0.014526 (0.073663) | 0.101743 / 0.176557 (-0.074814) | 0.155236 / 0.737135 (-0.581899) | 0.101245 / 0.296338 (-0.195094) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422178 / 0.215209 (0.206969) | 4.199989 / 2.077655 (2.122334) | 2.228816 / 1.504120 (0.724696) | 2.057172 / 1.541195 (0.515978) | 2.162651 
/ 1.468490 (0.694161) | 0.491186 / 4.584777 (-4.093591) | 3.666221 / 3.745712 (-0.079491) | 3.289531 / 5.269862 (-1.980331) | 2.050027 / 4.565676 (-2.515650) | 0.057464 / 0.424275 (-0.366811) | 0.007379 / 0.007607 (-0.000228) | 0.506532 / 0.226044 (0.280487) | 5.066385 / 2.268929 (2.797456) | 2.694405 / 55.444624 (-52.750219) | 2.372200 / 6.876477 (-4.504277) | 2.562724 / 2.142072 (0.420652) | 0.615474 / 4.805227 (-4.189753) | 0.148284 / 6.500664 (-6.352380) | 0.061380 / 0.075469 (-0.014089) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.332649 / 1.841788 (-0.509139) | 20.591063 / 8.074308 (12.516755) | 14.105253 / 10.191392 (3.913861) | 0.151886 / 0.680424 (-0.528537) | 0.018200 / 0.534201 (-0.516001) | 0.395278 / 0.579283 (-0.184005) | 0.407113 / 0.434364 (-0.027251) | 0.473168 / 0.540337 (-0.067170) | 0.660766 / 1.386936 (-0.726170) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8b8e6ee067eb74e7965ca2a6768f15f9398cb7c8 \"CML watermark\")\n" ]
2023-08-21T14:57:54
2023-08-21T16:27:05
2023-08-21T16:18:26
CONTRIBUTOR
null
When I try to push to an arrow repo (can provide the link on Slack), it uploads the files but fails to update the metadata, with

```
File "app.py", line 123, in add_new_eval
    eval_results[level].push_to_hub(my_repo, token=TOKEN, split=SPLIT)
File "blabla_my_env_path/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 5501, in push_to_hub
    if not metadata_configs:
UnboundLocalError: local variable 'metadata_configs' referenced before assignment
```

This fixes it.
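For context, the traceback points at the classic Python pitfall where a local is only bound on one branch. A minimal sketch of the shape of the fix (a hypothetical simplification, not the actual diff):

```python
def resolve_metadata_configs(repo_has_readme: bool) -> dict:
    # Before the fix, `metadata_configs` was only assigned inside the
    # README-parsing branch, so a repo with a dataset infos JSON but no
    # README reached the truthiness check below with the name unbound.
    metadata_configs = {}  # the fix: initialize before any branching
    if repo_has_readme:
        # Hypothetical stand-in for parsing configs from the README header.
        metadata_configs = {"default": {"data_files": "data/*.parquet"}}
    if not metadata_configs:  # previously raised UnboundLocalError here
        metadata_configs = {"default": {}}
    return metadata_configs

print(resolve_metadata_configs(repo_has_readme=False))  # -> {'default': {}}
```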
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6164/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6164", "html_url": "https://github.com/huggingface/datasets/pull/6164", "diff_url": "https://github.com/huggingface/datasets/pull/6164.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6164.patch", "merged_at": "2023-08-21T16:18:26" }
true
https://api.github.com/repos/huggingface/datasets/issues/6163
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6163/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6163/comments
https://api.github.com/repos/huggingface/datasets/issues/6163/events
https://github.com/huggingface/datasets/issues/6163
1,857,682,241
I_kwDODunzps5uuftB
6,163
Error type: ArrowInvalid Details: Failed to parse string: '[254,254]' as a scalar of type int32
{ "login": "shishirCTC", "id": 90616801, "node_id": "MDQ6VXNlcjkwNjE2ODAx", "avatar_url": "https://avatars.githubusercontent.com/u/90616801?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shishirCTC", "html_url": "https://github.com/shishirCTC", "followers_url": "https://api.github.com/users/shishirCTC/followers", "following_url": "https://api.github.com/users/shishirCTC/following{/other_user}", "gists_url": "https://api.github.com/users/shishirCTC/gists{/gist_id}", "starred_url": "https://api.github.com/users/shishirCTC/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shishirCTC/subscriptions", "organizations_url": "https://api.github.com/users/shishirCTC/orgs", "repos_url": "https://api.github.com/users/shishirCTC/repos", "events_url": "https://api.github.com/users/shishirCTC/events{/privacy}", "received_events_url": "https://api.github.com/users/shishirCTC/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Answered on the forum [here](https://discuss.huggingface.co/t/error-type-arrowinvalid-details-failed-to-parse-string-254-254-as-a-scalar-of-type-int32/51323)." ]
2023-08-19T11:34:40
2023-08-21T13:28:16
null
NONE
null
### Describe the bug

I am getting the following error while trying to upload a CSV sheet to train a model. My CSV sheet's content is exactly the same as shown in the example CSV file on the AutoTrain page. Attaching a screenshot of the error for reference. I have also tried converting the answer indices, which are integers, into strings by placing them in inverted commas, as well as leaving them without inverted commas. Can anyone please help me out? FYI: I am using the Chrome browser.

Error type: ArrowInvalid
Details: Failed to parse string: '[254,254]' as a scalar of type int32

![Screenshot 2023-08-19 165827](https://github.com/huggingface/datasets/assets/90616801/95fad96e-7dce-4bb5-9f83-9f1659a32891)

### Steps to reproduce the bug

Kindly let me know how to fix this?

### Expected behavior

Kindly let me know how to fix this?

### Environment info

Kindly let me know how to fix this?
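The error message suggests the answer-position column holds the string '[254,254]' where a single int32 per row is expected. A hedged pre-processing sketch (column and file names are hypothetical) that collapses such stringified lists to one integer before re-uploading the CSV:

```python
import ast
import pandas as pd

df = pd.read_csv("train.csv")  # hypothetical input file

def first_int(value):
    # Turn "[254,254]" into 254; pass plain integers through unchanged.
    if isinstance(value, str) and value.strip().startswith("["):
        parsed = ast.literal_eval(value)  # e.g. [254, 254]
        return int(parsed[0])
    return int(value)

df["answer_start"] = df["answer_start"].map(first_int)  # hypothetical column
df.to_csv("train_fixed.csv", index=False)
```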
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6163/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6163/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6162
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6162/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6162/comments
https://api.github.com/repos/huggingface/datasets/issues/6162/events
https://github.com/huggingface/datasets/issues/6162
1,856,198,342
I_kwDODunzps5uo1bG
6,162
load_dataset('json',...) from togethercomputer/RedPajama-Data-1T errors when jsonl rows contain different data fields
{ "login": "rbrugaro", "id": 82971690, "node_id": "MDQ6VXNlcjgyOTcxNjkw", "avatar_url": "https://avatars.githubusercontent.com/u/82971690?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rbrugaro", "html_url": "https://github.com/rbrugaro", "followers_url": "https://api.github.com/users/rbrugaro/followers", "following_url": "https://api.github.com/users/rbrugaro/following{/other_user}", "gists_url": "https://api.github.com/users/rbrugaro/gists{/gist_id}", "starred_url": "https://api.github.com/users/rbrugaro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rbrugaro/subscriptions", "organizations_url": "https://api.github.com/users/rbrugaro/orgs", "repos_url": "https://api.github.com/users/rbrugaro/repos", "events_url": "https://api.github.com/users/rbrugaro/events{/privacy}", "received_events_url": "https://api.github.com/users/rbrugaro/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! Feel free to open a discussion at https://huggingface.co./datasets/togethercomputer/RedPajama-Data-1T/discussions to ask the file to be fixed (or directly open a PR with the fixed file)\r\n\r\n`datasets` expects all the examples to have the same fields", "@lhoestq I think the problem is caused by the fact that hugging face datasets writes a copy of data to the local cache using pyarrow. And the data scheme is inferred from the first few data blocks as can be seen [here](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_writer.py#L570). Maybe setting `streaming=True` can workaround this problem. Would you agree with my statement? ", "> @lhoestq I think the problem is caused by the fact that hugging face datasets writes a copy of data to the local cache using pyarrow. And the data scheme is inferred from the first few data blocks as can be seen [here](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_writer.py#L570).\r\n\r\nCorrect. Therefore any example that doesn't follow the inferred schema will make the code fail.\r\n\r\n> Maybe setting streaming=True can workaround this problem. Would you agree with my statement?\r\n\r\nYou'll meet the same problem but later - when streaming and arriving at the problematic example", "@lhoestq I just run below test with streaming=True and is not failing at the problematic example\r\n```python\r\nds = load_dataset('json', data_files='/path_to_local_RedPajamaData/filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl', streaming=True)\r\ncount = 0\r\nfor i in ds['train']:\r\n count += 1\r\n print(count)\r\n```\r\n\r\nand completes the 262241 samples successfully. It does error our when streaming is not used " ]
2023-08-18T07:19:39
2023-08-18T17:00:35
null
NONE
null
### Describe the bug

When loading some jsonl from the redpajama-data-1T github source [togethercomputer/RedPajama-Data-1T](https://huggingface.co./datasets/togethercomputer/RedPajama-Data-1T), loading fails because one row of the file contains an extra field called **symlink_target**. After deleting that line, loading succeeds. We also tried loading this file, discrepancy included, with the function below, and it succeeds:

```python
os.environ["RED_PAJAMA_DATA_DIR"] = "/path_to_local_copy_of_RedPajama-Data-1T"
ds = load_dataset('togethercomputer/RedPajama-Data-1T', 'github', cache_dir="/path_to_folder_with_jsonl", streaming=True)['train']
```

### Steps to reproduce the bug

Steps to reproduce the behavior:

1. Download one jsonl from redpajama-data-1T:
```bash
wget https://data.together.xyz/redpajama-data-1T/v1.0.0/github/filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl
```
2. Loading the dataset gives an error:
```python
from datasets import load_dataset
ds = load_dataset('json', data_files='/path_to/filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl')
```
_TypeError: Couldn't cast array of type struct<content_hash: string, timestamp: string, source: string, line_count: int64, max_line_length: int64, avg_line_length: double, alnum_prop: double, repo_name: string, id: string, size: string, binary: bool, copies: string, ref: string, path: string, mode: string, license: string, language: list<item: struct<name: string, bytes: string>>, **symlink_target: string**> to {'content_hash': Value(dtype='string', id=None), 'timestamp': Value(dtype='string', id=None), 'source': Value(dtype='string', id=None), 'line_count': Value(dtype='int64', id=None), 'max_line_length': Value(dtype='int64', id=None), 'avg_line_length': Value(dtype='float64', id=None), 'alnum_prop': Value(dtype='float64', id=None), 'repo_name': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'size': Value(dtype='string', id=None), 'binary': Value(dtype='bool', id=None), 'copies': Value(dtype='string', id=None), 'ref': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'mode': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'language': [{'name': Value(dtype='string', id=None), 'bytes': Value(dtype='string', id=None)}]}_

3. To remove the line causing the problem (the one that includes **symlink_target**):
```bash
sed -i '112252d' filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl
```
4. Rerunning the loading function now succeeds:
```python
from datasets import load_dataset
ds = load_dataset('json', data_files='/path_to/filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl')
```

### Expected behavior

Have a clean dataset without discrepancies in the jsonl fields, or have the load_dataset('json', ...) method not error out.

### Environment info

- `datasets` version: 2.14.1
- Platform: Linux-4.18.0-425.13.1.el8_7.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
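Because `datasets` infers the Arrow schema from the first rows (see the discussion above), a quick way to locate offending rows before loading is to diff each line's keys against those of the first line. A sketch under the assumption, taken from the error message, that the extra field lives in a nested `meta` struct:

```python
import json

path = "filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl"
reference_keys = None
with open(path) as f:
    for line_number, line in enumerate(f, start=1):
        row = json.loads(line)
        # Compare the nested metadata struct if present, else the top level.
        keys = frozenset(row.get("meta", row).keys())
        if reference_keys is None:
            reference_keys = keys  # the schema datasets would infer
        elif keys != reference_keys:
            print(line_number, sorted(keys.symmetric_difference(reference_keys)))
```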
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6162/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6161
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6161/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6161/comments
https://api.github.com/repos/huggingface/datasets/issues/6161/events
https://github.com/huggingface/datasets/pull/6161
1,855,794,354
PR_kwDODunzps5YM0g7
6,161
Fix protocol prefix for Beam
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006736 / 0.011353 (-0.004617) | 0.004099 / 0.011008 (-0.006909) | 0.084339 / 0.038508 (0.045831) | 0.073715 / 0.023109 (0.050605) | 0.311962 / 0.275898 (0.036064) | 0.356108 / 0.323480 (0.032628) | 0.005321 / 0.007986 (-0.002665) | 0.003390 / 0.004328 (-0.000939) | 0.064622 / 0.004250 (0.060372) | 0.053978 / 0.037052 (0.016926) | 0.328967 / 0.258489 (0.070478) | 0.370506 / 0.293841 (0.076665) | 0.031123 / 0.128546 (-0.097423) | 0.008465 / 0.075646 (-0.067181) | 0.288136 / 0.419271 (-0.131136) | 0.052909 / 0.043533 (0.009376) | 0.325189 / 0.255139 (0.070050) | 0.360112 / 0.283200 (0.076912) | 0.023389 / 0.141683 (-0.118294) | 1.492899 / 1.452155 (0.040744) | 1.586449 / 1.492716 (0.093733) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219708 / 0.018006 (0.201702) | 0.469550 / 0.000490 (0.469060) | 0.002776 / 0.000200 (0.002576) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028985 / 0.037411 (-0.008427) | 0.083487 / 0.014526 (0.068961) | 0.096938 / 0.176557 (-0.079619) | 0.152886 / 0.737135 (-0.584249) | 0.096242 / 0.296338 (-0.200096) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.381959 / 0.215209 (0.166750) | 3.800033 / 2.077655 (1.722378) | 1.831903 / 1.504120 (0.327783) | 1.663207 / 1.541195 (0.122012) | 1.747282 / 1.468490 
(0.278792) | 0.481671 / 4.584777 (-4.103106) | 3.653725 / 3.745712 (-0.091987) | 3.253058 / 5.269862 (-2.016804) | 2.022014 / 4.565676 (-2.543663) | 0.056651 / 0.424275 (-0.367624) | 0.007640 / 0.007607 (0.000033) | 0.461795 / 0.226044 (0.235750) | 4.625535 / 2.268929 (2.356606) | 2.356341 / 55.444624 (-53.088283) | 1.977437 / 6.876477 (-4.899040) | 2.179672 / 2.142072 (0.037599) | 0.582875 / 4.805227 (-4.222353) | 0.132964 / 6.500664 (-6.367700) | 0.060398 / 0.075469 (-0.015071) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.309567 / 1.841788 (-0.532220) | 19.856306 / 8.074308 (11.781997) | 14.074350 / 10.191392 (3.882958) | 0.149615 / 0.680424 (-0.530809) | 0.018487 / 0.534201 (-0.515714) | 0.393995 / 0.579283 (-0.185288) | 0.409057 / 0.434364 (-0.025307) | 0.459551 / 0.540337 (-0.080787) | 0.644594 / 1.386936 (-0.742342) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006824 / 0.011353 (-0.004529) | 0.004099 / 0.011008 (-0.006909) | 0.064415 / 0.038508 (0.025907) | 0.077983 / 0.023109 (0.054874) | 0.359351 / 0.275898 (0.083453) | 0.395168 / 0.323480 (0.071688) | 0.005384 / 0.007986 (-0.002602) | 0.003298 / 0.004328 (-0.001030) | 0.065041 / 0.004250 (0.060791) | 0.056717 / 0.037052 (0.019664) | 0.366882 / 0.258489 (0.108393) | 0.401337 / 0.293841 (0.107496) | 0.032273 / 0.128546 (-0.096273) | 0.008666 / 0.075646 (-0.066981) | 0.071442 / 0.419271 (-0.347829) | 0.049999 / 0.043533 (0.006466) | 0.365001 / 0.255139 (0.109862) | 0.379579 / 0.283200 (0.096379) | 0.023357 / 0.141683 (-0.118326) | 1.476839 / 1.452155 (0.024684) | 1.541703 / 1.492716 (0.048987) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239014 / 0.018006 (0.221008) | 0.460678 / 0.000490 (0.460188) | 0.003368 / 0.000200 (0.003168) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030981 / 0.037411 (-0.006430) | 0.088287 / 0.014526 (0.073761) | 0.102459 / 0.176557 (-0.074098) | 0.154695 / 0.737135 (-0.582441) | 0.103479 / 0.296338 (-0.192860) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416084 / 0.215209 (0.200874) | 4.128365 / 2.077655 (2.050710) | 2.113053 / 1.504120 (0.608934) | 1.948993 / 1.541195 (0.407798) | 2.035609 / 1.468490 (0.567119) | 0.481705 / 4.584777 (-4.103072) | 3.630366 / 3.745712 (-0.115346) | 3.340837 / 5.269862 (-1.929024) | 2.052573 / 4.565676 (-2.513104) | 0.056805 / 0.424275 (-0.367470) | 0.007294 / 0.007607 (-0.000313) | 0.489597 / 0.226044 (0.263553) | 4.892728 / 2.268929 (2.623799) | 2.564692 / 55.444624 (-52.879932) | 2.251964 / 6.876477 (-4.624513) | 2.457912 / 2.142072 (0.315839) | 0.588433 / 4.805227 (-4.216794) | 0.133588 / 6.500664 (-6.367076) | 0.062298 / 0.075469 (-0.013171) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.328566 / 1.841788 (-0.513222) | 20.145568 / 8.074308 (12.071260) | 14.231306 / 10.191392 (4.039914) | 0.168356 / 0.680424 (-0.512067) | 0.018333 / 0.534201 (-0.515868) | 0.390901 / 0.579283 (-0.188382) | 0.415005 / 0.434364 (-0.019359) | 0.477282 / 0.540337 (-0.063055) | 0.652085 / 1.386936 (-0.734851) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#341a41880a70b29f030caa0d36f1e297535ba5f9 \"CML watermark\")\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6161). 
All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006388 / 0.011353 (-0.004965) | 0.003917 / 0.011008 (-0.007092) | 0.087397 / 0.038508 (0.048889) | 0.068522 / 0.023109 (0.045412) | 0.313299 / 0.275898 (0.037401) | 0.342884 / 0.323480 (0.019405) | 0.005216 / 0.007986 (-0.002770) | 0.003293 / 0.004328 (-0.001035) | 0.067474 / 0.004250 (0.063224) | 0.051122 / 0.037052 (0.014070) | 0.326443 / 0.258489 (0.067954) | 0.355744 / 0.293841 (0.061903) | 0.031130 / 0.128546 (-0.097416) | 0.008617 / 0.075646 (-0.067029) | 0.291201 / 0.419271 (-0.128070) | 0.052050 / 0.043533 (0.008517) | 0.312135 / 0.255139 (0.056996) | 0.347233 / 0.283200 (0.064034) | 0.023775 / 0.141683 (-0.117907) | 1.478807 / 1.452155 (0.026652) | 1.581239 / 1.492716 (0.088522) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208252 / 0.018006 (0.190246) | 0.466314 / 0.000490 (0.465824) | 0.004439 / 0.000200 (0.004239) | 0.000104 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027918 / 0.037411 (-0.009494) | 0.082410 / 0.014526 (0.067884) | 0.094231 / 0.176557 (-0.082326) | 0.150189 / 0.737135 (-0.586946) | 0.095404 / 0.296338 (-0.200935) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382026 / 0.215209 (0.166817) | 3.822213 / 2.077655 (1.744559) | 1.833716 / 
1.504120 (0.329596) | 1.666250 / 1.541195 (0.125055) | 1.703350 / 1.468490 (0.234860) | 0.477918 / 4.584777 (-4.106859) | 3.629304 / 3.745712 (-0.116408) | 3.199672 / 5.269862 (-2.070190) | 1.977855 / 4.565676 (-2.587821) | 0.056275 / 0.424275 (-0.368000) | 0.007538 / 0.007607 (-0.000070) | 0.455995 / 0.226044 (0.229950) | 4.559234 / 2.268929 (2.290305) | 2.333819 / 55.444624 (-53.110805) | 2.006851 / 6.876477 (-4.869625) | 2.150683 / 2.142072 (0.008611) | 0.576786 / 4.805227 (-4.228441) | 0.132352 / 6.500664 (-6.368312) | 0.059359 / 0.075469 (-0.016110) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.261525 / 1.841788 (-0.580262) | 19.174957 / 8.074308 (11.100649) | 14.286796 / 10.191392 (4.095404) | 0.144610 / 0.680424 (-0.535813) | 0.018213 / 0.534201 (-0.515988) | 0.390404 / 0.579283 (-0.188879) | 0.404678 / 0.434364 (-0.029686) | 0.455636 / 0.540337 (-0.084701) | 0.620801 / 1.386936 (-0.766135) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006383 / 0.011353 (-0.004970) | 0.003852 / 0.011008 (-0.007156) | 0.064116 / 0.038508 (0.025607) | 0.068920 / 0.023109 (0.045810) | 0.359439 / 0.275898 (0.083541) | 0.388904 / 0.323480 (0.065425) | 0.005192 / 0.007986 (-0.002794) | 0.003233 / 0.004328 (-0.001095) | 0.064589 / 0.004250 (0.060339) | 0.054496 / 0.037052 (0.017444) | 0.368699 / 0.258489 (0.110210) | 0.400420 / 0.293841 (0.106579) | 0.030869 / 0.128546 (-0.097677) | 0.008424 / 0.075646 (-0.067222) | 0.071015 / 0.419271 (-0.348257) | 0.048333 / 0.043533 (0.004801) | 0.360652 / 0.255139 (0.105513) | 0.393534 / 0.283200 (0.110334) | 0.022685 / 0.141683 (-0.118998) | 1.495565 / 1.452155 (0.043410) | 1.537947 / 1.492716 (0.045230) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232911 / 0.018006 (0.214905) | 0.454191 / 0.000490 (0.453702) | 0.005711 / 0.000200 (0.005511) | 0.000117 / 0.000054 (0.000062) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029486 / 0.037411 (-0.007925) | 0.087249 / 0.014526 (0.072724) | 0.100104 / 0.176557 (-0.076453) | 0.151556 / 0.737135 (-0.585580) | 0.100853 / 0.296338 (-0.195485) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415134 / 0.215209 (0.199925) | 4.139068 / 2.077655 (2.061413) | 2.121079 / 1.504120 (0.616959) | 1.945616 / 1.541195 (0.404421) | 1.988188 / 1.468490 (0.519698) | 0.483994 / 4.584777 (-4.100783) | 3.640366 / 3.745712 (-0.105347) | 3.218896 / 5.269862 (-2.050966) | 2.015527 / 4.565676 (-2.550149) | 0.056946 / 0.424275 (-0.367329) | 0.007262 / 0.007607 (-0.000345) | 0.486075 / 0.226044 (0.260031) | 4.864191 / 2.268929 (2.595262) | 2.590853 / 55.444624 (-52.853772) | 2.315359 / 6.876477 (-4.561118) | 2.418733 / 2.142072 (0.276661) | 0.582378 / 4.805227 (-4.222849) | 0.134097 / 6.500664 (-6.366568) | 0.060797 / 0.075469 (-0.014672) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.337021 / 1.841788 (-0.504766) | 19.468907 / 8.074308 (11.394599) | 14.348874 / 10.191392 (4.157482) | 0.170408 / 0.680424 (-0.510016) | 0.018414 / 0.534201 (-0.515787) | 0.394551 / 0.579283 (-0.184732) | 0.404750 / 0.434364 (-0.029613) | 0.471972 / 0.540337 (-0.068365) | 0.650607 / 1.386936 (-0.736329) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ab4d978e2d5c246dc91e2fed041b06a38190be3b \"CML watermark\")\n", "The CI errors are unrelated to the changes" ]
2023-08-17T22:40:37
2023-08-18T13:47:59
null
CONTRIBUTOR
null
Fix #6147
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6161/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6161", "html_url": "https://github.com/huggingface/datasets/pull/6161", "diff_url": "https://github.com/huggingface/datasets/pull/6161.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6161.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6160
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6160/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6160/comments
https://api.github.com/repos/huggingface/datasets/issues/6160/events
https://github.com/huggingface/datasets/pull/6160
1,855,760,543
PR_kwDODunzps5YMtLQ
6,160
Fix Parquet loading with `columns`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008368 / 0.011353 (-0.002985) | 0.004754 / 0.011008 (-0.006254) | 0.096646 / 0.038508 (0.058138) | 0.088980 / 0.023109 (0.065871) | 0.374532 / 0.275898 (0.098633) | 0.404840 / 0.323480 (0.081360) | 0.006026 / 0.007986 (-0.001960) | 0.005716 / 0.004328 (0.001387) | 0.076297 / 0.004250 (0.072047) | 0.072335 / 0.037052 (0.035283) | 0.379435 / 0.258489 (0.120946) | 0.423449 / 0.293841 (0.129608) | 0.041344 / 0.128546 (-0.087202) | 0.009758 / 0.075646 (-0.065889) | 0.341550 / 0.419271 (-0.077721) | 0.068559 / 0.043533 (0.025026) | 0.368313 / 0.255139 (0.113174) | 0.415147 / 0.283200 (0.131947) | 0.028692 / 0.141683 (-0.112990) | 1.816198 / 1.452155 (0.364044) | 1.983351 / 1.492716 (0.490635) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222712 / 0.018006 (0.204706) | 0.517850 / 0.000490 (0.517360) | 0.004436 / 0.000200 (0.004236) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033168 / 0.037411 (-0.004243) | 0.101353 / 0.014526 (0.086827) | 0.113235 / 0.176557 (-0.063322) | 0.180308 / 0.737135 (-0.556827) | 0.114604 / 0.296338 (-0.181734) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454415 / 0.215209 (0.239206) | 4.500355 / 2.077655 (2.422701) | 2.188223 
/ 1.504120 (0.684103) | 1.974256 / 1.541195 (0.433061) | 2.067331 / 1.468490 (0.598841) | 0.572982 / 4.584777 (-4.011795) | 4.239160 / 3.745712 (0.493448) | 3.836812 / 5.269862 (-1.433049) | 2.367022 / 4.565676 (-2.198655) | 0.066886 / 0.424275 (-0.357389) | 0.009111 / 0.007607 (0.001504) | 0.539881 / 0.226044 (0.313837) | 5.362247 / 2.268929 (3.093319) | 2.784044 / 55.444624 (-52.660580) | 2.320975 / 6.876477 (-4.555502) | 2.543108 / 2.142072 (0.401036) | 0.685751 / 4.805227 (-4.119477) | 0.156840 / 6.500664 (-6.343824) | 0.071764 / 0.075469 (-0.003705) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.549830 / 1.841788 (-0.291958) | 22.799622 / 8.074308 (14.725314) | 16.750692 / 10.191392 (6.559300) | 0.196192 / 0.680424 (-0.484232) | 0.024518 / 0.534201 (-0.509683) | 0.479302 / 0.579283 (-0.099981) | 0.522256 / 0.434364 (0.087892) | 0.545809 / 0.540337 (0.005471) | 0.748437 / 1.386936 (-0.638499) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007789 / 0.011353 (-0.003564) | 0.004563 / 0.011008 (-0.006445) | 0.074631 / 0.038508 (0.036123) | 0.086892 / 0.023109 (0.063783) | 0.427014 / 0.275898 (0.151116) | 0.463257 / 0.323480 (0.139777) | 0.005987 / 0.007986 (-0.001999) | 0.003803 / 0.004328 (-0.000526) | 0.074799 / 0.004250 (0.070549) | 0.063473 / 0.037052 (0.026420) | 0.429905 / 0.258489 (0.171416) | 0.468967 / 0.293841 (0.175127) | 0.036768 / 0.128546 (-0.091778) | 0.009675 / 0.075646 (-0.065971) | 0.082546 / 0.419271 (-0.336725) | 0.058027 / 0.043533 (0.014494) | 0.429813 / 0.255139 (0.174674) | 0.449200 / 0.283200 (0.166001) | 0.026713 / 0.141683 (-0.114969) | 1.812022 / 1.452155 (0.359867) | 1.847305 / 1.492716 (0.354589) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.320383 / 0.018006 (0.302377) | 0.485995 / 0.000490 (0.485505) | 0.024365 / 0.000200 (0.024165) | 0.000156 / 0.000054 (0.000101) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036341 / 0.037411 (-0.001071) | 0.104635 / 0.014526 (0.090110) | 0.119456 / 0.176557 (-0.057101) | 0.182042 / 0.737135 (-0.555093) | 0.118944 / 0.296338 (-0.177395) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506410 / 0.215209 (0.291201) | 5.061119 / 2.077655 (2.983465) | 2.756557 / 1.504120 (1.252437) | 2.546504 / 1.541195 (1.005309) | 2.585509 / 1.468490 (1.117019) | 0.564291 / 4.584777 (-4.020486) | 4.281219 / 3.745712 (0.535507) | 3.919439 / 5.269862 (-1.350423) | 2.588788 / 4.565676 (-1.976889) | 0.066900 / 0.424275 (-0.357375) | 0.008680 / 0.007607 (0.001073) | 0.598435 / 0.226044 (0.372390) | 5.976054 / 2.268929 (3.707125) | 3.260211 / 55.444624 (-52.184414) | 2.874597 / 6.876477 (-4.001880) | 3.105769 / 2.142072 (0.963697) | 0.692938 / 4.805227 (-4.112289) | 0.157777 / 6.500664 (-6.342887) | 0.073128 / 0.075469 (-0.002341) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.559380 / 1.841788 (-0.282408) | 22.986540 / 8.074308 (14.912232) | 16.305564 / 10.191392 (6.114172) | 0.174939 / 0.680424 (-0.505485) | 0.021932 / 0.534201 (-0.512269) | 0.468162 / 0.579283 (-0.111121) | 0.472610 / 0.434364 (0.038246) | 0.574574 / 0.540337 (0.034237) | 0.783505 / 1.386936 (-0.603431) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#550923b5d6ae64eb20b8f66da843395e9fa404ac \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012553 / 0.011353 (0.001201) | 0.005358 / 0.011008 (-0.005650) | 0.108338 / 0.038508 (0.069830) | 0.101105 / 0.023109 (0.077995) | 0.416808 / 0.275898 (0.140910) | 0.454599 / 0.323480 (0.131119) | 0.006665 / 0.007986 (-0.001321) | 0.004186 / 0.004328 (-0.000143) | 0.084900 / 0.004250 (0.080649) | 0.062881 / 0.037052 (0.025829) | 0.424423 / 0.258489 (0.165934) | 0.482651 / 0.293841 (0.188810) | 0.055740 / 0.128546 (-0.072807) | 0.014469 / 0.075646 (-0.061177) | 0.383267 / 0.419271 (-0.036005) | 0.067487 / 0.043533 (0.023955) | 0.414983 / 0.255139 (0.159844) | 0.459437 / 0.283200 (0.176237) | 0.038679 / 0.141683 (-0.103004) | 1.828002 / 1.452155 (0.375847) | 1.951946 / 1.492716 (0.459230) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288033 / 0.018006 (0.270027) | 0.603536 / 0.000490 (0.603046) | 0.004874 / 0.000200 (0.004674) | 0.000138 / 0.000054 (0.000084) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031988 / 0.037411 (-0.005423) | 0.095807 / 0.014526 (0.081281) | 0.113459 / 0.176557 (-0.063098) | 0.182012 / 0.737135 (-0.555123) | 0.113121 / 0.296338 (-0.183217) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.620709 / 0.215209 (0.405500) | 6.096569 / 2.077655 (4.018915) | 2.754612 / 1.504120 (1.250492) | 2.449786 / 1.541195 (0.908591) | 2.470694 / 1.468490 (1.002204) | 0.837016 / 4.584777 (-3.747761) | 5.237290 / 3.745712 (1.491578) | 4.713220 / 5.269862 (-0.556642) | 3.020934 / 4.565676 (-1.544743) | 0.096892 / 0.424275 (-0.327383) | 0.009423 / 0.007607 (0.001816) | 0.720313 / 0.226044 (0.494269) | 7.369673 / 2.268929 (5.100744) | 3.550384 / 55.444624 (-51.894241) | 2.868868 / 6.876477 (-4.007609) | 3.081469 / 2.142072 (0.939397) | 1.042968 / 4.805227 (-3.762259) | 0.232530 / 6.500664 (-6.268134) | 0.080805 / 0.075469 (0.005336) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.645777 / 1.841788 (-0.196011) | 24.590862 / 8.074308 (16.516554) | 21.315496 / 10.191392 (11.124104) | 0.228796 / 0.680424 (-0.451628) | 0.028479 / 0.534201 (-0.505722) | 0.494413 / 0.579283 (-0.084870) | 0.582773 / 0.434364 (0.148409) | 0.552575 / 0.540337 
(0.012238) | 0.787217 / 1.386936 (-0.599719) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008743 / 0.011353 (-0.002609) | 0.005253 / 0.011008 (-0.005755) | 0.083766 / 0.038508 (0.045257) | 0.086305 / 0.023109 (0.063195) | 0.520171 / 0.275898 (0.244273) | 0.565812 / 0.323480 (0.242332) | 0.006465 / 0.007986 (-0.001520) | 0.004585 / 0.004328 (0.000257) | 0.085344 / 0.004250 (0.081094) | 0.063418 / 0.037052 (0.026366) | 0.519759 / 0.258489 (0.261270) | 0.552770 / 0.293841 (0.258929) | 0.049439 / 0.128546 (-0.079107) | 0.017564 / 0.075646 (-0.058082) | 0.092713 / 0.419271 (-0.326559) | 0.065837 / 0.043533 (0.022305) | 0.516133 / 0.255139 (0.260994) | 0.539813 / 0.283200 (0.256613) | 0.036531 / 0.141683 (-0.105152) | 1.919275 / 1.452155 (0.467121) | 2.039987 / 1.492716 (0.547271) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.297978 / 0.018006 (0.279972) | 0.608243 / 0.000490 (0.607753) | 0.006611 / 0.000200 (0.006411) | 0.000117 / 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033909 / 0.037411 (-0.003503) | 0.106370 / 0.014526 (0.091844) | 0.119032 / 0.176557 (-0.057524) | 0.180319 / 0.737135 (-0.556816) | 0.122826 / 0.296338 (-0.173513) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.639265 / 0.215209 (0.424056) | 6.248430 / 2.077655 (4.170775) | 2.944760 / 1.504120 (1.440640) | 2.654005 / 1.541195 (1.112811) | 2.733625 / 
1.468490 (1.265134) | 0.837172 / 4.584777 (-3.747605) | 5.245084 / 3.745712 (1.499372) | 4.722614 / 5.269862 (-0.547248) | 3.008286 / 4.565676 (-1.557391) | 0.102340 / 0.424275 (-0.321935) | 0.009433 / 0.007607 (0.001826) | 0.762991 / 0.226044 (0.536946) | 7.385020 / 2.268929 (5.116092) | 3.787648 / 55.444624 (-51.656977) | 3.234345 / 6.876477 (-3.642132) | 3.394444 / 2.142072 (1.252371) | 1.023472 / 4.805227 (-3.781756) | 0.208199 / 6.500664 (-6.292465) | 0.081513 / 0.075469 (0.006043) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.795864 / 1.841788 (-0.045923) | 25.270852 / 8.074308 (17.196544) | 23.356413 / 10.191392 (13.165021) | 0.228002 / 0.680424 (-0.452422) | 0.031851 / 0.534201 (-0.502350) | 0.499424 / 0.579283 (-0.079859) | 0.588027 / 0.434364 (0.153664) | 0.581746 / 0.540337 (0.041408) | 0.814183 / 1.386936 (-0.572753) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33ee536876a667403ee44574bd685073261c4903 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006477 / 0.011353 (-0.004876) | 0.003878 / 0.011008 (-0.007130) | 0.084085 / 0.038508 (0.045577) | 0.071297 / 0.023109 (0.048188) | 0.309176 / 0.275898 (0.033278) | 0.342830 / 0.323480 (0.019350) | 0.005189 / 0.007986 (-0.002796) | 0.003263 / 0.004328 (-0.001065) | 0.063920 / 0.004250 (0.059670) | 0.052233 / 0.037052 (0.015180) | 0.324830 / 0.258489 (0.066341) | 0.357956 / 0.293841 (0.064115) | 0.030459 / 0.128546 (-0.098087) | 0.008350 / 0.075646 (-0.067297) | 0.287330 / 0.419271 (-0.131942) | 0.051005 / 0.043533 (0.007473) | 0.309227 / 0.255139 (0.054088) | 0.346184 / 0.283200 (0.062984) | 0.023961 / 0.141683 (-0.117722) | 1.463983 / 1.452155 (0.011829) | 1.573036 / 1.492716 (0.080319) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205653 / 0.018006 (0.187647) | 0.457336 / 0.000490 (0.456846) | 0.005347 / 0.000200 
(0.005147) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028080 / 0.037411 (-0.009332) | 0.081755 / 0.014526 (0.067229) | 0.095716 / 0.176557 (-0.080841) | 0.151340 / 0.737135 (-0.585795) | 0.097174 / 0.296338 (-0.199164) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390725 / 0.215209 (0.175516) | 3.899114 / 2.077655 (1.821459) | 1.895352 / 1.504120 (0.391232) | 1.716072 / 1.541195 (0.174877) | 1.784952 / 1.468490 (0.316462) | 0.477247 / 4.584777 (-4.107530) | 3.606641 / 3.745712 (-0.139071) | 3.203337 / 5.269862 (-2.066524) | 2.017003 / 4.565676 (-2.548674) | 0.056182 / 0.424275 (-0.368094) | 0.007508 / 0.007607 (-0.000099) | 0.461965 / 0.226044 (0.235921) | 4.605926 / 2.268929 (2.336997) | 2.466695 / 55.444624 (-52.977929) | 2.136376 / 6.876477 (-4.740100) | 2.277334 / 2.142072 (0.135261) | 0.576119 / 4.805227 (-4.229109) | 0.131497 / 6.500664 (-6.369167) | 0.060068 / 0.075469 (-0.015401) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262681 / 1.841788 (-0.579107) | 19.411572 / 8.074308 (11.337264) | 14.383421 / 10.191392 (4.192029) | 0.166115 / 0.680424 (-0.514308) | 0.018366 / 0.534201 (-0.515835) | 0.393903 / 0.579283 (-0.185380) | 0.408788 / 0.434364 (-0.025576) | 0.461796 / 0.540337 (-0.078541) | 0.628460 / 1.386936 (-0.758476) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006501 / 0.011353 (-0.004852) | 0.003915 / 0.011008 (-0.007093) | 0.065245 / 0.038508 (0.026737) | 0.073146 / 0.023109 (0.050037) | 0.363537 / 0.275898 (0.087639) | 0.391571 / 0.323480 (0.068092) | 0.005181 / 0.007986 (-0.002805) | 0.003272 / 0.004328 (-0.001056) | 0.065060 / 0.004250 (0.060810) | 0.054302 / 0.037052 (0.017249) | 0.361571 / 0.258489 (0.103082) | 0.400221 / 0.293841 (0.106380) | 0.030762 / 0.128546 (-0.097784) | 0.008449 / 0.075646 (-0.067197) | 0.071148 / 0.419271 (-0.348123) | 0.048111 / 0.043533 (0.004578) | 0.360327 / 0.255139 (0.105188) | 0.379073 / 0.283200 (0.095874) | 0.024367 / 0.141683 (-0.117316) | 1.451080 / 1.452155 (-0.001074) | 1.510818 / 1.492716 (0.018102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267078 / 0.018006 (0.249072) | 0.454074 / 0.000490 (0.453584) | 0.015055 / 0.000200 (0.014855) | 0.000129 / 0.000054 (0.000075) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030916 / 0.037411 (-0.006496) | 0.089212 / 0.014526 (0.074686) | 0.100005 / 0.176557 (-0.076552) | 0.155100 / 0.737135 (-0.582035) | 0.101759 / 0.296338 (-0.194580) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412826 / 0.215209 (0.197616) | 4.122520 / 2.077655 (2.044865) | 2.107870 / 1.504120 (0.603750) | 1.911936 / 1.541195 (0.370741) | 1.984936 / 1.468490 (0.516446) | 0.483835 / 4.584777 (-4.100942) | 3.641860 / 3.745712 (-0.103852) | 3.220540 / 5.269862 (-2.049322) | 2.015521 / 4.565676 (-2.550155) | 0.056913 / 0.424275 (-0.367362) | 0.007285 / 0.007607 (-0.000322) | 0.484886 / 0.226044 (0.258842) | 4.854734 / 2.268929 (2.585805) | 2.593550 / 55.444624 (-52.851074) | 2.233904 / 6.876477 (-4.642572) | 2.438858 / 2.142072 (0.296785) | 0.580880 / 4.805227 (-4.224347) | 0.133891 / 6.500664 (-6.366773) | 0.061678 / 0.075469 (-0.013791) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336843 / 1.841788 (-0.504944) | 19.731571 / 8.074308 (11.657263) | 14.290228 / 10.191392 (4.098836) | 0.167635 / 0.680424 (-0.512789) | 0.018767 / 0.534201 (-0.515434) | 0.394953 / 0.579283 (-0.184330) | 0.407711 / 0.434364 (-0.026653) | 0.472371 / 0.540337 (-0.067966) | 0.655278 / 1.386936 (-0.731658) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#528b15f775a4724836bdefdc38d932c06484d702 \"CML watermark\")\n" ]
2023-08-17T21:58:24
2023-08-17T22:44:59
2023-08-17T22:36:04
CONTRIBUTOR
null
Fix #6149
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6160/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6160/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6160", "html_url": "https://github.com/huggingface/datasets/pull/6160", "diff_url": "https://github.com/huggingface/datasets/pull/6160.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6160.patch", "merged_at": "2023-08-17T22:36:04" }
true
https://api.github.com/repos/huggingface/datasets/issues/6159
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6159/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6159/comments
https://api.github.com/repos/huggingface/datasets/issues/6159/events
https://github.com/huggingface/datasets/issues/6159
1,855,691,512
I_kwDODunzps5um5r4
6,159
Add `BoundingBox` feature
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2023-08-17T20:49:51
2023-08-17T20:49:51
null
CONTRIBUTOR
null
... to make working with object detection datasets easier. Currently, `Sequence(int_or_float, length=4)` can be used to represent this feature optimally (in the storage backend), so I only see this feature being useful if we make it work with the viewer. Also, bounding boxes usually come in 4 different formats (explained [here](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/)), so we need to decide which one to support (or maybe all of them). cc @NielsRogge @severo
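For context, a hedged sketch of how such a column can already be declared with `Sequence(..., length=4)`. The `pascal_voc` corner layout and the `objects`/`bbox` field names are assumptions for illustration only, since the issue leaves the choice of format open:

```python
from datasets import Features, Image, Sequence, Value

# Each example can carry several boxes; every box is 4 floats in the
# (assumed) pascal_voc corner format [x_min, y_min, x_max, y_max].
features = Features(
    {
        "image": Image(),
        "objects": Sequence(
            {
                "bbox": Sequence(Value("float32"), length=4),
                "label": Value("string"),
            }
        ),
    }
)
print(features)
```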
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6159/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6159/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6158
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6158/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6158/comments
https://api.github.com/repos/huggingface/datasets/issues/6158/events
https://github.com/huggingface/datasets/pull/6158
1,855,374,220
PR_kwDODunzps5YLZBf
6,158
[docs] Complete `to_iterable_dataset`
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008219 / 0.011353 (-0.003134) | 0.005201 / 0.011008 (-0.005807) | 0.108542 / 0.038508 (0.070034) | 0.076427 / 0.023109 (0.053318) | 0.441257 / 0.275898 (0.165358) | 0.436477 / 0.323480 (0.112997) | 0.006915 / 0.007986 (-0.001071) | 0.004215 / 0.004328 (-0.000113) | 0.072517 / 0.004250 (0.068267) | 0.066906 / 0.037052 (0.029853) | 0.431153 / 0.258489 (0.172664) | 0.413359 / 0.293841 (0.119518) | 0.051112 / 0.128546 (-0.077435) | 0.014664 / 0.075646 (-0.060982) | 0.358385 / 0.419271 (-0.060887) | 0.069682 / 0.043533 (0.026149) | 0.434810 / 0.255139 (0.179671) | 0.484372 / 0.283200 (0.201172) | 0.035731 / 0.141683 (-0.105952) | 1.827648 / 1.452155 (0.375494) | 2.039761 / 1.492716 (0.547045) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.277386 / 0.018006 (0.259379) | 0.599771 / 0.000490 (0.599282) | 0.005033 / 0.000200 (0.004833) | 0.000091 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030652 / 0.037411 (-0.006759) | 0.103435 / 0.014526 (0.088909) | 0.120072 / 0.176557 (-0.056485) | 0.177886 / 0.737135 (-0.559249) | 0.140636 / 0.296338 (-0.155702) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.603729 / 0.215209 (0.388520) | 6.144213 / 2.077655 (4.066558) | 
2.785080 / 1.504120 (1.280960) | 2.368958 / 1.541195 (0.827763) | 2.409806 / 1.468490 (0.941316) | 0.836531 / 4.584777 (-3.748246) | 5.154035 / 3.745712 (1.408323) | 4.620224 / 5.269862 (-0.649638) | 2.879441 / 4.565676 (-1.686235) | 0.087322 / 0.424275 (-0.336953) | 0.007698 / 0.007607 (0.000090) | 0.678443 / 0.226044 (0.452399) | 7.431798 / 2.268929 (5.162869) | 3.589905 / 55.444624 (-51.854719) | 2.679349 / 6.876477 (-4.197127) | 3.100569 / 2.142072 (0.958496) | 1.021501 / 4.805227 (-3.783726) | 0.203150 / 6.500664 (-6.297514) | 0.073545 / 0.075469 (-0.001924) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.669981 / 1.841788 (-0.171806) | 23.379274 / 8.074308 (15.304966) | 19.811451 / 10.191392 (9.620059) | 0.197705 / 0.680424 (-0.482719) | 0.030112 / 0.534201 (-0.504089) | 0.501720 / 0.579283 (-0.077563) | 0.582413 / 0.434364 (0.148049) | 0.513261 / 0.540337 (-0.027076) | 0.729710 / 1.386936 (-0.657226) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011493 / 0.011353 (0.000140) | 0.005478 / 0.011008 (-0.005530) | 0.070955 / 0.038508 (0.032447) | 0.073877 / 0.023109 (0.050768) | 0.425765 / 0.275898 (0.149867) | 0.440869 / 0.323480 (0.117389) | 0.008322 / 0.007986 (0.000337) | 0.004004 / 0.004328 (-0.000325) | 0.071968 / 0.004250 (0.067718) | 0.060576 / 0.037052 (0.023524) | 0.448731 / 0.258489 (0.190242) | 0.517038 / 0.293841 (0.223197) | 0.051542 / 0.128546 (-0.077005) | 0.013219 / 0.075646 (-0.062427) | 0.077933 / 0.419271 (-0.341339) | 0.072879 / 0.043533 (0.029346) | 0.436553 / 0.255139 (0.181414) | 0.510050 / 0.283200 (0.226850) | 0.037136 / 0.141683 (-0.104547) | 1.535706 / 1.452155 (0.083552) | 1.611909 / 1.492716 (0.119192) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.335648 / 0.018006 (0.317642) | 0.612787 / 0.000490 (0.612297) | 0.021934 / 0.000200 (0.021734) | 0.000113 / 0.000054 (0.000059) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028164 / 0.037411 (-0.009247) | 0.097686 / 0.014526 (0.083160) | 0.093343 / 0.176557 (-0.083214) | 0.156871 / 0.737135 (-0.580264) | 0.102694 / 0.296338 (-0.193645) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.609348 / 0.215209 (0.394139) | 5.835798 / 2.077655 (3.758144) | 2.792700 / 1.504120 (1.288580) | 2.539597 / 1.541195 (0.998403) | 2.413003 / 1.468490 (0.944513) | 0.882404 / 4.584777 (-3.702372) | 5.170564 / 3.745712 (1.424852) | 4.621663 / 5.269862 (-0.648199) | 3.029683 / 4.565676 (-1.535993) | 0.097061 / 0.424275 (-0.327214) | 0.008940 / 0.007607 (0.001333) | 0.723052 / 0.226044 (0.497007) | 7.484947 / 2.268929 (5.216018) | 3.833049 / 55.444624 (-51.611575) | 3.019606 / 6.876477 (-3.856871) | 3.270503 / 2.142072 (1.128430) | 0.977870 / 4.805227 (-3.827357) | 0.210090 / 6.500664 (-6.290574) | 0.094723 / 0.075469 (0.019254) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.585278 / 1.841788 (-0.256510) | 22.769727 / 8.074308 (14.695419) | 19.503640 / 10.191392 (9.312248) | 0.231996 / 0.680424 (-0.448428) | 0.032641 / 0.534201 (-0.501560) | 0.429833 / 0.579283 (-0.149451) | 0.549606 / 0.434364 (0.115242) | 0.527405 / 0.540337 (-0.012933) | 0.713302 / 1.386936 (-0.673634) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#546c7bb5cbeff0f8673cf60c4432ea167283cc42 \"CML watermark\")\n" ]
2023-08-17T17:02:11
2023-08-17T19:24:20
2023-08-17T19:13:15
MEMBER
null
Finishes the `to_iterable_dataset` documentation by adding it to the relevant sections in the tutorial and guide.
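As a quick orientation for readers of the updated docs, a minimal sketch of the API being documented (the toy data here is invented for illustration):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})

# Convert the map-style dataset into an IterableDataset; with more than
# one shard it can be shuffled shard-wise and read by several workers.
iterable_ds = ds.to_iterable_dataset(num_shards=2)
for example in iterable_ds.take(2):
    print(example)
```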
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6158/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6158", "html_url": "https://github.com/huggingface/datasets/pull/6158", "diff_url": "https://github.com/huggingface/datasets/pull/6158.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6158.patch", "merged_at": "2023-08-17T19:13:15" }
true
https://api.github.com/repos/huggingface/datasets/issues/6157
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6157/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6157/comments
https://api.github.com/repos/huggingface/datasets/issues/6157/events
https://github.com/huggingface/datasets/issues/6157
1,855,265,663
I_kwDODunzps5ulRt_
6,157
DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding'
{ "login": "AisingioroHao0", "id": 51043929, "node_id": "MDQ6VXNlcjUxMDQzOTI5", "avatar_url": "https://avatars.githubusercontent.com/u/51043929?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AisingioroHao0", "html_url": "https://github.com/AisingioroHao0", "followers_url": "https://api.github.com/users/AisingioroHao0/followers", "following_url": "https://api.github.com/users/AisingioroHao0/following{/other_user}", "gists_url": "https://api.github.com/users/AisingioroHao0/gists{/gist_id}", "starred_url": "https://api.github.com/users/AisingioroHao0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AisingioroHao0/subscriptions", "organizations_url": "https://api.github.com/users/AisingioroHao0/orgs", "repos_url": "https://api.github.com/users/AisingioroHao0/repos", "events_url": "https://api.github.com/users/AisingioroHao0/events{/privacy}", "received_events_url": "https://api.github.com/users/AisingioroHao0/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Thanks for reporting, but we can only fix this issue if you can provide a reproducer that consistently reproduces it.", "@mariosasko Ok. What exactly does it mean to provide a reproducer", "To provide a code that reproduces the issue :)", "@mariosasko I complete the above code, is it enough?", "@mariosasko That's all the code, I'm using locally stored data", "Does this error occur even if you change the cache directory (the `cache_dir` parameter in `load_dataset`)?", "@mariosasko I didn't add any parameters for catch. Nor did any cache configuration change.", "@mariosasko And I changed the data file, but executing load_dataset is always the previous result. I had to change something in images.py to use the new results. Using 'cleanup_cache_files' is invalid! Help me.", "@mariosasko I added a comprehensive error message. Check that _column_requires_decoding is being passed where it shouldn't be. DatasetInfo.__init__() Whether this parameter is required" ]
2023-08-17T15:48:11
2023-08-27T16:34:41
null
NONE
null
### Describe the bug
Calling `load_dataset` raised "DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding'". On the second run there was no error and the dataset object worked.

```python
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[3], line 1
----> 1 dataset = load_dataset(
      2     "/home/aihao/workspace/DeepLearningContent/datasets/manga",
      3     data_dir="/home/aihao/workspace/DeepLearningContent/datasets/manga",
      4     split="train",
      5 )

File ~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/load.py:2146, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
   2142 # Build dataset for splits
   2143 keep_in_memory = (
   2144     keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
   2145 )
-> 2146 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
   2147 # Rename and cast features to match task schema
   2148 if task is not None:
   2149     # To avoid issuing the same warning twice

File ~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py:1190, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)
   1187 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS)
   1189 # Create a dataset for each of the given splits
-> 1190 datasets = map_nested(
   1191     partial(
   1192         self._build_single_dataset,
...

File ~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/info.py:379, in DatasetInfo.copy(self)
    378 def copy(self) -> "DatasetInfo":
--> 379     return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})

TypeError: DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding'
```

### Steps to reproduce the bug
`/home/aihao/workspace/DeepLearningContent/datasets/images/images.py`:

```python
import csv
import json
import os

import datasets
from PIL import Image


class ImagesConfig(datasets.BuilderConfig):
    def __init__(self, **kwargs):
        super(ImagesConfig, self).__init__(**kwargs)


class Images(datasets.GeneratorBasedBuilder):
    def _split_generators(self, dl_manager: datasets.DownloadManager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"split": datasets.Split.TRAIN},
            )
        ]

    BUILDER_CONFIGS = [
        ImagesConfig(
            name="similar_pairs",
            description="similar pair dataset, each item is a pair of similar images",
        ),
        ImagesConfig(
            name="image_prompt_pairs",
            description="image prompt pairs",
        ),
    ]

    def _info(self):
        if self.config.name == "similar_pairs":
            return datasets.Features(
                {
                    "image1": datasets.features.Image(),
                    "image2": datasets.features.Image(),
                    "similarity": datasets.Value("float32"),
                }
            )
        elif self.config.name == "image_prompt_pairs":
            return datasets.Features(
                {"image": datasets.features.Image(), "prompt": datasets.Value("string")}
            )

    def _generate_examples(self, split):
        data_path = os.path.join(self.config.data_dir, "data")
        if self.config.name == "similar_pairs":
            with open(os.path.join(data_path, "prompts.json"), "r") as f:
                prompts = json.load(f)
            with open(os.path.join(data_path, "similar_pairs.csv"), "r") as f:
                reader = csv.reader(f)
                for row in reader:
                    image1_path, image2_path, similarity = row
                    yield image1_path + ":" + image2_path + ":", {
                        "image1": Image.open(image1_path),
                        "prompt1": prompts[image1_path],
                        "image2": Image.open(image2_path),
                        "prompt2": prompts[image2_path],
                        "similarity": float(similarity),
                    }
```

Code that triggers the error:

```python
from datasets import load_dataset

data_dir = "/home/aihao/workspace/DeepLearningContent/datasets/images"
dataset = load_dataset(data_dir, data_dir=data_dir, name="similar_pairs")
```

### Expected behavior
`load_dataset` should succeed on the first run; currently the first execution raises the error and only subsequent runs work.

### Environment info
- `datasets` version: 2.14.3
- Platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.35
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
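For reference, `_info` in a loading script conventionally returns a `datasets.DatasetInfo` that wraps the `Features`, rather than the bare `Features` object; whether that difference explains the error above is not established here, so the following is only an illustrative sketch (the description string is invented):

```python
import datasets

def _info(self):
    # Conventional shape of `_info`: wrap the schema in a DatasetInfo
    # instead of returning the Features object directly.
    return datasets.DatasetInfo(
        description="similar pair dataset",
        features=datasets.Features(
            {
                "image1": datasets.Image(),
                "image2": datasets.Image(),
                "similarity": datasets.Value("float32"),
            }
        ),
    )
```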
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6157/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6156
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6156/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6156/comments
https://api.github.com/repos/huggingface/datasets/issues/6156/events
https://github.com/huggingface/datasets/issues/6156
1,854,768,618
I_kwDODunzps5ujYXq
6,156
Why not use self._epoch as seed to shuffle in distributed training with IterableDataset
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq ", "`_effective_generator` returns a RNG that takes into account `self._epoch` and the current dataset's base shuffling RNG (which can be set by specifying `seed=` in `.shuffle() for example`).\r\n\r\nTo fix your error you can pass `seed=` to `.shuffle()`. And the shuffling will depend on both this seed and `self._epoch`", "Thanks for the reply" ]
2023-08-17T10:58:20
2023-08-17T14:33:15
2023-08-17T14:33:14
CONTRIBUTOR
null
### Describe the bug
Currently, distributed training with `IterableDataset` requires passing a fixed seed to `shuffle` so that every node uses the same seed and the samples don't overlap.
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1174-L1177
My question is: why not directly use `self._epoch`, which is set by `set_epoch`, as the seed? It is essentially the same across nodes.
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1790-L1801
If `self._epoch` is not used as the shuffling seed, what does this method do to prepare an epoch-seeded generator?
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1206

### Steps to reproduce the bug
As mentioned above.

### Expected behavior
As mentioned above.

### Environment info
Not relevant.
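A hedged sketch of the idea behind an epoch-seeded generator as discussed above: derive the effective RNG from both a user-chosen base seed and the current epoch, so the order changes every epoch yet matches across nodes. The actual derivation inside `datasets` may differ; the helper name below is invented:

```python
import numpy as np

def effective_generator(base_seed: int, epoch: int) -> np.random.Generator:
    # Seeding from the pair (base_seed, epoch) gives a distinct but
    # reproducible stream per epoch, identical on every node.
    return np.random.default_rng([base_seed, epoch])

print(effective_generator(42, 0).permutation(5))
print(effective_generator(42, 1).permutation(5))  # different order at epoch 1
```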
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6156/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6155
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6155/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6155/comments
https://api.github.com/repos/huggingface/datasets/issues/6155/events
https://github.com/huggingface/datasets/pull/6155
1,854,661,682
PR_kwDODunzps5YI8Pc
6,155
Raise FileNotFoundError when passing data_files that don't exist
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009288 / 0.011353 (-0.002065) | 0.005950 / 0.011008 (-0.005058) | 0.122376 / 0.038508 (0.083868) | 0.093177 / 0.023109 (0.070068) | 0.448517 / 0.275898 (0.172619) | 0.474999 / 0.323480 (0.151520) | 0.005133 / 0.007986 (-0.002853) | 0.005123 / 0.004328 (0.000795) | 0.085479 / 0.004250 (0.081229) | 0.065613 / 0.037052 (0.028561) | 0.451179 / 0.258489 (0.192690) | 0.516876 / 0.293841 (0.223036) | 0.047536 / 0.128546 (-0.081010) | 0.013894 / 0.075646 (-0.061752) | 0.382149 / 0.419271 (-0.037122) | 0.067380 / 0.043533 (0.023848) | 0.419282 / 0.255139 (0.164143) | 0.482042 / 0.283200 (0.198842) | 0.041230 / 0.141683 (-0.100452) | 1.818127 / 1.452155 (0.365972) | 1.938123 / 1.492716 (0.445406) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271824 / 0.018006 (0.253817) | 0.604933 / 0.000490 (0.604443) | 0.004953 / 0.000200 (0.004753) | 0.000173 / 0.000054 (0.000119) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036682 / 0.037411 (-0.000729) | 0.095604 / 0.014526 (0.081078) | 0.116862 / 0.176557 (-0.059695) | 0.191335 / 0.737135 (-0.545800) | 0.116620 / 0.296338 (-0.179718) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.620735 / 0.215209 (0.405526) | 6.157119 / 2.077655 (4.079465) | 2.848548 
/ 1.504120 (1.344428) | 2.493731 / 1.541195 (0.952536) | 2.505801 / 1.468490 (1.037311) | 0.837315 / 4.584777 (-3.747462) | 5.360653 / 3.745712 (1.614941) | 4.908863 / 5.269862 (-0.360999) | 3.184672 / 4.565676 (-1.381004) | 0.105687 / 0.424275 (-0.318588) | 0.011350 / 0.007607 (0.003743) | 0.745729 / 0.226044 (0.519684) | 7.431584 / 2.268929 (5.162655) | 3.644670 / 55.444624 (-51.799954) | 2.910159 / 6.876477 (-3.966317) | 3.257137 / 2.142072 (1.115065) | 1.041377 / 4.805227 (-3.763851) | 0.213289 / 6.500664 (-6.287375) | 0.089208 / 0.075469 (0.013739) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.727274 / 1.841788 (-0.114513) | 25.448436 / 8.074308 (17.374128) | 23.016108 / 10.191392 (12.824716) | 0.219454 / 0.680424 (-0.460970) | 0.028531 / 0.534201 (-0.505670) | 0.500231 / 0.579283 (-0.079052) | 0.614631 / 0.434364 (0.180267) | 0.557926 / 0.540337 (0.017588) | 0.786261 / 1.386936 (-0.600675) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008608 / 0.011353 (-0.002745) | 0.006185 / 0.011008 (-0.004823) | 0.089258 / 0.038508 (0.050750) | 0.090109 / 0.023109 (0.067000) | 0.522200 / 0.275898 (0.246302) | 0.559218 / 0.323480 (0.235738) | 0.008983 / 0.007986 (0.000997) | 0.004488 / 0.004328 (0.000159) | 0.083658 / 0.004250 (0.079408) | 0.064962 / 0.037052 (0.027909) | 0.519477 / 0.258489 (0.260988) | 0.573842 / 0.293841 (0.280001) | 0.053984 / 0.128546 (-0.074562) | 0.014665 / 0.075646 (-0.060982) | 0.089438 / 0.419271 (-0.329834) | 0.065756 / 0.043533 (0.022223) | 0.525131 / 0.255139 (0.269992) | 0.568934 / 0.283200 (0.285734) | 0.037308 / 0.141683 (-0.104375) | 1.928790 / 1.452155 (0.476635) | 2.027926 / 1.492716 (0.535209) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.309595 / 0.018006 (0.291588) | 0.615675 / 0.000490 (0.615186) | 0.004869 / 0.000200 (0.004669) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033306 / 0.037411 (-0.004105) | 0.104429 / 0.014526 (0.089904) | 0.116989 / 0.176557 (-0.059568) | 0.183638 / 0.737135 (-0.553497) | 0.132624 / 0.296338 (-0.163714) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.644511 / 0.215209 (0.429302) | 6.425544 / 2.077655 (4.347889) | 3.079071 / 1.504120 (1.574951) | 2.720963 / 1.541195 (1.179769) | 2.835607 / 1.468490 (1.367117) | 0.863561 / 4.584777 (-3.721216) | 5.333462 / 3.745712 (1.587750) | 4.843183 / 5.269862 (-0.426678) | 3.106858 / 4.565676 (-1.458819) | 0.106790 / 0.424275 (-0.317485) | 0.008829 / 0.007607 (0.001222) | 0.759003 / 0.226044 (0.532958) | 7.771247 / 2.268929 (5.502318) | 3.896844 / 55.444624 (-51.547780) | 3.246671 / 6.876477 (-3.629806) | 3.486167 / 2.142072 (1.344094) | 1.071290 / 4.805227 (-3.733937) | 0.217972 / 6.500664 (-6.282692) | 0.089848 / 0.075469 (0.014379) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.816048 / 1.841788 (-0.025739) | 25.625084 / 8.074308 (17.550776) | 24.490882 / 10.191392 (14.299490) | 0.242356 / 0.680424 (-0.438067) | 0.027886 / 0.534201 (-0.506315) | 0.496997 / 0.579283 (-0.082286) | 0.613815 / 0.434364 (0.179451) | 0.607132 / 0.540337 (0.066795) | 0.833051 / 1.386936 (-0.553885) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0adfa9ada14c38fce5973b5e3f196a2c46dc9170 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011580 / 0.011353 (0.000227) | 0.004199 / 0.011008 (-0.006809) | 0.084055 / 0.038508 (0.045547) | 0.096824 / 0.023109 (0.073715) | 0.308755 / 0.275898 (0.032857) | 0.341717 / 0.323480 (0.018237) | 0.006018 / 0.007986 (-0.001968) | 0.003597 / 0.004328 (-0.000731) | 0.064953 / 0.004250 (0.060702) | 0.059577 / 0.037052 (0.022525) | 0.316292 / 0.258489 (0.057803) | 0.358991 / 0.293841 (0.065150) | 0.033925 / 0.128546 (-0.094621) | 0.008828 / 0.075646 (-0.066818) | 0.288673 / 0.419271 (-0.130599) | 0.055494 / 0.043533 (0.011961) | 0.311181 / 0.255139 (0.056042) | 0.345220 / 0.283200 (0.062021) | 0.024033 / 0.141683 (-0.117649) | 1.504709 / 1.452155 (0.052554) | 1.587920 / 1.492716 (0.095204) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.301099 / 0.018006 (0.283093) | 0.594497 / 0.000490 (0.594007) | 0.006244 / 0.000200 (0.006044) | 0.000228 / 0.000054 (0.000174) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027663 / 0.037411 (-0.009748) | 0.081767 / 0.014526 (0.067241) | 0.097342 / 0.176557 (-0.079215) | 0.153200 / 0.737135 (-0.583935) | 0.097474 / 0.296338 (-0.198864) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405929 / 0.215209 (0.190719) | 4.045398 / 2.077655 (1.967743) | 2.044669 / 1.504120 (0.540549) | 1.872889 / 1.541195 (0.331694) | 1.911901 / 1.468490 (0.443411) | 0.480939 / 4.584777 (-4.103838) | 3.652833 / 3.745712 (-0.092879) | 3.281659 / 5.269862 (-1.988202) | 2.038023 / 4.565676 (-2.527654) | 0.056501 / 0.424275 (-0.367775) | 0.007571 / 0.007607 (-0.000036) | 0.481053 / 0.226044 (0.255009) | 4.802048 / 2.268929 (2.533119) | 2.560479 / 55.444624 (-52.884145) | 2.164852 / 6.876477 (-4.711625) | 2.374595 / 2.142072 (0.232523) | 0.576309 / 4.805227 (-4.228918) | 0.134831 / 6.500664 (-6.365833) | 0.060649 / 0.075469 (-0.014820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254210 / 1.841788 (-0.587578) | 19.826143 / 8.074308 (11.751835) | 14.446391 / 10.191392 (4.254999) | 0.165707 / 0.680424 (-0.514717) | 0.018221 / 0.534201 (-0.515980) | 0.395996 / 0.579283 (-0.183287) | 0.424567 / 0.434364 (-0.009796) | 0.459836 / 0.540337 
(-0.080501) | 0.635969 / 1.386936 (-0.750967) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006696 / 0.011353 (-0.004657) | 0.004131 / 0.011008 (-0.006877) | 0.064587 / 0.038508 (0.026079) | 0.079189 / 0.023109 (0.056080) | 0.359977 / 0.275898 (0.084079) | 0.389331 / 0.323480 (0.065851) | 0.005502 / 0.007986 (-0.002483) | 0.003492 / 0.004328 (-0.000837) | 0.064967 / 0.004250 (0.060716) | 0.055953 / 0.037052 (0.018901) | 0.363997 / 0.258489 (0.105508) | 0.398405 / 0.293841 (0.104564) | 0.031292 / 0.128546 (-0.097254) | 0.008693 / 0.075646 (-0.066953) | 0.070451 / 0.419271 (-0.348820) | 0.048965 / 0.043533 (0.005432) | 0.358288 / 0.255139 (0.103149) | 0.379136 / 0.283200 (0.095936) | 0.024364 / 0.141683 (-0.117319) | 1.478998 / 1.452155 (0.026843) | 1.547282 / 1.492716 (0.054566) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.328188 / 0.018006 (0.310182) | 0.525968 / 0.000490 (0.525478) | 0.003782 / 0.000200 (0.003582) | 0.000089 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032528 / 0.037411 (-0.004883) | 0.087685 / 0.014526 (0.073159) | 0.100684 / 0.176557 (-0.075872) | 0.155944 / 0.737135 (-0.581192) | 0.101949 / 0.296338 (-0.194389) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418591 / 0.215209 (0.203382) | 4.199235 / 2.077655 (2.121580) | 2.183880 / 1.504120 (0.679760) | 2.024502 / 1.541195 (0.483307) | 2.017435 
/ 1.468490 (0.548945) | 0.488881 / 4.584777 (-4.095896) | 3.635002 / 3.745712 (-0.110710) | 3.359992 / 5.269862 (-1.909870) | 2.089686 / 4.565676 (-2.475991) | 0.057813 / 0.424275 (-0.366462) | 0.007349 / 0.007607 (-0.000258) | 0.490719 / 0.226044 (0.264674) | 4.859950 / 2.268929 (2.591022) | 2.616711 / 55.444624 (-52.827914) | 2.238671 / 6.876477 (-4.637806) | 2.442262 / 2.142072 (0.300190) | 0.598368 / 4.805227 (-4.206859) | 0.135281 / 6.500664 (-6.365383) | 0.063072 / 0.075469 (-0.012397) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356396 / 1.841788 (-0.485392) | 20.075123 / 8.074308 (12.000815) | 14.191317 / 10.191392 (3.999925) | 0.167691 / 0.680424 (-0.512732) | 0.018290 / 0.534201 (-0.515911) | 0.392881 / 0.579283 (-0.186402) | 0.413665 / 0.434364 (-0.020699) | 0.480766 / 0.540337 (-0.059571) | 0.655625 / 1.386936 (-0.731311) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a46ca9cc138754629be261522301e725c7d14152 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007834 / 0.011353 (-0.003519) | 0.004744 / 0.011008 (-0.006264) | 0.102061 / 0.038508 (0.063553) | 0.089246 / 0.023109 (0.066137) | 0.399936 / 0.275898 (0.124038) | 0.436974 / 0.323480 (0.113494) | 0.004791 / 0.007986 (-0.003195) | 0.005976 / 0.004328 (0.001647) | 0.079336 / 0.004250 (0.075086) | 0.065947 / 0.037052 (0.028894) | 0.403747 / 0.258489 (0.145258) | 0.460249 / 0.293841 (0.166408) | 0.038065 / 0.128546 (-0.090482) | 0.010179 / 0.075646 (-0.065467) | 0.403620 / 0.419271 (-0.015652) | 0.066439 / 0.043533 (0.022906) | 0.412123 / 0.255139 (0.156984) | 0.452121 / 0.283200 (0.168921) | 0.033533 / 0.141683 (-0.108150) | 1.858650 / 1.452155 (0.406495) | 1.916248 / 1.492716 (0.423532) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237180 / 0.018006 (0.219174) | 0.526844 / 0.000490 (0.526354) | 0.004220 / 
0.000200 (0.004020) | 0.000123 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033860 / 0.037411 (-0.003552) | 0.105054 / 0.014526 (0.090528) | 0.116494 / 0.176557 (-0.060063) | 0.185990 / 0.737135 (-0.551145) | 0.119072 / 0.296338 (-0.177266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488549 / 0.215209 (0.273340) | 4.884950 / 2.077655 (2.807295) | 2.521819 / 1.504120 (1.017699) | 2.329382 / 1.541195 (0.788188) | 2.413710 / 1.468490 (0.945220) | 0.568325 / 4.584777 (-4.016452) | 4.243505 / 3.745712 (0.497793) | 3.785983 / 5.269862 (-1.483879) | 2.387146 / 4.565676 (-2.178531) | 0.067176 / 0.424275 (-0.357099) | 0.009145 / 0.007607 (0.001538) | 0.571482 / 0.226044 (0.345437) | 5.688822 / 2.268929 (3.419894) | 3.067346 / 55.444624 (-52.377278) | 2.688723 / 6.876477 (-4.187754) | 2.883785 / 2.142072 (0.741713) | 0.679326 / 4.805227 (-4.125901) | 0.156018 / 6.500664 (-6.344646) | 0.070947 / 0.075469 (-0.004522) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.556611 / 1.841788 (-0.285177) | 23.545074 / 8.074308 (15.470766) | 17.125108 / 10.191392 (6.933716) | 0.180180 / 0.680424 (-0.500244) | 0.021420 / 0.534201 (-0.512781) | 0.466888 / 0.579283 (-0.112395) | 0.485746 / 0.434364 (0.051383) | 0.606181 / 0.540337 (0.065843) | 0.776691 / 1.386936 (-0.610245) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007820 / 0.011353 (-0.003533) | 0.004531 / 0.011008 (-0.006478) | 0.076142 / 0.038508 (0.037634) | 0.086367 / 0.023109 (0.063258) | 0.456150 / 0.275898 (0.180252) | 0.499712 / 0.323480 (0.176232) | 0.006545 / 0.007986 (-0.001441) | 0.003760 / 0.004328 (-0.000568) | 0.076400 / 0.004250 (0.072150) | 0.069689 / 0.037052 (0.032637) | 0.459732 / 0.258489 (0.201243) | 0.504217 / 0.293841 (0.210376) | 0.037838 / 0.128546 (-0.090709) | 0.009804 / 0.075646 (-0.065843) | 0.084654 / 0.419271 (-0.334617) | 0.060301 / 0.043533 (0.016768) | 0.452984 / 0.255139 (0.197845) | 0.479956 / 0.283200 (0.196757) | 0.029674 / 0.141683 (-0.112009) | 1.814059 / 1.452155 (0.361904) | 1.878886 / 1.492716 (0.386170) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.326174 / 0.018006 (0.308168) | 0.539722 / 0.000490 (0.539232) | 0.025637 / 0.000200 (0.025437) | 0.000209 / 0.000054 (0.000154) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036328 / 0.037411 (-0.001084) | 0.106369 / 0.014526 (0.091843) | 0.118598 / 0.176557 (-0.057958) | 0.182760 / 0.737135 (-0.554376) | 0.120013 / 0.296338 (-0.176326) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.507328 / 0.215209 (0.292119) | 5.092689 / 2.077655 (3.015034) | 2.962334 / 1.504120 (1.458214) | 2.507699 / 1.541195 (0.966504) | 2.612245 / 1.468490 (1.143755) | 0.568625 / 4.584777 (-4.016152) | 4.296484 / 3.745712 (0.550772) | 4.037788 / 5.269862 (-1.232073) | 2.579826 / 4.565676 (-1.985850) | 0.068558 / 0.424275 (-0.355717) | 0.008916 / 0.007607 (0.001309) | 0.601054 / 0.226044 (0.375010) | 6.016061 / 2.268929 (3.747133) | 3.311880 / 55.444624 (-52.132744) | 2.912926 / 6.876477 (-3.963551) | 3.101465 / 2.142072 (0.959393) | 0.686848 / 4.805227 (-4.118380) | 0.160243 / 6.500664 (-6.340421) | 0.074084 / 0.075469 (-0.001385) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.754343 / 1.841788 (-0.087444) | 24.215302 / 8.074308 (16.140994) | 17.211007 / 10.191392 (7.019615) | 0.188370 / 0.680424 (-0.492054) | 0.028157 / 0.534201 (-0.506044) | 0.490879 / 0.579283 (-0.088404) | 0.501508 / 0.434364 (0.067144) | 0.599719 / 0.540337 (0.059381) | 0.852438 / 1.386936 (-0.534498) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d84cd1d6f51ca75ec5f5c3db3f372f093758cac9 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009736 / 0.011353 (-0.001617) | 0.004761 / 0.011008 (-0.006247) | 0.100069 / 0.038508 (0.061561) | 0.077944 / 0.023109 (0.054835) | 0.419944 / 0.275898 (0.144046) | 0.459803 / 0.323480 (0.136323) | 0.006296 / 0.007986 (-0.001689) | 0.005375 / 0.004328 (0.001047) | 0.089457 / 0.004250 (0.085207) | 0.060585 / 0.037052 (0.023532) | 0.437988 / 0.258489 (0.179499) | 0.482676 / 0.293841 (0.188835) | 0.049126 / 0.128546 (-0.079420) | 0.015043 / 0.075646 (-0.060603) | 0.342500 / 0.419271 (-0.076771) | 0.067088 / 0.043533 (0.023555) | 0.418364 / 0.255139 (0.163225) | 0.458259 / 0.283200 (0.175059) | 0.034091 / 0.141683 (-0.107592) | 1.721589 / 1.452155 (0.269434) | 1.823142 / 1.492716 (0.330426) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212110 / 0.018006 (0.194103) | 0.530957 / 0.000490 (0.530467) | 0.003581 / 0.000200 (0.003382) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030202 / 0.037411 (-0.007210) | 0.100552 / 0.014526 (0.086026) | 0.108150 / 0.176557 (-0.068407) | 0.173203 / 0.737135 (-0.563932) | 0.108624 / 0.296338 (-0.187715) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.577340 / 0.215209 
(0.362131) | 5.794197 / 2.077655 (3.716543) | 2.396285 / 1.504120 (0.892165) | 2.151972 / 1.541195 (0.610777) | 2.109485 / 1.468490 (0.640995) | 0.873906 / 4.584777 (-3.710871) | 5.083302 / 3.745712 (1.337589) | 4.600756 / 5.269862 (-0.669105) | 2.891731 / 4.565676 (-1.673945) | 0.096293 / 0.424275 (-0.327982) | 0.008651 / 0.007607 (0.001044) | 0.719095 / 0.226044 (0.493051) | 7.193225 / 2.268929 (4.924297) | 3.220145 / 55.444624 (-52.224479) | 2.496715 / 6.876477 (-4.379762) | 2.672972 / 2.142072 (0.530900) | 1.031656 / 4.805227 (-3.773571) | 0.207854 / 6.500664 (-6.292810) | 0.074507 / 0.075469 (-0.000962) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.552821 / 1.841788 (-0.288967) | 22.573015 / 8.074308 (14.498707) | 21.074321 / 10.191392 (10.882929) | 0.231911 / 0.680424 (-0.448513) | 0.027761 / 0.534201 (-0.506440) | 0.474644 / 0.579283 (-0.104639) | 0.563780 / 0.434364 (0.129416) | 0.527593 / 0.540337 (-0.012745) | 0.732299 / 1.386936 (-0.654637) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008675 / 0.011353 (-0.002678) | 0.005268 / 0.011008 (-0.005741) | 0.079078 / 0.038508 (0.040570) | 0.073505 / 0.023109 (0.050395) | 0.453982 / 0.275898 (0.178083) | 0.487839 / 0.323480 (0.164359) | 0.005950 / 0.007986 (-0.002035) | 0.003848 / 0.004328 (-0.000481) | 0.076004 / 0.004250 (0.071754) | 0.058410 / 0.037052 (0.021358) | 0.460099 / 0.258489 (0.201610) | 0.514860 / 0.293841 (0.221019) | 0.048843 / 0.128546 (-0.079703) | 0.014275 / 0.075646 (-0.061371) | 0.090243 / 0.419271 (-0.329029) | 0.060092 / 0.043533 (0.016559) | 0.455669 / 0.255139 (0.200530) | 0.484738 / 0.283200 (0.201538) | 0.033012 / 0.141683 (-0.108671) | 1.738854 / 1.452155 (0.286699) | 1.852552 / 1.492716 (0.359835) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245453 / 0.018006 (0.227447) | 0.519929 / 0.000490 (0.519439) | 0.007262 / 0.000200 (0.007062) | 0.000108 / 0.000054 
(0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031446 / 0.037411 (-0.005965) | 0.094236 / 0.014526 (0.079710) | 0.114457 / 0.176557 (-0.062100) | 0.167448 / 0.737135 (-0.569687) | 0.108791 / 0.296338 (-0.187548) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.603331 / 0.215209 (0.388122) | 6.051556 / 2.077655 (3.973902) | 2.797110 / 1.504120 (1.292990) | 2.500517 / 1.541195 (0.959322) | 2.531421 / 1.468490 (1.062931) | 0.852075 / 4.584777 (-3.732702) | 5.034140 / 3.745712 (1.288427) | 4.576573 / 5.269862 (-0.693289) | 2.973541 / 4.565676 (-1.592135) | 0.101303 / 0.424275 (-0.322972) | 0.008467 / 0.007607 (0.000860) | 0.707143 / 0.226044 (0.481098) | 7.262803 / 2.268929 (4.993874) | 3.548841 / 55.444624 (-51.895783) | 2.895975 / 6.876477 (-3.980502) | 3.063521 / 2.142072 (0.921449) | 1.014961 / 4.805227 (-3.790266) | 0.208527 / 6.500664 (-6.292137) | 0.074939 / 0.075469 (-0.000530) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.670708 / 1.841788 (-0.171080) | 22.685227 / 8.074308 (14.610919) | 20.393017 / 10.191392 (10.201625) | 0.239303 / 0.680424 (-0.441121) | 0.027742 / 0.534201 (-0.506459) | 0.467230 / 0.579283 (-0.112053) | 0.564169 / 0.434364 (0.129805) | 0.554859 / 0.540337 (0.014522) | 0.767471 / 1.386936 (-0.619465) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#72a57356a46ded67f4d7a02741141a96061246a8 \"CML watermark\")\n" ]
2023-08-17T09:49:48
2023-08-18T13:45:58
2023-08-18T13:35:13
MEMBER
null
E.g., when running `load_dataset("parquet", data_files="doesnt_exist.parquet")`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6155/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6155", "html_url": "https://github.com/huggingface/datasets/pull/6155", "diff_url": "https://github.com/huggingface/datasets/pull/6155.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6155.patch", "merged_at": "2023-08-18T13:35:13" }
true
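As a usage note for the PR above (6155): the sketch below shows the behavior it adds, assuming a `datasets` version that includes the merged change; the file name is the placeholder from the PR description.

```python
from datasets import load_dataset

try:
    # With this PR merged, passing data_files that don't exist fails fast
    # with FileNotFoundError instead of a less obvious error later on.
    load_dataset("parquet", data_files="doesnt_exist.parquet")
except FileNotFoundError as err:
    print(f"Missing data files: {err}")
```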
https://api.github.com/repos/huggingface/datasets/issues/6154
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6154/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6154/comments
https://api.github.com/repos/huggingface/datasets/issues/6154/events
https://github.com/huggingface/datasets/pull/6154
1,854,595,943
PR_kwDODunzps5YItlH
6,154
Use yaml instead of get data patterns when possible
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006829 / 0.011353 (-0.004524) | 0.004535 / 0.011008 (-0.006473) | 0.085255 / 0.038508 (0.046747) | 0.080861 / 0.023109 (0.057752) | 0.366023 / 0.275898 (0.090125) | 0.403095 / 0.323480 (0.079615) | 0.005615 / 0.007986 (-0.002370) | 0.003830 / 0.004328 (-0.000498) | 0.064502 / 0.004250 (0.060251) | 0.053916 / 0.037052 (0.016863) | 0.366010 / 0.258489 (0.107521) | 0.414565 / 0.293841 (0.120724) | 0.031500 / 0.128546 (-0.097046) | 0.009252 / 0.075646 (-0.066394) | 0.289584 / 0.419271 (-0.129688) | 0.052984 / 0.043533 (0.009451) | 0.352626 / 0.255139 (0.097487) | 0.390964 / 0.283200 (0.107764) | 0.025118 / 0.141683 (-0.116565) | 1.462316 / 1.452155 (0.010161) | 1.565682 / 1.492716 (0.072966) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.294432 / 0.018006 (0.276426) | 0.618366 / 0.000490 (0.617876) | 0.003270 / 0.000200 (0.003071) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031194 / 0.037411 (-0.006217) | 0.088892 / 0.014526 (0.074366) | 0.102580 / 0.176557 (-0.073977) | 0.159449 / 0.737135 (-0.577686) | 0.104434 / 0.296338 (-0.191905) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.385690 / 0.215209 (0.170481) | 3.832782 / 2.077655 (1.755128) | 
1.862521 / 1.504120 (0.358401) | 1.685674 / 1.541195 (0.144479) | 1.724984 / 1.468490 (0.256494) | 0.483700 / 4.584777 (-4.101077) | 3.664154 / 3.745712 (-0.081558) | 3.323023 / 5.269862 (-1.946839) | 2.055958 / 4.565676 (-2.509718) | 0.056990 / 0.424275 (-0.367285) | 0.007674 / 0.007607 (0.000067) | 0.460642 / 0.226044 (0.234598) | 4.609964 / 2.268929 (2.341036) | 2.434868 / 55.444624 (-53.009756) | 2.003347 / 6.876477 (-4.873130) | 2.209520 / 2.142072 (0.067448) | 0.629363 / 4.805227 (-4.175864) | 0.135434 / 6.500664 (-6.365230) | 0.060498 / 0.075469 (-0.014971) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253917 / 1.841788 (-0.587870) | 19.988953 / 8.074308 (11.914645) | 14.353739 / 10.191392 (4.162347) | 0.165987 / 0.680424 (-0.514437) | 0.018299 / 0.534201 (-0.515902) | 0.395532 / 0.579283 (-0.183751) | 0.418708 / 0.434364 (-0.015656) | 0.460865 / 0.540337 (-0.079472) | 0.633925 / 1.386936 (-0.753011) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006631 / 0.011353 (-0.004722) | 0.004109 / 0.011008 (-0.006899) | 0.065003 / 0.038508 (0.026495) | 0.080407 / 0.023109 (0.057297) | 0.362966 / 0.275898 (0.087068) | 0.389727 / 0.323480 (0.066247) | 0.005588 / 0.007986 (-0.002397) | 0.003517 / 0.004328 (-0.000812) | 0.065821 / 0.004250 (0.061570) | 0.057614 / 0.037052 (0.020561) | 0.367422 / 0.258489 (0.108932) | 0.400706 / 0.293841 (0.106865) | 0.031560 / 0.128546 (-0.096986) | 0.008659 / 0.075646 (-0.066987) | 0.070756 / 0.419271 (-0.348516) | 0.049821 / 0.043533 (0.006288) | 0.360836 / 0.255139 (0.105697) | 0.383981 / 0.283200 (0.100781) | 0.023719 / 0.141683 (-0.117963) | 1.485197 / 1.452155 (0.033043) | 1.544899 / 1.492716 (0.052182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.336480 / 0.018006 (0.318474) | 0.532839 / 0.000490 (0.532349) | 0.003767 / 0.000200 (0.003567) | 0.000087 / 0.000054 (0.000032) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034132 / 0.037411 (-0.003280) | 0.090131 / 0.014526 (0.075605) | 0.104086 / 0.176557 (-0.072471) | 0.158385 / 0.737135 (-0.578751) | 0.106417 / 0.296338 (-0.189922) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416462 / 0.215209 (0.201253) | 4.160409 / 2.077655 (2.082755) | 2.195355 / 1.504120 (0.691235) | 2.051234 / 1.541195 (0.510040) | 2.012116 / 1.468490 (0.543626) | 0.477414 / 4.584777 (-4.107363) | 3.590326 / 3.745712 (-0.155386) | 3.318490 / 5.269862 (-1.951371) | 2.064124 / 4.565676 (-2.501553) | 0.057040 / 0.424275 (-0.367235) | 0.007283 / 0.007607 (-0.000324) | 0.480490 / 0.226044 (0.254445) | 4.804013 / 2.268929 (2.535084) | 2.625940 / 55.444624 (-52.818685) | 2.231537 / 6.876477 (-4.644939) | 2.441649 / 2.142072 (0.299576) | 0.573207 / 4.805227 (-4.232020) | 0.131685 / 6.500664 (-6.368979) | 0.060112 / 0.075469 (-0.015357) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.358587 / 1.841788 (-0.483200) | 20.457562 / 8.074308 (12.383254) | 14.236304 / 10.191392 (4.044912) | 0.152860 / 0.680424 (-0.527563) | 0.018466 / 0.534201 (-0.515735) | 0.401391 / 0.579283 (-0.177893) | 0.410252 / 0.434364 (-0.024111) | 0.484335 / 0.540337 (-0.056002) | 0.663818 / 1.386936 (-0.723118) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#acac88873abcb585892dc361eb9f6a70a1fd9a59 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007725 / 0.011353 (-0.003628) | 0.004448 / 0.011008 (-0.006560) | 0.098689 / 0.038508 (0.060180) | 0.082919 / 0.023109 (0.059809) | 0.380707 / 0.275898 (0.104809) | 0.452977 / 0.323480 (0.129497) | 0.004430 / 0.007986 (-0.003555) | 0.003712 / 0.004328 (-0.000616) | 0.076675 / 0.004250 (0.072425) | 0.062281 / 0.037052 (0.025228) | 0.403370 / 0.258489 (0.144881) | 0.464557 / 0.293841 (0.170716) | 0.035646 / 0.128546 (-0.092900) | 0.009776 / 0.075646 (-0.065870) | 0.341955 / 0.419271 (-0.077316) | 0.059515 / 0.043533 (0.015983) | 0.388421 / 0.255139 (0.133282) | 0.439496 / 0.283200 (0.156296) | 0.029090 / 0.141683 (-0.112593) | 1.727473 / 1.452155 (0.275319) | 1.810448 / 1.492716 (0.317732) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221215 / 0.018006 (0.203208) | 0.486660 / 0.000490 (0.486171) | 0.005467 / 0.000200 (0.005267) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032491 / 0.037411 (-0.004920) | 0.094446 / 0.014526 (0.079920) | 0.110339 / 0.176557 (-0.066217) | 0.175004 / 0.737135 (-0.562131) | 0.109209 / 0.296338 (-0.187129) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453966 / 0.215209 (0.238757) | 4.515842 / 2.077655 (2.438187) | 2.240512 / 1.504120 (0.736392) | 2.059911 / 1.541195 (0.518717) | 2.150635 / 1.468490 (0.682145) | 0.564509 / 4.584777 (-4.020268) | 4.055208 / 3.745712 (0.309496) | 3.614084 / 5.269862 (-1.655778) | 2.295760 / 4.565676 (-2.269917) | 0.066507 / 0.424275 (-0.357768) | 0.008909 / 0.007607 (0.001302) | 0.542604 / 0.226044 (0.316560) | 5.412162 / 2.268929 (3.143233) | 2.758757 / 55.444624 (-52.685867) | 2.430693 / 6.876477 (-4.445784) | 2.669866 / 2.142072 (0.527793) | 0.681756 / 4.805227 (-4.123471) | 0.156524 / 6.500664 (-6.344140) | 0.069499 / 0.075469 (-0.005970) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.571591 / 1.841788 (-0.270197) | 22.543437 / 8.074308 (14.469129) | 16.068426 / 10.191392 (5.877034) | 0.169860 / 0.680424 (-0.510564) | 0.021216 / 0.534201 (-0.512985) | 0.468745 / 0.579283 (-0.110538) | 0.475924 / 0.434364 (0.041560) | 0.535574 / 0.540337 
(-0.004763) | 0.733823 / 1.386936 (-0.653113) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008038 / 0.011353 (-0.003315) | 0.004565 / 0.011008 (-0.006443) | 0.076892 / 0.038508 (0.038384) | 0.089559 / 0.023109 (0.066450) | 0.456752 / 0.275898 (0.180854) | 0.497282 / 0.323480 (0.173802) | 0.005991 / 0.007986 (-0.001995) | 0.003784 / 0.004328 (-0.000545) | 0.076339 / 0.004250 (0.072089) | 0.066050 / 0.037052 (0.028998) | 0.462708 / 0.258489 (0.204219) | 0.503711 / 0.293841 (0.209870) | 0.037098 / 0.128546 (-0.091448) | 0.009869 / 0.075646 (-0.065777) | 0.083678 / 0.419271 (-0.335594) | 0.058166 / 0.043533 (0.014633) | 0.461839 / 0.255139 (0.206700) | 0.481546 / 0.283200 (0.198347) | 0.027755 / 0.141683 (-0.113928) | 1.738490 / 1.452155 (0.286335) | 1.832276 / 1.492716 (0.339560) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.329935 / 0.018006 (0.311929) | 0.497438 / 0.000490 (0.496949) | 0.034644 / 0.000200 (0.034444) | 0.000199 / 0.000054 (0.000145) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035427 / 0.037411 (-0.001984) | 0.105689 / 0.014526 (0.091163) | 0.117706 / 0.176557 (-0.058850) | 0.177862 / 0.737135 (-0.559273) | 0.116791 / 0.296338 (-0.179547) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.484851 / 0.215209 (0.269642) | 4.804346 / 2.077655 (2.726691) | 2.494801 / 1.504120 (0.990681) | 2.320185 / 1.541195 (0.778990) | 2.374090 
/ 1.468490 (0.905600) | 0.567397 / 4.584777 (-4.017380) | 4.087402 / 3.745712 (0.341690) | 3.794245 / 5.269862 (-1.475616) | 2.378481 / 4.565676 (-2.187195) | 0.068228 / 0.424275 (-0.356047) | 0.008740 / 0.007607 (0.001133) | 0.574876 / 0.226044 (0.348832) | 5.742644 / 2.268929 (3.473716) | 3.047661 / 55.444624 (-52.396963) | 2.729742 / 6.876477 (-4.146735) | 2.852510 / 2.142072 (0.710438) | 0.679450 / 4.805227 (-4.125777) | 0.156162 / 6.500664 (-6.344502) | 0.074051 / 0.075469 (-0.001418) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.576182 / 1.841788 (-0.265605) | 23.298147 / 8.074308 (15.223839) | 16.344621 / 10.191392 (6.153229) | 0.167571 / 0.680424 (-0.512852) | 0.021423 / 0.534201 (-0.512778) | 0.464511 / 0.579283 (-0.114772) | 0.453257 / 0.434364 (0.018893) | 0.563439 / 0.540337 (0.023102) | 0.764759 / 1.386936 (-0.622177) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e8dc4b32b0d91bdb0971f8203ee37e6588c7770e \"CML watermark\")\n", "This should also fix https://github.com/huggingface/datasets/issues/6140, so please link it with this PR before merging.", "Done !", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006719 / 0.011353 (-0.004634) | 0.004299 / 0.011008 (-0.006709) | 0.085296 / 0.038508 (0.046788) | 0.085144 / 0.023109 (0.062035) | 0.361703 / 0.275898 (0.085805) | 0.397721 / 0.323480 (0.074241) | 0.005920 / 0.007986 (-0.002065) | 0.003853 / 0.004328 (-0.000476) | 0.065633 / 0.004250 (0.061383) | 0.057000 / 0.037052 (0.019947) | 0.379981 / 0.258489 (0.121492) | 0.419041 / 0.293841 (0.125200) | 0.031225 / 0.128546 (-0.097322) | 0.008868 / 0.075646 (-0.066779) | 0.288808 / 0.419271 (-0.130463) | 0.052391 / 0.043533 (0.008859) | 0.362349 / 0.255139 (0.107210) | 0.399858 / 0.283200 (0.116658) | 0.025843 / 0.141683 (-0.115840) | 1.498988 / 1.452155 (0.046834) | 1.547290 / 1.492716 (0.054574) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.278091 / 0.018006 (0.260085) | 0.621794 / 0.000490 (0.621305) | 0.003770 / 0.000200 (0.003570) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029128 / 0.037411 (-0.008283) | 0.082061 / 0.014526 (0.067536) | 0.101758 / 0.176557 (-0.074799) | 0.155724 / 0.737135 (-0.581411) | 0.102173 / 0.296338 (-0.194165) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.387145 / 0.215209 (0.171935) | 3.868262 / 2.077655 (1.790607) | 1.886440 / 1.504120 (0.382320) | 1.723305 / 1.541195 (0.182111) | 1.805411 / 1.468490 (0.336921) | 0.485024 / 4.584777 (-4.099753) | 3.637859 / 3.745712 (-0.107853) | 3.319593 / 5.269862 (-1.950269) | 2.087860 / 4.565676 (-2.477817) | 0.056992 / 0.424275 (-0.367283) | 0.007623 / 0.007607 (0.000016) | 0.468182 / 0.226044 (0.242138) | 4.681112 / 2.268929 (2.412183) | 2.407010 / 55.444624 (-53.037614) | 2.026604 / 6.876477 (-4.849872) | 2.298158 / 2.142072 (0.156086) | 0.581839 / 4.805227 (-4.223388) | 0.132101 / 6.500664 (-6.368563) | 0.060472 / 0.075469 (-0.014997) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.236422 / 1.841788 (-0.605365) | 20.505168 / 8.074308 (12.430860) | 14.356081 / 10.191392 (4.164689) | 0.148808 / 0.680424 (-0.531616) | 0.018433 / 0.534201 (-0.515768) | 0.391323 / 0.579283 (-0.187960) | 0.413142 / 0.434364 (-0.021222) | 0.453484 / 0.540337 (-0.086853) | 0.620771 / 1.386936 (-0.766165) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007030 / 0.011353 (-0.004323) | 0.004430 / 0.011008 (-0.006578) | 0.065578 / 0.038508 (0.027070) | 0.090751 / 0.023109 (0.067642) | 0.389121 / 0.275898 (0.113223) | 0.424657 / 0.323480 (0.101177) | 0.006575 / 0.007986 (-0.001410) | 0.003855 / 0.004328 (-0.000473) | 0.066175 / 0.004250 (0.061925) | 0.063255 / 0.037052 (0.026202) | 0.397161 / 0.258489 (0.138672) | 0.435291 / 0.293841 (0.141450) | 0.031622 / 0.128546 (-0.096925) | 0.008900 / 0.075646 (-0.066747) | 0.071694 / 0.419271 (-0.347577) | 0.049161 / 0.043533 (0.005628) | 0.386214 / 0.255139 (0.131075) | 0.404571 / 0.283200 (0.121372) | 0.024821 / 0.141683 (-0.116862) | 1.489514 / 1.452155 (0.037359) | 1.576139 / 1.492716 (0.083423) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289884 / 0.018006 (0.271878) | 0.629342 / 0.000490 (0.628852) | 0.004799 / 0.000200 (0.004599) | 0.000160 / 0.000054 (0.000106) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032081 / 0.037411 (-0.005331) | 0.088152 / 0.014526 (0.073626) | 0.107289 / 0.176557 (-0.069267) | 0.164598 / 0.737135 (-0.572537) | 0.108395 / 0.296338 (-0.187944) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426723 / 0.215209 (0.211514) | 4.267719 / 2.077655 (2.190064) | 2.289657 / 1.504120 (0.785537) | 2.117435 / 1.541195 (0.576240) | 2.187292 / 1.468490 (0.718802) | 0.478387 / 4.584777 (-4.106390) | 3.625096 / 3.745712 (-0.120616) | 3.408036 / 5.269862 (-1.861826) | 2.124117 / 4.565676 (-2.441559) | 0.056537 / 0.424275 (-0.367738) | 0.007489 / 0.007607 (-0.000118) | 0.502434 / 0.226044 (0.276389) | 5.025357 / 2.268929 (2.756428) | 2.740554 / 55.444624 (-52.704070) | 2.418841 / 6.876477 (-4.457635) | 2.730764 / 2.142072 (0.588691) | 0.600013 / 4.805227 (-4.205214) | 0.133039 / 6.500664 (-6.367625) | 0.061466 / 0.075469 (-0.014003) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330211 / 1.841788 (-0.511577) | 21.092100 / 8.074308 (13.017792) | 14.463054 / 10.191392 (4.271662) | 0.154149 / 0.680424 (-0.526274) | 0.018891 / 0.534201 (-0.515310) | 0.393078 / 0.579283 (-0.186205) | 0.415279 
/ 0.434364 (-0.019085) | 0.479469 / 0.540337 (-0.060868) | 0.659953 / 1.386936 (-0.726983) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5ca2ba050340829b4dd44791afc15db0d82a3276 \"CML watermark\")\n" ]
2023-08-17T09:17:05
2023-08-17T20:46:25
2023-08-17T20:37:19
MEMBER
null
This would make data files resolution faster: there is no need to list all the data files to infer which dataset builder to use. Fixes https://github.com/huggingface/datasets/issues/6140
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6154/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6154", "html_url": "https://github.com/huggingface/datasets/pull/6154", "diff_url": "https://github.com/huggingface/datasets/pull/6154.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6154.patch", "merged_at": "2023-08-17T20:37:19" }
true
https://api.github.com/repos/huggingface/datasets/issues/6152
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6152/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6152/comments
https://api.github.com/repos/huggingface/datasets/issues/6152/events
https://github.com/huggingface/datasets/issues/6152
1,852,494,646
I_kwDODunzps5uatM2
6,152
FolderBase Dataset automatically resolves under current directory when data_dir is not specified
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
null
[]
null
[ "@lhoestq ", "Makes sense, I guess this can be fixed in the load_dataset_builder method.\r\nIt concerns every packaged builder I think (see values in `_PACKAGED_DATASETS_MODULES`)", "I think the behavior is related to these lines, which short circuited the error handling.\r\nhttps://github.com/huggingface/datasets/blob/664a1cb72ea1e6ef7c47e671e2686ca4a35e8d63/src/datasets/load.py#L946-L952\r\n\r\nSo should data_dir be checked here or still delegating to actual `DatasetModule`? In that case, how to properly set `data_files` here.", "This is location in PackagedDatasetModuleFactory.get_module seems the be the right place to check if at least data_dir or data_files are passed" ]
2023-08-16T04:38:09
2023-08-17T13:45:18
null
CONTRIBUTOR
null
### Describe the bug FolderBase datasets automatically resolve under the current directory when data_dir is not specified. For example: ``` load_dataset("audiofolder") ``` takes a long time to resolve and collect data_files from the current directory. Instead, it should reach this line for error handling https://github.com/huggingface/datasets/blob/cb8c5de5145c7e7eee65391cb7f4d92f0d565d62/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L58-L59 ### Steps to reproduce the bug ``` load_dataset("audiofolder") ``` ### Expected behavior An error should be reported ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.17 - Python version: 3.8.15 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6152/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6152/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6151
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6151/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6151/comments
https://api.github.com/repos/huggingface/datasets/issues/6151/events
https://github.com/huggingface/datasets/issues/6151
1,851,497,818
I_kwDODunzps5uW51a
6,151
Faster sorting for single key items
{ "login": "jackapbutler", "id": 47942453, "node_id": "MDQ6VXNlcjQ3OTQyNDUz", "avatar_url": "https://avatars.githubusercontent.com/u/47942453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jackapbutler", "html_url": "https://github.com/jackapbutler", "followers_url": "https://api.github.com/users/jackapbutler/followers", "following_url": "https://api.github.com/users/jackapbutler/following{/other_user}", "gists_url": "https://api.github.com/users/jackapbutler/gists{/gist_id}", "starred_url": "https://api.github.com/users/jackapbutler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jackapbutler/subscriptions", "organizations_url": "https://api.github.com/users/jackapbutler/orgs", "repos_url": "https://api.github.com/users/jackapbutler/repos", "events_url": "https://api.github.com/users/jackapbutler/events{/privacy}", "received_events_url": "https://api.github.com/users/jackapbutler/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "`Dataset.sort` essentially does the same thing except it uses `pyarrow.compute.sort_indices` which doesn't involve copying the data into python objects (saving memory)\r\n\r\n```python\r\nsort_keys = [(col, \"ascending\") for col in column_names]\r\nindices = pc.sort_indices(self.data, sort_keys=sort_keys)\r\nreturn self.select(indices)\r\n```", "Ok interesting, I'll continue debugging to see what is going wrong on my end." ]
2023-08-15T14:02:31
2023-08-21T14:38:26
2023-08-21T14:38:25
NONE
null
### Feature request A faster way to sort a dataset which contains a large number of rows. ### Motivation The current sorting implementation took significantly longer than expected when I tried to sort a dataset by timestamps. **Code snippet:** ```python ds = datasets.load_dataset( "json", **{"data_files": {"train": "path-to-jsonlines"}, "split": "train"}, num_proc=os.cpu_count(), keep_in_memory=True) sorted_ds = ds.sort("pubDate", keep_in_memory=True) ``` However, once I switched to a different method which 1. unpacked the rows to a list of tuples 2. sorted the tuples by key 3. ran `.select` with the sorted list of indices it was significantly faster (orders of magnitude, especially with millions of rows). ### Your contribution I'd be happy to implement a crude single-key sorting algorithm so that other users can benefit from this trick. Broadly, this would take a `Dataset` and perform: ```python # key_name is the sorting key class Dataset: ... def _sort(self, key_name: str) -> "Dataset": index_keys = [(i, x) for i, x in enumerate(self[key_name])] sorted_rows = sorted(index_keys, key=lambda x: x[1]) sorted_indices = [x[0] for x in sorted_rows] return self.select(sorted_indices) ```
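For comparison, a self-contained sketch of the Arrow-native path that `Dataset.sort` takes (per the comment above), which avoids materializing rows as Python objects; the toy table is a stand-in for the dataset's backing table:

```python
import pyarrow as pa
import pyarrow.compute as pc

# Toy table standing in for the dataset's Arrow backing table.
table = pa.table({"pubDate": [3, 1, 2], "text": ["c", "a", "b"]})

# Sort indices are computed inside Arrow; only the take reorders rows.
indices = pc.sort_indices(table, sort_keys=[("pubDate", "ascending")])
print(table.take(indices).to_pydict())
# {'pubDate': [1, 2, 3], 'text': ['a', 'b', 'c']}
```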
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6151/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6150
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6150/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6150/comments
https://api.github.com/repos/huggingface/datasets/issues/6150/events
https://github.com/huggingface/datasets/issues/6150
1,850,740,456
I_kwDODunzps5uUA7o
6,150
Allow dataset implement .take
{ "login": "brando90", "id": 1855278, "node_id": "MDQ6VXNlcjE4NTUyNzg=", "avatar_url": "https://avatars.githubusercontent.com/u/1855278?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brando90", "html_url": "https://github.com/brando90", "followers_url": "https://api.github.com/users/brando90/followers", "following_url": "https://api.github.com/users/brando90/following{/other_user}", "gists_url": "https://api.github.com/users/brando90/gists{/gist_id}", "starred_url": "https://api.github.com/users/brando90/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brando90/subscriptions", "organizations_url": "https://api.github.com/users/brando90/orgs", "repos_url": "https://api.github.com/users/brando90/repos", "events_url": "https://api.github.com/users/brando90/events{/privacy}", "received_events_url": "https://api.github.com/users/brando90/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "```\r\n dataset = IterableDataset(dataset) if type(dataset) != IterableDataset else dataset # to force dataset.take(batch_size) to work in non-streaming mode\r\n ```\r\n", "hf discuss: https://discuss.huggingface.co/t/how-does-one-make-dataset-take-512-work-with-streaming-false-with-hugging-face-data-set/50770", "so: https://stackoverflow.com/questions/76902824/how-does-one-make-dataset-take512-work-with-streaming-false-with-hugging-fac", "Feel free to work on this. In addition, `IterableDataset` supports `skip`, so we should also add this method to `Dataset`." ]
2023-08-15T00:17:51
2023-08-17T13:49:37
null
NONE
null
### Feature request I want to do: ``` dataset.take(512) ``` but it only works with streaming = True. ### Motivation A uniform interface across dataset types. It is surprising that the above only works with streaming = True. ### Your contribution It should be trivial to port `IterableDataset.take` to operate on the local data (when streaming = False).
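A minimal sketch of what a non-streaming `take` could look like, built on the existing `Dataset.select`; this is a hypothetical helper, not the actual `datasets` API:

```python
from datasets import Dataset

def take(dataset: Dataset, n: int) -> Dataset:
    # Non-streaming analogue of IterableDataset.take: keep the first n rows.
    return dataset.select(range(min(n, len(dataset))))

ds = Dataset.from_dict({"x": list(range(1000))})
print(len(take(ds, 512)))  # 512
```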
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6150/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6149
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6149/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6149/comments
https://api.github.com/repos/huggingface/datasets/issues/6149/events
https://github.com/huggingface/datasets/issues/6149
1,850,700,624
I_kwDODunzps5uT3NQ
6,149
Dataset.from_parquet cannot load subset of columns
{ "login": "dwyatte", "id": 2512762, "node_id": "MDQ6VXNlcjI1MTI3NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwyatte", "html_url": "https://github.com/dwyatte", "followers_url": "https://api.github.com/users/dwyatte/followers", "following_url": "https://api.github.com/users/dwyatte/following{/other_user}", "gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}", "starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions", "organizations_url": "https://api.github.com/users/dwyatte/orgs", "repos_url": "https://api.github.com/users/dwyatte/repos", "events_url": "https://api.github.com/users/dwyatte/events{/privacy}", "received_events_url": "https://api.github.com/users/dwyatte/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Looks like this regression was introduced in `datasets==2.13.0` (`2.12.0` could load a subset of columns)\r\n\r\nThis does not appear to be fixed by https://github.com/huggingface/datasets/pull/6045 (bug still exists on `main`)" ]
2023-08-14T23:28:22
2023-08-17T22:36:05
2023-08-17T22:36:05
CONTRIBUTOR
null
### Describe the bug When using `Dataset.from_parquet(path_or_paths, columns=[...])` and a subset of columns, loading fails with a variant of the following ``` ValueError: Couldn't cast a: int64 -- schema metadata -- pandas: '{"index_columns": [], "column_indexes": [], "columns": [{"name":' + 273 to {'a': Value(dtype='int64', id=None), 'b': Value(dtype='int64', id=None)} because column names don't match The above exception was the direct cause of the following exception: ``` Looks to be triggered by https://github.com/huggingface/datasets/blob/c02a44715c036b5261686669727394b1308a3a4b/src/datasets/table.py#L2285-L2286 ### Steps to reproduce the bug ``` import pandas as pd from datasets import Dataset pd.DataFrame([{"a": 1, "b": 2}]).to_parquet("test.pq") Dataset.from_parquet("test.pq", columns=["a"]) ``` ### Expected behavior A subset of columns should be loaded without error ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-5.10.0-23-cloud-amd64-x86_64-with-glibc2.2.5 - Python version: 3.8.16 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
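One possible workaround while this regression stands (a sketch, not an official fix): read the column subset with pyarrow directly and wrap the resulting table in a `Dataset`:

```python
import pandas as pd
import pyarrow.parquet as pq
from datasets import Dataset

pd.DataFrame([{"a": 1, "b": 2}]).to_parquet("test.pq")

# Reading the subset with pyarrow sidesteps the failing schema cast.
table = pq.read_table("test.pq", columns=["a"])
ds = Dataset(table)
print(ds.column_names)  # ['a']
```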
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6149/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6149/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6148
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6148/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6148/comments
https://api.github.com/repos/huggingface/datasets/issues/6148/events
https://github.com/huggingface/datasets/pull/6148
1,849,524,683
PR_kwDODunzps5X3oqv
6,148
Ignore parallel warning in map_nested
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006818 / 0.011353 (-0.004534) | 0.004166 / 0.011008 (-0.006842) | 0.086756 / 0.038508 (0.048248) | 0.084444 / 0.023109 (0.061335) | 0.319249 / 0.275898 (0.043351) | 0.358689 / 0.323480 (0.035209) | 0.004344 / 0.007986 (-0.003641) | 0.003564 / 0.004328 (-0.000765) | 0.065021 / 0.004250 (0.060771) | 0.055991 / 0.037052 (0.018939) | 0.319573 / 0.258489 (0.061084) | 0.373239 / 0.293841 (0.079398) | 0.031431 / 0.128546 (-0.097115) | 0.008671 / 0.075646 (-0.066975) | 0.288484 / 0.419271 (-0.130788) | 0.053501 / 0.043533 (0.009968) | 0.316934 / 0.255139 (0.061795) | 0.354233 / 0.283200 (0.071034) | 0.028088 / 0.141683 (-0.113595) | 1.510905 / 1.452155 (0.058750) | 1.568614 / 1.492716 (0.075898) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292343 / 0.018006 (0.274337) | 0.592309 / 0.000490 (0.591819) | 0.003850 / 0.000200 (0.003650) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033510 / 0.037411 (-0.003901) | 0.089546 / 0.014526 (0.075020) | 0.104909 / 0.176557 (-0.071648) | 0.162219 / 0.737135 (-0.574916) | 0.104137 / 0.296338 (-0.192202) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407993 / 0.215209 (0.192784) | 4.063423 / 2.077655 (1.985768) | 
2.050237 / 1.504120 (0.546117) | 1.888939 / 1.541195 (0.347744) | 2.015195 / 1.468490 (0.546704) | 0.492617 / 4.584777 (-4.092160) | 3.595871 / 3.745712 (-0.149841) | 3.320467 / 5.269862 (-1.949395) | 2.099987 / 4.565676 (-2.465690) | 0.058513 / 0.424275 (-0.365762) | 0.007709 / 0.007607 (0.000102) | 0.479277 / 0.226044 (0.253233) | 4.790712 / 2.268929 (2.521783) | 2.517292 / 55.444624 (-52.927332) | 2.167461 / 6.876477 (-4.709016) | 2.432011 / 2.142072 (0.289939) | 0.600537 / 4.805227 (-4.204690) | 0.133538 / 6.500664 (-6.367126) | 0.059621 / 0.075469 (-0.015848) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.280375 / 1.841788 (-0.561413) | 20.777971 / 8.074308 (12.703663) | 14.869539 / 10.191392 (4.678147) | 0.159372 / 0.680424 (-0.521052) | 0.018096 / 0.534201 (-0.516105) | 0.393945 / 0.579283 (-0.185338) | 0.409598 / 0.434364 (-0.024766) | 0.459202 / 0.540337 (-0.081136) | 0.632298 / 1.386936 (-0.754638) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006694 / 0.011353 (-0.004659) | 0.004299 / 0.011008 (-0.006709) | 0.064880 / 0.038508 (0.026372) | 0.083233 / 0.023109 (0.060124) | 0.366488 / 0.275898 (0.090590) | 0.405049 / 0.323480 (0.081569) | 0.005602 / 0.007986 (-0.002384) | 0.003623 / 0.004328 (-0.000705) | 0.064410 / 0.004250 (0.060160) | 0.057962 / 0.037052 (0.020910) | 0.365318 / 0.258489 (0.106829) | 0.403151 / 0.293841 (0.109310) | 0.031285 / 0.128546 (-0.097261) | 0.008867 / 0.075646 (-0.066780) | 0.071137 / 0.419271 (-0.348135) | 0.048398 / 0.043533 (0.004865) | 0.360187 / 0.255139 (0.105048) | 0.383872 / 0.283200 (0.100673) | 0.023232 / 0.141683 (-0.118451) | 1.526980 / 1.452155 (0.074826) | 1.587265 / 1.492716 (0.094549) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.362603 / 0.018006 (0.344596) | 0.557034 / 0.000490 (0.556544) | 0.025303 / 0.000200 (0.025103) | 0.000562 / 0.000054 (0.000508) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030636 / 0.037411 (-0.006775) | 0.088085 / 0.014526 (0.073559) | 0.103238 / 0.176557 (-0.073318) | 0.155208 / 0.737135 (-0.581928) | 0.106661 / 0.296338 (-0.189678) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413660 / 0.215209 (0.198451) | 4.122717 / 2.077655 (2.045063) | 2.097656 / 1.504120 (0.593536) | 1.931995 / 1.541195 (0.390801) | 2.071497 / 1.468490 (0.603007) | 0.490257 / 4.584777 (-4.094520) | 3.588076 / 3.745712 (-0.157636) | 3.423087 / 5.269862 (-1.846774) | 2.147974 / 4.565676 (-2.417703) | 0.058783 / 0.424275 (-0.365492) | 0.007456 / 0.007607 (-0.000151) | 0.492350 / 0.226044 (0.266305) | 4.935935 / 2.268929 (2.667006) | 2.604217 / 55.444624 (-52.840407) | 2.333723 / 6.876477 (-4.542754) | 2.585293 / 2.142072 (0.443220) | 0.608800 / 4.805227 (-4.196427) | 0.135806 / 6.500664 (-6.364858) | 0.062716 / 0.075469 (-0.012753) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.347359 / 1.841788 (-0.494429) | 21.420505 / 8.074308 (13.346197) | 14.325914 / 10.191392 (4.134522) | 0.159617 / 0.680424 (-0.520806) | 0.018769 / 0.534201 (-0.515432) | 0.399677 / 0.579283 (-0.179606) | 0.402992 / 0.434364 (-0.031372) | 0.484629 / 0.540337 (-0.055709) | 0.656007 / 1.386936 (-0.730929) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ac94bb10d5c00ce8fdaf461eb1ff4b8572cfe956 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007291 / 0.011353 (-0.004062) | 0.004501 / 0.011008 (-0.006508) | 0.097529 / 0.038508 (0.059021) | 0.079257 / 0.023109 (0.056147) | 0.356390 / 0.275898 (0.080492) | 0.390065 / 0.323480 (0.066585) | 0.006071 / 0.007986 (-0.001914) | 0.003783 / 0.004328 (-0.000546) | 0.074598 / 0.004250 (0.070348) | 0.059626 / 0.037052 (0.022574) | 0.395344 / 0.258489 (0.136855) | 0.418564 / 0.293841 (0.124723) | 0.041843 / 0.128546 (-0.086704) | 0.009293 / 0.075646 (-0.066354) | 0.332668 / 0.419271 (-0.086604) | 0.065753 / 0.043533 (0.022220) | 0.357285 / 0.255139 (0.102146) | 0.402974 / 0.283200 (0.119775) | 0.028714 / 0.141683 (-0.112968) | 1.733913 / 1.452155 (0.281759) | 1.802574 / 1.492716 (0.309858) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253114 / 0.018006 (0.235108) | 0.606338 / 0.000490 (0.605848) | 0.006871 / 0.000200 (0.006671) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031850 / 0.037411 (-0.005562) | 0.095148 / 0.014526 (0.080622) | 0.111499 / 0.176557 (-0.065057) | 0.174653 / 0.737135 (-0.562483) | 0.109396 / 0.296338 (-0.186943) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440442 / 0.215209 (0.225233) | 4.408792 / 2.077655 (2.331137) | 2.149778 / 1.504120 (0.645658) | 1.922430 / 1.541195 (0.381235) | 2.029281 / 1.468490 (0.560791) | 0.611586 / 4.584777 (-3.973191) | 4.204571 / 3.745712 (0.458859) | 3.638194 / 5.269862 (-1.631668) | 2.336146 / 4.565676 (-2.229531) | 0.065383 / 0.424275 (-0.358892) | 0.008441 / 0.007607 (0.000834) | 0.527357 / 0.226044 (0.301313) | 5.247892 / 2.268929 (2.978963) | 2.654005 / 55.444624 (-52.790620) | 2.256596 / 6.876477 (-4.619881) | 2.432191 / 2.142072 (0.290119) | 0.672759 / 4.805227 (-4.132469) | 0.148494 / 6.500664 (-6.352170) | 0.068248 / 0.075469 (-0.007221) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.544250 / 1.841788 (-0.297538) | 21.882016 / 8.074308 (13.807708) | 16.470182 / 10.191392 (6.278790) | 0.166107 / 0.680424 (-0.514317) | 0.021305 / 0.534201 (-0.512896) | 0.445069 / 0.579283 (-0.134214) | 0.500631 / 0.434364 (0.066267) | 0.525801 / 0.540337 
(-0.014536) | 0.806534 / 1.386936 (-0.580402) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007322 / 0.011353 (-0.004030) | 0.004206 / 0.011008 (-0.006802) | 0.074827 / 0.038508 (0.036319) | 0.084759 / 0.023109 (0.061650) | 0.421204 / 0.275898 (0.145306) | 0.464442 / 0.323480 (0.140962) | 0.006523 / 0.007986 (-0.001463) | 0.003613 / 0.004328 (-0.000716) | 0.073796 / 0.004250 (0.069545) | 0.066609 / 0.037052 (0.029557) | 0.430108 / 0.258489 (0.171619) | 0.463165 / 0.293841 (0.169324) | 0.036015 / 0.128546 (-0.092532) | 0.009696 / 0.075646 (-0.065951) | 0.083326 / 0.419271 (-0.335946) | 0.056804 / 0.043533 (0.013271) | 0.423333 / 0.255139 (0.168194) | 0.450538 / 0.283200 (0.167338) | 0.027067 / 0.141683 (-0.114616) | 1.700563 / 1.452155 (0.248408) | 1.748738 / 1.492716 (0.256021) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.395682 / 0.018006 (0.377675) | 0.540192 / 0.000490 (0.539702) | 0.140049 / 0.000200 (0.139849) | 0.000694 / 0.000054 (0.000639) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036643 / 0.037411 (-0.000769) | 0.104422 / 0.014526 (0.089896) | 0.113072 / 0.176557 (-0.063484) | 0.179561 / 0.737135 (-0.557575) | 0.118620 / 0.296338 (-0.177718) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.476547 / 0.215209 (0.261338) | 4.716009 / 2.077655 (2.638354) | 2.412111 / 1.504120 (0.907991) | 2.246389 / 1.541195 (0.705194) | 2.307058 
/ 1.468490 (0.838568) | 0.552759 / 4.584777 (-4.032018) | 4.172484 / 3.745712 (0.426771) | 3.848419 / 5.269862 (-1.421443) | 2.310338 / 4.565676 (-2.255339) | 0.071757 / 0.424275 (-0.352518) | 0.011206 / 0.007607 (0.003599) | 0.609526 / 0.226044 (0.383482) | 5.583065 / 2.268929 (3.314136) | 3.081227 / 55.444624 (-52.363397) | 2.637782 / 6.876477 (-4.238695) | 2.887561 / 2.142072 (0.745489) | 0.667227 / 4.805227 (-4.138000) | 0.154421 / 6.500664 (-6.346243) | 0.070772 / 0.075469 (-0.004697) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.605500 / 1.841788 (-0.236288) | 22.872717 / 8.074308 (14.798409) | 15.865333 / 10.191392 (5.673941) | 0.170353 / 0.680424 (-0.510071) | 0.021854 / 0.534201 (-0.512347) | 0.461467 / 0.579283 (-0.117816) | 0.477743 / 0.434364 (0.043379) | 0.597234 / 0.540337 (0.056896) | 0.800416 / 1.386936 (-0.586520) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a \"CML watermark\")\n" ]
2023-08-14T10:43:41
2023-08-17T08:54:06
2023-08-17T08:43:58
MEMBER
null
This warning message was shown every time you passed num_proc to `load_dataset` because of `map_nested`: ``` parallel_map is experimental and might be subject to breaking changes in the future ``` This PR removes it for `map_nested`. If someone uses another parallel backend, they are already warned when `parallel_backend` is called anyway.
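For illustration, a sketch of the two situations; the data file name is a placeholder, and the context manager usage assumes the `datasets.parallel.parallel_backend` API mentioned above:

```python
from datasets import load_dataset
from datasets.parallel import parallel_backend

# Previously this emitted "parallel_map is experimental..." on every call,
# only because num_proc routes through map_nested internally.
ds = load_dataset("json", data_files="data.jsonl", num_proc=4)

# Explicitly opting into a non-default backend still warns, which is the
# intended behavior after this PR.
with parallel_backend("spark"):
    ds = load_dataset("json", data_files="data.jsonl", num_proc=4)
```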
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6148/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6148", "html_url": "https://github.com/huggingface/datasets/pull/6148", "diff_url": "https://github.com/huggingface/datasets/pull/6148.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6148.patch", "merged_at": "2023-08-17T08:43:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/6147
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6147/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6147/comments
https://api.github.com/repos/huggingface/datasets/issues/6147/events
https://github.com/huggingface/datasets/issues/6147
1,848,914,830
I_kwDODunzps5uNDOO
6,147
ValueError when running BeamBasedBuilder with GCS path in cache_dir
{ "login": "ktrk115", "id": 13844767, "node_id": "MDQ6VXNlcjEzODQ0NzY3", "avatar_url": "https://avatars.githubusercontent.com/u/13844767?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ktrk115", "html_url": "https://github.com/ktrk115", "followers_url": "https://api.github.com/users/ktrk115/followers", "following_url": "https://api.github.com/users/ktrk115/following{/other_user}", "gists_url": "https://api.github.com/users/ktrk115/gists{/gist_id}", "starred_url": "https://api.github.com/users/ktrk115/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ktrk115/subscriptions", "organizations_url": "https://api.github.com/users/ktrk115/orgs", "repos_url": "https://api.github.com/users/ktrk115/repos", "events_url": "https://api.github.com/users/ktrk115/events{/privacy}", "received_events_url": "https://api.github.com/users/ktrk115/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The cause of the error seems to be that `datasets` adds \"gcs://\" as a schema, while `beam` checks only \"gs://\".\r\n\r\ndatasets: https://github.com/huggingface/datasets/blob/c02a44715c036b5261686669727394b1308a3a4b/src/datasets/builder.py#L822\r\n\r\nbeam: [link](https://github.com/apache/beam/blob/25e1a64641b1c8a3c0a6c75c6e86031b87307f22/sdks/python/apache_beam/io/filesystems.py#L98-L101)\r\n```\r\n systems = [\r\n fs for fs in FileSystem.get_all_subclasses()\r\n if fs.scheme() == path_scheme\r\n ]\r\n```" ]
2023-08-14T03:11:34
2023-08-14T03:19:43
null
NONE
null
### Describe the bug When running the BeamBasedBuilder with a GCS path specified in the cache_dir, the following ValueError occurs: ``` ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path specified: gcs://my-bucket/huggingface_datasets/my_beam_dataset/default/0.0.0/my_beam_dataset-train [while running 'train/Save to parquet/Write/WriteImpl/InitializeWrite'] ``` The same error occurs even after running `pip install apache-beam[gcp]` as instructed. ### Steps to reproduce the bug Put `my_beam_dataset.py`: ```python import datasets class MyBeamDataset(datasets.BeamBasedBuilder): def _info(self): features = datasets.Features({"value": datasets.Value("int64")}) return datasets.DatasetInfo(features=features) def _split_generators(self, dl_manager, pipeline): return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={})] def _build_pcollection(self, pipeline): import apache_beam as beam return pipeline | beam.Create([{"value": i} for i in range(10)]) ``` Run: ```bash datasets-cli run_beam my_beam_dataset.py --cache_dir=gs://my-bucket/huggingface_datasets/ --beam_pipeline_options="runner=DirectRunner" ``` ### Expected behavior The BeamBasedBuilder should run with a GCS cache path without any errors. ### Environment info - `datasets` version: 2.14.4 - Platform: macOS-13.4-arm64-arm-64bit - Python version: 3.9.17 - Huggingface_hub version: 0.16.4 - PyArrow version: 9.0.0 - Pandas version: 2.0.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6147/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6146
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6146/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6146/comments
https://api.github.com/repos/huggingface/datasets/issues/6146/events
https://github.com/huggingface/datasets/issues/6146
1,848,417,366
I_kwDODunzps5uLJxW
6,146
DatasetGenerationError when load glue benchmark datasets from `load_dataset`
{ "login": "yusx-swapp", "id": 78742415, "node_id": "MDQ6VXNlcjc4NzQyNDE1", "avatar_url": "https://avatars.githubusercontent.com/u/78742415?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yusx-swapp", "html_url": "https://github.com/yusx-swapp", "followers_url": "https://api.github.com/users/yusx-swapp/followers", "following_url": "https://api.github.com/users/yusx-swapp/following{/other_user}", "gists_url": "https://api.github.com/users/yusx-swapp/gists{/gist_id}", "starred_url": "https://api.github.com/users/yusx-swapp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yusx-swapp/subscriptions", "organizations_url": "https://api.github.com/users/yusx-swapp/orgs", "repos_url": "https://api.github.com/users/yusx-swapp/repos", "events_url": "https://api.github.com/users/yusx-swapp/events{/privacy}", "received_events_url": "https://api.github.com/users/yusx-swapp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I've tried clear the .cache file, doesn't work.", "This issue happens on AWS sagemaker", "This issue can happen if there is a directory named \"glue\" relative to the Python script with the `load_dataset` call (similar issue to this one: https://github.com/huggingface/datasets/issues/5228). Is this the case?", "> This issue can happen if there is a directory named \"glue\" relative to the Python script with the `load_dataset` call (similar issue to this one: #5228). Is this the case?\r\n\r\nThats correct!\r\nSorry for my late response." ]
2023-08-13T05:17:56
2023-08-26T22:09:09
2023-08-26T22:09:09
NONE
null
### Describe the bug Package version: datasets-2.14.4 When I run the code: ``` from datasets import load_dataset dataset = load_dataset("glue", "ax") ``` I get the following error: --------------------------------------------------------------------------- SchemaInferenceError Traceback (most recent call last) File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1949, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1948 num_shards = shard_id + 1 -> 1949 num_examples, num_bytes = writer.finalize() 1950 writer.close() File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/arrow_writer.py:598, in ArrowWriter.finalize(self, close_stream) 597 self.stream.close() --> 598 raise SchemaInferenceError("Please pass `features` or at least one example when writing data") 599 logger.debug( 600 f"Done writing {self._num_examples} {self.unit} in {self._num_bytes} bytes {self._path if self._path else ''}." 601 ) SchemaInferenceError: Please pass `features` or at least one example when writing data The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) Cell In[5], line 3 1 from datasets import load_dataset ----> 3 dataset = load_dataset("glue", "ax") File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/load.py:2136, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2133 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 2135 # Download and prepare data -> 2136 builder_instance.download_and_prepare( 2137 download_config=download_config, 2138 download_mode=download_mode, 2139 verification_mode=verification_mode, 2140 try_from_hf_gcs=try_from_hf_gcs, 2141 num_proc=num_proc, 2142 storage_options=storage_options, 2143 ) 2145 # Build dataset for splits 2146 keep_in_memory = ( 2147 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 2148 ) File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:954, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 952 if num_proc is not None: 953 prepare_split_kwargs["num_proc"] = num_proc --> 954 self._download_and_prepare( 955 dl_manager=dl_manager, 956 verification_mode=verification_mode, 957 **prepare_split_kwargs, 958 **download_and_prepare_kwargs, 959 ) 960 # Sync info 961 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1049, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1045 split_dict.add(split_generator.split_info) 1047 try: 1048 # Prepare split will record examples associated to the split -> 1049 self._prepare_split(split_generator, **prepare_split_kwargs) 1050 except OSError as e: 1051 raise OSError( 1052 "Cannot find data file. " 1053 + (self.manual_download_instructions or "") 1054 + "\nOriginal error:\n" 1055 + str(e) 1056 ) from None File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1813, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size) 1811 job_id = 0 1812 with pbar: -> 1813 for job_id, done, content in self._prepare_split_single( 1814 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1815 ): 1816 if done: 1817 result = content File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1958, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1956 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1957 e = e.__context__ -> 1958 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1960 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ### Steps to reproduce the bug from datasets import load_dataset dataset = load_dataset("glue", "ax") ### Expected behavior When generating the train split: Generating train split: 0/0 [00:00<?, ? examples/s] It raises the error: DatasetGenerationError: An error occurred while generating the dataset ### Environment info datasets-2.14.4. Python 3.10
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6146/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6146/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6153
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6153/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6153/comments
https://api.github.com/repos/huggingface/datasets/issues/6153/events
https://github.com/huggingface/datasets/issues/6153
1,852,630,074
I_kwDODunzps5ubOQ6
6,153
custom load dataset to hub
{ "login": "andysingal", "id": 20493493, "node_id": "MDQ6VXNlcjIwNDkzNDkz", "avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andysingal", "html_url": "https://github.com/andysingal", "followers_url": "https://api.github.com/users/andysingal/followers", "following_url": "https://api.github.com/users/andysingal/following{/other_user}", "gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}", "starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andysingal/subscriptions", "organizations_url": "https://api.github.com/users/andysingal/orgs", "repos_url": "https://api.github.com/users/andysingal/repos", "events_url": "https://api.github.com/users/andysingal/events{/privacy}", "received_events_url": "https://api.github.com/users/andysingal/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "This is an issue for the [Datasets repo](https://github.com/huggingface/datasets).", "> This is an issue for the [Datasets repo](https://github.com/huggingface/datasets).\r\n\r\nThanks @sgugger , I guess I will wait for them to address the issue . Looking forward to hearing from them ", "You can use `.push_to_hub(\"<username>/<repo>\")` to push a `Dataset` to the Hub." ]
2023-08-13T04:42:22
2023-08-17T14:17:05
null
NONE
null
### System Info Kaggle notebook. I transformed the dataset: ``` dataset = load_dataset("Dahoas/first-instruct-human-assistant-prompt") ``` into formatted_dataset: ``` Dataset({ features: ['message_tree_id', 'message_tree_text'], num_rows: 33143 }) ``` but I would like to know how to upload it to the Hub. ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Shared above. ### Expected behavior Load the dataset to the Hub.
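A minimal sketch of the route suggested in the last comment, with a toy stand-in for the transformed dataset and a placeholder repo id:

```python
from datasets import Dataset

# Hypothetical stand-in for the formatted_dataset described in the issue.
formatted_dataset = Dataset.from_dict(
    {"message_tree_id": ["t1"], "message_tree_text": ["hello"]}
)

# "<username>/<repo>" is a placeholder; a write token must be configured
# beforehand (e.g. via `huggingface-cli login`).
formatted_dataset.push_to_hub("<username>/<repo>")
```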
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6153/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6153/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6145
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6145/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6145/comments
https://api.github.com/repos/huggingface/datasets/issues/6145/events
https://github.com/huggingface/datasets/pull/6145
1,847,811,310
PR_kwDODunzps5Xx5If
6,145
Export to_iterable_dataset to document
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006076 / 0.011353 (-0.005277) | 0.003730 / 0.011008 (-0.007279) | 0.080778 / 0.038508 (0.042270) | 0.062970 / 0.023109 (0.039860) | 0.395864 / 0.275898 (0.119966) | 0.430024 / 0.323480 (0.106544) | 0.004823 / 0.007986 (-0.003162) | 0.002949 / 0.004328 (-0.001379) | 0.062423 / 0.004250 (0.058172) | 0.047343 / 0.037052 (0.010291) | 0.403153 / 0.258489 (0.144664) | 0.443666 / 0.293841 (0.149825) | 0.027798 / 0.128546 (-0.100748) | 0.008056 / 0.075646 (-0.067590) | 0.262260 / 0.419271 (-0.157011) | 0.045958 / 0.043533 (0.002425) | 0.391349 / 0.255139 (0.136210) | 0.421831 / 0.283200 (0.138632) | 0.021837 / 0.141683 (-0.119846) | 1.485509 / 1.452155 (0.033355) | 1.542940 / 1.492716 (0.050224) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196831 / 0.018006 (0.178825) | 0.435774 / 0.000490 (0.435285) | 0.003647 / 0.000200 (0.003447) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023756 / 0.037411 (-0.013655) | 0.075737 / 0.014526 (0.061211) | 0.303703 / 0.176557 (0.127146) | 0.164862 / 0.737135 (-0.572273) | 0.198483 / 0.296338 (-0.097855) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405220 / 0.215209 (0.190011) | 4.065983 / 2.077655 (1.988328) | 2.043001 
/ 1.504120 (0.538881) | 1.853318 / 1.541195 (0.312123) | 1.977452 / 1.468490 (0.508962) | 0.500897 / 4.584777 (-4.083880) | 3.065756 / 3.745712 (-0.679956) | 2.924096 / 5.269862 (-2.345765) | 1.876194 / 4.565676 (-2.689482) | 0.057774 / 0.424275 (-0.366501) | 0.006809 / 0.007607 (-0.000798) | 0.470979 / 0.226044 (0.244934) | 4.719546 / 2.268929 (2.450618) | 2.449651 / 55.444624 (-52.994973) | 2.211817 / 6.876477 (-4.664660) | 2.398760 / 2.142072 (0.256687) | 0.590608 / 4.805227 (-4.214619) | 0.125836 / 6.500664 (-6.374829) | 0.060759 / 0.075469 (-0.014710) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243609 / 1.841788 (-0.598179) | 18.836193 / 8.074308 (10.761885) | 13.835053 / 10.191392 (3.643661) | 0.129708 / 0.680424 (-0.550716) | 0.016708 / 0.534201 (-0.517493) | 0.337219 / 0.579283 (-0.242065) | 0.359045 / 0.434364 (-0.075319) | 0.383329 / 0.540337 (-0.157009) | 0.539629 / 1.386936 (-0.847307) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006073 / 0.011353 (-0.005280) | 0.003713 / 0.011008 (-0.007295) | 0.062642 / 0.038508 (0.024134) | 0.062618 / 0.023109 (0.039508) | 0.362029 / 0.275898 (0.086130) | 0.401924 / 0.323480 (0.078445) | 0.004689 / 0.007986 (-0.003297) | 0.002945 / 0.004328 (-0.001384) | 0.062720 / 0.004250 (0.058470) | 0.048901 / 0.037052 (0.011848) | 0.363780 / 0.258489 (0.105291) | 0.405111 / 0.293841 (0.111270) | 0.027738 / 0.128546 (-0.100808) | 0.008046 / 0.075646 (-0.067600) | 0.067752 / 0.419271 (-0.351519) | 0.041955 / 0.043533 (-0.001577) | 0.361615 / 0.255139 (0.106476) | 0.388762 / 0.283200 (0.105562) | 0.021302 / 0.141683 (-0.120380) | 1.473527 / 1.452155 (0.021372) | 1.529753 / 1.492716 (0.037037) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300446 / 0.018006 (0.282440) | 0.425844 / 0.000490 (0.425354) | 0.054507 / 0.000200 (0.054307) | 0.000282 / 0.000054 (0.000228) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025478 / 0.037411 (-0.011933) | 0.078298 / 0.014526 (0.063772) | 0.087647 / 0.176557 (-0.088909) | 0.138978 / 0.737135 (-0.598157) | 0.088396 / 0.296338 (-0.207942) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421345 / 0.215209 (0.206136) | 4.209188 / 2.077655 (2.131533) | 2.260731 / 1.504120 (0.756611) | 2.072329 / 1.541195 (0.531134) | 2.086778 / 1.468490 (0.618288) | 0.495425 / 4.584777 (-4.089352) | 2.987519 / 3.745712 (-0.758194) | 2.895106 / 5.269862 (-2.374756) | 1.874637 / 4.565676 (-2.691039) | 0.057080 / 0.424275 (-0.367195) | 0.006402 / 0.007607 (-0.001205) | 0.498233 / 0.226044 (0.272188) | 4.974385 / 2.268929 (2.705457) | 2.671755 / 55.444624 (-52.772870) | 2.356120 / 6.876477 (-4.520357) | 2.531374 / 2.142072 (0.389301) | 0.581955 / 4.805227 (-4.223272) | 0.125491 / 6.500664 (-6.375173) | 0.062267 / 0.075469 (-0.013202) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307233 / 1.841788 (-0.534555) | 18.929740 / 8.074308 (10.855431) | 14.029693 / 10.191392 (3.838301) | 0.161992 / 0.680424 (-0.518431) | 0.017127 / 0.534201 (-0.517074) | 0.336644 / 0.579283 (-0.242639) | 0.336550 / 0.434364 (-0.097814) | 0.400554 / 0.540337 (-0.139783) | 0.560725 / 1.386936 (-0.826211) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cb8c5de5145c7e7eee65391cb7f4d92f0d565d62 \"CML watermark\")\n" ]
2023-08-12T07:00:14
2023-08-15T17:04:01
2023-08-15T16:55:24
CONTRIBUTOR
null
Fix the export of a missing method of `Dataset`
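A short sketch of the method whose documentation export this PR fixes, on toy data (not taken from the PR itself):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})

# Lazily iterable view of the map-style dataset; num_shards controls how
# the rows are split for parallel/streaming-style loading.
iterable_ds = ds.to_iterable_dataset(num_shards=2)
print(next(iter(iterable_ds)))  # {'x': 0}
```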
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6145/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6145", "html_url": "https://github.com/huggingface/datasets/pull/6145", "diff_url": "https://github.com/huggingface/datasets/pull/6145.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6145.patch", "merged_at": "2023-08-15T16:55:24" }
true
https://api.github.com/repos/huggingface/datasets/issues/6144
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6144/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6144/comments
https://api.github.com/repos/huggingface/datasets/issues/6144/events
https://github.com/huggingface/datasets/issues/6144
1,847,296,711
I_kwDODunzps5uG4LH
6,144
NIH exporter file not found
{ "login": "brando90", "id": 1855278, "node_id": "MDQ6VXNlcjE4NTUyNzg=", "avatar_url": "https://avatars.githubusercontent.com/u/1855278?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brando90", "html_url": "https://github.com/brando90", "followers_url": "https://api.github.com/users/brando90/followers", "following_url": "https://api.github.com/users/brando90/following{/other_user}", "gists_url": "https://api.github.com/users/brando90/gists{/gist_id}", "starred_url": "https://api.github.com/users/brando90/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brando90/subscriptions", "organizations_url": "https://api.github.com/users/brando90/orgs", "repos_url": "https://api.github.com/users/brando90/repos", "events_url": "https://api.github.com/users/brando90/events{/privacy}", "received_events_url": "https://api.github.com/users/brando90/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "related: https://github.com/huggingface/datasets/issues/3504", "another file not found:\r\n```\r\nTraceback (most recent call last):\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py\", line 417, in _info\r\n await _file_info(\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py\", line 837, in _file_info\r\n r.raise_for_status()\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/aiohttp/client_reqrep.py\", line 1005, in raise_for_status\r\n raise ClientResponseError(\r\naiohttp.client_exceptions.ClientResponseError: 404, message='Not Found', url=URL('https://the-eye.eu/public/AI/pile_preliminary_components/pile_uspto.tar')\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"/lfs/ampere1/0/brando9/.vscode-server-insiders/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py\", line 39, in <module>\r\n cli.main()\r\n File \"/lfs/ampere1/0/brando9/.vscode-server-insiders/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 430, in main\r\n run()\r\n File \"/lfs/ampere1/0/brando9/.vscode-server-insiders/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 284, in run_file\r\n runpy.run_path(target, run_name=\"__main__\")\r\n File \"/lfs/ampere1/0/brando9/.vscode-server-insiders/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 321, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"/lfs/ampere1/0/brando9/.vscode-server-insiders/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 135, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/lfs/ampere1/0/brando9/.vscode-server-insiders/extensions/ms-python.python-2023.14.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 124, in _run_code\r\n exec(code, run_globals)\r\n File \"/lfs/ampere1/0/brando9/beyond-scale-language-data-diversity/src/diversity/div_coeff.py\", line 526, in <module>\r\n experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights()\r\n File \"/lfs/ampere1/0/brando9/beyond-scale-language-data-diversity/src/diversity/div_coeff.py\", line 475, in experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights\r\n column_names = next(iter(dataset)).keys()\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 1353, in __iter__\r\n for key, example in ex_iterable:\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 207, in __iter__\r\n yield from 
self.generate_examples_fn(**self.kwargs)\r\n File \"/lfs/ampere1/0/brando9/.cache/huggingface/modules/datasets_modules/datasets/EleutherAI--pile/ebea56d358e91cf4d37b0fde361d563bed1472fbd8221a21b38fc8bb4ba554fb/pile.py\", line 257, in _generate_examples\r\n for path, file in files[subset]:\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py\", line 840, in __iter__\r\n yield from self.generator(*self.args, **self.kwargs)\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py\", line 891, in _iter_from_urlpath\r\n with xopen(urlpath, \"rb\", download_config=download_config) as f:\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py\", line 496, in xopen\r\n file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/core.py\", line 134, in open\r\n return self.__enter__()\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/core.py\", line 102, in __enter__\r\n f = self.fs.open(self.path, mode=mode)\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/spec.py\", line 1241, in open\r\n f = self._open(\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py\", line 356, in _open\r\n size = size or self.info(path, **kwargs)[\"size\"]\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py\", line 121, in wrapper\r\n return sync(self.loop, func, *args, **kwargs)\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py\", line 106, in sync\r\n raise return_result\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py\", line 61, in _runner\r\n result[0] = await coro\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py\", line 430, in _info\r\n raise FileNotFoundError(url) from exc\r\nFileNotFoundError: https://the-eye.eu/public/AI/pile_preliminary_components/pile_uspto.tar\r\n```", "```\r\nFileNotFoundError: https://the-eye.eu/public/AI/pile_preliminary_components/pile_uspto.tar\r\n```\r\nmost relevant line I think.", "link to tweet: https://twitter.com/BrandoHablando/status/1690081313519489024?s=20 about issue", "so: https://stackoverflow.com/questions/76891189/how-to-download-data-from-hugging-face-that-is-visible-on-the-data-viewer-but-th", "this seems to work but it's rather annoying.\r\n\r\nSummary of how to make it work:\r\n1. get urls to parquet files into a list\r\n2. load list to load_dataset via `load_dataset('parquet', data_files=urls)` (note api names to hf are really confusing sometimes)\r\n3. 
then it should work, print a batch of text.\r\n\r\npresudo code\r\n```python\r\nurls_hacker_news = [\r\n \"https://huggingface.co./datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00000-of-00004.parquet\",\r\n \"https://huggingface.co./datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00001-of-00004.parquet\",\r\n \"https://huggingface.co./datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00002-of-00004.parquet\",\r\n \"https://huggingface.co./datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00003-of-00004.parquet\"\r\n]\r\n\r\n...\r\n\r\n\r\n # streaming = False\r\n from diversity.pile_subset_urls import urls_hacker_news\r\n path, name, data_files = 'parquet', 'hacker_news', urls_hacker_news\r\n # not changing\r\n batch_size = 512\r\n today = datetime.datetime.now().strftime('%Y-m%m-d%d-t%Hh_%Mm_%Ss')\r\n run_name = f'{path} div_coeff_{num_batches=} ({today=} ({name=}) {data_mixture_name=} {probabilities=})'\r\n print(f'{run_name=}')\r\n\r\n # - Init wandb\r\n debug: bool = mode == 'dryrun'\r\n run = wandb.init(mode=mode, project=\"beyond-scale\", name=run_name, save_code=True)\r\n wandb.config.update({\"num_batches\": num_batches, \"path\": path, \"name\": name, \"today\": today, 'probabilities': probabilities, 'batch_size': batch_size, 'debug': debug, 'data_mixture_name': data_mixture_name, 'streaming': streaming, 'data_files': data_files})\r\n # run.notify_on_failure() # https://community.wandb.ai/t/how-do-i-set-the-wandb-alert-programatically-for-my-current-run/4891\r\n print(f'{debug=}')\r\n print(f'{wandb.config=}')\r\n\r\n # -- Get probe network\r\n from datasets import load_dataset\r\n import torch\r\n from transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n\r\n tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\r\n if tokenizer.pad_token_id is None:\r\n tokenizer.pad_token = tokenizer.eos_token\r\n probe_network = GPT2LMHeadModel.from_pretrained(\"gpt2\")\r\n device = torch.device(f\"cuda:{0}\" if torch.cuda.is_available() else \"cpu\")\r\n probe_network = probe_network.to(device)\r\n\r\n # -- Get data set\r\n def my_load_dataset(path, name):\r\n print(f'{path=} {name=} {streaming=}')\r\n if path == 'json' or path == 'bin' or path == 'csv':\r\n print(f'{data_files_prefix+name=}')\r\n return load_dataset(path, data_files=data_files_prefix+name, streaming=streaming, split=\"train\").with_format(\"torch\")\r\n elif path == 'parquet':\r\n print(f'{data_files=}')\r\n return load_dataset(path, data_files=data_files, streaming=streaming, split=\"train\").with_format(\"torch\")\r\n else:\r\n return load_dataset(path, name, streaming=streaming, split=\"train\").with_format(\"torch\")\r\n # - get data set for real now\r\n if isinstance(path, str):\r\n dataset = my_load_dataset(path, name)\r\n else:\r\n print('-- interleaving datasets')\r\n datasets = [my_load_dataset(path, name).with_format(\"torch\") for path, name in zip(path, name)]\r\n [print(f'{dataset.description=}') for dataset in datasets]\r\n dataset = interleave_datasets(datasets, probabilities)\r\n print(f'{dataset=}')\r\n batch = dataset.take(batch_size)\r\n print(f'{next(iter(batch))=}')\r\n column_names = next(iter(batch)).keys()\r\n print(f'{column_names=}')\r\n\r\n # - Prepare functions to tokenize batch\r\n def preprocess(examples):\r\n return tokenizer(examples[\"text\"], padding=\"max_length\", max_length=128, truncation=True, 
return_tensors=\"pt\")\r\n remove_columns = column_names # remove all keys that are not tensors to avoid bugs in collate function in task2vec's pytorch data loader\r\n def map(batch):\r\n return batch.map(preprocess, batched=True, remove_columns=remove_columns)\r\n tokenized_batch = map(batch)\r\n print(f'{next(iter(tokenized_batch))=}')\r\n```\r\n\r\nhttps://stackoverflow.com/questions/76891189/how-to-download-data-from-hugging-face-that-is-visible-on-the-data-viewer-but-th/76902681#76902681\r\n\r\nhttps://discuss.huggingface.co/t/how-to-download-data-from-hugging-face-that-is-visible-on-the-data-viewer-but-the-files-are-not-available/50555/5?u=severo" ]
2023-08-11T19:05:25
2023-08-14T23:28:38
null
NONE
null
### Describe the bug can't use or download the nih exporter pile data. ``` 15 experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights() 16 File "/lfs/ampere1/0/brando9/beyond-scale-language-data-diversity/src/diversity/div_coeff.py", line 474, in experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights 17 column_names = next(iter(dataset)).keys() 18 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1353, in __iter__ 19 for key, example in ex_iterable: 20 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 207, in __iter__ 21 yield from self.generate_examples_fn(**self.kwargs) 22 File "/lfs/ampere1/0/brando9/.cache/huggingface/modules/datasets_modules/datasets/EleutherAI--pile/ebea56d358e91cf4d37b0fde361d563bed1472fbd8221a21b38fc8bb4ba554fb/pile.py", line 236, in _generate_examples 23 with zstd.open(open(files[subset], "rb"), "rt", encoding="utf-8") as f: 24 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/streaming.py", line 74, in wrapper 25 return function(*args, download_config=download_config, **kwargs) 26 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen 27 file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() 28 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/core.py", line 134, in open 29 return self.__enter__() 30 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/core.py", line 102, in __enter__ 31 f = self.fs.open(self.path, mode=mode) 32 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/spec.py", line 1241, in open 33 f = self._open( 34 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py", line 356, in _open 35 size = size or self.info(path, **kwargs)["size"] 36 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py", line 121, in wrapper 37 return sync(self.loop, func, *args, **kwargs) 38 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py", line 106, in sync 39 raise return_result 40 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/asyn.py", line 61, in _runner 41 result[0] = await coro 42 File "/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py", line 430, in _info 43 raise FileNotFoundError(url) from exc 44 FileNotFoundError: https://the-eye.eu/public/AI/pile_preliminary_components/NIH_ExPORTER_awarded_grant_text.jsonl.zst ``` ### Steps to reproduce the bug run this: ``` from datasets import load_dataset path, name = 'EleutherAI/pile', 'nih_exporter' # -- Get data set dataset = load_dataset(path, name, streaming=True, split="train").with_format("torch") batch = dataset.take(512) print(f'{batch=}') ``` ### Expected behavior print the batch ### Environment info ``` (beyond_scale) brando9@ampere1:~/beyond-scale-language-data-diversity$ datasets-cli env Copy-and-paste the text below in your GitHub issue. 
- `datasets` version: 2.14.4 - Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3 ```
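A condensed sketch of the workaround posted in the comments — loading the Hub's auto-converted Parquet shards directly instead of the dead the-eye.eu mirror. The URLs are the ones quoted above (hacker_news subset); other Pile subsets would follow the same pattern if their converted shards exist:

```python
from datasets import load_dataset

# Parquet shard URLs copied from the workaround comment above.
urls = [
    "https://huggingface.co./datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00000-of-00004.parquet",
    "https://huggingface.co./datasets/EleutherAI/pile/resolve/refs%2Fconvert%2Fparquet/hacker_news/pile-train-00001-of-00004.parquet",
]

# Streams the Parquet files directly, bypassing the broken upstream host.
dataset = load_dataset("parquet", data_files=urls, streaming=True, split="train")
print(next(iter(dataset)))
```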
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6144/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6144/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6142
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6142/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6142/comments
https://api.github.com/repos/huggingface/datasets/issues/6142/events
https://github.com/huggingface/datasets/issues/6142
1,846,205,216
I_kwDODunzps5uCtsg
6,142
the-stack-dedup fails to generate
{ "login": "michaelroyzen", "id": 45830328, "node_id": "MDQ6VXNlcjQ1ODMwMzI4", "avatar_url": "https://avatars.githubusercontent.com/u/45830328?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelroyzen", "html_url": "https://github.com/michaelroyzen", "followers_url": "https://api.github.com/users/michaelroyzen/followers", "following_url": "https://api.github.com/users/michaelroyzen/following{/other_user}", "gists_url": "https://api.github.com/users/michaelroyzen/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelroyzen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelroyzen/subscriptions", "organizations_url": "https://api.github.com/users/michaelroyzen/orgs", "repos_url": "https://api.github.com/users/michaelroyzen/repos", "events_url": "https://api.github.com/users/michaelroyzen/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelroyzen/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "@severo ", "It seems that some parquet files have additional columns.\r\n\r\nI ran a scan and found that two files have the additional `__id__` column:\r\n\r\n1. `hf://datasets/bigcode/the-stack-dedup/data/numpy/data-00000-of-00001.parquet`\r\n2. `hf://datasets/bigcode/the-stack-dedup/data/omgrofl/data-00000-of-00001.parquet`\r\n\r\nWe should open a PR to fix those two files", "I opened https://huggingface.co./datasets/bigcode/the-stack-dedup/discussions/31", "The files have been fixed ! I'm closing this one but feel free to re-open if you still have the issue" ]
2023-08-11T05:10:49
2023-08-17T09:26:13
2023-08-17T09:26:13
NONE
null
### Describe the bug I'm getting an error generating the-stack-dedup with datasets 2.13.1, and with 2.14.4 nothing happens. ### Steps to reproduce the bug My code: ``` import os import datasets as ds MY_CACHE_DIR = "/home/ubuntu/the-stack-dedup-local" MY_TOKEN="my-token" the_stack_ds = ds.load_dataset("bigcode/the-stack-dedup", split="train", download_mode="reuse_cache_if_exists", cache_dir=MY_CACHE_DIR, use_auth_token=MY_TOKEN, num_proc=64) ``` The exception: ``` Generating train split: 233248251 examples [54:31, 57280.00 examples/s] multiprocess.pool.RemoteTraceback: """ Traceback (most recent call last): File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/builder.py", line 1879, in _prepare_split_single for _, table in generator: File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 82, in _generate_tables yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table) File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 61, in _cast_table pa_table = table_cast(pa_table, self.info.features.arrow_schema) File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/table.py", line 2324, in table_cast return cast_table_to_schema(table, schema) File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/table.py", line 2282, in cast_table_to_schema raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match") ValueError: Couldn't cast hexsha: string size: int64 ext: string lang: string max_stars_repo_path: string max_stars_repo_name: string max_stars_repo_head_hexsha: string max_stars_repo_licenses: list<item: string> child 0, item: string max_stars_count: int64 max_stars_repo_stars_event_min_datetime: string max_stars_repo_stars_event_max_datetime: string max_issues_repo_path: string max_issues_repo_name: string max_issues_repo_head_hexsha: string max_issues_repo_licenses: list<item: string> child 0, item: string max_issues_count: int64 max_issues_repo_issues_event_min_datetime: string max_issues_repo_issues_event_max_datetime: string max_forks_repo_path: string max_forks_repo_name: string max_forks_repo_head_hexsha: string max_forks_repo_licenses: list<item: string> child 0, item: string max_forks_count: int64 max_forks_repo_forks_event_min_datetime: string max_forks_repo_forks_event_max_datetime: string content: string avg_line_length: double max_line_length: int64 alphanum_fraction: double __id__: int64 -- schema metadata -- huggingface: '{"info": {"features": {"hexsha": {"dtype": "string", "_type' + 1979 to {'hexsha': Value(dtype='string', id=None), 'size': Value(dtype='int64', id=None), 'ext': Value(dtype='string', id=None), 'lang': Value(dtype='string', id=None), 'max_stars_repo_path': Value(dtype='string', id=None), 'max_stars_repo_name': Value(dtype='string', id=None), 'max_stars_repo_head_hexsha': Value(dtype='string', id=None), 'max_stars_repo_licenses': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'max_stars_count': Value(dtype='int64', id=None), 'max_stars_repo_stars_event_min_datetime': Value(dtype='string', id=None), 'max_stars_repo_stars_event_max_datetime': Value(dtype='string', id=None), 'max_issues_repo_path': Value(dtype='string', id=None), 'max_issues_repo_name': Value(dtype='string', id=None), 'max_issues_repo_head_hexsha': Value(dtype='string', id=None), 'max_issues_repo_licenses': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'max_issues_count': Value(dtype='int64', id=None), 'max_issues_repo_issues_event_min_datetime': Value(dtype='string', id=None), 'max_issues_repo_issues_event_max_datetime': Value(dtype='string', id=None), 'max_forks_repo_path': Value(dtype='string', id=None), 'max_forks_repo_name': Value(dtype='string', id=None), 'max_forks_repo_head_hexsha': Value(dtype='string', id=None), 'max_forks_repo_licenses': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'max_forks_count': Value(dtype='int64', id=None), 'max_forks_repo_forks_event_min_datetime': Value(dtype='string', id=None), 'max_forks_repo_forks_event_max_datetime': Value(dtype='string', id=None), 'content': Value(dtype='string', id=None), 'avg_line_length': Value(dtype='float64', id=None), 'max_line_length': Value(dtype='int64', id=None), 'alphanum_fraction': Value(dtype='float64', id=None)} because column names don't match The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/ubuntu/.local/lib/python3.10/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1328, in _write_generator_to_queue for i, result in enumerate(func(**kwargs)): File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/builder.py", line 1912, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/ubuntu/download_the_stack.py", line 7, in <module> the_stack_ds = ds.load_dataset("bigcode/the-stack-dedup", split="train", download_mode="reuse_cache_if_exists", cache_dir=MY_CACHE_DIR, use_auth_token=MY_TOKEN, num_proc=64) File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/load.py", line 1809, in load_dataset builder_instance.download_and_prepare( File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/builder.py", line 909, in download_and_prepare self._download_and_prepare( File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/builder.py", line 1004, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/builder.py", line 1796, in _prepare_split for job_id, done, content in iflatmap_unordered( File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1354, in iflatmap_unordered [async_result.get(timeout=0.05) for async_result in async_results] File "/home/ubuntu/.local/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1354, in <listcomp> [async_result.get(timeout=0.05) for async_result in async_results] File "/home/ubuntu/.local/lib/python3.10/site-packages/multiprocess/pool.py", line 774, in get raise self._value datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior The dataset downloads properly. @lhoestq @loub ### Environment info Datasets 2.13.1, large VM with 2TB RAM, Ubuntu 20.04
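A small sketch of the kind of scan the maintainer describes in the comments — comparing each shard's Parquet schema against the first one to surface stray columns like `__id__`. The local path is an assumption (a downloaded copy of the repo's data folder):

```python
from pathlib import Path

import pyarrow.parquet as pq

reference = None
# Hypothetical local mirror of the dataset's data/ directory.
for path in sorted(Path("the-stack-dedup/data").rglob("*.parquet")):
    names = set(pq.read_schema(str(path)).names)  # footer metadata only, no data read
    if reference is None:
        reference = names
        continue
    extra = names - reference
    if extra:
        print(f"{path}: unexpected columns {sorted(extra)}")
```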
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6142/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6142/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6141
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6141/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6141/comments
https://api.github.com/repos/huggingface/datasets/issues/6141/events
https://github.com/huggingface/datasets/issues/6141
1,846,117,729
I_kwDODunzps5uCYVh
6,141
TypeError: ClientSession._request() got an unexpected keyword argument 'https'
{ "login": "q935970314", "id": 35994018, "node_id": "MDQ6VXNlcjM1OTk0MDE4", "avatar_url": "https://avatars.githubusercontent.com/u/35994018?v=4", "gravatar_id": "", "url": "https://api.github.com/users/q935970314", "html_url": "https://github.com/q935970314", "followers_url": "https://api.github.com/users/q935970314/followers", "following_url": "https://api.github.com/users/q935970314/following{/other_user}", "gists_url": "https://api.github.com/users/q935970314/gists{/gist_id}", "starred_url": "https://api.github.com/users/q935970314/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/q935970314/subscriptions", "organizations_url": "https://api.github.com/users/q935970314/orgs", "repos_url": "https://api.github.com/users/q935970314/repos", "events_url": "https://api.github.com/users/q935970314/events{/privacy}", "received_events_url": "https://api.github.com/users/q935970314/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! I cannot reproduce this error on my machine or in Colab. Which version of `fsspec` do you have installed?" ]
2023-08-11T02:40:32
2023-08-17T18:09:23
null
NONE
null
### Describe the bug Hello, when I ran the [code snippet](https://huggingface.co./docs/datasets/v2.14.4/en/loading#json) on the document, I encountered the following problem: ``` Python 3.10.9 (main, Mar 1 2023, 18:23:06) [GCC 11.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from datasets import load_dataset >>> base_url = "https://rajpurkar.github.io/SQuAD-explorer/dataset/" >>> dataset = load_dataset("json", data_files={"train": base_url + "train-v1.1.json", "validation": base_url + "dev-v1.1.json"}, field="data") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/liushuai/anaconda3/lib/python3.10/site-packages/datasets/load.py", line 2112, in load_dataset builder_instance = load_dataset_builder( File "/home/liushuai/anaconda3/lib/python3.10/site-packages/datasets/load.py", line 1798, in load_dataset_builder dataset_module = dataset_module_factory( File "/home/liushuai/anaconda3/lib/python3.10/site-packages/datasets/load.py", line 1413, in dataset_module_factory ).get_module() File "/home/liushuai/anaconda3/lib/python3.10/site-packages/datasets/load.py", line 949, in get_module data_files = DataFilesDict.from_patterns( File "/home/liushuai/anaconda3/lib/python3.10/site-packages/datasets/data_files.py", line 672, in from_patterns DataFilesList.from_patterns( File "/home/liushuai/anaconda3/lib/python3.10/site-packages/datasets/data_files.py", line 578, in from_patterns resolve_pattern( File "/home/liushuai/anaconda3/lib/python3.10/site-packages/datasets/data_files.py", line 340, in resolve_pattern for filepath, info in fs.glob(pattern, detail=True).items() File "/home/liushuai/anaconda3/lib/python3.10/site-packages/fsspec/asyn.py", line 113, in wrapper return sync(self.loop, func, *args, **kwargs) File "/home/liushuai/anaconda3/lib/python3.10/site-packages/fsspec/asyn.py", line 98, in sync raise return_result File "/home/liushuai/anaconda3/lib/python3.10/site-packages/fsspec/asyn.py", line 53, in _runner result[0] = await coro File "/home/liushuai/anaconda3/lib/python3.10/site-packages/fsspec/implementations/http.py", line 449, in _glob elif await self._exists(path): File "/home/liushuai/anaconda3/lib/python3.10/site-packages/fsspec/implementations/http.py", line 306, in _exists r = await session.get(self.encode_url(path), **kw) File "/home/liushuai/anaconda3/lib/python3.10/site-packages/aiohttp/client.py", line 922, in get self._request(hdrs.METH_GET, url, allow_redirects=allow_redirects, **kwargs) TypeError: ClientSession._request() got an unexpected keyword argument 'https' ``` ### Steps to reproduce the bug ``` from datasets import load_dataset base_url = "https://rajpurkar.github.io/SQuAD-explorer/dataset/" dataset = load_dataset("json", data_files={"train": base_url + "train-v1.1.json", "validation": base_url + "dev-v1.1.json"}, field="data") ``` ### Expected behavior able to load normally ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-5.4.54-2-x86_64-with-glibc2.27 - Python version: 3.10.9 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
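A quick diagnostic sketch in the spirit of the maintainer's question above — the stray `https` keyword reaching `aiohttp` usually points at an `fsspec`/`aiohttp` version mismatch rather than at `datasets` itself:

```python
import aiohttp
import fsspec

# Report the versions the maintainer asked about.
print("fsspec :", fsspec.__version__)
print("aiohttp:", aiohttp.__version__)

# Assumption, not a confirmed fix: mismatched fsspec/aiohttp releases are a
# common source of unexpected-keyword errors in the HTTP filesystem, so
# refreshing both (`pip install -U fsspec aiohttp`) is a sensible first step.
```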
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6141/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6141/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6140
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6140/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6140/comments
https://api.github.com/repos/huggingface/datasets/issues/6140/events
https://github.com/huggingface/datasets/issues/6140
1,845,384,712
I_kwDODunzps5t_lYI
6,140
Misalignment between file format specified in configs metadata YAML and the inferred builder
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
2023-08-10T15:07:34
2023-08-17T20:37:20
2023-08-17T20:37:20
MEMBER
null
There is a misalignment between the format of the `data_files` specified in the configs metadata YAML (CSV): ```yaml configs: - config_name: default data_files: - split: train path: data.csv ``` and the inferred builder (JSON). Note there are multiple JSON files in the repo, but they do not appear in the configs metadata YAML. See: https://huggingface.co./datasets/freddyaboulton/chatinterface_with_image_csv/discussions/1 CC: @freddyaboulton @polinaeterna
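One hedged way to realign the metadata with the repository contents (file names here are illustrative): list exactly the files the loader should see under `data_files`, so the inferred builder matches the declared CSV format, and keep the stray JSON files out of the resolved patterns:

```yaml
configs:
- config_name: default
  data_files:
  - split: train
    path: data.csv   # the only file the default config should load
# The repo's *.json files should either get their own config entry or
# live outside the paths matched by data_files.
```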
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6140/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6140/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6139
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6139/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6139/comments
https://api.github.com/repos/huggingface/datasets/issues/6139/events
https://github.com/huggingface/datasets/issues/6139
1,844,991,583
I_kwDODunzps5t-FZf
6,139
Offline dataset viewer
{ "login": "yuvalkirstain", "id": 57996478, "node_id": "MDQ6VXNlcjU3OTk2NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuvalkirstain", "html_url": "https://github.com/yuvalkirstain", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi, thanks for the suggestion. It's not possible at the moment. The viewer is part of the Hub codebase and only works on public datasets. Also, it relies on [Datasets Server](https://github.com/huggingface/datasets-server/), which prepares the data and provides an API to access the rows, size, etc.\r\n\r\nIf you're interested in hosting your data as a private dataset on the Hub, you might want to look at https://github.com/huggingface/datasets-server/issues/39.", "Hi, we are building an offline dataset viewer: https://github.com/Renumics/spotlight\r\nIt supports many HF datasets, but currently you have to use it via Pandas:\r\ndf=ds.to_pandas()\r\nspotlight.show(df)\r\n\r\nWould love to hear from you if that works for your use case. If not, feel free to open an issue on the repo: https://github.com/Renumics/spotlight/issues", "@ssuwelack thank you! I will definitely try it out." ]
2023-08-10T11:30:00
2023-08-26T19:30:40
null
NONE
null
### Feature request The dataset viewer feature is very nice. It enables the user to easily view the dataset. However, when working for private companies we cannot always upload the dataset to the Hub. Is there a way to run the dataset viewer offline? I.e., to run code that opens some kind of HTML page or similar that makes it easy to view the dataset. ### Motivation I want to easily view my dataset even when it is hosted locally. ### Your contribution N.A.
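A minimal sketch of the Spotlight route suggested in the comments, assuming the package is installed and that going through pandas is acceptable:

```python
from datasets import load_dataset
from renumics import spotlight  # pip install renumics-spotlight

# Illustrative dataset; any locally hosted Dataset works the same way.
ds = load_dataset("imdb", split="train[:1000]")

# Opens a local browser UI — nothing is uploaded to the Hub.
spotlight.show(ds.to_pandas())
```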
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6139/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6139/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6138
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6138/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6138/comments
https://api.github.com/repos/huggingface/datasets/issues/6138/events
https://github.com/huggingface/datasets/pull/6138
1,844,952,496
PR_kwDODunzps5XoH2V
6,138
Ignore CI lint rule violation in Pickler.memoize
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006536 / 0.011353 (-0.004817) | 0.003890 / 0.011008 (-0.007118) | 0.084044 / 0.038508 (0.045536) | 0.071893 / 0.023109 (0.048784) | 0.346926 / 0.275898 (0.071028) | 0.397487 / 0.323480 (0.074007) | 0.004065 / 0.007986 (-0.003921) | 0.003218 / 0.004328 (-0.001111) | 0.064670 / 0.004250 (0.060420) | 0.052414 / 0.037052 (0.015362) | 0.355413 / 0.258489 (0.096924) | 0.398894 / 0.293841 (0.105053) | 0.030763 / 0.128546 (-0.097783) | 0.008590 / 0.075646 (-0.067056) | 0.286857 / 0.419271 (-0.132415) | 0.051126 / 0.043533 (0.007593) | 0.346125 / 0.255139 (0.090986) | 0.395673 / 0.283200 (0.112474) | 0.025766 / 0.141683 (-0.115917) | 1.466238 / 1.452155 (0.014084) | 1.543117 / 1.492716 (0.050400) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213210 / 0.018006 (0.195204) | 0.451981 / 0.000490 (0.451491) | 0.003784 / 0.000200 (0.003585) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027756 / 0.037411 (-0.009655) | 0.082446 / 0.014526 (0.067920) | 0.095414 / 0.176557 (-0.081142) | 0.151812 / 0.737135 (-0.585323) | 0.096296 / 0.296338 (-0.200042) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383729 / 0.215209 (0.168520) | 3.835126 / 2.077655 (1.757471) | 1.891972 / 1.504120 (0.387852) | 1.719934 / 1.541195 (0.178739) | 1.899980 / 1.468490 
(0.431490) | 0.488741 / 4.584777 (-4.096036) | 3.634120 / 3.745712 (-0.111592) | 3.243314 / 5.269862 (-2.026547) | 2.028382 / 4.565676 (-2.537294) | 0.057355 / 0.424275 (-0.366920) | 0.007717 / 0.007607 (0.000110) | 0.459835 / 0.226044 (0.233790) | 4.591793 / 2.268929 (2.322864) | 2.346861 / 55.444624 (-53.097764) | 2.067357 / 6.876477 (-4.809120) | 2.254954 / 2.142072 (0.112882) | 0.587016 / 4.805227 (-4.218211) | 0.133918 / 6.500664 (-6.366746) | 0.060311 / 0.075469 (-0.015158) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.250016 / 1.841788 (-0.591772) | 19.674333 / 8.074308 (11.600025) | 14.522764 / 10.191392 (4.331372) | 0.145741 / 0.680424 (-0.534683) | 0.018593 / 0.534201 (-0.515608) | 0.392833 / 0.579283 (-0.186450) | 0.408194 / 0.434364 (-0.026170) | 0.455164 / 0.540337 (-0.085174) | 0.622722 / 1.386936 (-0.764214) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006583 / 0.011353 (-0.004770) | 0.004008 / 0.011008 (-0.007000) | 0.064688 / 0.038508 (0.026180) | 0.074969 / 0.023109 (0.051860) | 0.360504 / 0.275898 (0.084606) | 0.396926 / 0.323480 (0.073446) | 0.005190 / 0.007986 (-0.002796) | 0.003363 / 0.004328 (-0.000966) | 0.064372 / 0.004250 (0.060122) | 0.054428 / 0.037052 (0.017376) | 0.361204 / 0.258489 (0.102715) | 0.400917 / 0.293841 (0.107077) | 0.031117 / 0.128546 (-0.097429) | 0.008406 / 0.075646 (-0.067241) | 0.069655 / 0.419271 (-0.349617) | 0.048582 / 0.043533 (0.005049) | 0.365396 / 0.255139 (0.110257) | 0.381344 / 0.283200 (0.098145) | 0.023809 / 0.141683 (-0.117874) | 1.472926 / 1.452155 (0.020772) | 1.547298 / 1.492716 (0.054582) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.276912 / 0.018006 (0.258906) | 0.449096 / 0.000490 (0.448607) | 0.018921 / 0.000200 (0.018721) | 0.000111 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030237 / 0.037411 (-0.007174) | 0.088610 / 0.014526 (0.074084) | 0.101529 / 0.176557 (-0.075027) | 0.154070 / 0.737135 (-0.583065) | 0.103471 / 0.296338 (-0.192867) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416047 / 0.215209 (0.200838) | 4.152374 / 2.077655 (2.074719) | 2.111181 / 1.504120 (0.607061) | 1.943582 / 1.541195 (0.402387) | 2.031729 / 1.468490 (0.563239) | 0.486740 / 4.584777 (-4.098037) | 3.631547 / 3.745712 (-0.114165) | 3.251202 / 5.269862 (-2.018660) | 2.041272 / 4.565676 (-2.524405) | 0.057287 / 0.424275 (-0.366988) | 0.007303 / 0.007607 (-0.000304) | 0.491027 / 0.226044 (0.264982) | 4.906757 / 2.268929 (2.637829) | 2.581694 / 55.444624 (-52.862931) | 2.250996 / 6.876477 (-4.625481) | 2.441771 / 2.142072 (0.299698) | 0.600714 / 4.805227 (-4.204514) | 0.133233 / 6.500664 (-6.367431) | 0.060856 / 0.075469 (-0.014613) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.340062 / 1.841788 (-0.501725) | 19.973899 / 8.074308 (11.899591) | 14.347381 / 10.191392 (4.155989) | 0.166651 / 0.680424 (-0.513773) | 0.018691 / 0.534201 (-0.515510) | 0.393580 / 0.579283 (-0.185703) | 0.409425 / 0.434364 (-0.024939) | 0.474409 / 0.540337 (-0.065929) | 0.649423 / 1.386936 (-0.737514) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c5da68102297c3639207a7901952d2765a4cdb8b \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006593 / 0.011353 (-0.004760) | 0.004123 / 0.011008 (-0.006885) | 0.084424 / 0.038508 (0.045916) | 0.076867 / 0.023109 (0.053758) | 0.309149 / 0.275898 (0.033251) | 0.348572 / 0.323480 (0.025092) | 0.005463 / 0.007986 (-0.002523) | 0.003440 / 0.004328 (-0.000889) | 0.064604 / 0.004250 (0.060353) | 0.053920 / 0.037052 (0.016868) | 0.345221 / 0.258489 (0.086732) | 0.363209 / 0.293841 (0.069368) | 0.031209 / 0.128546 (-0.097337) | 0.008690 / 0.075646 (-0.066956) | 0.288851 / 0.419271 (-0.130421) | 0.052239 / 0.043533 (0.008707) | 0.308643 / 0.255139 (0.053504) | 0.346407 / 0.283200 (0.063207) | 0.023935 / 0.141683 (-0.117748) | 1.469207 / 1.452155 (0.017052) | 1.532855 / 1.492716 (0.040138) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290885 / 0.018006 (0.272879) | 0.580561 / 0.000490 (0.580071) | 0.004698 / 0.000200 (0.004498) | 0.000286 / 0.000054 (0.000231) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028015 / 0.037411 (-0.009396) | 0.081172 / 0.014526 (0.066646) | 0.096822 / 0.176557 (-0.079735) | 0.151355 / 0.737135 (-0.585781) | 0.098017 / 0.296338 (-0.198321) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384069 / 0.215209 (0.168859) | 3.828635 / 2.077655 (1.750980) | 1.829311 / 1.504120 (0.325192) | 1.672520 / 1.541195 (0.131325) | 1.743944 / 1.468490 (0.275453) | 0.481594 / 4.584777 (-4.103183) | 3.556204 / 3.745712 (-0.189509) | 3.279499 / 5.269862 (-1.990363) | 2.033243 / 4.565676 (-2.532434) | 0.056525 / 0.424275 (-0.367750) | 0.007717 / 0.007607 (0.000109) | 0.466815 / 0.226044 (0.240771) | 4.657022 / 2.268929 (2.388094) | 2.438600 / 55.444624 (-53.006024) | 2.097999 / 6.876477 (-4.778478) | 2.263122 / 2.142072 (0.121049) | 0.636001 / 4.805227 (-4.169226) | 0.147727 / 6.500664 (-6.352937) | 0.059293 / 0.075469 (-0.016176) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243111 / 1.841788 (-0.598677) | 19.558379 / 8.074308 (11.484071) | 14.141017 / 10.191392 (3.949625) | 0.169840 / 0.680424 (-0.510583) | 0.017912 / 0.534201 (-0.516289) | 0.391325 / 0.579283 (-0.187958) | 0.417169 / 0.434364 (-0.017195) | 0.457129 / 0.540337 (-0.083209) | 0.629907 / 
1.386936 (-0.757029) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006687 / 0.011353 (-0.004666) | 0.004165 / 0.011008 (-0.006844) | 0.064738 / 0.038508 (0.026230) | 0.077286 / 0.023109 (0.054177) | 0.364236 / 0.275898 (0.088338) | 0.393228 / 0.323480 (0.069748) | 0.005451 / 0.007986 (-0.002535) | 0.003547 / 0.004328 (-0.000781) | 0.065761 / 0.004250 (0.061510) | 0.056526 / 0.037052 (0.019474) | 0.365523 / 0.258489 (0.107034) | 0.403331 / 0.293841 (0.109490) | 0.030900 / 0.128546 (-0.097646) | 0.008757 / 0.075646 (-0.066889) | 0.070961 / 0.419271 (-0.348311) | 0.048394 / 0.043533 (0.004861) | 0.365908 / 0.255139 (0.110769) | 0.381197 / 0.283200 (0.097998) | 0.022940 / 0.141683 (-0.118743) | 1.487909 / 1.452155 (0.035754) | 1.532931 / 1.492716 (0.040215) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.317506 / 0.018006 (0.299500) | 0.513391 / 0.000490 (0.512902) | 0.005464 / 0.000200 (0.005264) | 0.000214 / 0.000054 (0.000159) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032289 / 0.037411 (-0.005122) | 0.090157 / 0.014526 (0.075631) | 0.103514 / 0.176557 (-0.073043) | 0.158236 / 0.737135 (-0.578899) | 0.106554 / 0.296338 (-0.189784) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406455 / 0.215209 (0.191246) | 4.061563 / 2.077655 (1.983908) | 2.082201 / 1.504120 (0.578081) | 1.914433 / 1.541195 (0.373238) | 2.039342 / 1.468490 (0.570852) | 
0.478444 / 4.584777 (-4.106333) | 3.599755 / 3.745712 (-0.145957) | 3.294453 / 5.269862 (-1.975409) | 2.028519 / 4.565676 (-2.537158) | 0.056118 / 0.424275 (-0.368157) | 0.007325 / 0.007607 (-0.000282) | 0.493177 / 0.226044 (0.267132) | 4.926218 / 2.268929 (2.657289) | 2.605033 / 55.444624 (-52.839591) | 2.239933 / 6.876477 (-4.636544) | 2.454210 / 2.142072 (0.312137) | 0.571905 / 4.805227 (-4.233322) | 0.133251 / 6.500664 (-6.367413) | 0.062422 / 0.075469 (-0.013047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352752 / 1.841788 (-0.489036) | 20.265109 / 8.074308 (12.190801) | 14.293064 / 10.191392 (4.101672) | 0.169267 / 0.680424 (-0.511157) | 0.018607 / 0.534201 (-0.515594) | 0.393655 / 0.579283 (-0.185628) | 0.402132 / 0.434364 (-0.032232) | 0.477566 / 0.540337 (-0.062772) | 0.651773 / 1.386936 (-0.735163) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#80023f36b2b6678347979421ef973d8969d31306 \"CML watermark\")\n" ]
2023-08-10T11:03:15
2023-08-10T11:31:45
2023-08-10T11:22:56
MEMBER
null
This PR ignores the violation of the lint rule E721 in `Pickler.memoize`. The violation was introduced in this PR: - #3182 @lhoestq, is there a reason you did not use `isinstance` instead? As a hotfix, we just ignore the violation of the lint rule. Fix #6136.
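Since the body above discusses the E721 hotfix in prose only, here is a minimal, self-contained sketch of what the rule flags and why blindly switching to `isinstance` is not always safe. Everything below (the class names, the `# noqa` placement) is illustrative; it is not the actual `Pickler.memoize` code, nor necessarily the suppression mechanism used in the PR.

```python
# Illustrative sketch of ruff's E721 and two ways to address it.
# `Base`, `Child`, and `obj` are made-up names, not from `datasets`.

class Base:
    pass

class Child(Base):
    pass

obj = Child()

# Flagged by E721: a direct type comparison.
if type(obj) == Child:  # noqa: E721  <- one way to "ignore the violation"
    print("exact type match")

# The fix ruff suggests, appropriate when subclasses should also match:
if isinstance(obj, Base):
    print("isinstance match")

# The two checks are not equivalent: `type(obj) == Base` is False here,
# while `isinstance(obj, Base)` is True, so an exact-type check (as in a
# pickler's memo logic) can legitimately prefer the `type(...)` form.
```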
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6138/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6138/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6138", "html_url": "https://github.com/huggingface/datasets/pull/6138", "diff_url": "https://github.com/huggingface/datasets/pull/6138.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6138.patch", "merged_at": "2023-08-10T11:22:56" }
true
https://api.github.com/repos/huggingface/datasets/issues/6137
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6137/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6137/comments
https://api.github.com/repos/huggingface/datasets/issues/6137/events
https://github.com/huggingface/datasets/issues/6137
1,844,952,312
I_kwDODunzps5t97z4
6,137
(`from_spark()`) Unable to connect HDFS in pyspark YARN setting
{ "login": "kyoungrok0517", "id": 1051900, "node_id": "MDQ6VXNlcjEwNTE5MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/1051900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kyoungrok0517", "html_url": "https://github.com/kyoungrok0517", "followers_url": "https://api.github.com/users/kyoungrok0517/followers", "following_url": "https://api.github.com/users/kyoungrok0517/following{/other_user}", "gists_url": "https://api.github.com/users/kyoungrok0517/gists{/gist_id}", "starred_url": "https://api.github.com/users/kyoungrok0517/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kyoungrok0517/subscriptions", "organizations_url": "https://api.github.com/users/kyoungrok0517/orgs", "repos_url": "https://api.github.com/users/kyoungrok0517/repos", "events_url": "https://api.github.com/users/kyoungrok0517/events{/privacy}", "received_events_url": "https://api.github.com/users/kyoungrok0517/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-08-10T11:03:08
2023-08-10T11:03:08
null
NONE
null
### Describe the bug Related issue: https://github.com/apache/arrow/issues/37057#issue-1841013613 --- Hello. I'm trying to interact with HDFS storage from the driver and workers of a PySpark YARN cluster. Specifically, I'm using **Hugging Face's `datasets`** ([link](https://github.com/huggingface/datasets)) library, which relies on pyarrow to communicate with HDFS; `from_spark()` ([link](https://huggingface.co./docs/datasets/use_with_spark#load-from-spark)) is what my script invokes. Below is the error I'm encountering (sensitive paths are masked). My code is sent from the driver container to the worker containers (Docker) and then executed. I confirmed that in both the driver and worker images I can connect to HDFS using pyarrow, since the env vars and required jars are properly set, but strangely this becomes impossible when the same image runs as a remote worker process. Some peculiarities of my environment that might have caused this issue: * **The cluster requires Kerberos authentication** * But I think the error message implies that's not the problem in this case * **The user that runs the worker process is different from the one that built the Docker image** * To avoid permission-related issues, I made all directories accessed by the script accessible to everyone * **The PySpark part of my code has no problem interacting with HDFS.** * Even pyarrow works fine when I run the code in an interactive session inside the same Docker images (driver, worker) * The problem occurs only when the code runs as the cluster's worker runtime Hope I can get some help. Thanks. ```bash 2023-08-08 18:51:19,638 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-08-08 18:51:20,280 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded. 
23/08/08 18:51:22 WARN TaskSetManager: Lost task 0.0 in stage 142.0 (TID 9732) (ac3bax2062.bdp.bdata.ai executor 1): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000003/pyspark.zip/pyspark/worker.py", line 830, in main process() File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000003/pyspark.zip/pyspark/worker.py", line 820, in process out_iter = func(split_index, iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func File "/root/spark/python/pyspark/rdd.py", line 828, in func File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe open(probe_file, "a") File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper return function(*args, download_config=download_config, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open out = open_files( ^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files fs, fs_token, paths = get_fs_token_paths( ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem return cls(**storage_options) ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__ obj = super().__call__(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__ fs = HadoopFileSystem( ^^^^^^^^^^^^^^^^^ File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__ File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: HDFS connection failed at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) at scala.collection.TraversableOnce.to(TraversableOnce.scala:366) at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364) at 
org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358) at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345) at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019) at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) at org.apache.spark.scheduler.Task.run(Task.scala:139) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 23/08/08 18:51:24 WARN TaskSetManager: Lost task 0.1 in stage 142.0 (TID 9733) (ac3iax2079.bdp.bdata.ai executor 2): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000005/pyspark.zip/pyspark/worker.py", line 830, in main process() File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000005/pyspark.zip/pyspark/worker.py", line 820, in process out_iter = func(split_index, iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func File "/root/spark/python/pyspark/rdd.py", line 828, in func File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe open(probe_file, "a") File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper return function(*args, download_config=download_config, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open out = open_files( ^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files fs, fs_token, paths = get_fs_token_paths( ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem return cls(**storage_options) ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__ obj = super().__call__(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__ fs = HadoopFileSystem( ^^^^^^^^^^^^^^^^^ File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__ File 
"pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: HDFS connection failed at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) at scala.collection.TraversableOnce.to(TraversableOnce.scala:366) at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364) at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358) at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345) at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019) at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) at org.apache.spark.scheduler.Task.run(Task.scala:139) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 23/08/08 18:51:38 WARN TaskSetManager: Lost task 0.2 in stage 142.0 (TID 9734) (<MASKED> executor 4): org.apache.spark.api.python.PythonException: Traceback (most recent call last): File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000008/pyspark.zip/pyspark/worker.py", line 830, in main process() File "<MASKED>/application_1682476586273_25865777/container_e143_1682476586273_25865777_01_000008/pyspark.zip/pyspark/worker.py", line 820, in process out_iter = func(split_index, iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/root/spark/python/pyspark/rdd.py", line 5405, in pipeline_func File "/root/spark/python/pyspark/rdd.py", line 828, in func File "/opt/conda/lib/python3.11/site-packages/datasets/packaged_modules/spark/spark.py", line 130, in create_cache_and_write_probe open(probe_file, "a") File "/opt/conda/lib/python3.11/site-packages/datasets/streaming.py", line 74, in wrapper return function(*args, download_config=download_config, **kwargs) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 496, in xopen file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 439, in open out = open_files( ^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 282, in open_files fs, fs_token, paths = get_fs_token_paths( ^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/core.py", line 609, in get_fs_token_paths fs = filesystem(protocol, **inkwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/registry.py", line 267, in filesystem return cls(**storage_options) ^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/spec.py", line 79, in __call__ obj = super().__call__(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/site-packages/fsspec/implementations/arrow.py", line 278, in __init__ fs = HadoopFileSystem( ^^^^^^^^^^^^^^^^^ File "pyarrow/_hdfs.pyx", line 96, in pyarrow._hdfs.HadoopFileSystem.__init__ File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: HDFS connection failed at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:561) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:767) at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:749) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:514) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at scala.collection.Iterator.foreach(Iterator.scala:943) at scala.collection.Iterator.foreach$(Iterator.scala:943) at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62) at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105) at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49) at scala.collection.TraversableOnce.to(TraversableOnce.scala:366) at scala.collection.TraversableOnce.to$(TraversableOnce.scala:364) at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:358) at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:358) at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28) at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:345) at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:339) at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28) at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:1019) at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2303) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92) at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161) at org.apache.spark.scheduler.Task.run(Task.scala:139) at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529) at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) ``` ### Steps to reproduce the bug Use the `from_spark()` function in a PySpark YARN setting, with `cache_dir` set to an HDFS path. ### Expected behavior Works as described in the documentation. ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.17 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - PyArrow version: 10.0.1 - Pandas version: 1.5.3
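For readers who want to see the call that triggers the failure, here is a minimal sketch of the reported setup. The session config, column names, and the HDFS URI (`hdfs://namenode:8020/...`) are placeholders, not values from the actual cluster; the real job additionally runs under YARN with Kerberos, which this sketch does not reproduce.

```python
# Minimal sketch: load a Hugging Face Dataset from a Spark DataFrame,
# caching to HDFS. All paths and names below are placeholders.
from pyspark.sql import SparkSession
from datasets import Dataset

spark = SparkSession.builder.appName("from-spark-repro").getOrCreate()
df = spark.createDataFrame([("a", 1), ("b", 2)], ["text", "label"])

# from_spark() writes probe/cache files under cache_dir from the workers,
# so each worker needs a working pyarrow HadoopFileSystem connection --
# which is exactly where the traceback above fails.
ds = Dataset.from_spark(df, cache_dir="hdfs://namenode:8020/tmp/hf_cache")
print(ds)
```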
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6137/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6136
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6136/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6136/comments
https://api.github.com/repos/huggingface/datasets/issues/6136/events
https://github.com/huggingface/datasets/issues/6136
1,844,887,866
I_kwDODunzps5t9sE6
6,136
CI check_code_quality error: E721 Do not compare types, use `isinstance()`
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
2023-08-10T10:19:50
2023-08-10T11:22:58
2023-08-10T11:22:58
MEMBER
null
After the latest release of `ruff` (https://pypi.org/project/ruff/0.0.284/), we get the following CI error: ``` src/datasets/utils/py_utils.py:689:12: E721 Do not compare types, use `isinstance()` ```
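To make the flagged pattern concrete, the snippet below reproduces the kind of comparison E721 reports and shows an equivalent spelling that passes the check. `x` and `y` are placeholder values, not the actual code at `py_utils.py:689`.

```python
# Illustrative only: the comparison style E721 reports, plus an
# exact-type check that satisfies the rule.
x, y = [1, 2], (1, 2)

if type(x) == type(y):  # E721 Do not compare types, use `isinstance()`
    print("same type")

# When an exact-type comparison is intended, `is` keeps the semantics
# and is accepted by the rule, because type objects are singletons:
if type(x) is type(y):
    print("same type")
```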
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6136/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6136/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6135
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6135/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6135/comments
https://api.github.com/repos/huggingface/datasets/issues/6135/events
https://github.com/huggingface/datasets/pull/6135
1,844,870,943
PR_kwDODunzps5Xn2AT
6,135
Remove unused allowed_extensions param
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009055 / 0.011353 (-0.002298) | 0.008835 / 0.011008 (-0.002173) | 0.117048 / 0.038508 (0.078540) | 0.096268 / 0.023109 (0.073159) | 0.474678 / 0.275898 (0.198780) | 0.550509 / 0.323480 (0.227029) | 0.005552 / 0.007986 (-0.002434) | 0.004315 / 0.004328 (-0.000013) | 0.094336 / 0.004250 (0.090086) | 0.061945 / 0.037052 (0.024892) | 0.461422 / 0.258489 (0.202933) | 0.521271 / 0.293841 (0.227430) | 0.049116 / 0.128546 (-0.079430) | 0.015007 / 0.075646 (-0.060639) | 0.414351 / 0.419271 (-0.004920) | 0.137520 / 0.043533 (0.093987) | 0.465627 / 0.255139 (0.210488) | 0.537244 / 0.283200 (0.254044) | 0.068577 / 0.141683 (-0.073106) | 1.921373 / 1.452155 (0.469219) | 2.506653 / 1.492716 (1.013937) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273970 / 0.018006 (0.255963) | 0.750295 / 0.000490 (0.749805) | 0.004241 / 0.000200 (0.004041) | 0.000128 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033793 / 0.037411 (-0.003618) | 0.105562 / 0.014526 (0.091037) | 0.131771 / 0.176557 (-0.044786) | 0.196890 / 0.737135 (-0.540245) | 0.119842 / 0.296338 (-0.176496) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.634881 / 0.215209 (0.419672) | 6.069221 / 2.077655 (3.991566) | 
2.678765 / 1.504120 (1.174646) | 2.460309 / 1.541195 (0.919114) | 2.517579 / 1.468490 (1.049089) | 0.869558 / 4.584777 (-3.715219) | 5.407686 / 3.745712 (1.661974) | 4.920687 / 5.269862 (-0.349175) | 3.130066 / 4.565676 (-1.435611) | 0.100337 / 0.424275 (-0.323938) | 0.009615 / 0.007607 (0.002008) | 0.745275 / 0.226044 (0.519231) | 7.577890 / 2.268929 (5.308962) | 3.607887 / 55.444624 (-51.836738) | 2.922211 / 6.876477 (-3.954266) | 3.205592 / 2.142072 (1.063519) | 1.052298 / 4.805227 (-3.752929) | 0.218798 / 6.500664 (-6.281866) | 0.082137 / 0.075469 (0.006667) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.696551 / 1.841788 (-0.145237) | 24.946074 / 8.074308 (16.871766) | 23.114202 / 10.191392 (12.922810) | 0.220498 / 0.680424 (-0.459925) | 0.029388 / 0.534201 (-0.504813) | 0.494721 / 0.579283 (-0.084562) | 0.603085 / 0.434364 (0.168722) | 0.573093 / 0.540337 (0.032756) | 0.784937 / 1.386936 (-0.601999) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009642 / 0.011353 (-0.001711) | 0.007551 / 0.011008 (-0.003457) | 0.085224 / 0.038508 (0.046716) | 0.099493 / 0.023109 (0.076384) | 0.503824 / 0.275898 (0.227926) | 0.546583 / 0.323480 (0.223103) | 0.006385 / 0.007986 (-0.001601) | 0.004751 / 0.004328 (0.000423) | 0.084699 / 0.004250 (0.080449) | 0.067875 / 0.037052 (0.030823) | 0.485313 / 0.258489 (0.226824) | 0.535808 / 0.293841 (0.241967) | 0.049935 / 0.128546 (-0.078611) | 0.014427 / 0.075646 (-0.061219) | 0.095531 / 0.419271 (-0.323741) | 0.068487 / 0.043533 (0.024954) | 0.502204 / 0.255139 (0.247065) | 0.514393 / 0.283200 (0.231193) | 0.037350 / 0.141683 (-0.104333) | 1.849380 / 1.452155 (0.397226) | 1.920151 / 1.492716 (0.427434) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298363 / 0.018006 (0.280357) | 0.651555 / 0.000490 (0.651065) | 0.005910 / 0.000200 (0.005710) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039170 / 0.037411 (0.001758) | 0.106436 / 0.014526 (0.091910) | 0.129880 / 0.176557 (-0.046677) | 0.185401 / 0.737135 (-0.551734) | 0.125732 / 0.296338 (-0.170607) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.643248 / 0.215209 (0.428039) | 6.374807 / 2.077655 (4.297152) | 3.057296 / 1.504120 (1.553176) | 2.779534 / 1.541195 (1.238340) | 2.790165 / 1.468490 (1.321675) | 0.841580 / 4.584777 (-3.743197) | 5.371478 / 3.745712 (1.625766) | 4.973251 / 5.269862 (-0.296610) | 3.235817 / 4.565676 (-1.329860) | 0.097276 / 0.424275 (-0.326999) | 0.008840 / 0.007607 (0.001233) | 0.728678 / 0.226044 (0.502634) | 7.526382 / 2.268929 (5.257454) | 3.792550 / 55.444624 (-51.652074) | 3.439134 / 6.876477 (-3.437342) | 3.466626 / 2.142072 (1.324553) | 1.035894 / 4.805227 (-3.769333) | 0.211670 / 6.500664 (-6.288994) | 0.087596 / 0.075469 (0.012127) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.782755 / 1.841788 (-0.059033) | 25.704407 / 8.074308 (17.630099) | 23.799672 / 10.191392 (13.608280) | 0.233952 / 0.680424 (-0.446472) | 0.030810 / 0.534201 (-0.503391) | 0.505857 / 0.579283 (-0.073426) | 0.629331 / 0.434364 (0.194967) | 0.608530 / 0.540337 (0.068192) | 0.813688 / 1.386936 (-0.573248) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ed4d6bb5f1331576c41b04acd9872a5349a0915c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006401 / 0.011353 (-0.004952) | 0.003916 / 0.011008 (-0.007092) | 0.083976 / 0.038508 (0.045468) | 0.072583 / 0.023109 (0.049474) | 0.322747 / 0.275898 (0.046849) | 0.345159 / 0.323480 (0.021679) | 0.005366 / 0.007986 (-0.002620) | 0.003399 / 0.004328 (-0.000930) | 0.064232 / 0.004250 (0.059982) | 0.053313 / 0.037052 (0.016261) | 0.353127 / 0.258489 (0.094638) | 0.361398 / 0.293841 (0.067557) | 0.030604 / 0.128546 (-0.097942) | 0.008615 / 0.075646 (-0.067031) | 0.285806 / 0.419271 (-0.133466) | 0.050887 / 0.043533 (0.007354) | 0.312293 / 0.255139 (0.057154) | 0.349716 / 0.283200 (0.066516) | 0.024546 / 0.141683 (-0.117137) | 1.472318 / 1.452155 (0.020163) | 1.536063 / 1.492716 (0.043347) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280012 / 0.018006 (0.262006) | 0.593574 / 0.000490 (0.593085) | 0.004083 / 0.000200 (0.003883) | 0.000195 / 0.000054 (0.000141) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027715 / 0.037411 (-0.009696) | 0.081392 / 0.014526 (0.066866) | 0.096445 / 0.176557 (-0.080112) | 0.152131 / 0.737135 (-0.585004) | 0.094825 / 0.296338 (-0.201514) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.380749 / 0.215209 (0.165540) | 3.806994 / 2.077655 (1.729339) | 1.842544 / 1.504120 (0.338424) | 1.682829 / 1.541195 (0.141635) | 1.701679 / 1.468490 (0.233189) | 0.484830 / 4.584777 (-4.099947) | 3.517359 / 3.745712 (-0.228353) | 3.231211 / 5.269862 (-2.038651) | 2.029371 / 4.565676 (-2.536306) | 0.057199 / 0.424275 (-0.367077) | 0.007653 / 0.007607 (0.000046) | 0.458572 / 0.226044 (0.232528) | 4.579835 / 2.268929 (2.310907) | 2.326467 / 55.444624 (-53.118157) | 1.939646 / 6.876477 (-4.936831) | 2.133150 / 2.142072 (-0.008922) | 0.596251 / 4.805227 (-4.208976) | 0.131979 / 6.500664 (-6.368686) | 0.059226 / 0.075469 (-0.016243) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.234833 / 1.841788 (-0.606955) | 19.475522 / 8.074308 (11.401214) | 14.102760 / 10.191392 (3.911368) | 0.159657 / 0.680424 (-0.520767) | 0.018292 / 0.534201 (-0.515909) | 0.391079 / 0.579283 (-0.188204) | 0.406736 / 0.434364 (-0.027628) | 0.459159 / 0.540337 
(-0.081178) | 0.618159 / 1.386936 (-0.768777) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006592 / 0.011353 (-0.004761) | 0.004052 / 0.011008 (-0.006957) | 0.064536 / 0.038508 (0.026028) | 0.075051 / 0.023109 (0.051942) | 0.379596 / 0.275898 (0.103698) | 0.412413 / 0.323480 (0.088933) | 0.005377 / 0.007986 (-0.002608) | 0.003466 / 0.004328 (-0.000863) | 0.064958 / 0.004250 (0.060708) | 0.055265 / 0.037052 (0.018213) | 0.391505 / 0.258489 (0.133016) | 0.425345 / 0.293841 (0.131504) | 0.030750 / 0.128546 (-0.097796) | 0.008652 / 0.075646 (-0.066994) | 0.072107 / 0.419271 (-0.347165) | 0.048340 / 0.043533 (0.004807) | 0.387714 / 0.255139 (0.132575) | 0.402602 / 0.283200 (0.119402) | 0.023492 / 0.141683 (-0.118191) | 1.528377 / 1.452155 (0.076222) | 1.574827 / 1.492716 (0.082110) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.316999 / 0.018006 (0.298993) | 0.528391 / 0.000490 (0.527901) | 0.005183 / 0.000200 (0.004983) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029670 / 0.037411 (-0.007741) | 0.087130 / 0.014526 (0.072604) | 0.099897 / 0.176557 (-0.076660) | 0.154074 / 0.737135 (-0.583062) | 0.104309 / 0.296338 (-0.192030) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408804 / 0.215209 (0.193595) | 4.072248 / 2.077655 (1.994593) | 2.103333 / 1.504120 (0.599213) | 1.931972 / 1.541195 (0.390777) | 1.980132 
/ 1.468490 (0.511642) | 0.482623 / 4.584777 (-4.102154) | 3.532789 / 3.745712 (-0.212923) | 3.304962 / 5.269862 (-1.964899) | 2.036672 / 4.565676 (-2.529004) | 0.056944 / 0.424275 (-0.367331) | 0.007190 / 0.007607 (-0.000417) | 0.490650 / 0.226044 (0.264606) | 4.903604 / 2.268929 (2.634675) | 2.586247 / 55.444624 (-52.858377) | 2.227631 / 6.876477 (-4.648846) | 2.397286 / 2.142072 (0.255214) | 0.579167 / 4.805227 (-4.226060) | 0.132037 / 6.500664 (-6.368627) | 0.059971 / 0.075469 (-0.015498) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336430 / 1.841788 (-0.505358) | 19.915846 / 8.074308 (11.841538) | 14.102781 / 10.191392 (3.911389) | 0.147956 / 0.680424 (-0.532468) | 0.018192 / 0.534201 (-0.516009) | 0.397949 / 0.579283 (-0.181334) | 0.408529 / 0.434364 (-0.025835) | 0.479382 / 0.540337 (-0.060955) | 0.659735 / 1.386936 (-0.727201) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#98074122449bc031f7269f298f1c55f20e39b975 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005880 / 0.011353 (-0.005473) | 0.003677 / 0.011008 (-0.007332) | 0.080022 / 0.038508 (0.041514) | 0.055554 / 0.023109 (0.032445) | 0.397449 / 0.275898 (0.121551) | 0.428346 / 0.323480 (0.104867) | 0.004613 / 0.007986 (-0.003373) | 0.002873 / 0.004328 (-0.001455) | 0.062226 / 0.004250 (0.057976) | 0.044721 / 0.037052 (0.007669) | 0.404792 / 0.258489 (0.146303) | 0.437467 / 0.293841 (0.143626) | 0.027166 / 0.128546 (-0.101381) | 0.008077 / 0.075646 (-0.067569) | 0.260469 / 0.419271 (-0.158803) | 0.043551 / 0.043533 (0.000018) | 0.401712 / 0.255139 (0.146573) | 0.427294 / 0.283200 (0.144094) | 0.021243 / 0.141683 (-0.120440) | 1.464553 / 1.452155 (0.012398) | 1.507112 / 1.492716 (0.014396) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198415 / 0.018006 (0.180408) | 0.427940 / 0.000490 (0.427450) | 0.004236 / 
0.000200 (0.004036) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023759 / 0.037411 (-0.013652) | 0.073262 / 0.014526 (0.058736) | 0.677113 / 0.176557 (0.500557) | 0.194964 / 0.737135 (-0.542172) | 0.086121 / 0.296338 (-0.210217) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401176 / 0.215209 (0.185967) | 4.028688 / 2.077655 (1.951034) | 2.026804 / 1.504120 (0.522685) | 1.887964 / 1.541195 (0.346770) | 2.008991 / 1.468490 (0.540501) | 0.498847 / 4.584777 (-4.085930) | 3.015920 / 3.745712 (-0.729792) | 2.837019 / 5.269862 (-2.432843) | 1.849976 / 4.565676 (-2.715701) | 0.057545 / 0.424275 (-0.366730) | 0.006645 / 0.007607 (-0.000962) | 0.470225 / 0.226044 (0.244180) | 4.720910 / 2.268929 (2.451982) | 2.473693 / 55.444624 (-52.970931) | 2.177525 / 6.876477 (-4.698952) | 2.374702 / 2.142072 (0.232630) | 0.588253 / 4.805227 (-4.216974) | 0.125512 / 6.500664 (-6.375152) | 0.061247 / 0.075469 (-0.014222) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255829 / 1.841788 (-0.585959) | 18.251689 / 8.074308 (10.177381) | 13.690373 / 10.191392 (3.498981) | 0.146928 / 0.680424 (-0.533496) | 0.016534 / 0.534201 (-0.517667) | 0.335249 / 0.579283 (-0.244034) | 0.338940 / 0.434364 (-0.095424) | 0.382170 / 0.540337 (-0.158168) | 0.529570 / 1.386936 (-0.857366) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005920 / 0.011353 (-0.005433) | 0.003557 / 0.011008 (-0.007451) | 0.062776 / 0.038508 (0.024267) | 0.058473 / 0.023109 (0.035364) | 0.358780 / 0.275898 (0.082882) | 0.394161 / 0.323480 (0.070682) | 0.004636 / 0.007986 (-0.003349) | 0.002865 / 0.004328 (-0.001463) | 0.062033 / 0.004250 (0.057782) | 0.047154 / 0.037052 (0.010101) | 0.367718 / 0.258489 (0.109229) | 0.400814 / 0.293841 (0.106973) | 0.026919 / 0.128546 (-0.101628) | 0.008071 / 0.075646 (-0.067575) | 0.067802 / 0.419271 (-0.351469) | 0.040894 / 0.043533 (-0.002638) | 0.358757 / 0.255139 (0.103618) | 0.384971 / 0.283200 (0.101771) | 0.020019 / 0.141683 (-0.121664) | 1.458578 / 1.452155 (0.006423) | 1.525059 / 1.492716 (0.032342) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207795 / 0.018006 (0.189789) | 0.413201 / 0.000490 (0.412712) | 0.005199 / 0.000200 (0.004999) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025716 / 0.037411 (-0.011696) | 0.078434 / 0.014526 (0.063908) | 0.086920 / 0.176557 (-0.089637) | 0.138327 / 0.737135 (-0.598808) | 0.088120 / 0.296338 (-0.208219) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434344 / 0.215209 (0.219135) | 4.343114 / 2.077655 (2.265459) | 2.384439 / 1.504120 (0.880319) | 2.253929 / 1.541195 (0.712735) | 2.306811 / 1.468490 (0.838321) | 0.497572 / 4.584777 (-4.087205) | 3.028794 / 3.745712 (-0.716919) | 2.833484 / 5.269862 (-2.436377) | 1.878918 / 4.565676 (-2.686759) | 0.057133 / 0.424275 (-0.367143) | 0.006357 / 0.007607 (-0.001251) | 0.508019 / 0.226044 (0.281975) | 5.076935 / 2.268929 (2.808007) | 2.745784 / 55.444624 (-52.698841) | 2.476291 / 6.876477 (-4.400186) | 2.677264 / 2.142072 (0.535191) | 0.587173 / 4.805227 (-4.218054) | 0.126373 / 6.500664 (-6.374291) | 0.062815 / 0.075469 (-0.012654) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.355482 / 1.841788 (-0.486305) | 18.818227 / 8.074308 (10.743919) | 13.954289 / 10.191392 (3.762896) | 0.143413 / 0.680424 (-0.537011) | 0.016844 / 0.534201 (-0.517357) | 0.338334 / 0.579283 (-0.240949) | 0.344559 / 0.434364 (-0.089805) | 0.400669 / 0.540337 (-0.139669) | 0.563835 / 1.386936 (-0.823101) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c02a44715c036b5261686669727394b1308a3a4b \"CML watermark\")\n" ]
2023-08-10T10:09:54
2023-08-10T12:08:38
2023-08-10T12:00:02
MEMBER
null
This PR removes unused `allowed_extensions` parameter from `create_builder_configs_from_metadata_configs`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6135/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6135", "html_url": "https://github.com/huggingface/datasets/pull/6135", "diff_url": "https://github.com/huggingface/datasets/pull/6135.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6135.patch", "merged_at": "2023-08-10T12:00:01" }
true
https://api.github.com/repos/huggingface/datasets/issues/6134
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6134/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6134/comments
https://api.github.com/repos/huggingface/datasets/issues/6134/events
https://github.com/huggingface/datasets/issues/6134
1,844,535,142
I_kwDODunzps5t8V9m
6,134
`datasets` cannot be installed alongside `apache-beam`
{ "login": "boyleconnor", "id": 6520892, "node_id": "MDQ6VXNlcjY1MjA4OTI=", "avatar_url": "https://avatars.githubusercontent.com/u/6520892?v=4", "gravatar_id": "", "url": "https://api.github.com/users/boyleconnor", "html_url": "https://github.com/boyleconnor", "followers_url": "https://api.github.com/users/boyleconnor/followers", "following_url": "https://api.github.com/users/boyleconnor/following{/other_user}", "gists_url": "https://api.github.com/users/boyleconnor/gists{/gist_id}", "starred_url": "https://api.github.com/users/boyleconnor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/boyleconnor/subscriptions", "organizations_url": "https://api.github.com/users/boyleconnor/orgs", "repos_url": "https://api.github.com/users/boyleconnor/repos", "events_url": "https://api.github.com/users/boyleconnor/events{/privacy}", "received_events_url": "https://api.github.com/users/boyleconnor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I noticed that this is actually covered by issue #5613, which for some reason I didn't see when I searched the issues in this repo the first time." ]
2023-08-10T06:54:32
2023-08-10T15:22:22
2023-08-10T15:22:10
NONE
null
### Describe the bug If one installs `apache-beam` alongside `datasets` (which is required for the [wikipedia](https://huggingface.co./datasets/wikipedia#dataset-summary) dataset) in certain environments (such as a Google Colab notebook), they appear to install successfully; however, actually trying to do something such as importing the `load_dataset` method from `datasets` results in a crashing error. I think the problem is that `apache-beam` version 2.49.0 requires `dill>=0.3.1.1,<0.3.2`, but the latest version of `multiprocess` (0.70.15) (on which `datasets` depends) requires `dill>=0.3.7`, so this is causing the dependency resolver to use an older version of `multiprocess`, which leads to `datasets` crashing, since it doesn't actually appear to be compatible with older versions. ### Steps to reproduce the bug See this [Google Colab notebook](https://colab.research.google.com/drive/1PTeGlshamFcJZix_GiS3vMXX_YzAhGv0?usp=sharing) to easily reproduce the bug. In some environments, I have been able to reproduce the bug by running the following in Bash: ```bash $ pip install datasets apache-beam ``` then the following in a Python shell: ```python from datasets import load_dataset ``` Here is my stacktrace from running on Google Colab: <details> <summary>stacktrace</summary> ``` [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 20 __version__ = "2.14.4" 21 ---> 22 from .arrow_dataset import Dataset 23 from .arrow_reader import ReadInstruction 24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 64 65 from . import config ---> 66 from .arrow_reader import ArrowReader 67 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 68 from .data_files import sanitize_patterns [/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module> 28 import pyarrow.parquet as pq 29 ---> 30 from .download.download_config import DownloadConfig 31 from .naming import _split_re, filenames_for_dataset_split 32 from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables [/usr/local/lib/python3.10/dist-packages/datasets/download/__init__.py](https://localhost:8080/#) in <module> 7 8 from .download_config import DownloadConfig ----> 9 from .download_manager import DownloadManager, DownloadMode 10 from .streaming_download_manager import StreamingDownloadManager [/usr/local/lib/python3.10/dist-packages/datasets/download/download_manager.py](https://localhost:8080/#) in <module> 33 from ..utils.info_utils import get_size_checksum_dict 34 from ..utils.logging import get_logger, is_progress_bar_enabled, tqdm ---> 35 from ..utils.py_utils import NestedDataStructure, map_nested, size_str 36 from .download_config import DownloadConfig 37 [/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <module> 38 import dill 39 import multiprocess ---> 40 import multiprocess.pool 41 import numpy as np 42 from packaging import version [/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py](https://localhost:8080/#) in <module> 607 # 608 --> 609 class ThreadPool(Pool): 610 611 from .dummy import Process [/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py](https://localhost:8080/#) in ThreadPool() 609 class ThreadPool(Pool): 610 --> 611 from .dummy import Process 612 613 def 
__init__(self, processes=None, initializer=None, initargs=()): [/usr/local/lib/python3.10/dist-packages/multiprocess/dummy/__init__.py](https://localhost:8080/#) in <module> 85 # 86 ---> 87 class Condition(threading._Condition): 88 # XXX 89 if sys.version_info < (3, 0): AttributeError: module 'threading' has no attribute '_Condition' ``` </details> I've also found that attempting to install `datasets` and `apache-beam` in certain environments (e.g. via pip inside a conda env) simply causes pip to hang indefinitely. ### Expected behavior I would expect to be able to import methods from `datasets` without crashing. I have tested that this is possible as long as I do not attempt to install `apache-beam`. ### Environment info Google Colab
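A quick way to confirm the conflicting `dill` pins described in this report is to inspect the declared requirements of the installed packages. The following is a minimal diagnostic sketch (not part of the original issue); it assumes all three packages are installed and uses only the standard-library `importlib.metadata` API:

```python
from importlib.metadata import requires, version

# Print each package's installed version and any requirement mentioning
# "dill", to surface the apache-beam (dill<0.3.2) vs. multiprocess
# (dill>=0.3.7) conflict described above.
for pkg in ("apache-beam", "multiprocess", "datasets"):
    print(pkg, version(pkg))
    for req in requires(pkg) or []:
        if "dill" in req:
            print("  ->", req)
```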
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6134/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6134/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6133
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6133/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6133/comments
https://api.github.com/repos/huggingface/datasets/issues/6133/events
https://github.com/huggingface/datasets/issues/6133
1,844,511,519
I_kwDODunzps5t8QMf
6,133
Dataset is slower after calling `to_iterable_dataset`
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@lhoestq ", "It's roughly the same code between the two so we can expected roughly the same speed, could you share a benchmark ?" ]
2023-08-10T06:36:23
2023-08-16T09:18:54
null
CONTRIBUTOR
null
### Describe the bug Can anyone explain why looping over a dataset becomes slower after calling `to_iterable_dataset` to convert it to an `IterableDataset`? ### Steps to reproduce the bug Any dataset after converting to `IterableDataset` ### Expected behavior Maybe it should be faster on a big dataset? I have only tested on a small dataset. ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17 - Python version: 3.8.15 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
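Since the maintainer asked for a benchmark, here is a minimal timing sketch (not from the original report; the dataset size is an arbitrary assumption) that compares iterating over a map-style `Dataset` with the `IterableDataset` returned by `to_iterable_dataset`:

```python
import time

from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100_000))})

# Loop over the map-style dataset.
start = time.perf_counter()
for _ in ds:
    pass
print(f"Dataset:         {time.perf_counter() - start:.3f}s")

# Convert to an IterableDataset and loop again.
ids = ds.to_iterable_dataset()
start = time.perf_counter()
for _ in ids:
    pass
print(f"IterableDataset: {time.perf_counter() - start:.3f}s")
```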
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6133/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6133/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6132
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6132/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6132/comments
https://api.github.com/repos/huggingface/datasets/issues/6132/events
https://github.com/huggingface/datasets/issues/6132
1,843,491,020
I_kwDODunzps5t4XDM
6,132
to_iterable_dataset is missing in the documentation
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Fixed with PR" ]
2023-08-09T15:15:03
2023-08-16T04:43:36
2023-08-16T04:43:29
CONTRIBUTOR
null
### Describe the bug `to_iterable_dataset` is missing from the documentation. ### Steps to reproduce the bug Search the documentation for `to_iterable_dataset`; the method is not documented. ### Expected behavior The documentation should cover `to_iterable_dataset`. ### Environment info unrelated
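For reference, this is the kind of minimal usage example the documentation was missing; the column values and shard count are illustrative assumptions:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

# Convert the map-style dataset into an IterableDataset.
# num_shards controls how the data is sharded for parallel loading.
ids = ds.to_iterable_dataset(num_shards=1)

for example in ids:
    print(example)  # {'text': 'a'}, then {'text': 'b'}, ...
```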
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6132/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6132/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6130
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6130/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6130/comments
https://api.github.com/repos/huggingface/datasets/issues/6130/events
https://github.com/huggingface/datasets/issues/6130
1,843,158,846
I_kwDODunzps5t3F8-
6,130
default config name doesn't work when config kwargs are specified.
{ "login": "npuichigo", "id": 11533479, "node_id": "MDQ6VXNlcjExNTMzNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/npuichigo", "html_url": "https://github.com/npuichigo", "followers_url": "https://api.github.com/users/npuichigo/followers", "following_url": "https://api.github.com/users/npuichigo/following{/other_user}", "gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}", "starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions", "organizations_url": "https://api.github.com/users/npuichigo/orgs", "repos_url": "https://api.github.com/users/npuichigo/repos", "events_url": "https://api.github.com/users/npuichigo/events{/privacy}", "received_events_url": "https://api.github.com/users/npuichigo/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "@lhoestq ", "What should be the behavior in this case ? Should it override the default config with the added parameter ?", "I know why it should be treated as a new config if overriding parameters are passed. But in some case, I just pass in some common fields like `data_dir`.\r\n\r\nFor example, I want to extend the FolderBasedBuilder as a multi-config version, the `data_dir` or `data_files` are always passed by user and should not be considered as overriding the default config. In current state, I cannot leverage the feature of default config since passing `data_dir` will disable the default config.", "Thinking more about it I think the current behavior is the right one.\r\n\r\nProvided parameters should be passed to instantiate a new BuilderConfig.\r\n\r\nWhat's the error you're getting ?", "For example, this works to use default config with name '_all_':\r\n```python\r\ndatasets.load_dataset(\"indonesian-nlp/librivox-indonesia\", split=\"train\")\r\n```\r\nwhile this failed to use default config\r\n```python\r\ndatasets.load_dataset(\"indonesian-nlp/librivox-indonesia\", split=\"train\", data_dir='.')\r\n```\r\nAfter manually specifying it, it works again.\r\n```python\r\ndatasets.load_dataset(\"indonesian-nlp/librivox-indonesia\", \"_all_\", split=\"train\", data_dir='.')\r\n```", "@lhoestq ", "It should work if you explicitly ask for the config you want to override\r\n\r\n```python\r\nload_dataset('/dataset/with/multiple/config', 'name_of_the_default_config', some_field_in_config='some')\r\n```\r\n\r\nAlternatively you can have a BuilderConfig class that when instantiated returns a config with the right default values. In this case this code would instantiate this config with the default values except for the parameter to override:\r\n\r\n```python\r\nload_dataset('/dataset/with/multiple/config', some_field_in_config='some')\r\n```", "@lhoestq Yes. But it doesn't work for me.\r\n\r\nHere's my dataset for example.\r\n```\r\nlass MyDatasetConfig(datasets.BuilderConfig):\r\n def __init__(self, name: str, version: str, **kwargs):\r\n self.option1 = kwargs.pop(\"option1\", False)\r\n self.option2 = kwargs.pop(\"option2\", 5)\r\n\r\n super().__init__(\r\n name=name,\r\n version=datasets.Version(version),\r\n **kwargs)\r\n\r\n\r\nclass MyDataset(datasets.GeneratorBasedBuilder):\r\n DEFAULT_CONFIG_NAME = \"v1\"\r\n\r\n BUILDER_CONFIGS = [\r\n UnifiedTtsDatasetConfig(\r\n name=\"v1\",\r\n version=\"1.0.0\",\r\n description=\"Initial version of the dataset\"\r\n ),\r\n ]\r\n\r\n def _info(self) -> DatasetInfo:\r\n _ = self.option1\r\n ....\r\n```\r\n\r\nHere it's okay to use `load_dataset('my_dataset.py')` for loading the default config `v1`.\r\n\r\nBut if I want to override the default values in config with `load_dataset('my_dataset.py', option2=3)`, it failed to find my default config `v1.\r\n\r\nUnless I use `load_dataset('my_dataset.py', 'v1', option2=3)`\r\n\r\nSo according to your advice, how can I modify my dataset to be able to override default config without manually specifying it.", "What's the error ? 
It should try to instantiate `MyDatasetConfig` with `option2=3`", "@lhoestq The error is\r\n```\r\ndef _info(self) -> DatasetInfo:\r\n _ = self.option1 <-\r\n ....\r\nAttributeError: 'BuilderConfig' object has no attribute 'option1'\r\n```\r\nwhich suggests that a different, unknown config was instantiated.\r\n\r\nYou can try this line `datasets.load_dataset(\"indonesian-nlp/librivox-indonesia\", split=\"train\", data_dir='.')`; it's a multi-config dataset on the HF hub and the error is the same.\r\n\r\nMy insights:\r\nhttps://github.com/huggingface/datasets/blob/12cfc1196e62847e2e8239fbd727a02cbc86ddec/src/datasets/builder.py#L518\r\nif `config_kwargs` is provided here, the if branch is skipped.", "I see, you just have to set this class attribute to your builder class :)\r\n\r\n```python\r\nBUILDER_CONFIG_CLASS = MyDatasetConfig\r\n```", "So what does this attribute do? In most cases it's not used, and the [documentation for multi-config datasets](https://huggingface.co./docs/datasets/main/en/image_dataset#multiple-configurations) never mentions it.", "It tells which builder config class to instantiate if additional config parameters are passed to load_dataset", "@lhoestq maybe we can enhance the documentation to say something about the common attributes of `DatasetBuilder`", "Ah indeed it's missing in the docs, thanks for reporting. I'm opening a PR" ]
2023-08-09T12:43:15
2023-08-22T10:03:41
null
CONTRIBUTOR
null
### Describe the bug https://github.com/huggingface/datasets/blob/12cfc1196e62847e2e8239fbd727a02cbc86ddec/src/datasets/builder.py#L518-L522 If `config_name` is `None`, `DEFAULT_CONFIG_NAME` should be selected. But once users pass `config_kwargs` to their customized `BuilderConfig`, this logic is ignored, and the dataset cannot select the default config from among multiple configs. ### Steps to reproduce the bug ```python import datasets datasets.load_dataset('/dataset/with/multiple/config') # Ok datasets.load_dataset('/dataset/with/multiple/config', some_field_in_config='some') # Err ``` ### Expected behavior Default config behavior should be consistent. ### Environment info - `datasets` version: 2.14.3 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17 - Python version: 3.8.15 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
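Based on the `BUILDER_CONFIG_CLASS` resolution reached in the comment thread above, here is a minimal sketch of a loading script that keeps its default config while still accepting overriding `config_kwargs`; the class names and options follow the illustrative example from the thread, not a real dataset:

```python
import datasets


class MyDatasetConfig(datasets.BuilderConfig):
    def __init__(self, option1: bool = False, option2: int = 5, **kwargs):
        super().__init__(**kwargs)
        self.option1 = option1
        self.option2 = option2


class MyDataset(datasets.GeneratorBasedBuilder):
    # Without this attribute, load_dataset("my_dataset.py", option2=3)
    # instantiates the base BuilderConfig and loses option1/option2.
    BUILDER_CONFIG_CLASS = MyDatasetConfig

    DEFAULT_CONFIG_NAME = "v1"
    BUILDER_CONFIGS = [
        MyDatasetConfig(name="v1", version=datasets.Version("1.0.0")),
    ]

    def _info(self):
        return datasets.DatasetInfo(description=f"option1={self.config.option1}")

    def _split_generators(self, dl_manager):
        return []

    def _generate_examples(self):
        yield from ()
```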
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6130/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6130/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6129/comments
https://api.github.com/repos/huggingface/datasets/issues/6129/events
https://github.com/huggingface/datasets/pull/6129
1,841,563,517
PR_kwDODunzps5Xcmuw
6,129
Release 2.14.4
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006053 / 0.011353 (-0.005299) | 0.003532 / 0.011008 (-0.007476) | 0.081930 / 0.038508 (0.043422) | 0.059043 / 0.023109 (0.035934) | 0.322785 / 0.275898 (0.046887) | 0.378158 / 0.323480 (0.054678) | 0.004709 / 0.007986 (-0.003277) | 0.002907 / 0.004328 (-0.001421) | 0.061516 / 0.004250 (0.057266) | 0.047209 / 0.037052 (0.010157) | 0.346885 / 0.258489 (0.088396) | 0.381011 / 0.293841 (0.087170) | 0.027491 / 0.128546 (-0.101055) | 0.008014 / 0.075646 (-0.067632) | 0.260663 / 0.419271 (-0.158608) | 0.045427 / 0.043533 (0.001894) | 0.315277 / 0.255139 (0.060138) | 0.377902 / 0.283200 (0.094703) | 0.021371 / 0.141683 (-0.120311) | 1.416350 / 1.452155 (-0.035804) | 1.483345 / 1.492716 (-0.009372) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203660 / 0.018006 (0.185654) | 0.569081 / 0.000490 (0.568591) | 0.002742 / 0.000200 (0.002542) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023456 / 0.037411 (-0.013955) | 0.073954 / 0.014526 (0.059428) | 0.082991 / 0.176557 (-0.093566) | 0.144781 / 0.737135 (-0.592354) | 0.083346 / 0.296338 (-0.212992) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.391542 / 0.215209 (0.176333) | 3.909505 / 2.077655 (1.831850) | 
1.862234 / 1.504120 (0.358114) | 1.676076 / 1.541195 (0.134881) | 1.727595 / 1.468490 (0.259105) | 0.501769 / 4.584777 (-4.083008) | 3.083697 / 3.745712 (-0.662016) | 2.819751 / 5.269862 (-2.450111) | 1.867265 / 4.565676 (-2.698411) | 0.057575 / 0.424275 (-0.366700) | 0.006478 / 0.007607 (-0.001129) | 0.466684 / 0.226044 (0.240640) | 4.657982 / 2.268929 (2.389054) | 2.347052 / 55.444624 (-53.097573) | 1.964688 / 6.876477 (-4.911789) | 2.077821 / 2.142072 (-0.064252) | 0.590591 / 4.805227 (-4.214636) | 0.124585 / 6.500664 (-6.376079) | 0.059468 / 0.075469 (-0.016001) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223484 / 1.841788 (-0.618304) | 18.104638 / 8.074308 (10.030330) | 13.755126 / 10.191392 (3.563734) | 0.143158 / 0.680424 (-0.537266) | 0.017147 / 0.534201 (-0.517054) | 0.337427 / 0.579283 (-0.241856) | 0.352270 / 0.434364 (-0.082094) | 0.383718 / 0.540337 (-0.156619) | 0.534973 / 1.386936 (-0.851963) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006039 / 0.011353 (-0.005314) | 0.003735 / 0.011008 (-0.007274) | 0.061954 / 0.038508 (0.023446) | 0.061786 / 0.023109 (0.038677) | 0.429420 / 0.275898 (0.153522) | 0.457629 / 0.323480 (0.134149) | 0.004748 / 0.007986 (-0.003237) | 0.002843 / 0.004328 (-0.001485) | 0.061811 / 0.004250 (0.057560) | 0.048740 / 0.037052 (0.011687) | 0.430066 / 0.258489 (0.171577) | 0.465971 / 0.293841 (0.172130) | 0.027577 / 0.128546 (-0.100969) | 0.007981 / 0.075646 (-0.067665) | 0.067580 / 0.419271 (-0.351692) | 0.042058 / 0.043533 (-0.001475) | 0.428412 / 0.255139 (0.173273) | 0.451054 / 0.283200 (0.167855) | 0.020850 / 0.141683 (-0.120833) | 1.453907 / 1.452155 (0.001752) | 1.509914 / 1.492716 (0.017197) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237713 / 0.018006 (0.219707) | 0.418064 / 0.000490 (0.417575) | 0.006411 / 0.000200 (0.006211) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024950 / 0.037411 (-0.012462) | 0.076806 / 0.014526 (0.062281) | 0.085237 / 0.176557 (-0.091320) | 0.137940 / 0.737135 (-0.599196) | 0.086266 / 0.296338 (-0.210072) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418666 / 0.215209 (0.203457) | 4.160547 / 2.077655 (2.082893) | 2.135671 / 1.504120 (0.631551) | 1.964985 / 1.541195 (0.423790) | 2.009447 / 1.468490 (0.540957) | 0.501377 / 4.584777 (-4.083400) | 3.064293 / 3.745712 (-0.681419) | 2.827153 / 5.269862 (-2.442709) | 1.854698 / 4.565676 (-2.710978) | 0.057662 / 0.424275 (-0.366613) | 0.006829 / 0.007607 (-0.000778) | 0.496730 / 0.226044 (0.270686) | 4.964663 / 2.268929 (2.695735) | 2.583133 / 55.444624 (-52.861491) | 2.329700 / 6.876477 (-4.546776) | 2.415521 / 2.142072 (0.273449) | 0.591973 / 4.805227 (-4.213255) | 0.126801 / 6.500664 (-6.373863) | 0.062811 / 0.075469 (-0.012659) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.348575 / 1.841788 (-0.493212) | 18.282861 / 8.074308 (10.208553) | 13.734056 / 10.191392 (3.542664) | 0.154987 / 0.680424 (-0.525437) | 0.016996 / 0.534201 (-0.517205) | 0.335264 / 0.579283 (-0.244019) | 0.356907 / 0.434364 (-0.077456) | 0.399185 / 0.540337 (-0.141152) | 0.540209 / 1.386936 (-0.846727) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#887bef1217e0f4441d57bf0f4d1e806df12f2c50 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006768 / 0.011353 (-0.004585) | 0.004250 / 0.011008 (-0.006758) | 0.086780 / 0.038508 (0.048272) | 0.080872 / 0.023109 (0.057762) | 0.309281 / 0.275898 (0.033383) | 0.352293 / 0.323480 (0.028814) | 0.005604 / 0.007986 (-0.002382) | 0.003544 / 0.004328 (-0.000784) | 0.066910 / 0.004250 (0.062659) | 0.055568 / 0.037052 (0.018516) | 0.314931 / 0.258489 (0.056442) | 0.366026 / 0.293841 (0.072185) | 0.031247 / 0.128546 (-0.097300) | 0.008860 / 0.075646 (-0.066786) | 0.293210 / 0.419271 (-0.126061) | 0.052868 / 0.043533 (0.009335) | 0.316769 / 0.255139 (0.061630) | 0.352128 / 0.283200 (0.068929) | 0.025492 / 0.141683 (-0.116190) | 1.478379 / 1.452155 (0.026224) | 1.573652 / 1.492716 (0.080936) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.294975 / 0.018006 (0.276968) | 0.615093 / 0.000490 (0.614603) | 0.004279 / 0.000200 (0.004079) | 0.000102 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031557 / 0.037411 (-0.005855) | 0.085026 / 0.014526 (0.070500) | 0.101221 / 0.176557 (-0.075336) | 0.157432 / 0.737135 (-0.579703) | 0.102350 / 0.296338 (-0.193988) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384158 / 0.215209 (0.168949) | 3.826656 / 2.077655 (1.749001) | 1.873510 / 1.504120 (0.369390) | 1.721913 / 1.541195 (0.180718) | 1.848779 / 1.468490 (0.380289) | 0.485128 / 4.584777 (-4.099649) | 3.656660 / 3.745712 (-0.089052) | 3.441964 / 5.269862 (-1.827898) | 2.150611 / 4.565676 (-2.415066) | 0.056869 / 0.424275 (-0.367406) | 0.007382 / 0.007607 (-0.000225) | 0.458751 / 0.226044 (0.232707) | 4.585028 / 2.268929 (2.316099) | 2.439538 / 55.444624 (-53.005086) | 2.116959 / 6.876477 (-4.759518) | 2.459220 / 2.142072 (0.317147) | 0.580907 / 4.805227 (-4.224321) | 0.134502 / 6.500664 (-6.366162) | 0.062528 / 0.075469 (-0.012941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251006 / 1.841788 (-0.590782) | 20.755849 / 8.074308 (12.681541) | 14.456950 / 10.191392 (4.265558) | 0.167074 / 0.680424 (-0.513350) | 0.018482 / 0.534201 (-0.515719) | 0.395867 / 0.579283 (-0.183416) | 0.415620 / 0.434364 (-0.018744) | 0.462247 / 0.540337 
(-0.078090) | 0.645762 / 1.386936 (-0.741174) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007050 / 0.011353 (-0.004303) | 0.004421 / 0.011008 (-0.006587) | 0.065312 / 0.038508 (0.026804) | 0.089790 / 0.023109 (0.066681) | 0.366318 / 0.275898 (0.090420) | 0.403542 / 0.323480 (0.080062) | 0.005695 / 0.007986 (-0.002290) | 0.003642 / 0.004328 (-0.000687) | 0.064540 / 0.004250 (0.060289) | 0.060933 / 0.037052 (0.023881) | 0.369004 / 0.258489 (0.110515) | 0.408056 / 0.293841 (0.114215) | 0.032124 / 0.128546 (-0.096422) | 0.008960 / 0.075646 (-0.066686) | 0.071267 / 0.419271 (-0.348005) | 0.049745 / 0.043533 (0.006212) | 0.367203 / 0.255139 (0.112064) | 0.383009 / 0.283200 (0.099809) | 0.025330 / 0.141683 (-0.116353) | 1.518290 / 1.452155 (0.066135) | 1.581738 / 1.492716 (0.089022) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.338281 / 0.018006 (0.320275) | 0.538195 / 0.000490 (0.537706) | 0.008498 / 0.000200 (0.008298) | 0.000121 / 0.000054 (0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033279 / 0.037411 (-0.004133) | 0.093233 / 0.014526 (0.078707) | 0.106019 / 0.176557 (-0.070538) | 0.161262 / 0.737135 (-0.575874) | 0.109935 / 0.296338 (-0.186404) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411563 / 0.215209 (0.196354) | 4.102149 / 2.077655 (2.024495) | 2.108513 / 1.504120 (0.604393) | 1.945344 / 1.541195 (0.404150) | 2.066964 
/ 1.468490 (0.598474) | 0.482771 / 4.584777 (-4.102006) | 3.659160 / 3.745712 (-0.086552) | 3.420833 / 5.269862 (-1.849029) | 2.147276 / 4.565676 (-2.418400) | 0.056957 / 0.424275 (-0.367318) | 0.007898 / 0.007607 (0.000290) | 0.482401 / 0.226044 (0.256357) | 4.821044 / 2.268929 (2.552115) | 2.567993 / 55.444624 (-52.876631) | 2.336165 / 6.876477 (-4.540312) | 2.545066 / 2.142072 (0.402994) | 0.580888 / 4.805227 (-4.224339) | 0.134092 / 6.500664 (-6.366572) | 0.062681 / 0.075469 (-0.012788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.379124 / 1.841788 (-0.462664) | 21.627949 / 8.074308 (13.553641) | 15.064818 / 10.191392 (4.873426) | 0.169707 / 0.680424 (-0.510716) | 0.018671 / 0.534201 (-0.515530) | 0.400496 / 0.579283 (-0.178787) | 0.415542 / 0.434364 (-0.018822) | 0.484351 / 0.540337 (-0.055986) | 0.646046 / 1.386936 (-0.740890) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007113 / 0.011353 (-0.004240) | 0.004436 / 0.011008 (-0.006572) | 0.087422 / 0.038508 (0.048914) | 0.085996 / 0.023109 (0.062887) | 0.311772 / 0.275898 (0.035873) | 0.353281 / 0.323480 (0.029801) | 0.004562 / 0.007986 (-0.003423) | 0.003840 / 0.004328 (-0.000488) | 0.066500 / 0.004250 (0.062250) | 0.061293 / 0.037052 (0.024241) | 0.328840 / 0.258489 (0.070351) | 0.365587 / 0.293841 (0.071746) | 0.031802 / 0.128546 (-0.096744) | 0.008881 / 0.075646 (-0.066765) | 0.289671 / 0.419271 (-0.129601) | 0.053348 / 0.043533 (0.009816) | 0.307822 / 0.255139 (0.052683) | 0.342559 / 0.283200 (0.059360) | 0.025760 / 0.141683 (-0.115923) | 1.509944 / 1.452155 (0.057789) | 1.556634 / 1.492716 (0.063918) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282036 / 0.018006 (0.264029) | 0.608350 / 0.000490 (0.607860) | 0.004843 / 
0.000200 (0.004643) | 0.000108 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029810 / 0.037411 (-0.007601) | 0.086215 / 0.014526 (0.071689) | 0.102200 / 0.176557 (-0.074356) | 0.158051 / 0.737135 (-0.579084) | 0.103083 / 0.296338 (-0.193255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392119 / 0.215209 (0.176910) | 3.895796 / 2.077655 (1.818141) | 1.921118 / 1.504120 (0.416998) | 1.754271 / 1.541195 (0.213076) | 1.880991 / 1.468490 (0.412501) | 0.481158 / 4.584777 (-4.103618) | 3.609210 / 3.745712 (-0.136502) | 3.412018 / 5.269862 (-1.857843) | 2.131710 / 4.565676 (-2.433967) | 0.057122 / 0.424275 (-0.367153) | 0.007444 / 0.007607 (-0.000163) | 0.468880 / 0.226044 (0.242835) | 4.682441 / 2.268929 (2.413512) | 2.505613 / 55.444624 (-52.939012) | 2.149655 / 6.876477 (-4.726822) | 2.465904 / 2.142072 (0.323832) | 0.578877 / 4.805227 (-4.226350) | 0.133504 / 6.500664 (-6.367160) | 0.061422 / 0.075469 (-0.014047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.269395 / 1.841788 (-0.572393) | 21.107558 / 8.074308 (13.033250) | 15.318502 / 10.191392 (5.127110) | 0.165273 / 0.680424 (-0.515151) | 0.018783 / 0.534201 (-0.515418) | 0.396259 / 0.579283 (-0.183024) | 0.412907 / 0.434364 (-0.021457) | 0.465723 / 0.540337 (-0.074615) | 0.638414 / 1.386936 (-0.748522) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007083 / 0.011353 (-0.004270) | 0.004216 / 0.011008 (-0.006793) | 0.065362 / 0.038508 (0.026854) | 0.095454 / 0.023109 (0.072345) | 0.364220 / 0.275898 (0.088322) | 0.417650 / 0.323480 (0.094170) | 0.006114 / 0.007986 (-0.001872) | 0.003577 / 0.004328 (-0.000751) | 0.064830 / 0.004250 (0.060579) | 0.062535 / 0.037052 (0.025483) | 0.381844 / 0.258489 (0.123355) | 0.418996 / 0.293841 (0.125155) | 0.031386 / 0.128546 (-0.097160) | 0.008913 / 0.075646 (-0.066733) | 0.070860 / 0.419271 (-0.348411) | 0.049132 / 0.043533 (0.005599) | 0.360406 / 0.255139 (0.105267) | 0.392407 / 0.283200 (0.109207) | 0.024611 / 0.141683 (-0.117072) | 1.509051 / 1.452155 (0.056896) | 1.570288 / 1.492716 (0.077572) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.368611 / 0.018006 (0.350605) | 0.537587 / 0.000490 (0.537098) | 0.028056 / 0.000200 (0.027856) | 0.000317 / 0.000054 (0.000262) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031570 / 0.037411 (-0.005841) | 0.088985 / 0.014526 (0.074460) | 0.105268 / 0.176557 (-0.071288) | 0.156724 / 0.737135 (-0.580412) | 0.105266 / 0.296338 (-0.191073) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413861 / 0.215209 (0.198652) | 4.127001 / 2.077655 (2.049347) | 2.112114 / 1.504120 (0.607994) | 1.945200 / 1.541195 (0.404005) | 2.083031 / 1.468490 (0.614540) | 0.488086 / 4.584777 (-4.096691) | 3.565584 / 3.745712 (-0.180128) | 3.380782 / 5.269862 (-1.889079) | 2.103481 / 4.565676 (-2.462195) | 0.058203 / 0.424275 (-0.366072) | 0.007996 / 0.007607 (0.000389) | 0.487986 / 0.226044 (0.261941) | 4.871023 / 2.268929 (2.602095) | 2.584632 / 55.444624 (-52.859992) | 2.240103 / 6.876477 (-4.636374) | 2.555165 / 2.142072 (0.413092) | 0.591950 / 4.805227 (-4.213278) | 0.134919 / 6.500664 (-6.365745) | 0.062868 / 0.075469 (-0.012601) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369731 / 1.841788 (-0.472057) | 21.497888 / 8.074308 (13.423580) | 14.555054 / 10.191392 (4.363662) | 0.168768 / 0.680424 (-0.511656) | 0.018837 / 0.534201 (-0.515364) | 0.394512 / 0.579283 (-0.184771) | 0.405459 / 0.434364 (-0.028905) | 0.475479 / 0.540337 (-0.064858) | 0.631994 / 1.386936 (-0.754942) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009072 / 0.011353 (-0.002280) | 0.004894 / 0.011008 (-0.006114) | 0.108790 / 0.038508 (0.070282) | 0.081783 / 0.023109 (0.058674) | 0.381963 / 0.275898 (0.106064) | 0.450700 / 0.323480 (0.127220) | 0.006961 / 0.007986 (-0.001025) | 0.004035 / 0.004328 (-0.000293) | 0.081420 / 0.004250 (0.077169) | 0.058029 / 0.037052 (0.020976) | 0.437453 / 0.258489 (0.178964) | 0.472607 / 0.293841 (0.178766) | 0.048663 / 0.128546 (-0.079884) | 0.013512 / 0.075646 (-0.062134) | 0.406009 / 0.419271 (-0.013262) | 0.067616 / 0.043533 (0.024084) | 0.383641 / 0.255139 (0.128502) | 0.456734 / 0.283200 (0.173534) | 0.033391 / 0.141683 (-0.108292) | 1.753529 / 1.452155 (0.301375) | 1.859831 / 1.492716 (0.367115) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215128 / 0.018006 (0.197122) | 0.538261 / 0.000490 (0.537771) | 0.005430 / 0.000200 (0.005230) | 0.000124 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032664 / 0.037411 (-0.004748) | 0.093465 / 0.014526 (0.078939) | 0.106637 / 0.176557 (-0.069919) | 0.173642 / 0.737135 (-0.563494) | 0.113944 / 0.296338 (-0.182394) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.629212 / 0.215209 
(0.414003) | 6.116729 / 2.077655 (4.039075) | 2.818000 / 1.504120 (1.313880) | 2.515317 / 1.541195 (0.974122) | 2.466588 / 1.468490 (0.998098) | 0.850815 / 4.584777 (-3.733962) | 5.051292 / 3.745712 (1.305579) | 4.472138 / 5.269862 (-0.797724) | 2.968317 / 4.565676 (-1.597360) | 0.100173 / 0.424275 (-0.324102) | 0.008407 / 0.007607 (0.000800) | 0.743972 / 0.226044 (0.517928) | 7.397619 / 2.268929 (5.128690) | 3.596681 / 55.444624 (-51.847943) | 2.854674 / 6.876477 (-4.021803) | 3.114274 / 2.142072 (0.972201) | 1.064879 / 4.805227 (-3.740348) | 0.215981 / 6.500664 (-6.284683) | 0.078159 / 0.075469 (0.002690) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.543291 / 1.841788 (-0.298497) | 23.244641 / 8.074308 (15.170333) | 20.784610 / 10.191392 (10.593218) | 0.222002 / 0.680424 (-0.458422) | 0.028584 / 0.534201 (-0.505617) | 0.478563 / 0.579283 (-0.100720) | 0.556101 / 0.434364 (0.121737) | 0.547446 / 0.540337 (0.007109) | 0.764318 / 1.386936 (-0.622618) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008651 / 0.011353 (-0.002702) | 0.004925 / 0.011008 (-0.006083) | 0.078995 / 0.038508 (0.040487) | 0.092878 / 0.023109 (0.069769) | 0.485615 / 0.275898 (0.209717) | 0.532157 / 0.323480 (0.208677) | 0.008228 / 0.007986 (0.000243) | 0.004777 / 0.004328 (0.000449) | 0.076892 / 0.004250 (0.072642) | 0.066905 / 0.037052 (0.029853) | 0.465497 / 0.258489 (0.207008) | 0.520153 / 0.293841 (0.226312) | 0.047357 / 0.128546 (-0.081189) | 0.016870 / 0.075646 (-0.058776) | 0.090481 / 0.419271 (-0.328791) | 0.060774 / 0.043533 (0.017241) | 0.474368 / 0.255139 (0.219229) | 0.503981 / 0.283200 (0.220781) | 0.036025 / 0.141683 (-0.105658) | 1.769939 / 1.452155 (0.317784) | 1.851518 / 1.492716 (0.358802) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265947 / 0.018006 (0.247941) | 0.532317 / 0.000490 (0.531828) | 0.004997 / 0.000200 (0.004797) | 0.000130 / 0.000054 
(0.000076) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034112 / 0.037411 (-0.003299) | 0.102290 / 0.014526 (0.087764) | 0.109989 / 0.176557 (-0.066567) | 0.182813 / 0.737135 (-0.554323) | 0.111774 / 0.296338 (-0.184565) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.584893 / 0.215209 (0.369684) | 6.138505 / 2.077655 (4.060850) | 2.925761 / 1.504120 (1.421641) | 2.607320 / 1.541195 (1.066125) | 2.655827 / 1.468490 (1.187337) | 0.871140 / 4.584777 (-3.713637) | 5.051171 / 3.745712 (1.305459) | 4.708008 / 5.269862 (-0.561854) | 3.027485 / 4.565676 (-1.538191) | 0.100970 / 0.424275 (-0.323305) | 0.009640 / 0.007607 (0.002033) | 0.747818 / 0.226044 (0.521774) | 7.539930 / 2.268929 (5.271001) | 3.611693 / 55.444624 (-51.832931) | 2.924087 / 6.876477 (-3.952390) | 3.141993 / 2.142072 (0.999920) | 1.062921 / 4.805227 (-3.742306) | 0.213185 / 6.500664 (-6.287479) | 0.077146 / 0.075469 (0.001677) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.669182 / 1.841788 (-0.172606) | 23.810242 / 8.074308 (15.735934) | 21.220649 / 10.191392 (11.029257) | 0.212639 / 0.680424 (-0.467785) | 0.026705 / 0.534201 (-0.507496) | 0.469231 / 0.579283 (-0.110053) | 0.551672 / 0.434364 (0.117308) | 0.575043 / 0.540337 (0.034706) | 0.767511 / 1.386936 (-0.619425) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53d55f33bfac9febb0c355e136f2847e5f3e3b53 \"CML watermark\")\n" ]
2023-08-08T15:43:56
2023-08-08T16:08:22
2023-08-08T15:49:06
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6129/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6129", "html_url": "https://github.com/huggingface/datasets/pull/6129", "diff_url": "https://github.com/huggingface/datasets/pull/6129.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6129.patch", "merged_at": "2023-08-08T15:49:06" }
true
https://api.github.com/repos/huggingface/datasets/issues/6128
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6128/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6128/comments
https://api.github.com/repos/huggingface/datasets/issues/6128/events
https://github.com/huggingface/datasets/issues/6128
1,841,545,493
I_kwDODunzps5tw8EV
6,128
IndexError: Invalid key: 88 is out of bounds for size 0
{ "login": "TomasAndersonFang", "id": 38727343, "node_id": "MDQ6VXNlcjM4NzI3MzQz", "avatar_url": "https://avatars.githubusercontent.com/u/38727343?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TomasAndersonFang", "html_url": "https://github.com/TomasAndersonFang", "followers_url": "https://api.github.com/users/TomasAndersonFang/followers", "following_url": "https://api.github.com/users/TomasAndersonFang/following{/other_user}", "gists_url": "https://api.github.com/users/TomasAndersonFang/gists{/gist_id}", "starred_url": "https://api.github.com/users/TomasAndersonFang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TomasAndersonFang/subscriptions", "organizations_url": "https://api.github.com/users/TomasAndersonFang/orgs", "repos_url": "https://api.github.com/users/TomasAndersonFang/repos", "events_url": "https://api.github.com/users/TomasAndersonFang/events{/privacy}", "received_events_url": "https://api.github.com/users/TomasAndersonFang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @TomasAndersonFang,\r\n\r\nHave you tried instead to use `torch_compile` in `transformers.TrainingArguments`? https://huggingface.co./docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.torch_compile", "> \r\n\r\nI tried this and got the following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 324, in _compile\r\n out_code = transform_code_object(code, transform)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py\", line 445, in transform_code_object\r\n transformations(instructions, code_options)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 311, in transform\r\n tracer.run()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 1726, in run\r\n super().run()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 576, in run\r\n and self.step()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 540, in step\r\n getattr(self, inst.opname)(inst)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py\", line 1030, in LOAD_ATTR\r\n result = BuiltinVariable(getattr).call_function(\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py\", line 566, in call_function\r\n result = handler(tx, *args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py\", line 931, in call_getattr\r\n return obj.var_getattr(tx, name).add_options(options)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/variables/nn_module.py\", line 124, in var_getattr\r\n subobj = inspect.getattr_static(base, name)\r\n File \"/apps/Arch/software/Python/3.10.8-GCCcore-12.2.0/lib/python3.10/inspect.py\", line 1777, in getattr_static\r\n raise AttributeError(attr)\r\nAttributeError: config\r\n\r\nfrom user code:\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/peft/peft_model.py\", line 909, in forward\r\n if self.base_model.config.model_type == \"mpt\":\r\n\r\nSet torch._dynamo.config.verbose=True for more information\r\n\r\n\r\nYou can suppress this exception and fall back to eager by setting:\r\n torch._dynamo.config.suppress_errors = True\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/llm-copt/fine-tune/falcon/falcon_sft.py\", line 228, in <module>\r\n main()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/llm-copt/fine-tune/falcon/falcon_sft.py\", line 221, in main\r\n trainer.train()\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 1539, in train\r\n return inner_training_loop(\r\n File 
\"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 1809, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 2654, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/transformers/trainer.py\", line 2679, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 82, in forward\r\n return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 209, in _fn\r\n return fn(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 581, in forward\r\n return model_forward(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/accelerate/utils/operations.py\", line 569, in __call__\r\n return convert_to_fp32(self.model_forward(*args, **kwargs))\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/amp/autocast_mode.py\", line 14, in decorate_autocast\r\n return func(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 337, in catch_errors\r\n return callback(frame, cache_size, hooks)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 404, in _convert_frame\r\n result = inner_convert(frame, cache_size, hooks)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 104, in _fn\r\n return fn(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 262, in _convert_frame_assert\r\n return _compile(\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/utils.py\", line 163, in time_wrapper\r\n r = func(*args, **kwargs)\r\n File \"/cephyr/NOBACKUP/groups/snic2021-23-24/LLM4-CodeOpt/env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py\", line 394, in _compile\r\n raise InternalTorchDynamoError() from e\r\ntorch._dynamo.exc.InternalTorchDynamoError\r\n```", "Hi @TomasAndersonFang,\r\n\r\nI guess in this case it may be an issue with `transformers` (or `PyTorch`). I would recommend you open an issue on their repo.", "@albertvillanova Thanks for your recommendation. I'll do it" ]
2023-08-08T15:32:08
2023-08-11T13:35:09
2023-08-11T13:35:09
NONE
null
### Describe the bug This bug occurs when I use torch.compile(model) in my code, which seems to raise an error in the datasets library. ### Steps to reproduce the bug I use the following code to fine-tune Falcon on my private dataset. ```python import transformers from transformers import ( AutoModelForCausalLM, AutoTokenizer, AutoConfig, DataCollatorForSeq2Seq, Trainer, Seq2SeqTrainer, HfArgumentParser, Seq2SeqTrainingArguments, BitsAndBytesConfig, ) from peft import ( LoraConfig, get_peft_model, get_peft_model_state_dict, prepare_model_for_int8_training, set_peft_model_state_dict, ) import torch import os import evaluate import functools from datasets import load_dataset import bitsandbytes as bnb import logging import json import copy from typing import Dict, Optional, Sequence from dataclasses import dataclass, field # Lora settings LORA_R = 8 LORA_ALPHA = 16 LORA_DROPOUT= 0.05 LORA_TARGET_MODULES = ["query_key_value"] @dataclass class ModelArguments: model_name_or_path: Optional[str] = field(default="Salesforce/codegen2-7B") @dataclass class DataArguments: data_path: str = field(default=None, metadata={"help": "Path to the training data."}) train_file: str = field(default=None, metadata={"help": "Path to the evaluation data."}) eval_file: str = field(default=None, metadata={"help": "Path to the evaluation data."}) cache_path: str = field(default=None, metadata={"help": "Path to the cache directory."}) num_proc: int = field(default=4, metadata={"help": "Number of processes to use for data preprocessing."}) @dataclass class TrainingArguments(transformers.TrainingArguments): # cache_dir: Optional[str] = field(default=None) optim: str = field(default="adamw_torch") model_max_length: int = field( default=512, metadata={"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."}, ) is_lora: bool = field(default=True, metadata={"help": "Whether to use LORA."}) def tokenize(text, tokenizer, max_seq_len=512, add_eos_token=True): result = tokenizer( text, truncation=True, max_length=max_seq_len, padding=False, return_tensors=None, ) if ( result["input_ids"][-1] != tokenizer.eos_token_id and len(result["input_ids"]) < max_seq_len and add_eos_token ): result["input_ids"].append(tokenizer.eos_token_id) result["attention_mask"].append(1) if add_eos_token and len(result["input_ids"]) >= max_seq_len: result["input_ids"][max_seq_len - 1] = tokenizer.eos_token_id result["attention_mask"][max_seq_len - 1] = 1 result["labels"] = result["input_ids"].copy() return result def main(): parser = HfArgumentParser((ModelArguments, DataArguments, TrainingArguments)) model_args, data_args, training_args = parser.parse_args_into_dataclasses() config = AutoConfig.from_pretrained( model_args.model_name_or_path, cache_dir=data_args.cache_path, trust_remote_code=True, ) if training_args.is_lora: model = AutoModelForCausalLM.from_pretrained( model_args.model_name_or_path, cache_dir=data_args.cache_path, torch_dtype=torch.float16, trust_remote_code=True, load_in_8bit=True, quantization_config=BitsAndBytesConfig( load_in_8bit=True, llm_int8_threshold=6.0 ), ) model = prepare_model_for_int8_training(model) config = LoraConfig( r=LORA_R, lora_alpha=LORA_ALPHA, target_modules=LORA_TARGET_MODULES, lora_dropout=LORA_DROPOUT, bias="none", task_type="CAUSAL_LM", ) model = get_peft_model(model, config) else: model = AutoModelForCausalLM.from_pretrained( model_args.model_name_or_path, torch_dtype=torch.float16, cache_dir=data_args.cache_path, trust_remote_code=True, ) model.config.use_cache = False def 
print_trainable_parameters(model): """ Prints the number of trainable parameters in the model. """ trainable_params = 0 all_param = 0 for _, param in model.named_parameters(): all_param += param.numel() if param.requires_grad: trainable_params += param.numel() print( f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}" ) print_trainable_parameters(model) tokenizer = AutoTokenizer.from_pretrained( model_args.model_name_or_path, cache_dir=data_args.cache_path, model_max_length=training_args.model_max_length, padding_side="left", use_fast=True, trust_remote_code=True, ) tokenizer.pad_token = tokenizer.eos_token # Load dataset def generate_and_tokenize_prompt(sample): input_text = sample["input"] target_text = sample["output"] + tokenizer.eos_token full_text = input_text + target_text tokenized_full_text = tokenize(full_text, tokenizer, max_seq_len=512) tokenized_input_text = tokenize(input_text, tokenizer, max_seq_len=512) input_len = len(tokenized_input_text["input_ids"]) - 1 # -1 for eos token tokenized_full_text["labels"] = [-100] * input_len + tokenized_full_text["labels"][input_len:] return tokenized_full_text data_files = {} if data_args.train_file is not None: data_files["train"] = data_args.train_file if data_args.eval_file is not None: data_files["eval"] = data_args.eval_file dataset = load_dataset(data_args.data_path, data_files=data_files) train_dataset = dataset["train"] eval_dataset = dataset["eval"] train_dataset = train_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc) eval_dataset = eval_dataset.map(generate_and_tokenize_prompt, num_proc=data_args.num_proc) data_collator = DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8, return_tensors="pt", padding=True) # Evaluation metrics def compute_metrics(eval_preds, tokenizer): metric = evaluate.load('exact_match') preds, labels = eval_preds # In case the model returns more than the prediction logits if isinstance(preds, tuple): preds = preds[0] decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True, clean_up_tokenization_spaces=False) # Replace -100s in the labels as we can't decode them labels[labels == -100] = tokenizer.pad_token_id decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True, clean_up_tokenization_spaces=False) # Some simple post-processing decoded_preds = [pred.strip() for pred in decoded_preds] decoded_labels = [label.strip() for label in decoded_labels] result = metric.compute(predictions=decoded_preds, references=decoded_labels) return {'exact_match': result['exact_match']} compute_metrics_fn = functools.partial(compute_metrics, tokenizer=tokenizer) model = torch.compile(model) # Training trainer = Trainer( model=model, train_dataset=train_dataset, eval_dataset=eval_dataset, args=training_args, data_collator=data_collator, compute_metrics=compute_metrics_fn, ) trainer.train() trainer.save_state() trainer.save_model(output_dir=training_args.output_dir) tokenizer.save_pretrained(save_directory=training_args.output_dir) if __name__ == "__main__": main() ``` When I didn't use `torch.compile(model)`, my code worked well. 
But when I added this line to my code, it produced the following error: ``` Traceback (most recent call last): File "falcon_sft.py", line 230, in <module> main() File "falcon_sft.py", line 223, in main trainer.train() File "python3.10/site-packages/transformers/trainer.py", line 1539, in train return inner_training_loop( File "python3.10/site-packages/transformers/trainer.py", line 1787, in _inner_training_loop for step, inputs in enumerate(epoch_iterator): File "python3.10/site-packages/accelerate/data_loader.py", line 384, in __iter__ current_batch = next(dataloader_iter) File "python3.10/site-packages/torch/utils/data/dataloader.py", line 633, in __next__ data = self._next_data() File "python3.10/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch data = self.dataset.__getitems__(possibly_batched_index) File "python3.10/site-packages/datasets/arrow_dataset.py", line 2807, in __getitems__ batch = self.__getitem__(keys) File "python3.10/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__ return self._getitem(key) File "python3.10/site-packages/datasets/arrow_dataset.py", line 2787, in _getitem pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) File "python3.10/site-packages/datasets/formatting/formatting.py", line 583, in query_table _check_valid_index_key(key, size) File "python3.10/site-packages/datasets/formatting/formatting.py", line 536, in _check_valid_index_key _check_valid_index_key(int(max(key)), size=size) File "python3.10/site-packages/datasets/formatting/formatting.py", line 526, in _check_valid_index_key raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") IndexError: Invalid key: 88 is out of bounds for size 0 ``` So I'm confused about why this error was generated, and how to fix it. Is this error produced by `datasets` or `torch.compile`? ### Expected behavior I want to use `torch.compile` in my code. ### Environment info - `datasets` version: 2.14.3 - Platform: Linux-4.18.0-425.19.2.el8_7.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.8 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6128/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6127
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6127/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6127/comments
https://api.github.com/repos/huggingface/datasets/issues/6127/events
https://github.com/huggingface/datasets/pull/6127
1,839,746,721
PR_kwDODunzps5XWdP5
6,127
Fix authentication issues
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006103 / 0.011353 (-0.005250) | 0.003588 / 0.011008 (-0.007420) | 0.080335 / 0.038508 (0.041827) | 0.059634 / 0.023109 (0.036525) | 0.356093 / 0.275898 (0.080195) | 0.407376 / 0.323480 (0.083896) | 0.005343 / 0.007986 (-0.002643) | 0.002928 / 0.004328 (-0.001400) | 0.062580 / 0.004250 (0.058330) | 0.047544 / 0.037052 (0.010491) | 0.364305 / 0.258489 (0.105816) | 0.421463 / 0.293841 (0.127623) | 0.027249 / 0.128546 (-0.101298) | 0.008010 / 0.075646 (-0.067636) | 0.262543 / 0.419271 (-0.156728) | 0.044978 / 0.043533 (0.001445) | 0.339344 / 0.255139 (0.084205) | 0.395288 / 0.283200 (0.112088) | 0.021425 / 0.141683 (-0.120258) | 1.439767 / 1.452155 (-0.012387) | 1.498081 / 1.492716 (0.005365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196976 / 0.018006 (0.178970) | 0.435383 / 0.000490 (0.434893) | 0.004559 / 0.000200 (0.004359) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023653 / 0.037411 (-0.013759) | 0.072944 / 0.014526 (0.058418) | 0.083651 / 0.176557 (-0.092906) | 0.144590 / 0.737135 (-0.592545) | 0.084844 / 0.296338 (-0.211494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398752 / 0.215209 (0.183543) | 3.959539 / 2.077655 (1.881884) | 
1.935277 / 1.504120 (0.431157) | 1.751994 / 1.541195 (0.210799) | 1.828386 / 1.468490 (0.359896) | 0.500492 / 4.584777 (-4.084284) | 3.086630 / 3.745712 (-0.659082) | 2.851664 / 5.269862 (-2.418198) | 1.869792 / 4.565676 (-2.695885) | 0.058509 / 0.424275 (-0.365766) | 0.006500 / 0.007607 (-0.001107) | 0.467468 / 0.226044 (0.241424) | 4.686168 / 2.268929 (2.417240) | 2.427632 / 55.444624 (-53.016993) | 2.193194 / 6.876477 (-4.683283) | 2.408574 / 2.142072 (0.266501) | 0.592173 / 4.805227 (-4.213054) | 0.125381 / 6.500664 (-6.375283) | 0.060679 / 0.075469 (-0.014790) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.236066 / 1.841788 (-0.605722) | 18.591689 / 8.074308 (10.517381) | 14.138774 / 10.191392 (3.947382) | 0.147455 / 0.680424 (-0.532968) | 0.016921 / 0.534201 (-0.517280) | 0.328129 / 0.579283 (-0.251154) | 0.348872 / 0.434364 (-0.085491) | 0.380311 / 0.540337 (-0.160026) | 0.532901 / 1.386936 (-0.854035) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005914 / 0.011353 (-0.005438) | 0.003614 / 0.011008 (-0.007394) | 0.062857 / 0.038508 (0.024349) | 0.060633 / 0.023109 (0.037524) | 0.419684 / 0.275898 (0.143786) | 0.449025 / 0.323480 (0.125546) | 0.004595 / 0.007986 (-0.003391) | 0.002861 / 0.004328 (-0.001467) | 0.063253 / 0.004250 (0.059003) | 0.048770 / 0.037052 (0.011718) | 0.419838 / 0.258489 (0.161349) | 0.465183 / 0.293841 (0.171342) | 0.027350 / 0.128546 (-0.101196) | 0.008065 / 0.075646 (-0.067582) | 0.068321 / 0.419271 (-0.350950) | 0.041083 / 0.043533 (-0.002449) | 0.400831 / 0.255139 (0.145692) | 0.449286 / 0.283200 (0.166086) | 0.020472 / 0.141683 (-0.121210) | 1.437215 / 1.452155 (-0.014940) | 1.503679 / 1.492716 (0.010963) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230764 / 0.018006 (0.212758) | 0.420774 / 0.000490 (0.420285) | 0.004012 / 0.000200 (0.003812) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026009 / 0.037411 (-0.011402) | 0.077943 / 0.014526 (0.063417) | 0.087281 / 0.176557 (-0.089276) | 0.139422 / 0.737135 (-0.597713) | 0.089090 / 0.296338 (-0.207248) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417298 / 0.215209 (0.202088) | 4.152303 / 2.077655 (2.074648) | 2.179996 / 1.504120 (0.675877) | 2.020619 / 1.541195 (0.479424) | 2.085241 / 1.468490 (0.616751) | 0.501111 / 4.584777 (-4.083666) | 3.079849 / 3.745712 (-0.665863) | 2.820607 / 5.269862 (-2.449255) | 1.863988 / 4.565676 (-2.701688) | 0.057662 / 0.424275 (-0.366613) | 0.006778 / 0.007607 (-0.000830) | 0.498661 / 0.226044 (0.272616) | 4.986503 / 2.268929 (2.717574) | 2.620676 / 55.444624 (-52.823949) | 2.297546 / 6.876477 (-4.578931) | 2.458148 / 2.142072 (0.316075) | 0.599490 / 4.805227 (-4.205738) | 0.125102 / 6.500664 (-6.375562) | 0.061411 / 0.075469 (-0.014059) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.323816 / 1.841788 (-0.517971) | 18.462614 / 8.074308 (10.388306) | 13.845826 / 10.191392 (3.654434) | 0.146115 / 0.680424 (-0.534309) | 0.016862 / 0.534201 (-0.517339) | 0.335449 / 0.579283 (-0.243834) | 0.343792 / 0.434364 (-0.090572) | 0.394068 / 0.540337 (-0.146269) | 0.536378 / 1.386936 (-0.850558) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#de3f00368c9236e9410821f5fddb95d6069883c1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006825 / 0.011353 (-0.004527) | 0.004005 / 0.011008 (-0.007003) | 0.085504 / 0.038508 (0.046996) | 0.077252 / 0.023109 (0.054143) | 0.351891 / 0.275898 (0.075993) | 0.383404 / 0.323480 (0.059924) | 0.004153 / 0.007986 (-0.003833) | 0.003344 / 0.004328 (-0.000985) | 0.064936 / 0.004250 (0.060685) | 0.057653 / 0.037052 (0.020601) | 0.368155 / 0.258489 (0.109666) | 0.406122 / 0.293841 (0.112282) | 0.032049 / 0.128546 (-0.096497) | 0.008698 / 0.075646 (-0.066949) | 0.292394 / 0.419271 (-0.126878) | 0.053634 / 0.043533 (0.010101) | 0.358273 / 0.255139 (0.103134) | 0.378441 / 0.283200 (0.095242) | 0.026928 / 0.141683 (-0.114755) | 1.458718 / 1.452155 (0.006563) | 1.536231 / 1.492716 (0.043515) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213956 / 0.018006 (0.195950) | 0.458620 / 0.000490 (0.458130) | 0.002718 / 0.000200 (0.002519) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027870 / 0.037411 (-0.009541) | 0.083922 / 0.014526 (0.069396) | 0.152056 / 0.176557 (-0.024501) | 0.151584 / 0.737135 (-0.585552) | 0.095698 / 0.296338 (-0.200641) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407762 / 0.215209 (0.192553) | 4.074324 / 2.077655 (1.996669) | 2.089929 / 1.504120 (0.585809) | 1.920024 / 1.541195 (0.378829) | 2.013410 / 1.468490 (0.544920) | 0.486056 / 4.584777 (-4.098721) | 3.656869 / 3.745712 (-0.088843) | 3.304008 / 5.269862 (-1.965854) | 2.074363 / 4.565676 (-2.491313) | 0.057293 / 0.424275 (-0.366982) | 0.007240 / 0.007607 (-0.000367) | 0.482696 / 0.226044 (0.256652) | 4.833251 / 2.268929 (2.564322) | 2.570391 / 55.444624 (-52.874233) | 2.220619 / 6.876477 (-4.655857) | 2.426316 / 2.142072 (0.284243) | 0.584811 / 4.805227 (-4.220416) | 0.134907 / 6.500664 (-6.365757) | 0.061115 / 0.075469 (-0.014354) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251969 / 1.841788 (-0.589818) | 19.601611 / 8.074308 (11.527303) | 14.190217 / 10.191392 (3.998825) | 0.166296 / 0.680424 (-0.514128) | 0.018334 / 0.534201 (-0.515867) | 0.395172 / 0.579283 (-0.184111) | 0.410440 / 0.434364 (-0.023924) | 0.462263 / 0.540337 
(-0.078074) | 0.645504 / 1.386936 (-0.741432) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006991 / 0.011353 (-0.004362) | 0.004084 / 0.011008 (-0.006924) | 0.065208 / 0.038508 (0.026700) | 0.077809 / 0.023109 (0.054699) | 0.386472 / 0.275898 (0.110574) | 0.418686 / 0.323480 (0.095206) | 0.005346 / 0.007986 (-0.002640) | 0.003416 / 0.004328 (-0.000912) | 0.066209 / 0.004250 (0.061958) | 0.057517 / 0.037052 (0.020465) | 0.407684 / 0.258489 (0.149195) | 0.425438 / 0.293841 (0.131597) | 0.032166 / 0.128546 (-0.096380) | 0.008662 / 0.075646 (-0.066985) | 0.071712 / 0.419271 (-0.347560) | 0.049764 / 0.043533 (0.006231) | 0.394882 / 0.255139 (0.139743) | 0.403589 / 0.283200 (0.120389) | 0.023688 / 0.141683 (-0.117995) | 1.468488 / 1.452155 (0.016334) | 1.533118 / 1.492716 (0.040401) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252949 / 0.018006 (0.234943) | 0.447355 / 0.000490 (0.446865) | 0.011721 / 0.000200 (0.011521) | 0.000107 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031444 / 0.037411 (-0.005968) | 0.089390 / 0.014526 (0.074864) | 0.100103 / 0.176557 (-0.076454) | 0.153301 / 0.737135 (-0.583835) | 0.101336 / 0.296338 (-0.195003) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408574 / 0.215209 (0.193365) | 4.073135 / 2.077655 (1.995480) | 2.086550 / 1.504120 (0.582430) | 1.930651 / 1.541195 (0.389457) | 2.013548 
/ 1.468490 (0.545058) | 0.477235 / 4.584777 (-4.107542) | 3.547545 / 3.745712 (-0.198167) | 3.321957 / 5.269862 (-1.947905) | 2.057705 / 4.565676 (-2.507971) | 0.056730 / 0.424275 (-0.367545) | 0.007882 / 0.007607 (0.000275) | 0.487297 / 0.226044 (0.261253) | 4.874184 / 2.268929 (2.605255) | 2.631129 / 55.444624 (-52.813496) | 2.235755 / 6.876477 (-4.640722) | 2.463329 / 2.142072 (0.321257) | 0.578308 / 4.805227 (-4.226919) | 0.132726 / 6.500664 (-6.367938) | 0.064883 / 0.075469 (-0.010586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.347564 / 1.841788 (-0.494223) | 20.192973 / 8.074308 (12.118665) | 14.563553 / 10.191392 (4.372161) | 0.168244 / 0.680424 (-0.512180) | 0.018638 / 0.534201 (-0.515563) | 0.394789 / 0.579283 (-0.184494) | 0.419677 / 0.434364 (-0.014687) | 0.480274 / 0.540337 (-0.060063) | 0.641204 / 1.386936 (-0.745732) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9c7a0d56b60bf700d6a491fa30eaf66500969315 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005939 / 0.011353 (-0.005413) | 0.003457 / 0.011008 (-0.007551) | 0.079985 / 0.038508 (0.041477) | 0.056492 / 0.023109 (0.033383) | 0.312356 / 0.275898 (0.036458) | 0.354038 / 0.323480 (0.030558) | 0.004551 / 0.007986 (-0.003435) | 0.002828 / 0.004328 (-0.001501) | 0.062369 / 0.004250 (0.058119) | 0.044712 / 0.037052 (0.007660) | 0.318244 / 0.258489 (0.059755) | 0.361977 / 0.293841 (0.068136) | 0.026460 / 0.128546 (-0.102086) | 0.007928 / 0.075646 (-0.067719) | 0.261378 / 0.419271 (-0.157894) | 0.044209 / 0.043533 (0.000676) | 0.313931 / 0.255139 (0.058792) | 0.339553 / 0.283200 (0.056354) | 0.019776 / 0.141683 (-0.121907) | 1.443126 / 1.452155 (-0.009029) | 1.508149 / 1.492716 (0.015432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183801 / 0.018006 (0.165795) | 0.427967 / 0.000490 (0.427477) | 0.002028 / 
(remainder of this automated CML benchmark table omitted; its CML watermark references commit c65806b0542996e56825ab46a3ce8f9c07ab0df3)", "(four further automated CML benchmark comments omitted — bot-generated timing tables whose CML watermarks reference commits c0a77dc943de68a17f23f141517028c734c78623, 58e62af004b6b8b84dcfd897a4bc71637cfa6c3f, e4684fc1032321abf0d494b0c130ea7c82ebda80 and 12cfc1196e62847e2e8239fbd727a02cbc86ddec)" ]
2023-08-07T15:41:25
2023-08-08T15:24:59
2023-08-08T15:16:22
MEMBER
null
This PR fixes 3 authentication issues: - Fix authentication when passing `token`. - Fix authentication in `Audio.decode_example` and `Image.decode_example`. - Fix authentication to resolve `data_files` in repositories without a script. This PR also fixes our CI so that we properly test passing `token` and do not fall back to the token stored in `HfFolder`. Fix #6126. ## Details ### Fix authentication when passing `token` See c0a77dc943de68a17f23f141517028c734c78623 The root issue arose when the `token` was set on an already instantiated `DownloadConfig` and thus not propagated to `self._storage_options`: ```python download_config.token = token ``` As this usage pattern is very common, the fix consists of overriding `DownloadConfig.__setattr__`. This fixes authentication issues in the following functions: - `load_dataset` and `load_dataset_builder` - `Dataset.push_to_hub` and `DatasetDict.push_to_hub` - `inspect.get_dataset_config_info`, `inspect.get_dataset_infos` and `inspect.get_dataset_split_names` ### Fix authentication in `Audio.decode_example` and `Image.decode_example` See: 58e62af004b6b8b84dcfd897a4bc71637cfa6c3f The `token` was not set because the `repo_id` was mistakenly parsed from an HTTP URL (`"http://..."`) instead of an HfFileSystem URL (`"hf://..."`). ### Fix authentication to resolve `data_files` in repositories without script See: e4684fc1032321abf0d494b0c130ea7c82ebda80 This is fixed by passing `download_config` to the function `create_builder_configs_from_metadata_configs`.
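To make the `__setattr__` fix concrete, here is a minimal, hedged sketch of the propagation pattern described above — a simplified stand-in, not the real `DownloadConfig` (which is a dataclass with many more fields); the `storage_options` layout is illustrative:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class DownloadConfig:
    # Declared before `token` so it already exists when __setattr__
    # fires for `token` during dataclass __init__.
    storage_options: dict = field(default_factory=dict)
    token: Optional[str] = None

    def __setattr__(self, name, value):
        super().__setattr__(name, value)
        # Mirror late assignments such as `download_config.token = token`
        # into the fsspec storage options so authentication propagates.
        if name == "token" and value is not None:
            self.storage_options.setdefault("hf", {})["token"] = value


config = DownloadConfig()
config.token = "hf_xxx"  # the common usage pattern that previously lost the token
assert config.storage_options["hf"]["token"] == "hf_xxx"
```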
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6127/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6127/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6127", "html_url": "https://github.com/huggingface/datasets/pull/6127", "diff_url": "https://github.com/huggingface/datasets/pull/6127.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6127.patch", "merged_at": "2023-08-08T15:16:22" }
true
https://api.github.com/repos/huggingface/datasets/issues/6126
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6126/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6126/comments
https://api.github.com/repos/huggingface/datasets/issues/6126/events
https://github.com/huggingface/datasets/issues/6126
1,839,675,320
I_kwDODunzps5tpze4
6,126
Private datasets do not load when passing token
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Our CI did not catch this issue because, with the current implementation, the token stored in `HfFolder` (which always exists) is used by default.", "I can confirm this and have the same problem (and just went almost crazy trying to figure out its source, because on another computer everything worked well even with `DownloadMode.FORCE_REDOWNLOAD`).", "We are planning to do a patch release today, after the merge of the fix:\r\n- #6127\r\n\r\nIn the meantime, the problem can be circumvented by passing `download_config` instead:\r\n```python\r\nfrom datasets import DownloadConfig, load_dataset\r\n\r\nload_dataset(\"<DATASET-NAME>\", split=\"train\", download_config=DownloadConfig(token=\"<TOKEN>\"))\r\n``` ", "> We are planning to do a patch release today, after the merge of the fix:\r\n> \r\n> * [Fix authentication issues #6127](https://github.com/huggingface/datasets/pull/6127)\r\n> \r\n> \r\n> In the meantime, the problem can be circumvented by passing `download_config` instead:\r\n> \r\n> ```python\r\n> from datasets import DownloadConfig, load_dataset\r\n> \r\n> load_dataset(\"<DATASET-NAME>\", split=\"train\", download_config=DownloadConfig(token=\"<TOKEN>\"))\r\n> ```\r\n\r\nThis did not work for me (there was some other error, with the split being an unexpected size 0). Downgrading to 2.13 fixed it." ]
2023-08-07T15:06:47
2023-08-08T15:16:23
2023-08-08T15:16:23
MEMBER
null
### Describe the bug Since the release of `datasets` 2.14, private/gated datasets do not load when passing `token`: they raise `EmptyDatasetError`. This is an unplanned, backward-incompatible change. Note that private datasets do load if `download_config` is passed instead: ```python from datasets import DownloadConfig, load_dataset ds = load_dataset("albertvillanova/tmp-private", split="train", download_config=DownloadConfig(token="<MY-TOKEN>")) ds ``` gives ``` Dataset({ features: ['text'], num_rows: 4 }) ``` ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>") ``` gives ``` --------------------------------------------------------------------------- EmptyDatasetError Traceback (most recent call last) [<ipython-input-2-25b48732107a>](https://localhost:8080/#) in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 ds = load_dataset("albertvillanova/tmp-private", split="train", token="<MY-TOKEN>") 5 frames [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2107 2108 # Create a dataset builder -> 2109 builder_instance = load_dataset_builder( 2110 path=path, 2111 name=name, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs) 1793 download_config = download_config.copy() if download_config else DownloadConfig() 1794 download_config.storage_options.update(storage_options) -> 1795 dataset_module = dataset_module_factory( 1796 path, 1797 revision=revision, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1484 raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None 1485 if isinstance(e1, EmptyDatasetError): -> 1486 raise e1 from None 1487 if isinstance(e1, FileNotFoundError): 1488 raise FileNotFoundError( [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1474 download_config=download_config, 1475 download_mode=download_mode, -> 1476 ).get_module() 1477 except ( 1478 Exception [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in get_module(self) 1030 sanitize_patterns(self.data_files) 1031 if self.data_files is not None -> 1032 else get_data_patterns(base_path, download_config=self.download_config) 1033 ) 1034 data_files = DataFilesDict.from_patterns( [/usr/local/lib/python3.10/dist-packages/datasets/data_files.py](https://localhost:8080/#) in get_data_patterns(base_path, download_config) 457 return _get_data_files_patterns(resolver) 458 except FileNotFoundError: --> 459 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None 460 461 EmptyDatasetError: The directory at 
hf://datasets/albertvillanova/tmp-private@79b9e4fe79670a9a050d6ebc385464891915a71d doesn't contain any data files ``` ### Expected behavior The dataset should load. ### Environment info - `datasets` version: 2.14.3 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
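Consolidating the report above with the workaround from the comment thread, a hedged sketch of a version guard one could use until the patch release lands (the list of affected releases is an assumption based on this report):

```python
import datasets
from datasets import DownloadConfig, load_dataset

token = "<MY-TOKEN>"
if datasets.__version__ in ("2.14.0", "2.14.1", "2.14.2", "2.14.3"):
    # Affected releases: route the token through DownloadConfig explicitly.
    ds = load_dataset("albertvillanova/tmp-private", split="train",
                      download_config=DownloadConfig(token=token))
else:
    # Fixed releases accept the token kwarg directly.
    ds = load_dataset("albertvillanova/tmp-private", split="train", token=token)
```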
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6126/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6126/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6125
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6125/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6125/comments
https://api.github.com/repos/huggingface/datasets/issues/6125/events
https://github.com/huggingface/datasets/issues/6125
1,837,980,986
I_kwDODunzps5tjV06
6,125
Reinforcement Learning and Robotics are not task categories in HF datasets metadata
{ "login": "StoneT2000", "id": 35373228, "node_id": "MDQ6VXNlcjM1MzczMjI4", "avatar_url": "https://avatars.githubusercontent.com/u/35373228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StoneT2000", "html_url": "https://github.com/StoneT2000", "followers_url": "https://api.github.com/users/StoneT2000/followers", "following_url": "https://api.github.com/users/StoneT2000/following{/other_user}", "gists_url": "https://api.github.com/users/StoneT2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/StoneT2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StoneT2000/subscriptions", "organizations_url": "https://api.github.com/users/StoneT2000/orgs", "repos_url": "https://api.github.com/users/StoneT2000/repos", "events_url": "https://api.github.com/users/StoneT2000/events{/privacy}", "received_events_url": "https://api.github.com/users/StoneT2000/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
2023-08-05T23:59:42
2023-08-18T12:28:42
2023-08-18T12:28:42
NONE
null
### Describe the bug In https://huggingface.co./models there are task categories for RL and robotics, but none in https://huggingface.co./datasets. Our lab is currently moving our datasets over to Hugging Face and would like to be able to add those two tags. Moreover, we see some older datasets that do have these tags, but we can't seem to add them ourselves. ### Steps to reproduce the bug 1. Create a new dataset on Hugging Face. 2. Try to type reinforcement-learning or robotics into the task categories; the site does not allow you to commit. ### Expected behavior Expected to be able to add RL and robotics as task categories, as some previous datasets have these tags. ### Environment info N/A
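For context, these tags live in the `task_categories` field of the dataset card's YAML metadata; a hedged sketch of setting them programmatically with `huggingface_hub` once the Hub's taxonomy accepts the tags (the repo id is hypothetical, and the API names should be checked against your installed version):

```python
from huggingface_hub import DatasetCard

repo_id = "my-lab/my-robot-dataset"  # hypothetical dataset repo

# Load the existing card, add the requested tags to its YAML front
# matter, and push the updated card back to the Hub.
card = DatasetCard.load(repo_id)
card.data.task_categories = ["reinforcement-learning", "robotics"]
card.push_to_hub(repo_id)
```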
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6125/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6124/comments
https://api.github.com/repos/huggingface/datasets/issues/6124/events
https://github.com/huggingface/datasets/issues/6124
1,837,868,112
I_kwDODunzps5ti6RQ
6,124
Datasets crashing runs due to KeyError
{ "login": "conceptofmind", "id": 25208228, "node_id": "MDQ6VXNlcjI1MjA4MjI4", "avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4", "gravatar_id": "", "url": "https://api.github.com/users/conceptofmind", "html_url": "https://github.com/conceptofmind", "followers_url": "https://api.github.com/users/conceptofmind/followers", "following_url": "https://api.github.com/users/conceptofmind/following{/other_user}", "gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}", "starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions", "organizations_url": "https://api.github.com/users/conceptofmind/orgs", "repos_url": "https://api.github.com/users/conceptofmind/repos", "events_url": "https://api.github.com/users/conceptofmind/events{/privacy}", "received_events_url": "https://api.github.com/users/conceptofmind/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I once had the same error and could fix it by pushing a fake or dummy commit to my Hugging Face dataset repo.", "Hi! We need a reproducer to fix this. Can you provide a link to the dataset (if it's public)?", "> Hi! We need a reproducer to fix this. Can you provide a link to the dataset (if it's public)?\r\n\r\nHi Mario,\r\n\r\nUnfortunately, the dataset in question is currently private until the model is trained and released.\r\n\r\nThis is not happening with just one dataset, but with numerous hosted private datasets.\r\n\r\nI am only loading the dataset and doing nothing else currently. It seems to happen completely sporadically.\r\n\r\nThank you,\r\n\r\nEnrico" ]
2023-08-05T17:48:56
2023-08-20T17:33:15
null
NONE
null
### Describe the bug Hi all, I have been running into a pretty persistent issue recently when trying to load datasets. ```python train_dataset = load_dataset( 'llama-2-7b-tokenized', split = 'train' ) ``` I receive a KeyError which crashes the runs. ``` Traceback (most recent call last): main() train_dataset = load_dataset( ^^^^^^^^^^^^^ builder_instance = load_dataset_builder( ^^^^^^^^^^^^^^^^^^^^^ dataset_module = dataset_module_factory( ^^^^^^^^^^^^^^^^^^^^^^^ raise e1 from None ).get_module() ^^^^^^^^^^^^ else get_data_patterns(base_path, download_config=self.download_config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ return _get_data_files_patterns(resolver) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ data_files = pattern_resolver(pattern) ^^^^^^^^^^^^^^^^^^^^^^^^^ fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ paths = [f for f in sorted(fs.glob(paths)) if not fs.isdir(f)] ^^^^^^^^^^^^^^ allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ for _, dirs, files in self.walk(path, maxdepth, detail=True, **kwargs): listing = self.ls(path, detail=True, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ "last_modified": parse_datetime(tree_item["lastCommit"]["date"]), ~~~~~~~~~^^^^^^^^^^^^^^ KeyError: 'lastCommit' ``` Any help would be greatly appreciated. Thank you, Enrico ### Steps to reproduce the bug Load the dataset from the Huggingface hub. ```python train_dataset = load_dataset( 'llama-2-7b-tokenized', split = 'train' ) ``` ### Expected behavior Loads the dataset. ### Environment info datasets-2.14.3 CUDA 11.8 Python 3.11
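Since the failure is sporadic rather than deterministic, one blunt mitigation while the root cause is tracked down is to retry the load with backoff; a minimal sketch (retry counts and delays are arbitrary assumptions):

```python
import time

from datasets import load_dataset


def load_with_retries(name, split, retries=5, base_delay=10.0):
    """Retry around sporadic Hub listing failures such as KeyError: 'lastCommit'."""
    for attempt in range(retries):
        try:
            return load_dataset(name, split=split)
        except KeyError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (attempt + 1))  # linear backoff


train_dataset = load_with_retries("llama-2-7b-tokenized", "train")
```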
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6124/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6123/comments
https://api.github.com/repos/huggingface/datasets/issues/6123/events
https://github.com/huggingface/datasets/issues/6123
1,837,789,294
I_kwDODunzps5tinBu
6,123
Inaccurate Bounding Boxes in "wildreceipt" Dataset
{ "login": "HamzaGbada", "id": 50714796, "node_id": "MDQ6VXNlcjUwNzE0Nzk2", "avatar_url": "https://avatars.githubusercontent.com/u/50714796?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HamzaGbada", "html_url": "https://github.com/HamzaGbada", "followers_url": "https://api.github.com/users/HamzaGbada/followers", "following_url": "https://api.github.com/users/HamzaGbada/following{/other_user}", "gists_url": "https://api.github.com/users/HamzaGbada/gists{/gist_id}", "starred_url": "https://api.github.com/users/HamzaGbada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HamzaGbada/subscriptions", "organizations_url": "https://api.github.com/users/HamzaGbada/orgs", "repos_url": "https://api.github.com/users/HamzaGbada/repos", "events_url": "https://api.github.com/users/HamzaGbada/events{/privacy}", "received_events_url": "https://api.github.com/users/HamzaGbada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! Thanks for the investigation, but we are not the authors of these datasets, so please report this on the Hub instead so that the actual authors can fix it." ]
2023-08-05T14:34:13
2023-08-17T14:25:27
2023-08-17T14:25:26
NONE
null
### Describe the bug I would like to bring to your attention an issue related to the accuracy of bounding boxes within the "wildreceipt" dataset, which is made available through the Hugging Face API. Specifically, I have identified a discrepancy between the bounding boxes generated by the dataset loading commands, namely `load_dataset("Theivaprakasham/wildreceipt")` and `load_dataset("jinhybr/WildReceipt")`, and the actual labels and corresponding bounding boxes present in the dataset. To illustrate this divergence, I've provided two examples in the form of screenshots. These screenshots highlight the contrasting outcomes between my personal implementation of the dataloader and the implementation offered by Hugging Face: **Example 1:** ![image](https://github.com/huggingface/datasets/assets/50714796/7a6604d2-899d-4102-a008-1a28c90698f1) ![image](https://github.com/huggingface/datasets/assets/50714796/eba458c7-d3af-4868-a520-8b683aa96f66) ![image](https://github.com/huggingface/datasets/assets/50714796/9f394891-5f5b-46f7-8e52-071b724aedab) **Example 2:** ![image](https://github.com/huggingface/datasets/assets/50714796/a2b2a8d3-124e-4990-b64a-5133cf4be2fe) ![image](https://github.com/huggingface/datasets/assets/50714796/6ee25642-35aa-40ad-ac1e-899d33be90df) ![image](https://github.com/huggingface/datasets/assets/50714796/5e42ff91-9fc4-4520-8803-0e225656f96c) It's important to note that my dataloader implementation is based on the same dataset files as utilized in the Hugging Face implementation. For your reference, you can access the dataset files through this link: [wildreceipt dataset files](https://download.openmmlab.com/mmocr/data/wildreceipt.tar). This inconsistency in bounding box accuracy warrants investigation and rectification to maintain the integrity of the "wildreceipt" dataset. Your attention and assistance in addressing this matter would be greatly appreciated. ### Steps to reproduce the bug ```python import matplotlib.pyplot as plt from datasets import load_dataset # Define functions to convert bounding box formats def convert_format1(box): x, y, w, h = box x2, y2 = x + w, y + h return [x, y, x2, y2] def convert_format2(box): x1, y1, x2, y2 = box return [x1, y1, x2, y2] def plot_cropped_image(image, box, title): cropped_image = image.crop(box) plt.imshow(cropped_image) plt.title(title) plt.axis('off') plt.savefig(title+'.png') plt.show() doc_index = 1 word_index = 3 dataset = load_dataset("Theivaprakasham/wildreceipt")['train'] image_hugging = dataset[doc_index]['image']  # assumes the image column is named "image" bbox_hugging_face = dataset[doc_index]['bboxes'][word_index] text_unit_face = dataset[doc_index]['words'][word_index] common_box_hugface_1 = convert_format1(bbox_hugging_face) common_box_hugface_2 = convert_format2(bbox_hugging_face) plot_cropped_image(image_hugging, common_box_hugface_1, f'Hugging Face Bounding boxes (x,y,w,h format) \n its associated text unit: {text_unit_face}') plot_cropped_image(image_hugging, common_box_hugface_2, f'Hugging Face Bounding boxes (x1,y1,x2, y2 format) \n its associated text unit: {text_unit_face}') ``` ### Expected behavior The bounding boxes generated by the Hugging Face loading commands for the "wildreceipt" dataset should accurately match the dataset's actual labels and bounding boxes. ### Environment info - Python version: 3.8 - Hugging Face datasets version: 2.14.2 - Dataset files taken from this link: https://download.openmmlab.com/mmocr/data/wildreceipt.tar
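A plausible source of such mismatches is a box-format mixup: the raw wildreceipt annotations in the linked mmocr archive store each box as an 8-value quadrilateral (x1, y1, ..., x4, y4), which has to be collapsed to an axis-aligned rectangle before it can be read as (x1, y1, x2, y2) or (x, y, w, h). A sketch of that conversion follows; the 8-point input format is an assumption based on the mmocr release, not something verified against the Hub copies:

```python
def quad_to_xyxy(quad):
    # Assumed input: an 8-value quadrilateral [x1, y1, x2, y2, x3, y3, x4, y4],
    # as in the raw mmocr wildreceipt annotations.
    # Output: axis-aligned [x_min, y_min, x_max, y_max].
    xs, ys = quad[0::2], quad[1::2]
    return [min(xs), min(ys), max(xs), max(ys)]


# A slightly rotated box collapses to its axis-aligned bounding rectangle.
print(quad_to_xyxy([10, 12, 98, 10, 100, 40, 12, 42]))  # [10, 10, 100, 42]
```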
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6123/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6122/comments
https://api.github.com/repos/huggingface/datasets/issues/6122/events
https://github.com/huggingface/datasets/issues/6122
1,837,335,721
I_kwDODunzps5tg4Sp
6,122
Upload README via `push_to_hub`
{ "login": "liyucheng09", "id": 27999909, "node_id": "MDQ6VXNlcjI3OTk5OTA5", "avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liyucheng09", "html_url": "https://github.com/liyucheng09", "followers_url": "https://api.github.com/users/liyucheng09/followers", "following_url": "https://api.github.com/users/liyucheng09/following{/other_user}", "gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}", "starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions", "organizations_url": "https://api.github.com/users/liyucheng09/orgs", "repos_url": "https://api.github.com/users/liyucheng09/repos", "events_url": "https://api.github.com/users/liyucheng09/events{/privacy}", "received_events_url": "https://api.github.com/users/liyucheng09/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "You can use `huggingface_hub`'s [Card API](https://huggingface.co./docs/huggingface_hub/package_reference/cards) to programmatically push a dataset card to the Hub." ]
2023-08-04T21:00:27
2023-08-21T18:18:54
2023-08-21T18:18:54
NONE
null
### Feature request `push_to_hub` now allows users to upload datasets programmatically. However, based on the latest docs, we still need to open the dataset page to add a README file manually. That said, I did discover the snippet that initializes a README on every `push_to_hub`: ``` dataset_card = ( DatasetCard( "---\n" + str(dataset_card_data) + "\n---\n" + f'# Dataset Card for "{repo_id.split("/")[-1]}"\n\n[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)' ) if dataset_card is None else dataset_card ) HfApi(endpoint=config.HF_ENDPOINT).upload_file( path_or_fileobj=str(dataset_card).encode(), path_in_repo="README.md", repo_id=repo_id, token=token, repo_type="dataset", revision=branch, ) ``` So, if we could enable `push_to_hub` to upload a README file of our own instead of the auto-generated one, it would save a ton of time and definitely alleviate the current "lack-of-dataset-card" situation. ### Motivation As elaborated above. ### Your contribution I might be able to make a PR.
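Until then, the Card API mentioned in the comments covers this workflow. A minimal sketch; the card text and repo id below are placeholders:

```python
from huggingface_hub import DatasetCard

# Compose the card yourself instead of relying on the auto-generated stub.
card = DatasetCard(
    """---
license: mit
---
# My dataset

A short description of what the dataset contains.
"""
)
card.push_to_hub("username/my-dataset")  # placeholder repo id
```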
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6122/timeline
null
not_planned
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6121
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6121/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6121/comments
https://api.github.com/repos/huggingface/datasets/issues/6121/events
https://github.com/huggingface/datasets/pull/6121
1,836,761,712
PR_kwDODunzps5XMsWd
6,121
Small typo in the code example for creating an imagefolder dataset
{ "login": "WangXin93", "id": 19688994, "node_id": "MDQ6VXNlcjE5Njg4OTk0", "avatar_url": "https://avatars.githubusercontent.com/u/19688994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WangXin93", "html_url": "https://github.com/WangXin93", "followers_url": "https://api.github.com/users/WangXin93/followers", "following_url": "https://api.github.com/users/WangXin93/following{/other_user}", "gists_url": "https://api.github.com/users/WangXin93/gists{/gist_id}", "starred_url": "https://api.github.com/users/WangXin93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WangXin93/subscriptions", "organizations_url": "https://api.github.com/users/WangXin93/orgs", "repos_url": "https://api.github.com/users/WangXin93/repos", "events_url": "https://api.github.com/users/WangXin93/events{/privacy}", "received_events_url": "https://api.github.com/users/WangXin93/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nI found a small typo in the code example of create imagefolder dataset. It confused me a little when I first saw it.\r\n\r\nBest Regards.\r\n\r\nXin" ]
2023-08-04T13:36:59
2023-08-04T13:45:32
2023-08-04T13:41:43
NONE
null
Fix typo in the code example for loading an imagefolder dataset
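For reference, the doc snippet in question covers loading a local image folder. A minimal sketch, with a placeholder path:

```python
from datasets import load_dataset

# The "imagefolder" builder infers class labels from sub-directory names.
dataset = load_dataset("imagefolder", data_dir="/path/to/images")
print(dataset["train"][0])
```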
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6121/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6121", "html_url": "https://github.com/huggingface/datasets/pull/6121", "diff_url": "https://github.com/huggingface/datasets/pull/6121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6121.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6120/comments
https://api.github.com/repos/huggingface/datasets/issues/6120/events
https://github.com/huggingface/datasets/issues/6120
1,836,026,938
I_kwDODunzps5tb4w6
6,120
Lookahead streaming support?
{ "login": "PicoCreator", "id": 17175484, "node_id": "MDQ6VXNlcjE3MTc1NDg0", "avatar_url": "https://avatars.githubusercontent.com/u/17175484?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PicoCreator", "html_url": "https://github.com/PicoCreator", "followers_url": "https://api.github.com/users/PicoCreator/followers", "following_url": "https://api.github.com/users/PicoCreator/following{/other_user}", "gists_url": "https://api.github.com/users/PicoCreator/gists{/gist_id}", "starred_url": "https://api.github.com/users/PicoCreator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PicoCreator/subscriptions", "organizations_url": "https://api.github.com/users/PicoCreator/orgs", "repos_url": "https://api.github.com/users/PicoCreator/repos", "events_url": "https://api.github.com/users/PicoCreator/events{/privacy}", "received_events_url": "https://api.github.com/users/PicoCreator/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "In which format is your dataset? We could expose the `pre_buffer` flag for Parquet to use PyArrow's background thread pool to speed up loading. " ]
2023-08-04T04:01:52
2023-08-17T17:48:42
null
NONE
null
### Feature request From what I understand, a streaming dataset currently pulls and processes the data as it is requested. This can introduce significant latency when data is loaded into the training process, since training has to wait for each segment. While the delays may be dataset-specific (or even mapping-instruction/tokenizer-specific), would it be possible to introduce a `streaming_lookahead` parameter for predictable workloads (even a shuffled dataset with a fixed seed)? Since we can predict in advance which data samples come next, they could be fetched while the current set is being trained on. With enough CPU and bandwidth to keep up with the training process, and a sufficiently large lookahead, this would reduce the latency spent waiting for the dataset to be ready between batches. ### Motivation Faster streaming performance when training over extra-large, TB-sized datasets. ### Your contribution I currently use HF datasets with the PyTorch Lightning trainer for the RWKV project, and would be able to help test this feature if supported.
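To make the request concrete, here is a minimal user-land sketch of the idea: prefetch items from any iterable (such as a streaming dataset) in a background thread. `prefetch` and `lookahead` are hypothetical names, not part of the `datasets` API:

```python
import queue
import threading
from typing import Iterable, Iterator


def prefetch(iterable: Iterable, lookahead: int = 64) -> Iterator:
    # Pull up to `lookahead` items ahead of the consumer in a background
    # thread, so training rarely blocks on download/processing latency.
    q: queue.Queue = queue.Queue(maxsize=lookahead)
    sentinel = object()

    def producer():
        for item in iterable:
            q.put(item)
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while (item := q.get()) is not sentinel:
        yield item


# Usage sketch: for batch in prefetch(iter(streaming_dataset), lookahead=128): ...
```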
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6120/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6119
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6119/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6119/comments
https://api.github.com/repos/huggingface/datasets/issues/6119/events
https://github.com/huggingface/datasets/pull/6119
1,835,996,350
PR_kwDODunzps5XKI19
6,119
[Docs] Add description of `select_columns` to guide
{ "login": "unifyh", "id": 18213435, "node_id": "MDQ6VXNlcjE4MjEzNDM1", "avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unifyh", "html_url": "https://github.com/unifyh", "followers_url": "https://api.github.com/users/unifyh/followers", "following_url": "https://api.github.com/users/unifyh/following{/other_user}", "gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}", "starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unifyh/subscriptions", "organizations_url": "https://api.github.com/users/unifyh/orgs", "repos_url": "https://api.github.com/users/unifyh/repos", "events_url": "https://api.github.com/users/unifyh/events{/privacy}", "received_events_url": "https://api.github.com/users/unifyh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007755 / 0.011353 (-0.003598) | 0.004618 / 0.011008 (-0.006391) | 0.098132 / 0.038508 (0.059624) | 0.086759 / 0.023109 (0.063650) | 0.374668 / 0.275898 (0.098770) | 0.417131 / 0.323480 (0.093651) | 0.004604 / 0.007986 (-0.003382) | 0.005461 / 0.004328 (0.001132) | 0.077249 / 0.004250 (0.072999) | 0.063247 / 0.037052 (0.026195) | 0.391801 / 0.258489 (0.133312) | 0.432139 / 0.293841 (0.138298) | 0.036755 / 0.128546 (-0.091791) | 0.010011 / 0.075646 (-0.065636) | 0.346175 / 0.419271 (-0.073097) | 0.061503 / 0.043533 (0.017971) | 0.374063 / 0.255139 (0.118924) | 0.435873 / 0.283200 (0.152673) | 0.029476 / 0.141683 (-0.112207) | 1.786945 / 1.452155 (0.334790) | 1.857190 / 1.492716 (0.364474) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253939 / 0.018006 (0.235933) | 0.506847 / 0.000490 (0.506358) | 0.007278 / 0.000200 (0.007079) | 0.000451 / 0.000054 (0.000397) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032938 / 0.037411 (-0.004474) | 0.097493 / 0.014526 (0.082967) | 0.112090 / 0.176557 (-0.064467) | 0.177986 / 0.737135 (-0.559149) | 0.112060 / 0.296338 (-0.184278) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.481858 / 0.215209 (0.266649) | 4.814894 / 2.077655 (2.737239) | 2.496428 
/ 1.504120 (0.992308) | 2.309965 / 1.541195 (0.768770) | 2.393819 / 1.468490 (0.925329) | 0.564670 / 4.584777 (-4.020107) | 4.151222 / 3.745712 (0.405510) | 3.676115 / 5.269862 (-1.593747) | 2.346165 / 4.565676 (-2.219512) | 0.066344 / 0.424275 (-0.357931) | 0.009006 / 0.007607 (0.001399) | 0.567699 / 0.226044 (0.341654) | 5.686799 / 2.268929 (3.417871) | 3.031044 / 55.444624 (-52.413580) | 2.606259 / 6.876477 (-4.270217) | 2.864876 / 2.142072 (0.722804) | 0.681730 / 4.805227 (-4.123498) | 0.155405 / 6.500664 (-6.345259) | 0.071492 / 0.075469 (-0.003977) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.514446 / 1.841788 (-0.327341) | 22.624912 / 8.074308 (14.550604) | 16.754145 / 10.191392 (6.562753) | 0.193113 / 0.680424 (-0.487311) | 0.021808 / 0.534201 (-0.512393) | 0.468241 / 0.579283 (-0.111042) | 0.499647 / 0.434364 (0.065283) | 0.539571 / 0.540337 (-0.000766) | 0.771268 / 1.386936 (-0.615668) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007562 / 0.011353 (-0.003791) | 0.004548 / 0.011008 (-0.006460) | 0.075998 / 0.038508 (0.037490) | 0.081648 / 0.023109 (0.058539) | 0.462876 / 0.275898 (0.186978) | 0.499366 / 0.323480 (0.175886) | 0.005839 / 0.007986 (-0.002147) | 0.003753 / 0.004328 (-0.000576) | 0.075918 / 0.004250 (0.071668) | 0.063233 / 0.037052 (0.026181) | 0.459024 / 0.258489 (0.200535) | 0.506388 / 0.293841 (0.212547) | 0.036179 / 0.128546 (-0.092367) | 0.009961 / 0.075646 (-0.065685) | 0.082061 / 0.419271 (-0.337211) | 0.056469 / 0.043533 (0.012936) | 0.459567 / 0.255139 (0.204428) | 0.482578 / 0.283200 (0.199378) | 0.026363 / 0.141683 (-0.115320) | 1.742247 / 1.452155 (0.290092) | 1.807166 / 1.492716 (0.314450) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.330526 / 0.018006 (0.312520) | 0.511674 / 0.000490 (0.511184) | 0.040969 / 0.000200 (0.040769) | 0.000176 / 0.000054 (0.000121) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035492 / 0.037411 (-0.001920) | 0.104338 / 0.014526 (0.089813) | 0.116973 / 0.176557 (-0.059583) | 0.180218 / 0.737135 (-0.556917) | 0.118801 / 0.296338 (-0.177538) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492196 / 0.215209 (0.276987) | 4.910271 / 2.077655 (2.832616) | 2.542562 / 1.504120 (1.038442) | 2.333516 / 1.541195 (0.792321) | 2.439682 / 1.468490 (0.971192) | 0.571966 / 4.584777 (-4.012811) | 4.089801 / 3.745712 (0.344089) | 3.732129 / 5.269862 (-1.537733) | 2.375887 / 4.565676 (-2.189789) | 0.067376 / 0.424275 (-0.356900) | 0.008350 / 0.007607 (0.000743) | 0.583942 / 0.226044 (0.357897) | 5.840002 / 2.268929 (3.571074) | 3.062520 / 55.444624 (-52.382104) | 2.722512 / 6.876477 (-4.153965) | 2.938307 / 2.142072 (0.796234) | 0.689459 / 4.805227 (-4.115769) | 0.155632 / 6.500664 (-6.345032) | 0.072387 / 0.075469 (-0.003082) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.595587 / 1.841788 (-0.246201) | 23.035478 / 8.074308 (14.961170) | 16.457675 / 10.191392 (6.266283) | 0.170819 / 0.680424 (-0.509605) | 0.022042 / 0.534201 (-0.512159) | 0.466824 / 0.579283 (-0.112459) | 0.486350 / 0.434364 (0.051986) | 0.574330 / 0.540337 (0.033993) | 0.764913 / 1.386936 (-0.622023) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#664a1cb72ea1e6ef7c47e671e2686ca4a35e8d63 \"CML watermark\")\n" ]
2023-08-04T03:13:30
2023-08-16T10:13:02
2023-08-16T10:02:52
CONTRIBUTOR
null
Closes #6116
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6119/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6119", "html_url": "https://github.com/huggingface/datasets/pull/6119", "diff_url": "https://github.com/huggingface/datasets/pull/6119.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6119.patch", "merged_at": "2023-08-16T10:02:52" }
true
https://api.github.com/repos/huggingface/datasets/issues/6118
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6118/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6118/comments
https://api.github.com/repos/huggingface/datasets/issues/6118/events
https://github.com/huggingface/datasets/issues/6118
1,835,940,417
I_kwDODunzps5tbjpB
6,118
IterableDataset.from_generator() fails with pickle error when provided a generator or iterator
{ "login": "finkga", "id": 1281051, "node_id": "MDQ6VXNlcjEyODEwNTE=", "avatar_url": "https://avatars.githubusercontent.com/u/1281051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/finkga", "html_url": "https://github.com/finkga", "followers_url": "https://api.github.com/users/finkga/followers", "following_url": "https://api.github.com/users/finkga/following{/other_user}", "gists_url": "https://api.github.com/users/finkga/gists{/gist_id}", "starred_url": "https://api.github.com/users/finkga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/finkga/subscriptions", "organizations_url": "https://api.github.com/users/finkga/orgs", "repos_url": "https://api.github.com/users/finkga/repos", "events_url": "https://api.github.com/users/finkga/events{/privacy}", "received_events_url": "https://api.github.com/users/finkga/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! `IterableDataset.from_generator` expects a generator function, not the object (to be consistent with `Dataset.from_generator`).\r\n\r\nYou can fix the above snippet as follows:\r\n```python\r\ntrain_dataset = IterableDataset.from_generator(line_generator, fn_kwargs={\"files\": model_training_files})\r\n```" ]
2023-08-04T01:45:04
2023-08-17T17:58:27
null
NONE
null
### Describe the bug **Description** Providing a generator in an instantiation of IterableDataset.from_generator() fails with `TypeError: cannot pickle 'generator' object` when the `generator` argument is supplied with a generator object (rather than a generator function). **Code example** ``` from pathlib import Path from typing import List from datasets import IterableDataset def line_generator(files: List[Path]): if isinstance(files, str): files = [Path(files)] for file in files: if isinstance(file, str): file = Path(file) yield from open(file,'r').readlines() ... model_training_files = ['file1.txt', 'file2.txt', 'file3.txt'] train_dataset = IterableDataset.from_generator(generator=line_generator(model_training_files)) ``` **Traceback** Traceback (most recent call last): File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/contextlib.py", line 135, in __exit__ self.gen.throw(type, value, traceback) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 691, in _no_cache_fields yield File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 701, in dumps dump(obj, file) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 676, in dump Pickler(file, recurse=True).dump(obj) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 394, in dump StockPickler.dump(self, obj) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 487, in dump self.save(obj) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 666, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 560, in save f(self, obj) # Call unbound method with explicit self File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 1186, in save_module_dict StockPickler.save_dict(pickler, obj) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 971, in save_dict self._batch_setitems(obj.items()) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 997, in _batch_setitems save(v) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 666, in save dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id) File "/Users/d3p692/code/clem_bert/venv/lib/python3.9/site-packages/dill/_dill.py", line 388, in save StockPickler.save(self, obj, save_persistent_id) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/pickle.py", line 578, in save rv = reduce(self.proto) TypeError: cannot pickle 'generator' object ### Steps to reproduce the bug 1. Create a set of text files to iterate over. 2. Create a generator that returns the lines in each file until all files are exhausted. 3. Instantiate the dataset over the generator via IterableDataset.from_generator(). 4. Wait for the explosion. ### Expected behavior I would expect that, since the function claims to accept a generator, there would be no crash. Instead, I would expect the dataset to return all the lines in the files as queued up in the `line_generator()` function. ### Environment info datasets.__version__ == '2.13.1' Python 3.9.6 Platform: Darwin WE35261 22.5.0 Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:22 PDT 2023; root:xnu-8796.121.3~7/RELEASE_X86_64 x86_64
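For reference, a sketch of the working call per the maintainer's reply: pass the generator function itself and route its arguments through keyword arguments (the `datasets` docs name this parameter `gen_kwargs`; the comment above uses `fn_kwargs`, so check the installed version):

```python
from pathlib import Path
from typing import List, Union

from datasets import IterableDataset


def line_generator(files: List[Union[str, Path]]):
    for file in files:
        yield from open(file, "r").readlines()


model_training_files = ["file1.txt", "file2.txt", "file3.txt"]

# Pass the function, not a generator object, so it can be pickled.
train_dataset = IterableDataset.from_generator(
    line_generator,
    gen_kwargs={"files": model_training_files},
)
```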
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6118/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6117/comments
https://api.github.com/repos/huggingface/datasets/issues/6117/events
https://github.com/huggingface/datasets/pull/6117
1,835,213,848
PR_kwDODunzps5XHktw
6,117
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6117). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012516 / 0.011353 (0.001163) | 0.004725 / 0.011008 (-0.006283) | 0.112245 / 0.038508 (0.073736) | 0.079146 / 0.023109 (0.056037) | 0.386415 / 0.275898 (0.110517) | 0.420441 / 0.323480 (0.096961) | 0.005682 / 0.007986 (-0.002304) | 0.004169 / 0.004328 (-0.000160) | 0.077847 / 0.004250 (0.073597) | 0.055763 / 0.037052 (0.018711) | 0.385529 / 0.258489 (0.127040) | 0.422711 / 0.293841 (0.128870) | 0.047212 / 0.128546 (-0.081334) | 0.013711 / 0.075646 (-0.061935) | 0.342856 / 0.419271 (-0.076416) | 0.066788 / 0.043533 (0.023255) | 0.380728 / 0.255139 (0.125589) | 0.416241 / 0.283200 (0.133041) | 0.034676 / 0.141683 (-0.107007) | 1.679661 / 1.452155 (0.227506) | 1.838014 / 1.492716 (0.345297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219556 / 0.018006 (0.201550) | 0.524728 / 0.000490 (0.524238) | 0.005045 / 0.000200 (0.004845) | 0.000124 / 0.000054 (0.000069) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025475 / 0.037411 (-0.011936) | 0.085937 / 0.014526 (0.071412) | 0.099245 / 0.176557 (-0.077311) | 0.158995 / 0.737135 (-0.578141) | 0.101504 / 0.296338 (-0.194835) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.582200 / 0.215209 (0.366991) | 5.794340 / 2.077655 (3.716685) | 2.473635 / 1.504120 (0.969515) | 2.168135 / 1.541195 (0.626941) | 2.215886 / 1.468490 (0.747396) | 0.855599 / 4.584777 (-3.729178) | 5.003067 / 3.745712 (1.257354) | 4.503566 / 5.269862 (-0.766295) | 2.912248 / 4.565676 (-1.653428) | 0.103267 / 0.424275 (-0.321008) | 0.012114 / 0.007607 (0.004507) | 0.712240 / 0.226044 (0.486196) | 7.131946 / 2.268929 (4.863017) | 3.280052 / 55.444624 (-52.164573) | 2.583472 / 6.876477 (-4.293004) | 2.820758 / 2.142072 (0.678686) | 1.132097 / 4.805227 (-3.673131) | 0.232191 / 6.500664 (-6.268473) | 0.082966 / 0.075469 (0.007497) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.581125 / 1.841788 (-0.260662) | 22.723878 / 8.074308 (14.649570) | 19.969347 / 10.191392 (9.777955) | 0.234365 / 0.680424 (-0.446059) | 0.030245 / 0.534201 (-0.503956) | 0.470843 / 0.579283 (-0.108440) | 0.558069 / 0.434364 (0.123705) | 0.534878 / 0.540337 (-0.005460) | 0.801025 / 1.386936 (-0.585911) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008524 / 0.011353 (-0.002829) | 0.005083 / 0.011008 (-0.005925) | 0.078054 / 0.038508 (0.039546) | 0.082025 / 0.023109 (0.058915) | 0.458027 / 0.275898 (0.182129) | 0.498232 / 0.323480 (0.174752) | 0.005938 / 0.007986 (-0.002048) | 0.003776 / 0.004328 (-0.000553) | 0.080413 / 0.004250 (0.076163) | 0.060485 / 0.037052 (0.023433) | 0.462816 / 0.258489 (0.204327) | 0.513970 / 0.293841 (0.220129) | 0.047574 / 0.128546 (-0.080973) | 0.013424 / 0.075646 (-0.062222) | 0.087707 / 0.419271 (-0.331565) | 0.065007 / 0.043533 (0.021474) | 0.465844 / 0.255139 (0.210705) | 0.498474 / 0.283200 (0.215274) | 0.033518 / 0.141683 (-0.108164) | 1.737507 / 1.452155 (0.285352) | 1.848291 / 1.492716 (0.355574) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.316710 / 0.018006 (0.298703) | 0.504415 / 0.000490 (0.503925) | 0.042128 / 0.000200 
(0.041928) | 0.000171 / 0.000054 (0.000117) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032097 / 0.037411 (-0.005314) | 0.099371 / 0.014526 (0.084845) | 0.109311 / 0.176557 (-0.067246) | 0.177373 / 0.737135 (-0.559762) | 0.110753 / 0.296338 (-0.185585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.688060 / 0.215209 (0.472851) | 6.255219 / 2.077655 (4.177564) | 2.696845 / 1.504120 (1.192725) | 2.395424 / 1.541195 (0.854230) | 2.414870 / 1.468490 (0.946380) | 0.865704 / 4.584777 (-3.719073) | 5.086828 / 3.745712 (1.341116) | 4.648107 / 5.269862 (-0.621754) | 3.091119 / 4.565676 (-1.474558) | 0.101787 / 0.424275 (-0.322489) | 0.008829 / 0.007607 (0.001222) | 0.772398 / 0.226044 (0.546354) | 7.700366 / 2.268929 (5.431438) | 3.608632 / 55.444624 (-51.835992) | 2.923309 / 6.876477 (-3.953168) | 2.952141 / 2.142072 (0.810069) | 1.093006 / 4.805227 (-3.712221) | 0.224363 / 6.500664 (-6.276301) | 0.074927 / 0.075469 (-0.000542) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.638414 / 1.841788 (-0.203374) | 23.486781 / 8.074308 (15.412473) | 21.129104 / 10.191392 (10.937712) | 0.259955 / 0.680424 (-0.420469) | 0.027305 / 0.534201 (-0.506895) | 0.464448 / 0.579283 (-0.114835) | 0.553737 / 0.434364 (0.119373) | 0.571318 / 0.540337 (0.030981) | 0.772917 / 1.386936 (-0.614019) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3ec5ee9e78b464364796651d995823c7ecb0f951 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009093 / 0.011353 (-0.002260) | 0.005283 / 0.011008 (-0.005725) | 0.112299 / 0.038508 (0.073791) | 0.081341 / 0.023109 (0.058232) | 0.363799 / 0.275898 (0.087901) | 0.409261 / 0.323480 (0.085781) | 0.006400 / 0.007986 (-0.001586) | 0.003965 / 0.004328 (-0.000363) | 0.074389 / 0.004250 (0.070139) | 0.060654 / 0.037052 (0.023602) | 0.391046 / 0.258489 (0.132557) | 0.430514 / 0.293841 (0.136673) | 0.054900 / 0.128546 (-0.073646) | 0.017972 / 0.075646 (-0.057675) | 0.410875 / 0.419271 (-0.008396) | 0.067405 / 0.043533 (0.023873) | 0.371468 / 0.255139 (0.116329) | 0.435061 / 0.283200 (0.151861) | 0.038063 / 0.141683 (-0.103620) | 1.733509 / 1.452155 (0.281354) | 1.833899 / 1.492716 (0.341182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243230 / 0.018006 (0.225224) | 0.605636 / 0.000490 (0.605146) | 0.004890 / 0.000200 (0.004690) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027624 / 0.037411 (-0.009787) | 0.084799 / 0.014526 (0.070273) | 0.104405 / 0.176557 (-0.072152) | 0.165383 / 0.737135 (-0.571752) | 0.102083 / 0.296338 (-0.194255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.578334 / 0.215209 (0.363125) | 5.369520 / 2.077655 (3.291866) | 2.294174 / 1.504120 (0.790055) | 2.054195 / 1.541195 (0.513000) | 2.007304 / 1.468490 (0.538814) | 0.839283 / 4.584777 (-3.745494) | 5.262288 / 3.745712 (1.516576) | 4.363346 / 5.269862 (-0.906516) | 2.854903 / 4.565676 (-1.710773) | 0.096975 / 0.424275 (-0.327300) | 0.008237 / 0.007607 (0.000630) | 0.646746 / 0.226044 (0.420702) | 6.250621 / 2.268929 (3.981693) | 2.900377 / 55.444624 (-52.544247) | 2.283238 / 6.876477 (-4.593239) | 2.443785 / 2.142072 (0.301713) | 0.991719 / 4.805227 (-3.813508) | 0.189755 / 6.500664 (-6.310909) | 0.067906 / 0.075469 (-0.007563) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.515563 / 1.841788 (-0.326225) | 21.956499 / 8.074308 (13.882191) | 19.161750 / 10.191392 (8.970358) | 0.238199 / 0.680424 (-0.442225) | 0.026771 / 0.534201 (-0.507430) | 0.450195 / 0.579283 (-0.129088) | 0.585168 / 
0.434364 (0.150804) | 0.522945 / 0.540337 (-0.017393) | 0.776244 / 1.386936 (-0.610693) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007997 / 0.011353 (-0.003356) | 0.005021 / 0.011008 (-0.005988) | 0.087308 / 0.038508 (0.048800) | 0.077760 / 0.023109 (0.054650) | 0.425313 / 0.275898 (0.149415) | 0.451470 / 0.323480 (0.127990) | 0.006848 / 0.007986 (-0.001137) | 0.004812 / 0.004328 (0.000484) | 0.071198 / 0.004250 (0.066947) | 0.058325 / 0.037052 (0.021273) | 0.427411 / 0.258489 (0.168922) | 0.466069 / 0.293841 (0.172228) | 0.048686 / 0.128546 (-0.079861) | 0.011841 / 0.075646 (-0.063806) | 0.086225 / 0.419271 (-0.333047) | 0.060500 / 0.043533 (0.016967) | 0.435580 / 0.255139 (0.180441) | 0.456919 / 0.283200 (0.173719) | 0.035094 / 0.141683 (-0.106588) | 1.582805 / 1.452155 (0.130650) | 1.717838 / 1.492716 (0.225122) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283967 / 0.018006 (0.265960) | 0.517496 / 0.000490 (0.517006) | 0.014747 / 0.000200 (0.014547) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027870 / 0.037411 (-0.009541) | 0.083835 / 0.014526 (0.069309) | 0.099157 / 0.176557 (-0.077400) | 0.173210 / 0.737135 (-0.563925) | 0.094212 / 0.296338 (-0.202127) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.535720 / 0.215209 (0.320511) | 5.273730 / 2.077655 (3.196075) | 2.422560 / 1.504120 (0.918440) | 
2.131416 / 1.541195 (0.590222) | 2.192000 / 1.468490 (0.723510) | 0.708469 / 4.584777 (-3.876308) | 4.758092 / 3.745712 (1.012380) | 3.940729 / 5.269862 (-1.329133) | 2.553093 / 4.565676 (-2.012583) | 0.084895 / 0.424275 (-0.339380) | 0.008730 / 0.007607 (0.001123) | 0.646975 / 0.226044 (0.420930) | 6.294811 / 2.268929 (4.025883) | 3.293964 / 55.444624 (-52.150660) | 2.568985 / 6.876477 (-4.307492) | 2.743786 / 2.142072 (0.601713) | 0.899733 / 4.805227 (-3.905494) | 0.193484 / 6.500664 (-6.307181) | 0.070012 / 0.075469 (-0.005457) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.502255 / 1.841788 (-0.339532) | 20.690234 / 8.074308 (12.615926) | 18.375791 / 10.191392 (8.184399) | 0.200135 / 0.680424 (-0.480289) | 0.029434 / 0.534201 (-0.504767) | 0.477267 / 0.579283 (-0.102016) | 0.566869 / 0.434364 (0.132505) | 0.543756 / 0.540337 (0.003418) | 0.700476 / 1.386936 (-0.686460) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef17d9fd6c648bb41d43ba301c3de4d7b6f833d8 \"CML watermark\")\n" ]
2023-08-03T14:46:04
2023-08-03T14:56:59
2023-08-03T14:46:18
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6117/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6117/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6117", "html_url": "https://github.com/huggingface/datasets/pull/6117", "diff_url": "https://github.com/huggingface/datasets/pull/6117.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6117.patch", "merged_at": "2023-08-03T14:46:18" }
true
https://api.github.com/repos/huggingface/datasets/issues/6116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6116/comments
https://api.github.com/repos/huggingface/datasets/issues/6116/events
https://github.com/huggingface/datasets/issues/6116
1,835,098,484
I_kwDODunzps5tYWF0
6,116
[Docs] The "Process" how-to guide lacks description of `select_columns` function
{ "login": "unifyh", "id": 18213435, "node_id": "MDQ6VXNlcjE4MjEzNDM1", "avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unifyh", "html_url": "https://github.com/unifyh", "followers_url": "https://api.github.com/users/unifyh/followers", "following_url": "https://api.github.com/users/unifyh/following{/other_user}", "gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}", "starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unifyh/subscriptions", "organizations_url": "https://api.github.com/users/unifyh/orgs", "repos_url": "https://api.github.com/users/unifyh/repos", "events_url": "https://api.github.com/users/unifyh/events{/privacy}", "received_events_url": "https://api.github.com/users/unifyh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Great idea, feel free to open a PR! :)" ]
2023-08-03T13:45:10
2023-08-16T10:02:53
2023-08-16T10:02:53
CONTRIBUTOR
null
### Feature request The [how to process dataset guide](https://huggingface.co./docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co./docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the guide. ### Motivation This function is a commonly requested feature (see this [forum thread](https://discuss.huggingface.co/t/how-to-create-a-new-dataset-from-another-dataset-and-select-specific-columns-and-the-data-along-with-the-column/15120) and #5468 #5474). However, it has not been included in the guide since its implementation by PR #5480. Mentioning it in the guide would help future users discover this added feature. ### Your contribution I could submit a PR to add a brief description of the function to said guide.
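For context, a minimal illustration of the function the guide should cover; the dataset name is just an example:

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
# Keep only the listed columns; all other columns are dropped.
ds = ds.select_columns(["text"])
print(ds.column_names)  # ['text']
```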
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6116/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6115/comments
https://api.github.com/repos/huggingface/datasets/issues/6115/events
https://github.com/huggingface/datasets/pull/6115
1,834,765,485
PR_kwDODunzps5XGChP
6,115
Release: 2.14.3
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007578 / 0.011353 (-0.003775) | 0.004271 / 0.011008 (-0.006738) | 0.086607 / 0.038508 (0.048098) | 0.063209 / 0.023109 (0.040099) | 0.351724 / 0.275898 (0.075826) | 0.399261 / 0.323480 (0.075781) | 0.004767 / 0.007986 (-0.003219) | 0.003487 / 0.004328 (-0.000842) | 0.071483 / 0.004250 (0.067233) | 0.051281 / 0.037052 (0.014229) | 0.387726 / 0.258489 (0.129237) | 0.408446 / 0.293841 (0.114605) | 0.041189 / 0.128546 (-0.087357) | 0.012446 / 0.075646 (-0.063200) | 0.331147 / 0.419271 (-0.088124) | 0.056721 / 0.043533 (0.013188) | 0.361306 / 0.255139 (0.106167) | 0.409651 / 0.283200 (0.126451) | 0.035485 / 0.141683 (-0.106198) | 1.461391 / 1.452155 (0.009236) | 1.554820 / 1.492716 (0.062104) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237119 / 0.018006 (0.219113) | 0.518731 / 0.000490 (0.518241) | 0.004192 / 0.000200 (0.003992) | 0.000114 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024912 / 0.037411 (-0.012499) | 0.089420 / 0.014526 (0.074894) | 0.091209 / 0.176557 (-0.085347) | 0.152580 / 0.737135 (-0.584555) | 0.089660 / 0.296338 (-0.206678) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.515223 / 0.215209 (0.300014) | 5.328359 / 2.077655 (3.250705) | 1.974326 / 1.504120 (0.470206) | 1.665216 / 1.541195 (0.124021) | 1.736040 / 1.468490 
(0.267550) | 0.734746 / 4.584777 (-3.850031) | 4.186613 / 3.745712 (0.440901) | 3.535760 / 5.269862 (-1.734102) | 2.333247 / 4.565676 (-2.232429) | 0.071845 / 0.424275 (-0.352430) | 0.006147 / 0.007607 (-0.001460) | 0.546649 / 0.226044 (0.320605) | 5.452281 / 2.268929 (3.183353) | 2.512984 / 55.444624 (-52.931640) | 2.104210 / 6.876477 (-4.772267) | 2.409251 / 2.142072 (0.267178) | 0.822797 / 4.805227 (-3.982430) | 0.166648 / 6.500664 (-6.334016) | 0.056350 / 0.075469 (-0.019119) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.397798 / 1.841788 (-0.443989) | 20.549399 / 8.074308 (12.475091) | 19.118168 / 10.191392 (8.926776) | 0.216361 / 0.680424 (-0.464063) | 0.027064 / 0.534201 (-0.507136) | 0.410762 / 0.579283 (-0.168521) | 0.559225 / 0.434364 (0.124861) | 0.468028 / 0.540337 (-0.072309) | 0.691520 / 1.386936 (-0.695416) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006463 / 0.011353 (-0.004890) | 0.003879 / 0.011008 (-0.007130) | 0.058723 / 0.038508 (0.020215) | 0.057202 / 0.023109 (0.034092) | 0.344397 / 0.275898 (0.068499) | 0.360388 / 0.323480 (0.036908) | 0.005502 / 0.007986 (-0.002483) | 0.004101 / 0.004328 (-0.000227) | 0.058168 / 0.004250 (0.053917) | 0.059112 / 0.037052 (0.022060) | 0.362206 / 0.258489 (0.103717) | 0.386444 / 0.293841 (0.092603) | 0.036613 / 0.128546 (-0.091934) | 0.010482 / 0.075646 (-0.065165) | 0.065850 / 0.419271 (-0.353421) | 0.046528 / 0.043533 (0.002995) | 0.349568 / 0.255139 (0.094429) | 0.360181 / 0.283200 (0.076981) | 0.029030 / 0.141683 (-0.112653) | 1.314569 / 1.452155 (-0.137586) | 1.422393 / 1.492716 (-0.070324) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.281554 / 0.018006 (0.263548) | 0.608018 / 0.000490 (0.607528) | 0.004568 / 0.000200 (0.004368) | 0.000182 / 0.000054 (0.000127) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023515 / 0.037411 (-0.013896) | 0.072994 / 0.014526 (0.058468) | 0.080688 / 0.176557 (-0.095868) | 0.125904 / 0.737135 (-0.611232) | 0.085457 / 0.296338 (-0.210882) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471530 / 0.215209 (0.256321) | 4.796197 / 2.077655 (2.718542) | 2.189181 / 1.504120 (0.685061) | 1.886649 / 1.541195 (0.345454) | 1.871067 / 1.468490 (0.402577) | 0.661043 / 4.584777 (-3.923734) | 4.344027 / 3.745712 (0.598315) | 3.656967 / 5.269862 (-1.612895) | 2.286033 / 4.565676 (-2.279644) | 0.079146 / 0.424275 (-0.345129) | 0.006840 / 0.007607 (-0.000767) | 0.588750 / 0.226044 (0.362706) | 6.301286 / 2.268929 (4.032357) | 3.074702 / 55.444624 (-52.369923) | 2.398739 / 6.876477 (-4.477738) | 2.555057 / 2.142072 (0.412985) | 0.874189 / 4.805227 (-3.931038) | 0.191423 / 6.500664 (-6.309241) | 0.061227 / 0.075469 (-0.014242) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.472763 / 1.841788 (-0.369024) | 19.441304 / 8.074308 (11.366996) | 15.974276 / 10.191392 (5.782884) | 0.172503 / 0.680424 (-0.507921) | 0.027016 / 0.534201 (-0.507185) | 0.356085 / 0.579283 (-0.223198) | 0.473251 / 0.434364 (0.038887) | 0.427949 / 0.540337 (-0.112388) | 0.588924 / 1.386936 (-0.798013) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0973da6e60ac7c1d24229ba6aa6881747b21858a \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006166 / 0.011353 (-0.005187) | 0.003558 / 0.011008 (-0.007450) | 0.080576 / 0.038508 (0.042068) | 0.066542 / 0.023109 (0.043432) | 0.323997 / 0.275898 (0.048099) | 0.369828 / 0.323480 (0.046348) | 0.004896 / 0.007986 (-0.003090) | 0.002909 / 0.004328 (-0.001419) | 0.062553 / 0.004250 (0.058302) | 0.049795 / 0.037052 (0.012742) | 0.321369 / 0.258489 (0.062880) | 0.422860 / 0.293841 (0.129019) | 0.027394 / 0.128546 (-0.101152) | 0.007954 / 0.075646 (-0.067693) | 0.264122 / 0.419271 (-0.155149) | 0.044881 / 0.043533 (0.001349) | 0.316702 / 0.255139 (0.061563) | 0.374718 / 0.283200 (0.091518) | 0.021728 / 0.141683 (-0.119955) | 1.394456 / 1.452155 (-0.057699) | 1.474936 / 1.492716 (-0.017780) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191902 / 0.018006 (0.173896) | 0.430468 / 0.000490 (0.429979) | 0.003790 / 0.000200 (0.003590) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024974 / 0.037411 (-0.012438) | 0.073053 / 0.014526 (0.058527) | 0.083801 / 0.176557 (-0.092756) | 0.143457 / 0.737135 (-0.593678) | 0.085099 / 0.296338 (-0.211240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428411 / 0.215209 (0.213202) | 4.278077 / 2.077655 (2.200422) | 2.230039 / 1.504120 (0.725919) | 2.057191 / 1.541195 (0.515996) | 2.120109 / 1.468490 (0.651619) | 0.495242 / 4.584777 (-4.089535) | 3.031299 / 3.745712 (-0.714413) | 2.802685 / 5.269862 (-2.467176) | 1.839828 / 4.565676 (-2.725849) | 0.056875 / 0.424275 (-0.367401) | 0.006446 / 0.007607 (-0.001161) | 0.498958 / 0.226044 (0.272913) | 4.980440 / 2.268929 (2.711511) | 2.659659 / 55.444624 (-52.784965) | 2.315174 / 6.876477 (-4.561303) | 2.475920 / 2.142072 (0.333848) | 0.586946 / 4.805227 (-4.218282) | 0.124291 / 6.500664 (-6.376373) | 0.060701 / 0.075469 (-0.014768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245062 / 1.841788 (-0.596725) | 18.201444 / 8.074308 (10.127136) | 13.723271 / 10.191392 (3.531879) | 0.130203 / 0.680424 (-0.550221) | 0.016773 / 0.534201 (-0.517428) | 0.332909 / 0.579283 (-0.246374) | 0.347469 / 0.434364 (-0.086895) | 0.381364 / 0.540337 (-0.158973) | 0.541723 / 
1.386936 (-0.845213) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005934 / 0.011353 (-0.005419) | 0.003573 / 0.011008 (-0.007435) | 0.062195 / 0.038508 (0.023687) | 0.059026 / 0.023109 (0.035917) | 0.413993 / 0.275898 (0.138095) | 0.459552 / 0.323480 (0.136072) | 0.004610 / 0.007986 (-0.003376) | 0.002907 / 0.004328 (-0.001421) | 0.062983 / 0.004250 (0.058733) | 0.047797 / 0.037052 (0.010745) | 0.415461 / 0.258489 (0.156972) | 0.417424 / 0.293841 (0.123583) | 0.027098 / 0.128546 (-0.101449) | 0.008106 / 0.075646 (-0.067540) | 0.067600 / 0.419271 (-0.351672) | 0.041432 / 0.043533 (-0.002101) | 0.407861 / 0.255139 (0.152722) | 0.430774 / 0.283200 (0.147575) | 0.020738 / 0.141683 (-0.120945) | 1.435127 / 1.452155 (-0.017028) | 1.486961 / 1.492716 (-0.005755) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231174 / 0.018006 (0.213168) | 0.421208 / 0.000490 (0.420718) | 0.005411 / 0.000200 (0.005211) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025362 / 0.037411 (-0.012049) | 0.078534 / 0.014526 (0.064008) | 0.085304 / 0.176557 (-0.091252) | 0.139048 / 0.737135 (-0.598087) | 0.087015 / 0.296338 (-0.209323) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448506 / 0.215209 (0.233297) | 4.486694 / 2.077655 (2.409039) | 2.488022 / 1.504120 (0.983902) | 2.325321 / 1.541195 (0.784126) | 2.381311 / 1.468490 (0.912821) 
| 0.502102 / 4.584777 (-4.082675) | 3.018326 / 3.745712 (-0.727386) | 2.824922 / 5.269862 (-2.444940) | 1.857414 / 4.565676 (-2.708263) | 0.057514 / 0.424275 (-0.366761) | 0.006829 / 0.007607 (-0.000779) | 0.521939 / 0.226044 (0.295895) | 5.224393 / 2.268929 (2.955465) | 2.933132 / 55.444624 (-52.511492) | 2.661187 / 6.876477 (-4.215290) | 2.781950 / 2.142072 (0.639878) | 0.592927 / 4.805227 (-4.212300) | 0.126685 / 6.500664 (-6.373979) | 0.064188 / 0.075469 (-0.011281) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.351107 / 1.841788 (-0.490681) | 18.344453 / 8.074308 (10.270145) | 13.838788 / 10.191392 (3.647396) | 0.157881 / 0.680424 (-0.522543) | 0.016636 / 0.534201 (-0.517565) | 0.331597 / 0.579283 (-0.247686) | 0.345573 / 0.434364 (-0.088791) | 0.397361 / 0.540337 (-0.142976) | 0.534289 / 1.386936 (-0.852647) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#582e722a76534904c0f3038d32ebb8db88ce9128 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006399 / 0.011353 (-0.004954) | 0.003872 / 0.011008 (-0.007136) | 0.083722 / 0.038508 (0.045214) | 0.068845 / 0.023109 (0.045736) | 0.329112 / 0.275898 (0.053214) | 0.343295 / 0.323480 (0.019815) | 0.005137 / 0.007986 (-0.002849) | 0.003303 / 0.004328 (-0.001026) | 0.064495 / 0.004250 (0.060245) | 0.051448 / 0.037052 (0.014395) | 0.322554 / 0.258489 (0.064065) | 0.361934 / 0.293841 (0.068093) | 0.030821 / 0.128546 (-0.097726) | 0.008482 / 0.075646 (-0.067164) | 0.288136 / 0.419271 (-0.131135) | 0.051935 / 0.043533 (0.008402) | 0.308283 / 0.255139 (0.053144) | 0.343421 / 0.283200 (0.060221) | 0.023639 / 0.141683 (-0.118044) | 1.485442 / 1.452155 (0.033288) | 1.533282 / 1.492716 (0.040565) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218163 / 0.018006 (0.200157) | 0.464473 / 0.000490 (0.463983) | 0.003097 / 0.000200 (0.002897) | 
0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028650 / 0.037411 (-0.008761) | 0.083295 / 0.014526 (0.068769) | 0.096468 / 0.176557 (-0.080088) | 0.152086 / 0.737135 (-0.585050) | 0.102586 / 0.296338 (-0.193752) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.393038 / 0.215209 (0.177829) | 3.925514 / 2.077655 (1.847859) | 1.938419 / 1.504120 (0.434300) | 1.760265 / 1.541195 (0.219071) | 1.810024 / 1.468490 (0.341534) | 0.486232 / 4.584777 (-4.098545) | 3.618747 / 3.745712 (-0.126965) | 3.206950 / 5.269862 (-2.062912) | 1.999240 / 4.565676 (-2.566436) | 0.056986 / 0.424275 (-0.367289) | 0.007193 / 0.007607 (-0.000415) | 0.469313 / 0.226044 (0.243269) | 4.688670 / 2.268929 (2.419741) | 2.400332 / 55.444624 (-53.044292) | 2.074197 / 6.876477 (-4.802279) | 2.290823 / 2.142072 (0.148751) | 0.582339 / 4.805227 (-4.222888) | 0.134127 / 6.500664 (-6.366537) | 0.061061 / 0.075469 (-0.014408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272782 / 1.841788 (-0.569006) | 19.463375 / 8.074308 (11.389067) | 14.306819 / 10.191392 (4.115427) | 0.164608 / 0.680424 (-0.515816) | 0.018626 / 0.534201 (-0.515575) | 0.395225 / 0.579283 (-0.184058) | 0.408984 / 0.434364 (-0.025380) | 0.463364 / 0.540337 (-0.076974) | 0.630425 / 1.386936 (-0.756511) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006465 / 0.011353 (-0.004888) | 0.003975 / 0.011008 (-0.007033) | 0.063643 / 0.038508 (0.025134) | 0.075214 / 0.023109 (0.052105) | 0.361734 / 0.275898 (0.085836) | 0.396664 / 0.323480 (0.073184) | 0.005251 / 0.007986 (-0.002735) | 0.003249 / 0.004328 (-0.001080) | 0.063841 / 0.004250 (0.059591) | 0.054504 / 0.037052 (0.017451) | 0.374791 / 0.258489 (0.116302) | 0.399205 / 0.293841 (0.105364) | 0.031355 / 0.128546 (-0.097192) | 0.008483 / 0.075646 (-0.067163) | 0.070234 / 0.419271 (-0.349037) | 0.048336 / 0.043533 (0.004803) | 0.373484 / 0.255139 (0.118345) | 0.382174 / 0.283200 (0.098974) | 0.022560 / 0.141683 (-0.119123) | 1.449799 / 1.452155 (-0.002355) | 1.525255 / 1.492716 (0.032539) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228350 / 0.018006 (0.210343) | 0.444344 / 0.000490 (0.443855) | 0.003699 / 0.000200 (0.003499) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030681 / 0.037411 (-0.006731) | 0.087340 / 0.014526 (0.072814) | 0.098636 / 0.176557 (-0.077920) | 0.151665 / 0.737135 (-0.585471) | 0.100840 / 0.296338 (-0.195498) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417857 / 0.215209 (0.202648) | 4.168407 / 2.077655 (2.090752) | 2.201758 / 1.504120 (0.697638) | 1.997834 / 1.541195 (0.456639) | 2.127693 / 1.468490 (0.659202) | 0.486429 / 4.584777 (-4.098348) | 3.676335 / 3.745712 (-0.069378) | 3.226268 / 5.269862 (-2.043594) | 2.027255 / 4.565676 (-2.538422) | 0.056759 / 0.424275 (-0.367516) | 0.007628 / 0.007607 (0.000021) | 0.500482 / 0.226044 (0.274438) | 4.996236 / 2.268929 (2.727307) | 2.628884 / 55.444624 (-52.815740) | 2.347611 / 6.876477 (-4.528866) | 2.551328 / 2.142072 (0.409255) | 0.582449 / 4.805227 (-4.222778) | 0.132844 / 6.500664 (-6.367821) | 0.061791 / 0.075469 (-0.013678) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.373718 / 1.841788 (-0.468070) | 19.921217 / 8.074308 (11.846909) | 14.209642 / 10.191392 (4.018250) | 0.185334 / 0.680424 (-0.495090) | 0.018228 / 0.534201 (-0.515973) | 0.395549 / 0.579283 (-0.183734) | 0.404446 / 0.434364 (-0.029918) | 0.472456 / 0.540337 (-0.067882) | 0.622739 / 1.386936 (-0.764197) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33f736eafa0f77de03aa6894ea4a6c923702e5d1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006007 / 0.011353 (-0.005346) | 0.003588 / 0.011008 (-0.007420) | 0.080334 / 0.038508 (0.041826) | 0.058932 / 0.023109 (0.035823) | 0.404613 / 0.275898 (0.128715) | 0.438377 / 0.323480 (0.114897) | 0.003468 / 0.007986 (-0.004518) | 0.003702 / 0.004328 (-0.000627) | 0.062936 / 0.004250 (0.058686) | 0.047987 / 0.037052 (0.010934) | 0.411409 / 0.258489 (0.152920) | 0.450244 / 0.293841 (0.156403) | 0.027007 / 0.128546 (-0.101539) | 0.007932 / 0.075646 (-0.067714) | 0.261390 / 0.419271 (-0.157882) | 0.044992 / 0.043533 (0.001459) | 0.409730 / 0.255139 (0.154591) | 0.433331 / 0.283200 (0.150131) | 0.020446 / 0.141683 (-0.121237) | 1.425418 / 1.452155 (-0.026736) | 1.479242 / 1.492716 (-0.013475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187375 / 0.018006 (0.169368) | 0.428532 / 0.000490 (0.428043) | 0.003406 / 0.000200 (0.003206) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024390 / 0.037411 (-0.013022) | 0.072571 / 0.014526 (0.058045) | 0.083513 / 0.176557 (-0.093044) | 0.144395 / 0.737135 (-0.592741) | 0.084813 / 0.296338 (-0.211526) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409176 / 0.215209 
(0.193967) | 4.078082 / 2.077655 (2.000428) | 1.913596 / 1.504120 (0.409476) | 1.718470 / 1.541195 (0.177275) | 1.753106 / 1.468490 (0.284616) | 0.494167 / 4.584777 (-4.090610) | 3.029531 / 3.745712 (-0.716181) | 2.807331 / 5.269862 (-2.462531) | 1.839471 / 4.565676 (-2.726206) | 0.057169 / 0.424275 (-0.367106) | 0.006433 / 0.007607 (-0.001175) | 0.482666 / 0.226044 (0.256621) | 4.817601 / 2.268929 (2.548673) | 2.449967 / 55.444624 (-52.994658) | 2.113891 / 6.876477 (-4.762586) | 2.399293 / 2.142072 (0.257221) | 0.578903 / 4.805227 (-4.226324) | 0.124306 / 6.500664 (-6.376358) | 0.061572 / 0.075469 (-0.013897) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254692 / 1.841788 (-0.587096) | 18.414049 / 8.074308 (10.339741) | 13.992059 / 10.191392 (3.800667) | 0.146671 / 0.680424 (-0.533753) | 0.016925 / 0.534201 (-0.517275) | 0.333124 / 0.579283 (-0.246159) | 0.348007 / 0.434364 (-0.086357) | 0.378519 / 0.540337 (-0.161819) | 0.532540 / 1.386936 (-0.854396) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006050 / 0.011353 (-0.005303) | 0.003614 / 0.011008 (-0.007394) | 0.061707 / 0.038508 (0.023199) | 0.062874 / 0.023109 (0.039765) | 0.364760 / 0.275898 (0.088862) | 0.398136 / 0.323480 (0.074656) | 0.005598 / 0.007986 (-0.002388) | 0.002836 / 0.004328 (-0.001493) | 0.061880 / 0.004250 (0.057630) | 0.048165 / 0.037052 (0.011113) | 0.372656 / 0.258489 (0.114167) | 0.403967 / 0.293841 (0.110126) | 0.027046 / 0.128546 (-0.101501) | 0.008091 / 0.075646 (-0.067555) | 0.066783 / 0.419271 (-0.352489) | 0.041186 / 0.043533 (-0.002347) | 0.376009 / 0.255139 (0.120870) | 0.391769 / 0.283200 (0.108569) | 0.021020 / 0.141683 (-0.120663) | 1.514593 / 1.452155 (0.062438) | 1.548506 / 1.492716 (0.055790) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237610 / 0.018006 (0.219604) | 0.434274 / 0.000490 (0.433784) | 0.009720 / 0.000200 (0.009520) | 0.000098 / 0.000054 
(0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025605 / 0.037411 (-0.011807) | 0.078971 / 0.014526 (0.064445) | 0.088154 / 0.176557 (-0.088403) | 0.139112 / 0.737135 (-0.598023) | 0.088890 / 0.296338 (-0.207449) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420027 / 0.215209 (0.204818) | 4.189493 / 2.077655 (2.111838) | 2.143907 / 1.504120 (0.639787) | 1.967032 / 1.541195 (0.425837) | 2.011845 / 1.468490 (0.543355) | 0.496692 / 4.584777 (-4.088085) | 3.025456 / 3.745712 (-0.720256) | 2.828436 / 5.269862 (-2.441426) | 1.860673 / 4.565676 (-2.705003) | 0.057199 / 0.424275 (-0.367076) | 0.006770 / 0.007607 (-0.000838) | 0.491281 / 0.226044 (0.265236) | 4.918065 / 2.268929 (2.649136) | 2.593172 / 55.444624 (-52.851452) | 2.250750 / 6.876477 (-4.625727) | 2.406235 / 2.142072 (0.264162) | 0.588648 / 4.805227 (-4.216579) | 0.125635 / 6.500664 (-6.375029) | 0.061697 / 0.075469 (-0.013773) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.374065 / 1.841788 (-0.467722) | 18.439315 / 8.074308 (10.365007) | 14.031660 / 10.191392 (3.840268) | 0.153665 / 0.680424 (-0.526759) | 0.016980 / 0.534201 (-0.517221) | 0.331799 / 0.579283 (-0.247484) | 0.343201 / 0.434364 (-0.091163) | 0.392445 / 0.540337 (-0.147892) | 0.530387 / 1.386936 (-0.856549) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33f736eafa0f77de03aa6894ea4a6c923702e5d1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008189 / 0.011353 (-0.003164) | 0.004598 / 0.011008 (-0.006410) | 0.102199 / 0.038508 (0.063691) | 0.077961 / 0.023109 (0.054852) | 0.364936 / 0.275898 (0.089038) | 0.402606 / 0.323480 (0.079126) | 0.005522 / 0.007986 (-0.002464) | 0.004007 / 0.004328 (-0.000322) | 0.071560 / 0.004250 (0.067310) | 0.055818 / 0.037052 (0.018765) | 0.378394 / 0.258489 (0.119905) | 0.428990 / 0.293841 (0.135149) | 0.043142 / 0.128546 (-0.085404) | 0.013254 / 0.075646 (-0.062392) | 0.331102 / 0.419271 (-0.088170) | 0.061407 / 0.043533 (0.017875) | 0.387397 / 0.255139 (0.132258) | 0.416062 / 0.283200 (0.132862) | 0.036330 / 0.141683 (-0.105353) | 1.735352 / 1.452155 (0.283198) | 1.773329 / 1.492716 (0.280613) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.188587 / 0.018006 (0.170581) | 0.519506 / 0.000490 (0.519016) | 0.004702 / 0.000200 (0.004502) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027152 / 0.037411 (-0.010260) | 0.094296 / 0.014526 (0.079770) | 0.098155 / 0.176557 (-0.078402) | 0.162541 / 0.737135 (-0.574595) | 0.112092 / 0.296338 (-0.184246) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.537555 / 0.215209 (0.322346) | 5.486821 / 2.077655 (3.409166) | 2.377127 / 1.504120 (0.873008) | 2.073205 / 1.541195 (0.532011) | 2.075130 / 1.468490 (0.606640) | 0.783779 / 4.584777 (-3.800998) | 5.029524 / 3.745712 (1.283812) | 4.382724 / 5.269862 (-0.887138) | 2.836180 / 4.565676 (-1.729496) | 0.108840 / 0.424275 (-0.315435) | 0.008123 / 0.007607 (0.000516) | 0.673460 / 0.226044 (0.447416) | 6.674030 / 2.268929 (4.405102) | 3.208922 / 55.444624 (-52.235702) | 2.464908 / 6.876477 (-4.411568) | 2.661929 / 2.142072 (0.519856) | 0.962529 / 4.805227 (-3.842698) | 0.197974 / 6.500664 (-6.302690) | 0.066656 / 0.075469 (-0.008813) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.430373 / 1.841788 (-0.411415) | 21.180540 / 8.074308 (13.106232) | 19.027491 / 10.191392 (8.836099) | 0.217520 / 0.680424 (-0.462904) | 0.028038 / 0.534201 (-0.506163) | 0.435266 / 0.579283 (-0.144017) | 0.529510 / 0.434364 (0.095147) | 
0.511011 / 0.540337 (-0.029327) | 0.728940 / 1.386936 (-0.657996) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007883 / 0.011353 (-0.003470) | 0.004448 / 0.011008 (-0.006560) | 0.071350 / 0.038508 (0.032842) | 0.075269 / 0.023109 (0.052160) | 0.396705 / 0.275898 (0.120807) | 0.457809 / 0.323480 (0.134329) | 0.005193 / 0.007986 (-0.002792) | 0.003695 / 0.004328 (-0.000633) | 0.078087 / 0.004250 (0.073836) | 0.054276 / 0.037052 (0.017224) | 0.412184 / 0.258489 (0.153695) | 0.452400 / 0.293841 (0.158559) | 0.049762 / 0.128546 (-0.078784) | 0.013206 / 0.075646 (-0.062440) | 0.085985 / 0.419271 (-0.333287) | 0.058837 / 0.043533 (0.015304) | 0.432481 / 0.255139 (0.177342) | 0.433260 / 0.283200 (0.150060) | 0.031190 / 0.141683 (-0.110493) | 1.582707 / 1.452155 (0.130552) | 1.664457 / 1.492716 (0.171741) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223639 / 0.018006 (0.205633) | 0.524388 / 0.000490 (0.523899) | 0.005489 / 0.000200 (0.005289) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030182 / 0.037411 (-0.007230) | 0.089309 / 0.014526 (0.074783) | 0.103306 / 0.176557 (-0.073250) | 0.162624 / 0.737135 (-0.574511) | 0.108957 / 0.296338 (-0.187381) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.577423 / 0.215209 (0.362214) | 5.900154 / 2.077655 (3.822500) | 2.687369 / 1.504120 (1.183249) | 2.513061 / 1.541195 
(0.971866) | 2.506453 / 1.468490 (1.037963) | 0.830838 / 4.584777 (-3.753939) | 5.032195 / 3.745712 (1.286483) | 4.396827 / 5.269862 (-0.873035) | 2.884230 / 4.565676 (-1.681447) | 0.102239 / 0.424275 (-0.322036) | 0.008178 / 0.007607 (0.000571) | 0.710027 / 0.226044 (0.483983) | 7.149626 / 2.268929 (4.880698) | 3.403605 / 55.444624 (-52.041019) | 2.661970 / 6.876477 (-4.214506) | 2.760227 / 2.142072 (0.618154) | 1.043981 / 4.805227 (-3.761246) | 0.195028 / 6.500664 (-6.305636) | 0.065211 / 0.075469 (-0.010258) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.581265 / 1.841788 (-0.260522) | 21.640230 / 8.074308 (13.565922) | 19.031860 / 10.191392 (8.840468) | 0.196903 / 0.680424 (-0.483520) | 0.027061 / 0.534201 (-0.507140) | 0.444995 / 0.579283 (-0.134288) | 0.528195 / 0.434364 (0.093831) | 0.521540 / 0.540337 (-0.018797) | 0.730204 / 1.386936 (-0.656732) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#33f736eafa0f77de03aa6894ea4a6c923702e5d1 \"CML watermark\")\n" ]
2023-08-03T10:18:32
2023-08-03T15:08:02
2023-08-03T10:24:57
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6115/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6115", "html_url": "https://github.com/huggingface/datasets/pull/6115", "diff_url": "https://github.com/huggingface/datasets/pull/6115.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6115.patch", "merged_at": "2023-08-03T10:24:57" }
true
https://api.github.com/repos/huggingface/datasets/issues/6114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6114/comments
https://api.github.com/repos/huggingface/datasets/issues/6114/events
https://github.com/huggingface/datasets/issues/6114
1,834,015,584
I_kwDODunzps5tUNtg
6,114
Cache not being used when loading commonvoice 8.0.0
{ "login": "clabornd", "id": 31082141, "node_id": "MDQ6VXNlcjMxMDgyMTQx", "avatar_url": "https://avatars.githubusercontent.com/u/31082141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clabornd", "html_url": "https://github.com/clabornd", "followers_url": "https://api.github.com/users/clabornd/followers", "following_url": "https://api.github.com/users/clabornd/following{/other_user}", "gists_url": "https://api.github.com/users/clabornd/gists{/gist_id}", "starred_url": "https://api.github.com/users/clabornd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clabornd/subscriptions", "organizations_url": "https://api.github.com/users/clabornd/orgs", "repos_url": "https://api.github.com/users/clabornd/repos", "events_url": "https://api.github.com/users/clabornd/events{/privacy}", "received_events_url": "https://api.github.com/users/clabornd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "You can avoid this by using the `revision` parameter in `load_dataset` to always force downloading a specific commit (if not specified it defaults to HEAD, hence the redownload).", "Thanks @mariosasko this works well, looks like I should have read the documentation a bit more carefully. \r\n\r\nIt is still a bit confusing which hash I should provide: passing `revision = c8fd66e85f086e3abb11eeee55b1737a3d1e8487` from https://huggingface.co./datasets/mozilla-foundation/common_voice_8_0/commits/main caused the cached version at `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a` to be loaded, so I had to know that it was the previous commit unless I've missed something else." ]
2023-08-02T23:18:11
2023-08-18T23:59:00
2023-08-18T23:59:00
NONE
null
### Describe the bug I have Common Voice 8.0.0 downloaded in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. The folder contains all the Arrow files etc., and was used as the cached version the last time I touched the EC2 instance I'm working on. Now, the same command that downloaded it initially: ``` dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>") ``` tries to redownload the dataset to `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/05bdc7940b0a336ceeaeef13470c89522c29a8e4494cbeece64fb472a87acb32` ### Steps to reproduce the bug 1. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")``` 2. The dataset is updated by the maintainers. 3. ```dataset = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token="<mytoken>")``` ### Expected behavior I expect it to use the already downloaded data in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. I'm not sure what's happening in step 2, but if, say, it's an issue with the dataset referenced by "mozilla-foundation/common_voice_8_0" being modified by the maintainers, how would I force `datasets` to point to the original version I downloaded? EDIT: It was indeed that the maintainers had updated the dataset (v8.0.0). However, I still can't load the dataset from disk instead of redownloading, with for example: ``` load_dataset(".cache/huggingface/datasets/downloads/extracted/<hash>/cv-corpus-8.0-2022-01-19/en/", "en") > ... > File [~/miniconda3/envs/aa_torch2/lib/python3.10/site-packages/datasets/table.py:1938](.../ python3.10/site-packages/datasets/table.py:1938), in cast_array_to_feature(array, feature, allow_number_to_str) 1937 elif not isinstance(feature, (Sequence, dict, list, tuple)): -> 1938 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) ... 1794 e = e.__context__ -> 1795 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1797 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` ### Environment info datasets==2.7.0 python==3.10.8 OS: AWS Linux
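A sketch of the workaround discussed in the comments above: pinning `load_dataset` to a specific repo commit so the originally cached copy keeps being used. The commit hash and token below are placeholders, not values from the report:

```python
from datasets import load_dataset

# Pin the dataset repo to the commit that produced the cached Arrow files.
# "<commit-sha>" is a placeholder; pick the commit from the repo's history
# that matches the cache directory you want to keep using.
dataset = load_dataset(
    "mozilla-foundation/common_voice_8_0",
    "en",
    revision="<commit-sha>",
    use_auth_token="<mytoken>",  # placeholder token, as in the report
)
```

Note the caveat raised in the follow-up comment: the commit that matches an existing cache directory may be an earlier commit than HEAD, so finding the right hash can take some trial and error.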
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6114/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6113/comments
https://api.github.com/repos/huggingface/datasets/issues/6113/events
https://github.com/huggingface/datasets/issues/6113
1,833,854,030
I_kwDODunzps5tTmRO
6,113
load_dataset() fails with streamlit caching inside docker
{ "login": "fierval", "id": 987574, "node_id": "MDQ6VXNlcjk4NzU3NA==", "avatar_url": "https://avatars.githubusercontent.com/u/987574?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fierval", "html_url": "https://github.com/fierval", "followers_url": "https://api.github.com/users/fierval/followers", "following_url": "https://api.github.com/users/fierval/following{/other_user}", "gists_url": "https://api.github.com/users/fierval/gists{/gist_id}", "starred_url": "https://api.github.com/users/fierval/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fierval/subscriptions", "organizations_url": "https://api.github.com/users/fierval/orgs", "repos_url": "https://api.github.com/users/fierval/repos", "events_url": "https://api.github.com/users/fierval/events{/privacy}", "received_events_url": "https://api.github.com/users/fierval/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! This should be fixed in the latest (patch) release (run `pip install -U datasets` to install it). This behavior was due to a bug in our authentication logic." ]
2023-08-02T20:20:26
2023-08-21T18:18:27
2023-08-21T18:18:27
NONE
null
### Describe the bug When calling `load_dataset` in a Streamlit application running within a Docker container, I get a failure with the error message: EmptyDatasetError: The directory at hf://datasets/fetch-rewards/inc-rings-2000@bea27cf60842b3641eae418f38864a2ec4cde684 doesn't contain any data files Traceback: File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script exec(code, module.__dict__) File "/home/user/app/app.py", line 62, in <module> dashboard() File "/home/user/app/app.py", line 47, in dashboard feat_dict, path_gml = load_data(hf_repo, model_gml_dict[selected_model], hf_token) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 211, in wrapper return cached_func(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 240, in __call__ return self._get_or_create_cached_value(args, kwargs) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 266, in _get_or_create_cached_value return self._handle_cache_miss(cache, value_key, func_args, func_kwargs) File "/opt/conda/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 320, in _handle_cache_miss computed_value = self._info.func(*func_args, **func_kwargs) File "/home/user/app/hf_interface.py", line 16, in load_data hf_dataset = load_dataset(repo_id, use_auth_token=hf_token) File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2109, in load_dataset builder_instance = load_dataset_builder( File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1795, in load_dataset_builder dataset_module = dataset_module_factory( File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1486, in dataset_module_factory raise e1 from None File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1476, in dataset_module_factory ).get_module() File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1032, in get_module else get_data_patterns(base_path, download_config=self.download_config) File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 458, in get_data_patterns raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None ### Steps to reproduce the bug ```python @st.cache_resource def load_data(repo_id: str, hf_token=None): """Load data from HuggingFace Hub """ hf_dataset = load_dataset(repo_id, use_auth_token=hf_token) hf_dataset = hf_dataset.map(lambda x: json.loads(x["ground_truth"]), remove_columns=["ground_truth"]) return hf_dataset ``` ### Expected behavior I expect the dataset to load. Note: it works fine with datasets==2.13.1. ### Environment info datasets==2.14.2, Ubuntu bionic-based Docker container.
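For reference, a self-contained restatement of the report's loader that should work once `datasets` is upgraded to the patch release the maintainer's comment mentions (assumed here to be 2.14.3); the `repo_id`, token, and `ground_truth` column come from the report, everything else is unchanged:

```python
import json

import streamlit as st
from datasets import load_dataset


@st.cache_resource
def load_data(repo_id: str, hf_token=None):
    """Load a dataset from the Hugging Face Hub and decode its JSON column.

    Assumes datasets>=2.14.3, where the authentication bug behind the
    EmptyDatasetError above is fixed.
    """
    hf_dataset = load_dataset(repo_id, use_auth_token=hf_token)
    # Parse the JSON-encoded "ground_truth" column into top-level fields.
    return hf_dataset.map(
        lambda x: json.loads(x["ground_truth"]),
        remove_columns=["ground_truth"],
    )
```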
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6113/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6112/comments
https://api.github.com/repos/huggingface/datasets/issues/6112/events
https://github.com/huggingface/datasets/issues/6112
1,833,693,299
I_kwDODunzps5tS_Bz
6,112
yaml error using push_to_hub with generated README.md
{ "login": "kevintee", "id": 1643887, "node_id": "MDQ6VXNlcjE2NDM4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/1643887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kevintee", "html_url": "https://github.com/kevintee", "followers_url": "https://api.github.com/users/kevintee/followers", "following_url": "https://api.github.com/users/kevintee/following{/other_user}", "gists_url": "https://api.github.com/users/kevintee/gists{/gist_id}", "starred_url": "https://api.github.com/users/kevintee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevintee/subscriptions", "organizations_url": "https://api.github.com/users/kevintee/orgs", "repos_url": "https://api.github.com/users/kevintee/repos", "events_url": "https://api.github.com/users/kevintee/events{/privacy}", "received_events_url": "https://api.github.com/users/kevintee/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting! This is a bug in converting the `ArrayXD` types to YAML. It will be fixed soon." ]
2023-08-02T18:21:21
2023-08-17T16:53:24
null
NONE
null
### Describe the bug When I construct a dataset with the following features: ``` features = Features( { "pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)), "input_ids": Sequence(feature=Value(dtype="int64")), "attention_mask": Sequence(Value(dtype="int64")), "tokens": Sequence(Value(dtype="string")), "bbox": Array2D(dtype="int64", shape=(512, 4)), } ) ``` and run `push_to_hub`, the individual `*.parquet` files are pushed, but when trying to upload the auto-generated README, I run into the following error: ``` Traceback (most recent call last): File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 261, in hf_raise_for_status response.raise_for_status() File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co./api/datasets/looppayments/multitask_document_classification_dataset/commit/main The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 297, in <module> build_dataset() File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 290, in build_dataset push_to_hub(dataset, "multitask_document_classification_dataset") File "/Users/kevintee/loop-payments/ml/src/ml/data_scripts/build_document_classification_training_data.py", line 135, in push_to_hub dataset.push_to_hub(f"looppayments/{dataset_name}", private=True) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 5577, in push_to_hub HfApi(endpoint=config.HF_ENDPOINT).upload_file( File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner return fn(self, *args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3221, in upload_file commit_info = self.create_commit( File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 828, in _inner return fn(self, *args, **kwargs) File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2728, in create_commit hf_raise_for_status(commit_resp, endpoint_name="commit") File "/Users/kevintee/.pyenv/versions/dev2/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 299, in hf_raise_for_status raise BadRequestError(message, response=response) from e huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-64ca9c3d-2d2bbef354e102482a9a168e;bc00371c-8549-4859-9f41-43ff140ad36e) Bad request for commit endpoint: Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (10:9) 7 | - 3 8 | - 224 9 | - 224 10 | dtype: float64 --------------^ 11 | - name: input_ids 12 | sequence: int64 ``` My guess is that the auto-generated yaml is unable to be parsed for some reason. 
### Steps to reproduce the bug The description contains most of what's needed to reproduce the issue, but I've added a shortened code snippet: ``` from datasets import Array2D, Array3D, ClassLabel, Dataset, Features, Sequence, Value from PIL import Image from transformers import AutoProcessor features = Features( { "pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)), "input_ids": Sequence(feature=Value(dtype="int64")), "attention_mask": Sequence(Value(dtype="int64")), "tokens": Sequence(Value(dtype="string")), "bbox": Array2D(dtype="int64", shape=(512, 4)), } ) processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False) def preprocess_dataset(rows): # Get images images = [ Image.open(png_filename).convert("RGB") for png_filename in rows["png_filename"] ] encoding = processor( images, rows["tokens"], boxes=rows["bbox"], truncation=True, padding="max_length", ) encoding["tokens"] = rows["tokens"] return encoding dataset = dataset.map( preprocess_dataset, batched=True, batch_size=5, features=features, ) ``` ### Expected behavior Using datasets==2.11.0, I'm able to successfully push_to_hub, no issues, but with datasets==2.14.2, I run into the above error. ### Environment info - `datasets` version: 2.14.2 - Platform: macOS-12.5-arm64-arm-64bit - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 1.5.3
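A hedged interim workaround (not the official fix, which the maintainers say is coming): write the data to parquet locally and upload it with `huggingface_hub`, bypassing `push_to_hub`'s auto-generated README that triggers the YAML error. The repo id and file layout below are hypothetical:
```python
from huggingface_hub import HfApi

# Write the prepared dataset (from the snippet above) to a single parquet file.
dataset.to_parquet("train.parquet")

# Upload the file directly so no README with broken YAML is generated.
api = HfApi()
api.upload_file(
    path_or_fileobj="train.parquet",
    path_in_repo="data/train-00000-of-00001.parquet",
    repo_id="your-org/your-dataset",  # hypothetical repo id
    repo_type="dataset",
)
```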
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6112/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6112/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6111/comments
https://api.github.com/repos/huggingface/datasets/issues/6111/events
https://github.com/huggingface/datasets/issues/6111
1,832,781,654
I_kwDODunzps5tPgdW
6,111
raise FileNotFoundError("Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." )
{ "login": "2catycm", "id": 41530341, "node_id": "MDQ6VXNlcjQxNTMwMzQx", "avatar_url": "https://avatars.githubusercontent.com/u/41530341?v=4", "gravatar_id": "", "url": "https://api.github.com/users/2catycm", "html_url": "https://github.com/2catycm", "followers_url": "https://api.github.com/users/2catycm/followers", "following_url": "https://api.github.com/users/2catycm/following{/other_user}", "gists_url": "https://api.github.com/users/2catycm/gists{/gist_id}", "starred_url": "https://api.github.com/users/2catycm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/2catycm/subscriptions", "organizations_url": "https://api.github.com/users/2catycm/orgs", "repos_url": "https://api.github.com/users/2catycm/repos", "events_url": "https://api.github.com/users/2catycm/events{/privacy}", "received_events_url": "https://api.github.com/users/2catycm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "any idea?", "This should work: `load_dataset(\"path/to/downloaded_repo\")`\r\n\r\n`load_from_disk` is intended to be used on directories created with `Dataset.save_to_disk` or `DatasetDict.save_to_disk`", "> This should work: `load_dataset(\"path/to/downloaded_repo\")`\r\n> \r\n> `load_from_disk` is intended to be used on directories created with `Dataset.save_to_disk` or `DatasetDict.save_to_disk`\r\n\r\nThanks for your help. This works." ]
2023-08-02T09:17:29
2023-08-29T02:00:28
2023-08-29T02:00:28
NONE
null
### Describe the bug For researchers in some countries or regions, it is usually the case that the download ability of `load_dataset` is disabled due to the complex network environment. People in these regions often prefer to use git clone or other programming tricks to manually download the files to the disk (for example, [How to elegantly download hf models, zhihu zhuanlan](https://zhuanlan.zhihu.com/p/475260268) proposed a crawler-based solution, and [Is there any mirror for hf_hub, zhihu answer](https://www.zhihu.com/question/371644077) provided some cloud-based solutions, and [How to avoid pitfalls on Hugging face downloading, zhihu zhuanlan] gave some useful suggestions), and then use `load_from_disk` to get the dataset object. However, even when one finally has the local files on disk, loading the files into objects is still buggy. ### Steps to reproduce the bug Steps to reproduce the bug: 1. Find the CIFAR dataset on Hugging Face: https://huggingface.co./datasets/cifar100/tree/main 2. Click the ":" button to show the "Clone repository" option, and then follow the prompts in the box: ```bash cd my_directory_absolute git lfs install git clone https://huggingface.co./datasets/cifar100 ls my_directory_absolute/cifar100 # confirm that the directory exists and it is OK. ``` 3. Write a Python file to try to load the dataset ```python from datasets import load_dataset, load_from_disk dataset = load_from_disk("my_directory_absolute/cifar100") ``` Notice that according to issue #3700, it is wrong to use load_dataset("my_directory_absolute/cifar100"), so we must use load_from_disk instead. 4. Then you will see the error reported: ```log --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) Cell In[5], line 9 1 from datasets import load_dataset, load_from_disk ----> 9 dataset = load_from_disk("my_directory_absolute/cifar100") File ~/miniconda3/envs/ai/lib/python3.10/site-packages/datasets/load.py:2232, in load_from_disk(dataset_path, fs, keep_in_memory, storage_options) 2230 return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options) 2231 else: -> 2232 raise FileNotFoundError( 2233 f"Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." 2234 ) FileNotFoundError: Directory my_directory_absolute/cifar100 is neither a `Dataset` directory nor a `DatasetDict` directory. ``` ### Expected behavior The dataset should load successfully. ### Environment info ```bash datasets-cli env ``` -> results: ```txt Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.14.2 - Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3 ```
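Following the maintainers' reply in the comments above, a minimal sketch of the working call on the cloned repository:
```python
from datasets import load_dataset

# Per the comments above: a git-cloned Hub repository is loaded with
# load_dataset, not load_from_disk (which only reads directories created
# by Dataset.save_to_disk or DatasetDict.save_to_disk).
dataset = load_dataset("my_directory_absolute/cifar100")
```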
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6111/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6110
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6110/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6110/comments
https://api.github.com/repos/huggingface/datasets/issues/6110/events
https://github.com/huggingface/datasets/issues/6110
1,831,110,633
I_kwDODunzps5tJIfp
6,110
[BUG] Dataset initialized from in-memory data does not create cache.
{ "login": "MattYoon", "id": 57797966, "node_id": "MDQ6VXNlcjU3Nzk3OTY2", "avatar_url": "https://avatars.githubusercontent.com/u/57797966?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MattYoon", "html_url": "https://github.com/MattYoon", "followers_url": "https://api.github.com/users/MattYoon/followers", "following_url": "https://api.github.com/users/MattYoon/following{/other_user}", "gists_url": "https://api.github.com/users/MattYoon/gists{/gist_id}", "starred_url": "https://api.github.com/users/MattYoon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MattYoon/subscriptions", "organizations_url": "https://api.github.com/users/MattYoon/orgs", "repos_url": "https://api.github.com/users/MattYoon/repos", "events_url": "https://api.github.com/users/MattYoon/events{/privacy}", "received_events_url": "https://api.github.com/users/MattYoon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This is expected behavior. You must provide `cache_file_name` when performing `.map` on an in-memory dataset for the result to be cached." ]
2023-08-01T11:58:58
2023-08-17T14:03:01
2023-08-17T14:03:00
NONE
null
### Describe the bug A `Dataset` initialized from in-memory data (a dictionary in my case, haven't tested with other types) does not create a cache when processed with the `map` method, unlike a `Dataset` initialized by other methods such as `load_dataset`. ### Steps to reproduce the bug ```python # the code below was run the second time so the map function can be loaded from cache if it exists from datasets import load_dataset, Dataset dataset = load_dataset("tatsu-lab/alpaca")['train'] dataset = dataset.map(lambda x: {'input': x['input'] + 'hi'}) # some random map print(len(dataset.cache_files)) # 1 # copy the exact same data but initialize from a dictionary memory_dataset = Dataset.from_dict({ 'instruction': dataset['instruction'], 'input': dataset['input'], 'output': dataset['output'], 'text': dataset['text']}) memory_dataset = memory_dataset.map(lambda x: {'input': x['input'] + 'hi'}) # exact same map print(len(memory_dataset.cache_files)) # Map: 100%|██████████| 52002/52002 # 0 ``` ### Expected behavior The `map` function should create a cache regardless of how the `Dataset` was created. ### Environment info - `datasets` version: 2.14.2 - Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.14.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
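As the maintainers' comment above notes, caching an in-memory dataset's `map` result requires an explicit `cache_file_name`. A minimal sketch continuing the snippet above (the cache path is a hypothetical choice):
```python
# Name the cache file explicitly so the map result of the in-memory
# dataset is written to disk and reloaded on subsequent runs.
memory_dataset = memory_dataset.map(
    lambda x: {"input": x["input"] + "hi"},
    cache_file_name="./alpaca_map_cache.arrow",  # hypothetical path
)
```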
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6110/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6109/comments
https://api.github.com/repos/huggingface/datasets/issues/6109/events
https://github.com/huggingface/datasets/issues/6109
1,830,753,793
I_kwDODunzps5tHxYB
6,109
Problems in downloading Amazon reviews from HF
{ "login": "610v4nn1", "id": 52964960, "node_id": "MDQ6VXNlcjUyOTY0OTYw", "avatar_url": "https://avatars.githubusercontent.com/u/52964960?v=4", "gravatar_id": "", "url": "https://api.github.com/users/610v4nn1", "html_url": "https://github.com/610v4nn1", "followers_url": "https://api.github.com/users/610v4nn1/followers", "following_url": "https://api.github.com/users/610v4nn1/following{/other_user}", "gists_url": "https://api.github.com/users/610v4nn1/gists{/gist_id}", "starred_url": "https://api.github.com/users/610v4nn1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/610v4nn1/subscriptions", "organizations_url": "https://api.github.com/users/610v4nn1/orgs", "repos_url": "https://api.github.com/users/610v4nn1/repos", "events_url": "https://api.github.com/users/610v4nn1/events{/privacy}", "received_events_url": "https://api.github.com/users/610v4nn1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @610v4nn1.\r\n\r\nIndeed, the source data files are no longer available. We have contacted the authors of the dataset and they report that Amazon has decided to stop distributing the multilingual reviews dataset.\r\n\r\nWe are adding a notification about this issue to the dataset card.\r\n\r\nSee: https://huggingface.co./datasets/amazon_reviews_multi/discussions/4#64c3898db63057f1fd3ce1a0 " ]
2023-08-01T08:38:29
2023-08-02T07:12:07
2023-08-02T07:12:07
NONE
null
### Describe the bug I have a script downloading `amazon_reviews_multi`. When the download starts, I get ``` Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 243B [00:00, 1.43MB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.54s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 842.40it/s] Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 243B [00:00, 928kB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.42s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 832.70it/s] Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 243B [00:00, 1.81MB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.40s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 1294.14it/s] Generating train split: 0%| | 0/200000 [00:00<?, ? examples/s] ``` The file is clearly too small to contain the requested dataset; in fact, it contains an error message: ``` <?xml version="1.0" encoding="UTF-8"?> <Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>AGJWSY3ZADT2QVWE</RequestId><HostId>Gx1O2KXnxtQFqvzDLxyVSTq3+TTJuTnuVFnJL3SP89Yp8UzvYLPTVwd1PpniE4EvQzT3tCaqEJw=</HostId></Error> ``` Obviously, the script fails: ``` > raise DatasetGenerationError("An error occurred while generating the dataset") from e E datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug 1. load_dataset("amazon_reviews_multi", name="en", split="train", cache_dir="ADDYOURPATHHERE") ### Expected behavior I would expect the dataset to be downloaded and processed ### Environment info * The problem is present with both datasets 2.12.0 and 2.14.2 * python version 3.10.12
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6109/timeline
null
not_planned
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6108/comments
https://api.github.com/repos/huggingface/datasets/issues/6108/events
https://github.com/huggingface/datasets/issues/6108
1,830,347,187
I_kwDODunzps5tGOGz
6,108
Loading local datasets got strangely stuck
{ "login": "LoveCatc", "id": 48412571, "node_id": "MDQ6VXNlcjQ4NDEyNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/48412571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LoveCatc", "html_url": "https://github.com/LoveCatc", "followers_url": "https://api.github.com/users/LoveCatc/followers", "following_url": "https://api.github.com/users/LoveCatc/following{/other_user}", "gists_url": "https://api.github.com/users/LoveCatc/gists{/gist_id}", "starred_url": "https://api.github.com/users/LoveCatc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LoveCatc/subscriptions", "organizations_url": "https://api.github.com/users/LoveCatc/orgs", "repos_url": "https://api.github.com/users/LoveCatc/repos", "events_url": "https://api.github.com/users/LoveCatc/events{/privacy}", "received_events_url": "https://api.github.com/users/LoveCatc/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Yesterday I waited for more than 12 hours to make sure it was really **stuck** instead of proceeding too slow.", "I've had similar weird issues with `load_dataset` as well. Not multiple files, but dataset is quite big, about 50G.", "We use a generic multiprocessing code, so there is little we can do about this - unfortunately, turning off multiprocessing seems to be the only solution. Multithreading would make our code easier to maintain and (most likely) avoid issues such as this one, but we cannot use it until the GIL is dropped (no-GIL Python should be released in 2024, so we can start exploring this then)" ]
2023-08-01T02:28:06
2023-08-17T17:36:45
null
NONE
null
### Describe the bug I try to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a JSON structure containing only one key, `text` (yes, it is a dataset for an NLP model). The code snippet is: ```python ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS, num_proc=16)['train'] ``` However, I found that the loading process can get stuck -- the progress bar `Generating train split` no longer proceeds. While trying to find the cause and a solution, I found a really strange behavior. If I load the dataset in this way: ```python dlist = list() for _ in LIST_OF_FILE_PATHS: dlist.append(load_dataset("json", data_files=_)['train']) ds = concatenate_datasets(dlist) ``` I can actually successfully load all the files despite the slow speed. But if I load them in a batch like above, things go wrong. I did try to use Control-C to trace the stuck point, but the program cannot be terminated in this way when `num_proc` is set to `None`. The only thing I can do is use Control-Z to hang it up and then kill it. If I use more than 2 cpus, a Control-C would simply cause the following error: ```bash ^C Process ForkPoolWorker-1: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 314, in _bootstrap self.run() File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 114, in worker task = get() File "/usr/local/lib/python3.10/dist-packages/multiprocess/queues.py", line 368, in get res = self._reader.recv_bytes() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 224, in recv_bytes buf = self._recv_bytes(maxlength) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes buf = self._recv(4) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv chunk = read(handle, remaining) KeyboardInterrupt Generating train split: 92431 examples [01:23, 1104.25 examples/s] Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1373, in iflatmap_unordered yield queue.get(timeout=0.05) File "<string>", line 2, in get File "/usr/local/lib/python3.10/dist-packages/multiprocess/managers.py", line 818, in _callmethod kind, result = conn.recv() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 258, in recv buf = self._recv_bytes() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes buf = self._recv(4) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv chunk = read(handle, remaining) KeyboardInterrupt During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/data/liyongyuan/source/batch_load.py", line 11, in <module> a = load_dataset( File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2133, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 954, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1049, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1842, in _prepare_split for job_id, done, content in iflatmap_unordered( File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in iflatmap_unordered [async_result.get(timeout=0.05) for async_result in async_results] File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in <listcomp> [async_result.get(timeout=0.05) for async_result in async_results] File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 770, in get raise TimeoutError multiprocess.context.TimeoutError ``` I have validated the basic correctness of these `.jsonl` files. They are correctly formatted (otherwise they could not be loaded individually by `load_dataset`), though some of the JSON may contain very long text (more than 1e7 characters). I do not know if this could be the problem. And there should not be any bottleneck in system resources. The whole dataset is ~300GB, and I am using a cloud server with plenty of storage and 1TB ram. Thanks for your efforts and patience! Any suggestion or help would be appreciated. ### Steps to reproduce the bug 1. use load_dataset() with `data_files = LIST_OF_FILES` ### Expected behavior All the files should load smoothly. ### Environment info - Datasets: A private dataset. ~2500 `.jsonl` files. ~300GB in total. Each json structure only contains one key: `text`. Format checked. - `datasets` version: 2.14.2 - Platform: Linux-4.19.91-014.kangaroo.alios7.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - PyArrow version: 10.0.1.dev0+ga6eabc2b.d20230609 - Pandas version: 1.5.2
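Per the maintainers' comment above, the only known workaround is to turn off multiprocessing. A minimal sketch (the file paths are placeholders standing in for the report's `LIST_OF_FILE_PATHS`):
```python
from datasets import load_dataset

LIST_OF_FILE_PATHS = ["part-0.jsonl", "part-1.jsonl"]  # placeholder paths

# Workaround from the comments above: omit num_proc so the files are read
# in a single process (slower, but avoids the multiprocessing hang).
ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS)["train"]
```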
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6108/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6108/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6107/comments
https://api.github.com/repos/huggingface/datasets/issues/6107/events
https://github.com/huggingface/datasets/pull/6107
1,829,625,320
PR_kwDODunzps5W0rLR
6,107
Fix deprecation of use_auth_token in file_utils
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007678 / 0.011353 (-0.003675) | 0.004233 / 0.011008 (-0.006776) | 0.095934 / 0.038508 (0.057426) | 0.064201 / 0.023109 (0.041092) | 0.345765 / 0.275898 (0.069867) | 0.383089 / 0.323480 (0.059609) | 0.004084 / 0.007986 (-0.003902) | 0.003311 / 0.004328 (-0.001017) | 0.072367 / 0.004250 (0.068117) | 0.048252 / 0.037052 (0.011200) | 0.338340 / 0.258489 (0.079851) | 0.391627 / 0.293841 (0.097786) | 0.045203 / 0.128546 (-0.083343) | 0.013494 / 0.075646 (-0.062153) | 0.314097 / 0.419271 (-0.105174) | 0.058183 / 0.043533 (0.014650) | 0.353946 / 0.255139 (0.098807) | 0.385181 / 0.283200 (0.101981) | 0.033111 / 0.141683 (-0.108572) | 1.578489 / 1.452155 (0.126335) | 1.631660 / 1.492716 (0.138944) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202592 / 0.018006 (0.184586) | 0.506450 / 0.000490 (0.505961) | 0.004630 / 0.000200 (0.004430) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024761 / 0.037411 (-0.012651) | 0.086295 / 0.014526 (0.071769) | 0.094063 / 0.176557 (-0.082494) | 0.154189 / 0.737135 (-0.582947) | 0.096273 / 0.296338 (-0.200065) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.581731 / 0.215209 (0.366522) | 5.552020 / 2.077655 (3.474365) | 2.430800 / 1.504120 (0.926680) | 2.130864 / 1.541195 (0.589669) | 2.092802 / 1.468490 
(0.624312) | 0.833956 / 4.584777 (-3.750821) | 4.840859 / 3.745712 (1.095147) | 4.267812 / 5.269862 (-1.002050) | 2.663245 / 4.565676 (-1.902432) | 0.093195 / 0.424275 (-0.331080) | 0.007942 / 0.007607 (0.000335) | 0.651457 / 0.226044 (0.425413) | 6.782986 / 2.268929 (4.514058) | 3.103307 / 55.444624 (-52.341318) | 2.373933 / 6.876477 (-4.502544) | 2.571613 / 2.142072 (0.429540) | 0.981389 / 4.805227 (-3.823839) | 0.199019 / 6.500664 (-6.301645) | 0.065828 / 0.075469 (-0.009641) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.429778 / 1.841788 (-0.412009) | 20.967563 / 8.074308 (12.893255) | 19.329723 / 10.191392 (9.138331) | 0.222048 / 0.680424 (-0.458376) | 0.033507 / 0.534201 (-0.500694) | 0.436801 / 0.579283 (-0.142482) | 0.530197 / 0.434364 (0.095833) | 0.491532 / 0.540337 (-0.048805) | 0.718216 / 1.386936 (-0.668720) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007798 / 0.011353 (-0.003555) | 0.004748 / 0.011008 (-0.006260) | 0.070847 / 0.038508 (0.032339) | 0.069338 / 0.023109 (0.046229) | 0.400890 / 0.275898 (0.124992) | 0.429482 / 0.323480 (0.106002) | 0.006469 / 0.007986 (-0.001517) | 0.003514 / 0.004328 (-0.000814) | 0.069049 / 0.004250 (0.064798) | 0.059800 / 0.037052 (0.022748) | 0.415644 / 0.258489 (0.157155) | 0.432562 / 0.293841 (0.138721) | 0.043778 / 0.128546 (-0.084768) | 0.015141 / 0.075646 (-0.060506) | 0.081521 / 0.419271 (-0.337750) | 0.054692 / 0.043533 (0.011160) | 0.404497 / 0.255139 (0.149358) | 0.419783 / 0.283200 (0.136583) | 0.029588 / 0.141683 (-0.112094) | 1.593506 / 1.452155 (0.141351) | 1.615977 / 1.492716 (0.123261) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.270981 / 0.018006 (0.252975) | 0.522074 / 0.000490 (0.521584) | 0.026568 / 0.000200 (0.026368) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031551 / 0.037411 (-0.005861) | 0.086723 / 0.014526 (0.072197) | 0.103315 / 0.176557 (-0.073242) | 0.154692 / 0.737135 (-0.582443) | 0.099472 / 0.296338 (-0.196866) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.570238 / 0.215209 (0.355029) | 5.655963 / 2.077655 (3.578308) | 2.662670 / 1.504120 (1.158550) | 2.380903 / 1.541195 (0.839709) | 2.409467 / 1.468490 (0.940977) | 0.828055 / 4.584777 (-3.756722) | 4.964698 / 3.745712 (1.218986) | 4.299995 / 5.269862 (-0.969867) | 2.824162 / 4.565676 (-1.741514) | 0.095872 / 0.424275 (-0.328403) | 0.007907 / 0.007607 (0.000300) | 0.701595 / 0.226044 (0.475551) | 7.131965 / 2.268929 (4.863036) | 3.250554 / 55.444624 (-52.194070) | 2.531916 / 6.876477 (-4.344561) | 2.717908 / 2.142072 (0.575835) | 1.014479 / 4.805227 (-3.790748) | 0.223804 / 6.500664 (-6.276861) | 0.071893 / 0.075469 (-0.003576) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541702 / 1.841788 (-0.300086) | 21.668219 / 8.074308 (13.593911) | 18.916032 / 10.191392 (8.724640) | 0.205915 / 0.680424 (-0.474508) | 0.026356 / 0.534201 (-0.507845) | 0.429122 / 0.579283 (-0.150161) | 0.506110 / 0.434364 (0.071746) | 0.510148 / 0.540337 (-0.030190) | 0.724699 / 1.386936 (-0.662237) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c4ca93ff86551b398c979862e7be7305725a240b \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006884 / 0.011353 (-0.004469) | 0.004492 / 0.011008 (-0.006516) | 0.085439 / 0.038508 (0.046931) | 0.083905 / 0.023109 (0.060796) | 0.313604 / 0.275898 (0.037706) | 0.354683 / 0.323480 (0.031203) | 0.006535 / 0.007986 (-0.001451) | 0.004318 / 0.004328 (-0.000011) | 0.066129 / 0.004250 (0.061879) | 0.057568 / 0.037052 (0.020516) | 0.317162 / 0.258489 (0.058672) | 0.372501 / 0.293841 (0.078660) | 0.031059 / 0.128546 (-0.097488) | 0.009013 / 0.075646 (-0.066634) | 0.288794 / 0.419271 (-0.130478) | 0.053326 / 0.043533 (0.009793) | 0.314318 / 0.255139 (0.059179) | 0.357505 / 0.283200 (0.074305) | 0.027020 / 0.141683 (-0.114663) | 1.530653 / 1.452155 (0.078498) | 1.599782 / 1.492716 (0.107066) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278788 / 0.018006 (0.260782) | 0.626822 / 0.000490 (0.626333) | 0.003780 / 0.000200 (0.003580) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031703 / 0.037411 (-0.005708) | 0.085654 / 0.014526 (0.071128) | 0.754858 / 0.176557 (0.578301) | 0.212251 / 0.737135 (-0.524885) | 0.171344 / 0.296338 (-0.124994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382291 / 0.215209 (0.167082) | 3.825612 / 2.077655 (1.747958) | 1.874553 / 1.504120 (0.370433) | 1.712574 / 1.541195 (0.171379) | 1.791479 / 1.468490 (0.322989) | 0.481005 / 4.584777 (-4.103772) | 3.530559 / 3.745712 (-0.215153) | 3.395305 / 5.269862 (-1.874557) | 2.133747 / 4.565676 (-2.431930) | 0.056139 / 0.424275 (-0.368136) | 0.007424 / 0.007607 (-0.000183) | 0.458321 / 0.226044 (0.232277) | 4.577665 / 2.268929 (2.308736) | 2.380233 / 55.444624 (-53.064392) | 2.004060 / 6.876477 (-4.872417) | 2.290712 / 2.142072 (0.148639) | 0.570157 / 4.805227 (-4.235070) | 0.131670 / 6.500664 (-6.368994) | 0.060684 / 0.075469 (-0.014785) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294929 / 1.841788 (-0.546858) | 21.386663 / 8.074308 (13.312355) | 14.389440 / 10.191392 (4.198048) | 0.171177 / 0.680424 (-0.509247) | 0.018660 / 0.534201 (-0.515541) | 0.394385 / 0.579283 (-0.184898) | 0.424942 / 0.434364 (-0.009422) | 0.463618 / 0.540337 (-0.076719) | 0.651499 / 
1.386936 (-0.735437) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007079 / 0.011353 (-0.004274) | 0.004615 / 0.011008 (-0.006393) | 0.066300 / 0.038508 (0.027792) | 0.092636 / 0.023109 (0.069527) | 0.399080 / 0.275898 (0.123182) | 0.429873 / 0.323480 (0.106393) | 0.006689 / 0.007986 (-0.001297) | 0.004358 / 0.004328 (0.000029) | 0.067155 / 0.004250 (0.062905) | 0.064040 / 0.037052 (0.026988) | 0.399905 / 0.258489 (0.141416) | 0.448237 / 0.293841 (0.154397) | 0.031985 / 0.128546 (-0.096561) | 0.009053 / 0.075646 (-0.066593) | 0.071904 / 0.419271 (-0.347368) | 0.048759 / 0.043533 (0.005227) | 0.386797 / 0.255139 (0.131658) | 0.411240 / 0.283200 (0.128040) | 0.028568 / 0.141683 (-0.113115) | 1.501037 / 1.452155 (0.048882) | 1.594560 / 1.492716 (0.101844) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300756 / 0.018006 (0.282750) | 0.631220 / 0.000490 (0.630730) | 0.010163 / 0.000200 (0.009963) | 0.000144 / 0.000054 (0.000089) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033716 / 0.037411 (-0.003695) | 0.093562 / 0.014526 (0.079037) | 0.106975 / 0.176557 (-0.069582) | 0.161919 / 0.737135 (-0.575216) | 0.113397 / 0.296338 (-0.182942) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.410392 / 0.215209 (0.195183) | 4.094411 / 2.077655 (2.016756) | 2.085868 / 1.504120 (0.581748) | 1.959589 / 1.541195 (0.418394) | 2.096683 / 1.468490 (0.628193) | 
0.494593 / 4.584777 (-4.090184) | 3.854302 / 3.745712 (0.108590) | 3.742303 / 5.269862 (-1.527558) | 2.379983 / 4.565676 (-2.185693) | 0.058640 / 0.424275 (-0.365635) | 0.008092 / 0.007607 (0.000484) | 0.486957 / 0.226044 (0.260912) | 4.855784 / 2.268929 (2.586855) | 2.654029 / 55.444624 (-52.790595) | 2.237627 / 6.876477 (-4.638850) | 2.536955 / 2.142072 (0.394882) | 0.622398 / 4.805227 (-4.182829) | 0.139212 / 6.500664 (-6.361452) | 0.062805 / 0.075469 (-0.012664) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.374862 / 1.841788 (-0.466926) | 22.797015 / 8.074308 (14.722707) | 14.393995 / 10.191392 (4.202603) | 0.196603 / 0.680424 (-0.483821) | 0.018602 / 0.534201 (-0.515599) | 0.394568 / 0.579283 (-0.184715) | 0.408792 / 0.434364 (-0.025572) | 0.486706 / 0.540337 (-0.053631) | 0.652365 / 1.386936 (-0.734571) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5713299a88f527ea162a099c2bf2cbceada8fb86 \"CML watermark\")\n" ]
2023-07-31T16:32:01
2023-08-03T10:13:32
2023-08-03T10:04:18
MEMBER
null
Fix issues with the deprecation of `use_auth_token` introduced by: - #5996 in functions: - `get_authentication_headers_for_url` - `request_etag` - `get_from_cache` Currently, `TypeError` is raised: https://github.com/huggingface/datasets-server/actions/runs/5711650666/job/15484685570?pr=1588 ``` FAILED tests/job_runners/config/test_parquet_and_info.py::test__is_too_big_external_files[None-None-False] - TypeError: get_authentication_headers_for_url() got an unexpected keyword argument 'use_auth_token' FAILED tests/job_runners/config/test_parquet_and_info.py::test_fill_builder_info[None-False] - libcommon.exceptions.FileSystemError: Could not read the parquet files: get_authentication_headers_for_url() got an unexpected keyword argument 'use_auth_token' ``` Related to: - #6094
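An illustrative sketch of the kind of deprecation shim this PR restores; this is an assumption about the pattern, not the repository's exact code, and the sentinel-string handling and simplified return value are both hypothetical:
```python
import warnings

def get_authentication_headers_for_url(url, token=None, use_auth_token="deprecated"):
    # Illustrative shim: accept the deprecated kwarg, warn, and forward its
    # value to the new `token` parameter so callers that still pass
    # use_auth_token= no longer hit a TypeError.
    if use_auth_token != "deprecated":
        warnings.warn(
            "'use_auth_token' was deprecated in favor of 'token' and will be removed.",
            FutureWarning,
        )
        token = use_auth_token
    # Simplified header construction, for illustration only.
    return {"authorization": f"Bearer {token}"} if token else {}
```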
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6107/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6107/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6107", "html_url": "https://github.com/huggingface/datasets/pull/6107", "diff_url": "https://github.com/huggingface/datasets/pull/6107.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6107.patch", "merged_at": "2023-08-03T10:04:18" }
true
https://api.github.com/repos/huggingface/datasets/issues/6106
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6106/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6106/comments
https://api.github.com/repos/huggingface/datasets/issues/6106/events
https://github.com/huggingface/datasets/issues/6106
1,829,131,223
I_kwDODunzps5tBlPX
6,106
load local json_file as dataset
{ "login": "CiaoHe", "id": 39040787, "node_id": "MDQ6VXNlcjM5MDQwNzg3", "avatar_url": "https://avatars.githubusercontent.com/u/39040787?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CiaoHe", "html_url": "https://github.com/CiaoHe", "followers_url": "https://api.github.com/users/CiaoHe/followers", "following_url": "https://api.github.com/users/CiaoHe/following{/other_user}", "gists_url": "https://api.github.com/users/CiaoHe/gists{/gist_id}", "starred_url": "https://api.github.com/users/CiaoHe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CiaoHe/subscriptions", "organizations_url": "https://api.github.com/users/CiaoHe/orgs", "repos_url": "https://api.github.com/users/CiaoHe/repos", "events_url": "https://api.github.com/users/CiaoHe/events{/privacy}", "received_events_url": "https://api.github.com/users/CiaoHe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! We use PyArrow to read JSON files, and PyArrow doesn't allow different value types in the same column. #5776 should address this.\r\n\r\nIn the meantime, you can combine `Dataset.from_generator` with the above code to cast the values to the same type. ", "Thanks for your help!" ]
2023-07-31T12:53:49
2023-08-18T01:46:35
2023-08-18T01:46:35
NONE
null
### Describe the bug I tried to load a local JSON file as a dataset but failed to parse the JSON file because some columns are of 'float' type. ### Steps to reproduce the bug 1. Load a JSON file in which certain columns are of 'float' type, for example `data = load_dataset("json", data_files=JSON_PATH)` 2. Then the error will be triggered, like `ArrowInvalid: Could not convert '-0.2253' with type str: tried to convert to double` ### Expected behavior Columns of 'float' type should be allowed; at the very least, those columns should be converted to str type. I tried to avoid the error by naively converting the float items to str: ```python # if col type is not str, we need to convert it to str mapping = {} for col in keys: if isinstance(dataset[0][col], str): mapping[col] = [row.get(col) for row in dataset] else: mapping[col] = [str(row.get(col)) for row in dataset] ``` ### Environment info - `datasets` version: 2.14.2 - Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.0 - Pandas version: 2.0.1
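Following the suggestion in the comment above, a hedged sketch combining `Dataset.from_generator` with casting, assuming the input is a JSON Lines file (the path is hypothetical):
```python
import json
from datasets import Dataset

def gen(path="data.jsonl"):  # hypothetical path, assuming JSON Lines format
    with open(path) as f:
        for line in f:
            row = json.loads(line)
            # Cast every value to str so each column has a single PyArrow type.
            yield {k: str(v) for k, v in row.items()}

dataset = Dataset.from_generator(gen)
```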
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6106/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6106/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6105/comments
https://api.github.com/repos/huggingface/datasets/issues/6105/events
https://github.com/huggingface/datasets/pull/6105
1,829,008,430
PR_kwDODunzps5WyiJD
6,105
Fix error when loading from GCP bucket
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006706 / 0.011353 (-0.004647) | 0.004016 / 0.011008 (-0.006992) | 0.083696 / 0.038508 (0.045188) | 0.074340 / 0.023109 (0.051230) | 0.327338 / 0.275898 (0.051440) | 0.366663 / 0.323480 (0.043183) | 0.004052 / 0.007986 (-0.003934) | 0.003423 / 0.004328 (-0.000906) | 0.064576 / 0.004250 (0.060326) | 0.055037 / 0.037052 (0.017985) | 0.325089 / 0.258489 (0.066600) | 0.379986 / 0.293841 (0.086145) | 0.031614 / 0.128546 (-0.096932) | 0.008553 / 0.075646 (-0.067094) | 0.287430 / 0.419271 (-0.131841) | 0.053032 / 0.043533 (0.009499) | 0.318990 / 0.255139 (0.063851) | 0.364426 / 0.283200 (0.081226) | 0.024926 / 0.141683 (-0.116757) | 1.461835 / 1.452155 (0.009680) | 1.557172 / 1.492716 (0.064456) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212430 / 0.018006 (0.194424) | 0.512891 / 0.000490 (0.512402) | 0.004772 / 0.000200 (0.004572) | 0.000132 / 0.000054 (0.000078) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027873 / 0.037411 (-0.009538) | 0.085598 / 0.014526 (0.071072) | 0.097330 / 0.176557 (-0.079226) | 0.152235 / 0.737135 (-0.584900) | 0.097787 / 0.296338 (-0.198552) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384645 / 0.215209 (0.169436) | 3.841161 / 2.077655 (1.763506) | 
1.863696 / 1.504120 (0.359577) | 1.685082 / 1.541195 (0.143887) | 1.772904 / 1.468490 (0.304414) | 0.480177 / 4.584777 (-4.104599) | 3.601537 / 3.745712 (-0.144175) | 3.273647 / 5.269862 (-1.996214) | 2.014415 / 4.565676 (-2.551261) | 0.056668 / 0.424275 (-0.367607) | 0.007257 / 0.007607 (-0.000350) | 0.458194 / 0.226044 (0.232150) | 4.577311 / 2.268929 (2.308382) | 2.333983 / 55.444624 (-53.110641) | 1.964508 / 6.876477 (-4.911969) | 2.193379 / 2.142072 (0.051307) | 0.577557 / 4.805227 (-4.227670) | 0.133899 / 6.500664 (-6.366765) | 0.060804 / 0.075469 (-0.014665) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.249490 / 1.841788 (-0.592298) | 19.791875 / 8.074308 (11.717567) | 14.418728 / 10.191392 (4.227336) | 0.167788 / 0.680424 (-0.512636) | 0.018993 / 0.534201 (-0.515208) | 0.396141 / 0.579283 (-0.183142) | 0.412427 / 0.434364 (-0.021937) | 0.456718 / 0.540337 (-0.083619) | 0.641383 / 1.386936 (-0.745553) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006546 / 0.011353 (-0.004807) | 0.004059 / 0.011008 (-0.006949) | 0.064523 / 0.038508 (0.026015) | 0.074988 / 0.023109 (0.051878) | 0.388932 / 0.275898 (0.113034) | 0.424496 / 0.323480 (0.101016) | 0.005226 / 0.007986 (-0.002760) | 0.003409 / 0.004328 (-0.000920) | 0.064284 / 0.004250 (0.060034) | 0.056829 / 0.037052 (0.019777) | 0.386457 / 0.258489 (0.127968) | 0.428063 / 0.293841 (0.134222) | 0.031411 / 0.128546 (-0.097136) | 0.008577 / 0.075646 (-0.067070) | 0.070357 / 0.419271 (-0.348915) | 0.048920 / 0.043533 (0.005388) | 0.385197 / 0.255139 (0.130058) | 0.407167 / 0.283200 (0.123967) | 0.024469 / 0.141683 (-0.117214) | 1.482733 / 1.452155 (0.030578) | 1.539027 / 1.492716 (0.046311) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227532 / 0.018006 (0.209526) | 0.448792 / 0.000490 (0.448302) | 0.004139 / 0.000200 (0.003939) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031004 / 0.037411 (-0.006408) | 0.088163 / 0.014526 (0.073637) | 0.101452 / 0.176557 (-0.075105) | 0.152907 / 0.737135 (-0.584229) | 0.102325 / 0.296338 (-0.194014) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418092 / 0.215209 (0.202883) | 4.162277 / 2.077655 (2.084623) | 2.232987 / 1.504120 (0.728867) | 2.143583 / 1.541195 (0.602388) | 2.246142 / 1.468490 (0.777652) | 0.490181 / 4.584777 (-4.094596) | 3.631514 / 3.745712 (-0.114198) | 3.315025 / 5.269862 (-1.954837) | 2.101853 / 4.565676 (-2.463823) | 0.057905 / 0.424275 (-0.366370) | 0.007686 / 0.007607 (0.000079) | 0.489965 / 0.226044 (0.263921) | 4.894375 / 2.268929 (2.625447) | 2.655459 / 55.444624 (-52.789165) | 2.262211 / 6.876477 (-4.614266) | 2.505335 / 2.142072 (0.363263) | 0.591329 / 4.805227 (-4.213898) | 0.133554 / 6.500664 (-6.367110) | 0.061922 / 0.075469 (-0.013547) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.347483 / 1.841788 (-0.494304) | 20.027011 / 8.074308 (11.952703) | 14.430737 / 10.191392 (4.239345) | 0.165767 / 0.680424 (-0.514657) | 0.018460 / 0.534201 (-0.515741) | 0.393790 / 0.579283 (-0.185494) | 0.407213 / 0.434364 (-0.027151) | 0.474459 / 0.540337 (-0.065879) | 0.635054 / 1.386936 (-0.751882) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7f575111481e2e2f4d4fc9180771797f69ebcc44 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007652 / 0.011353 (-0.003701) | 0.004581 / 0.011008 (-0.006427) | 0.101629 / 0.038508 (0.063121) | 0.090233 / 0.023109 (0.067124) | 0.392789 / 0.275898 (0.116891) | 0.432163 / 0.323480 (0.108683) | 0.004694 / 0.007986 (-0.003292) | 0.003927 / 0.004328 (-0.000401) | 0.076533 / 0.004250 (0.072282) | 0.064442 / 0.037052 (0.027390) | 0.397539 / 0.258489 (0.139050) | 0.441323 / 0.293841 (0.147482) | 0.036278 / 0.128546 (-0.092268) | 0.009810 / 0.075646 (-0.065836) | 0.343537 / 0.419271 (-0.075734) | 0.060273 / 0.043533 (0.016740) | 0.395023 / 0.255139 (0.139884) | 0.427210 / 0.283200 (0.144011) | 0.031717 / 0.141683 (-0.109966) | 1.771221 / 1.452155 (0.319066) | 1.896336 / 1.492716 (0.403620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235081 / 0.018006 (0.217075) | 0.512781 / 0.000490 (0.512292) | 0.004920 / 0.000200 (0.004721) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033525 / 0.037411 (-0.003887) | 0.104416 / 0.014526 (0.089890) | 0.115695 / 0.176557 (-0.060861) | 0.182216 / 0.737135 (-0.554919) | 0.116259 / 0.296338 (-0.180079) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454817 / 0.215209 (0.239608) | 4.527753 / 2.077655 (2.450098) | 2.222273 / 1.504120 (0.718153) | 2.038448 / 1.541195 (0.497253) | 2.179444 / 1.468490 (0.710953) | 0.573665 / 4.584777 (-4.011112) | 4.504943 / 3.745712 (0.759231) | 3.848435 / 5.269862 (-1.421427) | 2.455185 / 4.565676 (-2.110491) | 0.067985 / 0.424275 (-0.356290) | 0.008719 / 0.007607 (0.001112) | 0.552405 / 0.226044 (0.326360) | 5.515251 / 2.268929 (3.246322) | 2.851557 / 55.444624 (-52.593067) | 2.463070 / 6.876477 (-4.413407) | 2.761596 / 2.142072 (0.619524) | 0.688561 / 4.805227 (-4.116667) | 0.159946 / 6.500664 (-6.340718) | 0.075435 / 0.075469 (-0.000034) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.505178 / 1.841788 (-0.336610) | 23.555236 / 8.074308 (15.480928) | 17.272759 / 10.191392 (7.081367) | 0.206495 / 0.680424 (-0.473928) | 0.021869 / 0.534201 (-0.512332) | 0.469271 / 0.579283 (-0.110012) | 0.469200 / 0.434364 (0.034837) | 0.542437 / 0.540337 
(0.002100) | 0.792864 / 1.386936 (-0.594072) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008151 / 0.011353 (-0.003202) | 0.004992 / 0.011008 (-0.006016) | 0.079545 / 0.038508 (0.041037) | 0.100234 / 0.023109 (0.077125) | 0.492791 / 0.275898 (0.216893) | 0.511315 / 0.323480 (0.187835) | 0.006878 / 0.007986 (-0.001108) | 0.003807 / 0.004328 (-0.000522) | 0.080876 / 0.004250 (0.076625) | 0.076734 / 0.037052 (0.039681) | 0.518247 / 0.258489 (0.259758) | 0.524202 / 0.293841 (0.230361) | 0.039896 / 0.128546 (-0.088650) | 0.016581 / 0.075646 (-0.059065) | 0.101228 / 0.419271 (-0.318043) | 0.061990 / 0.043533 (0.018457) | 0.490611 / 0.255139 (0.235472) | 0.514930 / 0.283200 (0.231730) | 0.028680 / 0.141683 (-0.113002) | 1.966215 / 1.452155 (0.514061) | 2.047757 / 1.492716 (0.555040) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286807 / 0.018006 (0.268801) | 0.506448 / 0.000490 (0.505959) | 0.005867 / 0.000200 (0.005667) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037141 / 0.037411 (-0.000270) | 0.113232 / 0.014526 (0.098706) | 0.121201 / 0.176557 (-0.055356) | 0.185472 / 0.737135 (-0.551663) | 0.122896 / 0.296338 (-0.173442) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.514491 / 0.215209 (0.299282) | 4.942457 / 2.077655 (2.864802) | 2.533519 / 1.504120 (1.029399) | 2.371011 / 1.541195 (0.829817) | 2.495604 / 
1.468490 (1.027114) | 0.576224 / 4.584777 (-4.008553) | 4.368584 / 3.745712 (0.622872) | 3.885598 / 5.269862 (-1.384263) | 2.443596 / 4.565676 (-2.122080) | 0.068905 / 0.424275 (-0.355371) | 0.009171 / 0.007607 (0.001564) | 0.584977 / 0.226044 (0.358932) | 5.835220 / 2.268929 (3.566291) | 3.189037 / 55.444624 (-52.255588) | 2.753228 / 6.876477 (-4.123249) | 3.009062 / 2.142072 (0.866990) | 0.690179 / 4.805227 (-4.115048) | 0.157981 / 6.500664 (-6.342683) | 0.074518 / 0.075469 (-0.000951) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.599907 / 1.841788 (-0.241880) | 23.853903 / 8.074308 (15.779595) | 17.419796 / 10.191392 (7.228404) | 0.204974 / 0.680424 (-0.475450) | 0.022014 / 0.534201 (-0.512187) | 0.473379 / 0.579283 (-0.105905) | 0.461346 / 0.434364 (0.026982) | 0.564881 / 0.540337 (0.024543) | 0.752933 / 1.386936 (-0.634003) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f49c9ca993fa600fae0e327636d52657328e7ffb \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006547 / 0.011353 (-0.004805) | 0.004020 / 0.011008 (-0.006988) | 0.086828 / 0.038508 (0.048320) | 0.072924 / 0.023109 (0.049815) | 0.312847 / 0.275898 (0.036949) | 0.344605 / 0.323480 (0.021125) | 0.004117 / 0.007986 (-0.003868) | 0.004365 / 0.004328 (0.000037) | 0.066755 / 0.004250 (0.062505) | 0.053248 / 0.037052 (0.016195) | 0.315744 / 0.258489 (0.057255) | 0.362426 / 0.293841 (0.068585) | 0.030732 / 0.128546 (-0.097814) | 0.008516 / 0.075646 (-0.067130) | 0.289927 / 0.419271 (-0.129345) | 0.052115 / 0.043533 (0.008582) | 0.308026 / 0.255139 (0.052887) | 0.343115 / 0.283200 (0.059915) | 0.024131 / 0.141683 (-0.117551) | 1.464290 / 1.452155 (0.012135) | 1.559359 / 1.492716 (0.066642) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216744 / 0.018006 (0.198738) | 0.473156 / 0.000490 (0.472666) | 0.004176 / 0.000200 
(0.003977) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028500 / 0.037411 (-0.008911) | 0.083892 / 0.014526 (0.069366) | 0.131851 / 0.176557 (-0.044705) | 0.162202 / 0.737135 (-0.574933) | 0.127989 / 0.296338 (-0.168349) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404555 / 0.215209 (0.189346) | 4.035989 / 2.077655 (1.958334) | 2.025174 / 1.504120 (0.521054) | 1.835785 / 1.541195 (0.294590) | 1.909819 / 1.468490 (0.441329) | 0.475352 / 4.584777 (-4.109425) | 3.548055 / 3.745712 (-0.197657) | 3.234782 / 5.269862 (-2.035080) | 2.010305 / 4.565676 (-2.555371) | 0.056507 / 0.424275 (-0.367768) | 0.007259 / 0.007607 (-0.000348) | 0.482021 / 0.226044 (0.255977) | 4.818559 / 2.268929 (2.549631) | 2.528765 / 55.444624 (-52.915860) | 2.159804 / 6.876477 (-4.716673) | 2.380640 / 2.142072 (0.238567) | 0.585005 / 4.805227 (-4.220222) | 0.133811 / 6.500664 (-6.366853) | 0.060686 / 0.075469 (-0.014783) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260902 / 1.841788 (-0.580886) | 19.500215 / 8.074308 (11.425907) | 14.164698 / 10.191392 (3.973306) | 0.172492 / 0.680424 (-0.507932) | 0.018221 / 0.534201 (-0.515980) | 0.392609 / 0.579283 (-0.186674) | 0.423265 / 0.434364 (-0.011099) | 0.454705 / 0.540337 (-0.085633) | 0.639856 / 1.386936 (-0.747080) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006656 / 0.011353 (-0.004697) | 0.003903 / 0.011008 (-0.007106) | 0.063780 / 0.038508 (0.025272) | 0.076848 / 0.023109 (0.053739) | 0.379429 / 0.275898 (0.103531) | 0.442554 / 0.323480 (0.119074) | 0.005327 / 0.007986 (-0.002658) | 0.003318 / 0.004328 (-0.001010) | 0.064307 / 0.004250 (0.060056) | 0.057183 / 0.037052 (0.020131) | 0.398163 / 0.258489 (0.139674) | 0.448532 / 0.293841 (0.154691) | 0.031322 / 0.128546 (-0.097224) | 0.008462 / 0.075646 (-0.067184) | 0.070354 / 0.419271 (-0.348917) | 0.048420 / 0.043533 (0.004887) | 0.368304 / 0.255139 (0.113165) | 0.428786 / 0.283200 (0.145587) | 0.023921 / 0.141683 (-0.117762) | 1.499281 / 1.452155 (0.047126) | 1.554448 / 1.492716 (0.061731) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238830 / 0.018006 (0.220824) | 0.464196 / 0.000490 (0.463706) | 0.004812 / 0.000200 (0.004613) | 0.000098 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031642 / 0.037411 (-0.005770) | 0.089205 / 0.014526 (0.074679) | 0.101577 / 0.176557 (-0.074980) | 0.154993 / 0.737135 (-0.582142) | 0.102935 / 0.296338 (-0.193403) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415218 / 0.215209 (0.200009) | 4.137711 / 2.077655 (2.060056) | 2.128757 / 1.504120 (0.624637) | 1.961086 / 1.541195 (0.419891) | 2.047552 / 1.468490 (0.579061) | 0.486953 / 4.584777 (-4.097824) | 3.587851 / 3.745712 (-0.157861) | 3.280771 / 5.269862 (-1.989090) | 2.016980 / 4.565676 (-2.548697) | 0.057284 / 0.424275 (-0.366991) | 0.007705 / 0.007607 (0.000097) | 0.492242 / 0.226044 (0.266197) | 4.923213 / 2.268929 (2.654285) | 2.672528 / 55.444624 (-52.772097) | 2.292862 / 6.876477 (-4.583614) | 2.517410 / 2.142072 (0.375337) | 0.614798 / 4.805227 (-4.190429) | 0.149642 / 6.500664 (-6.351023) | 0.062898 / 0.075469 (-0.012571) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.323266 / 1.841788 (-0.518522) | 19.891504 / 8.074308 (11.817196) | 14.115069 / 10.191392 (3.923677) | 0.169859 / 0.680424 (-0.510564) | 0.018538 / 0.534201 (-0.515663) | 0.398456 / 0.579283 (-0.180827) | 0.410111 / 0.434364 (-0.024253) | 0.483198 / 0.540337 (-0.057139) | 0.639283 / 1.386936 (-0.747653) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#01e2194f2aab6aa98686a2069ee5201b69a53c14 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007731 / 0.011353 (-0.003622) | 0.004064 / 0.011008 (-0.006944) | 0.095261 / 0.038508 (0.056753) | 0.081594 / 0.023109 (0.058485) | 0.390413 / 0.275898 (0.114515) | 0.415542 / 0.323480 (0.092063) | 0.006031 / 0.007986 (-0.001954) | 0.003817 / 0.004328 (-0.000512) | 0.066381 / 0.004250 (0.062131) | 0.058262 / 0.037052 (0.021210) | 0.383626 / 0.258489 (0.125137) | 0.443237 / 0.293841 (0.149396) | 0.034358 / 0.128546 (-0.094188) | 0.010002 / 0.075646 (-0.065644) | 0.317472 / 0.419271 (-0.101800) | 0.057428 / 0.043533 (0.013895) | 0.393929 / 0.255139 (0.138790) | 0.444572 / 0.283200 (0.161373) | 0.026295 / 0.141683 (-0.115388) | 1.603639 / 1.452155 (0.151484) | 1.707750 / 1.492716 (0.215034) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222171 / 0.018006 (0.204165) | 0.491762 / 0.000490 (0.491272) | 0.003389 / 0.000200 (0.003189) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029420 / 0.037411 (-0.007991) | 0.086201 / 0.014526 (0.071676) | 0.100150 / 0.176557 (-0.076406) | 0.162338 / 0.737135 (-0.574797) | 0.099349 / 0.296338 (-0.196989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445976 / 0.215209 
(0.230767) | 4.460197 / 2.077655 (2.382542) | 2.211767 / 1.504120 (0.707647) | 1.988740 / 1.541195 (0.447545) | 2.052289 / 1.468490 (0.583799) | 0.570321 / 4.584777 (-4.014456) | 4.148777 / 3.745712 (0.403065) | 3.750977 / 5.269862 (-1.518885) | 2.309443 / 4.565676 (-2.256234) | 0.064552 / 0.424275 (-0.359724) | 0.008167 / 0.007607 (0.000560) | 0.523283 / 0.226044 (0.297238) | 5.349347 / 2.268929 (3.080419) | 2.710292 / 55.444624 (-52.734332) | 2.344252 / 6.876477 (-4.532225) | 2.549903 / 2.142072 (0.407831) | 0.665942 / 4.805227 (-4.139285) | 0.154108 / 6.500664 (-6.346556) | 0.070181 / 0.075469 (-0.005289) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.455733 / 1.841788 (-0.386054) | 21.846958 / 8.074308 (13.772650) | 15.133865 / 10.191392 (4.942473) | 0.199009 / 0.680424 (-0.481415) | 0.021299 / 0.534201 (-0.512902) | 0.421555 / 0.579283 (-0.157729) | 0.437639 / 0.434364 (0.003275) | 0.498568 / 0.540337 (-0.041769) | 0.719649 / 1.386936 (-0.667287) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007858 / 0.011353 (-0.003495) | 0.004629 / 0.011008 (-0.006380) | 0.075701 / 0.038508 (0.037193) | 0.084425 / 0.023109 (0.061316) | 0.436650 / 0.275898 (0.160752) | 0.466046 / 0.323480 (0.142566) | 0.006042 / 0.007986 (-0.001944) | 0.003834 / 0.004328 (-0.000495) | 0.074729 / 0.004250 (0.070478) | 0.065983 / 0.037052 (0.028931) | 0.447239 / 0.258489 (0.188750) | 0.466728 / 0.293841 (0.172887) | 0.035814 / 0.128546 (-0.092733) | 0.009919 / 0.075646 (-0.065727) | 0.081151 / 0.419271 (-0.338120) | 0.057256 / 0.043533 (0.013723) | 0.435609 / 0.255139 (0.180470) | 0.448901 / 0.283200 (0.165701) | 0.026325 / 0.141683 (-0.115357) | 1.745658 / 1.452155 (0.293503) | 1.804137 / 1.492716 (0.311421) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.302551 / 0.018006 (0.284544) | 0.498438 / 0.000490 (0.497948) | 0.038562 / 0.000200 (0.038362) | 0.000411 / 0.000054 
(0.000356) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035573 / 0.037411 (-0.001839) | 0.104957 / 0.014526 (0.090431) | 0.117208 / 0.176557 (-0.059349) | 0.178935 / 0.737135 (-0.558200) | 0.124577 / 0.296338 (-0.171761) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.467076 / 0.215209 (0.251867) | 4.698852 / 2.077655 (2.621197) | 2.453389 / 1.504120 (0.949269) | 2.257378 / 1.541195 (0.716183) | 2.338615 / 1.468490 (0.870125) | 0.542379 / 4.584777 (-4.042398) | 4.066895 / 3.745712 (0.321183) | 3.689540 / 5.269862 (-1.580321) | 2.268997 / 4.565676 (-2.296679) | 0.064754 / 0.424275 (-0.359521) | 0.008866 / 0.007607 (0.001259) | 0.546732 / 0.226044 (0.320687) | 5.487765 / 2.268929 (3.218836) | 2.974126 / 55.444624 (-52.470498) | 2.585492 / 6.876477 (-4.290985) | 2.754417 / 2.142072 (0.612345) | 0.652045 / 4.805227 (-4.153183) | 0.145597 / 6.500664 (-6.355067) | 0.065415 / 0.075469 (-0.010054) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.553970 / 1.841788 (-0.287818) | 22.300954 / 8.074308 (14.226646) | 15.640990 / 10.191392 (5.449598) | 0.170903 / 0.680424 (-0.509521) | 0.021750 / 0.534201 (-0.512451) | 0.455316 / 0.579283 (-0.123967) | 0.455051 / 0.434364 (0.020687) | 0.536174 / 0.540337 (-0.004164) | 0.735930 / 1.386936 (-0.651006) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f68139846c26b43631bd235114854f4bf6cb9954 \"CML watermark\")\n" ]
2023-07-31T11:44:46
2023-08-01T10:48:52
2023-08-01T10:38:54
MEMBER
null
Fix `resolve_pattern` for filesystems whose protocol is a tuple. Fixes #6100. The buggy code lines were introduced by: - #6028
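For context, some fsspec filesystems advertise a tuple of protocol aliases instead of a single string (gcsfs's `GCSFileSystem.protocol` is `("gs", "gcs")`), which is the case this fix handles. A hedged sketch of that kind of normalization, with illustrative names rather than the actual patch:

```python
# Normalize an fsspec-style filesystem protocol that may be a tuple of
# aliases rather than a single string. The helper name and the stub classes
# below are illustrative only.
def main_protocol(fs) -> str:
    """Return one protocol string even when a filesystem advertises several."""
    protocol = fs.protocol
    return protocol[0] if isinstance(protocol, (tuple, list)) else protocol

class _TupleProtocolFS:
    protocol = ("gs", "gcs")  # gcsfs.GCSFileSystem advertises a tuple like this

class _StrProtocolFS:
    protocol = "s3"

print(main_protocol(_TupleProtocolFS()))  # gs
print(main_protocol(_StrProtocolFS()))    # s3
```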
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6105/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6105", "html_url": "https://github.com/huggingface/datasets/pull/6105", "diff_url": "https://github.com/huggingface/datasets/pull/6105.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6105.patch", "merged_at": "2023-08-01T10:38:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/6104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6104/comments
https://api.github.com/repos/huggingface/datasets/issues/6104/events
https://github.com/huggingface/datasets/issues/6104
1,828,959,107
I_kwDODunzps5tA7OD
6,104
HF Datasets data access is extremely slow even when in memory
{ "login": "NightMachinery", "id": 36224762, "node_id": "MDQ6VXNlcjM2MjI0NzYy", "avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NightMachinery", "html_url": "https://github.com/NightMachinery", "followers_url": "https://api.github.com/users/NightMachinery/followers", "following_url": "https://api.github.com/users/NightMachinery/following{/other_user}", "gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}", "starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions", "organizations_url": "https://api.github.com/users/NightMachinery/orgs", "repos_url": "https://api.github.com/users/NightMachinery/repos", "events_url": "https://api.github.com/users/NightMachinery/events{/privacy}", "received_events_url": "https://api.github.com/users/NightMachinery/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Possibly related:\r\n- https://github.com/pytorch/pytorch/issues/22462" ]
2023-07-31T11:12:19
2023-08-01T11:22:43
null
CONTRIBUTOR
null
### Describe the bug Doing a simple `some_dataset[:10]` can take more than a minute. Profiling it: <img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab"> `some_dataset` is completely in memory with no disk cache. This is proving fatal to my usage of HF Datasets. Is there a way I can forgo the arrow format and store the dataset as PyTorch tensors so that `_tensorize` is not needed? And is `_consolidate` supposed to take this long? It's faster to produce the dataset from scratch than to access it from HF Datasets! ### Steps to reproduce the bug I have uploaded the dataset that causes this problem [here](https://huggingface.co./datasets/NightMachinery/hf_datasets_bug1). ```python #!/usr/bin/env python3 import sys import time import torch from datasets import load_dataset def main(dataset_name): # Start the timer start_time = time.time() # Load the dataset from Hugging Face Hub dataset = load_dataset(dataset_name) # Set the dataset format as torch dataset.set_format(type="torch") # Perform an identity map dataset = dataset.map(lambda example: example, batched=True, batch_size=20) # End the timer end_time = time.time() # Print the time taken print(f"Time taken: {end_time - start_time:.2f} seconds") if __name__ == "__main__": dataset_name = "NightMachinery/hf_datasets_bug1" print(f"dataset_name: {dataset_name}") main(dataset_name) ``` ### Expected behavior _ ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
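One hedged way to sidestep the per-access `_tensorize`/`_consolidate` cost described above, assuming the dataset has a `train` split and numeric, fixed-shape columns: pay the Arrow-to-torch conversion once up front and index plain tensors afterwards:

```python
# Materialize each column into a single torch tensor once; afterwards
# tensors["col"][:10] is a cheap tensor slice that never touches Arrow.
# Assumes a "train" split and numeric columns (string columns would come
# back as Python lists instead of tensors).
import torch
from datasets import load_dataset

ds = load_dataset("NightMachinery/hf_datasets_bug1", split="train")
ds = ds.with_format("torch")

tensors = {col: ds[col] for col in ds.column_names}  # one full conversion pass
torch.save(tensors, "dataset_tensors.pt")  # optional: reuse across runs
```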
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6104/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6103/comments
https://api.github.com/repos/huggingface/datasets/issues/6103/events
https://github.com/huggingface/datasets/pull/6103
1,828,515,165
PR_kwDODunzps5Ww2gV
6,103
Set dev version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6103). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006528 / 0.011353 (-0.004825) | 0.003909 / 0.011008 (-0.007099) | 0.083954 / 0.038508 (0.045446) | 0.070513 / 0.023109 (0.047404) | 0.344362 / 0.275898 (0.068464) | 0.370278 / 0.323480 (0.046798) | 0.005395 / 0.007986 (-0.002591) | 0.003323 / 0.004328 (-0.001005) | 0.064538 / 0.004250 (0.060288) | 0.055616 / 0.037052 (0.018564) | 0.353590 / 0.258489 (0.095101) | 0.382159 / 0.293841 (0.088318) | 0.031133 / 0.128546 (-0.097414) | 0.008429 / 0.075646 (-0.067217) | 0.288665 / 0.419271 (-0.130606) | 0.052626 / 0.043533 (0.009093) | 0.347676 / 0.255139 (0.092537) | 0.363726 / 0.283200 (0.080526) | 0.021956 / 0.141683 (-0.119727) | 1.506091 / 1.452155 (0.053936) | 1.563940 / 1.492716 (0.071223) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207658 / 0.018006 (0.189652) | 0.473411 / 0.000490 (0.472922) | 0.005437 / 0.000200 (0.005237) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027769 / 0.037411 (-0.009643) | 0.082566 / 0.014526 (0.068040) | 0.092700 / 0.176557 (-0.083857) | 0.152589 / 0.737135 (-0.584546) | 0.093772 / 0.296338 (-0.202566) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.401072 / 0.215209 (0.185863) | 3.997922 / 2.077655 (1.920267) | 2.028223 / 1.504120 (0.524103) | 1.845229 / 1.541195 (0.304035) | 1.883980 / 1.468490 (0.415489) | 0.485112 / 4.584777 (-4.099665) | 3.657048 / 3.745712 (-0.088664) | 4.998475 / 5.269862 (-0.271386) | 3.007417 / 4.565676 (-1.558259) | 0.057003 / 0.424275 (-0.367272) | 0.007270 / 0.007607 (-0.000338) | 0.482220 / 0.226044 (0.256176) | 4.817560 / 2.268929 (2.548631) | 2.484285 / 55.444624 (-52.960340) | 2.163327 / 6.876477 (-4.713149) | 2.326412 / 2.142072 (0.184339) | 0.600349 / 4.805227 (-4.204878) | 0.134245 / 6.500664 (-6.366419) | 0.060705 / 0.075469 (-0.014764) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281440 / 1.841788 (-0.560347) | 19.165591 / 8.074308 (11.091283) | 14.007728 / 10.191392 (3.816336) | 0.168367 / 0.680424 (-0.512057) | 0.018149 / 0.534201 (-0.516052) | 0.391688 / 0.579283 (-0.187595) | 0.414528 / 0.434364 (-0.019836) | 0.456964 / 0.540337 (-0.083373) | 0.613807 / 1.386936 (-0.773129) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006502 / 0.011353 (-0.004851) | 0.003956 / 0.011008 (-0.007052) | 0.064297 / 0.038508 (0.025789) | 0.073430 / 0.023109 (0.050321) | 0.364113 / 0.275898 (0.088215) | 0.389021 / 0.323480 (0.065541) | 0.005375 / 0.007986 (-0.002611) | 0.003363 / 0.004328 (-0.000966) | 0.064404 / 0.004250 (0.060153) | 0.056664 / 0.037052 (0.019612) | 0.365504 / 0.258489 (0.107015) | 0.398477 / 0.293841 (0.104636) | 0.031739 / 0.128546 (-0.096807) | 0.008663 / 0.075646 (-0.066984) | 0.070757 / 0.419271 (-0.348515) | 0.051014 / 0.043533 (0.007481) | 0.368287 / 0.255139 (0.113148) | 0.382941 / 0.283200 (0.099742) | 0.024642 / 0.141683 (-0.117041) | 1.516721 / 1.452155 (0.064567) | 1.557625 / 1.492716 (0.064908) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208248 / 0.018006 (0.190242) | 0.443560 / 0.000490 (0.443070) | 0.004004 / 0.000200 
(0.003805) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031116 / 0.037411 (-0.006295) | 0.086814 / 0.014526 (0.072288) | 0.099111 / 0.176557 (-0.077445) | 0.155032 / 0.737135 (-0.582104) | 0.098938 / 0.296338 (-0.197401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413080 / 0.215209 (0.197871) | 4.115546 / 2.077655 (2.037891) | 2.162073 / 1.504120 (0.657953) | 2.008107 / 1.541195 (0.466912) | 2.052317 / 1.468490 (0.583827) | 0.485158 / 4.584777 (-4.099619) | 3.617478 / 3.745712 (-0.128234) | 5.030564 / 5.269862 (-0.239298) | 2.787812 / 4.565676 (-1.777865) | 0.057466 / 0.424275 (-0.366809) | 0.007656 / 0.007607 (0.000049) | 0.490037 / 0.226044 (0.263993) | 4.887896 / 2.268929 (2.618968) | 2.639644 / 55.444624 (-52.804981) | 2.258051 / 6.876477 (-4.618426) | 2.417573 / 2.142072 (0.275500) | 0.604473 / 4.805227 (-4.200754) | 0.134770 / 6.500664 (-6.365894) | 0.061709 / 0.075469 (-0.013760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.342500 / 1.841788 (-0.499288) | 19.354990 / 8.074308 (11.280682) | 14.161975 / 10.191392 (3.970583) | 0.157084 / 0.680424 (-0.523339) | 0.018227 / 0.534201 (-0.515974) | 0.391819 / 0.579283 (-0.187464) | 0.399157 / 0.434364 (-0.035207) | 0.460582 / 0.540337 (-0.079756) | 0.612183 / 1.386936 (-0.774753) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b20f6a82410dd47e89585bb932616a22e0eaf2e6 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated 
after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009318 / 0.011353 (-0.002035) | 0.005515 / 0.011008 (-0.005493) | 0.108532 / 0.038508 (0.070024) | 0.103583 / 0.023109 (0.080473) | 0.419249 / 0.275898 (0.143351) | 0.453573 / 0.323480 (0.130093) | 0.006601 / 0.007986 (-0.001384) | 0.005297 / 0.004328 (0.000968) | 0.082737 / 0.004250 (0.078487) | 0.064708 / 0.037052 (0.027656) | 0.425679 / 0.258489 (0.167190) | 0.462028 / 0.293841 (0.168187) | 0.048104 / 0.128546 (-0.080442) | 0.014069 / 0.075646 (-0.061577) | 0.377780 / 0.419271 (-0.041491) | 0.067510 / 0.043533 (0.023977) | 0.422421 / 0.255139 (0.167282) | 0.447127 / 0.283200 (0.163927) | 0.037745 / 0.141683 (-0.103938) | 1.855306 / 1.452155 (0.403152) | 1.943876 / 1.492716 (0.451160) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280161 / 0.018006 (0.262155) | 0.598001 / 0.000490 (0.597512) | 0.001130 / 0.000200 (0.000930) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036064 / 0.037411 (-0.001347) | 0.113256 / 0.014526 (0.098730) | 0.120598 / 0.176557 (-0.055959) | 0.191386 / 0.737135 (-0.545750) | 0.118125 / 0.296338 (-0.178214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616887 / 0.215209 (0.401678) | 6.085498 / 2.077655 (4.007844) | 2.639428 / 1.504120 (1.135308) | 2.215444 / 1.541195 (0.674249) | 2.311990 / 1.468490 (0.843500) | 0.820539 / 4.584777 (-3.764238) | 5.306010 / 3.745712 (1.560298) | 4.731726 / 5.269862 (-0.538136) | 3.053933 / 4.565676 (-1.511744) | 0.098862 / 0.424275 (-0.325413) | 0.009456 / 0.007607 (0.001849) | 0.725455 / 0.226044 (0.499411) | 7.367385 / 2.268929 (5.098457) | 3.464921 / 55.444624 (-51.979703) | 2.833868 / 6.876477 (-4.042608) | 3.033008 / 2.142072 (0.890935) | 1.036751 / 4.805227 (-3.768476) | 0.243646 / 6.500664 (-6.257018) | 0.081079 / 0.075469 (0.005610) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.584695 / 1.841788 (-0.257093) | 25.150355 / 8.074308 (17.076047) | 21.826622 / 10.191392 (11.635230) | 0.212502 / 0.680424 (-0.467921) | 0.029865 / 0.534201 (-0.504335) | 0.496814 / 0.579283 (-0.082470) | 
0.611959 / 0.434364 (0.177595) | 0.550434 / 0.540337 (0.010097) | 0.800897 / 1.386936 (-0.586039) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009117 / 0.011353 (-0.002236) | 0.005236 / 0.011008 (-0.005772) | 0.082402 / 0.038508 (0.043894) | 0.090578 / 0.023109 (0.067468) | 0.487302 / 0.275898 (0.211404) | 0.523639 / 0.323480 (0.200159) | 0.006684 / 0.007986 (-0.001302) | 0.004306 / 0.004328 (-0.000023) | 0.083273 / 0.004250 (0.079023) | 0.068585 / 0.037052 (0.031532) | 0.487751 / 0.258489 (0.229262) | 0.538972 / 0.293841 (0.245131) | 0.048915 / 0.128546 (-0.079632) | 0.014312 / 0.075646 (-0.061335) | 0.091863 / 0.419271 (-0.327409) | 0.066114 / 0.043533 (0.022581) | 0.483552 / 0.255139 (0.228413) | 0.522250 / 0.283200 (0.239050) | 0.038533 / 0.141683 (-0.103150) | 1.803834 / 1.452155 (0.351680) | 1.891927 / 1.492716 (0.399211) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.336662 / 0.018006 (0.318656) | 0.611408 / 0.000490 (0.610918) | 0.014310 / 0.000200 (0.014110) | 0.000152 / 0.000054 (0.000097) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034755 / 0.037411 (-0.002656) | 0.101008 / 0.014526 (0.086483) | 0.124530 / 0.176557 (-0.052026) | 0.179844 / 0.737135 (-0.557292) | 0.125027 / 0.296338 (-0.171312) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.618341 / 0.215209 (0.403132) | 6.146848 / 2.077655 (4.069193) | 2.893305 / 1.504120 
(1.389185) | 2.608722 / 1.541195 (1.067528) | 2.671276 / 1.468490 (1.202786) | 0.860096 / 4.584777 (-3.724681) | 5.440671 / 3.745712 (1.694959) | 4.776958 / 5.269862 (-0.492903) | 3.098300 / 4.565676 (-1.467376) | 0.098664 / 0.424275 (-0.325611) | 0.009270 / 0.007607 (0.001663) | 0.712780 / 0.226044 (0.486735) | 7.199721 / 2.268929 (4.930793) | 3.620723 / 55.444624 (-51.823902) | 3.052218 / 6.876477 (-3.824259) | 3.321093 / 2.142072 (1.179021) | 1.070992 / 4.805227 (-3.734235) | 0.224091 / 6.500664 (-6.276573) | 0.083395 / 0.075469 (0.007926) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.716867 / 1.841788 (-0.124921) | 25.534617 / 8.074308 (17.460309) | 25.221014 / 10.191392 (15.029621) | 0.248098 / 0.680424 (-0.432326) | 0.029659 / 0.534201 (-0.504542) | 0.492929 / 0.579283 (-0.086355) | 0.618253 / 0.434364 (0.183889) | 0.577108 / 0.540337 (0.036771) | 0.803188 / 1.386936 (-0.583748) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#584db360eed9155e173b199ba5fc037562b7b862 \"CML watermark\")\n" ]
2023-07-31T06:44:05
2023-07-31T06:55:58
2023-07-31T06:45:41
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6103/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6103/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6103", "html_url": "https://github.com/huggingface/datasets/pull/6103", "diff_url": "https://github.com/huggingface/datasets/pull/6103.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6103.patch", "merged_at": "2023-07-31T06:45:41" }
true
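The `comments` arrays in the PR records above are machine-generated CML benchmark reports, where every table cell follows the pattern `new / old (diff)` with `diff = new - old` (e.g. `0.006543 / 0.011353 (-0.004810)`, since 0.006543 - 0.011353 = -0.004810). Below is a minimal, hedged sketch of how those triples could be pulled out of one comment string for downstream analysis; the function name `extract_benchmark_cells` and the sample string are illustrative assumptions, not part of the dump itself.

```python
# Hypothetical sketch: extract "new / old (diff)" triples from one CML
# benchmark comment such as those stored in the `comments` field above.
import re

# Matches cells of the form "0.006543 / 0.011353 (-0.004810)".
CELL = re.compile(r"(?P<new>\d+\.\d+) / (?P<old>\d+\.\d+) \((?P<diff>-?\d+\.\d+)\)")

def extract_benchmark_cells(comment_text: str) -> list[tuple[float, float, float]]:
    """Return all (new, old, diff) float triples found in a benchmark comment."""
    return [
        (float(m["new"]), float(m["old"]), float(m["diff"]))
        for m in CELL.finditer(comment_text)
    ]

# Sample string copied from the table format above (assumed input, not a
# field lookup against the dump):
sample = "| new / old (diff) | 0.006543 / 0.011353 (-0.004810) | 0.003894 / 0.011008 (-0.007115) |"
print(extract_benchmark_cells(sample))
# [(0.006543, 0.011353, -0.00481), (0.003894, 0.011008, -0.007115)]
```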
https://api.github.com/repos/huggingface/datasets/issues/6102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6102/comments
https://api.github.com/repos/huggingface/datasets/issues/6102/events
https://github.com/huggingface/datasets/pull/6102
1,828,494,896
PR_kwDODunzps5WwyGy
6,102
Release 2.14.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006517 / 0.011353 (-0.004836) | 0.004217 / 0.011008 (-0.006792) | 0.083162 / 0.038508 (0.044654) | 0.074476 / 0.023109 (0.051367) | 0.321193 / 0.275898 (0.045295) | 0.358348 / 0.323480 (0.034868) | 0.005531 / 0.007986 (-0.002455) | 0.003621 / 0.004328 (-0.000707) | 0.063819 / 0.004250 (0.059568) | 0.056524 / 0.037052 (0.019471) | 0.322145 / 0.258489 (0.063656) | 0.371415 / 0.293841 (0.077574) | 0.030612 / 0.128546 (-0.097934) | 0.008907 / 0.075646 (-0.066739) | 0.289451 / 0.419271 (-0.129821) | 0.051959 / 0.043533 (0.008426) | 0.317729 / 0.255139 (0.062590) | 0.339750 / 0.283200 (0.056550) | 0.022430 / 0.141683 (-0.119253) | 1.487661 / 1.452155 (0.035506) | 1.554916 / 1.492716 (0.062199) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296673 / 0.018006 (0.278667) | 0.599183 / 0.000490 (0.598694) | 0.002524 / 0.000200 (0.002324) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027898 / 0.037411 (-0.009514) | 0.080870 / 0.014526 (0.066344) | 0.094894 / 0.176557 (-0.081662) | 0.152350 / 0.737135 (-0.584785) | 0.095765 / 0.296338 (-0.200573) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415442 / 0.215209 (0.200233) | 4.161155 / 2.077655 (2.083500) | 2.117061 / 1.504120 (0.612941) | 1.937846 / 1.541195 (0.396651) | 1.979635 / 1.468490 
(0.511145) | 0.488381 / 4.584777 (-4.096396) | 3.509836 / 3.745712 (-0.235876) | 3.833074 / 5.269862 (-1.436788) | 2.307536 / 4.565676 (-2.258141) | 0.057059 / 0.424275 (-0.367216) | 0.007366 / 0.007607 (-0.000241) | 0.487752 / 0.226044 (0.261708) | 4.869406 / 2.268929 (2.600478) | 2.594775 / 55.444624 (-52.849849) | 2.191712 / 6.876477 (-4.684765) | 2.413220 / 2.142072 (0.271147) | 0.584513 / 4.805227 (-4.220714) | 0.132162 / 6.500664 (-6.368502) | 0.061059 / 0.075469 (-0.014410) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245178 / 1.841788 (-0.596610) | 20.624563 / 8.074308 (12.550255) | 14.675545 / 10.191392 (4.484153) | 0.165838 / 0.680424 (-0.514586) | 0.018700 / 0.534201 (-0.515501) | 0.392475 / 0.579283 (-0.186808) | 0.399884 / 0.434364 (-0.034480) | 0.457478 / 0.540337 (-0.082859) | 0.624553 / 1.386936 (-0.762383) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006716 / 0.011353 (-0.004637) | 0.004308 / 0.011008 (-0.006700) | 0.064495 / 0.038508 (0.025987) | 0.083194 / 0.023109 (0.060085) | 0.371994 / 0.275898 (0.096096) | 0.433045 / 0.323480 (0.109566) | 0.005535 / 0.007986 (-0.002450) | 0.003469 / 0.004328 (-0.000859) | 0.064342 / 0.004250 (0.060092) | 0.059362 / 0.037052 (0.022309) | 0.393819 / 0.258489 (0.135330) | 0.442591 / 0.293841 (0.148750) | 0.031594 / 0.128546 (-0.096952) | 0.008943 / 0.075646 (-0.066703) | 0.070689 / 0.419271 (-0.348582) | 0.049219 / 0.043533 (0.005686) | 0.361568 / 0.255139 (0.106429) | 0.417085 / 0.283200 (0.133886) | 0.025112 / 0.141683 (-0.116571) | 1.497204 / 1.452155 (0.045049) | 1.552781 / 1.492716 (0.060064) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.325254 / 0.018006 (0.307248) | 0.528399 / 0.000490 (0.527909) | 0.007429 / 0.000200 (0.007229) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029908 / 0.037411 (-0.007504) | 0.087114 / 0.014526 (0.072588) | 0.103366 / 0.176557 (-0.073191) | 0.155145 / 0.737135 (-0.581990) | 0.103458 / 0.296338 (-0.192880) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409432 / 0.215209 (0.194223) | 4.093327 / 2.077655 (2.015673) | 2.154115 / 1.504120 (0.649995) | 1.953492 / 1.541195 (0.412297) | 2.021532 / 1.468490 (0.553042) | 0.478928 / 4.584777 (-4.105849) | 3.515287 / 3.745712 (-0.230426) | 4.976239 / 5.269862 (-0.293623) | 2.832803 / 4.565676 (-1.732873) | 0.057239 / 0.424275 (-0.367036) | 0.007718 / 0.007607 (0.000111) | 0.484102 / 0.226044 (0.258057) | 4.833020 / 2.268929 (2.564092) | 2.564550 / 55.444624 (-52.880074) | 2.268969 / 6.876477 (-4.607508) | 2.513308 / 2.142072 (0.371235) | 0.582822 / 4.805227 (-4.222406) | 0.133989 / 6.500664 (-6.366675) | 0.062078 / 0.075469 (-0.013391) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.393766 / 1.841788 (-0.448021) | 20.224546 / 8.074308 (12.150238) | 14.359438 / 10.191392 (4.168046) | 0.166358 / 0.680424 (-0.514066) | 0.018840 / 0.534201 (-0.515361) | 0.393206 / 0.579283 (-0.186077) | 0.404220 / 0.434364 (-0.030144) | 0.462346 / 0.540337 (-0.077992) | 0.603078 / 1.386936 (-0.783858) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53e8007baeff133aaad8cbb366196be18a5e57fd \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006835 / 0.011353 (-0.004518) | 0.004530 / 0.011008 (-0.006478) | 0.087506 / 0.038508 (0.048997) | 0.088289 / 0.023109 (0.065180) | 0.351575 / 0.275898 (0.075677) | 0.391873 / 0.323480 (0.068393) | 0.005627 / 0.007986 (-0.002359) | 0.003735 / 0.004328 (-0.000594) | 0.065747 / 0.004250 (0.061497) | 0.058779 / 0.037052 (0.021726) | 0.358076 / 0.258489 (0.099587) | 0.408466 / 0.293841 (0.114626) | 0.031369 / 0.128546 (-0.097178) | 0.008807 / 0.075646 (-0.066839) | 0.293253 / 0.419271 (-0.126019) | 0.052950 / 0.043533 (0.009417) | 0.350411 / 0.255139 (0.095272) | 0.384827 / 0.283200 (0.101627) | 0.026219 / 0.141683 (-0.115464) | 1.464290 / 1.452155 (0.012136) | 1.549688 / 1.492716 (0.056972) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.270354 / 0.018006 (0.252348) | 0.593436 / 0.000490 (0.592946) | 0.003872 / 0.000200 (0.003673) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031625 / 0.037411 (-0.005787) | 0.092599 / 0.014526 (0.078073) | 0.104619 / 0.176557 (-0.071938) | 0.163183 / 0.737135 (-0.573952) | 0.103245 / 0.296338 (-0.193094) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390213 / 0.215209 (0.175004) | 3.894519 / 2.077655 (1.816864) | 1.905739 / 1.504120 (0.401619) | 1.728873 / 1.541195 (0.187678) | 1.838692 / 1.468490 (0.370202) | 0.484730 / 4.584777 (-4.100047) | 3.706749 / 3.745712 (-0.038963) | 5.572311 / 5.269862 (0.302449) | 3.389949 / 4.565676 (-1.175727) | 0.057315 / 0.424275 (-0.366960) | 0.007475 / 0.007607 (-0.000132) | 0.464690 / 0.226044 (0.238645) | 4.622242 / 2.268929 (2.353314) | 2.380957 / 55.444624 (-53.063667) | 2.038225 / 6.876477 (-4.838251) | 2.358881 / 2.142072 (0.216809) | 0.606358 / 4.805227 (-4.198869) | 0.133584 / 6.500664 (-6.367080) | 0.061894 / 0.075469 (-0.013575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259575 / 1.841788 (-0.582213) | 20.915216 / 8.074308 (12.840908) | 14.971952 / 10.191392 (4.780560) | 0.160206 / 0.680424 (-0.520218) | 0.018675 / 0.534201 (-0.515526) | 0.396821 / 0.579283 (-0.182462) | 0.430982 / 0.434364 (-0.003382) | 0.452895 / 0.540337 (-0.087443) | 0.647869 / 
1.386936 (-0.739067) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007194 / 0.011353 (-0.004158) | 0.004340 / 0.011008 (-0.006669) | 0.065125 / 0.038508 (0.026617) | 0.096243 / 0.023109 (0.073134) | 0.374361 / 0.275898 (0.098463) | 0.411863 / 0.323480 (0.088383) | 0.005813 / 0.007986 (-0.002172) | 0.003615 / 0.004328 (-0.000713) | 0.064953 / 0.004250 (0.060703) | 0.063171 / 0.037052 (0.026119) | 0.376238 / 0.258489 (0.117749) | 0.415826 / 0.293841 (0.121985) | 0.031926 / 0.128546 (-0.096620) | 0.008821 / 0.075646 (-0.066825) | 0.072150 / 0.419271 (-0.347122) | 0.049484 / 0.043533 (0.005951) | 0.369691 / 0.255139 (0.114552) | 0.390669 / 0.283200 (0.107470) | 0.025732 / 0.141683 (-0.115950) | 1.493833 / 1.452155 (0.041679) | 1.601786 / 1.492716 (0.109070) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284279 / 0.018006 (0.266272) | 0.585909 / 0.000490 (0.585419) | 0.000411 / 0.000200 (0.000211) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033642 / 0.037411 (-0.003769) | 0.095328 / 0.014526 (0.080802) | 0.105810 / 0.176557 (-0.070746) | 0.159779 / 0.737135 (-0.577357) | 0.108938 / 0.296338 (-0.187400) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408112 / 0.215209 (0.192902) | 4.067035 / 2.077655 (1.989380) | 2.114504 / 1.504120 (0.610384) | 1.944027 / 1.541195 (0.402832) | 2.066117 / 1.468490 (0.597627) | 
0.486441 / 4.584777 (-4.098336) | 3.622659 / 3.745712 (-0.123053) | 3.399310 / 5.269862 (-1.870552) | 2.183151 / 4.565676 (-2.382525) | 0.057490 / 0.424275 (-0.366785) | 0.007955 / 0.007607 (0.000347) | 0.490221 / 0.226044 (0.264177) | 4.887301 / 2.268929 (2.618373) | 2.679806 / 55.444624 (-52.764819) | 2.258992 / 6.876477 (-4.617484) | 2.592493 / 2.142072 (0.450420) | 0.606515 / 4.805227 (-4.198712) | 0.135645 / 6.500664 (-6.365019) | 0.063956 / 0.075469 (-0.011513) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.331304 / 1.841788 (-0.510483) | 21.458611 / 8.074308 (13.384303) | 14.898964 / 10.191392 (4.707572) | 0.172110 / 0.680424 (-0.508314) | 0.018791 / 0.534201 (-0.515409) | 0.395944 / 0.579283 (-0.183339) | 0.424526 / 0.434364 (-0.009838) | 0.462517 / 0.540337 (-0.077821) | 0.610139 / 1.386936 (-0.776797) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#09492ba523518289a84175ddb7ab3bc555e742ee \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005957 / 0.011353 (-0.005396) | 0.003581 / 0.011008 (-0.007427) | 0.079624 / 0.038508 (0.041116) | 0.058004 / 0.023109 (0.034895) | 0.309345 / 0.275898 (0.033447) | 0.346653 / 0.323480 (0.023173) | 0.005420 / 0.007986 (-0.002566) | 0.002906 / 0.004328 (-0.001423) | 0.061970 / 0.004250 (0.057720) | 0.047627 / 0.037052 (0.010575) | 0.314096 / 0.258489 (0.055607) | 0.361368 / 0.293841 (0.067527) | 0.027211 / 0.128546 (-0.101335) | 0.007853 / 0.075646 (-0.067793) | 0.260202 / 0.419271 (-0.159070) | 0.045308 / 0.043533 (0.001775) | 0.312150 / 0.255139 (0.057011) | 0.341085 / 0.283200 (0.057886) | 0.021302 / 0.141683 (-0.120381) | 1.430315 / 1.452155 (-0.021840) | 1.608989 / 1.492716 (0.116273) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185289 / 0.018006 (0.167283) | 0.423318 / 0.000490 (0.422828) | 0.005741 / 0.000200 (0.005541) | 
0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023777 / 0.037411 (-0.013634) | 0.071937 / 0.014526 (0.057412) | 0.079406 / 0.176557 (-0.097151) | 0.143815 / 0.737135 (-0.593320) | 0.081648 / 0.296338 (-0.214690) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431514 / 0.215209 (0.216305) | 4.314471 / 2.077655 (2.236817) | 2.305167 / 1.504120 (0.801047) | 2.137894 / 1.541195 (0.596699) | 2.161034 / 1.468490 (0.692544) | 0.511701 / 4.584777 (-4.073076) | 3.098213 / 3.745712 (-0.647499) | 4.086837 / 5.269862 (-1.183024) | 2.517184 / 4.565676 (-2.048492) | 0.058272 / 0.424275 (-0.366003) | 0.006415 / 0.007607 (-0.001192) | 0.504792 / 0.226044 (0.278747) | 5.046758 / 2.268929 (2.777829) | 2.752049 / 55.444624 (-52.692576) | 2.407707 / 6.876477 (-4.468770) | 2.532162 / 2.142072 (0.390090) | 0.597562 / 4.805227 (-4.207666) | 0.125935 / 6.500664 (-6.374729) | 0.060837 / 0.075469 (-0.014632) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257048 / 1.841788 (-0.584740) | 17.877849 / 8.074308 (9.803541) | 13.904805 / 10.191392 (3.713413) | 0.131647 / 0.680424 (-0.548776) | 0.016975 / 0.534201 (-0.517226) | 0.329651 / 0.579283 (-0.249633) | 0.354358 / 0.434364 (-0.080006) | 0.377545 / 0.540337 (-0.162792) | 0.545593 / 1.386936 (-0.841343) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005839 / 0.011353 (-0.005514) | 0.003580 / 0.011008 (-0.007428) | 0.062204 / 0.038508 (0.023696) | 0.057943 / 0.023109 (0.034834) | 0.400165 / 0.275898 (0.124267) | 0.427911 / 0.323480 (0.104431) | 0.004412 / 0.007986 (-0.003574) | 0.002794 / 0.004328 (-0.001534) | 0.062933 / 0.004250 (0.058683) | 0.046243 / 0.037052 (0.009191) | 0.413640 / 0.258489 (0.155151) | 0.418592 / 0.293841 (0.124751) | 0.027020 / 0.128546 (-0.101526) | 0.007927 / 0.075646 (-0.067720) | 0.067581 / 0.419271 (-0.351691) | 0.041927 / 0.043533 (-0.001606) | 0.381863 / 0.255139 (0.126724) | 0.415711 / 0.283200 (0.132511) | 0.019827 / 0.141683 (-0.121856) | 1.464049 / 1.452155 (0.011894) | 1.528387 / 1.492716 (0.035671) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224999 / 0.018006 (0.206993) | 0.419167 / 0.000490 (0.418678) | 0.000363 / 0.000200 (0.000163) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024827 / 0.037411 (-0.012585) | 0.077134 / 0.014526 (0.062608) | 0.085142 / 0.176557 (-0.091414) | 0.137400 / 0.737135 (-0.599735) | 0.086434 / 0.296338 (-0.209905) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452716 / 0.215209 (0.237507) | 4.530610 / 2.077655 (2.452955) | 2.467309 / 1.504120 (0.963189) | 2.300441 / 1.541195 (0.759246) | 2.323475 / 1.468490 (0.854985) | 0.501847 / 4.584777 (-4.082930) | 3.079432 / 3.745712 (-0.666280) | 2.793107 / 5.269862 (-2.476755) | 1.835010 / 4.565676 (-2.730666) | 0.057698 / 0.424275 (-0.366577) | 0.006756 / 0.007607 (-0.000851) | 0.529062 / 0.226044 (0.303017) | 5.287822 / 2.268929 (3.018894) | 2.908411 / 55.444624 (-52.536214) | 2.571627 / 6.876477 (-4.304850) | 2.691188 / 2.142072 (0.549116) | 0.592289 / 4.805227 (-4.212938) | 0.126091 / 6.500664 (-6.374573) | 0.062312 / 0.075469 (-0.013157) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.328854 / 1.841788 (-0.512933) | 18.185628 / 8.074308 (10.111320) | 13.858781 / 10.191392 (3.667389) | 0.142421 / 0.680424 (-0.538003) | 0.016535 / 0.534201 (-0.517666) | 0.330839 / 0.579283 (-0.248444) | 0.346559 / 0.434364 (-0.087805) | 0.389153 / 0.540337 (-0.151185) | 0.516897 / 1.386936 (-0.870039) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#09492ba523518289a84175ddb7ab3bc555e742ee \"CML watermark\")\n" ]
2023-07-31T06:27:47
2023-07-31T06:48:09
2023-07-31T06:32:58
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6102/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6102", "html_url": "https://github.com/huggingface/datasets/pull/6102", "diff_url": "https://github.com/huggingface/datasets/pull/6102.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6102.patch", "merged_at": "2023-07-31T06:32:58" }
true
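Each record stores the canonical GitHub API URL for its issue, so any row of this dump can be re-fetched live for comparison. A rough sketch under the assumptions that the `requests` package is installed and network access is available (unauthenticated requests are rate-limited by GitHub, so a token may be needed in practice):

```python
# Hedged sketch: re-fetch the issue 6102 record above from the GitHub REST
# API, using the `url` field exactly as it appears in the dump.
import requests

url = "https://api.github.com/repos/huggingface/datasets/issues/6102"
issue = requests.get(
    url, headers={"Accept": "application/vnd.github+json"}, timeout=30
).json()

# The same fields the dump stores line by line:
print(issue["number"], issue["title"], issue["state"])
print(issue["created_at"], "->", issue["closed_at"])
# GitHub marks PRs by attaching a "pull_request" object to the issue payload,
# which is what the dump's `is_pull_request` flag reflects.
print("is_pull_request:", "pull_request" in issue)
```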
https://api.github.com/repos/huggingface/datasets/issues/6101
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6101/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6101/comments
https://api.github.com/repos/huggingface/datasets/issues/6101/events
https://github.com/huggingface/datasets/pull/6101
1,828,469,648
PR_kwDODunzps5WwspW
6,101
Release 2.14.2
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006543 / 0.011353 (-0.004810) | 0.003894 / 0.011008 (-0.007115) | 0.084742 / 0.038508 (0.046234) | 0.072942 / 0.023109 (0.049833) | 0.310722 / 0.275898 (0.034824) | 0.346806 / 0.323480 (0.023326) | 0.005373 / 0.007986 (-0.002613) | 0.003270 / 0.004328 (-0.001059) | 0.064379 / 0.004250 (0.060128) | 0.054876 / 0.037052 (0.017824) | 0.316794 / 0.258489 (0.058305) | 0.350353 / 0.293841 (0.056512) | 0.030683 / 0.128546 (-0.097863) | 0.008275 / 0.075646 (-0.067371) | 0.288747 / 0.419271 (-0.130525) | 0.051892 / 0.043533 (0.008359) | 0.315060 / 0.255139 (0.059921) | 0.331664 / 0.283200 (0.048464) | 0.023334 / 0.141683 (-0.118349) | 1.499734 / 1.452155 (0.047579) | 1.542006 / 1.492716 (0.049290) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210488 / 0.018006 (0.192482) | 0.462187 / 0.000490 (0.461697) | 0.001280 / 0.000200 (0.001080) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027812 / 0.037411 (-0.009599) | 0.082492 / 0.014526 (0.067966) | 0.096504 / 0.176557 (-0.080053) | 0.158164 / 0.737135 (-0.578972) | 0.096678 / 0.296338 (-0.199661) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403317 / 0.215209 (0.188108) | 4.008367 / 2.077655 (1.930713) | 2.033067 / 1.504120 (0.528947) | 1.869484 / 1.541195 (0.328290) | 1.947450 / 1.468490 
(0.478960) | 0.494048 / 4.584777 (-4.090729) | 3.631673 / 3.745712 (-0.114039) | 5.322167 / 5.269862 (0.052306) | 3.125570 / 4.565676 (-1.440107) | 0.057341 / 0.424275 (-0.366934) | 0.007318 / 0.007607 (-0.000289) | 0.483990 / 0.226044 (0.257945) | 4.830573 / 2.268929 (2.561645) | 2.543267 / 55.444624 (-52.901358) | 2.217890 / 6.876477 (-4.658587) | 2.435111 / 2.142072 (0.293038) | 0.597920 / 4.805227 (-4.207307) | 0.132690 / 6.500664 (-6.367974) | 0.060160 / 0.075469 (-0.015309) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247656 / 1.841788 (-0.594131) | 19.436984 / 8.074308 (11.362675) | 14.504249 / 10.191392 (4.312857) | 0.167444 / 0.680424 (-0.512980) | 0.018214 / 0.534201 (-0.515987) | 0.394790 / 0.579283 (-0.184493) | 0.413770 / 0.434364 (-0.020594) | 0.474290 / 0.540337 (-0.066048) | 0.646782 / 1.386936 (-0.740154) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006575 / 0.011353 (-0.004778) | 0.003924 / 0.011008 (-0.007084) | 0.064402 / 0.038508 (0.025893) | 0.072569 / 0.023109 (0.049460) | 0.361981 / 0.275898 (0.086083) | 0.398660 / 0.323480 (0.075180) | 0.005380 / 0.007986 (-0.002605) | 0.003355 / 0.004328 (-0.000974) | 0.065173 / 0.004250 (0.060923) | 0.057120 / 0.037052 (0.020067) | 0.366347 / 0.258489 (0.107858) | 0.402723 / 0.293841 (0.108882) | 0.031258 / 0.128546 (-0.097288) | 0.008499 / 0.075646 (-0.067147) | 0.070558 / 0.419271 (-0.348714) | 0.050089 / 0.043533 (0.006556) | 0.361280 / 0.255139 (0.106141) | 0.384497 / 0.283200 (0.101297) | 0.024789 / 0.141683 (-0.116893) | 1.492577 / 1.452155 (0.040422) | 1.572242 / 1.492716 (0.079525) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228054 / 0.018006 (0.210048) | 0.448317 / 0.000490 (0.447828) | 0.000368 / 0.000200 (0.000168) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030575 / 0.037411 (-0.006836) | 0.088604 / 0.014526 (0.074078) | 0.099317 / 0.176557 (-0.077239) | 0.152455 / 0.737135 (-0.584680) | 0.100444 / 0.296338 (-0.195894) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411876 / 0.215209 (0.196667) | 4.108187 / 2.077655 (2.030532) | 2.096371 / 1.504120 (0.592251) | 1.923532 / 1.541195 (0.382337) | 1.998345 / 1.468490 (0.529855) | 0.483853 / 4.584777 (-4.100924) | 3.622433 / 3.745712 (-0.123279) | 3.254430 / 5.269862 (-2.015431) | 2.044342 / 4.565676 (-2.521334) | 0.056756 / 0.424275 (-0.367519) | 0.007720 / 0.007607 (0.000113) | 0.487656 / 0.226044 (0.261612) | 4.882024 / 2.268929 (2.613096) | 2.585008 / 55.444624 (-52.859616) | 2.229251 / 6.876477 (-4.647225) | 2.408318 / 2.142072 (0.266246) | 0.617537 / 4.805227 (-4.187691) | 0.132102 / 6.500664 (-6.368562) | 0.061694 / 0.075469 (-0.013775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.362077 / 1.841788 (-0.479711) | 19.750714 / 8.074308 (11.676406) | 14.545299 / 10.191392 (4.353907) | 0.168666 / 0.680424 (-0.511758) | 0.018606 / 0.534201 (-0.515595) | 0.394760 / 0.579283 (-0.184523) | 0.410030 / 0.434364 (-0.024334) | 0.464742 / 0.540337 (-0.075596) | 0.610881 / 1.386936 (-0.776055) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53e8007baeff133aaad8cbb366196be18a5e57fd \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005836 / 0.011353 (-0.005517) | 0.003493 / 0.011008 (-0.007515) | 0.079877 / 0.038508 (0.041369) | 0.057299 / 0.023109 (0.034190) | 0.332945 / 0.275898 (0.057047) | 0.386615 / 0.323480 (0.063135) | 0.004437 / 0.007986 (-0.003548) | 0.002758 / 0.004328 (-0.001571) | 0.062668 / 0.004250 (0.058418) | 0.046135 / 0.037052 (0.009083) | 0.346160 / 0.258489 (0.087671) | 0.416720 / 0.293841 (0.122879) | 0.026678 / 0.128546 (-0.101868) | 0.007893 / 0.075646 (-0.067753) | 0.260427 / 0.419271 (-0.158845) | 0.044240 / 0.043533 (0.000707) | 0.328101 / 0.255139 (0.072963) | 0.380072 / 0.283200 (0.096872) | 0.020813 / 0.141683 (-0.120870) | 1.400202 / 1.452155 (-0.051952) | 1.475627 / 1.492716 (-0.017089) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.174479 / 0.018006 (0.156473) | 0.413810 / 0.000490 (0.413320) | 0.003059 / 0.000200 (0.002860) | 0.000212 / 0.000054 (0.000157) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023422 / 0.037411 (-0.013990) | 0.071519 / 0.014526 (0.056993) | 0.080555 / 0.176557 (-0.096001) | 0.143825 / 0.737135 (-0.593311) | 0.081182 / 0.296338 (-0.215157) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.406858 / 0.215209 (0.191648) | 4.161475 / 2.077655 (2.083820) | 1.991800 / 1.504120 (0.487680) | 1.811224 / 1.541195 (0.270030) | 1.828809 / 1.468490 (0.360318) | 0.504882 / 4.584777 (-4.079895) | 2.985010 / 3.745712 (-0.760703) | 3.984856 / 5.269862 (-1.285006) | 2.477936 / 4.565676 (-2.087740) | 0.057553 / 0.424275 (-0.366722) | 0.006436 / 0.007607 (-0.001172) | 0.488061 / 0.226044 (0.262016) | 4.805501 / 2.268929 (2.536573) | 2.446508 / 55.444624 (-52.998116) | 2.051406 / 6.876477 (-4.825071) | 2.177696 / 2.142072 (0.035623) | 0.588021 / 4.805227 (-4.217207) | 0.125118 / 6.500664 (-6.375546) | 0.060885 / 0.075469 (-0.014584) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197130 / 1.841788 (-0.644658) | 17.867450 / 8.074308 (9.793142) | 13.536895 / 10.191392 (3.345503) | 0.137603 / 0.680424 (-0.542821) | 0.016706 / 0.534201 (-0.517495) | 0.327642 / 0.579283 (-0.251641) | 0.347201 / 0.434364 (-0.087163) | 0.379570 / 0.540337 (-0.160768) | 0.517825 / 1.386936 (-0.869111) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005769 / 0.011353 (-0.005584) | 0.003414 / 0.011008 (-0.007594) | 0.063198 / 0.038508 (0.024690) | 0.056020 / 0.023109 (0.032911) | 0.393333 / 0.275898 (0.117435) | 0.421166 / 0.323480 (0.097686) | 0.004360 / 0.007986 (-0.003626) | 0.002860 / 0.004328 (-0.001469) | 0.062712 / 0.004250 (0.058461) | 0.045363 / 0.037052 (0.008311) | 0.413156 / 0.258489 (0.154667) | 0.422897 / 0.293841 (0.129056) | 0.027092 / 0.128546 (-0.101455) | 0.007960 / 0.075646 (-0.067687) | 0.068531 / 0.419271 (-0.350740) | 0.041402 / 0.043533 (-0.002131) | 0.377008 / 0.255139 (0.121869) | 0.409142 / 0.283200 (0.125942) | 0.019707 / 0.141683 (-0.121976) | 1.440556 / 1.452155 (-0.011599) | 1.487403 / 1.492716 (-0.005314) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224355 / 0.018006 (0.206349) | 0.397855 / 0.000490 (0.397365) | 0.000363 / 0.000200 (0.000163) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025107 / 0.037411 (-0.012305) | 0.076404 / 0.014526 (0.061878) | 0.083194 / 0.176557 (-0.093362) | 0.135347 / 0.737135 (-0.601789) | 0.084786 / 0.296338 (-0.211553) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433024 / 0.215209 (0.217815) | 4.323879 / 2.077655 (2.246224) | 2.263004 / 1.504120 (0.758884) | 2.072053 / 1.541195 (0.530858) | 2.113916 / 1.468490 (0.645426) | 0.502742 / 4.584777 
(-4.082035) | 3.001716 / 3.745712 (-0.743996) | 2.777960 / 5.269862 (-2.491901) | 1.826514 / 4.565676 (-2.739162) | 0.057735 / 0.424275 (-0.366540) | 0.006671 / 0.007607 (-0.000937) | 0.503347 / 0.226044 (0.277303) | 5.037308 / 2.268929 (2.768380) | 2.679146 / 55.444624 (-52.765478) | 2.410899 / 6.876477 (-4.465577) | 2.467341 / 2.142072 (0.325268) | 0.589824 / 4.805227 (-4.215403) | 0.125529 / 6.500664 (-6.375135) | 0.061950 / 0.075469 (-0.013520) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.304128 / 1.841788 (-0.537659) | 17.950215 / 8.074308 (9.875907) | 13.673768 / 10.191392 (3.482376) | 0.129863 / 0.680424 (-0.550561) | 0.016720 / 0.534201 (-0.517481) | 0.329795 / 0.579283 (-0.249488) | 0.339057 / 0.434364 (-0.095307) | 0.382279 / 0.540337 (-0.158059) | 0.507337 / 1.386936 (-0.879599) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ef05b6f99a2b19990c6f5e4e28d95d28781570db \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006199 / 0.011353 (-0.005154) | 0.003749 / 0.011008 (-0.007259) | 0.080600 / 0.038508 (0.042092) | 0.061017 / 0.023109 (0.037908) | 0.319966 / 0.275898 (0.044067) | 0.354937 / 0.323480 (0.031457) | 0.004854 / 0.007986 (-0.003131) | 0.002996 / 0.004328 (-0.001333) | 0.063100 / 0.004250 (0.058849) | 0.050063 / 0.037052 (0.013011) | 0.316744 / 0.258489 (0.058255) | 0.358001 / 0.293841 (0.064160) | 0.027503 / 0.128546 (-0.101043) | 0.007876 / 0.075646 (-0.067771) | 0.262211 / 0.419271 (-0.157060) | 0.045717 / 0.043533 (0.002184) | 0.317188 / 0.255139 (0.062049) | 0.342404 / 0.283200 (0.059205) | 0.020194 / 0.141683 (-0.121489) | 1.498672 / 1.452155 (0.046517) | 1.545479 / 1.492716 (0.052762) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210985 / 0.018006 (0.192979) | 0.433592 / 0.000490 (0.433102) | 0.002864 / 0.000200 (0.002664) | 0.000079 / 0.000054 
(0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023463 / 0.037411 (-0.013948) | 0.073375 / 0.014526 (0.058850) | 0.083082 / 0.176557 (-0.093475) | 0.142583 / 0.737135 (-0.594552) | 0.084267 / 0.296338 (-0.212071) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412890 / 0.215209 (0.197681) | 4.131421 / 2.077655 (2.053766) | 1.969164 / 1.504120 (0.465044) | 1.772379 / 1.541195 (0.231185) | 1.834154 / 1.468490 (0.365664) | 0.496290 / 4.584777 (-4.088487) | 3.056504 / 3.745712 (-0.689208) | 3.400962 / 5.269862 (-1.868900) | 2.120575 / 4.565676 (-2.445101) | 0.056932 / 0.424275 (-0.367343) | 0.006412 / 0.007607 (-0.001195) | 0.484521 / 0.226044 (0.258477) | 4.817474 / 2.268929 (2.548545) | 2.464075 / 55.444624 (-52.980549) | 2.085056 / 6.876477 (-4.791421) | 2.324516 / 2.142072 (0.182444) | 0.592013 / 4.805227 (-4.213214) | 0.132232 / 6.500664 (-6.368432) | 0.062825 / 0.075469 (-0.012645) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228080 / 1.841788 (-0.613708) | 18.555385 / 8.074308 (10.481077) | 13.939565 / 10.191392 (3.748173) | 0.145979 / 0.680424 (-0.534445) | 0.016823 / 0.534201 (-0.517377) | 0.330569 / 0.579283 (-0.248714) | 0.358094 / 0.434364 (-0.076270) | 0.384642 / 0.540337 (-0.155696) | 0.518347 / 1.386936 (-0.868589) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006198 / 0.011353 (-0.005155) | 0.003670 / 0.011008 (-0.007338) | 0.062502 / 0.038508 (0.023994) | 0.064339 / 0.023109 (0.041229) | 0.428414 / 0.275898 (0.152516) | 0.463899 / 0.323480 (0.140420) | 0.005524 / 0.007986 (-0.002462) | 0.002915 / 0.004328 (-0.001413) | 0.062521 / 0.004250 (0.058270) | 0.051182 / 0.037052 (0.014130) | 0.431144 / 0.258489 (0.172655) | 0.469465 / 0.293841 (0.175624) | 0.027463 / 0.128546 (-0.101083) | 0.007974 / 0.075646 (-0.067673) | 0.068029 / 0.419271 (-0.351242) | 0.042123 / 0.043533 (-0.001409) | 0.428667 / 0.255139 (0.173528) | 0.455917 / 0.283200 (0.172717) | 0.023264 / 0.141683 (-0.118419) | 1.426986 / 1.452155 (-0.025168) | 1.500049 / 1.492716 (0.007332) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207264 / 0.018006 (0.189258) | 0.440738 / 0.000490 (0.440248) | 0.000802 / 0.000200 (0.000602) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026245 / 0.037411 (-0.011166) | 0.078749 / 0.014526 (0.064223) | 0.087873 / 0.176557 (-0.088684) | 0.141518 / 0.737135 (-0.595617) | 0.089811 / 0.296338 (-0.206527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418955 / 0.215209 (0.203746) | 4.177881 / 2.077655 (2.100226) | 2.162678 / 1.504120 (0.658558) | 1.998969 / 1.541195 (0.457775) | 2.066720 / 1.468490 (0.598230) | 0.496850 / 4.584777 (-4.087927) | 3.041179 / 3.745712 (-0.704534) | 4.126039 / 5.269862 (-1.143823) | 2.740507 / 4.565676 (-1.825169) | 0.058025 / 0.424275 (-0.366250) | 0.006846 / 0.007607 (-0.000761) | 0.493281 / 0.226044 (0.267237) | 4.930196 / 2.268929 (2.661268) | 2.685152 / 55.444624 (-52.759472) | 2.378247 / 6.876477 (-4.498230) | 2.469103 / 2.142072 (0.327031) | 0.585346 / 4.805227 (-4.219882) | 0.126099 / 6.500664 (-6.374565) | 0.062946 / 0.075469 (-0.012523) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.313892 / 1.841788 (-0.527896) | 19.177117 / 8.074308 (11.102809) | 14.081321 / 10.191392 (3.889929) | 0.133948 / 0.680424 (-0.546476) | 0.017128 / 0.534201 (-0.517073) | 0.332241 / 0.579283 (-0.247042) | 0.373218 / 0.434364 (-0.061145) | 0.395308 / 0.540337 (-0.145030) | 0.529883 / 1.386936 (-0.857053) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#16f7c7677942083436062b904b74643accb9bcac \"CML watermark\")\n" ]
2023-07-31T06:05:36
2023-07-31T06:33:00
2023-07-31T06:18:17
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6101/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6101/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6101", "html_url": "https://github.com/huggingface/datasets/pull/6101", "diff_url": "https://github.com/huggingface/datasets/pull/6101.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6101.patch", "merged_at": "2023-07-31T06:18:17" }
true
https://api.github.com/repos/huggingface/datasets/issues/6100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6100/comments
https://api.github.com/repos/huggingface/datasets/issues/6100/events
https://github.com/huggingface/datasets/issues/6100
1,828,118,930
I_kwDODunzps5s9uGS
6,100
TypeError when loading from GCP bucket
{ "login": "bilelomrani1", "id": 16692099, "node_id": "MDQ6VXNlcjE2NjkyMDk5", "avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bilelomrani1", "html_url": "https://github.com/bilelomrani1", "followers_url": "https://api.github.com/users/bilelomrani1/followers", "following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}", "gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}", "starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions", "organizations_url": "https://api.github.com/users/bilelomrani1/orgs", "repos_url": "https://api.github.com/users/bilelomrani1/repos", "events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}", "received_events_url": "https://api.github.com/users/bilelomrani1/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @bilelomrani1.\r\n\r\nWe are fixing it. ", "We have fixed it. We are planning to do a patch release today." ]
2023-07-30T23:03:00
2023-08-03T10:00:48
2023-08-01T10:38:55
NONE
null
### Describe the bug Loading a dataset from a GCP bucket raises a `TypeError`. This bug was introduced recently (either in 2.14 or 2.14.1) and appeared during a migration from 2.13.1. ### Steps to reproduce the bug Load any file from a GCP bucket: ```python import datasets datasets.load_dataset("json", data_files=["gs://..."]) ``` The following exception is raised: ```python Traceback (most recent call last): ... packages/datasets/data_files.py", line 335, in resolve_pattern protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else "" TypeError: can only concatenate tuple (not "str") to tuple ``` With a `GoogleFileSystem`, the attribute `fs.protocol` is a tuple `('gs', 'gcs')` and hence cannot be concatenated with a string. ### Expected behavior The file should be loaded without exception. ### Environment info - `datasets` version: 2.14.1 - Platform: macOS-13.2.1-x86_64-i386-64bit - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
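For context, a minimal sketch of why the concatenation in `resolve_pattern` fails, plus one defensive normalization; this assumes `gcsfs` is installed and that `fs.protocol` is a tuple of aliases, as reported above:

```python
import fsspec

# gcsfs exposes the filesystem protocol as a tuple of aliases rather than a
# plain string, so `fs.protocol + "://"` raises the TypeError shown above.
fs = fsspec.filesystem("gs")  # requires the gcsfs package

# Defensive fix: pick the first alias when `protocol` is a tuple or list.
protocol = fs.protocol[0] if isinstance(fs.protocol, (tuple, list)) else fs.protocol
protocol_prefix = protocol + "://" if protocol != "file" else ""
print(protocol_prefix)  # "gs://"
```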
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6100/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6099/comments
https://api.github.com/repos/huggingface/datasets/issues/6099/events
https://github.com/huggingface/datasets/issues/6099
1,827,893,576
I_kwDODunzps5s83FI
6,099
How do I get "amazon_us_reviews"
{ "login": "IqraBaluch", "id": 57810189, "node_id": "MDQ6VXNlcjU3ODEwMTg5", "avatar_url": "https://avatars.githubusercontent.com/u/57810189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IqraBaluch", "html_url": "https://github.com/IqraBaluch", "followers_url": "https://api.github.com/users/IqraBaluch/followers", "following_url": "https://api.github.com/users/IqraBaluch/following{/other_user}", "gists_url": "https://api.github.com/users/IqraBaluch/gists{/gist_id}", "starred_url": "https://api.github.com/users/IqraBaluch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IqraBaluch/subscriptions", "organizations_url": "https://api.github.com/users/IqraBaluch/orgs", "repos_url": "https://api.github.com/users/IqraBaluch/repos", "events_url": "https://api.github.com/users/IqraBaluch/events{/privacy}", "received_events_url": "https://api.github.com/users/IqraBaluch/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Seems like the problem isn't with the library, but the dataset itself hosted on AWS S3.\r\n\r\nIts [homepage](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) returns an `AccessDenied` XML response, which is the same thing you get if you try to log the `record` that triggers the exception\r\n\r\n```python\r\ntry:\r\n example = self.info.features.encode_example(record) if self.info.features is not None else record\r\nexcept Exception as e:\r\n print(record)\r\n```\r\n\r\n⬇️\r\n\r\n```\r\n{'<?xml version=\"1.0\" encoding=\"UTF-8\"?>': '<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>N2HFJ82ZV8SZW9BV</RequestId><HostId>Zw2DQ0V2GdRmvH5qWEpumK4uj5+W8YPcilQbN9fLBr3VqQOcKPHOhUZLG3LcM9X5fkOetxp48Os=</HostId></Error>'}\r\n```", "I'm getting same errors when loading this dataset", "I have figured it out. there was an option of **parquet formated files** i downloaded some from there. ", "this dataset is unfortunately no longer public", "Thanks for reporting, @IqraBaluch.\r\n\r\nWe contacted the authors and unfortunately they reported that Amazon has decided to stop distributing this dataset.", "If anyone still needs this dataset, you could find it on kaggle here : https://www.kaggle.com/datasets/cynthiarempel/amazon-us-customer-reviews-dataset", "Thanks @Maryam-Mostafa ", "@albertvillanova don't tell 'em, we have figured it out. XD", "I noticed that some book data is missing, we can only get Books_v1_02 data. \r\nIs there any way we can get the Books_v1_00 and Books_v1_01? \r\nReally appreciate !!!", "@albertvillanova will this dataset be retired given the data are no longer hosted on S3? What is done in cases such as these?" ]
2023-07-30T11:02:17
2023-08-21T05:08:08
2023-08-10T05:02:35
NONE
null
### Feature request I have been trying to load the "amazon_us_reviews" dataset but have been unable to do so. `amazon_us_reviews = load_dataset('amazon_us_reviews')` `print(amazon_us_reviews)` > [ValueError: Config name is missing. Please pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1_00', 'Video_DVD_v1_00', 'Video_v1_00', 'Toys_v1_00', 'Tools_v1_00', 'Sports_v1_00', 'Software_v1_00', 'Shoes_v1_00', 'Pet_Products_v1_00', 'Personal_Care_Appliances_v1_00', 'PC_v1_00', 'Outdoors_v1_00', 'Office_Products_v1_00', 'Musical_Instruments_v1_00', 'Music_v1_00', 'Mobile_Electronics_v1_00', 'Mobile_Apps_v1_00', 'Major_Appliances_v1_00', 'Luggage_v1_00', 'Lawn_and_Garden_v1_00', 'Kitchen_v1_00', 'Jewelry_v1_00', 'Home_Improvement_v1_00', 'Home_Entertainment_v1_00', 'Home_v1_00', 'Health_Personal_Care_v1_00', 'Grocery_v1_00', 'Gift_Card_v1_00', 'Furniture_v1_00', 'Electronics_v1_00', 'Digital_Video_Games_v1_00', 'Digital_Video_Download_v1_00', 'Digital_Software_v1_00', 'Digital_Music_Purchase_v1_00', 'Digital_Ebook_Purchase_v1_00', 'Camera_v1_00', 'Books_v1_00', 'Beauty_v1_00', 'Baby_v1_00', 'Automotive_v1_00', 'Apparel_v1_00', 'Digital_Ebook_Purchase_v1_01', 'Books_v1_01', 'Books_v1_02'] Example of usage: `load_dataset('amazon_us_reviews', 'Wireless_v1_00')`] __________________________________________________________________________ `amazon_us_reviews = load_dataset('amazon_us_reviews', 'Watches_v1_00') print(amazon_us_reviews)` **ERROR** `Generating` train split: 0% 0/960872 [00:00<?, ? examples/s] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1692 ) -> 1693 example = self.info.features.encode_example(record) if self.info.features is not None else record 1694 writer.write(example, key) 11 frames KeyError: 'marketplace' The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1710 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1711 e = e.__context__ -> 1712 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1713 1714 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ### Motivation The dataset I'm using: https://huggingface.co./datasets/amazon_us_reviews ### Your contribution What is the best way to load this data?
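For readers hitting the same `ValueError`, a short sketch of picking a config programmatically; this assumes the dataset script and its underlying files are still reachable, which the comments above suggest is no longer the case:

```python
from datasets import get_dataset_config_names, load_dataset

# Script-based datasets with multiple configs require an explicit config name.
configs = get_dataset_config_names("amazon_us_reviews")
print(configs[:3])  # e.g. ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1_00']

# Load a single config once its name is known:
ds = load_dataset("amazon_us_reviews", "Watches_v1_00")
```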
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6099/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6098
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6098/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6098/comments
https://api.github.com/repos/huggingface/datasets/issues/6098/events
https://github.com/huggingface/datasets/pull/6098
1,827,655,071
PR_kwDODunzps5WuCn1
6,098
Expanduser in save_to_disk()
{ "login": "Unknown3141592", "id": 51715864, "node_id": "MDQ6VXNlcjUxNzE1ODY0", "avatar_url": "https://avatars.githubusercontent.com/u/51715864?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Unknown3141592", "html_url": "https://github.com/Unknown3141592", "followers_url": "https://api.github.com/users/Unknown3141592/followers", "following_url": "https://api.github.com/users/Unknown3141592/following{/other_user}", "gists_url": "https://api.github.com/users/Unknown3141592/gists{/gist_id}", "starred_url": "https://api.github.com/users/Unknown3141592/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Unknown3141592/subscriptions", "organizations_url": "https://api.github.com/users/Unknown3141592/orgs", "repos_url": "https://api.github.com/users/Unknown3141592/repos", "events_url": "https://api.github.com/users/Unknown3141592/events{/privacy}", "received_events_url": "https://api.github.com/users/Unknown3141592/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
2023-07-29T20:50:45
2023-07-29T20:58:57
null
NONE
null
Fixes #5651. The same problem occurs when loading from disk, so I fixed it there too. I am not sure why the case distinction between local and remote filesystems is even necessary for `DatasetDict` when saving to disk. IMO, this could be removed (leaving only `fs.makedirs(dataset_dict_path, exist_ok=True)`).
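A minimal sketch of the kind of normalization this PR is about; `normalize_local_path` is a hypothetical helper for illustration, not the actual code in the diff:

```python
import os

def normalize_local_path(path: str) -> str:
    # Without expanduser, "~/my_dataset" is treated as a literal directory
    # named "~" relative to the current working directory.
    return os.path.abspath(os.path.expanduser(path))

print(normalize_local_path("~/my_dataset"))  # e.g. /home/user/my_dataset
```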
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6098/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6098/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6098", "html_url": "https://github.com/huggingface/datasets/pull/6098", "diff_url": "https://github.com/huggingface/datasets/pull/6098.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6098.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6097/comments
https://api.github.com/repos/huggingface/datasets/issues/6097/events
https://github.com/huggingface/datasets/issues/6097
1,827,054,143
I_kwDODunzps5s5qI_
6,097
Dataset.get_nearest_examples does not return all feature values for the k most similar datapoints - side effect of Dataset.set_format
{ "login": "aschoenauer-sebag", "id": 2538048, "node_id": "MDQ6VXNlcjI1MzgwNDg=", "avatar_url": "https://avatars.githubusercontent.com/u/2538048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aschoenauer-sebag", "html_url": "https://github.com/aschoenauer-sebag", "followers_url": "https://api.github.com/users/aschoenauer-sebag/followers", "following_url": "https://api.github.com/users/aschoenauer-sebag/following{/other_user}", "gists_url": "https://api.github.com/users/aschoenauer-sebag/gists{/gist_id}", "starred_url": "https://api.github.com/users/aschoenauer-sebag/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aschoenauer-sebag/subscriptions", "organizations_url": "https://api.github.com/users/aschoenauer-sebag/orgs", "repos_url": "https://api.github.com/users/aschoenauer-sebag/repos", "events_url": "https://api.github.com/users/aschoenauer-sebag/events{/privacy}", "received_events_url": "https://api.github.com/users/aschoenauer-sebag/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Actually, my bad -- specifying\r\n```python\r\nfoo.set_format('numpy', ['vectors'], output_all_columns=True)\r\n```\r\nfixes it." ]
2023-07-28T20:31:59
2023-07-28T20:49:58
2023-07-28T20:49:58
NONE
null
### Describe the bug Hi team! I observe that there seems to be a side effect of `Dataset.set_format`: after setting a format and creating a FAISS index, the method `get_nearest_examples` from the `Dataset` class fails to retrieve anything but the embeddings themselves - not super useful. This is not the case when the `set_format` method is not used: you can also retrieve any other feature value, such as an index/id/etc. Are you able to reproduce what I observe? ### Steps to reproduce the bug ```python from datasets import Dataset import numpy as np foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]} foo = Dataset.from_dict(foo) foo.set_format('numpy', ['vectors']) foo.add_faiss_index('vectors') new_vector = np.random.random(1024) scores, res = foo.get_nearest_examples('vectors', new_vector, k=3) ``` This will return only the following for the most similar vectors to `new_vector` - in particular, it will not return the `ids` feature: ``` {'vectors': array([[random values ...]])} ``` ### Expected behavior The expected behavior happens when the `set_format` method is not called: ```python from datasets import Dataset import numpy as np foo = {'vectors': np.random.random((100,1024)), 'ids': [str(u) for u in range(100)]} foo = Dataset.from_dict(foo) # foo.set_format('numpy', ['vectors']) foo.add_faiss_index('vectors') new_vector = np.random.random(1024) scores, res = foo.get_nearest_examples('vectors', new_vector, k=3) ``` This *will* return the `ids` of the similar vectors - although, unfortunately, as a list of lists in lieu of the array (for caching reasons, I believe - I read this elsewhere): ``` {'vectors': [[random values on multiple lines...]], 'ids': ['x', 'y', 'z']} ``` ### Environment info - `datasets` version: 2.12.0 - Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
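Putting the author's own follow-up fix (see the comment above) into the full snippet; this assumes `faiss` is installed for the index:

```python
from datasets import Dataset
import numpy as np

foo = Dataset.from_dict({
    "vectors": np.random.random((100, 1024)),
    "ids": [str(u) for u in range(100)],
})
# output_all_columns=True keeps the numpy format for "vectors" while still
# returning the unformatted columns (here "ids") in retrieval results.
foo.set_format("numpy", ["vectors"], output_all_columns=True)
foo.add_faiss_index("vectors")

scores, res = foo.get_nearest_examples("vectors", np.random.random(1024), k=3)
print(res.keys())  # dict_keys(['vectors', 'ids'])
```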
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6097/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6097/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6096/comments
https://api.github.com/repos/huggingface/datasets/issues/6096/events
https://github.com/huggingface/datasets/pull/6096
1,826,731,091
PR_kwDODunzps5Wq9Hb
6,096
Add `fsspec` support for `to_json`, `to_csv`, and `to_parquet`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6096). All of your documentation changes will be reflected on that endpoint." ]
2023-07-28T16:36:59
2023-07-31T13:12:52
null
CONTRIBUTOR
null
Hi to whoever is reading this! 🤗 (Most likely @mariosasko) ## What's in this PR? This PR replaces Python's built-in `open` with `fsspec.open` and adds the argument `storage_options` to the methods `to_json`, `to_csv`, and `to_parquet`, to allow users to export any 🤗`Dataset` to a file in any file system, as requested in #6086. ## What's missing in this PR? In the docstrings of `to_json`, `to_csv`, and `to_parquet` for the recently added `storage_options` arg, I've scoped the feature to 2.15.0, so we should double-check that before merging in case we want to scope it for 2.14.2 instead. Additionally, should we also add `fsspec` support for the `from_csv`, `from_json`, and `from_parquet` methods? If you want me to do so @mariosasko, just let me know and I'll create another PR to support that too!
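A usage sketch of the API this PR proposes, assuming it lands as described; the bucket path and the `anon` option are illustrative, and `s3fs` would be needed for `s3://` paths:

```python
from datasets import load_dataset

ds = load_dataset("json", data_files="data.json", split="train")

# With fsspec-backed paths, the exporters can write straight to remote
# storage; `storage_options` is forwarded to the target filesystem.
ds.to_parquet(
    "s3://my-bucket/exports/data.parquet",  # hypothetical destination
    storage_options={"anon": False},
)
```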
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6096/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6096/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6096", "html_url": "https://github.com/huggingface/datasets/pull/6096", "diff_url": "https://github.com/huggingface/datasets/pull/6096.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6096.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/6095
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6095/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6095/comments
https://api.github.com/repos/huggingface/datasets/issues/6095/events
https://github.com/huggingface/datasets/pull/6095
1,826,496,967
PR_kwDODunzps5WqJtr
6,095
Fix deprecation of errors in TextConfig
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012497 / 0.011353 (0.001144) | 0.005355 / 0.011008 (-0.005654) | 0.106018 / 0.038508 (0.067510) | 0.093069 / 0.023109 (0.069960) | 0.394699 / 0.275898 (0.118801) | 0.449723 / 0.323480 (0.126243) | 0.006434 / 0.007986 (-0.001552) | 0.004187 / 0.004328 (-0.000141) | 0.079620 / 0.004250 (0.075370) | 0.062513 / 0.037052 (0.025460) | 0.410305 / 0.258489 (0.151816) | 0.467231 / 0.293841 (0.173390) | 0.048130 / 0.128546 (-0.080416) | 0.013747 / 0.075646 (-0.061899) | 0.357979 / 0.419271 (-0.061293) | 0.064764 / 0.043533 (0.021231) | 0.411029 / 0.255139 (0.155890) | 0.454734 / 0.283200 (0.171534) | 0.037215 / 0.141683 (-0.104468) | 1.801331 / 1.452155 (0.349176) | 1.951628 / 1.492716 (0.458912) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231073 / 0.018006 (0.213067) | 0.564179 / 0.000490 (0.563689) | 0.000947 / 0.000200 (0.000747) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030629 / 0.037411 (-0.006783) | 0.092522 / 0.014526 (0.077996) | 0.109781 / 0.176557 (-0.066775) | 0.183185 / 0.737135 (-0.553950) | 0.109679 / 0.296338 (-0.186660) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.600095 / 0.215209 (0.384886) | 6.072868 / 2.077655 (3.995213) | 2.684109 
/ 1.504120 (1.179989) | 2.436204 / 1.541195 (0.895010) | 2.514667 / 1.468490 (1.046177) | 0.865455 / 4.584777 (-3.719322) | 5.245561 / 3.745712 (1.499849) | 5.628688 / 5.269862 (0.358826) | 3.457343 / 4.565676 (-1.108333) | 0.107563 / 0.424275 (-0.316712) | 0.008803 / 0.007607 (0.001196) | 0.754014 / 0.226044 (0.527970) | 7.341226 / 2.268929 (5.072297) | 3.482090 / 55.444624 (-51.962534) | 2.726071 / 6.876477 (-4.150406) | 3.168494 / 2.142072 (1.026422) | 1.023517 / 4.805227 (-3.781710) | 0.207440 / 6.500664 (-6.293224) | 0.073642 / 0.075469 (-0.001827) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.588636 / 1.841788 (-0.253152) | 23.305257 / 8.074308 (15.230949) | 22.071476 / 10.191392 (11.880084) | 0.242044 / 0.680424 (-0.438379) | 0.028830 / 0.534201 (-0.505371) | 0.461414 / 0.579283 (-0.117869) | 0.591024 / 0.434364 (0.156660) | 0.548984 / 0.540337 (0.008646) | 0.783318 / 1.386936 (-0.603618) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008724 / 0.011353 (-0.002629) | 0.004638 / 0.011008 (-0.006371) | 0.081024 / 0.038508 (0.042516) | 0.077533 / 0.023109 (0.054423) | 0.444827 / 0.275898 (0.168929) | 0.507812 / 0.323480 (0.184332) | 0.006017 / 0.007986 (-0.001968) | 0.004204 / 0.004328 (-0.000124) | 0.082154 / 0.004250 (0.077904) | 0.063818 / 0.037052 (0.026765) | 0.463468 / 0.258489 (0.204979) | 0.536784 / 0.293841 (0.242943) | 0.046393 / 0.128546 (-0.082153) | 0.014349 / 0.075646 (-0.061298) | 0.089213 / 0.419271 (-0.330059) | 0.058313 / 0.043533 (0.014780) | 0.463674 / 0.255139 (0.208535) | 0.495865 / 0.283200 (0.212665) | 0.036586 / 0.141683 (-0.105096) | 1.801601 / 1.452155 (0.349447) | 1.871219 / 1.492716 (0.378502) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273411 / 0.018006 (0.255405) | 0.531745 / 0.000490 (0.531255) | 0.000424 / 0.000200 (0.000224) | 0.000130 / 0.000054 (0.000076) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037689 / 0.037411 (0.000278) | 0.109544 / 0.014526 (0.095019) | 0.124053 / 0.176557 (-0.052504) | 0.179960 / 0.737135 (-0.557175) | 0.118218 / 0.296338 (-0.178120) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.639859 / 0.215209 (0.424650) | 6.347385 / 2.077655 (4.269730) | 2.910188 / 1.504120 (1.406068) | 2.698821 / 1.541195 (1.157626) | 2.802652 / 1.468490 (1.334161) | 0.816109 / 4.584777 (-3.768668) | 5.190313 / 3.745712 (1.444601) | 4.642684 / 5.269862 (-0.627178) | 2.948092 / 4.565676 (-1.617584) | 0.095877 / 0.424275 (-0.328398) | 0.009631 / 0.007607 (0.002024) | 0.779136 / 0.226044 (0.553091) | 7.611586 / 2.268929 (5.342658) | 3.760804 / 55.444624 (-51.683820) | 3.139355 / 6.876477 (-3.737122) | 3.419660 / 2.142072 (1.277587) | 1.036397 / 4.805227 (-3.768831) | 0.224015 / 6.500664 (-6.276649) | 0.084037 / 0.075469 (0.008568) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.710608 / 1.841788 (-0.131179) | 24.447646 / 8.074308 (16.373338) | 21.345322 / 10.191392 (11.153930) | 0.232383 / 0.680424 (-0.448040) | 0.026381 / 0.534201 (-0.507820) | 0.475995 / 0.579283 (-0.103289) | 0.611939 / 0.434364 (0.177575) | 0.541441 / 0.540337 (0.001104) | 0.742796 / 1.386936 (-0.644140) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7929929525e734f7232cfc68d1d22fb8d53c54a3 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006140 / 0.011353 (-0.005213) | 0.003664 / 0.011008 (-0.007344) | 0.080765 / 0.038508 (0.042257) | 0.065009 / 0.023109 (0.041900) | 0.312787 / 0.275898 (0.036889) | 0.354637 / 0.323480 (0.031157) | 0.004846 / 0.007986 (-0.003140) | 0.003019 / 0.004328 (-0.001310) | 0.062823 / 0.004250 (0.058573) | 0.050446 / 0.037052 (0.013394) | 0.314478 / 0.258489 (0.055989) | 0.360206 / 0.293841 (0.066365) | 0.027282 / 0.128546 (-0.101265) | 0.008024 / 0.075646 (-0.067622) | 0.262125 / 0.419271 (-0.157146) | 0.045793 / 0.043533 (0.002260) | 0.310508 / 0.255139 (0.055369) | 0.340899 / 0.283200 (0.057699) | 0.021850 / 0.141683 (-0.119833) | 1.510791 / 1.452155 (0.058636) | 1.570661 / 1.492716 (0.077944) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192136 / 0.018006 (0.174130) | 0.449310 / 0.000490 (0.448820) | 0.004556 / 0.000200 (0.004356) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023689 / 0.037411 (-0.013722) | 0.076316 / 0.014526 (0.061791) | 0.084800 / 0.176557 (-0.091757) | 0.153154 / 0.737135 (-0.583981) | 0.086467 / 0.296338 (-0.209871) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432254 / 0.215209 (0.217045) | 4.305098 / 2.077655 (2.227443) | 2.304267 / 1.504120 (0.800147) | 2.139503 / 1.541195 (0.598309) | 2.220414 / 1.468490 (0.751924) | 0.498595 / 4.584777 (-4.086182) | 3.058593 / 3.745712 (-0.687119) | 4.324501 / 5.269862 (-0.945361) | 2.667731 / 4.565676 (-1.897946) | 0.059917 / 0.424275 (-0.364358) | 0.006829 / 0.007607 (-0.000778) | 0.504608 / 0.226044 (0.278564) | 5.044480 / 2.268929 (2.775552) | 2.753080 / 55.444624 (-52.691545) | 2.449265 / 6.876477 (-4.427212) | 2.635113 / 2.142072 (0.493040) | 0.590760 / 4.805227 (-4.214467) | 0.130133 / 6.500664 (-6.370532) | 0.062759 / 0.075469 (-0.012710) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267014 / 1.841788 (-0.574773) | 18.562890 / 8.074308 (10.488581) | 13.991257 / 10.191392 (3.799865) | 0.147108 / 0.680424 (-0.533315) | 0.017216 / 0.534201 (-0.516985) | 0.330317 / 0.579283 (-0.248966) | 0.351328 / 0.434364 (-0.083036) | 0.381097 / 0.540337 
(-0.159241) | 0.558718 / 1.386936 (-0.828218) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006385 / 0.011353 (-0.004967) | 0.003668 / 0.011008 (-0.007340) | 0.062581 / 0.038508 (0.024073) | 0.067006 / 0.023109 (0.043896) | 0.428465 / 0.275898 (0.152567) | 0.466106 / 0.323480 (0.142626) | 0.005806 / 0.007986 (-0.002180) | 0.003117 / 0.004328 (-0.001212) | 0.063554 / 0.004250 (0.059303) | 0.054404 / 0.037052 (0.017352) | 0.431168 / 0.258489 (0.172679) | 0.467578 / 0.293841 (0.173737) | 0.027779 / 0.128546 (-0.100767) | 0.008055 / 0.075646 (-0.067592) | 0.067718 / 0.419271 (-0.351554) | 0.043042 / 0.043533 (-0.000491) | 0.425926 / 0.255139 (0.170787) | 0.453699 / 0.283200 (0.170500) | 0.023495 / 0.141683 (-0.118187) | 1.435356 / 1.452155 (-0.016799) | 1.509340 / 1.492716 (0.016624) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242322 / 0.018006 (0.224316) | 0.446865 / 0.000490 (0.446376) | 0.001079 / 0.000200 (0.000879) | 0.000065 / 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025376 / 0.037411 (-0.012035) | 0.079373 / 0.014526 (0.064847) | 0.088554 / 0.176557 (-0.088002) | 0.141026 / 0.737135 (-0.596109) | 0.090666 / 0.296338 (-0.205672) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434800 / 0.215209 (0.219590) | 4.314491 / 2.077655 (2.236836) | 2.320688 / 1.504120 (0.816568) | 2.163941 / 1.541195 (0.622747) | 
2.292576 / 1.468490 (0.824086) | 0.500226 / 4.584777 (-4.084551) | 3.114604 / 3.745712 (-0.631108) | 4.206997 / 5.269862 (-1.062864) | 2.461126 / 4.565676 (-2.104551) | 0.057717 / 0.424275 (-0.366558) | 0.006989 / 0.007607 (-0.000618) | 0.515623 / 0.226044 (0.289579) | 5.155301 / 2.268929 (2.886372) | 2.733589 / 55.444624 (-52.711035) | 2.542111 / 6.876477 (-4.334366) | 2.697035 / 2.142072 (0.554963) | 0.594213 / 4.805227 (-4.211014) | 0.128537 / 6.500664 (-6.372127) | 0.065223 / 0.075469 (-0.010246) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306738 / 1.841788 (-0.535050) | 19.065370 / 8.074308 (10.991062) | 14.242096 / 10.191392 (4.050704) | 0.146177 / 0.680424 (-0.534246) | 0.017186 / 0.534201 (-0.517015) | 0.337224 / 0.579283 (-0.242059) | 0.349997 / 0.434364 (-0.084367) | 0.390408 / 0.540337 (-0.149930) | 0.524597 / 1.386936 (-0.862339) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#69ec36948b0ef1f194e9dcd43ec53a50b7708962 \"CML watermark\")\n" ]
2023-07-28T14:08:37
2023-07-31T05:26:32
2023-07-31T05:17:38
MEMBER
null
This PR fixes an issue with the deprecation of `errors` in `TextConfig` introduced by: - #5974 ```python In [1]: ds = load_dataset("text", data_files="test.txt", errors="strict") --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-13-701c27131a5d> in <module> ----> 1 ds = load_dataset("text", data_files="test.txt", errors="strict") ~/huggingface/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2107 2108 # Create a dataset builder -> 2109 builder_instance = load_dataset_builder( 2110 path=path, 2111 name=name, ~/huggingface/datasets/src/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs) 1830 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=dataset_name) 1831 # Instantiate the dataset builder -> 1832 builder_instance: DatasetBuilder = builder_cls( 1833 cache_dir=cache_dir, 1834 dataset_name=dataset_name, ~/huggingface/datasets/src/datasets/builder.py in __init__(self, cache_dir, dataset_name, config_name, hash, base_path, info, features, token, use_auth_token, repo_id, data_files, data_dir, storage_options, writer_batch_size, name, **config_kwargs) 371 if data_dir is not None: 372 config_kwargs["data_dir"] = data_dir --> 373 self.config, self.config_id = self._create_builder_config( 374 config_name=config_name, 375 custom_features=features, ~/huggingface/datasets/src/datasets/builder.py in _create_builder_config(self, config_name, custom_features, **config_kwargs) 550 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION: 551 config_kwargs["version"] = self.VERSION --> 552 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs) 553 554 # otherwise use the config_kwargs to overwrite the attributes TypeError: __init__() got an unexpected keyword argument 'errors' ``` Similar to: - #6094
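A sketch of the intended behavior once this fix is in; the replacement parameter name is assumed here to be `encoding_errors`, based on the deprecation introduced in #5974:

```python
from datasets import load_dataset

# The deprecated `errors` kwarg should be accepted again (with a deprecation
# warning) and forwarded to its replacement, assumed to be `encoding_errors`:
ds_old = load_dataset("text", data_files="test.txt", errors="strict")
ds_new = load_dataset("text", data_files="test.txt", encoding_errors="strict")
```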
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6095/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6095", "html_url": "https://github.com/huggingface/datasets/pull/6095", "diff_url": "https://github.com/huggingface/datasets/pull/6095.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6095.patch", "merged_at": "2023-07-31T05:17:38" }
true
https://api.github.com/repos/huggingface/datasets/issues/6094
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6094/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6094/comments
https://api.github.com/repos/huggingface/datasets/issues/6094/events
https://github.com/huggingface/datasets/pull/6094
1,826,293,414
PR_kwDODunzps5WpdpA
6,094
Fix deprecation of use_auth_token in DownloadConfig
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008996 / 0.011353 (-0.002357) | 0.004976 / 0.011008 (-0.006033) | 0.114495 / 0.038508 (0.075987) | 0.083958 / 0.023109 (0.060849) | 0.408395 / 0.275898 (0.132497) | 0.456757 / 0.323480 (0.133278) | 0.006396 / 0.007986 (-0.001589) | 0.004315 / 0.004328 (-0.000014) | 0.093558 / 0.004250 (0.089307) | 0.062067 / 0.037052 (0.025014) | 0.423452 / 0.258489 (0.164963) | 0.463947 / 0.293841 (0.170106) | 0.049934 / 0.128546 (-0.078613) | 0.013937 / 0.075646 (-0.061709) | 0.365809 / 0.419271 (-0.053463) | 0.067382 / 0.043533 (0.023849) | 0.418860 / 0.255139 (0.163721) | 0.463264 / 0.283200 (0.180065) | 0.034392 / 0.141683 (-0.107291) | 1.870685 / 1.452155 (0.418530) | 1.975313 / 1.492716 (0.482597) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261748 / 0.018006 (0.243742) | 0.645510 / 0.000490 (0.645020) | 0.000376 / 0.000200 (0.000176) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032129 / 0.037411 (-0.005282) | 0.104309 / 0.014526 (0.089783) | 0.113154 / 0.176557 (-0.063403) | 0.186795 / 0.737135 (-0.550341) | 0.115584 / 0.296338 (-0.180755) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.577755 / 0.215209 (0.362546) | 5.984988 / 2.077655 (3.907333) | 
2.581967 / 1.504120 (1.077848) | 2.305744 / 1.541195 (0.764549) | 2.359618 / 1.468490 (0.891128) | 0.882892 / 4.584777 (-3.701885) | 5.755578 / 3.745712 (2.009866) | 8.718373 / 5.269862 (3.448511) | 5.217586 / 4.565676 (0.651909) | 0.099785 / 0.424275 (-0.324490) | 0.009008 / 0.007607 (0.001401) | 0.730937 / 0.226044 (0.504892) | 7.265309 / 2.268929 (4.996381) | 3.487167 / 55.444624 (-51.957457) | 2.750090 / 6.876477 (-4.126386) | 3.060198 / 2.142072 (0.918125) | 1.069945 / 4.805227 (-3.735282) | 0.227143 / 6.500664 (-6.273521) | 0.083601 / 0.075469 (0.008132) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.754375 / 1.841788 (-0.087412) | 25.448731 / 8.074308 (17.374423) | 22.385943 / 10.191392 (12.194551) | 0.249921 / 0.680424 (-0.430503) | 0.034138 / 0.534201 (-0.500063) | 0.535170 / 0.579283 (-0.044113) | 0.605474 / 0.434364 (0.171110) | 0.580025 / 0.540337 (0.039688) | 0.810537 / 1.386936 (-0.576399) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009117 / 0.011353 (-0.002236) | 0.005029 / 0.011008 (-0.005979) | 0.082200 / 0.038508 (0.043691) | 0.082386 / 0.023109 (0.059277) | 0.491869 / 0.275898 (0.215971) | 0.546735 / 0.323480 (0.223255) | 0.006893 / 0.007986 (-0.001093) | 0.004571 / 0.004328 (0.000243) | 0.085361 / 0.004250 (0.081111) | 0.063342 / 0.037052 (0.026290) | 0.522522 / 0.258489 (0.264033) | 0.560784 / 0.293841 (0.266943) | 0.047685 / 0.128546 (-0.080861) | 0.017741 / 0.075646 (-0.057905) | 0.098204 / 0.419271 (-0.321067) | 0.062919 / 0.043533 (0.019386) | 0.504005 / 0.255139 (0.248866) | 0.547022 / 0.283200 (0.263823) | 0.033731 / 0.141683 (-0.107952) | 1.869765 / 1.452155 (0.417610) | 1.935867 / 1.492716 (0.443151) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.304756 / 0.018006 (0.286750) | 0.623647 / 0.000490 (0.623157) | 0.000508 / 0.000200 (0.000308) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.043627 / 0.037411 (0.006216) | 0.107183 / 0.014526 (0.092657) | 0.119304 / 0.176557 (-0.057253) | 0.192651 / 0.737135 (-0.544485) | 0.125118 / 0.296338 (-0.171221) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.669980 / 0.215209 (0.454771) | 6.566068 / 2.077655 (4.488413) | 3.136271 / 1.504120 (1.632152) | 2.964643 / 1.541195 (1.423448) | 2.936772 / 1.468490 (1.468282) | 0.885205 / 4.584777 (-3.699572) | 5.539062 / 3.745712 (1.793350) | 5.006133 / 5.269862 (-0.263729) | 3.313697 / 4.565676 (-1.251979) | 0.102975 / 0.424275 (-0.321301) | 0.010759 / 0.007607 (0.003152) | 0.791176 / 0.226044 (0.565132) | 7.822195 / 2.268929 (5.553266) | 3.982315 / 55.444624 (-51.462309) | 3.357026 / 6.876477 (-3.519451) | 3.561307 / 2.142072 (1.419234) | 1.056966 / 4.805227 (-3.748261) | 0.220476 / 6.500664 (-6.280188) | 0.090535 / 0.075469 (0.015066) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.897984 / 1.841788 (0.056196) | 26.411411 / 8.074308 (18.337103) | 22.951939 / 10.191392 (12.760547) | 0.216091 / 0.680424 (-0.464333) | 0.037005 / 0.534201 (-0.497196) | 0.505585 / 0.579283 (-0.073698) | 0.617794 / 0.434364 (0.183430) | 0.604631 / 0.540337 (0.064293) | 0.826356 / 1.386936 (-0.560580) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ca6342c0177adc3a1d114740444e207b8525ed6e \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006850 / 0.011353 (-0.004503) | 0.004062 / 0.011008 (-0.006947) | 0.086587 / 0.038508 (0.048079) | 0.079587 / 0.023109 (0.056478) | 0.353601 / 0.275898 (0.077702) | 0.396399 / 0.323480 (0.072919) | 0.004182 / 0.007986 (-0.003804) | 0.004445 / 0.004328 (0.000117) | 0.065100 / 0.004250 (0.060849) | 0.057386 / 0.037052 (0.020334) | 0.356945 / 0.258489 (0.098456) | 0.407093 / 0.293841 (0.113252) | 0.031949 / 0.128546 (-0.096597) | 0.008525 / 0.075646 (-0.067121) | 0.291310 / 0.419271 (-0.127961) | 0.053638 / 0.043533 (0.010105) | 0.359381 / 0.255139 (0.104242) | 0.399473 / 0.283200 (0.116273) | 0.025880 / 0.141683 (-0.115803) | 1.487604 / 1.452155 (0.035449) | 1.550528 / 1.492716 (0.057812) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201106 / 0.018006 (0.183099) | 0.457538 / 0.000490 (0.457048) | 0.003995 / 0.000200 (0.003795) | 0.000087 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030365 / 0.037411 (-0.007046) | 0.088064 / 0.014526 (0.073538) | 0.096432 / 0.176557 (-0.080124) | 0.158063 / 0.737135 (-0.579072) | 0.098258 / 0.296338 (-0.198080) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405351 / 0.215209 (0.190142) | 4.032639 / 2.077655 (1.954984) | 2.018357 / 1.504120 (0.514237) | 1.848493 / 1.541195 (0.307298) | 1.929401 / 1.468490 (0.460910) | 0.488729 / 4.584777 (-4.096048) | 3.586114 / 3.745712 (-0.159598) | 5.279054 / 5.269862 (0.009193) | 3.113275 / 4.565676 (-1.452402) | 0.057373 / 0.424275 (-0.366902) | 0.007416 / 0.007607 (-0.000191) | 0.485514 / 0.226044 (0.259470) | 4.854389 / 2.268929 (2.585461) | 2.493113 / 55.444624 (-52.951512) | 2.128836 / 6.876477 (-4.747641) | 2.383669 / 2.142072 (0.241596) | 0.588266 / 4.805227 (-4.216962) | 0.133603 / 6.500664 (-6.367061) | 0.061812 / 0.075469 (-0.013657) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260841 / 1.841788 (-0.580947) | 20.086954 / 8.074308 (12.012646) | 14.620932 / 10.191392 (4.429540) | 0.161525 / 0.680424 (-0.518899) | 0.018102 / 0.534201 (-0.516099) | 0.393810 / 0.579283 (-0.185473) | 0.406974 / 0.434364 (-0.027390) | 0.462732 / 0.540337 
(-0.077606) | 0.634221 / 1.386936 (-0.752715) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006692 / 0.011353 (-0.004661) | 0.004068 / 0.011008 (-0.006940) | 0.068009 / 0.038508 (0.029501) | 0.081298 / 0.023109 (0.058189) | 0.363531 / 0.275898 (0.087633) | 0.408482 / 0.323480 (0.085002) | 0.005601 / 0.007986 (-0.002384) | 0.003385 / 0.004328 (-0.000943) | 0.068043 / 0.004250 (0.063792) | 0.059739 / 0.037052 (0.022687) | 0.374043 / 0.258489 (0.115553) | 0.407219 / 0.293841 (0.113378) | 0.031194 / 0.128546 (-0.097352) | 0.008630 / 0.075646 (-0.067017) | 0.073755 / 0.419271 (-0.345517) | 0.049831 / 0.043533 (0.006298) | 0.363664 / 0.255139 (0.108525) | 0.381515 / 0.283200 (0.098315) | 0.026331 / 0.141683 (-0.115352) | 1.507771 / 1.452155 (0.055617) | 1.554403 / 1.492716 (0.061686) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226309 / 0.018006 (0.208302) | 0.452428 / 0.000490 (0.451938) | 0.000937 / 0.000200 (0.000737) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031899 / 0.037411 (-0.005513) | 0.092090 / 0.014526 (0.077564) | 0.100838 / 0.176557 (-0.075718) | 0.153722 / 0.737135 (-0.583413) | 0.101950 / 0.296338 (-0.194389) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417879 / 0.215209 (0.202669) | 4.171939 / 2.077655 (2.094284) | 2.312937 / 1.504120 (0.808817) | 2.209991 / 1.541195 (0.668796) | 2.329469 
/ 1.468490 (0.860979) | 0.484576 / 4.584777 (-4.100201) | 3.659198 / 3.745712 (-0.086514) | 5.255227 / 5.269862 (-0.014634) | 3.047430 / 4.565676 (-1.518247) | 0.057029 / 0.424275 (-0.367246) | 0.007735 / 0.007607 (0.000127) | 0.499962 / 0.226044 (0.273918) | 4.991655 / 2.268929 (2.722727) | 2.755999 / 55.444624 (-52.688625) | 2.374034 / 6.876477 (-4.502443) | 2.599759 / 2.142072 (0.457687) | 0.600319 / 4.805227 (-4.204908) | 0.146176 / 6.500664 (-6.354488) | 0.062328 / 0.075469 (-0.013142) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.346065 / 1.841788 (-0.495722) | 20.430343 / 8.074308 (12.356035) | 14.632959 / 10.191392 (4.441567) | 0.167007 / 0.680424 (-0.513417) | 0.018588 / 0.534201 (-0.515613) | 0.396015 / 0.579283 (-0.183268) | 0.429384 / 0.434364 (-0.004980) | 0.467746 / 0.540337 (-0.072591) | 0.615166 / 1.386936 (-0.771770) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#289bcc2ae9bf98c9414b6846ae603178a1816d3f \"CML watermark\")\n" ]
2023-07-28T11:52:21
2023-07-31T05:08:41
2023-07-31T04:59:50
MEMBER
null
This PR fixes an issue with the deprecation of `use_auth_token` in `DownloadConfig` introduced by: - #5996 ```python In [1]: from datasets import DownloadConfig In [2]: DownloadConfig(use_auth_token=False) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-3-41927b449e72> in <module> ----> 1 DownloadConfig(use_auth_token=False) TypeError: __init__() got an unexpected keyword argument 'use_auth_token' ``` ```python In [1]: from datasets import get_dataset_config_names In [2]: get_dataset_config_names("squad", use_auth_token=False) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-22-4671992ead50> in <module> ----> 1 get_dataset_config_names("squad", use_auth_token=False) ~/huggingface/datasets/src/datasets/inspect.py in get_dataset_config_names(path, revision, download_config, download_mode, dynamic_modules_path, data_files, **download_kwargs) 349 ``` 350 """ --> 351 dataset_module = dataset_module_factory( 352 path, 353 revision=revision, ~/huggingface/datasets/src/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1374 """ 1375 if download_config is None: -> 1376 download_config = DownloadConfig(**download_kwargs) 1377 download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS) 1378 download_config.extract_compressed_file = True TypeError: __init__() got an unexpected keyword argument 'use_auth_token' ```
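A minimal sketch of the call after this fix (assuming `datasets` >= 2.14, where `use_auth_token` is deprecated in favor of `token`; the deprecated keyword is expected to keep working with a warning, while `token` is the preferred spelling):

```python
# Minimal sketch; assumes datasets >= 2.14, where `use_auth_token` is
# deprecated in favor of `token`.
from datasets import DownloadConfig, get_dataset_config_names

config = DownloadConfig(token=False)  # preferred spelling of use_auth_token=False
configs = get_dataset_config_names("squad", token=False)  # no TypeError after the fix
```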
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6094/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6094", "html_url": "https://github.com/huggingface/datasets/pull/6094", "diff_url": "https://github.com/huggingface/datasets/pull/6094.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6094.patch", "merged_at": "2023-07-31T04:59:50" }
true
https://api.github.com/repos/huggingface/datasets/issues/6093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6093/comments
https://api.github.com/repos/huggingface/datasets/issues/6093/events
https://github.com/huggingface/datasets/pull/6093
1,826,210,490
PR_kwDODunzps5WpLfh
6,093
Deprecate `download_custom`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007498 / 0.011353 (-0.003855) | 0.004158 / 0.011008 (-0.006850) | 0.087568 / 0.038508 (0.049060) | 0.083265 / 0.023109 (0.060156) | 0.378505 / 0.275898 (0.102607) | 0.399025 / 0.323480 (0.075545) | 0.006173 / 0.007986 (-0.001813) | 0.003743 / 0.004328 (-0.000586) | 0.071958 / 0.004250 (0.067707) | 0.059323 / 0.037052 (0.022271) | 0.377084 / 0.258489 (0.118595) | 0.408358 / 0.293841 (0.114517) | 0.035191 / 0.128546 (-0.093356) | 0.009408 / 0.075646 (-0.066238) | 0.312587 / 0.419271 (-0.106685) | 0.058073 / 0.043533 (0.014540) | 0.381977 / 0.255139 (0.126838) | 0.395611 / 0.283200 (0.112411) | 0.024191 / 0.141683 (-0.117491) | 1.572735 / 1.452155 (0.120581) | 1.687186 / 1.492716 (0.194470) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208886 / 0.018006 (0.190879) | 0.474625 / 0.000490 (0.474135) | 0.006261 / 0.000200 (0.006061) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031401 / 0.037411 (-0.006011) | 0.086433 / 0.014526 (0.071907) | 0.108405 / 0.176557 (-0.068152) | 0.174564 / 0.737135 (-0.562571) | 0.099932 / 0.296338 (-0.196407) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407059 / 0.215209 (0.191850) | 4.102056 / 2.077655 (2.024401) | 
1.975397 / 1.504120 (0.471277) | 1.807117 / 1.541195 (0.265922) | 1.908667 / 1.468490 (0.440177) | 0.525880 / 4.584777 (-4.058897) | 3.899639 / 3.745712 (0.153927) | 4.358664 / 5.269862 (-0.911198) | 2.586185 / 4.565676 (-1.979492) | 0.061967 / 0.424275 (-0.362308) | 0.007656 / 0.007607 (0.000049) | 0.504851 / 0.226044 (0.278807) | 5.004429 / 2.268929 (2.735500) | 2.515540 / 55.444624 (-52.929084) | 2.183142 / 6.876477 (-4.693334) | 2.369835 / 2.142072 (0.227763) | 0.623527 / 4.805227 (-4.181700) | 0.145105 / 6.500664 (-6.355559) | 0.063924 / 0.075469 (-0.011546) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.472661 / 1.841788 (-0.369126) | 21.781655 / 8.074308 (13.707347) | 15.628820 / 10.191392 (5.437428) | 0.182342 / 0.680424 (-0.498082) | 0.021139 / 0.534201 (-0.513062) | 0.438610 / 0.579283 (-0.140673) | 0.451343 / 0.434364 (0.016979) | 0.563320 / 0.540337 (0.022983) | 0.740976 / 1.386936 (-0.645960) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007492 / 0.011353 (-0.003861) | 0.004429 / 0.011008 (-0.006579) | 0.068517 / 0.038508 (0.030008) | 0.078533 / 0.023109 (0.055424) | 0.383530 / 0.275898 (0.107632) | 0.435061 / 0.323480 (0.111581) | 0.005955 / 0.007986 (-0.002030) | 0.003645 / 0.004328 (-0.000683) | 0.068792 / 0.004250 (0.064541) | 0.062452 / 0.037052 (0.025399) | 0.408768 / 0.258489 (0.150279) | 0.438538 / 0.293841 (0.144697) | 0.032038 / 0.128546 (-0.096508) | 0.009196 / 0.075646 (-0.066450) | 0.074495 / 0.419271 (-0.344776) | 0.051322 / 0.043533 (0.007789) | 0.394458 / 0.255139 (0.139319) | 0.424763 / 0.283200 (0.141564) | 0.024890 / 0.141683 (-0.116793) | 1.568322 / 1.452155 (0.116167) | 1.703903 / 1.492716 (0.211187) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249630 / 0.018006 (0.231624) | 0.471412 / 0.000490 (0.470923) | 0.000435 / 0.000200 (0.000235) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033054 / 0.037411 (-0.004358) | 0.100150 / 0.014526 (0.085624) | 0.101704 / 0.176557 (-0.074853) | 0.164031 / 0.737135 (-0.573104) | 0.112497 / 0.296338 (-0.183841) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.487150 / 0.215209 (0.271941) | 4.662335 / 2.077655 (2.584681) | 2.477285 / 1.504120 (0.973165) | 2.294033 / 1.541195 (0.752838) | 2.380143 / 1.468490 (0.911653) | 0.519182 / 4.584777 (-4.065595) | 3.983589 / 3.745712 (0.237877) | 3.669895 / 5.269862 (-1.599967) | 2.267147 / 4.565676 (-2.298529) | 0.063300 / 0.424275 (-0.360975) | 0.008839 / 0.007607 (0.001232) | 0.566766 / 0.226044 (0.340721) | 5.533475 / 2.268929 (3.264546) | 3.033412 / 55.444624 (-52.411212) | 2.701793 / 6.876477 (-4.174684) | 2.899444 / 2.142072 (0.757372) | 0.614236 / 4.805227 (-4.190991) | 0.139533 / 6.500664 (-6.361131) | 0.067537 / 0.075469 (-0.007932) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.505572 / 1.841788 (-0.336216) | 22.859062 / 8.074308 (14.784754) | 15.044777 / 10.191392 (4.853385) | 0.169153 / 0.680424 (-0.511271) | 0.021027 / 0.534201 (-0.513174) | 0.447979 / 0.579283 (-0.131304) | 0.460676 / 0.434364 (0.026312) | 0.506327 / 0.540337 (-0.034010) | 0.737880 / 1.386936 (-0.649057) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db7180eb7e3ebf52b9d1f2c6629db6d92d8a29ba \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006118 / 0.011353 (-0.005235) | 0.003692 / 0.011008 (-0.007316) | 0.080606 / 0.038508 (0.042098) | 0.062014 / 0.023109 (0.038905) | 0.391886 / 0.275898 (0.115988) | 0.423978 / 0.323480 (0.100498) | 0.004968 / 0.007986 (-0.003017) | 0.002911 / 0.004328 (-0.001417) | 0.062867 / 0.004250 (0.058617) | 0.049493 / 0.037052 (0.012441) | 0.395656 / 0.258489 (0.137167) | 0.432406 / 0.293841 (0.138565) | 0.027242 / 0.128546 (-0.101304) | 0.007938 / 0.075646 (-0.067709) | 0.261703 / 0.419271 (-0.157569) | 0.045922 / 0.043533 (0.002389) | 0.391544 / 0.255139 (0.136405) | 0.417902 / 0.283200 (0.134703) | 0.021339 / 0.141683 (-0.120344) | 1.508391 / 1.452155 (0.056236) | 1.518970 / 1.492716 (0.026254) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.181159 / 0.018006 (0.163153) | 0.431402 / 0.000490 (0.430912) | 0.003849 / 0.000200 (0.003649) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024498 / 0.037411 (-0.012914) | 0.072758 / 0.014526 (0.058233) | 0.084910 / 0.176557 (-0.091646) | 0.148314 / 0.737135 (-0.588821) | 0.085212 / 0.296338 (-0.211126) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386693 / 0.215209 (0.171484) | 3.852652 / 2.077655 (1.774997) | 1.891758 / 1.504120 (0.387638) | 1.718793 / 1.541195 (0.177598) | 1.747595 / 1.468490 (0.279104) | 0.498593 / 4.584777 (-4.086184) | 3.057907 / 3.745712 (-0.687805) | 4.728449 / 5.269862 (-0.541413) | 2.966368 / 4.565676 (-1.599308) | 0.057538 / 0.424275 (-0.366737) | 0.006415 / 0.007607 (-0.001192) | 0.461652 / 0.226044 (0.235608) | 4.625944 / 2.268929 (2.357015) | 2.306938 / 55.444624 (-53.137686) | 1.974670 / 6.876477 (-4.901806) | 2.146327 / 2.142072 (0.004254) | 0.585033 / 4.805227 (-4.220195) | 0.125936 / 6.500664 (-6.374728) | 0.062365 / 0.075469 (-0.013104) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263415 / 1.841788 (-0.578373) | 18.380651 / 8.074308 (10.306343) | 13.853410 / 10.191392 (3.662018) | 0.144674 / 0.680424 (-0.535749) | 0.016833 / 0.534201 (-0.517368) | 0.330812 / 0.579283 (-0.248471) | 0.357553 / 0.434364 (-0.076810) | 0.383529 / 0.540337 
(-0.156809) | 0.558923 / 1.386936 (-0.828013) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006074 / 0.011353 (-0.005278) | 0.003655 / 0.011008 (-0.007353) | 0.062981 / 0.038508 (0.024473) | 0.061457 / 0.023109 (0.038348) | 0.366471 / 0.275898 (0.090573) | 0.408463 / 0.323480 (0.084983) | 0.004854 / 0.007986 (-0.003132) | 0.002916 / 0.004328 (-0.001412) | 0.062745 / 0.004250 (0.058494) | 0.051136 / 0.037052 (0.014084) | 0.380313 / 0.258489 (0.121824) | 0.416945 / 0.293841 (0.123104) | 0.027228 / 0.128546 (-0.101318) | 0.008031 / 0.075646 (-0.067615) | 0.067941 / 0.419271 (-0.351331) | 0.042886 / 0.043533 (-0.000647) | 0.370112 / 0.255139 (0.114973) | 0.397111 / 0.283200 (0.113911) | 0.023063 / 0.141683 (-0.118620) | 1.476955 / 1.452155 (0.024800) | 1.534783 / 1.492716 (0.042066) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231462 / 0.018006 (0.213456) | 0.439559 / 0.000490 (0.439069) | 0.000364 / 0.000200 (0.000164) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026925 / 0.037411 (-0.010486) | 0.079623 / 0.014526 (0.065097) | 0.088694 / 0.176557 (-0.087862) | 0.143163 / 0.737135 (-0.593972) | 0.089900 / 0.296338 (-0.206438) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451429 / 0.215209 (0.236220) | 4.510723 / 2.077655 (2.433069) | 2.491853 / 1.504120 (0.987733) | 2.334670 / 1.541195 (0.793475) | 2.395519 
/ 1.468490 (0.927029) | 0.501369 / 4.584777 (-4.083408) | 3.014019 / 3.745712 (-0.731693) | 2.809199 / 5.269862 (-2.460662) | 1.842195 / 4.565676 (-2.723481) | 0.057675 / 0.424275 (-0.366600) | 0.006742 / 0.007607 (-0.000865) | 0.524402 / 0.226044 (0.298358) | 5.245296 / 2.268929 (2.976367) | 2.957990 / 55.444624 (-52.486634) | 2.649807 / 6.876477 (-4.226670) | 2.755909 / 2.142072 (0.613836) | 0.589610 / 4.805227 (-4.215617) | 0.125708 / 6.500664 (-6.374956) | 0.062237 / 0.075469 (-0.013232) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.362758 / 1.841788 (-0.479030) | 18.343694 / 8.074308 (10.269386) | 13.621521 / 10.191392 (3.430129) | 0.128866 / 0.680424 (-0.551558) | 0.016608 / 0.534201 (-0.517593) | 0.333071 / 0.579283 (-0.246212) | 0.341917 / 0.434364 (-0.092447) | 0.381075 / 0.540337 (-0.159263) | 0.512485 / 1.386936 (-0.874451) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ab3f0165d4a2a8ab1aee1ebc4628893e17e27387 \"CML watermark\")\n", "I forgot to mention this in the initial comment, but only one public dataset (excluding gated) uses this method - `pg19`, which I just fixed.\r\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007838 / 0.011353 (-0.003515) | 0.004791 / 0.011008 (-0.006217) | 0.102596 / 0.038508 (0.064088) | 0.087678 / 0.023109 (0.064569) | 0.373858 / 0.275898 (0.097960) | 0.416643 / 0.323480 (0.093163) | 0.006147 / 0.007986 (-0.001839) | 0.003837 / 0.004328 (-0.000491) | 0.076706 / 0.004250 (0.072456) | 0.063449 / 0.037052 (0.026396) | 0.378392 / 0.258489 (0.119903) | 0.431768 / 0.293841 (0.137927) | 0.036648 / 0.128546 (-0.091898) | 0.010042 / 0.075646 (-0.065604) | 0.350277 / 0.419271 (-0.068995) | 0.062892 / 0.043533 (0.019359) | 0.376151 / 0.255139 (0.121012) | 0.420929 / 0.283200 (0.137729) | 0.027816 / 0.141683 (-0.113867) | 1.791607 / 1.452155 (0.339452) | 1.903045 / 1.492716 (0.410328) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row 
| get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224688 / 0.018006 (0.206682) | 0.491941 / 0.000490 (0.491451) | 0.004482 / 0.000200 (0.004282) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033495 / 0.037411 (-0.003917) | 0.099855 / 0.014526 (0.085329) | 0.114593 / 0.176557 (-0.061964) | 0.190947 / 0.737135 (-0.546189) | 0.116202 / 0.296338 (-0.180136) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488581 / 0.215209 (0.273372) | 4.869531 / 2.077655 (2.791876) | 2.527920 / 1.504120 (1.023800) | 2.340021 / 1.541195 (0.798826) | 2.432661 / 1.468490 (0.964171) | 0.569646 / 4.584777 (-4.015131) | 4.392036 / 3.745712 (0.646324) | 4.987253 / 5.269862 (-0.282608) | 2.866604 / 4.565676 (-1.699073) | 0.067393 / 0.424275 (-0.356882) | 0.008759 / 0.007607 (0.001152) | 0.584327 / 0.226044 (0.358283) | 5.853000 / 2.268929 (3.584072) | 3.206721 / 55.444624 (-52.237904) | 2.730867 / 6.876477 (-4.145610) | 2.944814 / 2.142072 (0.802742) | 0.703336 / 4.805227 (-4.101891) | 0.173985 / 6.500664 (-6.326679) | 0.075333 / 0.075469 (-0.000137) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.519755 / 1.841788 (-0.322033) | 22.918038 / 8.074308 (14.843730) | 17.211160 / 10.191392 (7.019768) | 0.196941 / 0.680424 (-0.483483) | 0.021833 / 0.534201 (-0.512368) | 0.476835 / 0.579283 (-0.102448) | 0.464513 / 0.434364 (0.030149) | 0.559180 / 0.540337 (0.018843) | 0.748232 / 1.386936 (-0.638704) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008461 / 0.011353 (-0.002892) | 0.004799 / 0.011008 (-0.006209) | 0.077466 / 0.038508 (0.038958) | 0.103562 / 0.023109 (0.080453) | 0.453661 / 0.275898 (0.177763) | 0.531126 / 0.323480 (0.207647) | 0.006618 / 0.007986 (-0.001367) | 0.004048 / 0.004328 (-0.000280) | 0.075446 / 0.004250 (0.071196) | 0.072815 / 0.037052 (0.035762) | 0.497145 / 0.258489 (0.238656) | 0.533828 / 0.293841 (0.239987) | 0.037657 / 0.128546 (-0.090890) | 0.010139 / 0.075646 (-0.065507) | 0.083759 / 0.419271 (-0.335512) | 0.061401 / 0.043533 (0.017868) | 0.441785 / 0.255139 (0.186646) | 0.491678 / 0.283200 (0.208479) | 0.033100 / 0.141683 (-0.108583) | 1.753612 / 1.452155 (0.301458) | 1.838956 / 1.492716 (0.346240) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.395023 / 0.018006 (0.377017) | 0.509362 / 0.000490 (0.508872) | 0.060742 / 0.000200 (0.060542) | 0.000545 / 0.000054 (0.000491) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.039327 / 0.037411 (0.001916) | 0.117345 / 0.014526 (0.102819) | 0.124540 / 0.176557 (-0.052017) | 0.200743 / 0.737135 (-0.536392) | 0.126750 / 0.296338 (-0.169589) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.488597 / 0.215209 (0.273388) | 4.875534 / 2.077655 (2.797880) | 2.714364 / 1.504120 (1.210244) | 2.603707 / 1.541195 (1.062513) | 2.733547 / 1.468490 (1.265057) | 0.575183 / 4.584777 (-4.009594) | 4.126096 / 3.745712 (0.380384) | 3.853803 / 5.269862 (-1.416058) | 2.395160 / 4.565676 (-2.170516) | 0.067391 / 0.424275 (-0.356884) | 0.009108 / 0.007607 (0.001501) | 0.585865 / 0.226044 (0.359820) | 5.864878 / 2.268929 (3.595949) | 3.153369 / 55.444624 (-52.291256) | 2.759064 / 6.876477 (-4.117413) | 3.032489 / 2.142072 (0.890416) | 0.702615 / 4.805227 (-4.102613) | 0.160034 / 6.500664 (-6.340630) | 0.077294 / 0.075469 (0.001825) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.595069 / 1.841788 (-0.246719) | 23.231191 / 8.074308 (15.156883) | 16.365137 / 10.191392 (6.173745) | 0.188360 / 0.680424 (-0.492063) | 0.021704 / 0.534201 (-0.512497) | 0.469996 / 0.579283 (-0.109287) | 0.463255 / 
0.434364 (0.028891) | 0.560506 / 0.540337 (0.020169) | 0.751006 / 1.386936 (-0.635930) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#50d9a70c666ff46ff9974c47cedc77d9f88d6471 \"CML watermark\")\n", "@mariosasko How would you stream a split zip file with just [download_and_extract or download](https://github.com/huggingface/datasets/blob/main/src/datasets/download/download_manager.py#L353)? With download_custom, it is possible to combine a split zip file. Perhaps add an option in [download](https://huggingface.co./docs/datasets/v2.2.1/en/package_reference/builder_classes#datasets.DownloadManager.download) to combine split zips. This issue may apply to other multipart file-types.\r\n\r\nEdit - \r\nIn case asked why I use split zips, I haven't been able to upload zips larger than 50 GB to HuggingFace.\r\n\r\nEdit2 -\r\nIssue is [tackled](https://discuss.huggingface.co/t/download-custom-method-of-streamingdownloadmanager-not-implemented/28298/8) for split zips. " ]
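As an illustration of the workaround discussed in the comment above, here is a minimal sketch (the part URLs are hypothetical) that recombines split-zip parts by downloading each part with `fsspec` and concatenating the bytes; it is not a streaming solution, just a straightforward way to rebuild the archive:

```python
import fsspec

# Hypothetical URLs of the split-zip parts, listed in order.
parts = [
    "https://example.com/data.zip.001",
    "https://example.com/data.zip.002",
]

chunks = []
for url in parts:
    with fsspec.open(url, "rb") as f:  # fetch one part
        chunks.append(f.read())

archive_bytes = b"".join(chunks)  # recombined zip archive
with open("data.zip", "wb") as out:
    out.write(archive_bytes)
```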
2023-07-28T10:49:06
2023-08-21T17:51:34
2023-07-28T11:30:02
CONTRIBUTOR
null
Deprecate `DownloadManager.download_custom`. Users should use `fsspec` URLs (cacheable) or make direct requests with `fsspec`/`requests` (not cacheable) instead. This method is not compatible with streaming, and implementing a streaming version of it is hard, if not impossible. There have been requests on the forum to implement a streaming version, but these seem to stem from a tip in the docs that "promotes" this method (this PR removes that tip).
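A minimal sketch of the suggested replacement (the URL is hypothetical): instead of `download_custom`, open the resource directly with `fsspec`:

```python
import fsspec

url = "https://example.com/data.bin"  # hypothetical resource
with fsspec.open(url, "rb") as f:
    data = f.read()
```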
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6093/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6093", "html_url": "https://github.com/huggingface/datasets/pull/6093", "diff_url": "https://github.com/huggingface/datasets/pull/6093.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6093.patch", "merged_at": "2023-07-28T11:30:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/6092
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6092/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6092/comments
https://api.github.com/repos/huggingface/datasets/issues/6092/events
https://github.com/huggingface/datasets/pull/6092
1,826,111,806
PR_kwDODunzps5Wo1mh
6,092
Minor fix in `iter_files` for hidden files
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007873 / 0.011353 (-0.003480) | 0.004585 / 0.011008 (-0.006423) | 0.101622 / 0.038508 (0.063114) | 0.092459 / 0.023109 (0.069350) | 0.365157 / 0.275898 (0.089259) | 0.405943 / 0.323480 (0.082463) | 0.006229 / 0.007986 (-0.001756) | 0.003811 / 0.004328 (-0.000518) | 0.073831 / 0.004250 (0.069580) | 0.065097 / 0.037052 (0.028045) | 0.378912 / 0.258489 (0.120423) | 0.422174 / 0.293841 (0.128333) | 0.036244 / 0.128546 (-0.092302) | 0.009677 / 0.075646 (-0.065970) | 0.345164 / 0.419271 (-0.074107) | 0.061632 / 0.043533 (0.018099) | 0.370350 / 0.255139 (0.115211) | 0.418245 / 0.283200 (0.135046) | 0.027272 / 0.141683 (-0.114411) | 1.774047 / 1.452155 (0.321892) | 1.880278 / 1.492716 (0.387562) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217238 / 0.018006 (0.199231) | 0.489560 / 0.000490 (0.489071) | 0.004013 / 0.000200 (0.003813) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034139 / 0.037411 (-0.003272) | 0.103831 / 0.014526 (0.089305) | 0.114353 / 0.176557 (-0.062204) | 0.182034 / 0.737135 (-0.555102) | 0.116171 / 0.296338 (-0.180168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448658 / 0.215209 (0.233449) | 4.520849 / 2.077655 (2.443195) | 
2.216121 / 1.504120 (0.712001) | 2.034596 / 1.541195 (0.493402) | 2.193216 / 1.468490 (0.724725) | 0.568166 / 4.584777 (-4.016611) | 4.133587 / 3.745712 (0.387875) | 4.641117 / 5.269862 (-0.628744) | 2.772913 / 4.565676 (-1.792764) | 0.067664 / 0.424275 (-0.356611) | 0.008719 / 0.007607 (0.001112) | 0.547723 / 0.226044 (0.321678) | 5.438325 / 2.268929 (3.169397) | 2.877667 / 55.444624 (-52.566958) | 2.477503 / 6.876477 (-4.398974) | 2.688209 / 2.142072 (0.546136) | 0.692593 / 4.805227 (-4.112634) | 0.154549 / 6.500664 (-6.346115) | 0.073286 / 0.075469 (-0.002183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.610927 / 1.841788 (-0.230861) | 23.413345 / 8.074308 (15.339037) | 16.851819 / 10.191392 (6.660427) | 0.170076 / 0.680424 (-0.510348) | 0.021428 / 0.534201 (-0.512773) | 0.468184 / 0.579283 (-0.111099) | 0.491820 / 0.434364 (0.057456) | 0.553453 / 0.540337 (0.013115) | 0.762303 / 1.386936 (-0.624633) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008033 / 0.011353 (-0.003320) | 0.004638 / 0.011008 (-0.006370) | 0.077044 / 0.038508 (0.038536) | 0.096529 / 0.023109 (0.073420) | 0.428735 / 0.275898 (0.152837) | 0.477303 / 0.323480 (0.153823) | 0.006040 / 0.007986 (-0.001946) | 0.003808 / 0.004328 (-0.000521) | 0.076042 / 0.004250 (0.071791) | 0.066123 / 0.037052 (0.029071) | 0.445482 / 0.258489 (0.186993) | 0.481350 / 0.293841 (0.187509) | 0.036951 / 0.128546 (-0.091595) | 0.009944 / 0.075646 (-0.065703) | 0.082731 / 0.419271 (-0.336541) | 0.057490 / 0.043533 (0.013958) | 0.432668 / 0.255139 (0.177529) | 0.461146 / 0.283200 (0.177947) | 0.027330 / 0.141683 (-0.114353) | 1.784195 / 1.452155 (0.332040) | 1.834776 / 1.492716 (0.342059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254104 / 0.018006 (0.236097) | 0.475810 / 0.000490 (0.475321) | 0.000459 / 0.000200 (0.000259) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037058 / 0.037411 (-0.000353) | 0.114962 / 0.014526 (0.100436) | 0.123725 / 0.176557 (-0.052832) | 0.188885 / 0.737135 (-0.548251) | 0.125668 / 0.296338 (-0.170670) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.492627 / 0.215209 (0.277418) | 4.900625 / 2.077655 (2.822970) | 2.546349 / 1.504120 (1.042229) | 2.360350 / 1.541195 (0.819155) | 2.477975 / 1.468490 (1.009485) | 0.574042 / 4.584777 (-4.010735) | 4.408414 / 3.745712 (0.662702) | 3.836640 / 5.269862 (-1.433222) | 2.438450 / 4.565676 (-2.127227) | 0.067706 / 0.424275 (-0.356569) | 0.009165 / 0.007607 (0.001558) | 0.580313 / 0.226044 (0.354269) | 5.798211 / 2.268929 (3.529283) | 3.098480 / 55.444624 (-52.346145) | 2.740180 / 6.876477 (-4.136296) | 2.984548 / 2.142072 (0.842476) | 0.702550 / 4.805227 (-4.102677) | 0.158248 / 6.500664 (-6.342416) | 0.073999 / 0.075469 (-0.001470) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.636034 / 1.841788 (-0.205754) | 24.068000 / 8.074308 (15.993692) | 17.123987 / 10.191392 (6.932595) | 0.210101 / 0.680424 (-0.470323) | 0.022555 / 0.534201 (-0.511646) | 0.509354 / 0.579283 (-0.069929) | 0.540739 / 0.434364 (0.106375) | 0.546048 / 0.540337 (0.005711) | 0.719155 / 1.386936 (-0.667781) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#40530382ba98f54445de8820943b1236d4a4704f \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007342 / 0.011353 (-0.004010) | 0.004579 / 0.011008 (-0.006429) | 0.087050 / 0.038508 (0.048542) | 0.089001 / 0.023109 (0.065892) | 0.307319 / 0.275898 (0.031421) | 0.377573 / 0.323480 (0.054093) | 0.006472 / 0.007986 (-0.001514) | 0.004287 / 0.004328 (-0.000041) | 0.067226 / 0.004250 (0.062976) | 0.063147 / 0.037052 (0.026094) | 0.314541 / 0.258489 (0.056052) | 0.369919 / 0.293841 (0.076078) | 0.031283 / 0.128546 (-0.097263) | 0.009175 / 0.075646 (-0.066471) | 0.289211 / 0.419271 (-0.130061) | 0.053444 / 0.043533 (0.009911) | 0.307308 / 0.255139 (0.052169) | 0.346221 / 0.283200 (0.063021) | 0.027948 / 0.141683 (-0.113735) | 1.475177 / 1.452155 (0.023022) | 1.575971 / 1.492716 (0.083255) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.291092 / 0.018006 (0.273086) | 0.696951 / 0.000490 (0.696461) | 0.005211 / 0.000200 (0.005011) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031787 / 0.037411 (-0.005625) | 0.084382 / 0.014526 (0.069857) | 0.106474 / 0.176557 (-0.070083) | 0.161472 / 0.737135 (-0.575663) | 0.108650 / 0.296338 (-0.187688) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.379656 / 0.215209 (0.164447) | 3.784072 / 2.077655 (1.706417) | 1.826580 / 1.504120 (0.322460) | 1.654916 / 1.541195 (0.113721) | 1.730698 / 1.468490 (0.262208) | 0.478003 / 4.584777 (-4.106774) | 3.564920 / 3.745712 (-0.180792) | 5.824873 / 5.269862 (0.555012) | 3.454563 / 4.565676 (-1.111113) | 0.056646 / 0.424275 (-0.367629) | 0.007410 / 0.007607 (-0.000197) | 0.461781 / 0.226044 (0.235737) | 4.600928 / 2.268929 (2.331999) | 2.351887 / 55.444624 (-53.092738) | 1.986470 / 6.876477 (-4.890007) | 2.311623 / 2.142072 (0.169551) | 0.571247 / 4.805227 (-4.233980) | 0.132191 / 6.500664 (-6.368473) | 0.059943 / 0.075469 (-0.015526) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253142 / 1.841788 (-0.588646) | 21.294983 / 8.074308 (13.220675) | 14.522429 / 10.191392 (4.331037) | 0.166663 / 0.680424 (-0.513761) | 0.019694 / 0.534201 (-0.514507) | 0.395908 / 0.579283 (-0.183375) | 0.413283 / 0.434364 (-0.021081) | 0.457739 / 0.540337 
(-0.082599) | 0.664361 / 1.386936 (-0.722575) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007228 / 0.011353 (-0.004124) | 0.004941 / 0.011008 (-0.006067) | 0.065381 / 0.038508 (0.026873) | 0.090790 / 0.023109 (0.067681) | 0.391315 / 0.275898 (0.115417) | 0.416518 / 0.323480 (0.093038) | 0.007015 / 0.007986 (-0.000970) | 0.004417 / 0.004328 (0.000089) | 0.067235 / 0.004250 (0.062985) | 0.068092 / 0.037052 (0.031039) | 0.403031 / 0.258489 (0.144542) | 0.434013 / 0.293841 (0.140172) | 0.032004 / 0.128546 (-0.096542) | 0.009242 / 0.075646 (-0.066404) | 0.071222 / 0.419271 (-0.348050) | 0.054207 / 0.043533 (0.010674) | 0.386198 / 0.255139 (0.131059) | 0.404350 / 0.283200 (0.121150) | 0.036284 / 0.141683 (-0.105399) | 1.488814 / 1.452155 (0.036660) | 1.587785 / 1.492716 (0.095069) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313760 / 0.018006 (0.295754) | 0.747778 / 0.000490 (0.747289) | 0.003307 / 0.000200 (0.003107) | 0.000113 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034321 / 0.037411 (-0.003090) | 0.088266 / 0.014526 (0.073740) | 0.112874 / 0.176557 (-0.063682) | 0.171554 / 0.737135 (-0.565581) | 0.111356 / 0.296338 (-0.184982) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422624 / 0.215209 (0.207415) | 4.212079 / 2.077655 (2.134425) | 2.242742 / 1.504120 (0.738622) | 2.072555 / 1.541195 (0.531360) | 2.192648 / 
1.468490 (0.724158) | 0.488214 / 4.584777 (-4.096563) | 3.597013 / 3.745712 (-0.148699) | 3.477556 / 5.269862 (-1.792305) | 2.184340 / 4.565676 (-2.381337) | 0.057170 / 0.424275 (-0.367105) | 0.007772 / 0.007607 (0.000165) | 0.499455 / 0.226044 (0.273411) | 4.988953 / 2.268929 (2.720024) | 2.797894 / 55.444624 (-52.646731) | 2.402215 / 6.876477 (-4.474262) | 2.725069 / 2.142072 (0.582997) | 0.596213 / 4.805227 (-4.209014) | 0.136564 / 6.500664 (-6.364100) | 0.061799 / 0.075469 (-0.013670) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.360739 / 1.841788 (-0.481049) | 21.846457 / 8.074308 (13.772149) | 14.568842 / 10.191392 (4.377450) | 0.168980 / 0.680424 (-0.511444) | 0.018795 / 0.534201 (-0.515406) | 0.396173 / 0.579283 (-0.183110) | 0.418651 / 0.434364 (-0.015713) | 0.480042 / 0.540337 (-0.060295) | 0.650803 / 1.386936 (-0.736133) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b7d460304487d4daab0a64ca0ca707e896367ca1 \"CML watermark\")\n" ]
2023-07-28T09:50:12
2023-07-28T10:59:28
2023-07-28T10:50:10
CONTRIBUTOR
null
Fix #6090
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6092/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6092/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6092", "html_url": "https://github.com/huggingface/datasets/pull/6092", "diff_url": "https://github.com/huggingface/datasets/pull/6092.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6092.patch", "merged_at": "2023-07-28T10:50:09" }
true
https://api.github.com/repos/huggingface/datasets/issues/6091
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6091/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6091/comments
https://api.github.com/repos/huggingface/datasets/issues/6091/events
https://github.com/huggingface/datasets/pull/6091
1,826,086,487
PR_kwDODunzps5Wov9Q
6,091
Bump fsspec from 2021.11.1 to 2022.3.0
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006640 / 0.011353 (-0.004713) | 0.004077 / 0.011008 (-0.006931) | 0.084905 / 0.038508 (0.046397) | 0.074004 / 0.023109 (0.050895) | 0.315968 / 0.275898 (0.040070) | 0.351594 / 0.323480 (0.028114) | 0.005623 / 0.007986 (-0.002362) | 0.003476 / 0.004328 (-0.000852) | 0.065089 / 0.004250 (0.060839) | 0.054683 / 0.037052 (0.017631) | 0.314983 / 0.258489 (0.056494) | 0.371776 / 0.293841 (0.077935) | 0.031727 / 0.128546 (-0.096819) | 0.008786 / 0.075646 (-0.066860) | 0.289905 / 0.419271 (-0.129367) | 0.053340 / 0.043533 (0.009807) | 0.311802 / 0.255139 (0.056663) | 0.351927 / 0.283200 (0.068727) | 0.024453 / 0.141683 (-0.117229) | 1.491727 / 1.452155 (0.039572) | 1.585027 / 1.492716 (0.092310) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238757 / 0.018006 (0.220750) | 0.557691 / 0.000490 (0.557202) | 0.005158 / 0.000200 (0.004958) | 0.000204 / 0.000054 (0.000149) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028435 / 0.037411 (-0.008977) | 0.082219 / 0.014526 (0.067693) | 0.096932 / 0.176557 (-0.079625) | 0.153802 / 0.737135 (-0.583333) | 0.098338 / 0.296338 (-0.198001) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.383448 / 0.215209 (0.168238) | 3.816074 / 2.077655 (1.738420) | 
1.835111 / 1.504120 (0.330991) | 1.662326 / 1.541195 (0.121131) | 1.720202 / 1.468490 (0.251712) | 0.483107 / 4.584777 (-4.101669) | 3.648528 / 3.745712 (-0.097184) | 4.020929 / 5.269862 (-1.248932) | 2.433141 / 4.565676 (-2.132536) | 0.057081 / 0.424275 (-0.367194) | 0.007303 / 0.007607 (-0.000304) | 0.461366 / 0.226044 (0.235322) | 4.609090 / 2.268929 (2.340162) | 2.355940 / 55.444624 (-53.088684) | 1.989833 / 6.876477 (-4.886644) | 2.201451 / 2.142072 (0.059378) | 0.586156 / 4.805227 (-4.219071) | 0.133486 / 6.500664 (-6.367178) | 0.060062 / 0.075469 (-0.015407) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.247845 / 1.841788 (-0.593942) | 19.624252 / 8.074308 (11.549944) | 14.305975 / 10.191392 (4.114583) | 0.168687 / 0.680424 (-0.511737) | 0.018075 / 0.534201 (-0.516126) | 0.393859 / 0.579283 (-0.185424) | 0.407272 / 0.434364 (-0.027092) | 0.463760 / 0.540337 (-0.076578) | 0.629930 / 1.386936 (-0.757006) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006760 / 0.011353 (-0.004593) | 0.004345 / 0.011008 (-0.006663) | 0.064379 / 0.038508 (0.025871) | 0.078295 / 0.023109 (0.055186) | 0.364532 / 0.275898 (0.088633) | 0.395852 / 0.323480 (0.072372) | 0.005659 / 0.007986 (-0.002327) | 0.003515 / 0.004328 (-0.000813) | 0.065030 / 0.004250 (0.060780) | 0.059950 / 0.037052 (0.022898) | 0.375420 / 0.258489 (0.116931) | 0.411579 / 0.293841 (0.117738) | 0.031575 / 0.128546 (-0.096972) | 0.008737 / 0.075646 (-0.066910) | 0.070350 / 0.419271 (-0.348922) | 0.050607 / 0.043533 (0.007075) | 0.359785 / 0.255139 (0.104646) | 0.382638 / 0.283200 (0.099438) | 0.025533 / 0.141683 (-0.116150) | 1.564379 / 1.452155 (0.112225) | 1.620642 / 1.492716 (0.127925) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212779 / 0.018006 (0.194773) | 0.563827 / 0.000490 (0.563337) | 0.003767 / 0.000200 (0.003567) | 0.000103 / 0.000054 (0.000049) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030275 / 0.037411 (-0.007136) | 0.088108 / 0.014526 (0.073582) | 0.102454 / 0.176557 (-0.074103) | 0.156107 / 0.737135 (-0.581028) | 0.103961 / 0.296338 (-0.192378) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.421395 / 0.215209 (0.206186) | 4.204935 / 2.077655 (2.127280) | 2.144929 / 1.504120 (0.640809) | 1.999341 / 1.541195 (0.458147) | 2.066966 / 1.468490 (0.598476) | 0.486135 / 4.584777 (-4.098642) | 3.628139 / 3.745712 (-0.117573) | 5.652683 / 5.269862 (0.382821) | 3.216721 / 4.565676 (-1.348956) | 0.057513 / 0.424275 (-0.366762) | 0.007553 / 0.007607 (-0.000055) | 0.494470 / 0.226044 (0.268426) | 4.949343 / 2.268929 (2.680414) | 2.654222 / 55.444624 (-52.790402) | 2.322257 / 6.876477 (-4.554220) | 2.555633 / 2.142072 (0.413561) | 0.588355 / 4.805227 (-4.216872) | 0.134481 / 6.500664 (-6.366183) | 0.062415 / 0.075469 (-0.013054) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.377578 / 1.841788 (-0.464209) | 19.805201 / 8.074308 (11.730893) | 14.128536 / 10.191392 (3.937144) | 0.164343 / 0.680424 (-0.516081) | 0.018553 / 0.534201 (-0.515648) | 0.398191 / 0.579283 (-0.181093) | 0.414268 / 0.434364 (-0.020096) | 0.462270 / 0.540337 (-0.078068) | 0.608497 / 1.386936 (-0.778439) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3af05ba487f361fae90a4c80af72de5c4ed70162 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006966 / 0.011353 (-0.004387) | 0.004339 / 0.011008 (-0.006669) | 0.086682 / 0.038508 (0.048174) | 0.086143 / 0.023109 (0.063034) | 0.316106 / 0.275898 (0.040208) | 0.351422 / 0.323480 (0.027942) | 0.005916 / 0.007986 (-0.002069) | 0.003630 / 0.004328 (-0.000698) | 0.066980 / 0.004250 (0.062730) | 0.060031 / 0.037052 (0.022979) | 0.317487 / 0.258489 (0.058998) | 0.356280 / 0.293841 (0.062439) | 0.031816 / 0.128546 (-0.096730) | 0.008797 / 0.075646 (-0.066849) | 0.289848 / 0.419271 (-0.129424) | 0.055431 / 0.043533 (0.011898) | 0.318881 / 0.255139 (0.063742) | 0.332315 / 0.283200 (0.049116) | 0.025946 / 0.141683 (-0.115737) | 1.472904 / 1.452155 (0.020749) | 1.577973 / 1.492716 (0.085257) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239056 / 0.018006 (0.221050) | 0.565406 / 0.000490 (0.564917) | 0.003606 / 0.000200 (0.003406) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029771 / 0.037411 (-0.007640) | 0.085534 / 0.014526 (0.071008) | 0.107008 / 0.176557 (-0.069548) | 0.631583 / 0.737135 (-0.105552) | 0.104210 / 0.296338 (-0.192128) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390675 / 0.215209 (0.175466) | 3.898746 / 2.077655 (1.821091) | 1.933048 / 1.504120 (0.428928) | 1.792162 / 1.541195 (0.250967) | 1.958045 / 1.468490 (0.489555) | 0.488632 / 4.584777 (-4.096144) | 3.696306 / 3.745712 (-0.049406) | 3.454600 / 5.269862 (-1.815262) | 2.176292 / 4.565676 (-2.389385) | 0.057617 / 0.424275 (-0.366658) | 0.007603 / 0.007607 (-0.000004) | 0.467843 / 0.226044 (0.241798) | 4.672928 / 2.268929 (2.404000) | 2.441096 / 55.444624 (-53.003529) | 2.133506 / 6.876477 (-4.742970) | 2.431167 / 2.142072 (0.289095) | 0.588567 / 4.805227 (-4.216661) | 0.136070 / 6.500664 (-6.364594) | 0.063395 / 0.075469 (-0.012074) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255003 / 1.841788 (-0.586784) | 20.587656 / 8.074308 (12.513348) | 15.147817 / 10.191392 (4.956425) | 0.152039 / 0.680424 (-0.528384) | 0.018815 / 0.534201 (-0.515386) | 0.397458 / 0.579283 (-0.181825) | 0.431433 / 0.434364 (-0.002931) | 0.487890 / 0.540337 
(-0.052448) | 0.675367 / 1.386936 (-0.711569) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007209 / 0.011353 (-0.004144) | 0.004372 / 0.011008 (-0.006636) | 0.066288 / 0.038508 (0.027780) | 0.091776 / 0.023109 (0.068667) | 0.390724 / 0.275898 (0.114826) | 0.434711 / 0.323480 (0.111231) | 0.005790 / 0.007986 (-0.002196) | 0.003562 / 0.004328 (-0.000767) | 0.066155 / 0.004250 (0.061904) | 0.062459 / 0.037052 (0.025406) | 0.406622 / 0.258489 (0.148133) | 0.433976 / 0.293841 (0.140135) | 0.032590 / 0.128546 (-0.095957) | 0.008856 / 0.075646 (-0.066790) | 0.072327 / 0.419271 (-0.346945) | 0.049958 / 0.043533 (0.006426) | 0.400164 / 0.255139 (0.145025) | 0.413339 / 0.283200 (0.130139) | 0.025283 / 0.141683 (-0.116399) | 1.487668 / 1.452155 (0.035514) | 1.537679 / 1.492716 (0.044962) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257814 / 0.018006 (0.239808) | 0.571741 / 0.000490 (0.571251) | 0.000412 / 0.000200 (0.000212) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033893 / 0.037411 (-0.003518) | 0.094533 / 0.014526 (0.080008) | 0.105876 / 0.176557 (-0.070680) | 0.158675 / 0.737135 (-0.578460) | 0.107790 / 0.296338 (-0.188548) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425796 / 0.215209 (0.210587) | 4.229159 / 2.077655 (2.151505) | 2.239613 / 1.504120 (0.735493) | 2.073830 / 1.541195 (0.532635) | 2.185508 
/ 1.468490 (0.717018) | 0.483984 / 4.584777 (-4.100793) | 3.645575 / 3.745712 (-0.100137) | 3.454767 / 5.269862 (-1.815095) | 2.141387 / 4.565676 (-2.424290) | 0.057570 / 0.424275 (-0.366705) | 0.007901 / 0.007607 (0.000294) | 0.501160 / 0.226044 (0.275116) | 5.012283 / 2.268929 (2.743355) | 2.701267 / 55.444624 (-52.743357) | 2.465409 / 6.876477 (-4.411068) | 2.696812 / 2.142072 (0.554739) | 0.587160 / 4.805227 (-4.218067) | 0.134175 / 6.500664 (-6.366489) | 0.062028 / 0.075469 (-0.013441) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.345632 / 1.841788 (-0.496155) | 21.077279 / 8.074308 (13.002971) | 14.700826 / 10.191392 (4.509434) | 0.156191 / 0.680424 (-0.524233) | 0.018991 / 0.534201 (-0.515210) | 0.400413 / 0.579283 (-0.178870) | 0.420597 / 0.434364 (-0.013767) | 0.486534 / 0.540337 (-0.053804) | 0.646606 / 1.386936 (-0.740330) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5bb8fabb135ca8adf47151ad3de050e3a258ccab \"CML watermark\")\n" ]
2023-07-28T09:37:15
2023-07-28T10:16:11
2023-07-28T10:07:02
CONTRIBUTOR
null
Fix https://github.com/huggingface/datasets/issues/6087 (Colab installs 2023.6.0, so we should be good)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/6091/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/6091/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/6091", "html_url": "https://github.com/huggingface/datasets/pull/6091", "diff_url": "https://github.com/huggingface/datasets/pull/6091.diff", "patch_url": "https://github.com/huggingface/datasets/pull/6091.patch", "merged_at": "2023-07-28T10:07:02" }
true