parquet-converter committed
Commit 89f8e24 · 1 Parent(s): f56d96d

Update parquet files
README.md DELETED
@@ -1,164 +0,0 @@
- ---
- license: apache-2.0
- task_categories:
- - image-to-text
- - question-answering
- - zero-shot-classification
- language:
- - en
- multilinguality:
- - monolingual
- task_ids:
- - text-scoring
- pretty_name: HL (High-Level Dataset)
- size_categories:
- - 10K<n<100K
- annotations_creators:
- - crowdsurced
- annotations_origin:
- - crowdsurced
- dataset_info:
-   features:
-   - name: file_name
-     dtype: string
-   captions:
-   - name: scene
-     sequence:
-       dtype: string
-   - name: action
-     sequence:
-       dtype: string
-   - name: rationale
-     sequence:
-       dtype: string
-   - name: object
-     sequence:
-       dtype: string
-   splits:
-   - name: train
-     num_examples: 13498
-   - name: test
-     num_examples: 1499
- ---
- # Dataset Card for the High-Level Dataset
-
- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks](#supported-tasks)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:**
- - **Repository:
- - **Paper:**
- - **Point of Contact:**
-
- ### Dataset Summary
-
- [More Information Needed]
-
- ### Supported Tasks
-
- ### Languages
-
- English
-
- ## Dataset Structure
-
- [More Information Needed]
-
- ### Data Instances
-
- [More Information Needed]
-
- ### Data Fields
-
- [More Information Needed]
-
- ### Data Splits
-
- [More Information Needed]
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- [More Information Needed]
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]
-
- ### Annotations
-
- [More Information Needed]
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- [More Information Needed]
-
- ### Citation Information
-
- ```
- ```
data/annotations/test.jsonl DELETED
The diff for this file is too large to render. See raw diff
 
data/annotations/train.jsonl → default/hl-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c4027591fa99abdd8b7786dcae7999ac7b78be89f9f010234cfd68946f6a4e34
- size 15165087
+ oid sha256:e9cac32227680d47f032fa1e2a055f402b5f03af7d2b3e3ce156b7286c0acbb3
+ size 246309674
data/images.tar.gz → default/hl-train-00000-of-00004.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e077862371637ebbc821466e6e3df3f77ea5ee3a75c0968eddd08f4a7adcfe8c
- size 2439435515
+ oid sha256:eb34b158f7e9b5873b672ad632f7ed0f70b765c3234f04bbb567e4031f8e72e2
+ size 659871158
default/hl-train-00001-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c3d45bd5513a347010c19e5bf3145c884791414235a08dfda693491a0fbf7e4
+ size 650392195
default/hl-train-00002-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d4dffec44058aa74f1d38c68b461482d1fd7d7e4063e746e69c6363f431525e
+ size 652455208
default/hl-train-00003-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc5eefba2d5a37a3f7757dff02024440820a2d25a3a1b2e60a64656b400a7129
+ size 245980507
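Each parquet file above is stored as a Git LFS pointer: a three-line text stub (version, oid, size) that stands in for the actual blob. A minimal sketch of reading that format; the `parse_lfs_pointer` helper is hypothetical, not part of this repository:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a Git LFS pointer into a dict.

    Hypothetical helper for illustration; handles only the simple
    three-line pointer format shown in the diffs above.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# Pointer contents copied from the hl-train-00001 hunk above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:1c3d45bd5513a347010c19e5bf3145c884791414235a08dfda693491a0fbf7e4
size 650392195
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 650392195
```

Tools that resolve LFS pointers use the `oid` to fetch the real object from the LFS server and the `size` to verify it.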
hl.py DELETED
@@ -1,131 +0,0 @@
- # coding=utf-8
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """High-Level dataset."""
-
-
- import json
- from pathlib import Path
-
- import datasets
-
-
- _CITATION = """\
- @misc{}
- """
-
- _DESCRIPTION = """\
- High-level Dataset
- """
-
- # github link
- _HOMEPAGE = ""
-
- _LICENSE = "Apache 2.0"
-
- _IMG = "https://huggingface.co/datasets/michelecafagna26/hl/resolve/main/data/images.tar.gz"
- _TRAIN = "https://huggingface.co/datasets/michelecafagna26/hl/resolve/main/data/annotations/train.jsonl"
- _TEST = "https://huggingface.co/datasets/michelecafagna26/hl/resolve/main/data/annotations/test.jsonl"
-
-
-
- class HL(datasets.GeneratorBasedBuilder):
-     """High Level Dataset."""
-
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "file_name": datasets.Value("string"),
-                 "image": datasets.Image(),
-                 "scene": datasets.Sequence(datasets.Value("string")),
-                 "action": datasets.Sequence(datasets.Value("string")),
-                 "rationale": datasets.Sequence(datasets.Value("string")),
-                 "object": datasets.Sequence(datasets.Value("string")),
-                 # "captions": {
-                 #     "scene": datasets.Sequence(datasets.Value("string")),
-                 #     "action": datasets.Sequence(datasets.Value("string")),
-                 #     "rationale": datasets.Sequence(datasets.Value("string")),
-                 #     "object": datasets.Sequence(datasets.Value("string")),
-                 # },
-                 "confidence": {
-                     "scene": datasets.Sequence(datasets.Value("float32")),
-                     "action": datasets.Sequence(datasets.Value("float32")),
-                     "rationale": datasets.Sequence(datasets.Value("float32")),
-                 }
-                 # "purity": {
-                 #     "scene": datasets.Sequence(datasets.Value("float32")),
-                 #     "action": datasets.Sequence(datasets.Value("float32")),
-                 #     "rationale": datasets.Sequence(datasets.Value("float32")),
-                 # },
-                 # "diversity": {
-                 #     "scene": datasets.Value("float32"),
-                 #     "action": datasets.Value("float32"),
-                 #     "rationale": datasets.Value("float32"),
-                 # },
-             }
-         )
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         image_files = dl_manager.download(_IMG)
-         annotation_files = dl_manager.download_and_extract([_TRAIN, _TEST])
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "annotation_file_path": annotation_files[0],
-                     "images": dl_manager.iter_archive(image_files),
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "annotation_file_path": annotation_files[1],
-                     "images": dl_manager.iter_archive(image_files),
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, annotation_file_path, images):
-
-         idx = 0
-
-         # assert Path(annotation_file_path).suffix == ".jsonl"
-
-         with open(annotation_file_path, "r") as fp:
-             metadata = {json.loads(item)['file_name']: json.loads(item) for item in fp}
-
-         # This loop relies on the ordering of the files in the archive:
-         # Annotation files come first, then the images.
-         for img_file_path, img_obj in images:
-
-             file_name = Path(img_file_path).name
-
-             if file_name in metadata:
-                 yield idx, {
-                     "file_name": file_name,
-                     "image": {"path": img_file_path, "bytes": img_obj.read()},
-                     "scene": metadata[file_name]['captions']['scene'],
-                     "action": metadata[file_name]['captions']['action'],
-                     "rationale": metadata[file_name]['captions']['rationale'],
-                     "object": metadata[file_name]['captions']['object'],
-                     "confidence": metadata[file_name]['confidence'],
-                 }
-                 idx += 1
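The deleted loader's `_generate_examples` first indexes the JSONL annotations by `file_name`, then walks the image archive and looks each image up in that index. A self-contained sketch of the indexing step; the two sample records are invented, though their layout follows the features the script declares (nested `captions` and `confidence` dicts):

```python
import io
import json

# Two invented records in the annotation layout the deleted script expects:
# a "file_name" key plus nested "captions" and "confidence" dicts.
sample_jsonl = "\n".join([
    json.dumps({"file_name": "000001.jpg",
                "captions": {"scene": ["a kitchen"], "action": ["cooking"],
                             "rationale": ["food is on the stove"], "object": ["a pot"]},
                "confidence": {"scene": [5.0], "action": [4.5], "rationale": [4.0]}}),
    json.dumps({"file_name": "000002.jpg",
                "captions": {"scene": ["a beach"], "action": ["surfing"],
                             "rationale": ["there are waves"], "object": ["a surfboard"]},
                "confidence": {"scene": [4.8], "action": [4.9], "rationale": [4.2]}}),
])

with io.StringIO(sample_jsonl) as fp:
    # The original dict comprehension calls json.loads twice per line;
    # parsing each line once is equivalent and halves the work.
    metadata = {}
    for line in fp:
        record = json.loads(line)
        metadata[record["file_name"]] = record

print(sorted(metadata))                             # ['000001.jpg', '000002.jpg']
print(metadata["000001.jpg"]["captions"]["scene"])  # ['a kitchen']
```

With the annotations in memory, a single pass over the tar archive suffices: any image whose basename appears in `metadata` is yielded together with its captions, which is exactly the join the parquet conversion in this commit now precomputes.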