Commit c1bb42f
Parent(s): bcada79
Update README.md
README.md CHANGED
@@ -207,12 +207,12 @@ The high-level captions capture the human interpretation of the scene, providing
 
 From the paper:
 
->**Pilot
+>**Pilot:** We run a pilot study with the double goal of collecting feedback and defining the task instructions.
 >With the results from the pilot we design a beta version of the task and we run a small batch of cases on the crowd-sourcing platform.
 >We manually inspect the results and we further refine the instructions and the formulation of the task before finally proceeding with the
 >annotation in bulk. The final annotation form is shown in Appendix D.
 
->***Procedure
+>***Procedure:*** The participants are shown an image and three questions regarding three aspects or axes: _scene_, _actions_ and _rationales_
 > i,e. _Where is the picture taken?_, _What is the subject doing?_, _Why is the subject doing it?_. We explicitly ask the participants to use
 >their personal interpretation of the scene and add examples and suggestions in the instructions to further guide the annotators. Moreover,
 >differently from other VQA datasets like (Antol et al., 2015) and (Zhu et al., 2016), where each question can refer to different entities
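
The procedure above yields, for each image, one answer per axis (scene, action, rationale). As a minimal sketch of how such records could be represented with the `datasets` library, the snippet below builds a single illustrative example in memory; the field names and values are assumptions for illustration, not this dataset's actual schema or data.

```python
# Minimal sketch only: a hypothetical record shape for one annotated image,
# following the three axes described above (scene, action, rationale).
# Field names and values are assumptions, not this dataset's actual schema.
from datasets import Dataset

records = {
    "image_id": ["0001.jpg"],
    "scene": ["The picture is taken at a skate park."],          # Where is the picture taken?
    "action": ["The subject is riding a skateboard."],           # What is the subject doing?
    "rationale": ["The subject is practicing a trick for fun."], # Why is the subject doing it?
}

ds = Dataset.from_dict(records)
print(ds[0])  # one record with an answer along each of the three axes
```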