GovReport Training set split
Hi @Mivg and team,
Can you please tell me more about how you (or others) split the data into training, validation, and test sets? I have just created knowledge graphs of the inputs for all three splits (train, dev, and test), and I find that the training set has many more relatively large documents. The test set gave me no memory issues, and the validation set was closer to the training set, but the training set had some monsters in it (250k, 100k+, and plenty over 50k). Some of these gave me memory issues even with an 80 GB GPU.
I am just wondering why there is such an asymmetry in the sampling.
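For anyone who wants to reproduce the length comparison, here is a rough sketch (the Hub dataset name `tau/scrolls` with config `gov_report`, the `input` field, and the tokenizer choice are my assumptions, not something from the SLED repo):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumption: the SCROLLS GovReport release on the Hugging Face Hub,
# with an "input" field holding the full report text.
dataset = load_dataset("tau/scrolls", "gov_report")
# Placeholder tokenizer, only used to get comparable token counts.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")

for split in ("train", "validation", "test"):
    lengths = sorted(
        len(tokenizer(ex["input"])["input_ids"]) for ex in dataset[split]
    )
    print(
        f"{split}: n={len(lengths)}, "
        f"median={lengths[len(lengths) // 2]}, "
        f"max={lengths[-1]}, "
        f"over 50k tokens={sum(l > 50_000 for l in lengths)}"
    )
```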
Cheers,
Patrick
Hi @patrickocal,
In the SLED work, we did not create or modify any of the datasets; we simply wrote a different loader for convenience, so that the prefix (e.g. question, query, instruction, etc.) can be separated from the input. The datasets all remain as they appear in the SCROLLS paper (Shaham et al., 2022). Specifically, as mentioned in the SCROLLS paper, the GovReport dataset comes from Huang et al. (2021), which provided the original split.
As for your memory issues, you may try truncating the inputs to the longest sequence your hardware can handle.
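For example, with a Hugging Face tokenizer something like the following would cap the inputs (the checkpoint, the maximum length, and the `input` field name are placeholders to adjust to your setup):

```python
from transformers import AutoTokenizer

# Placeholder checkpoint; substitute the model you are actually fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")

# Assumption: the longest input your GPU memory can handle.
MAX_INPUT_TOKENS = 16384

def preprocess(example):
    # Truncate each GovReport input so the extreme training documents
    # no longer exceed GPU memory.
    return tokenizer(
        example["input"],  # SCROLLS-style field name (assumption)
        max_length=MAX_INPUT_TOKENS,
        truncation=True,
    )
```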
Best,
Maor
Perfect, thanks @Mivg. And thanks for the suggestion: I ended up generating the three most problematic KGs slowly on the CPU with 1,500 GB of RAM!