In today's LLM-dominated landscape, everyone is focused on SFT data quality. Compared with the usual formulaic AI responses, full of step-by-step instructions and detailed reasoning, these very casual conversations stand out, and they are better suited for building an emotional-companionship chitchat bot.
This project provides a Chinese dialogue dataset derived from Tsinghua University's LCCC (Large-scale Cleaned Chinese Conversation) corpus.

It is based on LCCC-large, which contains roughly 12 million dialogues. Because of that size, each dialogue was encoded with bert-base-chinese embeddings, a kNN-like method was used to sample 10,000 dialogues, and the result was converted to the ShareGPT format.
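The pipeline above can be sketched roughly as follows. This is a minimal illustration, not the exact script used for this dataset: it assumes mean-pooled bert-base-chinese embeddings, uses k-means centroids as one plausible stand-in for the unspecified kNN-like selection, and the toy dialogues and output file name are placeholders.

```python
# Sketch: embed LCCC-style two-turn dialogues, pick a diverse subset,
# and write them out in ShareGPT format. Assumptions are noted inline.
import json

import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.cluster import KMeans

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModel.from_pretrained("bert-base-chinese")
model.eval()

def embed(texts, batch_size=64):
    """Encode texts with bert-base-chinese, mean-pooling the last hidden state."""
    vecs = []
    for i in range(0, len(texts), batch_size):
        batch = tokenizer(texts[i:i + batch_size], padding=True,
                          truncation=True, max_length=128, return_tensors="pt")
        with torch.no_grad():
            out = model(**batch).last_hidden_state
        mask = batch["attention_mask"].unsqueeze(-1)
        vecs.append(((out * mask).sum(1) / mask.sum(1)).numpy())
    return np.concatenate(vecs)

# dialogues: list of [utterance_1, utterance_2] pairs from LCCC-large;
# a toy list is used here so the sketch runs end to end.
dialogues = [["今天好冷啊", "多穿点别感冒了"], ["你吃饭了吗", "刚吃完,你呢"]]
k = min(10_000, len(dialogues))  # 10,000 for the real LCCC-large data

texts = [" ".join(d) for d in dialogues]
emb = embed(texts)

# Diversity sampling: cluster into k groups and keep the dialogue closest to
# each centroid (one of several reasonable "kNN-like" choices).
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(emb)
chosen = []
for c in range(k):
    idx = np.where(km.labels_ == c)[0]
    if len(idx) == 0:
        continue
    dists = np.linalg.norm(emb[idx] - km.cluster_centers_[c], axis=1)
    chosen.append(int(idx[dists.argmin()]))

# Convert the selected dialogues to ShareGPT format.
records = [
    {"conversations": [
        {"from": "human", "value": dialogues[i][0]},
        {"from": "gpt", "value": dialogues[i][1]},
    ]}
    for i in chosen
]
with open("lccc_casual_10k_sharegpt.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```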
From a practical standpoint, since each dialogue has only two turns, they need to be continued with an LLM such as GPT. In testing, the OpenAI models turned out to be too serious and lost the casual flavor, while a quick trial showed that ERNIE Bot (文心一言) can continue this kind of chitchat. This was only a brief experiment; the continuations are not included in this dataset.
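As a rough sketch of how such a continuation could be done, the snippet below asks a chat model to extend one two-turn dialogue in the same casual register. The prompt wording and model name are illustrative assumptions, and, as noted above, no generated continuations are part of the released data.

```python
# Sketch: continue a casual two-turn dialogue with an LLM while keeping the
# informal tone. Model choice and prompt are assumptions, not the dataset's.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def continue_dialogue(turn_a: str, turn_b: str, n_extra_turns: int = 4) -> str:
    """Ask the model to extend the chat without turning formal or preachy."""
    prompt = (
        "下面是一段很随意的中文闲聊,请以同样轻松口语化的风格续写"
        f"{n_extra_turns}轮对话,不要变得正式或说教:\n"
        f"A: {turn_a}\nB: {turn_b}\n"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return resp.choices[0].message.content

print(continue_dialogue("今天好冷啊", "多穿点别感冒了"))
```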
Of course, the best option is still to collect real-world conversations.