spawn99 committed on
Commit
11c2ca8
1 Parent(s): 3ad85a4

Update README.md

Files changed (1)
  1. README.md +114 -0
README.md CHANGED
@@ -55,3 +55,117 @@ tags:
  - movie dialog
  - cornell
  ---
+
+ Cornell Movie-Dialogs Corpus
+
+ Distributed together with:
+
+ "Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs"
+ Cristian Danescu-Niculescu-Mizil and Lillian Lee
+ Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, ACL 2011.
+
+ (this paper is included in this zip file)
+
+ NOTE: If you have results to report on these corpora, please send email to [email protected] or [email protected] so we can add you to our list of people using this data. Thanks!
+
+
+ Contents of this README:
+
+ A) Brief description
+ B) Files description
+ C) Details on the collection procedure
+ D) Contact
+
+
+ A) Brief description:
+
+ This corpus contains a metadata-rich collection of fictional conversations extracted from raw movie scripts:
+
+ - 220,579 conversational exchanges between 10,292 pairs of movie characters
+ - involves 9,035 characters from 617 movies
+ - in total 304,713 utterances
+ - movie metadata included:
+   - genres
+   - release year
+   - IMDB rating
+   - number of IMDB votes
+ - character metadata included:
+   - gender (for 3,774 characters)
+   - position on movie credits (3,321 characters)
+
+
+ B) Files description:
+
+ In all files the field separator is " +++$+++ "
+
+ - movie_titles_metadata.txt
+   - contains information about each movie title
+   - fields:
+     - movieID,
+     - movie title,
+     - movie year,
+     - IMDB rating,
+     - no. IMDB votes,
+     - genres in the format ['genre1','genre2',...,'genreN']
+
+ - movie_characters_metadata.txt
+   - contains information about each movie character
+   - fields:
+     - characterID
+     - character name
+     - movieID
+     - movie title
+     - gender ("?" for unlabeled cases)
+     - position in credits ("?" for unlabeled cases)
+
+ - movie_lines.txt
+   - contains the actual text of each utterance
+   - fields:
+     - lineID
+     - characterID (who uttered this phrase)
+     - movieID
+     - character name
+     - text of the utterance
+
+ - movie_conversations.txt
+   - the structure of the conversations
+   - fields:
+     - characterID of the first character involved in the conversation
+     - characterID of the second character involved in the conversation
+     - movieID of the movie in which the conversation occurred
+     - list of the utterances that make the conversation, in chronological order: ['lineID1','lineID2',...,'lineIDN']
+       (these lineIDs have to be matched with movie_lines.txt to reconstruct the actual content; see the parsing sketch after this list)
+
+ - raw_script_urls.txt
+   - the urls from which the raw sources were retrieved
+
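Since the listing above is the whole contract for these files, a short parsing sketch may help. This is not part of the original distribution: the file paths, the ISO-8859-1 encoding, and the helper names `load_lines` / `load_conversations` are assumptions made for illustration. It splits each record on " +++$+++ " and rebuilds one conversation by looking up its lineIDs in movie_lines.txt.

```python
# Minimal parsing sketch for the record layout described above.
# Assumptions not stated in this README: the data files sit in the current
# directory and are readable as ISO-8859-1 (a common encoding for this
# corpus); adjust the paths and encoding to match your copy.
import ast

SEP = " +++$+++ "  # field separator used in all files


def load_lines(path="movie_lines.txt"):
    """Map lineID -> (characterID, movieID, character name, utterance text)."""
    lines = {}
    with open(path, encoding="iso-8859-1") as f:
        for row in f:
            parts = row.rstrip("\r\n").split(SEP)
            if len(parts) != 5:
                continue  # skip any malformed rows
            line_id, char_id, movie_id, char_name, text = parts
            lines[line_id] = (char_id, movie_id, char_name, text)
    return lines


def load_conversations(path="movie_conversations.txt"):
    """Yield (characterID1, characterID2, movieID, [lineID, ...]) per conversation."""
    with open(path, encoding="iso-8859-1") as f:
        for row in f:
            char1, char2, movie_id, line_list = row.rstrip("\r\n").split(SEP)
            # the last field is a Python-style list literal: ['lineID1', ..., 'lineIDN']
            yield char1, char2, movie_id, ast.literal_eval(line_list)


if __name__ == "__main__":
    lines = load_lines()
    # reconstruct the first conversation by matching its lineIDs against movie_lines.txt
    _, _, _, line_ids = next(load_conversations())
    for line_id in line_ids:
        _, _, char_name, text = lines[line_id]
        print(f"{char_name}: {text}")
```

Keeping movie_lines.txt in a lineID-keyed dictionary makes the join with movie_conversations.txt a constant-time lookup per utterance, which is enough for a corpus of ~300K lines.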
+ C) Details on the collection procedure:
+
+ We started from raw publicly available movie scripts (sources acknowledged in
+ raw_script_urls.txt). In order to collect the metadata necessary for this study
+ and to distinguish between two script versions of the same movie, we automatically
+ matched each script with an entry in the movie database provided by IMDB (The Internet
+ Movie Database; data interfaces available at http://www.imdb.com/interfaces). Some
+ amount of manual correction was also involved. When more than one movie with the same
+ title was found in IMDB, the match was made with the most popular title
+ (the one that received the most IMDB votes).
+
+ After discarding all movies that could not be matched or that had fewer than 5 IMDB
+ votes, we were left with 617 unique titles with metadata including genre, release
+ year, IMDB rating, no. of IMDB votes, and cast distribution. We then identified
+ the pairs of characters that interact and separated their conversations automatically
+ using simple data processing heuristics. After discarding all pairs with fewer than 5
+ conversational exchanges, 10,292 pairs were left, exchanging 220,579
+ conversational exchanges (304,713 utterances). After automatically matching the names
+ of the 9,035 involved characters to the list of cast distribution, we used the
+ gender of each interpreting actor to infer the fictional gender of a subset of
+ 3,321 movie characters (we raised the number of gendered characters to 3,774 through
+ manual annotation). Similarly, we collected the end credit position of a subset
+ of 3,321 characters as a proxy for their status.
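Readers who want to check these counts against their own copy of the data can tally movie_conversations.txt directly. A hedged sketch follows; it reuses the hypothetical `load_conversations()` helper from the sketch after section B, so the same path and encoding assumptions apply.

```python
# Rough sanity check of the figures quoted above. The totals should come out
# at (or very near) the numbers reported in this README if the copy of the
# corpus is complete.
from collections import Counter

pair_exchanges = Counter()
total_utterances = 0
for char1, char2, movie_id, line_ids in load_conversations():
    # a "pair" is two characters within one movie; ignore the order of the IDs
    pair_exchanges[(movie_id, frozenset((char1, char2)))] += 1
    total_utterances += len(line_ids)

print(len(pair_exchanges), "character pairs")                    # reported: 10,292
print(sum(pair_exchanges.values()), "conversational exchanges")  # reported: 220,579
print(total_utterances, "utterances")                            # reported: 304,713
```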
+
+
+ D) Contact:
+
+ Please email any questions to: [email protected] (Cristian Danescu-Niculescu-Mizil)