Generating Electronic Dance Music without Mirrors: Corpus-based Modelling without Quotation
Symposium:
- ISEA2013: 19th International Symposium on Electronic Art
Session Title:
- Sonics
Presentation Title:
- Generating Electronic Dance Music without Mirrors: Corpus-based Modelling without Quotation
Presenter(s):
Venue(s):
Abstract:
On the surface, generative music has been successful in styles that provide clear rules for creators: tonal music [Cope 2005], jazz [Lewis 2000], Electronic Dance Music [Eigenfeldt and Pasquier 2011]. While such rule-based systems offer initial success, difficulties arise from the need to express ever finer gradations of rules. A more flexible approach is to learn through analysis of a given corpus. Machine learning, as demonstrated within Music Information Retrieval (MIR), is an active topic of research, although still very much in its infancy: Collins, for example, stated that his automated EDM analysis system “cannot be claimed to be on a par with the musicologist’s ear” [Collins 2012]. The Generative Electronica Research Project has undertaken a long-term investigation into creating EDM through generative methods, using a corpus of 100 human-transcribed tracks as models. Because generation is autonomous, no interaction with humans occurs during generation and no artistic decisions are made in real time: in other words, all creative decisions are coded. Decisions such as how beats are constructed and varied are derived from the corpus through analysis, without quotation, and without resorting to personal algorithms, however successful those may have been in the past. This paper describes our methods and presents examples of autonomous generation by the system.
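The idea of corpus-based generation without quotation can be illustrated with a minimal sketch. This is not the authors' system: the corpus, the 16-step pattern representation, and the function names are hypothetical. The point is only that when statistics are aggregated across the whole corpus before sampling, the output reflects the corpus as a whole without copying any single track.

```python
import random

# Toy "corpus": each track is a 16-step kick-drum onset pattern (1 = hit).
# These patterns are invented for illustration, not transcriptions.
corpus = [
    [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],  # four-on-the-floor
    [1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0],
    [1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0],
]

def step_probabilities(tracks):
    """Probability of an onset at each step, averaged over all tracks.

    Aggregating across the corpus is what avoids quotation: no single
    track's pattern survives intact in the model.
    """
    n = len(tracks)
    return [sum(t[i] for t in tracks) / n for i in range(len(tracks[0]))]

def generate(probs, rng=random.random):
    """Sample a new pattern: each step fires with its corpus-derived probability."""
    return [1 if rng() < p else 0 for p in probs]

probs = step_probabilities(corpus)
pattern = generate(probs)
```

Because every track in this toy corpus places a kick on the downbeats, those steps have probability 1.0 and always fire in the generated pattern, while rarer syncopations appear only occasionally; the style emerges from the statistics rather than from hand-written rules or direct quotation.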