Extended Musical Interface with the Human Nervous System: Assessment and Prospects


Session Title:

  • New Tools for the Composer

Presentation Title:

  • Extended Musical Interface with the Human Nervous System: Assessment and Prospects



  • FISEA 

    Many decades ago the American composer Charles Ives speculated that music would eventually be made through a direct connection between the human brain and devices for sound production. Subsequently, the pioneering physiologist Adrian reported on hearing a translation of the human electroencephalogram (EEG) into audio signals. Decades later, the composers Lucier, Teitelbaum, Rosenboom and others produced major works of music with EEG and other bioelectronic signals, and many have since extended these applications into the kinetic arts as well. The author’s work in biofeedback and the arts, begun 20 years ago, is experiencing a revival because advances in technology now permit the realization in performance of musical concepts that depend on complex real-time analysis of EEG signals, previously achievable only with cumbersome, non-real-time, laboratory-bound methods. In this paper the author assesses current techniques and prospects for further development of an extended musical interface with the human nervous system. Topics discussed include the following:

    • the musical cognitive significance of stimulus-bound EEG events measurable in real-time
    • the relationship of these events to aspects of the perception of musical form, such as feature extraction and temporal gestalt (TG) boundary detection in musical holarchic structures
    • methods of circumventing inherent limitations on the information bandwidth of EEG signals
    • application of event-related potentials (ERPs) to the building of formal musical holarchies in real-time
    • paradigms for algorithmic improvisation using these signals
    • traversing a musical knowledge base by driving an inference engine with cognitively significant ERPs
    • the EEG analysis expert system: application of AI techniques
    • a comparison of using EEG information to make decisions at relatively high levels of musical structure versus direct, low-level event feedback in musical textures, and the applications of each approach
    • techniques from the study of chaos dynamics applied to the analysis of long-term EEG waveform patterns, and their significance for music
    • applications of recent advances in measurement technology, such as the SQUID (Superconducting QUantum Interference Device), which enables EEG detection without direct electrical contact with the subject, superconducting electrodes, etc.
    • speculations on possible extensions of these ideas in the conception of musical instruments, performance and education
    • EEG-to-MIDI: some direct mapping ideas for an input device.
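
    One direct-mapping idea of the kind listed above can be sketched in a few lines. The following is a hypothetical illustration, not the author's implementation: it estimates alpha-band (8–13 Hz) power from a one-second EEG window via the FFT and scales the result to a 7-bit MIDI controller value. The function names, band edges, and normalization strategy are all assumptions chosen for clarity.

```python
# Hypothetical EEG-to-MIDI direct mapping sketch (not the author's method):
# estimate alpha-band power from a short EEG window and scale it to 0-127.
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

def to_midi_cc(power, max_power):
    """Clip and scale a power estimate to the 7-bit MIDI range 0-127."""
    return int(min(power / max_power, 1.0) * 127)

# Example: a synthetic 10 Hz "alpha" oscillation sampled at 256 Hz.
fs = 256
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t)

alpha = band_power(eeg, fs, 8, 13)       # alpha band dominates here
beta = band_power(eeg, fs, 14, 30)       # beta band, for comparison
cc = to_midi_cc(alpha, max_power=alpha)  # normalize against a running peak
```

    In practice the normalization ceiling would track a running maximum of recent band power rather than the current value, so that the controller output reflects relative changes in the performer's EEG over time.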

    Finally, a configuration of hardware and software currently being used to develop the author’s work-in-progress, On Being Invisible II, will be described. This includes a composition and performance language, HMSL (Hierarchical Music Specification Language), used to implement real-time composition strategies in response to EEG analyses; a software synthesis and signal processing environment, Cmix, for non-real-time mapping of these events to precomposed sound tracks; and EEG analysis software, all running on a Macintosh II outfitted with data acquisition and MIDI interface hardware and a NuBus interface to the Digisound-16 audio conversion system.