A Language Apparatus

  • The author discusses their creative projects Bodytext, Tower and Crosstalk, exploring how language and communication function in a context where human and machine are both responsible for the articulation and interpretation of texts. The dynamics of such a hybrid apparatus allow insights into the making of meaning and its reception as a socio-technical system. Bodytext, Tower and Crosstalk are language-based, digitally mediated performance installations. Each employs a progressive development of generative and interpretative grammar systems. Bodytext (2010) was authored in Adobe Director and coded in Lingo and C++. Tower (2011) was developed for a large-scale immersive virtual reality system and coded in Python. Crosstalk (2014) was developed and coded in Processing.
    Bodytext is a performance work involving speech, movement and the body. A dancer’s movement and speech are re-mediated within an augmented environment employing real-time motion tracking, voice recognition, interpretative language systems, projection and audio synthesis. The acquired speech, a description of an imagined dance, is re-written through projected digital display and sound synthesis, with the performer causing texts to interact and recombine with one another. What is written is affected by the dance, whilst the emergent texts determine what is danced. The work questions and seeks insight into the relations between kinaesthetic experience, memory, agency and language.
    Tower is an interactive work in which the computer listens to and anticipates what is said by those interacting with it. The system is self-learning: as the inter-actor speaks, the computer displays what they say alongside the potential words they might speak next. The speaker may or may not use a displayed word, and each new word conjunction is added to the corpus employed for prediction. Words uttered by the inter-actor appear as a red spiral of text, at the top of which the inter-actor is located within the virtual reality environment. Wearing a head-mounted display, the inter-actor can look wherever they wish, although they cannot move. The predicted words appear as white flickering clouds of text in and around the spoken words. What emerges is an archaeology of speech in which what is spoken can be seen amongst what might have been said, challenging the singularity of the speaker’s voice and our memory of events.
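    The text does not give Tower's implementation details, but the behaviour it describes – adding each new word conjunction to a corpus and predicting likely next words – can be sketched as a simple bigram model. The class and method names below are illustrative assumptions, not drawn from the work itself (which was coded in Python):

    ```python
    from collections import defaultdict

    class WordPredictor:
        """Illustrative sketch: a bigram-style next-word predictor whose
        corpus grows as the speaker talks (names here are assumptions)."""

        def __init__(self):
            # Maps each word to counts of the words observed directly after it.
            self.successors = defaultdict(lambda: defaultdict(int))

        def observe(self, utterance):
            """Add the word conjunctions of a new utterance to the corpus."""
            words = utterance.lower().split()
            for current, following in zip(words, words[1:]):
                self.successors[current][following] += 1

        def predict(self, word, n=3):
            """Return up to n most frequently observed next words."""
            candidates = self.successors[word.lower()]
            ranked = sorted(candidates.items(), key=lambda kv: -kv[1])
            return [w for w, _ in ranked[:n]]

    p = WordPredictor()
    p.observe("the tower of words")
    p.observe("the tower listens")
    p.predict("tower")  # both "of" and "listens" have followed "tower"
    ```

    Each utterance enlarges the corpus, so the predictions shown to the inter-actor shift as the conversation proceeds – the self-learning quality the paragraph above describes.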
    Crosstalk is a multi-performer installation in which movement and speech are re-mediated within an augmented 3D environment employing real-time motion tracking, multi-source voice recognition, interpretative language systems, a bespoke physics engine, large-scale projection and surround-sound audio synthesis. The acquired speech of the inter-actors is re-mediated through projected digital display and sound synthesis, with the inter-actors’ physical actions causing texts to interact and recombine with one another. Each element in the system affects how the others adapt, from state to state, as the constituent parts of the work – people, machines, language, image, movement and sound – interact with one another. Crosstalk explores social relations, as articulated in performative language acts, in relation to generative ontologies of selfhood and the capacity of a socio-technical space to ‘make people’.