Bruges, in the Middle Ages the Venice of the north, locked between land and sea and beset by the misfortunes of time. Plagued by natural disasters and the calamities that lie at its heart, and faced with decline, the city struggles towards its Renaissance… In the heart of that city works a painter, an inspired alchemist in search of new materials. There he mixes his breath with oil and pigments, condenses color and light, transparency and solidity. The secret of oil painting he discovers will offer immortality to his city. This film shows us the Middle Ages as seen through the imaginative eyes of the Flemish masters and does not restrict itself to a faithful reconstruction of historical fact. It shows a world of fantasy that combines legends with popular and religious rituals, while creating a story, searching for meaning and relinking fantasy with reality, the true with the not so true. Some of the characters serve as points of reference in this fantasy world as they merge with details from paintings by Jan van Eyck, Hans Memling, Pieter Bruegel and Hieronymus Bosch. They bring to light what the archives have kept hidden: the secret of oil painting discovered more than five hundred years ago and kept stored behind the heavy and elusive silence of stone…
Hardware & software featured:
Image processing: Henry Paint Box (Quantel), Flame (Discreet Logic)
Video editing: Video Cube with Photoshop, Edit Box (Quantel)
Audio morphing: AudioSculpt (IRCAM)
Audio editing: Sound Designer, Deck
Location Intersection began as a documentation by Jeep Johnson of the show Between Water and Sand by Daria Dorosh, which was on exhibit at the A.I.R. gallery in the spring of 1994. Initially the work was recorded as a representation of Daria's work. As we began the editing we soon became unsatisfied with the "traditional" approach to documentation. We began exploring how we could add another dimension to the material. We started to alter and digitally manipulate the images in Mac-based computer programs such as Adobe After Effects and Adobe Premiere. We then edited the material on the Mac using the Media 100 system. The results were very exciting. We eagerly continued down that path and started to integrate related visuals like a series of sand dune shots, aerials over the Midwest, and still photographs of cityscapes. We found that in the digital domain the computer was able to integrate every kind of image we wanted to work with. Quickly we were able to create what felt like an infinite amount of material to draw from. We continued developing distinct treatments for each section. We took similar approaches with the soundtrack as we did with the picture: recording sounds, inputting them into the computer and manipulating them in the digital domain. We discovered that, as with the images, the sounds we could create and manipulate were endless. The computer facilitates the creation of a new space in reality. Unique juxtapositions emerge. We shot a gallery scene, removed the walls, and replaced them with an aerial image from the Midwest. Time and location became the elements of a new language. The video reflects the integration of digital technology with traditional photography, video, painting and all other mediums. We are exploring this new language as we create it. The video was co-directed by Daria Dorosh and Jeep Johnson. It was shot and edited by Jeep Johnson with constant input from Daria Dorosh. The digital images were the result of intensive collaboration between both artists. The process flourished through this successful collaboration.
Inside Round is about the mind, flabbergasted in the face of existential absurdity: reflecting upon the ongoing outside life it is exposed to while being an isolated world of its own at the same time. We create the world we belong to. We belong to a world we did not create. Feet on the ground, head in the sky. Bustle all around us, stars out of reach above. A multimedia piece involving computer music and computer animation combined with prerecorded video. The picture and the music are equally significant. The relationship between them, which is complementary most of the time, is crucial to the piece: its expressivity and dynamics emerge from their interaction. The idea was to express the feeling of 'existential loneliness and absurdity', as a fundamental state of mind, by using extremely simple, elementary forms and gestures, which seemed appropriate for achieving that result. Inside Round is a contemplative piece and in a way could even be called minimalistic.
This music was composed in 1995, for the one hundredth anniversary of cinema, for the silent film "Jeux des reflets et de la vitesse" made by Henri Chomette in France in 1925. It was commissioned by the Société Philharmonique de Bruxelles and by the Cinémathèque Royale de Belgique, with an ADAT tape playing the music. The music was composed in hexaphonic format and most of the sound materials were generated by a real-time multichannel granular morphing algorithm developed at the Polytechnic Faculty in Mons, Belgium, and running on the IRCAM Sound Processing Workstation.
The older brother of René Clair, Henri Chomette made a number of experimental films. This one is made out of retrieved fragments of footage celebrating Paris as a town in perpetual movement. Most of the film is a long, hypnotizing run through Paris by train and by boat. The music accentuates the impression of speed, and the constantly moving rhythmical and hexaphonic spatial sound textures surround the audience, creating spinning movements as well as a sort of musical travelling. Passing through tunnels and under bridges becomes almost a physical experience.
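The real-time multichannel granular morphing algorithm mentioned above is not documented here, but the basic idea behind granular morphing can be sketched in a few lines: short windowed grains are scattered into an output buffer, with the probability of drawing from one source or the other shifting over time. The Python below is a minimal offline illustration under those assumptions, not the Mons/IRCAM implementation.

```python
# Minimal offline sketch of granular morphing between two sources.
# Purely illustrative: not the real-time multichannel algorithm described above.
import numpy as np

def granular_morph(src_a, src_b, out_len, sr=44100,
                   grain_ms=40, density=200, seed=0):
    """Scatter Hann-windowed grains, drawing from src_a early in the
    output and from src_b late, so the texture gradually morphs."""
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)
    out = np.zeros(out_len)
    n_grains = int(density * out_len / sr)
    for _ in range(n_grains):
        pos = rng.integers(0, out_len - grain_len)       # where the grain lands
        morph = pos / out_len                             # 0 -> src_a, 1 -> src_b
        src = src_b if rng.random() < morph else src_a    # probabilistic source choice
        start = rng.integers(0, len(src) - grain_len)
        out[pos:pos + grain_len] += window * src[start:start + grain_len]
    return out / np.max(np.abs(out))                      # normalize

if __name__ == "__main__":
    sr = 44100
    t = np.arange(sr * 2) / sr
    tone = np.sin(2 * np.pi * 220 * t)                        # stand-in source A
    noise = np.random.default_rng(1).uniform(-1, 1, sr * 2)   # stand-in source B
    morphed = granular_morph(tone, noise, out_len=sr * 4, sr=sr)
    print(morphed.shape, morphed.dtype)
```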
"Panini Stickers" is performed by [THE], Ed Harkins and Phil Larson of the University of California at San Diego Music Department, with video by Vibeke Sorensen of the School of Cinema-Television, University of Southern California. This piece is a development of a project originally prepared in 1959 for bassoon and dancer. It was premiered on national state radio (Birdies) as part of a video for a California political convention.
I have always been fascinated by the sounds of piano tuning and wanted to write a piece based on the ritualistic atmosphere it can evoke before a concert begins, a kind of aural preludium. A good piano tuner is a music maker in his own right. He restores anew the raw materials used by the composer and performer according to the tuning of the day, laying the foundation for the realization of their music. Every tuner has his own working rhythms, combining the shifting of the hammer to a new tuning pin, the playing of the key to be tuned, and the adjusting of the hammer to move the string onto the "right spot". A tuner continually listens to the speed of beats between intervals to determine the accuracy of his work, and has his own special checks and controls to engineer a harmonic balance over the keyboard. These rhythms, and the melodic and harmonic elements drawn from equal temperament tuning, were the source of the musical materials in the piece. I used the Yamaha Disklavier because I wanted the freedom to write for the piano without the technical limitations of a pianist and yet employ an acoustic instrument. Using the notation software Finale on a computer provided the means not only to write the notes but also to program the "interpretation" for the concert. Because an entirely human performance is never possible using a computer, the slightly mechanical sound produced on the Disklavier gave a quality I was looking for: "the piano tuning that became realtime music".
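The "speed of beats between intervals" that the tuner listens for follows from simple arithmetic: two nearly coinciding partials beat at the difference of their frequencies. The sketch below is a simplified model of that arithmetic (not the composer's working material) for a tempered fifth and major third above middle C.

```python
# Beat rates a tuner hears between nearly coinciding partials of tempered intervals.
# A simplified arithmetic model, for illustration only.

A4 = 440.0  # reference pitch in Hz

def et(freq, semitones):
    """Pitch a given number of equal-tempered semitones above freq."""
    return freq * 2 ** (semitones / 12)

def beats(f_low, f_high, partial_low, partial_high):
    """Beats per second between partial_low of the lower note
    and partial_high of the upper note."""
    return abs(partial_high * f_high - partial_low * f_low)

C4 = et(A4, -9)      # C below A440, about 261.63 Hz
G4 = et(C4, 7)       # tempered fifth above C4
E4 = et(C4, 4)       # tempered major third above C4

print(f"fifth C4-G4 beats at about {beats(C4, G4, 3, 2):.2f} Hz (slow, nearly pure)")
print(f"third C4-E4 beats at about {beats(C4, E4, 5, 4):.2f} Hz (fast, characteristic)")
```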
This work is one of a series of pieces called Tavole in which the timbre and dynamic possibilities of musical instruments were studied. Tavola IV, dedicated to the viola and nicknamed "of the rustle", utilizes the sound possibilities that the string instrument allows. Timbre plays a very important part. The original idea was to point out a sound universe that is usually very difficult to hear, made up of attack transients, rustles and infrasound. These kinds of sounds are normally not utilized in music. The aim is to work inside the sound, directly modeling the acoustic material. The formal organization of the piece is developed starting from single notes that were articulated following timbral principles. The sound discovery and the hypnotic movement were realized with the purpose of extending the perception of space-time. The premiere performance of Tavola IV was given at the Computer & Art Festival in Padua on the 24th of February 1994. The piece received the second prize at the XVI International Competition "L. Russolo" and is recorded on CD (Fondazione Russolo Ef. Er. P94.).
Performs duo pieces from THE H.A.L.I. CONFIGURATION: bass clarinet & electric guitar with direct MIDI control of AKAI/EPS samplers (among other things tuned microtonally and using grain synthesis) & effects processors. "SAMPLIFIED": a 35-minute piece in 5 movements. "HOW TO MOVE A H.A.L.I YARD": a 30-minute piece in 3 movements. The pieces' expanding sound material uses 12, 19, 24, 31 & 53 tones to the octave (a small tuning sketch follows the gear list below). The original sounds for the pieces are programmed into the 2 samplers, so that they can be continuously modulated and moved inside the speaker set-up via the bass clarinet's and the electric guitar's MIDI interfaces. The harmonic and melodic material is developed on the basis of the relation between the digital microtonal and the analogue diatonic instrumentarium/gearium.
Jorgen Teller: Casio MIDI guitar with EPS sampler, DP-4 effect with realtime control, distortion, wah.
Jakob Draminsky Hojmark: bass clarinet with IVL Pitchrider to AKAI S-1000 sampler, ZOOM 9030 effect with realtime control.
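The tunings used in these pieces (12, 19, 24, 31 and 53 tones to the octave) are all equal divisions of the octave, so every pitch follows from the single formula f_k = f_0 * 2^(k/n). The sketch below only illustrates that arithmetic; the reference pitch and output format are assumptions, not the samplers' actual tuning tables.

```python
# Equal divisions of the octave (EDO) used in the H.A.L.I. pieces: 12, 19, 24, 31, 53.
# Illustrative arithmetic only; not the actual AKAI/EPS tuning tables.

BASE = 261.63  # an arbitrary reference pitch in Hz (roughly middle C)

def edo_scale(divisions, base=BASE):
    """Return the frequencies of one octave divided into `divisions` equal steps."""
    return [base * 2 ** (k / divisions) for k in range(divisions + 1)]

for n in (12, 19, 24, 31, 53):
    step_cents = 1200 / n
    scale = edo_scale(n)
    print(f"{n}-EDO: step = {step_cents:.1f} cents, "
          f"first steps = {[round(f, 1) for f in scale[:4]]} Hz")
```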
The aesthetic images which occur in the mind of the listener during the performance of a piece of music and how they relate to the way the music is perceived are the concern of the composer. The placement of sound images in three dimensional space when performing electroacoustic music on tape over a number of loudspeakers and how this imaging relates to the way the music is perceived by the listener is the concern of the sound diffuser. As a composer and performer of electroacoustic music on tape, I wanted to create a work in which I could investigate and explore these two aspects of “image”. There is an interplay between the real image and the altered image throughout the work. Sometimes a sound may be recognized and associated with one in the real world, but these images change over time, as does their associated “meaning”. Similarly, the position of the sound image is constantly changing, sometimes slowly, at other times rapidly, and the breadth and depth of these changes are of course enhanced when the piece is performed over a multi-channel diffusion system. Altered Images was realized in the Electroacoustic Music Studios at Northern College, Aberdeen and at the University of Birmingham in August 1995. It was premiered in Montreal in January 1996.
I am an improvisational musician using computers, a participant in a network of Japanese musicians, and an organizer of projects. My project "Invisible Objects", utilizing two Macintosh PowerBooks, is a challenge to myself to create real-time sound and play improvised music in real-time interaction with the computer, pushing the use of technology in a live situation to the limit. All composition, performance and restructuring is by myself. Other projects in which I have been involved are the "Realtime conducting system", which I created and developed using Macintosh EtherTalk and the Internet, and "Sampling Virus", working together with OTOMO Yoshihide.
A short program of interactive works for violin and computers. The program consists of the European premiere of EFFECTIVE (1996) by Robert Rowe, for violin and effect processor; ETUDE (1992) by Kimura, an improvisation work for violin and interactive computer system; TOCCATA (1935) by Conlon Nancarrow, a work for player piano (MIDI piano) and violin.
Sound & Fury is the title of a whole "class of compositions". Each piece has properties similar to all the other pieces in the class, but it also reveals individual features at every single public appearance. The concert performance features at least two levels of interaction: musician/computer and computer/environment (via controlled acoustical feedback). The details and the general layout of the music are experienced as emergent phenomena brought forth by the dynamical system constituted by these interactions. All sounds in Sound & Fury are generated in real time with a fairly peculiar sound synthesis technique devised by the composer, functional iteration synthesis. This is a method of "nonstandard" digital synthesis of sound, i.e. a method which abstracts from known acoustical models (and especially from the Fourier acoustical paradigm). "Nonstandard" approaches to sound synthesis represent an area of research unique to computer music (among the pioneers in nonstandard synthesis of sound are I. Xenakis and G.M. Koenig). Functional iteration synthesis is modeled after the mathematics of "chaos theory". However, Sound & Fury utilizes such mathematical models not only for the generation of sound, but for the generation of the musical structure itself, as it unfolds in real time. Every performance reveals paths and trajectories of sound of its own, due to different starting parameters set up by the performer. In this way, every performance reflects, in its timbres and textures, in the timing of its pace and rhythm, the notion of "temporal horizon", the "long-term unpredictability" of events (popularly known as "the butterfly effect"), a common feature of dynamical systems: not only natural systems, but also social and cultural systems. And the unpredictability, as well, of our experiences and life. (Hence the title, drawn from Shakespeare's Macbeth.)
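As a rough picture of what "functional iteration synthesis" means in practice, the sketch below generates audio samples by iterating a nonlinear map and shows how minutely different starting parameters diverge (the "butterfly effect" the note refers to). The logistic map is used here purely as a stand-in; the composer's actual iterated functions and real-time engine are not reproduced.

```python
# A toy "nonstandard" synthesis sketch: samples produced by iterating a nonlinear
# map (here the logistic map), in the spirit of functional iteration synthesis.
# An illustration of the principle, not the composer's actual algorithm.
import numpy as np

def iterated_map_voice(n_samples, r=3.7, x0=0.5, iterations_per_sample=1):
    """Generate one 'voice' by iterating x <- r*x*(1-x) and reading x as audio."""
    x = x0
    out = np.empty(n_samples)
    for i in range(n_samples):
        for _ in range(iterations_per_sample):
            x = r * x * (1.0 - x)
        out[i] = 2.0 * x - 1.0          # map [0, 1] to [-1, 1]
    return out

if __name__ == "__main__":
    sr = 44100
    # Slightly different starting parameters give diverging trajectories:
    # the long-term unpredictability the program note describes.
    a = iterated_map_voice(sr, r=3.7, x0=0.500000)
    b = iterated_map_voice(sr, r=3.7, x0=0.500001)
    divergence = np.abs(a - b).max()
    print(f"max divergence after 1 s of audio: {divergence:.3f}")
```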
Special preview for ISEA96 delegates of Stacey Spiegel's project 'Safe Haven': a marine-safety harbor simulator featuring a 360-degree Virtual Reality environment for experiencing the multicultural city of Rotterdam.
Interactive Virtual Drama: Body Communication Actor “MIC” and Poetic Communication Actress “MUSE”
Why do people, regardless of age or gender, have an affinity for objects manifested in the human form? From the earthen figures of ancient times to mechanical dolls, teddy bears and robots, is it not true that man has conceived such objects in his imagination, then formed attachments and transferred emotions to them? We address the issues of communication and the aesthetics of artificial life that possesses this "human form" in modern society, from both artistic and engineering standpoints. As we create a virtual life that is nothing short of an artificial life, and communicate with this life itself, we have to ask where our future is leading us. An example is presented in which emotions are interpreted from human voices, and emotional responses are triggered within the interactive setting of the Maturing Neuro-Baby, "MIC & MUSE". "Neuro-Baby" (NB) is a communication tool with its own personality and character, including emotional modeling, such as reacting to changing voices, facial expressions and behavior. Based on the experience of developing the early version of NB, we started the development of a revised version, "Maturing Neuro-Baby". The basic improvements in Maturing Neuro-Baby are the following. The Neuro-Baby character customizes itself to individual human communication partners by learning. Learning is achieved by Artificial Neural Networks (ANN): a mapping from the input signal to the emotional state of the NB (recognition mapping), and a mapping from that state to an appropriate expression showing the response of the NB (expression mapping). "MIC" is a male child character. He has a cuteness that makes humans want to speak to him. MIC recognizes several emotions (joy, anger, surprise, sadness, disgust, teasing, fear) from the intonation of the human voice. People use a microphone when communicating with MIC. For example, if one whistles, MIC's feeling will be positive and he will respond with excitement. If the speaker's voice is low and strong, MIC will feel poorly and become angry. "MUSE" is a goddess. She is very expressive, has refined manners, is feminine, sensual, and erotic. MUSE's emotions are generated by a musical grammar, for example: joy, a rising musical scale; anger, vigoroso; sadness, volante; disgust, discord; teasing, scherzando; fear, pesante. People can communicate with MUSE in an improvisational manner by means of a musical installation. From the standpoint of an artist, it is interactive art based on communication and on creatures that have a real ability to participate in an interactive process. Moreover, we think that by selecting a "human" (the creature with which we realistically communicate the most) we establish a condition that demands a creative character from a creature. From an engineering standpoint, we have come to the conclusion that if we want to create life-like characters, we have to develop non-verbal communication technologies. These are expected to give characters the capability of achieving heartfelt communication with humans by exchanging emotional messages. These life-like characters, or "androids", will open up a new point of view in a new direction which allows the blending of art, computer science, psychology, and philosophy in a kind of novel research on realistic human expression.
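The two learned mappings described above, recognition (voice features to emotional state) and expression (emotional state to response), can be pictured as a pair of small feedforward networks. The sketch below is a schematic stand-in with invented feature counts, emotion labels and responses; it is not the actual Neuro-Baby system.

```python
# Schematic stand-in for Neuro-Baby's two mappings:
#   recognition: voice features -> emotional state
#   expression:  emotional state -> response
# Invented shapes and labels; not the real system.
import numpy as np

EMOTIONS = ["joy", "anger", "surprise", "sadness", "disgust", "teasing", "fear"]
EXPRESSIONS = ["smile", "frown", "wide_eyes", "droop", "grimace", "wink", "cower"]

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """One randomly initialised dense layer (weights, bias)."""
    return rng.normal(0, 0.5, (n_in, n_out)), np.zeros(n_out)

def forward(x, params):
    """Tanh hidden layer followed by a softmax output."""
    (w1, b1), (w2, b2) = params
    h = np.tanh(x @ w1 + b1)
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Recognition net: 8 prosodic features (pitch, loudness, tempo, ...) -> 7 emotions
recognition = [layer(8, 16), layer(16, len(EMOTIONS))]
# Expression net: 7-dimensional emotional state -> 7 canned responses
expression = [layer(len(EMOTIONS), 16), layer(16, len(EXPRESSIONS))]

voice_features = rng.uniform(-1, 1, 8)           # pretend prosodic measurements
state = forward(voice_features, recognition)      # recognition mapping
response = forward(state, expression)             # expression mapping

print("emotional state:", dict(zip(EMOTIONS, state.round(2))))
print("chosen response:", EXPRESSIONS[int(response.argmax())])
```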
The Multi Mega Book project is an electronic book sculpture composed of 24 mobile Maxi-Pages. At the top of the Maxi-Pages a screen is placed, onto which the content of the virtual book (animated images, video, still images) is projected. The MMB structure is 6 m wide, 3.70 m high and 3 m deep. The Maxi-Pages are multimedia panels composed of fixed images and a map of interactive word-symbols. They have their own content, their own function, their own interactivity. The Maxi-Pages have integrated loudspeakers which allow for their own sounds and music. The audio is composed of sounds and music which represent the diverse themes contained in the MMB. The Maxi-Pages are synchronized with the screen. There are different levels of interactivity between the images placed in the Maxi-Pages and the animated films projected on the screen.
The concept for this work is both musical and visual, and the following description contains information relevant to both aesthetic domains. Knowledge from the natural sciences, paired with computer technology, has opened up new perspectives within the arts. It is now relatively easy to use cross-disciplinary mapping to display the same idea, the same data structure, in several ways. The construction of this work is one of many possible mappings, and the animation is based on a direct representation of the data structure that comprises the music: one "sees" the music as one hears it. Beyond their artistic qualities, mappings like this can very well be considered pedagogic as well, as an entry into the current debate about musical representation. With the development of user interfaces that will allow the user to find his or her own visual way through the music, these kinds of mappings would share common borders with the VR field. Technically, the work was realized first as music, and the piece was then processed through an FFT analysis of the same type used to make sonograms. This was our preferred kind of analysis because of the visual results it yielded. The data set was then structured to make it available to the program used for the creation of the model that was later "filmed". The result is an experience of flying over/under/through the music as it is being played.
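The chain described above, music analysed by FFT and the resulting data structure read as something to fly through, can be illustrated in miniature: a short-time FFT turns a signal into a grid of (time, frequency, magnitude) values, and that grid can be treated directly as a height field. The signal, frame sizes and normalisation below are assumptions for illustration, not the work's actual analysis or rendering pipeline.

```python
# Minimal sonogram-as-terrain sketch: FFT magnitudes become a height field.
# Synthetic signal and invented parameters; not the work's actual pipeline.
import numpy as np

def spectrogram(signal, sr, frame=1024, hop=512):
    """Short-time FFT magnitudes: rows = time frames, columns = frequency bins."""
    window = np.hanning(frame)
    frames = []
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.fft.rfft(window * signal[start:start + frame])
        frames.append(np.abs(spectrum))
    return np.array(frames)

sr = 22050
t = np.arange(sr * 2) / sr
# A gliding tone as a stand-in for "the music"
signal = np.sin(2 * np.pi * (220 + 440 * t / 2) * t)

S = spectrogram(signal, sr)
terrain = S / S.max()          # normalised heights for a 3-D fly-through
print("terrain grid:", terrain.shape, "(time frames x frequency bins)")
print("peak ridge follows the glide:", terrain.argmax(axis=1)[:5], "...")
```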
"Changes" is a one-minute video about images that change. To me computer graphics is all about that: changes. That you might, and indeed can, work with 10-100 different versions of a picture. Computer graphics gives you the ability to change and work on your impulses with no risk whatsoever. The motifs are things that interest me. Pretty women that become strange and frightening creatures. Cats that become cat-flowers or are dressed up in men's suits. Women's faces that are mixed with the leaves of flowers so they appear to be beautiful, though a bit sad. The result is kind of strange, but pretty. Frightening but moving. I like the dark sides of human emotions, pictures that are both aesthetically appealing and deeply disturbing. So that is what my video is: kind of pretty, kind of disturbing. There is no real story-line in my video; it is simply based on the pleasure of looking. The simple pleasure of not knowing what is going to happen next.
Music by the Danish composer Anne Linnet.
Produced with support from Aarhus Filmworkshop.
After they took away the dead body of my mother I made photocopies of everything that was around her: medicines, electrocardiograms, the paper with her diagnosis of terminal cancer, prints, her prayer book, her sewing case. Then I transferred the photocopies onto cels and made cartoons with them; a friend added stains. This video contains the remains of my mother: I'm floating on prokaryote-eukaryote fluids. I'm evolving towards an ecosystem governed by microbes. I'm between the nucleation of water and a dead planet. I'm under a panspermic government. Even if my belly boils, I prefer the cruelty of a coating of blood-hair-sperm.
301 Nails… is a continuation of a series of works which consider destiny. Set in a gambling casino, near a boxing ring, next to the big church behind the race track. The protagonist and "Spider" duck and weave their way through the day's events, tempting fate with their own prophesies. They finally end up skipping town and head to the big game up in Mesmorosa, only to come to a screeching halt.
The Possible Fog of Heaven is a consideration of the dimensionality of metaphysics and the metaphysics of dimensionality. Elvis speaks for the first time from the afterlife, describing in voice-over and graphic text his experience in Heaven. The structure of the tape follows the King's last prescription.
My film is based upon historical sources from Bulgarian art. I have drawn upon the pure, traditional art of my country, expressed in the thousands of icons created over the centuries. Their Biblical content has been developed into a direct participant in the action of the film. The script concerns the destiny of a motherless child. Its subject is the birth of a child, as seen through the eyes of one person. It seeks to convey very specific emotions, without personal sensation. In this way, the film functions as a testament: father to son, artist to public. In creating the film, I have also drawn on the methods of psychodrama. My appearance in the role of the Madonna shows that a man can also love a child very much. The moment of physical death is fused with the role of the Savior, but with a notebook in his arms rather than the Bible, the book of traditional knowledge. In that way the film's accent is on the human problems of a new world, and on change accomplished by reaching for knowledge and wisdom. As the text of the final song says: "When you are in a blind alley / Beaten down, boxed in by four walls / Make your own new way from all the cut-off paths / Be on the move again".
GMS is part of The Mutant Genome Project (TMGP), an ongoing project that deals with the effects, and prevailing authority, of medical technology in contemporary society. The work takes its impetus from the worldwide Human Genome Initiative, which aims to map and document all human genetic material, with a view to being able to change it. The motivations behind this scientific research are entirely admirable: scientists working to rid the world of genetic disease. TMGP operates at the point where the rarefied world of altruistic genetic research meets the prosaic world of consumer medicine, where drug companies have to make a profit for their shareholders. TMGP asks the question: who makes the decisions, and whose interests do these serve? In a world where every part of our superficial bodies can be surgically altered to conform to an increasingly global ideal, where there is an imperative to provide the best for yourself and for your family, and where the desire to fulfill these obligations is constantly exploited by advertising and the mass media, TMGP asks, "could genetic engineering become the cosmetic surgery of the future?" TMGP is a fictitious bio-technology company that markets genetically engineered 'designer babies' called LUMPs (Lifeforms with un-evolved mutant properties), supposedly the first progeny of the Human Genome Initiative.
LUMP is a cute, lovable baby with six eyes and no legs; it is very intelligent and it is immune to all known diseases. LUMP represents the human body redesigned by an engineer for maximum efficiency with a high 'safety profile'. GMS, the proposed installation for ISEA96, takes the form of three large-format, computer-generated photographs and a Macintosh-based interactive multimedia work that emulates and critiques medical advertising and ideology. LUMP is seen as a pristine, 3D-modeled form with the sort of beautiful, shiny surface that suits marketing objectives more than reality. The GMS interactive shows people what they would like to see rather than what they are actually buying: a fleshy mass with a cocktail of genes that might have unforeseen effects on our evolution. The interactive gives the user a sense that they are in control, much like any other computer game, self-consciously oversimplifying the whole process by reducing LUMP to a hastily digested commodity with a price tag. This brings the whole 'interactive advertising' process to the surface. The viewer, as a customer, designs their ideal baby by choosing options in the same way one would choose a new car or a home loan (the interactive is based on laptop-based home loan simulators advertised on Australian television). In the installation, this cute and bloodless process is contrasted with a series of three 1.3 m square digital photographs depicting the visceral anatomies of the LUMPs. These almost Caravaggio-esque images combine 3D renderings of dead, dissected LUMPs with anatomically accurate interiors rendered by hand directly into the computer via a pressure-sensitive tablet, combining the realities of 3D modeling with the speciously 'warm and fuzzy' techniques of traditional medical illustration. When the viewer sees the animation of their 'supposedly' perfect child contrasted with the dark realities of dissection, it is evident that while their creation supposedly satisfies all their desires and is certainly cute, it is not human, at least not as that term is currently understood. The animated LUMP may seem appealing, but the idea is disturbing. However, TMGP is not good or bad; it's just there, and it may be a reality in the near future. The work can't afford to be moral about the issue of genetic research, because it is too important. What TMGP is really saying is that it is an issue too important to be left only to the medical scientific community to deal with. We could be at the beginning of the most potentially revolutionary era in human 'being', potentially able to redesign ourselves, or our children, from scratch. TMGP wants nothing more than to be an ironic participant in the discussions that should take place.
"The Mutant Gene & Tainted Kool-Aid Sideshow CD-ROM" (completed October 1995) is a navigable interpretation of a series of performances of the same name that I staged in 1994. The performances incorporated live and prerecorded, multiple-monitor and projected video, animation, text, both sequenced and live instrumental music, as well as the use of dramatic artifacts and performance elements such as masks and dance. Beginning with the psycho-dramatic confession of an extraterrestrial, the piece journeyed into a series of multicolored, entropic landscapes. My intent with the performances and the use of technology was to create alternate or augmented realities for an audience. I wished for the audience to be immersed in an environment of sound, light and motion, which often paralleled the content, in essence making certain fantasy states real. A complete written description of the performances was published in Leonardo (Journal of the International Society for the Arts, Sciences and Technology), Special Virtual Reality Section, Volume 27, Issue 4. The CD-ROM emerged from a desire to break down the linear constraints of a performance to create a more personal "circular" experience, where an individual can explore the environment in any order, without being guided as a collective "audience" through various states. It was created on a Macintosh 660AV, a Commodore Amiga 2000 and the equipment at the Experimental Television Center. It is entirely self-produced and self-published, and is available for the Macintosh.
Since I started making collages with paper and glue when I was a teenager in Texas, and selling them on the walls of local Mexican restaurants for $50 a pop, I have always been interested in finding images and recombining them in new and different ways, so that the meaning of the new object subverts the meaning of the original image. I have tried to use original documents to tell certain stories unintended by their original makers. Point of view is expressed by the selection of documents and their juxtaposition. Not until many years later, when Barbara Kruger reviewed my work in Artforum, did I learn that the form I was working in was called "appropriation art." In The Atomic Cafe, we attempted to combine the principles of cinema verite and appropriation art, in the tradition of the great anti-fascist John Heartfield and Robert Coover's The Public Burning, to produce what we used to call "compilation verite." The images we appropriated were ephemeral films, created by the United States government, designed to make people stop worrying about the atomic bomb and the threat of nuclear annihilation. We recombined these images so that they highlighted the absurdity of the pro-bomb propaganda and also revealed how deathly afraid Americans were of atomic war in the 1950s. In my novels, essays, and short stories, I appropriate (and reassemble) my own life. With my CD-ROM Public Shelter, I tried to expand this found-footage concept to the field of multimedia. The CD-ROM format is ideally suited for this kind of approach and even allows one to expand and elaborate on it with the addition of vast quantities of text. I was thus able to add another layer of content to the mix, along with the sounds, videos, and still photographs which I have worked with in the past. We can now go from the CD-ROM directly into cyberspace with the click of a mouse. The CD-ROM thus becomes merely a starting point for an experience that is completely unique and not controlled by us, the artists. Multimedia is thus a perfect and appropriate venue for both political art and intellectual discourse. Not only is it participatory, by its very nature forcing one to actively engage the material rather than passively receiving it, but it also allows pauses for thought and reflection. Because they are not limited by time, multimedia artworks can offer complexity and background, both visual and textual. They can deliver whatever it takes to achieve understanding. Producing sites on the World Wide Web allows me to combine my writing, filmmaking and multimedia work with both internal links (hyperfiction) and external links to other Websites. This allows me to expand the concept of appropriation indefinitely, into cyberspace.
An interactive artwork on CD-ROM, "The World's Greatest Bar Chart" takes a serious but humorous look at humanity's obsession with comparative measurement, a phenomenon particularly cultivated in capitalistic nations like the United States. The piece's structure is based on the analytic bar chart used by businesses universally, but in this case juxtaposing everything from vitamins to whales to bomb sites. There is an additional irony to the piece in that such an illogical construction arises from the tools and language of the ultimate machine of logic and measurement, the computer. "What makes this multimedia artwork significant is that it succeeds in transcending the inherent limitations of its medium. Although the cyber-painting's enigmatic interface is based on the structure of a traditional bar chart, it invokes a distinctly sensory quality through the use of images, color and sound."
Leicester Square is the cinema centre of London. Tourists come in their thousands every day. Families argue about Pizza Hut or Burger King. People walk to work, mill about, carry bags, check maps and watches, and complain about the shoes they’ve just bought. Using an electronic camera every day for several weeks I collected my data. I toyed around with these images for a while, before finding these patterns, sometimes using arbitrary backgrounds (dishwasher interior, sex shop), sometimes getting more filmic. As someone who mixes the digital in with the painterly – my paintings are much larger and now use a stencil technique to bring in the Mac processed stuff – I’ve been anxious about keeping in contact with the very ordinary world we actually live in. I suppose I’m an abstract painter in some ways, but don’t like being that way. So instead of wafting to fantasy land (the destination for many of these distracted pedestrians, maybe going to the interactive rock museum) I use software to keep my eye on what’s around me…its mysterious beauty.
The Scribe: an electronic scriptor. While circumstances have prevented the physical presence of The Scribe at ISEA96, this document, along with the artist's slides and a brief informal video, provides information on The Scribe (and its historical origins). The Scribe is essentially a "personal expert system" consisting of a multi-pen plotter driven by original code. Developed over a period of 15 years, this robot-artist literally "grows" visual forms from randomly generated bits of information. Drawing from a bank of technical pens, The Scribe automatically executes artworks on archival-quality rag papers. The software has accumulated thousands of lines of code, and has come to embody artistic procedures evolved from the artist's earlier work as a painter. All form-generating routines operate within parameter limits that are preset by the artist. The Scribe then works on its own, making form-generating decisions within those parameters. This includes ink pen choices.
Interpretation
The Scribe, an electronic "scriptor", is our equivalent of the medieval manuscript illuminator who worked in the scriptorium. Random bits of information are transformed by The Scribe into visual forms which celebrate the information-processing procedures that drive our culture. The Scribe employs a pseudorandomizer to cast around for working parameters to initiate yet one more form. Laboring under a set of artistic procedures evolved over a period of years, the machine hesitates, reaches for a pen and proceeds to execute pen strokes. It does so with the same "seeming" intelligence of those ubiquitous machines whose algorithms control more and more of our daily routines. Through its drawing activity The Scribe invites us to ponder the nature of the human-machine intercourse so pervasive in today's culture.
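As a generic illustration of how forms can be "grown" from random bits of information within parameter limits preset by an artist, the sketch below builds one form as a bounded random walk of pen strokes, including the ink pen choice. The limits, pen sizes and walk rule are invented; this is not The Scribe's accumulated code.

```python
# Generic illustration of parameter-bounded random form generation,
# in the spirit of (but not reproducing) The Scribe's software.
import random
from math import cos, sin, radians

# Parameter limits "preset by the artist" (all values invented)
LIMITS = {
    "strokes": (40, 120),            # pen strokes per form
    "step": (1.0, 6.0),              # stroke length in mm
    "turn": (-35.0, 35.0),           # change of heading in degrees per stroke
    "pens": ["0.13mm", "0.25mm", "0.35mm", "0.50mm"],
}

def grow_form(seed):
    """Grow one form as a list of (x, y) plotter positions within LIMITS."""
    rng = random.Random(seed)
    pen = rng.choice(LIMITS["pens"])                 # the ink pen choice
    n_strokes = rng.randint(*LIMITS["strokes"])
    x, y, heading = 0.0, 0.0, rng.uniform(0.0, 360.0)
    path = [(x, y)]
    for _ in range(n_strokes):
        heading += rng.uniform(*LIMITS["turn"])
        step = rng.uniform(*LIMITS["step"])
        x += step * cos(radians(heading))
        y += step * sin(radians(heading))
        path.append((round(x, 2), round(y, 2)))
    return pen, path

pen, path = grow_form(seed=2024)
print(f"pen {pen}, {len(path) - 1} strokes, form ends at {path[-1]}")
```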
The content of my work deals with the exploration of myself through self-portraiture. I use modified self-portraiture as a basis for investigating and affirming my fears and strengths. An integral aspect of the content is the process involved in devising the image. The significance of the process to the content is that it takes many steps to complete the image, and at each step the image is changed. Using a computer as one of the steps in the process increases the possibilities for the directions the image may take. I never know what the final image is going to be. Therefore the more possibilities there are, the more likely I am to discover more about myself through this process.
Nanotechnology is an idea with far-reaching consequences for almost every aspect of our lives. It is the ability to have precise control over matter and is one of the most fascinating areas being studied today. Some areas of research that would be rapidly altered are computing, medicine, manufacturing and space travel. With these changes, social issues will also arise. Many estimates put the first arrival of this technology at 10-20 years from now, which is well within most of our lifetimes. As one of the first nano artists, Alexa created the series Nanoworlds to abstractly express the "spirit" of current ideas in nanotechnology specifically and the excitement of future science generally. Each image represents a glimpse of a different application of nanotechnology. Her work is exhibited on the World Wide Web in the Nano Gallery on the website of Nanothinc, a California corporation.
The work on exhibit at ISEA96 is an excerpt from an installation entitled Hickory Dickory Dock, which is a critical commentary on the aesthetics of space and time in interactive computing. The installation is a three-dimensional layout of the storyboard for an interactive computer artwork. In the installation, twenty-four screen designs are framed and hung back-to-back to create twelve stations that are arranged in a formation resembling the mathematical symbol for infinity. The screen designs are mounted between oversized pieces of Plexiglas, creating transparent borders that visually link the storyboard with the external environment. The installation highlights the conceptual and aesthetic limitations of language and symbols in describing the process of human-computer interaction. The screen designs in the storyboard use language and symbols to show how Western temporal references limit the interpretation of time to specific perspectives and discrete numerical values. These references include answering machine messages; temporal orientation cues such as the days of the week, Recorded Earlier, EDT (Eastern Daylight Time), Now, Earlier, Later; and references to Mother Goose nursery rhymes, a form of early childhood exposure to the use of language to define time. Most screen designs contain a frame in the center that acts as a "window" on time. Some screens also include transparent, three-dimensional (3-D) graphics to remind the viewer of the spatial dimensions of time. The 3-D installation plays an important role in helping the viewer understand the limitations of symbols and language in human-computer interaction. The installation forces the viewer to abandon the interactive conventions (mouse, keyboard, touch screens, etc.) and metaphors that we blindly accept when using the computer. The viewer must translate the commands and symbols in the interface design into movements and actions in the 3-D environment. In this process, the viewer experiences the problems inherent in trying to use visual and linguistic abstractions to define concrete logic. The installation shows how the symbols and language of computer interfaces create perceptual paradoxes that conflict with our cognitive and aesthetic interpretations of space and time in a 3-D environment. These paradoxes are further emphasized by the use of music in the installation. Wireless infrared headphones allow viewers to independently experience low-volume classical music (the Brahms Waltz in A Flat) as they walk through the installation. The holistic qualities of the music contrast with the fixed frames and measured layout of the installation, emphasizing the dichotomy between discrete mathematical references to time and the ethereal, contiguous representation of time that we experience in 3-D space. However, the semantic structure of music also reinforces the semiotic constraints of the language and symbols in the storyboard, providing a satirical commentary on the prominent role that mathematical measures of time play in a technological society.
This piece is an electronic photo/painting: it originated as video, was digitally processed, transferred from one computer platform to another, then completely reworked and reassembled as an electronic painting using various software including Fractal Painter and Adobe Photoshop. Not one original pixel has been left standing – all have been transformed – sliced, chopped, diced and painted over. The work starts as an event in time and space, a small part of a much larger continuum, in which apparently unrelated processes and individual intentions come together and interact meaningfully in the presence of an observer who records the transaction. Much like real life and thought, the event is then disassembled into its component parts and reconstructed according to the needs and concepts of the individual as historian, partly shaping the mental structure, partly being altered by it. Finally it becomes a resonant framework, a mental construct made up of memory, ideas, physical records, and intentions. This composite is then projected back into the physical world as a starting point for new observations.
“Photographs do not lie, but liars can photograph”
-Lewis W. Hine.
Has "the truth" ever found fertile soil in a photograph? Photography has finally come into its own right as an art form; it has entered the phase of self-examination. By grace of computer manipulation software I examine photography: its themes, its conventions, its grammar and its visual language. I work these elements; I turn them upside down, I manipulate them, I associate and deconstruct, and then trick the viewer into thinking that he is dealing with a conventional photograph. A beautiful picture, eager to please one's eye, willing to comply with the viewer's conventions. How many people just walk right past my pictures, not noticing the chaos lurking beneath the shiny surface, the truth being attacked, twisted by computer algorithms. These photographs are my struggle, dealing with reality, truth, representation, manipulation and ethics. It would be much easier for me to make conventional documentary work, so much easier. But my computer-manipulated photographs present a much greater challenge to me: the examination of reality, which I consider one of the most important themes in photography, if not the most important. I am a photographer. My tools are my camera and my computer. I MAKE photographs. Let me paraphrase Fred Ritchin: the "decisive moment", as formulated by Cartier-Bresson, may not refer to when the photographer made the picture, but can refer to when the image was modified.
Through several years of exhibiting computer art, I have moved toward revealing more of my processes as an artist to the viewing public. I find that the more that I share with viewers, the more responsive they are to the work. The commonly held belief that the “work of art should stand on its own” has proven to be both untrue and limiting, as the viewing public has become more habituated to forming links in understanding art and what the artist is doing. My works are based on cultural memories. They are my response to places, artifacts, and images that touch my interior thoughts, and I present them in the intertwined fashion of remembered history. This, however, is my imagined history. Several of the images draw from my personal heritage of Eastern European Jewish culture, but the references are not exclusive. Just as my actual experience is a cultural composite, so are my images.
The presented work, "The End of Fertility as We Know It", is a comment on the fact that fertility is now in the hands of science. In-vitro, cloning and DNA technology are the tools with which fertility is placed under scrutiny in powerful laboratories, thus dictating the way fertility has to go. In the presented work I used microscope images of ovaries placed against a background of Hubble telescope images of the Crab Nebula, both representing nature's fertility. Since 1985 I have used electronic imaging to realize two- and three-dimensional artwork. My field of work is "The Microcosmos", the area between micron and atom, revealed by electron and light microscope, showing nature's miraculous constructions beyond our perceptibility and notion. The basis for my work, the monochrome green electron microscope images, are transferred to videotape and digitized into the computer. In the computer I manipulate, filter and add other images until I feel that the complexity and beauty of that hidden world is made visible. The finished image is transferred to MO disk and sent to the U.S.A. to be printed on a large-scale inkjet printer which is able to print the image at any desired size. The prints can be used for inside and outside environments; they are printed on vinyl with acrylic paints. The smaller images are also inkjet, but printed on paper.
Art schools within Australia have been subsumed into the university system, and in line with this they have been forced to compete with the traditional university disciplines for research funding. The application of electronic technologies to the visual arts has become a fruitful area for conducting (and having funded) Fine Art research. The concept of research in the visual arts is somewhat antithetical to 20th-century modernist (or even postmodernist) notions of art practice. Current research projects focusing upon aspects of digital imaging at the Tasmanian School of Art not only seek to explore and extend the printed digital image, but to develop multidisciplinary research paradigms within the visual arts which allow the aesthetic to drive the technical aspects of research. The focus of current work is upon the printed, digitally processed image, initially attempting to transfer many of the traditional skills and approaches of painting and printmaking to digital imaging. Along the way many aspects of the imaging and printing process have proven to be unsuitable or unsatisfactory; seeking to address these problems involves delving into the technical and engineering levels of the processes. What has begun to emerge are not only possible solutions to problems with the digitally printed image and ideas about how imaging software can be made more flexible and expressive, but also a realization that aesthetically directed research can produce more useful and effective outcomes than those where only engineering or technical aspects are considered.
As an artist, I have moved along a path from drawing and painting to electronic art and video. I seek to explore the passageways we travel in life, and I am interested in trying to find the edge between meaning and the abstract. Various visual images of fractals, strange attractors, and chaos, all starting from mathematical equations, are an integral part of my art: seeing the world through the blend of art, nature and science. Recently, women's issues, especially after attending the Fourth World Conference on Women in Beijing, have become the themes and concerns, and are now a prime element in the imagery of my work.
Lane Hall and Lisa Moline have created a collaborative series of prints entitled "Joyce Astronomia." These prints are a complex combination of computer graphics and traditional printmaking media. Hall and Moline build small sculptural models to video-digitize, then use the computer to manipulate the digitized images and combine them with text and other computer-generated elements. Hall and Moline then use laser printers and different colored toners to print these computer images on paper. Often the individual sheets of paper are laser-printed three or four times, to achieve an interesting layering and superimposition of colored toner. Then the prints are taken to the lithographic studio, where they are hand-printed in bright colors using hand-drawn images on lithographic stones. This combination of old and new technologies makes for unusual and compelling "computer art." Lane Hall and Lisa Moline have been working with digital technology since 1987. They are intrigued by the computer's powerful possibilities of image manipulation and have sought to combine that with the physical properties and qualities of traditional printmaking and prints (their scale, texture, materials, surface). They are committed to approaching digital technology with an inventive spirit, one which combines the "hot" or emotional realm of art with the "cold" realm of computers. They are also committed to exploring creative output solutions, unusual ways to liberate the computer image trapped inside the monitor.
Moving Forward Beyond Beijing is a video collaboration by three artists who travelled together in China for four weeks in 1995, concluding with their participation in the United Nations’ 4th World Conference on Women in Beijing, reportedly the largest gathering of women in history. Artists Liz Dodson, Kat O’Brien and Cecilia Sanchez-Duarte live in the USA, Canada and Mexico respectively. They were delegates of the Women’s Caucus for Art, unique as an arts organization granted non-government organization status by the U.N. to sponsor activities at the Beijing/Huairou conference. Dodson, O’Brien and Sanchez presented their work in exhibitions and panel discussions, demonstrating a variety of personal approaches and cultural perspectives through their usage of computers and video. Currently, the three artists are developing a video collaboration in which each will present a five-minute reflection on their shared activities during their month’s experience in China. They are also exhibiting individual video pieces and computer manipulated images in Beijing and Beyond, an exhibition originating at Lieberhouse Gallery in New York and traveling internationally for two years.
If I were to use a simple cliché, I would say 'Art is Art', and hence simply explain that my work as a digital artist is just that. However, as an educator I cannot get away with such an explanation, nor would I do justice to this phenomenon that is described as digital art. A few years back, when I started to explore the technology and its tools, those who dared to write about it struggled to explain and articulate the unique qualities inherent in the technology and the end product that is presented as 'Digital Art'. Some wrote about this art form as 'Computer Generated Art' or 'Computer Art', and others saw the need to develop a manifesto that attempted to provide a forum. Be it the manifesto on Dataism, or attempts at addressing the algorithmic beauty that is embedded in the design of a given program, critics and art historians alike are still out forging aesthetic criteria for an art form that uses an ever-evolving technique and technology. Be that as it may, I approach the new tool within the context of a pan-human universe where I attempt to function by its influence and at the same time influence it by bringing to it a cultural context. To this end I have so far enjoyed creating and discovering with the computer as an assistant. I can speak of digital art as computer-assisted art, not computer-generated art. Simply put, the computer and associated software complement and enhance my ability to create; they do not contribute to developing the original concept, nor do they generate an artwork. However, they play a crucial role in making it possible for the artist who is willing to use them. This does not mean that the computer does not have an input in the process; in fact it does. As in any traditional medium, the tools and materials used contribute to the final look of the image created. The difference lies in the skill and knowledge of the artist. Although technical know-how does not guarantee artistic or creative dexterity, when the two are combined one is sure to express an idea with a certain level of sophistication as well as simplicity. In other words, if an artist is interested in using watercolor, he or she is obligated to know and discover all possible means associated with his or her chosen medium. I feel honored to be in the company of those who are now referred to as artists of the cyber culture. As an individual with a specific cultural background, I bring to the computer canvas a variety of ideas, some of which I have completed or resolved via paintings using traditional tools. My sources are my acrylic-on-canvas paintings and other works in any number of mixed media, photographs, three-dimensional images that I create in the computer world, and drawings, sketches, etc. These sources are selected, digitized or scanned and filed as image data, where I have access to as many or as few of them as I need. I use these resources selectively, concocting portions of colors here, parts of figures there, or a variety of African surface decorations, designs and motifs. After a certain amount of creative process I come up with my final composition, a visual music, and present it in the old-fashioned way: a two-dimensional graphic representation, framed or in the computer's screen world.
The body is being enhanced, modified and upgraded. These hyper-muscled morphing bods also represent the body in a state of impending collapse: the body pushed to the limit, with no place left to go. Mighty Morphing Muscle Men seeks to heighten and satirize the body, somewhere between the masculine fantasy of the super, maxi, power bodies in computer games (like Virtua Fighter) and the underlying macho power base which seems inherent in so much 3D computer graphic aesthetics, iconography and advertising.
I use new materials and new techniques for making forms of unchangeable beauty. This symbolizes that humans have something that is changeable and something that is not, in time. Materials and techniques may become old, but I hope these forms will be understood in time. And together, I suggest to the audience how people may or may not choose to express themselves. My work belongs neither to art nor to fashion; I think it belongs between them. My work expresses the existence of anyone who wears it. This idea is different from art, which expresses the artist. And it is different from fashion, because fashion changes with the stream of the age. And these wearable sculptures, which display various patterns in response to the wearer's delicate motion, create another time flow and another space around anyone who wears them. This is how my idea is different from others.
"Marbleyes" consists of an opaque screen embedded with 3-D, clear glass marbles that is mounted on the front surface of a video monitor. Video output can be in many forms: electronic art, web art, as well as live feeds from ongoing installations with ambient sound and live camera/audience participation. My long-term involvement with both audience-participatory performance art and making video art "live" inside the camera has given me both a micro and a macro view of the world. Perhaps this is why the 3-D optical effects offered by the interface array of the glass marbles hold such a fascination for me. I began to see the resulting array of 3-D images as a metaphor for the multiple lenses in the eyes of insects such as the horsefly and the dragonfly. Although composed of thousands of separate lenses, their eyes function as a whole. This is also, of course, a wonderful metaphor for the multiplicity and interconnectivity of the world eye: the WWW.
“Clock” was the first work to deal with this theme (light as both giver and taker of life). In this work the gradual disintegration of a wooden dome bombarded with light becomes the marker of passing time. I created a plaque reading ‘This light when activated will completely dissipate the round of wood within 370 years’, and attached it to the front of the ‘Clock’.
Both pieces (Clock and Hours Remaining in the Life of Allan Giddy) are provoked by Rudolf Steiner's questionable statement, "everything in the universe is made from light". I hypothesized that if everything was indeed made of light, it would therefore be deconstructed by light's incessant bombardment. After consulting physics professor John Smith of the University of New South Wales, Sydney, Australia, I came to realize that this deconstruction does in fact take place with many materials.
Review: Janne Koski, ‘Aurinko – Sun: Solar Art at the Rauma Art Museum, Finland’, Leonardo, Vol 2/98, Boston: MIT Press, pp. 81-86
This installation tries to realize a virtual book. The aim has been to go beyond the book: to add interactivity. In this installation, an image of a book is projected from the ceiling onto a white table using an LCD projector in a dimly lit room. The book can be manipulated interactively according to the participant's actions with a wireless digitizer pen. The image of the book is entirely controlled by a Macintosh computer using Macromedia Director. The book is designed for arranging objects into classes, and each object reacts interactively. For example, an apple on the page will be bitten when one flips the pages. On another page of this book, there is a stone that will run away when touched. An infinite number of objects can be included in this virtual book with its infinite pages. It is a new style of archive for the interaction and categorization of the relationship between objects, humans and the world. The function of a book is to describe the world. "Beyond Pages" is also a world, one which can be described as an active model of the world.
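The per-object behaviour described above (an apple that gets bitten when the pages flip, a stone that runs away when touched) amounts to dispatching a pen or page event to whatever object is on the current page and letting that object supply its own reaction. The sketch below models that dispatch in Python with invented object names; the installation itself runs on Macromedia Director, and this is not its code.

```python
# Stand-in for per-object reactions in an interactive book:
# each object on a page maps a pen event to its own scripted response.
# Invented names; not the installation's Director/Lingo code.

class PageObject:
    def __init__(self, name, reactions):
        self.name = name
        self.reactions = reactions      # event name -> description of response

    def react(self, event):
        return self.reactions.get(event, f"{self.name} does nothing")

PAGES = [
    [PageObject("apple", {"page_flip": "a bite disappears from the apple"})],
    [PageObject("stone", {"pen_touch": "the stone runs away across the page"})],
]

def handle(page_index, event):
    """Dispatch a pen event to every object on the current page."""
    return [obj.react(event) for obj in PAGES[page_index]]

print(handle(0, "page_flip"))   # ['a bite disappears from the apple']
print(handle(1, "pen_touch"))   # ['the stone runs away across the page']
```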
This piece is composed of human voice (and digital technology), using the phrasing of read text and dividing it to find sub-structures. Visualizing the text on the computer monitor has allowed the editing of each piece of text to be determined by its natural pulse, allowing for the true character of each language. These have then been looped to create the continuous rhythms and field of each language. The recording of various sections of pulse creates a rhythmic interaction, with some semblance of unison unique to each language. The piece starts with a combination of the four languages, followed by each particular language setting up its own rhythmic field to explore distinctions of tone, rhythm and vocal technique. The four languages are Japanese, Spanish (Uruguay), Finnish and Hebrew (Israel). In affirmation of the different cultural origins of the prosody, common objects from each culture are displayed preciously, in museum conditions, signifying epistemological difference. The installation is composed of a space in which four sets of objects are displayed in conjunction with four speakers sounding the different tracks of the composition made from the pulses of the languages culturally relative to the objects.
The attract mode of GENDERBENDER displays the following: “Are you really a man or a woman or a little bit of both?” “Êtes-vous un homme ou une femme ou un peu les deux?” “Now you can be sure (or can you?)!” “Maintenant, vous en êtes certain! (l’êtes-vous?)” Although on the internet no one may know you are a dog, GENDERBENDER performs a much needed public service by minimizing the cognitive dissonance of gender confusion and subterfuge found in chat groups, MUDs and MOOs. Inspired by standard psychological tests for gender and personality profiles and by Alan Turing’s test for artificial intelligence, GENDERBENDER allows a user to self-administer a gender test. Based on the user’s responses the “Computer Psychologist” will display the message “You are a man!”, “You are a woman!” or “You are androgynous!” The “two player” version allows two users to view the responses of one another. Each in turn can guess the gender of the other player, and whomever the computer psychologist agrees with is the winner(?). CU-SeeMe teleconferencing makes it possible for the users to compare a video simulacrum with the assessment of the Computer Psychologist. At the start of both the single-player and two-player versions the Computer Psychologist displays the first of a series of questions randomly selected from a possible total of sixty. The Morph-o-meter displays KENBY, an androgynous ‘virtual’ figurine. As a user answers each question with yes, no or don’t know, the Morph-o-meter gives instant feedback on whether masculine or feminine characteristics predominate in the user’s personality by morphing towards an identifiably male or female figurine. The Tile-o-matic reveals each user’s video image tile by tile for each yes response. For each don’t know, neither the Morph-o-meter nor the Tile-o-matic changes. GENDERBENDER (Release 1.0) was exhibited as part of the summer installment of Images du Futur in Montreal (May-September). GENDERBENDER Release 2.0 will introduce the two-player internet version. GENDERBENDER Release 3.0 will add the creation of an online avatar that reflects the gender profile the user gives it. The Self-Test allows the user to construct a personal gender profile of twenty masculine, feminine or neutral traits. Once created, it can act as a gendered knowbot that will visit chat groups, perform searches and then report back to its master on its discoveries, experiences and exploits, perhaps providing a little black book for actual meat-and-flesh encounters.
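The scoring loop described above can be pictured with a minimal Python sketch; the question list, weights and thresholds here are invented for illustration and are not taken from the GENDERBENDER software.

    # Hypothetical sketch of the GENDERBENDER scoring loop (questions and weights invented).
    import random

    QUESTIONS = [
        # (question text, weight: +1 leans "masculine", -1 leans "feminine")
        ("Do you enjoy competitive sports?", +1),
        ("Do you cry at films?", -1),
        # ... the real piece draws from a pool of sixty questions
    ]

    def run_test(answers):
        """answers: a list of 'yes', 'no' or 'dontknow', one per question asked."""
        score = 0           # positive: masculine traits predominate; negative: feminine
        tiles_revealed = 0  # the Tile-o-matic uncovers one video tile per 'yes'
        asked = random.sample(QUESTIONS, k=min(len(QUESTIONS), len(answers)))
        for (text, weight), answer in zip(asked, answers):
            if answer == "yes":
                score += weight
                tiles_revealed += 1
            elif answer == "no":
                score -= weight
            # 'dontknow': neither the Morph-o-meter nor the Tile-o-matic changes
        morph = max(-1.0, min(1.0, score / max(1, len(asked))))  # -1 female, +1 male figurine
        if score > 0:
            verdict = "You are a man!"
        elif score < 0:
            verdict = "You are a woman!"
        else:
            verdict = "You are androgynous!"
        return verdict, morph, tiles_revealed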
Who has not dreamed about his or her movements transformed into sound?
In Fred Kolman’s interactive installation the spectator becomes the instrument, and thereby the work of art is created. This entirely computer-controlled installation transforms the movements of the head, hands and feet into sounds which the artist has defined in advance for specific areas of the room. It is Kolman’s aim to create a monument in the room rather than a composition of sounds that exists in time. Kolman himself gives performances in which he does improvised dances inspired by the basic movements of Tai Chi. The perspectives of ‘Kolman’s Kube’ are splendid: one can imagine dance and theater performances in which the music is controlled by the dancers’ movements, or the installation moved out into the city, letting thousands of people become the instruments of a ‘street symphony’. An interactive work of art demands that the spectator participate in the creation of the art; if not, it does not come into being.
SORRY! is an interactive computer installation which exploits the stylistic slapstick violence/humor of mainstream Western animation and comics to challenge the viewer to rethink what effect representational graphics really have in a “user-friendly” environment. SORRY! consists of four buttons which are associated with four characters. The player must first select a particular character (by pressing down on one of the buttons) and then continue to press down on that button, causing the character on the screen to flinch. With each successive “blow”, the character deteriorates more and more, drawing heavily on the visual codes and devices of cartoons that are used to represent pain, wounds and death, as well as on the suggestive power of sound effects to induce the impression of a heightened sense of impact. If the player continues to pound the character, it will eventually “die” – only to reappear a few minutes later as bright and perky as ever, and the process may begin all over again. Though “game-like” in appearance, SORRY! is not really a game, as there is no skill needed to use it and no element of chance. There is only one purpose, and that is to allow the user to inflict inane and senseless representational violence onto an inanimate object. While on one level SORRY! may be seen as an elaborate electronic punching bag, a therapeutic device for the stress of our electronic age, it also seeks to explore how easily we can empathize with a mechanical device if it simply has some kind of imitative human quality, no matter how stylized or abstracted it may be. With the ongoing development of user-friendly interface design for the personal computer, we as users are continually required to “suspend our belief” in its mechanical nature and instead regard it with the more human virtues of intelligence, patience, helpfulness, and even personality. But no matter how friendly computers attempt to be, when things go wrong – a system error, a bug, or whatever – the facade melts away and we are once again confronted with nothing more than an idiotic, cryptic computing machine. SORRY! attempts to intensify this paradox by creating an absurdly user-friendly environment, the epitome of personified technology. But in order to cooperate or interact with this friendly, beckoning blob, you are required to abuse it, and like the dumb machine that it is, it must endure the procedure according to its programming, until the user is satisfied. The “abuse”, of course, is purely subjective: regardless of whatever aural or visual messages we are receiving from the console, they are nothing more than binary code to the computer. We are suspended between the desire to project life into these graphic representations of cute, infant-like characters (by pressing on a button) and the comforting (or frustrating) reality that it is in fact only a machine.
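A small state-machine sketch can make the flinch/deteriorate/respawn cycle concrete; the number of blows and the respawn delay are assumptions, not values from the installation.

    # Hypothetical sketch of one SORRY! character's damage cycle (constants invented).
    import time

    BLOWS_TO_DIE = 20        # successive presses before the character "dies"
    RESPAWN_SECONDS = 120    # it reappears "a few minutes later", bright and perky

    class Character:
        def __init__(self, name):
            self.name = name
            self.damage = 0
            self.dead_since = None

        def press(self):
            now = time.time()
            if self.dead_since is not None:
                if now - self.dead_since < RESPAWN_SECONDS:
                    return "no response"                    # still "dead"
                self.damage, self.dead_since = 0, None      # respawn, good as new
            self.damage += 1                                # each blow adds cartoon wounds
            if self.damage >= BLOWS_TO_DIE:
                self.dead_since = now
                return "death animation and sound"
            return "flinch, damage stage %d" % self.damage

    characters = [Character(name) for name in ("one", "two", "three", "four")]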
The parametrically forced pendulum is a well-known subject that has been thoroughly researched and documented by physicists within the framework of order and chaos theories. Parametrically forced pendulums are activated by the up-and-down movement of their hanging mounts. Since the behavior of these pendulums depends on the oscillating frequency of these mounts, the use of a variable-speed electromotor is essential. As a consequence, the pendulums command an exceptionally wide range of movement; what can start off as a traditional to-and-fro swing can become an unpredictable and irregular motion leading to a startlingly vigorous full circumrotation.
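For reference, the standard textbook model of such a system (stated here for clarity, not drawn from the installation’s documentation) is the equation of motion of a pendulum of length L whose pivot oscillates vertically with amplitude a and angular frequency ω; with a small damping term γ it reads:

    \ddot{\theta} + \gamma \dot{\theta} + \frac{g + a\,\omega^{2}\cos(\omega t)}{L}\,\sin\theta = 0

Depending on a and ω, the solutions range from ordinary swinging to irregular motion and full rotations, which is exactly the regime change described above and why the variable-speed motor driving the mounts is essential.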
The Electric Swaying Orchestra as shown at ISEA’96 consists of six such pendulums, each with a length of 1.50 meters. A microphone or loudspeaker is attached to the end of each pendulum. A computer controls the electro-motors and the musical process. However, it does not have precise control over the consequences of its decisions. Although the movements of the pendulums are related to the oscillating frequency of their hanging mounts, at a certain point the behavior of the pendulums becomes unpredictable, and thus the musical outcome is unpredictable as well. The computer interprets the sounds received from the three swaying microphones and responds by playing new notes over the three swaying speakers. The main factors determining this live-composed music are the unpredictable movement of the pendulums and the composition rules executed by the computer. It is a process which repeats itself endlessly; the computer is in fact constantly listening and responding to itself. Since 1995, we have been developing a new installation that furthers the concept of the Electric Swaying Orchestra: a machine that is capable of complex, chaotic behavior and which produces music that is related to this behavior.
While the relationship between movements and sound becomes more sophisticated, it also becomes more apparent. The direction of movement and exact position of each pendulum will be measured as variables for the musical outcome, permitting greater control over the relationship between the movements of the pendulums and the music produced. Each pendulum might be assigned a specific parameter of one tutti live algorithmic improvisation (density, pitch or dynamics, for example), or each pendulum could have its own independent musical world. All pendulums are equipped with a loudspeaker – microphones are no longer needed – and they are much longer, two to three meters, providing larger movements and thus more interesting spatial sound effects. This work is planned for completion during the course of 1996.
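A minimal sketch of the kind of mapping described above, from measured pendulum state to musical parameters; the scale, ranges and output scheme are assumptions rather than the installation’s actual composition rules.

    # Hypothetical mapping from pendulum state to note parameters (not the actual rules).
    import math

    def pendulum_to_note(angle, angular_velocity, pendulum_index):
        """angle in radians (-pi..pi), angular_velocity in rad/s."""
        midi_note = int(60 + 24 * (angle / math.pi))               # position drives pitch
        notes_per_second = min(8.0, 2.0 * abs(angular_velocity))   # speed drives density
        direction = "clockwise" if angular_velocity >= 0 else "counter-clockwise"
        return {"pendulum": pendulum_index,
                "midi_note": midi_note,
                "density": notes_per_second,
                "direction": direction}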
This is a labyrinth from which the user cannot easily leave once inside. The exit is an enigma for the user, who must operate interactive navigation buttons and commands that do not work as expected. The goal is to reveal to the user the myth of collaborative computing and demonstrate the truth of persuasive computing. Users are invited to recognize that any interface always induces and leads them toward a previously determined set of situations and possibilities, rather than ever letting them decide anything.
This installation creates a dynamic “global picture” by intertwining weather information, satellite images, interactive video discs, interactive sound and viewers’ movements. Current weather reports for cities throughout the Northern hemisphere are regularly accessed through the Internet. The weather information is processed and controls video disc players. The video consists of short narratives of a figure moving about a room and is projected as the shadow side of the “globe”. Hourly satellite images of the earth are retrieved from geo-stationary satellites and are projected as the lit side of the “globe”. A moving “atmosphere” of video noise traverses the surface of the global image. The viewers’ movements create perturbations in the noise, sending patterns floating across the surface. These patterns reveal an underlying video image which changes the contents of the room.
Open Sky Etude celebrates the flux of humans in motion. Each animated sequence has been generated by performers in our interactive film set. Data is gathered and run through our animation generator, recombining it into many different patterns. The possibilities of the program are endless.
Junya Nishimura: Creative Commons license, noncommercial license: freemusicarchive.org/music/junya_nishimura/ instant_ep/03_em
Our times are characterized by transience, impermanence and change. For the largest screen in the world, we propose a short sequence composed of a swarm of artificial flies. They slowly appear, propagate and gradually invade the whole ICC Tower, before flying away again. A short text “Fly High – Time Flies” reminds us of the beauty of the current moment.
The app TOUR(IST) is a mobile experience, an augmented soundwalk through the urban landscape. The User can take interactive “sound tunnels”, urban shortcuts revealing a series of acoustic ambiances and creating a stimulating listening experience, a mobile audio voyage through the urban environment. Playing with the usual codes of spatial representation, TOUR(IST) offers urban shortcuts, virtual displacements and an immersive experience in a new acoustic space and ambience. By unveiling a series of 3D ambisonic recordings, TOUR(IST) creates “sound tunnels”, trajectories emanating from the actual location of the User. TOUR(IST) reveals a series of acoustic ambiances that incrementally create a whole new way of experiencing the city. Amidst the new urban soundscape thus created, the User develops a new sensory rapport with his or her immediate environment.
A new urban cartography is developed, a hybrid space in which the mobile User generates, in real time, a listening experience while walking through the urban environment around the ISEA gallery. This urban grid is superimposed on a network of data that, although immaterial, is based on the same nodal logic. Both are made up of intersecting points and articulations of lines that enable functional movement. The act of travelling outside the grid (be it the urban fabric or the data array), of choosing to proceed along parallel paths, appears difficult to reconcile with that logic. This project aims at an exploration that runs contrary to the normal experience of the city, as it involves plotting transverse lines through public and private spaces. Each of these trajectories takes the form of a sound tunnel, a “wormhole” that makes it possible to move from one point to another by passing through every looming obstacle, somewhat like a wave passing through solid material. A hybrid space of data collection and a mobile device enable this real soundwalk and virtual journey through the neighbourhood of ISEA, offering alternatives to its rectilinear nature.
Sampling sounds from buildings and the urban space surrounding the gallery, data is captured to create a virtual tour of the neighbourhood. This series of recordings, made in straight trajectories, are like “core samples” from drilling; they simultaneously reveal the various sound phenomena of the urban core, from the infra-perceptible to the ephemeral sound event. The samples present the User with a series of related soundscapes from the area surrounding the main gallery, explored on foot by the User. TOUR(IST) takes advantage of the integrated compass, GPS, tactile screen and binaural sound-processing capabilities of the iPhone, enabling the User to move through the “sound tunnels”, either travelling towards a specific place in the city or generating a 360-degree sound experience, a total-field collage of sound just beyond his or her immediate location. These tunnels carry the User through obstacles, from space to space, encounter to encounter. The User is like a tourist (in John Cage’s sense), like a wave, travelling through space and matter, confounding normal movement and penetrating both private and collective spaces.
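As an illustration of how a GPS fix might select the nearest sound tunnel, here is a short sketch; the tunnel names and coordinates are placeholders, and the real app additionally uses the compass and 3D ambisonic playback.

    # Sketch only: choosing the nearest "sound tunnel" entry point from a GPS position.
    import math

    def distance_m(lat1, lon1, lat2, lon2):
        # Haversine great-circle distance in metres.
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    TUNNELS = {  # placeholder coordinates, not real locations
        "tunnel_a": (0.0010, 0.0020),
        "tunnel_b": (0.0030, -0.0015),
    }

    def nearest_tunnel(user_lat, user_lon):
        return min(TUNNELS, key=lambda name: distance_m(user_lat, user_lon, *TUNNELS[name]))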
The sound database of TOUR(IST) will include sounds created during the ISEA workshop and urban intervention ConcreteCity, an ultralight approach to urban interventions orchestrated by Insertio. The urban sound intervention aims to incite participants to conceptualize, construct and implement an ultralight, large-scale wireless intervention of audio elements inserted in a public space, transforming a section of the city into a sound experience. The work will be achieved through a hands-on approach, constructing and deploying an ephemeral wireless ubiquitous-computing network by temporarily grafting actuators, small “plankton” devices that inhabit objects and urban infrastructures (…can a stop sign shudder?). These urban elements, diverted from their primary use, create a furtive audio orchestration. The resulting composition is a large-scale spatialization of sound with multiple points of listening. The spatial forms include elements of the site’s material things, social activities, phenomena and the processes that are concomitantly taking place, specific to a time, place and culture. Our approach focuses on the imagination of urban sites, their materiality, usage and memory. By interfering with what is normally a given “state” of operations, the intervention reveals an “augmented everyday soundtrack”, leaving the field open to exploring the potential of the sounds of the city, the interaction with urban spaces and objects and the diverse interpretations of what surrounds us.
TOUR(IST) explores the theme of New Media and Cultural Heritage. It is a listening experience created through the recording, archiving and retrieval of fragments of sound: bursts of conversation, moments of daily routine (public and domestic interiors), disparate places (places of worship, shopping centers, local businesses, etc.). It is a world that is revealed, a condensed auditory form in which the experience of the city itself reveals the simultaneity of its diverse and multivalent expressions.
TOUR(IST) also explores issues surrounding digital archiving technologies, investigating new hardware and software interfaces for the storage and retrieval of archived data and posing questions such as: what kinds of new materials and subjects are being archived? What assumptions lie behind these storage techniques? How do the resulting representations shape our perception of the original information? It also explores what happens when a particular kind of map, developed for a specific type of data, is used to present another kind of information, along with its historical precedents and its social, political and technological implications.
SSHRC, Social Sciences and Humanities Research Council, Canada, and the Fonds de recherche du Québec – Société et culture (FRQSC)
Dieter Jung sees in the hologram a means of preying on and capturing the precise instant at which light, perception and consciousness coincide at a single point — the point at which reality is meant to be created. In other words, Jung is a catcher of rainbows. His compositions involve rectangular, trapezoidal, parallelogram-like or rhomboid fields of colour, graduated in an optically staggered sequence, which appear to be diagonally offset against each other. The virtual appearance of Jung’s computer holograms is thus translated into the classic medium of painting. The onus of completing the simulation of movement passes to the consciousness of the observer, demanding active participation instead of passivity.
The sense of confusion caused by Jung’s ‘paintings’ stems from the general relativity of spatial contexts. Like actors announcing their message from the stage (because it cannot be resubstantialised in any other way) the third dimension can only be a plane of projection by which the fourth dimension is rendered tangible.
Our observation thus arrives at an important point: it touches on a central issue. Holograms that serve solely to transform the three-dimensionality of objects in the outside world into spectral images have failed to exploit the essence of the hologram and have therefore disregarded its true quality. This quality consists in the fact that the hologram is a vehicle capable of projecting the presence of a dimensionality beyond the third dimension. The concept of the fourth dimension has already been used in such an inflated way that it has become jaded and imprecise. It could mean anything. This is why there should be no mention of the fourth dimension, only of determining one of the dimensions situated beyond the three conventional dimensions. It is this added dimension, together with the perception of light, that constitutes the point of convergence of the space within and the space without, where the light of consciousness is sparked. I would like to call them dimensions of consciousness. The rainbow is the allegory.
The Form of the Invisible is the title of a book by the English writer Herbert Read which we were surprised to discover, translated into Japanese, in the Tsukuba University library. The Form of the Invisible is itself a very good book, that is, a physical object containing immaterial ideas. The role of the book as a carrier of ideas and culture, and its changing form and function in a new era of technological media, has been the focus of our recent work.
Philosophically, the most interesting thing about computers is that from the earliest stages of their conception they were thought of as general-purpose machines. Part of their structure has been deliberately left blank and may be readily changed. This part is of course the program. If a computer is used only for its ability to support a particular small set of programs, then its potential is unrealised. If artists are truly interested in maximising the scope of their creativity, or in taking more control of the political agenda in their use of computers, then they should get tough and take the hard approach of writing their own programs.
This approach requires no more hardware than others (usually less) and can be achieved (somewhat surprisingly) on quite standard personal computer configurations. Further, the results of programming efforts are easily subject to literal deconstruction and reuse in other contexts and are therefore well suited to communal use. The major input to the programming approach is time, usually more abundant to artists than other commodities.
This approach should naturally be leavened with recourse to ready-made processes when available, and indeed it is the case that large applications programs which rely primarily on graphic user interfaces are now providing interfaces either to dedicated scripting facilities or to generalised inter-process communications capable of or oriented towards programmed control. More and more, third party provided components may be connected to produce flexible and powerful hybrids.
The communicational connections that constitute these hybrids are at once generalised, specific, arbitrary and structured. They are inevitably expressed in some form of language. The program of events described by that language is limited largely by the ability of artists to express themselves appropriately. Few would disagree that expression is an essential goal for artists.
My intention is to transform the TV picture beyond the ‘corpus’ of the TV-set, to take a step out of the given process in which the picture passes through the camera until it comes out as a representation in the TV-set. Both the camera and the TV-set are in a way ‘containers’ with a ‘glassed’ window towards the world. What I do is let the TV image pass through an additional ‘container’ of ‘glass’ in front of the TV-set. So the ‘glass container’ is the primary object, whether it is shaped as a camera, a TV or a glass object, while the picture as such is secondary and variable.
The representations in my glass objects are exchangeable and can receive any signal broadcast. This allows me to be a part of the audience, face to face with my own production.
My work can be understood in the context of language art and visual poetry, two genres that explore similarities and distinctions between word and image. I create what I call holographic poems, or holopoems, which are essentially computer-generated holograms that address language both as material and subject matter. Language shapes our thoughts which in turn shape our world. To question the structure of language is to investigate how realities are built. I use holography and computer holography to blur the frontier between words and images and to create an animated syntax that moves words beyond their meaning in ordinary discourse.
The choice of holography as the most suitable medium for my project, and the subsequent use of computer animation, reflects my desire to create experimental texts that move language, and more specifically, written language, beyond the linearity and rigidity that characterise its printed form. I do not adapt existing verbal structures to holography, but try to investigate the possibility of creating verbal texts or artworks that emerge from a genuine holographic syntax.
I am also concerned with the temporal and rhythmic structure of my texts. Most of my pieces deal with time as non-linear (ie discontinuous) and reversible (ie flowing in both directions) in such a way that the viewer/reader can move up or down, back and forth, from left to right, at any speed, and still be able to establish associations between words present in the transitory perceptual field. I try to create texts that can only signify through active perceptual and cognitive engagement on the part of the reader or viewer. My texts don’t rest quietly on the surface. When the viewer starts to look for words and their links, the texts will transform themselves, move in three-dimensional space, change colour and meaning, coalesce and disappear. Their choreography is as much a part of the signifying process as the words themselves.
Humans use tools to extend the range of our senses and our physical selves. The digital computer is a radically novel tool in the history of humankind. Never before have we known such a tool, with which we can explore the structure of our understanding of ourselves and our universe.
The computer, as a creative device, is an expressive conduit of our profound internal being. The image is a loaded visual presentation which stirs the senses and touches the emotions and soul of the viewer. The viewer senses the sculpture’s presence in their personal space by comparison to their own physicality. I state the image I make to the computer and to other people in concise language, invented by humans to convey abstract concepts.
The terms of computer art consist of nothing less than the immutable absolutes that form the structure of the universe. In as much as we are products and part of our universe, we have the potential to use this extension of ourselves to treat every aspect of our physical and abstract existence. I see this as a source of great social benefit and cultural change.
Jean-Pierre Hebert is an artist with a background in engineering who first worked with computers in 1959 and began ‘drawing’ with them in 1979. He has modified an HP plotter to carry large ink reservoirs feeding the pens. Each of his works, generally sized from 5x5cm to 75x100cm, comprises a single unbroken line meandering under the direction of the artist’s custom software creating non-repeating but self-similar linear harmonies.
Hebert’s original drawings are the artefacts of a poetic exploration of the quality of line, texture and surface possible via the integration of the formal materials of this age-old discipline and the potentials of contemporary technology.
Work in exhibition:
The images for Gulf War Memories were culled from cable television broadcasts and videotapes of the Persian Gulf war and commercial advertisements. The image column consists of six 11 x 14 foot back-lit duratrans prints of computer-manipulated television, flanked on both sides by eight 4 x 4 foot box fans, each blowing towards the viewer the aroma of bones from previously consumed Kentucky Fried Chicken. Near the base of the piece is a bracket to house the empty bucket of chicken. This piece sets up associations of sensory information which are intended to provoke thought and contemplation about our nation’s recent history. The piece is an absurd memorial for an absurd conflict. The idea for this piece came about after reading of the large increase in fast food sales which took place during the Persian Gulf conflict as people rushed home from work to view the war on T.V. Gulf War Memories represents the absurd and ironic nature of our national obsession with this most recent television war, which was consumed and forgotten as readily as a fast food dinner.
The piece is based on the texts of the late Roman philosopher Boethius, and deals with the fluid and ultimately impossible nature of identity. The work makes use of sensory deprivation and stroboscopic visual effects to immerse the viewer in a disorienting environment, and uses remote visual sensing techniques to track the viewer. If the viewers remain static they see nothing and are left in total darkness, but as soon as they move, large human shapes begin to follow them about the room. At first the shapes appear to be shadows, but then they resolve into the faces of young children, observing the viewer’s actions.
Solitary deals with the fragmentary nature of memory through time, and the ensuing fragmentation of identity. What is the relationship between identity and time? How does this indefinable medium within which we are all suspended shape our being? What are the traces we leave on the temporal surface? How do these traces, these shadows of ourselves, function in constituting our memories of ourselves, and thus our sense of being? Does the relationship between time and memory imply that we are not one, but many selves? Who are these selves, and how do they exist? What is the relationship between our actions, our being, and our sense of time, space and memory?
One viewer at a time may enter a large, dark and silent space. It is entirely empty, but as they move about the space they leave behind them a trace of black and white shadow-like figures. These figures, however, are not their own shadows but the shadows of others. Each shadow-figure is composed of a short animation (walking, turning, falling, etc.), and over a short period of time each figure decays (fades) to black. Consequently there are never more than eight shadow-figures present on the wall, each darker than the last, creating the effect of a fading and extremely slow stroboscopic image of other people inhabiting the space with the viewer. The period of time it takes for a figure to fade away, and the number of figures visible, are a function of the viewer’s movement. So long as the viewer remains in motion they will leave an unfolding trace of figures, but as soon as they stop moving the figures soon fade to black, leaving the viewer alone in the dark and silent space. Thus the viewer defines or creates their sense of time through the nature of their action. Time and space are seen as functions of the self.
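The shadow-figure logic described above can be summarised in a short sketch; the fade times and data structure are assumptions, not the installation’s actual code.

    # Illustrative sketch of the Solitary shadow-figure logic (timing constants invented).
    MAX_FIGURES = 8

    class ShadowFigure:
        def __init__(self, animation, position):
            self.animation = animation     # e.g. "walking", "turning", "falling"
            self.position = position
            self.brightness = 1.0          # 1.0 fully visible, 0.0 faded to black

    figures = []

    def update(viewer_is_moving, viewer_position, dt):
        """Call once per frame; dt is the elapsed time in seconds."""
        if viewer_is_moving:
            figures.append(ShadowFigure("walking", viewer_position))
            if len(figures) > MAX_FIGURES:             # never more than eight on the wall
                figures.pop(0)
        fade_time = 6.0 if viewer_is_moving else 2.0   # a still viewer is soon left in darkness
        for figure in figures:
            figure.brightness = max(0.0, figure.brightness - dt / fade_time)
        figures[:] = [f for f in figures if f.brightness > 0.0]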
Within the contours of each shadow-figure there is another animation cycle composed of children’s faces (smiling, looking, turning, etc.), which follows the shadow-figure’s animation. The images of childhood, in black and white, help evoke in the viewer a sense of memory, whilst the otherness of both these faces (which are unknown) and of the shadows follows the viewer, as if they were the viewer’s own shadow.
Solitary utilises live computer graphics, remote visual sensing techniques and digital projection to immerse the viewer in another sense of time and space. The computer is used not only to make this possible but as a metaphor for the suspension of the self in contemporary information media. This can in turn be seen as a metaphor for the functioning of our own memories and the manner in which we construct our identities in the non-linear and fragmented nature of time and space. At what point does the time/space constitution of the self in the world and in media technology merge or collide? Is there a separation here, or are these two states of being inexorably merging into one world?
1 interactive video projection, black and white, silent, 8 x 12 x 6 metres
In 1983, the quaternion images produced by Alan Norton shocked me with features that I had never imagined before. I felt I was looking at the place I had to be. An unsatisfied photographer with real objects before the camera lens started his computer-imaging project immediately.
All works exhibited here were generated by ray-tracing algorithms and primitives of hyperbolic paraboloids. Although we can define parameters such as colours, light sources, surface attributes and so on, it is hard to predict the exact result, especially in abstracts produced by the complicated ray behaviour. Many trial images are needed in order to create a work. In fact, I produce numerous images in this time-consuming process. Nevertheless, because it is capable of producing images we have never seen before, math-based imaging is the only method I use in computer graphics.
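For clarity (and not quoting the artist’s own code), the hyperbolic-paraboloid primitive is the quadric surface

    z = \frac{x^{2}}{a^{2}} - \frac{y^{2}}{b^{2}}

and substituting a ray r(t) = o + t d into such a quadric yields a quadratic equation in t, so each ray-primitive intersection can be solved in closed form; the unpredictability the artist describes comes from how the many reflected and refracted rays interact, not from any single intersection.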
There are many remarkable analogies between computing processes and biological processes (e.g. software generations, computer viruses). We can expect these analogies to become more transparent as computers evolve further. We assume that the ‘rulebook’ in our universe is the same for every information-processing system, whether it be the mind of a human or a chimpanzee, an abacus or a Disc Operating System (DOS). This ‘rulebook’ fascinated George Boole. He was convinced that if the laws of logic “are really deduced from observation, they have a real existence as laws of the human mind independently of any metaphysical theory”. He sought to identify those rules of thought and give them algebraic expression. In Proposition IV he identified the “fundamental law of thought” as Aristotle’s principle of contradiction: that “it is impossible for any being to possess a quality and at the same time not to possess it”. Boole argues from its algebraic equivalent that “what has been commonly regarded as the fundamental axiom of metaphysics is but the consequence of a law of thought, mathematical in its form.” If George Boole were living today he would stand in wonder and amazement pondering the magnificent machine language that has evolved since the publication of the Laws in 1854. I think especially that he would be transported to near ecstasy seeing the binary 1s and 0s in computer assembly language which symbolise the ‘on’ and ‘off’ bits. This is his Proposition IV evolved into a machine language that controls the electric circuits in everything in our daily world, from cash registers, airplanes and washing machines to Cray supercomputers. My illustrations have evolved from procedures made possible by Boolean logic. For several illustrations I adjusted my algorithms to use terms from Boole’s symbolic logic for the graphic improvisation. In those cases the 1s and 0s were distributed randomly around the centre of attraction. The visual effects are intended to suggest the dynamism inherent in logical systems. It is a tribute to Boole, who perceived the value of a symbolic language of logical equivalence long in advance of computer graphics.
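Boole’s Proposition IV can be written out directly: in his algebra a class or proposition x obeys the index law, from which the principle of contradiction follows,

    x^{2} = x \quad\Longrightarrow\quad x(1 - x) = 0

Here 1 - x stands for ‘not x’, so nothing can both possess and not possess a quality; and the only numerical solutions of x² = x are 0 and 1, the same two values that survive today as the ‘off’ and ‘on’ bits of machine language.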
Work in exhibition: Derivation of the Laws 5 copies of the limited letterpress edition of George Boole’s Derivation of the Laws with code generated illustrations by Roman Verostko (St Sebastian Press, Minneapolis, Minnesota, USA, 1991)
125 copies, handpulled, 4 colours, type set in Gill Sans; 100 regular bindings, 20 deluxe bindings, 5 livres d’artiste. Works related to the books: Frontispieces sheet, 92.5 x 100cm; Endpieces sheet, 100 x 60cm; Gaia series, AM2, 100 x 60cm; Gaia series, AQ2, 100 x 60cm
These images explore archetypal correspondences between elements of nature and the organic body/psyche/soul. This work is the result of ongoing research into ways of circumventing visual conventions inherent to 3D computer technology such as linear perspective and ‘objective’ realism, which are, I believe, symptomatic of Western culture’s separation of mind from body, and self from nature. The intent of this research is to create metaphorical images that unify subject/object, interior/exterior, and physical/metaphysical realms of being, in order to reaffirm our essential embeddedness in the world.
The images are ‘still frames’ of three-dimensional computer-generated scenes, created with the 3D animation software SOFTIMAGE. They are exhibited as backlit transparencies as a means of recreating the luminosity of the computer screen, and to suggest the numinosity of archetypes issuing from a universal, morphogenetic ground.
This work is an interactive computer installation, but it is to be understood as a work that uses the medium against itself. I was inspired by a text by Kandinsky in which he compared the progress of each generation with a little beetle running beneath a glass plate. After a while the beetle can see, but can’t go any further. Thus I came to the sentence, “Little beetle, walk a bit further”. It expresses the forward-looking mentality one is exposed to when working with ‘interactive’ technology. But of course one strives to get away from this dull and stiff programming, which excludes any flexibility and freedom. Even so, we still fear the day that we will have truly intelligent computers.
Another problem with interactivity was well pointed out by Herrmann Sturm, who said that true interaction is in fact stopped by so-called interactivity. Human acting is reduced to the pushing of keys; technical effects and results gain monumental proportions. Therefore Sturm’s question as to whether experience is still possible in this push-button world sounds to me quite legitimate. These objections are reflected in the second sentence, “Lone searchers never trust serene lies”.
The two sentences are decisive for the work. In German each consists of five words. We have five fingers on each hand and five sensor keys on each side of the monitor. If viewers want to hear the whole sentence they must push all five sensor keys simultaneously, which is not so easy. If they want to hear the development of the first sentence into the second, they have to keep pushing. Pushing is ‘rewarded’ here, as it seems to be our main experience today: viewers will really have to work on their pushing activity in order to receive the full information.
The ambiguity of holographic space symbolises intermediate time and space. It characterises that which is in a state of transition or transformation. When I employ the holographic medium, I seek to create a feeling of distance. White light transmission holography, with its rainbow of colours and transparent quality, allows me to more effectively explore the ambiguous and paradoxical relationships between natural and artificial forms. I wish here to emphasise the notions of fading, absence, fragility, emptiness and transparency.
The hologram is lit in order to be viewable from both sides (as a reflection and a transmission) and in order that the front image (from transmission) is reflected four times in four acrylic sheets, in the receding distance. This gives a kind of animated sequence of a repetitive image in a sculptural apparatus. The viewer is invited to look all around and in between the tripods supporting the four reflectors. This is a self-portrait which questions my relationship — and human relationships in general — with technology: alienation or emancipation?
This series is built on the juxtaposition of fragments from both personal and cultural history. Ironically, the computer has placed me in a new relationship to tradition. It has provided me with the means to access my cultural legacy directly, and I often begin works by placing medieval Hebrew manuscripts inside the computer. I purposely quote sources from different times and places to tell the story of my own culture, which comes from migrations over the last several hundred years from place to place, as a subculture which is in and of itself a blended culture. A single work, for example, might bring together images from a 17th-century Hebrew manuscript from Italy, the house I lived in as a young child, 19th-century Polish synagogues, and trees from the yard of my current home in Vermont. The work is totally conceived and developed within the computer, as the mechanism for the selective merging, layering and melding of my sources. How the source images enter the computer (either through my own photographs or direct drawing) is not important, as in any case I must work for hours with the sources within the computer to make them truly part of the evolving computer image.
What the computer offers is the means to create, shift, and emphasise hierarchies of both relationship and representation among pieces of the past which I bring together in my images.
The archetypal pattern of the maze or labyrinth is echoed in the design of the interactive videodisc world of Bicycle TV. Traditionally associated with spiritual initiation, growth and transformation, it is appropriate that the maze presented in Bicycle TV serves as a metaphor for the challenges and opportunities offered by new electronic technologies.
Pornography and tourism are the most obvious applications of virtual reality systems, particularly when computer graphics based. Videodisc based systems such as Bicycle TV offer alternate realities and experiences without confining them to the level of arcade games or popular culture’s equivalent, the pseudo-realities of high budget amusement theme parks.
This body of work is the first concrete product of a number of ideas I had formulated on fusing design, art and computer technology. Chief among these ideas was to merge typographic and visual communication into one unified composition, and to accomplish this with unique technical innovations I had developed. This work was an exercise in both experimental typography and computer science. I describe my work as experimental typography. For most people, this intention is not understood at first glance, because all that is seen is an image. On closer inspection of the work, it is discovered that the larger image is constructed from smaller images that are combined by the eye. The smaller images are actually characters of a font of type I have designed. In fact, the pictures are a paragraph of text, and the font could be changed to another with the result of totally changing or destroying the picture. In a sense, the individual pictures (letterforms) can be seen (read) and combined to form a more complete idea than if my specially designed letterforms had not been used.
The fonts for my work are obviously not the conventional drawings of the letters ‘a’, ‘b’, or ‘c’. The various font designs I developed are identical in design and structure to any other font. Each character of the font still performs the role of identifying a particular letterform; the letterform, however, is ultimately used as a picture’s pixel. The first character of the font represents black, the last character is white, and in between is a range of greys. This opens a variety of creative opportunities as to what the letterform could look like. The letterforms in my fonts are both the symbol of a pixel’s value and the value itself. The only criterion of communication is that each letterform is the correct grey value it symbolises.
A digitised picture is, in a sense, a paragraph of data, data that in most cases gives no clue to its content. It would be very difficult to know something about the picture by reading its data. The challenge in this poetic game is to treat a picture as a paragraph and to design letterforms using pictures and text. My intention is to articulate a new, more complex vocabulary of my life and ideas through the unique juxtaposition of symbols, phrases and pictures.
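The pixel-to-letterform mapping described above can be sketched in a few lines; the character ramp here is a generic stand-in for the artist’s custom fonts, which carry their own letterform designs.

    # Illustration of the mapping principle only: a generic character ramp stands in
    # for the custom letterform fonts described above.
    RAMP = "@%#*+=-:. "    # first character reads as black, last as white

    def picture_as_paragraph(pixels, width):
        """pixels: a flat list of grey values 0 (black) .. 255 (white)."""
        chars = []
        for i, value in enumerate(pixels):
            index = value * (len(RAMP) - 1) // 255   # grey value -> position in the ramp
            chars.append(RAMP[index])
            if (i + 1) % width == 0:
                chars.append("\n")
        return "".join(chars)                        # the picture, readable as a paragraph of text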
As fast as I could learn to integrate systems, they built boxes that could do it all. Hard shells with mysterious inner soft bits. Then at last interactivity, unencumbered. Now you, the interactor, get to make some of the decisions at least. It puts you in the picture, literally. The passive experience of the art observer is no longer relevant. Motion and response loop endlessly through time. Space is now sensed but no less sensible. My art practice is of a complex multimedia nature. It utilises audio and visual production and control technology in sophisticated and synaesthetic ways. It manifests as performance, installation, computer graphics and animation, slides, soundtracks, video and prints. It is the crossover and interfacing of these technologies and the ability to create new artforms that particularly interests me, along with the placement of the body in these contexts. To this end, interactive control over the integrated system is currently being explored. My conceptual concerns centre primarily in the two areas of dreams and the new technological revolution, and the issues raised by it in sociological, practical and moral terms.
The objective of my work is conceptual navigation. Pragmatic considerations lead to the design of computerised vehicles allowing elegance and optimal flexibility while playing with ideas. The general approach is cognitive rather than procedural or mechanistic. We conceive and develop machine partners that assist the artist in the process of exploration and discovery. Digital media may encourage intimate machine interaction, i.e. the interactive evaluation of the behavioural potential of a given idea. In addition, the artist learns about the true nature of his intentions through visual feedback.
Consider the development of virtual work spaces of which the artist is both inventor and explorer. The central material component is knowledge, rather than information. This implies that we are interested in the meaning of things rather than their visual appearance. The automatic generation of intricate pictorial complexities as such is of no concern. However, the study of levels of autonomy in the creative process is important since we aim to design computational environments that accommodate mental models of creative behaviour. Computers allow for the manipulation of ideas on the symbolic level. Arbitrary concepts like conflict resolution, adaptation or responsibility are formalised and activated in a simulated, virtual world. The activity in this world manifests itself in pictures. These pictures are visual representations that emerge from the inherent abstract activity and careful selection of physical attributes imposed by the artist. The pictures document themselves.
In summary, the sharing of responsibilities between human and machine — while aiming to create in a common effort — is at the heart of the matter. The initial spark for many incarnations of activity and interactivity is borrowed from examples in nature or it may be a product of human imagination. In either case, our objective remains the interpretation rather than the understanding of the internal dynamics of the cognitive process. The idea is to create a context for the exploration of the psychology of humans as well as the psychology of machines. The final works are side effects of the very activity of navigating in unknown conceptual territories.
Portal to an Alternative Reality has been produced in partnership with the ZERO1 American Arts Incubator, the U.S. Department of State’s Bureau of Educational and Cultural Affairs, the U.S. Consulate General, Wuhan, and the K11 Art Foundation China. Portal to an Alternative Reality acts as an access point where the public can immerse themselves in virtual and augmented reality experiences that document the rapidly changing city of Wuhan. In 2014, ZERO1 and the U.S. State Department’s Bureau of Educational and Cultural Affairs launched a new media and digital arts program, the American Arts Incubator. It showcases artists as engaged and innovative partners in addressing social issues, in addition to creating a cross-cultural exchange of ideas.
In 2015, public artist John Craig Freeman was selected by the U.S. Consulate in Wuhan to spend 28 days in the city, where he was asked to engage and empower its youth. Early in 2016, a portal gate was built in the courtyard in front of the K11 art village in Wuhan. The construction was directed by local master craftsmen and mediated with four iPad viewing devices connected to a powerful projector and screen for evening events. In April 2016 Freeman led an intensive five-day virtual and augmented reality workshop, in which he assembled and trained four production teams made up of faculty and students from local universities. The goal was to have the teams engage the community to determine which parts of the city to document in virtual and augmented reality. The resulting work was then placed at the precise GPS location of the portal gate in the courtyard of the K11 art village.
The public was able to experience the work on smartphone mobile devices using a free downloadable augmented reality browser app, and during special evening events using the iPad viewing devices. The virtual and augmented reality scenes were created with photogrammetry techniques. Photogrammetry is the science, technology and art of obtaining reliable information from non-contact imaging and other sensor systems, in this case to create 3D models from a series of photographs taken at various angles. If an object, person or scene is photographed from multiple angles, software can analyze the parallax differences between key features in the images and extract a three-dimensional reconstruction in the form of a point cloud: points in space with XYZ coordinates and RGB color values. Polygons can then be created by connecting the dots, so to speak.
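In the simplified two-camera case, the geometry behind this reconstruction reduces to one relation: a feature seen with parallax (disparity) d between two images taken a baseline B apart with focal length f lies at depth

    Z = \frac{f\,B}{d}

and each recovered feature is stored in the point cloud as (X, Y, Z, R, G, B). Photogrammetry software generalises this to many overlapping photographs taken from arbitrary angles.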
Augmented reality is virtual reality in a physical location. It is a new medium that has the capacity to support aesthetic research and artistic creation, particularly in public space. Viewed through the camera of common smart phones and other mobile devices, augmented reality allows vast audiences to experience new and emergent realities. Virtual objects can be located at precise longitude and latitude coordinates anywhere in the world. The mobile device becomes a kind of cybernetic prosthesis that can extend human perception and the sensorium, making the virtual world that is forming around us, visible.
Meaning is constructed in augmented reality much as it is through montage in filmmaking, where shots are juxtaposed. Rather than adjacent film clips cut together over time, however, augmented reality juxtaposes the real and the virtual over space. Furthermore, looking through the virtual world to the physical world beyond disrupts our sense of what is real and what is virtual, causing a profound shift in our established ontologies. In May 2016 the project was moved to Hong Kong for exhibition during the symposium, and to seed a possible expansion of the project.
Xon Kon is a street game set in Hong Kong. Featured as part of the 2016 International Symposium of Electronic Art: 香港 Cultural R>evolution, the game results from a collaboration between interdisciplinary artists and game makers: Hugh Davies and Troy Innocent.
Drawing on the rich history of Hong Kong’s development as a global trading port, XonKon invites players to excavate the commercial forces that shaped the city from the tea and opium trade to the technology and finance sectors. Central to the game is the discovery, collection and translation of codes.
Cities are highly coded locations. Unique to Hong Kong is that its symbols and systems have been continually recoded over the past 200 years. Transitioning from Chinese to British rule and back again (with a brief period of Japanese occupation in WW2), Hong Kong has been repeatedly rebooted: linguistically, politically, culturally and geographically. Bilingual messages, remixed symbols and obsolete fragments of code remain scattered at street level throughout the architecture, traditions, food and fashion. Walking around the Central Districts evokes an appreciation of the city’s complex multiculturalism, as well as its mercantile past and present.
Fundamental to Hong Kong’s foundation and continued way of life is its role as a global trade centre. Established by Britain in the 1840s, the city has served as a key port of trade connecting East and West. Once dealing primarily in tea, silk, opium, gold, cotton and spices, the city’s more recent exchanges favour fashion accessories, consumer electronics and international finance.
Talking Spaces is a site-specific sound performance series in which situational audio recordings are converted into an improvised sound performance in public space. Through this performance series, the artist seeks to draw attention to the inseparable sonic bond between listener, space, and performer during performance, as well as between listeners and the fascinating sound environments embedded in their daily lives. With a microphone, audio of the space is periodically recorded. Recordings are processed to varying degrees and arranged into improvised music with a digital audio workstation and MIDI controller. The resulting sound, expelled from a speaker, blends with the natural sound environment, overlaying it with a modified version of itself. Each performance is totally unique, since its only materials are situational recordings gathered during the performance. This performance series is strongly tied to Sound Walks and Land Art. During a Sound Walk, the goal for participants is simply to listen to the sounds of the environment.
However, in addition to surrounding environmental sound, participants almost certainly make sound themselves by moving throughout the space, for example with their breath, footsteps or clothing. Thus, the two seemingly separate entities of human and environment become sonically intertwined. It is a goal of this performance series to generate a similarly meditative and socioenvironmentally adhesive sensory experience for performer and listener alike. Talking Spaces cannot exactly be considered a “Sonic Land Art”, since its compositions are comprised not of natural materials but of recordings of natural materials. However, like Land Art, performances are site-specific, derive their materials from the environment or public space, and position in/on that space an artwork which resembles but redesigns it in the aesthetic of the artist. Also, both are ephemeral and sometimes created within the solitude of remote locations, preserved and viewed by third parties only by way of documentation. A passing pedestrian who sees audio equipment connected to speakers in public will likely assume it marks the presence of a street performance. Because conventions of sound or music performance command or suggest attention to the performed sound, the conversion of recordings of the surrounding environmental sound into a musical performance thus draws attention to listeners’ surroundings.
As a street performance in public space, the absence of a stage or rows of chairs for audience seating removes the conventional notion of separation between audience, performer, and venue. Instead, listeners may feel free to approach the performer and experiment making sounds into the microphone or even to engage in conversation with the artist. A Talking Spaces performance comprises three basic components: Electronics, Environment, and Performer. The electronic element of the performance is essential in that it facilitates the capturing, processing, and performing of digital audio. The quality and character of the sounds entering and exiting the devices depends on the quality and character of the equipment. While, in field recording, attempts are often made to exclude sounds associated with the word “noise,” such as wind, feedback, or low fidelity, these qualities might instead be appreciated as a compositional contribution of the equipment to the performance. Other compositional elements affected by electronics include structures and processes within the digital audio workstation as well as effects featured on the speaker(s) or guitar amp(s) used to project the performed sound. The audio equipment as objects also provides visual cues to the audience. A microphone pointed away from the performer suggests environmental input and possible audience participation, should members decide to approach the microphone. The environment, of course, provides the sound material to be recorded and processed. The natural environment as a system of dynamic and ever-changing patterns provides a generative compositional contribution to the performance.
Additionally, the visual appearance of the environment connects sampled sounds to their origins as physical events, which offers some transparency to audience members regarding the performance process. A performance in public space which draws attention to its dynamic sonic presence consequently draws attention to its visual presence as well, encouraging listeners to become active viewers of their environment. The performer’s role in the performance has much to do with awareness. As with a musician playing in an ensemble, listening, adjustment, and reaction are required of the performer, who decides the amount of sound and the character of sound to add to the pre-existing musical texture. Other fundamentally human qualities contributed by the performer for this project include personal musical tastes or influences, emotional expression, skill level, mood, sense of humor, and so on, all of which affect the musical composition of the performance. Though the project’s configuration provides some constraints, compositional possibilities are seemingly endless. DAWs enable field recordings to be manipulated and arranged quite elaborately, even during an improvisation. Depending on the physical controls featured on the MIDI controller in use, the performer has access to varying degrees of tactile expression, though digital options on the DAW can sometimes compensate. This particular artist happens to be heavily influenced by the textures and compositional arcs of large-scale symphonic works and techno alike. Decisions must be made, however, with the environment in mind. Thin textures sometimes subtly obscure the distinction between environment and performance, while, in contrast, heavy effects and mechanized repetitions can quite starkly distinguish the two from each other. This artist prefers to strike a balance and, perhaps most importantly, at the end of the performance, to leave the audience wondering if it’s over.
Musical Noice: City Ambient Sounds is a musical performance that captures the city’s ambient sounds as artefacts for textural building in a collaborative free improvisation by iLOrk, a laptop orchestra/mobile device ensemble from the Hong Kong Institute of Education led by Dr. Lee Cheng. A collection of ambient sounds representing various cityscapes of this city will be recorded, then magnified, distorted and/or synthesized to convert the meaningless signals into artefacts of this event. Values and meanings are thereby injected into the ambient sounds of the city, which are usually regarded by citizens as unwanted noise. In this semi-structured and interactive performance, iLOrk members will collaboratively make music with ‘sonic trees’ – microphone and speaker stands hung with digital/electronic instruments and amplifiers – that surround the indoor and open-space performance venue. Audiences are welcome to participate by responding to the city’s soundscape via interacting with those sonic trees. Art should be made to communicate ideas and values, and provide meanings to its audience. Musical performance as an art form should therefore be able to anchor people’s sonic experience, triggering reflective thinking on the ambient sound that surrounds everyday life. The title of this musical performance, Musical Noice: City Ambient Sounds, offers several meanings – it could be an accented version of the word “nice”; taking one stroke out of the letter “N” it becomes “Voice”; interchanging the letter “c” with “s” it becomes “Noise”. Musical Noice responds to the sub-theme “Noise Contra Signal”: noise can be interrupted, intervened upon and reinterpreted as a meaningful artefact that contributes to the musical context. The emphasis on economic development and the fast pace of life in this city have pushed the invisible layer of its ambient surroundings out of attention, so that the ambient sounds of the city are treated as unwanted noise (and as meaningless signals in the electronic sense). The proposed event attempts to raise people’s awareness of the ambient environment by imbuing the sound that surrounds them with new meanings, thereby shifting people’s perception of noise and unfolding the possibilities for involving noise in art-making.
The Internet of Shoes is an experimental swarm light installation and network for street-level interactions. In this installation, a group of pedestrians can perform collective actions on the street (e.g. triggering light waves that take hold of other participants) using interactive LED shoelaces that communicate with each other in a wireless mesh network (in collaboration with Lab 11 and DoIIT).
The Colours of A Wooden Flute (version ISEA 2016 Hong Kong). Fragments of memories (produced both by human beings and by computer) generate a synthesis of sounds and visuals. The sounds of live instruments serve as interface in an audiovisually interactive concert that merges a sophisticated instrumental sound and realtime computing in an amazing improvisation. While visual images and processes are being generated during the concert, a multi-channel granular synthesis, spectral delays and virtuoso chances fit together the minute tonal particles that make up the instrumental sounds into a constantly changing acoustic stream made up of different pitches, durations and positions in the electro-acoustic space. The musical and visual components interact and reciprocally influence each other in order to blend into a unique, synaesthetic, improvisational work of art.
Improvised instrumental music and audiovisual realtime processes interact and reciprocally influence each other in order to blend into a unique work of realtime composition. While visual images and processes interact with the music during the concert, a multi-channel granular synthesis and a multi-channel spectral delay generate a spatialization of frequency-oriented delays, pulses and feedbacks, which sometimes sum up to an evenly reverberating ambience and fit together the minute tonal particles that make up the instrumental sounds into a constantly changing acoustic stream made up of different pitches, durations and positions in the electro-acoustic space. Our artwork and research describes the hook-up between human and machine, between musical inspiration and digital concepts. Musical instruments act as interfaces for digital audio processing and enable human beings to communicate with digital technologies as well as to generate, receive and exchange data versus emotions.
Nowadays, as different forms of machine musicianship are blooming, in which computers act like virtuoso musical instruments, we are focusing on a very specialized form of realtime performance with a computer system: virtuoso audiovisual interaction with musical instruments. Every performance of our interactive audiovisual works, even of the same title, is unique, not only because of the inherent concept of improvisation, but also because the computer system and the programming are further developed for every event. The realtime processes of an audiovisual interactive computer system collude with the artist’s free musical expression.
WM_EX10 WM_A28 TCM_200DV A1.2FPP BK26 is a 10-channel audio/video noise installation consisting of ten pairs of CRT monitors and speakers.
The sound and video are generated through short circuits the artist produces with his wet fingers on opened devices. The skin’s resistance and the conductance of the human body, combined with the components of the circuits, modify the sound. The audio signals that are audible through multiple speakers are sent to tube monitors, which visualize the signal as flickering, abstract shapes and lines in black and white. Different kinds of noise are produced, forming a time-based sculpture that creates a sound and video scape.
The spectator is invited to move in and through the installation.
Is there a potential language below language? …below consciousness? …beyond the sign? What are the post/para-human potentials for language beyond linguistic signification, below the level of consciousness? What can the subconscious communicate, and what might such communication look like?
Performing Hypo-Linguistics considers these questions through its proposition of a technologically mediated, cyborg communication and collaboration system. In this system, traditional linguistic concepts/objects/materialities are impossible to relay. Furthermore, neither performer can enact conscious control of the system. Thus, each performer must rely on the instinctive reactions of their subconscious state to provide ambiguous, but instantaneous feedback to the other. Hence, the system proposes a communicative feedback loop beyond (and below, more visceral than) the consciously mediated realm of linguistic representation.
Neural science, through the discovery of mirror neurons (neurons which mimic in our brains the mental state of the other), allows us to imagine the possibility of a para/hyper-linguistic system that is perpetually engaged, but inaccessible to our conscious minds. As we see in Abramovic’s Mutual Wave Machine, these neurons drive a perpetual process of synchronization. However, they also imply a form of communication: a constant state of information transfer occurring just below our sense-experience. It is this communication that we seek to hijack and to hack. As such, we hope to create a harmonious discourse that reveals our distinctly human potential for collaborative evolution. Performing Hypo-Linguistics proposes an alternative enunciation of a pre/post-linguistic signification system that is realizable only through the human-as-cyborg.
Foxconn Frequency (No. 2) — for one visibly Chinese performer investigates the consequences of disconnecting action and labour from sound. Using the poetry of Xu Lizhi (許立志), a former Foxconn worker, as a structural blueprint to move through a series of dictations and tests, the piece seeks to create a space for failure and stakes. The most obvious and clear negation is the purposeful disconnection between the musician and her instrument. The use of technology here is meant to disrupt, instead of enable; to create a space of new possibilities through subtraction. For the core of the piece, a system was devised to “test” the performer’s competency with multiple exercises. This system calls these exercises (or “gestures”) differently in every performance, keeping the performer present and engaged throughout the piece. The performer must execute these gestures successfully under shifting parameters that determine overall difficulty before progressing forward. This creates a scenario for the player to fail. While traditional scores have created difficulty, a software-driven system allows for new permutations. The generative and responsive nature of the system subverts any attempt by the performer to prepare.
The struggle becomes real and perceptible, a part of the piece as it unfolds. There are many reasons for the restriction of “one visibly Chinese performer.” In music composition, we often specify instruments (e.g. for solo violin), but almost never the body itself. By making this distinction, it is my intention to draw focus to the performer’s identity, to engage the eyes as well as the ears, and to bring attention to the “extra-musical,” shifting the mode of audience perception to multiple modalities. It felt necessary to specify race when confronting the narratives of Foxconn and Xu Lizhi’s poetry, as it is the Chinese body at work. The piano, an iconic Western object, is an equal presence to the body, acting as the main resonator and origin of most sounds. They are separated by physical distance, allowing us to see these two entities as separate, and not together (as in most concert music), and to explore this reconfigured space.
Terrainor is a live performance work that uses the practices of field recording and site listening to provide both raw sound and video material and a structure for the composition built from them. The project is a form of expanded phonography that uses concepts of soundscape to structure readings of landscape. In it, specific sites are documented on both micro and macro scales, and local acoustic conditions are used to guide the process of sonic mapping. This approach results in recordings that are not traditionally representational, but instead reflect the experience of moving into the sites’ geographies in ways that are idiosyncratic and reference the social and experiential definitions of site as much as physical ones.
In performance the field recordings are processed and spatialized through a multi-channel sound system accompanied by a single channel of video. The material cycles through a number of scenes each of which provides a distinct and particular method of navigation through it. The structure and pacing of the material is designed to encourage a sustained and focused experience, one that clearly demarcates the shifting representations of audio and video in each scene.
Boundary Synthesizer II is an interactive audiovisual system that translates familiar moving images such as cityscapes and fireworks into impressive sound. A horizontal “boundary” line is extracted from the outline in each video frame and directly transformed into a sound waveform. Users can interact with Boundary Synthesizer by changing the video contents, controlling the frequency and manipulating the image data using image effects such as wave, mirror and mosaic. Thus, this system is an audiovisual synthesizer in which the oscillator’s waveform is structured by the visual boundary. The waveform changes over time with the video, and unexpected sound artifacts are continuously and automatically generated. Users can therefore explore the intuitive and expressive connections of image and sound by operating the physical controller.
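A minimal sketch of the boundary-to-waveform idea, assuming grayscale frames and a simple brightness threshold; this is illustrative Python only, not the installation’s own code, and the frame size and threshold are placeholders.

```python
# Illustrative sketch: for each column of a video frame, find the topmost bright
# pixel; the resulting horizontal "boundary" line, mapped to -1..1, is read as
# one cycle of an oscillator waveform.
import numpy as np

def boundary_waveform(frame, threshold=128):
    """frame: 2-D array of grayscale pixel values (rows x columns)."""
    height, width = frame.shape
    boundary = np.full(width, height - 1, dtype=float)
    for x in range(width):
        bright = np.nonzero(frame[:, x] > threshold)[0]
        if bright.size:
            boundary[x] = bright[0]              # topmost pixel above the threshold
    return 1.0 - 2.0 * boundary / (height - 1)   # map row index into the -1..1 audio range

if __name__ == "__main__":
    test_frame = (np.random.rand(240, 320) * 255).astype(np.uint8)  # stand-in for a video frame
    wave = boundary_waveform(test_frame)
    # Repeating `wave` at audio rate would give a wavetable whose pitch depends on
    # how many times the single cycle is read per second.
    print(wave.shape, wave.min(), wave.max())
```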
What happens when a contemporary sound artist and a jazz pianist remain shut indoors in a baroque castle in the Piemontese mountains for two weeks? The result is Sjö, a collaborative project between the Zurich-based electronic composer Marcel Zaes (CH) and the Paris-based jazz pianist Andrea Manzoni (ITA). From the sheltered rooms of their baroque retreat, Zaes and Manzoni present a sonic research project which is as contemporary as it is versatile. Sjö explores questions such as what it means for a piano to make a sound in the 21st century, a century over-determined by an electronic environment. Starting from Manzoni’s jazz-based technical expertise, the duo experiments with electronic transformations and the impact of their environment; whether low-key piano bar or deep techno club, their sound seeks to adapt the piano to different 21st-century musical settings. The visuals accompanying Sjö’s sonic investigations are provided by the artwork of the Munich-based artist duo Anna Schölß (GER) and Kristijan Kolak (GER).
Sjö : två is a one-hour concert program consisting of nine co-compositions by Manzoni and Zaes. All compositions include a tonal-melodic piano part and an electronic part based upon sonic research ideas. Several of the research settings Sjö uses are inspired by the creation of an artificial spatiality which exceeds the possibilities of the acoustic instrument. One of them is the use of convolution reverb in live performance. Several pedal hits, microphoned inside the grand piano, act as impulse responses, while the live piano’s attacks trigger these responses. The struck piano string resonates inside the piano, is microphoned, and passes through a convolution reverb where it resonates again in the virtual inside of the piano, creating a supernatural quality. Another setting consists of the use/abuse of stereophony. For the sake of theater-suitability we limit our work so far to stereophony and try to go to its very limits. Stereo as an assembly of two mono channels, rather than an “authentic” image of space, is the underlying concept. The use of a “wrong” mapping of the piano microphones, phase inversion, the mixing down of the left and right piano microphones into one mono channel while the same signal passes delayed through the other mono channel, or the application of a convolution reverb on both piano microphones but with a different impulse response for each channel; all of this, when assembled again and played back as if it were an “ordinary stereo”, results in an artificial spatiality which could not exist in reality.
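For readers unfamiliar with convolution reverb, the following is a minimal sketch of the technique in the abstract, not Sjö’s performance patch: a recorded pedal hit would serve as the impulse response and a live piano attack as the dry signal. Here both are placeholder arrays, and the gain and normalization are assumptions.

```python
# Minimal convolution-reverb sketch: the dry signal is convolved with an impulse
# response (here a stand-in for a microphoned pedal hit) and mixed back in.
import numpy as np
from scipy.signal import fftconvolve

def convolution_reverb(dry, impulse_response, wet_gain=0.5):
    """Convolve the dry signal with the impulse response and add the wet result."""
    wet = fftconvolve(dry, impulse_response)[: len(dry)]
    peak = max(np.max(np.abs(wet)), 1e-9)        # avoid dividing by zero on silence
    return dry + wet_gain * wet / peak           # normalized wet signal over the dry one

if __name__ == "__main__":
    sr = 44100
    dry = np.random.randn(sr) * np.exp(-np.linspace(0, 8, sr))                  # stand-in piano attack
    pedal_ir = np.random.randn(sr // 2) * np.exp(-np.linspace(0, 4, sr // 2))   # stand-in pedal hit
    out = convolution_reverb(dry, pedal_ir)
    print(out.shape)
```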
The creation of artificial temporality is the second concept used by Sjö, as the interest lies in the temporal perception of sound. The live piano playing of Manzoni necessitates a human-created temporality, with which Zaes interferes through his algorithms. Detaching the X from the Y axis, the temporal envelope from the momentary timbre, is the underlying concept, one that Marcel Zaes uses in many of his works, not only in Sjö. The momentary piano event – an attack, a decay, a release or a piano body noise – is frozen with diverse granular and freeze algorithms. Thus, the piano event results in an extemporal static sound, which is then shaped by Zaes again as a function of time. The detached spectrum is joined with an artificially created envelope. Both the shape of this envelope and the parameters/quality of the underlying freeze are controlled by Zaes on stage. Further ideas of temporality include the real-time reversing of a single piano note (a concept that, for physical reasons, remains impossible, yet can be approached), or the concept of introducing pure sine waves as natural overtones in an ongoing piano note, so that the played note, once released, is prolonged and again results in a continuous sonic event that can be artificially shaped.
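One way such a freeze might be sketched, purely as an illustration of detaching a momentary spectrum from its temporal envelope (this is not Zaes’s algorithm; grain, frame size, hop and envelope are assumptions): the spectrum of a short grain is resynthesized repeatedly with randomized phases, producing a static sound that is then shaped by a new, artificial envelope.

```python
# Spectral-freeze sketch: sustain the spectrum of one grain via randomized-phase
# overlap-add, then impose an artificial envelope on the frozen sound.
import numpy as np

def spectral_freeze(grain, out_length, frame_size=2048, hop=512):
    """Return `out_length` samples that sustain the grain's momentary spectrum."""
    window = np.hanning(frame_size)
    magnitude = np.abs(np.fft.rfft(grain[:frame_size] * window))
    out = np.zeros(out_length + frame_size)
    for start in range(0, out_length, hop):
        phases = np.exp(1j * np.random.uniform(0, 2 * np.pi, magnitude.size))
        frame = np.fft.irfft(magnitude * phases, frame_size)
        out[start:start + frame_size] += frame * window   # overlap-add the resynthesized frame
    return out[:out_length]

if __name__ == "__main__":
    sr = 44100
    grain = np.sin(2 * np.pi * 220 * np.arange(4096) / sr)   # stand-in for a sampled piano grain
    frozen = spectral_freeze(grain, out_length=sr * 2)        # two seconds of extemporal static sound
    envelope = np.linspace(1.0, 0.0, frozen.size)             # artificial envelope applied afterwards
    shaped = frozen * envelope
    print(shaped.shape)
```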
Andrea Manzoni: composition, production, live performance, piano, synthesizer
Marcel Zaes: composition, production, live performance, electronics, programming
Anna Schölß: artwork, visual installation and video
Kristijan Kolak: video
Pulse Project is a doctoral performance research series exploring the relational interfaces between medicine, culture and technology. In this study, I embody and perform research practice itself through adopting the role of artist-acupuncturist-investigator and acting as an instrument or medium between myself and others and between cultural traditions for understanding and mediating the body. Pulse ‘reading’, case histories, notations of pulses and acupuncture point locating are all used together as methods for exploring the cultural encounter between artist, participants and diverse medical practices. Drawing upon my experience as a clinical acupuncturist (with training in biomedicine), I use traditional Chinese medicine and music theories together with technology to compose bespoke algorithmic soundscapes expressive of an individual’s ‘being’ that registers along a spectrum between Asian and Western approaches to the body. These soundscapes are not sonifications of western principles of circulation but offer another perspective to conceive of/listen to the interior spaces of the body as each participant’s pulse is interpreted as a unique set of soundwave images based on traditional Chinese pulse diagnosis (a complex set of 28+ waveform images corresponding to states of being) and also according to traditional Chinese music theory (Lewis-King, 2014).
projectanywhere.net/pulse-project-a-sonic-investigation-across-bodies-cultures-and-technologies-michelle-lewis-king codephd.wordpress.com soundcloud.com/cosmosonicsoma/sets clang.cl/pulse-landscapes-2
The vibration of air molecules can be perceived as a tangible kind of beauty, at times extreme and other times subtle. This aesthetic abstraction both moves toward and pushes away from how we normally define synesthesia, in which two senses such as hearing and sight form a single, hybridized sensory experience.
This is an audiovisual work that explores a connection between the sonic, the visual, and space. The audiovisual elements are synchronized and unified into a new form of synesthetic experience. A custom-designed patchable machine and a sixteen-channel software oscilloscope are used as apparatuses to connect sound, visual, and space. The oscilloscope has a functional tendency to display the representation of voltage in the center of the screen. In the aesthetic aspect, however, I stumbled on a way to detour the screen-centered display and to think about how to decentralize the elements. I used the relation between the screen and the wall, which became an important aspect of the piece and led to the decentralization of the main storyline.
Composed by Daniel Scheidt, performed by Trevor Tureski
Obeying the Laws of Physics (1987) is a software composition for percussionist and interactive computer response system. The percussionist performs improvisationally on a set of electronic drum pads which are used to drive a pair of synthesizers. The computer ‘observes’ the percussionist’s performance and generates its own responses according to the percussionist’s actions. These responses involve elaborations, ornamentations, transformations, and literal quotes derived in real-time from the material provided by the percussionist.
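By way of illustration only, a minimal sketch of how such call-and-response logic might be organized: incoming pad notes are answered with a literal quote, an ornamented variant, or a transposed, time-stretched transformation. This is not Scheidt’s software; the pitches, delays and strategies are invented for the example.

```python
# Sketch of an interactive response generator: each performer phrase is answered
# by one of three randomly chosen strategies (quote, ornament, transform).
import random

def respond(notes):
    """notes: list of (midi_pitch, onset_time) events from the performer."""
    strategy = random.choice(["quote", "ornament", "transform"])
    if strategy == "quote":
        return [(pitch, time + 0.5) for pitch, time in notes]               # literal echo, delayed
    if strategy == "ornament":
        return [(pitch + offset, time + 0.5 + 0.1 * i)
                for pitch, time in notes
                for i, offset in enumerate((0, 2, -1))]                     # small ornament around each note
    return [(pitch + 7, (time + 0.5) * 1.5) for pitch, time in notes]       # transposed and stretched in time

if __name__ == "__main__":
    phrase = [(60, 0.0), (62, 0.25), (65, 0.5)]
    print(respond(phrase))
```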
Audio: Obeying the Laws of Physics from album Action/Réaction (1991)
Composed by Reynold Weidenaar Performed by Peter Luit
In the Thundering Scream of Seraphim’s Delight (1987) the double bass is revealed on video as a metaphoric microcosm of spirited human effort. Close-ups of performance phrases and gestures extract the dance-like suppleness and elegant fluidity, the elusive spontaneity, and the sometimes exuberant drama or wrenching struggle that support seemingly small and minor movements. Using the extended character of the hands, a luminescent dialogue ensues as the various interactive audio and video performances respond and recoil. The work explores energetic physicality and a spectrum of inner and outer states, from subdued tension to ecstatic whimsy. Thus is disclosed the magically angelic presence and commanding strength of the remarkable instrument upon which these musical dramas unfold.
This work is formed as a suite of 31 brief sonic/scenic events, each extending in duration anywhere from 6 to 64 seconds. The musical and visual materials of each scene were conceived together. Thus, as the basic musical ideas were being composed, certain primary visual elements also came to mind: camera angle, framing, lighting, camera movement and visual composition. After these underlying sights and sounds were synchronously recorded, the piece was formed by incorporating complementary image-processing designs, mimetic performance footage, and digital material (derived from the sampled double bass), as well as a live-performance part for the double bass. The piece is arranged in a nearly symmetrical arch form, with two hologram scenes and double bass solos on either side of the center. Video: The Thundering Scream of Seraphim’s Delight at SIGGRAPH 1988
Composed by Robert Rowe, performed by Harry Sparnaay
Hall of Mirrors (1986) is a duet for a human playing a bass clarinet and the 4X computer system. The computer takes in the sound of the bass clarinet and reflects it, sometimes faithfully, sometimes not. The human listens to what the computer does and modifies his own performance according to how his partner is playing. Humans and computers are very different things: Hall of Mirrors is an effort by the composer to let them make music together, realizing and emphasizing each other’s strengths. All of the sounds you hear come from the bass clarinet: the computer only shuffles around what it hears, or multiplies it, or changes its speed. The image of the title comes from an idea of the piece as a series of reflections, fragmented, distorted, or true, cast back from the two partners onto each other. The version of the piece played today uses a tape recording of the 4X part: since the performance of both partners changes with each playing, this represents a snapshot of one possible outcome. Hall of Mirrors is a commission from the Fonds voor de Scheppende Toonkunst of the Dutch Ministry of Culture. The support of IRCAM is also gratefully acknowledged.
Composed and Performed by Robert Mulder & Kristi Allik
Cometose (1986) was created with the financial assistance of the Media section of the Canada Council. Conceptually the work deals with the death of Samuel Clemens during the perihelion of Halley’s Comet in April of 1910. Clemens had said during his life that he ‘came in with the comet (1835) and would go with it’. In this story his wish comes true and he is transported, house and all, to the core of the comet. There he views the world and reflects on his own past and the present from within the safety of ‘Stormfield’, his house. During the major part of the voyage Twain watches as many strange and frightening things appear in the windows and on his beloved pool table. When he returns to the orbit of Earth in 1985 he wakes up and looks at our busy world. Alas, before he can comment upon what he has experienced, the satellite Giotto, on a collision course with the comet, smashes his house.
The music for this work was created with electronic and acoustic means. Among the electronic sources were the Casio CZ5000, Casio CZ101, Yamaha DX7, etc. The acoustic sources include Allik’s singing, narration of Clemens’ writing and the sound of simple toys. The original recorded acoustic material was processed using devices such as the Yamaha REV-7, the Roland SL-50 digital sampler, a digital vocoder and other equipment.
The imagery was taken from original material of the period. Interior shots were taken at Eldon House, London, Ontario. The exterior of the house and other material was photographed from Mulder’s artwork, magazine cut-outs, drawings, complex models and original period photographs. The computerized material was created on an Amiga 1000 computer using the A-squared Live! digitizer and Dpaint software. The synchronizing software was written in Electrosonic ESCLAMP on an Apple computer.
Composed and Performed by Daniel Brandt
Portable Music is a series of improvisational and rhythmic pieces for solo performer, an electronic percussion instrument (a Roland Octapad) and the Midilodica (a unique MIDI keyboard instrument built at STEIM in 1987). The musician plays the instruments, which send data directly to a computer that is programmed to ‘listen’ and respond by processing the information and sending its output to synthesizers. The result is an interactive system, allowing real-time, gestural control over many layers of musical processes. Each composition exploits this performance system in a different manner.
Repetez, s.v.p (1987). The computer, operating as an interactive MIDI recorder, repeats rhythms that are performed on the Octapad. These phrases are layered one by one until a complex pattern is created. The performer then plays the Octapad, modifying and adding to the pre-recorded patterns.
Four Echoic Episodes (1987), is performed on the Midilodica. This portable keyboard, originally a Hohner melodica, has been modified to transmit a wide range of MIDI commands. These are sent directly to the computer which interprets them in a variety of ways. Each section of this composition has its own micro-tonal scale, and uses variations of a program that transforms a solo performance into that of an ensemble with the simple device of an echo.
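A minimal sketch of the echo-to-ensemble idea, assuming a hypothetical list of micro-tonal steps in cents; it is not the original STEIM program, only an illustration of how delayed, pitch-shifted copies can turn a solo line into an ensemble texture.

```python
# Sketch: each incoming note is echoed several times, each echo delayed and
# shifted onto another degree of a (hypothetical) micro-tonal scale, so a solo
# performance sounds like an ensemble.
def echo_ensemble(note, scale_cents=(0, 150, 350, 550), delay=0.3):
    """note: (pitch_in_cents, onset_time); returns the note plus its echoed copies."""
    pitch, onset = note
    events = [(pitch, onset)]
    for i, shift in enumerate(scale_cents[1:], start=1):
        events.append((pitch + shift, onset + i * delay))   # later echoes land on other scale degrees
    return events

if __name__ == "__main__":
    for event in echo_ensemble((6000, 0.0)):   # 6000 cents used here as a middle-C reference
        print(event)
```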
The objectives of Sonic Electric: sonic recipes with experimental intermedia are located within the fields of experimental sound performance and electronic art as inter-media [sound/video, performative and community practice], developing a strategic laboratory for collective engagement by exploring principles of participation and relationality. The aim of this project, designed for ISEA Hong Kong, is to develop a practice-based creative analysis of identity, power and place through the immersive environment of the kitchen. The spatial politics of the communal kitchen workplace shape our daily experience. With a collection of hand tools and electric motors gathered from kitchens in Hong Kong, local participants will bend kitchen instruments and appliances into doing things they were not designed to do and will learn to shift and reposition their status as tools and social markers, in ways that seek to open up sites of resonance and resistance. Participants will also learn how the spatial and deep listening experience of the kitchen interior may open up through collaboration, via the fidelity of kitchen utensils and motors to sonic textures in sound performance. Sound has the ability to create a relational space, a meeting point, diffuse and yet pointed. This makes sound a significant model for thinking and experiencing the contemporary condition, for global culture, as a relational spatiality, demands and necessitates continual reworking. It locates us within an extremely animate and energetic environment that, like auditory phenomena, often exceeds the conventional parameters and possibilities of representation. This connection is equally a spatial formation whose temporary appearance requires occupation, as a continual project, emphasizing our place and also, potentially, our local community.
This dynamic provides a key opportunity for moving through contemporary social discourse by creating shared spaces; it belongs to no single public. It can exist as a network that teaches how to belong, to find place and still search for a new connection, for proximity. A critical and practical hands-on workshop will introduce recruits to the concepts of sound improvisation from a dynamic perspective, using kitchen tools and appliances and the manipulation of such tools and appliances, and drawing on performative improvisation, sound exploration, music electronics and music recording. Participants will also experience how to make deep listening an effective tool and an active collective process. The workshop will culminate in a sound and performance art presentation fusing cryptic beats and experimental noise with live sound improvisation, created by the amplification of manipulated kitchen apparatus assisted by electronic hardware such as contact mics plugged into bass guitar amplifiers. Participants will play their kitchen appliances through these devices for the final performance presentation.
“…there appears to be a complicated set of rules for computing with neurons which prevents many of them from working at once. The neurons are electrically triggered, and if the rules are broken we get an electrical overload. This is the cybernetic explanation (in brief) of what we usually call epilepsy, or (perhaps) what our forefathers called ‘possession’”. Stafford Beer, 1965.
Possession Trance is a live performance project spawned within underground D.I.Y. rave and experimental music communities. Slowly mutating for the past nine or so years, the performance uses a combination of minerals, metals, electronic circuits, high powered stroboscopes, dense smoke, high volume noise and ritual incense in an attempt to create powerful, hallucinatory phenomena within audience members, aimed toward provoking a liberating, communal and shared experience.
Initially using Nicolas Collins’ CMOS synthesisers in combination with stroboscopic light, the project has developed by examining the core underlying structure of electronic and computational circuitry. Currently this performance utilises self-built crystal amplifiers constructed with iron and chalcopyrite, copper-oxide and copper solar cells, and will soon incorporate magnetite and maghemite audio “tape” to create recorded feedback loops live.
Possession Trance tends to spiral sideways into bastard tekno, burrowing through noise infested cybernetics, crude neuroscience and distorted physiology in an attempt to piece together our fragmentary daemons and split the nine-fold reality layers of human perception; from communing with the dead to disturbing the holographic brain; from trance states to opening flicker portals in optic nerve fibres; these practitioners practice a dark hypnosis in psychoactive hyperventilation clubs. This is far more potent than those Burroughsian opiate dreaming machines. These are the dank back alleys of the contemporary Core clans; turbo charged amphet-psycho-tryptamine splittercore; highly potent chemical potions releasing anxiety ridden, fraudulent time-travel. We live on the peripheries of our own being. The veil is removed. Portals are opened. The superimposed fluorescent grid clamps the mind like thousands of splintered fingers sinking deep inside the brain and pulling out neural pathways in all directions. The pathways split into rotating cylinders folding in opposite directions spiralling into infinity. The holoflux is engaged as Purkinje shifts in his nauseated hallucinations, Fechner lurches into view weaving his pattern induced flicker colours in Helmholtz’s face.
They all dissipate in photic voltages tracing themselves into oblivion through rods and cones penetrating dank neuronal alleyways at high velocity, running unstoppable into head-on collisions with particles swarming from the OI! factory and binding with tactic centres of the inner muscular shell. The signal is now at the level of internal mineral solutions performing ionic transfers causing electrical lightning storms shooting through cells. Axon hillock is firing like a spasmodic machine gun spewing out incandescent howls and screams of white noise; automatically reloading, the onslaught continues as possession trance is manifest. Believe you are possessed by a god or deity. Something other has entered you, nonhuman. It is the nonhuman we are now in communication with.
Martial Law (Hong Kong Version) belongs to the ongoing Martial Law series, an evolving chain of interactive sound installations with electroacoustic, mechanical and electronic elements that has already been reimagined for showings in three different cities under very particular circumstances. Its main component is centered around a customized and repurposed tambourine the viewer can move with a joystick, triggering a series of unexpected visual and sound events as several objects glide across the tambourine’s head, viewed through a fresnel lens. The piece as a whole presents the audience with a stochastic sound and visual generator controlled via unclear interactive parameters, guiding the audience into an unmediated discovery process in which the artwork reveals its possibilities and nature in the midst of actual interaction. It plays with notions of free improvisation and the connections between body awareness, sound generation and synesthetic and cymatic phenomena. It stands as a complex and unpredictable self-organizing audiovisual system that the audience can influence and direct but not fully control. In all its versions, Martial Law is a sound installation that can also be understood as an electronic musical instrument. Using Arduino, hand-wired analog electronics and digital fabrication techniques, it is always conceived as an outgrowth of the space it is placed in, animating it through sound. Through its almost continuous sound emission, it erects a live streaming sound commentary on the space’s mood changes as its lighting and occupants change in the course of the day. Its visual presentation [text truncated in original, editor]
Its first incarnation was shown in Hamburg, in the context of an event that took over low-budget hotel rooms for the staging of small-scale exhibitions under several different curators. At that time, it was displayed inside a small ceramic sink in the room assigned to curator Armando Rosales. In its next version, it was reconstructed for display at Taipei’s Japanese diplomatic office, where it became a wall sculpture with a tabletop device. Its next incarnation took shape in Tokyo, at a group show themed around anatomical drawing and representations of the body, where it became a more ambitious sound installation and performance stage, including amplified stringed platforms and a proximity-activated granular synthesizer. For ISEA 2016, the work has been adapted for ephemeral installation at Run Run Shaw Creative Media Centre, entering into a dialogue with the building’s unique architecture. It will greet visitors on their way to the performance stage, framing a performance by Aquiles Hadjis and Rie Tashiro, and will stay on for the rest of the day to be interacted with.
Is anything original any more? Can an artist create a completely singular work, free from outside influence? Derivative Works offers no answers to these questions. Drawing from a wide range of musical, timbral, methodological and ideological sources, Derivative Works takes the form of an audio-visual barrage, compressing complex rhythms into tight temporal spaces and offering neuro-linguistic diatribes. Shifting and groaning synths, triggered by an impossible drum machine. Inspirations: science fiction, gabba, object oriented philosophy, hardstyle, ui design, rubber, trap, native instruments, compressed air, open-mindedness, nike, rave, linn drum, history, candy, tr-808, matte black cars, computer music, recycling, time, guitars, cleanliness, careers, spatialisation, techno, strobe lights, family, automated testing.
Aquatint is a bespoke mix of Overlap’s award-winning transitional landscapes: a mesmeric dance of shapes, lights and abstract imagery on the cusp of the recognizable, reflecting the emotional response we experience in powerful natural environments: atmospheric, sensory and textural, delivered through a systematic patterning within a void and augmented by their melodic minimalist music. A plastic view of nature.
Running Forest is a middle distance electromagnetic slide through an endless forest. Lazy Wave. Underwater sky, a surface never reached. Cloud Edged. Perhaps the purpose of abstract [original text truncated, editor]
Self-luminous 2 is an experimental handmade instrument shown as a performance. It is a series project which I have been working on since 2013 and which finally took shape in 2014. I am looking for an intimate and personal instrument that reflects on the relation of digital sound and light code. In computer language, light on is 1 and light off is 0. With more than two lamps, the pattern could become a code, readable through its meanings. When I press a button or turn a knob, the message is sent to Pure Data, and the sound is triggered live by Pure Data.
The sound data, such as frequency and volume, are analyzed and sent out to a second Arduino to control the light. Light, in this case, is an intuitive element for human beings. In this respect it is very close to sound, which disturbs our biological body directly. The lights are visualized and can be translated into messages. The messages might be readable by coincidence, through their link to the code. The light is bright enough for audiences to experience persistence of vision. During the performance, the sound is reproduced by code and part of it is impromptu.
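As a rough illustration of the data flow described above (and not the artist’s Pure Data patch or Arduino firmware), the sketch below maps analysed frequency and volume to an eight-lamp binary pattern and sends it to a second Arduino over serial. The port name, baud rate, lamp count and quantization are all assumptions.

```python
# Illustrative sketch: quantize analysed sound features into one byte, where each
# bit is one lamp (on = 1, off = 0), and send it to an Arduino over serial.
import serial  # pyserial

def features_to_lamp_byte(frequency_hz, volume):
    """Upper 4 bits encode frequency, lower 4 bits encode volume, giving a readable code."""
    freq_code = min(int(frequency_hz / 250), 15)   # 0-15 for roughly 0-4 kHz
    vol_code = min(int(volume * 15), 15)           # volume assumed in the 0.0-1.0 range
    return (freq_code << 4) | vol_code

if __name__ == "__main__":
    port = serial.Serial("/dev/ttyUSB0", 9600)     # assumed port for the second Arduino
    # One analysis frame, e.g. values received from Pure Data; these numbers are placeholders.
    lamp_byte = features_to_lamp_byte(frequency_hz=440.0, volume=0.6)
    port.write(bytes([lamp_byte]))                 # the Arduino would switch 8 lamps from these bits
    port.close()
```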
In the Hong Kong Algorave performance 160520, improvised programming generates danceable percussive music emphasizing generative rhythms and their variations. All of my interaction with the system is projected for the audience to see. The custom live coding system is a Haskell library called Conductive. It triggers a software sampler built with the Haskell bindings to the SuperCollider synthesizer and loaded with thousands of audio samples (as many as 18,000). Through live coding, I manipulate multiple concurrent processes that spawn events, including the number of processes, the type of events spawned, and other parameters.
At least two methods of generation of base rhythms are used: stochastic methods and L-systems (an algorithm describing plant growth). In the former, sets of rhythmic figures are generated stochastically. From them, figures are selected at random and joined to form larger patterns. In the latter, L-systems are coded live and used to generate patterns. These patterns are then processed into a stack of variations with higher and lower event density. That stack is traversed according to a time-varying value to create dynamically changing rhythms. Simultaneously, patterns in which audio samples and other parameters are assigned to sequences of time intervals are generated through similar methods. The concurrent processes read the generated data and use it to synthesize sound events according to the rhythm patterns described above.
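An illustrative sketch of the L-system idea, in Python rather than the Haskell/Conductive code used in performance: a rewriting rule grows a symbolic pattern, which is then read as onsets and rests. The rules and step size are assumptions chosen for clarity, not the rules used on stage.

```python
# L-system rhythm sketch: rewrite an axiom repeatedly, then read the resulting
# string as a rhythm pattern ('x' = onset, '.' = rest).
RULES = {"x": "x.x", ".": "x."}   # assumed rewriting rules, for illustration only

def lsystem(axiom, rules, generations):
    """Apply the rewriting rules repeatedly to grow a pattern string."""
    pattern = axiom
    for _ in range(generations):
        pattern = "".join(rules.get(symbol, symbol) for symbol in pattern)
    return pattern

def to_onsets(pattern, step=0.125):
    """Convert the symbolic pattern into onset times in seconds, one step per symbol."""
    return [i * step for i, symbol in enumerate(pattern) if symbol == "x"]

if __name__ == "__main__":
    pattern = lsystem("x", RULES, generations=4)
    print(pattern)             # a growing string of 'x' and '.'
    print(to_onsets(pattern))  # times at which a sample-playing process would trigger events
```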
This performance also uses additional agent processes which change system parameters (conductor agents) that run alongside sample-playing agent processes (instrumentalist agents). These conductor agents are the result of my recent research into how autonomous processes can complement live coding activity. These conductors stop and start instrumentalists, as well as change the other parameters used by the instrumentalists for sample-triggering, such as which sample to play and which rhythm pattern to follow. The live coding involves not only the patterns for rhythms and samples but also the algorithms which the conductors use during the performance.
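Again purely as an illustration (the actual agents are Haskell processes in Conductive, not this Python), the following sketch shows the conductor/instrumentalist relationship: a conductor decision either toggles a player on or off, or hands it a new sample and rhythm pattern.

```python
# Conductor/instrumentalist sketch: a conductor process periodically reassigns
# samples and rhythm patterns to instrumentalist processes, or pauses them.
import random

class Instrumentalist:
    def __init__(self, name):
        self.name, self.running = name, True
        self.sample, self.pattern = "kick", [0.0, 0.5]

    def tick(self):
        if self.running:
            print(self.name, "plays", self.sample, "on", self.pattern)

def conductor(instrumentalists, samples, patterns):
    """One conductor decision: toggle a random player or hand it new material."""
    player = random.choice(instrumentalists)
    if random.random() < 0.3:
        player.running = not player.running           # stop or restart the player
    else:
        player.sample = random.choice(samples)        # change which sample it triggers
        player.pattern = random.choice(patterns)      # change which rhythm it follows

if __name__ == "__main__":
    band = [Instrumentalist("i1"), Instrumentalist("i2")]
    samples, patterns = ["kick", "snare", "noise"], [[0.0, 0.5], [0.0, 0.25, 0.75]]
    for _ in range(4):
        conductor(band, samples, patterns)
        for player in band:
            player.tick()
```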
My interaction involves activities such as generating data, continuously reselecting which data to use throughout the performance, changing the number of running concurrent processes, and determining when changes occur. By manipulating both instrumentalist and conductor agents and the data they read, a rapidly changing stream of rhythmically complex bass, percussion, noise, and tones is improvised according to a rough sketch of the overall performance structure. This improvisation crosses the genre boundaries of bass music, noise music, and free improvisation. The projection shows all of my activities, including code editing and execution of code in the interpreter. When I press “Enter” on the keyboard, the line under the cursor is sent to the interpreter and immediately executed. Pressing F11 causes the code block under the cursor to be sent and executed. Text output of functions is printed in the interpreter.
The primary technologies used include:
– Conductive, a library for live coding in Haskell
– the Haskell programming language, through the Glasgow Haskell Compiler interpreter
– the SuperCollider synthesis engine (but not its programming language)
– hsc3, the Haskell bindings to SuperCollider
– the xmonad window manager
– the vim text editor
– the tmux terminal multiplexer
– the tslime plugin for vim
Other open-source tools are essential for the performance, including:
– an Arch Linux computer
– jackd
– the Calf Jack Host
– patchage
To afar the water flows reconstructs the city into a high-rise garden utopia, emphasizing a genuine harmony between man-made structures and their natural surroundings. A decade ago, I left my home in Beijing, where the rapid transformation of the urban landscape dramatically reshaped the city and people’s lives, and I came to America to begin an immigrant’s journey—migrating from the west coast to the east coast, and from the east coast to the Midwest. To afar the water flows is both a visual diary of this journey and a loving portrait of American cities.
My video installation is inspired by the concept of architectural relief (a technique where the sculpted elements remain attached but raised above the background plane). Audiences experience a gradual shift in the appearance and depth of the installation from a flat image to a three-dimensional view. I use techniques like relief and projection mapping to enhance the framed glimpses of scenes as well as emphasize the physicality of digital video.
In this project, we attempt to elicit the immensely upsetting yet anticipative feeling at the end of a chapter, before moving on to the next journey towards greater possibilities in the unknown.
The music is from Nicolas Scherzinger’s ‘inter-sax-tive’, a series of improvisational works for saxophone and interactive computer. It involves a variety of granular synthesis techniques and is produced as the computer and the performer interact with one another in real time.
Using the Processing programming language, we built up evolving abstract compositions linked with the musical riffs and patterns. By purposely emphasizing or neglecting certain aspects of the music signal, the image emerges as the visual counterpoint of the music.
“The performative is no longer the domain of humans: matter has its own agency, pulse, vitality, rhythm.” Rewa Wright
Algorithms experience time as microtemporal, a series of quickly expressed mathematical events, occurring one after the other in a precise order. As humans, we cannot begin to comprehend the microtemporal in its real time unfolding, but we can attempt to approach it with that most flawed apparatus, the eye. What we see with the eye is never the whole story, and in this case it only gives liminal access to the performance recorded in the film ‘an algorithmic life,’ where vibrant virtual matter pushes forth to slice the eye with its microtemporal pulse. Generated using custom shaders and displacement mapping to create terrain out of entirely non-manifold geometry, this work simulates and speculates upon algorithmic ‘life’ as an emergent computational stratum. Each frame is generated using parameters set loosely by the artist, over an extensive timeline of 20,000 frames, which is then left to render by itself for up to three weeks. The resulting rendered frames are mostly a complete surprise to the artist, who has taken a step back and allowed algorithms to generate their own performative record. Music is composed by Simon Howden using an improvisational technique that traces the contours of the visual recording.
dist.solo is inspired by the moment of intimacy in eye contact and the indefinite variables in relationships. “dist.” is the short form of distance or district; it is also a term widely used in mathematical and programming terminology for distance calculation. In this work it represents both relational and mathematical distance. We encounter momentary connections with people in our everyday lives. We synchronize and repel with one another from time to time. Attachment and detachment; the rhythmic dance as well as the chaotic crash between the two create a metaphor for the momentary, temporary relationship that exists between them. The work involves kinetic intervention in the pendulum movement as well as the combination of digital sensors. The custom software generates random positions of the unbalanced weight, hence the rhythms of the swings are always indefinite. The work and the digital screen intentionally combine rational and irrational rules and dynamic time, expressing the artist’s personal feelings towards human relationships in his current time.
B-face is a project that proposes hiding all textual information in the user interface. All readable elements are deleted in order to obtain a layer of information about the organization of the screen. The procedure tries to reveal the different visual aspects of communication, detecting the component layers and highlighting the semantic nature of the tool. It proposes an alternative way to undress and examine the structure of the interface, and is an attempt at resistance in the face of daily over-information.
The context of his work is characterized by the use of tools and procedures, where technique is manifested both as practice and as the theme of his work. The training acquired at industrial school is present both as application and as conceptual axis. He combines different languages, such as video, installation and drawing, and has focused his research on the relationship between artistic production and production technique. Taking this idea as a starting point, his proposal develops along two lines:
– procedures, processes and mechanisms of production;
– the social configuration of technology, and the intervention in and use of tools.
Referring to the metaphorical bond between technology and nature, the film questions the realities we create, using constructed and very similar illusive imagery that erases the line between real and artificial, obvious and mysterious, finding beauty in synthetic experiences. It questions our current obsession with memory, which often becomes more important than the event itself. The title suggests the idea of neglected moments being put together to create, morphing from one form to another, the thread of digital existence. Synthetic Curiosities is an exploration of contemporary friction, made through repetition and slight shifts. Our common contemporary identity is formed through repetition of events, blurring the line between the real and the digital.
Talk with Your Hands Like an Ellis Island Mutt uses new media tools to explore cultural identity in a way that analog tools would not permit, suggesting that identity is the result of multiple lifelong collisions between elements of personality that we have inherited with or without knowing it. Designed for performative screening or interactive exhibition and built in the Korsakow cinema database system, HandMutt embodies my experience as a second-generation American whose ancestors came through Ellis Island from Europe (Poland, Hungary, and Scotland between 1907 and 1914), and whose identity is formed by a multi-generational assimilation process that often feels perpetual. With three of my four grandparents being born outside the US in non-English speaking environments (and the fourth born here not long after his own parents arrived), I have been aware of my own otherness all my life, particularly in regard to my working class and Eastern European heritage.
Part of the expression of that heritage is talking with one’s hands – a habit attributed in America almost exclusively to working class immigrants and their descendants. (One expected result of successful assimilation is the loss of this habit, which I did not achieve.) The idea for HandMutt came during the editing process for my digital lyric memoir daddylabyrinth, which premiered at the International Conference on Interactive Digital Storytelling in 2014 at the ArtScience Museum of Singapore. In its video footage I found many hand gestures that led me to reflect on my own heritage and on the multigenerational assimilation process of American immigrants. This resulted in the birth of HandMutt as a separate project. The technology-enabled fragmentation of story, image, and consciousness in HandMutt reflects the fragmentation of cultural identity itself, which is often so dispersed in our lives that we can scarcely identify it. The work generates a nuanced, intimate understanding of the immigration and assimilation experience that would not be possible with traditional monolinear narrative and noninteractive media.
Talk with Your Hands Like an Ellis Island Mutt is made of interlocking fragments. It consists of 157 short clips (most 8-12 seconds in length) including altered images of my own hand gestures, archived records and objects of my immigrant ancestors, and archival footage of newcomers arriving at Ellis Island—through which the majority of European immigrants passed as America’s population exploded between 1890 and 1920. It offers multiple unique playthroughs in an order determined both by interactor choice and the computational operation of its database. The video clips in HandMutt come from four distinct sources, and each is paired with short voice-over commentary by me. The largest group is a set of sixty-four of my hand gestures harvested from the video selfies in my digital work daddylabyrinth. Each of these gestures is isolated from its communicative context and broken down into micromotions (often only a few frames in length) that are repeated, reversed, and subjected to manipulations in frame rate using Final Cut Pro X editing software. This visual idea is an homage to my film mentor Ken Jacobs, whose performative Nervous System films of the 1980s and 90s created a similar “stutter” effect (one that we now see all around us, in altered form, thanks to GIF culture).
The second group of clips I worked with was harvested from public domain archival footage of immigrants arriving at Ellis Island in 1903 and 1906, which is subjected to the same Ken Jacobs-esque manipulations. I left these untouched in terms of color to keep the historical record of immigration as cleanly represented as possible. These images, which were shot with a primitive stationary camera, are blown up to capture the micro-dramas embedded within the larger frame—a squabbling couple, a frightened child, an old woman uncertain of the life ahead of her. HandMutt also includes clips of tangible family relics and photographs from my maternal and paternal families.
I have few possessions from either side, and many have achieved near-iconographic status for me no matter how simple or utilitarian they are. The final group consists of simple text animations of keywords used in the Korsakow database that resonated with themes which emerged from the other three categories of clips. These represent the emotions and psycho-social forces that have shaped my relationship with my ancestry. The overall result of these multiple strands in conversation with one another is that of many stories weaving together to form a narrative experience without the benefit of Aristotelian coherence—a mosaic portrait of American assimilation and its multi-generational challenges.
Interactors can experience HandMutt via installation or web browser. They navigate through a series of brief clips, choosing the next one from a set of thumbnail images that appear at the end of each, all representing other nodes in the video database. The short length of the clips forces interactors into an active viewing mode in which they must make frequent choices about which clip to watch next without knowing how that choice will alter their movement through the material. Choosing a given thumbnail affects the next set of choices, but does not set the interactor on a “path,” as there are no predetermined navigations in the work. Instead, the next set of clips is determined by the keywords used to describe the one that has just played; while the number of variations is by no means infinite, the computational operation of the Korsakow database makes it extremely difficult to replicate the same navigation twice. The project is designed so that interactors—or the author in a performative screening—can navigate it in approximately half an hour.
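A schematic sketch of keyword-driven navigation of this general kind, not the Korsakow engine itself: after a clip plays, clips sharing one of its keywords are shuffled to form the next set of thumbnails. The clip names and keywords below are hypothetical.

```python
# Sketch of keyword-based clip selection: candidate clips that share a keyword
# with the clip that just played are sampled at random for the next choices.
import random

CLIPS = {  # hypothetical clip ids and keyword tags, for illustration only
    "gesture_01": ["hands", "assimilation"],
    "ellis_1903": ["arrival", "archive"],
    "relic_watch": ["family", "archive"],
    "text_shame": ["assimilation", "emotion"],
}

def next_choices(current_clip, clips, count=3):
    """Return up to `count` clips sharing a keyword with the clip that just played."""
    keywords = set(clips[current_clip])
    candidates = [cid for cid, kws in clips.items()
                  if cid != current_clip and keywords & set(kws)]
    random.shuffle(candidates)
    return candidates[:count]

if __name__ == "__main__":
    print(next_choices("gesture_01", CLIPS))  # a different set each run, so paths rarely repeat
```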
The exhibited installation, through the means of photography and new media, presents the living space of the fishing village Tai O, with snapshots produced both with traditional techniques and with virtual three-dimensional technology. The photos and 3D virtual interiors on view here are the results of the GRF (General Research Fund) research project “Using digital visualisation to preserve local cultural heritage: a case study of Tai Ping Street, Tai O”. The work was supported by the Drs Richard Charles & Esther Yewpick Lee Charitable Foundation and the City University of Hong Kong.
The interactive installation on view presents individual details of the interiors of family houses in Tai O. With the interface connected to the installation, the viewer can look around the interiors and examine the details of the furnishings and household objects up close. The appearance of the interiors, within the possibilities and limitations of digital technology, is precise and rich in detail. The home interiors presented in the installation are virtual reproductions of real ones. These virtual 3D objects were realised on the basis of photos. Once they had selected suitable houses and come to an agreement with the homeowners, the artists took several hundred photos of the interiors and, wherever possible, of each object from many angles. A selection of these photos is on view alongside the installation, as small prints.
There is also a large print behind the computer screen, showing the part of the village where the house is located. The photos were indispensable to the production of the installation in a number of ways. With the application of photogrammetry, the computer can automatically calculate a three-dimensional model from photos taken from many angles. While such a model can be extremely beautiful and rich in detail, it could not be used in the interactive installation: it contained too much detail and became overly complicated. It served mainly to determine the precise position and dimensions of the objects.
The final models can be realised either by reconstructing, simplifying and paring down the photogrammetry models, or by modelling them by hand on the basis of the photos. This is extremely time-consuming, complex work that demands special knowledge, and it took up a large portion of the time of the artists working on the project. The artists cover the surface of the models with details taken from the original photos. This texture-mapping procedure renders the virtual 3D objects true to life. Thus we might say that the final installation is, in fact, a three-dimensional photo-collage, placing details cut out from the original photos into three-dimensional space, in accordance with the dimensions and positions of the real objects.
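The simplification the artists describe is largely careful manual work; purely as an illustration of the kind of reduction involved, the following Python sketch uses the open-source Open3D library to decimate a dense photogrammetry mesh to a triangle count better suited to real-time display. The file names and the target triangle count are assumptions, not part of the project's actual pipeline.

import open3d as o3d

# Hypothetical output of a photogrammetry tool; the file name is illustrative only.
dense = o3d.io.read_triangle_mesh("tai_o_interior_photogrammetry.ply")
print("input triangles:", len(dense.triangles))

# Reduce the mesh to roughly 20,000 triangles so it can be navigated interactively.
light = dense.simplify_quadric_decimation(target_number_of_triangles=20000)
light.compute_vertex_normals()

o3d.io.write_triangle_mesh("tai_o_interior_light.ply", light)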
Hong Kong is an extremely rapidly changing city. The old buildings quickly vanish, and with them, so do lifestyles. Because the artists participating in the project attempted to document these spaces in as much detail as possible, and to model the objects as precisely as possible through time-consuming, exacting work, the visual representation of these spaces will become part of a shared body of knowledge; the spaces will be able to live on and even serve as the basis for further research.
The technology employed in this project is not new; many heritage projects use similar software and hardware. This project perhaps differs from familiar, similar projects in two respects. The first is that I have intentionally tried to use inexpensive technology that is accessible to everyone: for instance, open source software and ordinary cameras. The second is that while most similar projects present historically important and well-known buildings or locations, this project introduces the everyday lifestyle of people living alongside us today.
This work, then, is not a submission for a memorial, but a documentary endeavour. Just as the documentary filmmaker tries with her/his camera to present the lives of people, leaving out certain details and emphasising others, and in general tries with the devices of film to give as precise a picture of the subject as possible, so I, in my own way as a new media artist, with my own special devices and with the techniques and aesthetics suited to them, attempt to represent as precisely as possible the reality that I have selected and observed.
The project was supported by:
• General Research Fund (GRF), Research Grants Council (RGC) of Hong Kong
• Drs Richard Charles & Esther Yewpick Lee Charitable Foundation
• City University of Hong Kong
The Captcha Project aims to highlight the undefined boundaries between humans and machines, originals and copies. The project started as a reflection on immaterial labour and artistic practice in a neoliberal network society, and takes the form of a series of paintings created by Chinese painters from the village of Dafen. Although their work consists of the mechanical reproduction of preexisting images for the Western market, Dafen painters consider themselves artists and value their work. I signed an agreement with them, splitting the costs and profits of this project in half, and sent them screenshots of CAPTCHA codes, which they transformed into precise oil reproductions. CAPTCHA (Completely Automated Public Turing test to Tell Computers and Humans Apart) codes aim to obstruct criminals and companies whose goal is to use online services en masse through bots and automated processes. They are easy for humans to decipher but very difficult for bots.
However, it is possible to replace bots with human workers in poor countries, who manually solve thousands of tests every day. These people are required to perform a mechanical type of work that a bot is unable to perform. CAPTCHA was invented in part to distinguish humans from machines, but one of its effects is the partial transformation of other humans into machines. At the same time, artistic production is shifting. In 2004, Dafen Oil Painting Village, with its 5,000 artists mainly involved in creating accurate reproductions of Western masterpieces for the Western market, was officially declared a “Chinese Cultural Industry Model Base”.
This project was made possible thanks to Shenzhen Dafen and Deco Co., Ltd and DafenVillageOnline.
The viewer becomes involved in the art through the contemplation of empty space: a reflection based on the Japanese concept of “Ma”, where the gap or pause between two forms allows for intensification of vision and awareness.
(amended by artist 7/2020)
In China’s Zhejiang province, in the early spring of 2013, dead pigs began appearing in multitudes on the Huangpu River, which supplies drinking water to Shanghai’s 26 million residents. When the count was finally complete, over 16,000 had been hauled from the water. Speculation arose that the unchecked productivity of pig farming indicated excessive supply despite increased demand (China accounts for an estimated half of the world’s pork consumption), and that the dispatching of pigs was a desperate strategy to maintain market values. Meanwhile, the polluted water that flows into the city’s taps can only be matched by the toxic visions promulgated by apocalyptic evangelicals in America.
This work is part of the “Repurposed Web Reports” series, a series of “reports” composed entirely of media collected from the Internet. Using the web as an investigative archive, these works mine the margins of the public sphere for vicarious insights into the contemporary state of humanity. Each work is prompted by a Google search, with the results creating the parameters of information and research as well as the dynamic media (image and sound) to be used as source material. Typically the subjects or events are at the margins of Western media representation, and the content is often generated by nonprofessionals (amateurs, tourists, and other on-site witnesses) using portable personal recording devices, but in some instances it is either mixed with reports from conventional media outlets or originates from them alone. Through editing, dialectical audio and sound juxtaposition, lo-fi video and glitch EFX, and text interplay, I then recast and remix this material to better illuminate and critique the deeper meanings and insights that can be generated. This approach is akin to the Situationists’ strategy of détournement – a form of appropriation in which materials are altered and subverted so that, rather than supporting the status quo, their meaning is changed in order to put across a more radical or oppositional message.
“Does it really say that?… I try to focus the words… they separate in meaningless mosaic…” William S. Burroughs, Naked Lunch (p.52).
“It makes mosaic-like combinations of particles possible, technical images, a computed universe in which particles are assembled into visible images. This emerging universe, this dimensionless, imagined universe of technical images, is meant to render our circumstances conceivable, representable, and comprehensible.” Vilém Flusser, Into the Universe of Technical Images, 2011 (p.13).
The structure of language – musical at its origin – is the source of this installation. Based on texts and melodies from operas telling the Faust myth (the epic of human curiosity and desire), the installation explores the underlying contour of language. The work is made of industrial objects: 102 screens and speakers arranged in repetitive patterns, creating an emergent space and blowing up the virtual into physical space. The phrases and melodies of the vocalists are constantly reproduced using machine learning software: powerful algorithms, which transform the way we act and think, omnipresent in our society and in permanent interaction with us. A new version of Faust is created, fragmented and with varying degrees of legibility, recreated in movements of light and sound. It is a game with the boundaries of perception: the point where language loses its meaning and becomes abstract; language pushed to its limits, where nothing is left but pure rhythmic and melodic structure. It is the organic nature of language, imitated by a machine. This reveals the proper poetics – in all its absurdity – of the digital.
There are two questions at the foundation of this project. The first: language has evolved from something bound to a materialized object, like the book, into something virtual. It is becoming flexible, alterable, ductile. It is organized by digital machines, which allow us to interpret it in a new way. Our knowledge is regenerated by these machines, which enable us to understand the vast extent of human substance anew. The second: our world is full of technical objects conducting us. Displays, images and sounds construct our urban environment. It is crucial for the understanding of our society to understand the nature of these objects and their messages. By alienating their primary functions, their potentials can be explored.
We cut a swath through this tangled world: a straight-line loophole, for all intents and purposes, and a far view over this jungle that only appeared to be intangible and multidimensional, but that we knew would turn out to be flat (once we flattened it). And rasterized. Composed of precisely arranged concrete districts, subdivided by straight tarmacked lines. In its sound, we discover that the city is full of hidden potentials.
Still, it is not the ever-advancing, hyper-engineered, super-efficient vision of what is to come, but the idea of advancing in all directions: this future is an ocean and we are navigating on it now. Going nowhere and yet getting everywhere (in circles, or is it spirals?). Complex systems and network processes, self-made code and circuits, hacked tools, misused instruments and objects of mass consumption have become crucial components of Truniger’s work. He engages with different notions of language, with the visual and acoustic potentials of algorithmic processes and with their capacity for apparently organic behavior. He implements these formal ideas in sculptural objects, multimedia installations, musical performances and instruments, as well as compositions.
In many nations globally, sex work is a criminal activity, and active “condoms as evidence of prostitution” policies allow law enforcement to treat condoms as contraband. Advocacy groups internationally are campaigning to repeal the condoms-as-evidence policy and to decriminalize condoms. Cops and Rubbers is a web-based narrative game adapted in 2016 from a tabletop game of the same name, which was originally commissioned by Open Society Foundations and which launched at the 2012 International AIDS Conference in Washington, D.C. This serious game serves as an interactive alternative to the report described below by employing role-playing and interactivity. The game allows citizen voters, law enforcement, and policymakers alike to connect with the human rights and public health consequences of arrests for possession of condoms from the perspective of a sex worker, including increased vulnerability to HIV infection.
In 2012, Open Society Foundations (OSF) released Criminalizing Condoms, a report documenting the practice of treating condoms as contraband in six countries – the United States, Russia, South Africa, Zimbabwe, Namibia, and Kenya – and identifying its consequences for sex workers’ lives, including abuses of their health and human rights and their vulnerability to HIV. Open Society Foundations also commissioned Cops and Rubbers, a facilitated tabletop game, as a companion to the report.
In this serious game, players take on the role of one of six sex workers, each of whom has a set of game-end goals: to earn enough money for a personal need and to avoid a sexually transmitted infection. In each round, an outreach worker may provide each player with a condom that he or she must then hide from the police. Players can choose to hide their condom in one of three places that surveyed sex workers reported hiding their condoms: shoes or boots, purse or wallet, or underwear. If a player is caught in possession of a condom by the non-player police character, he or she must suffer a consequence, such as the police damaging the condom or extorting money or sex in exchange for avoiding arrest. All in-game police search narratives are inspired by accounts from real sex workers; a related quote from an actual sex worker or a real-world statistic is therefore also shared with the player to reinforce the reality of the in-game narrative. In-game character personas also cover a spectrum of gender identification and sexual orientation to reinforce the reality that the practice of treating condoms as contraband is not isolated to a particular sex work demographic.
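As a hedged sketch of the round structure described above (not the game's actual Twine code), the following Python fragment models one round: the player hides the condom in one of the three reported places, and a police search may trigger one of the documented consequences. Both probability values are assumptions for illustration; in the real game the outcomes are driven by the narrative rather than dice.

import random

HIDING_PLACES = ["shoes or boots", "purse or wallet", "underwear"]
CONSEQUENCES = [
    "police damage the condom",
    "police extort money to avoid arrest",
    "police extort sex to avoid arrest",
]

def play_round(hiding_place, search_chance=0.5, found_chance=0.5):
    # One illustrative round: hide the condom, then see whether a search finds it.
    assert hiding_place in HIDING_PLACES
    if random.random() < search_chance and random.random() < found_chance:
        return "caught: " + random.choice(CONSEQUENCES)
    return "not caught: condom kept for the next client"

print(play_round("underwear"))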
The digital adaptation of this game uses Twine, an open-source tool for telling interactive, nonlinear stories, to broaden the reach of this role-playing experience. This online version provides the same narrative as the tabletop experience, but adapted for an unfacilitated, single-player online experience. The addition of an online game platform allows for extended advocacy efforts, raising awareness and ultimately increasing opposition to the practice of using condoms as evidence of prostitution.
These in-game character narratives provide players with the opportunity to role-play, a theoretical concept dating back to G. H. Mead, who defined role-playing as being empathetic toward a character and adopting the character’s point of view. Allowing players to temporarily step into the shoes of a sex worker reduces the influence of stigma, particularly as each character has two key goals that are relatable to a general audience: earning money for personal needs – such as paying rent, providing for loved ones, or saving for education or medical expenses – and protecting their health.
Role-playing as a sex worker provides the opportunity for players to develop empathy for this typically marginalized community. Preliminary data from a 2014 quasi-experimental study indicates that knowledge gain was statistically equivalent between participants who played Cops and Rubbers and participants who read the Criminalizing Condoms report, but that game players were more likely to oppose the policy. Thus the game is an effective advocacy tool that can elicit players’ personal reactions and conclusions regarding these policing practices, including increased intentions to oppose the condoms-as-evidence policy.
The installation is composed of two microscopes watching and recording each other and a three-screen projection that visualizes the process of interpretive analysis occurring within the software.
The system acts as a rudimentary form of AI, in which the visual stimuli are translated, in a performative act of seeing, into the centre image (which is composed of the two microscope feeds concatenated into one). The resulting data is then transposed to the image on the left, which takes the form of a neuron and is responsive in both emergent growth and behaviour. The data chain continues, flowing to the right-hand image of a neuron. In a “mirror neuron” scenario, this image is influenced by the actions of the first and independently reacts and generates its own growth patterns and behaviours.
The audio created by the apparatus itself, coupled with a composed score, is introduced into the system, where it too acts upon the behaviour and responsivity of the images. The system’s behaviour is, in a mimetic sense, reflective of several kinds of processes that operate through acts of translation and analysis. The parsing of information, existing as it does at the very foundations of embodied cognition, is central to our understanding of the bodies, networks and ecosystems in which we exist.
In the installation, the visual instantiation of complex notions of being collides with features of surveillance and extends further into cartographic renderings that range from the microscopic, in the form of neuronal imagery, to the macro, alluding to the mapping and visualization of connectivity (through data analytics) within socio-geo-political bodies.
A Digital Music Box Ensemble is a music box that reads a punched card. When a hole in the card passes a pin of the rotating wheel, it triggers a sound. Takahashi adapted a so-called “punched card music box” into a digital device. The most interesting aspect of the conventional punched card music box is that any user, even without musical education, can easily compose music by making holes in a predefined grid on a card. However, no one has ever tried to add the possibility of modifying the tone of the sound the plucked metal produces. In order to fill that gap, this device converts the data from user-created cards: it simply sends MIDI data via a USB cable to a computer-based synthesizer, which plays the user’s favorite tones.
Pitch is assigned to the vertical axis of a conventional punched card. Holes can be punched at positions 1 to 8 on the Y axis, and each position corresponds to a specific audio channel previously set up on a computer. This apparently simple device allows a wide range of musical composition. It recognizes the instrument channels, the tones and the specific speakers, so users can compose and experiment with many kinds of music that require a complex setup, like an orchestra, or with spatialized sound that uses a multi-speaker system.
The concept of the device. Takahashi created this music box in order to highlight the idea that human handwork and digital processing on a computer can intercommunicate with the help of a simple device. To explain his work in more detail, Takahashi uses the term “soft generative” to describe a human type of input that generates digital data on a computer. It implies that human inconsistency can also become part of the process of composition.
Takahashi’s artwork was inspired by the composer Conlon Nancarrow (1912–1997), who had a very systematic and interesting way of experimenting with micro-durational composition. Takahashi would also describe Nancarrow’s work as “soft generative”. “I completely had no interest in a harmony and a melody,” Nancarrow said of himself; he focused instead on the rhythm and the tempo of the music from the very beginning, and he turned to the player piano in order to invent music far beyond human performance skills. The player piano is an instrument just like a regular piano, but instead of a human player it takes as input a perforated roll of paper containing the score of a piece, using air pressure to activate the piano mechanism. The rolls of paper used by Nancarrow are very similar to the punched cards Takahashi used for his work. Instead of writing a regular musical score, Nancarrow composed differently: he used piano tones and the complicated relationship between rhythm and tempo to create eccentric rhythmic and tempo-based sonifications with great precision. This opened new possibilities for the contemporary music field. The device Takahashi has developed is, like Nancarrow’s experiments, a combination of complicated rhythm and tempo. No other device draws on the legacy of Nancarrow’s innovative ideas to create such a simple way of converting human handwork into digital media.
System overview. Eight digital audio channels correspond to eight specific positions on the Y axis of a punched card. For example, a hole at the bottom position of the card will activate audio channel number 1 in the computer, and a hole at the top position will activate channel number 8. On the X axis (left to right), users can punch holes on a defined grid system, using any length they want. After punching holes in the card and inserting it into the “Digital Music Box”, the user turns the handle to play their music. When a hole in the punched card reaches the switches connected to the microcontroller, an Arduino inside the Music Box reads the position of the hole and converts it into a digital signal. If the microcontroller senses a hole, it sends a “MIDI NOTE ON” message; if there is no hole, it sends “MIDI NOTE OFF”. It then uses the hole position to create a flow of MIDI information and sends it to the computer via a USB cable. On the computer, Max/MSP and Ableton Live 8 use this flow of MIDI information to produce sounds. The patch written in Max/MSP was programmed to harmonize the MIDI signal and send it to 8 audio channels in Ableton Live 8, which then assigns and plays a digital instrument on each specific channel. In addition, the device offers the possibility of changing the instrument played: the user can assign an instrument to each of the 8 audio channels in Ableton Live 8 – an electric piano to channel 1, for example, a drum kit to channel 2, a bass to channel 3, and so on.
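The row-to-channel mapping can be summarised in a short, hedged Python sketch. It is not the actual Arduino firmware or Max/MSP patch, only the logic described above, under the assumption that the card has already been scanned into a grid of 0s and 1s with row index 0 at the bottom of the card; the base pitch is an assumed placeholder.

# Hedged reconstruction of the mapping logic, not the artist's firmware.
# card[column][row] == 1 means a hole; row 0 is the bottom of the card.
card = [
    [1, 0, 0, 0, 0, 0, 0, 0],   # column 1: hole in row 1 -> channel 1
    [0, 1, 0, 0, 0, 0, 0, 1],   # column 2: holes in rows 2 and 8 -> channels 2 and 8
]

BASE_NOTE = 60  # assumed reference pitch; the actual timbre lives in Ableton Live's instruments

def column_to_midi(column):
    # Yield (message, channel, note) for every row of one card column.
    for row, hole in enumerate(column):
        channel = row + 1                     # rows 1-8 map to audio channels 1-8
        message = "MIDI NOTE ON" if hole else "MIDI NOTE OFF"
        yield message, channel, BASE_NOTE + row

for column in card:                           # the hand crank advances one column at a time
    for message, channel, note in column_to_midi(column):
        print(message, "channel", channel, "note", note)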
The Unuseless Machine for Democracy was created in response to the Umbrella Movement in Hong Kong:
We saw a partial image of the Tiananmen Square Incident on September 28, 2014. The spirit of democracy lives on as the quest for freedom extends to the streets of Hong Kong. Forever reminded of the agonizing loss of 1989, we will continue to uphold the torch of democracy in peace.
“The Most Useless Machine, Ever!” is a renowned project, which gave birth to a philosophically challenging theme in the DIY circle through its contradictory ON/OFF function. By subverting this paradox, “The Unuseless Machine for Democracy” celebrates and supports the relentless spirit of the road to democracy.
35 Unuseless Machines are assembled to form the number “928”. Each Machine has an LED candle that randomly burns out. Once the light goes out, a dove will surface to peck at the candle, lighting it up once again. This cycle goes on forever.
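A hedged sketch of a single machine's behaviour cycle (the actual work is realised in hardware; the timings and print statements here are assumptions made only to show the loop):

import random
import time

def unuseless_machine(cycles=3):
    # Toy model of one machine: the LED candle burns out at random, the dove relights it.
    for _ in range(cycles):
        time.sleep(random.uniform(0.1, 0.5))   # the candle stays lit for a random interval
        print("candle out - dove surfaces and pecks the candle")
        print("candle relit - the cycle continues")

unuseless_machine()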
The project developed further, and a stand-alone version of the Unuseless Machine was created for supporters of the Umbrella Movement to make at DIY workshops. Altogether, 52 Unuseless Machines were created at workshops held in Hong Kong, Taipei and Japan, bringing the total number of machines to 87, which symbolizes the peaceful retaliation of the unarmed Hong Kong protesters against the 87 rounds of tear gas fired.
Matière Sensible | Sensitive Matter is a sculpture made of a very thin and delicate sheet of wood veneer; here the artists use ash wood. The sheet of wood has distinct sonorous touch zones that follow the natural veins of the wood. Scenocosme’s research has enabled them to develop an invisible and delicate artistic and technical process. Meticulous, invisible design work gives them the ability to define a musical score spread over different areas of the wood. They call this invented process “interactive marquetry”. The wood sculpture produces sounds when spectators touch it. They use sounds to stimulate haptic and gestural behaviour; thus the sculpture is designed like an instrument that reveals various kinds of sounds through touch.
For many years, Scenocosme’s artists have invented interactive works through a singular process of hybridization between natural elements and technology. They create symbolic and sensorial relationships between the body and the natural or social environment. This wood sculpture offers a sensory and intimate relationship between the wood and the body of the viewer by revealing a sound memory through physical contact with the material. The electrostatic energy of the human body is the trigger for this artwork. The interactive zones follow exactly the veins of the wood. Scenocosme has developed a unique sensitivity and reactivity in the wood material, providing a new sensory interaction in the continuity of their artistic hybridizations between nature and technology. They have developed an original marquetry process with wood that is invisible, transparent, and sensitive. This approach follows their intent to create interactive, sensory artworks that enhance extraordinary relationships with natural elements and in which the technology disappears. As media artists, they explore the capacities of technologies in order to draw out sensitive relationships through specific stagings in which the senses are augmented. Their works emerge from possible hybridizations between the living world and technology, whose meeting points incite them to invent sensitive and poetic languages. They invite us to sound out, to feel, elements of reality that are invisible or to which we are insensitive.
They use the idea of the cloud as a metaphor of the invisible, because it has an unpredictable form, is in indeterminate metamorphosis, and its processes escape our perception. Various natural and artificial clouds surround us (climatic, biological, energetic or electromagnetic). Through their artworks, they evoke invisible energetic (electrostatic) clouds which follow living beings like unpredictable shadows. Sometimes these clouds cross one another and exchange information. In a poetic way, the artists interpret these invisible links through sonorous and visual stagings. When they imagine the energetic clouds of living beings, the limits of the body become permeable, and with their technology they design, in a way, extraordinary relationships between humans, and between humans and their environment. The interactions they offer in their works make invisible exchanges sensible. Rather than revealing their complexity outright, they open everyone’s imagination. Between reality and our perception, there is always a “blind spot” which stimulates the imagination.
When they create interactive works, Scenocosme invent sonorous and/or visual languages. They translate the exchanges between living beings, and between the body and its environment. They suggest interrelations in which the invisible becomes perceptible; materialized, our sensations are augmented. Through a poetic interpretation of invisible mechanisms, technologies allow them to draw sensory relationships and to generate unpredictable living interactions. Their hybrid artworks play with their own augmented senses; they live with technology and have reactions that deliberately escape their control.
Z is a moving image analysis system that produces abstract representations of video sequences using a system of predefined grayscale disks. Visually, the disks are characterized by two properties: their radial order (the number of concentric rings) and their frequency or repetition (the number of circular sections). Disks of the same order are shown on the same column and those of the same repetition on the same row. The visual content of each frame in the movie is represented by rotations and brightness changes in the various disks. The video is thus rendered as a sequence of changing configurations of circular shapes. This representation does not correspond to any pre-existing concepts used in film theory and criticism. For instance, it is not the case that one disk represents the amount of camera movement while another represents the camera position or angle. None of the information contained in any of the disks has a clear interpretation in terms of traditional cinematic categories. Instead, Z proposes an alternative paradigm for the perception, description and analysis of moving images.
An important inspiration for this project is Soviet filmmaker S. M. Eisenstein’s suggestion that cinematic images are akin to musical overtones. An image is for Eisenstein the total sum of many visual overtones. The interest of Eisenstein’s proposal lies in the suggestion that we ought to pay close attention to the subtle and varying micro-events of a moving image stream, but the notion of a visual overtone was not precisely defined and has not been adopted by cinema theorists. Z supplies a mathematical formalization of Eisenstein’s idea, enabling a precise decomposition of an image into local harmonics. For this purpose, Z employs a mathematical framework originally devised by physicist Frits Zernike to describe the aberrations of microscopes, telescopes and other optical systems. The circular shape of the disks evokes the images seen through (e.g.) a microscope. Just as the microscope grants perceptual access to what cannot be seen by the naked eye, so the disks in Z draw attention to the often unnoticed micro-events of a moving image.
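For reference, and as a hedged gloss on the terms above (reading “radial order” as the index n and “repetition” as the index m is my own interpretation of the description), the Zernike basis functions over the unit disk and the moments of a frame f(x, y) can be written in LaTeX as:

V_{nm}(\rho,\theta) = R_{n}^{|m|}(\rho)\, e^{i m \theta},
\qquad
R_{n}^{|m|}(\rho) = \sum_{k=0}^{(n-|m|)/2} \frac{(-1)^{k}\,(n-k)!}{k!\,\left(\frac{n+|m|}{2}-k\right)!\,\left(\frac{n-|m|}{2}-k\right)!}\,\rho^{\,n-2k},

A_{nm} = \frac{n+1}{\pi} \iint_{x^{2}+y^{2}\leq 1} f(x,y)\, V_{nm}^{*}(\rho,\theta)\, dx\, dy .

One natural reading, again an assumption on my part, is that a disk's brightness tracks the magnitude |A_{nm}| of the corresponding moment while its rotation tracks the phase of A_{nm}, which would be consistent with the rotations and brightness changes described above.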
Z explores the application of Zernike’s framework to the analysis and synthesis of moving image sequences. The overall visual design of the system constitutes a diagram of the various procedural steps in the computation of the Zernike moments of a moving image, self-reflexively exposing the computational process on which the system depends. The graphical composition expresses the idea of a circuit, starting with the source image displayed in the lower part and ending with its reconstruction in the upper part. This visual approach affirms the aesthetic value of process-based diagrammatic representation in the context of computational art.
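As a minimal computational sketch (not the Z system itself), per-frame Zernike moment magnitudes can be extracted with the open-source OpenCV and mahotas libraries; the file name, disk radius and maximum degree below are assumptions.

import cv2
import mahotas

# Illustrative only: compute Zernike moment magnitudes for each frame of a video.
cap = cv2.VideoCapture("input_video.mp4")     # hypothetical file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    side = min(gray.shape)                    # crop to a square so the unit disk fits
    square = gray[:side, :side]
    # radius of the disk over which the moments are computed; degree caps the radial order n
    moments = mahotas.features.zernike_moments(square, radius=side // 2, degree=8)
    print(moments[:5])                        # first few magnitudes for this frame
cap.release()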
Dead Presidents, deceased dictators, passed poets and reigning sovereigns – they watch us daily, as we swap goods for labor-power, as we sell commodities and buy resources, as we hawk dues and invest in securities, as we barter mining rights for dumps and pass printed paper for vouchers, tickets, coupons, slips and receipts, ever so lightly tickled by Adam Smith’s invisible fingers. My prints of the Face Value series are enlargements of eyes from the ‘heroes’ on banknotes from various countries. The text and figures you find with an augmented reality app are quotes from specialists on finance and graphs of different situations, all connected to the person portrayed.
The first series consists of international notes from countries such as Korea, Japan, the UK, Singapore, the USA and Vietnam, whereas the second series depicts only banknotes that were replaced by the common European currency, the euro. Both series are printed as small booklets.
Beyond Non-deterministic Connections Version B (BNC B) is a sculptural network of Bayonet Neill-Concelman (BNC) tee connectors and a two-channel video installation. Serving as a monumental analog video mixer, BNC B is also a statement on, and metaphor for, global connectedness. With its sharp 90-degree angles, its structure developed out of the aesthetics of L-systems, a class of generative algorithms that have been widely used in the modelling of plants and recursive processes and that are representative of the governance of algorithms and data today.
As a combination of sculpture, installation and video art, BNC B blurs the genre boundaries of current electronic art. Two DVD players and two small flat-screen TVs are connected through video amplifiers to a sculptural network of 400 BNC tee connectors that topologically form a tree. Through their connections, the two video signals are mixed in the analog domain, creating effects of colorful distortion and glitch. The two videos running on the DVD players are video works in their own right, each between two and four minutes in length, consisting of footage taken from the artist’s video archive: all samples were shot in public spaces in Brazil (São Paulo), Argentina (Buenos Aires), Germany (Berlin) and Poland (Warsaw) between 2013 and 2015, feeding the notion of a space in which meaningful inter-human connections can be established. In direct contrast, the almost surgical yet still organic-looking sculpture in the middle of the installation serves as a metaphor not only for global connectedness through electronic circuits in its manifold forms, but also for the (usually invisible) inner wirings of modern-day technology.
While the complete structure is visible to the spectator, its fractal-like appearance, which could have been the result of a Lindenmayer-system generator with 90-degree angles, seems hard to trace. Even though it is a completely deterministic system with no means of interaction for the spectator, BNC B produces highly colorful, ever-changing outputs that are seemingly random but follow a well-defined pattern determined only by the luma and chroma values of the composite video signals being mixed, as well as by the alignment of the two signals in the time dimension.
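The reference to Lindenmayer systems can be made concrete with a tiny sketch. This is not the rule set behind BNC B (the text only says the structure developed out of the aesthetics of L-systems), merely an illustration of how right-angle rewriting rules produce such branching, hard-to-trace forms.

# A classic right-angle L-system (illustrative, not the BNC B design rules).
# F = draw a segment, + = turn 90 degrees left, - = turn 90 degrees right, [ ] = branch.
AXIOM = "F"
RULES = {"F": "F[+F]F[-F]"}

def expand(axiom, rules, iterations):
    # Rewrite the axiom string the given number of times.
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

print(expand(AXIOM, RULES, 2))
# -> F[+F]F[-F][+F[+F]F[-F]]F[+F]F[-F][-F[+F]F[-F]]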
Hentai Haiku is a series of interactive 3D environments, each involving the visitor in a virtual erotic encounter. Like traditional Japanese haiku poems, each ‘hentai haiku’ aims to condense a rich sensual impression into a minimalist piece of art with no lengthy narrative: all there is for the visitor to see and play with is always right there in front of them. Hentai Haiku are outbursts from the vast realm of imagination that spins around physical sexuality, with themes ranging from common fetishes to outright ridiculous fantasies. Slightly beyond the limits of human anatomy, Hentai Haiku introduces characters of a fictitious human-like race that is all and only about our flesh’s desires, and where your mind may take them. Hentai Haiku asks the visitor to assume an active role in each environment by providing the means to interact with scene objects. As in 3D video games, Hentai Haiku’s visuals are generated in real time from fully synthetic character models, which come to life in immediate response to the visitor’s actions. Turning the visitor’s curiosity into playful exploration in this way allows more complex matters to be conveyed, an advantage well suited to an intricate topic like physical sexual communication.
To make this experience easily accessible, Hentai Haiku relies on video game technology and related paradigms for user interface design. Originally published for desktop web browsers via a dedicated website, the project’s current implementation for ISEA 2016 explores the more casual user experience of touch-screen devices. The series’ pieces are based on a custom framework for procedural character animation. At its heart sits an AI running a model of sexual stimulation that continuously translates low-level sensory stimulus into higher-level perceptions, adjusts internal states (such as arousal), and, based on the latter, generates feedback via multimodal expressive behaviors. Each Haiku presents an arrangement of props, dynamic and static, set up to allow the visitor to physically interact with the virtual character’s body at varying degrees of freedom: some allow the visitor to control limb-motion restraints applied to the character, while others cause sensory stimulation more or less directly. With visitor and character entering into this haptic dialogue (a key component of sexual interaction), physical believability becomes important. Here a second technological cornerstone of the work comes into play: all scene objects that may interact with each other (body parts, bars, clamps, ropes, …) are simulated using a state-of-the-art video game physics engine.
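Very roughly, the stimulation model can be pictured as a loop that integrates incoming stimulus into an internal arousal state and emits expressive behaviour once a threshold is crossed. The sketch below is a hedged abstraction of that architecture, not the project's actual framework; all names, constants and thresholds are assumptions.

# Hedged abstraction of a stimulus -> internal state -> expression loop.
class StimulationModel:
    def __init__(self, decay=0.95, threshold=1.0):
        self.arousal = 0.0
        self.decay = decay          # the internal state relaxes over time
        self.threshold = threshold  # level at which expressive behaviour is triggered

    def step(self, stimulus):
        # Integrate one tick of low-level stimulus and return a behaviour label.
        self.arousal = self.arousal * self.decay + stimulus
        if self.arousal > self.threshold:
            return "expressive behaviour (pose, sound, facial animation)"
        return "idle motion"

model = StimulationModel()
for s in [0.1, 0.2, 0.4, 0.6, 0.0]:   # a short stream of sensory input
    behaviour = model.step(s)
    print(round(model.arousal, 2), behaviour)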
The character is best described as a “fully animated ragdoll”: all character motions are carried out by applying appropriate muscle forces to the joints of the doll’s skeleton. Muscle forces depend on various factors, such as determination, verve and fatigue, and will, in combination with obstacles and external forces, determine if and how far body expressions come about. The result of this unique combination of techniques is playful exploration on the visitors’ part and continuous, believable reactions on the characters’. Hentai Haiku also explores a novel form of arts patronage through a custom crowd-funding model: while the series is published online via its dedicated website, the same site allows inclined patrons to have their own alias become part of the artworks they contribute funds to, thereby granting a partial virtual ownership.
Hentai Haiku is work in progress, and populating a wide and barely claimed field of artistic expression as well as technological disciplines pointing into the far future, Hentai Haiku is bound to evolve, with respect to visual and physical quality, artificial intelligence, freedom of user interaction and customization options. First and foremost however, future Haiku can be expected to explore more surreal spatial and mechanical arrangements, dissolving rigid rules of decency, asking what next is exciting, what is love and what is at all thinkable, in sex. The human need for sexual connection and unification is a blessing, a gift that is however little appreciated by societies around the globe. Canonized into highly repressive cultural rule-sets (often based on ancient morals), today sexual needs are widely used as a means to exert power on one another, thus cancelling out love, mutual respect and freedom of expression, while turning this graceful gift into addiction and shame. But we must celebrate sexuality! We are here to explore, express and implement our bodily desires, and to share them with the ones we love! ‘If sex was food, we’d all be on a bread and water diet. Or pretend to be anyway.’
Hentai Haiku is an excerpt from somewhere between the rapidly evolving worlds of 3D games, anthropomorphic robotics and VR erotica. When I began working on the now-published exhibition, back in 2013, I realized that proponents of none of the aforementioned fields would embark on anything like a fusion anytime soon, for reasons of both technical and marketing challenge. Adult 3D entertainment would regurgitate concepts proven to sell, rely on scripted story to build up erotic energy, and then, for the climax, replay sequences showing short, statically key-framed animations, often with rather restrictive camera navigation. Nothing procedural here, despite obvious advantages. Robotics research, on the other hand, is all about procedural animation! It has to be, because its agents must deal with unforeseen circumstances and surprises. But its grants usually won’t be issued for anything adult-oriented, even though erotic limb motions should be some of the easiest things to synthesize: characters don’t need to stand or walk, just lie down. And then move erratically. So with Hentai Haiku I explore the possibilities in this interdisciplinary no-man’s land. It is as much an endeavor to raise the bar for physical interaction interfaces in adult video games as it is a leap into uncharted terrain of surreal sexual dreamscapes.
Explosive Conductor is an artifact of the research project “Anästetiker” (2012–2013) by Evelina Rajca. The title is coined from the terms “anesthetic” and “aesthete”. Within this research project, different artifacts show experimentally to what extent the density and aggregate state of matter influence the presumed presence or absence of options for action; they culminate in a complex mixed-media room installation, or they can be shown separately as audiovisual installations. In this research project Rajca was interested in elaborating different methods in order to figure out “how we can become aware of substances that we, for example, do not sense ‘directly’ but that affect our well-being”.
Explosive Conductor is a two-channel audiovisual installation. In the right channel, an experiment with solid carbon dioxide (CO2) is presented, which doubles as a compositional sound study. Dry ice is placed in a sealed, partially evacuated glass box. A brass pin inside the box strikes and drills through the dry ice. The pressure in the box rises due to the sublimation of the dry ice, and the needle moves in accordance with the concentration of the gas produced. The sound piece ends with the explosion of the glass vitrine. Computers, sensors and detectors steer the process. The other channel shows a snowy landscape made out of dry ice, inspired by early geoengineering and weather modification experiments.
Solid carbon dioxide, also known as “dry ice,” does not occur naturally on earth; it is an industrial product used, for example, for the cooling of food. As it changes from solid into gaseous form, a gas cushion develops around the carbon dioxide which, when the ice is pressed against a suitable resonating body made of metal, generates extraordinary vibrations. This results in a variety of sounds – cries, sighs, calls of delight and of horror – that depend on parameters such as volume, surface structure, and the temperature of the materials, and can thus be deliberately modulated by the instrumentalist. Depending on its density, carbon dioxide has different effects: analgesia – anesthesia – death.
Since the density and aggregate state of a substance determine its level of toxicity, keywords and ideas such as irreversibility and fetish are also of central importance in Evelina Rajca’s research process “Anästhetiker”, and they play a role in another artifact within this project: the sound instrument “fortune teller”, made from antique, radioactive uranium glass plates. Rajca’s interdisciplinary projects convince not least because of their surprising yet sensitive interconnection of artistic experiment and scientific speculation. The work of Evelina Rajca is distinguished by a combination of high technical skill with the intelligent use of the symbolic and social aspects of a particular topic.
In Neuro memento mori: a portrait of the artist contemplating death, digital animations and live-action video are projection-mapped onto a 3D print of the artist’s head and neck. The 3D-printed bust is made from two kinds of scan data: 3D facial scans, and brain data gathered during novel MRI experiments designed in collaboration with two neuroscientists. In these experiments the artist viewed memento mori paintings and meditated on death in the scanner. The life-sized 3D-printed sculptures are dissected to reveal the artist’s brain and to ‘make real’ the fMRI data gathered during the experiments in the MRI scanner. Computation is used to produce 3D neuroimages, 3D prints and computer animations that are projection-mapped back onto the 3D object. The artworks, made with neuroscientists, are contemporary memento mori made from data. Carl Sagan’s quote reminds us to remember our small scale in relation to the universe and, as a result, to live better, more compassionate lives. Memento mori and vanitas artworks, popular in the seventeenth century, had a similar function.
They supposedly reminded each viewer of their mortality, prompting them to live better lives. Neuro memento mori is inspired by an object in the Wellcome Trust Permanent Collection, “Wax model of a Female head depicting life and death” (Unknown 1701-1800). It shows a woman’s bisected head, the left half apparently a detailed portrait of a living woman, open-eyed, with painted lips and blond hair arranged in ringlets. Her left hand frames her face while the right half of her head is shown in post mortem decay. Resting on her skeletonised right hand, her skull crawls with insects, maggots and worms. A snake emerges from her empty eye socket. This compelling object prompted me to question whether, as we look at memento mori artworks, we do ‘remember, we must die’.
What parts of our brain are active when we look at these artworks, and when we contemplate death directly, without looking at memento mori art? Working in collaboration with neuroscientists Zoran Josipovic (NYU) and Andreas Roepstorff (Aarhus University), I looked at representations of memento mori while in an MRI scanner that recorded my brain function. Following Josipovic’s instructions, I learned to meditate and to contemplate death, and repeated that meditation in the scanner. Neuroimages were processed to produce 3D data of my brain, used to make 3D-printed sculptural objects. The form of the life-sized portrait sculpture refers to the Wellcome Trust object: the artist’s head is dissected, revealing the skull and brain. Video and computer animations are then projection-mapped onto the sculpture to create a contemporary memento mori.
Discussions of Big Data have drawn attention to the acquisition and manipulation of personal data, including medical data such as neuroimages, which provide huge amounts of quantifiable data, most of it indeterminate to the non-expert. Scholars of media art and comparative literature have drawn attention to the collapse of boundaries between information and embodiment and the ways that bodies are co-constituted with data, or emerge with data (Paul 2011, Thacker 2003). It is common for such writing to take either a utopian stance, described by Katherine Hayles as the wish “to be raptured out of the bodies that matter in the lust for information” (Hayles 2008), or, conversely, a dystopian view. This work shows that there are porous boundaries, or entanglements, between nonhuman animal and human, between life and death. It uses the artist’s corporeal data as media for artistic expression to subvert the formation of subjects by Big Data. The work specifically addresses the porous boundaries between life and death, growth and decay.
Neuro memento mori: a portrait of the artist contemplating death is one of a series of works made over the last twenty years that emerge through interdisciplinary collaboration. To a greater or lesser extent, these works each owe a debt to conversations and insights gained from rich collaborative partnerships with engineers, medical researchers, surgeons and scientists. In this case the key collaborative partners are from the field of neuroscience. The MRI experiments were designed by us as a group and then conducted at Aarhus University, led by Andreas Roepstorff, Director of the Interacting Minds Centre, with his colleague Joshua Skewes working closely with the artist on the experiment design. From New York University, Zoran Josipovic developed the meditation experiment design and trained the artist for a year to prepare her for the experiments.
We always speak of “visions of the future”, but what if we were to let the auditory realm lead our imaginings?
Sounding the Future brings together the worlds of speculative fiction and audio art. It is an intensive program of research and creative development that will seek to predict what art in future will sound like. This act of prediction is inevitably informed by the present and thus it will also take stock of the sound of art today.
Sounding the Future will result in a body of ficto-critical works that can be delivered via gallery installation, radiophonic presentation and e-publishing. The spine of the work will be a number of future scenarios in which sound, and its manifestation as sound art, is the dominant novum — the paradigm shifter.
The narratives revolve around two themes: Future Human — the integration of technology and biology resulting in trans- and post-human conditions; and Future City — the exploitation of the sonic potentials of the new cities we imagine. Nested within these narrative trajectories will be non-fictional material — Future Citings — short text essays, video, artist interviews and studies that provide factual and theoretical grounding for these future speculations.
“Noor” (which translates as ‘light’ in Arabic) is a brain opera that asks, through metaphor, analogy, sound, text, light, movement, brain sensors and audience interaction with an ‘actor’ wearing an Emotiv EEG brainwave headset, just one simple question: “is there a place in human consciousness where surveillance cannot go?” Noor is an original ‘brain opera’. Though artists have been using EEGs to produce musical events since the 1960s and 1970s, this is a full audiovisual brain opera. Using an EEG-enabled headset, the performer’s emotional states will, at various times, launch digital databanks of video, audio and a libretto while simultaneously displaying the performer’s brainwaves live-time for audience viewing.
Noor is loosely based on the true story of Noor Inayat Khan, a Russian-born, European-raised Sufi Muslim princess whose father, Hazrat Inayat Khan, brought Sufism to the West. During WWII, Noor became a covert wireless operator for British intelligence, parachuting deep inside occupied, Vichy-ruled France. For a period of three months, Noor (code name “Nora”) was the only communications link transmitting critical information back to the Allies. Caught by the Gestapo, who were unable to break her or to find out any information about her transmission cell, Noor was shot inside the infamous Dachau camp shortly before the end of the war. Noor will be used as a metaphor to work with issues of surveillance, privacy and faith.
In Noor, the performer moves through the audience, who either stand or sit depending on the venue. The spoken-word libretto based on the life of Noor is heard along with the sonic environment. The performer’s brainwaves are displayed live-time for the audience to see as they interact with the performer and the story. As the performer’s emotional states change, different video, audio and parts of the libretto are triggered on four different screens. This in turn changes the performer’s emotional state, which changes the mood and responses of the audience. Only the screen displaying the performer’s brainwaves is active all the time; the four emotional states cause the displayed images and sound on the other screens to ebb and flow, depending on the live-time interaction between the audience and the performer.
The fifth screen displays the performer’s brainwaves live-time. Though artists have been experimenting with the human brain since the 1960s and 1970s, the new technologies, research and methods for studying the brain being developed through global government initiatives in the coming decades raise real concerns over issues of privacy, surveillance, autonomy and consciousness. Do our brainwaves contain the essence of who we are and what we think?
In the future, how will our personal neurobiological data be used for security identification, thought re-education, the manipulation of memory, and personal ‘brain fingerprinting’? If our cognitive processes can be monitored and harvested, how do we prepare for this new frontier? ‘Noor (which translates as ‘light’ in Arabic): A Brain Opera’ uses the true story of Noor Inayat Khan, her faith, capture and murder by the Gestapo during World War II, as a metaphor linking the theatre of war to the theatre of surveillance, in a live-time immersive, interactive performance triggered by brainwaves. Being able to view someone’s brainwaves during the performance serves as a metaphor for the perfect storm brewing from potentially invasive and identifying tools of cognitive analysis. These methods and techniques could potentially track, categorize, manipulate and surveil the human brain. The opera raises, but does not answer, a simple question: ‘Is There A Place In Human Consciousness Where Surveillance Cannot Go?’
Using an Emotiv headset and Max/MSP, a databank of images, sounds and a libretto is calibrated to launch according to the different emotional states of the ‘actor’. Those states and databases are keyed to the following mental states, with a minimal sketch of the mapping given after the list:
1. Relaxation
2. Engagement (attention)
3. Interest (enjoyment)
4. Stress
5. A fifth screen displays the ‘actor’s’ live-time brainwaves
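A hedged sketch of this mapping in Python (not the actual Max/MSP patch; the state names, threshold and bank names below are assumptions):

# Illustrative mapping from headset-reported mental states to media banks.
MEDIA_BANKS = {
    "relaxation": "screen1_bank",
    "engagement": "screen2_bank",
    "interest":   "screen3_bank",
    "stress":     "screen4_bank",
}

def route(states, threshold=0.6):
    # states maps each mental state to a value in [0, 1].
    # The raw brainwave display (screen 5) is always active.
    triggered = [bank for state, bank in MEDIA_BANKS.items()
                 if states.get(state, 0.0) >= threshold]
    return triggered + ["screen5_raw_brainwaves"]

print(route({"relaxation": 0.2, "engagement": 0.8, "interest": 0.7, "stress": 0.1}))
# -> ['screen2_bank', 'screen3_bank', 'screen5_raw_brainwaves']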
Untitled II builds on modified membranophones developed by Marianthi Papalexandri-Alexandri as its main instruments, utilizing Pe Lang’s motor-activated devices. The sound of Untitled II can be influenced by manipulating the tension of the nylon lines, changing the speed of the motors, turning the motors on and off, or depressing the membrane with the fingers while it is vibrating to vary the pitch. The work creates and explores a soundscape of machine-produced, long sustained sounds and textures with an organic character and without any post-processing. It is a work adaptable to any space that can be presented both as a sounding sculpture and as an instrument in the context of a solo live performance, thereby questioning the role of the performer and the difference between performing and operating (“music as art” and “music as music”).
Untitled V is a sound sculpture that consists of miniature speakers acoustically activated by a motor-driven mechanism. A nylon thread is fastened through a hole at the center of the membrane; the other end of the thread is loosely secured to a motor-turned, rosined wheel to produce friction. Sound is produced by the rim of the rotating wheel rubbing the thread as the wheel turns, the two surfaces alternating between sticking to each other and sliding over each other, with a corresponding change in the force of friction. The motor is set to its lowest speed. The slow turn of the wheel creates changes in the tension of the thread, resulting in sounds (crackling impulses) in the membrane of the speaker. Untitled V creates a very quiet listening experience.
The artwork depicts the picture as a model of reality. The picture as a fact. The camera becomes part of the body. The natural eye sees only what the artificial eye looks at. A poetic portrayal in five sequences of an artist drawing a self-portrait, pixelated within virtual space. The mind begins to question everything and breaks down its confidence in seeing as a truth. The visible space becomes mental space.
Seeing is a tricky act of constructing a mental image, one that seems dangerously easy. We interpret what we see. But what do we understand of what we see? My current artwork addresses the issue of ‘seeing things’ and is grounded in a practice-driven examination of methodology and subjective experience within art, on issues of perception, embodiment and representation. Normal perception is a kind of blindness: we perceive what we attend to.
Perception is an act we hardly think about. It is a complex experiencing of the spatial relations of subject and object within the environment, drawing on cues such as faces, background objects, depth, colour, shadow, light-dark contrast and sound, all within the bottom-up processing of sensory information and the top-down effects of learning, memory, expectation and attention. We interpret what we see. But what do we understand of what we see? Interested volunteers will have the opportunity to put their own perception skills to the test in the art experiment: to sample a selected art image flashed at the edge of eyesight, to the right or left visual field only, and to paint or draw what they perceived after a single visual exposure (up to 4 exposures allowed). The technique used is adapted from neuroscience. It is intended to demonstrate our relationship to the world.
A posthuman mythical hybrid beast, the Lamassu Kentaurosu Wagyu, poses in a pastoral landscape, unaware that she is being groomed for consumption.
A young woman of the future wonders what to write on her immigration form: can a being be reduced to its functional identity?
Pathetic Fallacy is an intergenerational dialogue about growing old. Youth doesn’t believe it will age. Age believes it knows best. Humans believe in the pathos of humanity. And the cycle continues. The screenplay wraps the empathic notion of kokoro around the subject of aging: the aging of humans, of women, of technologies, of matter, of robotic or cyborg assemblages. The piece is a ‘two-hander’ between an elderly woman and a ‘young’ ‘female’ android. A conventional mother-and-daughter or Juliet-and-Nurse figuration is applied to an unconventional scenario, as the video explores a new familial paradigm. Pathetic Fallacy was the first two-handed dialogue drama created for film/video involving an actroid (see, since, Fukada’s 2015 feature Sayonara). The English term ‘pathetic fallacy’ denotes the ascription of human traits or feelings to inanimate nature. In keeping with this theme, Actroid-F’s simple face-detection and face-tracking software is employed in its acting. The robot cannot currently distinguish between individual faces or ‘decide’ to follow a face; its decisions are made by humans via a computer interface. However, it often fools its interlocutors into believing that it can make these determinations. It mimics the human development of empathy via neurobiological mirror-learning: visual emulation in face-to-face situations. Framed as a fantasy, yet endeavoring to strip back layers of fantasy and futuristic nostalgia, Pathetic Fallacy provides another snapshot of a point in time where science fact has not caught up with science fiction, and where speculation and education are based inelegantly upon what is known.
Elaborating a poetics of point-of-view, the actroid and the elderly woman discourse about aging in a dramaturgical möbius loop. Their dialogue comprises their attempting and not attempting to understand each other, and reveals a lack of clarity as to where one entity ends and the other begins. Insensibly entangled in a beatific human–robot mis/understanding, each character relies on what she perceives as empirical evidence to promote her own partial fallacy. Each character also relies on her typecasting with reference to specific cinematic emotions—the maternal fondness of the elder, the smart self-indulgence of the younger—and is unfortunately unable to break these time-honored molds (the gaze of the artist attempts to query, destabilize, break the molding for them). Mirroring the dialogue’s intersubjective mirroring, time is also on a loop in the video, which can seamlessly repeat and repeat; as long as each interlocutor is closed or oblivious to the other’s perspective, the loop remains closed and the questions remain open.
Pathetic Fallacy seeks to make explicit, and relate to technoculture, tensions between generational feminisms and non-feminisms in terms of claims to authenticity. In arguing about aging, even in their gentle, clichéd way, the old woman and the robot in the looping scene perform a claim-staking solipsism present in modern-day feminist discourse. They are both wrong, and they are both right. The gynoid will age, but not as the human elder thinks it will. The gynoid has the overconfidence of the (literal) digital native; the woman the overconfidence of the rational anthropocentric. The ‘child’ in the scenario is the burgeoning intelligence that exists in the mutual space between them, in their intersubjectivity: in the mirror.
Control is an art game experience that intends to provoke discussion and reflection on the limitations of the physical interface and the nature of the human computer symbiosis in videogaming as mediated through the manual game controller.
The game takes place in a computer that is part early IBM PC compatible, part tape-loading 8-bit home micro. Use the basic arcade control scheme of 8 directions and one action button to control a downsampled representation of your hand, while negotiating a series of increasingly complex videogame control devices.
Control has 10 levels. The first 9 of these are based on existing videogame controllers, while Level 10 is the ‘OctoPad’, an experimental concept prototype. The player must successfully press all the highlighted controls to proceed to the next stage. If the timer or energy level reaches zero then it’s ‘Game Over’!
The player is represented onscreen by a hand avatar, which is controlled using the basic arcade videogame control mechanism of 8 directions and one action button. The 5 digits of the hand can be individually used to press the onscreen game controls. In order to use one of the fingers, the player must hold down the action button along with the directional control for either left, top left, up, top right, or right. Each of these 5 movements corresponds to a single digit; for example, the combination of left and action corresponds to the thumb.
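Read as data, the finger-selection scheme is a small lookup from a held direction plus the action button to one of the five digits. The sketch below is purely illustrative: only the left-plus-action/thumb pairing is confirmed by the description above, so the remaining direction names and pairings are assumptions, not the game’s actual code.

# Illustrative sketch of the digit-selection mapping (assumed names, not game code).
DIGIT_FOR_DIRECTION = {
    "left": "thumb",       # confirmed pairing in the description above
    "up_left": "index",    # the remaining pairings are assumptions
    "up": "middle",
    "up_right": "ring",
    "right": "little",
}

def select_digit(direction, action_pressed):
    """Return the digit to press with, or None when no digit is selected."""
    if not action_pressed:
        return None
    return DIGIT_FOR_DIRECTION.get(direction)

assert select_digit("left", True) == "thumb"   # left + action presses with the thumb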
Control echoes the hand-to-controller aspect of the videogame interface in the diegetic space of the visual interface through a downsampled meta-interface. It makes the game interface the constant point of focus, rather than having it disappear to make way for an unrelated feedback visual. This goes against the ideal of interface design, whereby an interface should be so intuitive that it, for all intents and purposes, ‘disappears’. In Control the visual interface will not let you forget that you are manually interfacing with the computer through a hand-to-controller link.
Control’s visual style is in part inspired by early PC gaming graphics, combining the colours from the different 4-colour CGA video modes into a new palette. This reference to early PC gaming is merged with the tape-loader aesthetic of 8-bit computers such as the Commodore 64. Animated raster bars are commonly associated with the anticipation of game loading, but in this case are used as feedback indicators, changing colour in response to the player’s progress. The audio is also kept deliberately lo-fi, combining samples of ZX Spectrum loading sounds with 4-channel loops generated on the Nintendo Game Boy.
By using a low-fidelity reproduction of the hand in the playfield, both visually and in terms of the available control scheme, the game reflects the resolution divide between the analog and digital worlds. In addition to the challenge provided, the increasing button count of the onscreen game controllers is intended to reflect the evolution of game input devices. The final level of Control confronts the user with the speculative ‘OctoPad’ prototype game controller, which exaggerates the complexity of existing devices. The progress a player makes through the game levels is a measure of their own patience and ability to play within a constrained control scheme and increasingly difficult level layouts.
Extensions of a No-Place (文徵明) is a five-channel animation made in response to the Ming dynasty scroll painting ‘Imitating Zhao Bosu’s Latter Ode on the Red Cliff’ as well as the digital animation that is currently presented alongside it at the National Palace Museum (Taipei). I made this work while living in Taipei, where I was able to study the painting for an extended period. My animation combines 3D modeling, green-screen video and photomontage to recreate Wen Zhengming’s 16th-century painting of the Yangtze River. It speculates on the layers of poetry, calligraphy and alcoholic reverie that characterise this particular landscape. My use of digital media drastically alters the visual qualities of the painting in order to highlight the changes that occur when an analogue artwork is translated into a digital medium. I also use this as a metaphor for the shifts that occur when studying a painting from outside its historical and cultural context.
Wen Zhengming’s scroll painting ‘Imitating Zhao Bosu’s Latter Ode on the Red Cliff’ refers to the second of two poems written by Song dynasty poet Su Shi (Dongpo). The landscape painting by Wen Zhengming visually illustrates Su Shi’s journey down the Yangtze River, and is punctuated with a similar deployment of cultural metaphors. Wen Zhengming was responding to Song dynasty painter Zhao Bosu’s painting of the same poem, which introduced the blue-and-green style of painting to this poem as a symbol for the aesthetics and philosophies of the Tang dynasty. In the transparent brushwork of Wen Zhengming, scholars also trace a reference to Yuan dynasty painter Zhao Mengfu, who was likewise credited with using archaic painting styles as a means to comment on contemporary culture and politics. From this cursory description, it is clear that the landscape images within this scroll enact a sort of cultural re-inscription, where the poetry and the paintings are engaged in a process of recreation and reconstruction to form an extended artistic conversation.
My recreation of this same landscape, using 3D modeling and photomontage, sought to continue this conversation of re-creation. The underlying perspective for my study of Wen Zhengming’s painting was that landscape images are artificially constructed environments that comprise layers of accumulated symbols, and that to create a landscape is to enact a process of substitution and reconfiguration of those symbols. The five screens of this animation represent one half of the Wen Zhengming scroll, and were modeled by superimposing a digital image of the painting on top of my mesh, as well as by copying the scroll by eye (as painters work from other paintings). The surface of the mesh was covered with a photographic texture of the white ceramic tiles that are characteristic of many East Asian urban environments, such as Taipei and Hong Kong. The white gridded surface also recalls post-war utopian architects such as Superstudio, and my use of this texture sought to connect the poetic landscape of the scroll to the intellectually conjured landscape of the utopian tradition.
My animated landscape makes a conscious aesthetic shift from Wen Zhengming’s scroll painting and seeks to articulate the levels of mistranslation involved in studying such a historical work. Philosopher and Sinologist Francois Jullien describes the material dependence between the analogue qualities of ink and the spatial indeterminacy of Chinese landscape painting. Jullien translates texts from theorists such as Wang Wei (8th century) and Guo Xi (11th century) to explain the importance of the transparency of ink and the incompleteness of painted forms in revealing the psychological potential of the Chinese landscape. Wang Wei describes forms “as if they were – as if they were not” and Guo Xi writes “if you wish to make the mountain look tall, do not show it in its entirety” to suggest that, according to Jullien, the ability of ink to seamlessly shift between representation and non-representation (or presence and absence), materially realises the philosophy of Chinese landscape painting. The dependence of Chinese literati theory on the phenomenological qualities of ink and brush suggests that the translation of ink and brush works into digital media must have some effect on the meaning system of the image.
Since 2011, the National Palace Museum has commissioned a number of digital animations that merge high-definition scans of scroll paintings with high-quality digitally animated content, and these are often displayed alongside the original works in order to communicate their historical and cultural content. Jullien’s argument would suggest that converting the scroll painting into a digital animation must alter the relationship between its material properties as a visual object and its cultural logic. The fluid relationship between foreground and background must become concrete according to the logic of binary code. Having both a mandate to preserve and a mandate to educate, the museum successfully employs contemporary technology to present historical works to the public; however, in translating a landscape from ink painting to digital animation, it is interesting to ask how the landscape has changed in its spatial and philosophical structure. In 3D modeling the terrain of the Wen Zhengming painting, my gridded surfaces draw attention to the Cartesian environment of 3D animation. The spatial inaccuracy of my recreation highlights the process of translation that we enact when transferring such works between analogue and digital mediums. Having only studied the English translations of Su Dongpo’s poetry, my comprehension of the symbolic relationship between Wen Zhengming’s painting and its poetic references is similarly distorted. My recreation of Wen Zhengming’s scroll was inspired by my appreciation of the cultural conversation it embodies, and was structurally determined by the media and mediations that characterised my interaction with it.
The performative aspect of this work responds to a more general question associated with landscape painting, something along the lines of “What is a landscape for?” Wen Zhengming’s landscape re-performed Su Shi’s expressions by painting images of the poet enacting the experiences described in the poem. After creating my own landscape of cultural resampling, my question became how to inhabit (or ‘animate’) this environment. At the time of making the work, I had set up a makeshift green screen in my apartment, and after hours of modeling and compositing, I would perform actions in front of the green screen that came to comprise the character of The Lost Man, who inhabits this landscape. I performed various forms of exploration, from spatial wandering, to planting a small garden, to meeting a second character, 啊看 (played by Taiwanese artist Nick Kan). The musical performance of The Lost Man was the final act that completed this extended metaphor of cultural karaoke.
AZ: Move and Get Shot is a net-based piece which shows the landscape of the U.S./Mexico border in the state of Arizona through the eyes of six surveillance cameras.
These cameras are part of an online platform created by a group of landowners with properties on the U.S. side of the border. The platform shows the images of six surveillance cameras located in the border territory. The main purpose of this community is to provide the public with raw images of immigrants crossing the border illegally through their lands. Each camera incorporates a motion sensor which triggers the capture of images when it detects the slightest movement in the landscape. These pictures are then sent to a server and displayed directly on the web page.
While the main goal of the landowners is to capture and disseminate photographs of immigrants entering the United States illegally, the camera is programmed to detect and record any kind of movement. By delegating the surveillance to a machine, the original human intention is lost, and the original purpose takes shape as a collection of images which reveal not only immigrants but all kinds of human, animal and natural activity. Therefore, the monitoring action becomes something uncontrollable and potentially meaningless.
The piece is composed of six independent films automatically made from the images captured by each camera. Every 24 hours, a bot, which has been running since 2011, checks whether or not there are new pictures. These new images are then saved to a local server and added algorithmically right after the last frame of the corresponding video. Thus, the films expand and reveal, day by day, the pace and the nature of the movement of the Arizona borderland.
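As a rough illustration of that daily cycle, the sketch below checks one camera page for pictures it has not yet archived and appends each new frame to an ffmpeg-style concat list for the corresponding film. The URL, the file layout and the frame duration are assumptions made for the example only; the project’s actual bot is not documented here.

# Minimal sketch of a once-a-day frame-appending bot (all names are placeholders).
import os, re, urllib.request

CAMERA_URL = "http://example.org/camera_03/"        # hypothetical page listing .jpg files
FRAME_DIR = "frames/camera_03"
FRAME_LIST = os.path.join(FRAME_DIR, "frames.txt")  # consumed later by ffmpeg -f concat

def check_once():
    os.makedirs(FRAME_DIR, exist_ok=True)
    html = urllib.request.urlopen(CAMERA_URL).read().decode("utf-8", "ignore")
    archived = set(os.listdir(FRAME_DIR))
    for name in sorted(set(re.findall(r"[\w-]+\.jpg", html))):
        if name in archived:
            continue                       # only genuinely new pictures are added
        urllib.request.urlretrieve(CAMERA_URL + name, os.path.join(FRAME_DIR, name))
        with open(FRAME_LIST, "a") as f:   # append right after the last existing frame
            f.write(f"file '{name}'\nduration 0.2\n")

# check_once() would be scheduled to run every 24 hours, e.g. with cron.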
The Ouroboros is an ancient symbol of a serpent or dragon eating its own tail. Here something is constantly recreating itself. This symbol has a variety of interpretations and is found represented in a number of ancient cultures, the two oldest being China and Egypt. One interpretation of the Ouroboros is the idea of the eternal return, in which the universe has been recurring forever.
Ouroboros the robotic artwork is an embodiment of this idea. Using various DIY mechanisms and components, this robot extrudes a plastic, coil-like “tail” that winds across the floor. The “tail” is ultimately returned to the robot’s “mouth” as a vacuum and rollers in the machine take in the plastic “tail”, grind it back up, melt it down and re-extrude it as a new coil. The interaction of material, process, variation and machine connects to many other possible interpretations, including ideas from biology, ecology, behavior studies, industrial production and micro-manufacturing.
HDPE, High-Density Polyethylene, is the common material used in everything from milk jugs to toys and cutting boards. With proper collection and resources this material is easily recycled. Instead of using a ready-made filament or plastic pellets, the Ouroboros robot consumes only shredded plastic milk jugs. It will thus only recycle and renew as long as it has its tail or extra shredded recycled plastic to consume. Obvious parallels are drawn here between the symbol of the Ouroboros and the potential of a recycling system.
The tail of this robotic serpent has the capacity to enlarge with each recycled generation by vacuuming in additional plastic source material covering the floor. In addition to the “tail” enlarging, there will be variation in each new iteration of the tail. A form of reincarnation occurs through the constant rebirth of the Ouroboros’s plastic tail. This “tail” material is melted at a safe temperature of 350 degrees Fahrenheit (about 177 °C) or less, at which minimal off-gassing occurs. All heating and shredding components are safely secured inside the “body” of the Ouroboros mechanism. Blow-formed transparent polycarbonate panels provide windows into the body of the serpent robot, through which observers can witness these processes unfolding. Select parts of the robot are also made from recycled cast HDPE using a different process than the extruded tail. These recycled mechanical parts will be cast and fabricated in Daniel Miller’s studio at the University of Iowa.
The processes used in Ouroboros connect to a larger DIY community in which the recycling of plastics into 3D-printing filaments and other components provides a real low-cost alternative for individuals and for production at the local level worldwide. The DIY systems used within the Ouroboros represent a shift in hierarchy from governments and corporations to open-source communities and individuals. This shift has been taking place for the past 20 years, as robotic technologies become ever more accessible at the local level. DIY systems used in the Ouroboros include Arduino controllers, homemade filament extruders, a repurposed vacuum cleaner and various prototype mechanisms. While some of the Ouroboros’s structural components are made with CNC tools, other parts, like the polycarbonate windows, are made with simple electric cooking ovens and compressed air. The Ouroboros challenges ideas behind automation and manufacturing. Robotics has infiltrated every level of manufacturing. Commercial production processes have been key to the ever-evolving bio-political relationship between humans and their natural world. The Ouroboros approaches these ideas from a more intimate and social level, where the micro-manufacturer can change the relationship between machine and body.
One can regard the human animal as not above, separate or other, but rather equal to the world we inhabit. The effects of human technology are woven into the destinies of all beings of this planet. While the Ouroboros directly references the serpent or snake form, its mythology is also very much human. I am interested in the internal function and flow that occurs within the body. The recycling system that takes place within the body of the Ouroboros looks both at ideas of mechanical manufacturing and material processing and at the biological analogues to these processes. Further correlations can also be drawn between this project and the natural recycling system of planet Earth as shown through plate tectonics.
This work can be installed on the floor or on a low platform. It is intended for visitors to walk around the work and to view the inner workings of the mechanism through the polycarbonate windows in its elements. If possible, it would be nice to allow the extruded plastic tail to hang down or trail over a stairwell, staircase or elevated floor. The dimensions of the project can vary with the space, due to the expandable hoses that connect the mechanical systems.
Floor Space Dimensions: 12ft x 12ft (3.65m x 3.65m) is preferred, with a minimum of 8ft x 8ft (2.43m x 2.43m) required.
Preferred Lighting: Spot lighting is desired for this work, but general floodlights would be sufficient. When the project is complete it will have internal light sources in various components.
Power Requirements: This project will need one 100-240 VAC outlet with 10 amps available. A floor outlet is preferred but a wall outlet would be sufficient.
This project is funded with support from an Old Gold Summer Fellowship from the College of Liberal Arts and Sciences and The School of Art and Art History at the University of Iowa.
New Game is a romantic mutiny against the restraints of ubiquitous techno-utopian culture. The common and traditional prescribed role of installation media as mise en scène acquires a new role as an active performer and actant. The game controller becomes a cypher key to bring meaning to other places – wild places, which, as yet, can only be imagined. Meaning flickers on and off like a relay switch or an interrupted signal transmission. The game encourages the performativity of the player through colors, live video feed and theatrical motifs that are an indirect consequence of the mise-en-scène of the hardware and software; the ‘liveness’ encouraged by this approach could be perceived as unruly and un-governed. The artwork opens up standard devices and situations; it explores the notion of a playful dream that exists in the interstices between the real and the fictional. In fact, the game becomes the performer as the player concedes agency to the code. Perhaps you are really a nonplayer in a game controlled by someone or something else; perhaps you are the one who is played.
Unity Game Engine
War Zone: Reproduction of Historical Missile Launch Trajectories Using Google Earth
War Zone explores the military background of mainstream technology. In this video work based on historical facts, the artist reconstitutes three missile trajectories with a subjective camera angle using Google Earth. The work revolves around the ways high technologies are haunted by their military origins. Here we focus on missiles, the ancestors of the space program and of the satellite launches whose images and cartographies we use on a daily basis.
So we follow the trajectory of a V2 rocket, developed by the Nazis and launched from the Netherlands towards London in 1945; then a Scud rocket launched from Kuwait towards Saudi Arabia during the Gulf War; and finally an air-to-ground missile launched from Israel into Gaza in 2014. Riding our missile in the style of Dr. Strangelove, we find ourselves wondering about the evolution of these instruments of death – from the poor precision of a V2 to the surgical strikes of Israel, the country with the largest number of start-ups, particularly in cyber-defence, and where the military is the prime crucible of innovation. This work is based on data gathered from newspaper archives, interviews, blogs and historical records.
Ivan Murit: Software development
Paisaje para una persona (Landscape for a Person) traces a path through different locations in a sequence of images: places as the backdrop for a story that slips away from its possible representation, building an invisible layer of meaning between the image and the story. This video was constructed from material filmed on Google Street View and then edited together with audio interviews of people caught in conflicts of transit or deportation.
“I am he as you are he as you are me and we are all together.” — John Lennon, lyrics in The Beatles song “I am the Walrus”, 1967.
Walrus (2011 – 2016) is an interactive installation consisting of an augmented mirror that only reflects the face of its users, while supplanting the interactor’s face with a previously recorded one in the same position and with a similar facial expression. This supplantation occurs in every frame, using a different person’s face each time.
Walrus proposes a reflection on identity and self-perception, while also commenting on the current conversations about interaction, technology, and surveillance. Mirrors have always played an important role in culture. With a history dating back 8000 years, not only have they always been present in art and in myths, but they can also be thought of as the first interactive artworks. Mirrors have always been very popular in new media art, and new technologies are continually used to create new kinds of mirrors.
With Walrus I attempt to leverage this “design pattern” by creating a mirror that proposes a self-contradicting interaction: a mirror that infers a human essence, a common trait of all people (amalgamating all its users into a continuous stream of visual feedback), while at the same time maintaining its most basic behaviour. Mirrors, in Borges’ words, trouble the depths. They exist in the reflection of light, outside of the image, simultaneously expanding a scene and constituting its border, its limit. Mirrors are interactive, yet blind; however, Walrus’s manipulation of the image proposes a mirror that partially sees us, understands us, and explicitly constructs a representation of us. An idea of us.
The installation also inserts the dynamics of computational ubiquity and surveillance, in which our identity is partially defined by the objectification of being “the surveilled”. It is in this reflection that the installation proposes a dialogue. In Flusser’s words, “the task of a philosophy of photography is to reflect upon this possibility of freedom – and thus its significance – in a world dominated by apparatuses”.
“Walrus” uses a Microsoft Kinect sensor, a computer, a screen and an oval-shaped picture frame. It uses the sensor to track the interactor’s head, and Microsoft’s Face Tracker to locate the face and extract some gestural features: mouth shape, eyebrow position, eye opening, etc.
The computer stores each new face and its associated data into a database, and returns an existing one from the database. The database is organised as a hash table, with similar faces stored under the same hash entries. Face similarity is defined by an L∞ norm of the head rotation, plus similar gestural features.
When a new face is entered, it is put into the hash entry with the most similar poses. To avoid running out of storage, each hash entry has a fixed maximum size, and an existing element is randomly deleted when the limit is reached. This can be considered a cheap way of finding faces similar to the input through hashing.
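Read as pseudocode, the storage scheme amounts to a plain dictionary keyed by a quantised head pose. The sketch below is an illustrative approximation only: the bucket granularity, the feature names and the maximum entry size are assumptions, not values from the installation.

# Illustrative pose-hashing sketch (granularity and names are assumed).
import random

BUCKET = 15.0          # degrees per rotation bucket (assumed)
MAX_PER_ENTRY = 200    # fixed maximum size of each hash entry (assumed)
faces = {}             # hash table: pose key -> list of stored face images

def pose_key(yaw, pitch, roll, gesture):
    # Quantising each rotation angle into equal buckets approximates grouping
    # faces whose head rotations are close in the L-infinity sense.
    return (round(yaw / BUCKET), round(pitch / BUCKET), round(roll / BUCKET), gesture)

def swap_face(new_face, yaw, pitch, roll, gesture):
    """Store the incoming face; return a previously recorded one with a similar pose."""
    entry = faces.setdefault(pose_key(yaw, pitch, roll, gesture), [])
    returned = random.choice(entry) if entry else new_face
    if len(entry) >= MAX_PER_ENTRY:
        entry.pop(random.randrange(len(entry)))   # random eviction when the entry is full
    entry.append(new_face)
    return returned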
Moving Objects is a kinetic sculpture consisting of motors, rings, and silicone wire that approaches the visual phenomena between chaos and order and challenges the viewer to constantly seek patterns and principles. The technology behind the art – the machines, the wiring, the motors – is integral to Pe Lang’s piece, bringing the audience in touch with the technical processes that are so often hidden in our high-tech society.
Light Catcher is a dialogue between sound and light. The installation is composed of numerous optical fibres that interact with sound. Sound is the driving force; light generates the space. An electric spark ignites the reaction. Light is entrapped inside the optical fibres, reacting only to sound stimulus. Acting like a catcher, the installation absorbs and emits light. This results in an emergent dance of light from the fibres and of reflections on the surrounding surfaces. White light dominates the installation, but shades of other colors exist to highlight an audio event, like the sounding French horn playing a minor-second interval. The visitor is invited to walk around the installation and explore its different visual and audio aspects.
The project was created using C++ for designing the light choreography. The sound was generated in Pure Data along with acoustic instruments that were mixed together. In the final installation, a Raspberry Pi hidden inside plays the audio and controls the LEDs using Python.
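By way of illustration only, a Raspberry Pi setup in this spirit might play the mixed audio file and drive a PWM-dimmed LED channel from the signal’s short-term loudness. Everything below (the file name, the GPIO pin, a single LED channel, a 16-bit WAV) is an assumption for the sketch; the installation’s own control code is not reproduced here.

# Hedged sketch: audio playback plus sound-driven LED brightness on a Raspberry Pi.
import array, time, wave
import pygame
import RPi.GPIO as GPIO

AUDIO = "lightcatcher.wav"    # placeholder file name
LED_PIN = 18                  # assumed wiring: one PWM-capable pin

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)
led = GPIO.PWM(LED_PIN, 200)  # 200 Hz PWM; duty cycle acts as brightness
led.start(0)

pygame.mixer.init()
pygame.mixer.music.load(AUDIO)
pygame.mixer.music.play()     # sound is the driving force...

w = wave.open(AUDIO, "rb")
chunk = w.getframerate() // 20                   # 50 ms analysis windows
while True:
    frames = w.readframes(chunk)
    if not frames:
        break
    samples = array.array("h", frames)           # assumes 16-bit samples
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    led.ChangeDutyCycle(min(100.0, rms / 327))   # ...light follows the loudness
    time.sleep(0.05)

led.stop()
GPIO.cleanup()
w.close()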
One of the oldest forms of colour articulation in complex dynamic performance media is the Cantonese Opera. All aspects of costume, texture, mask and form are geared together to create a total work of art. Characters are articulated through elaborate colour coding, communicating mental state, behaviour, status and role. Such complex narrative materiality and communication is comparable with the transdisciplinary approach of cybernetics – one of constructed communication feedback loops in closed systems. Taking the local specificity of the opera’s rich history, we speculate on a new form of storytelling and non-linear performance. In the opera, characters are based on notated behaviours and roles, articulated through elaborate costumes and colour coding; the colours are static, and so is the narration.
MASK (臉譜) takes this static codification and translates the narration and character development into a dynamic, reactive and cybernetic system of conversation. The work is the design of a continuous conversation between two protagonists of the Chinese Opera. It takes the form of an interactive installation in which two newly designed 3D-printed masks are augmented by means of chemical and biological agents as well as 3D reactive projection mapping. Each of the augmentations is reactive and environmentally controlled. Each of the reagents has a particular time span in which to change colour, from the instant change of a projection map and the intermediate reaction of polychromatic pigments to the slow change of bioluminescent bacteria or slow-growing crystallisations. The mask here is object and condition at the same time. The reaction of the mask is recorded, played back and looped, creating a second layer of influencing input that suggests the layered repetition in the opera, stimulating and inhibiting the crystalline growth.
The 3D projection mapping, which takes as its source the recordings of the biological augmentation as well as the polychromic stimuli, allows the audience to interactively stimulate the mask’s behaviour, up to the point of chemically altering its shape through the precipitation of aluminium potassium sulphate – the curing of the mask in its disfiguration. The narration culminates and form is lost to chemical augmentation and change. The extreme time spans, from the instantaneous colour changing of the pixels in the LED matrix of the projector to the long curing of the mask through crystallisation, can only ever be seen as a process and never as a whole – similar to the momentary perception of the enacted Opera.
Well-formed breasts…coincidence? is a video performance created for the camera. The work makes reference to the power of symbols (archetypes) that are at the same time shapes and, consequently, numbers. These symbols, as explained by C. G. Jung, have dominated our lives because they are part of our collective unconscious, determining what is beautiful and good and what is ugly and bad. Thus, religions and governments have used them throughout history in order to create resounding effects that manipulate society. This means that strong or weird physical experiences can turn into aesthetic and pleasant experiences (images) simply because they follow powerful, inherited patterns and symbols. The video talks about the most important numeric sequence, the Fibonacci sequence, and about the master number par excellence, the number 11, which is interpreted as the resonance of oneness. Through this work, which is part of a specific communication project, Jai Du wants to reinforce the importance of unifying the scientific and spiritual worlds. Poetically it can be explained as follows:
Maybe you don’t remember this number sequence maybe even consciously you don’t know what it means search on Google! The name is Fibonacci sequence and it has nothing to do with esoterism remember it not only if you want to win the Euromillions it can also help you to understand everything around you: the DNA defining you the Egypt’s pyramids the terrorist attacks of 11S and 11M the power of British monarchy the black holes and even the hole of the Bic pen also why you have to wake up every morning to work at an office that you don’t like at all this sequence explains everything! because it is part of the collective subconscious and, obviously, the abstract art has never existed because God exists.
A contemporary update and creative reworking of Albert Camus’s “The Myth of Sisyphus”, S for Sisyphus is an experimental video game project aiming to explore action, meaning, and human intentionality via a simple animated and playable experience. It consists of two related but standalone versions: S for Sisyphus ¬ (playable version) and S for Sisyphus ∞ (machinima version), each articulating the theme of the work with a slightly different accent and experience.
In the playable version, the player is confronted with a cube and is left with the choice of a single action in the game world: pushing the cube forward into an endless horizon, or else ending the game by taking the character’s life. The player can experience the action, as it unfolds, from four different perspectives: the Player, the Cube, the Other and the Anonymous God.
An alternate and extended version, S for Sisyphus ∞ is a live animated film that complements the game version of the project. The major difference between the “game” version and the “film” version is that no human player is involved in the latter. Instead the game is “played” by the computer software automatically in real-time.
Surveillance is an impromptu performance, created in cooperation with Sharky (red) and George (black). During the performance, Sharky & George are tracked by a real-time analysis system, which shows the speed at which they are moving and the distance they have moved. In the performance, Sharky & George relay to each other the latest news from the BBC News Service in real time; the news updates every 10 seconds. The real-time analysis system keeps tracking Sharky & George’s movements and projects the processed information back onto them in real time.
Graphite Piano is a wooden sound instrument shaped to resemble a piano. It is built with 3 sets of 8 pencils, each pencil with a different darkness ranging from B to 8B. The instrument also has 24 keys, one per pencil, with each pencil placed above one key. The keys have been drawn on the wood with various shades of pencil; the shade of each key is the same as the shade of the pencil that corresponds to it. The three sets of pencils are placed at three different distances from the keys. Therefore, the same pencil shade will generate different tones, depending on its distance from the key.
Musicians can play the instrument by pressing the keys. This action brings the pencil into contact with the key, which completes the sound-oscillator circuit and generates the tone. The pencil and its marks thus work as a sound switch. This idea is related to other instruments and sound-generating devices, including Louis Bertrand Castel’s 18th-century Clavecin pour les yeux (Ocular Harpsichord) (Hankin 1995, 73), as well as the Theremin, created by Leon Theremin, and the Moog synthesizer by Robert Arthur Moog (and others), both made in the 20th century. The main mechanism of playing the Moog synthesizer is based on flipping on/off switches, just as “Graphite Piano” is played by pressing keys that turn the sound circuit on and off. Musicians can also play “Graphite Piano” by waving their hands above the keys; similarly, sound generation in the Theremin is based on the player’s interaction with an electromagnetic field.
Using the pencil as one of the electronic components for generating sound seems absurd, yet it makes complete sense because of the pencil’s physical properties. The pencil core is made of graphite, which enables the pencil to serve as an electrical conductor. It is also a tool traditionally associated with writing and drawing. The physicality and the function of the pencil make it an ideal element for my sound sculpture. This work expresses my interest in the physical aspects of sound and language.
In this 3D animated video, classical sculptures in the museum participate in a destructive performance triggered by software glitches, distortions, and misused simulation, turning the space into a madhouse theater with both classical beauty and digital chaos. It attempts to connect the sense of space, history, turmoil, and transformation to the idea of technological sublime in a twisted way, breaking the restrictive association between advanced graphics technology and high-end cinema production while attempting to put classical art into a contemporary digital aesthetic context.
The work tries to engage with our cultural heritage in a disruptive way, occupying the static space of a cultural institution and turning it into a chaotic digital performance. In the computer, each of the sculptures has a triple identity: a human or animal figure, a sculptural ornament, and digital data. The original sculptures were crafted by the old masters with care, treasured in their time, broken and tossed in historical turmoil, and preserved and admired again. The original delicacy and the torment of time together created a new sensibility. In the video, they are destroyed not only as ornaments, but also as bodies with cultural associations, which resonates with the Greek idea of the tragic and the sublime.
It is also an attempt to reimagine the aesthetic directions we can take using high-end visual effects technologies. Today, moving images are heavily computer generated. The shiny cars or glittering beer in TV commercials are 3D models rendered photo-realistically, not to mention all the collapsing and explosions in Hollywood blockbusters. Taking the tools that big visual effects studios use, but working alone as an artist instead of with a team of hundreds, I try to reclaim the independence of the technology. Most of the “effects” are created through errors or misused simulation, celebrating rather than concealing what is “wrong” and “mistaken”.
Every once in a while, a new technology makes the world a better place without demanding some sort of major trade-off (think polio vaccinations). More often, however, our advancements require that society shift to accommodate their less desirable effects. ThreeYearContract calls our attention to the impossible promise of the smartphone: simultaneous presence both here and somewhere else. TYC is made up of two identical units, each appearing to be no more than a simple block of wood. When an audience member presses their forehead against one of these blocks, it places a cellular phone call to its counterpart. This call can be answered when a second participant adopts the same peculiar stance as the first. The work’s absurd interactive requirement forces both its users to commit fully to the conversation at hand, and encourages reflection on a growing culture of tele-absence.
This artwork contemplates the screen and its origins. The moving image’s preference for the clear screen ignores its historical roots in translucent organic materials. Naturally-occurring translucent materials (dove feathers, fur, reptile skin, seeds, and seashells) were selected for their unique qualities to diffuse light. Embedded in organic resins, the materials’ variations in texture, color, and density affect the video playing on the commercial-grade LED grids behind them in custom-made raw steel frames.
The relationship between the video image and the screen is key to the piece – nature acts as an organic lens and filter for a digital image. The light sculpture explores both mediated organics and translucency’s ability to create both a material and a discursive surface, looking at and looking through. Using a simple walk cycle, a foundation of animation, the sculpture shows a man pacing, trapped between a mediated environment and a constructed nature.
The work was developed concurrently with the Sustainable Cinema series of five public sculptures and provides another perspective on cinema’s relationship with nature. The five machines created the moving image with natural force; this work considers the origins of the moving-image screen. In each, cinema’s beginnings are reimagined by designing an alternate fictional history that maintains the original natural power systems and early organic surfaces. Both works dream of a world where the moving image was not appropriated by the industrial and digital ages. Media archaeology and new media art are placed in a new, co-dependent relationship with an active natural environment once again.
Hiatus [gap, break, void] is an installation in which every visitor’s actuation reveals a dark patch in the light field of a light object that otherwise appears clean and functional. The light object is installed at the ceiling, where 16 pull cords of various lengths and positions are applied to 18 identical shades positioned in a white grid gadget. The outgoing light is tempered warm white, which is generally associated with comfortable impressions. That impression is unsettled by a cold, manufactory-like design and by the malfunctioning of the light performance, which becomes a constant part of the room situation. The dark patch vanishes for a moment but then returns falteringly, in a different shape, each time a pull cord is actuated. The programmed gaps in the matrix are activated randomly by pull switches. The pull cords are adjusted according to the spatial conditions, so that some of them touch the ground, some are too short to do so and one is of excessive length. A leading motif of the setup of the room installation is the set of recurring forms discovered in the pull cords, the cones that complete them, the shades, the framed misprints and the diverging dark patches.
Commonly, hiatus describes a short pause in which nothing happens or a space where something is missing. On the other hand, I understand it as a reference to the state of in-between. For the setup of the room installation I chose different recurring forms and used a set of rules to create a clean and functional impression, which is then questioned and contradicted by the scattering, faltering, non-functional light performance and by the soothing variety of pull cords. The answer to an unambiguous signal, evoked by the use of pull switches, triggers a diffuse and edge-transcending reaction of the light. The outcome of this performative human–object interaction can be read as a dusty answer or can open up room for reflection. The framed pictures of reproduced misprints are taken from daily work life. The diffuseness and the malfunction of the light object are a kind of disrupting “noise”, just as the misprints are a product of noise, too. That correlates with the noise that is attributed to signals in modern communication technologies. It is in fact an attribute of life itself. Malfunctions or noise can be disturbing or empowering; they can take a stand against unambiguity. The installation was built in an old customhouse and asks about the sense and senselessness of daily human actions.
“Heraclitus says that everything changes and nothing remains, and by illustrating the flow of the river he says that you cannot step twice into the same river,” according to Socrates, after Plato (Cratylus). After the conquest of Ionia by the Persians in the middle of the 6th century B.C., many Greeks emigrated to the western colonies, to the so-called Great Greece (the southern part of the Italian Peninsula and part of Sicily), with philosophers such as Xenophanes and Pythagoras amongst them. Xenophanes was the forefather of the Eleatic School of Philosophy and Pythagoras was the progenitor of the Pythagorean Union. For the Pythagoreans the most important substance of the universe was not matter, but the form it acquires through the inscribed information. Driven by this assumption they discovered the harmony of the world (cosmos), associating it with music. The word cosmos (κόσμος) was applied to the Universe by the Pythagoreans, and it means harmony and order. One of their tools for learning philosophy was the monochord – a musical instrument with the help of which the Pythagoreans discovered the harmonic tones. The most significant supporter of Pythagoreanism was Plato. He developed the concept of the idea (ιδέα) as a matrix of all objects of the same kind existing in the material world. He considered the world of ideas as real and the material world only as a copy shaped by it (as can be seen in the well-known parable of the cave). Aristotle saw matter (ύλη) as a host for form (μορφή), which in his understanding was the information inscribed in the initially unshaped matter. For the Pythagoreans (numbers) and Plato (ideas) the substance (ουσία) was the information itself, while for Aristotle it had a dual nature, consisting of form (information) and matter (host). The word information comes from the Latin word informare, which means applying form, shaping.
The analogous word in the Greek language, μόρφοση, stands for education. Considering the etymology of that notion, a broad definition of information would be: shaping anything, material or non-material, in a way that it can later be extracted and read by a receiver. The dualism of substance (ουσία), consisting of unshaped matter (ύλη) and information as form (μορφή), was introduced by Aristotle. In physics (φυσική – physics, φύση – nature) there is the notion of entropy, which can be described informally as a measure of disorder. According to the second law of thermodynamics, the entropy in a closed system always proceeds towards the state of equilibrium, that is, a configuration of maximum entropy. There is a singularity in such a system, called life. Life decreases entropy and produces information in its place. In nature there is a common process associated with life: the endless replication of information for the purpose of its propagation and preservation (DNA). According to the second law of thermodynamics, life is a singularity; it is like a time-reversed process. The word history comes from the Greek word ιστορία, which comes from the word ίστωρ (witness, critic), which in turn comes from the earlier form ίδτωρ, a combination of ιδ (from οίδα or είδα, I know) and ίστμι (stream, record). Contemporary concepts of space and time classify time as another spatial dimension. In physics there are still unresolved questions about the nature of gravitation and magnetism, as well as paradoxes of time-space, e.g.: while time travel is theoretically possible, what would happen if we went back and changed some properties of the past? Would this change the course of history?
This artwork presents the idea of a turbulent time-space that is constantly changing its form, where it is not possible to go back or forward to a place that existed or will exist, because the target is never the same as it was in the past or as it will be in the future. A section of a four-dimensional object generates a three-dimensional form. By moving the section’s location steadily in time, the resulting form will be subject to change depending on the properties of the four-dimensional object and will give the impression of motion. If the parent object is dynamic, re-sectioning a previously cut (“past”) location will give a different result. By analogy, the place of a future section will produce a different image from the one generated by the assumed “present” section moved steadily in time. The future and the past generated from the present point of view are different from those generated by the “present” section moving in time. The projection “You will not enter twice into the same river” presents this idea through animation, on a model reduced by one spatial dimension.
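A small numerical analogue can make this reasoning concrete. In the sketch below (reduced, like the projection, by one spatial dimension), an invented time-varying 3D scalar field stands in for the dynamic parent object, and a plane whose position advances steadily with time plays the role of the moving section; revisiting a previously cut location later no longer reproduces the earlier image. The field is an arbitrary stand-in, not the work’s actual model.

# Illustrative sketch: sectioning a dynamic higher-dimensional object over time.
import numpy as np

x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))

def field(z, t):
    """A dynamic 'parent' object: its values at height z keep changing as t advances."""
    return np.sin(6 * (x**2 + y**2) - 3 * z + 2 * t)

def present_section(t):
    """The 'present' section: the cutting plane moves steadily with time (z = 0.5 t)."""
    return field(z=0.5 * t, t=t)

frame_then = present_section(1.0)             # the slice as it looked at t = 1
frame_revisited = field(z=0.5 * 1.0, t=4.0)   # returning later to the same place
print(np.allclose(frame_then, frame_revisited))   # False: the "river" has changed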
Animation is dynamic, architecture is static. Animation renders on a flat surface, architecture is three-dimensional. Animation requires a medium, architecture is medium and content at once. The projected artwork fuses the inherent qualities of animation with those of architecture. It utilizes architecture as a canvas for animation and utilizes animation as an emphasis of hidden dynamics in architecture. It virtually breaks walls, transforms concrete surfaces into human faces and lets geometry dance. It dissolves the rigidness of architecture by mapping animated projections on its surface and embodies animated shapes through physical structures.
33 1/3 Revolutions is a game-art installation that deals with Hong Kong’s record store culture and with vinyl records as objects of tangible heritage and cross-cultural importance. A single-player computer game built with the Unity3D editor will be made accessible to festival visitors, as well as a browser game that can be played online. The game presents a fictitious urban environment constructed from pictures taken in Hong Kong record stores. The level consists of a vinyl hero, a ‘digitization spaceship’, buildings and huge vinyl records that are larger than (wo)man-sized and invite the player to start and stop the respective music contained on those records.
The cultural history of 20th-century apparatuses contains two machines that became iconic for youth culture, film, and Western civilization, and both use a revolving device to achieve their functionality: the revolver (whose predecessors date back to the 16th century, but which is popularly known as the “colt” of Western movies) and the record player. Both of these apparatuses (let’s forget about the third potential sibling here: the KODAK carousel slide projector) meet in a game that celebrates turntables and vinyl music and equips the player with a revolver to fight against the dematerialization of music. The revolver-equipped music connoisseur has to make his way through the streets of an urban environment that contains famous Hong Kong record stores like the one of Paul Au at 239 Cheung Sha Wan Road in Sham Shui Po or that of Ho Hing Ming on Lamma Island.
The player is attacked by a digitization spaceship, but he or she can shoot back, can duck and cover, and can play the rough and warm sounds of the analog vinyls that are a threat to the digitization spaceship. Vinyl records had almost completely disappeared from the shops and homes of Hong Kong for more than two decades before the inhabitants of Hong Kong once more fell in love with vinyl. It is interesting to see how certain cult shops keep a nostalgic collection of outmoded bands like The Who, Bob Marley’s Wailers and the like. The unpredicted phenomenon of a vinyl revival is not unique to Hong Kong; rather, it mirrors a global tendency that shows a 400% increase in global vinyl sales between 2007 and 2014. However, what is typical of Hong Kong is the radical enthusiasm for media innovations, which then quickly drop into oblivion. In regard to vinyl this has been described by record collector Paul Au: “Hong Kong people like to follow trends. They listen to whatever is popular, so they throw away a lot of old things. In the 80s, they threw away all the things from the 60s. They cannot stick to one lifestyle for long and they also don’t give many genres a chance. Back in ‘83, when Metallica started getting famous, local record dealers imported only 50 copies of their first album. Only 50 copies for the whole fucking colony! What kind of a city is this?” (hk-magazine.com/city-living/article/vinyl-hero-storeowner-paul-au)
The game is an open-exploration single-user computer game that can be played as a download on Apple iOS, Linux or Windows machines and also as a browser game in conventional web browsers like Safari, Internet Explorer, Firefox, etc. The game’s level has been built in Unity3D and exported for the respective platforms. The game has an expected playing time of 8 to 30 minutes per player and can be restarted at any time or continued by a successor player. In order to make the game accessible to as many players as possible in crowded festival conditions, an automated change of players will be facilitated: a message of the type “Please hand over the system to new players… or continue” will be displayed when activated by the system operators.
Tunnel takes Edwin Abbott’s novel Flatland: A Romance of Many Dimensions as its inspiration. It is a story centered on a two-dimensional geometric figure. A square. A Mr. Square, who occupies a land of flatness, a land of only length and width. Through a series of encounters with a higher-dimensional being, a three-dimensional sphere, he discovers a greater reality outside of his own limited perception. At first he refuses to believe, but he comes to understand, despite his limitation, the concept of a third dimension. After Mr. Square’s mind has been opened to a new third dimension, he dreams of a visit to a one-dimensional world (Lineland), where he in turn is the higher being. The realm is inhabited by single-dimensional lines, and he attempts to convince its monarch of a second dimension, but is unable to do so. Dejected, he travels about and meets a singular point. “Can you consider having length?” Mr. Square asked. “Length? What humorous thoughts I come up with,” the point responded. The Sphere came and explained to Mr. Square that he had arrived in Pointland, and that the points here do not acknowledge dimensions at all: “You see, how little your words have done. So far, as the point understands your words at all, he accepts them as his own – for he cannot conceive of any other except himself – and plumes himself upon the variety of Its own thought, as if it were an instance of his own creative power. Let us leave this god of Pointland to the ignorant fruition of his omnipresence and omniscience: Nothing that you or I can do can rescue him from his self-satisfaction.” Flatland is an allegory of idealism. Through its examination of the view of multiple dimensions, it offers an insightful metaphor for the human being’s existential relationship to the larger cosmos. This is where Fu’s work invites and engages its viewers.
As in Flatland, Fu’s virtually rendered work guides the viewer into a metaphorical higher-dimensional world, where the artwork becomes a physical symbol for the viewer’s physical perception in relation to the greater reality, and the installation, posing as a limitation to the viewer’s perception, is a port to that world. One is either deterred or offended by the limitation presented by the installation, or one accepts his or her limitation and humbly explores what can be seen.
Tunnel is also inspired by one particular experience Fu had going through a small mountainside tunnel on a trip to Arizona. “There were five windows carved out of one side of the rock tunnel, giving glimpses of an extraordinary expanse of the Arizona landscape, in an otherwise pitch-black tunnel. The guide explained that there are six windows, each increasing in size as you traveled. I counted as we went along. One. Two. Three. Four. Five. Where was the sixth? As we came to the mouth of the tunnel, the guide said, this is the sixth window.” Fu took that as a metaphor for the human understanding of the larger physical, spiritual, and metaphysical. In that, she is interested in the nature and significance of the reveal and of expectation.
The following quote from Woody Vasulka’s Notes on Installation summarizes the characteristic of the digital space revealed in Fu’s Experimental 3-D animation and installation:
“…digital space has no generic method for looking at the world the way that a camera does through its pinhole/lens apparatus. Digital space is constructed space, in which each component, aspect, concept, and surface must be defined mathematically. At the same time, the world inside a computer is but a model of reality as if seen through the eye of a synthetic camera, inseparable from the tradition of film. Yet, in this context, no viewpoint is ever discarded; the internal space is open to a continuous rearrangement, and access to a selection of views and narrative vectors is infinite, not only to the author but also, with the use of certain strategies, to the viewer. Once the author constructs and organizes a digital space, the viewer can enter into a narrative relationship with it. A shot in film indicates a discrete viewpoint. Its narrative purpose is to eliminate other possible views. In contrast, the world in the computer contains the infinity of undivided space, undissected by the viewpoints of narrative progression. In the world of the machine, all sets of narrative vectors are offered in an equal, non-hierarchical way. The machine is indifferent to the psychological conditioning of a viewpoint. All coordinates of space are always present and available to the principles of selected observation.”
Fu’s animations as a whole reference C. D. Friedrich’s painting “Wanderer above the Sea of Fog”, on account of the emotional response of the contemplative figure encountering both physical and metaphysical infinity, which is also a major concern in traditional Chinese landscape painting. With a former painter’s sensibility, she approaches the subject of the sublime using topographical, computer-rendered abstraction set on a timeline. The animation projected into space becomes a necessary physical metaphor for the discourse of human physical perception. This invites the viewer to physically and mentally enter a liminal, Gordon Matta-Clark-like interior within a digitally constructed space, where the viewer’s body is motivated to expand its perception, but its physical ability to perceive all that is potentially visible is limited. The limitless virtual world entices and calls, but the physical fights against it. Like Friedrich’s painting, her abstraction, as a frame and opening onto another world and experience, invites the viewer to look into the virtual landscape. “Tunnel” continues Fu’s formal and conceptual exploration. The piece functions as a window into a parallel dimension that stimulates an awareness of both consciousness and space, extending out from the pictorial and expanding into the land of virtual reality.
Jane is a dialogue for two actors that you can change throughout while maintaining credible meaning. There is no definitive version, no director’s cut; it is rather the sum of its manifold possibilities. The work in many ways belongs to the theatre, with its black box setting, and it probably owes more to the work of Samuel Beckett than to anyone else. At its strange heart the piece is a writing automaton: it chooses what the algorithms decide for it, set against the carefully calculated script.
The situation is absurd. John and Jake talk about a woman they share called Jane. Both, more than coincidentally, are married to a woman called Jill. The men exhibit no rivalry whatsoever over Jane. They know she belongs to everyone and no one. Jane is simultaneously a daydream and a nightmare. Shape-shifter extraordinaire, she is the constantly mutating remainder of male desire. Whatever men crave and fear in the female sex, Jane exemplifies and amplifies. Jane herself knows full well that she is the empty heart of desire for she is the absence that gives birth to desire itself. In a very concrete sense, Jane could only be expressed in something as fluid and pliable as the work that takes her name.
John and Jake, too, are rather ambivalent. They can be read as individuals or as the same person separated by the attrition of time. Taken as individuals, John is the older, more jaundiced and stoic of the pair. He is also probably happier, or at least more accepting of his marriage and its inevitable compromises. Jake is the more idealistic and yet the more troubled too. He bears the burden of a sexual ambiguity, both in himself and in the troubled relation his wife has with her father. Originally I had in fact planned a companion piece called ‘James’, in which the wives called Jill would similarly fantasize and fear. There I was also entertaining the possibility of the intriguing male-to-female interweavings that could ensue.
Jane is also about itself, or rather about the types of story that something like Jane can tell. I tried to get the most out of the line-by-line granularity of change by investing the alternatives with as much narrative content as they could hold. One way was to use well-known films in the conversation ‘Old Films, New Endings’ and thereby leverage, extend and subvert narratives already known to many. The conversation ‘Bedtime Stories’ does much the same, and harkens back to our childhood need for night-time narrative that, like cinema, unfolds itself in the dark. In ‘Filmic Fantasies’, I evoke actors throughout and use their names to conjure the clouds of narrative allusion associated with each. And then there is Jane. Of her occupation we know that it was once something cinematic: director, actress, editor, and auteur.
In other interactive stories, the varieties of choose-your-own-adventure, the main narrative branches at key decision points, and progression then leads to a normally limited set of outcomes. I eschew this approach and instead use variants with localized polyvalence. This creates a cumulative field of multiplying nuance, where one change can directly or indirectly affect the rest. Alternatively, I also allow for the absence of any interaction, as you would normally expect in the passively received medium of film. You can simply auto-generate a film and play it without changing a single line or camera angle; you can dictate the script or be dictated to.
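To make that contrast concrete, here is a minimal sketch of the two modes in Python, offered purely as illustration; the data model, the sample lines and the function names are assumptions, not the actual script or engine behind Jane. Each line of dialogue is treated as a slot holding interchangeable alternatives, so a change stays local instead of pruning branches of a story tree.

import random

# Hypothetical data model: the script is an ordered list of slots, each
# holding interchangeable alternatives for one line of dialogue. Unlike a
# choose-your-own-adventure tree, choosing one alternative never closes
# off the others; every slot stays open to later substitution.
script = [
    ("John", ["Jane phoned again last night.",
              "Jane never phones, does she?"]),
    ("Jake", ["She promised she would write.",
              "She promised nothing, as usual."]),
    ("John", ["Jill wouldn't approve.",
              "Jill would understand, I think."]),
]

def auto_generate(script, rng=random):
    """Passive mode: pick one variant per slot, like a film playing itself."""
    return [(speaker, rng.choice(variants)) for speaker, variants in script]

def swap_line(cut, script, index, variant):
    """Interactive mode: re-choose a single slot without touching the rest."""
    speaker, variants = script[index]
    cut = list(cut)
    cut[index] = (speaker, variants[variant])
    return cut

if __name__ == "__main__":
    cut = auto_generate(script)          # be dictated to
    cut = swap_line(cut, script, 1, 1)   # or dictate one line, locally
    for speaker, line in cut:
        print(f"{speaker}: {line}")

The point of the sketch is only the shape of the interaction: in the passive mode every slot is drawn once and the result simply plays; in the interactive mode a single slot is re-chosen while the rest of the cut stays as it was.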
All this does mean that the narratives I produce are less directed than those of other approaches. The films you see in Jane are therefore more like vignettes than conventional stories that end in a tidy little knot. They are the equivalents of the Groundhog Day conversations we engage in daily with others or with ourselves, and upon which we gently improvise in a meagre exercise of will.
Il ne reste plus que l’attente (Wait and See) is a piece of software linked to the web and built with a video game engine (Unity 3D). Every 15 minutes the software checks Twitter, via its search query, for occurrences of a list of expressions. Those expressions combine words from the lexical field of liquidity with terms usually used in finance (liquid, stream, wave, flow, finance, exchange, management, market, etc.). The more often those expressions have appeared on Twitter, the more agitated the sea seems; conversely, the less the words are used, the calmer the sea looks. The program acts as a barometer of the discourse about liquidity and finance on the internet. As a witness to the speech that today reactivates a collective image linked to water, it questions the figure of the ocean as the new paradigm of our digital society: a liquid world of many streams where everything communicates, and in which we evolve without being able to embrace all of its complexity and inner mechanisms.
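For readers curious about the mechanics, here is a minimal sketch of that barometer loop, written in Python purely for illustration; the piece itself runs inside Unity 3D, and the expression list, the polling stub and the mapping to wave agitation below are assumptions rather than the work’s actual code.

import random
import time

# Lexical field described in the work: water terms crossed with finance terms.
WATER = ["liquid", "stream", "wave", "flow"]
FINANCE = ["finance", "exchange", "management", "market"]
EXPRESSIONS = [f"{w} {f}" for w in WATER for f in FINANCE]

POLL_INTERVAL = 15 * 60  # the piece polls every 15 minutes

def count_occurrences(expression):
    """Placeholder for the Twitter search query used by the piece.
    A real implementation would query the Twitter search API here;
    this stub returns a random count so the sketch runs on its own."""
    return random.randint(0, 50)

def sea_agitation(counts, ceiling=200):
    """Map the total number of hits to a 0..1 agitation value:
    more talk of liquidity and finance means a rougher sea."""
    return min(sum(counts) / ceiling, 1.0)

def run_barometer():
    while True:
        counts = [count_occurrences(e) for e in EXPRESSIONS]
        agitation = sea_agitation(counts)
        # In the installation this value would drive the parameters of the
        # rendered ocean (amplitude, wind, choppiness) inside the engine.
        print(f"agitation: {agitation:.2f}")
        time.sleep(POLL_INTERVAL)

if __name__ == "__main__":
    run_barometer()

The essential design is a slow feedback loop: count how often the liquidity-and-finance expressions surface, normalize that count, and let the resulting value set the state of the rendered sea.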