“AI what do I need you for when I have wings to fly?” by Alinta Krauth
Title:
- AI what do I need you for when I have wings to fly?
Artist(s) and People Involved:
Exhibiting Artist(s):
- Alinta Krauth
Symposium:
Artist Statement:
What would a future look like where humans can more simply and directly understand the communication signals of other animals? What would it mean for society, for culture, for religion? For our moral values? For our in-built biases regarding who is labelled conscious and thinking and who isn’t? What would it mean for food production and animal agriculture? What would it mean for our philosophies and humanities? For misguided human exceptionalism? Or would it change nothing at all? Would humans continue to inflict harm and oppression on other species no matter what they might say? We are a species capable of great atrocities, but perhaps, with a little nudge from the right technologies, we could reconsider our relations with other species.

‘AI what do I need you for when I have wings to fly?’ is four separate listening devices developed using machine learning technologies. Accessible publicly via your smartphone, and requiring your microphone to listen in real time, the work listens for specific vocalizations of Australian Magpies, Pied Butcherbirds, Noisy Miners, and Grey-headed Flying Foxes, and interprets them into written words and visual displays on your screen. Two of these species vocalise in ways that are known to science to have clear behavioural and situational contexts, in other words ‘meanings’. However, this project also includes two much more experimental cases, two improvisational songbirds: the butcherbird and the magpie.

This project uses experimental and innovative techniques in the development of audio recognition AI models for reimagining birdsong and vocalisations, and then uses techniques from combinatory and algorithmic poetry to present that data in a human language. Each model represents a long process of: 1) research into the known and observable vocalization and behavioural habits of each species; 2) audio recording fieldwork; 3) choosing strong audio samples; 4) audio classification of vocalization samples into classes to develop unique data corpora for AI learning; 5) ‘teaching’ these classes to an AI; and then, importantly, 6) considering how to present the outcomes via a visual and textual display.
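The statement describes a real-time loop: capture audio from a phone microphone, run it through a trained call-recognition model, and render the recognised call class as generated text on screen. The sketch below is a minimal browser illustration of that loop under stated assumptions only: it uses the Web Audio API for microphone capture, a placeholder classifyFrame function standing in for the trained model (the statement does not name the framework, the model, or its labels), and small invented call-class names and phrase banks to illustrate the combinatory-poetry step. It is not the artist's implementation.

```typescript
// Minimal sketch of the listening loop described in the statement.
// Call classes and phrase banks are hypothetical placeholders.

type CallClass = 'alarm' | 'territorial' | 'contact' | 'unknown';

// Combinatory-poetry fragments per call class: one fragment is drawn at random
// from each bank and joined, so repeated detections produce varied lines.
const phraseBanks: Record<CallClass, string[][]> = {
  alarm:       [['watch', 'look up', 'move'], ['the sky is', 'something is'], ['too close', 'not ours']],
  territorial: [['this branch', 'this air', 'this light'], ['is sung', 'is held'], ['by me', 'already']],
  contact:     [['I am here', 'still here'], ['where', 'near'], ['are you', 'the others']],
  unknown:     [['...']],
};

const pick = (xs: string[]) => xs[Math.floor(Math.random() * xs.length)];
const composeLine = (c: CallClass) => phraseBanks[c].map(pick).join(' ');

// Stand-in for the trained model: a real implementation would run the audio
// recognition model trained on the labelled corpora (steps 4 and 5 above).
// Here we only threshold overall energy so the sketch runs end to end.
function classifyFrame(spectrum: Uint8Array): CallClass {
  const energy = spectrum.reduce((a, b) => a + b, 0) / spectrum.length;
  if (energy > 120) return 'alarm';
  if (energy > 60) return 'contact';
  return 'unknown';
}

// Call from a user gesture (e.g. a tap) so the browser allows the AudioContext to start.
async function listen(onLine: (line: string) => void): Promise<void> {
  // Ask for the microphone and route it into an analyser node.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);

  const spectrum = new Uint8Array(analyser.frequencyBinCount);

  // Poll the spectrum a few times per second, classify it, and emit a line of
  // generated text whenever a call is recognised.
  setInterval(() => {
    analyser.getByteFrequencyData(spectrum);
    const callClass = classifyFrame(spectrum);
    if (callClass !== 'unknown') onLine(composeLine(callClass));
  }, 250);
}

// Usage: listen(line => { document.querySelector('#poem')!.textContent = line; });
```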