Seeing with Machines: Decipherability and Obfuscation


Session Title:

  • Hybridisation and Purity (SP)

Presentation Title:

  • Seeing with Machines: Decipherability and Obfuscation



  • Adversarial images, inputs designed to produce errors in machine learning systems, are a common way for researchers to test the ability of algorithms to perform tasks such as image classification. “Fooling images” are a well-known kind of adversarial image: they cause mis-categorisation errors which can then be used to diagnose problems within an image classification algorithm. Because humans and computers categorise such images differently, adversarial images reveal discrepancies between human image interpretation and that of computers. In this paper, aspects of state-of-the-art machine learning research, together with relevant artistic projects that draw on adversarial image approaches, will be contextualised in reference to current theories. Harun Farocki’s concept of the operative image will be used as a model for understanding the coded and procedural nature of automated image interpretation. Through a comparison of current adversarial image methodologies, this paper will consider what this kind of image production reveals about the differences between human and computer visual interpretation.
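
The mechanism behind many adversarial images can be sketched in a few lines. The toy model below is an illustrative assumption, not any specific method discussed in the paper: a linear two-class classifier scores a flattened “image”, and a small perturbation stepped along the sign of the score gradient pushes the score across the decision boundary, flipping the predicted label while barely changing the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy flattened "image" and a hypothetical linear classifier w.x + b.
x = rng.normal(size=64)
w = rng.normal(size=64)
b = 0.0

def predict(img):
    """Return class 1 if the linear score is positive, else class 0."""
    return int(w @ img + b > 0)

original = predict(x)

# For a linear model the gradient of the score w.r.t. the input is just w.
# Step in the direction that moves the score away from the current class:
# this is the "fast gradient sign" idea used to craft fooling images.
epsilon = 0.5  # perturbation budget, small relative to the input scale
direction = -np.sign(w) if original == 1 else np.sign(w)
x_adv = x + epsilon * direction

adversarial = predict(x_adv)
print("original label:", original, "adversarial label:", adversarial)
```

Each component of the input moves by at most epsilon, yet the accumulated shift in the score is large enough to change the classifier’s decision, which is the asymmetry between human and machine perception that fooling images exploit.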