Deeply Listening Through/Out the Deepscape
Abstract
This paper presents early artistic, conceptual and technical work toward practising and theorising through/out the deepscape. I first introduce the concept of the deepscape, which I use to designate the global flows of media intensively computed by deep learning throughout the Internet, entangled with the material, human and cultural resources they capitalise on across corporate infrastructures of artificial intelligence (AI). I then propose deep listening to soundscapes generated by deep learning as a practice for raising awareness of the planetary scale of the deepscape. I relate the diffractive prototyping of a deep generative model of soundscape, based on the multichannel hacking of the Realtime Audio Variational autoEncoder (RAVE), trained on worldwide soundscape data that I transversally recorded across 28 places in late April 2022 using the Locustream online sound map. I argue that listening to the planetary soundscape that continually flows from this deep generative model may reveal the ethico-onto-epistem-ology of deep learning, by recalling the landscapes exploited by infrastructures of AI while situating the data collection practices and training costs of deep learning. The paper ends by discussing art and science work that might be engaged to reveal and reconfigure the deepscape in depth.
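As an illustrative sketch only, and not the pipeline described above, the following Python snippet shows one common way a pre-trained RAVE model exported as a TorchScript file can be loaded and sampled to produce a continuous audio stream. The checkpoint name, latent dimensionality and frame count are hypothetical placeholders; the multichannel hacking mentioned in the abstract is not reproduced here.

import torch

# Illustrative sketch, not the author's implementation: load a RAVE model
# exported as a TorchScript artifact and decode a random latent trajectory
# into audio. "rave_soundscape.ts", latent_dim and n_frames are hypothetical
# placeholders whose real values depend on the trained model.
torch.set_grad_enabled(False)

model = torch.jit.load("rave_soundscape.ts").eval()

latent_dim = 16    # placeholder: model-dependent latent size
n_frames = 2048    # placeholder: number of latent frames, sets output duration

z = torch.randn(1, latent_dim, n_frames)   # shape (batch, latent, time)
audio = model.decode(z)                    # shape (batch, channels, samples)
print(audio.shape)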