Derek Curry & Jennifer Gradecki
Northeastern University
Derek Curry (US) is an artist-researcher whose work critiques and addresses spaces for intervention in automated decision-making systems. His work has addressed automated stock trading systems, Open Source Intelligence gathering (OSINT), and algorithmic classification systems. His artworks have replicated aspects of social media surveillance systems and communicated with algorithmic trading bots. Derek earned his MFA in New Genres from UCLA's Department of Art in 2010 and his PhD in Media Study from the State University of New York at Buffalo in 2018. He is currently an Assistant Professor at Northeastern University in Boston. https://derekcurry.com/
Jennifer Gradecki (US) is an artist-theorist who investigates secretive and specialized socio-technical systems. Her artistic research has focused on social science techniques, financial instruments, technologies of mass surveillance, intelligence analysis, artificial intelligence, and social media misinformation. She received her MFA in New Genres from UCLA in 2010 and her PhD in Visual Studies from SUNY Buffalo in 2019. She is currently an Assistant Professor at Northeastern University in Boston. https://jennifergradecki.com/
Curry and Gradecki have presented and exhibited at venues including Ars Electronica (Linz), Media Art History (Krems), NeMe Arts Center (Cyprus), Art Machines (Hong Kong), ISEA (Vancouver), ADAF (Athens), and the Centro Cultural de España (México). Their research has been published in Big Data & Society, Visual Resources, Leonardo, and by Leuven University Press. Their artwork has been funded by Science Gallery Dublin, Science Gallery Detroit, the Puffin Foundation, and the NEoN Digital Arts Festival.
Infodemic
Infodemic is a neural network-generated video that questions the mediated narratives created by social media influencers and celebrities about the coronavirus. The speakers featured in the video are an amalgam of celebrities, influencers, politicians, and tech moguls who have contributed to the spread of misinformation about the coronavirus, either by repeating false narratives or by developing technologies that amplify untrue content. The talking heads are generated using a conditional generative adversarial network (cGAN), a technique used in some deepfake technologies. Unlike deepfake videos, in which a neural network is trained on images of a single person to produce a convincing likeness of that person saying things they did not say, we trained our algorithms on a corpus of multiple individuals simultaneously. The result is a talking head that morphs between different speakers, or becomes a glitchy, Frankensteinian hybrid of the people who contributed to the current infodemic, speaking the words of academics, medical experts, and journalists who are correcting false narratives or explaining how misinformation is created and spread. The plastic, evolving, and unstable speakers in the video evoke the mutation of the coronavirus, the instability of truth, and the limits of knowledge.
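The morphing and hybrid faces follow from conditioning a single generative model on several speaker identities at once rather than on one person. The sketch below is a minimal PyTorch illustration of that general idea of a conditional GAN whose generator and discriminator both receive a speaker-identity label; the network sizes, hyperparameters, and toy data are illustrative assumptions, not the pipeline actually used to produce Infodemic.

# Minimal conditional GAN (cGAN) sketch: generator and discriminator are both
# conditioned on a speaker-identity label. Training one model on frames from
# several identities at once (instead of a single person, as in typical
# deepfakes) is what lets sampled outputs blend or morph between speakers.
# Sizes, hyperparameters, and the toy batch are illustrative only.
import torch
import torch.nn as nn

NUM_SPEAKERS = 8        # identities mixed in the training corpus (illustrative)
LATENT_DIM = 100
IMG_PIXELS = 64 * 64    # flattened grayscale frames, kept small for brevity

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_SPEAKERS, NUM_SPEAKERS)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_SPEAKERS, 256), nn.ReLU(),
            nn.Linear(256, IMG_PIXELS), nn.Tanh(),
        )

    def forward(self, z, speaker_id):
        # Condition the noise vector on the speaker label.
        return self.net(torch.cat([z, self.embed(speaker_id)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_SPEAKERS, NUM_SPEAKERS)
        self.net = nn.Sequential(
            nn.Linear(IMG_PIXELS + NUM_SPEAKERS, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, img, speaker_id):
        # Judge a frame as real/fake given the claimed speaker identity.
        return self.net(torch.cat([img, self.embed(speaker_id)], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_frames, speaker_ids):
    """One adversarial update on a batch of frames from mixed identities."""
    batch = real_frames.size(0)
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator: real frames vs. generated frames for the same labels.
    z = torch.randn(batch, LATENT_DIM)
    generated = G(z, speaker_ids)
    loss_d = bce(D(real_frames, speaker_ids), real) + \
             bce(D(generated.detach(), speaker_ids), fake)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: try to fool the discriminator under the same conditioning.
    loss_g = bce(D(generated, speaker_ids), real)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Toy batch standing in for video frames drawn from several different speakers.
frames = torch.rand(16, IMG_PIXELS) * 2 - 1
ids = torch.randint(0, NUM_SPEAKERS, (16,))
print(train_step(frames, ids))

Because every identity shares one set of generator weights, interpolating the conditioning label (or sampling between identities) at inference time is one simple way such a model can yield faces that drift between, or fuse, the people in the training corpus.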