Ray LC
 


RAY LC’s practice creates interaction environments for building bonds between humans and machines. He brings perspectives from his own research in neuroscience (publications in Nature, J. Neurosci, J. Neurophys) and in HCI (publications in CHI, DIS, HRI, TEI, Frontiers, etc.) to his artistic practice, with notable exhibitions at BankArt, 1_Wall, Process Space LMCC, New York Hall of Science, Saari Residency, Kiyoshi Saito Museum, Elektra Montreal, ArtLab Lahore, Ars Electronica Linz, NeON Digital Arts Festival, New Museum, CICA Museum, NYC Documentary Film Festival, Burning Man, NeurIPS, Deconstrukt, Elektron Tallinn, Floating Projects, Jockey Club Creative Arts Centre, and Osage Gallery. His current work uses artistic interventions to probe our spatial relationship with machines, and has been funded by the National Science Foundation, National Institutes of Health, Japan Society for the Promotion of Science, Verizon Connected Futures, Adobe Design Award, Microsoft Imagine Cup, Kone Foundation, Davis Peace Foundation, and the New York Foundation for the Arts.

STUDIO FOR NARRATIVE SPACES is a collective of creative practitioners at the City University of Hong Kong School of Creative Media who work with neuroscientists, roboticists, performers, designers, and architects to tell immersive stories and understand how human behaviors are shaped by environmental storytelling.

Down to the Holograph

Ray LC & Zhiyuan Zhang, DOWN TO THE HOLOGRAPH (photo: Alberta Leung)

Down to the Holograph is a machine-learning-generated audio-visual experience of possibilities based on the idea of Hong Kong as a magnifying holograph for the world, a mini-garden in its infinite diversity. Video interpolation is generated using StyleGAN2 on 360° photos taken by RAY LC around Hong Kong. The musical accompaniment was generated using Google Magenta from the artist’s improvised melody.

We no longer make works on our own. Just as the paintbrush and canvas were part of the artist’s palette centuries ago, machine learning has become part of our toolkit for seeing how the world moves, hearing the sounds we create, and interpreting our surroundings. Down to the Holograph immerses us in three dimensions of expression, creating a 360° view of our surroundings that constantly changes around us, showing the possibilities we collected throughout our lives in one fell swoop. Just as video expanded the temporal dimension of photography, and virtual reality (VR) expanded the spatial dimension of video, machine learning (ML) too expands our capability to see, compressing shared knowledge across spatio-temporal dimensions into a single, potentially interactive paradigm.

In this work, over 5,000 360° photos of Hong Kong are summarized by ML traversing the space of possibilities, showing us how the images we collected relate to each other. Slowly we perceive the water dissolving into buildings as the island becomes a mountain, and then the mountain becomes a building in turn. These perceptions remind us of the commonalities of the scenes we observe, so that in one minute we can see the world as we saw it in one year’s worth of photos. The music follows a similar pattern to the visuals, transitioning between timbres using the GANSynth algorithm. The original music was improvised by the artist and exported as MIDI, which ML then transforms into a customized transition between different timbres. Timbres are chosen based on how they emphasize or hide different pitches, producing a traversal in auditory space analogous to the traversal happening in the 3D spatial dimension.
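The work’s own code is not published; as a rough illustration of the latent-space traversal described above, here is a minimal sketch of spherical interpolation (slerp) between GAN latent vectors, the standard way to produce smooth morphing sequences with models like StyleGAN2. The generator call, the latent names `z_a`/`z_b`, and the 512-dimensional latent size are assumptions for illustration only.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors.

    Preferred over straight linear blending for GAN latents, since
    Gaussian-sampled latents concentrate near a hypersphere and slerp
    stays close to that shell, giving more natural in-between frames.
    """
    z0 = np.asarray(z0, dtype=np.float64)
    z1 = np.asarray(z1, dtype=np.float64)
    # Angle between the two (normalized) vectors
    cos_omega = np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # Nearly parallel vectors: fall back to linear interpolation
        return (1.0 - t) * z0 + t * z1
    so = np.sin(omega)
    return np.sin((1.0 - t) * omega) / so * z0 + np.sin(t * omega) / so * z1

# Interpolate a short path between two random latents (512-dim is an
# assumption matching common StyleGAN2 configurations).
rng = np.random.default_rng(0)
z_a, z_b = rng.standard_normal(512), rng.standard_normal(512)
frames = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 30)]
# In a real pipeline, each latent would be fed to a pretrained
# generator, e.g. G(frames[i]), to render one frame of the morph.
```

The same idea carries over to the audio side: GANSynth interpolates between latent codes representing instrument timbres, so a melody can glide continuously from one instrument’s sound to another’s.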
