Leon Krykhtin is a British-Ukrainian architect, new media artist, and AI filmmaker whose practice spans generative audiovisual installations, immersive environments, and AI cinema. Trained at the Architectural Association's Design Research Laboratory in London, he founded LKDN Studio, an NVIDIA Inception Program partner. His large-scale works combine real-time particle systems, interactive AI video, and live performance, and have earned awards at the Busan International Film Festival, Seoul International AI Film Festival, MIT AI Filmmaking Hackathon, and beyond. A co-founder of the Shanghai AI Short Film Festival and PhD researcher at the University of Nottingham, Leon also teaches AI and new media at art and design universities across China. His work — featured in Wallpaper and Dezeen and exhibited across London, Paris, New York, Tokyo, and Shanghai — treats technology as a contemplative mirror for our values, vulnerabilities, and vision of a more ethical future.
How would you describe your artistic practice?
My practice sits at the intersection of architecture, computation, and neuroscience. I trained as a parametric architect - seven years at Zaha Hadid Architects after studying at the Architectural Association’s Design Research Lab - and that foundation in spatial thinking and algorithmic systems never left. It just migrated. Today I work across GLSL shader coding, generative audiovisual installation, live performance with dancers, and AI filmmaking, but the underlying question has remained consistent: how do you design environments that are not static containers but living, responsive systems?
More recently, my PhD research has pulled this toward neurofeedback - specifically collective neurofeedback, what happens when multiple brains co-regulate a shared generative space. So the practice has become less about me as a singular author producing objects and more about designing the conditions under which emergent aesthetic experiences can occur between people.
What interests you about working with data and generative technologies?
What interests me is that data is never neutral - it carries the trace of a living process. For example, a brainwave signal, a weather API feed from a salt flat somewhere in Taiwan, the flocking vectors of a murmuration - these are all data, but they encode something about the behaviour of complex systems in the world. Generative technologies let me work with that encoding as a creative material rather than just a representation.
I’m drawn to the gap between the raw signal and its aesthetic expression. When I pipe EEG data into a shader, I’m not illustrating brain activity - I’m creating a feedback loop where neural states and visual states begin to co-evolve. The technology becomes interesting precisely at the point where it stops being a tool and starts behaving as an interlocutor.
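The pipeline described above can be sketched minimally. This is not the studio's actual code: the band-power values are simulated (a real system would read them from an EEG headset SDK), and the "turbulence" uniform name and the 0–50 power range are illustrative assumptions. The point is the shape of the mapping: raw signal, low-pass smoothing, normalisation into a parameter a shader can consume every frame.

```python
import random

# Hypothetical sketch: map a stream of EEG alpha-band power readings
# to a 0..1 "turbulence" uniform for a shader. Real readings would
# come from a headset SDK; here they are simulated.

def smooth_and_normalise(readings, smoothing=0.9, floor=0.0, ceil=50.0):
    """Exponentially smooth raw band-power values and clamp to [0, 1]."""
    ema = readings[0]
    uniforms = []
    for r in readings:
        ema = smoothing * ema + (1.0 - smoothing) * r  # low-pass filter
        u = (ema - floor) / (ceil - floor)             # normalise to range
        uniforms.append(max(0.0, min(1.0, u)))         # clamp to [0, 1]
    return uniforms

random.seed(1)
simulated = [random.uniform(5.0, 45.0) for _ in range(200)]
u = smooth_and_normalise(simulated)
```

The smoothing constant is doing the aesthetic work here: it sets how sluggishly or nervously the visuals track the neural signal, which is exactly the kind of threshold a feedback loop like this is tuned around.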

What is your philosophy on art and technology?
I resist the framing that treats technology as either a utopian instrument or something artists need to “humanise.” Technology is already deeply embedded in how we perceive, think, and relate to one another - the interesting question is not whether to use it but what models of experience it makes possible that didn’t exist before.
My philosophical anchor is Gilles Deleuze’s concept of the virtual - the idea that reality contains a vast field of potential that is real but not yet actualised. I think of generative systems as machines for accessing that field. A well-designed generative environment doesn’t produce random variation; it navigates a phase space of possibilities that are latent in its rules. The artist’s role is to define the topology of that space - its attractors, its thresholds, its points of bifurcation - and then step back enough to let genuinely unpredictable things emerge.
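The idea that one rule set contains qualitatively different behaviours, separated by thresholds, has a textbook illustration in the logistic map. This sketch is mine, not the artist's, but it shows concretely what "points of bifurcation" means: the same update rule settles to a fixed point, an oscillation, or chaos depending only on where the parameter sits in its phase space.

```python
# The logistic map x -> r*x*(1-x): one rule, qualitatively different
# regimes depending on the parameter r (a point of bifurcation).

def logistic_orbit(r, x0=0.5, burn=500, keep=8):
    """Iterate past transients, then return the settled orbit."""
    x = x0
    for _ in range(burn):          # discard transient behaviour
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):          # sample the attractor
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

fixed   = logistic_orbit(2.8)  # settles to a single fixed point
cycle   = logistic_orbit(3.2)  # settles to a period-2 oscillation
chaotic = logistic_orbit(3.9)  # never settles: aperiodic orbit
```

A generative environment in this sense is a much higher-dimensional version of the same thing: the artist sets `r`, the audience supplies `x`.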
This is why I’m skeptical of work that uses AI or generative tools merely to accelerate existing production pipelines. The point isn’t efficiency. The point is ontological: these technologies can give us access to forms of experience - collective, non-linear, emergent - that sequential, author-driven media simply cannot.
How are you working with real-time?
Real-time is central to almost everything I make. My installations use particle systems, GLSL shaders, and tools such as TouchDesigner, Three.js, and Unreal Engine, along with AI-assisted "vibe coding" through agentic systems, to generate visuals that are computed live, never pre-rendered. In pieces like Metamorphosis and POST-NATURE, dancers perform within large-scale LED environments where the visuals respond to their movement - so the piece literally cannot exist without the live event.
My current PhD research pushes this further by introducing neural data. I’m working with EEG headsets to build environments where the generative system responds to the collective brainwave patterns of multiple participants simultaneously. The real-time dimension here isn’t just aesthetic - it’s structural. The work is about inter-brain coupling, the measurable phenomenon where people’s neural rhythms begin to synchronise during shared experience. Real-time computation is what makes that feedback loop possible: sense, process, generate, sense again - all within milliseconds.
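The sense-process-generate loop has a simple skeleton, sketched here under stated assumptions: the sensor is simulated rather than a real EEG stream, and the `process` stage is a bare exponential moving average standing in for whatever inter-brain coupling model the actual research uses. Everything here that is named (`run_loop`, the three stage functions) is illustrative, not the artist's code.

```python
# Hypothetical skeleton of a real-time feedback loop:
# sense -> process -> generate, repeated every frame.

def run_loop(sense, process, generate, frames=60):
    """Run the loop for a fixed number of frames, collecting outputs."""
    state = 0.0
    outputs = []
    for _ in range(frames):
        signal = sense()                  # sense: read one sample
        state = process(state, signal)    # process: update model state
        outputs.append(generate(state))   # generate: drive the visuals
    return outputs

# Simulated rising signal; the state lags it via smoothing.
samples = iter([0.1 * i for i in range(60)])
out = run_loop(lambda: next(samples),
               lambda s, x: 0.95 * s + 0.05 * x,
               lambda s: round(s, 4))
```

In a real installation each pass of this loop has a millisecond budget: the sensing, the model update, and the render all have to fit inside one frame for the feedback to feel alive rather than laggy.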
For me, real-time isn’t a technical specification. It’s a philosophical commitment to art as a living process rather than a fixed artefact.
What do you want the viewer to take away from your work?
Honestly, I want people to leave with a shifted sense of where the boundary is between themselves and their environment - and between themselves and other people in the room. So much of our experience of art is still structured around individual contemplation of a discrete object. I’m interested in creating situations where the environment is palpably responsive to your presence, and where your experience is entangled with the experiences of others around you.
If the work succeeds, you shouldn’t walk away thinking about the technology. You should walk away with a bodily memory of having been inside something that was alive and that you were part of. A feeling that the space between minds is not empty - that it has texture, dynamics, maybe even a kind of intelligence. That’s what I’m trying to build toward.