Designing tomorrow, one step ahead.
Digital Twin Manifesto — Designing Identity Through AI
Project: Personal AI Project
Year: 2025
Type: AI Digital Twin
Industry: Gen AI
One click, and my voice turns design into a story you’ll want to hear.
About the Project
In 2025, I designed and directed my first full AI Digital Twin experience—a cinematic manifesto voiced by my ElevenLabs-cloned voice and generated with Runway Gen-4. Through custom-trained ChatGPT prompt engineering and AI video tools, I crafted a synthetic version of myself to explore fear, identity, and evolution in the age of artificial intelligence.
The Journey
It’s 2025, and I’ve just finished a project where I appear on screen—without ever stepping in front of a camera. My voice, my face, my story—but generated entirely with AI tools, crafted with purpose and design. This is not a gimmick. It’s a Digital Twin—my avatar, speaking in my cloned voice, walking through cinematic spaces, delivering a manifesto on evolution, fear, and creativity.
The process? Fully mine. I wrote the script, designed the prompts, supervised the scenes. It’s AI-powered storytelling, but with a soul. And that soul is human.
The Why
I started this project to explore the emotional tension between fear and transformation. AI today feels unsettling to many. It threatens control, identity, even jobs. But it also expands what’s possible—faster creation, multilingual communication, visual worlds you can only dream of. My goal was to show that tension on screen: the moment where uncertainty meets empowerment.
So I built a synthetic version of myself. And then I let him talk.
Building the Voice
To create a Digital Twin that felt real, I had to start with something deeply personal—my voice.
Using a Beyerdynamic M70 PRO X microphone, I recorded my vocal samples and trained a model with ElevenLabs, one of the most advanced voice cloning platforms available. The clarity was stunning. My clone voice could speak not only English, but multiple languages with perfect nuance. It sounded like me. It was me.
The power of voice cloning with ElevenLabs is its flexibility. It allowed me to script lines in any language and maintain tone, emotion, even rhythm. This made the project globally scalable, and that’s a key insight for any business thinking ahead: AI avatars can go multilingual, instantly.
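To make the multilingual workflow concrete, here is a minimal sketch of how one cloned voice can be scripted across languages. The voice ID and API key are placeholders, and the payload shape follows the ElevenLabs text-to-speech REST API as I understand it; check the current ElevenLabs documentation before relying on it.

```python
# Illustrative only: assemble (but do not send) ElevenLabs TTS requests so the
# same cloned voice can narrate lines in different languages.
import json

ELEVENLABS_TTS_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"

def build_tts_request(voice_id: str, text: str, api_key: str) -> dict:
    """Assemble URL, headers, and JSON body for one narration line."""
    return {
        "url": ELEVENLABS_TTS_URL.format(voice_id=voice_id),
        "headers": {
            "xi-api-key": api_key,  # your ElevenLabs key (placeholder here)
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "text": text,
            # The multilingual model lets one cloned voice speak many languages.
            "model_id": "eleven_multilingual_v2",
            "voice_settings": {"stability": 0.5, "similarity_boost": 0.8},
        }),
    }

# The same cloned voice, scripted in two languages:
lines = [
    "This is not a gimmick. It's a Digital Twin.",
    "這不是噱頭，這是數位分身。",
]
requests_to_send = [build_tts_request("my-voice-id", line, "sk-...") for line in lines]
```

Sending each prepared request with an HTTP client returns audio in the cloned voice, which is what makes the project instantly scalable across languages.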
Creating the Avatar with Runway Gen-4 and Higgsfield AI
I didn’t want a cartoon. I wanted cinema.
I used Runway Gen-4, a cutting-edge generative AI video model, to bring my avatar to life. Gen-4 could maintain facial consistency across scenes—essential for keeping my Digital Twin recognizably me. Every shot was prompt-engineered for emotional tone and filmic balance.
One of the key moments? A slow-motion tracking shot of my avatar walking through a misty nighttime pine forest, captured in symmetrical composition with soft moonlight shafts and pastel-muted colors. It looked straight out of a dream—or a film by Denis Villeneuve. That scene came from a prompt like this:
“Slow-motion tracking shot of a slim man with short dark hair, black zipped jacket and black pants, walking through a misty nighttime pine forest. Medium shot with 50mm lens. Centered framing, pastel muted colors, symmetrical composition, soft moonlight shafts, layered fog, painterly cinematic aesthetic with clean depth and theatrical balance.”
My prompts were not random. They were guided by a Custom ChatGPT I personally trained in screenwriting and filmmaking, fed with documentation on lighting, lenses, camera styles, and narrative flow.
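The prompt above follows a repeatable structure: camera move, subject, action, setting, lens, then a list of style notes. A small template like the sketch below keeps that recipe consistent across shots; the function and field names are my own illustration, not part of any Runway API.

```python
# Illustrative only: compose a Gen-4-style video prompt from discrete
# cinematic decisions, mirroring the misty-forest prompt above.
def build_shot_prompt(camera_move: str, subject: str, action: str,
                      setting: str, shot_and_lens: str,
                      style_notes: list[str]) -> str:
    """Join cinematic choices into one sentence-per-decision prompt."""
    opening = f"{camera_move} of {subject}, {action} {setting}"
    return ". ".join([opening, shot_and_lens, ", ".join(style_notes)]) + "."

forest_shot = build_shot_prompt(
    camera_move="Slow-motion tracking shot",
    subject="a slim man with short dark hair, black zipped jacket and black pants",
    action="walking through",
    setting="a misty nighttime pine forest",
    shot_and_lens="Medium shot with 50mm lens",
    style_notes=["Centered framing", "pastel muted colors",
                 "symmetrical composition", "soft moonlight shafts",
                 "layered fog", "painterly cinematic aesthetic"],
)
```

Parameterizing each decision made it easy to vary one element (a lens, a palette) while holding everything else constant, which is how facial and visual consistency survived across scenes.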
Filmmaking Tools + Post-Production
I shot real-world textures using an iPhone 15 Pro Max, fitted with a Moment Tele 58mm lens and the Blackmagic Camera App for full manual cinematic control.
I edited everything in Adobe Premiere, and graded it in DaVinci Resolve to ensure a cinematic color profile that unified AI-generated and real footage. This hybrid workflow created a seamless experience between the digital and the physical.
The Real Message
I know this might feel uncanny. Seeing "yourself" speak through code can feel like staring into a mirror for too long.
But here’s the point: if AI feels unsettling, it’s because we haven’t claimed authorship yet. I did. I made the tools work for me—not the other way around.
This isn’t just about digital creativity. It’s about survival and evolution in business. The companies, the creators, the designers who embrace these tools—those who design with AI instead of running from it—they’ll lead. Because AI is not the designer. You are.
This is my AI Manifesto. And it speaks with my face, my words, and yes—my cloned voice.

Digital Video
A dual-language cinematic experiment exploring identity through AI. In this project, I built a Digital Twin of myself—visually and vocally—to deliver an AI manifesto in both English and Traditional Chinese. From cloned voice to generative video, every frame is a reflection on how we navigate fear, evolution, and authorship in the synthetic age.
English Version | Digital Twin AI Manifesto | Clone Voice + Runway Gen-4 + ElevenLabs (Full Video)
Traditional Chinese Version | AI Digital Twin Manifesto | Voice Cloning + Runway Gen-4 + ElevenLabs (Full Video)