Conversational TV
In the AI era, every device is trying to reinvent itself — but TV has remained a passive black screen, designed for consumption rather than conversation. I led the UX design and prototyping for the AI living display. This project reimagines TV as an intelligent, conversational companion, turning a lean-back experience into a natural, lean-in interaction.

Challenge
As the Visual Display division prepared to announce its first AI-powered TV, our team was tasked with exploring what a truly conversational experience on TV could look like.
The challenge was to design an AI experience that felt natural and intelligent, yet within the real constraints of the TV’s hardware and context. Unlike phones, the TV lives at a distance and is usually perceived as a passive content-consuming screen. Meanwhile, people’s expectations for AI have never been higher.
So, what does it mean for TV to become AI-native, not just AI-enabled?
Build to Think
Early Wizard-of-Oz experiments revealed a core insight: transforming TV meant shifting from a task-based, transactional experience to a living, evolving one.
To validate this, we surveyed users (n=600) on how they'd envision a human-like AI on a large screen.
Overall, people saw AI’s core value as enabling Growth, Inspiration, and Co-creation — helping them feel genuinely accompanied and cared for.
__% Helps me grow, learn, or reflect
__% Fuels inspiration and discovery
41% Acts like a creative partner
Mental Model
With extensive benchmarking on current AI experiences, I identified a core mental model of how people can build trust and connection with an AI-native screen, and gradually shift their perception: from curiosity, to collaboration, to companionship.
I mapped this journey into three key stages:

Vision
As leadership searched for fresh ideas, we saw an opportunity to translate our early insights into a vision that could inspire the next step.
I led the core UX design and partnered with a visual designer to build several fast, visually expressive vision experiences — a voice-first, hands-free companion designed to feel effortless and human. In just three weeks, we brought six core scenarios to life: three enhancing familiar TV moments (onboarding, recommendations, and watching content), and three expanding what AI could enable (small talk, travel companion, and memory).

We presented the work to C-level leaders through engaging videos, from the CEO to VPs across other business units. The response was immediate: it changed how they imagined AI on TV and opened doors for the innovation team to collaborate directly with product.
Here's a quick snapshot:
Testing
The vision earned executive excitement, but as momentum shifted toward implementation, a critical concern surfaced:
If Samsung couldn't secure OTT partnerships, would a conversational experience still feel compelling with limited content?
I designed UI schemas for a constrained-content scenario and partnered with a researcher and data scientist to run an eye-tracking study across multiple video prototypes, examining attention patterns and emotional reactions across screen sizes.
The finding was clear: even with constrained content, talking to a large screen felt natural and engaging, especially when paired with dynamic visual support.
Prototyping
To answer how the experience should work at scale, and to prove the value of integrating lightweight context and persistent memory into the TV, I built a functional prototype demonstrating coherence across five scenarios:
Dynamic AI Home: a living homepage that adapts to the user over time
Visual Conversation: ambient chat between user and TV, with AI surfacing images to sustain the flow
Content Q&A: while watching, users can ask questions about what's on screen
Lifestyle Collections: users save AI responses as cards and pull recommendations from their own context
Central to the prototype was an AI intention layer — interpreting user intent before generating a response or UI, making conversations feel predictable and controlled.
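The intention layer's routing logic can be sketched as a small classifier that runs before any response or UI is generated. This is a minimal illustration, not the production system: the intent names, keyword rules, and UI surface names are all hypothetical stand-ins (the real prototype would use a model-based classifier rather than keyword matching).

```python
from dataclasses import dataclass
from enum import Enum, auto

class Intent(Enum):
    """Coarse intent categories the layer routes on (illustrative)."""
    CONTENT_QA = auto()   # question about what's on screen
    RECOMMEND = auto()    # ask for something to watch
    SAVE = auto()         # save a response as a Lifestyle Collection card
    SMALL_TALK = auto()   # open-ended ambient chat

@dataclass
class Turn:
    utterance: str
    watching: bool  # is content currently playing?

# Hypothetical keyword rules standing in for a model-based classifier.
# Rule order matters: more specific intents are checked first.
RULES = [
    (("save", "keep that"), Intent.SAVE),
    (("recommend", "what should i watch"), Intent.RECOMMEND),
    (("who", "what", "where", "why"), Intent.CONTENT_QA),
]

def classify(turn: Turn) -> Intent:
    text = turn.utterance.lower()
    for keywords, intent in RULES:
        if any(k in text for k in keywords):
            # A question only counts as content Q&A while something is playing;
            # otherwise it falls back to ambient conversation.
            if intent is Intent.CONTENT_QA and not turn.watching:
                return Intent.SMALL_TALK
            return intent
    return Intent.SMALL_TALK

def route(turn: Turn) -> str:
    """Pick a UI surface from the classified intent, before generation."""
    surface = {
        Intent.CONTENT_QA: "overlay_answer",
        Intent.RECOMMEND: "card_rail",
        Intent.SAVE: "collection_toast",
        Intent.SMALL_TALK: "ambient_chat",
    }
    return surface[classify(turn)]
```

The design point is the ordering: intent is resolved first, and the UI surface follows from it, which is what makes the system's responses feel predictable rather than improvised.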
Production
While production constraints and launch timelines made a fully conversational experience infeasible in the short term, this work directly shaped the thinking behind the Vision AI Companion, announced at CES 2026.
Several ideas from our exploration carried through:
Dynamic visual layout: moving away from static, text-heavy responses toward visually engaging UI on a large screen
Multi-turn conversations: replacing purely transactional interactions with a foundation for continuity and memory
Conversation-aware flow design: shifting from rigid templates toward a component-based, flow-oriented interaction model

Takeaways
1. Design impact starts with shaping belief. In early stages, there are no metrics — only uncertainty. Impact came from using UX instinct and storytelling to help others believe in a new possibility before feasibility was known.
2. Vision only matters if it survives contact with reality. Inspiration alone isn't impact. Real progress came from engaging with constraints, adapting ideas without losing intent — knowing when to inspire, when to listen, and when to get specific.
3. Designing for agentic AI means designing behavior, not screens. In conversational systems, the interface is no longer static. UX shifts from arranging screens to defining when the system should speak, show, wait, or remember — shaping behavior over time.
4. In a fast-moving AI era, learning speed is the advantage. No solution stays final for long. The most valuable design skill today is the ability to rapidly learn, prototype, collaborate across disciplines, and recalibrate — while staying grounded in human needs.

