AURI X1 is a wearable AI camera earphone that transforms first-person perspective into cinematic third-person video. For creators, it's a hands-free film crew. For enterprise, it's the world's most natural egocentric data collection device.
A tiny camera on your ear captures your world. Our AI reconstructs your body, understands the scene, and generates cinematic third-person video in real time.
Ear-mounted wide-angle camera
3D scene from sparse frames
Human pose and appearance
Virtual camera cinematography
Cinematic third-person video
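The stages above can be pictured as one pass over the egocentric footage. This is a purely illustrative sketch: every function below is a stub standing in for a learned model, and none of the names are AURI's actual API.

```python
from typing import Any

# Each stub stands in for a learned model in the real pipeline.
def reconstruct_scene(frames: list[Any]) -> dict:
    """3D scene from sparse first-person frames."""
    return {"stage": "scene", "n_frames": len(frames)}

def estimate_wearer(frames: list[Any]) -> dict:
    """Wearer pose and appearance recovered from egocentric cues."""
    return {"stage": "body"}

def plan_shots(scene: dict, body: dict) -> list[str]:
    """Virtual-camera cinematography: choose follow, orbit, dolly ..."""
    return ["follow", "orbit"]

def first_to_third_person(frames: list[Any]) -> dict:
    """Chain the stages into first-person -> third-person video."""
    scene = reconstruct_scene(frames)
    body = estimate_wearer(frames)
    return {"scene": scene, "body": body, "shots": plan_shots(scene, body)}
```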
AI generates follow-cam, orbit, dolly, and low-angle shots from your first-person capture. Professional angles, zero effort.
Sparse keyframe capture meets cloud-scale AI scene completion. Record battery-efficient keyframes on-device; reconstruct full cinematic sequences in the cloud.
Cryptographic proof-of-capture verifies every frame is real. Content authenticity built into the silicon.
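One way proof-of-capture can work is a per-frame hash chain whose tip is signed by a device key. The sketch below is illustrative only, using an HMAC with a stand-in key; the actual on-silicon scheme (key storage, signature algorithm, frame framing) is not described here and `DEVICE_KEY` is an assumption.

```python
import hashlib
import hmac

# Hypothetical device-unique secret; real hardware would keep this
# (or an asymmetric signing key) inside a secure element on the SoC.
DEVICE_KEY = b"example-device-key"

def chain_digest(frames: list[bytes]) -> bytes:
    """Fold every frame into one running SHA-256 hash chain.

    Altering any frame changes every later link, so a single
    signature over the chain tip covers the whole recording.
    """
    digest = b"\x00" * 32  # genesis link
    for frame in frames:
        digest = hashlib.sha256(digest + frame).digest()
    return digest

def sign_capture(frames: list[bytes]) -> bytes:
    """Tag the chain tip; one tag attests to the full sequence."""
    return hmac.new(DEVICE_KEY, chain_digest(frames), hashlib.sha256).digest()

def verify_capture(frames: list[bytes], tag: bytes) -> bool:
    """Recompute the chain and compare tags in constant time."""
    return hmac.compare_digest(sign_capture(frames), tag)
```

Replacing or reordering a single frame invalidates the tag, which is what lets a verifier confirm footage came off the device unedited.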
First-person video, spatial audio, and IMU streams combine into high-value training data for robotics, world models, and embodied AI.
The same hardware that empowers creators also captures the most natural first-person video data for training embodied AI. Ear-mounted sensors closely approximate robot head positioning, minimizing domain shift.
Ear-mounted camera angle approximates robot head sensor layout. Data collected by humans transfers directly to robot policy training with minimal adaptation.
Purpose-built collection hardware paired with human operators at scale. Continuous first-person video, depth estimation, spatial audio, and 6-DOF IMU data.
Every recording captures synchronized video, audio, inertial motion, and environmental context. Ideal for training foundation models that bridge perception and action.
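A synchronized recording like this can be pictured as a stream of timestamped records. The schema below is a minimal sketch; the field names are illustrative assumptions, not AURI's actual data format.

```python
from dataclasses import dataclass

@dataclass
class SensorRecord:
    """One synchronized sample from a hypothetical capture stream.

    Field names are illustrative; a production schema would be
    versioned and customized per enterprise integration.
    """
    t_ns: int                # shared monotonic timestamp, nanoseconds
    frame: bytes             # encoded wide-angle video frame
    audio: bytes             # spatial-audio chunk
    imu: tuple[float, ...]   # 6-DOF: (ax, ay, az, gx, gy, gz)

def nearest_record(records: list[SensorRecord], t_ns: int) -> SensorRecord:
    """Align a query timestamp to the closest captured sample."""
    return min(records, key=lambda r: abs(r.t_ns - t_ns))
```

A shared clock across modalities is what makes the data usable for perception-to-action training: a model can ask what the wearer saw, heard, and did at the same instant.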
Ruggedized collection devices for factory and field environments. Custom data schemas, privacy controls, and integration with existing ML pipelines.
AURI X1 sits on your ear like premium earbuds. It hears, it sees, and it creates.
Premium earphone experience with active noise cancellation and immersive spatial audio. Your daily driver.
Ultra-wide camera captures your perspective continuously. Intelligent recording triggers capture what matters most.
Real-time speaker identification, conversation context, and social assistance. Your personal networking copilot.
Designed around your ear's natural geometry. Lightweight, stable, and built for hours of wear.
Product designer and inventor with 33 granted patents worldwide. Previously built Nums, the smart touchpad keyboard adopted globally and featured by Wired, The Verge, CNET, and Unbox Therapy (1.7M+ views). UCSC AI Visiting Scholar, Tsinghua University alumnus.
Nums smart keyboard: designed, manufactured, and shipped globally from Shenzhen. Featured on major tech media, delivered to Mark Zuckerberg and Satya Nadella.
"Seven Methods of Innovation" — contracted with People's Fine Arts Publishing House, releasing mid-2026. A methodology framework for product innovation.
Small team, outsized ambition. We're looking for engineers and researchers who want to define a new category of wearable AI.
Egocentric pose estimation, 3D reconstruction, neural rendering. If you've worked with NeRF, Gaussian Splatting, or human body models, we'd love to talk.
Motion-driven video generation, diffusion models, real-time inference optimization. Turning skeleton data into photorealistic video.
Camera modules, audio DSP, low-power SoC design, mechanical engineering for ear-wearable form factors.
Defining the user experience for an entirely new product category. Consumer hardware meets AI-native workflows.
Whether you're an investor, an enterprise data partner, or someone who wants to join the team — we'd love to hear from you.
shawn@luckey.to