AURI X1 is a wearable AI camera earphone that transforms first-person perspective into cinematic third-person video. For creators, it's a hands-free film crew. For enterprise, it's the world's most natural egocentric data collection device.
A tiny camera on your ear captures your world. Our AI reconstructs your body, understands the scene, and generates cinematic third-person video in real time.
Ear-mounted wide-angle camera
3D scene from sparse frames
Human pose and appearance
Virtual camera cinematography
Cinematic third-person video
AI generates follow-cam, orbit, dolly, and low-angle shots from your first-person capture. Professional angles, zero effort.
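As one illustration of what an "orbit" shot means computationally: a virtual camera is placed on a circle around the subject and aimed back at them. This is a minimal stdlib sketch with made-up function and parameter names, not AURI's actual rendering pipeline:

```python
import math

def orbit_camera_positions(subject, radius, height, n_shots):
    """Place n_shots virtual cameras on a circle around the subject,
    each with a unit view direction aimed back at the subject
    (the geometry behind a simple orbit shot)."""
    sx, sy, sz = subject
    poses = []
    for i in range(n_shots):
        theta = 2 * math.pi * i / n_shots
        cam = (sx + radius * math.cos(theta),
               sy + height,
               sz + radius * math.sin(theta))
        # View direction: from the camera toward the subject, normalized.
        dx, dy, dz = sx - cam[0], sy - cam[1], sz - cam[2]
        norm = math.sqrt(dx * dx + dy * dy + dz * dz)
        poses.append((cam, (dx / norm, dy / norm, dz / norm)))
    return poses
```

Each pose would then drive a neural renderer over the reconstructed 3D scene; the sketch only shows the camera path itself.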
Sparse keyframe capture meets cloud-scale AI scene completion. Battery-efficient on-device recording, reconstructed into full cinematic sequences in the cloud.
Cryptographic proof-of-capture verifies every frame is real. Content authenticity built into the silicon.
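One common construction for this kind of guarantee is a hash chain over captured frames, with a device key signing the chain head: altering any frame changes every later digest, so a single signature attests to the whole sequence. The sketch below uses stdlib HMAC as a stand-in for a hardware-backed signature; the key handling and scheme details are illustrative assumptions, not AURI's actual design:

```python
import hashlib
import hmac

DEVICE_KEY = b"per-device-secret"  # stand-in for a key provisioned in silicon

def chain_and_sign(frames):
    """Hash-chain each frame to its predecessor, then MAC the chain head.
    Returns (final_digest, tag)."""
    digest = b"\x00" * 32  # genesis value
    for frame in frames:
        digest = hashlib.sha256(digest + frame).digest()
    tag = hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()
    return digest, tag

def verify(frames, tag):
    """Recompute the chain from the raw frames and check the tag."""
    digest = b"\x00" * 32
    for frame in frames:
        digest = hashlib.sha256(digest + frame).digest()
    expected = hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

A production scheme would use asymmetric signatures so verifiers never hold the device secret, but the tamper-evidence property is the same.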
First-person video, spatial audio, and IMU data generate high-value training data for robotics, world models, and embodied AI.
The same hardware that empowers creators also captures the most natural first-person video data for training embodied AI. Ear-mounted sensors closely approximate robot head positioning, minimizing domain shift.
Virtual Drone Capture
Social Media Output
Human Data Collection
Cross-Morphology Transfer
Ear-mounted camera angle approximates robot head sensor layout. Data collected by humans transfers directly to robot policy training with minimal adaptation.
Purpose-built collection hardware paired with a large-scale network of human operators. Continuous first-person video, depth estimation, spatial audio, and 6-DOF IMU data.
Every recording captures synchronized video, audio, inertial motion, and environmental context. Ideal for training foundation models that bridge perception and action.
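A synchronized multi-sensor record like this is often serialized as one timestamped sample per line. The field names and units below are an illustrative sketch, not AURI's actual data schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class SensorRecord:
    """One synchronized sample: every stream shares one device timestamp."""
    t_ns: int            # monotonic device clock, nanoseconds
    frame_id: int        # index into the video stream
    audio_chunk_id: int  # index into the spatial-audio stream
    accel: tuple         # accelerometer, m/s^2, device frame (x, y, z)
    gyro: tuple          # gyroscope, rad/s, device frame (x, y, z)

rec = SensorRecord(t_ns=1_000_000, frame_id=42, audio_chunk_id=42,
                   accel=(0.1, 9.8, 0.0), gyro=(0.0, 0.01, 0.0))
line = json.dumps(asdict(rec))  # one JSON line per sample
```

Keying every stream to a single device clock is what makes the data usable for perception-to-action training, since model inputs and motion labels must be aligned in time.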
Ruggedized collection devices for factory and field environments. Custom data schemas, privacy controls, and integration with existing ML pipelines.
AURI X1 sits on your ear like premium earbuds. It hears, it sees, and it creates. One AI pipeline, multiple form factors.
Now. All-day wear. 360° dual fisheye.
Every run looks like a GoPro ad. Hands-free.
Every dive becomes a nature documentary.
Premium earphone experience with active noise cancellation and immersive spatial audio. Your daily driver.
Ultra-wide camera captures your perspective continuously. Intelligent recording triggers capture what matters most.
Real-time speaker identification, conversation context, and social assistance. Your personal networking copilot.
Designed around your ear's natural geometry. Lightweight, stable, and built for hours of wear.
CEO / Founder
Forbes 30 Under 30. Youngest-ever Red Dot Best of the Best (age 19). BMW 2040 Global Champion. 33 patents. Tsinghua. Invented Nums Ultra-thin Smart Keyboard, 1M+ units shipped globally.
World Model Lead
Stanford CS. Co-created Oasis — first real-time playable world model. Former researcher at World Labs (Fei-Fei Li). ICLR 2026 first author (Percy Liang). Chose LUCKEY over MIT, Berkeley, Stanford PhD offers.
Head of Product Ops
7 years at Google (hardware & software PM, Mountain View). HK PolyU undergrad, UCLA Anderson MBA. Global supply chain network. End-to-end product delivery.
Small team, outsized ambition. We're looking for engineers and researchers who want to define a new category of wearable AI.
Ego-centric pose estimation, 3D reconstruction, neural rendering. If you've worked with NeRF, Gaussian Splatting, or human body models, we'd love to talk.
Motion-driven video generation, diffusion models, real-time inference optimization. Turning skeleton data into photorealistic video.
Camera modules, audio DSP, low-power SoC design, mechanical engineering for ear-wearable form factors.
Defining the user experience for an entirely new product category. Consumer hardware meets AI-native workflows.
Whether you're an investor, an enterprise data partner, or someone who wants to join the team — we'd love to hear from you.
shawn@luckey.to