AURI X1 is a pair of camera earbuds with fisheye lenses mounted at the ear — the most biomechanically stable point on the human body. Users wear them to exercise, commute, and live their lives. Our AI transforms raw footage into cinematic third-person video for users, and structured training data for robotics companies. Win-win, at near-zero marginal cost.
The same hardware that empowers creators also captures the most natural first-person video data for training embodied AI. Ear-mounted sensors closely approximate robot head positioning, minimizing domain shift.
Virtual Drone Capture
Social Media Output
Human Data Collection
Robot Learns the Same Action
Ear-mounted camera angle approximates robot head sensor layout. Data collected by humans transfers directly to robot policy training with minimal adaptation.
Purpose-built collection hardware paired with large-scale human operators. Continuous first-person video, depth estimation, spatial audio, and 6-DOF IMU data.
Every recording captures synchronized video, audio, inertial motion, and environmental context. Ideal for training foundation models that bridge perception and action.
Ruggedized collection devices for factory and field environments. Custom data schemas, privacy controls, and integration with existing ML pipelines.
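To make "synchronized" concrete, here is a minimal sketch of what such a multimodal record could look like. Every field name and rate here is an assumption for illustration, not AURI's actual on-device format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical schema sketch: names and units are illustrative,
# not AURI's actual data format.

@dataclass
class ImuSample:
    t: float                       # seconds since recording start
    accel: Tuple[float, float, float]  # (ax, ay, az) in m/s^2
    gyro: Tuple[float, float, float]   # (gx, gy, gz) in rad/s

@dataclass
class Keyframe:
    t: float                       # capture timestamp
    jpeg_bytes: bytes              # sparse fisheye still, not 30fps video

@dataclass
class Recording:
    imu: List[ImuSample] = field(default_factory=list)
    keyframes: List[Keyframe] = field(default_factory=list)
    audio_pcm: bytes = b""         # spatial audio track

    def imu_between(self, t0: float, t1: float) -> List[ImuSample]:
        """IMU samples inside a keyframe interval, for time alignment."""
        return [s for s in self.imu if t0 <= s.t < t1]
```

Keeping everything keyed to one shared clock is what lets a downstream model pair each sparse frame with the dense inertial motion around it.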
AI doesn't need 30fps video — it learns from sparse keyframes + IMU data. This means ultra-low power, all-day battery, and always-on data collection. Competitors record for human eyes. We record for AI.
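One way sparse capture like this could work (a sketch under our own assumptions, not AURI's firmware): fire a keyframe only when the integrated gyro motion since the last frame crosses a threshold, so a still wearer costs almost nothing and a moving one is sampled densely.

```python
import math

def keyframe_indices(gyro, dt=0.01, threshold_rad=0.5):
    """Return sample indices at which to grab a keyframe.

    gyro: list of (gx, gy, gz) angular rates in rad/s at interval dt.
    A frame fires whenever accumulated rotation since the last
    keyframe exceeds `threshold_rad`. Illustrative logic only.
    """
    indices, accum = [], 0.0
    for i, (gx, gy, gz) in enumerate(gyro):
        accum += math.sqrt(gx * gx + gy * gy + gz * gz) * dt
        if accum >= threshold_rad:
            indices.append(i)
            accum = 0.0
    return indices
```

A steady 1 rad/s rotation sampled at 100 Hz would trigger a frame every half second; no motion triggers none, which is where the power savings come from.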
Sparse keyframes, not continuous video. Skeletal data, not pixel data. What leaves the device contains no faces, no voices, no personal information. Other wearables ask you to trust their policy — AURI makes surveillance technically impossible.
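The "skeletal data, not pixel data" idea amounts to an allow-list at the device boundary. A toy sketch (key names are hypothetical, not AURI's schema): only derived, non-identifying signals survive the export; raw frames and audio are dropped before anything leaves the device.

```python
def sanitize_for_upload(capture: dict) -> dict:
    """Keep only derived, non-identifying signals.

    A sketch of allow-list export: anything not explicitly listed
    (raw frames, audio, etc.) never leaves the device. All key
    names here are hypothetical.
    """
    ALLOWED = {"skeleton_joints", "imu", "timestamps", "device_id_hash"}
    return {k: v for k, v in capture.items() if k in ALLOWED}
```

An allow-list is the safer default here: a new raw sensor stream added later is excluded automatically until someone deliberately adds it.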
Ear-mounted wide-angle camera
3D scene from sparse frames
Human pose and appearance
Virtual camera cinematography
Cinematic third-person video
AI generates follow-cam, orbit, dolly, and low-angle shots from your first-person capture. Professional angles, zero effort.
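The five stages above (ear-mounted capture, 3D scene, human pose, virtual camera, final render) could be wired together roughly as follows. Every function is a stand-in for a real model (sparse-view reconstruction, pose estimation, neural rendering); the names are ours for illustration, not AURI's actual APIs.

```python
# Placeholder pipeline sketch: each stage stands in for a real model;
# function names are illustrative, not AURI's actual APIs.

def reconstruct_scene(keyframes):
    # e.g. sparse-view 3D reconstruction (NeRF / Gaussian Splatting)
    return {"points": len(keyframes) * 1000}

def estimate_pose(keyframes, imu):
    # e.g. fuse ear-view frames + IMU into a skeleton trajectory
    return [{"t": i * 0.5, "joints": 24} for i in range(len(keyframes))]

def plan_virtual_camera(skeleton, style="follow"):
    # choose a camera path: follow, orbit, dolly, or low-angle
    return [{"t": p["t"], "style": style} for p in skeleton]

def render_third_person(scene, skeleton, camera_path):
    # neural rendering stage; here it just reports what it would render
    return f"{len(camera_path)} frames ({camera_path[0]['style']} cam)"

def first_person_to_cinematic(keyframes, imu, style="follow"):
    scene = reconstruct_scene(keyframes)
    skeleton = estimate_pose(keyframes, imu)
    path = plan_virtual_camera(skeleton, style)
    return render_third_person(scene, skeleton, path)
```

The point of the decomposition: the camera path is planned against a reconstructed scene and skeleton, not the raw footage, which is why shot styles can be swapped after the fact.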
World model technology transforms raw footage into structured embodied data: skeleton trajectories, action semantics, object interactions, 3D scene reconstructions. Data that robotics companies can use directly for training.
Cryptographic proof-of-capture verifies every frame is real. Content authenticity built into the silicon.
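One plausible shape for such a proof (a generic sketch; the source does not describe AURI's actual scheme): hash-chain each frame to the previous one and tag the chain with a device-held key, so deleting, reordering, or editing any frame breaks verification.

```python
import hashlib
import hmac

def chain_frames(frames, device_key: bytes):
    """Hash-chain frames and HMAC each link with a device key.

    A generic proof-of-capture sketch, not AURI's silicon scheme:
    each frame's digest folds in the previous digest, so any
    tampering invalidates every later tag.
    """
    prev = b"\x00" * 32
    tags = []
    for frame in frames:
        digest = hashlib.sha256(prev + frame).digest()
        tags.append(hmac.new(device_key, digest, hashlib.sha256).digest())
        prev = digest
    return tags

def verify_chain(frames, tags, device_key: bytes) -> bool:
    """Recompute the chain and compare against the stored tags."""
    return tags == chain_frames(frames, device_key)
```

In a real device the key would live in a secure element and verification would use a public attestation key rather than the shared secret shown here.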
First-person video, spatial audio, and IMU data generate high-value training data for robotics, world models, and embodied AI.
Smart glasses sit on the front of your face: they see only where you're already looking, with a narrow FOV and your same blind spots. The ear sits on the side, like an eagle's eye.
Dual fisheye lenses — one on each ear — capture a full 360° panoramic field of view. Forward, backward, up, down, AND the wearer's own body — all simultaneously.
The ear is the body's most stable mounting point during locomotion. Unlike glasses (forehead shakes), chest (breathing), or wrist (arm swing), the ear sits at the natural pivot of head movement — a biological gimbal.
500M+ people already wear earbuds daily. No new behavior. No social stigma of camera glasses. The best data collection device is one people already want to use.
Oasis, Sora, and Genie 2 proved real-time world simulation from video is possible. Turning your view into cinematic third-person video, impossible two years ago, now runs in seconds.
NVIDIA EgoScale (Feb 2026): log-linear scaling with no saturation at 20K hours. Every robotics company is desperate for egocentric data. Supply is ~100K hours total — demand is millions.
Because AI learns from sparse keyframes plus IMU data rather than 30fps video, power draw stays ultra-low: all-day battery, always-on collection. Combined with mature TWS supply chains, a BOM under $80 is achievable today.
AURI X1 sits on your ear like premium earbuds. It hears, it sees, and it creates. One AI pipeline, multiple form factors.
Now. All-day wear. 360° dual fisheye.
Every run looks like a GoPro ad. Hands free.
Every dive becomes a nature documentary.
Premium earphone experience with active noise cancellation and immersive spatial audio. Your daily driver.
Ultra-wide camera captures your perspective continuously. Intelligent recording triggers capture what matters most.
Real-time speaker identification, conversation context, and social assistance. Your personal networking copilot.
Designed around your ear's natural geometry. Lightweight, stable, and built for hours of wear.
CEO / Founder
Forbes 30 Under 30 (Consumer Tech). Youngest-ever Red Dot Best of the Best (age 19). BMW 2040 Global Champion. 33 patents. Tsinghua. Invented Nums Ultra-thin Smart Keyboard, 1M+ units shipped globally. Deep Shenzhen hardware supply chain.
World Model Lead
Stanford CS. Co-created Oasis (Israel) — world's first real-time playable world model. Former scientist at World Labs (Fei-Fei Li). ICLR 2026 first author (Percy Liang). Chose LUCKEY over MIT, Berkeley, Stanford PhD offers.
Head of Product Ops
7 years at Google (hardware & software PM, Mountain View). HK PolyU undergrad, UCLA Anderson MBA. Global supply chain network. End-to-end product delivery.
Small team, outsized ambition. We're looking for engineers and researchers who want to define a new category of wearable AI.
Egocentric pose estimation, 3D reconstruction, neural rendering. If you've worked with NeRF, Gaussian Splatting, or human body models, we'd love to talk.
Motion-driven video generation, diffusion models, real-time inference optimization. Turning skeleton data into photorealistic video.
Camera modules, audio DSP, low-power SoC design, mechanical engineering for ear-wearable form factors.
Defining the user experience for an entirely new product category. Consumer hardware meets AI-native workflows.
Whether you're an investor, an enterprise data partner, or someone who wants to join the team — we'd love to hear from you.
shawn@luckey.to