Building in Mountain View, CA

Your ears see more than you think.

AURI X1 is a wearable AI camera earphone that transforms first-person perspective into cinematic third-person video. For creators, it's a hands-free film crew. For enterprise, it's the world's most natural egocentric data collection device.

01 / TECHNOLOGY

Ego-to-Exo Video Synthesis

A tiny camera on your ear captures your world. Our AI reconstructs your body, understands the scene, and generates cinematic third-person video in real time.

AI Pipeline

01 · Capture: ear-mounted wide-angle camera
02 · Reconstruct: 3D scene from sparse frames
03 · Understand: human pose and appearance
04 · Synthesize: virtual camera cinematography
05 · Output: cinematic third-person video
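The five stages above can be sketched as a single function chain. This is a purely illustrative toy, assuming stage boundaries as named in the pipeline; every function and data shape here is hypothetical, not LUCKEY's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the real subsystems; shapes are toy values.

def reconstruct_scene(ego_frames):
    """02: build a (stand-in) 3D scene from sparse first-person frames."""
    return {"points": 100 * len(ego_frames)}

def estimate_wearer_pose(ego_frames):
    """03: recover the wearer's pose, one skeleton per frame."""
    return [{"joints": 24} for _ in ego_frames]

def plan_virtual_camera(scene, poses):
    """04: choose a cinematic move (follow-cam here) for each frame."""
    return ["follow-cam" for _ in poses]

def render(scene, poses, camera_path):
    """05: render one third-person frame per virtual camera pose."""
    return [f"exo_frame_{i}" for i in range(len(camera_path))]

@dataclass
class ExoVideo:
    frames: list        # rendered third-person frames
    camera_path: list   # virtual camera moves, one per frame

def ego_to_exo(ego_frames):
    """01-05: first-person frames in, third-person video out."""
    scene = reconstruct_scene(ego_frames)
    poses = estimate_wearer_pose(ego_frames)
    camera_path = plan_virtual_camera(scene, poses)
    return ExoVideo(render(scene, poses, camera_path), camera_path)
```

The point of the sketch is the dataflow: reconstruction and pose estimation run on the same sparse capture, and cinematography is planned before any pixel is rendered.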

01 · Autonomous Cinematography
AI generates follow-cam, orbit, dolly, and low-angle shots from your first-person capture. Professional angles, zero effort.

02 · World Model Integration
Sparse keyframe capture meets cloud-scale AI scene completion. Battery-efficient recording that reconstructs full cinematic sequences.

03 · Hardware-Rooted Authenticity
Cryptographic proof-of-capture verifies every frame is real. Content authenticity built into the silicon.

04 · Embodied AI Data Engine
First-person video, spatial audio, and IMU streams become high-value training data for robotics, world models, and embodied AI.
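Hardware-rooted proof-of-capture can be illustrated with a per-frame authentication tag. This is a minimal sketch assuming a device secret held in a secure element; the key, message layout, and use of HMAC-SHA256 are illustrative assumptions, not the actual scheme.

```python
import hashlib
import hmac

# ASSUMPTION: each device holds a unique secret in secure silicon.
DEVICE_KEY = b"secret-burned-into-secure-element"

def sign_frame(frame: bytes, timestamp_us: int) -> bytes:
    """Bind a frame's content hash to its capture time with an HMAC tag."""
    msg = timestamp_us.to_bytes(8, "big") + hashlib.sha256(frame).digest()
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()

def verify_frame(frame: bytes, timestamp_us: int, tag: bytes) -> bool:
    """A verifier with the same key confirms the frame is unmodified."""
    return hmac.compare_digest(sign_frame(frame, timestamp_us), tag)
```

Any edit to the pixels or the timestamp invalidates the tag, which is the property that lets downstream platforms trust the footage. A production design would use asymmetric keys so verifiers never hold the signing secret.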

85 Patent Claims Filed · 16 AI Subsystems · 33 Patents Granted (Prior Work) · 5 Pipeline Stages
02 / ENTERPRISE DATA

Egocentric Data for Robotics & World Models

The same hardware that empowers creators also captures the most natural first-person video data for training embodied AI. Ear-mounted sensors closely approximate robot head positioning, minimizing domain shift.

Virtual Drone Capture · Social Media Output · Human Data Collection · Cross-Morphology Transfer

E1 · Embodiment-Aligned Collection
Ear-mounted camera angle approximates robot head sensor layout. Data collected by humans transfers directly to robot policy training with minimal adaptation.

E2 · Scalable Data Pipeline
Purpose-built collection hardware paired with a large network of human operators. Continuous first-person video, depth estimation, spatial audio, and 6-DOF IMU data.

E3 · Multi-Modal Richness
Every recording captures synchronized video, audio, inertial motion, and environmental context. Ideal for training foundation models that bridge perception and action.

E4 · Enterprise-Ready
Ruggedized collection devices for factory and field environments. Custom data schemas, privacy controls, and integration with existing ML pipelines.
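A synchronized multi-modal recording like the one described above can be pictured as a timestamped record carrying every modality. This is a hypothetical schema sketch; the field names and units are illustrative assumptions, not LUCKEY's actual data format.

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    """One synchronized capture tick across all sensors (illustrative)."""
    timestamp_us: int      # shared clock binding the modalities together
    video_frame: bytes     # wide-angle camera frame
    audio_chunk: bytes     # spatial audio window
    imu: tuple             # 6-DOF reading: (ax, ay, az, gx, gy, gz)
    context: dict = field(default_factory=dict)  # e.g. scene/activity labels

def validate(sample: Sample) -> bool:
    """Minimal check that a sample carries a full 6-DOF reading in time order."""
    return len(sample.imu) == 6 and sample.timestamp_us >= 0
```

The shared timestamp is the key design choice: perception-and-action models need the camera, microphone, and motion streams aligned to one clock, not merged after the fact.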

03 / PRODUCT

More Than a Camera. More Than an Earphone.

AURI X1 sits on your ear like premium earbuds. It hears, it sees, and it creates. One AI pipeline, multiple form factors.

Ear-Clip Earbuds
Available now. All-day wear. 360° dual-fisheye capture.

Ski Goggles
Every run looks like a GoPro ad. Hands-free.

Dive Mask
Every dive becomes a nature documentary.

A · Spatial Audio + ANC
Premium earphone experience with active noise cancellation and immersive spatial audio. Your daily driver.

B · Always-On AI Vision
Ultra-wide camera captures your perspective continuously. Intelligent recording triggers capture what matters most.

C · Social AI Agent
Real-time speaker identification, conversation context, and social assistance. Your personal networking copilot.

D · All-Day Comfort
Designed around your ear's natural geometry. Lightweight, stable, and built for hours of wear.

04 / TEAM

Built to Ship Hardware + AI

Shawn Gong

CEO / Founder

Forbes 30 Under 30. Youngest-ever Red Dot Best of the Best (age 19). BMW 2040 Global Champion. 33 patents. Tsinghua. Invented Nums Ultra-thin Smart Keyboard, 1M+ units shipped globally.

Julian Quevedo

World Model Lead

Stanford CS. Co-created Oasis — first real-time playable world model. Former researcher at World Labs (Fei-Fei Li). ICLR 2026 first author (Percy Liang). Chose LUCKEY over MIT, Berkeley, Stanford PhD offers.

Mingmin She

Head of Product Ops

7 years at Google (hardware & software PM, Mountain View). HK PolyU undergrad, UCLA Anderson MBA. Global supply chain network. End-to-end product delivery.

Recognized By
Forbes 30 Under 30 Red Dot Design Award Tsinghua University Stanford University Google USPTO Patent Pending
05 / CAREERS

We're Hiring Builders

Small team, outsized ambition. We're looking for engineers and researchers who want to define a new category of wearable AI.

Computer Vision / 3D

Egocentric pose estimation, 3D reconstruction, neural rendering. If you've worked with NeRF, Gaussian Splatting, or human body models, we'd love to talk.

Generative AI / Video

Motion-driven video generation, diffusion models, real-time inference optimization. Turning skeleton data into photorealistic video.

Hardware / Embedded

Camera modules, audio DSP, low-power SoC design, mechanical engineering for ear-wearable form factors.

Product / Growth

Defining the user experience for an entirely new product category. Consumer hardware meets AI-native workflows.

GET IN TOUCH

Let's build together.

Whether you're an investor, an enterprise data partner, or someone who wants to join the team — we'd love to hear from you.

shawn@luckey.to
Mountain View, CA / Singapore / Shenzhen