Learning from the Field: Post-Launch Telemetry
What we're learning from real users on real devices - the telemetry infrastructure and insights that drive product improvement.
Devices are in the wild. For the first time, we see how real users interact with our perception system. The data is humbling.
Telemetry Architecture
Every Magic Leap One collects (with consent):
- Tracking quality metrics (not position data)
- Error events and crash logs
- Performance counters
- Feature usage statistics
Data flow:
Device → Local Buffer → Batch Upload → Cloud Processing → Aggregation → Dashboards
Privacy-preserving: no raw images, no position data, no user-identifiable content. Aggregate statistics only.
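A minimal sketch of the device-side stage of that flow, assuming a simple ring buffer and JSON batches (the class, field names, and sizes here are illustrative, not the real pipeline):

```python
import json
import time
from collections import deque

class TelemetryBuffer:
    """Device-side ring buffer: events accumulate locally and are flushed
    in batches, so a flaky network never blocks the device."""

    def __init__(self, capacity=1000, batch_size=100):
        self.events = deque(maxlen=capacity)  # oldest events drop first when full
        self.batch_size = batch_size

    def record(self, kind, **fields):
        # Aggregate metrics only: no images, no poses, no user identifiers.
        self.events.append({"kind": kind, "ts": time.time(), **fields})

    def drain_batch(self):
        """Pop up to batch_size events for upload; the caller re-queues on failure."""
        batch = []
        while self.events and len(batch) < self.batch_size:
            batch.append(self.events.popleft())
        return batch

buf = TelemetryBuffer()
buf.record("tracking_loss", duration_ms=420)
payload = json.dumps(buf.drain_batch())  # what a batch upload would carry
```

The bounded `deque` is the key design choice: if uploads stall, the buffer silently drops the oldest events rather than growing without limit on the device.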
What We're Measuring
Tracking Health
- Tracking loss events per hour
- Relocalization success rate
- Time to first track after the headset is put on
- Sustained tracking duration
Performance
- Frame rate distribution
- Latency percentiles (p50, p95, p99)
- Thermal throttling frequency
- Battery drain rate during use
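The latency percentiles above can be computed from per-frame samples with the standard library alone; the sample values below are illustrative:

```python
import statistics

def latency_percentiles(samples_ms):
    """Return (p50, p95, p99) from a list of per-frame latency samples in ms."""
    # n=100 yields 99 cut points; index k-1 is the k-th percentile.
    q = statistics.quantiles(samples_ms, n=100, method="inclusive")
    return q[49], q[94], q[98]

samples = [8.0, 9.0, 9.5, 10.0, 11.0, 12.0, 14.0, 18.0, 25.0, 40.0]
p50, p95, p99 = latency_percentiles(samples)  # p50 is the median, 11.5 here
```

Tracking the tail (p95/p99) rather than the mean is deliberate: a handful of slow frames is what users perceive as judder, and a mean hides them.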
Feature Usage
- Spatial mapping request frequency
- Hand tracking activation rate
- Eye tracking gaze events
- Image tracking anchor count
Surprising Findings
Usage patterns differ from testing:
- Lab testing: controlled motions, clean environments
- Real users: rapid head movements, cluttered spaces, challenging lighting
Lighting is the killer:
- 35% of tracking issues correlate with lighting extremes
- Users sit near windows during the day and in dark rooms at night
- Our auto-exposure algorithms need work
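One way to quantify that correlation, assuming loss events arrive tagged with an ambient-light reading (the lux thresholds and field names here are illustrative, not our real cutoffs):

```python
def lighting_extreme_fraction(events, low_lux=10.0, high_lux=10000.0):
    """Fraction of tracking-loss events that occurred under lighting extremes."""
    losses = [e for e in events if e["kind"] == "tracking_loss"]
    extreme = [e for e in losses if e["lux"] < low_lux or e["lux"] > high_lux]
    return len(extreme) / len(losses) if losses else 0.0

events = [
    {"kind": "tracking_loss", "lux": 3.0},      # dark room at night
    {"kind": "tracking_loss", "lux": 250.0},    # normal indoor lighting
    {"kind": "tracking_loss", "lux": 25000.0},  # next to a window at midday
    {"kind": "tracking_loss", "lux": 400.0},
]
fraction = lighting_extreme_fraction(events)  # 2 of 4 losses → 0.5
```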
Short sessions dominate:
- Median session: 8 minutes
- 90th percentile: 45 minutes
- Battery life matters less than we thought; instant-on matters more
Relocalization failure rate:
- 12% of relocalization attempts fail
- Main cause: environment changed since map creation
- Users don't understand why content moved
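A sketch of how the failure rate and its leading causes can be aggregated from attempt records (the record fields and cause labels are illustrative):

```python
from collections import Counter

def reloc_summary(attempts):
    """Aggregate relocalization attempts into a failure rate and ranked causes."""
    total = len(attempts)
    failures = [a for a in attempts if not a["success"]]
    causes = Counter(a["cause"] for a in failures)
    rate = len(failures) / total if total else 0.0
    return rate, causes.most_common()

attempts = [
    {"success": True,  "cause": None},
    {"success": False, "cause": "environment_changed"},
    {"success": True,  "cause": None},
    {"success": False, "cause": "environment_changed"},
]
rate, top_causes = reloc_summary(attempts)  # rate = 0.5
```

Ranking causes with `most_common()` is what turns a raw failure rate into an actionable list: "environment changed since map creation" only surfaced because failures were categorized, not just counted.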
Action Items from Telemetry
Immediate fixes (software update):
- Tune auto-exposure for extreme lighting
- Improve relocalization failure messaging
- Reduce tracking loss recovery time
V2 requirements:
- Better low-light performance (sensor or algorithm)
- Persistent maps that handle change
- Faster cold start
Deeper investigation:
- Categorize tracking loss events (sensor, algorithm, or integration?)
- Understand environment diversity in user homes
- Correlate feature usage with satisfaction
The Feedback Loop
Telemetry enables a virtuous cycle:
- Observe field behavior
- Identify failure patterns
- Reproduce in controlled test
- Fix and validate
- Deploy update
- Measure impact
This cycle was impossible before launch. Now it's our primary improvement mechanism.
Privacy Considerations
Every telemetry addition gets privacy review:
- What's the minimum data needed?
- Is it aggregatable without individual identification?
- Could it be correlated with external data?
- Is consent clear?
We've rejected useful telemetry that crossed privacy lines. User trust is worth more than data.
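One concrete form the "aggregatable without individual identification" question can take is a minimum-cohort gate: suppress any metric bucket backed by too few devices to be safely anonymous. A sketch, with an illustrative threshold and data shape:

```python
def safe_to_report(metric_buckets, min_cohort=50):
    """Keep only buckets backed by at least min_cohort devices; small
    buckets are suppressed so no value can be tied to an individual.
    metric_buckets maps bucket name -> (metric value, device count)."""
    return {bucket: value
            for bucket, (value, n) in metric_buckets.items()
            if n >= min_cohort}

buckets = {
    "living_room": (0.92, 1200),  # (reloc success rate, device count)
    "rare_locale": (0.40, 3),     # only 3 devices: suppressed
}
reported = safe_to_report(buckets)  # → {"living_room": 0.92}
```

A gate like this makes the privacy review mechanical for one common case: a dashboard simply cannot display a number that only a handful of users produced.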