EasyAR headset support
EasyAR SDK provides powerful cross-platform AR capabilities, and its design philosophy applies equally to an emerging category of spatial computing devices: headsets. This article introduces how EasyAR supports headset devices and how developers can leverage these features to build immersive experiences.
Terminology
In this document, "headset" specifically refers to a category of head-mounted computing devices that support immersive or see-through interactions. They can present virtual content before users' eyes, enabling augmented reality (AR) or mixed reality (MR) experiences. This includes:
- Optical see-through (OST) headsets: View the real world directly through semi-transparent lenses
- Video see-through (VST) headsets: View the real world captured by cameras as video streams
Basic working principles of headsets
To better understand how EasyAR supports headsets, we first need to understand the basic workflow of headset devices:
- Environmental perception: Use multiple built-in cameras, depth sensors (e.g., iToF), and inertial measurement units (IMU) to perceive the surrounding environment's geometric structure, lighting conditions, and object surfaces in real time.
- Spatial computing: Based on sensor data, track the 6DoF pose (position + orientation) of the user's head in real time through the SLAM system.
- Content rendering and display: Render 3D content (e.g., models, effects) according to the device pose and project the results onto the display. For VR mode, pure virtual scenes are displayed; for AR/MR mode, virtual scenes are composited with the real environment (VST camera feed or OST see-through background).
- Interaction system: Receive user commands and respond through controllers, gesture recognition, voice, or eye tracking.
How EasyAR supports headsets
EasyAR does not replace the headset's native spatial tracking or rendering pipeline; instead, it works alongside them as a spatial computing enhancer. As a professional AR algorithm engine, it provides spatial perception and computing capabilities for various AR scenarios while collaborating efficiently with the device's native system.
| Scope of responsibility | Role division |
|---|---|
| Head 6DoF tracking, display rendering, basic interactions, etc. | Headset native SDK/runtime |
| Advanced perception capabilities like image/object recognition and tracking, large-scale localization | EasyAR SDK |
EasyAR SDK provides core AR functionalities for world perception, such as image/object recognition, sparse reconstruction, dense reconstruction, and large-scale localization. It is responsible for "understanding" the world and telling the headset's application where virtual content should be placed.
EasyAR SDK integrates as a plugin or library into the headset's application development framework (typically Unity or Unreal). It receives raw data streams from the device system, processes and computes them, then outputs a pose matrix relative to the device's spatial coordinate system. Finally, the headset engine's rendering pipeline draws virtual objects at the correct positions.
Support status and implementation methods
EasyAR provides comprehensive support for mainstream headset development platforms, primarily through the following two methods:
- Via Unity/Unreal Engine: This is the most mainstream and recommended approach. Headset manufacturers usually provide dedicated Unity/Unreal plugins or XR SDKs. EasyAR can seamlessly integrate with the manufacturer's SDK.
- Via native platform: For scenarios requiring maximum performance or specific native development, EasyAR's C++/Java/Objective-C native interfaces can be used. This typically requires developers to handle the interface connection with the device's underlying data themselves.
EasyAR has been tested and verified on multiple mainstream headset platforms through Unity. Currently confirmed supported devices are as follows:
| Headset model | System/SDK version requirements |
|---|---|
| Apple Vision Pro | visionOS 2 or later |
| PICO 4 Ultra Enterprise | PICO Unity Integration SDK 3.1.0 or later |
| Rokid AR Studio | Rokid Unity OpenXR Plugin 3.0.3 or later |
| XREAL Air 2 Ultra | XREAL SDK 3.1 or later |
| Xrany X1 | Xrany Yuan Ni SDK |
Note
Rokid AR Studio can support Rokid UXR 3 via Rokid Unity OpenXR Plugin, but using XR Interaction Toolkit is recommended, especially for cross-device usage.
Important
Apple Vision Pro, PICO, and XREAL all require corresponding enterprise licenses for use. Contact business support if you have questions.
- Due to Apple Vision Pro interface authorization restrictions, only devices with Apple enterprise API licenses are supported.
- Due to PICO interface authorization restrictions, only PICO enterprise edition devices are supported.
- Due to XREAL interface authorization restrictions, only devices with enterprise authorization are supported.
For headset devices from other manufacturers not listed above, EasyAR provides extension access methods such as custom cameras, allowing you to complete the integration yourself. Refer to Creating EasyAR headset extension packages for details.
This typically involves the following steps:
- Obtain device development permissions: Apply for the target headset's developer account and SDK documentation.
- Acquire sensor data streams: Obtain necessary data like camera images (video frames) and camera parameters from the device SDK.
- Call EasyAR APIs: Use EasyAR's underlying APIs to feed the acquired sensor data into an EasyAR FrameSource for processing.
- Acquire and apply computation results: Obtain computation results (camera pose) from the EasyAR engine and apply them to your 3D rendering engine.
We provide detailed development guides and sample code to assist you in this process. If you encounter issues during integration, feel free to seek technical support in our developer community.
Core features available
On headset devices, you can fully utilize EasyAR's complete feature matrix to build rich spatial applications:
- Planar image tracking: Recognize and track preset images, overlaying dynamic videos or 3D models on them.
- 3D object tracking: Recognize and track preset 3D models (e.g., toys, product packaging), enabling virtual content to interact with them.
- Sparse spatial mapping: Scan the surrounding environment to generate 3D visual maps, providing visual localization and tracking capabilities. Generated maps can be saved or shared in real-time across multiple devices.
- Dense spatial mapping: Scan and generate dense point cloud maps and mesh models of the environment, enabling physical occlusion relationships between virtual and real objects, greatly enhancing immersion.
- Cloud image recognition: Connect to EasyAR cloud databases to enable recognition and management of massive image libraries, suitable for exhibitions, education, and other scenarios.
- Mega large-scale localization: City-scale spatial computing solution connecting to EasyAR cloud localization services, enabling stable, fast, and precise localization and tracking, significantly expanding the scope of AR experiences.
Platform-specific guides
To help you get started quickly on specific platforms, we have prepared detailed multi-platform integration guides. Click the tabs below to view quickstart tutorials for corresponding platforms.
- Introduction
- Using headset samples
- Enabling headset support
- Project configuration
- Creating EasyAR headset extension packages