EasyAR headset support

EasyAR SDK provides powerful cross-platform AR capabilities, and its design extends naturally to an emerging class of spatial computing devices: headsets. This article introduces how EasyAR supports headset devices and how developers can leverage these capabilities to build immersive experiences.

Terminology explanation

In this document, "headset" or "Headset" specifically refers to a type of computing device with a head-worn form factor that supports immersive or see-through interaction. It can present virtual content in front of the user's eyes to achieve augmented reality (AR) or mixed reality (MR) experiences. This includes:

  • Optical see-through (OST) headsets: View the real world directly through semi-transparent lenses
  • Video see-through (VST) headsets: Capture the real world through cameras and view it as a video stream

Basic working principle of headsets

To understand how EasyAR supports headsets, it helps to first look at the basic workflow of these devices:

  1. Environmental perception: Through built-in multi-camera systems, depth sensors (such as iToF), and inertial measurement units (IMUs), the device continuously perceives the surrounding environment's geometric structure, lighting conditions, and object surfaces.
  2. Spatial computation: Based on the sensor data, a SLAM system tracks the user's head 6DoF pose (position + orientation) in real time.
  3. Content rendering and display: Renders 3D content (such as models and effects) according to the device's pose and projects the results onto the display (see the sketch after this list). In VR mode, purely virtual scenes are displayed; in AR/MR modes, virtual content is composited with the real environment (either the VST camera feed or the OST transparent background).
  4. Interaction system: Receives and responds to user input through controllers, gesture recognition, voice commands, or eye tracking.
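
To make steps 2 and 3 concrete, here is a minimal, self-contained sketch of how a tracked 6DoF pose (position plus unit-quaternion orientation) becomes the 4x4 transform a renderer consumes each frame. This is purely illustrative math, not EasyAR or vendor API; headset runtimes expose this matrix through their own SDKs.

```cpp
// Minimal sketch: a 6DoF pose (position + quaternion) as the renderer's
// 4x4 rigid transform. Illustrative only; not tied to any specific SDK.
#include <array>
#include <cstdio>

struct Pose {
    std::array<float, 3> position;     // x, y, z in meters
    std::array<float, 4> orientation;  // quaternion x, y, z, w (unit length)
};

// Build a column-major 4x4 rigid transform from a pose.
std::array<float, 16> toMatrix(const Pose& p) {
    const float x = p.orientation[0], y = p.orientation[1];
    const float z = p.orientation[2], w = p.orientation[3];
    return {
        1 - 2*(y*y + z*z), 2*(x*y + z*w),     2*(x*z - y*w),     0,
        2*(x*y - z*w),     1 - 2*(x*x + z*z), 2*(y*z + x*w),     0,
        2*(x*z + y*w),     2*(y*z - x*w),     1 - 2*(x*x + y*y), 0,
        p.position[0],     p.position[1],     p.position[2],     1,
    };
}

int main() {
    // Identity orientation, head 1.6 m above the floor origin.
    Pose head{{0.0f, 1.6f, 0.0f}, {0.0f, 0.0f, 0.0f, 1.0f}};
    auto m = toMatrix(head);
    std::printf("head translation: %.2f %.2f %.2f\n", m[12], m[13], m[14]);
}
```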

How EasyAR supports headsets

EasyAR does not replace a headset's native spatial tracking or rendering pipeline; instead, it works alongside them as a spatial computing enhancer. As a professional AR algorithm engine, it provides a range of spatial perception and computing capabilities for AR scenarios while cooperating efficiently with the device's own system.

Scope of responsibility | Role division
Head 6DoF tracking, display rendering, basic interaction, etc. | Native SDK/runtime of the headset
Advanced perception such as image/object recognition and tracking, large-scale localization, etc. | EasyAR SDK

The EasyAR SDK offers core AR functionalities for world perception, such as image/object recognition, sparse reconstruction, dense reconstruction, and large-scale localization. It is responsible for "understanding" the world and telling the headset application where virtual content should be placed.

The EasyAR SDK is integrated into the headset's application development framework (typically Unity or Unreal) as a plugin or library. It receives raw data streams from the device system, processes them, and outputs a pose matrix relative to the device's spatial coordinate system. The engine's rendering pipeline then draws the virtual objects in the correct position.
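
In other words, EasyAR answers "where is the target relative to the camera", and the headset runtime answers "where is the camera in world space"; the application chains the two. Below is a hedged sketch of that chain, assuming column-major matrices; the Mat4 helper and matrix names are illustrative, not EasyAR API.

```cpp
// Coordinate chain: world_from_target = world_from_camera * camera_from_target.
// The headset runtime supplies world_from_camera; EasyAR supplies the
// target's pose in camera space. Names and types here are illustrative.
#include <array>
#include <cstdio>

using Mat4 = std::array<float, 16>;  // column-major 4x4

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r[c*4 + row] += a[k*4 + row] * b[c*4 + k];
    return r;
}

Mat4 translation(float x, float y, float z) {
    return {1,0,0,0, 0,1,0,0, 0,0,1,0, x,y,z,1};
}

int main() {
    Mat4 world_from_camera  = translation(0.0f, 1.6f, 0.0f);   // from headset runtime
    Mat4 camera_from_target = translation(0.0f, 0.0f, -0.5f);  // from EasyAR output
    Mat4 world_from_target  = multiply(world_from_camera, camera_from_target);
    std::printf("target in world: %.2f %.2f %.2f\n",
                world_from_target[12], world_from_target[13], world_from_target[14]);
    // Expected output: target in world: 0.00 1.60 -0.50
}
```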

Support status and implementation methods

EasyAR provides comprehensive support for mainstream headset development platforms, mainly through the following two methods:

  • Through Unity/Unreal Engine: This is the most mainstream and recommended approach. Headset manufacturers usually provide dedicated Unity/Unreal plugins or XR SDKs, and EasyAR integrates seamlessly alongside them.
  • Through native platform (Native): For scenarios requiring extreme performance or specific native development, EasyAR's C++/Java/Objective-C native interfaces can be used. This typically requires developers to wire up the device's underlying data interfaces themselves.

EasyAR has been tested and verified on multiple mainstream headset platforms through Unity. The currently confirmed supported devices are as follows:

Headset model | System/SDK version requirements
Apple Vision Pro | visionOS 2 or later
PICO 4 Ultra Enterprise | PICO Unity Integration SDK 3.1.0 or later
Rokid AR Studio | Rokid Unity OpenXR Plugin 3.0.3 or later
XREAL Air 2 Ultra | XREAL SDK 3.1 or later
Xrany X1 | Xrany YuanNi SDK

Note

Rokid AR Studio can support Rokid UXR 3 through the Rokid Unity OpenXR Plugin, but it is recommended to use the XR Interaction Toolkit, especially for cross-device usage.

Important

Apple Vision Pro, PICO, and XREAL all require their corresponding enterprise licenses for use. If you have any questions, please contact the business team.

  • Due to Apple Vision Pro interface authorization restrictions, only devices with Apple enterprise API permissions are supported.
  • Due to PICO interface authorization restrictions, only PICO enterprise edition devices are supported.
  • Due to XREAL interface authorization restrictions, only devices with enterprise authorization are supported.

For headset devices from other manufacturers not mentioned above, EasyAR provides extended access methods such as custom cameras. For details on completing the integration, refer to Creating an EasyAR headset extension package.

This typically involves the following steps (a sketch follows the list):

  1. Obtain device development permissions: Apply for a developer account and SDK documentation for the target headset.
  2. Acquire sensor data streams: Obtain necessary data such as camera images (video frames) and camera parameters from the device SDK.
  3. Call EasyAR APIs: Use EasyAR's underlying APIs to feed the acquired sensor data into the EasyAR FrameSource for processing.
  4. Obtain and apply calculation results: Retrieve the calculation results (camera poses) from the EasyAR engine and apply them to your 3D rendering engine.
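
A hedged sketch of steps 2 through 4 in C++. The EasyAR-side names here (CustomFrameSource, pushFrame, latestTargetPose) are illustrative assumptions standing in for the real underlying APIs; consult the SDK headers and the device vendor's documentation for actual signatures.

```cpp
// Hedged sketch of steps 2-4: pull frames from the device SDK, feed them
// into EasyAR, read back the computed pose. EasyAR-side names below are
// illustrative assumptions, not the actual EasyAR API.
#include <cstdint>
#include <vector>

// Step 2: data you must pull from the device SDK each frame.
struct DeviceFrame {
    std::vector<uint8_t> pixels;            // camera image
    int width = 0, height = 0;
    double fx = 0, fy = 0, cx = 0, cy = 0;  // camera intrinsics
    double timestamp = 0;                   // seconds, device clock
};

// Hypothetical stand-ins for the EasyAR side (assumption, see lead-in).
struct Pose { float matrix[16]; bool valid; };
struct CustomFrameSource {
    void pushFrame(const DeviceFrame&) { /* feed into the EasyAR FrameSource */ }
    Pose latestTargetPose() { return {{}, false}; /* read EasyAR output */ }
};

void perFrameUpdate(CustomFrameSource& source, const DeviceFrame& frame) {
    // Step 3: feed the sensor data into EasyAR for processing.
    source.pushFrame(frame);

    // Step 4: read back the computed pose and hand it to your renderer.
    Pose pose = source.latestTargetPose();
    if (pose.valid) {
        // applyToSceneNode(pose.matrix);  // rendering-engine specific
    }
}

int main() {
    CustomFrameSource source;
    DeviceFrame frame;  // in practice, filled from the device SDK each frame
    perFrameUpdate(source, frame);
}
```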

We provide detailed development guides and sample code to assist you in this process. If you encounter any issues during integration, feel free to seek technical support in our developer community.

Available core features

On headset devices, you can make full use of EasyAR's feature set to build rich spatial applications:

  • Planar image tracking: Recognize and track preset images, overlaying dynamic videos or 3D models on them (see the sketch after this list).
  • 3D object tracking: Recognize and track preset 3D models (such as toys or product packaging), and enable virtual content to interact with them.
  • Sparse spatial map: Scan the surrounding environment to generate a 3D visual map that provides visual positioning and tracking. The generated map can be saved or shared in real time across multiple devices.
  • Dense spatial map: Scan the surrounding environment to generate dense point clouds and mesh models, enabling physical occlusion between virtual and real objects and greatly enhancing immersion.
  • Cloud image recognition: Connect to the EasyAR cloud database to recognize and manage vast numbers of images, suitable for scenarios such as exhibitions and education.
  • Mega large-scale positioning: A city-level spatial computing solution that connects to the EasyAR cloud positioning service for stable, fast, and precise positioning and tracking, greatly expanding the scope of AR experiences.
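
For planar image tracking, the application-side logic usually amounts to reacting to found/lost events with a pose attached. A hedged sketch, assuming hypothetical names throughout (TrackingEvent, onTrackingUpdate, showAt, hide); these are not the actual EasyAR API.

```cpp
// Hedged sketch of a typical image tracking flow on a headset: when a
// target is found, show the attached content at the reported pose; hide
// it when tracking is lost. All names here are illustrative assumptions.
#include <cstdio>
#include <string>

struct TrackingEvent {
    std::string targetName;  // e.g. the registered image's name
    bool found = false;
    float pose[16] = {};     // camera-space pose when found
};

struct ContentNode {
    void showAt(const float (&m)[16]) { std::printf("show content\n"); (void)m; }
    void hide() { std::printf("hide content\n"); }
};

void onTrackingUpdate(const TrackingEvent& e, ContentNode& video) {
    if (e.found) {
        video.showAt(e.pose);  // overlay video/model on the tracked image
    } else {
        video.hide();          // target left the camera's view
    }
}

int main() {
    ContentNode video;
    onTrackingUpdate({"poster", true, {}}, video);
    onTrackingUpdate({"poster", false, {}}, video);
}
```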

Platform-specific guides

To help you get started quickly on specific platforms, we have prepared detailed multi-platform integration guides. Click the tabs below to view the quick start tutorials for the corresponding platform.