Create image and device motion data input extension

By creating an image and device motion data input extension, you can extend EasyAR Sense with custom camera implementations that support specific head-mounted devices or other input devices. The following sections describe the steps and considerations for creating an image and device motion data input extension.

Before you begin

Create an external frame data source class

External frame data source classes are all subclasses of MonoBehaviour, so the filename must be the same as the class name.

For example, to create a 6DoF device input extension:

public class MyFrameSource : ExternalDeviceMotionFrameSource
{
}

When creating a head-mounted display extension, you can start from the com.easyar.sense.ext.hmdtemplate template and modify it as needed. This template is included in the Unity plugin package downloaded from the EasyAR website.

Device definition

Override IsHMD to define whether the device is a head-mounted display.

For example, set it to true on a head-mounted display.

public override bool IsHMD { get => true; }

Override Display to define the display of the device.

For example, on a head-mounted display you can use the default display Display.DefaultHMDDisplay, which defines the display rotation as 0.

protected override IDisplay Display => easyar.Display.DefaultHMDDisplay;

Availability

Override IsAvailable to define whether the device is available.

For example, the implementation in RokidFrameSource is as follows:

protected override Optional<bool> IsAvailable => Application.platform == RuntimePlatform.Android;

If IsAvailable cannot be determined during session assembly, you can override the CheckAvailability() coroutine to block the assembly process until availability is confirmed.
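
For example, a minimal sketch of such a coroutine (this assumes the coroutine signature matches the base class, that a default Optional<bool> value is empty, and that a device SDK callback, not shown here, fills in available):

private Optional<bool> available; // empty until the device SDK reports its state

protected override Optional<bool> IsAvailable => available;

protected override IEnumerator CheckAvailability()
{
    // Session assembly is blocked until availability is known.
    yield return new WaitUntil(() => available.OnSome);
}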

Session origin

Override OriginType to declare the origin type used by the device SDK.

If OriginType is Custom, you also need to override Origin.

For example, the implementation in RokidFrameSource is as follows:

protected override DeviceOriginType OriginType =>
#if EASYAR_HAVE_ROKID_UXR
    hasUXRComponents ? DeviceOriginType.None :
#endif
    DeviceOriginType.XROrigin;

Virtual camera

If OriginType is Custom or None, you need to override Camera to provide a virtual camera.

For example, the implementation in RokidFrameSource is as follows:

protected override Camera Camera => hasUXRComponents ? (cameraCandidate ? cameraCandidate : Camera.main) : base.Camera;

Physical camera

Override DeviceCameras using the DeviceFrameSourceCamera type to provide the device's physical camera information. This data is used when inputting camera frame data, and it must be fully set up by the time CameraFrameStarted becomes true.

For example, the implementation in RokidFrameSource is as follows:

private DeviceFrameSourceCamera deviceCamera;
protected override List<FrameSourceCamera> DeviceCameras => new List<FrameSourceCamera> { deviceCamera };

// Inside camera initialization (abbreviated; the full InitializeCamera
// coroutine appears under "Session start and stop" below):
{
    var imageDimensions = new int[2];
    RokidExtensionAPI.RokidOpenXR_API_GetImageDimensions(imageDimensions);
    size = new Vector2Int(imageDimensions[0], imageDimensions[1]);
    deviceCamera = new DeviceFrameSourceCamera(CameraDeviceType.Back, 0, size, new Vector2(50, 50), new DeviceFrameSourceCamera.CameraExtrinsics(Pose.identity, true), AxisSystemType.Unity);
    started = true;
}

Override CameraFrameStarted to indicate whether camera frame input has started.

For example:

protected override bool CameraFrameStarted => started;

Session start and stop

Override OnSessionStart(ARSession) and perform AR-specific initialization. Be sure to call base.OnSessionStart first.

For example:

protected override void OnSessionStart(ARSession session)
{
    base.OnSessionStart(session);
    StartCoroutine(InitializeCamera());
}

This is the appropriate place to open device cameras (such as RGB or VST cameras), especially if these cameras are not designed to remain open continuously. It is also a suitable place to obtain calibration data that remains constant throughout the lifecycle. Sometimes it may be necessary to wait until the device is ready, or for data updates, before such data can be retrieved.

Additionally, this is a suitable place to start a data input loop. You can also write this loop in Update() or other methods, particularly when data needs to be acquired at a specific point in Unity's execution order (a sketch of this variant follows the example below). Do not input data before the session is ready.

If needed, you can also skip the startup process entirely and check for data on each update; this depends on your specific requirements.

For example, the implementation in RokidFrameSource is as follows:

private IEnumerator InitializeCamera()
{
    yield return new WaitUntil(() => (RokidTrackingStatus)RokidExtensionAPI.RokidOpenXR_API_GetHeadTrackingStatus() >= RokidTrackingStatus.Detecting && (RokidTrackingStatus)RokidExtensionAPI.RokidOpenXR_API_GetHeadTrackingStatus() < RokidTrackingStatus.Tracking_Paused);

    var focalLength = new float[2];
    RokidExtensionAPI.RokidOpenXR_API_GetFocalLength(focalLength);
    var principalPoint = new float[2];
    RokidExtensionAPI.RokidOpenXR_API_GetPrincipalPoint(principalPoint);
    var distortion = new float[5];
    RokidExtensionAPI.RokidOpenXR_API_GetDistortion(distortion);
    var imageDimensions = new int[2];
    RokidExtensionAPI.RokidOpenXR_API_GetImageDimensions(imageDimensions);

    size = new Vector2Int(imageDimensions[0], imageDimensions[1]);
    var cameraParamList = new List<float> { focalLength[0], focalLength[1], principalPoint[0], principalPoint[1] }.Concat(distortion.ToList().GetRange(1, 4)).ToList();
    cameraParameters = CameraParameters.tryCreateWithCustomIntrinsics(size.ToEasyARVector(), cameraParamList, CameraModelType.OpenCV_Fisheye, CameraDeviceType.Back, 0).Value;
    deviceCamera = new DeviceFrameSourceCamera(CameraDeviceType.Back, 0, size, new Vector2(50, 50), new DeviceFrameSourceCamera.CameraExtrinsics(Pose.identity, true), AxisSystemType.Unity);

    RokidExtensionAPI.RokidOpenXR_API_OpenCameraPreview(OnCameraDataUpdate);
    started = true;
}
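
As mentioned above, the input loop can also be written in Update(). The following is a minimal sketch of that variant; DeviceSDK.TryGetImageFrame is a hypothetical stand-in for your device SDK's per-frame query, not an EasyAR or Rokid API:

protected void Update()
{
    if (!started) { return; } // set at the end of initialization

    // Hypothetical device SDK query; replace with the real API.
    if (DeviceSDK.TryGetImageFrame(out var frame))
    {
        // Build an Image from the frame and forward it through
        // HandleCameraFrameData, as shown in the next section.
    }
}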

Override OnSessionStop() and release resources, making sure to call base.OnSessionStop.

For example, the implementation in RokidFrameSource is as follows:

protected override void OnSessionStop()
{
    base.OnSessionStop();
    RokidExtensionAPI.RokidOpenXR_API_CloseCameraPreview();
    started = false;
    StopAllCoroutines();
    cameraParameters?.Dispose();
    cameraParameters = null;
    deviceCamera?.Dispose();
    deviceCamera = null;
}

Input camera frame data

After obtaining a camera frame data update, call HandleCameraFrameData(DeviceFrameSourceCamera, double, Image, CameraParameters, Pose, MotionTrackingStatus) / HandleCameraFrameData(DeviceFrameSourceCamera, double, Image, CameraParameters, Quaternion) to input the camera frame data.

For example, the implementation in RokidFrameSource is as follows:

private static void OnCameraDataUpdate(IntPtr ptr, int dataSize, ushort width, ushort height, long timestamp)
{
    if (!instance) { return; }
    if (ptr == IntPtr.Zero || dataSize == 0 || timestamp == 0) { return; }
    if (timestamp == instance.curTimestamp) { return; }

    instance.curTimestamp = timestamp;

    RokidExtensionAPI.RokidOpenXR_API_GetHistoryCameraPhysicsPose(timestamp, positionCache, rotationCache);
    // Convert the device SDK pose into Unity's left-handed coordinate convention.
    var pose = new Pose
    {
        position = new Vector3(positionCache[0], positionCache[1], -positionCache[2]),
        rotation = new Quaternion(-rotationCache[0], -rotationCache[1], rotationCache[2], rotationCache[3]),
    };
    // NOTE: When writing your own device frame source, use the real tracking status at camera exposure time if possible.
    var trackingStatus = ((RokidTrackingStatus)RokidExtensionAPI.RokidOpenXR_API_GetHeadTrackingStatus()).ToEasyARStatus();

    var size = instance.size;
    var pixelSize = instance.size;
    var pixelFormat = PixelFormat.Gray;
    var yLen = pixelSize.x * pixelSize.y; // grayscale: one byte per pixel
    var bufferBlockSize = yLen;

    var bufferO = instance.TryAcquireBuffer(bufferBlockSize);
    if (bufferO.OnNone) { return; }

    var buffer = bufferO.Value;
    buffer.tryCopyFrom(ptr, 0, 0, bufferBlockSize);

    using (buffer)
    using (var image = Image.create(buffer, pixelFormat, size.x, size.y, pixelSize.x, pixelSize.y))
    {
        // timestamp is in nanoseconds; convert to seconds for HandleCameraFrameData.
        instance.HandleCameraFrameData(instance.deviceCamera, timestamp * 1e-9, image, instance.cameraParameters, pose, trackingStatus);
    }
}
Caution

Do not forget to call Dispose() on, or otherwise release, Image, Buffer, and other related data, for example through the using mechanism. Otherwise, severe memory leaks may occur and buffer pool acquisition may fail.
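
For a device that provides only rotational motion data, use the Quaternion overload instead. The following one-line sketch is based on the callback above; rotation is a hypothetical orientation obtained from the device SDK:

instance.HandleCameraFrameData(instance.deviceCamera, timestamp * 1e-9, image, instance.cameraParameters, rotation);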

Input render frame data

After the device data is ready, call HandleRenderFrameData(double, Pose, MotionTrackingStatus) / HandleRenderFrameData(double, Quaternion) for each rendered frame to input render frame data.

For example, the implementation in RokidFrameSource is as follows:

protected void LateUpdate()
{
    if (!started) { return; }
    if ((RokidTrackingStatus)RokidExtensionAPI.RokidOpenXR_API_GetHeadTrackingStatus() < RokidTrackingStatus.Detecting) { return; }
    if ((RokidTrackingStatus)RokidExtensionAPI.RokidOpenXR_API_GetHeadTrackingStatus() >= RokidTrackingStatus.Tracking_Paused) { return; }

    InputRenderFrameMotionData();
}

private void InputRenderFrameMotionData()
{
    var timestamp = RokidExtensionAPI.RokidOpenXR_API_GetCameraPhysicsPose(positionCache, rotationCache);
    var pose = new Pose
    {
        position = new Vector3(positionCache[0], positionCache[1], -positionCache[2]),
        rotation = new Quaternion(-rotationCache[0], -rotationCache[1], rotationCache[2], rotationCache[3]),
    };
    if (timestamp == 0) { return; }
    HandleRenderFrameData(timestamp * 1e-9, pose, ((RokidTrackingStatus)RokidExtensionAPI.RokidOpenXR_API_GetHeadTrackingStatus()).ToEasyARStatus());
}
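
Similarly, a device that provides only rotational motion data would use the Quaternion overload; rotation is again a hypothetical orientation obtained from the device SDK:

HandleRenderFrameData(timestamp * 1e-9, rotation);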

Next steps