Creating image and device motion data input extensions
By creating image and device motion data input extensions, developers can extend EasyAR Sense with custom camera implementations, supporting specific head-mounted displays or other input devices. The following content introduces the steps and considerations for creating such extensions.
Before you begin
- Understand basic concepts like cameras, input frames.
- Read External frame data source for detailed interface requirements when creating an external frame data source.
- Read External input frame data to learn about camera frame data and rendered frame data.
Creating external frame data source classes
- If you need to create a 6DoF device input extension, inherit from ExternalDeviceMotionFrameSource
- If you need to create a 3DoF device input extension, inherit from ExternalDeviceRotationFrameSource
Both are subclasses of MonoBehaviour, and the filename should match the class name.
For example, creating a 6DoF device input extension:
public class MyFrameSource : ExternalDeviceMotionFrameSource
{
}
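Similarly, a minimal 3DoF device input extension inherits from ExternalDeviceRotationFrameSource (the class name MyRotationFrameSource is illustrative):

```csharp
public class MyRotationFrameSource : ExternalDeviceRotationFrameSource
{
}
```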
When creating HMD extensions, you can use the com.easyar.sense.ext.hmdtemplate template and modify it based on this template. This template is included in the Unity plugin package downloaded from the EasyAR website.
Device definition
Override IsHMD to define whether the device is a head-mounted display.
For example, set it to true on HMD devices.
public override bool IsHMD { get => true; }
Override Display to define the display device.
For example, on HMD devices, use the default HMD display Display.DefaultHMDDisplay, which defines the display rotation as 0.
protected override IDisplay Display => easyar.Display.DefaultHMDDisplay;
Availability
Override IsAvailable to define whether the device is available.
For example, the implementation in RokidFrameSource is as follows:
protected override Optional<bool> IsAvailable => Application.platform == RuntimePlatform.Android;
If IsAvailable cannot be determined during session assembly, override the CheckAvailability() coroutine to block the assembly process until availability is confirmed.
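A hedged sketch of such an override is shown below. It assumes CheckAvailability() is a standard Unity coroutine and that sdkInitialized and deviceSupported are hypothetical flags filled in by your device SDK; check the ExternalDeviceMotionFrameSource base class for the exact signature and how the result is reported.

```csharp
private bool sdkInitialized;  // hypothetical: set when the device SDK finishes initializing
private bool deviceSupported; // hypothetical: set from a device SDK capability query
private Optional<bool> available;

// Sketch only: blocks session assembly until the device SDK has reported
// whether the device is supported, after which IsAvailable returns a definite value.
protected override System.Collections.IEnumerator CheckAvailability()
{
    yield return new UnityEngine.WaitUntil(() => sdkInitialized);
    available = deviceSupported;
}

protected override Optional<bool> IsAvailable => available;
```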
Session origin
Override OriginType to define the origin type defined by the device SDK.
If OriginType is Custom, you also need to override Origin.
For example, the implementation in RokidFrameSource is as follows:
protected override DeviceOriginType OriginType =>
#if EASYAR_HAVE_ROKID_UXR
hasUXRComponents ? DeviceOriginType.None :
#endif
DeviceOriginType.XROrigin;
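When OriginType is Custom, an Origin override might look like the following sketch. The exact type of the Origin property is defined by the base class, and customOrigin is a hypothetical field you would assign from the origin object your device SDK creates; adjust both to match the actual API.

```csharp
// Hypothetical sketch: return the session origin object defined by your device SDK.
[SerializeField] private GameObject customOrigin;
protected override GameObject Origin => customOrigin;
```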
Virtual camera
If OriginType is Custom or None, you need to override Camera to provide a virtual camera.
For example, the implementation in RokidFrameSource is as follows:
protected override Camera Camera => hasUXRComponents ? (cameraCandidate ? cameraCandidate : Camera.main) : base.Camera;
Physical camera
Override DeviceCameras using the DeviceFrameSourceCamera type to provide the device's physical camera information. This data is used when inputting camera frame data, and must be ready by the time CameraFrameStarted becomes true.
For example, the implementation in RokidFrameSource is as follows:
private DeviceFrameSourceCamera deviceCamera;
protected override List<FrameSourceCamera> DeviceCameras => new List<FrameSourceCamera> { deviceCamera };
The deviceCamera object is created during camera initialization:
// excerpt from camera initialization
var imageDimensions = new int[2];
RokidExtensionAPI.RokidOpenXR_API_GetImageDimensions(imageDimensions);
size = new Vector2Int(imageDimensions[0], imageDimensions[1]);
deviceCamera = new DeviceFrameSourceCamera(CameraDeviceType.Back, 0, size, new Vector2(50, 50), new DeviceFrameSourceCamera.CameraExtrinsics(Pose.identity, true), AxisSystemType.Unity);
started = true;
Override CameraFrameStarted to provide the identifier for when camera frame input begins.
For example:
protected override bool CameraFrameStarted => started;
Session start and stop
Override OnSessionStart(ARSession) to perform AR-related initialization. Be sure to call base.OnSessionStart first.
For example:
protected override void OnSessionStart(ARSession session)
{
base.OnSessionStart(session);
StartCoroutine(InitializeCamera());
}
This is a good place to open device cameras (such as RGB or VST cameras), especially if they are not designed to stay always on. It is also suitable for obtaining calibration data that remains constant throughout the lifecycle; sometimes you must wait for the device to become ready or for data to update before this information is available.
This is also an appropriate place to start data input loops. Alternatively, implement such loops in Update() or other methods, particularly when data acquisition needs to align with specific points in Unity's execution order. Do not feed data until the session is ready.
If needed, you can skip initialization and instead perform data checks on every update; this depends entirely on your specific requirements.
For example, the implementation in RokidFrameSource is as follows:
private IEnumerator InitializeCamera()
{
yield return new WaitUntil(() => (RokidTrackingStatus)RokidExtensionAPI.RokidOpenXR_API_GetHeadTrackingStatus() >= RokidTrackingStatus.Detecting && (RokidTrackingStatus)RokidExtensionAPI.RokidOpenXR_API_GetHeadTrackingStatus() < RokidTrackingStatus.Tracking_Paused);
var focalLength = new float[2];
RokidExtensionAPI.RokidOpenXR_API_GetFocalLength(focalLength);
var principalPoint = new float[2];
RokidExtensionAPI.RokidOpenXR_API_GetPrincipalPoint(principalPoint);
var distortion = new float[5];
RokidExtensionAPI.RokidOpenXR_API_GetDistortion(distortion);
var imageDimensions = new int[2];
RokidExtensionAPI.RokidOpenXR_API_GetImageDimensions(imageDimensions);
size = new Vector2Int(imageDimensions[0], imageDimensions[1]);
var cameraParamList = new List<float> { focalLength[0], focalLength[1], principalPoint[0], principalPoint[1] }.Concat(distortion.ToList().GetRange(1, 4)).ToList();
cameraParameters = CameraParameters.tryCreateWithCustomIntrinsics(size.ToEasyARVector(), cameraParamList, CameraModelType.OpenCV_Fisheye, CameraDeviceType.Back, 0).Value;
deviceCamera = new DeviceFrameSourceCamera(CameraDeviceType.Back, 0, size, new Vector2(50, 50), new DeviceFrameSourceCamera.CameraExtrinsics(Pose.identity, true), AxisSystemType.Unity);
RokidExtensionAPI.RokidOpenXR_API_OpenCameraPreview(OnCameraDataUpdate);
started = true;
}
Override OnSessionStop() to release resources. Be sure to call base.OnSessionStop.
For example, the implementation in RokidFrameSource is as follows:
protected override void OnSessionStop()
{
base.OnSessionStop();
RokidExtensionAPI.RokidOpenXR_API_CloseCameraPreview();
started = false;
StopAllCoroutines();
cameraParameters?.Dispose();
cameraParameters = null;
deviceCamera?.Dispose();
deviceCamera = null;
}
Input camera frame data
After receiving a camera frame data update, call HandleCameraFrameData(DeviceFrameSourceCamera, double, Image, CameraParameters, Pose, MotionTrackingStatus) / HandleCameraFrameData(DeviceFrameSourceCamera, double, Image, CameraParameters, Quaternion) to input the camera frame data.
For example, the implementation in RokidFrameSource is as follows:
private static void OnCameraDataUpdate(IntPtr ptr, int dataSize, ushort width, ushort height, long timestamp)
{
if (!instance) { return; }
if (ptr == IntPtr.Zero || dataSize == 0 || timestamp == 0) { return; }
if (timestamp == instance.curTimestamp) { return; }
instance.curTimestamp = timestamp;
RokidExtensionAPI.RokidOpenXR_API_GetHistoryCameraPhysicsPose(timestamp, positionCache, rotationCache);
var pose = new Pose
{
position = new Vector3(positionCache[0], positionCache[1], -positionCache[2]),
rotation = new Quaternion(-rotationCache[0], -rotationCache[1], rotationCache[2], rotationCache[3]),
};
// NOTE: When writing your own device frame source, use the real tracking status at camera exposure time if possible.
var trackingStatus = ((RokidTrackingStatus)RokidExtensionAPI.RokidOpenXR_API_GetHeadTrackingStatus()).ToEasyARStatus();
var size = instance.size;
var pixelSize = instance.size;
var pixelFormat = PixelFormat.Gray;
var yLen = pixelSize.x * pixelSize.y;
var bufferBlockSize = yLen;
var bufferO = instance.TryAcquireBuffer(bufferBlockSize);
if (bufferO.OnNone) { return; }
var buffer = bufferO.Value;
buffer.tryCopyFrom(ptr, 0, 0, bufferBlockSize);
using (buffer)
using (var image = Image.create(buffer, pixelFormat, size.x, size.y, pixelSize.x, pixelSize.y))
{
instance.HandleCameraFrameData(instance.deviceCamera, timestamp * 1e-9, image, instance.cameraParameters, pose, trackingStatus);
}
}
Caution
Do not forget to release Image, Buffer, and other related data after use, by calling Dispose() or through mechanisms such as using. Otherwise, severe memory leaks may occur, and buffer pool acquisition may fail.
Input render frame data
After device data is prepared, call HandleRenderFrameData(double, Pose, MotionTrackingStatus) / HandleRenderFrameData(double, Quaternion) each rendering frame to input render frame data.
For example, the implementation in RokidFrameSource is as follows:
protected void LateUpdate()
{
if (!started) { return; }
if ((RokidTrackingStatus)RokidExtensionAPI.RokidOpenXR_API_GetHeadTrackingStatus() < RokidTrackingStatus.Detecting) { return; }
if ((RokidTrackingStatus)RokidExtensionAPI.RokidOpenXR_API_GetHeadTrackingStatus() >= RokidTrackingStatus.Tracking_Paused) { return; }
InputRenderFrameMotionData();
}
private void InputRenderFrameMotionData()
{
var timestamp = RokidExtensionAPI.RokidOpenXR_API_GetCameraPhysicsPose(positionCache, rotationCache);
var pose = new Pose
{
position = new Vector3(positionCache[0], positionCache[1], -positionCache[2]),
rotation = new Quaternion(-rotationCache[0], -rotationCache[1], rotationCache[2], rotationCache[3]),
};
if (timestamp == 0) { return; }
HandleRenderFrameData(timestamp * 1e-9, pose, ((RokidTrackingStatus)RokidExtensionAPI.RokidOpenXR_API_GetHeadTrackingStatus()).ToEasyARStatus());
}
Next steps
- Create a headset extension pack