Get the running result of the session

While a session is running, it modifies the transforms of some objects in the scene as well as the camera image. Sometimes these automatic modifications do not meet the application's needs, and you may need to obtain the session's per-frame output and post-process the data yourself. This article describes how to obtain and use these result data.

Before you begin

Get InputFrame updates

You can use the InputFrameUpdate event to get updates of InputFrame. This event is triggered only when the InputFrame in the session's per-frame output data changes.

Note

InputFrameUpdate is only valid in sessions where EasyAR performs rendering. Generally, it is invalid when using AR Foundation or head-mounted displays, and you need to use the methods provided by these third-party libraries to get data updates.

Using InputFrame, you can obtain physical camera images, camera parameters, timestamps, the transformation of the physical camera relative to the world coordinate system, and tracking status, etc. However, since the camera transformation has already been applied by the session to virtual cameras and other objects, you usually don't need to obtain the camera transformation through InputFrame.

Get the physical camera image of the current frame

You can use the InputFrame.image() method to obtain the physical camera image data of type Image.

For example, the following code can obtain the physical camera image when the InputFrame is updated:

Session.InputFrameUpdate += (inputFrame) => {
    using (var image = inputFrame.image())
    {
        // use the image data here; it is disposed automatically
        // when the using block ends
    }
};
Caution

When using data of type Image or other class-type data obtained from it, you must ensure that Dispose() is correctly called (the using statement in the above code ensures this). Otherwise, memory leaks or even issues like frozen screen updates may occur.

If you need to retain InputFrame or Image for use in the next frame, you must increase the value of ARAssembly.ExtraBufferCapacity according to the amount of data retained. Otherwise, data retrieval may fail due to insufficient buffer capacity.

If you need to retain InputFrame, you must also call the Clone() method to create a reference copy, and then call Dispose() on the copy when it is no longer needed.
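The retention workflow described above can be sketched as follows. This is a minimal example, assuming Session is the running ARSession and that ExtraBufferCapacity has already been increased by one to account for the retained frame; it only shows the Clone/Dispose pairing described above.

```csharp
// Sketch: keep the latest InputFrame across frames.
InputFrame retainedFrame;

Session.InputFrameUpdate += (inputFrame) =>
{
    // Release the previously retained frame before replacing it.
    if (retainedFrame != null) { retainedFrame.Dispose(); }
    // Clone() creates a reference copy that stays valid after this callback.
    retainedFrame = inputFrame.Clone();
};

// When the retained frame is no longer needed (e.g. in OnDestroy):
// retainedFrame.Dispose();
```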

Since the frame rate of the physical camera is usually lower than the rendering frame rate, the InputFrameUpdate event is not received on every rendered frame. The rendered camera background is likewise not updated on every rendered frame: until the next InputFrameUpdate event, the image content of every rendered frame stays consistent with the current InputFrame's image.

Note

The image in InputFrame will always match the current frame's virtual camera background image. However, the background image may be scaled or cropped during rendering, so it is normal for the obtained image size or aspect ratio to differ from what is displayed on the screen.

Additionally, note that the image data returned by InputFrame.image() is CPU-readable and not a GPU texture. If you need to use the image data on the GPU, you must upload the image data to a GPU texture or directly obtain a GPU texture through the CameraImageRenderer.RequestTargetTexture(Action<Camera, RenderTexture>) interface.
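For example, a GPU texture can be requested as sketched below. This assumes the session assembly contains a CameraImageRenderer, and targetMaterial is a material of your own; the callback parameter names are illustrative.

```csharp
if (Session.Assembly != null && Session.Assembly.CameraImageRenderer.OnSome)
{
    Session.Assembly.CameraImageRenderer.Value.RequestTargetTexture((camera, texture) =>
    {
        // texture holds the current physical camera image on the GPU and is
        // kept up to date by the renderer; assign it wherever needed.
        targetMaterial.mainTexture = texture;
    });
}
```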

[Optional] Intercept physical camera image rendering

You can use ARAssembly.CameraImageRenderer to control the rendering of physical camera images.

The following code can stop the rendering of physical camera images:

if (Session.Assembly != null && Session.Assembly.CameraImageRenderer.OnSome)
{
    Session.Assembly.CameraImageRenderer.Value.enabled = false;
}

Note that you need to check whether ARAssembly.CameraImageRenderer exists first.

Note

The above method to stop image updates only works in sessions where EasyAR handles the rendering. Generally, it is ineffective when using AR Foundation or head-mounted displays, and you need to use methods provided by these third-party libraries to achieve the corresponding functionality.

After stopping the physical camera image rendering, the application can obtain physical camera image data through InputFrame and use this data for custom rendering.
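A minimal sketch of uploading the CPU image to a Unity texture for such custom rendering is shown below. It assumes the camera image arrives in RGB888 format; the accessors on Image and Buffer follow the EasyAR API, and the raw-data upload path must match the actual pixel format in your own code.

```csharp
Texture2D texture;

Session.InputFrameUpdate += (inputFrame) =>
{
    using (var image = inputFrame.image())
    using (var buffer = image.buffer())
    {
        // (Re)create the texture when the camera image size changes.
        if (texture == null || texture.width != image.width() || texture.height != image.height())
        {
            texture = new Texture2D(image.width(), image.height(), TextureFormat.RGB24, false);
        }
        // Assumes the image is RGB888; other formats need conversion first.
        texture.LoadRawTextureData(buffer.data(), buffer.size());
        texture.Apply();
    }
};
```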

Get transform updates

You can obtain the transform data of objects in the scene after each frame update of the session through the PostSessionUpdate event.

Note

For some features (such as Mega), even if the image does not change and there is no explicit request for service updates, AR calculations are still running every rendering frame. Therefore, if you need to obtain all transform changes, you must retrieve transform data every frame and cannot only obtain it in certain frames.

Get the transform of the virtual camera

You can get the transform of the camera in the scene through ARAssembly.Camera.

Session.PostSessionUpdate += () =>
{
    var position = Session.Assembly.Camera.transform.position;
    var rotation = Session.Assembly.Camera.transform.rotation;
};

Get the transform of the target

You can obtain the transform of the target in the scene by accessing the specific target object in use. For example, in the case of image tracking, this target is the GameObject where the ImageTargetController component is located.

Session.PostSessionUpdate += () =>
{
    var position = target.transform.position;
    var rotation = target.transform.rotation;
};

[Optional] Get pose

A pose is a data structure that describes the position and orientation of an object, usually consisting of two parts: position and rotation. In AR applications, a pose is typically used to describe the position and orientation of a physical camera or tracking target relative to a reference frame.

Raw pose data is not provided directly in Unity, because a pose is generally used to drive the movement of objects in the scene, which the session already handles automatically. For content calculation and rendering, the transform is sufficient.

Important

Before reading the methods below, please reconsider whether the transform data of objects like the camera and tracking targets in the scene already meets your needs. Typically, additional pose data is not necessary.

If you indeed need pose data for some reason, you can calculate the required pose value from the transform in the PostSessionUpdate event. Generally, the relative transform between the target and the camera obtained in PostSessionUpdate is the pose.

The following code demonstrates how to get the transform of the camera and target and calculate the relative pose between them:

Session.PostSessionUpdate += () =>
{
    // Camera and target poses expressed in world space.
    Pose cameraToWorld = new(Session.Assembly.Camera.transform.position, Session.Assembly.Camera.transform.rotation);
    Pose targetToWorld = new(target.transform.position, target.transform.rotation);
    // Invert targetToWorld to get the world-to-target pose.
    Pose worldToTarget = new()
    {
        position = Quaternion.Inverse(targetToWorld.rotation) * (-targetToWorld.position),
        rotation = Quaternion.Inverse(targetToWorld.rotation)
    };
    // The camera pose expressed in the target's coordinate system.
    Pose cameraToTarget = cameraToWorld.GetTransformedBy(worldToTarget);
};
Caution

If you are also running AR Foundation, a headset plugin, or other third-party libraries, they may modify the transform of the camera in the scene as well. Make sure the update logic of these libraries has completed before performing the pose calculations; otherwise the results may be incorrect. Even in such scenarios, the relative pose between the target and the origin obtained in PostSessionUpdate remains accurate.

[Optional] Intercept transform updates

During AR operation, the transforms of objects such as cameras and tracking targets in Unity are typically updated automatically by the session. These updates ensure the correctness and consistency of AR rendering, so there is no straightforward way to intercept them.

However, if you need to customize the transform update logic for objects, you can achieve this by listening to the PostSessionUpdate event. This requires a somewhat cumbersome approach:

  1. Normally, rendering content should be attached as child nodes or additional components under the objects controlled by the session. But if you need to customize the transform updates for objects, you must remove these objects from the hierarchy controlled by the session. In other words, these objects should not be child nodes of session-controlled objects.
  2. In the PostSessionUpdate event, record the transforms of the objects you want to update with custom logic.
  3. Finally, in the PostSessionUpdate event, update the transforms of these objects using your custom logic based on the data provided by the session.
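The steps above can be sketched as follows. Here trackedTarget is the session-controlled target object and displayObject is your own object that has been removed from the session-controlled hierarchy; both names, and the smoothing logic, are purely illustrative.

```csharp
Session.PostSessionUpdate += () =>
{
    // Record the transform the session just wrote to its own object.
    var position = trackedTarget.transform.position;
    var rotation = trackedTarget.transform.rotation;

    // Apply custom update logic to your own object, e.g. simple smoothing.
    displayObject.transform.position = Vector3.Lerp(displayObject.transform.position, position, 0.5f);
    displayObject.transform.rotation = Quaternion.Slerp(displayObject.transform.rotation, rotation, 0.5f);
};
```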
Note

Using the PostSessionUpdate event is essential because only after this point will the session stop manipulating the objects in the scene.

Note that this method cannot be used to modify the camera, as more complex logic is required to handle custom updates for the camera.

Additionally, this method can only be used to customize the transform updates of your own objects; it cannot be used to modify the transforms of session-controlled objects. If the transforms of session-controlled objects are modified externally, the session will still overwrite those modifications in the next frame update, and the external changes may affect the correctness of some calculations.

Caution

Using this method requires you to ensure the correctness of the object transforms; otherwise, it may lead to AR rendering errors.

If you are also using AR Foundation, headsets, or other third-party libraries, these libraries may also modify the transforms of objects in the scene. You must ensure that the update logic of these libraries does not conflict with your custom logic, as this could lead to unexpected results.