Namespace easyar
Classes
- Accelerometer
Accelerometer calls the system-provided accelerometer and outputs AccelerometerResult. When the device is no longer needed, you can call close to close it. After close, it should not be used anymore. It is not recommended to open it multiple times simultaneously, as it may become unusable or accuracy may decrease.
- AccelerometerResult
Accelerometer readings. The positive x-axis points from the device center to the right side of the screen. The positive y-axis points from the device center to the top of the screen. The positive z-axis points perpendicularly outward from the device center through the screen. Units for x, y, z are m/s². The unit of timestamp is seconds.
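As a minimal illustration of these units (a hypothetical helper, not part of the EasyAR API): because readings are in m/s², a device at rest measures roughly one standard gravity, which gives a quick at-rest check.

```java
// Hypothetical helper, not EasyAR API: interpret an AccelerometerResult-style
// reading (x, y, z in m/s^2) by comparing its magnitude against gravity.
public class AccelerometerCheck {
    static final double GRAVITY = 9.81; // m/s^2

    // Euclidean magnitude of the (x, y, z) reading.
    public static double magnitude(double x, double y, double z) {
        return Math.sqrt(x * x + y * y + z * z);
    }

    // At rest the sensor reports roughly 1 g; allow a tolerance for noise.
    public static boolean isRoughlyAtRest(double x, double y, double z, double tolerance) {
        return Math.abs(magnitude(x, y, z) - GRAVITY) < tolerance;
    }

    public static void main(String[] args) {
        // A phone lying flat on a table: gravity along the z-axis.
        System.out.println(isRoughlyAtRest(0.0, 0.0, 9.8, 0.5)); // prints "true"
    }
}
```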
- AccelerometerResultSink
Accelerometer result input port. Used to expose an input port of a component. All members of this class are thread-safe.
- AccelerometerResultSource
Accelerometer result output port. Used to expose the output port of a component. All members of this class are thread-safe.
- ARCoreCameraDevice
ARCoreCameraDevice implements an ARCore-based camera device that outputs InputFrame (containing images, camera parameters, timestamps, 6DOF position information, and tracking status). When using it, first use java.lang.System.loadLibrary to load libarcore_sdk_c.so. After creation, you can call start/stop to begin and stop collecting video stream data. When the device is no longer needed, call close to shut it down. After close, it should not be used further. ARCoreCameraDevice outputs InputFrame through inputFrameSource, and inputFrameSource should be connected to InputFrameSink for use. bufferCapacity represents the buffer capacity of InputFrame. If more than this number of InputFrame are output from this device and not released, the device will stop outputting new InputFrame until the previous ones are released. This may cause issues such as screen freezes. Note: The current implementation of ARCore (v1.13.0) has memory leaks when creating and destroying sessions. Repeated creation and destruction can lead to continuously growing memory usage that is not released even after destruction.
- ARCoreDeviceListDownloader
ARCoreDeviceListDownloader is used for downloading and updating the calibration parameters of the device list used in ARCoreCameraDevice.
- ARKitCameraDevice
ARKitCameraDevice implements an ARKit-based camera device that outputs InputFrame (containing images, camera parameters, timestamps, 6DOF position information, and tracking status). After creation, you can call start/stop to start and stop collecting video stream data. When the device is no longer needed, you can call close to shut it down. Do not continue to use it after close. ARKitCameraDevice outputs InputFrame through inputFrameSource. inputFrameSource should be connected to InputFrameSink for use. bufferCapacity represents the capacity of the InputFrame buffer. If more than this number of InputFrame are output from this device and not released, the device will no longer output new InputFrame until the previous ones are released. This may cause issues such as screen freezing.
- AttitudeSensor
AttitudeSensor calls the system-provided attitude sensor and outputs AttitudeSensorResult. When the device is no longer needed, you can call close to close it. After close, it should not be used further. It is not recommended to open it multiple times simultaneously, as it may become unusable or accuracy may decrease.
- AttitudeSensorResult
Attitude sensor reading. The positive direction of the x-axis points from the device center to the right side of the screen. The positive direction of the y-axis points from the device center to the upper part of the screen. The positive direction of the z-axis points perpendicularly outward from the device center through the screen. A rotation of the device by an angle θ around an axis (x, y, z) is represented by the unit quaternion (v0, v1, v2, v3) = (cos(θ/2), x*sin(θ/2), y*sin(θ/2), z*sin(θ/2)). The reference coordinate system is defined as a set of oriented orthonormal bases in which Z points towards the sky, the X and Y axes are parallel to the ground, and (X, Y, Z) form a right-handed system. The unit of timestamp is seconds.
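The quaternion convention above can be sketched directly (illustrative code, not the EasyAR API): given a unit rotation axis (x, y, z) and an angle θ in radians, the four components follow from the half-angle formula.

```java
// Illustrative sketch, not EasyAR API: build the unit quaternion
// (v0, v1, v2, v3) = (cos(t/2), x*sin(t/2), y*sin(t/2), z*sin(t/2))
// from a unit axis (x, y, z) and an angle theta in radians.
public class AxisAngleQuaternion {
    public static double[] toQuaternion(double x, double y, double z, double theta) {
        double h = theta / 2.0;
        double s = Math.sin(h);
        return new double[] { Math.cos(h), x * s, y * s, z * s };
    }

    public static void main(String[] args) {
        // 180-degree rotation around the z-axis: approximately (0, 0, 0, 1).
        double[] q = toQuaternion(0, 0, 1, Math.PI);
        System.out.printf("(%.3f, %.3f, %.3f, %.3f)%n", q[0], q[1], q[2], q[3]);
    }
}
```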
- AttitudeSensorResultSink
Attitude sensor result input port. Used to expose an input port of a component. All members of this class are thread-safe.
- AttitudeSensorResultSource
Output port for attitude sensor results. Used to expose the output port of a component. All members of this class are thread-safe.
- BlockInfo
The model obtained from dense reconstruction is represented using a triangular mesh, referred to as a mesh. Since the mesh undergoes frequent updates, to ensure efficiency, the entire reconstructed model's mesh is divided into numerous mesh blocks. A mesh block consists of a cube with a side length of approximately 1 meter, containing elements such as vertices and indices. BlockInfo is used to describe the content of a mesh block. Here, (x,y,z) is the index of the mesh block. Multiplying (x,y,z) by the physical size of each mesh block gives the origin coordinates of this mesh block in the world coordinate system. By using the position of the mesh block in the world, the parts that need to be displayed can be filtered in advance to save rendering time.
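The index-to-origin mapping and the pre-filtering idea can be sketched as follows (hypothetical helper names, not the EasyAR API); the block size of about 1 meter is taken from the description above.

```java
// Illustrative sketch, not EasyAR API: derive a mesh block's world-space
// origin from its (x, y, z) index, and coarsely pre-filter blocks by
// distance from the camera to save rendering time.
public class MeshBlockOrigin {
    // World-space origin of block (x, y, z): index times physical block size.
    public static float[] origin(int x, int y, int z, float blockSize) {
        return new float[] { x * blockSize, y * blockSize, z * blockSize };
    }

    // Coarse pre-filter: keep a block only if its origin lies within
    // maxDistance of the camera position (cx, cy, cz).
    public static boolean shouldRender(float[] origin, float cx, float cy, float cz,
                                       float maxDistance) {
        float dx = origin[0] - cx, dy = origin[1] - cy, dz = origin[2] - cz;
        return dx * dx + dy * dy + dz * dz <= maxDistance * maxDistance;
    }
}
```

A real renderer would test the whole cube (origin plus extent) against a view frustum; the distance check above is the simplest useful variant.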
- BlockPriorResult
Prior map information. blockIds are map IDs. mode indicates how the prior information is used by the system, which can be soft or hard. Refer to BlockPriorMode.
- Buffer
Buffer stores the raw byte array and can be used to access image data. In the Java API, you can get the buffer from Image and then copy data to a Java byte array. In all versions of EasyAR Sense, you can access image data. Refer to Image.
- BufferDictionary
A mapping from file paths to Buffer. Used to represent multiple files stored in memory.
- BufferPool
BufferPool implements a memory pool that can be used for custom camera access and other functions requiring repeated allocation of same-sized memory, reducing memory allocation time.
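The mechanism of such a pool can be sketched in a few lines (an illustrative toy, not the EasyAR BufferPool API): fixed-size buffers are handed out up to a capacity and reused when returned, so the steady state performs no allocation.

```java
import java.util.ArrayDeque;

// Illustrative sketch, not EasyAR API: a minimal pool that hands out
// fixed-size byte buffers and reuses returned ones, avoiding repeated
// allocation of same-sized memory.
public class SimpleBufferPool {
    private final int blockSize;
    private final int capacity;
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();
    private int allocated = 0;

    public SimpleBufferPool(int blockSize, int capacity) {
        this.blockSize = blockSize;
        this.capacity = capacity;
    }

    // Returns a pooled buffer, or null when the pool is exhausted.
    public byte[] tryAcquire() {
        if (!free.isEmpty()) return free.pop();
        if (allocated < capacity) { allocated++; return new byte[blockSize]; }
        return null;
    }

    // Hands a buffer back for reuse.
    public void release(byte[] buffer) {
        free.push(buffer);
    }
}
```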
- CalibrationDownloader
CalibrationDownloader is used for downloading updates to the calibration parameters used in MotionTracker. After downloading, MotionTracker needs to be re-created for it to take effect.
- CallbackScheduler
Callback scheduler. There are two subclasses DelayedCallbackScheduler and ImmediateCallbackScheduler. Among them, DelayedCallbackScheduler is used to delay callbacks until they are manually invoked, and can be used in single-threaded environments (such as various UI environments). ImmediateCallbackScheduler is used to execute callbacks immediately, and can be used in multi-threaded environments (such as servers or background services).
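The difference between the two strategies can be sketched like this (illustrative toy classes, not the EasyAR API): a delayed scheduler queues callbacks until the owning thread drains them, while an immediate scheduler runs them inline on the calling thread.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch, not EasyAR API: two callback scheduling strategies.
public class Schedulers {
    public interface Scheduler { void schedule(Runnable r); }

    // Queues callbacks; the owner (e.g. a UI thread) calls runOne() to drain.
    public static class Delayed implements Scheduler {
        private final Queue<Runnable> queue = new ArrayDeque<>();
        public synchronized void schedule(Runnable r) { queue.add(r); }
        public synchronized boolean runOne() {
            Runnable r = queue.poll();
            if (r == null) return false;
            r.run();
            return true;
        }
    }

    // Runs each callback immediately on whichever thread scheduled it.
    public static class Immediate implements Scheduler {
        public void schedule(Runnable r) { r.run(); }
    }
}
```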
- CameraDevice
CameraDevice implements a camera device that outputs InputFrame (containing images, camera parameters, and timestamps). Available on Windows, Mac, Android, and iOS. After opening, you can call start/stop to begin and stop data acquisition. start/stop does not affect previously set camera parameters. When the device is no longer needed, you can call close to shut it down. Do not continue using it after close. CameraDevice outputs InputFrame through inputFrameSource; inputFrameSource should be connected to InputFrameSink for use. bufferCapacity represents the capacity of the InputFrame buffer. If more than this number of InputFrames are output from the device and not released, the device will stop outputting new InputFrames until previous ones are released. This may cause issues like frame freezing. On Android, you need to add the android.permission.CAMERA permission declaration in AndroidManifest.xml. On iOS, you need to add the NSCameraUsageDescription permission declaration in Info.plist.
- CameraDeviceSelector
Used to select the Camera API on Android (camera1 or camera2). camera1 has better compatibility but lacks some necessary information, such as timestamps. camera2 has compatibility issues on some devices. Different options will choose camera1 or camera2 based on the purpose.
- CameraParameters
Camera parameters, including image size, focal length, principal point, camera type, and the rotation angle of the camera relative to the device's natural orientation.
- CloudLocalizer
CloudLocalizer implements the cloud localization feature.
- CloudLocalizerBlockInstance
The instance of the block located by CloudLocalizer.
- CloudRecognizer
CloudRecognizer implements cloud recognition functionality. Cloud recognition requires creating a cloud recognition gallery in the cloud to use; please refer to the EasyAR CRS documentation. When the component is no longer needed, you can call close to shut it down. It should not be used after close. Before using CloudRecognizer, you need to set up and prepare an ImageTracker. Any returned target should be manually loaded into the ImageTracker using loadTarget before being tracked. After loading, the recognition and tracking of the target are the same as for local targets. After a target is recognized, you can obtain it from the callback, and then you should use the target uid to distinguish different targets. The target runtimeID is dynamically generated and is not suitable as a unique identifier for targets in cloud recognition scenarios.
- DelayedCallbackScheduler
Delayed callback scheduler. Used to postpone callbacks until they are manually invoked, can be used in single-threaded environments (such as various UI environments). All members of this class are thread-safe.
- DenseSpatialMap
DenseSpatialMap is used for precise 3D dense reconstruction of the environment. The reconstructed model is represented by triangular meshes, called mesh. DenseSpatialMap occupies one camera buffer.
- EventDumpRecorder
Event Dump Recorder. Used to save some key diagnostic information to EED files. All members of this class are thread-safe.
- FeedbackFrame
Feedback frame. Contains an input frame and a historical output frame, used for feedback-style synchronous processing components such as ImageTracker.
- FeedbackFrameFork
Feedback Frame Splitter. Used to transmit a feedback frame in parallel to multiple components. All members of this class are thread-safe.
- FeedbackFrameSink
Feedback frame input port. Used to expose an input port of a component. All members of this class are thread-safe.
- FeedbackFrameSource
Feedback frame output port. Used to expose the output port of a component. All members of this class are thread-safe.
- FrameFilterResult
FrameFilterResult is the base class for the results of all synchronous algorithm components.
- Gyroscope
Gyroscope invokes the system-provided gyroscope and outputs GyroscopeResult. When the device is no longer needed, call close to shut it down. Do not continue using it after close. It is not recommended to open it multiple times simultaneously, as it may become unusable or lose accuracy.
- GyroscopeResult
Gyroscope readings. The positive direction of the x-axis points from the center of the device to the right side of the screen. The positive direction of the y-axis points from the center of the device to the top of the screen. The positive direction of the z-axis points perpendicularly outward from the screen, from the center of the device. x, y, z represent the angular velocity of rotation around the corresponding axis, in radians per second. The counterclockwise direction, when looking at the device from a point along the positive axis direction, is the positive direction of rotation. The timestamp is in seconds.
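As a small worked example of these units (illustrative code, not the EasyAR API): integrating successive angular-velocity samples (rad/s) over their timestamps (seconds) estimates the accumulated rotation angle around one axis.

```java
// Illustrative sketch, not EasyAR API: integrate gyroscope readings around
// one axis using the trapezoidal rule over their timestamps.
public class GyroIntegration {
    // omega: angular velocities (rad/s) around one axis; t: matching timestamps (s).
    public static double integrate(double[] omega, double[] t) {
        double angle = 0.0;
        for (int i = 1; i < omega.length; i++) {
            double dt = t[i] - t[i - 1];
            // Trapezoidal rule between consecutive samples.
            angle += 0.5 * (omega[i] + omega[i - 1]) * dt;
        }
        return angle;
    }

    public static void main(String[] args) {
        // Constant 1 rad/s for 2 seconds accumulates 2 radians.
        System.out.println(integrate(new double[] {1, 1, 1}, new double[] {0, 1, 2}));
    }
}
```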
- GyroscopeResultSink
Gyroscope result input port. Used to expose a component's input port. All members of this class are thread-safe.
- GyroscopeResultSource
Gyroscope result output port. Used to expose an output port of a component. All members of this class are thread-safe.
- Image
Image stores image data, used to represent an image in memory. Image provides access to raw data in the form of a byte array, and also provides an interface to access information such as width and height. In all versions of EasyAR Sense, you can access image data. In iOS, you can access it like this:
```objectivec
#import <easyar/buffer.oc.h>
#import <easyar/image.oc.h>

easyar_OutputFrame * outputFrame = [outputFrameBuffer peek];
if (outputFrame != nil) {
    easyar_Image * i = [[outputFrame inputFrame] image];
    easyar_Buffer * b = [i buffer];
    char * bytes = calloc([b size], 1);
    memcpy(bytes, [b data], [b size]);
    // use bytes here
    free(bytes);
}
```
In Android:
```java
import cn.easyar.*;

OutputFrame outputFrame = outputFrameBuffer.peek();
if (outputFrame != null) {
    InputFrame inputFrame = outputFrame.inputFrame();
    Image i = inputFrame.image();
    Buffer b = i.buffer();
    byte[] bytes = new byte[b.size()];
    b.copyToByteArray(0, bytes, 0, bytes.length);
    // use bytes here
    b.dispose();
    i.dispose();
    inputFrame.dispose();
    outputFrame.dispose();
}
```
- ImageHelper
Image Helper Class.
- ImageTarget
ImageTarget represents a target for planar images, which can be tracked by ImageTracker. The values inside ImageTarget need to be filled through methods like create... before they can be read. Then, after successfully loading via loadTarget, it can be detected and tracked by ImageTracker.
- ImageTargetParameters
ImageTargetParameters represent the parameters required to create ImageTarget.
- ImageTracker
ImageTracker implements detection and tracking of planar cards. ImageTracker occupies (1 + SimultaneousNum) camera buffers. You should use camera.setBufferCapacity to set a value not less than the total number of camera buffers occupied by all components. After creation, you can call start/stop to begin and stop operation; start/stop are very lightweight calls. When the component is no longer needed, you can call close to shut it down. It should not be used after close. ImageTracker inputs FeedbackFrame through feedbackFrameSink; you should connect FeedbackFrameSource to feedbackFrameSink for use. Before a Target can be tracked by ImageTracker, you need to load it via loadTarget/unloadTarget. You can get the load/unload results by passing in a callback interface.
- ImageTrackerConfig
ImageTracker create configuration.
- ImageTrackerResult
The result of ImageTracker.
- ImmediateCallbackScheduler
Immediate callback scheduler. Used to execute callbacks immediately, can be used in a multithreading environment (such as servers or background services). All members of this class are thread-safe.
- InertialCameraDevice
InertialCameraDevice implements a camera device based on motion inertia, outputting CameraTransformType as FiveDofRotXZ InputFrame (including images, camera parameters, timestamp, pose transformation matrix, and tracking status). After creation, you can call start/stop to start and stop collecting video stream data. When the device is no longer needed, you can call close to shut it down. After close, it should not be used anymore. InertialCameraDevice outputs InputFrame through inputFrameSource, and inputFrameSource should be connected to InputFrameSink for use. bufferCapacity represents the buffer capacity of InputFrame. If more than this number of InputFrame are output from the device and not released, the device will stop outputting new InputFrame until the previous ones are released. This may cause issues like screen freezing.
- InputFrame
Input frame. Contains image, camera parameters, timestamp, camera transformation relative to the world coordinate system, and tracking status. Among these, camera parameters, timestamp, camera transformation relative to the world coordinate system, and tracking status are optional, but specific algorithm components may have specific requirements for input.
- InputFrameFork
Input Frame Splitter. Used to transmit an input frame in parallel to multiple components. All members of this class are thread-safe.
- InputFramePlayer
Input frame player. There is an input frame output port to extract input frames from EIF files. All members of this class are thread-safe.
- InputFrameRecorder
Input-frame recorder. Has an input-frame input port and an input-frame output port, used to save passing input frames to an EIF file. All members of this class are thread-safe.
- InputFrameSink
Input frame input port. Used to expose an input port of a component. All members of this class are thread-safe.
- InputFrameSource
Input frame output port. Used to expose a component's output port. All members of this class are thread-safe.
- InputFrameThrottler
InputFrameThrottler. Contains an input frame input port and an input frame output port, used to block new input frames from entering the algorithm component when it has not finished processing the previous frame's data. InputFrameThrottler occupies 1 camera buffer. Use camera.setBufferCapacity to set a value no less than the total number of camera buffers occupied by all components. All members of this class are thread-safe. Note: Connecting or disconnecting its signalInput should not occur while data is actively flowing, otherwise it may enter an irrecoverable state. (Recommend completing data flow connections before starting the Camera.)
- InputFrameToFeedbackFrameAdapter
Input frame to feedback frame adapter. There is an input frame input port, a historical output frame input port, and a feedback frame output port, used to combine input frames and historical output frames into feedback frames, and pass them to algorithm components that require feedback frames as input, for example ImageTracker. Each time an input frame is input, it combines with the historical output frame from the previous input to synthesize a feedback frame. If no historical output frame has been input, the historical output frame in the feedback frame is empty. InputFrameToFeedbackFrameAdapter occupies 1 camera buffer. The camera's setBufferCapacity should be set to no less than the number of camera buffers occupied by all components. All members of this class are thread-safe.
- InputFrameToOutputFrameAdapter
Input frame to output frame adapter. There is an input frame input port and an output frame output port, used for wrapping the input frame into the output frame, achieving the function of rendering directly without connecting to algorithm components. All members of this class are thread-safe.
- JniUtility
JNI utility class. Used to wrap Java arrays and ByteBuffer in Unity. Not supported on iOS platform.
- LocationResult
Position reading. latitude and longitude are in degrees. altitude is in meters. horizontalAccuracy is the horizontal accuracy in meters. verticalAccuracy is the vertical accuracy in meters.
- LocationResultSink
Position result input port. Used to expose an input port of a component. All members of this class are thread-safe.
- LocationResultSource
Position result output port. Used to expose a component's output port. All members of this class are thread-safe.
- Log
Logging class. Used for outputting logs or setting a custom log output function.
- Magnetometer
Magnetometer calls the system-provided magnetometer, outputs MagnetometerResult. When the device is no longer needed, you can call close to close it. After close, it should not be used anymore. It is not recommended to open it multiple times simultaneously, as it may become unusable or accuracy may decrease.
- MagnetometerResult
Magnetometer readings. The positive direction of the x-axis points from the center of the device to the right side of the screen. The positive direction of the y-axis points from the center of the device to the top of the screen. The positive direction of the z-axis points perpendicularly outward from the center of the device through the screen. Units for x, y, z are uT (microtesla). Timestamp unit is seconds.
- MagnetometerResultSink
Magnetometer result input port. Used to expose an input port of a component. All members of this class are thread-safe.
- MagnetometerResultSource
Magnetometer result output port. Used to expose an output port of a component. All members of this class are thread-safe.
- Matrix33F
A third-order square matrix. Data arrangement is row-major.
- Matrix44F
A fourth-order square matrix. Data arrangement is row-major.
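Row-major means element (row, col) of an n×n matrix stored in a flat array sits at index row * n + col. A small sketch (illustrative helper, not the EasyAR Matrix33F/Matrix44F API):

```java
// Illustrative sketch, not EasyAR API: index into a row-major flat array
// as used by 3x3 and 4x4 matrices.
public class RowMajor {
    // Element (row, col) of an n-by-n matrix stored row-major.
    public static float get(float[] data, int n, int row, int col) {
        return data[row * n + col];
    }

    public static void main(String[] args) {
        // A 4x4 identity stored row-major: ones on the diagonal.
        float[] identity = new float[16];
        for (int i = 0; i < 4; i++) identity[i * 4 + i] = 1.0f;
        System.out.println(get(identity, 4, 2, 2)); // a diagonal element
    }
}
```

Note that graphics APIs differ here: OpenGL conventionally expects column-major data, so a row-major matrix may need transposing before upload.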
- MegaLandmarkFilter
MegaLandmarkFilter implements the VPS cloud positioning filtering function.
- MegaTracker
Provides cloud positioning functionality. MegaTracker occupies one camera buffer.
- MegaTrackerBlockInstance
Block instance located by MegaTracker.
- MegaTrackerLocalizationResponse
Response to the MegaTracker positioning request.
- MegaTrackerResult
The output of MegaTracker, updated at the frequency of OutputFrame.
- MotionInputData
Motion input data. Contains timestamp, transformation relative to the world coordinate system, and tracking status.
- MotionTrackerCameraDevice
MotionTrackerCameraDevice implements a real-scale 6DOF motion tracking camera device that outputs InputFrame (containing image, camera parameters, timestamp, 6DOF position information, and tracking status). After creation, you can call start/stop to begin and stop the data stream. When the device is no longer needed, you can call close to shut it down. After close, it should not be used further. MotionTrackerCameraDevice outputs InputFrame through inputFrameSource, and inputFrameSource should be connected to InputFrameSink for use.
- ObjectTarget
ObjectTarget represents a 3D-object target, which can be tracked by ObjectTracker. The size of ObjectTarget is determined by the
objfile. You can modify the size by changing thescale. The defaultscaleis 1. After ObjectTarget is successfully loaded into ObjectTracker via loadTarget, it can be detected and tracked by ObjectTracker.
- ObjectTargetParameters
ObjectTargetParameters represents the parameters required to create an ObjectTarget.
- ObjectTracker
ObjectTracker implements the detection and tracking of 3D object targets. ObjectTracker occupies (1 + SimultaneousNum) camera buffers. You should use the camera's setBufferCapacity to set a buffer count that is not less than the total number of camera buffers occupied by all components. After creation, you can call start/stop to begin and halt operations; start/stop are very lightweight calls. When the component is no longer needed, you can call close to shut it down. It should not be used after close. ObjectTracker inputs FeedbackFrame through feedbackFrameSink. You should connect FeedbackFrameSource to feedbackFrameSink for use. Before a Target can be tracked by ObjectTracker, you need to load it via loadTarget/unloadTarget. You can obtain the load/unload results by passing in interface callbacks.
- ObjectTrackerResult
The result of ObjectTracker.
- OutputFrame
Output frame. Contains input frame and the output of the synchronous processing component.
- OutputFrameBuffer
Output frame buffer. There is an output frame input port and an output frame acquisition function, used to convert the acquisition method of output frames from asynchronous to synchronous polling, suitable for frame-by-frame rendering. OutputFrameBuffer occupies one camera buffer. Use the camera's setBufferCapacity to set no less than the number of camera buffers occupied by all components. All members of this class are thread-safe.
- OutputFrameFork
Output Frame Splitter. Used to transmit one output frame in parallel to multiple components. All members of this class are thread-safe.
- OutputFrameJoin
Output frame merger. Used to merge output frames from multiple components into a single output frame. All members of this class are thread-safe. Note that connecting and disconnecting multiple inputs should not be done while data is flowing in, otherwise it may enter a state where it cannot output. (It is recommended to complete data stream connections before starting the Camera.)
- OutputFrameSink
Output frame input port. Used to expose a component's input port. All members of this class are thread-safe.
- OutputFrameSource
Output frame output port. Used to expose a component's output port. All members of this class are thread-safe.
- ProximityLocationResult
Proximity location reading. x, y, z units are meters. The origin is the map tile origin, and y points upward. The accuracy unit is meters. timestamp and validTime units are seconds. is2d indicates whether the reading is two-dimensional, in which case y is not used.
- ProximityLocationResultSink
Proximity location result input port. Used to expose an input port of a component. All members of this class are thread-safe.
- ProximityLocationResultSource
Proximity location result output port. Used to expose a component's output port. All members of this class are thread-safe.
- Recorder
Recorder implements screen recording functionality for the current rendering environment. Currently, Recorder only works on Android (4.3 or newer) and iOS under OpenGL ES 3.0 environments. Due to the dependency on OpenGL ES, all functions of this class (except requestPermissions, and including destructors) must be called on a single thread that holds the OpenGL ES context. Unity only: if Multi-threaded rendering is enabled, the script thread is separated from the rendering thread, making it impossible to call updateFrame on the rendering thread; therefore, if you need screen recording, you should disable Multi-threaded rendering. On Android, you need to add the android.permission.RECORD_AUDIO permission declaration in AndroidManifest.xml. On iOS, you need to add the NSMicrophoneUsageDescription permission declaration in Info.plist.
- RecorderConfiguration
RecorderConfiguration is the startup configuration for Recorder.
- SignalSink
Signal input port. Used to expose an input port of a component. All members of this class are thread-safe.
- SignalSource
Signal output port. Used to expose an output port of a component. All members of this class are thread-safe.
- SparseSpatialMap
SparseSpatialMap provides core functionalities: map generation and storage, map loading and localization, while also enabling access to environmental information like point clouds and planes, and performing hitTest. SparseSpatialMap occupies 2 camera buffers. Use camera.setBufferCapacity to set a capacity not less than the total number of camera buffers used by all components.
- SparseSpatialMapConfig
Used to configure the localization strategy in sparse mapping.
- SparseSpatialMapManager
SparseSpatialMap management class for managing the sharing functionality of SparseSpatialMap.
- SparseSpatialMapResult
The output of the sparse mapping and localization system, updated at the frequency of OutputFrame.
- SurfaceTracker
SurfaceTracker implements tracking of environmental surfaces. SurfaceTracker occupies one camera buffer. The camera's setBufferCapacity should be set to no less than the total number of camera buffers occupied by all components. After creation, you can call start/stop to begin and halt operation; start/stop are very lightweight calls. When the component is no longer needed, you can call close to shut it down. It should not be used after close. SurfaceTracker inputs InputFrame through inputFrameSink; InputFrameSource should be connected to inputFrameSink for use.
- SurfaceTrackerResult
The result of SurfaceTracker.
- Target
Target is the base class for all targets that can be tracked by ImageTracker or other algorithms in EasyAR.
- TargetInstance
TargetInstance is the target tracked by the tracker. TargetInstance includes the original Target that was tracked and the current status and pose of this Target.
- TargetTrackerResult
TargetTrackerResult is the base class of ImageTrackerResult and ObjectTrackerResult.
- TextureId
TextureId encapsulates a texture object in the graphics API. OpenGL/OpenGL ES should use getInt and fromInt; Direct3D should use getPointer and fromPointer.
- ThreeDofCameraDevice
ThreeDofCameraDevice implements a three-dof camera device that outputs CameraTransformType as ThreeDofRotOnly InputFrame (containing images, camera parameters, timestamps, pose transformation matrices, and tracking status). After creation, you can call start/stop to begin and stop collecting video stream data. When the device is no longer needed, you can call close to shut it down. After closing, it should not be used further. ThreeDofCameraDevice outputs InputFrame through inputFrameSource, which should be connected to InputFrameSink for use. bufferCapacity represents the buffer capacity of InputFrame. If more than this number of InputFrame are output from the device and not released, the device will stop outputting new InputFrame until the previous ones are released. This may cause issues such as screen freezing.
- Vec2F
Two-dimensional float vector.
- Vec2I
Two-dimensional int vector.
- Vec3D
Three-dimensional double vector.
- Vec3F
Three-dimensional float vector.
- Vec4F
Four-dimensional float vector.
- Vec4I
Four-dimensional int vector.
- VideoInputFramePlayer
Input frame player. There is an input frame output port for extracting input frames from EIF MKV files. All members of this class are thread-safe.
- VideoInputFrameRecorder
Input frame recorder. There is an input frame input port and an input frame output port for saving the passed input frames to an EIF MKV file. All members of this class are thread-safe.
- VideoPlayer
VideoPlayer is the video playback class. EasyAR supports normal video, transparent video, and streaming media playback. Video content is rendered to the texture passed to setRenderTexture. This class only supports OpenGL ES 3.0 textures. Due to the dependency on OpenGL ES, all functions of this class (including destructors) must be called on a single thread that holds the OpenGL ES context. The current version requires both width and height to be multiples of 16. Supported video file formats:
Windows: Media Foundation-compatible formats; installing additional decoders can support more formats (refer to Supported Media Formats in Media Foundation). DirectShow is not supported.
Mac: Not supported.
Android: System-supported formats (refer to Supported media formats).
iOS: System-supported formats; currently no effective reference documentation is available.
- VisionOSARKitCameraDevice
VisionOSARKitCameraDevice implements a camera device based on VisionOS ARKit, outputting InputFrame (which includes images, camera parameters, timestamps, 6DOF position information, and tracking status). After creation, you can call start/stop to begin and stop collecting video stream data. When the device is no longer needed, you can call close to shut it down. After close, it should not be used anymore. VisionOSARKitCameraDevice outputs InputFrame through inputFrameSource, and inputFrameSource should be connected to InputFrameSink for use. bufferCapacity represents the buffer capacity of InputFrame. If more than this number of InputFrame are output from this device and not released, the device will no longer output new InputFrame until the previous ones are released. This may cause issues such as frame freezing.
- XREALCameraDevice
XREALCameraDevice implements a camera device based on the XREAL Enterprise Native SDK Plugin, outputting InputFrame (images are unavailable; it contains camera parameters, timestamps, 6DOF position information, and tracking status). After creation, you can call start/stop to start and stop collecting video stream data. Ensure that isDeviceSupported returns true before calling start. When the device is no longer needed, call close to shut it down. After close, it should not be used further. XREALCameraDevice outputs InputFrame through inputFrameSource, which should be connected to InputFrameSink for use. bufferCapacity represents the buffer capacity of InputFrame. If more than this number of InputFrame are output from the device and not released, the device will stop outputting new InputFrame until the previous ones are released. This may cause issues like screen freezing.
Enums
- BlockPriorMode
Prior map information working mode.
- CameraDeviceType
Camera device type.
- CameraModelType
Camera model types.
- CameraTransformType
Camera transformation type.
- CloudLocalizerStatus
Cloud localization status.
- MegaApiType
MEGA API type.
- MotionTrackingStatus
Describes the quality of device motion tracking.
- PixelFormat
PixelFormat represents the image pixel format. The pixel orientation for all formats is from left to right and top to bottom.
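The practical consequence of a pixel format is the buffer size it implies. A sketch (illustrative helper with conventional format names, not necessarily the exact PixelFormat enum values):

```java
// Illustrative sketch, not EasyAR API: bytes needed to store an image of
// the given size in a few typical pixel formats.
public class PixelBufferSize {
    public static int size(int width, int height, String format) {
        switch (format) {
            case "Gray":     return width * height;         // 1 byte per pixel
            case "RGB888":   return width * height * 3;     // 3 bytes per pixel
            case "RGBA8888": return width * height * 4;     // 4 bytes per pixel
            case "YUV_NV21": return width * height * 3 / 2; // 4:2:0 chroma subsampling
            default: throw new IllegalArgumentException("unknown format: " + format);
        }
    }

    public static void main(String[] args) {
        System.out.println(size(640, 480, "RGB888")); // prints 921600
    }
}
```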
- StorageType
StorageType represents the storage location for images, json files, videos, or other files. StorageType specifies the root directory for file storage, and you can use relative paths relative to this root directory in all related interfaces.