Package apple.arkit

Class ARFrame

  • All Implemented Interfaces:
    NSCopying, NSObject

    public class ARFrame
    extends NSObject
    implements NSCopying
    An object encapsulating the state of everything being tracked for a given moment in time. The model provides a snapshot of all data needed to render a given frame.
    • Constructor Detail

      • ARFrame

        protected ARFrame​(org.moe.natj.general.Pointer peer)
    • Method Detail

      • accessInstanceVariablesDirectly

        public static boolean accessInstanceVariablesDirectly()
      • alloc

        public static ARFrame alloc()
      • allocWithZone

        public static java.lang.Object allocWithZone​(org.moe.natj.general.ptr.VoidPtr zone)
      • anchors

        public NSArray<? extends ARAnchor> anchors()
        A list of anchors in the scene.
      • automaticallyNotifiesObserversForKey

        public static boolean automaticallyNotifiesObserversForKey​(java.lang.String key)
      • camera

        public ARCamera camera()
        The camera used to capture the frame’s image. The camera provides the device’s position and orientation as well as camera parameters.
      • cancelPreviousPerformRequestsWithTarget

        public static void cancelPreviousPerformRequestsWithTarget​(java.lang.Object aTarget)
      • cancelPreviousPerformRequestsWithTargetSelectorObject

        public static void cancelPreviousPerformRequestsWithTargetSelectorObject​(java.lang.Object aTarget,
                                                                                 org.moe.natj.objc.SEL aSelector,
                                                                                 java.lang.Object anArgument)
      • capturedDepthData

        public AVDepthData capturedDepthData()
        The frame’s captured depth data. Depth data is only provided with face tracking on frames where depth data was captured.
      • capturedDepthDataTimestamp

        public double capturedDepthDataTimestamp()
        A timestamp identifying the depth data.
      • capturedImage

        public CVBufferRef capturedImage()
        The frame’s captured image.
      • classFallbacksForKeyedArchiver

        public static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
      • classForKeyedUnarchiver

        public static org.moe.natj.objc.Class classForKeyedUnarchiver()
      • copyWithZone

        public java.lang.Object copyWithZone​(org.moe.natj.general.ptr.VoidPtr zone)
        Specified by:
        copyWithZone in interface NSCopying
      • debugDescription_static

        public static java.lang.String debugDescription_static()
      • description_static

        public static java.lang.String description_static()
      • displayTransformForOrientationViewportSize

        public CGAffineTransform displayTransformForOrientationViewportSize​(long orientation,
                                                                            CGSize viewportSize)
        Returns a display transform for the provided viewport size and orientation. The display transform can be used to convert normalized points in the image-space coordinate system of the captured image to normalized points in the view’s coordinate space. The transform provides the correct rotation and aspect-fill for presenting the captured image in the given orientation and size.
        Parameters:
        orientation - The orientation of the viewport.
        viewportSize - The size of the viewport.
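A minimal sketch of applying the resulting transform to a normalized image-space point. In the MOE binding the transform would come from frame.displayTransformForOrientationViewportSize(orientation, viewportSize); here the affine components (a, b, c, d, tx, ty) are plain doubles so the math is self-contained, and the class and point values are illustrative only.

```java
// Sketch: applying a 2D affine display transform to a normalized point.
// The components would normally be read from the CGAffineTransform returned
// by displayTransformForOrientationViewportSize(...).
final class DisplayTransformExample {
    /** Applies an affine transform: x' = a*x + c*y + tx, y' = b*x + d*y + ty. */
    static double[] apply(double a, double b, double c, double d,
                          double tx, double ty, double x, double y) {
        return new double[] { a * x + c * y + tx, b * x + d * y + ty };
    }

    public static void main(String[] args) {
        // With the identity transform the normalized point is unchanged.
        double[] p = apply(1, 0, 0, 1, 0, 0, 0.25, 0.75);
        System.out.println(p[0] + ", " + p[1]);
    }
}
```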
      • hash_static

        public static long hash_static()
      • hitTestTypes

        public NSArray<? extends ARHitTestResult> hitTestTypes​(CGPoint point,
                                                               long types)
        Searches the frame for objects corresponding to a point in the captured image. A 2D point in the captured image’s coordinate space can refer to any point along a line segment in the 3D coordinate space. Hit-testing is the process of finding objects in the world located along this line segment.
        Parameters:
        point - A point in the image-space coordinate system of the captured image. Values should range from (0,0) - upper left corner to (1,1) - lower right corner.
        types - The types of results to search for.
        Returns:
        An array of all hit-test results sorted from nearest to farthest.
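Before calling hitTestTypes, a touch location must be expressed in the (0,0)–(1,1) image-space range described above. The following sketch only divides by the viewport size and clamps; real code would also account for the display transform (rotation and aspect-fill), and all names here are illustrative.

```java
// Sketch: normalizing a viewport touch location to the (0,0)-(1,1) range
// expected by hitTestTypes(point, types). The result would be wrapped in a
// CGPoint and passed along with a bitmask of hit-test result types.
final class HitTestPointExample {
    static double[] normalize(double touchX, double touchY,
                              double viewportW, double viewportH) {
        double nx = Math.max(0.0, Math.min(1.0, touchX / viewportW));
        double ny = Math.max(0.0, Math.min(1.0, touchY / viewportH));
        return new double[] { nx, ny };
    }
}
```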
      • instanceMethodSignatureForSelector

        public static NSMethodSignature instanceMethodSignatureForSelector​(org.moe.natj.objc.SEL aSelector)
      • instancesRespondToSelector

        public static boolean instancesRespondToSelector​(org.moe.natj.objc.SEL aSelector)
      • isSubclassOfClass

        public static boolean isSubclassOfClass​(org.moe.natj.objc.Class aClass)
      • keyPathsForValuesAffectingValueForKey

        public static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey​(java.lang.String key)
      • lightEstimate

        public ARLightEstimate lightEstimate()
A light estimate representing the light in the scene. Returns null if light estimation is not available.
      • new_objc

        public static java.lang.Object new_objc()
      • rawFeaturePoints

        public ARPointCloud rawFeaturePoints()
        Feature points in the scene with respect to the frame’s origin. The feature points are only provided for configurations using world tracking.
      • resolveClassMethod

        public static boolean resolveClassMethod​(org.moe.natj.objc.SEL sel)
      • resolveInstanceMethod

        public static boolean resolveInstanceMethod​(org.moe.natj.objc.SEL sel)
      • setVersion_static

        public static void setVersion_static​(long aVersion)
      • superclass_static

        public static org.moe.natj.objc.Class superclass_static()
      • timestamp

        public double timestamp()
        A timestamp identifying the frame.
      • version_static

        public static long version_static()
      • cameraGrainIntensity

        public float cameraGrainIntensity()
The frame’s camera grain intensity in the range 0 to 1. A camera stream depicts image noise that gives the captured image a grainy look and varies with light conditions. The camera grain intensity can be used to select a texture slice from the frame’s camera grain texture.
      • cameraGrainTexture

        public MTLTexture cameraGrainTexture()
A tileable texture that contains image noise matching the current camera stream’s noise properties. A camera stream depicts image noise that gives the captured image a grainy look and varies with light conditions. The variations are stored along the depth dimension of the camera grain texture and can be selected at runtime using the camera grain intensity of the current frame.
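The mapping from intensity to a depth slice can be sketched as below. The texture depth would come from the MTLTexture returned by cameraGrainTexture(); here it is a plain int so the arithmetic is self-contained, and the exact mapping is an assumption rather than the scheme used by Apple's sample shaders.

```java
// Sketch: choosing a depth slice of the camera grain texture from the
// frame's cameraGrainIntensity() value (0..1).
final class GrainSliceExample {
    static int sliceFor(float intensity, int textureDepth) {
        // Clamp the intensity, scale to the slice count, and keep the
        // result inside the valid index range [0, textureDepth - 1].
        float clamped = Math.max(0f, Math.min(1f, intensity));
        return Math.min((int) (clamped * textureDepth), textureDepth - 1);
    }
}
```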
      • detectedBody

        public ARBody2D detectedBody()
        A detected body in the current frame.
      • estimatedDepthData

        public CVBufferRef estimatedDepthData()
A buffer that represents the estimated depth values for a performed segmentation. For each non-background pixel in the segmentation buffer, the corresponding depth value can be accessed in this buffer.
      • raycastQueryFromPointAllowingTargetAlignment

        public ARRaycastQuery raycastQueryFromPointAllowingTargetAlignment​(CGPoint point,
                                                                           long target,
                                                                           long alignment)
Creates a raycast query originating from the point on the captured image, aligned along the center of the field of view of the camera. A 2D point in the captured image’s coordinate space, together with the field of view of the frame's camera, is used to create a ray in the 3D coordinate space originating at the point.
        Parameters:
        point - A point in the image-space coordinate system of the captured image. Values should range from (0,0) - upper left corner to (1,1) - lower right corner.
        target - Type of target where the ray should terminate.
        alignment - Alignment of the target.
      • segmentationBuffer

        public CVBufferRef segmentationBuffer()
A buffer that represents the segmented content of the capturedImage. To identify the class to which a pixel has been assigned, compare its intensity value with the values defined in `ARSegmentationClass`.
        See Also:
        ARSegmentationClass
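The comparison described above can be sketched as follows. The raw values used here (0 for no classification, 255 for a person) are assumptions matching ARKit's ARSegmentationClass constants; verify them against the binding's own constants before relying on them. Reading the pixel out of the CVBufferRef is omitted.

```java
// Sketch: classifying one pixel of the segmentation buffer by intensity.
final class SegmentationExample {
    static final int CLASS_NONE = 0;     // assumed ARSegmentationClassNone
    static final int CLASS_PERSON = 255; // assumed ARSegmentationClassPerson

    static boolean isPerson(int pixelValue) {
        return pixelValue == CLASS_PERSON;
    }
}
```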
      • worldMappingStatus

        public long worldMappingStatus()
The status of world mapping for the area visible to the frame. This can be used to identify the state of the world map for the visible area and to decide whether additional scanning should be done before saving a world map.
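A sketch of gating a "save world map" action on worldMappingStatus(). The raw values below (0 = not available, 1 = limited, 2 = extending, 3 = mapped) are assumptions matching ARKit's ARWorldMappingStatus; check them against the binding's enum before use.

```java
// Sketch: deciding whether the visible area is mapped well enough to save.
final class WorldMapGateExample {
    static boolean readyToSave(long status) {
        // Saving is most reliable once the visible area is fully mapped;
        // "extending" (2) may be acceptable if relocalization risk is fine.
        return status == 3 /* assumed: mapped */;
    }
}
```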
      • geoTrackingStatus

        public ARGeoTrackingStatus geoTrackingStatus()
        The status of geo tracking.
      • sceneDepth

        public ARDepthData sceneDepth()
        Scene depth data.
      • smoothedSceneDepth

        public ARDepthData smoothedSceneDepth()
        Scene depth data, smoothed for temporal consistency.