Class AVCaptureResolvedPhotoSettings

  • All Implemented Interfaces:
    NSObject

    public class AVCaptureResolvedPhotoSettings
    extends NSObject
    An immutable object delivered to every AVCapturePhotoCaptureDelegate protocol method callback. When you initiate a photo capture request using -capturePhotoWithSettings:delegate:, some of your settings are not yet certain. For instance, auto flash and auto still image stabilization allow the AVCapturePhotoOutput to decide just in time whether to employ flash or still image stabilization, depending on the current scene. Once the request is issued, AVCapturePhotoOutput begins the capture, resolves the uncertain settings, and in its first callback informs you of its choices through an AVCaptureResolvedPhotoSettings object. This same object is passed to every callback fired for that photo capture request. Its uniqueID property matches that of the AVCapturePhotoSettings instance you used to initiate the photo request.
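The flow above can be modeled in plain Java. This is an illustrative sketch only, not the MOE or AVFoundation API: the class names, fields, and the scene condition are all stand-ins. It shows the two facts the summary states, that the resolved settings carry the same uniqueID as the request, and that one resolved object is handed to every callback for that request.

```java
// Plain-Java model of the capture flow (not the MOE API): the settings you
// submit carry a uniqueID, and the resolved-settings object delivered to
// every delegate callback reports that same uniqueID plus the just-in-time
// choices (flash here is a stand-in for any "auto" setting).
public class CaptureFlowSketch {
    static long nextId = 1;

    static class PhotoSettings {
        final long uniqueID = nextId++;
        boolean autoFlash = true;               // "auto" leaves the choice open
    }

    static class ResolvedSettings {
        final long uniqueID;
        final boolean flashEnabled;             // now a definite yes/no
        ResolvedSettings(long id, boolean flash) { uniqueID = id; flashEnabled = flash; }
    }

    interface CaptureDelegate {
        void callback(String stage, ResolvedSettings resolved);
    }

    static void capturePhoto(PhotoSettings settings, CaptureDelegate delegate) {
        // Uncertain settings are resolved once, just in time, from the scene.
        boolean sceneIsDark = true;             // assumed scene condition
        ResolvedSettings resolved =
                new ResolvedSettings(settings.uniqueID, settings.autoFlash && sceneIsDark);
        delegate.callback("willBeginCapture", resolved);  // first callback reports the choices
        delegate.callback("didFinishCapture", resolved);  // same object in every later callback
    }
}
```

In real MOE code the delegate is your AVCapturePhotoCaptureDelegate implementation; the point of the sketch is only the identity relationship between the request and its resolved settings.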
    • Constructor Detail

      • AVCaptureResolvedPhotoSettings

        protected AVCaptureResolvedPhotoSettings​(org.moe.natj.general.Pointer peer)
    • Method Detail

      • accessInstanceVariablesDirectly

        public static boolean accessInstanceVariablesDirectly()
      • allocWithZone

        public static java.lang.Object allocWithZone​(org.moe.natj.general.ptr.VoidPtr zone)
      • automaticallyNotifiesObserversForKey

        public static boolean automaticallyNotifiesObserversForKey​(java.lang.String key)
      • cancelPreviousPerformRequestsWithTarget

        public static void cancelPreviousPerformRequestsWithTarget​(java.lang.Object aTarget)
      • cancelPreviousPerformRequestsWithTargetSelectorObject

        public static void cancelPreviousPerformRequestsWithTargetSelectorObject​(java.lang.Object aTarget,
                                                                                 org.moe.natj.objc.SEL aSelector,
                                                                                 java.lang.Object anArgument)
      • classFallbacksForKeyedArchiver

        public static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
      • classForKeyedUnarchiver

        public static org.moe.natj.objc.Class classForKeyedUnarchiver()
      • debugDescription_static

        public static java.lang.String debugDescription_static()
      • description_static

        public static java.lang.String description_static()
      • hash_static

        public static long hash_static()
      • instanceMethodSignatureForSelector

        public static NSMethodSignature instanceMethodSignatureForSelector​(org.moe.natj.objc.SEL aSelector)
      • instancesRespondToSelector

        public static boolean instancesRespondToSelector​(org.moe.natj.objc.SEL aSelector)
      • isSubclassOfClass

        public static boolean isSubclassOfClass​(org.moe.natj.objc.Class aClass)
      • keyPathsForValuesAffectingValueForKey

        public static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey​(java.lang.String key)
      • new_objc

        public static java.lang.Object new_objc()
      • resolveClassMethod

        public static boolean resolveClassMethod​(org.moe.natj.objc.SEL sel)
      • resolveInstanceMethod

        public static boolean resolveInstanceMethod​(org.moe.natj.objc.SEL sel)
      • setVersion_static

        public static void setVersion_static​(long aVersion)
      • superclass_static

        public static org.moe.natj.objc.Class superclass_static()
      • version_static

        public static long version_static()
      • isDualCameraFusionEnabled

        public boolean isDualCameraFusionEnabled()
        [@property] dualCameraFusionEnabled Indicates whether DualCamera wide-angle and telephoto image fusion will be employed when capturing the photo. As of iOS 13, this property is deprecated in favor of virtualDeviceFusionEnabled.
      • isFlashEnabled

        public boolean isFlashEnabled()
        [@property] flashEnabled Indicates whether the flash will fire when capturing the photo. When you specify AVCaptureFlashModeAuto as your AVCapturePhotoSettings.flashMode, you don't know whether flash capture will be chosen until you inspect the AVCaptureResolvedPhotoSettings flashEnabled property. If the device becomes too hot, the flash becomes temporarily unavailable. You can key-value observe AVCaptureDevice's flashAvailable property to know when this occurs. If the flash is unavailable due to thermal issues, and you specify a flashMode of AVCaptureFlashModeOn, flashEnabled still resolves to NO until the device has sufficiently cooled off.
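The resolution rules described above can be sketched as a small decision function. This models assumed behavior for illustration; it is not the framework's implementation, and the parameter names are hypothetical.

```java
// Sketch of the flash resolution logic described above (assumed behavior,
// not the framework implementation): availability is checked first, so a
// thermally unavailable flash resolves to false even for AVCaptureFlashModeOn.
public class FlashResolutionSketch {
    enum FlashMode { OFF, ON, AUTO }

    static boolean resolveFlashEnabled(FlashMode mode, boolean sceneIsDark, boolean flashAvailable) {
        if (!flashAvailable) return false;       // e.g. device too hot: ON still resolves to NO
        switch (mode) {
            case ON:   return true;
            case AUTO: return sceneIsDark;       // decided just in time from the scene
            default:   return false;
        }
    }
}
```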
      • isStillImageStabilizationEnabled

        public boolean isStillImageStabilizationEnabled()
        [@property] stillImageStabilizationEnabled Indicates whether still image stabilization will be employed when capturing the photo. As of iOS 13, the AVCapturePhotoOutput is capable of applying a variety of multi-image fusion techniques to improve photo quality (reducing noise, preserving detail in low light, freezing motion, etc.), all of which were previously lumped under the stillImageStabilization moniker. This property should no longer be used, as it no longer provides meaningful information about the techniques used to improve quality in a photo capture. Instead, use -photoQualityPrioritization to indicate your preferred quality-versus-speed trade-off when configuring your AVCapturePhotoSettings. You may query -photoProcessingTimeRange to get an indication of how long the photo will take to process before delivery to your delegate.
      • livePhotoMovieDimensions

        public CMVideoDimensions livePhotoMovieDimensions()
        [@property] livePhotoMovieDimensions The resolved dimensions of the video track in the movie that will be delivered to the -captureOutput:didFinishProcessingLivePhotoToMovieFileAtURL:duration:photoDisplayTime:resolvedSettings:error: callback. If you don't request Live Photo capture, livePhotoMovieDimensions resolve to { 0, 0 }.
      • photoDimensions

        public CMVideoDimensions photoDimensions()
        [@property] photoDimensions The resolved dimensions of the photo buffer that will be delivered to the -captureOutput:didFinishProcessingPhotoSampleBuffer:previewPhotoSampleBuffer:resolvedSettings:bracketSettings:error: callback. If you request a RAW capture with no processed companion image, photoDimensions resolve to { 0, 0 }.
      • previewDimensions

        public CMVideoDimensions previewDimensions()
        [@property] previewDimensions The resolved dimensions of the preview photo buffer that will be delivered to the -captureOutput:didFinishProcessing{Photo | RawPhoto}... AVCapturePhotoCaptureDelegate callbacks. If you don't request a preview image, previewDimensions resolve to { 0, 0 }.
      • rawPhotoDimensions

        public CMVideoDimensions rawPhotoDimensions()
        [@property] rawPhotoDimensions The resolved dimensions of the RAW photo buffer that will be delivered to the -captureOutput:didFinishProcessingRawPhotoSampleBuffer:previewPhotoSampleBuffer:resolvedSettings:bracketSettings:error: callback. If you request a non-RAW capture, rawPhotoDimensions resolve to { 0, 0 }.
      • uniqueID

        public long uniqueID()
        [@property] uniqueID uniqueID matches that of the AVCapturePhotoSettings instance you passed to -capturePhotoWithSettings:delegate:.
      • embeddedThumbnailDimensions

        public CMVideoDimensions embeddedThumbnailDimensions()
        [@property] embeddedThumbnailDimensions The resolved dimensions of the embedded thumbnail that will be written to the processed photo delivered to the -captureOutput:didFinishProcessingPhoto:error: AVCapturePhotoCaptureDelegate callback. If you don't request an embedded thumbnail image, embeddedThumbnailDimensions resolve to { 0, 0 }.
      • expectedPhotoCount

        public long expectedPhotoCount()
        [@property] expectedPhotoCount Indicates the number of times your -captureOutput:didFinishProcessingPhoto:error: callback will be called. For instance, if you've requested an auto exposure bracket of 3 with JPEG and RAW, the expectedPhotoCount is 6.
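The bracket example above is simple multiplication. The sketch below is illustrative arithmetic only, not a framework call; the parameter names are hypothetical.

```java
// Illustrative arithmetic only (not a framework call): for a bracketed
// capture, expectedPhotoCount is the bracket count multiplied by the
// number of delivered formats (processed and/or RAW).
public class ExpectedPhotoCountSketch {
    static long expectedPhotoCount(int bracketCount, boolean processedRequested, boolean rawRequested) {
        int formats = (processedRequested ? 1 : 0) + (rawRequested ? 1 : 0);
        return (long) bracketCount * formats;
    }
}
```

An auto exposure bracket of 3 with both JPEG and RAW requested resolves to 3 × 2 = 6 delegate callbacks.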
      • dimensionsForSemanticSegmentationMatteOfType

        public CMVideoDimensions dimensionsForSemanticSegmentationMatteOfType​(java.lang.String semanticSegmentationMatteType)
        dimensionsForSemanticSegmentationMatteOfType: Queries the resolved dimensions of semantic segmentation mattes that will be delivered to the AVCapturePhoto in the -captureOutput:didFinishProcessingPhoto:error: AVCapturePhotoCaptureDelegate callback. If you request semantic segmentation mattes by calling -[AVCapturePhotoSettings setEnabledSemanticSegmentationMatteTypes:] with a non-empty array, the dimensions resolve to the expected dimensions for each of the mattes, assuming they are generated (see -[AVCapturePhotoSettings enabledSemanticSegmentationMatteTypes] for a discussion of why a particular matte might not be delivered). If you don't request any semantic segmentation mattes, the result will always be { 0, 0 }.
      • isRedEyeReductionEnabled

        public boolean isRedEyeReductionEnabled()
        [@property] redEyeReductionEnabled Indicates whether red-eye reduction will be applied as necessary when capturing the photo if flashEnabled is YES.
      • isVirtualDeviceFusionEnabled

        public boolean isVirtualDeviceFusionEnabled()
        [@property] virtualDeviceFusionEnabled Indicates whether fusion of virtual device constituent camera images will be used when capturing the photo, such as the wide-angle and telephoto images on a DualCamera.
      • photoProcessingTimeRange

        public CMTimeRange photoProcessingTimeRange()
        [@property] photoProcessingTimeRange Indicates the processing time range you can expect for this photo to be delivered to your delegate. The .start field of the CMTimeRange is zero-based; if photoProcessingTimeRange.start equals 0.5 seconds, the minimum processing time for this photo is 0.5 seconds. The .start field plus the .duration field indicates the maximum expected processing time for this photo. Consider implementing a UI affordance if the maximum processing time is uncomfortably long.
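The min/max arithmetic above can be sketched as follows. This is illustrative only: it uses plain doubles rather than CMTime values, and the 1-second UI threshold is an arbitrary example, not a framework recommendation.

```java
// Illustrative arithmetic only: the minimum expected processing time is the
// range's start, and the maximum is start + duration, as described above.
public class ProcessingTimeSketch {
    static double minProcessingSeconds(double startSeconds, double durationSeconds) {
        return startSeconds;
    }
    static double maxProcessingSeconds(double startSeconds, double durationSeconds) {
        return startSeconds + durationSeconds;
    }
    // A UI affordance (e.g. a progress spinner) might key off the maximum;
    // the 1-second threshold here is an arbitrary example value.
    static boolean shouldShowProgressUI(double startSeconds, double durationSeconds) {
        return maxProcessingSeconds(startSeconds, durationSeconds) > 1.0;
    }
}
```

For a range of { start: 0.5 s, duration: 2.5 s }, the photo takes at least 0.5 s and at most 3.0 s to process.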
      • portraitEffectsMatteDimensions

        public CMVideoDimensions portraitEffectsMatteDimensions()
        [@property] portraitEffectsMatteDimensions The resolved dimensions of the portrait effects matte that will be delivered to the AVCapturePhoto in the -captureOutput:didFinishProcessingPhoto:error: AVCapturePhotoCaptureDelegate callback. If you request a portrait effects matte by calling -[AVCapturePhotoSettings setPortraitEffectsMatteDeliveryEnabled:YES], portraitEffectsMatteDimensions resolve to the expected dimensions of the portrait effects matte, assuming one is generated (see -[AVCapturePhotoSettings portraitEffectsMatteDeliveryEnabled] for a discussion of why a portrait effects matte might not be delivered). If you don't request a portrait effects matte, portraitEffectsMatteDimensions always resolve to { 0, 0 }.
      • rawEmbeddedThumbnailDimensions

        public CMVideoDimensions rawEmbeddedThumbnailDimensions()
        [@property] rawEmbeddedThumbnailDimensions The resolved dimensions of the embedded thumbnail that will be written to the RAW photo delivered to the -captureOutput:didFinishProcessingPhoto:error: AVCapturePhotoCaptureDelegate callback. If you don't request a raw embedded thumbnail image, rawEmbeddedThumbnailDimensions resolve to { 0, 0 }.
      • isContentAwareDistortionCorrectionEnabled

        public boolean isContentAwareDistortionCorrectionEnabled()
        [@property] contentAwareDistortionCorrectionEnabled Indicates whether content-aware distortion correction will be employed when capturing the photo.