Package apple.avfoundation
Class AVCaptureVideoDataOutput

java.lang.Object
  org.moe.natj.general.NativeObject
    org.moe.natj.objc.ObjCObject
      apple.NSObject
        apple.avfoundation.AVCaptureOutput
          apple.avfoundation.AVCaptureVideoDataOutput

All Implemented Interfaces:
NSObject
public class AVCaptureVideoDataOutput extends AVCaptureOutput
AVCaptureVideoDataOutput is a concrete subclass of AVCaptureOutput that can be used to process uncompressed or compressed frames from the video being captured. Instances of AVCaptureVideoDataOutput produce video frames suitable for processing using other media APIs. Applications can access the frames with the captureOutput:didOutputSampleBuffer:fromConnection: delegate method.
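A minimal MOE usage sketch. The alloc() and init() calls are documented on this page; AVCaptureSession and its canAddOutput/addOutput methods are not, and are shown here as assumptions:

```java
// Hypothetical setup: attach a video data output to a capture session.
// AVCaptureSession, canAddOutput and addOutput are assumed bindings.
AVCaptureSession session = AVCaptureSession.alloc().init();
AVCaptureVideoDataOutput output = AVCaptureVideoDataOutput.alloc().init();
if (session.canAddOutput(output)) {
    session.addOutput(output); // frames now flow to the output's delegate
}
```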
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from class apple.NSObject
NSObject.Function_instanceMethodForSelector_ret, NSObject.Function_methodForSelector_ret
-
-
Constructor Summary
Constructors
protected AVCaptureVideoDataOutput(org.moe.natj.general.Pointer peer)
-
Method Summary
static boolean accessInstanceVariablesDirectly()
static AVCaptureVideoDataOutput alloc()
static java.lang.Object allocWithZone(org.moe.natj.general.ptr.VoidPtr zone)
boolean alwaysDiscardsLateVideoFrames()
    [@property] alwaysDiscardsLateVideoFrames Specifies whether the receiver should always discard any video frame that is not processed before the next frame is captured.
boolean automaticallyConfiguresOutputBufferDimensions()
    [@property] automaticallyConfiguresOutputBufferDimensions Indicates whether the receiver automatically configures the size of output buffers.
static boolean automaticallyNotifiesObserversForKey(java.lang.String key)
NSArray<java.lang.String> availableVideoCodecTypes()
    [@property] availableVideoCodecTypes Indicates the supported video codec formats that can be specified in videoSettings.
NSArray<java.lang.String> availableVideoCodecTypesForAssetWriterWithOutputFileType(java.lang.String outputFileType)
    availableVideoCodecTypesForAssetWriterWithOutputFileType: Specifies the available video codecs for use with AVAssetWriter and a given file type.
NSArray<? extends NSNumber> availableVideoCVPixelFormatTypes()
    [@property] availableVideoCVPixelFormatTypes Indicates the supported video pixel formats that can be specified in videoSettings.
static void cancelPreviousPerformRequestsWithTarget(java.lang.Object aTarget)
static void cancelPreviousPerformRequestsWithTargetSelectorObject(java.lang.Object aTarget, org.moe.natj.objc.SEL aSelector, java.lang.Object anArgument)
static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
static org.moe.natj.objc.Class classForKeyedUnarchiver()
static java.lang.String debugDescription_static()
boolean deliversPreviewSizedOutputBuffers()
    [@property] deliversPreviewSizedOutputBuffers Indicates whether the receiver is currently configured to deliver preview sized buffers.
static java.lang.String description_static()
static long hash_static()
AVCaptureVideoDataOutput init()
static NSObject.Function_instanceMethodForSelector_ret instanceMethodForSelector(org.moe.natj.objc.SEL aSelector)
static NSMethodSignature instanceMethodSignatureForSelector(org.moe.natj.objc.SEL aSelector)
static boolean instancesRespondToSelector(org.moe.natj.objc.SEL aSelector)
static boolean isSubclassOfClass(org.moe.natj.objc.Class aClass)
static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey(java.lang.String key)
CMTime minFrameDuration()
    Deprecated.
static java.lang.Object new_objc()
NSDictionary<java.lang.String,?> recommendedVideoSettingsForAssetWriterWithOutputFileType(java.lang.String outputFileType)
    recommendedVideoSettingsForAssetWriterWithOutputFileType: Specifies the recommended settings for use with an AVAssetWriterInput.
NSDictionary<?,?> recommendedVideoSettingsForVideoCodecTypeAssetWriterOutputFileType(java.lang.String videoCodecType, java.lang.String outputFileType)
    recommendedVideoSettingsForVideoCodecType:assetWriterOutputFileType: Specifies the recommended settings for a particular video codec type, to be used with an AVAssetWriterInput.
static boolean resolveClassMethod(org.moe.natj.objc.SEL sel)
static boolean resolveInstanceMethod(org.moe.natj.objc.SEL sel)
NSObject sampleBufferCallbackQueue()
    [@property] sampleBufferCallbackQueue The dispatch queue on which all sample buffer delegate methods will be called.
AVCaptureVideoDataOutputSampleBufferDelegate sampleBufferDelegate()
    [@property] sampleBufferDelegate The receiver's delegate.
void setAlwaysDiscardsLateVideoFrames(boolean value)
    [@property] alwaysDiscardsLateVideoFrames Specifies whether the receiver should always discard any video frame that is not processed before the next frame is captured.
void setAutomaticallyConfiguresOutputBufferDimensions(boolean value)
    [@property] automaticallyConfiguresOutputBufferDimensions Indicates whether the receiver automatically configures the size of output buffers.
void setDeliversPreviewSizedOutputBuffers(boolean value)
    [@property] deliversPreviewSizedOutputBuffers Indicates whether the receiver is currently configured to deliver preview sized buffers.
void setMinFrameDuration(CMTime value)
    Deprecated.
void setSampleBufferDelegateQueue(AVCaptureVideoDataOutputSampleBufferDelegate sampleBufferDelegate, NSObject sampleBufferCallbackQueue)
    setSampleBufferDelegate:queue: Sets the receiver's delegate that will accept captured buffers and dispatch queue on which the delegate will be called.
static void setVersion_static(long aVersion)
void setVideoSettings(NSDictionary<java.lang.String,?> value)
    [@property] videoSettings Specifies the settings used to decode or re-encode video before it is output by the receiver.
static org.moe.natj.objc.Class superclass_static()
static long version_static()
NSDictionary<java.lang.String,?> videoSettings()
    [@property] videoSettings Specifies the settings used to decode or re-encode video before it is output by the receiver.
-
Methods inherited from class apple.avfoundation.AVCaptureOutput
connections, connectionWithMediaType, metadataOutputRectOfInterestForRect, rectForMetadataOutputRectOfInterest, transformedMetadataObjectForMetadataObjectConnection
-
Methods inherited from class apple.NSObject
accessibilityActivate, accessibilityActivationPoint, accessibilityAssistiveTechnologyFocusedIdentifiers, accessibilityAttributedHint, accessibilityAttributedLabel, accessibilityAttributedUserInputLabels, accessibilityAttributedValue, accessibilityContainerType, accessibilityCustomActions, accessibilityCustomRotors, accessibilityDecrement, accessibilityDragSourceDescriptors, accessibilityDropPointDescriptors, accessibilityElementAtIndex, accessibilityElementCount, accessibilityElementDidBecomeFocused, accessibilityElementDidLoseFocus, accessibilityElementIsFocused, accessibilityElements, accessibilityElementsHidden, accessibilityFrame, accessibilityHint, accessibilityIncrement, accessibilityLabel, accessibilityLanguage, accessibilityNavigationStyle, accessibilityPath, accessibilityPerformEscape, accessibilityPerformMagicTap, accessibilityRespondsToUserInteraction, accessibilityScroll, accessibilityTextualContext, accessibilityTraits, accessibilityUserInputLabels, accessibilityValue, accessibilityViewIsModal, addObserverForKeyPathOptionsContext, attemptRecoveryFromErrorOptionIndex, attemptRecoveryFromErrorOptionIndexDelegateDidRecoverSelectorContextInfo, autoContentAccessingProxy, awakeAfterUsingCoder, awakeFromNib, class_objc, classForCoder, classForKeyedArchiver, copy, dealloc, debugDescription, description, dictionaryWithValuesForKeys, didChangeValueForKey, didChangeValueForKeyWithSetMutationUsingObjects, didChangeValuesAtIndexesForKey, doesNotRecognizeSelector, fileManagerShouldProceedAfterError, fileManagerWillProcessPath, finalize_objc, forwardingTargetForSelector, forwardInvocation, hash, indexOfAccessibilityElement, isAccessibilityElement, isEqual, isKindOfClass, isMemberOfClass, isProxy, methodForSelector, methodSignatureForSelector, mutableArrayValueForKey, mutableArrayValueForKeyPath, mutableCopy, mutableOrderedSetValueForKey, mutableOrderedSetValueForKeyPath, mutableSetValueForKey, mutableSetValueForKeyPath, observationInfo, 
observeValueForKeyPathOfObjectChangeContext, performSelector, performSelectorInBackgroundWithObject, performSelectorOnMainThreadWithObjectWaitUntilDone, performSelectorOnMainThreadWithObjectWaitUntilDoneModes, performSelectorOnThreadWithObjectWaitUntilDone, performSelectorOnThreadWithObjectWaitUntilDoneModes, performSelectorWithObject, performSelectorWithObjectAfterDelay, performSelectorWithObjectAfterDelayInModes, performSelectorWithObjectWithObject, prepareForInterfaceBuilder, provideImageDataBytesPerRowOrigin_Size_UserInfo, removeObserverForKeyPath, removeObserverForKeyPathContext, replacementObjectForCoder, replacementObjectForKeyedArchiver, respondsToSelector, self, setAccessibilityActivationPoint, setAccessibilityAttributedHint, setAccessibilityAttributedLabel, setAccessibilityAttributedUserInputLabels, setAccessibilityAttributedValue, setAccessibilityContainerType, setAccessibilityCustomActions, setAccessibilityCustomRotors, setAccessibilityDragSourceDescriptors, setAccessibilityDropPointDescriptors, setAccessibilityElements, setAccessibilityElementsHidden, setAccessibilityFrame, setAccessibilityHint, setAccessibilityLabel, setAccessibilityLanguage, setAccessibilityNavigationStyle, setAccessibilityPath, setAccessibilityRespondsToUserInteraction, setAccessibilityTextualContext, setAccessibilityTraits, setAccessibilityUserInputLabels, setAccessibilityValue, setAccessibilityViewIsModal, setIsAccessibilityElement, setNilValueForKey, setObservationInfo, setShouldGroupAccessibilityChildren, setValueForKey, setValueForKeyPath, setValueForUndefinedKey, setValuesForKeysWithDictionary, shouldGroupAccessibilityChildren, superclass, validateValueForKeyError, validateValueForKeyPathError, valueForKey, valueForKeyPath, valueForUndefinedKey, willChangeValueForKey, willChangeValueForKeyWithSetMutationUsingObjects, willChangeValuesAtIndexesForKey
-
Method Detail
-
accessInstanceVariablesDirectly
public static boolean accessInstanceVariablesDirectly()
-
alloc
public static AVCaptureVideoDataOutput alloc()
-
allocWithZone
public static java.lang.Object allocWithZone(org.moe.natj.general.ptr.VoidPtr zone)
-
automaticallyNotifiesObserversForKey
public static boolean automaticallyNotifiesObserversForKey(java.lang.String key)
-
cancelPreviousPerformRequestsWithTarget
public static void cancelPreviousPerformRequestsWithTarget(java.lang.Object aTarget)
-
cancelPreviousPerformRequestsWithTargetSelectorObject
public static void cancelPreviousPerformRequestsWithTargetSelectorObject(java.lang.Object aTarget, org.moe.natj.objc.SEL aSelector, java.lang.Object anArgument)
-
classFallbacksForKeyedArchiver
public static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
-
classForKeyedUnarchiver
public static org.moe.natj.objc.Class classForKeyedUnarchiver()
-
debugDescription_static
public static java.lang.String debugDescription_static()
-
description_static
public static java.lang.String description_static()
-
hash_static
public static long hash_static()
-
instanceMethodForSelector
public static NSObject.Function_instanceMethodForSelector_ret instanceMethodForSelector(org.moe.natj.objc.SEL aSelector)
-
instanceMethodSignatureForSelector
public static NSMethodSignature instanceMethodSignatureForSelector(org.moe.natj.objc.SEL aSelector)
-
instancesRespondToSelector
public static boolean instancesRespondToSelector(org.moe.natj.objc.SEL aSelector)
-
isSubclassOfClass
public static boolean isSubclassOfClass(org.moe.natj.objc.Class aClass)
-
keyPathsForValuesAffectingValueForKey
public static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey(java.lang.String key)
-
new_objc
public static java.lang.Object new_objc()
-
resolveClassMethod
public static boolean resolveClassMethod(org.moe.natj.objc.SEL sel)
-
resolveInstanceMethod
public static boolean resolveInstanceMethod(org.moe.natj.objc.SEL sel)
-
setVersion_static
public static void setVersion_static(long aVersion)
-
superclass_static
public static org.moe.natj.objc.Class superclass_static()
-
version_static
public static long version_static()
-
alwaysDiscardsLateVideoFrames
public boolean alwaysDiscardsLateVideoFrames()
[@property] alwaysDiscardsLateVideoFrames Specifies whether the receiver should always discard any video frame that is not processed before the next frame is captured. When the value of this property is YES, the receiver will immediately discard frames that are captured while the dispatch queue handling existing frames is blocked in the captureOutput:didOutputSampleBuffer:fromConnection: delegate method. When the value of this property is NO, delegates will be allowed more time to process old frames before new frames are discarded, but application memory usage may increase significantly as a result. The default value is YES.
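As a sketch, given an output created via alloc().init(), the property maps to the plain accessor pair documented on this page:

```java
AVCaptureVideoDataOutput output = AVCaptureVideoDataOutput.alloc().init();
// Default is true: frames captured while the delegate queue is blocked
// are discarded immediately, keeping memory usage bounded.
boolean drops = output.alwaysDiscardsLateVideoFrames();
// false gives the delegate more time to process old frames, at the cost
// of potentially significant memory growth.
output.setAlwaysDiscardsLateVideoFrames(false);
```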
-
availableVideoCVPixelFormatTypes
public NSArray<? extends NSNumber> availableVideoCVPixelFormatTypes()
[@property] availableVideoCVPixelFormatTypes Indicates the supported video pixel formats that can be specified in videoSettings. The value of this property is an NSArray of NSNumbers that can be used as values for the kCVPixelBufferPixelFormatTypeKey in the receiver's videoSettings property. The first format in the returned list is the most efficient output format.
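Since the first entry is the most efficient format, a caller choosing an output pixel format might do the following (indexed access on MOE's NSArray via get() is assumed here):

```java
NSArray<? extends NSNumber> formats = output.availableVideoCVPixelFormatTypes();
// The first element is the most efficient output format; its value is
// suitable as the kCVPixelBufferPixelFormatTypeKey entry in videoSettings.
NSNumber mostEfficient = formats.get(0);
```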
-
availableVideoCodecTypes
public NSArray<java.lang.String> availableVideoCodecTypes()
[@property] availableVideoCodecTypes Indicates the supported video codec formats that can be specified in videoSettings. The value of this property is an NSArray of AVVideoCodecTypes that can be used as values for the AVVideoCodecKey in the receiver's videoSettings property.
-
init
public AVCaptureVideoDataOutput init()
- Overrides:
init in class AVCaptureOutput
-
minFrameDuration
@Deprecated public CMTime minFrameDuration()
Deprecated. [@property] minFrameDuration Specifies the minimum time interval between which the receiver should output consecutive video frames. The value of this property is a CMTime specifying the minimum duration of each video frame output by the receiver, placing a lower bound on the amount of time that should separate consecutive frames. This is equivalent to the inverse of the maximum frame rate. A value of kCMTimeZero or kCMTimeInvalid indicates an unlimited maximum frame rate. The default value is kCMTimeInvalid. As of iOS 5.0, minFrameDuration is deprecated. Use AVCaptureConnection's videoMinFrameDuration property instead.
-
recommendedVideoSettingsForAssetWriterWithOutputFileType
public NSDictionary<java.lang.String,?> recommendedVideoSettingsForAssetWriterWithOutputFileType(java.lang.String outputFileType)
recommendedVideoSettingsForAssetWriterWithOutputFileType: Specifies the recommended settings for use with an AVAssetWriterInput. The value of this property is an NSDictionary containing values for compression settings keys defined in AVVideoSettings.h. This dictionary is suitable for use as the "outputSettings" parameter when creating an AVAssetWriterInput, for example: [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings sourceFormatHint:hint]. The dictionary returned contains all necessary keys and values needed by AVAssetWriter (see AVAssetWriterInput.h, -initWithMediaType:outputSettings: for a more in-depth discussion). For QuickTime movie and ISO file types, the recommended video settings will produce output comparable to that of AVCaptureMovieFileOutput. Note that the dictionary of settings is dependent on the current configuration of the receiver's AVCaptureSession and its inputs. The settings dictionary may change if the session's configuration changes. As such, you should configure your session first, then query the recommended video settings. As of iOS 8.3, movies produced with these settings successfully import into the iOS camera roll and sync to and from like devices via iTunes.
Parameters:
outputFileType - Specifies the UTI of the file type to be written (see AVMediaFormat.h for a list of file format UTIs).
Returns:
A fully populated dictionary of keys and values that are compatible with AVAssetWriter.
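A hedged sketch of the intended flow: configure the session first, then query the settings. The literal QuickTime UTI string below stands in for the AVFileTypeQuickTimeMovie constant, whose exact MOE binding is not documented on this page:

```java
// Configure the AVCaptureSession fully BEFORE querying: the recommended
// settings depend on the session's current configuration and may change
// if the configuration changes.
NSDictionary<java.lang.String, ?> settings =
    output.recommendedVideoSettingsForAssetWriterWithOutputFileType(
        "com.apple.quicktime-movie"); // assumed UTI for AVFileTypeQuickTimeMovie
// The dictionary is suitable as the outputSettings of an AVAssetWriterInput
// created for AVMediaTypeVideo (that binding's signature is assumed).
```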
-
sampleBufferCallbackQueue
public NSObject sampleBufferCallbackQueue()
[@property] sampleBufferCallbackQueue The dispatch queue on which all sample buffer delegate methods will be called. The value of this property is a dispatch_queue_t. The queue is set using the setSampleBufferDelegate:queue: method.
-
sampleBufferDelegate
public AVCaptureVideoDataOutputSampleBufferDelegate sampleBufferDelegate()
[@property] sampleBufferDelegate The receiver's delegate. The value of this property is an object conforming to the AVCaptureVideoDataOutputSampleBufferDelegate protocol that will receive sample buffers after they are captured. The delegate is set using the setSampleBufferDelegate:queue: method.
-
setAlwaysDiscardsLateVideoFrames
public void setAlwaysDiscardsLateVideoFrames(boolean value)
[@property] alwaysDiscardsLateVideoFrames Specifies whether the receiver should always discard any video frame that is not processed before the next frame is captured. When the value of this property is YES, the receiver will immediately discard frames that are captured while the dispatch queue handling existing frames is blocked in the captureOutput:didOutputSampleBuffer:fromConnection: delegate method. When the value of this property is NO, delegates will be allowed more time to process old frames before new frames are discarded, but application memory usage may increase significantly as a result. The default value is YES.
-
setMinFrameDuration
@Deprecated public void setMinFrameDuration(CMTime value)
Deprecated. [@property] minFrameDuration Specifies the minimum time interval between which the receiver should output consecutive video frames. The value of this property is a CMTime specifying the minimum duration of each video frame output by the receiver, placing a lower bound on the amount of time that should separate consecutive frames. This is equivalent to the inverse of the maximum frame rate. A value of kCMTimeZero or kCMTimeInvalid indicates an unlimited maximum frame rate. The default value is kCMTimeInvalid. As of iOS 5.0, minFrameDuration is deprecated. Use AVCaptureConnection's videoMinFrameDuration property instead.
-
setSampleBufferDelegateQueue
public void setSampleBufferDelegateQueue(AVCaptureVideoDataOutputSampleBufferDelegate sampleBufferDelegate, NSObject sampleBufferCallbackQueue)
setSampleBufferDelegate:queue: Sets the receiver's delegate that will accept captured buffers and dispatch queue on which the delegate will be called. When a new video sample buffer is captured it will be vended to the sample buffer delegate using the captureOutput:didOutputSampleBuffer:fromConnection: delegate method. All delegate methods will be called on the specified dispatch queue. If the queue is blocked when new frames are captured, those frames will be automatically dropped at a time determined by the value of the alwaysDiscardsLateVideoFrames property. This allows clients to process existing frames on the same queue without having to manage the potential memory usage increases that would otherwise occur when that processing is unable to keep up with the rate of incoming frames. If their frame processing is consistently unable to keep up with the rate of incoming frames, clients should consider using the minFrameDuration property, which will generally yield better performance characteristics and more consistent frame rates than frame dropping alone. Clients that need to minimize the chances of frames being dropped should specify a queue on which a sufficiently small amount of processing is being done outside of receiving sample buffers. However, if such clients migrate extra processing to another queue, they are responsible for ensuring that memory usage does not grow without bound from frames that have not been processed. A serial dispatch queue must be used to guarantee that video frames will be delivered in order. The sampleBufferCallbackQueue parameter may not be NULL, except when setting the sampleBufferDelegate to nil.
Parameters:
sampleBufferDelegate - An object conforming to the AVCaptureVideoDataOutputSampleBufferDelegate protocol that will receive sample buffers after they are captured.
sampleBufferCallbackQueue - A dispatch queue on which all sample buffer delegate methods will be called.
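A sketch of wiring a delegate. The MOE method name for the captureOutput:didOutputSampleBuffer:fromConnection: selector and the CMSampleBufferRef parameter type are inferred from MOE's naming conventions and should be checked against the generated bindings; serialQueue stands for a serial dispatch queue obtained from the Dispatch API:

```java
// Hypothetical delegate implementation; the exact MOE mapping of the
// captureOutput:didOutputSampleBuffer:fromConnection: selector is assumed.
class FrameHandler implements AVCaptureVideoDataOutputSampleBufferDelegate {
    @Override
    public void captureOutputDidOutputSampleBufferFromConnection(
            AVCaptureOutput captureOutput,
            CMSampleBufferRef sampleBuffer,
            AVCaptureConnection connection) {
        // Keep this fast: if this queue blocks, late frames are dropped
        // according to alwaysDiscardsLateVideoFrames.
    }
}
// serialQueue must be a SERIAL queue so frames are delivered in order;
// it may only be null when the delegate itself is set to null.
output.setSampleBufferDelegateQueue(new FrameHandler(), serialQueue);
```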
-
setVideoSettings
public void setVideoSettings(NSDictionary<java.lang.String,?> value)
[@property] videoSettings Specifies the settings used to decode or re-encode video before it is output by the receiver. See AVVideoSettings.h for more information on how to construct a video settings dictionary. To receive samples in their device native format, set this property to an empty dictionary (i.e. [NSDictionary dictionary]). To receive samples in a default uncompressed format, set this property to nil. Note that after this property is set to nil, subsequent querying of this property will yield a non-nil dictionary reflecting the settings used by the AVCaptureSession's current sessionPreset. On iOS, the only supported key is kCVPixelBufferPixelFormatTypeKey. Supported pixel formats are kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange and kCVPixelFormatType_32BGRA.
-
videoSettings
public NSDictionary<java.lang.String,?> videoSettings()
[@property] videoSettings Specifies the settings used to decode or re-encode video before it is output by the receiver. See AVVideoSettings.h for more information on how to construct a video settings dictionary. To receive samples in their device native format, set this property to an empty dictionary (i.e. [NSDictionary dictionary]). To receive samples in a default uncompressed format, set this property to nil. Note that after this property is set to nil, subsequent querying of this property will yield a non-nil dictionary reflecting the settings used by the AVCaptureSession's current sessionPreset. On iOS, the only supported key is kCVPixelBufferPixelFormatTypeKey. Supported pixel formats are kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange and kCVPixelFormatType_32BGRA.
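The two special values described above can be sketched directly with the documented accessors (the NSDictionary.dictionary() factory for an empty dictionary is assumed to be exposed by the bindings):

```java
// Empty dictionary: deliver samples in the device-native format.
output.setVideoSettings(NSDictionary.dictionary());
// null (maps to Objective-C nil): deliver samples in a default
// uncompressed format.
output.setVideoSettings(null);
// After setting null, the getter returns a non-nil dictionary
// reflecting the session's current sessionPreset.
NSDictionary<java.lang.String, ?> effective = output.videoSettings();
```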
-
availableVideoCodecTypesForAssetWriterWithOutputFileType
public NSArray<java.lang.String> availableVideoCodecTypesForAssetWriterWithOutputFileType(java.lang.String outputFileType)
availableVideoCodecTypesForAssetWriterWithOutputFileType: Specifies the available video codecs for use with AVAssetWriter and a given file type. This method allows you to query the available video codecs that may be used when specifying an AVVideoCodecKey in -recommendedVideoSettingsForVideoCodecType:assetWriterOutputFileType:. When specifying an outputFileType of AVFileTypeQuickTimeMovie, video codecs are ordered identically to -[AVCaptureMovieFileOutput availableVideoCodecTypes].
Parameters:
outputFileType - Specifies the UTI of the file type to be written (see AVMediaFormat.h for a list of file format UTIs).
Returns:
An array of video codecs; see AVVideoSettings.h for a full list.
-
recommendedVideoSettingsForVideoCodecTypeAssetWriterOutputFileType
public NSDictionary<?,?> recommendedVideoSettingsForVideoCodecTypeAssetWriterOutputFileType(java.lang.String videoCodecType, java.lang.String outputFileType)
recommendedVideoSettingsForVideoCodecType:assetWriterOutputFileType: Specifies the recommended settings for a particular video codec type, to be used with an AVAssetWriterInput. The value of this property is an NSDictionary containing values for compression settings keys defined in AVVideoSettings.h. This dictionary is suitable for use as the "outputSettings" parameter when creating an AVAssetWriterInput, for example: [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings sourceFormatHint:hint]. The dictionary returned contains all necessary keys and values needed by AVAssetWriter (see AVAssetWriterInput.h, -initWithMediaType:outputSettings: for a more in-depth discussion). For QuickTime movie and ISO file types, the recommended video settings will produce output comparable to that of AVCaptureMovieFileOutput. The videoCodecType string provided must be present in the availableVideoCodecTypesForAssetWriterWithOutputFileType: array, or an NSInvalidArgumentException is thrown. Note that the dictionary of settings is dependent on the current configuration of the receiver's AVCaptureSession and its inputs. The settings dictionary may change if the session's configuration changes. As such, you should configure your session first, then query the recommended video settings. As of iOS 8.3, movies produced with these settings successfully import into the iOS camera roll and sync to and from like devices via iTunes.
Parameters:
videoCodecType - Specifies the desired AVVideoCodecKey to be used for compression (see AVVideoSettings.h).
outputFileType - Specifies the UTI of the file type to be written (see AVMediaFormat.h for a list of file format UTIs).
Returns:
A fully populated dictionary of keys and values that are compatible with AVAssetWriter.
-
automaticallyConfiguresOutputBufferDimensions
public boolean automaticallyConfiguresOutputBufferDimensions()
[@property] automaticallyConfiguresOutputBufferDimensions Indicates whether the receiver automatically configures the size of output buffers. Default value is YES. In most configurations, AVCaptureVideoDataOutput delivers full-resolution buffers, that is, buffers with the same dimensions as the source AVCaptureDevice's activeFormat's videoDimensions. When this property is set to YES, the receiver is free to configure the dimensions of the buffers delivered to -captureOutput:didOutputSampleBuffer:fromConnection:, such that they are a smaller preview size (roughly the size of the screen). For instance, when the AVCaptureSession's sessionPreset is set to AVCaptureSessionPresetPhoto, it is assumed that video data output buffers are being delivered as a preview proxy. Likewise, if an AVCapturePhotoOutput is present in the session with livePhotoCaptureEnabled, it is assumed that video data output is being used for photo preview, and thus preview-sized buffers are a better choice than full-res buffers. You can query deliversPreviewSizedOutputBuffers to find out whether automatic configuration of output buffer dimensions is currently downscaling buffers to a preview size. You can also query the videoSettings property to find out the exact width and height being delivered. If you wish to manually set deliversPreviewSizedOutputBuffers, you must first set automaticallyConfiguresOutputBufferDimensions to NO.
-
deliversPreviewSizedOutputBuffers
public boolean deliversPreviewSizedOutputBuffers()
[@property] deliversPreviewSizedOutputBuffers Indicates whether the receiver is currently configured to deliver preview sized buffers. If you wish to manually set deliversPreviewSizedOutputBuffers, you must first set automaticallyConfiguresOutputBufferDimensions to NO. When deliversPreviewSizedOutputBuffers is set to YES, auto focus, exposure, and white balance changes are quicker. AVCaptureVideoDataOutput assumes that the buffers are being used for on-screen preview rather than recording.
-
setAutomaticallyConfiguresOutputBufferDimensions
public void setAutomaticallyConfiguresOutputBufferDimensions(boolean value)
[@property] automaticallyConfiguresOutputBufferDimensions Indicates whether the receiver automatically configures the size of output buffers. Default value is YES. In most configurations, AVCaptureVideoDataOutput delivers full-resolution buffers, that is, buffers with the same dimensions as the source AVCaptureDevice's activeFormat's videoDimensions. When this property is set to YES, the receiver is free to configure the dimensions of the buffers delivered to -captureOutput:didOutputSampleBuffer:fromConnection:, such that they are a smaller preview size (roughly the size of the screen). For instance, when the AVCaptureSession's sessionPreset is set to AVCaptureSessionPresetPhoto, it is assumed that video data output buffers are being delivered as a preview proxy. Likewise, if an AVCapturePhotoOutput is present in the session with livePhotoCaptureEnabled, it is assumed that video data output is being used for photo preview, and thus preview-sized buffers are a better choice than full-res buffers. You can query deliversPreviewSizedOutputBuffers to find out whether automatic configuration of output buffer dimensions is currently downscaling buffers to a preview size. You can also query the videoSettings property to find out the exact width and height being delivered. If you wish to manually set deliversPreviewSizedOutputBuffers, you must first set automaticallyConfiguresOutputBufferDimensions to NO.
-
setDeliversPreviewSizedOutputBuffers
public void setDeliversPreviewSizedOutputBuffers(boolean value)
[@property] deliversPreviewSizedOutputBuffers Indicates whether the receiver is currently configured to deliver preview sized buffers. If you wish to manually set deliversPreviewSizedOutputBuffers, you must first set automaticallyConfiguresOutputBufferDimensions to NO. When deliversPreviewSizedOutputBuffers is set to YES, auto focus, exposure, and white balance changes are quicker. AVCaptureVideoDataOutput assumes that the buffers are being used for on-screen preview rather than recording.
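Per the constraint above, manual control requires disabling automatic configuration first; both calls are documented on this page:

```java
// Manual control: automatic buffer-dimension configuration must be
// disabled before deliversPreviewSizedOutputBuffers can be set.
output.setAutomaticallyConfiguresOutputBufferDimensions(false);
output.setDeliversPreviewSizedOutputBuffers(true);
// Query videoSettings() afterwards to see the exact delivered width/height.
```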
-
-