Class AVSampleBufferDisplayLayer

    • Constructor Detail

      • AVSampleBufferDisplayLayer

        protected AVSampleBufferDisplayLayer​(org.moe.natj.general.Pointer peer)
    • Method Detail

      • accessInstanceVariablesDirectly

        public static boolean accessInstanceVariablesDirectly()
      • allocWithZone

        public static java.lang.Object allocWithZone​(org.moe.natj.general.ptr.VoidPtr zone)
      • automaticallyNotifiesObserversForKey

        public static boolean automaticallyNotifiesObserversForKey​(java.lang.String key)
      • cancelPreviousPerformRequestsWithTarget

        public static void cancelPreviousPerformRequestsWithTarget​(java.lang.Object aTarget)
      • cancelPreviousPerformRequestsWithTargetSelectorObject

        public static void cancelPreviousPerformRequestsWithTargetSelectorObject​(java.lang.Object aTarget,
                                                                                 org.moe.natj.objc.SEL aSelector,
                                                                                 java.lang.Object anArgument)
      • classFallbacksForKeyedArchiver

        public static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
      • classForKeyedUnarchiver

        public static org.moe.natj.objc.Class classForKeyedUnarchiver()
      • debugDescription_static

        public static java.lang.String debugDescription_static()
      • defaultActionForKey

        public static CAAction defaultActionForKey​(java.lang.String event)
      • defaultValueForKey

        public static java.lang.Object defaultValueForKey​(java.lang.String key)
      • description_static

        public static java.lang.String description_static()
      • hash_static

        public static long hash_static()
      • instanceMethodSignatureForSelector

        public static NSMethodSignature instanceMethodSignatureForSelector​(org.moe.natj.objc.SEL aSelector)
      • instancesRespondToSelector

        public static boolean instancesRespondToSelector​(org.moe.natj.objc.SEL aSelector)
      • isSubclassOfClass

        public static boolean isSubclassOfClass​(org.moe.natj.objc.Class aClass)
      • keyPathsForValuesAffectingValueForKey

        public static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey​(java.lang.String key)
      • needsDisplayForKey

        public static boolean needsDisplayForKey​(java.lang.String key)
      • new_objc

        public static java.lang.Object new_objc()
      • resolveClassMethod

        public static boolean resolveClassMethod​(org.moe.natj.objc.SEL sel)
      • resolveInstanceMethod

        public static boolean resolveInstanceMethod​(org.moe.natj.objc.SEL sel)
      • setVersion_static

        public static void setVersion_static​(long aVersion)
      • superclass_static

        public static org.moe.natj.objc.Class superclass_static()
      • version_static

        public static long version_static()
      • controlTimebase

        public CMTimebaseRef controlTimebase()
[@property] controlTimebase The layer's control timebase, which governs how time stamps are interpreted. By default, this property is NULL, in which case time stamps are interpreted according to the host time clock (mach_absolute_time with the appropriate timescale conversion; this is the same as Core Animation's CACurrentMediaTime). With no control timebase, once frames are enqueued it is not possible to adjust exactly when they are displayed. If a non-NULL control timebase is set, it is used to interpret time stamps. You can control the timing of frame display by setting the rate and time of the control timebase. If you are synchronizing video to audio, you can use a timebase whose master clock is a CMAudioDeviceClock for the appropriate audio device to prevent drift. Note that prior to OS X 10.10 and iOS 8.0, the control timebase could not be changed after enqueueSampleBuffer: was called; as of OS X 10.10 and iOS 8.0, the control timebase may be changed at any time.
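The relationship described above (a timebase's rate and anchor time determine which media time corresponds to a given host time, and therefore when each enqueued frame appears) can be sketched in plain Java. This is an illustrative model only; ControlTimebaseModel and its fields are not MOE or AVFoundation types.

```java
// Hedged sketch: models how a control timebase interprets presentation
// time stamps, per the property description above. The class name and
// fields are illustrative, not part of the MOE/AVFoundation API.
public class ControlTimebaseModel {
    private final double anchorHostTime;   // host clock time at the anchor point (seconds)
    private final double anchorMediaTime;  // timebase time at the anchor point (seconds)
    private final double rate;             // playback rate (0 = paused, 1 = normal speed)

    public ControlTimebaseModel(double anchorHostTime, double anchorMediaTime, double rate) {
        this.anchorHostTime = anchorHostTime;
        this.anchorMediaTime = anchorMediaTime;
        this.rate = rate;
    }

    /** Media time corresponding to a given host time; conceptually, the
     *  layer displays a frame when this value reaches the frame's
     *  presentation time stamp. */
    public double mediaTime(double hostTime) {
        return anchorMediaTime + rate * (hostTime - anchorHostTime);
    }

    public static void main(String[] args) {
        // Timebase anchored at host time 100 s, media time 0 s, half-speed playback.
        ControlTimebaseModel tb = new ControlTimebaseModel(100.0, 0.0, 0.5);
        // Four host seconds later, only two media seconds have elapsed, so a
        // frame stamped at t = 2.0 would be displayed at host time 104.
        System.out.println(tb.mediaTime(104.0)); // prints 2.0
    }
}
```

This is why setting the timebase's rate to 0 freezes display (media time stops advancing) and why changing the rate or time reschedules all already-enqueued frames.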
      • enqueueSampleBuffer

        public void enqueueSampleBuffer​(CMSampleBufferRef sampleBuffer)
        Description copied from interface: AVQueuedSampleBufferRendering
        enqueueSampleBuffer: Sends a sample buffer in order to render its contents. Video-specific notes: If sampleBuffer has the kCMSampleAttachmentKey_DoNotDisplay attachment set to kCFBooleanTrue, the frame will be decoded but not displayed. Otherwise, if sampleBuffer has the kCMSampleAttachmentKey_DisplayImmediately attachment set to kCFBooleanTrue, the decoded image will be displayed as soon as possible, replacing all previously enqueued images regardless of their timestamps. Otherwise, the decoded image will be displayed at sampleBuffer's output presentation timestamp, as interpreted by the timebase. To schedule the removal of previous images at a specific timestamp, enqueue a marker sample buffer containing no samples, with the kCMSampleBufferAttachmentKey_EmptyMedia attachment set to kCFBooleanTrue. IMPORTANT NOTE: attachments with the kCMSampleAttachmentKey_ prefix must be set via CMSampleBufferGetSampleAttachmentsArray and CFDictionarySetValue. Attachments with the kCMSampleBufferAttachmentKey_ prefix must be set via CMSetAttachment.
        Specified by:
        enqueueSampleBuffer in interface AVQueuedSampleBufferRendering
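The attachment precedence described above (DoNotDisplay wins over DisplayImmediately, which wins over timestamp-based scheduling) can be expressed as a small decision table. The enum and method below are a plain-Java model of that precedence, not MOE API calls.

```java
// Hedged sketch of the video display decision described in the
// enqueueSampleBuffer: documentation above. Illustrative types only.
public class SampleDisplayPolicy {
    public enum Action { DECODE_ONLY, DISPLAY_IMMEDIATELY, DISPLAY_AT_TIMESTAMP }

    /** Mirrors the documented precedence: kCMSampleAttachmentKey_DoNotDisplay
     *  wins over kCMSampleAttachmentKey_DisplayImmediately, which wins over
     *  scheduling at the buffer's output presentation timestamp. */
    public static Action actionFor(boolean doNotDisplay, boolean displayImmediately) {
        if (doNotDisplay) return Action.DECODE_ONLY;            // decoded, never shown
        if (displayImmediately) return Action.DISPLAY_IMMEDIATELY; // replaces queued images
        return Action.DISPLAY_AT_TIMESTAMP;                     // interpreted by the timebase
    }
}
```

In the real API the flags live in the buffer's attachments array (retrieved with CMSampleBufferGetSampleAttachmentsArray), which is why the important note above distinguishes the two attachment-setting mechanisms.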
      • error

        public NSError error()
        [@property] error If the display layer's status is AVQueuedSampleBufferRenderingStatusFailed, this describes the error that caused the failure. The value of this property is an NSError that describes what caused the display layer to no longer be able to enqueue sample buffers. If the status is not AVQueuedSampleBufferRenderingStatusFailed, the value of this property is nil.
      • flush

        public void flush()
        Description copied from interface: AVQueuedSampleBufferRendering
        flush Instructs the receiver to discard pending enqueued sample buffers. Additional sample buffers can be appended after -flush. Video-specific notes: It is not possible to determine which sample buffers have been decoded, so the next frame passed to enqueueSampleBuffer: should be an IDR frame (also known as a key frame or sync sample).
        Specified by:
        flush in interface AVQueuedSampleBufferRendering
      • flushAndRemoveImage

        public void flushAndRemoveImage()
        flushAndRemoveImage Instructs the layer to discard pending enqueued sample buffers and remove any currently displayed image. It is not possible to determine which sample buffers have been decoded, so the next frame passed to enqueueSampleBuffer: should be an IDR frame (also known as a key frame or sync sample).
      • initWithLayer

        public AVSampleBufferDisplayLayer initWithLayer​(java.lang.Object layer)
        Description copied from class: CALayer
        This initializer is used by CoreAnimation to create shadow copies of layers, e.g. for use as presentation layers. Subclasses can override this method to copy their instance variables into the presentation layer (subclasses should call the superclass afterwards). Calling this method in any other situation will result in undefined behavior.
        Overrides:
        initWithLayer in class CALayer
      • isReadyForMoreMediaData

        public boolean isReadyForMoreMediaData()
        Description copied from interface: AVQueuedSampleBufferRendering
[@property] readyForMoreMediaData Indicates the readiness of the receiver to accept more sample buffers. An object conforming to AVQueuedSampleBufferRendering keeps track of the occupancy levels of its internal queues for the benefit of clients that enqueue sample buffers from non-real-time sources -- i.e., clients that can supply sample buffers faster than they are consumed, and so need to decide when to hold back. Clients enqueueing sample buffers from non-real-time sources may hold off from generating or obtaining more sample buffers to enqueue when the value of readyForMoreMediaData is NO. It is safe to call enqueueSampleBuffer: when readyForMoreMediaData is NO, but it is a bad idea to enqueue sample buffers without bound. To help control the non-real-time supply of sample buffers, such clients can use -requestMediaDataWhenReadyOnQueue:usingBlock: to specify a block that the receiver should invoke whenever it is ready for sample buffers to be appended. The value of readyForMoreMediaData will often change from NO to YES asynchronously, as previously supplied sample buffers are decoded and rendered. This property is not key value observable.
        Specified by:
        isReadyForMoreMediaData in interface AVQueuedSampleBufferRendering
      • requestMediaDataWhenReadyOnQueueUsingBlock

        public void requestMediaDataWhenReadyOnQueueUsingBlock​(NSObject queue,
                                                               AVQueuedSampleBufferRendering.Block_requestMediaDataWhenReadyOnQueueUsingBlock block)
        Description copied from interface: AVQueuedSampleBufferRendering
requestMediaDataWhenReadyOnQueue:usingBlock: Instructs the target to invoke a client-supplied block repeatedly, at its convenience, in order to gather sample buffers for playback. The block should enqueue sample buffers to the receiver either until the receiver's readyForMoreMediaData property becomes NO or until there is no more data to supply. When the receiver has decoded enough of the media data it has received that it becomes ready for more media data again, it will invoke the block again in order to obtain more. If this method is called multiple times, only the last call is effective. Call stopRequestingMediaData to cancel this request. Each call to requestMediaDataWhenReadyOnQueue:usingBlock: should be paired with a corresponding call to stopRequestingMediaData. Releasing the AVQueuedSampleBufferRendering object without a call to stopRequestingMediaData will result in undefined behavior.
        Specified by:
        requestMediaDataWhenReadyOnQueueUsingBlock in interface AVQueuedSampleBufferRendering
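The feeding pattern described in readyForMoreMediaData and requestMediaDataWhenReadyOnQueueUsingBlock above (enqueue while the receiver is ready, stop when it is not or the source is exhausted, resume when the block fires again) can be sketched with a plain-Java mock. The Renderer interface below stands in for the layer; it is illustrative, not a MOE type.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hedged sketch of the non-real-time supply loop described above. In the
// real API, readiness is delivered by the block registered via
// requestMediaDataWhenReadyOnQueueUsingBlock; here a mock models it.
public class FeedLoop {
    interface Renderer {
        boolean isReadyForMoreMediaData();
        void enqueue(int sample); // stands in for enqueueSampleBuffer(CMSampleBufferRef)
    }

    /** Drains samples into the renderer while it reports readiness;
     *  returns how many samples were enqueued this round. */
    public static int feed(Renderer renderer, Deque<Integer> source) {
        int enqueued = 0;
        while (renderer.isReadyForMoreMediaData() && !source.isEmpty()) {
            renderer.enqueue(source.pollFirst());
            enqueued++;
        }
        return enqueued; // the caller resumes when the ready block fires again
    }

    public static void main(String[] args) {
        Deque<Integer> source = new ArrayDeque<>();
        for (int i = 0; i < 10; i++) source.addLast(i);
        // Mock renderer whose internal queue holds at most 4 pending samples.
        Renderer mock = new Renderer() {
            int pending = 0;
            public boolean isReadyForMoreMediaData() { return pending < 4; }
            public void enqueue(int s) { pending++; }
        };
        System.out.println(feed(mock, source)); // prints 4; 6 samples remain queued
    }
}
```

The loop stops rather than enqueueing without bound, matching the doc's guidance that enqueueing while readyForMoreMediaData is NO is safe but unwise.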
      • setControlTimebase

        public void setControlTimebase​(CMTimebaseRef value)
[@property] controlTimebase The layer's control timebase, which governs how time stamps are interpreted. By default, this property is NULL, in which case time stamps are interpreted according to the host time clock (mach_absolute_time with the appropriate timescale conversion; this is the same as Core Animation's CACurrentMediaTime). With no control timebase, once frames are enqueued it is not possible to adjust exactly when they are displayed. If a non-NULL control timebase is set, it is used to interpret time stamps. You can control the timing of frame display by setting the rate and time of the control timebase. If you are synchronizing video to audio, you can use a timebase whose master clock is a CMAudioDeviceClock for the appropriate audio device to prevent drift. Note that prior to OS X 10.10 and iOS 8.0, the control timebase could not be changed after enqueueSampleBuffer: was called; as of OS X 10.10 and iOS 8.0, the control timebase may be changed at any time.
      • setVideoGravity

        public void setVideoGravity​(java.lang.String value)
[@property] videoGravity A string defining how the video is displayed within the layer's bounds rect. [@discussion] Options are AVLayerVideoGravityResizeAspect, AVLayerVideoGravityResizeAspectFill, and AVLayerVideoGravityResize. AVLayerVideoGravityResizeAspect is the default. See the AVLayerVideoGravity documentation for a description of these options.
      • status

        public long status()
        [@property] status The ability of the display layer to be used for enqueuing sample buffers. The value of this property is an AVQueuedSampleBufferRenderingStatus that indicates whether the receiver can be used for enqueuing and rendering sample buffers. When the value of this property is AVQueuedSampleBufferRenderingStatusFailed, clients can check the value of the error property to determine the failure. To resume rendering sample buffers using the display layer after a failure, clients must first reset the status to AVQueuedSampleBufferRenderingStatusUnknown. This can be achieved by invoking -flush on the display layer. This property is key value observable.
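The recovery flow described in status and error above (when the status becomes Failed, inspect the error, then flush to reset the status to Unknown before enqueueing again) can be modeled as a small state machine. The enum and class below are illustrative plain-Java stand-ins, not MOE types.

```java
// Hedged sketch of the documented status transitions of the display layer:
// a failure sets the status to Failed and populates the error; -flush
// resets a Failed status to Unknown so enqueueing can resume.
public class StatusRecoveryModel {
    public enum Status { UNKNOWN, RENDERING, FAILED }

    private Status status = Status.UNKNOWN;
    private String error = null;

    /** Models the layer entering AVQueuedSampleBufferRenderingStatusFailed. */
    public void fail(String reason) {
        status = Status.FAILED;
        error = reason;
    }

    /** Per the docs, invoking -flush resets a Failed status to Unknown,
     *  after which sample buffers may be enqueued again. */
    public void flush() {
        if (status == Status.FAILED) {
            status = Status.UNKNOWN;
            error = null;
        }
    }

    public Status status() { return status; }
    public String error() { return error; } // non-null only while status is FAILED
}
```

In the real API, after a flush the next buffer passed to enqueueSampleBuffer: should be an IDR frame, since it is not possible to determine which buffers were decoded before the failure.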
      • videoGravity

        public java.lang.String videoGravity()
[@property] videoGravity A string defining how the video is displayed within the layer's bounds rect. [@discussion] Options are AVLayerVideoGravityResizeAspect, AVLayerVideoGravityResizeAspectFill, and AVLayerVideoGravityResize. AVLayerVideoGravityResizeAspect is the default. See the AVLayerVideoGravity documentation for a description of these options.
      • supportsSecureCoding

        public static boolean supportsSecureCoding()
      • _supportsSecureCoding

        public boolean _supportsSecureCoding()
        Description copied from interface: NSSecureCoding
        This property must return YES on all classes that allow secure coding. Subclasses of classes that adopt NSSecureCoding and override initWithCoder: must also override this method and return YES. The Secure Coding Guide should be consulted when writing methods that decode data.
        Specified by:
        _supportsSecureCoding in interface NSSecureCoding
        Overrides:
        _supportsSecureCoding in class CALayer
      • timebase

        public CMTimebaseRef timebase()
        Description copied from interface: AVQueuedSampleBufferRendering
        [@property] timebase The renderer's timebase, which governs how time stamps are interpreted. The timebase is used to interpret time stamps. The timebase is read-only. Use the AVSampleBufferRenderSynchronizer to set the rate or time.
        Specified by:
        timebase in interface AVQueuedSampleBufferRendering
      • cornerCurveExpansionFactor

        public static double cornerCurveExpansionFactor​(java.lang.String curve)
      • preventsCapture

        public boolean preventsCapture()
        [@property] preventsCapture Indicates that image data should be protected from capture.
      • preventsDisplaySleepDuringVideoPlayback

        public boolean preventsDisplaySleepDuringVideoPlayback()
[@property] preventsDisplaySleepDuringVideoPlayback Indicates whether video playback prevents display and device sleep. The default is YES on iOS and NO on macOS. Setting this property to NO does not force the display to sleep; it simply stops preventing display sleep. Other apps or frameworks within your app may still be preventing display sleep for various reasons. Note: if sample buffers are being enqueued for playback at the user's request, you should ensure that the value of this property is YES. If video is not being displayed as part of the user's primary focus, you should ensure that the value of this property is NO.
      • setPreventsCapture

        public void setPreventsCapture​(boolean value)
        [@property] preventsCapture Indicates that image data should be protected from capture.
      • setPreventsDisplaySleepDuringVideoPlayback

        public void setPreventsDisplaySleepDuringVideoPlayback​(boolean value)
[@property] preventsDisplaySleepDuringVideoPlayback Indicates whether video playback prevents display and device sleep. The default is YES on iOS and NO on macOS. Setting this property to NO does not force the display to sleep; it simply stops preventing display sleep. Other apps or frameworks within your app may still be preventing display sleep for various reasons. Note: if sample buffers are being enqueued for playback at the user's request, you should ensure that the value of this property is YES. If video is not being displayed as part of the user's primary focus, you should ensure that the value of this property is NO.
      • requiresFlushToResumeDecoding

        public boolean requiresFlushToResumeDecoding()
        [@property] requiresFlushToResumeDecoding Indicates that the receiver is in a state where it requires a call to -flush to continue decoding frames. When the application enters a state where use of video decoder resources is not permissible, the value of this property changes to YES along with the display layer's status changing to AVQueuedSampleBufferRenderingStatusFailed. To resume rendering sample buffers using the display layer after this property's value is YES, clients must first reset the display layer's status to AVQueuedSampleBufferRenderingStatusUnknown. This can be achieved by invoking -flush on the display layer. Clients can track changes to this property via AVSampleBufferDisplayLayerRequiresFlushToResumeDecodingDidChangeNotification. This property is not key value observable.