Class AVSampleBufferAudioRenderer

  • All Implemented Interfaces:
    AVQueuedSampleBufferRendering, NSObject

    public class AVSampleBufferAudioRenderer
    extends NSObject
    implements AVQueuedSampleBufferRendering
    AVSampleBufferAudioRenderer can decompress and play compressed or uncompressed audio. An instance of AVSampleBufferAudioRenderer must be added to an AVSampleBufferRenderSynchronizer before the first sample buffer is enqueued.
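    The setup requirement above can be sketched in MOE-style Java. This is a sketch only: the binding package names (apple.avfoundation.*), the alloc().init() construction pattern, and the addRenderer method name follow common MOE conventions and should be checked against the generated classes.

```java
import apple.avfoundation.AVSampleBufferAudioRenderer;
import apple.avfoundation.AVSampleBufferRenderSynchronizer;

public class AudioRendererSetup {
    // Creates a renderer and attaches it to a synchronizer. The attachment
    // must happen before the first call to enqueueSampleBuffer.
    public static AVSampleBufferAudioRenderer makeRenderer(
            AVSampleBufferRenderSynchronizer synchronizer) {
        AVSampleBufferAudioRenderer renderer =
                AVSampleBufferAudioRenderer.alloc().init();
        synchronizer.addRenderer(renderer); // assumed MOE name for -addRenderer:
        return renderer;
    }
}
```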
    • Constructor Detail

      • AVSampleBufferAudioRenderer

        protected AVSampleBufferAudioRenderer​(org.moe.natj.general.Pointer peer)
    • Method Detail

      • accessInstanceVariablesDirectly

        public static boolean accessInstanceVariablesDirectly()
      • allocWithZone

        public static java.lang.Object allocWithZone​(org.moe.natj.general.ptr.VoidPtr zone)
      • audioTimePitchAlgorithm

        public java.lang.String audioTimePitchAlgorithm()
        [@property] audioTimePitchAlgorithm Indicates the processing algorithm used to manage audio pitch at varying rates. Constants for various time pitch algorithms, e.g. AVAudioTimePitchSpectral, are defined in AVAudioProcessingSettings.h. The default value on iOS is AVAudioTimePitchAlgorithmLowQualityZeroLatency and on macOS is AVAudioTimePitchAlgorithmTimeDomain. If the timebase's rate is not supported by the audioTimePitchAlgorithm, audio will be muted. Modifying this property while the timebase's rate is not 0.0 may cause the rate to briefly change to 0.0.
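        A minimal sketch of selecting a pitch algorithm. The constant accessor below assumes MOE exposes the AVAudioProcessingSettings globals as static methods on a generated AVFoundation constants class; verify the exact location and name against the bindings.

```java
import apple.avfoundation.AVSampleBufferAudioRenderer;
import apple.avfoundation.c.AVFoundation; // assumed location of the global constants

public class PitchSetup {
    public static void useTimeDomain(AVSampleBufferAudioRenderer renderer) {
        // Prefer changing the algorithm while the timebase's rate is 0.0,
        // since modifying it at a non-zero rate may briefly drop the rate to 0.0.
        renderer.setAudioTimePitchAlgorithm(
                AVFoundation.AVAudioTimePitchAlgorithmTimeDomain());
    }
}
```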
      • automaticallyNotifiesObserversForKey

        public static boolean automaticallyNotifiesObserversForKey​(java.lang.String key)
      • cancelPreviousPerformRequestsWithTarget

        public static void cancelPreviousPerformRequestsWithTarget​(java.lang.Object aTarget)
      • cancelPreviousPerformRequestsWithTargetSelectorObject

        public static void cancelPreviousPerformRequestsWithTargetSelectorObject​(java.lang.Object aTarget,
                                                                                 org.moe.natj.objc.SEL aSelector,
                                                                                 java.lang.Object anArgument)
      • classFallbacksForKeyedArchiver

        public static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
      • classForKeyedUnarchiver

        public static org.moe.natj.objc.Class classForKeyedUnarchiver()
      • debugDescription_static

        public static java.lang.String debugDescription_static()
      • description_static

        public static java.lang.String description_static()
      • enqueueSampleBuffer

        public void enqueueSampleBuffer​(CMSampleBufferRef sampleBuffer)
        Description copied from interface: AVQueuedSampleBufferRendering
        enqueueSampleBuffer: Sends a sample buffer in order to render its contents. Video-specific notes: If sampleBuffer has the kCMSampleAttachmentKey_DoNotDisplay attachment set to kCFBooleanTrue, the frame will be decoded but not displayed. Otherwise, if sampleBuffer has the kCMSampleAttachmentKey_DisplayImmediately attachment set to kCFBooleanTrue, the decoded image will be displayed as soon as possible, replacing all previously enqueued images regardless of their timestamps. Otherwise, the decoded image will be displayed at sampleBuffer's output presentation timestamp, as interpreted by the timebase. To schedule the removal of previous images at a specific timestamp, enqueue a marker sample buffer containing no samples, with the kCMSampleBufferAttachmentKey_EmptyMedia attachment set to kCFBooleanTrue. IMPORTANT NOTE: attachments with the kCMSampleAttachmentKey_ prefix must be set via CMSampleBufferGetSampleAttachmentsArray and CFDictionarySetValue. Attachments with the kCMSampleBufferAttachmentKey_ prefix must be set via CMSetAttachment.
        Specified by:
        enqueueSampleBuffer in interface AVQueuedSampleBufferRendering
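        A hedged sketch of a single guarded enqueue. CMSampleBufferRef's package (apple.coremedia.opaque) is assumed from the usual MOE layout.

```java
import apple.avfoundation.AVSampleBufferAudioRenderer;
import apple.coremedia.opaque.CMSampleBufferRef;

public class Enqueue {
    // Enqueues one buffer only when the renderer reports readiness. Enqueueing
    // while not ready is safe, but enqueueing without bound should be avoided.
    public static boolean enqueueIfReady(AVSampleBufferAudioRenderer renderer,
                                         CMSampleBufferRef buffer) {
        if (!renderer.isReadyForMoreMediaData()) {
            return false;
        }
        renderer.enqueueSampleBuffer(buffer);
        return true;
    }
}
```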
      • error

        public NSError error()
        [@property] error If the renderer's status is AVQueuedSampleBufferRenderingStatusFailed, this describes the error that caused the failure. The value of this property is an NSError that describes what caused the renderer to no longer be able to render sample buffers. The value of this property is nil unless the value of status is AVQueuedSampleBufferRenderingStatusFailed.
      • flush

        public void flush()
        Description copied from interface: AVQueuedSampleBufferRendering
        flush Instructs the receiver to discard pending enqueued sample buffers. Additional sample buffers can be appended after -flush. Video-specific notes: It is not possible to determine which sample buffers have been decoded, so the next frame passed to enqueueSampleBuffer: should be an IDR frame (also known as a key frame or sync sample).
        Specified by:
        flush in interface AVQueuedSampleBufferRendering
      • flushFromSourceTimeCompletionHandler

        public void flushFromSourceTimeCompletionHandler​(CMTime time,
                                                         AVSampleBufferAudioRenderer.Block_flushFromSourceTimeCompletionHandler completionHandler)
        flushFromSourceTime:completionHandler: Flushes enqueued sample buffers with presentation time stamps later than or equal to the specified time. This method can be used to replace media data scheduled to be rendered in the future, without interrupting playback. One example of this is when the data that has already been enqueued is from a sequence of two songs and the second song is swapped for a new song. In this case, this method would be called with the time stamp of the first sample buffer from the second song. After the completion handler is executed with a YES parameter, media data may again be enqueued with timestamps at the specified time. If NO is provided to the completion handler, the flush did not succeed and the set of enqueued sample buffers remains unchanged. A flush can fail because the source time was too close to (or earlier than) the current time or because the current configuration of the receiver does not support flushing at a particular time. In these cases, the caller can choose to flush all enqueued media data by invoking the -flush method.
        Parameters:
        completionHandler - A block that is invoked, possibly asynchronously, after the flush operation completes or fails.
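        The song-swap scenario above can be sketched as follows. This assumes CMTime is the MOE struct in apple.coremedia.struct and that the generated Block_flushFromSourceTimeCompletionHandler interface is usable as a lambda; check both against the bindings.

```java
import apple.avfoundation.AVSampleBufferAudioRenderer;
import apple.coremedia.struct.CMTime;

public class SwapUpcomingSong {
    // songStart is the presentation time stamp of the first sample buffer of
    // the song being swapped in; buffers at or after it are discarded.
    public static void replaceUpcoming(AVSampleBufferAudioRenderer renderer,
                                       CMTime songStart) {
        renderer.flushFromSourceTimeCompletionHandler(songStart, didSucceed -> {
            if (didSucceed) {
                // Enqueue the replacement buffers from songStart onward here.
            } else {
                // The source time was too close to (or earlier than) "now",
                // or flushing at that time is unsupported; fall back to
                // discarding everything that was enqueued.
                renderer.flush();
            }
        });
    }
}
```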
      • hash_static

        public static long hash_static()
      • instanceMethodSignatureForSelector

        public static NSMethodSignature instanceMethodSignatureForSelector​(org.moe.natj.objc.SEL aSelector)
      • instancesRespondToSelector

        public static boolean instancesRespondToSelector​(org.moe.natj.objc.SEL aSelector)
      • isMuted

        public boolean isMuted()
        [@property] muted Indicates whether or not audio output of the AVSampleBufferAudioRenderer is muted. Setting this property only affects audio muting for the renderer instance and not for the device.
      • isReadyForMoreMediaData

        public boolean isReadyForMoreMediaData()
        Description copied from interface: AVQueuedSampleBufferRendering
        [@property] readyForMoreMediaData Indicates the readiness of the receiver to accept more sample buffers. An object conforming to AVQueuedSampleBufferRendering keeps track of the occupancy levels of its internal queues for the benefit of clients that enqueue sample buffers from non-real-time sources -- i.e., clients that can supply sample buffers faster than they are consumed, and so need to decide when to hold back. Clients enqueueing sample buffers from non-real-time sources may hold off from generating or obtaining more sample buffers to enqueue when the value of readyForMoreMediaData is NO. It is safe to call enqueueSampleBuffer: when readyForMoreMediaData is NO, but it is a bad idea to enqueue sample buffers without bound. To help with control of the non-real-time supply of sample buffers, such clients can use -requestMediaDataWhenReadyOnQueue:usingBlock: in order to specify a block that the receiver should invoke whenever it's ready for sample buffers to be appended. The value of readyForMoreMediaData will often change from NO to YES asynchronously, as previously supplied sample buffers are decoded and rendered. This property is not key value observable.
        Specified by:
        isReadyForMoreMediaData in interface AVQueuedSampleBufferRendering
      • isSubclassOfClass

        public static boolean isSubclassOfClass​(org.moe.natj.objc.Class aClass)
      • keyPathsForValuesAffectingValueForKey

        public static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey​(java.lang.String key)
      • new_objc

        public static java.lang.Object new_objc()
      • requestMediaDataWhenReadyOnQueueUsingBlock

        public void requestMediaDataWhenReadyOnQueueUsingBlock​(NSObject queue,
                                                               AVQueuedSampleBufferRendering.Block_requestMediaDataWhenReadyOnQueueUsingBlock block)
        Description copied from interface: AVQueuedSampleBufferRendering
        requestMediaDataWhenReadyOnQueue:usingBlock: Instructs the target to invoke a client-supplied block repeatedly, at its convenience, in order to gather sample buffers for playback. The block should enqueue sample buffers to the receiver either until the receiver's readyForMoreMediaData property becomes NO or until there is no more data to supply. When the receiver has decoded enough of the media data it has received that it becomes ready for more media data again, it will invoke the block again in order to obtain more. If this method is called multiple times, only the last call is effective. Call stopRequestingMediaData to cancel this request. Each call to requestMediaDataWhenReadyOnQueue:usingBlock: should be paired with a corresponding call to stopRequestingMediaData. Releasing the AVQueuedSampleBufferRendering object without a call to stopRequestingMediaData will result in undefined behavior.
        Specified by:
        requestMediaDataWhenReadyOnQueueUsingBlock in interface AVQueuedSampleBufferRendering
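        A sketch of the non-real-time feeding loop this method enables. Here `queue` is a serial dispatch queue wrapped as an NSObject, nextBuffer() is a hypothetical source of sample buffers, and the generated block interface is assumed to be lambda-compatible; all of these should be checked against the bindings.

```java
import apple.NSObject;
import apple.avfoundation.AVSampleBufferAudioRenderer;
import apple.coremedia.opaque.CMSampleBufferRef;

public class FeedingLoop {
    public static void startFeeding(AVSampleBufferAudioRenderer renderer,
                                    NSObject queue) {
        renderer.requestMediaDataWhenReadyOnQueueUsingBlock(queue, () -> {
            // Enqueue until the renderer is full or the source runs dry.
            while (renderer.isReadyForMoreMediaData()) {
                CMSampleBufferRef buffer = nextBuffer(); // hypothetical source
                if (buffer == null) {
                    // Always pair the request with stopRequestingMediaData.
                    renderer.stopRequestingMediaData();
                    return;
                }
                renderer.enqueueSampleBuffer(buffer);
            }
        });
    }

    private static CMSampleBufferRef nextBuffer() {
        return null; // placeholder: supply real sample buffers here
    }
}
```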
      • resolveClassMethod

        public static boolean resolveClassMethod​(org.moe.natj.objc.SEL sel)
      • resolveInstanceMethod

        public static boolean resolveInstanceMethod​(org.moe.natj.objc.SEL sel)
      • setAudioTimePitchAlgorithm

        public void setAudioTimePitchAlgorithm​(java.lang.String value)
        [@property] audioTimePitchAlgorithm Indicates the processing algorithm used to manage audio pitch at varying rates. Constants for various time pitch algorithms, e.g. AVAudioTimePitchSpectral, are defined in AVAudioProcessingSettings.h. The default value on iOS is AVAudioTimePitchAlgorithmLowQualityZeroLatency and on macOS is AVAudioTimePitchAlgorithmTimeDomain. If the timebase's rate is not supported by the audioTimePitchAlgorithm, audio will be muted. Modifying this property while the timebase's rate is not 0.0 may cause the rate to briefly change to 0.0.
      • setMuted

        public void setMuted​(boolean value)
        [@property] muted Indicates whether or not audio output of the AVSampleBufferAudioRenderer is muted. Setting this property only affects audio muting for the renderer instance and not for the device.
      • setVersion_static

        public static void setVersion_static​(long aVersion)
      • setVolume

        public void setVolume​(float value)
        [@property] volume Indicates the current audio volume of the AVSampleBufferAudioRenderer. A value of 0.0 means "silence all audio", while 1.0 means "play at the full volume of the audio media". This property should be used for frequent volume changes, for example via a volume knob or fader. This property is most useful on iOS to control the volume of the AVSampleBufferAudioRenderer relative to other audio output, not for setting absolute volume.
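        Since this property is intended for frequent changes, e.g. driving a fader, a short fade can be computed as a plain-Java ramp of values fed to setVolume. The ramp helper below is independent of AVFoundation; only the commented-out call at the end touches the renderer.

```java
public class VolumeFade {
    /** Returns `steps` volume values ramping linearly from `from` to `to`. */
    public static float[] linearFade(float from, float to, int steps) {
        float[] values = new float[steps];
        for (int i = 0; i < steps; i++) {
            // t sweeps 0.0 -> 1.0 across the ramp (degenerate 1-step case: 1.0).
            float t = (steps == 1) ? 1f : (float) i / (steps - 1);
            values[i] = from + (to - from) * t;
        }
        return values;
    }
    // For each value v, spaced over the fade duration:
    //     renderer.setVolume(v);
}
```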
      • status

        public long status()
        [@property] status Indicates the status of the audio renderer. A renderer begins with status AVQueuedSampleBufferRenderingStatusUnknown. As sample buffers are enqueued for rendering using -enqueueSampleBuffer:, the renderer will transition to either AVQueuedSampleBufferRenderingStatusRendering or AVQueuedSampleBufferRenderingStatusFailed. If the status is AVQueuedSampleBufferRenderingStatusFailed, check the value of the renderer's error property for information on the error encountered. This is a terminal status from which recovery is not always possible. This property is key value observable.
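        A sketch of checking for the failed status. The constant below assumes MOE exposes the status values on a generated AVQueuedSampleBufferRenderingStatus class; check the generated class and field names.

```java
import apple.avfoundation.AVSampleBufferAudioRenderer;
import apple.foundation.NSError;

public class StatusCheck {
    public static void logFailure(AVSampleBufferAudioRenderer renderer) {
        if (renderer.status() == AVQueuedSampleBufferRenderingStatus.Failed) {
            // error() is non-nil only while the status is Failed.
            NSError error = renderer.error();
            System.err.println("Renderer failed: " + error.localizedDescription());
        }
    }
}
```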
      • superclass_static

        public static org.moe.natj.objc.Class superclass_static()
      • timebase

        public CMTimebaseRef timebase()
        Description copied from interface: AVQueuedSampleBufferRendering
        [@property] timebase The renderer's timebase, which governs how time stamps are interpreted. The timebase is read-only; use the AVSampleBufferRenderSynchronizer to set the rate or time.
        Specified by:
        timebase in interface AVQueuedSampleBufferRendering
      • version_static

        public static long version_static()
      • volume

        public float volume()
        [@property] volume Indicates the current audio volume of the AVSampleBufferAudioRenderer. A value of 0.0 means "silence all audio", while 1.0 means "play at the full volume of the audio media". This property should be used for frequent volume changes, for example via a volume knob or fader. This property is most useful on iOS to control the volume of the AVSampleBufferAudioRenderer relative to other audio output, not for setting absolute volume.