Class AVAudioPlayerNode

  • All Implemented Interfaces:
    AVAudio3DMixing, AVAudioMixing, AVAudioStereoMixing, NSObject

    public class AVAudioPlayerNode
    extends AVAudioNode
    implements AVAudioMixing
    AVAudioPlayerNode

    Plays buffers or segments of audio files. AVAudioPlayerNode supports scheduling the playback of `AVAudioBuffer` instances, or segments of audio files opened via `AVAudioFile`. Buffers and segments may be scheduled at specific points in time, or to play immediately following preceding segments.

    FORMATS
    Normally, you will want to configure the node's output format with the same number of channels as are in the files and buffers to be played. Otherwise, channels will be dropped or added as required. It is usually better to use an `AVAudioMixerNode` to do this. Similarly, when playing file segments, the node will sample rate convert if necessary, but it is often preferable to configure the node's output sample rate to match that of the file(s) and use a mixer to perform the rate conversion. When playing buffers, there is an implicit assumption that the buffers are at the same sample rate as the node's output format.

    TIMELINES
    The usual `AVAudioNode` sample times (as observed by `lastRenderTime`) have an arbitrary zero point. AVAudioPlayerNode superimposes a second "player timeline" on top of this to reflect when the player was started, and the intervals during which it was paused. The methods `nodeTimeForPlayerTime:` and `playerTimeForNodeTime:` convert between the two. This class's `stop` method unschedules all previously scheduled buffers and file segments, and returns the player timeline to sample time 0.

    TIMESTAMPS
    The "schedule" methods all take an `AVAudioTime` "when" parameter, which is interpreted as follows:
    1. nil:
       - if there have been previous commands, the new one is played immediately following the last one.
       - otherwise, if the node is playing, the event is played in the very near future.
       - otherwise, the command is played at sample time 0.
    2. sample time:
       - relative to the node's start time (which begins at 0 when the node is started).
    3. host time:
       - ignored unless the sample time is invalid when the engine is rendering to an audio device.
       - ignored in manual rendering mode.

    ERRORS
    The "schedule" methods can fail if:
    1. a buffer's channel count does not match that of the node's output format.
    2. a file can't be accessed.
    3. an AVAudioTime specifies neither a valid sample time nor a valid host time.
    4. a segment's start frame or frame count is negative.

    BUFFER/FILE COMPLETION HANDLERS
    The buffer or file completion handlers (see the scheduling methods) are a means to schedule more data on the player node, if available. See `AVAudioPlayerNodeCompletionCallbackType` for details on the different buffer/file completion callback types. Note that a player should not be stopped from within a completion handler callback, because doing so can deadlock while trying to unschedule previously scheduled buffers.

    OFFLINE RENDERING
    When a player node is used while the engine is operating in manual rendering mode, the buffer/file completion handlers, `lastRenderTime`, and the latencies (`latency` and `outputPresentationLatency`) can be used to track how much data the player has rendered and how much more data is left to render.
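    The relationship between the two timelines can be illustrated with a small plain-Java model. This is a hypothetical sketch of the arithmetic only; the real conversions are performed by the node's `nodeTimeForPlayerTime:` and `playerTimeForNodeTime:` methods, and the class name `PlayerTimeline` is invented for illustration:

    ```java
    // Hypothetical model of the node-time / player-time relationship.
    // Player time 0 corresponds to the node sample time at which play
    // began; time spent paused shifts the player timeline further.
    public class PlayerTimeline {
        private final long startNodeSampleTime; // node sample time when play() was called
        private long pausedSampleTotal;         // total samples spent paused so far

        public PlayerTimeline(long startNodeSampleTime) {
            this.startNodeSampleTime = startNodeSampleTime;
        }

        // Record a pause interval, measured in node samples.
        public void addPause(long pausedSamples) {
            pausedSampleTotal += pausedSamples;
        }

        // Analogous to nodeTimeForPlayerTime: (sample times only).
        public long nodeTimeForPlayerTime(long playerSampleTime) {
            return startNodeSampleTime + pausedSampleTotal + playerSampleTime;
        }

        // Analogous to playerTimeForNodeTime: (sample times only).
        public long playerTimeForNodeTime(long nodeSampleTime) {
            return nodeSampleTime - startNodeSampleTime - pausedSampleTotal;
        }
    }
    ```

    Note that the real methods operate on `AVAudioTime` objects and return nil when the player is not playing; the sketch above only captures the sample-time offset arithmetic.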
    • Constructor Detail

      • AVAudioPlayerNode

        protected AVAudioPlayerNode​(org.moe.natj.general.Pointer peer)
    • Method Detail

      • accessInstanceVariablesDirectly

        public static boolean accessInstanceVariablesDirectly()
      • allocWithZone

        public static java.lang.Object allocWithZone​(org.moe.natj.general.ptr.VoidPtr zone)
      • automaticallyNotifiesObserversForKey

        public static boolean automaticallyNotifiesObserversForKey​(java.lang.String key)
      • cancelPreviousPerformRequestsWithTarget

        public static void cancelPreviousPerformRequestsWithTarget​(java.lang.Object aTarget)
      • cancelPreviousPerformRequestsWithTargetSelectorObject

        public static void cancelPreviousPerformRequestsWithTargetSelectorObject​(java.lang.Object aTarget,
                                                                                 org.moe.natj.objc.SEL aSelector,
                                                                                 java.lang.Object anArgument)
      • classFallbacksForKeyedArchiver

        public static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
      • classForKeyedUnarchiver

        public static org.moe.natj.objc.Class classForKeyedUnarchiver()
      • debugDescription_static

        public static java.lang.String debugDescription_static()
      • description_static

        public static java.lang.String description_static()
      • hash_static

        public static long hash_static()
      • instanceMethodSignatureForSelector

        public static NSMethodSignature instanceMethodSignatureForSelector​(org.moe.natj.objc.SEL aSelector)
      • instancesRespondToSelector

        public static boolean instancesRespondToSelector​(org.moe.natj.objc.SEL aSelector)
      • isSubclassOfClass

        public static boolean isSubclassOfClass​(org.moe.natj.objc.Class aClass)
      • keyPathsForValuesAffectingValueForKey

        public static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey​(java.lang.String key)
      • new_objc

        public static java.lang.Object new_objc()
      • resolveClassMethod

        public static boolean resolveClassMethod​(org.moe.natj.objc.SEL sel)
      • resolveInstanceMethod

        public static boolean resolveInstanceMethod​(org.moe.natj.objc.SEL sel)
      • setVersion_static

        public static void setVersion_static​(long aVersion)
      • superclass_static

        public static org.moe.natj.objc.Class superclass_static()
      • version_static

        public static long version_static()
      • destinationForMixerBus

        public AVAudioMixingDestination destinationForMixerBus​(AVAudioNode mixer,
                                                               long bus)
        Description copied from interface: AVAudioMixing
        Returns the AVAudioMixingDestination object corresponding to the specified mixer node and its input bus.

        When a source node is connected to multiple mixers downstream, setting AVAudioMixing properties directly on the source node applies the change to all the mixers downstream. If you want to set or get properties on a specific mixer, use this method to get the corresponding AVAudioMixingDestination and set or get properties on it.

        Note:
        - Properties set on individual AVAudioMixingDestination instances will not be reflected at the source node level.
        - The AVAudioMixingDestination reference returned by this method can become invalid when there is any disconnection between the source and the mixer node. Hence this reference should not be retained; fetch it each time you want to set or get properties on a specific mixer.

        If the source node is not connected to the specified mixer/input bus, this method returns nil. Calling this on an AVAudioMixingDestination instance returns self if the specified mixer/input bus matches its connection point; otherwise, it returns nil.
        Specified by:
        destinationForMixerBus in interface AVAudioMixing
      • isPlaying

        public boolean isPlaying()
        [@property] playing
        Indicates whether or not the player is playing.
      • nodeTimeForPlayerTime

        public AVAudioTime nodeTimeForPlayerTime​(AVAudioTime playerTime)
        nodeTimeForPlayerTime: Convert from player time to node time. This method and its inverse `playerTimeForNodeTime:` are discussed in the introduction to this class. If the player is not playing when this method is called, nil is returned.
        Parameters:
        playerTime - a time relative to the player's start time
        Returns:
        a node time
      • obstruction

        public float obstruction()
        Description copied from interface: AVAudio3DMixing
        [@property] obstruction
        Simulates filtering of the direct path of sound due to an obstacle. Only the direct path of sound between the source and listener is blocked.
        Range: -100.0 -> 0.0 dB
        Default: 0.0
        Mixer: AVAudioEnvironmentNode
        Specified by:
        obstruction in interface AVAudio3DMixing
      • occlusion

        public float occlusion()
        Description copied from interface: AVAudio3DMixing
        [@property] occlusion
        Simulates filtering of the direct and reverb paths of sound due to an obstacle. Both the direct and reverb paths of sound between the source and listener are blocked.
        Range: -100.0 -> 0.0 dB
        Default: 0.0
        Mixer: AVAudioEnvironmentNode
        Specified by:
        occlusion in interface AVAudio3DMixing
      • pan

        public float pan()
        Description copied from interface: AVAudioStereoMixing
        [@property] pan
        Set a bus's stereo pan.
        Range: -1.0 -> 1.0
        Default: 0.0
        Mixer: AVAudioMixerNode
        Specified by:
        pan in interface AVAudioStereoMixing
      • pause

        public void pause()
        pause Pause playback. The player's sample time does not advance while the node is paused. Note that pausing or stopping all the players connected to an engine does not pause or stop the engine or the underlying hardware. The engine must be explicitly paused or stopped for the hardware to stop.
      • play

        public void play()
        play Start or resume playback immediately. Equivalent to playAtTime:nil.
      • playAtTime

        public void playAtTime​(AVAudioTime when)
        playAtTime: Start or resume playback at a specific time. This node is initially paused. Requests to play buffers or file segments are enqueued, and any necessary decoding begins immediately. Playback does not begin, however, until the player has started playing, via this method. Note that providing an AVAudioTime which is in the past (before lastRenderTime) will cause the player to begin playback immediately. E.g., to start a player X seconds in the future:
         // start engine and player
         NSError *nsErr = nil;
         [_engine startAndReturnError:&nsErr];
         if (!nsErr) {
                const float kStartDelayTime = 0.5; // sec
                AVAudioFormat *outputFormat = [_player outputFormatForBus:0];
                AVAudioFramePosition startSampleTime = _player.lastRenderTime.sampleTime + kStartDelayTime * outputFormat.sampleRate;
                AVAudioTime *startTime = [AVAudioTime timeWithSampleTime:startSampleTime atRate:outputFormat.sampleRate];
                [_player playAtTime:startTime];
         }
         
        Parameters:
        when - the node time at which to start or resume playback. nil signifies "now".
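        The start-time arithmetic from the Objective-C snippet above can be mirrored in plain Java. The helper class `StartTimeCalc` is hypothetical; in MOE code the last render sample time and the sample rate would come from the player's lastRenderTime and output format:

        ```java
        public class StartTimeCalc {
            // Compute the node sample time at which playback should start,
            // given the last rendered sample time, the desired delay in
            // seconds, and the output sample rate. Mirrors the arithmetic
            // in the Objective-C example above.
            public static long startSampleTime(long lastRenderSampleTime,
                                               double delaySeconds,
                                               double sampleRate) {
                return lastRenderSampleTime + (long) (delaySeconds * sampleRate);
            }
        }
        ```

        The resulting sample time would then be wrapped in an AVAudioTime (at the output sample rate) and passed to playAtTime.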
      • playerTimeForNodeTime

        public AVAudioTime playerTimeForNodeTime​(AVAudioTime nodeTime)
        playerTimeForNodeTime: Convert from node time to player time. This method and its inverse `nodeTimeForPlayerTime:` are discussed in the introduction to this class. If the player is not playing when this method is called, nil is returned.
        Parameters:
        nodeTime - a node time
        Returns:
        a time relative to the player's start time
      • position

        public AVAudio3DPoint position()
        Description copied from interface: AVAudio3DMixing
        [@property] position
        The location of the source in the 3D environment. The coordinates are specified in meters.
        Mixer: AVAudioEnvironmentNode
        Specified by:
        position in interface AVAudio3DMixing
      • prepareWithFrameCount

        public void prepareWithFrameCount​(int frameCount)
        prepareWithFrameCount: Prepares previously scheduled file regions or buffers for playback.
        Parameters:
        frameCount - The number of sample frames of data to be prepared before returning.
      • rate

        public float rate()
        Description copied from interface: AVAudio3DMixing
        [@property] rate
        Changes the playback rate of the input signal. A value of 2.0 results in the output audio playing one octave higher; a value of 0.5 results in the output audio playing one octave lower.
        Range: 0.5 -> 2.0
        Default: 1.0
        Mixer: AVAudioEnvironmentNode
        Specified by:
        rate in interface AVAudio3DMixing
      • renderingAlgorithm

        public long renderingAlgorithm()
        Description copied from interface: AVAudio3DMixing
        [@property] renderingAlgorithm
        Type of rendering algorithm used. Depending on the current output format of the AVAudioEnvironmentNode, only a subset of the rendering algorithms may be supported. An array of valid rendering algorithms can be retrieved by calling applicableRenderingAlgorithms on AVAudioEnvironmentNode.
        Default: AVAudio3DMixingRenderingAlgorithmEqualPowerPanning
        Mixer: AVAudioEnvironmentNode
        Specified by:
        renderingAlgorithm in interface AVAudio3DMixing
      • reverbBlend

        public float reverbBlend()
        Description copied from interface: AVAudio3DMixing
        [@property] reverbBlend
        Controls the blend of dry and reverb-processed audio. This property controls the amount of the source's audio that will be processed by the reverb in AVAudioEnvironmentNode. A value of 0.5 results in an equal blend of dry and processed (wet) audio.
        Range: 0.0 (completely dry) -> 1.0 (completely wet)
        Default: 0.0
        Mixer: AVAudioEnvironmentNode
        Specified by:
        reverbBlend in interface AVAudio3DMixing
      • scheduleBufferAtTimeOptionsCompletionHandler

        public void scheduleBufferAtTimeOptionsCompletionHandler​(AVAudioPCMBuffer buffer,
                                                                 AVAudioTime when,
                                                                 long options,
                                                                 AVAudioPlayerNode.Block_scheduleBufferAtTimeOptionsCompletionHandler completionHandler)
        scheduleBuffer:atTime:options:completionHandler: Schedule playing samples from an AVAudioBuffer. It is possible for the completionHandler to be called before rendering begins or before the buffer is played completely.
        Parameters:
        buffer - the buffer to play
        when - the time at which to play the buffer. see the discussion of timestamps, above.
        options - options for looping, interrupting other buffers, etc.
        completionHandler - called after the buffer has been consumed by the player or the player is stopped. may be nil.
      • scheduleBufferCompletionHandler

        public void scheduleBufferCompletionHandler​(AVAudioPCMBuffer buffer,
                                                    AVAudioPlayerNode.Block_scheduleBufferCompletionHandler completionHandler)
        scheduleBuffer:completionHandler: Schedule playing samples from an AVAudioBuffer. Schedules the buffer to be played following any previously scheduled commands. It is possible for the completionHandler to be called before rendering begins or before the buffer is played completely.
        Parameters:
        buffer - the buffer to play
        completionHandler - called after the buffer has been consumed by the player or the player is stopped. may be nil.
      • scheduleFileAtTimeCompletionHandler

        public void scheduleFileAtTimeCompletionHandler​(AVAudioFile file,
                                                        AVAudioTime when,
                                                        AVAudioPlayerNode.Block_scheduleFileAtTimeCompletionHandler completionHandler)
        scheduleFile:atTime:completionHandler: Schedule playing of an entire audio file. It is possible for the completionHandler to be called before rendering begins or before the file is played completely.
        Parameters:
        file - the file to play
        when - the time at which to play the file. see the discussion of timestamps, above.
        completionHandler - called after the file has been consumed by the player or the player is stopped. may be nil.
      • scheduleSegmentStartingFrameFrameCountAtTimeCompletionHandler

        public void scheduleSegmentStartingFrameFrameCountAtTimeCompletionHandler​(AVAudioFile file,
                                                                                  long startFrame,
                                                                                  int numberFrames,
                                                                                  AVAudioTime when,
                                                                                  AVAudioPlayerNode.Block_scheduleSegmentStartingFrameFrameCountAtTimeCompletionHandler completionHandler)
        scheduleSegment:startingFrame:frameCount:atTime:completionHandler: Schedule playing a segment of an audio file. It is possible for the completionHandler to be called before rendering begins or before the segment is played completely.
        Parameters:
        file - the file to play
        startFrame - the starting frame position in the stream
        numberFrames - the number of frames to play
        when - the time at which to play the region. see the discussion of timestamps, above.
        completionHandler - called after the segment has been consumed by the player or the player is stopped. may be nil.
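        When scheduling a segment by time rather than by frame, the startFrame and numberFrames arguments can be derived from the file's sample rate. This is a plain-Java arithmetic sketch with a hypothetical helper class; in MOE code the rate would come from the AVAudioFile's processing format, and numberFrames would need to fit in an int for these bindings:

        ```java
        public class SegmentFrames {
            // Convert a time range (in seconds) into the startFrame and
            // numberFrames arguments expected by the scheduleSegment...
            // methods, given the file's sample rate.
            public static long[] frameRange(double startSeconds,
                                            double durationSeconds,
                                            double sampleRate) {
                long startFrame = (long) (startSeconds * sampleRate);
                long numberFrames = (long) (durationSeconds * sampleRate);
                return new long[] { startFrame, numberFrames };
            }
        }
        ```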
      • setObstruction

        public void setObstruction​(float value)
        Description copied from interface: AVAudio3DMixing
        [@property] obstruction
        Simulates filtering of the direct path of sound due to an obstacle. Only the direct path of sound between the source and listener is blocked.
        Range: -100.0 -> 0.0 dB
        Default: 0.0
        Mixer: AVAudioEnvironmentNode
        Specified by:
        setObstruction in interface AVAudio3DMixing
      • setOcclusion

        public void setOcclusion​(float value)
        Description copied from interface: AVAudio3DMixing
        [@property] occlusion
        Simulates filtering of the direct and reverb paths of sound due to an obstacle. Both the direct and reverb paths of sound between the source and listener are blocked.
        Range: -100.0 -> 0.0 dB
        Default: 0.0
        Mixer: AVAudioEnvironmentNode
        Specified by:
        setOcclusion in interface AVAudio3DMixing
      • setPan

        public void setPan​(float value)
        Description copied from interface: AVAudioStereoMixing
        [@property] pan
        Set a bus's stereo pan.
        Range: -1.0 -> 1.0
        Default: 0.0
        Mixer: AVAudioMixerNode
        Specified by:
        setPan in interface AVAudioStereoMixing
      • setPosition

        public void setPosition​(AVAudio3DPoint value)
        Description copied from interface: AVAudio3DMixing
        [@property] position
        The location of the source in the 3D environment. The coordinates are specified in meters.
        Mixer: AVAudioEnvironmentNode
        Specified by:
        setPosition in interface AVAudio3DMixing
      • setRate

        public void setRate​(float value)
        Description copied from interface: AVAudio3DMixing
        [@property] rate
        Changes the playback rate of the input signal. A value of 2.0 results in the output audio playing one octave higher; a value of 0.5 results in the output audio playing one octave lower.
        Range: 0.5 -> 2.0
        Default: 1.0
        Mixer: AVAudioEnvironmentNode
        Specified by:
        setRate in interface AVAudio3DMixing
      • setRenderingAlgorithm

        public void setRenderingAlgorithm​(long value)
        Description copied from interface: AVAudio3DMixing
        [@property] renderingAlgorithm
        Type of rendering algorithm used. Depending on the current output format of the AVAudioEnvironmentNode, only a subset of the rendering algorithms may be supported. An array of valid rendering algorithms can be retrieved by calling applicableRenderingAlgorithms on AVAudioEnvironmentNode.
        Default: AVAudio3DMixingRenderingAlgorithmEqualPowerPanning
        Mixer: AVAudioEnvironmentNode
        Specified by:
        setRenderingAlgorithm in interface AVAudio3DMixing
      • setReverbBlend

        public void setReverbBlend​(float value)
        Description copied from interface: AVAudio3DMixing
        [@property] reverbBlend
        Controls the blend of dry and reverb-processed audio. This property controls the amount of the source's audio that will be processed by the reverb in AVAudioEnvironmentNode. A value of 0.5 results in an equal blend of dry and processed (wet) audio.
        Range: 0.0 (completely dry) -> 1.0 (completely wet)
        Default: 0.0
        Mixer: AVAudioEnvironmentNode
        Specified by:
        setReverbBlend in interface AVAudio3DMixing
      • setVolume

        public void setVolume​(float value)
        Description copied from interface: AVAudioMixing
        [@property] volume
        Set a bus's input volume.
        Range: 0.0 -> 1.0
        Default: 1.0
        Mixers: AVAudioMixerNode, AVAudioEnvironmentNode
        Specified by:
        setVolume in interface AVAudioMixing
      • stop

        public void stop()
        stop Clear all of the node's previously scheduled events and stop playback. All of the node's previously scheduled events are cleared, including any that are in the middle of playing. The node's sample time (and therefore the times to which new events are to be scheduled) is reset to 0, and will not proceed until the node is started again (via play or playAtTime). Note that pausing or stopping all the players connected to an engine does not pause or stop the engine or the underlying hardware. The engine must be explicitly paused or stopped for the hardware to stop.
      • volume

        public float volume()
        Description copied from interface: AVAudioMixing
        [@property] volume
        Set a bus's input volume.
        Range: 0.0 -> 1.0
        Default: 1.0
        Mixers: AVAudioMixerNode, AVAudioEnvironmentNode
        Specified by:
        volume in interface AVAudioMixing
      • scheduleBufferAtTimeOptionsCompletionCallbackTypeCompletionHandler

        public void scheduleBufferAtTimeOptionsCompletionCallbackTypeCompletionHandler​(AVAudioPCMBuffer buffer,
                                                                                       AVAudioTime when,
                                                                                       long options,
                                                                                       long callbackType,
                                                                                       AVAudioPlayerNode.Block_scheduleBufferAtTimeOptionsCompletionCallbackTypeCompletionHandler completionHandler)
        scheduleBuffer:atTime:options:completionCallbackType:completionHandler: Schedule playing samples from an AVAudioBuffer.
        Parameters:
        buffer - the buffer to play
        when - the time at which to play the buffer. see the discussion of timestamps, above.
        options - options for looping, interrupting other buffers, etc.
        callbackType - option to specify when the completion handler must be called
        completionHandler - called after the buffer has been consumed by the player or has finished playing back or the player is stopped. may be nil.
      • scheduleBufferCompletionCallbackTypeCompletionHandler

        public void scheduleBufferCompletionCallbackTypeCompletionHandler​(AVAudioPCMBuffer buffer,
                                                                          long callbackType,
                                                                          AVAudioPlayerNode.Block_scheduleBufferCompletionCallbackTypeCompletionHandler completionHandler)
        scheduleBuffer:completionCallbackType:completionHandler: Schedule playing samples from an AVAudioBuffer. Schedules the buffer to be played following any previously scheduled commands.
        Parameters:
        buffer - the buffer to play
        callbackType - option to specify when the completion handler must be called
        completionHandler - called after the buffer has been consumed by the player or has finished playing back or the player is stopped. may be nil.
      • scheduleFileAtTimeCompletionCallbackTypeCompletionHandler

        public void scheduleFileAtTimeCompletionCallbackTypeCompletionHandler​(AVAudioFile file,
                                                                              AVAudioTime when,
                                                                              long callbackType,
                                                                              AVAudioPlayerNode.Block_scheduleFileAtTimeCompletionCallbackTypeCompletionHandler completionHandler)
        scheduleFile:atTime:completionCallbackType:completionHandler: Schedule playing of an entire audio file.
        Parameters:
        file - the file to play
        when - the time at which to play the file. see the discussion of timestamps, above.
        callbackType - option to specify when the completion handler must be called
        completionHandler - called after the file has been consumed by the player or has finished playing back or the player is stopped. may be nil.
      • scheduleSegmentStartingFrameFrameCountAtTimeCompletionCallbackTypeCompletionHandler

        public void scheduleSegmentStartingFrameFrameCountAtTimeCompletionCallbackTypeCompletionHandler​(AVAudioFile file,
                                                                                                        long startFrame,
                                                                                                        int numberFrames,
                                                                                                        AVAudioTime when,
                                                                                                        long callbackType,
                                                                                                        AVAudioPlayerNode.Block_scheduleSegmentStartingFrameFrameCountAtTimeCompletionCallbackTypeCompletionHandler completionHandler)
        scheduleSegment:startingFrame:frameCount:atTime:completionCallbackType:completionHandler: Schedule playing a segment of an audio file.
        Parameters:
        file - the file to play
        startFrame - the starting frame position in the stream
        numberFrames - the number of frames to play
        when - the time at which to play the region. see the discussion of timestamps, above.
        callbackType - option to specify when the completion handler must be called
        completionHandler - called after the segment has been consumed by the player or has finished playing back or the player is stopped. may be nil.
      • pointSourceInHeadMode

        public long pointSourceInHeadMode()
        Description copied from interface: AVAudio3DMixing
        [@property] pointSourceInHeadMode
        In-head rendering choice for AVAudio3DMixingSourceModePointSource in AVAudio3DMixingRenderingAlgorithmAuto.
        Default: AVAudio3DMixingPointSourceInHeadModeMono
        Mixer: AVAudioEnvironmentNode
        Specified by:
        pointSourceInHeadMode in interface AVAudio3DMixing
      • setPointSourceInHeadMode

        public void setPointSourceInHeadMode​(long value)
        Description copied from interface: AVAudio3DMixing
        [@property] pointSourceInHeadMode
        In-head rendering choice for AVAudio3DMixingSourceModePointSource in AVAudio3DMixingRenderingAlgorithmAuto.
        Default: AVAudio3DMixingPointSourceInHeadModeMono
        Mixer: AVAudioEnvironmentNode
        Specified by:
        setPointSourceInHeadMode in interface AVAudio3DMixing
      • setSourceMode

        public void setSourceMode​(long value)
        Description copied from interface: AVAudio3DMixing
        [@property] sourceMode
        Controls how individual channels of an input bus are rendered.
        Default: AVAudio3DMixingSourceModeSpatializeIfMono
        Mixer: AVAudioEnvironmentNode
        Specified by:
        setSourceMode in interface AVAudio3DMixing
      • sourceMode

        public long sourceMode()
        Description copied from interface: AVAudio3DMixing
        [@property] sourceMode
        Controls how individual channels of an input bus are rendered.
        Default: AVAudio3DMixingSourceModeSpatializeIfMono
        Mixer: AVAudioEnvironmentNode
        Specified by:
        sourceMode in interface AVAudio3DMixing