Class AVSpeechSynthesizer

  • All Implemented Interfaces:
    NSObject

    public class AVSpeechSynthesizer
    extends NSObject
    AVSpeechSynthesizer allows speaking of speech utterances with a basic queuing mechanism. Create an instance of AVSpeechSynthesizer to start generating synthesized speech from AVSpeechUtterance objects.
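    A minimal sketch of the basic flow, assuming the MOE `apple.avfoundation` package layout and the generated `alloc()`/`init()` factory convention:

    ```java
    import apple.avfoundation.AVSpeechSynthesisVoice;
    import apple.avfoundation.AVSpeechSynthesizer;
    import apple.avfoundation.AVSpeechUtterance;

    public class SpeechExample {
        public static void speakGreeting() {
            // Create the synthesizer and a queued utterance.
            AVSpeechSynthesizer synthesizer = AVSpeechSynthesizer.alloc().init();
            AVSpeechUtterance utterance =
                    AVSpeechUtterance.speechUtteranceWithString("Hello, world");
            // Optionally select a voice by BCP-47 language code.
            utterance.setVoice(AVSpeechSynthesisVoice.voiceWithLanguage("en-US"));
            // speakUtterance returns immediately; speech is generated asynchronously.
            synthesizer.speakUtterance(utterance);
        }
    }
    ```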
    • Constructor Detail

      • AVSpeechSynthesizer

        protected AVSpeechSynthesizer​(org.moe.natj.general.Pointer peer)
    • Method Detail

      • accessInstanceVariablesDirectly

        public static boolean accessInstanceVariablesDirectly()
      • allocWithZone

        public static java.lang.Object allocWithZone​(org.moe.natj.general.ptr.VoidPtr zone)
      • automaticallyNotifiesObserversForKey

        public static boolean automaticallyNotifiesObserversForKey​(java.lang.String key)
      • cancelPreviousPerformRequestsWithTarget

        public static void cancelPreviousPerformRequestsWithTarget​(java.lang.Object aTarget)
      • cancelPreviousPerformRequestsWithTargetSelectorObject

        public static void cancelPreviousPerformRequestsWithTargetSelectorObject​(java.lang.Object aTarget,
                                                                                 org.moe.natj.objc.SEL aSelector,
                                                                                 java.lang.Object anArgument)
      • classFallbacksForKeyedArchiver

        public static NSArray<java.lang.String> classFallbacksForKeyedArchiver()
      • classForKeyedUnarchiver

        public static org.moe.natj.objc.Class classForKeyedUnarchiver()
      • debugDescription_static

        public static java.lang.String debugDescription_static()
      • description_static

        public static java.lang.String description_static()
      • hash_static

        public static long hash_static()
      • instanceMethodSignatureForSelector

        public static NSMethodSignature instanceMethodSignatureForSelector​(org.moe.natj.objc.SEL aSelector)
      • instancesRespondToSelector

        public static boolean instancesRespondToSelector​(org.moe.natj.objc.SEL aSelector)
      • isSubclassOfClass

        public static boolean isSubclassOfClass​(org.moe.natj.objc.Class aClass)
      • keyPathsForValuesAffectingValueForKey

        public static NSSet<java.lang.String> keyPathsForValuesAffectingValueForKey​(java.lang.String key)
      • new_objc

        public static java.lang.Object new_objc()
      • resolveClassMethod

        public static boolean resolveClassMethod​(org.moe.natj.objc.SEL sel)
      • resolveInstanceMethod

        public static boolean resolveInstanceMethod​(org.moe.natj.objc.SEL sel)
      • setVersion_static

        public static void setVersion_static​(long aVersion)
      • superclass_static

        public static org.moe.natj.objc.Class superclass_static()
      • version_static

        public static long version_static()
      • continueSpeaking

        public boolean continueSpeaking()
      • isPaused

        public boolean isPaused()
      • isSpeaking

        public boolean isSpeaking()
      • outputChannels

        public NSArray<? extends AVAudioSessionChannelDescription> outputChannels()
        Specify the audio channels to be used for synthesized speech as described by the channel descriptions in AVAudioSession's current route. Speech audio will be replicated to each specified channel. Default is nil, which implies system defaults.
      • pauseSpeakingAtBoundary

        public boolean pauseSpeakingAtBoundary​(long boundary)
      • setOutputChannels

        public void setOutputChannels​(NSArray<? extends AVAudioSessionChannelDescription> value)
        Specify the audio channels to be used for synthesized speech as described by the channel descriptions in AVAudioSession's current route. Speech audio will be replicated to each specified channel. Default is nil, which implies system defaults.
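        As an illustrative sketch (the AVAudioSession accessor names are assumed from the MOE bindings, where NSArray implements java.util.List), the channel descriptions can be taken from the session's current route:

        ```java
        // Hypothetical sketch: route synthesized speech to the channels of
        // the first output port on the current route.
        AVAudioSession session = AVAudioSession.sharedInstance();
        AVAudioSessionPortDescription firstOutput =
                session.currentRoute().outputs().get(0);
        synthesizer.setOutputChannels(firstOutput.channels());

        // Pass null to restore the system default routing.
        synthesizer.setOutputChannels(null);
        ```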
      • speakUtterance

        public void speakUtterance​(AVSpeechUtterance utterance)
        AVSpeechUtterances are queued by default. Enqueuing an AVSpeechUtterance that is already enqueued or is speaking will raise an exception.
      • stopSpeakingAtBoundary

        public boolean stopSpeakingAtBoundary​(long boundary)
        Call stopSpeakingAtBoundary: to interrupt current speech and clear the queue.
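        A sketch of pause/stop control, passing raw AVSpeechBoundary values (0 = AVSpeechBoundaryImmediate, 1 = AVSpeechBoundaryWord, per the Objective-C header); the MOE bindings may also expose these as enum constants:

        ```java
        // Pause after the current word finishes, then resume.
        synthesizer.pauseSpeakingAtBoundary(1L); // AVSpeechBoundaryWord
        if (synthesizer.isPaused()) {
            synthesizer.continueSpeaking();
        }

        // Interrupt immediately and clear all queued utterances.
        synthesizer.stopSpeakingAtBoundary(0L); // AVSpeechBoundaryImmediate
        ```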
      • mixToTelephonyUplink

        public boolean mixToTelephonyUplink()
        Set to YES to send synthesized speech into an outgoing telephony audio stream. If there's no active call, setting this property has no effect.
      • setMixToTelephonyUplink

        public void setMixToTelephonyUplink​(boolean value)
        Set to YES to send synthesized speech into an outgoing telephony audio stream. If there's no active call, setting this property has no effect.
      • setUsesApplicationAudioSession

        public void setUsesApplicationAudioSession​(boolean value)
        The AVSpeechSynthesizer uses the application's shared AVAudioSession (sharedInstance) when set to YES. When set to NO, the AVSpeechSynthesizer uses a separate AVAudioSession for playback; that separate session mixes with and ducks other audio, and its active state is managed automatically. The separate audio session uses AVAudioSessionRouteSharingPolicyDefault, which means it may have a different route from the app's shared instance session. Default is YES.
      • usesApplicationAudioSession

        public boolean usesApplicationAudioSession()
        The AVSpeechSynthesizer uses the application's shared AVAudioSession (sharedInstance) when set to YES. When set to NO, the AVSpeechSynthesizer uses a separate AVAudioSession for playback; that separate session mixes with and ducks other audio, and its active state is managed automatically. The separate audio session uses AVAudioSessionRouteSharingPolicyDefault, which means it may have a different route from the app's shared instance session. Default is YES.
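        For example, to keep the synthesizer's playback out of the app's shared session (a sketch, assuming an already-created synthesizer):

        ```java
        // Opt out of the shared AVAudioSession: the synthesizer gets its own
        // session that mixes with and ducks other audio automatically.
        synthesizer.setUsesApplicationAudioSession(false);

        // Later, check which mode is in effect.
        boolean usesShared = synthesizer.usesApplicationAudioSession();
        ```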
      • writeUtteranceToBufferCallback

        public void writeUtteranceToBufferCallback​(AVSpeechUtterance utterance,
                                                   AVSpeechSynthesizer.Block_writeUtteranceToBufferCallback bufferCallback)
        Use this method to receive audio buffers that can be used to store or further process synthesized speech. The dictionary provided by -[AVSpeechSynthesisVoice audioFileSettings] can be used to create an AVAudioFile.
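        A sketch of capturing audio instead of playing it, assuming the generated Block_writeUtteranceToBufferCallback type is a functional interface usable as a lambda and that buffers are delivered as AVAudioPCMBuffer:

        ```java
        synthesizer.writeUtteranceToBufferCallback(utterance, buffer -> {
            // The callback may fire many times; a zero-length buffer
            // conventionally marks the end of the stream.
            if (buffer instanceof AVAudioPCMBuffer) {
                AVAudioPCMBuffer pcm = (AVAudioPCMBuffer) buffer;
                if (pcm.frameLength() > 0) {
                    // Append pcm to an AVAudioFile created from the dictionary
                    // returned by utterance.voice().audioFileSettings().
                }
            }
        });
        ```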