KeenASR Framework v2.1 (8b72cc4)
Keen Research
KIOSRecognizer Class Reference

#include <KIOSRecognizer.h>

Instance Methods

Config Parameters
(void) - setVADParameter:toValue:
Deprecated methods and properties
(NSString *) - lastRecordingFilename (deprecated)
(NSString *) - lastJSONMetadataFilename (deprecated)
(BOOL) - prepareForListeningWithCustomDecodingGraphWithName:
(BOOL) - prepareForListeningWithCustomDecodingGraphAtPath:
(BOOL) - startListeningFromAudioFile:
(void) - enableBluetoothOutput:
(void) - enableBluetoothA2DPOutput:

Class Methods

Other
(nonnull NSString *) + version
(void) + setLogLevel:

Properties

id< KIOSRecognizerDelegate > delegate
KIOSRecognizerState recognizerState
NSString * asrBundlePath
NSString * asrBundleName
NSString * currentDecodingGraphName
NSString * recordingsDir
NSString * miscDataDirectory
BOOL rescore
(nullable KIOSRecognizer *) + sharedInstance

Audio Handling

BOOL handleNotifications
(BOOL) + echoCancellationAvailable
(float) - inputLevel
(BOOL) - performEchoCancellation:
(void) - setBluetoothA2DPOutput:
(BOOL) - deactivateAudioStack
(BOOL) - activateAudioStack
(void) - reinitAudioStack

Initialization, Preparing, Starting, and Stopping Recognition

(BOOL) + initWithASRBundle:
(BOOL) + initWithASRBundleAtPath:
(instancetype) + new (unavailable: "new not available, call sharedInstance instead")
(BOOL) + teardown
(void) - setVadGating:
(BOOL) - prepareForListeningWithDecodingGraphWithName:withGoPComputation:
(BOOL) - prepareForListeningWithDecodingGraphAtPath:withGoPComputation:
(BOOL) - prepareForListeningWithContextualDecodingGraphWithName:andContextId:withGoPComputation:
(BOOL) - prepareForListeningWithContextualDecodingGraphAtPath:andContextId:withGoPComputation:
(BOOL) - startListening:
(void) - stopListening

Speaker Adaptation

(BOOL) + removeAllSpeakerAdaptationProfiles
(BOOL) + removeSpeakerAdaptationProfiles:
(void) - adaptToSpeakerWithName:
(void) - resetSpeakerAdaptation
(void) - saveSpeakerAdaptationProfile

Detailed Description

An instance of the KIOSRecognizer class, called recognizer, manages recognizer resources and provides speech recognition capabilities to your application.

You typically initialize the engine at app startup by calling the +initWithASRBundle: or +initWithASRBundleAtPath: method, and then use the sharedInstance method whenever you need to access the recognizer.

Recognition results are provided via callbacks. To obtain results, one of your classes will need to adopt the [KIOSRecognizerDelegate protocol](KIOSRecognizerDelegate) and implement some of its methods.

In order to properly handle audio interrupts, you will need to implement the [KIOSRecognizerDelegate recognizerReadyToListenAfterInterrupt:] callback method, in which you should perform audio playback cleanup (stop playing audio). This allows the KeenASR SDK to properly deactivate the audio session before the app goes to the background.

You can optionally implement the [KIOSRecognizerDelegate recognizerReadyToListenAfterInterrupt:] callback method, which is triggered after KIOSRecognizer is fully set up once the app comes back to the foreground. This is where you may refresh the UI state of the app.

Initialization example:

// keenAK3-nnet3chain-en-us is the name of the ASR Bundle (it might be
// different in your setup)
[KIOSRecognizer initWithASRBundle:@"keenAK3-nnet3chain-en-us"];
// for convenience our class keeps a local reference of the recognizer
self.recognizer = [KIOSRecognizer sharedInstance];
// this class will also be implementing methods from KIOSRecognizerDelegate
// protocol
self.recognizer.delegate = self;
// after 0.8sec of silence, recognizer will automatically stop listening
[self.recognizer setVADParameter:KIOSVadTimeoutEndSilenceForGoodMatch toValue:.8];
[self.recognizer setVADParameter:KIOSVadTimeoutEndSilenceForAnyMatch toValue:.8];
// TODO define callback methods for KIOSRecognizerDelegate

After initialization, audio captured whenever the recognizer is listening will be used for online speaker adaptation. You can name speaker adaptation profiles via adaptToSpeakerWithName:, persist profiles to the filesystem via saveSpeakerAdaptationProfile, and reset adaptation via resetSpeakerAdaptation.
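For illustration, a minimal sketch of this adaptation lifecycle (the speaker name "user2" is a hypothetical placeholder for whatever identifier your app uses):

```objc
// Sketch: switching speaker adaptation profiles between users.
// "user2" is an illustrative (pseudo)name, not an SDK requirement.
KIOSRecognizer *recognizer = [KIOSRecognizer sharedInstance];
// Persist what was learned for the current speaker so it can be
// reloaded in a later session.
[recognizer saveSpeakerAdaptationProfile];
// Drop the in-memory profile; adaptation restarts from the baseline
// (or from a saved profile, if a matching one exists).
[recognizer resetSpeakerAdaptation];
// Begin adapting to the new speaker.
[recognizer adaptToSpeakerWithName:@"user2"];
```

Since resetSpeakerAdaptation is ignored while the recognizer is listening, and adaptToSpeakerWithName: only affects subsequent startListening calls, this sequence should run while the recognizer is not listening.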

Warning
Only a single instance of the recognizer can exist at any given time.

Method Documentation

◆ activateAudioStack

- (BOOL) activateAudioStack

Activates the audio stack that was previously deactivated using the deactivateAudioStack method. This method should be called after all other audio systems have been set up, to make sure AVAudioSession is properly initialized for audio capture.

Returns
TRUE if audio stack was successfully activated, FALSE otherwise.

◆ adaptToSpeakerWithName:

- (void) adaptToSpeakerWithName: (nonnull NSString *) speakerName

Defines the name that will be used to uniquely identify the speaker adaptation profile. When the recognizer starts to listen, it will try to find a matching speaker profile in the filesystem (profiles are matched based on speakerName, asrBundle, and audio route). When the saveSpeakerAdaptationProfile method is called, it uses this name to uniquely identify the profile file that will be saved in the filesystem.

Parameters
speakerName: (pseudo)name of the speaker for which adaptation is to be performed. Default value is 'default'.

The name used here does not have to correspond to the real name of the user (thus we call it a pseudo name). The exact value does not matter as long as you can match it to a specific user in your app. For example, you could use 'user1', 'user2', etc.

Warning
If you cannot match names to your users, it's recommended to not use this method, and to not save adaptation profiles between sessions. Adaptation will still be performed throughout the session, but each new session (activity after initialization of recognizer) will start from the baseline models.

In-memory speaker adaptation profile can always be reset by calling resetSpeakerAdaptation.

If this method is called while recognizer is listening, it will only affect subsequent calls to startListening methods.

◆ deactivateAudioStack

- (BOOL) deactivateAudioStack

Deactivates the audio session and the KeenASR audio stack. If handleNotifications is set to YES, you will not need to use this method or its counterpart activateAudioStack; the KeenASR Framework will handle audio interrupts and notifications when the app goes to the background/foreground.

If your app is handling notifications explicitly (handleNotifications is set to NO), you may want to call this method when an audio interrupt occurs. If the recognizer is listening, this method will automatically stop listening and then deactivate the audio stack. When the app becomes active again or the audio interrupt finishes, you will need to call activateAudioStack.

Returns
TRUE if audio stack was successfully deactivated, FALSE otherwise.
Warning
It is recommended that any audio I/O (playing audio, video, etc.) is stopped before calling this method.

◆ echoCancellationAvailable

+ (BOOL) echoCancellationAvailable

Provides information about echo cancellation support on the device.

Returns
YES if echo cancellation is supported, NO otherwise

◆ initWithASRBundle:

+ (BOOL) initWithASRBundle: (nonnull NSString *) bundleName

Initialize the ASR engine with the ASR Bundle, which provides all the resources necessary for initialization. You will use this initialization method if you included the ASR Bundle with your application. See also initWithASRBundleAtPath: for scenarios where the ASR Bundle is not included with the app, but downloaded after the app has been installed. SDK initialization needs to occur before any other work can be performed.

Parameters
bundleName: name of the ASR Bundle, a directory containing all the resources necessary for the specific recognizer type. This will typically include all acoustic-model related files and configuration files. The bundle directory should contain a decode.conf configuration file, which can be augmented with additional config params; currently, that is the only way to pass various settings to the decoder. All path references in config files should be relative to the app root directory (e.g. librispeechQT-nnet2-en-us/mfcc.conf). The init method will initialize the appropriate recognizer type based on the name and content of the ASR Bundle.
Returns
TRUE if successful, FALSE otherwise.
Warning
When initializing the recognizer, you need to make sure that bundle directory contains all the necessary resources needed for the specific recognizer type. If your app is dynamically creating decoding graphs, ASR bundle directory needs to contain lang subdirectory with relevant resources (lexicon, etc.).

◆ initWithASRBundleAtPath:

+ (BOOL) initWithASRBundleAtPath: (nonnull NSString *) pathToASRBundle

Initialize ASR engine with the ASR Bundle located at provided path. This is an alternative method to initialize the SDK, which you would use if you did not package ASR Bundle with your application but instead downloaded it after the app has been installed. SDK initialization needs to occur before any other work can be performed.

Parameters
pathToASRBundle: full path to the ASR Bundle. For more details about ASR Bundles see initWithASRBundle:
Returns
TRUE if successful, FALSE otherwise.
Warning
When initializing the recognizer, make sure that the bundle directory contains all the necessary resources needed for the specific recognizer type. If your app is dynamically creating decoding graphs, ASR bundle directory needs to contain lang subdirectory with relevant resources (lexicon, etc.).
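A minimal sketch of initializing from a downloaded bundle; the Caches-directory location and bundle name below are assumptions specific to this example, not SDK requirements:

```objc
// Sketch: initialize from an ASR Bundle downloaded at runtime.
// The directory and bundle name are illustrative assumptions.
NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(
    NSCachesDirectory, NSUserDomainMask, YES) firstObject];
NSString *bundlePath =
    [cachesDir stringByAppendingPathComponent:@"keenAK3-nnet3chain-en-us"];
if (![KIOSRecognizer initWithASRBundleAtPath:bundlePath]) {
    NSLog(@"Failed to initialize KeenASR with bundle at %@", bundlePath);
}
```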

◆ inputLevel

- (float) inputLevel

Retrieves the peak RMS audio level computed from the most recent audio buffer processed by the recognizer. RMS is computed on 25 ms chunks of audio, and the peak value from the most recent audio buffer is returned.

If the recognizer is not listening, or if no valid RMS level has been computed, this method returns NaN (see std::nan).

Returns
The peak RMS level in dB from the most recent audio buffer, or NaN if not available.

◆ performEchoCancellation:

- (BOOL) performEchoCancellation: (BOOL) value

EXPERIMENTAL Specifies if echo cancellation should be performed. If value is set to YES and the device supports echo cancellation, then audio played by the application will be removed from the audio captured via the microphone.

Parameters
value: set to YES to turn on echo cancellation processing, NO to turn it off. Default is NO.
Returns
TRUE if the value was successfully set, FALSE otherwise. If the device does not support echo cancellation and you pass YES to this method, it will return FALSE.
Warning
Calls to this method while the recognizer is listening will be ignored and the method will return FALSE.
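Since support varies across devices, a sketch of the guarded enable path:

```objc
// Enable echo cancellation only when the device supports it; calls made
// while the recognizer is listening are ignored and return FALSE.
if ([KIOSRecognizer echoCancellationAvailable]) {
    KIOSRecognizer *recognizer = [KIOSRecognizer sharedInstance];
    if (![recognizer performEchoCancellation:YES]) {
        NSLog(@"Could not enable echo cancellation");
    }
}
```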

◆ prepareForListeningWithContextualDecodingGraphAtPath:andContextId:withGoPComputation:

- (BOOL) prepareForListeningWithContextualDecodingGraphAtPath: (nonnull NSString *) dgPath
andContextId: (nonnull NSNumber *) contextId
withGoPComputation: (BOOL) computeGoP 

Prepare for recognition by loading a custom decoding graph that was typically bundled with the application.

After calling this method, recognizer will load the decoding graph into memory and it will be ready to start listening via startListening method.

Goodness of pronunciation scoring (GoP) requires ASR Bundle with relevant models; if such models are not available in the ASR Bundle, GoP scores will not be computed regardless of the computeGoP setting.

Parameters
dgPath: absolute path to the decoding graph directory, which was created ahead of time and packaged with the app.
contextId: id of the context that should be used. This number will be in the range 0 to contextualPhrases.count - 1, where contextualPhrases is the <NSArray<NSArray *> *> used to build the contextual graph.
computeGoP: goodness of pronunciation scores will be computed if this parameter is set to TRUE
Returns
TRUE if successful, FALSE otherwise.
Warning
If the custom decoding graph was built with rescoring capability, all the resources will be loaded regardless of how the rescore parameter is set.

◆ prepareForListeningWithContextualDecodingGraphWithName:andContextId:withGoPComputation:

- (BOOL) prepareForListeningWithContextualDecodingGraphWithName: (nonnull NSString *) dgName
andContextId: (nonnull NSNumber *) contextId
withGoPComputation: (BOOL) computeGoP 

Prepare for recognition by loading a decoding graph that was prepared via the [createContextualDecodingGraphFromPhrases:forRecognizer:usingAlternativePronunciations:andTask:andSaveWithName:] family of methods.

After calling this method, recognizer will load the decoding graph into memory and it will be ready to start listening via startListening method.

Goodness of pronunciation scoring (GoP) requires ASR Bundle with relevant models; if such models are not available in the ASR Bundle, GoP scores will not be computed regardless of the computeGoP setting.

Parameters
dgName: name of the decoding graph
contextId: id of the context that should be used. This number will be in the range 0 to contextualPhrases.count - 1, where contextualPhrases is the <NSArray<NSArray *> *> used to build the contextual graph.
computeGoP: goodness of pronunciation scores will be computed if this parameter is set to TRUE
Returns
TRUE if successful, FALSE otherwise

◆ prepareForListeningWithDecodingGraphAtPath:withGoPComputation:

- (BOOL) prepareForListeningWithDecodingGraphAtPath: (nonnull NSString *) pathToDecodingGraphDirectory
withGoPComputation: (BOOL) computeGoP 

Prepare for recognition by loading a custom decoding graph that was typically bundled with the application. You will typically use this approach for large-vocabulary tasks, where it would take too long to build the decoding graph on the mobile device.

After calling this method, recognizer will load the decoding graph into memory and it will be ready to start listening via startListening method.

Goodness of pronunciation scoring (GoP) requires ASR Bundle with relevant models; if such models are not available in the ASR Bundle, GoP scores will not be computed regardless of the computeGoP setting.

Parameters
pathToDecodingGraphDirectory: absolute path to the custom decoding graph directory, which was created ahead of time and packaged with the app.
computeGoP: goodness of pronunciation scores will be computed if this parameter is set to TRUE
Returns
TRUE if successful, FALSE otherwise.
Warning
If the custom decoding graph was built with rescoring capability, all the resources will be loaded regardless of how the rescore parameter is set.

◆ prepareForListeningWithDecodingGraphWithName:withGoPComputation:

- (BOOL) prepareForListeningWithDecodingGraphWithName: (nonnull NSString *) dgName
withGoPComputation: (BOOL) computeGoP 

Prepare for recognition by loading a decoding graph that was prepared via the [createDecodingGraphFromPhrases:forRecognizer:usingAlternativePronunciations:andTask:andSaveWithName:] family of methods.

After calling this method, recognizer will load the decoding graph into memory and it will be ready to start listening via startListening method.

Goodness of pronunciation scoring (GoP) requires ASR Bundle with relevant models; if such models are not available in the ASR Bundle, GoP scores will not be computed regardless of the computeGoP setting.

Parameters
dgName: name of the decoding graph
computeGoP: goodness of pronunciation scores will be computed if this parameter is set to TRUE
Returns
TRUE if successful, FALSE otherwise
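A sketch of the prepare-then-listen sequence; the graph name "myGraph" is a placeholder for a decoding graph you created earlier:

```objc
// Load a previously created decoding graph and start listening.
// "myGraph" is an illustrative name.
KIOSRecognizer *recognizer = [KIOSRecognizer sharedInstance];
if ([recognizer prepareForListeningWithDecodingGraphWithName:@"myGraph"
                                          withGoPComputation:NO]) {
    NSString *responseId;
    if ([recognizer startListening:&responseId]) {
        NSLog(@"Listening; responseId: %@", responseId);
    }
}
```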

◆ reinitAudioStack

- (void) reinitAudioStack

Reinitializes the audio stack. Calling this method is equivalent to calling deactivateAudioStack followed by activateAudioStack.

◆ removeAllSpeakerAdaptationProfiles

+ (BOOL) removeAllSpeakerAdaptationProfiles

Remove all adaptation profiles for all speakers.

◆ removeSpeakerAdaptationProfiles:

+ (BOOL) removeSpeakerAdaptationProfiles: (nonnull NSString *) speakerName

Removes all adaptation profiles for the speaker with name speakerName.

Parameters
speakerName: name of the speaker whose profiles should be removed

◆ resetSpeakerAdaptation

- (void) resetSpeakerAdaptation

Resets the speaker adaptation profile in the current recognizer session. Calling this method will also reset the speakerName to 'default'. If a speaker adaptation profile exists in the filesystem for the 'default' speaker, it will be used; if not, the initial models from the ASR Bundle will be the baseline.

You would typically use this method if there is a new start of a certain activity in your app that may entail a new speaker. For example, a practice view is started and there is a good chance a different user may be using the app.

If speaker (pseudo)identities are known, you don't need to call this method; you can just switch speakers by calling adaptToSpeakerWithName: with the appropriate speakerName.

Following are the tradeoffs when using this method:

  • the downside of resetting user profile for the existing user is that ASR performance will be reset to the baseline (no adaptation), which may slightly degrade performance in the first few interactions
  • the downside of NOT resetting user profile for a new user is that, depending on the characteristics of the new user's voice, ASR performance may initially be degraded slightly (when comparing to the baseline case of no adaptation)

Calls to this method will be ignored if recognizer is in LISTENING state.

If you are resetting the adaptation profile and you know the user's (pseudo)identity, you may want to call the saveSpeakerAdaptationProfile method prior to calling this method, so that on subsequent user switches adaptation profiles can be reloaded and recognition starts with the speaker profile trained on previous sessions' audio.

◆ saveSpeakerAdaptationProfile

- (void) saveSpeakerAdaptationProfile

Saves speaker profile (used for adaptation) in the filesystem.

Speaker profile will be saved in the file system, in Caches/KaldiIOS-speaker-profiles/ directory. Profile filename is composed of the speakerName, asrBundle, and audioRoute.

◆ setBluetoothA2DPOutput:

- (void) setBluetoothA2DPOutput: (BOOL) value

Enables or disables bluetooth output via AVAudioSessionCategoryOptionAllowBluetoothA2DP category option of AVAudioSession.

Parameters
value: set to YES to enable Bluetooth A2DP output, or NO to disable it.

◆ setLogLevel:

+ (void) setLogLevel: (KIOSRecognizerLogLevel) logLevel

Set log level for the framework.

Parameters
logLevel: one of KIOSRecognizerLogLevel values

Default value is KIOSRecognizerLogLevelWarning.

◆ setVadGating:

- (void) setVadGating: (BOOL) value

Sets Voice Activity Detection gating to either TRUE or FALSE. If set to TRUE, the recognizer will utilize a simple Voice Activity Detection module and no recognition will occur until voice activity is detected. From the moment voice activity is detected, the recognizer operates in standard mode.

All the information in KIOSResponse (audio file, ASR result, etc.) is based on audio from the moment voice activity was detected, NOT from the moment of the startListening call.

This should be set to YES primarily in always-on listening mode to minimize the number of listening restarts as well as to minimize battery utilization.

Parameters
value: TRUE or FALSE

◆ setVADParameter:toValue:

- (void) setVADParameter: (KIOSVadParameter) parameter
toValue: (float) value 

Set any of KIOSVadParameter Voice Activity Detection parameters. These parameters can be set at any time. If they are set while the recognizer is listening, they will be used immediately.

Parameters
parameter: one of KIOSVadParameter
value: duration in seconds for the parameter
Warning
Setting VAD rules in the config file within the ASR bundle will NOT have any effect. Values for these parameters are set to their defaults upon initialization of KIOSRecognizer. They can only be changed programmatically, using this method.

◆ sharedInstance

+ (nullable KIOSRecognizer *) sharedInstance

Returns the shared instance of the recognizer.

Returns
The shared recognizer instance
Warning
If the engine has not been initialized by calling +initWithASRBundle:, this method will return nil.

◆ startListening:

- (BOOL) startListening: (NSString *_Nullable *_Nullable) responseId

Start processing audio from the microphone.

Returns
TRUE if successful, FALSE otherwise

After calling this method, recognizer will listen to and decode audio coming through the microphone using decoding graph you specified via one of the prepareForListening methods.

For example:

NSString *responseId;
if ([recognizer startListening:&responseId]) {
    NSLog(@"Started listening; responseId: %@", responseId);
}

The listening process will stop after:

  • an explicit call to [KIOSRecognizer stopListening] is made
  • one of the VAD thresholds set via [KIOSRecognizer setVADParameter:toValue:] is triggered (for example, max duration without speech, or end-silence, etc.),
  • if audio interrupt occurs (phone call, audible notification, app goes to background, etc.).

When the recognizer stops listening due to VAD triggering, it will call [recognizerFinalResponse:forRecognizer:]([KIOSRecognizerDelegate recognizerFinalResponse:forRecognizer:]) callback method.

When the recognizer stops listening due to audio interrupt, no callback methods will be triggered until audio interrupt is over.

If decoding graph was created with the trigger phrase support, recognizer will listen continuously until the trigger phrase is recognized, then it will switch over to the standard mode with partial results being reported via [recognizerPartialResult:forRecognizer:]([KIOSRecognizerDelegate recognizerPartialResult:forRecognizer:]) callback.

VAD settings can be modified via setVADParameter:toValue: method.

Parameters
responseId: address of a pointer to an NSString, which will be set to a responseId if startListening is successful. responseId is a unique identifier of the response. You can pass NULL to this method if you don't need the responseId.
Note
You will need to call one of the prepareForListening methods before calling this method. You will also need to make sure that the user has granted audio recording permission; see AVAudioSessionRecordPermission and AVAudioSession requestRecordPermission: in the AVFoundation framework for details.

◆ stopListening

- (void) stopListening

Stop the recognizer from processing incoming audio.

Warning
Calling this method will not trigger recognizerFinalResponse delegate call. If you would like to obtain the final result, you can dynamically set VAD timeout thresholds to trigger finalResponseCallback:.

◆ teardown

+ (BOOL) teardown

Teardown the recognizer and all related resources. This method would typically be called when you want to create a new recognizer that uses a different ASR Bundle (currently the SDK supports only one recognizer instance at a time). For example, if you are using an English recognizer and want to switch to a Spanish recognizer, you would teardown the English recognizer and then create the Spanish recognizer.

Returns
TRUE if the singleton instance of the recognizer was successfully torn down or if sharedInstance is already nil, FALSE otherwise.
Warning
This method will return FALSE if recognizer is in KIOSRecognizerStateListening or KIOSRecognizerStateFinalProcessing state. Any application level references to sharedInstance will become invalid if the teardown is successful.
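A sketch of switching ASR Bundles via teardown; both bundle names are illustrative placeholders:

```objc
// Switch from an English to a Spanish recognizer. The bundle names
// are placeholders; use the names of your actual ASR Bundles.
if ([KIOSRecognizer teardown]) {
    // Any previously held sharedInstance references are now invalid.
    [KIOSRecognizer initWithASRBundle:@"keenAK3-nnet3chain-es-es"];
    self.recognizer = [KIOSRecognizer sharedInstance];
} else {
    NSLog(@"Teardown failed; recognizer may still be listening");
}
```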

◆ version

+ (nonnull NSString *) version

Version of the KeenASR framework.

Property Documentation

◆ asrBundleName

- (NSString*) asrBundleName
read, nonatomic, assign

Name of the ASR Bundle (the name of the directory that contains all the ASR resources). This will be the last component of the asrBundlePath.

◆ asrBundlePath

- (NSString*) asrBundlePath
read, nonatomic, assign

Absolute path to the ASR bundle where acoustic models, config, etc. reside

◆ currentDecodingGraphName

- (NSString*) currentDecodingGraphName
read, nonatomic, assign

Name of the decoding graph currently used by the recognizer

◆ delegate

- (id<KIOSRecognizerDelegate>) delegate
readwrite, nonatomic, weak

Delegate, which handles KIOSRecognizerDelegate protocol methods

◆ handleNotifications

- (BOOL) handleNotifications
readwrite, nonatomic, assign

If set to YES (default behavior), the SDK will handle various notifications related to app activity. When the app goes to the background, or a phone call or an audio interrupt comes through, the SDK will stop listening and teardown the internal audio stack; upon the app coming back to the foreground or the interrupt ending, it will reinitialize the internal audio stack.

If set to NO, it is the developer's responsibility to handle notifications that may affect audio capture. In this case, you will need to stop listening and deactivate the KeenASR audio stack if an audio interrupt comes through, and then reinitialize the audio stack when the interrupt is over. Setting handleNotifications to NO allows the SDK to work in background mode; you will still need to properly handle audio interrupts using the deactivateAudioStack, activateAudioStack or reinitAudioStack, and stopListening methods.
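When handleNotifications is NO, interrupt handling might be sketched as follows; observing AVAudioSessionInterruptionNotification is an assumed app-side setup, not something the SDK does for you in this mode:

```objc
// Sketch: manual audio-interrupt handling with handleNotifications == NO.
// Assumes this method was registered as an observer for
// AVAudioSessionInterruptionNotification.
- (void)handleAudioInterruption:(NSNotification *)notification {
    NSUInteger type = [notification.userInfo[AVAudioSessionInterruptionTypeKey]
                          unsignedIntegerValue];
    KIOSRecognizer *recognizer = [KIOSRecognizer sharedInstance];
    if (type == AVAudioSessionInterruptionTypeBegan) {
        // Stops listening (if needed) and releases the audio session.
        [recognizer deactivateAudioStack];
    } else if (type == AVAudioSessionInterruptionTypeEnded) {
        [recognizer activateAudioStack];
    }
}
```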

◆ miscDataDirectory

- (NSString*) miscDataDirectory
read, nonatomic, assign

Path to the directory where miscellaneous data will be saved

◆ recognizerState

- (KIOSRecognizerState) recognizerState
read, atomic, assign

State of the recognizer, a read-only property that takes one of KIOSRecognizerState values

◆ recordingsDir

- (NSString*) recordingsDir
read, nonatomic, assign

Path to the directory where audio/json files will be saved for Dashboard uploads

◆ rescore

- (BOOL) rescore
readwrite, nonatomic, assign

If set to YES, the recognizer will perform rescoring for the final result, using the rescoring language model provided in the custom decoding graph that's bundled with the application.

Default is YES.

Warning
If the resources necessary for rescoring are not available in the custom decoding graph directory bundled with the app, and rescore is set to YES, rescoring step will be skipped.

The documentation for this class was generated from the following file: KIOSRecognizer.h