How to implement speech-to-text via the Speech framework in Objective-C?
Problem description
I want to use the iOS Speech framework for speech recognition in my Objective-C app.
I found some Swift examples, but nothing in Objective-C.
Is it possible to access this framework from Objective-C? If so, how?
Recommended answer
After spending enough time looking for an Objective-C sample, even in Apple's documentation, and finding nothing decent, I figured it out myself.
Header file (.h)
/*!
 * Import the Speech framework, assign the delegate and declare variables
 */
#import <AVFoundation/AVFoundation.h>
#import <Speech/Speech.h>

@interface ViewController : UIViewController <SFSpeechRecognizerDelegate> {
    SFSpeechRecognizer *speechRecognizer;
    SFSpeechAudioBufferRecognitionRequest *recognitionRequest;
    SFSpeechRecognitionTask *recognitionTask;
    AVAudioEngine *audioEngine;
}
@end
Implementation file (.m)
- (void)viewDidLoad {
    [super viewDidLoad];

    // Initialize the speech recognizer with a locale identifier such as en_US;
    // [SFSpeechRecognizer supportedLocales] returns the full set of supported locales
    speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:[[NSLocale alloc] initWithLocaleIdentifier:@"en_US"]];

    // Set the speech recognizer delegate
    speechRecognizer.delegate = self;

    // Request authorization so the user is asked for permission and you get an
    // authorized response; also remember to add the usage descriptions to the
    // .plist file, check the repo's readme file or this project's Info.plist
    [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
        switch (status) {
            case SFSpeechRecognizerAuthorizationStatusAuthorized:
                NSLog(@"Authorized");
                break;
            case SFSpeechRecognizerAuthorizationStatusDenied:
                NSLog(@"Denied");
                break;
            case SFSpeechRecognizerAuthorizationStatusNotDetermined:
                NSLog(@"Not Determined");
                break;
            case SFSpeechRecognizerAuthorizationStatusRestricted:
                NSLog(@"Restricted");
                break;
            default:
                break;
        }
    }];
}
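// Note (an addition, not part of the original answer): Apple does not guarantee
// that the requestAuthorization: callback runs on the main queue, so any UI work
// in the switch above should be dispatched back to it, e.g. with a hypothetical
// recordButton outlet:
//
//   dispatch_async(dispatch_get_main_queue(), ^{
//       self.recordButton.enabled = (status == SFSpeechRecognizerAuthorizationStatusAuthorized);
//   });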
/*!
 * @brief Starts listening and recognizing user input through the
 * phone's microphone
 */
- (void)startListening {

    // Initialize the AVAudioEngine
    audioEngine = [[AVAudioEngine alloc] init];

    // Make sure there's not a recognition task already running
    if (recognitionTask) {
        [recognitionTask cancel];
        recognitionTask = nil;
    }

    // Start an AVAudioSession configured for recording
    NSError *error;
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:&error];
    [audioSession setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];

    // Start a recognition task; the result handler logs partial results and tears
    // down the audio pipeline on error or once the final result has arrived
    recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    AVAudioInputNode *inputNode = audioEngine.inputNode;
    recognitionRequest.shouldReportPartialResults = YES;
    recognitionTask = [speechRecognizer recognitionTaskWithRequest:recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
        BOOL isFinal = NO;
        if (result) {
            // Whatever you say into the microphone after pressing the button
            // should be logged in the console
            NSLog(@"RESULT:%@", result.bestTranscription.formattedString);
            isFinal = result.isFinal;
        }
        if (error != nil || isFinal) {
            [audioEngine stop];
            [inputNode removeTapOnBus:0];
            recognitionRequest = nil;
            recognitionTask = nil;
        }
    }];

    // Install a tap on the input node so captured audio buffers are fed to the request
    AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
    [inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
        [recognitionRequest appendAudioPCMBuffer:buffer];
    }];

    // Start the audio engine, i.e. start listening
    [audioEngine prepare];
    [audioEngine startAndReturnError:&error];
    NSLog(@"Say Something, I'm listening");
}
- (IBAction)microPhoneTapped:(id)sender {
    if (audioEngine.isRunning) {
        [audioEngine stop];
        [recognitionRequest endAudio];
    } else {
        [self startListening];
    }
}
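The manual stop path above relies on the recognition callback to remove the tap installed on the input node, and installing a second tap on the same bus raises an exception. An explicit teardown helper makes the cleanup deterministic; this is a sketch added on top of the original answer, not part of it:

/*!
 * @brief Stops the audio engine and finalizes the recognition request
 * (a sketch, not part of the original answer)
 */
- (void)stopListening {
    [audioEngine stop];
    // Remove the tap so a subsequent startListening doesn't install a second one
    [audioEngine.inputNode removeTapOnBus:0];
    // Tell the request no more audio is coming so a final result can be delivered
    [recognitionRequest endAudio];
    // Deactivate the session so other apps can resume their audio
    NSError *error;
    [[AVAudioSession sharedInstance] setActive:NO
                                  withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                                        error:&error];
}

With this in place, microPhoneTapped could call stopListening instead of stopping the engine and ending the audio inline.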
Now add the SFSpeechRecognizerDelegate method to be notified whether the speech recognizer is available.
#pragma mark - SFSpeechRecognizerDelegate Delegate Methods

- (void)speechRecognizer:(SFSpeechRecognizer *)speechRecognizer availabilityDidChange:(BOOL)available {
    NSLog(@"Availability:%d", available);
}
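In a real UI this callback is typically used to gate the record button, since the recognizer can become unavailable, for example when the device loses its connection to Apple's servers. An extended sketch, again assuming the hypothetical recordButton outlet mentioned earlier:

- (void)speechRecognizer:(SFSpeechRecognizer *)speechRecognizer availabilityDidChange:(BOOL)available {
    NSLog(@"Availability:%d", available);
    dispatch_async(dispatch_get_main_queue(), ^{
        // Hypothetical outlet; disable recording while the recognizer is unavailable
        self.recordButton.enabled = available;
    });
}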
Notes and disclaimers
Remember to modify the .plist file to get the user's authorization for speech recognition and for using the microphone. The <string> values must of course be customized to your needs; you can do this either by creating and modifying the entries in the Property List editor, or by right-clicking the .plist file, choosing Open As -> Source Code, and pasting the following lines just before the </dict> tag.
<key>NSMicrophoneUsageDescription</key>
<string>This app uses your microphone to record what you say, so watch what you say!</string>
<key>NSSpeechRecognitionUsageDescription</key>
<string>This app uses speech recognition to transform your spoken words into text and then analyze them, so watch what you say!</string>
Also remember that the Speech framework requires iOS 10.0+, so the project must target at least iOS 10.0 to import it.
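If the project also has to run on older systems, the Speech calls can be guarded at runtime; a sketch using the @available check (requires Xcode 9+):

if (@available(iOS 10.0, *)) {
    [self startListening];
} else {
    // Speech recognition is simply unavailable on iOS 9 and earlier
    NSLog(@"Speech recognition requires iOS 10.0 or later");
}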
To run and test it, all you need is a very basic UI: just create a UIButton and assign the microPhoneTapped action to it. When pressed, the app should start listening and log everything it hears through the microphone to the console (in the sample code NSLog is the only thing receiving the text); pressing it again should stop the recording.
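If you prefer wiring the button up in code instead of Interface Builder, a minimal sketch added to viewDidLoad would do (the frame and title are arbitrary):

// Create a system button and hook it up to the microPhoneTapped: action
UIButton *micButton = [UIButton buttonWithType:UIButtonTypeSystem];
micButton.frame = CGRectMake(20.0, 80.0, 160.0, 44.0);
[micButton setTitle:@"Microphone" forState:UIControlStateNormal];
[micButton addTarget:self
              action:@selector(microPhoneTapped:)
    forControlEvents:UIControlEventTouchUpInside];
[self.view addSubview:micButton];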
I created a GitHub repo with a sample project, enjoy!