I want to incorporate continuous (hands free) voice command recognition in my app for home automation

Problem description

I have created a simple Android app to control relays connected to my Raspberry Pi. I use buttons, plus basic voice recognition that triggers those buttons to switch the corresponding relay channels on and off.

As it stands, the voice recognition part is handled by RecognizerIntent: I have to press a button in the app to open the Google voice prompt, which listens for my spoken command and activates/deactivates the corresponding button controlling the relay switch.

I want to do the same thing with continuous speech recognition, where the app listens for my commands all the time without the user having to press a button, making it hands free.

Here is my existing code. It is a very simple voice recognition approach that lets me toggle the buttons for the various devices connected to the relays:

public void micclick(View view) {
        if(view.getId()==R.id.mic)
        {promptSpeechInput();}
}

private void promptSpeechInput() {
    Intent i= new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    i.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    i.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
    i.putExtra(RecognizerIntent.EXTRA_PROMPT,"Speak!");
    try{
        startActivityForResult(i,100);

    }
    catch (ActivityNotFoundException a)
    {
        Toast.makeText(MainActivity.this,"Sorry, your device doesn't support speech input",Toast.LENGTH_SHORT).show();
    }
}
public void onActivityResult(int requestCode, int resultCode, Intent i) {
    super.onActivityResult(requestCode, resultCode, i);
    String voicetxt;
    switch (requestCode) {
        case 100:
            if (resultCode == RESULT_OK && i != null) {
                ArrayList<String> result2 = i.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                voicetxt = result2.get(0);
                if (voicetxt.equals("fan on")) {
                    StringBuffer result=new StringBuffer();
                    toggleButton1.setChecked(true);
                    result.append("Fan: ").append(toggleButton1.getText());
                    sc.onRelayNumber="a";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(),Toast.LENGTH_SHORT).show();
                }
                if (voicetxt.equals("fan of")) {
                    StringBuffer result=new StringBuffer();
                    toggleButton1.setChecked(false);
                    result.append("Fan: ").append(toggleButton1.getText());
                    sc.onRelayNumber = "a_off";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(),Toast.LENGTH_SHORT).show();
                }
                if (voicetxt.equals("light on")) {
                    StringBuffer result=new StringBuffer();
                    toggleButton2.setChecked(true);
                    result.append("Light: ").append(toggleButton2.getText());
                    sc.onRelayNumber = "b";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(),Toast.LENGTH_SHORT).show();
                }
                if (voicetxt.equals("light off")) {
                    StringBuffer result=new StringBuffer();
                    toggleButton2.setChecked(false);
                    result.append("Light: ").append(toggleButton2.getText());
                    sc.onRelayNumber = "b_off";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(),Toast.LENGTH_SHORT).show();
                }
                if (voicetxt.equals("air conditioner on")) {
                    StringBuffer result=new StringBuffer();
                    toggleButton3.setChecked(true);
                    result.append("AC: ").append(toggleButton3.getText());
                    sc.onRelayNumber = "c";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(),Toast.LENGTH_SHORT).show();
                }
                if (voicetxt.equals("air conditioner of")) {
                    StringBuffer result=new StringBuffer();
                    toggleButton3.setChecked(false);
                    result.append("AC: ").append(toggleButton3.getText());
                    sc.onRelayNumber = "c_off";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(),Toast.LENGTH_SHORT).show();
                }
                if (voicetxt.equals("heater on")) {
                    StringBuffer result=new StringBuffer();
                    toggleButton4.setChecked(true);
                    result.append("Heater: ").append(toggleButton4.getText());
                    sc.onRelayNumber = "d";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(),Toast.LENGTH_SHORT).show();
                }
                if (voicetxt.equals("heater off")) {
                    StringBuffer result=new StringBuffer();
                    toggleButton4.setChecked(false);
                    result.append("Heater: ").append(toggleButton4.getText());
                    sc.onRelayNumber = "d_off";
                    new Thread(sc).start();
                    Toast.makeText(MainActivity.this, result.toString(),Toast.LENGTH_SHORT).show();
                }
            }
            break;
    }
}

I would like to achieve the same functionality without pressing a button. Please note that I am new to Android app development. If possible, please describe the use of any external libraries that might be needed, since I don't think continuous recognition is possible with Google's RecognizerIntent. I suspect I may need to include a library such as CMUSphinx, but I am not sure how to go about it.

Recommended answer

There are a few things you can do for continuous recognition/dictation mode. You can use the Google speech recognition built into Android itself, although it is not recommended for continuous recognition (as stated at https://developer.android.com/reference/android/speech/SpeechRecognizer.html):

The implementation of this API is likely to stream audio to remote servers to perform speech recognition. As such this API is not intended to be used for continuous recognition, which would consume a significant amount of battery and bandwidth.

But if you really need it, you can work around this by creating your own class that implements IRecognitionListener. (I wrote this in Xamarin.Android; the syntax is very similar to native Android.)

public class CustomRecognizer : Java.Lang.Object, IRecognitionListener, TextToSpeech.IOnInitListener
{
    private SpeechRecognizer _speech;

    private Intent _speechIntent;

    // fields referenced below; declared here so the class compiles
    private readonly Context _context;
    private TextToSpeech txtspeech;   // assign this if you also create a TextToSpeech(context, this)

    public string Words;


    public CustomRecognizer(Context _context)
    {
        this._context = _context;
        Words = "";
        _speech = SpeechRecognizer.CreateSpeechRecognizer(this._context);
        _speech.SetRecognitionListener(this);
        _speechIntent = new Intent(RecognizerIntent.ActionRecognizeSpeech);
        _speechIntent.PutExtra(RecognizerIntent.ExtraLanguageModel, RecognizerIntent.LanguageModelFreeForm);
        _speechIntent.PutExtra(RecognizerIntent.ExtraPreferOffline, true);   // prefer on-device recognition where available
        _speechIntent.PutExtra(RecognizerIntent.ExtraSpeechInputCompleteSilenceLengthMillis, 1000); 
        _speechIntent.PutExtra(RecognizerIntent.ExtraSpeechInputPossiblyCompleteSilenceLengthMillis, 1000);
        _speechIntent.PutExtra(RecognizerIntent.ExtraSpeechInputMinimumLengthMillis, 1500);
    }

    void startover()
    {
        _speech.Destroy();
        _speech = SpeechRecognizer.CreateSpeechRecognizer(this._context);
        _speech.SetRecognitionListener(this);
        _speechIntent = new Intent(RecognizerIntent.ActionRecognizeSpeech);
        _speechIntent.PutExtra(RecognizerIntent.ExtraSpeechInputCompleteSilenceLengthMillis, 1000);
        _speechIntent.PutExtra(RecognizerIntent.ExtraSpeechInputPossiblyCompleteSilenceLengthMillis, 1000);
        _speechIntent.PutExtra(RecognizerIntent.ExtraSpeechInputMinimumLengthMillis, 1500);
        StartListening();
    }
    public void StartListening()
    {
        _speech.StartListening(_speechIntent);
    }

    public void StopListening()
    {
        _speech.StopListening();
    }

    public void OnBeginningOfSpeech()
    {

    }

    public void OnBufferReceived(byte[] buffer)
    {
    }

    public void OnEndOfSpeech()
    {

    }

    public void OnError([GeneratedEnum] SpeechRecognizerError error)
    {
        Words = error.ToString();
        startover();
    }

    public void OnEvent(int eventType, Bundle @params)
    {
    }

    public void OnPartialResults(Bundle partialResults)
    {
    }

    public void OnReadyForSpeech(Bundle @params)
    {
    }

    public void OnResults(Bundle results)
    {
        var matches = results.GetStringArrayList(SpeechRecognizer.ResultsRecognition);
        if (matches == null)
            Words = "Null";
        else if (matches.Count != 0)
            Words = matches[0];
        else
            Words = "";

        // do anything you want with the result here

        startover();
    }

    public void OnRmsChanged(float rmsdB)
    {

    }

    public void OnInit([GeneratedEnum] OperationResult status)
    {
        if (status == OperationResult.Error)
            txtspeech.SetLanguage(Java.Util.Locale.Default);
    }


}

Call it from your Activity:

void StartRecording()
{
    // check that the device actually has a microphone
    bool hasMic = PackageManager.HasSystemFeature(PackageManager.FeatureMicrophone);

    if (!hasMic)
    {
        // no microphone, no recording. Disable the button and output an alert
        Toast.MakeText(this, "NO MICROPHONE", ToastLength.Short).Show();
    }
    else
    {
        // you can pass any object you want to connect to your recognizer here (I am passing the activity)
        CustomRecognizer voice = new CustomRecognizer(this);
        voice.StartListening();
    }
}

Don't forget to request permission to use the microphone!
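
In the question's native Java Activity, the runtime check is only a few lines. This is a minimal sketch assuming AndroidX; the request code 200 and the helper name ensureMicPermission() are arbitrary, not part of the original answer:

// AndroidManifest.xml also needs:
//   <uses-permission android:name="android.permission.RECORD_AUDIO" />

import android.Manifest;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

// inside the Activity, call this before starting any recognizer
private static final int REQUEST_RECORD_AUDIO = 200;   // arbitrary request code

private void ensureMicPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.RECORD_AUDIO}, REQUEST_RECORD_AUDIO);
    }
}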

Explanation:

- This gets rid of the annoying "click to start recording" prompt.

- It keeps listening from the moment you call StartListening() and never stops, because I call startover() / StartListening() again every time it finishes a recording.

- This is a rather crude workaround, because while your recording is being processed the recognizer receives no voice input until StartListening() is called again (there is no way around this).

- Google recognition is not great for voice commands, because its language model is built for full sentences in the given language, so you cannot restrict the vocabulary and Google will always try to produce a "good sentence".
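
For reference, since the app in the question is written in Java, here is a minimal native-Android sketch of the same always-restart pattern as the Xamarin class above. The class name ContinuousListener and the startOver() helper are illustrative names, not from the original answer, and the command matching inside onResults() is left as a comment:

import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import java.util.ArrayList;

public class ContinuousListener implements RecognitionListener {

    private final Context context;
    private final Intent intent;
    private SpeechRecognizer recognizer;

    public ContinuousListener(Context context) {
        this.context = context;
        intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        createRecognizer();
    }

    private void createRecognizer() {
        recognizer = SpeechRecognizer.createSpeechRecognizer(context);
        recognizer.setRecognitionListener(this);
    }

    public void startListening() {
        recognizer.startListening(intent);
    }

    // tear down and restart the recognizer whenever a result or an error arrives
    private void startOver() {
        recognizer.destroy();
        createRecognizer();
        startListening();
    }

    @Override
    public void onResults(Bundle results) {
        ArrayList<String> matches =
                results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        if (matches != null && !matches.isEmpty()) {
            String command = matches.get(0).toLowerCase();
            // compare "command" with your phrases ("fan on", "light off", ...)
            // and toggle the corresponding relay, as in onActivityResult() above
        }
        startOver();
    }

    @Override
    public void onError(int error) {
        startOver();
    }

    // the remaining callbacks can stay empty for this sketch
    @Override public void onReadyForSpeech(Bundle params) {}
    @Override public void onBeginningOfSpeech() {}
    @Override public void onRmsChanged(float rmsdB) {}
    @Override public void onBufferReceived(byte[] buffer) {}
    @Override public void onEndOfSpeech() {}
    @Override public void onPartialResults(Bundle partialResults) {}
    @Override public void onEvent(int eventType, Bundle params) {}
}

You would create one instance (for example in onCreate()) and call startListening() once; note that SpeechRecognizer must be invoked from the application's main thread and the RECORD_AUDIO permission must already be granted.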

For better results and UX I would really recommend the Google Cloud Speech API (but it has to be online, and it costs money). The second suggestion is CMUSphinx/PocketSphinx, which is open source and supports offline mode, but everything has to be done by hand.

PocketSphinx advantages:

  1. You can create your own dictionary
  2. Works in offline mode
  3. You can train the acoustic model yourself (voice, etc.) so it is tuned to your environment and pronunciation
  4. You can get real-time results by accessing PartialResult

PocketSphinx disadvantages: you have to do everything manually, from setting up the acoustic model, dictionary, language model, thresholds and so on (overkill if you just want something simple). A minimal setup sketch is shown below.
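
To give a sense of what that manual setup looks like, here is a minimal keyword-spotting sketch with the pocketsphinx-android library. The asset names (en-us-ptm, cmudict-en-us.dict, commands.list), the search name "commands" and the class name SphinxCommands are assumptions following the library's demo layout, not something from the original answer:

import android.content.Context;
import edu.cmu.pocketsphinx.Assets;
import edu.cmu.pocketsphinx.Hypothesis;
import edu.cmu.pocketsphinx.RecognitionListener;
import edu.cmu.pocketsphinx.SpeechRecognizer;
import edu.cmu.pocketsphinx.SpeechRecognizerSetup;
import java.io.File;
import java.io.IOException;

public class SphinxCommands implements RecognitionListener {

    private static final String COMMANDS = "commands";   // assumed search name
    private SpeechRecognizer recognizer;

    public void start(Context context) throws IOException {
        Assets assets = new Assets(context);
        File assetsDir = assets.syncAssets();   // copies the model files out of the APK

        recognizer = SpeechRecognizerSetup.defaultSetup()
                .setAcousticModel(new File(assetsDir, "en-us-ptm"))
                .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
                .getRecognizer();
        recognizer.addListener(this);

        // commands.list holds one phrase per line, e.g. "fan on /1e-1/",
        // optionally followed by a detection threshold
        recognizer.addKeywordSearch(COMMANDS, new File(assetsDir, "commands.list"));
        recognizer.startListening(COMMANDS);
    }

    @Override
    public void onPartialResult(Hypothesis hypothesis) {
        if (hypothesis == null) return;
        String command = hypothesis.getHypstr();
        // toggle the matching relay here, then restart the same search
        recognizer.stop();
        recognizer.startListening(COMMANDS);
    }

    @Override public void onResult(Hypothesis hypothesis) {}
    @Override public void onBeginningOfSpeech() {}
    @Override public void onEndOfSpeech() {}
    @Override public void onError(Exception e) {}
    @Override public void onTimeout() {}
}

Because the recognizer only ever matches the phrases listed in the keyword file, it avoids the "good sentence" problem described above and runs fully offline.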
