Using Voice Commands In React

Aidan McBride
4 min read · Nov 6, 2020


There are a number of reasons a developer might want the ability to use speech to interact with a web application: to improve accessibility for hearing-impaired individuals, to add voice controls to games, or simply to make a website more interactive for its users. Whatever the reason, a plugin called react-speech-recognition makes it easy to use voice commands in your React application.

This blog will outline how to set up this resource in a React application, along with a couple of possible uses that may come in handy. Overall, this tool is sure to spark some creativity and ingenuity, and you should find your own ways to incorporate it into your projects.

Setting Up The Application

If you have never used React before, it is a component-based frontend library for JavaScript. I will not go into too much detail on the inner workings of React, but I will say that we will be using mainly functional components, as opposed to class components, in this application.

We can create our app with the following command:

npx create-react-app your-app-name

For this example, let’s call it voice-commands-app. Once we have created our React app, let’s go ahead and install react-speech-recognition with the following command:

npm install --save react-speech-recognition

This allows us to use react-speech-recognition anywhere in our application where we import it. In this application, all the work will be done from the App.js file, so we can add the following import there:

import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'

We should then clear out everything in the return statement of our App.js file except a div tag. We can also throw in an h1 tag as a title for our application. The App.js file should now look something like this:
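Here is a minimal sketch of what that cleared-out App.js might look like; the heading text is just a placeholder, and the import will be put to work in the next section.

import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

function App() {
  return (
    <div className="App">
      <h1>Voice Commands App</h1>
    </div>
  );
}

export default App;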

Functionality

Now that we have set up the basic outline for our application, we can start integrating the speech recognition implementation! If you look at the first example in the react-speech-recognition documentation, you will see a file very similar to this:
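Adapted into our App.js, that basic example looks roughly like this; the button labels and layout are my own choices, but transcript, resetTranscript, startListening, and stopListening come straight from the library.

import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

function App() {
  // transcript holds everything the microphone has heard so far
  const { transcript, resetTranscript } = useSpeechRecognition();

  return (
    <div className="App">
      <h1>Voice Commands App</h1>
      {/* startListening and stopListening are built-in methods on SpeechRecognition */}
      <button onClick={SpeechRecognition.startListening}>Start</button>
      <button onClick={SpeechRecognition.stopListening}>Stop</button>
      <button onClick={resetTranscript}>Reset</button>
      <p>{transcript}</p>
    </div>
  );
}

export default App;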

SpeechRecognition has built-in methods, including startListening and stopListening. We have created a template that tells the browser when to start and stop listening based on button clicks, and renders the text we capture in a <p> tag. Running npm start and navigating to localhost:3000 in our browser, we can test the application so far!

Using Commands

Perhaps the most useful feature of this tool is the ability to recognize commands from the microphone. By creating a commands array of expected microphone input, we can assign each command a callback function to execute when that command is heard! For now, let us simply print a response message to the screen based on our input, as in the sketch below.
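A minimal sketch of that idea, assuming a message piece of state and two made-up phrases ("hello" and "goodbye"); the commands array is passed into the useSpeechRecognition hook.

import React, { useState } from 'react';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

function App() {
  const [message, setMessage] = useState('');

  // Each command pairs a spoken phrase with a callback to run when it is heard
  const commands = [
    { command: 'hello', callback: () => setMessage('Hi there!') },
    { command: 'goodbye', callback: () => setMessage('See you later!') }
  ];

  const { transcript } = useSpeechRecognition({ commands });

  return (
    <div className="App">
      <h1>Voice Commands App</h1>
      <button onClick={() => SpeechRecognition.startListening({ continuous: true })}>Start</button>
      <button onClick={SpeechRecognition.stopListening}>Stop</button>
      <p>{transcript}</p>
      <p>{message}</p>
    </div>
  );
}

export default App;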

And we receive the following functionality!

When your command is recognized, its callback function is called and displays your message.

A Step Deeper

I wanted to write this blog to explore a step deeper into what can be done with this resource. Though printing responses to commands is certainly doable and a great place to start, we should consider what else we can do with this functionality. Since we are calling a callback function in response, we can execute just about anything we can think of inside that callback function.

For example, we can render new objects to the DOM. Let’s say we wanted to render some shapes. We can set commands that call functions we have pre-defined to create shapes and append them to the page. Our code would look something like this:
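A sketch of that approach, tracking the requested shapes in state and rendering a styled div for each one; the spoken phrases, the addShape helper, and the styling are all placeholders.

import React, { useState } from 'react';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

function App() {
  // Keep a list of the shapes the user has asked for
  const [shapes, setShapes] = useState([]);

  const addShape = (shape) => setShapes((prev) => [...prev, shape]);

  // Saying "draw a square" or "draw a circle" appends a new shape to the page
  const commands = [
    { command: 'draw a square', callback: () => addShape('square') },
    { command: 'draw a circle', callback: () => addShape('circle') }
  ];

  useSpeechRecognition({ commands });

  return (
    <div className="App">
      <h1>Voice Commands App</h1>
      <button onClick={() => SpeechRecognition.startListening({ continuous: true })}>Start</button>
      <button onClick={SpeechRecognition.stopListening}>Stop</button>
      {shapes.map((shape, index) => (
        <div
          key={index}
          style={{
            width: 100,
            height: 100,
            margin: 10,
            display: 'inline-block',
            backgroundColor: 'steelblue',
            borderRadius: shape === 'circle' ? '50%' : 0
          }}
        />
      ))}
    </div>
  );
}

export default App;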

Commands in Action

This is just one more example of the uses of react-speech-recognition. You can do anything from rendering objects, to moving to different pages, to contacting APIs and retrieving data. Anything we can fit in a function, or a number of functions for that matter, can be executed with voice commands.

Multiple Languages

I want to briefly touch on one last amazing feature, which is the ability to recognize different languages. By passing the startListening function an options object with a language key, we can listen for a specific language and respond accordingly. This could be used for anything from building an app to teach someone a second language to translating between users to help ease use of the application.
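A minimal sketch, assuming a hypothetical LanguageButtons component and Chinese and Mexican Spanish locale codes; any language tag the browser's speech engine supports should work here.

import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

function LanguageButtons() {
  const { transcript } = useSpeechRecognition();

  return (
    <div>
      {/* The language key tells the browser which language to listen for */}
      <button onClick={() => SpeechRecognition.startListening({ language: 'zh-CN' })}>
        Listen in Chinese
      </button>
      <button onClick={() => SpeechRecognition.startListening({ language: 'es-MX' })}>
        Listen in Spanish
      </button>
      <p>{transcript}</p>
    </div>
  );
}

export default LanguageButtons;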

In practice

Wrapping Up

A list of languages and extended features can be found in the documentation linked below, but this blog outlines some key uses and how to get started with react-speech-recognition. I encourage you to be creative, as always, and explore all the different possible uses and integrations you can come up with. Start simple, explore, and maybe even add this feature to an existing application. Comment with any questions or concerns, and happy coding!

Resources:

react-speech-recognition on npm: https://www.npmjs.com/package/react-speech-recognition


Written by Aidan McBride

I am a Front End Engineer and graduate of Flat Iron coding bootcamp. Currently I work in the regulatory industry using React.