Using Voice Commands In React

There are a number of reasons a developer might want the ability to use speech to interact with a web application: improving accessibility for hearing impaired users, adding voice control to games, or simply making a website more interactive for its users. Whatever the reason, a package called react-speech-recognition makes it easy to use voice commands in your React application.

This blog will outline how to set up this package in a React application, along with a couple of possible uses that may come in handy. Overall, this tool is sure to spark some creativity and ingenuity, and you should find your own ways to incorporate it into your projects.

Setting Up The Application

We can create our app with the following command:

npx create-react-app your-app-name

For this example, let's call it voice-commands-app. Once we've created our React app, let's go ahead and install react-speech-recognition with the following command:

npm install --save react-speech-recognition

This allows us to use react-speech-recognition anywhere in our application where we import it. In this application, all the work will be done in the App.js file, so that is where we can add the following import:

import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'

We should then clear out everything in the return statement in our App.js file except a div tag. We can also throw in an h1 tag as a title for our application. The App.js file might now look something like this:
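// App.js: a minimal sketch; the heading text and class name are just placeholders.
// The import is unused for now, but it's ready for the next step.
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

function App() {
  return (
    <div className="App">
      <h1>Voice Commands App</h1>
    </div>
  );
}

export default App;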

Functionality

SpeechRecognition has built-in methods, including startListening and stopListening. With these, we can create a template that tells the browser when to start and stop listening based on button clicks, and renders the text we dictate in a <p> tag. Once the template below is in place, running npm start and navigating to localhost:3000 in our browser lets us test the application so far!
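// A sketch of the start/stop template; button labels and layout are placeholders
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

function App() {
  // transcript holds whatever text the browser has recognized so far
  const { transcript, browserSupportsSpeechRecognition } = useSpeechRecognition();

  // Not every browser ships speech recognition, so it's worth guarding for it
  if (!browserSupportsSpeechRecognition) {
    return <p>Sorry, your browser does not support speech recognition.</p>;
  }

  return (
    <div className="App">
      <h1>Voice Commands App</h1>
      {/* continuous: true keeps the microphone open until we explicitly stop */}
      <button onClick={() => SpeechRecognition.startListening({ continuous: true })}>
        Start Listening
      </button>
      <button onClick={SpeechRecognition.stopListening}>Stop Listening</button>
      <p>{transcript}</p>
    </div>
  );
}

export default App;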

Using Commands
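useSpeechRecognition also accepts a commands option: an array of objects, each pairing a spoken phrase with a callback to run when that phrase is heard. Here is a rough sketch (the phrases and the message state are just examples, not anything prescribed by the library):

import { useState } from 'react';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

function App() {
  const [message, setMessage] = useState('');

  // Each command pairs a phrase with a callback; '*' captures whatever is spoken in its place
  const commands = [
    {
      command: 'hello',
      callback: () => setMessage('Hi there!')
    },
    {
      command: 'my name is *',
      callback: (name) => setMessage(`Nice to meet you, ${name}!`)
    }
  ];

  const { transcript } = useSpeechRecognition({ commands });

  return (
    <div className="App">
      <h1>Voice Commands App</h1>
      <button onClick={() => SpeechRecognition.startListening({ continuous: true })}>
        Start Listening
      </button>
      <button onClick={SpeechRecognition.stopListening}>Stop Listening</button>
      <p>{message}</p>
      <p>{transcript}</p>
    </div>
  );
}

export default App;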

With these commands in place, when a spoken phrase is recognized, its callback is called and the corresponding message is displayed on the page.

A Step Deeper

Going a step deeper, we can render new objects to the DOM. Let's say we wanted to render some shapes. We can set up commands that call functions we have pre-defined to create shapes and add them to the page. Our code might look something like this:
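import { useState } from 'react';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

// A sketch: shapes live in component state and are rendered with inline styles,
// which is just one possible approach. The phrases, colors, and sizes are placeholders.
function App() {
  const [shapes, setShapes] = useState([]);

  // Pre-defined functions our commands will call
  const addCircle = () => setShapes((prev) => [...prev, 'circle']);
  const addSquare = () => setShapes((prev) => [...prev, 'square']);
  const clearShapes = () => setShapes([]);

  const commands = [
    { command: 'draw a circle', callback: addCircle },
    { command: 'draw a square', callback: addSquare },
    { command: 'clear the page', callback: clearShapes }
  ];

  useSpeechRecognition({ commands });

  return (
    <div className="App">
      <h1>Voice Commands App</h1>
      <button onClick={() => SpeechRecognition.startListening({ continuous: true })}>
        Start Listening
      </button>
      <button onClick={SpeechRecognition.stopListening}>Stop Listening</button>
      <div>
        {shapes.map((shape, index) => (
          <div
            key={index}
            style={{
              display: 'inline-block',
              width: 50,
              height: 50,
              margin: 10,
              backgroundColor: 'teal',
              borderRadius: shape === 'circle' ? '50%' : 0
            }}
          />
        ))}
      </div>
    </div>
  );
}

export default App;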

Commands in Action

This is just one more example of the uses of react-speech-recognition. You can do anything from rendering objects, to moving between pages, to calling APIs and retrieving data. Anything we can fit in a function (or a number of functions, for that matter) can be executed with a voice command.
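For instance, a command callback could call fetch and store the result in state. Here is a minimal sketch (the endpoint, response shape, and phrase below are purely placeholders):

import { useState } from 'react';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

function App() {
  const [joke, setJoke] = useState('');

  // The callback does the same thing a button's click handler might: call an API
  const commands = [
    {
      command: 'tell me a joke',
      callback: () =>
        fetch('https://api.example.com/random-joke') // placeholder endpoint
          .then((response) => response.json())
          .then((data) => setJoke(data.joke))
    }
  ];

  useSpeechRecognition({ commands });

  return (
    <div className="App">
      <button onClick={() => SpeechRecognition.startListening({ continuous: true })}>
        Start Listening
      </button>
      <p>{joke}</p>
    </div>
  );
}

export default App;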

Multiple Languages
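react-speech-recognition is not limited to English. The startListening method accepts a language option, which lets us listen for speech in other languages, provided the browser's speech recognition supports them.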

In Practice
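As a quick sketch, here is a component that listens for Spanish (the language code and the button labels are just examples):

import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

function App() {
  const { transcript } = useSpeechRecognition();

  return (
    <div className="App">
      {/* A BCP 47 language code tells the browser which language to listen for */}
      <button onClick={() => SpeechRecognition.startListening({ continuous: true, language: 'es-ES' })}>
        Escuchar
      </button>
      <button onClick={SpeechRecognition.stopListening}>Parar</button>
      <p>{transcript}</p>
    </div>
  );
}

export default App;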

Wrapping Up


I'm a Web Developer and a Flatiron coding bootcamp alumnus. I currently work in the financial tech industry as a Front End Engineer.