Monday, October 17, 2011

Kinect speech recognition in Linux

Audio support is now part of libfreenect. Additionally, it is now possible to load the Microsoft SDK version of the audio firmware from Linux, courtesy of a utility called kinect_upload_fw written by Antonio Ospite. This version of the firmware makes the Kinect appear to your computer as a standard USB microphone.
This means you can now record audio using your Kinect, but that's not all that interesting in and of itself. Linux support for speech recognition at this point is not all that great. It is possible to run Dragon NaturallySpeaking via Wine or to use the Sphinx project (after much training), but neither of those approaches really appealed to me for simple voice commands (as opposed to dictation). The Google Android project happens to include a speech recognizer from Nuance which by default is built for an ARM target, like your phone. After extensive hacking around the build system I was able to build it for an x86 target instead, like your desktop. Now you can combine these two things, the Kinect array microphone and the Android voice recognizer, to do some more interesting things, e.g. toggle hand tracking on and off by voice.



How to get started:

1) Check whether you have the "unbuffer" utility, which ships with the Linux scripting language called expect:

which unbuffer

If the above command comes up empty, you can download a copy of unbuffer from the link here:
http://dl.dropbox.com/u/11217419/unbuffer

Copy unbuffer to a directory that is on your PATH, like /usr/local/bin or ~/bin.
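For example, something along these lines should do the trick (assuming ~/bin exists and is already on your PATH):

wget http://dl.dropbox.com/u/11217419/unbuffer
# downloaded files lose their executable bit, so restore it
chmod +x unbuffer
cp unbuffer ~/bin/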

2) Download my precompiled version of the srec subproject from here:
http://dl.dropbox.com/u/11217419/srec_kinect.tgz

3) Save the tarball from step 2 in a convenient directory, then unpack it with this command:
tar xfz srec_kinect.tgz

4) Switch into the subdirectory where I've placed some convenience scripts:
cd srec/config/en.us

5) Open a second terminal and in that second terminal also switch into srec/config/en.us

6) In the first terminal execute
./run_SRecTestAudio.sh
and in the other terminal execute
cat speech_fifo

7) Try speaking into your microphone and wait for recognition results to appear in both terminals. Note that the vocabulary as configured at this point is very small: words like up, down, left, right and the numbers from 1-9 should be recognized properly.

Integrating the Kinect:
1) Acquire Antonio Ospite's firmware tools like so:
git clone http://git.ao2.it/kinect-audio-setup.git/

2) Move into the kinect-audio-setup subdirectory:
cd kinect-audio-setup

3) Build and install kinect_upload_fw (run this as root):
make install

4) Fetch and extract the Microsoft Kinect SDK audio firmware (depending on your directory permissions, this may also need to be run as root):
./kinect_fetch_fw /lib/firmware/kinect

This will extract the firmware to this location by default:
/lib/firmware/kinect/UACFirmware.C9C6E852_35A3_41DC_A57D_BDDEB43DFD04

5) Upload the newly extracted firmware to the Kinect:
kinect_upload_fw /lib/firmware/kinect/UACFirmware.C9C6E852_35A3_41DC_A57D_BDDEB43DFD04

6) Check for a new USB audio device in your dmesg output.
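Something like this should show it (the exact messages will vary):

# recent kernel messages should mention a new USB audio device
dmesg | tail -n 20
# or check the USB device list
lsusb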

7) Configure the Kinect USB audio device to be your primary microphone input and try out run_SRecTestAudio.sh again as described earlier.
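How you do this depends on your audio setup. With PulseAudio, for example, something like the following should work; the source name below is a made-up example, so use the listing to find the real one:

# list capture sources and note the name of the Kinect device
pactl list short sources
# make it the default input (source name here is hypothetical)
pacmd set-default-source alsa_input.usb-Kinect_USB_Audio.analog-stereo
# quick sanity check: record a few seconds from the default input, then play it back
arecord -d 5 test.wav && aplay test.wav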


Additional Notes:

Unfortunately, I no longer remember all the changes I had to make to get the srec project within Android to build for x86. Perhaps someone with better knowledge of the Android build system can chime in in the comments below. In the interim, use the precompiled copy that I have linked above; just be aware that it is old. I think it dates back to the Froyo branch of Android or earlier (I compiled it a long time ago). If you want to take a shot at building the latest srec yourself, check out the Android source code and then look under external/srec/.
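If you do attempt it, a rough sketch of grabbing just the recognizer project from AOSP looks like this (the URL is from memory, and building the sources still expects the full Android tree and its build system around them):

# fetch only the recognizer sources from AOSP
git clone https://android.googlesource.com/platform/external/srec
ls srec/config/en.us/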

The run_SRecTestAudio.sh script sets up the speech recognizer to run on live audio and pipes the recognition results to a FIFO in the same directory called speech_fifo. Running cat in the second terminal lets you read out the recognition results as they arrive. Instead of cat, you could have whatever program needs the recognition results read from the FIFO and act accordingly. Unbuffer is used to make sure you see recognition results right away rather than waiting for the recognizer's output buffer to fill up.
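As a contrived sketch of such a consumer, a small shell loop along these lines could read results off the FIFO and react to them. The words matched here are just assumptions; adjust them to whatever your grammar actually emits.

#!/bin/sh
# consume recognition results as they arrive on the fifo
while read -r result; do
    case "$result" in
        *up*)   echo "heard 'up'   -- start hand tracking here" ;;
        *down*) echo "heard 'down' -- stop hand tracking here" ;;
        *)      echo "unhandled result: $result" ;;
    esac
done < speech_fifo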

The srec recognizer does not require any training but has certain limitations. The most significant limitation is the vocabulary it can recognize. The larger the vocabulary you specify, the less accurate the recognition results will likely be. As a result this recognizer is best used for a small set of frequently used voice commands. Under srec/config/en.us/grammars/ there are a number of .grxml files which define what words the recognizer can understand. You can define your own simple grammar (.grxml) here which, for example, only recognizes the digits on a phone keypad. To do this you can follow the syntax of any of the other .grxml files in the directory and then execute run_compile_grammars.sh which will produce a .g2g file from the .grxml file. There is also a voicetag/texttag file with extension .tcp which needs to point to the g2g file of your choice. You can find the .tcp files under the srec/config/en.us/tcp directory. run_SRecTestAudio.sh points to a tcp file which you can specify.