I have found several programs which can bind vocal input to any video game; however, these programs have overhead issues or are very cumbersome. Voice input is built into Windows, since every system ships with Microsoft's speech stack (the same technology behind accessibility features like Narrator). It is very simple to add commands in C# if CryEngine has support for the Microsoft .NET libraries, and even if it doesn't, a small helper program can feed commands into the MWO application.
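As a rough illustration, here is a minimal sketch of what command recognition looks like with the standard System.Speech.Recognition API (the phrases are just the examples below; wiring the result up to an actual keybind is left out):

using System;
using System.Speech.Recognition;

class VoiceCommandSketch
{
    static void Main()
    {
        // Tiny grammar of the commands we want to recognize
        var commands = new Choices("Fire Artillery Strike", "Fire UAV", "Activate WeaponBank Four");
        var recognizer = new SpeechRecognitionEngine();
        recognizer.LoadGrammar(new Grammar(new GrammarBuilder(commands)));
        recognizer.SetInputToDefaultAudioDevice();
        recognizer.SpeechRecognized += (s, e) =>
        {
            // Only act on confident matches; this is where a keypress would be sent
            if (e.Result.Confidence >= 0.7)
                Console.WriteLine("Heard: " + e.Result.Text);
        };
        recognizer.RecognizeAsync(RecognizeMode.Multiple);
        Console.ReadLine(); // keep listening until Enter is pressed
    }
}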
I have issues moving my hand around the keyboard, and in intense firefights or moments that need precise timing I can't take my hand off the WASD area.
I would love to say "Fire Artillery Strike" at a good moment.
Or "Fire UAV". This would avoid disrupting hand position for players who have limited buttons on their mouse, or whose keyboard layout doesn't solve the hand-position problem.
Often placing such strikes involves evading enemy fire in a faster mech, and moving a hand away from the primary game keys will result in mech explosion and pilot death.
Other commands would be "Activate WeaponBank Four" to move the weapons selector to that bank.
More advanced commands could be "JumpJets Burn 50%"; however, implementing keypresses over a period of time is a bit more complicated (but still pretty easy to do; see the sketch below).
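For example, a rough sketch of a timed keypress using the InputSimulator library referenced further down in this thread (the SPACE key for jump jets, the 3000 ms full-burn figure, and the exact enum/method names are assumptions; they vary between InputSimulator versions):

using System.Threading;
using WindowsInput; // InputSimulator (http://inputsimulator.codeplex.com/)

static class JumpJetSketch
{
    // Hold the jump-jet key for a fraction of a full burn, then release.
    // VirtualKeyCode.SPACE and the 3000 ms full burn are illustrative assumptions.
    public static void Burn(double fraction)
    {
        const int fullBurnMs = 3000;
        InputSimulator.SimulateKeyDown(VirtualKeyCode.SPACE);
        Thread.Sleep((int)(fullBurnMs * fraction));
        InputSimulator.SimulateKeyUp(VirtualKeyCode.SPACE);
    }
}

// "JumpJets Burn 50%" would then map to JumpJetSketch.Burn(0.5);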
Has anyone used vocal commands in MWO? Would you like to see it integrated into the game?


Integrated Vocalized Commands To Mech
Started by Sir Wolfenx, Apr 13 2015 07:48 AM
7 replies to this topic
#1
Posted 13 April 2015 - 07:48 AM
#2
Posted 13 April 2015 - 01:25 PM
I like the idea of voice commands but it seems like it'd be difficult to implement, at least for PGI's limited staff. I voted yes, but only if it wouldn't take too much time from other, more critical things.
#3
Posted 13 April 2015 - 03:01 PM
It's a nice idea, but from my experience there aren't enough keyboard commands to really make use of what voice activation can provide.
It's handy for things such as activating the consumables, toggling the battlegrid or changing vision modes but generally I find I don't use it that much.
There isn't that detailed level of micro-management in the game to really get much from voice activation.
#4
Posted 16 April 2015 - 05:23 AM
TheArisen, on 13 April 2015 - 01:25 PM, said:
I like the idea of voice commands but it seems like it'd be difficult to implement, at least for PGI's limited staff. I voted yes, but only if it wouldn't take too much time from other, more critical things.
Well, if they let me touch their source code I could probably do it in one week. It's easy with Windows: you just use the built-in speech recognition from C#. Seriously. using System.Speech.Recognition
Once you get the speech recognition working you just add it to the GUI.
This code was all it took for me to get speech recognition talking to a Unity3D game I am working on.
using System;
using System.Collections.Generic;
using System.Text;
using System.IO;
using System.Speech.Recognition;
using System.Threading;
using System.Diagnostics;
using WindowsInput; // Keyboard emulator (InputSimulator.dll) >>> http://inputsimulator.codeplex.com/
using System.Windows.Forms;

namespace RecoServeur
{
    class Program
    {
        public static SpeechRecognitionEngine speechRecognitionEngine;
        public static string wordRecognized = "";
        public static string endApp = "Terminer";
        public static string startReco = "Ouverture";
        public static string endReco = "Fermeture";
        public static string[] recoGrammar = new string[1000];
        public static string[] recoTagSending = new string[1000];
        public static int recoNumber = 0;
        public static string newLine = "";
        public static Boolean isReco = false;
        public static Boolean isDisplay = false;
        public static double validity = 0.70f;

        public static void Main(string[] args)
        {
            System.Media.SystemSounds.Question.Play();

            // Create the engine with the first installed recognizer
            speechRecognitionEngine = new SpeechRecognitionEngine(SpeechRecognitionEngine.InstalledRecognizers()[0]);
            try
            {
                // Hook to the recognition event
                speechRecognitionEngine.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(engine_SpeechRecognized);

                // Load the dictionary from grammar.txt
                try
                {
                    Choices texts = new Choices();
                    string[] lines = File.ReadAllLines(Environment.CurrentDirectory + "\\grammar.txt");
                    foreach (string line in lines)
                    {
                        // #C : word that closes the application
                        if (line.StartsWith("#C")) { var parts = line.Split(new char[] { ' ' }); endApp = parts[1]; Console.WriteLine("Close Application Word : " + parts[1]); continue; }
                        // #S : word that starts sending recognized commands
                        if (line.StartsWith("#S")) { var parts = line.Split(new char[] { ' ' }); startReco = parts[1]; Console.WriteLine("Start Recognition Word : " + parts[1]); continue; }
                        // #E : word that stops sending recognized commands
                        if (line.StartsWith("#E")) { var parts = line.Split(new char[] { ' ' }); endReco = parts[1]; Console.WriteLine("End Recognition Word : " + parts[1]); continue; }
                        // #V : minimum confidence, as a percentage
                        if (line.StartsWith("#V")) { var parts = line.Split(new char[] { ' ' }); validity = Convert.ToInt32(parts[1]) / 100.0f; Console.WriteLine("Validity : " + parts[1]); continue; }
                        // #D : verbose display of what gets sent
                        if (line.StartsWith("#D")) { isDisplay = true; Console.WriteLine("Display (Verbose) on..."); continue; }
                        // Skip comments and empty lines
                        if (line.StartsWith("#") || line == String.Empty) continue;

                        // Add the recognition phrase and its tag ("phrase,tag").
                        // Note: the control words above must also appear as plain lines so the recognizer can hear them.
                        var parts2 = line.Split(new char[] { ',' });
                        texts.Add(parts2[0]);                 // the recognition phrase
                        recoGrammar[recoNumber] = parts2[0];
                        // If no ",tag" is given, fall back to typing the phrase itself
                        recoTagSending[recoNumber] = (parts2.Length > 1) ? parts2[1] : parts2[0];
                        recoNumber++;
                    } // foreach line of grammar.txt

                    Grammar wordsList = new Grammar(new GrammarBuilder(texts));
                    speechRecognitionEngine.LoadGrammar(wordsList);
                }
                catch (Exception)
                {
                    throw;
                }

                // Use the default microphone and start listening
                speechRecognitionEngine.SetInputToDefaultAudioDevice();
                speechRecognitionEngine.RecognizeAsync(RecognizeMode.Multiple);
            }
            catch (Exception ex)
            {
                speechRecognitionEngine.RecognizeAsyncStop();
                speechRecognitionEngine.Dispose();
                MessageBox.Show(ex.Message + "\n\nSeems we have an error here:\n\na) Do you have a microphone connected?\nb) Have you forgotten the 'grammar.txt' file?");
                System.Environment.Exit(0);
            }

            // Ready
            Console.WriteLine("\n\nVocabulary added : " + recoNumber.ToString());
            Console.WriteLine("\n\nReady.....");
            Console.WindowWidth = 50;
            Console.BackgroundColor = ConsoleColor.Red;
            Console.WriteLine("** Sending Recognition is now DISABLED **");
            while (true) { Thread.Sleep(10); }
        } // Main

        public static void engine_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Confidence >= validity)
            {
                wordRecognized = e.Result.Text;

                // Close the application
                if (wordRecognized == endApp)
                {
                    speechRecognitionEngine.RecognizeAsyncStop();
                    speechRecognitionEngine.Dispose();
                    System.Environment.Exit(0);
                }
                // Activate sending of recognized commands
                if (wordRecognized == startReco)
                {
                    isReco = true;
                    Console.Title = wordRecognized;
                    Console.BackgroundColor = ConsoleColor.Green;
                    Console.WriteLine("** Sending Recognition is now ACTIVATED **");
                    return;
                }
                // Deactivate sending of recognized commands
                if (wordRecognized == endReco)
                {
                    isReco = false;
                    Console.Title = wordRecognized;
                    Console.BackgroundColor = ConsoleColor.Red;
                    Console.WriteLine("** Sending Recognition is now DISABLED **");
                    return;
                }
                // Look the word up and send its tag as keystrokes
                if (isReco)
                {
                    for (int i = 0; i < recoNumber; i++)
                    {
                        if (wordRecognized == recoGrammar[i])
                        {
                            // Append a newline for multi-character tags
                            newLine = (recoTagSending[i].Length > 1) ? "\n" : "";
                            // Keyboard emulator (InputSimulator.dll) >>> http://inputsimulator.codeplex.com/
                            InputSimulator.SimulateTextEntry(recoTagSending[i] + newLine);
                            Console.Title = wordRecognized;
                            if (isDisplay) Console.WriteLine("Sending : " + recoTagSending[i]);
                            return;
                        }
                    }
                }
            }
        }
    } // class Program
} // namespace RecoServeur
Then you add your vocabulary in the grammar.txt file:
# comment
#E Terminer
#V 70
Merci
O
A
B
C
D
E
F
G
Au revoir
Terminer
zéro
un
deux
trois
quatre
cinq
six
sept
huit
neuf
dix
Patrick
Denis
Albert
Jean
Paul
Antoine
Bonjour
Salut
Il fait beau
Matin
Chien
Cheval
Chat
Cochon
Ordinateur
Il fait beau ce matin
Parfait
Ouverture
Fermeture
Le petit chaperon rouge
Unity
Le ciel est bleu et le soleil brille
Gauche
Droite
Haut
Bas
Saute
Tir
1984
2012
1962
1504
2013
22
33
11
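For MWO you would just swap in the phrases from the first post. Each plain line is "phrase,text to type" (the keys on the right are placeholders; use whatever your consumables and weapon groups are actually bound to), and the control words need to be listed as vocabulary too so the recognizer can hear them. Something like:

# MWO example - key bindings on the right are placeholders
#C Terminate
#S Listen
#E Standby
#V 70
Terminate
Listen
Standby
Fire Artillery Strike,8
Fire UAV,9
Activate WeaponBank Four,4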
The recognized commands get sent to the program via a port, but this could easily be wrapped into the game itself.
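In case anyone wants the port route instead of simulated keystrokes, a minimal sketch of the sending side (the localhost address, port 5005, and the plain-text payload are assumptions for illustration; the game side would need a matching listener):

using System.Net.Sockets;
using System.Text;

static class TagSender
{
    // Push a recognized tag to a local UDP listener (e.g. a script inside the game)
    public static void Send(string tag)
    {
        using (var udp = new UdpClient())
        {
            byte[] payload = Encoding.UTF8.GetBytes(tag);
            udp.Send(payload, payload.Length, "127.0.0.1", 5005);
        }
    }
}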
Windows' built-in speech recognition does all the work for you.
#5
Posted 16 April 2015 - 07:05 AM
Just going to make a point: vocal commands are generally a bad idea without complex voice-recognition software, which costs a lot of money, because simple voice software is unlikely to be able to decipher different accents. From personal experience, computers have a hard time with people who have drawls.
#6
Posted 16 April 2015 - 12:09 PM
No need for them to spend development time on something few are likely to bother with. Also there's already a 3rd party app you can download for this purpose. It's kinda niche and doesn't really warrant much attention unless you need it because of a disability.
#7
Posted 16 April 2015 - 01:32 PM
Just use GlovePIE or PIEglove or whatever. Very good.
#8
Posted 19 April 2015 - 11:54 AM
I'm inclined to say that this sort of thing is better implemented with external programs for those who want to use them. I have considered using them myself, mainly for artillery strikes, but I've gotten somewhat involved with CW groups and use voice-activated TeamSpeak, so it might be a bad idea for me.