I wonder if it would be fairly easy to interface with SiriKit or the dictation API on iOS to create an audio trigger type. On the editor side of things, I would imagine authors entering a word or short phrase (like the QR code backup for the decoder) to be recognized as having been said. In the client, there would be a tab, like the decoder, but with just a mic button or something. The feature would work by ARIS transcribing that brief spoken phrase and then trying to match the resulting text against the string the author entered. "My voice is my password. Verify me." These triggers could be used as non-multiple-choice answers for exiting a conversation, or just as a more general trigger.
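The matching step itself could be pretty simple. Here's a minimal, platform-agnostic sketch, assuming the dictation API hands back a plain-text transcript; the function name and normalization rules are my own invention, not anything in ARIS:

```python
import re

def phrase_matches(transcript: str, target: str) -> bool:
    """Compare a speech transcript against the author's target phrase,
    ignoring case, punctuation, and extra whitespace.

    Hypothetical helper -- a real version might also want fuzzy matching,
    since transcription output is rarely exact.
    """
    def normalize(s: str) -> str:
        # Keep only letters, digits, and apostrophes; collapse the rest.
        return " ".join(re.findall(r"[a-z0-9']+", s.lower()))
    return normalize(transcript) == normalize(target)
```

So a transcript like "My voice is my password. Verify me!" would match an author-entered "my voice is my password, verify me" despite differences in casing and punctuation. Exact matching after normalization is probably too strict for real speech recognition output, but it would be enough for a proof of concept.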
Besides being cool, this is something the language-learning folks would probably eat up. Basically, I was wondering if anyone knew how much trouble it might be overall to try something like this. The feature seems to me to fit into the design of ARIS pretty seamlessly.
If this sounds vaguely feasible and amenable, we could start looking for funding. Any ideas on this?