Speculation: New possible trigger type: audio

Posted: Sun Feb 26, 2017 9:32 am
by chrish
I wonder if it would be fairly easy to interface with SiriKit or the dictation API on iOS to create an audio trigger type. On the editor side of things, I would imagine authors entering a word or short phrase (like the QR code backup for the decoder) to be recognized as having been said. In the client, there would be a tab, like the decoder, but with just a mic button or something. The feature would work by ARIS transcribing that brief phrase and then trying to match the resulting text against the string given by the author. "My voice is my password. Verify me." These could be used as non-multiple-choice answers on exiting from a conversation, or just as a more general trigger.
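The matching step described above -- transcribe a short clip, then compare it against the author's phrase -- could be as simple as a normalized string comparison. Here is a minimal sketch (all function names are mine, not part of ARIS); on iOS the transcription itself would come from something like the Speech framework's SFSpeechRecognizer, and this would just be the last step:

```python
import re

def normalize(text):
    # Lowercase and keep only alphanumeric "words", so punctuation,
    # capitalization, and extra whitespace in the transcription
    # don't cause a false mismatch.
    return " ".join(re.findall(r"[a-z0-9']+", text.lower()))

def matches_trigger(transcription, author_phrase):
    """Hypothetical helper: True if the speech-to-text result matches
    the phrase the author entered in the editor."""
    return normalize(transcription) == normalize(author_phrase)

# "My voice is my password. Verify me!" matches the author's
# "my voice is my password verify me" despite punctuation and case.
print(matches_trigger("My voice is my password. Verify me!",
                      "my voice is my password verify me"))
```

A real version would probably want something fuzzier than exact equality (e.g. tolerating one wrong word), especially for language learners, but exact match on normalized text is a reasonable prototype.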

Besides being cool, this is something the language-learning folks would probably eat up. Basically, I was wondering if anyone knew how much trouble overall it might be to try something like this. The feature seems to me to fit into the design of ARIS pretty seamlessly.

If this sounds vaguely feasible and worth pursuing, we could start looking for funding. Any ideas on this?

Re: Speculation: New possible trigger type: audio

Posted: Sun Feb 26, 2017 10:32 am
by djgagnon
That sounds awesome and totally feasible, at least to prototype. From a UX perspective, the main concern would be figuring out how to keep the user entertained while you upload an audio clip and wait for a server response. Siri does an okay job at it, but much of what we mobile devs have to do is make it look like you are connected to the internet even when that connection is spotty.

From a fundraising/grant-writing standpoint, I would expect this to take about a month of developer time, plus some designer time for the editor and client interfaces. Even with testing, it would be under $10k.

As usual, it will be a project that really needs this feature that will end up paying for it.