Race against Amazon demands rapid iteration, and cash
To sweeten its offer, Google extended its Actions on Google platform – by which developers can define how Google Assistant responds to input – to iOS phones and taught Assistant to ingest text input from mobile phones, for times when speaking aloud might be awkward.
It also announced support for transactions, presently available for Home apps and coming soon to mobile apps, for buying things by voice and text.
The Chocolate Factory has even promised to integrate image recognition capabilities in the coming months, through a service called Google Lens, to allow Assistant to understand spoken or text queries made in reference to objects depicted in images.
Brad Abrams, group product manager for Google Assistant, said the goal with Assistant is to facilitate “a conversation between you and Google that helps you get things done.”
Google’s goal with developers is similar – to help them build software that encourages conversations with its Assistant software. Thanks to tools like the Actions Console, a web-based dashboard for configuring projects that use Assistant and for making them accessible to apps and devices, the necessary programming isn’t that difficult.
The flow, as explained by Adam Coimbra, partner technology manager at Google, goes something like this:
- Create a project using the Actions Console.
- Use API.AI, or another natural language processing service, to link input to an action.
- Configure a webhook, using Node.js or another language, to trigger the action.
- Test in Google's web simulator or on a device.
- Ensure the app works appropriately for mobile devices.
- Integrate the app with the Transaction API, if necessary.
The process is similar to configuring a service like IFTTT, where an input event triggers a response event or a series of them. The difference is that those developing for Assistant have to anticipate a wide range of possible input, because there is no fixed format for commands spoken or typed in natural language.
That’s where API.AI comes in. The developer maps intents – what the user says or types – to a specific action. Thus, an app creator might map “Book a ticket to Los Angeles” to a specific book_ticket action defined in code, making sure to ask for necessary parameters, like time, date, and location if they’re not supplied.
The challenge involves planning for other possible variations of that intent, such as “Buy me a ticket to Los Angeles.” A modest amount of human intelligence is necessary to instruct API.AI’s algorithms to do their thing.
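Conceptually, that configuration boils down to a data structure like the one below. This is not API.AI's actual schema (the mapping is done in its console), and the intent name, phrasings, and prompts are invented for illustration: a list of sample utterances the service generalizes from, plus required parameters with re-prompts for when the user leaves one out:

```javascript
// Conceptual sketch of an intent definition (hypothetical schema,
// not API.AI's real format; names and prompts are illustrative).
const bookTicketIntent = {
  action: 'book_ticket',
  // Sample phrasings the NLP service generalizes from.
  userSays: [
    'Book a ticket to Los Angeles',
    'Buy me a ticket to Los Angeles',
    'I need a flight to Los Angeles'
  ],
  // Required slots, each with a prompt used when the user omits it.
  parameters: [
    { name: 'destination', required: true, prompt: 'Where to?' },
    { name: 'date', required: true, prompt: 'When would you like to travel?' }
  ]
};

// Toy slot-filling step: given what the service extracted so far,
// return the prompt for the first missing required parameter,
// or null when every slot is filled and the action can run.
function nextPrompt(intent, extracted) {
  const missing = intent.parameters.find(function (p) {
    return p.required && !(p.name in extracted);
  });
  return missing ? missing.prompt : null;
}
```

For example, "Book a ticket to Los Angeles" fills `destination` but not `date`, so the app would come back with the date prompt before invoking its booking code.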
One reason developers may want to bother dipping their toes in the water is that Google intends to make Assistant an ecosystem, one that (hopefully) benefits developers and users, just as Search does for publishers and readers, and Google Play does for Android developers and customers.
Assistant developers can look forward to a directory that promotes apps. Beyond that, Vera Tzoneva, who works on global product partnerships for the Google Assistant, explained that Google will surface Assistant apps through explicit invocation – when a Google Home user, for example, says the name of the app – and implicit invocation – when someone asks Assistant to take an action associated with one or more apps.
So saying, “Hey Google, I want to work out” might bring up a developer’s exercise-related app, even if it was not explicitly named.
Implicit invocation thus functions like search, and may eventually provide similar marketing benefits to developers. Unfortunately for developers, Google appears to have no desire to explain its specific criteria for invoking Assistant apps.
In terms of ranking, Tzoneva said a lot of different factors get considered. “One of the primary things is the quality of the app and user context,” she added.
So, make a brilliant Assistant app and Google may love you. Maybe. ®