MULTIMODAL & VOICE-FIRST WRITING

ROLE: UX Content Strategist

DURATION: November 12, 2018 to December 31, 2019

SKILLS: UX Content Strategy, Conversation Design, UX Research

TOOLS: Botsociety, G Suite

Note: To comply with my non-disclosure agreement, I have omitted confidential information from this case study, in particular the project details and findings. All information given here is my own and does not necessarily reflect the views of Google.

PROJECT

For Google Assistant, I wrote for a range of surfaces: voice-centric conversations for voice-activated speakers and visual conversations for the screen-equipped surfaces that the Assistant supports.

Multimodal writing

TAKEAWAYS

I wrote the display prompts as condensed versions of the spoken prompts, making them easy to scan.

I used contractions to keep the conversation natural and lifelike.

I kept the chips conversational and memorable, and made them relevant and action-oriented. I included the options given in the prompts as chips so the user could quickly tap them to respond.
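The takeaways above can be sketched as a simple data model. This is an illustrative sketch of my own, not Google's actual API: a multimodal turn pairs a full spoken prompt with a condensed display prompt and short, tappable chips.

```python
from dataclasses import dataclass, field

@dataclass
class MultimodalPrompt:
    """One conversational turn rendered across voice and screen surfaces."""
    spoken: str   # full spoken prompt, conversational, with contractions
    display: str  # condensed, scannable version of the spoken prompt
    chips: list[str] = field(default_factory=list)  # action-oriented tap targets

    def validate(self) -> None:
        # The display prompt should be a condensed version of the spoken one.
        assert len(self.display) <= len(self.spoken)
        # Chips should stay short enough to scan and tap at a glance.
        assert all(len(chip) <= 25 for chip in self.chips)

turn = MultimodalPrompt(
    spoken="Sure, I can set a reminder. What's it for, and when do you want it?",
    display="What's the reminder, and when?",
    chips=["Tomorrow morning", "This evening", "Pick a time"],
)
turn.validate()
```

The chips repeat the options offered in the prompt, so tapping one is a valid way to answer the question just asked.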

VOICE-FIRST WRITING

TAKEAWAYS

As a rule of thumb, I found it easier to write the spoken conversation first, as in an actual human-to-human conversation; starting with a visual screen in mind could distract from the dialog itself.

Because speech is fleeting and linear, I kept the messaging crisp and clear to reduce the cognitive load.

Comparison

TAKEAWAYS

I designed the spoken and display prompts so that the user could understand each independently.

On the multimodal surface, I gave the short answer in the spoken prompt and the details in the visuals.

I ensured that all the elements gave a single, unified response with a consistent voice and tone.

CONVERSATION FLOW

TAKEAWAYS

I sketched out a “blue sky” (best-path) VUI flow to show all the paths the system could take, and listed the different ways the user could branch to the next state. The flow doesn’t list every phrase the user could say; instead, it groups similar phrases together.

Since the user wouldn’t automatically know what to ask, I guided them with signposts, and further guided them through the concept of priming. At each step, I made the call to action clear by asking a question.

The flow also anticipates the user’s need for a way out of a particular dialog state, in case: they got there because of a false accept (they were misrecognized); they changed their mind; they aren’t ready to answer; something else has come up; or none of the options interest them.

Even in such cases, the conversation keeps moving forward, since there is no going ‘back’ in a VUI: speech has no back button or arrow.
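As an illustrative sketch (the state names, phrases, and structure are hypothetical, not a production system), a dialog state can group user phrases into intents, signpost the options in its prompt, and always route forward — including a universal escape for users who want out:

```python
# Hypothetical dialog-state sketch: groups user phrases into intents and
# always routes forward -- there is no "back" in a voice interface.
STATES = {
    "choose_topic": {
        "prompt": "I can share news, weather, or your schedule. Which would you like?",
        "intents": {
            # Each intent groups several ways the user might phrase it.
            "news": ["news", "headlines", "what's happening"],
            "weather": ["weather", "forecast", "is it raining"],
            "schedule": ["schedule", "calendar", "my day"],
            # Universal escape: the user changed their mind or wants out.
            "exit": ["never mind", "cancel", "stop", "something else"],
        },
        "next": {"news": "read_news", "weather": "give_weather",
                 "schedule": "read_schedule", "exit": "wrap_up"},
    },
}

def route(state: str, utterance: str) -> str:
    """Match an utterance against the grouped intents; return the next state."""
    spec = STATES[state]
    for intent, phrases in spec["intents"].items():
        if any(p in utterance.lower() for p in phrases):
            return spec["next"][intent]
    # No match: stay in the state and reprompt, still moving the dialog forward.
    return state

print(route("choose_topic", "Actually, never mind"))  # -> wrap_up
```

The escape intent gives the user a forward path out of the state, and an unmatched utterance simply triggers a reprompt rather than a dead end.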