Modality Is the Secret Weapon for Voice and Conversation Channels. Here’s Why
One silent influencer of conversation (silent, that is, until it is ignored entirely and ironically brought to the surface) is the context in which the conversation takes place. Most people speak more softly on planes or trains. In movie theaters, most people lean toward the ear of their movie-mate to ask a question. In settings with children, people will show, rather than say aloud, information that could be inappropriate.
The surrounding context is a critical part of having a conversation.
Think of Jack Nicholson in the movie ‘As Good as it Gets’ screaming his dinner order across the restaurant. Or the person sitting next to you on a plane with their mobile playing a video at full volume. Or a stranger leaning in to tell you something sensitive.
Ignoring contextual surroundings breaks trust and comfort, characteristics critical to a voice assistant’s purpose.
What makes this part of the voice experience a challenge is the sheer number of devices voice assistants are deployed on. CNET lists over eighty kinds of devices enabled with Google Assistant; Wikipedia lists as many for Alexa.
Each device has its own modalities, and each can be brought into different situational contexts.
Smart speakers and displays tend to be placed in a room of a house and rarely moved. Mobile devices are taken nearly everywhere. Wearables often go everywhere else. Listening not only for the voice of the user, but also for when and where they are engaging the assistant, is critical to a brand’s ability to build trust and execute its purpose.
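The device-and-context pairing described above can be sketched as a simple decision rule. This is a minimal illustrative sketch, not a real assistant API; the device names, context labels, and the `choose_modality` function are all hypothetical:

```python
# Hypothetical sketch: choosing a response modality from device type and
# situational context. All names here are illustrative, not a real API.

def choose_modality(device: str, context: str) -> str:
    """Pick how an assistant should respond, given where it is being used."""
    # Quiet or shared public spaces call for discreet, visual-first output.
    if context in {"plane", "train", "theater"}:
        return "screen" if device in {"mobile", "wearable"} else "muted"
    # Around children, prefer showing over telling sensitive information.
    if context == "with_children":
        return "screen"
    # A stationary smart speaker at home can default to full voice.
    if device == "smart_speaker" and context == "home":
        return "voice"
    # Otherwise fall back to voice output.
    return "voice"

print(choose_modality("mobile", "plane"))        # discreet on-screen reply
print(choose_modality("smart_speaker", "home"))  # full voice reply
```

The point of the sketch is that the same request can warrant a different modality depending on where the device is, which is exactly the contextual awareness the devices above make hard to get right.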
Below is an illustration of how modality decisions combine to connect emotion and intellect for the user.