As Many Paths as People
Many tasks have a single path, a basic on/off nature that simplifies the designer’s challenge. Think about a light switch. The light is either on or off, there is only one way to flip it, and the current state is as easy to read as whether or not the room is dark.
Now think about smart lights you can command by voice or through a mobile app. Suddenly there are multiple paths to turning on the lights, from “Hey Google, turn on the bedroom lights” to finding the exact app that controls the smart switch. Each path leads to the same result, and each user will discover their own preferred way of turning on the lights.
The smarter we make our devices, from home automation to advanced automobiles, the more options we have to control them. Application services that live in the cloud aren’t limited to a single physical device; they can be accessed independently across many interfaces.
Product designers have to think broadly about how they want users to navigate across the different interfaces open to them as well as how users will find their own paths. It’s this kind of multi-surface design challenge that the Pulse Labs platform is ideally suited to measure and optimize.
Surface Reflections
What do we mean by a multi-surface user experience? We’re all familiar with transitioning between devices, for example, listening to a song or podcast on a mobile phone using touch controls, then switching to voice commands on a smart speaker at home. The user controls the same experience through each device in sequence, depending on location and needs.
In more complex environments, multiple surfaces are available at once, and the user can switch between them simultaneously and interchangeably to accomplish a single task. The example from the introduction applies: the mobile app, a voice command, and the physical switch on the wall are all valid paths to lighting a room, and any of them could be used exclusively or in combination.
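One way to picture this is as a single shared state with many equally valid writers. The sketch below uses illustrative TypeScript names, not any particular smart-home SDK: every surface calls the same action, and every surface is notified of the result so none of them drift out of sync.

// A minimal sketch of multi-surface control: one shared state,
// many input surfaces, every surface kept in sync.
type Listener = (isOn: boolean) => void;

class LightState {
  private isOn = false;
  private listeners: Listener[] = [];

  get power() {
    return this.isOn;
  }

  // Any surface (app, voice, wall switch) calls the same action.
  setPower(isOn: boolean, source: string) {
    this.isOn = isOn;
    console.log(`Light turned ${isOn ? "on" : "off"} via ${source}`);
    // Notify every surface so app toggles, assistant state, and
    // indicator LEDs all reflect the change, whoever made it.
    this.listeners.forEach((notify) => notify(isOn));
  }

  subscribe(notify: Listener) {
    this.listeners.push(notify);
  }
}

const bedroomLight = new LightState();
bedroomLight.subscribe((_isOn) => { /* refresh the mobile app's toggle */ });
bedroomLight.subscribe((_isOn) => { /* update the voice assistant's device state */ });

// Three valid paths to the same result:
bedroomLight.setPower(true, "wall switch");
bedroomLight.setPower(false, "voice command");
bedroomLight.setPower(true, "mobile app");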
Today’s advanced automotive cabins put most of their controls and infotainment options in this second, multi-surface category. Just like the smart lights at home, comfort controls or passenger entertainment might be adjustable through voice commands, physical knobs on the dashboard, touch controls on a large central display, or a Bluetooth-connected mobile app. In that single environment, there may be many ways to select media, change the volume, launch an app, get directions, or change the cabin temperature. As the user moves between interfaces, the feedback and behavior must adapt in a way that maximizes responsiveness and minimizes dangerous distraction.
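Here is a rough sketch of what that adaptation might look like. The surfaces and feedback channels are hypothetical stand-ins, not drawn from any specific vehicle platform: a spoken confirmation suits a voice command, while a touch or knob input deserves a quiet visual echo rather than an interruption.

// Sketch: matching confirmation feedback to the input surface.
type Surface = "voice" | "touchscreen" | "hard-knob";

// Stubs standing in for the vehicle's audio and display systems.
const speak = (text: string) => console.log(`TTS: ${text}`);
const flashStatusLine = (text: string) => console.log(`Display: ${text}`);

function confirmAction(surface: Surface, summary: string) {
  switch (surface) {
    case "voice":
      speak(summary);           // eyes stay on the road
      flashStatusLine(summary); // plus a glanceable visual echo
      break;
    case "touchscreen":
    case "hard-knob":
      flashStatusLine(summary); // visual only; no spoken interruption
      break;
  }
}

confirmAction("voice", "Cabin temperature set to 70 degrees");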
The Pulse Labs usability testing platform, now available for automotive environments, has a unique multi-camera ability to capture and detect key user events across every interface. Voice, touch, and tactile inputs are all captured, however the driver chooses to interact with the vehicle.
Feedback and Driver Distraction
Today’s automobiles are the perfect example of a complex, multi-surface environment. Cars have larger touchscreens than ever before, with more and more capabilities split between physical knobs or toggle switches and on-screen touch buttons. Some models even accept input on trackpad-like surfaces and reflect it on the vehicle’s head unit.
However, it’s voice, with its ability to issue commands to the automobile or a mobile phone while keeping the driver’s eyes on the road and hands on the wheel, that has received the most attention. “Hands-free” isn’t just a nice-to-have: many jurisdictions have made holding a mobile device while driving illegal for any purpose. As more vehicles integrate the latest automotive operating systems directly, even Google is moving from the Android Auto phone app to a voice-focused Google Assistant driving mode.
Yet voice isn’t a panacea in cars. Vehicles are noisy environments, which can make it difficult for voice recognition to distinguish commands from background road noise, passenger cross-talk, and high-volume music.
Designing for voice in the car means making voice one multi-surface path to user success, not the only one. Voice-only commands should be avoided so that users can find whatever path works best for them and fall back to switches or touchscreens if the virtual assistant misunderstands. Voice can collapse actions that would require multiple touches into a single command, but it generally works best for short commands with brief feedback loops.
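A sketch of that fallback pattern might look like the following; the function names and the confidence threshold are illustrative assumptions, not a specific automotive SDK. The point is that a misheard command dead-ends into another surface instead of a voice-only retry loop.

// Sketch: a voice path with a built-in non-voice fallback.
interface VoiceResult {
  intent: string;
  confidence: number; // 0..1 from the speech recognizer
  slots: Record<string, string>;
}

const speak = (text: string) => console.log(`TTS: ${text}`);
const showOnScreenShortcut = (intent: string) =>
  console.log(`Touchscreen now offers a one-tap "${intent}" button`);
const executeIntent = (r: VoiceResult) =>
  console.log(`Executing ${r.intent}`, r.slots);

function handleVoice(result: VoiceResult) {
  if (result.confidence < 0.6) {
    // Don't trap the user in a voice-only retry loop: surface the
    // same action on the touchscreen so one tap can finish the task.
    showOnScreenShortcut(result.intent);
    speak("Sorry, I didn't catch that. You can also use the screen.");
    return;
  }
  // One utterance can collapse several touch steps into one command,
  // e.g. "Set the driver's temperature to 70 and turn on seat heat."
  executeIntent(result);
}

handleVoice({ intent: "set_temperature", confidence: 0.42, slots: { value: "70" } });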
“Would you like fries with that?”
Surprisingly, despite how often consumers eat on the road or pick up food on the way home, ordering through a voice assistant still hasn’t become commonplace. There are several reasons for this, but foremost is that voice requires the user to know the menu. Asking for a cheeseburger might get you a simple sandwich or a deluxe specialty item, and a voice assistant reading the menu aloud can be more confusing than a waiter racing through today’s specials. Having to remember a particular order in which to issue commands, like defining a pizza’s size before asking to add toppings, is one more burden on a user whose attention belongs on the road.
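One hedge against that burden is order-independent slot filling: the assistant accepts whatever detail the user offers and only prompts for what is still missing. A minimal sketch, with hypothetical types and prompts:

// Sketch: order-independent slot filling for a voice pizza order.
interface PizzaOrder {
  size?: "small" | "medium" | "large";
  toppings: string[];
}

interface Utterance {
  size?: PizzaOrder["size"];
  topping?: string;
}

function applyUtterance(order: PizzaOrder, heard: Utterance) {
  // Accept whatever the user gave, in any order.
  if (heard.size) order.size = heard.size;
  if (heard.topping) order.toppings.push(heard.topping);
}

function nextPrompt(order: PizzaOrder): string | null {
  // Only ask for what is still missing.
  if (!order.size) return "What size pizza would you like?";
  return null; // enough detail to confirm, ideally on a screen too
}

const order: PizzaOrder = { toppings: [] };
applyUtterance(order, { topping: "mushrooms" }); // toppings before size: fine
console.log(nextPrompt(order)); // "What size pizza would you like?"
applyUtterance(order, { size: "large" });
console.log(nextPrompt(order)); // null: ready to confirm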
This is where multi-surface design and usability testing can shine. While the user can be encouraged to use voice, their choices can be reinforced through visual cues on secondary screens. Users will have more confidence in their virtual assistants when they get multiple confirming signals.
Multi-surface Success
Preparing for these ease-of-use challenges means testing in a natural environment under real-world conditions. This is where the Pulse Labs platform excels. By combining multi-camera capture with advanced machine learning algorithms for critical event detection, user experience researchers can ensure that every user action is captured and cataloged, whether the driver chooses voice, touch, or tactile input.
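Conceptually, the result is a single timeline of interaction events, something like the record sketched below. This is an illustration of the idea only, not the actual Pulse Labs schema.

// Hypothetical unified event record for cross-modality capture.
type Modality = "voice" | "touch" | "tactile";

interface UserEvent {
  timestampMs: number; // aligned across all camera feeds
  modality: Modality;  // how the driver chose to act
  action: string;      // e.g. "volume_up", "set_temperature"
  surface: string;     // e.g. "steering wheel", "center display"
}

// Whatever path the driver takes, the researcher sees one timeline:
const session: UserEvent[] = [
  { timestampMs: 12_400, modality: "voice", action: "set_temperature", surface: "cabin mic" },
  { timestampMs: 14_050, modality: "tactile", action: "volume_up", surface: "steering wheel" },
];
console.log(session.length, "events captured");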
Pulse Labs can also ensure that the feedback matches the input. Video captures of the vehicle screens, along with the integration of car data, can clearly show where usability targets are met and where the experience falls short.
Consumers demand advanced vehicles with the level of intelligent assistance they are used to at home and on their mobile devices. Designing a complete multi-surface experience that adapts to the user’s needs, rather than simply bolting a voice layer onto the car, requires commitment to a holistic approach and ongoing, real-world testing.