Summary
We worked with a major US fitness equipment company to add conversational AI to their strength and cardio workout equipment. The majority of the work was on the strength products, using mobile applications to coach users through a workout and gather data hands-free.
For this project, I was the voice UX architect, a voice UX designer, and our side’s UX researcher. I worked with our sales team to create the original pitch for our move into the fitness domain, including selecting the use cases and crafting the story behind them.
Problem
Working out is a busy, often sweaty experience. Users are on the go: they may be too far from a device to see or touch a screen, or have their hands full and need to interact with their fitness system hands-free. During and after the pandemic, many users also expressed a desire to limit touch interaction in shared spaces. Voice interaction could help users meet all of these needs.
[image]
However, users may also be out of breath and unable to use a voice assistant. This means the optimal user experience likely requires multi-modal interaction: multiple ways to use the system, each accessible and sensible in its design.
Our task was to work with a large US fitness OEM to take their cardio and strength experiences (ranging from treadmills with screens to Bluetooth-enabled weights with a companion app) and make them multi-modal. We provided the voice/text engines, a third party provided the vision engines for motion tracking, and the OEM provided the graphical experience, so we all feared a disjointed result. Our challenge was to take the OEM’s established products and craft a multi-modal user experience that was loyal to the brand, consistent for their users, and holistic in its design.
Design
The project already had a well-established GUI and set of touch interaction flows across the strength and cardio platforms. For strength exercises, a mobile app (tablet or phone) guided users through workouts and tracked stats. For cardio, the flows were either built into a device (e.g. a treadmill) or part of the same mobile app, which users could use during activities such as going for a run.
Personas were provided by the customer, and all design work was grounded in them.
My initial work was to propose the architecture for integrating voice into their existing products. For the original proposal, I generated an audio flow and presented it in PowerPoint to illustrate example flows. Next, I took videos of their existing product and edited them to show the changes to flows and interactions.
[Image of sample flow]
From here, I worked with the customer’s design team to finalize the proposal and UX requirements so work could begin. During this time, I brought a UX designer from my team onto the project, and we worked on the early designs together. We later hired an additional designer to help, and I shifted into a supervisory role on the project.
At the same time as this transition, a computer vision company joined the project, adding the ability to use a camera to count reps during a workout. I initially led the effort to re-assess the experience to incorporate this capability, then oversaw the work as the new designers took over.
Research
Prior to making the sales pitch to the customer, I conducted a couple of studies investigating users’ perceptions of using voice as part of their workout. I also did a systematic review of the customer’s app reviews on the Play Store and App Store. Both were used to support the pitch, and later guided the proposals for the UX architecture and the selection of use cases.
Once the project began, the customer’s UX research team conducted all research. However, they were new to the voice domain and relatively junior in UX research. I mentored their researchers on studying voice, provided feedback on each of their study protocols, attended many user sessions, assisted with some data analysis, and supported presentations of the results.
Outcome
While there were many delays due to budget changes and difficulty integrating all three companies’ SDKs, the product was in beta and preparing to launch shortly after my departure from Cerence.