Summary
The e.Go people mover was a (semi-)autonomous bus being developed in Germany.
For this project, I was the UX architect, graphical and voice UX designer, and UX researcher. I was also the product owner: I oversaw the UI design and engineering implementation, assisted with QA, and owned the demonstration at CES 2020. In addition, I supported the creation of new microphone arrays and modifications to our speech-signal enhancement software through feedback and testing.
Problem
A primary challenge for semi-autonomous and autonomous buses is ensuring that riders can interact with them to get the information they need, request stops, and report trouble. Additionally, because these buses would be deployed in major cities around the globe, we needed to account for users who spoke different languages.
We aimed to create a multi-modal AI solution that would let users see information on displays and interact with conversational AI in noisy environments, from the bus stop to the interior of the bus. The graphical displays also needed to remain readable in a wide range of lighting conditions.
Design
The physical design of the bus itself was handled by e.Go. Most of the windows and the interior glass were replaced with glass from Saint-Gobain, which could serve as a projection display and be "milked" via voltage-controlled tinting. Microphone arrays were fitted inside for 360-degree listening, and outside for riders standing near the window that carried a display.
As product owner, I determined the requirements in addition to my roles as UX designer and researcher. I proposed a list of use cases, later refined through research, along with the technical requirements.
As the UX designer and architect, I created the interaction framework, visual layout, voice flows (VUX), and prompts (VUI) for every use case. One challenge was that the displays needed to be readable throughout the bus, so the on-screen text had to be large and concise even when the voice prompts carried a great deal of information.
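To make that constraint concrete, here is a minimal, hypothetical sketch (not the production code) of how a single system turn could pair a detailed spoken prompt with a short, large-type display string; the names, structure, and example content are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class TurnOutput:
    """One system turn: what is spoken versus what is shown on the glass."""
    spoken: str       # full voice prompt, which can carry the detail
    display: str      # short text that stays legible from across the bus
    show_map: bool = False  # whether a route map should accompany the text

# Hypothetical example for a "next stop" announcement.
next_stop_turn = TurnOutput(
    spoken=("The next stop is Alexanderplatz. You can transfer there to the "
            "U2 and U8 lines. Doors will open on the right."),
    display="Next stop: Alexanderplatz",
    show_map=True,
)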
I created personas based on user research. These were later turned into “Persona cards” given to people who attended demos to guide them as they tried out the system.
Use cases ranged from output-only cases, such as announcing departure and the next stop, to user-initiated requests. Users could ask about the bus, the route, stops along the route, connections, nearby landmarks, and more. Use cases added based on research included asking the bus to wait for someone running to catch it, reporting smoking or other negative behavior on board, and requesting that the glass be dimmed when it was too bright. The displays were updated for every use case to show the relevant information, including on a map when appropriate.
The system supported two or more languages simultaneously; greeting the bus in German, for example, caused it to respond and complete the interaction in German rather than English.
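As an illustration only, language switching could work roughly as sketched below: the language detected on the initial greeting selects the prompt set used for the rest of the interaction. The function names, keyword-based detection, and prompt strings are assumptions, not the shipped implementation.

# Hypothetical prompt sets keyed by language; the real system supported
# two or more languages loaded side by side.
PROMPTS = {
    "en": {"greeting": "Hello! How can I help you?",
           "next_stop": "The next stop is {stop}."},
    "de": {"greeting": "Hallo! Wie kann ich helfen?",
           "next_stop": "Die nächste Haltestelle ist {stop}."},
}

def detect_language(utterance: str) -> str:
    """Toy language detector: a real system would rely on the speech
    recognizer's language identification, not keyword matching."""
    german_cues = ("hallo", "guten", "nächste", "haltestelle")
    text = utterance.lower()
    return "de" if any(cue in text for cue in german_cues) else "en"

def start_interaction(greeting: str) -> dict:
    """Pick the prompt set once, from the greeting, and keep it for the
    rest of the dialog so the whole interaction stays in one language."""
    lang = detect_language(greeting)
    return PROMPTS[lang]

prompts = start_interaction("Hallo Bus, wann kommt die nächste Haltestelle?")
print(prompts["next_stop"].format(stop="Hauptbahnhof"))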
A member of my team created the graphical UI, and I was responsible for overseeing this and approving all final designs.
Research
Before creating the design, I conducted two online studies to inform the requirements and use cases for the project. Once the design was in draft form, I conducted a third study.
The first study gathered general impressions about riding a fully or semi-autonomous bus and was conducted in the US, UK, and Germany. Across all three countries, respondents reported concerns about reliability, but were open to riding such a bus once the technology had gained acceptance in their own communities.
Next, a survey asked users to identify use cases for speaking to the bus and viewing visual information on it. It opened with an open-ended question and then presented a list of the use cases we expected. Most of the results supported our initial list, outlined above in the design. However, this study led us to add use cases for reporting bad behavior (from smoking to vomiting to aggression), notifying the bus of someone approaching before departure, and getting information about landmarks seen along the route.
Finally, once the use cases were scripted, the flows were piloted with users in the US and Germany to gather feedback on structure and wording, and revisions were made accordingly.
Outcome
Our project was demonstrated at CES 2020 and received generally positive feedback, along with some important suggestions for improvement. People who came to the demo were given sample Persona cards to put them in the shoes of a possible rider, and were able to interact with the system themselves.
You can see an example of a demo in the video below, given by Cait, a member of my team and the graphical UI designer.