Quick Chef is a proof-of-concept Alexa skill that finds recipes and provides step-by-step meal preparation directions for when the user's hands are busy with actual cooking. In an end-to-end design process from concept to prototype, I detailed user and system personas and created user stories, user flows, and a voice script. I conducted usability tests to refine the script and prototype.
Design an Alexa skill that allows users to browse several recipes, select one, and have Alexa assist with the meal preparation.
Working from a high-level design brief, I created user and system personas, wrote user stories and sample dialogs to generate a digital user flow prototype, designed a voice script, and conducted voice interface usability tests to refine the script and final skill.
This was a start-to-finish design process for a proof-of-concept voice user interaction that was more involved and complex than the relatively simple haiku skill. The result was a more intentional, thoughtful approach and a more robust Alexa skill design that fulfilled the requirements of the original design brief.
I started by exploring use cases for voice design -- which scenarios were best suited to voice user interfaces (VUI), which were unsuitable, and which benefited from VUI complementing GUI or vice versa. VUI improved the user experience in situations where the user's hands or eyes were occupied, and it was therefore ideal for checking recipes during meal preparation.
To assist the user through a recipe as they prepared it, we first needed a repository of recipes that the user could browse or search, along with criteria for browsing or searching. Based on those criteria, Alexa would select and offer a recipe, which the user could confirm to start; if the user did not like it, they could ask for a different one.
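The browse-and-offer behavior described above can be sketched as a small piece of logic. This is an illustrative sketch only, not the actual skill code; the recipe data, field names, and function names are all hypothetical.

```python
# Hypothetical recipe repository supporting the browse/search criteria
# described above (meal type and ingredients), plus the "offer a recipe,
# skip ones the user declined" behavior.

RECIPES = [
    {"name": "microwave-poached egg", "meal": "breakfast",
     "ingredients": ["egg", "pepper", "herbs"]},
    {"name": "oatmeal", "meal": "breakfast", "ingredients": ["oats", "milk"]},
    {"name": "veggie omelette", "meal": "breakfast",
     "ingredients": ["egg", "peppers", "onion"]},
]

def find_recipes(meal=None, ingredient=None):
    """Return recipes matching the requested criteria."""
    results = RECIPES
    if meal:
        results = [r for r in results if r["meal"] == meal]
    if ingredient:
        results = [r for r in results if ingredient in r["ingredients"]]
    return results

def offer_recipe(results, rejected):
    """Offer the first matching recipe the user has not declined."""
    for recipe in results:
        if recipe["name"] not in rejected:
            return recipe
    return None
```

In a real skill the repository would live in a backing service, but even this sketch captures the two requirements: criteria-based search and the confirm-or-ask-again loop.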
Meal preparation and following a recipe can be challenging, so the skill needed clear and simple directions. We also wanted the user to be able to confirm when they were ready to move on to the next step and to ask the system to repeat any step of the recipe.
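The step-navigation behavior (advance only on confirmation, repeat the current or an earlier step) can be sketched as a small walkthrough object. Again, this is a hypothetical sketch, not the skill's actual implementation; the class and method names are my own.

```python
# Hypothetical walkthrough logic: the user confirms "next" to advance,
# and can ask to repeat the current step or a specific step number.

class RecipeWalkthrough:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0  # position of the current step

    def current(self):
        return self.steps[self.index]

    def next(self):
        """Advance one step, but never run past the final step."""
        if self.index < len(self.steps) - 1:
            self.index += 1
        return self.current()

    def repeat(self, step_number=None):
        """Repeat the current step, or a specific earlier step (1-based)."""
        if step_number is not None:
            return self.steps[step_number - 1]
        return self.current()
```

The key design point is that the system never advances on its own; `next()` is only called after the user confirms they are ready, mirroring the pacing requirement above.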
Finally, we needed to incorporate voice design features and skill conventions such as learnability and onboarding with clear options, as well as system responses and prompts to handle different user voice requests, such as help requests or unrecognized input.
I thought about the kind of user who would use the Quick Chef skill -- someone who is busy and doesn't have the recipes they want committed to memory. They might feel rushed and want to simplify the conventional cooking and recipe-checking process so that it's more manageable and less stressful. They might be a new cook, such as a young professional, who would like guidance as they cook, or someone who simply likes variety and wants to try new dishes. From these considerations, I created one likely user proto-persona, Lupe.
With Lupe as my user persona, and with her needs in mind, I designed a system persona, based on Alexa, to be a helpful recipe finder and cooking assistant. The system persona would be professional and friendly, to ensure that the meal preparation process would be simple and clear, and at the same time enjoyable and as hassle- and stress-free as possible.
Next, I wrote user stories based on the project's high-level functional requirements. At this stage in the project, it was challenging to identify the most relevant (useful) user stories, and in hindsight some are superfluous or redundant. Sketching out sample dialogs in the next step helped establish the stories that covered what I needed to tackle with the skill.
"As a busy professional, I would like a selection of quick recipes that I can easily make."
"I would like the recipes to be organized by meal type (breakfast, lunch, dinner, and snacks) so that I can more easily request the type I want."
"I would like to be able to save/favorite recipes to recall later or request specific meal recipes."
"As a picky eater, I would like an adequate number of recipes according to the specifications I request."
"When preparing meals, I would like to be able to use voice interaction to get recipes so that my hands are free."
I sketched out several sample dialogs (some overlapped) based on the user stories. This helped me get a better idea of the complexity of the skill and its requirements. Below is an example ("U" is user and "S" is system).
U: Can you look up a recipe for me?
S: Sure, what recipe would you like?
S: [Waits for a response]
S: I can search by meal, dish name, or ingredients. You can also request saved recipes. What would you like?
U: By meal.
S: Sure, would you like a recipe for breakfast, lunch, dinner, or a snack?
U: Breakfast.
S: I have a breakfast recipe for a microwave-poached egg with pepper and herbs. Would you like to try that one, or hear more options? You can also specify ingredients.
U: I would like to hear more options.
S: (Alt 1) Sure, I found three recipes: an oatmeal recipe, an omelette recipe, and a crepe recipe. Which one would you like? You can say "more" to hear more options.
S: (Alt 2) I found three recipes: an oatmeal recipe, an omelette recipe, and a crepe recipe. Which one would you like? Or say "more" to hear more options.
U: The oatmeal recipe.
S: Okay, would you like to start the recipe, or send it to your phone?
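The sample dialog above can be read as a small state machine: each system turn is a state, and the user's utterance selects the transition. The sketch below is purely illustrative (simplified keyword matching rather than real Alexa intent handling), with hypothetical state names.

```python
# Hypothetical state machine for the sample dialog above.
# Real skills use intents and slots; substring matching stands in here.

def handle(state, utterance):
    """Return (next_state, system_response) for a user utterance."""
    utterance = utterance.lower()
    if state == "start" and "recipe" in utterance:
        return ("choose_method", "Sure, what recipe would you like? "
                "I can search by meal, dish name, or ingredients.")
    if state == "choose_method" and "meal" in utterance:
        return ("choose_meal", "Sure, would you like a recipe for "
                "breakfast, lunch, dinner, or a snack?")
    if state == "choose_meal" and "breakfast" in utterance:
        return ("offer_recipe", "I have a breakfast recipe for a "
                "microwave-poached egg with pepper and herbs. Would you "
                "like to try that one, or hear more options?")
    # Unrecognized input: stay in place and reprompt with options.
    return (state, "Sorry, I didn't catch that. You can ask me to "
            "look up a recipe.")
```

Note that an unrecognized utterance keeps the user in the same state and reprompts, which is the error-handling convention mentioned earlier.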
From the sample dialogs and user flows, I created initial comprehensive voice scripts.
To account for repeat interactions, I added novice and tapered prompts. I also used a proactive information strategy to anticipate what the user would most likely want to access in an interaction, a surface-user-data strategy to personalize the interaction, and contextual prompts to make the interaction feel more thoughtful and human.
With the voice script developed, I conducted usability tests with six participants. To do this, I wrote a test plan and a test script that included a briefing and debriefing. Before conducting the tests, I ran through the script and made adjustments and fixes where I identified issues.
I collated and summarized the test findings in a test report. For each issue identified, I ranked it in terms of priority (low, medium, or high), provided evidence for the issue, and suggested changes to resolve it. I implemented most of the suggested changes, and made a note of recommended changes that would require longer-term implementation or a complicated development process.
While the aim of the design was to make an enjoyable experience that minimized stress, the current state of AI and natural language understanding/generation limits how natural and realistic voice interaction can be. Depending on how familiar a user is with VUI in general, and with Alexa in particular, the initial interaction may not be as easy as intended. None of the usability test participants had extended previous experience with Alexa, and both expectations and ease of interaction varied greatly between participants. This can reduce user satisfaction, which can lead users to abandon further interaction. I considered the quirks of voice design to anticipate and mitigate this issue, with further improvements based on the usability testing findings. However, I realized through this project that learnability is a key challenge in voice design, especially for a technology that has not yet been widely adopted and has only a few players in the space, each with its own conventions and capabilities.
Alexa has several constraints, including timing out after a few seconds if the user provides no voice input. It is therefore not currently the ideal system for something like a recipe skill, which requires multiple steps and pauses while waiting for the user to be ready to move on to the next step.
In a typical setting, designers would work with a developer who would be able to implement the designs and flows with code. For this educational design scenario, I worked with a basic skill code and added to it with my designs. Because these additional features were not implemented in the code, I used Wizard of Oz tests to conduct user research.
Quick Chef was a design scenario for a specific use case: finding and completing recipes through voice. I leveraged Alexa to design and create a proof-of-concept, low-fidelity prototype. To implement it fully, I would need to use other tools or systems due to Alexa's limitations.
This was a challenging but very educational project. It was more complex than the previous UX projects I've worked on because it required more focus on flow and logic. It also differed from previous projects in that, to keep the voice skill usable, it needed to be simple and to constrain what the user could do (for example, offering only three recipes at a time rather than reading out every recipe that matched the user's request). The feature set was therefore limited, and again the focus was on logic, understandability, and economy.
Throughout the Quick Chef design process, I became more aware of the particularities of voice interaction and design, and this challenging process helped me hone the skill into a more robust voice user interface. Looking forward, I hope that as designers continue to approach voice interaction from a user-centered perspective, VUIs will keep improving, offer better user experiences and value, and become more widely adopted for a variety of use cases.