I designed an AI-driven voice user interface with a head-mounted camera that helps blind users recognize objects and text independently.

My Role
UX Designer

Timeline
Mar - Aug 2020

Team
2 x UX Designers
1 x Researcher

Tools
Figma
Miro

My contribution

User Research
Designed and conducted research with users who are blind to understand their current assistive technology usage and shopping behavior. Synthesized insights, design principles, and design opportunities.

Interaction Design
Designed the conversational experience of the smart glasses, building sample dialogues and the VUI flow.

Prototype Testing
Designed the prototype and testing sessions to iteratively improve the experience, leveraging users' existing behaviors.

Demo Video
Scripted and directed a video to showcase the features and the scenarios in which Iris can bring value to users.



Product Demo Video scripted and co-directed by Weixi

— Design Challenge

How might we assist the blind in retrieving relevant product information through an independent experience?

Background

8.96 million blind and
visually impaired in 2050

1/50 people

According to the CDC, there are currently around 4.3 million people in the US with blindness or visual impairment, and this number is projected to double by 2050. In a world that is mostly designed BY and FOR sighted people, the blind often face daily obstacles in perceiving their environment, such as reading printed instructions, sorting mail, and differentiating medicine bottles.



— Problem

Usability Issues with
Existing Technology

Current computer vision technologies that aim to help people with blindness recognize text and objects, such as Seeing AI, come with usability issues: they require users to aim the camera at an object accurately and frame a clear picture for successful recognition, and most of them also require one or both hands to frame the picture while holding the object.


A user who is blind frames a picture with Seeing AI to read text

How it works

01. Simply hold an object
Iris aims to remove the restrictions of current technologies by allowing users to recognize objects simply by holding or pointing at an item with their hands.

02. Ask for desired information
Users can ask Iris for specific information about the item they are holding, for example, the ingredients of a sanitizer gel.

03. Receive information directly
Iris' sound system and haptic feedback let users ask for and receive information, and they can switch to headphones when they need privacy.
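To make this interaction model concrete, here is a minimal, hypothetical sketch of the request loop, holding an item, asking a question, and hearing the answer; every function and name below is an illustrative placeholder, not the actual Iris implementation.

```python
# Hypothetical sketch of the Iris interaction loop.
# All names are illustrative placeholders, not the real Iris system.

def recognize_frame(camera_frame: bytes) -> dict:
    """Stand-in for the vision model: returns known facts about the held item."""
    return {"name": "sanitizer gel", "ingredients": "ethyl alcohol 70%, glycerin"}

def speak(text: str, channel: str) -> None:
    """Stand-in for text-to-speech output on the chosen audio channel."""
    print(f"[{channel}] {text}")

def handle_request(question: str, camera_frame: bytes, private_mode: bool) -> str:
    item = recognize_frame(camera_frame)
    # Answer only the specific detail the user asked about, e.g. ingredients.
    key = "ingredients" if "ingredient" in question.lower() else "name"
    answer = item.get(key, f"Sorry, I couldn't find that on the {item['name']}.")
    # Deliver through headphones when privacy is requested, otherwise the speaker.
    speak(answer, channel="headphones" if private_mode else "speaker")
    return answer

# Example: asking about ingredients while holding the sanitizer gel.
handle_request("What are the ingredients?", camera_frame=b"", private_mode=False)
```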

RESEARCH PROCESS

Expert Interviews
Accessibility Research Tips

After doing a literature review, we gained a general understanding of how blind people use the Internet. With that knowledge, we spoke with experts in this field to expand our understanding of blind people’s online shopping behaviors.

The key takeaways from expert interviews are:

•  Blindness is a spectrum; narrow down the target user.
•  Don’t assume how the blind use technology.
•  Don’t exclude those who have lower tech proficiency.
•  Examine the current accessibility guidelines.
•  Start talking with the users NOW!

— Primary Research
Understanding
Assistive Technology Usage

9 Phone Interviews with Directed Storytelling
At first, we wanted to understand our users' online shopping journey by collecting first-hand data on blind users' past experiences, opinions, and attitudes through interviews. We also wanted to identify any advancements and frustrations within current technologies.

5 Remote Moderated Studies
Through Zoom screen sharing, we observed participants' online shopping behaviors, such as browsing and selecting both familiar and unfamiliar items. Our goal was to gain a deep understanding of blind people’s online shopping behaviors and the pain points of using the tools, and to spot any missing features that could ease their process of online shopping.

— Synthesize
Insights

Intentional Shopping For Efficiency
Most blind people are intentional shoppers who are not interested in browsing items without a purpose or taking websites' recommendations as a reference.

Familiarity and Recurring Items
Familiarity with items and websites during shopping is crucially important to blind customers.

Relevant and Specific Information
Blind users are specific about the product information that is relevant to them: they use screen readers to skim websites by reading only the key elements.

Interdependent with Assistive Technology
Most blind people like using VUI devices for short commands to complete everyday tasks, such as setting alarms and checking the time and the weather.

DESIGN PROCESS

— Voice UI Design
Dialogue Flow &
Sample Conversations

I designed the dialogue flows of the conversation: recognizing objects and text, identifying specific information from an object, saving items on the glasses for easy future recognition, and calling family and friends through the glasses. Possible error cases are also highlighted in these user journeys.


Detailed Dialogue Flow Design of Each Feature 🔍
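To give a sense of the structure behind these flows, below is a minimal, hypothetical sketch of one branch, object recognition with an error path; the states and prompts are illustrative and do not reproduce the exact dialogue we designed.

```python
# Minimal sketch of one dialogue-flow branch: object recognition with an error case.
# States and prompts are illustrative; the real flows also cover text reading,
# saved items, and calling family and friends.

FLOW = {
    "idle": {"prompt": None, "on_success": "answer", "on_error": "retry"},
    "retry": {"prompt": "I can't see the item clearly. Could you hold it a bit higher?",
              "on_success": "answer", "on_error": "fallback"},
    "fallback": {"prompt": "I'm still not sure. Would you like to call a family member instead?",
                 "on_success": "idle", "on_error": "idle"},
    "answer": {"prompt": "<speak the requested information>",
               "on_success": "idle", "on_error": "idle"},
}

def next_state(current: str, recognized: bool) -> str:
    """Advance the dialogue based on whether recognition succeeded."""
    step = FLOW[current]
    return step["on_success"] if recognized else step["on_error"]

# Example walk-through: the first attempt fails, the second succeeds.
state = "idle"
for recognized in (False, True):
    state = next_state(state, recognized)
    print(state, "->", FLOW[state]["prompt"])
```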

— Validation Survey
Use Cases Validation

In order to validate the scenarios in which people would like to use our product, we conducted an online survey that received 35 responses, to understand the importance of each use case and to explore scenarios they encounter that we might have missed.


35 survey responses helped solidify the use cases

Remote Prototype Testing

Our objective for testing was to understand the conversational and interactive experience rather than the ergonomic design of the wearable, which is why we chose to use a head mount to stabilize the smartphone on the participant.

We shipped the headset to 3 participants. We video-called them, and once they wore the headset with a phone, we could see the items they were holding.


Low-fi physical prototype of the head mount



We used Wizard of Oz techniques: I acted as Iris, following the dialogue flows we had prepared and speaking the information visible in the user's camera frame, while my teammates acted as facilitator and note taker. With that, we tested both success and error cases with the participants.


My team and I testing through a video call and a head-mounted smartphone, following the dialogue flow

Features & Iterations

— Main Features
1. Identify objects and short texts

Users simply hold or point at an item with their hands and ask for the desired information about the object. The camera on the glasses captures images to extract the information being asked for, providing an efficient experience with real-time feedback.

2. Read document

We leverage users’ existing habit of using a screen reader to jump between paragraphs. Users can tap or make a verbal request to skip or revisit any part of the document. They can also adjust Iris’ speaking pace to a comfortable level to read long documents faster or slower.

3. Fast Recognition & Shopping

With Iris, users can save an item under a specific name for faster recognition in the future, such as medicine bottles they use often. For staple items, they can also use Iris to recognize the item and add it to their linked shopping account.
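The saved-item behavior could work roughly like the sketch below, where a spoken label maps to a previously recognized product and staples can be re-ordered through a linked account; the matching logic and names here are assumptions for illustration only.

```python
# Illustrative sketch of saving items under a custom name and re-ordering staples.
# A simple label lookup stands in for visual matching, and `shopping_cart`
# stands in for the linked shopping account.
saved_items: dict[str, str] = {}   # spoken name -> recognized product id
shopping_cart: list[str] = []      # items queued on the linked shopping account

def save_item(custom_name: str, product_id: str) -> None:
    """Save a frequently used item (e.g. a medicine bottle) under a spoken name."""
    saved_items[custom_name.lower()] = product_id

def quick_recognize(custom_name: str) -> str | None:
    """Return the saved product immediately, skipping full recognition."""
    return saved_items.get(custom_name.lower())

def reorder(custom_name: str) -> None:
    """Add a saved staple item to the linked shopping account."""
    product = quick_recognize(custom_name)
    if product:
        shopping_cart.append(product)

save_item("my allergy medicine", "allergy-medicine-bottle")
reorder("my allergy medicine")
print(shopping_cart)   # ['allergy-medicine-bottle']
```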
