Microsoft Accessibility Insights
Boosted the efficiency of novice accessibility programmers by 55%
Overview
The Problem
Microsoft's Accessibility Insights tool was difficult for novice users to understand.
The Goal
Improving learnability and efficiency for novice users
We conducted a learnability study with novice users that simulated real usage frequency, and evaluated the designs with experts, novices, and disabled individuals.
Redesigned interfaces helped novice users report accessibility failures 55% faster
Role
UX Researcher (Qualitative)
UX Designer
UX Writer
Team
Product Manager from Microsoft
UX Researchers
UX Designer
Duration
6 months
What makes this project special?
We didn't just design for novice users; we also designed for and tested with disabled individuals. We wanted to ensure the product itself was accessible and empowered disabled developers in their workflows.
Three iterations and critiques later…
The Plugin Launcher
Standardised the nomenclature of features
Introduced tooltips for additional context
Enhanced visibility of 'New to accessibility testing?'

The Overview Page
Introduced a data analytics section to indicate progress
Improved button copy and hierarchy
Introduced sorting and filtering options for ease of use


The Navigation Bar
Alphabetised tests
Introduced search, sorting and filtering options
Introduced a second-level navigation to show the status of sub-tests

How we got here
Product teardown
Broke down the entire product flow in FigJam
Critiqued every screen and flow
Conducted a competitive study of other products' flows

Research planning
Conducted a deep dive into the research areas of interest
Collated data and information needs and chose appropriate methods

The idea was to understand how novice users learned the platform's functionality over time and to improve their learning curve.
Understanding learning patterns
3 tasks based on 3 common novice user behaviours
Ran 3 trials per participant with a gap of 1-2 days (average gap for early usage)
Measured time on task and number of errors per trial (see the sketch below)
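To make the measurement concrete, here is a minimal sketch of how the per-trial averages could be aggregated. The record shape and field names are hypothetical, for illustration only, not our actual logging format.

```typescript
// Hypothetical shape of one logged task attempt; field names are illustrative.
interface TrialRecord {
  participantId: string;
  trial: number;          // 1, 2, or 3; trials were spaced 1-2 days apart
  timeOnTaskSec: number;  // seconds from task start to completion
  errors: number;         // count of mis-steps observed during the attempt
}

// Mean time on task and mean error count per trial (the learning-curve points).
function learningCurve(records: TrialRecord[]) {
  const byTrial = new Map<number, { time: number; errors: number; n: number }>();
  for (const r of records) {
    const acc = byTrial.get(r.trial) ?? { time: 0, errors: 0, n: 0 };
    acc.time += r.timeOnTaskSec;
    acc.errors += r.errors;
    acc.n += 1;
    byTrial.set(r.trial, acc);
  }
  return Array.from(byTrial.entries())
    .sort(([a], [b]) => a - b)
    .map(([trial, { time, errors, n }]) => ({
      trial,
      meanTimeOnTaskSec: time / n,
      meanErrors: errors / n,
    }));
}
```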

How did users analyse the test results?
For the third task, we asked users to summarise the test results
The hypothesis was that users would use the overview page
Interestingly, most users started summarising in external tools such as the Notes app, Microsoft Word, and Google Sheets

Affinity mapping
Conducted a visual analysis of users' notes from Task 3
Plotted a curve of time on task vs number of errors

We found that
Users were generally able to perform all tasks by the third trial
Interestingly, the time on task for the third task increased as users got familiar with the tool

Scoping & Prioritising
We uncovered more than a dozen problems hampering learnability
Acting as product manager, I analysed the effort required for each and prioritised three areas to work on

Identifying Design Patterns Through Rapid Sketching
Based on the visual analysis of users' notes, I ideated predictable design patterns specific to novice user behaviour
We conducted design critique sessions with UX experts

Designing Information Hierarchy
I converted the sketches to mid-fidelity wireframes and conducted some early user testing
After three iterations, we had enough to begin the design phase

Testing with blind users
We conducted user testing with 3 visually impaired / legally blind users
Interesting insight: screen readers do not explain graphs well, and ARIA labels that are not descriptive are usually useless for graphs (see the sketch below)
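To make this concrete, here is a minimal sketch (hypothetical markup and helper, not Accessibility Insights' actual code) of the difference between a label a screen reader can't use and one it can:

```typescript
// Non-descriptive: a screen reader announces little more than "Chart, image".
//   <svg role="img" aria-label="Chart">...</svg>

// Descriptive: put the trend and key values a sighted user would read off the
// graph into the label itself.
function describeChart(svg: SVGSVGElement, summary: string): void {
  svg.setAttribute('role', 'img');         // expose the whole graphic as one image
  svg.setAttribute('aria-label', summary); // one-sentence summary of the data
}

// Usage with a hypothetical summary string:
// describeChart(chart, 'Bar chart of failures by test: colour contrast has the
// most failures (12); focus order has the fewest (2).');
```

When one sentence cannot carry the full data, a longer text alternative referenced via aria-describedby, or an equivalent data table, is a common complement.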

Crafting UX copy
Based on users' vocabulary and notes, I created UX writing guidelines
Clarified vocabulary and added guidelines on graph descriptions for accessibility

Overall, the redesign improved novice users' learnability
For onboarding, users’ average time on task went down by 93%
This increase in efficiency is also supported by qualitative data: users said the information provided about each test was comprehensive and clear.
For the task where users were asked to report failures from the overview page, the time on task went down by 55%
