3M Health Information Systems

The contents of this project are subject to an NDA with 3M. Images have been blurred to protect confidentiality.

HOW MIGHT WE INCREASE EFFICIENCY IN MEDICAL RECORD PROCESSING WHILE SUPPORTING THE REVIEWER?
Project Overview

I conducted a moderated usability study to understand how 3 design variations performed against the design's objectives.

As a result of the learnings from this study, our team was able to pull the successful aspects from each of the 3 designs to create an effective and well-received final solution.

Skills Developed:

  • Quantifying qualitative goals and study results

  • Testing design variables against qualitative goals

  • Understanding why certain variations performed better or worse to inform future design iterations

Initial Problem Discovery and Framing

This project was kicked off by observing Clinical Documentation Specialists' (CDI) workflows to understand their process and what is important at each moment in their day.

This observation helped us understand the goals that our design should achieve.

After conducting observations, we did a round of qualitative interviews to understand CDI frustrations, inefficiencies, and mindsets around their current tools.

As we made design changes based on user feedback, I conducted several rounds of moderated usability studies with qualitative reflection to understand how we could refine the design and make it more intuitive and effective at helping users streamline their workflow.

Insights from these usability tests were organized into a 3x3 matrix, mapped by scale of impact against the amount of additional design work required to resolve them. This helped us identify the highest-priority items that we could resolve fastest to get started.
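As a rough sketch of how insights could be bucketed into such a matrix, assuming hypothetical issue labels and 1-3 ratings for impact and effort (none of these values come from the actual study):

```python
from collections import defaultdict

# Hypothetical usability insights, each rated 1-3 for scale of impact and for
# the design work needed to resolve it (labels and ratings are illustrative,
# not from the actual study).
insights = [
    {"issue": "indicator is easy to miss", "impact": 3, "effort": 1},
    {"issue": "label wording is unclear", "impact": 2, "effort": 1},
    {"issue": "layout needs a larger rework", "impact": 3, "effort": 3},
]

# Bucket each insight into a 3x3 matrix keyed by (impact, effort).
matrix = defaultdict(list)
for item in insights:
    matrix[(item["impact"], item["effort"])].append(item["issue"])

# High-impact, low-effort cells hold the highest-priority, fastest wins.
print(matrix[(3, 1)])
```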

One opportunity discovered from this work was to redesign a particular data indicator.

Based on user feedback and exploratory research in the industry, we brainstormed many different ideas to discuss and narrow down.

After discussions and iterations, we narrowed down to 2 sets of 3 design variations, resulting in 9 independent design versions that needed to be tested against one another and against a control.

Usability Testing

The usability study was set up to reduce measurement bias while testing all 9 designs across 8 participants.

Each participant tested one variable at a time over the course of 4 tests. By averaging results across all participants, we could see the impact of each design variation.
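To illustrate how that per-participant randomization might be scripted, here is a minimal sketch in Python using hypothetical design labels and a hypothetical build_session_plan helper (the real designs are under NDA); the order shuffling matches the protocol described under Test 1 below.

```python
import random

# Hypothetical labels for one set of design variations plus the control;
# the actual designs are under NDA.
DESIGNS = ["variation_1", "variation_2", "variation_3", "control"]
PARTICIPANTS = [f"P{i}" for i in range(1, 9)]  # 8 participants


def build_session_plan(seed=7):
    """Shuffle the presentation order of the designs independently for each
    participant so that no single order dominates (reducing order bias)."""
    rng = random.Random(seed)
    return {p: rng.sample(DESIGNS, k=len(DESIGNS)) for p in PARTICIPANTS}


if __name__ == "__main__":
    for participant, order in build_session_plan().items():
        print(participant, order)
```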

Each 30-minute interview was structured as follows:

  • 5-minute orientation, introducing the participant to the context and giving them a chance to become comfortable with the testing tool (Miro)

  • Test 1: Activity where the user performed a single action across the series of 4 design variations. This test informed us of each design's impact on Objective #1.

    • All other elements were held constant while a single variable changed, allowing us to test that variable independently. The order of the designs was shuffled for each participant to reduce order bias. Time to complete the action, action accuracy, and user preference/perception were recorded.

  • Test 2: Activity to inform us of each design's effectiveness toward Objective #2.

    • Time, preference, and accuracy were recorded.

  • Test 3: Activity to inform us of each design's effectiveness toward Objective #3.

    • Time, preference, and accuracy were recorded.

  • Test 4: A qualitative interaction in which the user explored an interactive web page and provided feedback as they went. Accuracy and user preference were considered.

  • Final reflection: The user was shown the designs side by side to reflect qualitatively and provide their input on preferences, which led to a discussion of why those preferences existed.

Analysis

For analysis, each test was scored on accuracy, time to complete, and user preference.

Additional qualitative notes, including discussions of why users held a particular perspective, were collected and synthesized to inform design iterations beyond the quantitative results.

A grid quantifying the results of each test across each measure of success was created for each participant, summarizing that participant's session.

From this grid, a quantitative average for each design variation could be calculated and compared to understand the differences in the designs' performance.
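A minimal sketch of how such a grid might be averaged per design variation, assuming a hypothetical long-format table in pandas with illustrative column names and values (the real data is under NDA):

```python
import pandas as pd

# Hypothetical long-format results: one row per participant x design x test.
# Column names and values are illustrative only; the real data is under NDA.
results = pd.DataFrame(
    {
        "participant": ["P1", "P1", "P2", "P2"],
        "design": ["variation_1", "variation_2", "variation_1", "variation_2"],
        "accuracy": [0.9, 0.7, 1.0, 0.8],        # fraction of correct actions
        "time_sec": [14.2, 21.5, 12.8, 19.0],    # time to complete the action
        "preference": [4, 3, 5, 2],              # 1-5 self-reported rating
    }
)

# Average each success measure per design variation across all participants.
per_design = results.groupby("design")[["accuracy", "time_sec", "preference"]].mean()
print(per_design)
```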

By averaging results across all participants, an unbiased quantitative result for each design could be reached.

This allowed us to understand which design variations were most and least successful in accomplishing our goals. From here, we could reference the qualitative notes to understand why this was the case.

Here you can see a zoomed-in example of what this analysis looked like after aggregating the data values. The cells in red show areas in which a design underperformed. With this quick visual guide, we could see across all designs which variations were effective and which were not.
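As a sketch of how that red-cell flagging could be reproduced, assuming averaged per-design scores like those above and a hypothetical rule that a cell underperforms when it is worse than the cross-design mean for its measure (with time treated as lower-is-better):

```python
import pandas as pd

# Hypothetical averaged scores per design variation; higher is better for
# accuracy and preference, lower is better for time. Values are illustrative.
per_design = pd.DataFrame(
    {
        "accuracy": [0.95, 0.75, 0.60],
        "time_sec": [13.5, 20.2, 25.1],
        "preference": [4.5, 2.5, 3.0],
    },
    index=["variation_1", "variation_2", "variation_3"],
)

# A cell "underperforms" (would be shaded red) when it is worse than the
# cross-design mean for that measure.
means = per_design.mean()
underperformed = pd.DataFrame(index=per_design.index)
underperformed["accuracy"] = per_design["accuracy"] < means["accuracy"]
underperformed["preference"] = per_design["preference"] < means["preference"]
underperformed["time_sec"] = per_design["time_sec"] > means["time_sec"]

print(underperformed)  # True marks a cell that would be flagged red
```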

From the qualitative feedback, we could understand why certain design elements may have made a design perform better or worse. These insights were used to shape the next design iteration.