Timing: 2020

Scale: 8 months (including securing buy-in for the project, opportunistic customer conversations, and multiple iterations of the redesign)

Overview: HOW I IDENTIFIED THE RIGHT PROJECT, GOT BUY-IN, AND DEMONSTRATED THE VALUE OF RESEARCH TO SKEPTICAL STAKEHOLDERS


GETTING BUY-IN

“I’ve worked on this product longer than you’ve been alive.”

An engineering lead responded skeptically to my proposal to research customer needs and simplify our complex technical product. As a newcomer, my focus on usability was perceived as naive; the prevailing belief was that customers prioritize our industry-leading features over ease of use.

Identifying the right project: stakeholders were personally affected

I wanted a project that would demonstrate the true value of research. I noticed that several very senior engineers were complaining about the same issue: customers were confused about the storage capacity of their systems, and the engineers had to spend a lot of time explaining it, taking time away from other important work. I started investigating this issue, thinking it would be a great project for aligning the skeptical engineers on research, since they were personally frustrated by the problem.

Identifying the cost of inaction

To build support for the project, I investigated existing internal data sources (support tickets, install base data) to see how widespread the issue was, and what it would cost external customers and internal employees if we didn't make improvements.

  • thousands of support tickets were opened every year on this exact topic

    • only a small fraction of those tickets were escalated to the engineering team

    • even more questions were fielded by the account teams before they ever reached support

  • several times a year, customers actually ran out of capacity and ended up in catastrophic situations (losing data, or having data become unavailable), which could result in tens of millions of dollars in damages

    • the quality team traditionally categorized these catastrophic events as “customer error” because they could not be traced back to a software or hardware bug

At this point, I knew I had a strong case for getting different teams to pay attention to this issue: impact on customers ($$$ in damages), impact internally (support cost and engineers' time), and a few key stakeholders who were personally annoyed by the problem. I made the argument for change not only on the basis of “making customers' lives easier” but also of “reducing cost”.

Aligning on goals

My immediate product team aligned with me on the goal of digging deeper to understand why capacity, specifically, was confusing, and implementing changes in the next release to improve the experience.

As a secondary goal, I aligned with the quality team on using this project to show how “customer error” issues can be mitigated with UX improvements.


RESEARCH

What information do customers need and why are they confused?

I conducted interviews with companies of varying sizes, and with users of different experience levels, about how they monitored and managed the capacity of their systems. It turned out that customers simply needed to know whether they should be worried about their capacity, and what to do if they did have to worry. These were simple, logical requests, yet the information we provided did not answer those questions directly. We provided metrics and indicators, and customers were either not proficient enough, or didn't have the time, to calculate their own answers. Without direct answers, customers sometimes didn't even know they were heading toward catastrophe.

What happens to all the alerts we send?

Even if customers were not getting information on whether capacity issues might arise in the near future, they should still have been getting alerts once they were in trouble. However, case data on timing showed that customers were not taking action to prevent these situations from getting worse. Investigation into this topic suggested that customers may have been experiencing alert fatigue, or didn't know what could be done once they received an alert.

How do we provide the information customers need?

I spent a lot of time with engineering experts to understand how capacity functioned structurally and architecturally within the product, and why the different metrics mattered from an engineering perspective. This understanding helped me map out a workflow for what customers needed at different stages, and what information we should provide to help them navigate and troubleshoot. The workflow got everyone on the same page, and helped me establish credibility with the skeptical engineers.

The workflow was then turned into wireframes and mockups that we tested with customers, support team members, and account team members. We were able to verify that the new design not only helped customers understand a complicated subject; the visuals also served as better instructional tools for internal team members who needed to explain these concepts to customers.


FROM FINDINGS TO IMPACT

Skeptics turned into allies

The engineering team and the product management team became my allies in this effort after working with me to develop the workflow that gives customers the right information and guidance at the right step. They were receptive to redesigning the entire flow, and open to directions for future improvements.

This was the first project in which this team conducted research, designed solutions, and iterated BEFORE any coding happened. The stakeholders were able to follow my mixed-methods process and understood where the feedback and data came from.

The previously skeptical engineers also saw my effort to understand the technical and architectural pieces of the workflow, and understood the perspective I was bringing. One of the engineers even worked with me to submit a patent application on the underlying mechanisms for displaying the right information to customers. 🥳

Product quality understanding evolved

The quality team was somewhat skeptical at first (“there is nothing we can do if they don't see the alerts”), but when I showed them the information customers actually needed (i.e., not “you are in trouble” but “this is what you need to do”) and where we fell short, they saw the need for change. Alert messaging and knowledge base articles were improved.

Demonstrating the improvements that engineering was planning to make also helped the quality team see how “customer error” cases can be mitigated with UX improvements. This made it easier for me to work on other types of “customer error” cases later on.

Impact across product lines

Over the course of this project, I became something of a subject matter expert on capacity. I was able to share knowledge about user needs and workflows, context for different types of use cases, and best practices across product lines to ensure consistent experiences were implemented.