In my previous role at a financial services company, I led a project aimed at digitizing analog processes to improve the user experience for our life insurance products. We focused on enhancing the Declaration of Insurability process, but the challenge was that we had limited knowledge about the specific user pain points and were working on many assumptions.
As the primary researcher, I initiated the project by conducting a series of in-depth interviews with recent life insurance customers. The goal was to uncover what worked well, what didn’t, and what they wished had been different in their application experience. These insights laid the foundation for our user-centered approach.
Based on the interviews, I collaborated with the design team to create a comprehensive journey map. This visual breakdown detailed the phases of the insurance application process and the specific steps within each phase. To bring the user perspective to life, I embedded key quotes and video clips directly from the interviews, which enriched our understanding of user needs at each stage of the journey.
While the qualitative insights provided valuable depth, I knew we needed a broader understanding of the scale of these challenges. To complement the interviews, I designed and ran a quantitative survey that filled in the gaps. The survey allowed us to gauge the scope of the identified opportunities, enabling us to quantify user preferences and pain points.
The combination of qualitative and quantitative research gave our team a much deeper understanding of our users’ needs. With these insights, we established a clear “north star” that guided both product and design decisions. This user-centered data became instrumental in shaping our product roadmaps and future development plans, ensuring that our decisions were driven by real user needs, not just business assumptions.
Deliverables:
Journey map – built iteratively through multiple rounds of research
Survey results – multiple rounds of surveys to help inform long-term strategy and immediate decision making
When we began this project, the customer care team had little insight into why people were calling in to the CARE center. We needed to understand:
the top call drivers,
whether issues required agent touch or could be self-serviced,
and why tNPS scores were unsatisfactory.
Initial exploration
Examining the data, we found that 75% of the issues fell into just three categories (Account Set Up, Technical Issue, or Other), indicating either that the categories were not specific enough or that agents were too fatigued to categorize accurately in the short time available. The only categorization was done by agents in a 30-second window allotted after each call, and the list of potential issues they could choose from was extremely long and often not specific to the call they had just handled.
Because of these confounds, it was impossible to accurately measure call volume by specific issues.
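As a rough illustration of the kind of category-share check this involved, here is a minimal sketch assuming a simple exported call log (the field names, categories, and rows below are hypothetical, not the care platform's actual data):

```python
from collections import Counter

# Hypothetical post-call categorization records; the fields and values
# are illustrative, not the care platform's real export schema.
calls = [
    {"id": 1, "category": "Account Set Up"},
    {"id": 2, "category": "Technical Issue"},
    {"id": 3, "category": "Other"},
    {"id": 4, "category": "Billing"},
    # ...thousands more rows in practice
]

counts = Counter(call["category"] for call in calls)
total = sum(counts.values())

# Share of calls landing in the three catch-all categories.
catch_all = {"Account Set Up", "Technical Issue", "Other"}
catch_all_share = sum(counts[c] for c in catch_all) / total
print(f"Catch-all share: {catch_all_share:.0%}")

# Full breakdown, largest first, to see how concentrated the labels are.
for category, count in counts.most_common():
    print(f"{category}: {count / total:.0%}")
```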
We had little insight into which issues absolutely required agent assistance versus those customers could resolve themselves. As a result, support articles were built for issues that ultimately still required a call: customers would read through the articles first instead of being directed to call immediately, increasing their overall effort and frustration.
Finally, there were problems routing customers to the correct channels: customers sent multiple emails about an issue before finally being directed by an agent to call the support line. This led to higher operating costs and lower tNPS scores.
Methods
We set up a multi-pronged approach to identify problems and deliver recommendations to the teams responsible for each one.
First Phase: Customer Listening
We began with auditory observation of customer care calls to build context.
The purpose was to create logical categories for sorting incoming calls, which would in turn inform navigation on the self-help pages. We used the largest Business Unit to scaffold these issues and identify who was most likely to have them.
Listening to 350 customers, we identified customer segments by role: administrators, billing administrators, event organizers, attendees, and prospective customers.
Listening to hours of customer care calls also led to unexpected insight into the agents’ process – there were many instances of agents:
using workarounds,
running into problems that should have been easy to fix,
and generally experiencing delays in time to resolution.
From this, we planned a follow-up project to understand the agent experience through true contextual inquiry.
Second Phase: Issue Gathering, Affinity Mapping, Card Sorting
The purpose of this phase was to build an exhaustive list of the issues that agents handle.
We held an issue-gathering workshop with groups of agents from the different Business Units. We asked the agents, working as a group, to affinity map the issues into categories they determined to be essential.
To discover the hierarchy of information, we ran a card sort with individual agents from two of the Business Units (N = 12, n = 6). We also asked agents to sort issues as “self-serviceable” or “agent touch.”
We ran the same exercise with the Partner Management teams from all three Business Units and compared the information and issues listed across all groups. In addition, the Partner Management teams ran an issue-to-effort sort: issues were sorted by perceived level of effort on the X axis and by whether the issue was self-serviceable or required agent touch on the Y axis.
We uncovered low-effort agent-touch issues that we could expose to customers for self-service, as well as high-effort customer-facing issues around which we could provide more targeted support and education on the support website.
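As a sketch of how the two-axis sort could translate into follow-up actions, assuming a simple effort scale and cutoff (the issue names, scale, and threshold below are illustrative, not the actual workshop output):

```python
from dataclasses import dataclass

@dataclass
class Issue:
    name: str
    effort: int              # perceived level of effort, e.g. 1 (low) to 5 (high)
    self_serviceable: bool   # True if customers could resolve it themselves

def recommend_action(issue: Issue, effort_threshold: int = 3) -> str:
    """Map an issue's quadrant to a follow-up action.

    Low-effort agent-touch issues are candidates to expose for self-service;
    high-effort customer-facing issues are candidates for targeted support
    content. The threshold is illustrative.
    """
    if not issue.self_serviceable and issue.effort < effort_threshold:
        return "expose for self-service"
    if issue.self_serviceable and issue.effort >= effort_threshold:
        return "add targeted support content"
    return "keep current handling"

# Hypothetical examples of sorted issues.
for issue in [
    Issue("update billing contact", effort=1, self_serviceable=False),
    Issue("migrate an event to a new product", effort=5, self_serviceable=True),
]:
    print(issue.name, "->", recommend_action(issue))
```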
Third Phase: The Issue Mapping Spreadsheets
This phase allowed us to organize content creation around an issue-first strategy for the support site by uncovering which issues did not yet have support content.
Drawing on data from past projects, Gartner’s industry best practices, and our own self-help page content, we built a spreadsheet of all the issues gathered. Each issue was categorized as self-service or agent touch, tagged with its level of effort and the resolution job it fell into, and linked to any self-help article we had on the topic.
*note – each of the 21 products had its own spreadsheet.
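For illustration, here is a minimal sketch of what one row of a product’s spreadsheet captured, assuming the columns described above (the schema names and sample issue are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IssueRow:
    """One row of a product's issue-mapping spreadsheet (illustrative schema)."""
    issue: str
    service_type: str                        # "self-service" or "agent touch"
    effort: str                              # e.g. "low", "medium", "high"
    resolution_job: str                      # the resolution job the issue falls into
    help_article_url: Optional[str] = None   # existing self-help content, if any

# A hypothetical row; a missing help_article_url flagged an issue needing new content.
row = IssueRow(
    issue="reset administrator password",
    service_type="self-service",
    effort="low",
    resolution_job="account access",
    help_article_url=None,
)
print(row)
```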
Currently these spreadsheets are being used as a content map for BOLD360ai (the chatbots for each product) and Clarabridge modeling (our NLP that looks at call transcripts to identify customer issues).