Digitizing Life Insurance Application Processes: Mapping out the user journey

In my previous role at a financial services company, I led a project aimed at digitizing analog processes to improve the user experience for our life insurance products. We focused on enhancing the Declaration of Insurability process, but the challenge was that we had limited knowledge about the specific user pain points and were working on many assumptions.

As the primary researcher, I initiated the project by conducting a series of in-depth interviews with recent life insurance customers. The goal was to uncover what worked well, what didn’t, and what they wished had been different in their application experience. These insights laid the foundation for our user-centered approach.

Based on the interviews, I collaborated with the design team to create a comprehensive journey map. This visual breakdown detailed the phases of the insurance application process and the specific steps within each phase. To bring the user perspective to life, I embedded key quotes and video clips directly from the interviews, which enriched our understanding of user needs at each stage of the journey.

Initial qualitative journey map

While the qualitative insights provided valuable depth, I knew we needed a broader understanding of the scale of these challenges. To complement the interviews, I designed and ran a quantitative survey that filled in the gaps. The survey allowed us to gauge the scope of the identified opportunities, enabling us to quantify user preferences and pain points.

Journey map with quantitative supporting data

The combination of qualitative and quantitative research gave our team a much deeper understanding of our users’ needs. With these insights, we established a clear “north star” that guided both product and design decisions. This user-centered data became instrumental in shaping our product roadmaps and future development plans, ensuring that our decisions were driven by real user needs, not just business assumptions.

Deliverables:

  • Journey map – built iteratively through multiple rounds of research
  • Survey results – multiple rounds of surveys to inform both long-term strategy and immediate decision making

Conversational Design – GoToWebinar

Defining the problem

As an organization, we field many product questions through our customer support channels. This is a painful process for our customers and an expense for the business.

A trend we noticed in the data was that many of these support requests were self-serviceable.

We had support content on our self-help page, so we knew the answers were out there. Our main question was: why weren’t customers finding the answers for themselves?

Hypothesis: Open Search vs Conversational Guide

Our current self-help environment requires the customer to type their question or query into a text box and hope that it matches an article in our knowledge base.

This creates two obstacles for our customers.

First, they have to figure out how to articulate what they need help with in an open-ended way.

Second, they then have to match the concept in their head to technical or company jargon that may be unfamiliar to them, as the toy example below illustrates.
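To make that second obstacle concrete, here is a toy sketch of the jargon mismatch; the article titles and customer query are invented for illustration, and real site search is more sophisticated than this:

```python
# A toy illustration of the jargon-mismatch problem with open search:
# naive keyword matching misses articles whose titles use company terms
# the customer doesn't know.

ARTICLES = [
    "Requesting a refund for a duplicate charge",
    "Configuring organizer settings for breakout sessions",
]

def keyword_search(query: str, articles: list[str]) -> list[str]:
    """Return articles sharing at least one word with the query."""
    words = set(query.lower().split())
    return [a for a in articles if words & set(a.lower().split())]

# The customer's own phrasing shares no words with the relevant article,
# so the search returns nothing even though the answer exists.
print(keyword_search("I want my money back", ARTICLES))  # -> []
```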

Convincing The Team

I proposed a conversational chat that opens the interaction by asking the customer to choose from a constrained set of topics, each of which triggers a follow-up question for clarification.
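To show the shape of that interaction, here is a minimal sketch of a guided flow; the topics, wording, and article slugs are hypothetical placeholders, not the production bot:

```python
# A minimal sketch of a guided conversational flow: the customer only ever
# picks from short lists, so they never have to phrase the problem themselves.
# All topics, questions, and article slugs below are invented for illustration.

FLOWS = {
    "Billing": {
        "question": "Which of these best describes your billing issue?",
        "answers": {
            "I was charged twice": "kb/duplicate-charge",
            "I want to cancel my plan": "kb/cancel-subscription",
        },
    },
    "Joining a session": {
        "question": "Where are you getting stuck?",
        "answers": {
            "The join link doesn't work": "kb/join-link-troubleshooting",
            "My audio isn't connecting": "kb/audio-setup",
        },
    },
}

def prompt_choice(options: list[str]) -> str:
    """Print a numbered menu and return the chosen option."""
    for i, option in enumerate(options, 1):
        print(f"  {i}. {option}")
    return options[int(input("> ")) - 1]

def run_flow() -> str:
    """Walk the customer from topic to clarifying question to an article."""
    print("What can we help you with today?")
    flow = FLOWS[prompt_choice(list(FLOWS))]
    print(flow["question"])
    return flow["answers"][prompt_choice(list(flow["answers"]))]

if __name__ == "__main__":
    print("Suggested article:", run_flow())
```

The key design choice is the constrained opening: every turn narrows the problem space instead of asking the customer to articulate it in an open-ended way.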

I had previously created several flows using data I had collected from listening to customer care calls and organizing those calls into themes and trends.

I built out a prototype, linked here, to share the concept with the team.

Image of chat bot prototype
Chat bot first opens
Image of chat bot prototype
Chat bot follow-up question

Impact: measuring success

We decided to compare the performance of our guided conversational chat bot to our support site. The results were as follows:

  • There was 2.8x more conversational traffic compared to the self-help page
  • Users were half as likely to escalate to a human-assisted interaction

In other words, more users were using our in-product chat bot than our help page, and chat bot users needed to escalate to a live representative only half as often as users of our support page!

Understanding the Customer Support Experience

Image of a telephone support office with rows of desks

M. Fraser, Sr. UX Researcher

J. Swartz, Assoc UX Researcher

R. Zelaya, CX Researcher

Agents, as a function of CARE, were seen as a pain point in the following ways:

  • Financially, as an expense to the business
  • Time to resolve cases was higher than average
  • tNPS scores were unsatisfactory

One suggested solution was to redesign the agent dashboard, with a list of features, possible designs and improvements, and better ways of understanding the process as the potential outcomes. Since we had very little knowledge of our care agents’ day-to-day needs and work, we designed a research project to build that knowledge.

Initial exploration

Reviewing the care rep performance data uncovered unexpectedly long times both during and after calls. We had expected the average call to run 5-10 minutes, but the average was 12 minutes or more. In addition, resolution rates were lower than expected, suggesting customers were making repeat calls to resolve their issues and driving call volume higher than expected.

Methods

We used a primary and a secondary methodology to identify what was driving these long handle times.

The primary methodology was contextual: we shadowed and interviewed agents as they answered calls, attempting to understand their normal workflow and the pain points associated with the process, and we investigated the tools, dashboards, and systems used to handle calls. The end goal was to quantify product issues and customer-facing issues.

Care agent interview
A follow-up interview after a call shadowing session

The secondary methodology was a café study: we chatted with agents one-on-one over coffee about their day-to-day experiences, asking specifically about the pros and cons of working at the call center. This was done to understand the motives behind their process and work life. Taking agents out of the work environment, we asked about their daily lives, commuting, living situations, and so on, giving them space to vent about their jobs and lives without a manager’s presence influencing their answers. We wanted to understand their lives in order to streamline their workspace, giving them greater command of their dashboards and other tools.

Results

Affinity mapping revealed a few themes.

First, we noticed organizational processes that increased customer effort. For example, our partners in Costa Rica held a required managers’ meeting each morning for 15-20 minutes, yet managers were a required step in the refund-approval process. If a customer was unlucky enough to call during this window, they had to wait longer for help with a simple business process.

We also learned about frustrations with the tools agents were equipped with. For security reasons, agents weren’t allowed to download or save files, so if a customer wanted a report of their usage, a supervisor or manager had to download the file and email it to the customer. The same restriction meant agents couldn’t save notes from their calls for follow-ups. In short, agents were denied simple procedural behaviors that would have let them do their jobs quickly, lengthening time on task.

Image of an employee going through security

Additionally, we noticed that the work culture at the partner site led to distrust: regular security checks, the aforementioned security restrictions, and a lack of mobile devices (needed to troubleshoot mobile applications). We recommended renegotiating the contract with our nearshore partners, or returning to internal customer care teams.

Outcomes

Customer care operations are moving back in house. This affords greater control over tools and processes, so agents and customers are set up for a lower-effort experience.

Agents are now given the same permissions as supervisors and are trained to use their own judgment for refunds and other issues that would previously have required supervisor intervention.

Finally, a culture of trust is being fostered. Agents are now part of our company rather than external resources hired through a third-party contract. This creates a sense of belonging and trust in the representatives who are on the front lines of our customer experience.

Understanding and Mapping Customer Pain Points

Picture of phone support floor

M. Fraser, Sr. UX Researcher

J. Swartz, Assoc UX Researcher

R. Zelaya, CX Researcher

Initial Situation

The Customer Care team had no insight into:

  • Why customers were calling support
  • Whether issues required agent touch or could be self-serviced
  • Why tNPS scores were unsatisfactory

When we began this project, the customer care team had little insight into why people were calling in to the CARE center. We needed to target the top call drivers.

Initial exploration

Examining the data, we found that 75% of issues fell into just three categories (Account Set Up, Technical Issue, or Other), indicating either that the categories were not specific enough or that agents were too fatigued to categorize accurately in the short time allowed. The only categorization was done by agents in a 30-second window after each call, and the list of potential issues agents had to choose from was extremely long and often not specific to what the agent was working on.

Because of these confounds, it was impossible to accurately measure call volume by specific issue.

We had little insight into which issues absolutely required agent assistance versus which issues customers could self-serve. Building support articles for issues that still required an agent meant customers would read through the articles first and eventually have to call in for an answer anyway, rather than being directed to call immediately, increasing their overall effort and frustration.

Finally, there were problems routing customers to the correct channels: customers would send multiple emails about an issue before finally being directed by an agent to call the support line. This led to higher operating costs and lower tNPS scores.

Methods

We set up a multi-pronged approach to identify problems and deliver recommendations to the teams responsible for each one.

First Phase: Customer Listening

Image of two researchers, Jessica Swartz and Rene Zelaya, shadowing an agent as they take a call.
Call shadowing as part of our call listening project.

We began with auditory observation in order to gain context.

The purpose was to create logical categories for sorting incoming calls, which would in turn shape navigation on the self-help pages. We used the largest Business Unit to scaffold these issues and to identify who was most likely to have them.

Listening to 350 customers, we identified customer segments by role: administrators, billing administrators, event organizers, attendees, and prospective customers.

Listening to hours of customer care calls also led to unexpected insight into the agents’ process. There were many instances of agents:

  • using workarounds,
  • running into problems that should have been easy to fix,
  • and generally delaying the time to resolution.

From this project we iterated and planned a follow-up study to understand the agent experience through true contextual inquiry.

Second Phase: Issue Gathering, Affinity Mapping, Card Sorting

Image of Card sort in progress
Card sort with customer service reps organizing the issues we gathered into categories.

The purpose of this phase was to discover an exhaustive list of the issues that agents handle.

We held an issue-gathering workshop with groups of agents from the different business units, asking the agents to affinity map the issues as a group into the categories they determined were essential.

Image of agents creating affinity map
Researcher Rene Zelaya observing agents as they create and organize issues.

To discover the hierarchy of information, we ran a card sort with individual agents from two of the Business Units (N = 12, n = 6 per unit). We also asked agents to sort issues as “self-serviceable” or “agent touch.”

Researcher Jessica Swartz leads a brainstorming session for the affinity mapping workshop.

We ran the same exercise with the Partner Management teams from all three Business Units and compared the information and issues listed across all groups. In addition, the Partner Management teams ran an issue-to-effort sort: issues were placed by perceived level of effort on the X axis and by whether the issue was self-serviceable or required agent touch on the Y axis.

Issue Mapping with subject matter experts

We uncovered low-effort agent-touch issues that we could expose to customers for self-service. We also discovered high-effort customer-facing issues around which we could provide more targeted support and education on the support website.
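As an illustration of how the sort translates into recommendations, here is a small sketch of the 2x2 as a data exercise; the issue names, effort scores, and touch labels are hypothetical, not the actual workshop data:

```python
# A sketch of the issue-to-effort sort: effort on the X axis, agent touch
# vs. self-serviceable on the Y axis. All issues and scores are invented.

from dataclasses import dataclass

@dataclass
class Issue:
    name: str
    effort: int        # perceived effort, 1 (low) to 5 (high) -- X axis
    agent_touch: bool  # True if the issue currently requires an agent -- Y axis

ISSUES = [
    Issue("Resend a receipt", effort=1, agent_touch=True),
    Issue("Update a billing address", effort=2, agent_touch=True),
    Issue("Migrate an account between plans", effort=5, agent_touch=False),
]

def recommendation(issue: Issue, effort_cutoff: int = 3) -> str:
    """Map an issue's quadrant to the actions described above."""
    if issue.agent_touch and issue.effort < effort_cutoff:
        return "expose to customers for self-service"  # low effort, agent touch
    if not issue.agent_touch and issue.effort >= effort_cutoff:
        return "add targeted support content"          # high effort, customer-facing
    return "no change recommended"

for issue in ISSUES:
    print(f"{issue.name}: {recommendation(issue)}")
```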

Third Phase: The Issue Mapping Spreadsheets

This phase allowed us to organize content creation around an issue-first strategy for the support site by uncovering which issues didn’t yet have support content.

Drawing on data from past projects, Gartner’s industry best practices, and our own content from the self-help page, we built a spreadsheet of all the issues gathered, categorized by self-service versus agent touch, level of effort, the resolution job the issue fell into, and links to any self-help article we had on the topic.

*Note – each of the 21 products had its own spreadsheet.

Currently, these spreadsheets are being used as a content map for BOLD360ai (the chatbots for each product) and for Clarabridge modeling (our NLP tooling that analyzes call transcripts to identify customer issues).
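For a sense of the spreadsheets’ shape, here is one row sketched as a typed record; the field names and example values are assumptions based on the categories described above, not the actual schema:

```python
# A sketch of one row in an issue-mapping spreadsheet. Field names and the
# example values are illustrative assumptions, not the real spreadsheet.

from typing import Optional, TypedDict

class IssueRow(TypedDict):
    issue: str                  # issue as gathered from agents
    self_serviceable: bool      # False means the issue requires agent touch
    effort: str                 # perceived level of effort, e.g. "low" / "high"
    resolution_job: str         # the resolution job the issue falls into
    article_url: Optional[str]  # link to existing self-help content, if any

example: IssueRow = {
    "issue": "Attendee can't join a session",
    "self_serviceable": True,
    "effort": "low",
    "resolution_job": "Joining and attending",
    "article_url": "https://support.example.com/join-session",  # placeholder
}

# Rows where article_url is None mark the gaps: issues with no support
# content yet, which is what drove the issue-first content strategy.
```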

Clarifying password reset to reduce call volume.

Step 1: Defining the problem

As a level 1 care agent, I dealt with customer care calls about usability issues, whether technical or due to user error. There isn’t much we can do to reduce the number of technical issues besides reporting them, but one does notice where small, strategic changes to a user’s experience would have a sizeable impact on reducing call volume, and thus the cost of customer care.

Here is what our password reset looked like:
Enter a new password
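Judging from the prompt, the screen offered little guidance about what a valid password must contain. As a sketch of the kind of small change involved, surfacing the rules up front and checking them as the user types might look like this (the rules here are assumed for illustration, not the product’s actual requirements):

```python
# A sketch of surfacing password rules up front instead of rejecting the
# user after submission. The specific rules are assumptions for illustration.

import re

RULES = [
    ("at least 8 characters", lambda pw: len(pw) >= 8),
    ("at least one number", lambda pw: bool(re.search(r"\d", pw))),
    ("at least one uppercase letter", lambda pw: bool(re.search(r"[A-Z]", pw))),
]

def unmet_rules(pw: str) -> list[str]:
    """Return the rules a candidate password does not yet satisfy."""
    return [label for label, test in RULES if not test(pw)]

# Listing every rule next to the input field lets the user succeed on the
# first try instead of calling support after an opaque rejection.
print("Your new password must have:")
for label, _ in RULES:
    print(f"  - {label}")

missing = unmet_rules("hunter2")
print("Still missing:", ", ".join(missing) if missing else "nothing")
```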

Continue reading “Clarifying password reset to reduce call volume.”

Napses Mobile

Napses was an awesome experience, and during my time with them I was driven to fully explore my curiosity about UX. I first joined them the summer after I graduated from college. We had all been a little disappointed by the technology used as a classroom CMS during our final years at school and felt there could be a better way of managing a college course.
Continue reading “Napses Mobile”