Research at CYB: creating user segments to make tailored hypotheses (part 1)
At Caution Your Blast Ltd (CYB), user research is central to every project we take on. Understanding everything from business goals to user needs is vital to building highly tailored, user-friendly products and services - and detailed, focussed user research is essential to that understanding.
Today we're starting a blog series delving into an area of research we've found crucial to this work - how we create data-validated user segments, and then use them to form tailored hypotheses. In other words, how we analyse qualitative and quantitative data to identify patterns, create user segments from those patterns, and use those segments to design tailored solutions that improve the service and, ultimately, the user experience.
Using our work with the Foreign, Commonwealth and Development Office (FCDO) on Emergency Travel Documents (ETD) as a case study, Katie John, our Head of Research, will expand on the topic next week.
But before that, CYB's associate Senior Performance Analyst John Drinkwater writes about the work that laid the foundations for that research - how we developed a resource planning capability that allows the FCDO to forecast and scale up ETD staffing during busy periods - and why building data capability is so important.
The background - what is an Emergency Travel Document?
CYB were tasked with working with the FCDO to improve the service for ETDs - single-use documents issued to British nationals abroad who have lost or don't have their UK passport, but have an urgent need to travel.
There are approximately 40,000 applicants every year - as you can imagine, there are untold ways in which people can lose their passport, from drunken escapades to being pickpocketed on the street, all the way down to sheer forgetfulness. And given the reasons why people may need to travel home in an emergency - everything from getting back for work to attending the funeral of a loved one - it is vital British nationals get the help they need as easily, efficiently and accurately as possible.
In 2018, a centralised service was launched to improve responsiveness and efficiency. Over the first three years, new processes were developed at the ETD Centres, identifying fresh scenarios and ways of coping with sharp peaks in demand caused by events such as the COVID-19 pandemic and a range of international emergencies.
The start of the project - what we found
During this often stressful time, IT systems were slow to adapt to new learning or scenarios, and useful management information was not easy to produce. The team developed a culture of manual workarounds and close collaboration to maintain service levels, treating each application as needing a "tailor-made" response. Applicants were often delighted with this level of support - losing a passport abroad can be a particularly trying time - but during peak demand the team struggled to maintain it, despite very strong commitment and the addition of short-term staff.
Lacking management information to highlight potential areas for improvement and guide prioritisation, the organisation became resistant to change: it could not confidently predict the impact of changes on a process that relied mainly on the experience and knowledge of its staff.
The initial question
CYB were briefed to develop a resource planning capability for the ETD Centre, to help the organisation forecast the staff numbers needed to support seasonal peaks in demand while maintaining excellent service levels. By carrying out a process mapping and timing exercise at the centres, we were able to produce a workable model - but we also identified clear gaps in the organisation's understanding of how its systems and processes were actually working, caused by a distinct lack of useful management information.
Without a deeper, data-based understanding, the FCDO would not be able to break out of this vicious circle. With better management information we could help the FCDO ask much better questions.
Asking better questions
Our initial question was: "what staff resource do the team need to meet the volume forecast and maintain service levels?"
We had developed an understanding of the current application review process by mapping it, identifying a core process plus the steps and actions needed for particular situations and application scenarios. We had also carried out an initial snapshot time recording exercise to gain a view of the most resource-heavy process steps.
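To make that concrete, here is a minimal sketch of the kind of calculation such a resource model involves - the step names, timings and figures are illustrative assumptions, not the FCDO's actual data.

```python
# A minimal sketch (not the actual FCDO model) of how a process map plus
# step timings can be turned into a staffing estimate for a forecast month.
# All step names, timings and figures below are illustrative assumptions.

# Average handling time per application (minutes), broken down by mapped step.
step_minutes = {
    "initial_review": 12,
    "document_checks": 9,
    "rework_followup": 8,   # only applies to a share of applications
    "approval_and_issue": 6,
}

REWORK_RATE = 0.33                         # assumed share of applications needing rework
PRODUCTIVE_MINUTES_PER_AGENT_DAY = 6 * 60  # allowing for breaks, admin, training
WORKING_DAYS_PER_MONTH = 21

def agents_needed(forecast_applications_per_month: int) -> float:
    """Estimate full-time agents needed to clear a month's forecast volume."""
    core = sum(m for step, m in step_minutes.items() if step != "rework_followup")
    rework = step_minutes["rework_followup"] * REWORK_RATE
    minutes_per_application = core + rework
    total_minutes = forecast_applications_per_month * minutes_per_application
    capacity_per_agent = PRODUCTIVE_MINUTES_PER_AGENT_DAY * WORKING_DAYS_PER_MONTH
    return total_minutes / capacity_per_agent

if __name__ == "__main__":
    # e.g. a peak month at roughly a tenth of the ~40,000 yearly applications
    print(f"Agents needed: {agents_needed(4000):.1f}")
```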
However, following a "total quality" methodology, we had also identified a significant number of events outside the ideal process flow which accounted for a third of resource needs - in particular:
Rework - cases pended while awaiting a response from applicants for more or alternative documents or information.
Handovers - events when agents need to familiarise themselves with a case that has already been worked on by another agent.
The new question then was: “what are the most common causes of process rework, and how could the team reduce or eliminate them?”
What data do we need?
Our first step was to clarify what data we needed to understand the resource impact of each of these causes of process inefficiency - this helped us identify what already existed and what additional collection might be required. We might find this data in service databases, web analytics, contact centre systems, surveys or team email and calendar systems.
When an ETD agent spots an issue with an application that needs further review, they send an email to the applicant requesting further information or documents, and mark the application as pending response. The first challenge was to collect data about these events, both the number of events per application and the reason.
The search for the data started with a manual extract from the test environment, then (with data protection officer agreement) a review of redacted data from the production environment. This confirmed that data about these pending events was stored in the audit table of the system database. The events could be linked to each application, and the data stored about each application could be used for segmentation - but there was no record of the reason for pending.
Emails were often sent to applicants using Outlook as a separate system, so we reached an agreement with the agent team to use standard subject headers identifying the reason for pending over a one-month period. We also suggested temporary use of Power Automate and SharePoint lists to collect the email data, allowing us to analyse it and establish the 15 most common reasons for pending.
With this information we were able to design and implement a pop-up menu in the application processing tool, allowing agents to easily record the reason for pending against each application.
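As an illustration of what that analysis looked like (a sketch only - the file name, column name and subject format below are assumptions, not the actual export), counting standardised subject headers is enough to surface the most common reasons:

```python
# A minimal sketch of counting standardised subject headers to find the most
# common reasons for pending. The file name, column name and header format are
# hypothetical; the real export came from a SharePoint list via Power Automate.
import csv
from collections import Counter

reason_counts = Counter()

with open("pending_emails_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        subject = row["Subject"]
        # Assumes agents agreed a format like "PENDING: <reason> - <application ref>"
        if subject.upper().startswith("PENDING:"):
            reason = subject.split(":", 1)[1].split("-")[0].strip()
            reason_counts[reason.lower()] += 1

# The 15 most common reasons for pending, with their counts
for reason, count in reason_counts.most_common(15):
    print(f"{count:5d}  {reason}")
```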
Setting up the MI infrastructure
The next step was to gain access to the production data and establish a method (using AWS S3, DataBrew and QuickSight) for extraction, transformation and visualisation of the data that met security and data protection requirements. This enabled us to provide a daily count of rework events, as well as the percentage of applications pended for each reason.
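For a sense of the transformation step (a sketch only - the real pipeline ran through DataBrew and QuickSight, and the column names here are assumptions), the daily counts and reason shares can be derived from an extract of pending events like this:

```python
# A sketch of the transformation, assuming an extract of audit events with one
# row per pending event: application_id, pended_at (timestamp), reason.
# Column names and the file are illustrative assumptions.
import pandas as pd

events = pd.read_csv("pending_events_extract.csv", parse_dates=["pended_at"])

# Daily count of rework (pending) events
daily_rework = (
    events
    .assign(day=events["pended_at"].dt.date)
    .groupby("day")
    .size()
    .rename("rework_events")
)

# Share of pended applications attributable to each reason (the real dashboards
# could equally use all applications processed as the denominator)
total_pended_apps = events["application_id"].nunique()
apps_per_reason = events.groupby("reason")["application_id"].nunique()
pct_per_reason = (apps_per_reason / total_pended_apps * 100).round(1).sort_values(ascending=False)

print(daily_rework.tail(7))      # last week's rework counts
print(pct_per_reason.head(15))   # share of pended applications, by reason
```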
Some of these pending events were due to applicants lacking clarity about the requirements; some requested information that was no longer required. Others stemmed from a process design that asked only the most common questions on the initial application and followed up for further information once an agent reviewed it - meaning every application had to be reviewed and treated as a custom case, relying on agent knowledge to determine what additional information was needed.
We were able to provide the team with data visualisations and guide them in focussed conversations to understand the causes and identify potential solutions.
Developing insight and making changes
The data enabled us to size each issue in terms of its ongoing cost to the organisation, supporting prioritisation and providing a business rationale for funding changes.
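As a rough illustration of that sizing (with entirely hypothetical reasons and figures, not FCDO data), the ongoing cost of each rework reason is essentially its yearly frequency multiplied by the extra handling time it causes and a loaded staff cost:

```python
# A sketch of sizing each rework reason by its ongoing cost.
# All reasons and figures are hypothetical and for illustration only.
STAFF_COST_PER_HOUR = 35.0  # assumed loaded hourly cost of an agent

# (events per year, extra agent minutes per event) for a few example reasons
rework_reasons = {
    "photo does not meet requirements": (3200, 10),
    "missing proof of travel":          (2100, 12),
    "unreadable supporting document":   (1500, 8),
}

for reason, (events_per_year, minutes_per_event) in sorted(
    rework_reasons.items(),
    key=lambda item: item[1][0] * item[1][1],
    reverse=True,
):
    annual_cost = events_per_year * (minutes_per_event / 60) * STAFF_COST_PER_HOUR
    print(f"{reason:38s} ~£{annual_cost:,.0f} per year")
```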
Investigation into root causes and potential fixes revealed to the organisation that not all solutions would be technical – many instead needed change at policy or operational level. This use of data prompted more collaborative problem solving with colleagues, who began to understand exactly what was needed.
Analysis of this new information enabled the product team to develop a short-term product roadmap for the following 6-12 months, with releases focussed on reducing or eliminating the reasons for rework (through policy changes, operational redesign or technical updates). This effort reduced the number of applications involving rework by a third, delivering significant savings and lowering stress levels for the organisation and individuals.
During this period, the organisation started to engage with a weekly dashboard, spotting trends and potential opportunities for further improvements, updating policy or operational methods and requesting service changes.
Process breakthrough - seeing the forest, not the trees
As the initial reduction in the frequency of rework was realised, and the complication and stress of managing it eased, the processing team began to move away from its default view that every application required a custom solution. This created space to design and test a new approach: identifying 'standard' applications and reviewing and completing them in a single session, without escalation or handover between multiple agents. This cut processing time for many applications and further reduced cost.
Supporting organisational data maturity
Through these initial improvements, the organisation gained a greater sense of control over the process via analytics data, and an understanding of how that data can support continuous improvement.
Our next step was to build a more detailed understanding of user needs and scenarios, and to develop user personas to support deeper analysis and monitoring.
Katie will blog about that next week - keep an eye out!