Case Study

Conducting user research to jumpstart human-centered experimentation with AI

We met with a diverse group of stakeholders to gain a better understanding of the benefit applications process. Our insights are helping us identify use cases and design experiments to test how AI-powered tools can expand access to public benefits.

Summary

In partnership with Benefits Data Trust (BDT), we’re creating, testing, and piloting an AI-powered tool that may help government agency staff and benefits navigators easily identify which families can enroll in key public benefit programs such as the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), the Supplemental Nutrition Assistance Program (SNAP), and Medicaid. To identify use cases for AI-powered tools and opportunities for experimentation, we conducted user research with stakeholders at every level of the benefits application process. We prioritized speaking with diverse populations so that our research would reflect a variety of lived experiences and contribute to technological advancements that promote equity.

Our research yielded powerful insights into how AI might be leveraged in the public benefits space. Armed with these insights, we’re conducting a series of human-centered experiments to better understand how AI-powered tools might expand access to public benefits. 

Process

We conducted research with stakeholders at every level of the benefits application process, including professionals who help people apply for benefits (often called navigators), program beneficiaries, and strategic advisors from organizations in the public benefits space. We spoke with our participants in 60-minute remote sessions and sourced them through a combination of personal connections, online advertisements, and referrals from community-based organizations. We will continue to engage with beneficiaries and strategic advisors throughout the project.

We spoke with 14 navigators and subject matter experts who have deep, cross-programmatic expertise in helping others apply for benefits. Collectively, they connect with beneficiaries over the phone and in WIC clinics, hospitals, and other community-based settings. They represented the following organizations:

  • Maryland Hunger Solutions (SNAP Outreach)

  • Public Health Solutions NYC

  • Legal Assistance of Western New York

  • Food Research and Action Center (FRAC)

  • Benefits Data Trust (BDT)

  • WIC state and local agencies

Our Strategic Advisory Council, which includes leaders in early childhood systems change, human services policy, and civic technology, meets monthly to provide ongoing feedback on our research findings and prototype development, emphasizing state and local contexts for applying AI-powered tools. Our strategic advisors represent the following organizations:

  • American Public Human Services Association (APHSA)

  • Center for Public Sector AI

  • Having a Child and Early Childhood Life Experience Team

  • Center on Budget and Policy Priorities (CBPP)

  • Family Voices

  • Help Me Grow National Center

  • Center for Health Care Strategies (CHCS)

In addition to our strategic advisors and navigators, we formed a Family Advisory Board of 12 parents and caregivers. Its members represent different races, ethnicities, languages, immigration statuses, ages, caretaking roles, abilities, locations, and levels of access to technology. They have experience applying for and managing benefits in the following programs:

  • Early Head Start

  • WIC

  • Medicaid

  • Medicare

  • SNAP

  • Cash assistance/Temporary Assistance for Needy Families (TANF)

  • Family and Medical Leave (FMLA)

  • Unemployment insurance

  • Disability benefits

  • Home Energy Assistance Program (HEAP)

  • Supplemental Security Income (SSI)

  • Housing assistance

  • Job readiness program

When interviewing our strategic advisors, navigators, and beneficiaries, we asked about participants’ lived experiences applying for benefits or helping others apply, which revealed several pain points in the application process. Identifying those pain points let us brainstorm AI-powered tools that might make applying easier. We also worked with navigators and strategic advisors to understand the end-to-end process of applying for benefits, from initial outreach to the administration of benefits. This helped us pinpoint use cases for AI at specific stages of the process, such as during the initial screening of an applicant or during a phone call between an applicant and a navigator. Lastly, we asked participants how they feel about AI in public benefits and for their ideas about how it might be leveraged for different use cases. Their concerns and ideas are informing how we design our experiments.

Outcomes

From our research, we identified some core needs that beneficiaries and navigators have throughout the benefits application process. Where those needs weren’t being met, we flagged a potential use case for an AI-powered tool. For several of these use cases, we designed experiments to gauge whether AI models might help meet people’s unique needs.

Early on in the project, we decided that all of our experiments would keep a human in the loop, meaning that a navigator vets a tool’s output to ensure accuracy and reduce the risk of harm.
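
To make that principle concrete, here is a minimal sketch, in Python, of the review gate it implies: the tool only drafts, and nothing reaches an applicant until a navigator approves it. All names here (`Draft`, `navigator_review`, `send_to_applicant`) are hypothetical illustrations, not our actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """An AI-generated draft awaiting human review."""
    text: str
    approved: bool = False

def navigator_review(draft: Draft, edited_text: str | None = None) -> Draft:
    """A navigator vets, and may edit, the draft before release."""
    if edited_text is not None:
        draft.text = edited_text
    draft.approved = True
    return draft

def send_to_applicant(draft: Draft) -> None:
    # Hard gate: unreviewed AI output never leaves the system.
    if not draft.approved:
        raise ValueError("A navigator must approve this draft first.")
    print(f"Sending to applicant: {draft.text}")
```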

Identifying AI use case opportunities

Empowering navigators and applicants with trusted information 

The beneficiaries we spoke to said they appreciated working with a navigator when filling out benefits applications because they could get answers to their questions on the spot. Navigators are a great resource to answer clarifying questions, but in order to do so, they need trusted information on the benefits application process. Often, navigators must sift through pages of complex policy to find relevant information. Virginia’s SNAP manual, for example, is nearly 800 pages. 

We see an opportunity for an AI-powered chatbot that gives navigators immediate support while working with clients. While the navigator focuses on human interaction with clients, the chatbot could search trusted sources of information to define terms and look up program rules and concepts. The tool could then summarize and translate the information, and create a script to help navigators explain complex concepts to clients. 
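
One common pattern for this kind of tool is retrieval-augmented generation: search a curated corpus first, then have the model answer only from what was retrieved. The sketch below assumes the OpenAI Python SDK and uses a toy in-memory stand-in for the search index; a real deployment would search an agency’s own vetted policy manuals, and a navigator would vet every answer before using it with a client.

```python
# Retrieval-augmented sketch: answer only from trusted policy excerpts.
from openai import OpenAI

client = OpenAI()

# Toy stand-in for a real search index over vetted policy manuals.
POLICY_PASSAGES = [
    "SNAP: households must generally have gross income at or below "
    "130 percent of the federal poverty line.",
    "WIC: applicants must be pregnant or postpartum, or have a child "
    "under age 5, and must meet income guidelines.",
]

def search_policy_manual(question: str, k: int = 2) -> list[str]:
    # A real index would use keyword or embedding search; this toy
    # version just returns its stored passages.
    return POLICY_PASSAGES[:k]

def answer_for_navigator(question: str) -> str:
    excerpts = "\n\n".join(search_policy_manual(question))
    prompt = (
        "Using ONLY the policy excerpts below, answer the navigator's "
        "question and draft a short plain-language script they can read "
        "to a client. If the excerpts don't cover it, say so.\n\n"
        f"{excerpts}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content  # navigator vets before use
```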

Giving people timely referrals 

More often than not, people applying for benefits need wraparound services, not just the assistance they’re applying for. People also commonly need services to tide them over until their benefits are administered. This is where referrals come in.

Referrals to community resources, such as food banks, can serve as a lifeline to people applying for public benefits. That’s why many of the beneficiaries we interviewed said they wish they knew about referrals sooner. 

Navigators are great at providing referrals because they have an expert understanding of what community resources are available. This enables them to recommend services that can help someone’s specific situation, and to provide concrete instructions on how to obtain a service. 

“211 was very, very vital to me,” one beneficiary said. “They asked me, what's my situation? They didn't really let me just say ‘I need help with food or shelter.’ They were like, ‘Where are you living? How long have you been living there? How are you eating? What do you need help with—insurance?’ They let me know there was a gamut of services that they could connect me to.”

Given the power of referrals, there’s an opportunity for AI-powered tools that analyze phone calls between navigators and applicants, quickly surface a list of referrals, and share that list with the applicant. However, this tool would need to replicate the accuracy and situational awareness that navigators possess when giving referrals. 
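
The referral-matching step is easy to picture in miniature. The sketch below skips transcription entirely and matches keywords from a call transcript against a small local directory; both the directory contents and the keyword approach are illustrative assumptions, and a navigator would review the list before it reaches an applicant.

```python
# Illustrative sketch: surface referral candidates from a call transcript.
REFERRAL_DIRECTORY = {
    "food": "Regional food bank; dial 211 for the nearest pantry.",
    "shelter": "Emergency housing intake line, reachable via 211.",
    "insurance": "Medicaid enrollment help at the county benefits office.",
}

def suggest_referrals(transcript: str) -> list[str]:
    text = transcript.lower()
    # A real tool would need NLP rather than literal keyword matching,
    # plus the situational awareness a navigator brings.
    return [resource for need, resource in REFERRAL_DIRECTORY.items()
            if need in text]

print(suggest_referrals("I need help with food, and maybe insurance too"))
```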

Helping navigators summarize case notes

It’s important that navigators are able to summarize calls and meetings with applicants to provide them with information on next steps. However, writing and summarizing case notes can be extremely time-consuming and burdensome for navigators.

Large language models (LLMs) like ChatGPT may be able to write up case notes and suggest next steps to a navigator. This has the potential to reduce navigators’ burden because instead of summarizing case notes, they’d only need to vet the tool’s output. There’s also an opportunity to experiment with automated follow-up texts and emails to remind applicants to complete next steps. 
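
Here is a rough sketch of what that drafting step might look like, again assuming the OpenAI Python SDK. The prompt wording and model name are placeholders, and the output would always be a draft for the navigator to correct, never a final record.

```python
# Hedged sketch: draft case notes from a call transcript for human review.
from openai import OpenAI

client = OpenAI()

CASE_NOTE_PROMPT = (
    "Summarize this benefits call as case notes with four sections: "
    "1) applicant's situation, 2) programs discussed, 3) documents still "
    "needed, 4) agreed next steps. Be factual; do not infer anything the "
    "transcript does not say.\n\nTranscript:\n{transcript}"
)

def draft_case_notes(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user",
                   "content": CASE_NOTE_PROMPT.format(transcript=transcript)}],
    )
    return response.choices[0].message.content  # a draft, not a record
```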

Ensuring documents meet requirements

We heard from navigators and beneficiaries that people often struggle to understand document requirements, which can lead to issues with their applications and delay how soon they receive benefits. In those cases, applicants must provide missing documents or correct erroneous ones, which can be grueling. 

“Knowing what paperwork to fill out, what to bring—it was a back and forth,” one beneficiary said. “It gets exhausting...and at that time I was a hustling 19-year-old pregnant girl trying to get a home for a new baby coming.”

Some of the beneficiaries we spoke with even said they believed they were denied benefits because they filled out the application incorrectly.

This is why applicants appreciate having a navigator read through their application. Navigators can flag missing or incorrect information, identify documents that are too low resolution, and tell applicants how to fix their mistakes. We heard from navigators that ensuring documents meet requirements can save applicants anywhere from a few weeks to months of delays in accessing benefits. 

AI-powered tools that flag errors in documents and tell applicants how to fix those errors may be useful for applicants and navigators. There’s also an opportunity for AI-powered tools that clean up low-resolution document scans before applicants submit their applications.  
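
Some of these checks don’t need AI at all. As one narrow example, the sketch below flags scans that are likely too small to read before submission, using the Pillow imaging library; the pixel threshold is an arbitrary assumption, since real legibility requirements vary by agency and document type.

```python
# Minimal sketch: flag document scans that are likely too small to read.
from PIL import Image  # pip install Pillow

MIN_WIDTH, MIN_HEIGHT = 1000, 1000  # assumed legibility threshold in pixels

def flag_low_resolution(path: str) -> str | None:
    with Image.open(path) as img:
        width, height = img.size
    if width < MIN_WIDTH or height < MIN_HEIGHT:
        return (f"{path}: scan is {width}x{height} pixels; please rescan "
                "at a higher resolution so the text is legible.")
    return None  # passes this check; other requirement checks would follow
```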

Analyzing calls to streamline work for navigators

The navigators we spoke to said they need a quick and easy way to analyze trends across calls. This can help them identify common issues that applicants face, application questions that might be confusing, or new regulations that pose challenges. These insights enable navigators to create more effective training materials and understand which issues to escalate to state agencies or their organization’s leadership. 

We believe an AI-powered tool could analyze trends across calls to identify the most common questions navigators receive. This could help navigators optimize training and pick out common sources of errors in applications, which they can escalate to state agencies.
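
One lightweight way to prototype that analysis is to cluster the questions that come up across calls and read off the recurring themes. The sketch below uses TF-IDF vectors and k-means from scikit-learn; a production tool might instead use embeddings and an LLM to label the clusters, so treat this as a stand-in, not a design.

```python
# Sketch: group call questions into themes with TF-IDF + k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

questions = [
    "What documents do I need for SNAP?",
    "Which papers does SNAP require?",
    "How long until my WIC benefits start?",
    "When will WIC benefits arrive?",
]

vectors = TfidfVectorizer().fit_transform(questions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"Theme {cluster}:",
          [q for q, label in zip(questions, labels) if label == cluster])
```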

Perceptions of AI

Once we gained a nuanced understanding of our participants’ experiences applying for and helping others apply for public benefits, we were able to pose informed questions about their perceptions of AI. This yielded insights into how AI-powered tools might be received by our participants and how those tools could be applied in the benefits space to yield the best outcomes. Overall, participants expressed that they’d support AI in public benefits if it helped make the application process more transparent.

“If it's smarter, then definitely,” one beneficiary said. “Some people don't have the time to wait for a live person. If your automated machine is able to give me all the information I need...then yes.”

We then identified three common themes among the concerns our participants expressed:

  • Lack of human connection: Participants agreed that although AI can be helpful in some scenarios, it mustn’t replace the human connection between navigators and applicants. 

“Having that sense of connecting with another person makes people feel more validated and more heard and seen. That's from my own personal experience,” one beneficiary said.

  • Privacy and data security: Participants were concerned about how their information would be used and shared. However, everyone who raised those concerns said they would be open to using AI if organizations could clearly explain how their data is protected.

  • Accuracy: Participants shared prior experiences of asking chatbots for help and getting answers that were too general or even inaccurate. They said they lose trust in technology when a response doesn’t answer their question or feels generic and scripted.

As we enter the next phase of the project, we’re keeping these concerns at the forefront of how we design our experiments. 

Conclusion

Conducting user research with navigators, strategic advisors, and program beneficiaries helped us gain a nuanced, end-to-end understanding of the benefits application process. We leveraged our insights to identify potential use cases for AI-powered tools in public benefits. Currently, we’re designing a series of experiments to measure how effective AI-powered tools are at meeting our stakeholders’ unique needs.

Written by


Alicia Benish

Program Strategist

Alicia Benish is a program strategist at Nava. Previously, Alicia gained over a decade of experience working in community and public health.

Kira Leadholm

Editorial Manager

Kira Leadholm is the Editorial Manager at Nava. Before working at Nava, she held various editorial roles and worked as a reporter at outlets including the Better Government Association, SF Weekly, and the Chicago Reader.

Ryan Hansz

Designer/Researcher

Ryan Hansz is a Designer/Researcher at Nava. Previously, he gained years of experience in human-centered design roles in civic tech and across the public and private sectors.

Kanaiza Imbuye

Designer/Researcher

Kanaiza Imbuye is a Designer/Researcher at Nava. Before joining Nava, she worked at Google designing user experiences for AI developers.

Published April 30, 2024
