AI is helping outreach workers in L.A. predict and prevent homelessness
The homelessness prevention project has helped 560 Angelenos in the two years since it started
When Dana Vanderford's team reaches out to clients at risk of becoming homeless, she says the calls almost always come when they need help the most.
"The reaction we get more often than not is, 'Thank goodness you called me today. I'm losing my housing next week and I didn't know what I was going to do next,'" Vanderford, the associate director of homelessness prevention with the Los Angeles County Department of Health Services, told As It Happens host Nil Köksal.
The trick behind their impeccable timing? A predictive artificial intelligence tool that identifies who in the county is most at risk of becoming homeless.
The tool, created by researchers at the University of California, Los Angeles' California Policy Lab (CPL), analyzes over 400 types of records documenting emergency room visits, arrests, receipt of public benefits and other interactions with local systems. With that data, the CPL algorithm can predict with a high degree of accuracy who is at risk of falling into homelessness within the next 12 months.
Every three months, homelessness prevention project staff receive a list of medical records. Those records are initially kept anonymous and only later matched to the individuals they belong to, before staff cold-call them.
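The article doesn't describe the inner workings of CPL's model. Purely as an illustration of what this kind of risk-scoring workflow can look like, the sketch below trains a simple classifier on made-up, de-identified counts of system contacts and ranks people by predicted risk; the features, data, model choice and ranking are hypothetical and are not CPL's actual system.

    # Illustrative only: a minimal risk-scoring sketch over hypothetical,
    # de-identified administrative records. Not CPL's actual model or data.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical counts of system contacts per person, with a label for
    # whether the person experienced homelessness within the next 12 months.
    records = pd.DataFrame({
        "er_visits":        [0, 3, 1, 7, 2, 0],
        "arrests":          [0, 1, 0, 2, 0, 0],
        "benefit_receipts": [1, 4, 2, 6, 3, 1],
        "became_homeless":  [0, 1, 0, 1, 0, 0],   # 12-month outcome label
    })

    X = records.drop(columns="became_homeless")
    y = records["became_homeless"]

    # Fit a simple classifier, score everyone, and rank by predicted risk
    # so outreach workers can start with the highest-risk records.
    model = LogisticRegression().fit(X, y)
    records["risk_score"] = model.predict_proba(X)[:, 1]
    outreach_list = records.sort_values("risk_score", ascending=False)
    print(outreach_list[["risk_score"]].head(3))

In a real deployment the records would, of course, be far richer and the rankings validated and audited before anyone is contacted; this is only meant to show the general shape of prediction-driven outreach.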
While these records are private and personal, Vanderford says this use of data is necessary to help find those most at risk of losing their housing.
Despite the many referral support programs available in Los Angeles, she says many people don't reach out because they don't know where to start or don't realize how precarious their living situation is. Those are the people her team tries to find and support.
"I believe deeply that there's a need for targeted prevention programming," Vanderford said. "Without the ability to use AI, we don't have a good shot at targeting these resources to those who need them most."
By cold-calling those the algorithm determines to be most at risk, Vanderford and her colleagues have provided support to 560 Angelenos. Some experts say that while these kinds of AI tools require checks and balances, the initial results are still a positive sign.
Clients feel 'blessed'
The homelessness prevention program provides individual clients with $4,000 or $6,000 US ($5,437 or $8,156 Cdn) of support, while families get $6,000 or $8,000 US ($8,156 or $10,875 Cdn), plus additional funds depending on the number of people in the household.
Rather than depositing the money directly into clients' bank accounts, which could disqualify them from other public benefits, the program makes payments on their behalf or doles out gift cards.
Most people use it to pay for rent, utilities, groceries and car expenses, though the program rarely refuses to pay for what a client says they need most.
"We believe that people know best what they need to stabilize and their housing," said Vanderford.
One client, Ricky Brown, told NPR that he's been "blessed" by the program.
Brown endured a streak of bad luck with his housing: the building he lived in was sold, and then his elderly mother, whom he was living with, died.
After finally finding a stable place he could afford, he became the sole caregiver for his three young grandsons. Before long, his income was no longer enough to support both himself and the kids.
"I had a little money put away, but boy ... I went through it," Brown told NPR. "Because these kids eat."
Homelessness prevention program workers are helping Brown access food stamps and cash aid so he can provide for himself and his grandsons. They're also trying as hard as they can to get the family into a two-bedroom apartment.
Brown's situation is a typical one, where "life just happened," Vanderford said.
"When you're a person who is sort of barely holding on to the stability that you've achieved, one disruptive event … can topple everything over."
Checks and balances required
Janey Rountree, executive director of the California Policy Lab, which built the algorithm, says the real test will be whether the program is in fact reducing the number of people who experience homelessness.
The program is in the middle of a long-term study, but the initial results are positive, she said: fewer people who received support through the prevention program went on to experience homelessness than similarly at-risk people who didn't.
But experts say checks and balances are needed when AI is used to tackle social problems.
Eyra Abraham, the founder of Toronto-based AI tech company Lisnen, says it's important that an AI program's algorithms are up to date and are trained on accurate data to ensure they work properly.
That data also needs to be location-specific. Because poverty in Toronto is different from poverty in Halifax, "you're going to need to retrain [the algorithm] to reflect those communities," said Abraham.
Underrepresentation of minority groups in the data used to train algorithmic tools like this can also pose problems, according to Nisarg Shah, an associate professor in the department of computer science at the University of Toronto. If the original data lacks representation, minority groups could be offered fewer social services as a result.
"That's a direct form of inequity that can be caused by AI, and at a much greater scale than when a human being made the same decision," says Shah.
Abraham adds that AI can also fail to make other important distinctions.
Two years ago, a Pittsburgh couple's child was taken into foster care after they took the baby to the hospital because she refused to drink the formula they were feeding her.
The couple's information was run through an AI-driven child welfare tool before the child was taken from them — which the parents think wrongly identified the child's refusal to eat as a sign of neglect. Abraham says this is an example of why AI integration requires guardrails.
Despite the potential pitfalls, Shah says the use of AI to solve human problems, rather than to help companies sell a product, should be seen as a "democratization of technology."
Rountree says, too, that "the sky's the limit" when it comes to social problems that AI could help fix.
"Everything from what ads you see on your phone to what shows or movies streaming services suggest for you … [uses] predictive modelling. So it's very possible that the same technology could be used to improve health care delivery systems or identifying who needs certain preventative screenings," she said.
Clarifications
- This story has been updated to clarify that the medical record data the AI algorithm uses to predict homelessness is kept anonymous before any outreach is made to individual clients. (Oct 13, 2023, 11:34 AM ET)
Interview with Dana Vanderford produced by Morgan Passi