Artificial intelligence tools present many risks for society. But this technology also presents opportunities, like improving medical treatment for victims of intimate partner violence. Ten million people are physically abused each year by a domestic partner, and 20,000 calls are placed every day to related hotlines, according to the National Coalition Against Domestic Violence. This Domestic Violence Awareness Month, we spoke to a researcher who believes her AI work could eventually help doctors support victims of intimate partner violence, a form of domestic violence involving current or former romantic partners.

Irene Chen is an assistant professor at UC Berkeley and UC San Francisco (UCSF) in Computational Precision Health and is affiliated with Berkeley's Department of Electrical Engineering and Computer Sciences. She discussed how her domestic violence research came about, where it is now and what makes it unique in the computer science field. She also talked about her broader efforts using machine learning to improve equitable health care and the upcoming Algorithmic Justice in Precision Medicine event she’s helping organize. 

This Q+A has been edited for length and clarity.

Irene Chen is an assistant professor at UC Berkeley and UC San Francisco (UCSF) in Computational Precision Health and is affiliated with Berkeley's Department of Electrical Engineering and Computer Sciences. (Photo/ Ben Kuhn)

Q: What prompted you to begin studying domestic violence using AI?

A: About a month before the pandemic, I gave a talk at Harvard Medical School. In the audience was a woman who would become my clinical collaborator, Dr. Bharti Khurana. After the talk, she came up to me and we started talking about different research ideas and ways to essentially broaden machine learning for health care to include people who have been underrepresented or underserved by the existing health care system. My research focuses on machine learning for improving equity and making health care accessible to all.

Dr. Khurana is a radiologist and was starting a new project on intimate partner violence. She had noticed that in the emergency department where she works, people would come in with really questionable injuries that seemed like maybe something more sinister was going on. She was interested in whether we could use machine learning to learn the patterns [in radiology reports that indicate] who is an intimate partner violence patient and who is not.

Q: What did you find during that study?

A: We found a couple of things. One is that it is quite possible – and actually our model ended up being quite accurate – to predict from people's radiology reports whether or not they were an intimate partner violence patient.

In fact, we were able to predict that about three years in advance of when the patient would come in later and say, “Actually, I am a victim. I would like some hospital resources.” At Brigham and Women's Hospital, which is the hospital we were collecting the data from, they have a patient advocacy program where patients could come forward and say, “I am a victim of domestic violence. I would like some help.” We saw how much earlier the machine learning algorithm could reliably, accurately and robustly figure out who would likely be a victim. That is what we call early detection. It turns out there are some pretty clear signs that we were able to pinpoint as part of our research. That was the second thing. 

The third thing is we were able to find what the machine learning algorithm was picking up on – what were the things that were the biggest signs of being a domestic violence victim? There are a lot of questions such as, what do these patients look like? What are their demographics? How do these injuries manifest? Better understanding that landscape is incredibly useful.
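To make the idea concrete: the interview does not describe the team's actual model, but a minimal illustrative sketch of this kind of report-level prediction and sign-spotting might look like the following, assuming a hypothetical dataset of free-text radiology reports with a label indicating whether the patient later self-identified as an intimate partner violence victim.

```python
# Illustrative sketch only -- not the team's actual model, data or features.
# Assumes a hypothetical CSV with columns: report_text, ipv_label.
import numpy as np
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("radiology_reports.csv")  # hypothetical file

X_train, X_test, y_train, y_test = train_test_split(
    df["report_text"], df["ipv_label"],
    test_size=0.2, random_state=0, stratify=df["ipv_label"],
)

# Turn each free-text report into a sparse bag-of-words feature vector.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=5)
Xtr = vectorizer.fit_transform(X_train)
Xte = vectorizer.transform(X_test)

# A simple linear classifier keeps the learned "signs" interpretable.
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(Xtr, y_train)

print("AUROC:", roc_auc_score(y_test, clf.predict_proba(Xte)[:, 1]))

# The terms with the largest positive weights are what the model leans on
# most when flagging a report -- the kind of "clear signs" discussed above.
top = np.argsort(clf.coef_[0])[-15:][::-1]
print([vectorizer.get_feature_names_out()[i] for i in top])
```

In a clinical study the model, the features and the validation would all be far more careful than this; the sketch only shows the general shape of predicting from report text and then inspecting which signals drive the prediction.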

Q: What are the benefits of being able to predict who is a likely victim?

A: We're at a really interesting time right now, where machine learning is being used in the clinical setting for a variety of different things. One of the things that we've shown tremendous success on is using machine learning for things that clinicians are already very good at, like diagnosing from a chest x-ray very quickly, maybe even quicker than a radiologist. 

This was an exciting project because it actually showed that machine learning could predict things that we aren't actually very good at. Clinicians receive a tremendous amount of training, but they don't have that much exposure or that much structured formal training on how to work with and how to identify patients of intimate partner violence. Our algorithm’s promising results show that we could, in fact, identify these patients. We could figure out in a timely manner that these were serious patients that we should give an extra eye to. We could start to identify why these patients were the ones who are being flagged as intimate partner violence patients.

It'd be great if we could figure out how to implement this kind of algorithm safely, securely and in a patient-centered way in the clinic. We still have a ways to go on that. But I think fundamentally, this is a proof of concept. We can, in fact, get this information. If we were able to get more hospital data, over longer time periods or for a larger cohort, we would continue to see results and learn more about this population.

Q: How would that kind of algorithm help those individuals? What would look different in terms of the outcomes of care?

A: It's a great question. As far as what the actual patient will experience differently, it is a combination of clinicians being more informed about what kinds of people they should be keeping an eye out for. If, let's say, this algorithm were to go off, and they were to say, “Hey, this person is high risk,” there is an opportunity here for the clinician to provide resources to the patient, for the clinician to better understand what factors about this patient put them at higher risk and to be able to perhaps open that discussion.

I want to emphasize that there are a lot of non-technical components of a project like this. For example, making sure that the patient feels like they are still being centered in this exchange, that nothing is going to happen to them against their will, that their care will not change if they don't want it to change based on this new information. I and the rest of the team are very aware that we want this algorithm to be used for the good of the patient, as opposed to perpetuating any kind of system where the patient would feel uncomfortable or perhaps targeted. 

So as far as what the actual patient in the clinic would experience differently – the hope is more aware, more empathetic and more educated clinicians who could better connect them to the kind of care they need to address any condition they want to address.

Q: What’s the next step in this research?

A: We looked at just radiology reports. Currently, we have a manuscript under review that looks at radiology reports and clinical indicators, the whole clinical history, not just things in the medical imaging department. What about things related to mental health or substance abuse or even pregnancy? We're definitely diving into this rich source of data that we have and figuring out how that might work with even more information.
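As a rough illustration of what folding that wider clinical history into a text-based model can look like (hypothetical column names, not the manuscript's actual feature set), one common pattern is to combine text features with structured indicators in a single pipeline:

```python
# Illustrative sketch -- hypothetical file and columns, not the actual study.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

df = pd.read_csv("reports_plus_history.csv")  # hypothetical file

features = ColumnTransformer([
    # Free-text radiology reports, as in the first study.
    ("report", TfidfVectorizer(min_df=5), "report_text"),
    # Structured flags pulled from the broader record (assumed 0/1 columns).
    ("history", "passthrough",
     ["mental_health_dx", "substance_use_dx", "pregnancy_flag"]),
])

model = Pipeline([
    ("features", features),
    ("clf", LogisticRegression(max_iter=1000, class_weight="balanced")),
])

model.fit(df.drop(columns=["ipv_label"]), df["ipv_label"])
```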

We also want to externally validate our algorithm. Working with clinicians at UCSF, we've started to see if this algorithm could work for a completely different patient population [domestic violence victims at UCSF]. Would the predictions hold up, or are they slightly different because the patient populations are different? 
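As a simple picture of what external validation entails (again a sketch, not the team's actual pipeline), one would typically freeze the model fit on the original hospital's data and score it, unchanged, on the second site's cohort:

```python
# Illustrative external-validation sketch -- hypothetical files and columns.
import joblib
import pandas as pd
from sklearn.metrics import average_precision_score, roc_auc_score

# Model and vectorizer previously fit on the original development cohort.
vectorizer = joblib.load("vectorizer_site_a.joblib")
clf = joblib.load("ipv_classifier_site_a.joblib")

# Completely held-out cohort from a second institution.
external = pd.read_csv("site_b_reports.csv")  # columns: report_text, ipv_label
scores = clf.predict_proba(vectorizer.transform(external["report_text"]))[:, 1]

# If these numbers drop sharply relative to the development cohort, the model
# may be keying on site-specific language or population differences.
print("External AUROC:", roc_auc_score(external["ipv_label"], scores))
print("External AUPRC:", average_precision_score(external["ipv_label"], scores))
```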

From there, there are a lot of questions about how to do a pilot test in a way that makes sense. That's a longer-term question about how we would go about getting that in the clinic – in front of clinicians – and making sure it fits within the clinical workflow.

Q: How does this specific part of your research related to intimate partner violence fit into your broader body of work?

A: My research focuses on machine learning for equitable health care. In one case, we are very interested in figuring out how we can have treatments – clinical interventions – that work for a wide range of patients. We want to serve the entire patient population, including patients who might be overlooked by the current health care system.

We are also interested in figuring out health insights about the patient population. So what is it like to have this kind of disease? How does it manifest over time? And how can we learn about that from machine learning, electronic health records, wearables data, billing codes, insurance claims, all kinds of information? 

Then lastly, as we start to develop algorithmic tools for clinical use, can we make sure that those tools themselves are not perpetuating bias? How can we algorithmically audit algorithms used in clinical settings, attribute any problems and fix them if they end up looking a little wonky?
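One common starting point for that kind of audit (shown here only as a sketch, with assumed column names rather than any specific tool the lab uses) is to break a model's performance out by demographic subgroup and look for large gaps:

```python
# Minimal subgroup-audit sketch -- hypothetical predictions file and columns.
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score

# Assumes each group contains both outcomes; columns: y_true, y_score, group.
preds = pd.read_csv("model_predictions.csv")  # hypothetical file

rows = []
for group, sub in preds.groupby("group"):
    rows.append({
        "group": group,
        "n": len(sub),
        "auroc": roc_auc_score(sub["y_true"], sub["y_score"]),
        # Sensitivity at an assumed fixed operating threshold of 0.5.
        "sensitivity": recall_score(
            sub["y_true"], (sub["y_score"] >= 0.5).astype(int)
        ),
    })

# Large gaps between groups are a signal to investigate (and, if needed, fix)
# before the tool goes anywhere near a clinical workflow.
print(pd.DataFrame(rows).sort_values("auroc"))
```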

Q: You are one of the organizers of a related upcoming event. Can you talk about that?

A: Absolutely. There is an event at UCSF called Toward Algorithmic Justice in Precision Medicine, which is a one-day event. Our keynote speaker will be Alondra Nelson, who helped co-author the Blueprint for an AI Bill of Rights. It's a very interesting event. I'm on the advisory committee, and one of the parts of the event I'm really proud of is that there's a huge emphasis on bringing together a wide array of people. We haven't finalized the speakers just yet. But we're reaching out to policy people, clinicians, computer scientists, reporters, anthropologists, sociologists, all kinds of people who have opinions, research and also lived experiences about how to define and enact these ideas of algorithmic justice.

It is very clear that machine learning and AI have a huge opportunity here to change health care, and it’s incredibly heartening that there are a lot of people at UC Berkeley and UCSF who are excited about making sure that it’s done in an unbiased, equitable and accountable way.
