Healthcare facilities use clinical algorithms to help decide how medical care is delivered to patients, and many hospitals use a process called “race norming” – also known as race adjustment – which has been shown to negatively impact the care Black and Latino patients receive.

The New York City Health Department’s announcement includes examples of real-world situations affecting the city’s residents, including an “adjustment” factor applied to Black patients’ kidney function. The adjustment factor reports Black patients’ kidney function as healthier than white patients’ for the same measured result, which has led to delays in Black patients’ care (a sketch later in this article shows how such a coefficient works). Race norming in maternal care has made Black and Latino people more likely to have unnecessary caesarean sections, despite having the “same age, health status, and past birthing history as White women.” More generally, these algorithms have led to “less favorable outcomes” for Black and Latino people who give birth, exacerbating existing maternal health inequities.

To address these issues, the New York City Health Department has formed a coalition to end the inclusion of race adjustment in clinical algorithms. “The Coalition to End Racism in Clinical Algorithms (CERCA) will help lead to a healthier and more equitable city,” said Health Commissioner Dr. Dave Chokshi.

Deputy Commissioner and Chief Medical Officer Dr. Michelle Morse added, “Raising awareness about the problematic concept and practice of race adjustment in clinical algorithms has the potential to create more equitable health care outcomes and experiences for those who identify as Black, Indigenous, and People of Color (BIPOC).”

NYC Health + Hospitals President and CEO Mitchell Katz said the city’s public health system looks forward to collaborating with the health department and other health systems. “This coalition has the power to further accelerate the work and impact of our system’s existing ‘Medical Eracism’ initiative that also aims to challenge the status quos of race-based algorithms… that have been widely accepted in our field for decades,” Katz said.

The institutions involved in CERCA include NYC Health + Hospitals, Maimonides Medical Center, Mount Sinai Health System, NewYork-Presbyterian, Northwell Health, NYU Langone Hospitals, One Brooklyn Health, SBH Health System, SUNY Downstate, Wyckoff Heights Medical Center, and Cortelyou Medical Associates. Each has said it will “end race adjustment in at least one clinical algorithm and create plans for the evaluation of racial inequities and patient engagement.” The members also plan to meet bimonthly for two years and will produce an annual report, the first of which is due in June 2022.

Black medical experts have long highlighted the racism behind the race adjustments used in the algorithms hospitals deploy for treatment. Dr. Dorothy Roberts, professor of Africana Studies, Law, and Sociology at the University of Pennsylvania, said race adjustments in medicine are “not only based on false, racist beliefs about human biology, but also have been shown to harm Black patients.”

Caitlin Seeley George, campaign director at technology advocacy group Fight for the Future, said race adjustments are not only an “extremely deplorable medical practice based on racist beliefs,” but also a prime example of how artificial intelligence amplifies those beliefs.

“The concept of biological differences based on race existed long before AI, but this technology has been weaponized to exacerbate these racist practices. There are so many algorithms in place that mostly work in a black box with little-to-no transparency into how they make their decisions. In order to tear down racist structures across our society we need to open up these algorithms to find what other racist beliefs are a part of their calculations,” Seeley George said. “We need all of our health systems to do this work, we need legislation that stops harmful algorithms from being adopted, and in New York City it must go beyond one clinical algorithm (which is their current scope). And these efforts must also include the real people who have been harmed by these algorithms as well as human rights organizations and experts who can give critical insight into the harms of these algorithms on real people.”
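The kidney-function adjustment cited in the city’s announcement makes the stakes concrete. The sketch below uses the published 2009 CKD-EPI creatinine equation, which multiplied the estimated filtration rate by 1.159 for patients recorded as Black (a race-free revision of the equation was published in 2021); the example patient and the threshold commentary in the comments are illustrative, not clinical guidance.

```python
def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) under the 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9       # published sex-specific constants
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = 141 * min(ratio, 1.0) ** alpha * max(ratio, 1.0) ** -1.209 * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient CERCA members are moving away from
    return egfr

# Identical 55-year-old male patient, identical lab result (creatinine 1.4 mg/dL);
# only the race recorded in the chart differs.
print(round(egfr_ckd_epi_2009(1.4, 55, female=False, black=False)))  # ~56: below the common eGFR-60 CKD threshold
print(round(egfr_ckd_epi_2009(1.4, 55, female=False, black=True)))   # ~65: looks "healthy," so care can be delayed
```

The same blood test lands on either side of a common treatment threshold depending solely on a checkbox, which is how the adjustment translates into delayed diagnoses and referrals for Black patients.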
Dr. Danya Glabau, assistant professor at the NYU Tandon School of Engineering and director of Science and Technology Studies in the university’s Department of Technology, Culture and Society, said “algorithms” in the context of medicine have a much longer history than the computerized systems the coalition seeks to reform. Medical algorithms, according to Glabau, are essentially any kind of decision tree doctors use to make treatment decisions, and non-computerized medical algorithms have been a cornerstone of evidence-based medicine for decades. Automated algorithms, however, take the physician’s judgment out of treatment decisions to a greater or lesser extent because a computer makes the decisions.

When digital algorithms were rolled out, the hope was that removing humans would remove human racism, Glabau explained. “However, since automated algorithms’ decisions are based on data from past human decisions, human biases like racism and classism still factor into these tools. So they don’t really solve the problem of racism on their own because the history of medicine is racist,” Glabau said. “It’s hard to say exactly how widespread digital algorithms are and how many of them look at race in particular. But the chances are that most providers use several on a daily basis, and may or may not be aware of it.”

Researchers like Amy Moran-Thomas have shown that even simple devices like pulse oximeters can have racist outcomes. “In this case, designers simply did not consider how skin color would affect the readings given by an optical sensor. We also know that tools like scheduling software can have racist outcomes even though scheduling doesn’t seem to have anything to do with race. But Black patients in particular were double or triple booked because many had difficulties making it to appointments on time due to factors outside of their control,” Glabau said. “These examples show how tricky it can be to anticipate how algorithmic and other digital systems will have racist outcomes. In a city like New York, where COVID has hit BIPOC communities hard and where zip code is correlated with income and racial segregation, such seemingly mundane technologies can have significant consequences for health.”

But the coalition can only succeed, Glabau added, if it is given full access to hospital system operations, software, and technical documentation from the companies that produce the algorithms. It also needs the authority to issue binding guidelines that can be implemented across the city. “If this council isn’t given teeth, it may find shocking information or make well-intended recommendations, but it will not change anything for patients or accomplish its anti-racist mandate,” Glabau explained.
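Opening up an algorithm in the way Seeley George describes often starts with a simple counterfactual test: hold every clinical input fixed, flip only the recorded race field, and check whether the output moves. The decision tool below is entirely hypothetical, a minimal sketch of the kind of decision tree Glabau describes rather than any vendor’s real model; the field names, weights, and race penalty are invented for illustration.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Patient:
    age: int
    creatinine_mg_dl: float
    systolic_bp: int
    race: str  # a chart label, not a biological fact

def referral_score(p: Patient) -> float:
    """Toy decision tree: a higher score means an earlier specialist referral."""
    score = 0.0
    if p.creatinine_mg_dl > 1.2:
        score += 2.0
    if p.systolic_bp > 140:
        score += 1.0
    if p.age > 60:
        score += 1.0
    if p.race == "Black":
        score -= 0.5  # a buried race adjustment of the kind CERCA targets
    return score

def race_counterfactual_gap(p: Patient, other_race: str) -> float:
    """How much the output moves when only the race field is changed."""
    return referral_score(p) - referral_score(replace(p, race=other_race))

patient = Patient(age=62, creatinine_mg_dl=1.4, systolic_bp=150, race="Black")
print(race_counterfactual_gap(patient, "white"))  # -0.5: race alone lowers the referral score
```

A nonzero gap on identical clinical inputs is exactly the kind of finding the coalition’s algorithm-by-algorithm reviews are meant to surface, though, as Glabau notes, auditors first need access to the code and documentation to run such tests at all.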
Glabau has written extensively about how mundane technologies can have a significant effect on health outcomes.