As we’ve been ideating up a storm, surrounded by Post-it notes and Sharpie scents, I’ve been thinking a lot about the “persona,” or end user, of our stroke solution. When I paint a portrait of the typical stroke patients we’ve seen through our shadowing at MedStar, they share demographic commonalities: for the most part, they are poor, black, and elderly.
In health spaces, we address race and inequality across public health, epidemiology, biomedical sciences, and population genetics. For example, there is established research dispelling the misconception that African Americans are genetically predisposed to hypertension (just look at the lower incidence of hypertension in Africans versus African Americans). The research points instead to social determinants of health in the United States. Volumes of research exist on health inequality, the reification of race, and social determinants, but what about when it comes to tech in the health space?
A Time magazine article written this past summer argues that technology and machines themselves are incapable of being racist; rather, they are a reflection of the data (and, by extension, of the society at large) upon which the tech relies. While I understand that a line of code itself cannot be racist, I find this argument clashes rather fiercely with the principles of human-centered design. Good design compels empathy for and deep understanding of the humans for whom the product is created. Allowing an innovation to neglect attributes of large portions of its users is fundamentally poor and implicitly biased design. I would argue that racist tech is not simply a reflection of a racist society but also a reflection of ineptitude on the part of its designers.
I’ve also been thinking a lot about how to strike a balance: designing for the race and socioeconomic status of the overwhelming majority of our stroke population without pathologizing them as a group. The BiDil controversy of 2007 did just that in the pharmaceutical market. NitroMed developed and marketed a drug specifically to reduce heart failure in blacks purely out of commercial opportunism, despite there being no evidence that BiDil works better in blacks than in people of any other race. The controversy shed light on the perils of racialized medicine and on how social factors in disease causation are discounted in favor of overly simplistic assumptions about racial genetic differences. An analogous line of reasoning applies to innovation: how do we design for the user without inadvertently discriminating against the user? For instance, assuming that a poor, elderly population would not be able to use mobile apps (due to resource or learning constraints) can be just as erroneous as assuming that it would. For me, appropriately addressing these disparities during the ideation process remains an area I have yet to unpack.
That’s not to say I think we, as HFA fellows, should exclusively design solutions that address social inequalities in health, but these inequalities are an inextricable element of our users’ lives, one we should be mindful of in whatever device, software, service, etc., we ultimately end up implementing. When we paint the portrait of the “human” in our human-centered design process, I hope we do not sketch a monochrome, one-size-fits-all profile of a standardized stroke patient, lacking attributes of race, socioeconomic status, and social support. I hope we innovate in color.