April 10, 2024
164: Artificial Intelligence and Health Equity: A Cautionary Tale
Artificial Intelligence is gaining widespread popularity, but despite the growing number of AI applications, many questions remain about how the technology could affect health disparities — for better or worse.
“We know how technology has had a disparate impact and harms on people, and medicine has had disparate impact and harms,” says Bill Jordan, a family and preventive medicine doctor based in New York City. “We need to prepare physicians and future physicians to have these conversations with their patients and be able to explain… what the inequities could be, based on what we've seen in history, and then also what the opportunities are.”
This week on the Health Disparities podcast: hosts Dr. Melvyn Harrington and Doreen Johnson discuss AI — and its pros and cons pertaining to health equity — with Dr. Jordan, along with Maia Hightower, CEO and co-founder of Equality AI, and Rebecca Stone, the executive director of Generation 7 Industries.
The transcript from today’s episode has been lightly edited for clarity.
Bill Jordan: We know how technology has had a disparate impact and harms on people, and medicine has had disparate impact and harms. And we need to prepare physicians and future physicians to have these conversations with their patients and be able to explain AI and what the inequities could be, based on what we've seen in history, and then also what the opportunities are.
Melvyn Harrington: You are listening to the Health Disparities Podcast -- a program of Movement is Life, being recorded live and in person at Movement is Life’s annual health equity summit. Our theme this year is “Bridging the Health Equity Gap in Vulnerable Communities,” and as always we are convening with a wonderful community of participants, workshop leaders and speakers. I’m Dr. Melvyn Harrington, an orthopedic surgeon based in Houston, Texas, professor of orthopedic surgery, and director of the orthopedic surgery residency and fellowship programs at Baylor College of Medicine.
Doreen Johnson: And I am Doreen Johnson, I’m based in New York City, and I am a practicing orthopedic nurse, teacher, and the past president of the New York chapter of the National Association of Orthopedic Nurses. Together we are hosting a workshop all about Artificial Intelligence called: “AI and Health Equity: A Cautionary Tale.” So I’m going to read an introduction for the workshop that was actually created by ChatGPT. It starts with a quote: “AI is a tool. The choice about how it gets deployed is ours.” —Oren Etzioni, Professor Emeritus of Computer Science and Founding CEO, Allen Institute for AI
Harrington: It goes on to say: Join us for an engaging and interactive workshop delving into the critical topic of artificial intelligence (AI) and its potential unintended consequences specific to health equity. Our expert panelists will introduce attendees to the multifaceted landscape of AI and how it can exacerbate health disparities.
Through real-world examples and audience participation, we'll dissect how AI can perpetuate inequities and brainstorm actionable strategies to mitigate these effects.
This workshop invites you to be part of the solution, fostering collaborative dialogue to ensure that AI in healthcare contributes positively to health equity. Don't just listen, participate in shaping a fairer future of healthcare with AI.
P.S. Most of this description was authored by ChatGPT, a generative AI tool.
And we have our workshop presenters with us to share their insights with our listeners.
Bill Jordan: It's good to be with you here this morning. I'm Bill Jordan. I'm a family and preventive medicine doctor based in New York City.
Maia Hightower: My name is Maia Hightower, I'm the CEO and co-founder of Equality AI. And I've spent most of my career as a Chief Digital Technology Officer or Chief Medical Information Officer at various academic health centers.
Rebecca Stone: And good morning, I'm Rebecca Stone, and I am an experience researcher, and I own a company called Generation 7 Industries. I really focus on trying to understand the experiences between any types of products and technologies, and their impacts on communities and different groups of people. Thank you for having me.
Harrington: Yes, and thank you for helping to put this wonderful workshop on for us. I think our listeners will be very interested to hear about some of the general aspects of AI and healthcare, and then we can get into how this can be a cautionary tale. So I would say let's start with AI 101: What is artificial intelligence, in general terms?
Hightower: So in general terms, AI is a mathematical representation of data that is then synthesized to predict future outcomes based on the data that is presented, using various statistical methods.
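To make that definition concrete, here is a minimal sketch in Python (using scikit-learn) of the pattern Hightower describes: a statistical model is fit to historical data, then used to predict an outcome for a new case. The features, outcome, and numbers below are invented purely for illustration and are not drawn from any real clinical model.

```python
# Hypothetical illustration of the definition above: fit a statistical
# model to historical data, then predict an outcome for a new case.
from sklearn.linear_model import LogisticRegression

# Invented historical data: each row is [age, prior_admissions];
# each label records whether that patient was readmitted (1) or not (0).
X = [[45, 0], [62, 2], [51, 1], [70, 3], [38, 0], [66, 2]]
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)  # the "mathematical representation of data"

new_patient = [[58, 2]]
print(model.predict(new_patient))        # predicted outcome for the new case
print(model.predict_proba(new_patient))  # the estimated risk behind that prediction
```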
Harrington: So, with that introduction to the basics of AI, how is it currently being used in the healthcare field in ways that patients and clinicians and providers may not be aware of?
Hightower: I would say that that introduction was extremely basic, but as far as how AI systems are currently being used in healthcare, it's been used for both clinical and administrative use cases. When we think of administrative use cases, you can think of things like prior authorizations and how to make workflows within healthcare more efficient. And now we're using it for clinical documentation, and even to communicate with patients using tools like ChatGPT embedded within the electronic medical record, to be able to provide patients with information.
When it comes to clinical care, historically we've used it most for risk prediction, in terms of being able to determine whether or not a patient is at increased risk for a particular outcome and trying to intervene. There are models such as sepsis prediction models; there's risk of deterioration; there's risk of adverse outcomes from chronic diseases. In addition, there are diagnostic use cases: in retinal imaging, to be able to detect if somebody has diabetic retinopathy, and in imaging within radiology, to be able to diagnose disease. So there are a number of use cases, and it's actually occurring pretty commonly within healthcare already.
Johnson: Would you be able to tell us a little bit about the different types of AI, such as generative, diagnostic, or decision support?
Stone: I can give some examples of the different types of AI that you just mentioned, as well as talk about a couple of other ones. So of course, generative AI works on content and patterns. An example of that would be DeepMind, if you've heard of DeepMind; it's been around for a while. And they have AlphaGo, which plays the board game Go, if you are familiar with it. So that's one very basic example that a lot of people are familiar with. And of course, Maia had talked about diagnostic AI, and how we can use that to potentially identify certain patterns in different types of data, whether that's healthcare data or any of the many other types of datasets out there. And then of course, decision support: it's just that, it helps us process decisions and gives us insights, in its very basic form. Salesforce Einstein, in business intelligence, is an example of decision-making AI. We also have things like assistive AI, which could be virtual assistants: your morning wake-up with your little Echo, or whatever you have, when you're asking Alexa, or Siri on your iPhone. Those are all considered assistive AI devices. And there are many additional ones. So we have things like adversarial AI, which is about understanding the weaknesses and vulnerabilities of other AI systems, for example. And there are many, many experts inside all of these fields who really deep-dive into that. I am not an expert in any of those fields. But hopefully that's enough to answer your question and get you thinking about things that you might want to explore, or conversations that you want to have with your peers and colleagues.
Johnson: That's very interesting. Thank you for that definition. People don't think of Siri as being AI.
Harrington: I think that really does show that AI is really here. Probably more than we all recognize. I know, Bill, you've focused primarily in the healthcare space. What are some of the ethical concerns with AI?
Jordan: Well, I think there are ethical and equity concerns with AI, and they happen throughout the lifecycle. There's really the way people are developing products: what the original purpose is, or the question that they're asking, can have ethical concerns in terms of how that's constructed. And then there are concerns in terms of where the data is being pulled from, and how the data is being used. So there might be issues in terms of AI pulling from data that already has biases built into it. If we think about things like how housing has been structured in the United States historically, we know that people of color have been historically shut out of housing opportunities. And so when you have AI pulling in information from historical data on housing and then using it to make lending decisions, it can potentially reinforce the historical injustices that we see.
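To illustrate the mechanism Jordan describes, here is a deliberately simple, hypothetical sketch: a model trained on historically biased lending decisions learns to reproduce them. The features, labels, and the "redlined" flag are all invented; no real lending data or deployed model is implied.

```python
# Toy sketch of bias inherited from historical data: a model trained on
# biased lending decisions reproduces the bias. All data are invented.
from sklearn.tree import DecisionTreeClassifier

# Features: [income_decile, lived_in_redlined_area]; label: loan approved?
# In this invented history, applicants from redlined areas were always denied.
X = [[8, 0], [3, 0], [6, 0], [9, 1], [7, 1], [4, 1]]
y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# A high-income applicant from a formerly redlined area is still denied:
# the model has learned the historical pattern, not an equitable policy.
print(model.predict([[9, 1]]))  # -> [0]
```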
Hightower: Yeah, absolutely. And when we think about ethics and AI, I like to think of it within this broad category of ethical AI; then bias, which Bill has talked about; and then specifically fairness, equity, and justice. Broadly speaking, some of the ethical concerns are even around misrepresentation, deepfakes, and the ethics around the high energy use of generative AI. And so there are a lot of different ethical issues that sometimes we lose track of when we're thinking more narrowly on, say, responsible AI or fairness, equity, and justice.
Now, within equity and justice, as Bill mentioned, there's this whole lifecycle approach: from the data from which models are generated, to the problem formulation. Like, who has the power to ask a question, right? Who is the one that is funding and building models? And do they really serve utility and purpose for everyone? When you think about it, a lot of our models are industry-developed, and does industry have the same concerns that our community has? Then within the model development process, once a problem is determined, there are just so many decisions that are made. And if you look at the workforce within machine learning, in healthcare and in STEM in general, does the workforce really represent everyone, and all the voices? Even if the actual lived experience may not be broad, does each person come with that broader representation in mind? When it comes to data scientists, they can make a lot of decisions that are biased towards, you know, whoever's in power. And those decisions, all the way through model development, can incorporate bias, all the way to deployment. So you can have a perfectly debiased model, but now it's in the real-world environment: what kinds of biases are then introduced by those that are using the tool in real time?
Stone: Excellent. Thank you, Maia. So just to add a little bit of my perspective to all of that: I absolutely agree with everything that Maia and Bill have stated this morning. And as somebody who comes from a background where the voices of my people -- I'm Native American and Indigenous -- are not always represented, I always ask the question. This is what I do for a living inside my space; one of the big things I focus on is missing data. And I started feeling and understanding what that meant when I was very, very young. I'm like, why am I not there? That's not me. My people aren't represented. I couldn't say my name, but I could see my tribe. So why am I not on this piece of paper? And of course, as technologies became more and more available, it's like, why am I not in this? Like, who designed this? And of course, working in HCI and data science and all of that, I get the opportunity to explore the missing data. For me, missing data has so much of an impact as well. But we often talk about and explore the data that we can see; just like many things, if we can see it, then we talk about it. But I believe that it's just as important to understand what's missing as it is to understand what we can see. And I don't think I'm misspeaking when I say this all plays into the items and topics that both of you talked about, especially being doctors and working in the healthcare sector: how this can play out from historical perspectives, in the way in which we create products, as I work in product development, and then how those impact the actual long-term care of patients, and anyone else, for any other products that we create.
Harrington: Those are great perspectives. One question I have is: what are the guardrails? I know in Europe, for example -- and Rebecca, you've had experience there -- there are pretty strict rules being developed, versus here in the US, where it's a little bit more of a free, open field.
Stone: Sure. Thank you for bringing that up. So I had this wonderful opportunity: about a year ago, I dropped everything that I was doing and went to France, because I wanted to understand how the creation and development of different systems and products -- especially since I work in experience research -- impacts or applies to human rights. And in working among various agencies and academics, I had this opportunity to put forth my perspective, from a very suppressed and oppressed and underrepresented group of people when it comes to AI, as part of write-ups for the artificial intelligence regulations that, of course, passed through Europe in just the recent months. It seems like forever ago, but it hasn't actually been that long. One of the goals of me going there was to really understand the conversations that people are having, and to bring that back to America. Now, I'm not saying that people don't have these conversations in America -- absolutely they do. But people who come from tribal communities: are they at the table? It might happen, and they might be there, and I hope so. But I'm going to gamble, with my background, and say most likely not. So I had this opportunity, and I bring it forward, and I get to work with people like Bill and everyone here at Movement is Life and talk about this. So I'm very excited that policymakers, both at the federal level in the United States and at the state level, are creating various policies and laws to help put boundaries and borders around emerging technology, in this particular aspect of artificial intelligence.
I'm very curious to also know what tribal policymakers are saying about this, because I don't really see a lot of talk from a policy perspective about it, even as somebody who's always asking: where am I? Where are my people? So hopefully, for any tribal community members or legislators who are out there who deal with tribal communities, I hope that there's a conversation being had someplace, and maybe they can work with people like Maia, and you, Dr. Harrington and Dr. Johnson, on exploring this from a health perspective. I mean, there are a lot of other things that we don't have time to get into today inside Indigenous communities and tribal communities around the US, and of course, that spans across the world.
Hightower: Absolutely. I would say from a guardrails perspective, we can think of it in terms of, say, international norms, then federal guidelines and policy, then states -- some states are really active in proposing regulation -- and then at the more local level, what are institutional policies, and what are the norms of the teams that are actually building these tools? There actually aren't any uniform standards when it comes to international law. When it comes to federal law, right now there are guidelines and proposed rules. But other than HIPAA, which has been around for quite some time and overlaps with ideas around governance and policy for AI, there's nothing that quite meets the need of AI products today. And then, like I said, California has some rules. On the federal level, it's really mostly President Biden's mandate from the White House around regulation, proposing regulation mostly through the various federal agencies. And within healthcare, we're mostly regulated by HHS and some of the federal agencies around Health and Human Services: the FDA, CMS, ONC, the Office for Civil Rights. These are some of the agencies that are proposing rules. But I think local governance is extremely important. So as Rebecca mentioned, at each institutional level -- whether you're a health system, a federally qualified health center, or a tribal community -- developing some sort of policy or process around how to ensure that the products of AI really are providing equity, fairness, and utility for everyone in the community is so important.
Jordan: I would echo what Rebecca and Maia have said. I think it's really a patchwork. Unfortunately, for a lot of us, we don't understand the technology very well. And when you look at, as Maia was saying, the federal agencies that might have an interest in this, it can range from, as she said, civil rights, to the Food and Drug Administration when it's in devices, to the Federal Trade Commission's oversight of products. So it's very messy, and we really don't know what it's going to look like yet. And then at the state level, it's also complicated, because often the health systems and insurers who are already using these products are regulated by the state, and sometimes under different agencies.
Johnson: On a more positive note, Movement Is Life is actually very concerned with equity, and we've spoken about a lot of cautionary tales already as this discussion has moved forward. How do you see a positive light on AI when it comes to equity and health care? I know in many hospital organizations, we already have AI embedded into the computer systems for nursing documentation, prevention of falls, and medication safety. But Maia, as you brought up, it's very important that the teams building these AI systems make the difference as to whether it is fair and equitable for all communities. Would each of you speak about how this looks in your organizations, or how you feel about the positive things AI can bring to build equity in health care today?
Hightower: Well, as the co-founder and CEO of Equality AI, our whole mission is to help healthcare systems monitor, evaluate, and mitigate risk associated with AI, including incorporating fairness methods within the AI development process. Now, unfortunately, what we do know from studies is that most models historically have not looked at subpopulations at an adequate level, where there's even visibility or transparency on how a model is going to affect various subpopulations. It doesn't matter if that subpopulation is a race, an ethnicity, a gender, or an age group; there are a lot of various subpopulations. But historically, we have not gone to that level of granularity. And that's where some of those cautionary tales come into effect, where we've been victims, so to speak, of models that have been deployed widely across various health systems and have been found to be racist or sexist, and we learned from that after the fact. Only recently has there been this focus on fairness and how you apply actual methods, whether technical or nontechnical, including diversity of teams. As a nontechnical method, even just having a diverse team present -- various stakeholders with different lived experiences, able to bring that lived experience to the development of a model -- can mitigate a lot of bias, without having to rely on a technical method. But there are also technical methods available as well. So you can measure the fairness or the accuracy of a model by subpopulation, or ensure that the population in the data set actually is representative of the target population that you hope will benefit from the model. It's an evolving science, and because the standards haven't been well defined across industry and across healthcare systems, it's still very variable. And so it's up to us to be intentional in applying these bias mitigation methods, in order to ensure that AI really is addressing these age-old problems of health equity, which it can. Or it can widen them, if we're not intentional in using these methods that are available to us today.
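One of the technical methods Hightower mentions, measuring a model's accuracy by subpopulation, can be sketched in a few lines. The column names and toy data below are hypothetical, and a real fairness audit would look at richer metrics (false negative rates, calibration, and so on), but the idea is the same: disaggregate before you trust an overall number.

```python
# Minimal sketch of one fairness check: a model's accuracy computed
# separately for each subpopulation. Column names and data are hypothetical.
import pandas as pd

def accuracy_by_subgroup(df: pd.DataFrame) -> pd.Series:
    """Fraction of correct predictions within each subpopulation."""
    correct = df["prediction"] == df["label"]
    return correct.groupby(df["subgroup"]).mean()

# A model that looks acceptable overall (75% accurate) can still perform
# much worse for one group: perfect for group A, a coin flip for group B.
results = pd.DataFrame({
    "subgroup":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "label":      [1, 0, 1, 0, 1, 0, 1, 0],
    "prediction": [1, 0, 1, 0, 1, 1, 0, 0],
})
print(accuracy_by_subgroup(results))  # A: 1.0, B: 0.5
```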
Stone: I 100% agree with Maia. And I want to talk quickly about this from the perspective of a team. So I'm often part of a team -- I'm embedded inside these teams -- and I have the opportunity to talk with people who may utilize a particular product or system. A lot of times I notice, when I recruit and look for people to participate, to interview, or even for more basic day-to-day things like surveys (which, of course, are not particularly basic), that there's not a lot of representation from certain groups or communities. And that goes absolutely to what you were saying about all these different granular groups. Then of course we have the intersectionality piece, right, where it's like: yes, I'm this, but I'm also this, and I'm also that, so I have a perspective that I want to share with somebody. But if I'm not able, as an experience researcher, to recruit an individual who may have that perspective, then of course that's lost when we go to combine that information with other data and other bits of info from other teams.

But also, to really hone in on what you were saying about the team: in history, we know that women didn't always have a seat at the table, or were not really on these embedded teams within different spaces, especially the technical space and the engineering space. And I'm very happy that we've made progress, and there are a lot more women in those spaces. This may be a bias of mine, but I would say that we're still not where we need to be, even in the gender space. And a lot of times we still talk about men and women in the space, but we don't include our nonbinary brothers and sisters, and other people who may not identify with just a binary set of how they identify their gender. So I think we really need to explore that. But also, as somebody who is a member of many things, intersectionally: I am so happy when I have a seat at the table, and I have to say I'm always super proud when I can rep my own people. And I would assume that most people are pretty happy when they can represent the people who are like them.

But all that to say: why are we not having a seat at the table? Well, I think we just have to go back to what the educational system looks like, especially in the U.S.: getting people into those classrooms early, and then helping prepare and inspire them to go into a university. Especially if you're Indigenous and you're Native American and of these groups, sometimes that doesn't always happen. There's this big gap, statistically, with the statistics that we do have, that demonstrates this, right? And of course, when Indigenous people do go into university, they might not necessarily go into this type of field, or the intersections between communities and technology, or they may not become data scientists. Now, that's not saying they all don't -- a lot of us go into healthcare, which is great. But how do we improve that? That's not something we can improve on a product team; it starts way before you get into a particular team inside the organization. And so when I'm part of different organizations and have conversations, I sometimes ask: why don't we have a Native voice on here?
Or: did we interview people from a res, or people who are part of a tribal nation who may not live on a reservation? And it's one of these situations where we're like, well, we know we should do that, but maybe we don't know how. And I'm like, well, I'm here; I can help you. But that's also why I founded Generation 7 -- which has a whole other background, if you don't know the story of the Seventh Generation -- to have this opportunity, when things like Movement is Life and different conferences come up, to talk about a perspective, holistically, from the way I see it as somebody who grew up inside my tribal nation, right? To say: this is what I see in different teams when I'm doing this work. And to have conversations with all of you -- Maia, Bill, and everyone at the conference -- to say, well, what are some ideas and ways that we can pull that together? I have some other things to talk about, but of course, I know Bill has a lot to say, so I'm just going to pass it off to him.
Jordan: Sure, thanks Rebecca. I think about this in terms of equity opportunities within a framework of acknowledgement of past and ongoing harms, and a redress or reparative approach. And I think about that at the individual level, in terms of patient-physician communication, and then at the system or societal level, in terms of reparative strategies from a policy perspective. I oversaw family medicine training at a medical school in the Bronx for several years and worked at a community health center in the Bronx for almost a decade. And I think about all the work we did around helping doctors to be better communicators, and to come at patient communication from a shared decision-making model. So I think about this AI technology and how it's coming into play. We know how technology has had a disparate impact and harms on people, and medicine has had disparate impact and harms. And we need to prepare physicians and future physicians to have these conversations with their patients, and be able to explain AI and what the inequities could be, based on what we've seen in history, and then also what the opportunities are.
From a societal or policy perspective, there are some emerging models -- not so much in healthcare as in other areas. As I mentioned earlier about housing, people are talking about reparative algorithms related to lending, and thinking through how the algorithms could right the wrongs of differences in housing values in communities of color, and other opportunities like that. I know that colleagues in Boston have worked on fixing admissions to the cardiac unit for people of color, who were disproportionately put on the general medical floor, and they're building that into their electronic health record. Those are more on the algorithm side and haven't really passed into AI yet, but I think they give some examples of where we could go.
Johnson: Thank you so much; those topics are so very important, because one of the things that you talk about is education: physician education, patient education, nurse and colleague education. This is something that is really fragmented many times and really can produce poor outcomes in health care. Movement is Life is always looking at equity, disparities, and outcomes. Based on what you're saying, how will AI move forward into the general population's use as they interact with healthcare professionals today? How will our patients see AI? How will they use it? And what will prepare them for, perhaps, surgery prep, or asking their medical team questions, to prevent them from having complications or to produce better outcomes? Communication is one of the biggest things that we're missing in health care, and in equity in general, throughout. Will each of you talk about that? Maybe Bill, you want to start?
Jordan: I think about the existing disparities or inequities that we see in terms of access to technology, and I really think that we need to be addressing those and think about how they carry into AI. We talk a lot about just access to devices. The gap in access to the internet is closing to some degree, because of people having smartphones, but that doesn't really hold up in rural areas and in a lot of areas across the country where people don't have access to broadband. So they can't get the information, even though they might have a device. I think there are real challenges in people not even having the equipment and the infrastructure in their communities to really benefit from these technologies.
Stone: Yes, that's an excellent point, Bill. I'm going to piggyback off of what you said slightly, and then take a different approach to communications. So absolutely: if we don't have access to the technologies, the infrastructure, and things like that, you're going to have serious issues, right? And this all again plays into the physical capabilities of people. It also plays into laws and policies. But I would like to say there are also these things that get broken a lot, called treaties, that also play into this, and that we don't necessarily talk about in America. And the reason why I keep bringing this up is because, as a researcher, and just as a person in the world who's curious about things, I never heard these conversations being had. So I'm going to have them with you all today, and hopefully spark some of our listeners' curiosity to have these conversations.

And so, moving on to a different perspective on communications, and on what Bill and Maia have talked about in different aspects today: one of the things that I'm very curious about, and that I'm really trying to advocate for, is communication based on language. And the reason why I say that is because there are over 7,000 languages in the world, but our AI systems and our models only accept a very small number of languages, and most of them -- not all of them, but most of them -- are Indo-European languages. So last year, I was testing out my tribal language, which is Muskogean-based -- and there are not a lot of these languages; some of them have become extinct -- to see how things like ChatGPT would react. And of course, my assumptions and my hypothesis, for a light term, were validated: they had no idea how to respond to what I was driving at, right? So I'm very curious what will happen, and I think that this is something that we really should explore. For people who are into linguistics and understanding the preservation of these languages: we really need to understand how we can bring the beautiful languages of the world into these systems, if we really want to make this equitable and accessible to everyone. We need the equity to be among the languages as well, spoken and written, and not just Indo-European-based languages.

Now, I will put a caveat on that. One of the things that also concerns me, especially as someone from my background -- and there are a lot of talks going on in various communities all over the world and in different organizations -- is: what do we do about data sovereignty? Who owns that data? If you have a really sacred story, or information, or language that you don't really want to share, how do we have a conversation about preservation of that? Is AI the right way to do that? And if it is, how do we ensure that the people whose information is provided have the ownership, and can control how the technologies utilize that particular piece of information, or the stories that they share, or the languages that are built upon that? Now, that's a huge conversation for many people in many professions to have. And I think there are a bit of rumblings going on in the world, in different agencies and different forums, talking about this. I'm just very curious to see how that will play out. And I am curious to also understand how people here, my colleagues, and the people at the table feel about what can happen with that. What are their thoughts about that?
Hightower: I would say you brought up two really intersectional concepts: one around digital literacy and language, and the other around voice and amplifying voice, right? We can all, say, speak the same language, but some voices are heard and some are not. And in our current state, we don't have that common digital literacy or AI literacy to even have effective conversations. Whether that voice is the physician or clinician, the nurse on the care team, the patient, or our policymakers, everyone is actually at various levels of digital literacy and understanding of AI concepts. Which makes it very difficult, then, at that vocal level -- whether it's an individual patient wanting to bring voice to a conversation with a provider or with a healthcare system, or, on a broader level, amplifying that voice into systems, whether through community advisory panels that actually help shape the AI development process, or through policy or advocacy. That systematic approach to amplifying voice isn't a competency we've mastered at scale. There are some health systems that have been able to leverage their community advisory panels to try to bring that voice to the AI development process, but it's been very ad hoc. So I think those are the two main areas of competency we need to continue to develop: digital literacy and AI literacy, at the individual patient level, at the provider level, at the care team level, at the societal level. And we just don't have that right now -- physicians as well. I mean, you'd think doctors are pretty educated people, but the studies actually show that physicians have this automation bias: when an AI system or an algorithmic system presents information, there is this automatic tendency to say, okay, the developers must know what they're doing, it's gone through the review process, it's in the EHR, it must be okay. Accept, accept, accept, or deny, deny, deny -- it can actually go either way, right? But there's this automation bias, whether it's physicians, like I said. And then at the patient level: how do patients amplify their voice, their perspective, and their values in an individual conversation? Most patients aren't even aware how pervasive both AI and algorithms are, embedded within the practice of medicine.
Johnson: Thank you so much. It looks like we need a lot of teams to consolidate all the ways that AI can take us all.
Harrington: I would say, just as a final wrap-up question, following off of your last statement regarding patients and their awareness of AI -- what's one simple thing you would leave as a piece of advice for patients encountering the healthcare system, to know or to do or to push on, about the use of AI?
Hightower: I would recommend, to anyone that's interested at a patient level, to get involved: ask, does your healthcare system have a patient advisory committee? There are various names for these patient advisory councils or committees, but just being present, having a seat at the table, gives you that power to amplify your voice. That, I think, is one way that we can all shape the future of AI if we have strong opinions about what that future should look like: if we want an equitable, just, and fair future, get involved. Because once you're at that seat at the table -- whether it's a community advisory group, or you're a subject matter expert helping to develop a model, or you're actually on some of these implementation teams -- you can learn what you need to. The most important thing is to have that seat, amplify your voice, and come with strong principles around equity and fairness. And then you can learn the rest when you get there. So that's what I would say.
Stone: I absolutely agree with Maia; that's spot on. I love that answer. So once you get your seat at the table, go: why am I not here? Let's talk about this. Ask lots of questions, be curious, communicate with each other, at all different levels. I think that will really help out. So just go for it.
Jordan: Yeah, I think at the individual level, it's asking during the encounter with a physician or nurse: what is happening in terms of how my information is being used? How are decisions being made? What kind of technology is there in the background that's influencing this? At the societal level, I would echo what Maia and Rebecca said: in addition to getting on patient advisory committees at the health system level, I would think about getting on committees at the state or federal level, to the extent that people have the time to do that. It's so important to have patient voices on those committees, to really bring their stories to the policy decisions that are being made.
Harrington: All right, well, thank you all for such a wonderful discussion on this active and emerging topic that affects all of us. That brings us to the end of our episode today. Thank you all for participating in our summit as well as our podcast, and we hope all of our listeners will join us again in the future on the Health Disparities podcast. This is America's leading health equity podcast, and until next time, please be safe and be well.