The (Racist) Robots Are Already Here
Devin and Solana explain the impact of racist AI and algorithms on Black and brown communities.
Episode 18: The (Racist) Robots Are Already Here
References and resources
- ProPublica: Rent Going Up? One Company’s Algorithm Could Be Why.
- ProPublica: Pressure Grows on Real Estate Tech Company Accused of Colluding With Landlords to Jack Up Apartment Rents
- National Public Radio: Hidden audits reveal millions in overcharges by Medicare Advantage plans
- CNN: Amazon closes $3.9 billion deal to acquire One Medical
- The American Prospect: Why the Aetna and CVS Merger Is So Dangerous
- Brookings Institution: Police surveillance and facial recognition: Why data privacy is imperative for communities of color
Solana Rice (00:01):
It makes it very difficult to regulate not only the use of AI, but the use of AI in corporations. Even if we knew, “Okay, this is a decision being made by algorithms.” We often don’t know what the data is that is going into algorithms, and that data is often very, very racially biased.
(00:24):
Hello, hello. Hello. Welcome to Racism Is Profitable, a podcast about racism and the economy. I’m Solana Rice, co-founder, co-executive director of Liberation in a Generation Action, and I am joined by my colleague, Devin Barrington-Ward.
Devin Barrington-Ward (00:47):
I am back. They let me back. They let me come back. That means I haven’t messed up too bad. I was going to see if you were going to let me fake people out and if I was going to be able to act like I was Jeremie, but he’s not with us today. I have the job of filling some really big shoes. But I always love talking with you, Solana, especially about how we create this liberated economy, y’all.
Solana Rice (01:17):
Yeah. Well, it turns out that there might be some robots in between us and a liberated economy. Today we’re going to talk about what is all this kerfuffle about ChatGPT, and about algorithms, and about artificial intelligence, and what does it have to do with black people in the economy?
Devin Barrington-Ward (01:43):
The robots are coming, the robots are coming, but actually they’ve kind of already been here, particularly as it pertains to black and brown folks seeing some of the worst impacts of AI, and the way that those things just don’t really consider us to even be people. Yeah.
Solana Rice (02:01):
I think that’s right. You know what’s so interesting to me is that the headlines recently have been like, “Has AI gone too far? Maybe it’s too much,” and it’s been too much for black and brown people already. This is not a fantastic future where we have to just sort of fantasize about the harms. There are harms happening and have been, and the picture is much grimmer, I think. In looking through just the many ways that we use algorithms and artificial intelligence, the many ways that our governments use it, the many ways that private companies use it, the way that nonprofits use it, artificial intelligence is everywhere. The future is now, and it’s way-
Devin Barrington-Ward (02:58):
Yes. We’ve probably already been through Terminator three.
Solana Rice (03:04):
It’s much more than the robots are coming to take our jobs. We have many more things to worry about than the robots taking our jobs.
Devin Barrington-Ward (03:12):
Yes, absolutely.
Solana Rice (03:13):
But that’s how all this started. We have ChatGPT coming on the scene. People are like, “Oh my gosh, I got to get this ChatGPT.” Whole companies are like, “All my employees can use ChatGPT to do their work more efficiently.” Basically, it’s artificial intelligence that folks can put essay questions into. They can ask the computer to “Create my resume for me, create my cover letter.” We’re seeing, basically, that these are machines, computers, and systems that have tons of data. These systems are now learning, basically. They’re perceiving, they’re predicting, and they’re making-
Devin Barrington-Ward (04:03):
They have opinions.
Solana Rice (04:05):
They have opinions, and apparently we have to abide by these opinions. They’re making decisions. They’re making decisions.
(04:12):
We’re putting in past data, we’re putting in real-time data. There are these algorithms, which are basically recipes where a person says, “Take all this data and do this to it.” Then it spits out, “Okay, here’s your result. Here’s the decision based on all the data, this is what I’m recommending.” But unfortunately it goes way farther than just a recommendation.
Devin Barrington-Ward (04:38):
How far does it go, Solana?
Solana Rice (04:40):
Yeah. Well, I think we were talking about this. We thought, “Oh my gosh, this tool…” All artificial intelligence is is a tool. “This tool could really help our lives.” I just think we’re all seeing that it’s not just a help. There are ways it can help, but it is really reinforcing some serious inequities and injustices that we’ve seen with humans. Turns out when humans make-
Devin Barrington-Ward (05:15):
Technology.
Solana Rice (05:16):
Artificial… Technology and artificial intelligence-
Devin Barrington-Ward (05:19):
White supremacy inside of the technology, who would’ve thought, right? That white supremacy would make it into the recipe and the algorithms that these AI technologies are abiding by? Wow.
Solana Rice (05:32):
Yeah.
Devin Barrington-Ward (05:33):
Shocker.
Solana Rice (05:33):
Yeah. Again, the future is now, and what’s difficult is that artificial intelligence is just a bit more difficult to track because we don’t have the systems in place. Let’s talk about, is this really bad for black and brown people? We talked about how it can create efficiencies. Artificial intelligence can create efficiencies, if that’s something that we value. In employment, we see, “Okay, there are companies that are using artificial intelligence to match candidates to job openings. That could be good, because we’ve seen a lot of bias,” right?
Devin Barrington-Ward (06:12):
We’d like to think that a system like that would take the bias out of the employment process, and some systems are better than others, right?
Solana Rice (06:20):
Yeah.
Devin Barrington-Ward (06:22):
Yeah, sure. If it’s used correctly. “With great power comes great responsibility,” I think Spiderman’s dad or uncle said.
Solana Rice (06:29):
Yeah, some family member said that. Yeah, sure. We might see some of that, but we can also see some bias in credit scoring. Yeah, sure. Now, credit checks can be part of your employment screening. Is that what we needed? Should we actually be using credit scores? I did not look at your credit score. I didn’t think it was germane to whether or not you could do this job.
Devin Barrington-Ward (07:07):
I appreciate that because, like a lot of working class black folks, there’s a few dings there. But there are some AI scoring systems that are able to evaluate creditworthiness beyond just what traditional credit bureaus like Equifax and Experian typically rely on to decide whether a person is worthy of borrowing funds. But we also know that the polar opposite of that can happen and is happening right now. What we see is more folks are relying on the traditional systems that have algorithms that are just not favorable to, or are flat-out discriminatory against, black and brown folks. Then there are platforms that are not, right?
Solana Rice (07:57):
Yeah. Yeah. We can’t underscore enough how many ways this artificial intelligence enters our lives, probably without us even knowing. Let’s take housing, for example. Landlords are using artificial intelligence to screen rental applications. Lenders are using artificial intelligence to underwrite home mortgages. Insurance companies are using it for insurance on homes. We’re even seeing it in the ways that cities use it to distribute resources to communities. Kudos to Senators Warren, Markey, Sanders and Smith, who recently wrote a letter directly to a company called RealPage. RealPage has a product called YieldStar. YieldStar is used by landlords of nearly 20 million apartments across the country, and it’s probably single-handedly responsible for the rent hikes that we’re seeing across the nation.
Devin Barrington-Ward (09:09):
Listen-
Solana Rice (09:09):
YieldStar-
Devin Barrington-Ward (09:18):
Yieldstar and me, we got beef.
Solana Rice (09:18):
Yeah, exactly. We all should have beef. That’s why these Senators are like, “Hey, y’all need to answer some questions.” YieldStar went as far as recommending that landlords should just keep their properties vacant in order to get these higher prices. Landlords have gone on record as saying, “I would’ve never even thought to charge this rent until YieldStar came along and showed me that I could charge so much more than what I have been. Just hold out, keep your units vacant, get what you deserve.” We’ll put it all in the show notes; it was all investigated by ProPublica. This is problematic. This is absolutely problematic, and we have no insight into what data YieldStar is using, or what the really widespread impact is. There are still some questions. This is just the initial investigation.
Devin Barrington-Ward (10:18):
Sounds like a recipe for 2008 all over again.
Solana Rice (10:22):
All over again. People losing their homes. In all corners of having a home, whether you’re renting or owning, you’re going to have some interaction with AI that you probably don’t know. We’re seeing it in healthcare, too. This is to the detriment of all of us. Every single one of us has some interaction with our healthcare system, whether it be for us or for our kiddos, or for other people that we’re taking care of. It turns out that in healthcare, AI is helping insurance companies really overcharge us as taxpayers and as consumers on both ends. You have issues like Medicare Advantage, this is a part of Medicare that folks can opt into. Medicare Advantage overcharged us, the federal government, the US taxpayers by $30 billion between 2016 and 2019. Basically, it was just because they used this lifestyle data. It’s like, “Oh, Devin, I see you went and got a haircut, maybe you have a job interview coming up. Oh, I see that somebody that you’re taking care of needs diapers. Oh, maybe there’s somebody new in your life,” this kind of data.
Devin Barrington-Ward (11:51):
I really hope that the AI doesn’t talk in that high-pitched voice. That would make it even scarier than all of the facts that you’re throwing at us, right?
Solana Rice (12:02):
Oh, definitely. It has the most obnoxious voice you can think of, whatever the most obnoxious voice is. It uses data to predict what care you’re going to need. Now, let me be clear. Insurance companies are not interested in sick people. If you got issues, they’re like, “Nah, we don’t.” There’s actually a term for it: they cherry-pick the consumers they want, and they lemon-drop the folks they don’t want to insure, the ones who might have an illness, because they’re all about not taking on that risk. They don’t actually want to provide and cover your care. They find all kinds of things in your data that give them clues about how risky you are. Then that helps them… Now, they say they don’t use that data to determine how much you should pay for insurance.
Devin Barrington-Ward (12:58):
Oh, please.
Solana Rice (12:59):
That’s what they say. Devin, we got to trust these people, that’s what they say. That’s-
Devin Barrington-Ward (13:07):
Trust and insurance companies should go hand in hand. They should, but they don’t.
Solana Rice (13:13):
On one hand they’re saying, “Okay, they’re not using it to determine your insurance premiums.” On the other hand, we have healthcare that the AI is helping determine what kind of healthcare we should actually get, so determining transplants, determining the risk, what kind of care I should be getting. You can imagine that the data that they’re using is not based on black people. It’s not like, “Oh, let me pull out the black people algorithm.” It’s based on data from mostly white folks and sometimes not even white folks in this country. Sometimes it’s based on data that’s from Canada that has a totally different population –
Devin Barrington-Ward (14:00):
A totally different-
Solana Rice (14:01):
Than we have.
Devin Barrington-Ward (14:01):
Healthcare system as well, with different healthcare outcomes. I’m hearing word on the street, word on these capitalist streets, is that Amazon is thinking about buying One Medical.
Solana Rice (14:12):
Oh, it is well underway. It is well underway. You imagine Amazon sees that, “Oh, you bought something at Whole Foods, you bought some vitamins. Oh, you not feeling so good? Okay, noted. Oh, you buying something on amazon.com? Okay, I see you.” They can use all of that data. All of that data is inputted and they can cross share it, in theory, with One Medical.
Devin Barrington-Ward (14:42):
Right, because now it becomes proprietary data to anything that is related to Amazon, including One Medical.
Solana Rice (14:48):
We’re seeing this across the healthcare industry. It’s not just Amazon buying One Medical, it’s CVS, which just bought Aetna. We have a major consolidation issue that is accelerated by artificial intelligence. Artificial intelligence is not to blame here, let’s be clear. It is a tool. But without real transparency and an entryway into how this tool is being used, our hands are starting to get real tied, especially if you have artificial intelligence creating artificial intelligence, which is basically what ChatGPT is starting to do.
Devin Barrington-Ward (15:31):
Absolutely. I also think, shout out to our amazing policy folks who are doing work around reining in corporate power, because we need these firewalls between big corporations like Amazon trying to get into the health… Not trying, but successfully getting into the healthcare space. There are no firewalls. There is technically nothing that is stopping them from being able to use that data to build more of a profile around you so that they can leverage more profits out of us.
Solana Rice (16:05):
Yeah, that’s absolutely right. That’s absolutely right. Unfortunately, it doesn’t stop at employment. It doesn’t stop at housing and healthcare. We also see this in the overcriminalization of people of color, generally, and black people in particular. We see this in policing artificial intelligence. Look, none of this is new. This has been going on for years, and years, and years.
Devin Barrington-Ward (16:31):
Robocop is not a new concept.
Solana Rice (16:34):
No.
Devin Barrington-Ward (16:34):
Robocop has been around for a really long time now.
Solana Rice (16:37):
A minute. We have police using AI to predict where crime is going to happen based on location, weather details, previous crime statistics and previous arrests. Well, guess what? When we create this… When we have a database of arrests, the arrests themselves are racially biased. Black people are twice as likely to be arrested as white folks, and five times more likely to be stopped. The arrest data doesn’t really tell us anything, and the stop data doesn’t really tell us anything about actual crime.
Devin Barrington-Ward (17:22):
I love stop and frisk. We are using data from stop-and-frisk-era New York to inform the rollout of “public safety resources.” We know that many of the stops that occurred under stop and frisk resulted in no arrest and no illegal contraband being found, no drugs, or guns, or anything like that. But that data is still being used to inform these algorithms and these AI systems in terms of how they are using technology to police our communities.
Solana Rice (17:56):
It doesn’t stop at policing. If you are arrested and you’re in the system, we depend on AI right now for sentencing. It determines bail costs. Even in our Child Protective Services, and it’s questionable whether or not it’s actually protecting children, there are several counties across the country that have started to use artificial intelligence to predict whether or not families are going to be in their Child Protective Services programs. Usually, I think it’s something like 30% of the time, maybe, the prediction is incorrect. That is too high. Right now, we’re not taking these predictions as suggestions. We’re taking them as fact. Humans still need to discern what is useful, and right, and truthful.
Devin Barrington-Ward (18:59):
Oh, so-
Solana Rice (18:59):
Yeah. Yeah.
Devin Barrington-Ward (19:02):
I won’t be watching the Matrix. I won’t be watching Terminator. I think even the Transformers is a bit too much for me right now.
(19:10):
But, with that being said, I got a question for you. AI is here. The idea that we’re going to somehow put it away and put it in a box, and it’s going to go away somewhere is probably foolish at best, ignorant at worst. With that being said, how do we ensure that these technologies are going to be used to actually advance equity and justice?
Solana Rice (19:38):
I think there are a couple of things that we have to keep in mind, and it’s for-
Devin Barrington-Ward (19:42):
Okay that big breath scared me.
(19:43):
Please tell me you got some answers, Solana, please. Please. Okay. She got answers. All right. We’re good, y’all.
Solana Rice (19:50):
Okay, get this, Devin, we’re just going to dig a hole.
Devin Barrington-Ward (19:53):
Nope. No, nope. You started with dig a hole. That’s not the advice that I was looking for. I don’t want to have to build a bunker.
Solana Rice (20:04):
No, no, no, no, no. Did I ever tell you I was in an airport with Post Malone and he was coming from Utah. I’m pretty sure in the headlines the next day I saw, “Oh, Post Malone’s building a bunker in Utah.” Just FYI, if you need… You just search around, use your metal detector across the state of Utah. I’m sure you’ll come across it.
Devin Barrington-Ward (20:27):
Okay. The bunker’s still on the table. Got it. All right.
Solana Rice (20:31):
Got it. Okay. Besides Post Malone’s bunker, we have to have, as the public, but also in our federal, state and local governments, a better understanding of how artificial intelligence is being used. Right now, in many of these cases, especially in the private sector, we don’t even understand how companies are using AI to make decisions. That is in itself problematic. The fact that YieldStar is a “secret tool” is simply not acceptable, especially as we’re trying to expand affordable housing, uphold fair housing standards and the like. That means the federal government has got to quickly get its act in order and have folks that actually understand this technology in every agency, in literally every… We just ticked down four different issue areas, all of which are governed and funded by different agencies across the federal government. We need an actual Office of Technology that understands what’s happening across all the agencies, so both in-agency and interagency capacity is absolutely necessary.
Devin Barrington-Ward (22:01):
Because there are four or five other big things that we didn’t name, like food and agriculture. There are so many other things that we interact with on a daily basis that AI is intimately involved in, and we have no clue what’s going on.
Solana Rice (22:18):
Yeah. It makes it very difficult to regulate not only the use of AI, but the use of AI in corporations. Then we also don’t know what’s actually going… Even if we knew, “Okay, this is a decision being made by algorithms,” we often don’t know what the data is that is going into algorithms. That data is often very, very racially biased.
Devin Barrington-Ward (22:44):
Hello? The credit system.
(22:48):
We can-
Solana Rice (22:48):
We even have more surface-level problems, like facial recognition algorithms that often don’t recognize darker faces.
Devin Barrington-Ward (22:57):
Right? That’s why I don’t get in those self-driving cars. Nope.
Solana Rice (23:02):
Well, you also need, as a pedestrian, to watch out for those self-driving cars, because they might not see you crossing the street.
Devin Barrington-Ward (23:10):
Lord, listen. Now the Autobots and the Decepticons are after us. Oh God, help us.
Solana Rice (23:16):
I think we also have to recognize that data is an asset. A lot of these companies are like, “Well, all of this data that we’re using is in the public domain,” but we don’t often know how it’s being used. Yeah, we’re accepting this, but there could easily be violations of our privacy that we are, in theory, opting into. But we don’t know how all of it is being used. That’s just unfair. I think the last thing is that we can support organizations that are focused on racial justice, focused on ending discrimination against black people in particular in algorithms, folks like the Algorithmic Justice League, Partnership on AI, Data for Black Lives, and Media Justice. These are just a few of the groups that are really attuned to these issues and are allies.
Devin Barrington-Ward (24:13):
That’s awesome. I’m glad that there are some organizations out here that are specifically working on that. But it also lets us know that there’s a lot of work that still needs to be done. One of the things that I think about is also kind of the more… I consider myself to be a philosopher at times. There’s also: where is the writing? Where are the scholars? Who are the thought leaders, particularly from black and brown communities, that are talking about the ethics and the morality connected to AI and its use in our lives and in our communities? I consider you to be one of those thought leaders. I appreciate you sharing your perspective and your brilliance on this issue, because it’s not going anywhere. I don’t see how we create a liberated economy if we have algorithms and AI technology that are grounded in oppression.
Solana Rice (25:17):
That’s absolutely right. Mic drop on that. Thanks y’all.
Devin Barrington-Ward (25:20):
Thanks y’all.
Solana Rice (25:23):
Peace.
(25:26):
Thanks for listening. For more information, check out our list of episode resources and visit us at liberationinagenerationaction.org. Shout out to our producer and audio editor, Nino Fernandez, the design team at TrimTab and the LibGen Action communications team. Like what you heard? Help us make some noise by telling two friends about the Racism is Profitable podcast. Until next time, y’all, peace.