AI in Black History Month
Vim: [00:00:03]
Hi, I'm Vimla Appadoo. You're listening to the Distributed Future podcast.
This is a podcast where we talk about all things future and our perception of it now. In this episode we're going to be talking about Black History Month and, in particular, AI. I've got Tim here with me.
Tim: [00:00:23] Hi, I'm Tim Panton.
Vim: [00:00:26] We are the hosts together. So Tim, what's your perception of AI at the moment?
Tim: [00:00:31] Horrific. It's kind of lazy, and that's really interesting actually. I don't do a great deal with AI, but when I do, there are a lot of prebuilt little demos that nearly work, and the temptation is to just throw them into a product without really testing around the edges, without knowing whether it correctly face-tracks non-white faces, or people wearing masks, or anything that wasn't in the test set for the free software you've downloaded.
Right. It's so easy to ship that without anybody noticing until it all goes horribly wrong.
Vim: [00:01:09] Yeah, and that's the problem. That's why AI will fail us as a society at large if we don't address those problems now. I think there's inequality in the datasets, but a lack of testing, understanding and research through to end deployment, too.
Tim: [00:01:39] Yeah, I think so. We're not treating it as an engineering product. We don't set definitions. Somebody will say, "Oh, I want facial recognition." They don't say, "I want facial recognition that works with this population, to this accuracy, and this level of failure is acceptable." Whereas in pretty much any other engineering discipline, if you were ordering a drill or something, you'd be giving specifications: it's got to go through wood, or it's for this purpose, or I expect it to last a hundred thousand hours, or whatever. Those numbers just don't exist in the AI world. People don't specify them.
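As a rough sketch of what that kind of specification could look like in practice: per-population accuracy targets written down up front and checked before anything ships. The group names, thresholds and measured numbers below are entirely hypothetical, purely for illustration, and not anything referenced in the episode.

# Minimal sketch of an acceptance spec for a face-recognition component.
# All group names, thresholds and measured accuracies are hypothetical.
SPEC = {
    "dark_skin_tones": 0.99,   # minimum acceptable true-match rate per group
    "light_skin_tones": 0.99,
    "wearing_masks": 0.95,
}

def meets_spec(measured, spec):
    """Return True only if every group named in the spec meets its threshold."""
    ok = True
    for group, threshold in spec.items():
        accuracy = measured.get(group)  # a group with no test data counts as a failure
        if accuracy is None or accuracy < threshold:
            print(f"FAIL: {group}: measured {accuracy} < required {threshold}")
            ok = False
    return ok

# Hypothetical results from a held-out test set:
measured = {"dark_skin_tones": 0.91, "light_skin_tones": 0.995, "wearing_masks": 0.96}
if not meets_spec(measured, SPEC):
    raise SystemExit("Do not ship: accuracy spec not met for every group")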
Vim: [00:02:25] But we also don't care. And we don't care, in my opinion, because the driving force behind innovation in AI at the moment is financial rather than anything else.
So AI is exciting, it gets the attention of investors and new markets, and the potential to develop it is exciting. Plus, the mechanisms that sit behind researchers or data scientists or PhDs, the requirements for the datasets needed for machine learning and artificial intelligence, aren't dependent on ethics. You don't have to prove that you've got an ethical dataset or that you've got consent; you've got to have a valid dataset, but you don't have to prove that it's inclusive or representative or unbiased, not racist, all of that kind of stuff. So the root of this, for me, is the application of the technology rather than the rigor of the data. And then it's funded, in my opinion, incorrectly at that point, and then it's funded incorrectly again once it gets into development or into markets.
Tim: [00:03:55] Yeah, I think I agree with all of that, but there's also this: because these datasets are so expensive to build, a lot of them are built with one purpose in mind. Many were built for academic purposes, or they were gathered by some other means, and that doesn't necessarily correlate with the way they're going to end up being used. So you see a lot of datasets gathered from people volunteering to put their photos on some sharing site or whatever, but there's no guarantee that the way those photos were taken, or the demographic of those people, or anything else, in any way lines up with how the data gets used, for licenses, for checking whether you're the right person as you come in through the airport. The way these things are getting used doesn't correlate with the way they're being collected, so a mismatch is inevitable. It doesn't matter how good you are at it: if you collected pictures of people in coffee shops and your use case isn't in coffee shops, then you started out with bad data.
Vim: [00:05:16] Yeah, yeah, yeah.
It's just really ridiculous. The example of that I think you were leading to is passport control not working for darker skin tones, and in particular it's recently been on the BBC news that it disproportionately affects black women. That has become the norm now, for me, in how AI is and will continue to be used, because our bar for acceptance of AI is so low. We build these excuses for ourselves: the data is too hard to get, or it's expensive, or you just work with what you're given and figure it out as you go along. However we want to justify it, we will keep doing so.
Tim: [00:06:22] I totally agree with you, because what happens is that it's pitched as a cheap solution. A lot of these AI projects are basically, as you said at the beginning, cost-based. It's cheaper than having somebody looking down at your passport and up at you, or whatever the thing is it's solving; it's generally there because it's cheaper. And therefore it only has to be kind of acceptable. You're right, the bar for what is acceptable is very, very low because it's cheap. You're getting something that costs a hundredth of having a person there doing it. And so it's sort of acceptable to management if it's error-ridden, but not to the consumer.
Vim: [00:07:23] Yeah. The people it doesn't work for are forgotten about as well. I can't think of the right analogy, and I know there's one I've heard, but it's almost "what's good enough for us, it doesn't matter about anyone else". And the "us" is white people, and the "everyone else" is black or brown people, people of color, and that's just the norm that we continue.
Tim: [00:08:03] Yeah, but that's not going to stay that way. The Americans have, I think, and this is a political point, made a huge strategic error in blocking, or starting to block, the export of technology to China, because what they're driving there is the necessity for the Chinese to develop their own technology. So very soon you'll start to find that the camera you buy with the AI chip in it does really, really good facial recognition on Chinese people, but doesn't do it very well on Finns. I think we're only three or four years away from that.
Vim: [00:08:57] I hope so, but I'm not convinced. Well, actually, I can't speak for any country other than the one I'm in, but in the culture I've grown up in, which is British culture, while living in an Asian household, my norm at home was still to assume white is the status to provide for.
Tim: [00:09:33] Right.
Vim: [00:09:35] So I know that will be different in different countries and different cultures.
Tim: [00:09:44] Aye.
Vim: [00:09:46] It's hard to see it happening.
Tim: [00:09:49] I think there is some interesting hope out there, some good news stories in this. I think it's Amsterdam that is now saying that if you sell AI software to the city of Amsterdam, you have to document the provenance of the data; you have to document it and show that it's egalitarian and unbiased. That's part of the requirements of the city's purchasing. And you see the same thing in France, and part of that is to do with trying to repatriate AI. Europeans in particular are petrified that they're losing out on this one, but they're not prepared to sacrifice their values, which is a really interesting line to take. I think French insurance companies have to explain what the algorithm is; they have to be able to explain the algorithm.
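To make "explain the algorithm" concrete, here is a toy sketch, not anything from the episode, of a model simple enough that every individual decision breaks down into named, human-readable contributions. The insurance-style feature names and weights are invented for illustration; a black-box model trained on the same data would offer no such breakdown.

# A toy linear scoring model whose every decision can be explained feature by feature.
# Feature names, weights and threshold are invented purely for illustration.
WEIGHTS = {"previous_claims": 1.5, "years_no_claims": -0.8, "annual_mileage_10k": 0.4}
BIAS = 0.2
THRESHOLD = 1.0  # scores above this get referred to a human underwriter

def explain(applicant):
    """Print the decision and the per-feature contributions behind it."""
    contributions = {f: WEIGHTS[f] * applicant.get(f, 0.0) for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    decision = "refer to underwriter" if score > THRESHOLD else "accept"
    print(f"decision: {decision} (score {score:.2f})")
    for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {value:+.2f}")
    return decision

explain({"previous_claims": 1, "years_no_claims": 2, "annual_mileage_10k": 1.2})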
Vim: [00:10:51] Yeah. I don't remember which state in America it was, but a court in the Deep South had implemented artificial intelligence software for prosecution. They were setting jail time based on the algorithm, and the data they were using was systematically racist, sending more black men to jail for longer terms. But the contracts the courthouse had signed meant they couldn't challenge the algorithm or the dataset that was used. So the black men who were appealing their sentences got turned away, and it was a black box, so they just couldn't do anything about it
Tim: [00:11:36] at all. I think that is changing. One of the problems, and I've found this in little experiments I've done with AI, is that you have no idea why it did something. You can train it, and then it will do something that isn't what you expected, and you have to stare at it to try and work out why it did that. What we're starting to see is that in certain domains that's becoming unacceptable. In insurance, for example, you can't do that. You have to have an explicable algorithm. You have to be able to get up in court and say, fine, describe your algorithm, describe why it's making that choice. And if you can't, then you lose the case. Now, that needs to happen everywhere. But
Vim: [00:12:32] That was what my question was going to be: how long do you think it would take for that to be a global change rather than a national one?
Tim: [00:12:40] Oh, I think there are places where it will never happen.
Vim: [00:12:44] Right.
Tim: [00:12:44] You know, I don't think I can name them, but a lot of this is to do with the very Silicon Valley thing of externalizing costs. The prime example is moderation on social networks. The algorithm, if you want to call it that, is making an after-the-fact decision about blocking something, rather than you paying an editor to make an editorial judgment about whether this goes in your newspaper. That is obviously a hundred thousand times cheaper. But what it means is that people have to complain before anything changes. So you're externalizing the supervisory costs out to the users and the victims. The victims get to do the work to fix it, rather than it being seen as the manufacturer's problem, a fault in the social network's design. It's seen as people not complaining vociferously enough, or us not responding quickly enough to criticism, but it's intrinsic to the design that you'll never be quick enough.
Vim: [00:14:20] Yeah.
Tim: [00:14:20] Because by the time it's gone out, it's already too late. That bothers me a lot, that externalizing of costs. It's kind of like pollution, basically.
Vim: [00:14:33] Yeah. I was listening to a talk recently that said exactly the same thing: if we don't have to have tech as part of the capitalist economy and society, we can try and do it differently, because that's the whole point of technology and innovation. We just haven't started to yet.
Tim: [00:14:59] But I don't think that externalizing costs is intrinsically capitalist. You can do capitalism without doing that. It's just that you have to have a regulatory agency that imposes fines for polluting the environment, or a court which will shut you down if you discriminate. You have to have regulatory structures that work, that prevent that externalization of risk, because if you can externalize the risk in a capitalist system, then as a managing director you're almost obliged to; it's your fiduciary duty to move the risk out if you can. So what you have to do, if you want to keep a capitalist system, is construct a thing where moving those risks out isn't legal, or is too risky. We're not there yet for AI.
Vim: [00:16:07] No. I think there is a responsibility for people in power, regulators, et cetera, to learn at least the basics of this, so they can make the right decisions on when to speed up and when to slow down.
Tim: [00:16:30] Yeah, we don't always have the choice though. One of the most egregious examples recently has been this business about proctoring software for students doing exams. Because it doesn't scale to have one human teacher watching one student, they do it with AI. You have a thing, supposedly AI, that makes sure you don't have anybody else in the room, that makes sure you're sitting in front of the screen all the time, and all of these sorts of things. And there are a whole bunch of people for whom that simply hasn't worked, who've had their marks marked down because the AI says they left the room when they didn't, just because the camera doesn't pick up their skin tone. And there was another really, really dumb example. I can't remember what it was now, and it wasn't even racist, it was just stupid. It was something like a girl wearing the wrong t-shirt, and it just couldn't see her because it was a reflective t-shirt. It was just dumb. But my point is that we've been dumped into this because of COVID. You either don't do teaching, or you do teaching some other way, and then you use the tools you've got to hand and throw them at the problem. The issue seems to be that nobody has thought about how you might mitigate that risk, because nobody really saw that it would be one.
Vim: [00:18:14] Yeah.
Well, I think people saw it would be one, but it just happened too quickly for them to be able to call it out. The decision-making cycles closed rather than opened for reflection, debate and consideration, because decisions needed to be made. But the decisions didn't need to be made that fast; they could have slowed things down a little bit.
Tim: [00:18:50] Yeah. I don't know if you've been following it, but Stella, who we interviewed a couple of years ago now, we talked to her about cars and them snooping on you, is actually now doing stuff in education, and she's horrified at the lack of care, the lack of thinking, that's gone into educational software in terms of protecting people from discrimination, and actually exploitation in some cases.
Vim: [00:19:27] Yeah. But that lack of consideration, the discrimination, all of it, exists anyway; it's just that when you see it in a dataset, it makes it more real. That's the horrible bit to me. It simplifies this massive problem, and that's not a bad thing. I just think there needs to be a recognition that it's not a new problem, it's always been that way. It's only now that we can try and use technology to solve it rather than making it continue.
Tim: [00:20:15] I think it's probably worse than that, in the sense that the technology has the capability to magnify it, and in many cases does. You can take the idle, ignorant prejudice of one programmer or one design team and blow that out to a hundred thousand students. And suddenly, instead of it being mildly aggravating that you might have bumped into an ignorant person on the train, what actually happens is that a hundred thousand students get disadvantaged by it. So there's this massive magnifier effect, potentially. For me, that's terrifying. And it's so irresponsible.
Vim: [00:21:09] It is, it is. It shouldn't happen, and it should never happen. But the fact that it is happening is a huge opportunity to stop it. Just stop. It's a huge opportunity for uproar, to say this should not exist, and this is why: we need to stop. I can't think of another word other than just stop it. The prejudice, or laziness, or whatever we want to call it, in the way it's been built is just a microcosm of society at large, of the bias and microaggressions and all of it that we play out all the time. That is then applied to this huge number of people who get held back, but they would have been held back anyway, just on an individual basis, and it's much harder to make a claim or prove that that's what happened. Whereas now, all of a sudden, there's this evidence: see, this is what happens when we're not considered, when decisions are made en masse. In my opinion, we're not putting those two things together: everyone's lived reality and lived experiences against the magnification of that when it's put through technology and applied to thousands of people.
Tim: [00:22:53] Yeah. I suppose, by having it writ so large and so obvious, there's an opportunity there to make it a simple, clear-cut case that might cut through the political noise. Is that what you're saying?
Vim: [00:23:16] Yeah. It's kind of like, well, of course, if you use historic data from schools across the country to decide the A-level results of every student this year, instead of letting them take exams, it's going to tell you that black students should get lower grades. But that's the point. That's the system we built and designed. It's not that the students aren't capable, it's that the systems we have in place will hold them back continuously. That's why we need reform. That's why we need to change it. That's why we need equality and equity.
Tim: [00:23:53] But I mean, that makes sense. It's such a lost opportunity as well.
Vim: [00:23:57] Yeah. That's why I get frustrated: the time is now, there's such a huge opportunity to change things, but we're stuck in these cycles of continuing to make the same mistakes. We're moving at a really slow and a really fast pace at the same time, and no one is taking the opportunity to see that if we change things now, we can really make a difference.
Tim: [00:24:28] Yeah. I think a lot of the engines for change are built to move slowly, and to some extent that's a good thing. Really rapid change is scary; sometimes it's good and sometimes it's terrible. And slowing change down is a natural desire in people, particularly the most comfortable: the more comfortable you are, the slower you want the change to happen. So that's kind of inevitable in a way, I suppose.
Vim: [00:25:09] But I think, for me, all it takes is for the young, angry, not even young, just angry and passionate people to be asking "why?" more.
Because we don't have to keep accepting below-standard levels of anything anymore; the excuses have run out. Before, we'd say no one had the time to do anything, but we've just come out of a period of six months where people didn't have to work because they were furloughed, and we're going into a time of a lot of job losses and huge amounts of time. There's so much time to spend on things that, for me, the pool of excuses for why we will accept what we used to accept as the norm is running low.
Tim: [00:26:09] I think that's right. And I kind of hope, as you say, that this is a moment of opportunity for change. I think it is, but I suspect some of the changes are going to be fairly unpredictable. I think we're on an odd balance point now, where small movements take you off in quite unpredictable directions, and whether we'll like where we end up, I'm less confident than you, I think.
Vim: [00:26:46] I will say, you know, I speak from a place of massive privilege, where I'm not having to worry about where my next meal comes from, or the real hardships of what the next few months could look like. So I know that that kind of access to time and energy is a huge, huge privilege at the moment, and I can afford to be hopeful and optimistic in times of dismay. But I'm trying to use that hope and optimism to create the change that I really want to see. And a lot of it, for me, is baked into technology and the future implications of that, and how we can try and use it to create a neutral market and create opportunities for everyone.
Tim: [00:27:38] I think in a lot of cases there's a good chance that will happen. You know, I'm really happy not to have young children at the moment. I have children, but they've grown up, and trying to get through lockdown with kids around the house must have been quite hard work; certainly none of those people had a second of spare time. I admire the heck out of anyone who's got through this year with young children in the house. That said, I do think a lot of the changes, particularly the remote working stuff, are going to change things an immense amount, much more than people realize. I was reading somewhere today that people haven't caught up with the idea that it's really easy to change your job if you're remote working. If you've got a skill that permits remote working, then you don't have to put up with bad employment practices, because you can switch employers just by logging into a different website. You don't have to move house, change your car, change your commute, rebook your travel, whatever. It's just a different website in the morning. And that freedom, for those, as you say, privileged individuals, is going to make a huge, huge change to the way recruitment practices work. Because if you mess up as a CTO or a CEO, you might find the next morning you have no employees. Literally, they could all just vanish overnight and find themselves new jobs, and particularly in the more gig-ish economy, they might just not bother to log in. So for the jobs that are still in demand, and we just don't know what those are going to be, those people are going to be able to demand what they want, and they're going to expect to be treated fairly and reasonably. Where that leaves everybody else, I'm not too sure.
Vim: [00:30:14] Yeah.
Tim: [00:30:17] It's a weird time. I was saying the other day, it's going to be really interesting to look back on it in ten years' time and think, oh, that was the inflection point that we completely missed. Didn't see that coming.
Vim: [00:30:33] Or we seize it.
Tim: [00:30:36] Yes, all right. I will try and seize at least some of them, you seize some others, and maybe we can move the needle. I think that's true: small changes now will make huge changes in the future.
Vim: [00:30:54] Yeah. I think a big part of it, I keep saying "for me", I need to stop, sorry everyone, is that the time is now to have a small awakening to the power we do have as individuals to create small changes. Pre-lockdown, we moved as one, hummed to a commuter beat, and our rhythms and rituals were defined by where we worked, how we worked, our broadband speed, all of this stuff. We now have individual power to try and change little things, and we know we can do it, because the whole world changed overnight.
Tim: [00:31:44] Right? Yes. It's no longer fixed.
Vim: [00:31:49] Yeah. Yeah.
Tim: [00:31:51] It's obviously changeable, obviously more flexible than we thought.
Vim: [00:31:57] Yeah. And we've been able to prove that we can do things completely differently to what we ever thought we could, in so many ways: from looking after our neighbors, to community WhatsApp groups, to how we cook because we can't get access to shops, and everything in between. We were joking about it before, but even with the Zoom quizzes, we found new ways to socialize, to connect, to do all of this stuff. So we can do it.
Tim: [00:32:35] Yeah. It's been really weird from my point of view, watching a lot of these changes happen, because a lot of them are things we touched on in the previous year and a half or two years of this podcast. A lot of the things we'd been talking about, with the idea that maybe in five years' time they would be relevant, and boom, there they are. Like remote working. It's all just come at us like a train. It's been amazing, like, whoa, I didn't think that would crop up. So I think change is happening much faster than I expected it to, anyway.
Vim: [00:33:17] So what I'm hearing is you're saying we predicted the future on this podcast.
Tim: [00:33:23] If people had been listening and paying attention, they'd have been ahead, you know?
Vim: [00:33:37] Yeah. Couldn't agree more!
Tim: [00:33:39] Cool. Hey, really good to talk to you, as ever. I'm going to push the save button. Unless, well, we should do the thing of saying there are show notes, although actually there will be no links in the show notes this time, but in general...
Vim: [00:33:57] There are some things I do want to link to, actually.
Tim: [00:33:59] Okay, cool.
Vim: [00:34:01] Around AI and race, mainly.
Tim: [00:34:03] Yeah, I've probably got one or two as well, actually. So there are show notes: go to the website, which is distributedfutu.re, pick up the show notes and subscribe to the podcast. Because, as Vim said, if you'd been listening, you'd have known all of this before and you'd have been way ahead.
So do sign up and subscribe. We're on a slightly erratic schedule at the moment, but there's good stuff in the pipeline, so keep listening.
Vim: [00:34:37] Yeah.
Tim: [00:34:37] Same here. All right.