Tim: [00:00:00] I'm Tim Panton.
Vim: [00:00:06] And I'm Vimla Appadoo.
Tim: [00:00:08] This is the Distributed Future podcast. This episode is a discussion between Vim and myself about practical tech ethics. A specific instance cropped up the other day for me, and I thought it'd be really interesting to talk through the decision-making process and what it means.

Amusingly, it overlaps with our previous talk about standards, so it crosses into that space. I'm trying to think about how to start describing this. I do some standards work, as some of you might know, which is basically about setting standards for various tech things.

One in particular is the technology we're using to make this recording, which goes into things like Google Meet. I'm actively involved in setting those standards, and a thing cropped up the other day where they said, oh, we want to add this new feature to the standard.

I'm actually all in favor of the feature, but what it enables is what's called "funny hats", which is basically the idea that you can superimpose a mask on somebody's face, put a hat on them, give them a mustache, or blur out their background, that kind of thing.

So basically it tracks people's faces. I'm all in favor of this as a feature that people want in video conferencing, but it occurred to me that if it's done badly, it gets to be discriminatory, as per our conversation a couple of episodes ago. So then the question is: what do I do about that? How do I act on that observation? That's where I'm sitting at the moment. Am I in the wrong?
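To make the "funny hats" feature concrete for readers, here is a minimal sketch of how such a pipeline might be wired up in a browser. It is a sketch under assumptions, not the actual standard Tim is describing: it presumes a Chromium-style browser exposing the W3C mediacapture-transform API (MediaStreamTrackProcessor / MediaStreamTrackGenerator), and detectFaces() is a hypothetical stand-in for whatever face tracker an implementer plugs in.

```typescript
interface FaceBox { x: number; y: number; width: number; height: number }

// Hypothetical: the face tracker the feature assumes but the standard
// does not supply. This is exactly where the discrimination risk lives.
declare function detectFaces(frame: VideoFrame): Promise<FaceBox[]>;

function addFunnyHats(track: MediaStreamTrack): MediaStreamTrack {
  const processor = new MediaStreamTrackProcessor({ track });
  const generator = new MediaStreamTrackGenerator({ kind: 'video' });
  const canvas = new OffscreenCanvas(1, 1);
  const ctx = canvas.getContext('2d')!;

  const overlay = new TransformStream<VideoFrame, VideoFrame>({
    async transform(frame, controller) {
      canvas.width = frame.displayWidth;
      canvas.height = frame.displayHeight;
      ctx.drawImage(frame, 0, 0);

      // If detectFaces() works poorly for some users, the feature
      // silently fails for them -- nothing at the API layer notices.
      for (const face of await detectFaces(frame)) {
        ctx.fillRect(face.x, face.y - 20, face.width, 20); // crude "hat"
      }

      const out = new VideoFrame(canvas, { timestamp: frame.timestamp });
      frame.close();
      controller.enqueue(out);
    },
  });

  processor.readable.pipeThrough(overlay).pipeTo(generator.writable);
  return generator; // a MediaStreamTrack carrying the decorated video
}
```

The point of the sketch is that the overlay plumbing is standardized while the tracker itself is not, which is where a bad implementation can quietly discriminate.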
Vim: [00:02:03] So instinctively I'd say you're not in the wrong, because you have to pull this stuff out. But I can also understand that from a purely tech perspective it makes sense. That's where the conversation starts: how you bring ethics into tech, and how you build those standards in without limiting innovation or creativity, while making sure things get thought about in the right way.

I was listening to an interesting talk recently about the future of standards, and how we don't need to rush anymore. We're at a point where we have all of the technology we could possibly need, and we've got there through sacrifice and inadequate standards. So now's the time to slow down and build those standards in, so that we stop putting technology out into the world that's dangerous or doesn't serve everyone it needs to.
Tim: [00:03:09] I very much subscribe to that idea. It harks back to some of the things we were talking about, like non-governmental bodies imposing ethical standards, or safety standards, on their members. I think that's a good route for those things to go.

But the concrete thing I have is this standards document that's out there, and there's nothing wrong with it per se. I can just see that it's going to be misused. Not even actively misused, but there are going to be people who put a bad implementation of a face tracker in, and then it won't work for 10% of the population.

So I raised this as an issue in the side comments on the standard, and got back the reply that they were separable concerns. And that's really interesting: if you can separate it out, you can make it somebody else's problem. That's very much a typical tech response to a lot of ethical things.
Vim: [00:04:24] Yeah. How did they describe them? As separable?
Tim: [00:04:29] Well, literally, he says that it would seem like it might be a separable concern. Because basically what I've said is that there's nothing wrong with the specific API they've specified in the standard. It's just that without a high-quality implementation of a face tracker alongside it, people will put bad face trackers in with it and thereby misuse it. So it's a little bit like the knife argument: it's not the knife that's the problem, it's how it gets used. But on the other hand, there are certain knives that you don't want to sell, because they're not really for cutting chicken, you know.
Vim: [00:05:08] I'd take the knife analogy further. There are rules around it: you don't sell knives to under-sixteens. There are things that try to control the use of a knife, and that's what standards are, for me. So to say they're separate issues doesn't make sense, because the point of the standard is that you have to think about the application of the technology in order to make sure it's safe and secure for people.
Tim: [00:05:42] Right. And I think where my objection is on stronger grounds is that they specifically used funny-hats face tracking as the use case they're trying to address. My argument is that that's not addressable in a safe way without a guarantee of a good face tracker being in place. I'm still waiting to see how the next round of responses goes and how that discussion develops. But it's interesting to be brought up short a little and find that, in my own practice, I'm hitting the problems we've been talking about on the podcast.
Vim: [00:06:33] Yeah, and I think that's the beauty of it as well: we can pull out these issues and start to see them in everyday life, because we have the conversations that we can then apply to everyday life. If we didn't, it'd be too easy to say the standards are good enough and we trust people to act in the right way.
Tim: [00:07:00] Yeah. I must confess, though, that one of the things I'm doing with this is, to some extent, making it somebody else's problem to provide a good face tracker. But on the other hand, my argument is that if you're going to build a browser, adding a decent face tracker is not a huge overhead compared to the rest of the work you have to do. But maybe that's over-simplistic of me.
Vim: [00:07:23] I don't think it is. I really struggle with this, because in my head it's so intrinsically linked. I just don't understand why you would ever add a capability like this to a service, because my background says I'm not going to put a service out there that doesn't meet the requirements of the user. So if one of the requirements of the user is a piece of technology that recognizes their face, it has to recognize everyone's face. I'm not going to put it out there unless it works for everyone.
Tim: [00:08:04] Yeah. So this is slightly tricky, in that it's perfectly possible for people to supply their own decent face tracker. If you happen to be, for the sake of argument, a company the size of Zoom or Cisco, then you probably have a decent algorithm in your back pocket, and you just apply it to this API. So the API doesn't make it impossible for this to be done well. It just means that the lazy amongst us will pick up a free algorithm off GitHub, and it won't be any good, and it won't have had a good test set, and all the things we were talking about a few weeks ago. So it's not that it's intrinsically unsafe; it just raises the risk, if you see what I mean, that people will do it lazily.
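A hypothetical sketch of that separation: the API defines only a plug-in point, and nothing in it distinguishes a well-validated tracker from a poorly tested one. FaceTracker, VendorTracker, and runProprietaryModel are illustrative names, not real APIs.

```typescript
interface FaceTracker {
  // Should return a box for every face in the frame -- across skin
  // tones, ages, and lighting conditions, not just the easy cases.
  detect(frame: ImageBitmap): Promise<DOMRectReadOnly[]>;
}

// Hypothetical in-house model a Zoom- or Cisco-sized vendor might own,
// validated against a broad, representative test set.
declare function runProprietaryModel(frame: ImageBitmap): Promise<DOMRectReadOnly[]>;

// The well-resourced path: plug the in-house tracker into the API.
class VendorTracker implements FaceTracker {
  detect(frame: ImageBitmap): Promise<DOMRectReadOnly[]> {
    return runProprietaryModel(frame);
  }
}

// The lazy path is a free model off GitHub with an unknown test set.
// Nothing in the FaceTracker interface distinguishes the two, so the
// discrimination risk is invisible at the standards layer.
```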
Vim: [00:09:09] Yeah. Again, this is a bigger thing for me. I was having this conversation over the weekend about the way society values instant gratification, an instant solution, whether that's Amazon or a piece of tech or Uber or whatever it is. We don't want to work for anything anymore; we just want other people to do the work for us. So passing on that kind of responsibility, saying someone else needs to sort it out because I've done the bare bones, also makes sense, in that that's how we're starting to view the world.
Tim: [00:09:56] That's true. But on the general instant-gratification point: I totally agree that a lot of things are like that, but there's a chunk of things that are absolutely the opposite. People are interested in long form. People will put a lot of effort into playing a game, or reading a book, or constructing a world, or whatever. There's a whole bunch of, mostly creative, to be fair, forms that people are prepared to put immense amounts of effort into. So it's not like everything has to be instant gratification, but I take your point that a lot of things that shouldn't be, are.
Vim: [00:10:39] Well, I would challenge you on that, because even with the creative forms, there are still instant-gratification versions of them, whether it's a podcast, or an audiobook, or the TV series that's been made out of the book. We're starting to see a shift in the creative outlets for those long-form things.
Tim: [00:11:02] Right, so you don't count Game of Thrones as long form?
Vim: [00:11:07] No, I wouldn't.
Tim: [00:11:09] Or do you think that, because of the way it's constructed, it's more interesting?
Vim: [00:11:15] I think reading the book would be.
Tim: [00:11:18] Yeah, okay. Well, never-ending, given that he hasn't finished it yet. But I know what you're saying, and I still think there are certain things that people really are prepared to put a lot of effort into. When you look at some of my friends who, in lockdown, have gone completely into hydroponics and things like that, the amount of effort they go to to grow four chilies in a pot or whatever is amazing. I think it's lovely, but on the other hand it's certainly not instant gratification.
Vim: [00:11:55] Oh no, don't get me wrong, I think those are the niche cases. But as a whole, as a society, we're still in that instantaneous realm.
Tim: [00:12:09] Right, right. Yeah, that's probably fair.

Vim: [00:12:14] I know, but that's the thing. I think it gets dangerous when we apply that to the things we're building or putting out into the world, because we then tell ourselves we just have to do the minimum to get by, and pass the challenge, or the responsibility for the ethics or standards or applications, on to someone else.
Tim: [00:12:35] Yeah, but on the flip side, standards work is a nonprofit activity. Nobody makes any direct money out of it, and it's hard enough as it is without making it any harder. I guess what I'm saying is I feel faintly guilty about pushing back, because I know it's going to make other people have to do work as well as me. It's an amount of work for me, probably not a huge amount, but some, and it's more work for other people. And the other thing is it's probably going to make me slightly unpopular for delaying things. It's an interesting challenge.
Vim: [00:13:18] It is, but I don't know, I've kind of made my career out of challenging things and being unpopular. As much as it is annoying and it creates work, it's necessary, and we need to stop seeing it as additional work, or as harder to do, because it's the right thing to do. It's the same conversation I have around recruitment and recruiting for diversity. It will take longer to do, and it is harder, but that's the point: the benefits we see are long-term, not short. We need to get used to that; that's how we change things. And particularly with standards bodies, where, as we heard in the standards podcast we did, it's not as open to everyone as it could be. Until we get to a point where there's representation in those groups, it's up to the people already in the community to take on that burden now, to make it better for people in the future.
Tim: [00:14:22] Right, right. I do agree, and it needs doing. I think that's probably the way to look at it: without doing this, it's an unfinished piece of work. So it's not a matter of optionality; it just needs doing, and it's an intrinsic part of the work. Which is why this question of whether it's a separable concern or not is kind of interestingly dangerous.
Vim: [00:14:54] Yeah, a hundred percent. One of the things I've started to do on the product team I'm on is a kind of checklist before we push anything live: have you thought about data security? Have we checked the user need and made sure we've met it? Just really simple things. Have you put in the measurements to prove success six months down the line? All of that stuff just helps us sense-check why we built the thing and why we're pushing it live. And if you haven't got those things checked, should it be going live?
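As a rough illustration, the kind of pre-launch gate Vim describes could be as simple as this; the check names are modeled on the ones she mentions and are not a real framework.

```typescript
type Check = { name: string; passed: boolean };

const releaseChecks: Check[] = [
  { name: 'Data security thought through', passed: true },
  { name: 'User need checked and met', passed: true },
  { name: 'Success measurements defined for six months out', passed: false },
];

function readyToShip(checks: Check[]): boolean {
  // If anything is unchecked, the question answers itself:
  // should it be going live?
  return checks.every((c) => c.passed);
}

if (!readyToShip(releaseChecks)) {
  const blocked = releaseChecks.filter((c) => !c.passed).map((c) => c.name);
  console.warn('Not ready to ship:', blocked.join('; '));
}
```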
Tim: [00:15:30] Right. Interestingly, it didn't crop up with Adam the other week, but some of those checks exist in the standards world. There's already a privacy check and a security check: if you put a standard out, there's a separate group of experts on privacy and a separate group of experts on security who will review it, and it won't go out until it's passed that review. What we haven't seen is an equivalent for, I don't know, something almost as broad as harm.
Hmm. Yeah.
Vim: [00:16:09] Yeah, and, sorry to cut you off there, that's what's really missing for me: harm, slash inclusion, or to get even more specific, just race. There's so much missing from it at the moment. For me it feels like it does the basics. Not basics, because I know how in-depth it is, but it just scratches the surface of what needs to be in it.
Tim: [00:16:41] Yeah. And I think we're starting to see, and this is the subject of the whole podcast basically, that these tech decisions which felt like pure tech, almost mathematics, actually have real consequences in society. The way you build a little app for rating the faces of people in your dorm actually affects society in the end. And if that's not respectful, then the empire you build out of it probably is also not respectful of people's dignity, in the end.
Vim: [00:17:27] Yeah. And one of the things I'm most worried about over the next year, as we start to figure out how we get back to normal, is that we will ignore all of the learning, all of the awakening that society has had to some of these issues.
There's such a huge opportunity now to consciously build these standards in, to take responsibility for what we're putting out there, and to stop shifting that responsibility to someone else. Because it is on us, really. It's on the people who work in the industry now to start having these conversations, because we have to do better.
Tim: [00:18:20] But what aspect of this year makes that special? I'm not disagreeing with you at all; I'm just trying to tease out what it is about this year that makes it now.
Vim: [00:18:32] Well, in particular, the media coverage of the Black Lives Matter movement this year. We've seen Black Lives Matter protests happen every few years for the last six to ten years, but there was something different about it this time that meant it was on everyone's radar, not just people who care. And alongside that, we've had more time to sit and have discussions: the US election, COVID itself and how it's affecting different areas and different people at different times. There's been a huge wake-up call to what life is like for different people, and I think we need to take that learning, in whatever way or guise it's happened, into what we build and what we put out into the world. Because we've seen so much of it; we've seen a whole different perspective on life that we've never witnessed before.
Tim: [00:19:35] Yeah, I think I was thinking you meant something narrower than that: specifically, that in lockdown we've all become much more dependent on tech. In the past, if the video conferencing software didn't work for you, it was a nuisance, but you could probably do your job without it. Now you just can't. So the specific case we're talking about is even more important than it was a year ago; it should have been fixed, or at least on people's radar, a year ago.

But now, when everyone is dependent on video conferencing, everyone is dependent on tech. Even attending a wedding or a funeral is something I've done over video conferencing. The idea that that is discriminatory means maybe going to a wedding doesn't work for you, and that's really deep into society.

So mine is a much narrower point than yours, but from my perspective it means this stuff has to work, has to work for everybody, because as a society we're much more dependent on it. On the plus side, many people who are now coming into this world weren't using it before, and therefore they have expectations that it will work and that not working is unacceptable, which is a positive.
Vim: [00:21:26] Oh yeah, definitely. I actually said to my friend just yesterday: why aren't people more angry that these things don't work for them? Facial recognition technology, or whatever it might be. Why aren't people of color more angry that this is letting them down? And the rebuttal was: because everything lets us down. Everything in society doesn't work for us, and this is just the tip of the iceberg. And tech, to my mind, is the easiest structure to try to reconstruct and make sure we do differently. So we don't want to lose that sense of openness and freedom in technology.
Tim: [00:22:17] I suppose one of the things is that tech is still fairly malleable. It's not set yet, so if we catch it now, it'll probably be fixable. Whereas if we leave it another five years, a lot of the AI structures and whatever are going to be established, and much harder to move. I was thinking, though, that this sort of stuff has been a problem for a while. There's a very funny YouTube video of two Scotsmen trying to get a voice-recognition lift to work, because it's expecting a Californian accent, and if you don't have one, it's not playing. And what reminded me of it is that getting annoyed doesn't help: the voice recognition works even worse if you're angry. So in that narrow instance, getting annoyed with the failure of the tech just digs you in deeper. That's not a general lesson, I hope.
Vim: [00:23:30] There is something of "what do we do about it? What can we do?" that I think is really important.
Tim: [00:23:41] Yeah. So the answer is: stick my neck out.
Vim: [00:23:47] Sorry, people are going to listen to you more than they listen to me.
Tim: [00:23:56] Well, yeah, maybe. But also maybe they'll make me do some work on it, which is the price you pay. If you get up and complain about something, the immediate response is: well then, try and fix it. Given that everyone is, to a greater or lesser extent, donating their time and effort, I think that's a fair response. If I'm demanding something, then the least I can do is, you know.
Vim: [00:24:28] Yeah. And I think the interesting thing for me is how we respond to this type of call-out or question or challenge, because it's not trying to say that the people who haven't thought about it are wrong. It's trying to accept that the way we've been doing things thus far might not consider everyone. So, like you say, you take it on yourself to do a bit of work, or to create work for others, or to play a part in the solution. It's trying to make that normal, so that these responses aren't a critique or criticism; they're an improvement, to make it better.
Tim: [00:25:11] Right, right. I think that's the risk there: you run the risk of becoming conciliatory at that point, and that's not the message I need to convey. Cynically, it needs to be clear enough that it's raised as a bug, and in the end it needs to be clear enough that nobody is prepared to close it as not relevant. Because it's out there, and it's one of those things you effectively can't wind back in. So somebody now has to close this bug: either I do, which I'm not interested in doing, or somebody else has to, and then they're taking the responsibility for that. I could complain and everything, but it'll be interesting to see what happens next.
Vim: [00:26:12] Yeah, you have to let us know.
Tim: [00:26:16] Yeah, indeed. And maybe an interesting conversation comes out of it. You and I have had a conversation, but there's another set of conversations with the standards body. I should hasten to add that it's not the body Adam was talking about; it's a different organization, not directly related to what we were talking about the other week. But maybe there's a conversation to be had with somebody from that organization about how this sort of thing should be treated and what the official line is. That might make for a very interesting three-hatted podcast.
Vim: [00:26:57] Yeah, definitely.
Tim: [00:26:59] I'm not sure who would sign up for it, but yeah.
Vim: [00:27:05] And that's the thing as well: these conversations should be had out in the open. It's about understanding everyone's different perspectives and seeing where the middle ground is.
Tim: [00:27:15] Right, right. Absolutely. And that's the merit of how I raised this particular issue: it's a public issue, in a public, evolving standards space, where everyone would expect to find it. So it's not hidden in any way; it's literally already an open issue. Cool, well, I'll keep you posted on that, and I'll let you know what happens.
Cheers. Take care. Bye.