Tim: This is the distributed future podcast and I'm Tim Panton,
Vim: and I'm Vimla Appadoo
Tim: and this episode is somewhat of a reprise on something. We did a few months back talking about the. Obligations of automated systems in terms of like how they could build in biases and this kind of stuff but this episode is a little different in that it's looking at how you might manage that from a process sense. And also how you might try and fit that into the legal system. So it's quite a kind of eye-opener for me.
I found there's like a whole bunch of things. I hadn't thought about which is always a joy in this in this podcast talking to people who sort of suddenly tell you about things that like in retrospect and not exactly obvious but like inevitable, but you hadn't thought about at all.
Vim: Yeah. So when you say the the kind of legal aspects, what do you mean
Tim: well [00:01:00] so if you're going to if you've got a system that that somehow maybe an automated system or an AI system that somehow is biased against somebody then how do they claim redress how do they even find out in the first place and what would a court case look like?
Like, you know, would you like would you subpoena the log file? Would you like what kind of paperwork would you expect a company to keep in order to defend themselves against that kind of claim and unlike there's a whole bunch of aspects that we've really not kind of thought about I think or maybe I mean the point about this is that the there are a bunch of academics who are starting to think about this in policy terms and and what this means and coming up with nomenclature.
So one of the fun parts of this was like they were just like phrases and Words which. I hadn't thought about it, but are necessary for naming these things just kind of interesting.
Vim: Yeah, I [00:02:00] think the idea of the kind of one thing. I'm particularly interested in is around the academic space in this and the to me, it still feels really slow in terms of how Academia and legislation policy law are catching up with technology.
And I just wanted I don't know how we can start pre-empting the legal requirements that's needed to help keep people safe. But as technology advances
Tim: was so one of the nice things about this interview and I mean I should say that I met Naomi at a science fiction book club. So
Vim: amazing.
Tim: So she's she's a she's a science fiction reader.
And and and so she uses she's saying is that she uses it in her practice of like thinking up theoretical Futures that you would then need to try and understand what the legal consequences of it or what the social consequences [00:03:00] would be so it's kind of I mean, she's at some point she corrects me and said no, we speculative fiction rather than science fiction, but.
You know these possible Futures and Futures you have to you know, worry about in the near term. I think that's an interesting way of having the legislation kind of not trailing way behind the fact mmm.
Vim: Yeah. Yeah, definitely and I really I really like the idea of calling it speculative rather than science fiction because it adds a whole new.
Dimensions of it. I know that with kind of a lot of the Sci-Fi films or books out there governments. Do use it as a way of thinking of future plans as well? So I think I remember hearing a story of the book World War Z?
Tim: Yeah.
Vim: The zombie apocalypse thing and how governments used it as a right if zombies were to take over the world So that's the government [00:04:00] used world wars Z as a way of understanding what their plan of action would be if there were a zombie apocalypse.
So there's definitely. Something really interesting in creating that narrative and speculative future to think of what the planning would be. But I think it's different to have a plan versus the future legislation. It's almost it's almost the same as kind of we have these projected population growths or.
You know we predict what's going to happen. So we plan and legislate and create policy to meet that. But we start we don't do the same for technology.
Tim: Yeah, I mean but we've been famously been wrong about population growth. You know, I actually incredibly I know somebody who
Vim: yeah,
Tim: my first computer experience was somebody who is in the Ministry of Education who was analyzing births in order to to guess school places and amazingly you can actually get that [00:05:00] wrong because you because of immigration and emigration.
It so it's not like we're famously not been like terribly good at predicting that particular number even though it feels like it should be obvious. It's just kind of weird but but I mean in general, I think yes, we should be looking at least to see what the possible Futures are and some of them are sort of heading towards being inevitable when you can and that was the other thing.
That was really. Amusing about this conversation is realizing the lot of the things that we kind of speculating about are actually already happening. I was at a sort of event that's unrelated in theory, which is a telephony developers conference this week or last week now, but at the he we were having a discussion about whether you can tell whether the voice bot that brings you up or that you ring up whether it's a human or not.
And you [00:06:00] can't actually like in general. It's really hard to tell now,
Vim: that's really interesting.
Tim: So apparently even if you ask them and it's only one of the developers is really get even if you ask them "are you human?" they will laugh. Like you get a laugh back.
Vim: Wow,
Tim: so that then they don't have to lie.
Vim: Yeah, that's really interesting one of the so I when I was on a training course over the week and it was about leadership and culture intelligence. And the jump from kind of IQ to emotional intelligence EQ to Cultural Center agents, which they're calling CQ and the emphasis being on Automation and kind of the skills of the future and speaking about how.
Leadership and the roles that we will need in the in the future and work will be the human elements of working. [00:07:00] So being able to understand cultural differences and scenarios and context and recognizing tension in difference rather than the kind of the roles that we have at the moment that can be replaced by
Robots, and I think it's really interesting or Automation. And I think it's really interesting that as technology develops to start mimicking more and more human behavior. Whether those even still space for that,
Tim: right? I mean, yeah, I think some of the jobs that we think are intrinsically human kind of aren't actually particularly where we've so restricted the freedom of those workers so that they become efficient.
Like in those spaces the more restricted the script is in that in that case the easier it is to replace people with machines because like neither of them are allowed to stray off the script and so it makes less difference. [00:08:00] What I do Wonder though is whether we're going to see. Like the flip side of that are actually people paying extra for empathy, you know an empathic human who can actually see what you mean and and and understand and sympathize with your problem or at least empathize with it.
I wonder if we're going to start paying extra for that as a service but
Vim: that even that so if you were to pay for that for a service you could mimic that behavior in in an automated way because the sincerity of it isn't necessarily known to within a conversation.
Tim: I'm not sure. I'm really interested in this isn't well, I'm obviously somewhat interested in the sincerity, but I was actually thinking about like taking action and having the ability to to take action.
We know what is really prescriptive in a lot of these. Descript's is what things that they can do and what [00:09:00] things that the manager can do and what things that the duty manager can do like how far you have to escalate to get the problem actually solved and the ones that are done Well,
Vim: yeah,
Tim: that's reasonably flexible all that's my perception of them from the outside.
It's probably actually nightmare to manage a flexible system. Really? But so yeah, there's some nice nice new words to learn a nice New Concepts to come up with that and and the other thing that we talked about in this which again is one of these things I hadn't really thought through at all was the idea that you have to keep these things up to date.
So it's all very well putting in, you know, a corpus of knowledge into an AI That's based around, you know now but in five years time that may be linguistically out of date. It may be out of date demographically may be out of date politically and and what point like where are the obligations to [00:10:00] keep that up to date?
Vim: Yeah, and obligations to who as well,
Tim: well, yeah to the company to the consumers to society, you know, there's a there's a bunch of things there the way you you know, if you don't then you could almost stop Community or a civilization evolving by saying well you like we're not going to change the way that.
You know we do this even if socially we've decided that politically or we've made a decision that's going to change the social behavior. Like if it's still embedded in all the AIS that change doesn't really make any difference somehow.
Vim: Yeah, exactly,
Tim: which is disconcerting frankly and like, you know, if that actually happens then yeah, we'll be stuck which is depressing.
We we managed to have a slightly more optimistic conversation than then you and I having now,
Vim: but I feel like. I feel like [00:11:00] our conversations always turn quite negative negative quite quickly.
Tim: Oh really? Oh dear. Maybe it's maybe it's doing who should like take do them on a bright sunny days and you know and sunshine and them and have yeah have cheer.
I mean, I you know, I think some of the solutions what I like is that some of the solutions that people are coming up. I like the fact that people are thinking about this now and I like the fact that people are coming up with solutions that are at least plausible and may get put into action. And you know buying decisions are at least care about it somewhat.
Vim: Yeah, I do think there's a really big risk and I know you'll just you'll be in general but I don't think the majority of people are thinking about this. I think it's a very select. I think it's the minority and in the truest sense, I could think of very few people are thinking about it in reality.
Tim: I think you're right. I mean, I think I mean, you know given that. I like some of this came as a surprise to me and I ought to be thinking about this stuff [00:12:00] then yeah for sure. It's like it's a minority of a minority. But like that's generally true of Law isn't it? I mean like, you know, if you look at how the.
Um, what was it that the thing called the human Fertilization and embryology committee? Like they did let me that's actually really interesting case study in in law being put into place before it was needed. They put in a whole set of structures about what you're allowed to do with embryos. Like what kind of fertilization is legal and what isn't what the rules are before
It was technically practical like everyone knew it was gonna happen or not everyone but people in the field knew that it was going to happen, but then the legal framework it's put in place before it was technically doable which I thought was like somehow. I don't know how that happened. I don't know how to be interesting to find out how the politics of that.
Like at [00:13:00] work got done.
Vim: Yeah, I do think there's something interesting around the priority is often on things where we feel we're crossing that boundary into creation space. Whereas things that kind of infringe on our rights aren't seeing quite is important to legislate for
Tim: it's interesting. You think that's a religious different sorts.
Do you think it's emotional?
Vim: I didn't want to get into the religious bit of it. But yeah, I do. I think it's AI think it's driven by I do think it's driven by religious standings here
Tim: to people feeling that they don't want to play God and they don't want other people to either
Vim: exactly
Tim: interesting
Vim: and that makes me to become political and important and draw those boundaries around it.
Whereas I don't know taking. It's advancing things for the benefit of capitalism. So making things more quick and [00:14:00] efficient, even if it means loss of jobs, etc. Etc. Isn't it doesn't cross into that field. It's not as important.
Tim: Yeah. Yeah. Yeah. Yeah. I mean, I I think I think what we have to find is some other way of making it clear why these things matter
before the disaster. I mean we arguably we didn't do that with you know personal data. We left it too late and all of that data is out there. I saw the other day. I saw that forgotten the number but something like what was it something like 23 million Americans who uploaded their DNA samples to some site that tells you about the heritage.
And they've then they then subsequently sold it to an insurance company.
Vim: Geez, that's terrifying.
Tim: Right? Right, right. So, you know wanting to find out whether you were related to the Queen and now means that your insurance premiums will [00:15:00] be different and and that was like not the intended outcome.
I'm sure
Vim: yeah. That makes me feel funny.
Tim: I know it's like just just a bit but we don't I mean harking back to the thing the other day the previous episode we were saying like yeah, everyone needs a hacker friend actually we do need to be more suspicious of this stuff of like, you know, what's what's in it for me and what's in it for them and do these things balance the
Vim: cost benefit like yeah.
I think there will be huge space. In the same way that there's been a big swing in ethical branding recently. So off the top of my head fairphone, I know. Yeah. I know that it's not as. Necessarily Fair as it might seem but compared to other leading Brands and them on the market is and I think there's something really [00:16:00] interesting about the popularity of Alternatives like that.
Even being more expensive people are starting to opt for those Alternatives and I think that I think that that is going to get more and more popular. Things in that space.
Tim: Yeah, I mean I think some of those things are a trade-offs and and and and and you have to kind of decide whether which which of those trade-offs you want to make like I'm constantly having discussions with people who feel that their phones should be like they want to be able to take the battery out of the phone and replace it want to be able to replace the screen in their phone or add memory to their phone.
Like they used to be able to to their PCS and I'm sort of thinking well, you know actually really don't want to do that because mechanically the thing is less strong all the bits Fallout. I mean, you know, if you remember dropping Nokias, but like all the things fell out like you just [00:17:00] had this sort of heap of plastic on the floor.
Vim: Yeah. I remember having to put put a piece of paper behind my SIM card all the times make it stay in place where you just.
Tim: Right exactly. So so the more kind of moving detachable Parts you have have them will likely are they to move and detach and what's more in order to kind of get the strength that you had over more parts like to clip together and and whatever so it's like it some of these kind of decisions are engineering decisions that seem like they're somehow wasteful and maybe they are in the longer term and they're not in the short term then how long do you think of phone lasts?
Like is actually some of these things are quite complicated balances and some of them are much more obvious. Like, you know, are we going to use conflict diamonds in this? Well, no. Okay, that's easy. Where do we source some whatever it is molybdenum know something and that's again, you know this you can really pretty clearly avoid the [00:18:00] Warlords and and the super repressive regimes without that being too difficult.
I think
Vim: yeah. Yeah completely and bike again. I still think it's being conscious of it because we all know blood diamonds in have just run with the blood diamonds thing. We all know that's wrong. But that doesn't mean we do the research if we're going to buy diamonds to see if the jeweler's is got it fairly.
So I think it is it no matter what level you're kind of operating at its being conscious of what you want from that prod like the trade-offs.
Tim: Yeah, I've been I think I mean I think is you saying like if you go into it at least with that question in your mind, then the brands are forced to respond to that and and or not their game to some extent.
So I think that's you know, quite how we get there with with. Well I suppose maybe we are I mean Facebook's changed its color so maybe and they are talking about privacy. I don't know how much of that actual reality um very little yet be interesting to see what [00:19:00] happens there. But but you know the agendas.
moved I suppose at least
Vim: yeah. Yeah, the dial was definitely changed
Tim: and they're also coming back to the accountability thing. They're also now at least trying to appear to be transparent about the source of adverts and the source of why you're getting particular things on your screen.
Vim: Yeah, no, it's true
Tim: not convinced on how effective that's going to be but we'll see.
Vim: I mean, yeah exactly. I just I also just want like genuinely just wonder how much people care about it as well. Like do people just want the want that service more than they care about everything else around it
Tim: and think for the moment they do but once they start understanding what the long-term costs are.
To them. they may they may or may not change their minds but it may be may also be too late, which is I think we're going [00:20:00] to
Vim: change your mind for you.
Tim: Um, well I thinking I think to be fair most of them people agreed to. Like pretty much any use of that data and that's pretty much what's happening. So it's only trying to claw back.
Well, actually, I only really meant to use it for that quiz rather than for like all the you know, all decisions that are made about how to price my no insurance or whatever.
Vim: Yeah,
Tim: so yeah back to back to Gloom again. I've done it again which like we should find a we need to actually the playful stuff.
We don't there have been cheerful topics such as when isn't isn't one yet. I think but yeah, that's right. I think we'll leave it there and you can have a listen to the interview and see see whether you can pull some more cheerful things out of it.
Vim: Yeah, looking forward to it.
Naomi: S o hi. My name is [00:21:00] Naomi Jacobs and I'm a research fellow working at the University of Aberdeen and at the moment one of the projects that I'm working on is a project that's called RAINS which stands for realizing accountable intelligent systems and it's looking at when we've got all these new artificial intelligent machine learning algorithm type systems.
How can we make sure that they're accountable and we understand the. Assesses the has led to that decision making so I've got an interest in artificial intelligence some of the consequences and more generally how new technologies are affecting our lives and that spreads into some of my other work as well.
Tim: So you're looking at accountability in the formal legal sense or in terms of like people knowing what it is that they built for themselves to see what I mean?
Naomi: Yeah, so certainly the legal sense is quite important. And in fact, it's a project that's a collaboration with several universities and one of our partners is a [00:22:00] legal academic Oxford's and she is going to be specifically looking at some of the legal implications and.
The things like if there is an issue or there's auditing what information would lawyers and legal professionals need to be able to get from these systems to be able to make their their legal decisions.
Tim: Oh, so that's your that kind of implies that it's going to be perhaps some kind of traceability requirements that you have to kind of document what your inputs were or document what your process was.
Wow, that's interesting.
Naomi: So the the computer scientist members of the. In my colleagues here Aberdeen, they have expertise in provenance. So things that are recorded as you say about what the inputs were what processes they went through so you can track it back and that's obviously quite a difficult problem with some of these machine learning artificial intelligence systems because they are, you know, people talk about them as being a black box and it's not always straightforward to [00:23:00] understand how those decisions got made.
Tim: Yeah. I mean, I think the actual sort of this wire. In the inputs because in the ones that I played with the very sensitive to what your inputs were and the actual process that it sort of uses to draw. The inferences is is pretty opaque even to the person whose coding it.
Naomi: Yes, and it can make a big difference as you say and when some of the decisions are being used for really critical things, then that can be very important.
I mean some of the examples that promote that inspired this project is things like if these systems are being used to help with legal sentencing for example, seeing how long someone's prison sentence should be then it's making predictions based on the types of the crimes the person's background or other things.
But if there are some aspects of those inputs that are based on perhaps data [00:24:00] that's from a particular segment of the population. It might heavily influence the outcome in a way that actually isn't necessarily fair. So it needs to be accountable of why those decisions were made.
Tim: So do you mean looking forward to DC the expect to see legal challenges against like kind of as you say sentencing or employment decisions based around this?
Naomi: Certainly, I think there's already a lot of discussion because these systems are already starting to be used and people are asking questions and saying wait a minute. Should we be should we be using these should we do we need a lot more accountability for these systems before they start being used everywhere as opposed to just in certain instances as they come into into more popularity.
Tim: If you got an example of where it's they're currently being used.
Naomi: As I said that the sentencing system is certainly one theirs. Algorithmic systems being used in other [00:25:00] aspects of law enforcement in terms of predictive policing so about predicting where police should focus their efforts, but also you mentioned employment that's another big area where these kind of systems are potentially being used but not necessarily successfully.
There's been some recent press coverage of examples where. People try to develop systems for helping sort through candidates for employment and they start to develop these systems and then found actually they were having issues with this this aspect of bias and the the outputs which they were getting had significant problems.
For example, I can't remember exactly off the top of my head the company but if the CV had the word woman anywhere in it, not necessarily a female candidate. but just mention of the woman actually moved it significantly down the prospective hiring list,
Tim: [00:26:00] that's crazy. And and I mean fortunately that one got caught before it kind of hit production.
But but I mean the predictive policing one is isn't there a huge danger of that that self reinforcing that you know the data you get the next batch of data you put in is essentially generated by where you put your effort last quarter or whatever.
Naomi: I mean, it depends on the nature of the algorithms being used to predict that but / certainly those kind of things are things that need to be taken into account When developing these systems
Tim: and I guess I mean that leads me to think about like keeping these systems up to date.
Is there going to be an obligation for you to update your training inputs or we will be stuck with the training inputs that like happen to be the data that happened to be available in 2020 is going to be what we're those are the. To be effective the rules. That's kind of weird.
Naomi: Yeah, and again that comes down to the provenance.
So it might be that if you're if you're investigating one of these systems, one of the things you want [00:27:00] to be able to find out is when was that data from how long since it's been updated or reviewed for its appropriateness? -
Tim: and I don't mean is that going to be talking about a new profession here of like, you know, well, I can't think of a good word for it like the worshipful company of data provenance providers or something, you know, is is this a whole new career path that we're looking at?
Naomi: I mean it's certainly going to be coming into a lot of training that people in these kind of careers need and I mean the other aspect that's you know, a lot of discussion is coming into the moment is the the general data protection regulation of GDPR and there's a lot of new training and. Jobs to people being created around looking into how that's being enforced and how people's privacy is being protected.
So I see this being in a very similar aspect of something that's going to be a much wider part of things that [00:28:00] people do in in when this technology is being used.
Tim: So instead of really being a specialist role you think this is something that like your average project managers should be aware of and be part of their their tasks would be to keep an eye on this because it's in the end.
their responsibility.
Naomi: I think if these if these intelligent systems are going to be put into practice there needs to be someone in that role of examining the accountability and potentially provenance could play a big role in that
Tim: but I'm curious. I mean you probably don't have a definitive answer because we're still very early in this game, but I'm curious to know whether you think that's just another task that should be part of a generalist a generalist managers.
Portfolio of skills or is it something that you're going to end up having as a narrow specialist skill set. So is it like accounting or is it like, you know General project management and I'm [00:29:00] trying to kind of get a sense of where it's going to land in it and I don't have an opinion at it think.
Naomi: And I think it's not necessarily a question that we can answer yet. I think it might actually be a mixture of both and it will depend on this particular situation. So some of what we're trying to do in our project is think about tools that could help with this kind of accountability which might be different if it's to do with a project manager who doesn't have that high skill set, but just wants to be able to keep an eye on it, but then there might be different tools for someone who is as you say in that particular role of auditing.
These systems and isn't it is an expert in them.
Tim: Right? Right, right. And so when you say tools these kind of methodologies or other or are they computer programs or how do you how do you kind of categorize your tools?
Naomi: We might be that the system's themselves that there is aspects of provenance built into it. my aspect of the project is thinking [00:30:00] about you know aspects of policy and governance and regulation that could assist in helping people.
Think about these issues.
Tim: And do you think that that I mean again you it's probably fairly early in the project to kind of make definitive statements, but do you think that that's going to be legal framework or is that going to be a set of professional ethics or is it going to be like I'm in accounting I suppose is a bit of both, isn't it?
So maybe we're in that sort of territory.
Naomi: Yeah, and I think I think part of what we need to do is ask that question. See where should these things happen? And what should they look like?
Tim: And you do have any inkling of where that might end up or not?
Naomi: Ask me again in a few years when we've been working on the project I think because we are, you know, we are in very early days and there's a lot of a lot of these kind of discussions happening now as these Technologies are really being used more often.
Tim: So do you. You potential well as the people who [00:31:00] contributing to this project from from industry and are they genuinely potential consumers of this legislation or and or are they not I can kind of wondering whether the poacher gamekeeper thing applies whether people are actually going to be who are likely to be hit by this other people who are going to contribute to it or whether that's the sort of thing.
They're trying to stay away from.
Naomi: So we are working in this project with we are we are working with Partners to do use cases and look at ways in which these kind of systems are being deployed. And we hope to work with them in various different different spaces to think about the consequences and that isn't just necessarily the examples.
I was mentioning about them in terms of. The the the decision making legal aspects, but it might also be in spaces such as Healthcare all of these systems being used or autonomous vehicles is a very big space now for these kind of intelligent systems that you know are being used to help us out in the [00:32:00] real world.
But then if something goes wrong, we need to be able to understand why so we are working with some of these organizations creating these Technology Solutions to think about it at the very first stage. How do we make. Sure, they're accountable as well
Tim: and they receptive to that or they see it as a necessary evil or how does it was that kind of mindset in that area?
Naomi: I think I think a lot of companies involved in this kind of development are really aware that it's something that needs to be addressed because obviously. There's a lot of negative consequences of not thinking about these things and I think potentially they're aware that the you know, it's better to think about them early on rather than be hit by issues later.
Tim: Okay, so that sounds pretty responsible actually going and I'm just sort of aware of some of the more internet-facing. Companies can be a bit kind of wild west-ish till
Naomi: that's definitely true. And I think some of it already, you know, there's obviously a lot of issues being seen already where things are not [00:33:00] necessarily accountable certainly, you know, you mentioned the internet a lot of publicity and press at the moment around negative aspects of you know, social media and a lack of transparency with regard some of that decision-making in the ways that information is presented.
But that I guess is an example of how Things are going badly, but I think when we're doing a project like this, we need to be thinking about how we can promote best practice as opposed to you know, things that might not be so ideal. I think companies are receptive to that if it's going to benefit them as well.
Tim: So something that crops up for me the other day, which is it was very interesting was somebody was suggesting the idea that you should be as a consumer. You should be told if there if you got a customized price so. And so what was interesting about that and broader in the broader thing is like how do you detect that
There is a system here that you might be [00:34:00] wanting to challenge. How do you how do you you as a consumer or somebody who's maybe been disadvantaged by one of these or automated systems? How would you know
Naomi: and that's a really good question and I think an issue at the moment is a lot of the time you don't necessarily know and one of the things that's being discussed.
This space is maybe maybe that should be part of the of the legislation that says, you know, if if something's being decided for you that can actually negatively impacts you like the like the different prices on a customer website. You should be told that or if for example, you know, you call up a helpline.
And someone answer your questions, but it's not actually a person it's a an automated robot. They're getting better and better at some point. You're not going to be able to to know that that's not a person you're talking to but should it tell you this is a this is an artificial intelligence helpdesk rather than letting you assume it's a person.
So some of the things being discussed is, [00:35:00] you know, as they to let you know that yes there is there is an artificial intelligent involved in this decision.
Tim: It so happened. I hung out with us earlier this week with a bunch of people who build IVRs interactive voice response systems for telephones answering systems and stuff.
And the he we were laughing about how those a lot of those systems are built to be deceptive about whether they are automated but not actually to lie.
Naomi: Yeah
Tim: so so in if they are asked. Are you a human then the reply is something like "there is a human here".
Naomi: Hmm
Tim: or or just to laugh . They don't outright lie.
I'm and say I'm a human. They avoid the question or deflect which is fascinating and there are actually you know, they're at that level already of doing that [00:36:00] and and that that I sort of I kind of knew it would happen, but I haven't really twigged that it was happening if you see what I mean.
Naomi: A lot of people think oh maybe in 5 10 years without realizing it's been here for five years already right one, but those kinds of technologies that seem seem like science fiction.
Tim: Yeah. It's kind of depressing. You know, you have to apply Turing test to the phone call you're making but what me,
Naomi: the phrase has been used for for what I was suggesting is that you have what's called a Turing red flag. So when cars were first on the roads you had to have a man running in front with a red flag to tell people that there was a an automobile coming.
So maybe we need some kind of equivalent for artificial intelligence to say look. This is a this is an AI coming be aware
Tim: right? I mean, although I. Again in the same call center space that was really interesting project, which was done. I don't think I can say who by but where they [00:37:00] Blended that so you had they were basically automated response systems.
And then you had like nine screens that were watched over by a human. And the human would watch whether one of them one of the conversations was going like off track or awry and if so, they'd step in and correct it but they didn't step in by actually joining the call with their voice, but they would just nudge the parameters back of the of the of the the agent software which I thought was like really interesting that so you could be talking to something
That was a blend. Your conversation with something. It was a blend as sort of an augmented human or automated system with some human inputs. They so when it's not even would the red flag be at half mast for that or what you know,
Naomi: the other thing I'm recently been working on. Outside of the the kind of work I'm doing it, you know, [00:38:00] at the University of Aberdeen I've been writing a book about an episode of Doctor Who it's part of the black archive series of analysis of episodes and it's about an episode which shows an Amazon style Warehouse.
And the questions that I've been looking at is about, you know Automation and Ai, and if we have these artificial systems, what does that say for work and jobs and one of the things that came out in that is exactly as you say, it's not going to be necessarily people being replaced by robots and AI but working alongside them and how do you draw the line between how much how much involvement in event of an AI in a system means there is an automated process.
Where is that that blend in the middle and how much how does accountability factor into that when it's a it's a collaboration between people and machines.
Tim: Right? Right. I mean, I think that's that's very unclear to I think everybody in certainly in the call center field. I don't think that's I mean, although actually the fact that they determine that those [00:39:00] systems shouldn't lie is interesting.
Like that's a moral. It's partly a moral judgment, but I think it's also a legal
Naomi: Someone clearly made that decision. That was like maybe using right
Tim: and I think part of that is legal that I think the argument was that if your interaction was plainly false then any subsequent. Could just be wiped out so you
Naomi: There is a difference between telling the truth and being honest.
Tim: Oh, yeah. Yeah, very definitely and I and I don't know whether you can legislate for both of those. Maybe you can. So coming to the you science fiction aspect. To what extent do you think those sorts of stories illuminate like these sorts of dilemmas? Like do you think it's a. A good preparation for reality or and sometimes or at how do you feel about that?
Naomi: I certainly think that science [00:40:00] fiction is very useful in letting us explore aspects of how things can develop before we get there and some of the work I've done involves the use of what's called design fiction. So actually creating scenarios and objects from a kind of potential future and outcome and say, okay.
So this is where the technology could be, you know if it was here in. Place or if it was developed for another 5-10 years and letting people actually interact with it and say, okay. What does this mean for me? I think that's really useful in science fiction and speculative fiction in general is is a really powerful tool for letting us think about things.
Before it's too late before they are as you were as we were talking about already here. I think in the case of the particular episodes that that I was looking at. It is interesting because actually a lot of the technology being shown although it was supposed to be this, you know, future space Factory really wasn't as advanced and developed as things that are in the real world now, so I [00:41:00] think it needs to be far-reaching enough to actually say what are the questions here.
Tim: Right? Right. I mean. Yes, we are, some of the things that we've done already and built are quite surprising and I think sometimes even people who are trying to speculate about the future maybe haven't been like introduced to these systems because some of them are quite well like the whole IVR space is quite I don't want to say covert, but it doesn't, you know people don't talk about.
Bots and what they what software they're writing for their Bots because it's all sort of quite under the cover and and slightly deceptive
Naomi: and the assumption is that oh, well, it's going to be efficient and help people so. Why do people need to know about it? But that's not necessarily true.
Tim: Yeah, Vim and I have you know this recurring thing about convenience versus kind of privacy and Clarity. So yeah that said that's a running running topic on this [00:42:00] podcast. So so yeah, I mean, how do you feel that the like you moderately optimistic that we will get kind of sensible legislation that will keep this on track,
Naomi: I think.
We at least need to try. I think it's really good that people are starting to think about what the lister's legislation should be and how how we govern these systems whether or not it's going to be successful. I think we need to really just wait and see because we've seen in the past that some things have got out of control and and now causing potential issues
Tim: and you feel that the politicians are kind of at least listening to you on this or not.
Naomi: Certainly, there's I mean a lot of political interest in this this area in particular because AI is such a talked about area and is quite high profile. Certainly a lot of attention has been paid by by governments and Policy groups [00:43:00] not just in the UK, but around the world about thinking how to deal with these issues and it's nice to see that a lot of these reports and bills have been put out talk about some of these issues of ethics and accountability and transparency.
So there definitely is a move to try and at least think about some of these questions which is positive.
Tim: Hmm. Yeah, that's encouraging, great. Well, I think that's that's a little bit of an up note to leave it leave it on it could otherwise be a. quite gloomy subject I think.