AI, biases and the future
[00:00:00] Tim: This is the Distributed Futures podcast, and I'm Tim Panton,
Vim: and I'm Vimla Appadoo.
Tim: And this episode is about the future of AI, and specifically about biases in AI. I had a really interesting conversation about the difficulties of building AI models that aren't subconsciously biased. Do you do any big data or AI work at the moment?
Vim: No, not at the moment.
Tim: Are any of your projects going to be looking at it?
Vim: They will be in the future, and some are now, but as you can imagine, in the public sector it's not where most of the investment goes.
Tim: Right, right. Because what comes out of the conversation, to slightly preempt it, is that it's all about the quality of the data you put in to train the model, and the problem with that [00:01:00] is that it's usually historical data, which carries historical bias.
It's like, traditionally women have earned less than men, so your HR AI is going to assume that women should be paid less than men if you feed it historical data. That's really hard to avoid. And of course, if you don't give it enough data, then you get these crazy results.
There are some really nice articles, and I'll put them in the links at the end, but one of them looks at what happens if you feed it too small a sample of data, i.e. only current data. You end up with these weird biases, like "only women lacrosse players get employed by this company", because, as it happened, the company whose data they put in had a lacrosse team.
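To make that concrete, here is a minimal sketch in Python (the data and feature names are entirely hypothetical, and scikit-learn is assumed to be available): a model fitted to a tiny, skewed sample of past hires happily leans on the irrelevant "plays lacrosse" feature.

```python
# Minimal sketch (hypothetical data): with a tiny, skewed sample, the model
# picks up an irrelevant feature that happens to correlate with past hires.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n = 40
plays_lacrosse = np.array([1] * 20 + [0] * 20)    # the office happened to have a lacrosse team
years_experience = rng.integers(1, 15, size=n)    # the feature that should matter
hired = plays_lacrosse.copy()                     # an accident of history, not merit
hired[[3, 27]] = 1 - hired[[3, 27]]               # a couple of exceptions for realism

X = np.column_stack([plays_lacrosse, years_experience])
model = LogisticRegression().fit(X, hired)

# The learned coefficients show the model leaning almost entirely on lacrosse.
print(dict(zip(["plays_lacrosse", "years_experience"], model.coef_[0].round(2))))
```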
Vim: Yeah,
Tim: So big data has historical problems, and if you put in too small a data set you get these totally crazy biases instead. [00:02:00] So it's a really huge issue, and actually pretty hard to solve. It's an interesting space to explore.
I mean, it was a little bit depressing in that neither of us had really great answers to it, apart from "hey, be aware of it and be careful", but I don't really know what one does apart from that.
Vim: How do you test for bias in AI?
Tim: Well, statistically. You can't do it any other way, because it's essentially a statistical system.
I mean, even that's only loosely statistical. In the old days you could do code inspection: you could just look and say, is there a line in there that says "if lacrosse player, then employ"? You could just look through the code and find it. But with big data that's not how it works.
It's all about what's statistically more likely based on the data.
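Since there is no "if lacrosse player, then employ" line to find, the check has to be statistical, as Tim says. A minimal sketch (the predictions below are simulated, not from any real system) of comparing a model's selection rates across groups:

```python
# Minimal sketch (simulated predictions): instead of inspecting code, compare
# the model's outcomes across groups and flag large gaps.
import numpy as np

rng = np.random.default_rng(1)

group = rng.choice(["women", "men"], size=1000)
# Pretend these are a hiring model's yes/no decisions for 1,000 applicants.
predicted_hire = np.where(group == "men",
                          rng.random(1000) < 0.30,   # men selected ~30% of the time
                          rng.random(1000) < 0.18)   # women selected ~18% of the time

rates = {g: predicted_hire[group == g].mean() for g in ["women", "men"]}
for g, rate in rates.items():
    print(f"{g}: selection rate {rate:.1%}")

# A common rule of thumb (the "four-fifths rule"): flag the model if one
# group's selection rate falls below 80% of another group's.
print("disparate impact ratio:", round(rates["women"] / rates["men"], 2))
```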
Vim: So then you don't know the bias until the algorithm's [00:03:00] built?
Tim: That's a risk. I think if you're careful about how you select the data on the way in, and you run proper statistical modelling on that data... I mean, it's almost like the scientific method: we have to make sure that this trial is accurate, so we have to have enough data.
We have to make sure that it doesn't have systemic biases in it, and there's a whole series of scientific methods for doing that in drug trials and so on. So I think we have to start applying those sorts of methods in AI and big data. The other thing we had a fascinating conversation about is: what is AI?
At what point does statistical analysis become AI? Where are the lines, and what do the words mean? Because they're all quite blurred.
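The "scientific method" point can also be made concrete on the input side. A minimal sketch (hypothetical HR records, pandas assumed) of auditing the training data itself before any model sees it:

```python
# Minimal sketch (hypothetical HR records): check representation and
# historical outcomes in the training data before fitting anything.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "M", "M", "M"],
    "promoted": [0,   0,   1,   1,   1,   0,   1,   1,   0,   1],
})

# How well is each group represented at all?
print(df["gender"].value_counts(normalize=True))

# How do historical outcomes differ by group? A large gap here will be
# reproduced faithfully by any model trained on this data.
print(df.groupby("gender")["promoted"].mean())
```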
Vim: Absolutely.
[00:04:00] Tim: Yeah, it's interesting to see how we're depending on this technology to solve a lot of problems, like driverless cars and all of these things, and we don't really understand...
Well, a lot of us don't understand how it actually works.
Vim: Well, I don't, for sure. I guess one of the questions that I have as well is: whose responsibility is it to make sure that it's unbiased?
Tim: Yeah, we had a conversation about that, and about what part regulation plays and what part professional ethics play,
and whether there's a big enough downside for companies who get it wrong. But the way that this gets fixed is if it's really painful when it goes wrong.
Vim: Yeah, as always. But I also wonder if there's any positive side to conscious bias. So, for example, if you do the opposite of what you've described and actually [00:05:00] use data that shows equal pay between men and women,
even though it might be very hand-selected data, to then push that kind of thinking.
Tim: I think you do get that when you build a selection of data like that. When you build a training model and you select data, the way you select that data is always going to have a set of biases, and those biases might, as you say, be there to further the goals of the organisation. Your organisation states that it has certain ethical goals or financial goals, whatever, and the model is then built to try and achieve those things.
So a lot of those things could be positive goals: you could build in a bias towards, say, teamworking. In fact, diversity is the other thing that cropped up that was really interesting, [00:06:00] and it's much easier to catch this if you have a diverse team who will look at it from multiple angles, rather than just the people who are collecting the data.
Vim: Yeah, absolutely, understandably so.
Tim: Yeah. I mean, the moment you've heard it, it's obvious. But it might not occur to you if you're thinking of this as a statistical and mathematical process, which it sort of isn't, actually; it's almost an art, the data selection.
Um, the only time I've really done any of this myself is a little bit of vision processing, which is sort of adjacent to big data. The one project I've done was an attempt to build a drone that would look for a banana.
Vim: Amazing
Tim: It totally failed, actually. Basically you train it by giving it a lot of pictures with bananas and a lot of pictures without bananas, and you tell it which is which, but it got very confused by the carpet in the room.
I packed it in at that point.
Vim: Was it yellow?
[00:07:00] Tim: No, I don't know what it was. I didn't really put enough effort into it; I'm pretty sure I could have fixed it if I'd really tried, but it just went crazy and crashed into the walls instead.
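For what it's worth, the banana experiment can be sketched in a few lines. This is a hypothetical reconstruction (the "banana" and "no_banana" image folders and the use of scikit-learn are assumptions, not Tim's actual code), and it shows where the carpet problem creeps in:

```python
# Minimal sketch (hypothetical image folders): train on labelled pictures with
# and without bananas. With few images and a constant background, the model
# can end up learning the carpet rather than the banana.
from pathlib import Path
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def load_folder(folder, label, size=(64, 64)):
    X, y = [], []
    for path in Path(folder).glob("*.jpg"):
        img = Image.open(path).convert("RGB").resize(size)
        X.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
        y.append(label)
    return X, y

# "banana/" and "no_banana/" are assumed folders of labelled photos.
Xb, yb = load_folder("banana", 1)
Xn, yn = load_folder("no_banana", 0)
X, y = np.array(Xb + Xn), np.array(yb + yn)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
# If every banana photo was taken on the same carpet, this number can look
# great while the model is really just recognising the carpet.
```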
Vim: So that's interesting.
Tim: It was fun, it was a laugh anyway.
Vim: Touching on that, on the diversity issue: one of the things we spoke about in a previous podcast, the one with Ian Forrester, is that there's just not enough data that exists outside of Europe and the Western world. I was using the example of 23andMe. My brother and his girlfriend used it, and for my brother they were only able to pinpoint his DNA to the whole of South Asia.
And that was it. Whereas my brother's girlfriend, who is of European descent, was able to get a much more specific geolocation, [00:08:00] like "oh, there's 20% from this country, 15% from this country". He got "99% Southeast Asian", which is a huge area. So the way we've collected data, and the access people have to feed it into big data, is the problem, like you were saying; that's where a lot of the issues lie.
The data is unequal at the moment.
Tim: Yeah, but I think part of that may be to do with where they got the data from. I wonder whether that data exists but just hasn't been made available to a Silicon Valley company.
Vim: Yeah,
Tim: If you were in the Malaysian Health Ministry, you might actually have much better data on the genetics of your population, and maybe nearby populations, than
23andMe do, because they've never been sold that data. So it's not that it doesn't exist; it's that it's not [00:09:00] necessarily in a source which could be easily imported, either commercially or scientifically. But yes, I think there are huge problems with the availability of applicable data, and I don't know what we do about that apart from
acknowledging it's there.
Vim: Yeah. Do you think any of this falls onto the individual? As in, do you have a personal responsibility to feed into data sets, because it prevents bias, and therefore you're helping eliminate potential bias in future products, services and systems?
Tim: Wow. My first response would be to say no, you never have any obligation to provide data to anybody; you should hold off on that if you can. But that's a very selfish position.
I mean, if you look at how [00:10:00] health has progressed, that's all about pooling data and looking at the statistics. What was the original geolocation thing? It was where they found the source of a typhoid outbreak: they worked out which well it was by looking at who had the disease and where they got their water from, and you can only find that if you can collect that data.
So I guess you're right, there are reasons why it's morally correct to supply your data, though I have some reticence about it personally. But I think the other thing which cropped up in the interview is the obligation to participate in the building of these things. It's not so much
the data itself; you need the data, but you also need people in the teams who are saying, "hey, that's wrong, you've [00:11:00] left out this whole class of people", or "actually, over-65s do drive cars", or whatever it is that isn't in the data.
Vim: Yeah. I don't know what the scope of this would be in AI,
but the way we always conduct research when we're building new products and services is that we use statistical data, by no means big data, just what we'd call data, alongside the qualitative, so that it's rooted in a story. And it gives us a whole-picture sense of what's really going on.
And I wonder if something like that is going to start working its way in.
Tim: I kind of hope so. I mean, I do think, hmm, excuse me, I do think that the AI people have gone off into a bit of an ivory tower and not really learned from best practice in other places. We talked about the scientific method earlier, but what you're saying there is that there's this perfectly well-understood [00:12:00] practice
about how you build a service for large numbers of people, which hasn't really fed into the way AI systems are built. I think that best practice from other adjacent domains needs to be pulled in, and I think you're right, that sort of attitude could help hugely.
Vim: Yeah.
Tim: What do you mean by story? Sort of "Joe gets up in the morning" and so on?
Vim: Yeah, but more the empathetic side. So, take the typhoid outbreak, for example: it's not just that 90,000 people are suffering from a recent outbreak. It's actually, well, 60% of those are at an early stage and can be helped, and these are the symptoms we're looking for, and these are the conditions they need to be in to be able to survive.
You know, it's really putting the heart into it, and giving the understanding of why it's therefore important to collect that data, understand it, and make those decisions.
[00:13:00] Tim: Yeah, and a sense of what the necessary timing is as well.
So maybe we'll leave that there.
Gunay: My name is Gunay Kazimzade, and I'm originally from Azerbaijan. I'm working here in Berlin at the Weizenbaum Institute for the Networked Society, the German Internet Institute. For those who haven't heard about it, it was founded in 2017 and
is coordinated by five main universities in the Berlin area, and we are dealing with technology and its critical impact on society. I'm working in group number 20, which is dealing with artificial intelligence and how it is reflected in our lives and in society.
Tim: Wow. That's a great place to be, and I think it's a hugely important area; it's good that somebody's looking
at it. A lot of the time that's taken for granted, and [00:14:00] people just think it'll sort itself out, which is a mistake. So what's your specific area of interest in that?
Gunay: I'm trying to deal with biases in AI. I'm laughing because it's actually a very hard topic for today's research, because when you are talking about biases in AI you can't be very specific:
defining bias itself in artificial intelligence is a great challenge. I'm trying to deal with that right now, because AI by itself is very interdisciplinary, it touches on critical areas of our lives, and biases in AI are very case-specific. Defining bias in AI is very challenging right now, but there are lots of examples that we can discuss today,
starting with recruitment, the medical [00:15:00] domain, transportation, predictive policing.
Tim: So maybe we first need to talk about what we mean by AI, because it can be a very broad space. It's anything from a slightly statistical algorithm, through machine learning, to natural language processing, and then full-on neural network AI, and presumably there's something beyond that which I'm not really aware of. So there's a huge spectrum there.
For me, from a jobbing computer scientist's viewpoint, the distinguishing thing is how opaque the algorithm is. At one end you can look at the code and deduce by inspection what's going to happen, and at the far end you can't do that; you've no idea.
It's all to do with how you've trained the model. So for me [00:16:00] that spectrum is about predictability. But that's a very reductionist, geeky way of looking at it. It's probably not the way that people outside of it look at it, or presumably how you, as the researcher, see it.
So maybe walk us through how you see the areas, the categories.
Gunay: Yeah. Actually, if we look to history, the term artificial intelligence is not new; it starts from the 1960s, and people were trying to somehow define what artificial intelligence is. But 60 years have passed since that time and we still can't really define what artificial intelligence is.
I'm really not talking about the buzzword that is used today to scare us about robots taking over the world, making decisions and killing all of us. I'm talking [00:17:00] about the artificial intelligence which was really evolving during these 60 years. If we look back to the 90s, artificial intelligence was a totally different thing from what is meant by it today.
There were intelligent systems; the term "intelligent system" was used, and we're still using this term for systems, for example expert systems, that make very high-level mathematical, statistical decisions. But for me, I hopefully have my own definition of AI.
Artificial intelligence is something that is learning from its own experiences and its interactions with the real world, learning by doing. And it's [00:18:00] still a term that is defined differently in different research communities: if you go to, for example, the machine learning community, artificial intelligence is defined totally differently; if you go to the medical domain, it's a totally different thing.
So I hope I've defined what artificial intelligence is for me: it's something that can learn from its own experiences, from its interactions with the real world, and of course can be trained with training sets, with the data that you put in.
Tim: Yeah. I think for me there's a divide between the ones where the training is done once, and then that model is burnt into the device and it's sent off to do its job, and I think a lot of the early voice recognition systems were like that, and then what you're starting to see now, which is voice recognition systems that
[00:19:00] learn from your corrections of their errors. So they learn as you have them; they get better as you use them more. I think that's an interesting change in fairly recent years. And then the other thing that's interesting to draw out is whether the model is
centrally aggregated. Do all of the voice samples that go into Siri or Google Assistant get centralised, so you get one set of corrections, one model? Or is it distributed out to the edge, so each individual device has its own model and can therefore drift apart from the others and, in effect, have different behaviours from the others?
Did you see that difference or am I making that up?
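A minimal sketch (toy numeric data, scikit-learn assumed) of the two patterns Tim contrasts: a model trained once "at the factory" versus one that keeps folding in a user's corrections on the device, and so drifts towards that user:

```python
# Minimal sketch (toy data): train-once versus keep-learning-from-corrections.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)

# "Factory" training, done once before the model ships.
X_factory = rng.normal(size=(200, 5))
y_factory = (X_factory[:, 0] > 0).astype(int)
model = SGDClassifier(random_state=0)
model.partial_fit(X_factory, y_factory, classes=[0, 1])

# On the device: each time this particular user corrects a mistake, the
# correction is folded back in. Each device's model drifts with its own user.
for _ in range(200):
    x_new = rng.normal(size=(1, 5))
    y_user = int(x_new[0, 1] > 0)      # this user's "speech" follows a different rule
    model.partial_fit(x_new, [y_user])

# After enough corrections, the model has drifted towards this user's rule.
X_check = rng.normal(size=(500, 5))
print("agreement with this user:", (model.predict(X_check) == (X_check[:, 1] > 0)).mean())
```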
Gunay: I do see the difference, yes. In my understanding of artificial intelligence, I [00:20:00] think here we have to somehow split machine learning from artificial intelligence, because machine learning is part of artificial intelligence,
but artificial intelligence is not purely machine learning. I think that's what you were talking about.
Tim: Okay, good. So coming back to biases: when we first had the idea of doing this chat, one of the things that I found in the research was a really dreadful example, and I don't know if it was AI or machine learning actually, where a recruitment system was coming up with a
really bad set of choices, which basically judged candidates on whether they played lacrosse or not, rather than their suitability for the job, and the job wasn't lacrosse-playing, it was some other tech job. Did you read that article? And what did you get from it?
Gunay: Yeah, I [00:21:00] read this article, but I'm more focused on gender and racial biases, and I can give you another very interesting example. I think you also read about what happened at Amazon a week or two ago: they developed an AI system which was doing the first screening of applications,
CV screening we can call it. And at some point it had decided to just eliminate women candidates. Amazon was actually interested in using this system, but it turned out this way, and they started to investigate what the reason was for the system to decide,
"okay, I'm eliminating all the women for, let's say, top positions or CTO positions". And [00:22:00] it came out that they were, of course, using historical data, ten-year-old data, and most of the labels that were used in the system were decided by men. So it was very straightforward why the system had decided that women are not good choices for some positions.
That's just one of a million examples where we can see very noticeable gender, and sometimes racial, biases in artificial intelligence, and for me it's actually a very inspirational example of why I have to do this research. Have you heard about this?
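A minimal sketch (four made-up CVs, scikit-learn assumed) of the mechanism Gunay describes: when the labels come from years of male-dominated hiring decisions, a gendered word ends up carrying negative weight even though it says nothing about the job.

```python
# Minimal sketch (made-up CVs): biased historical labels turn a gendered word
# into a negative feature.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "captain of chess club, python developer",
    "led robotics team, java experience",
    "women's chess club captain, python developer",
    "women's coding society, java experience",
]
hired = [1, 1, 0, 0]   # what the historical process did, not what the candidates deserved

vec = TfidfVectorizer()
X = vec.fit_transform(cvs)
clf = LogisticRegression().fit(X, hired)

# Rank terms by learned weight: "women" ends up with a clearly negative weight.
weights = sorted(zip(vec.get_feature_names_out(), clf.coef_[0]), key=lambda t: t[1])
for term, w in weights[:3]:
    print(f"{term}: {w:+.3f}")
```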
Tim: Right. I hadn't heard about that specific example, but I've seen that using historical data to feed a model [00:23:00] basically ingrains all of the past
preconceptions and errors into your future, and it's hard to get out because you can't see it. My absolute favourite of these things, and it's my favourite because it's funny rather than sad or annoying, was a medical one about some kind of wound.
I can't remember exactly what it was, but they had a vision AI that was supposed to identify a specific kind of lesion or something on the skin, and they trained it with a huge corpus of data from the medical textbooks. What they found was that what they'd actually done was train the thing to recognise a ruler, because the severity of the lesion was measured with a ruler in the diagram.
So you'd have a picture of the lesion, and next to it would be a ruler if it was a serious [00:24:00] case where the surgeon wanted to make a point about how large the lesion was. And so what the thing actually started doing was recognising pictures that had rulers in them, because the surgeon had thought they were interesting.
So instead of actually recognising the thing it was supposed to be looking at, it was looking at a sort of subsidiary judgement that somebody else had made in between, and zoning in on that. A lot of the time they turned out to be the same thing in the textbook, but in real life it doesn't help.
It doesn't tell you anything of any use. So it's really easy to fall into that trap and quite hard to get out of it, I think. What are your thoughts about how we get out of that historical data problem?
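A tabular stand-in for the ruler story (entirely synthetic numbers, not the real study): a model trained where a spurious marker co-occurs with the label looks accurate until it is tested on data where that shortcut is broken.

```python
# Minimal sketch (synthetic data): shortcut learning. "ruler" co-occurs with
# the label in training, so the model leans on it instead of the real signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000

# Training data: serious cases (label 1) almost always have a ruler in shot.
serious = rng.integers(0, 2, n)
ruler = np.where(rng.random(n) < 0.95, serious, 1 - serious)   # 95% co-occurrence
lesion_size = serious * 2.0 + rng.normal(0, 2.0, n)            # weak genuine signal
X_train = np.column_stack([ruler, lesion_size])
clf = LogisticRegression().fit(X_train, serious)

# Test data: same disease process, but rulers now appear at random.
serious_t = rng.integers(0, 2, n)
ruler_t = rng.integers(0, 2, n)
lesion_t = serious_t * 2.0 + rng.normal(0, 2.0, n)
X_test = np.column_stack([ruler_t, lesion_t])

print("accuracy on the training data:", round(clf.score(X_train, serious), 2))
print("accuracy once the ruler shortcut is broken:", round(clf.score(X_test, serious_t), 2))
```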
Gunay: That's the question that I'm asking myself right now. I don't have an answer.
But of course there are thousands of de-biasing algorithms right now in the [00:25:00] machine learning community that are used in different big companies to eliminate those biases from the data that is used to train the systems. But I think, yes, the historical data is one part of the big problem; bias in AI, though, is more a problem of society. In order to address this problem,
I think we shouldn't just focus on the algorithms or models that we develop; we have to focus first on society, and on how we are developing those systems with bias, mapping all of the human biases into that data and into those systems. In my understanding,
it's a big interdisciplinary problem, and I think it should be [00:26:00] approached not only from a technical perspective; it should be approached from a social perspective, a legal perspective, an ethical perspective. We cannot just say, okay, this is the solution for this problem, and we will solve it by doing this, this and this. It's a huge problem, and somehow we have to bring it into the public domain, I think, because the first problem is that there is a huge over-trust in technology
nowadays. We are blindly clicking on things that we really don't understand, we are giving away our personal information, sensitive information, and in some cases this trust in technology leads to very critical consequences, for example in aviation, in autonomous driving,
and lots of examples like this. So I think the main problem is [00:27:00] a lack of understanding among citizens, among the public, of the capabilities and limitations of AI, and how it can impact their lives and the lives of future generations. That's my understanding.
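One of the simpler pre-processing de-biasing techniques Gunay alludes to is reweighing (Kamiran and Calders): each training example gets a weight chosen so that group membership and outcome look statistically independent before a model is fitted. A minimal sketch with hypothetical data:

```python
# Minimal sketch (hypothetical data): reweighing. weight = P(group) * P(label)
# / P(group, label), so over-represented combinations are down-weighted and
# rare ones up-weighted.
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "F", "F", "F", "M", "M", "M", "M", "M", "M"],
    "hired":  [0,   0,   0,   1,   1,   1,   1,   0,   1,   1],
})

p_group = df["gender"].value_counts(normalize=True)
p_label = df["hired"].value_counts(normalize=True)
p_joint = df.groupby(["gender", "hired"]).size() / len(df)

df["weight"] = df.apply(
    lambda r: p_group[r["gender"]] * p_label[r["hired"]] / p_joint[(r["gender"], r["hired"])],
    axis=1,
)
print(df.groupby(["gender", "hired"])["weight"].first())
# These weights can then be passed as sample_weight when fitting most
# scikit-learn classifiers.
```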
Tim: I think I'm going to slightly disagree with you. Citizens need to understand this, without doubt, but I think even more the people who are implementing these systems need to; it starts there. Unless you take a little bit of a step back from the thing you're building and think, well, where could I have accidentally built biases into this, or run
some kind of beta test to see whether it's producing sensible results... It's essentially like the scientific method: you actually have to [00:28:00] validate your results against some kind of test set that lets you tell whether you've biased your mechanism or not. I think,
from my perspective, we're being too unscientific about this, which is essentially what you were saying about trust: we trust these things far too much, and we choose not to understand them because it's easier not to. Does that roughly agree with what you're saying?
Gunay: Yes, and I totally agree with what you said just now. I will just add to it. You are talking about the implementers of those systems, but it all starts with education: first of all, correct education on, for example, inclusiveness in AI. And sometimes there really [00:29:00] are unconscious biases in the data, in the systems that are developed.
And when you go, let's say, seven layers deeper into those systems, you realise that it is nothing more than a little human bias that was taken for granted. So yes, I do agree with what you said, but I think the problem is much bigger than we think.
Tim: Yeah, and it's certainly huge if you look at, for example, what has been happening in things like Facebook, where the algorithms have basically been optimising for reaction, and it doesn't matter whether it was a positive reaction or a negative reaction: if it was a click, it was good. That tends to make for a more fractious and disagreeable environment, and the algorithms were optimising what they showed you to try and maximise [00:30:00] that, because that was what the business case said. So I think, in addition to ethics, you
have to think about what the business case is, what you are optimising for in your business case, because in the end most of these AIs are business-driven. You have to look at that, and I think we're not good at that yet. Maybe we haven't had enough disasters yet.
Gunay: Exactly. Or maybe we are just too focused, not we of course, but the implementers are very focused on making money, and not on the critical consequences of the thing they are doing, because everyone is so honoured to be talking about the AI they are implementing in their systems.
Really, if you look, just 20% of them are really doing AI; the others have just started talking about it. But really, [00:31:00] people are very money-oriented. Maybe I'm dreaming about a utopia, but I think that most businesses today should focus on the ethical aspects of using their systems,
because if they don't focus on it, they will really lose a lot in five years, in ten years, in the long term. Today maybe it's not as critical as it will be in ten years, but they will really feel it in their business.
Tim: Yeah. If you look at some of the older professions, things like
civil engineering and chemical engineering, one of the things they do with new graduates, before they let them do anything out in the real world, is show them videos of [00:32:00] horrific failures, like the Tacoma bridge falling down, or chemical plants exploding, so that they get a sense of just how badly wrong things can go, and then they walk through how that happened.
So people understand that a couple of simple mistakes can have really disastrous consequences. I don't think we do that in the software world in the same way yet, but we kind of need to.
Gunay: Exactly. We are humans and we learn from mistakes, of course.
But still, I think we're living in a technological revolution; really, we are very lucky people, and everything is changing very fast. But in order to adapt to those changes we also have to be very fast at making smart decisions.
Tim: So do you have any sense of good news stories, things [00:33:00] where it's been done right, or where it's been caught before it was too late?
Gunay: I would be very happy to hear about use cases like that, but regarding biases, maybe I'm too critical, but I have found more bad news cases so far. I really don't have examples; maybe there are lots of them, but I don't have any where, for example, people were implementing
something that decreased exclusiveness in AI. But of course there are lots of communities, as we are; for example in the US there is an inclusive AI community, an open AI community, who are working towards making systems more transparent, more accountable, more inclusive, and we are trying to work with them.
We're trying to [00:34:00] invite their research fellows, for example, to give talks here, to give us some interesting use cases from the US as well, and not only from the US of course, but we have lots from there. And it's interesting that five years ago they were more focused on that,
but unfortunately, with the new elections, the government has decreased its focus on AI and on making it more transparent and accountable. I don't know why; maybe you know.
Tim: I think that's an interesting question: what is the role of government in this? It's something we've talked about earlier in this podcast with
other people: to what extent should the new technologies be self-governing, and if so, in what form? Or do we need [00:35:00] government legislation to guide us, to stop us from making the worst kinds of mistakes? For me that's still somewhat of an open question.
I mean, if you look at aeronautics, it's a mix: there's a set of behaviours that the pilots' union and whoever else enforces, but there's also a set that are enforced by governments, and it's interesting to wonder how we'll end up in the AI and machine learning space.
Where will we need government intervention, and where can we responsibly do it collectively as a community? I think what you're saying about the fact that there's an open AI community that's specifically focusing on this is really good news, and it's the sort of thing we should be encouraging people to contribute to, and, if nothing else, to listen to, [00:36:00] because what we really need in this is examples of best practice, and those are few and far between as far as I can see.
Gunay: Coming back to the government and its impact: I think they have a direct impact on decision-making regarding implementing all of those systems in cities and communities, and a key point is actually being able to create some bridge between the research community and government, the politicians, because unfortunately in most countries, at that level, there is a lack of information and a lack of understanding of the critical consequences we were talking about just now. And that's why at the political level
there should also be some [00:37:00] awareness of how different technologies can impact people's lives, rather than just blindly starting to implement them. Yesterday we had a researcher from Canada; they were working on the smart city project in Toronto, and he was talking about the challenges of implementing this, because, you know, "smart city",
okay, that's also a buzzword. But when it comes to a real place with real people, where the sensors are really detecting all of your emotions and everything, somehow generating new knowledge about it which is somehow used by third parties, it really becomes very complicated. He was talking about the challenges they were discussing at the political level and at the scientific level, and they are still dealing with it; one year has passed and they're still trying [00:38:00] to
implement some kind of framework they can work within. They still don't know who is in charge of what: is the government in charge of this technological change, or is it the company, because it was Google actually implementing this. So there are lots of questions like this, and I still think that at the political level, at the governmental level,
the whole world is not yet ready to accept all of these new technologies. It should not just be dropped like a bomb; it should be an evolution of technology, not just "okay, here's a smart city and you have to live in it".
Tim: Right, right. It's a shame in a way that I'm doing this interview and not Vimla, because she worked on a really nice smart city project in Manchester, and part of their role in building that smart city was interviewing people
and [00:39:00] having workshops with people about what they wanted from a smart city, what they were prepared to accept in terms of its data use, who got the data, and what they thought the possible benefits would be that would make it worth handing over that data. I didn't actually see the outputs of that,
but I went along to one of those sessions, so I'm aware of what they were doing. It might be interesting for you to talk to her outside this to get a sense of how that worked out. I have to say it wasn't really very artificial intelligence centred; it was much more smart city
in terms of providing information, like how many buses there were and where, that sort of informational level, rather than AI-type decisions. I don't know where it ended up actually, so I'm a bit ignorant on that, but it's an [00:40:00] interesting space, because you end up with a huge amount of information,
potentially, about a city, and you could be designing the fire service based on that data. If it's biased, then you could be making it so that it takes a very long time for a particular community to be serviced by the fire brigade. So these things have huge real consequences somewhat down the line, unfortunately.
Gunay: Exactly exactly.
Tim: So have you had any interaction with politicians on this topic? Is it something you've managed to talk to anybody about yet?
Gunay: I have managed to talk to two representatives of, I'm not really into the governmental structure in Germany, I don't know exactly what it's called,
I think it's the Senate. I had a conversation with two senators [00:41:00] about their understanding of, not even AI, but digitalisation, and how it will impact the future of our society, and I was quite impressed, because they were part of a special group which was elected to make suggestions to the government on digitalisation policymaking. So I'm very positive about these two people;
I'm not aware of others, of course. And I hope that the Weizenbaum Institute will also be some kind of bridge between researchers and the German government, to bring real, valuable research work that will positively impact the lives of German citizens, and not only German citizens but [00:42:00] people all over Europe actually.
Tim: Yeah. I think we might be seeing a divergence between the way Europe treats this and the way other countries, like the US and China, see this. I think we're taking a slightly different path, hopefully a better one; we'll find out in ten years' time.
Gunay: Yes, fingers crossed.
Tim: And do you get a sense of whether that's the case across Europe? I don't know how much you've talked to people from outside Germany.
Obviously Azerbaijan, presumably, but anywhere else?
Gunay: About this specific topic?
Tim: Yeah.
Gunay: I had a chat with my fellows. I did my bachelor's at Moscow State University, so I have lots of connections in Russia, for example. That is a thing, actually, about talking with people from different countries:
[00:43:00] you can learn a lot about their experiences, but it is not fully applicable to yours, because every country has its own data protection policy, for example, and its own policies on digitalisation. It's interesting to learn about their experiences,
to gain something from that and maybe to add something to your research work. But coming back to real use cases: in Europe, in Germany, some of them are not really applicable. The same with the US: in the US autonomous cars are available, you can test them wherever you want, in California mostly. In Germany
it's not the same. The same with IoT devices: [00:44:00] in some places here they are not even allowed to be used. So yes, it's good to talk to different people from different countries, but you still have to focus on the place that you are doing research for.
Tim: Right. I think the place where you have to watch out for that is: if I, as a software developer, download a toolkit, it may come with a set of implicit rules about how it uses data.
And if I download it from Google in the States, it will have a Google mindset, and maybe the instructions, or, I mean, even things like some of them come with pre-built models. Classically in the voice space we always had that problem: you would download a voice recognition model and it
would not recognise you saying "water"; you'd have to say "wadda" and then it would recognise it. So those [00:45:00] sorts of, I mean, that's a bias of sorts. It's not really an AI bias, but you have to watch out for the provenance not only of the data that you add, but also of the original thing you're starting with, because that may have what's effectively a cultural bias already built in,
Gunay: exactly
Tim: which is kind of weird.
So I think what I hear you saying, as a subtext of this, is that if you don't have an open mind and a diverse team, then this is bound to happen. Is that fair, or am I overstating it?
Gunay: Yes, I totally agree with that statement. And I'm not just talking about AI development teams, I'm talking about every single stakeholder that is impacting all of this development, starting with policymakers, decision makers and everyone.
It should be very diverse. And of course the main thing is the data: the data should [00:46:00] not be too homogeneous, and should be diverse enough to actually eliminate biases in the systems.
Tim: Cool. I think that's a great place to stop. Hopefully people will take that advice to heart when they're building their teams and choosing their data.
Thanks so much for the conversation. That was great.
Gunay: Oh, thank you, Tim, for inviting me. It was a very interesting discussion.