[Music]
Tim Panton: This is the Distributed Future podcast, and I'm Tim Panton. The podcast is aimed at helping professionals in the tech world understand new technologies and the societal impact of those technologies. We do this by talking to people who are working in edgy, niche, new areas that we may not know much about, and we talk to them about the societal impact of what they're doing, and we hope to guess or learn what the future might hold for the rest of us based on that. If you enjoy this podcast, I encourage you to tell your friends, like it on your podcast host and all the usual places, and of course subscribe so you won't miss any future episodes. This episode is principally about what's now being called EdTech, although it's not a term I'm particularly fond of. I'd like our guest to introduce herself and we'll go from there.
Jen Persson: Hello Tim, thanks for the invitation. So I'm Jen Persson, founder and director of defenddigitalme. We are a small NGO based in England, and we campaign for safe, fair and transparent data in the education sector. And increasingly, we see the questions around data, data privacy and data protection reaching across the whole of the public sector, but in the UK, commercial companies and their products are obviously very widely used in education. And the interesting things to consider about societal impact, I think, are how some of those technologies are not only shaping children and their education today, but how that might shape their perception or experience of society in the future. And some of that tech is certainly being tested on children now, with surveillance technologies being rolled out more widely in the employment sector. So really glad to join you today.
Tim Panton: So, let's talk a little bit about what you mean by that tech. Some of us have been out of school for quite a long time and our kids are grown up and whatever, so we may not necessarily know what's going on in schools. Can you give us a little bit of background on that?
Jen Persson: Yeah. So, the UK, I think, is probably one of the leading adopters of different types of technologies in schools. And that can be very little day to day: some schools might just use mobile phones, SMS or some messaging system, a one-way technology that replaces the teacher's note home in the school bag.
Tim Panton: Always lost, those never got home, did they? [Jen laughs]
Jen Persson: Exactly, and that goes right through the spectrum to schools that are very, very keen on technology and would actually describe themselves as having tech labs. And so, we mapped a typical 11-year-old's school day last year as part of a report we did, and we looked at about 12 different points in a child's day where they encounter technology. It starts really from the preschool early years, ages two, three and four, where those interactions are with parents or the home guardians, and that can be messaging on the guardian's mobile phone, or, as the child gets older, telling them to bring something into school that day. And then as a child physically moves onto the school premises, it's things like body cams on the school patrol officers as they cross the road, and the CCTV in the playgrounds, corridors and classrooms. We've got systems that integrate everything a child does in school: they are basically the school's information management system, and they host everything from registration to behavior to sickness and absence, through to home-school communications and managing timetables and lesson needs, and that sometimes involves the big technology platforms today. So you're looking at Google, Microsoft, and all-encompassing learning, content and communications management through one platform. And a lot of that is then added to or supplemented through apps. So you'll find that children are logging into various quiz apps, or being tested through a filler at the beginning or end of a lesson by logging into any number of apps, and the schools can choose which they use, whether they're freeware or whether they buy them. Other apps are used to monitor behavior, and sometimes they'll pop up things and make a sound if a child has misbehaved, or pop up a smiley face on the screen if they've done well. And then you're going through to perhaps more emerging technologies such as artificial intelligence being promoted or marketed as part of learning platforms and what's called adaptive or personalized learning. Those kinds of technologies are pretty common in UK schools, although they would be banned in some places or not used at all in others. But one of the more established technologies that's been really particular to England and the UK has been biometrics in schools, and so you'll find it in the majority of secondary schools and a significant proportion of primaries now. Fingerprint readers are used for cashless payment systems at lunch times and/or entry to school buildings, and even sometimes to borrow a library book. So we've got a full spectrum of internet, digital or local server-based technologies being employed, really across the full range of children from preschool through to secondary and then their transition into higher education.
Tim Panton: Gosh, I had a sense that some of those things were there, but I hadn't really grasped the full extent of it. So broadly the categories are: there's a bunch of administrative stuff the school does, there's a bunch of educational and quasi-educational stuff around quizzes, and then there's a bunch of other stuff which seems to be really more almost like classic surveillance. So you're talking about particularly the body cams and things like that, and presumably security cameras also in the schools. Those are classic surveillance security things. So that's a lot of tech that's arrived in relatively few years, I think. I mean, I don't remember any of that. Well, some of it. The admin stuff was there 10 years ago, but I think the rest of it is relatively new. Is that a fair assumption?
Jen Persson: Yeah, I think there's been a certain expansion in the number of providers of some of these technologies. So there's been, I think, pretty rapid adoption of apps, especially quick, cheap freeware, and of course, under COVID and home learning and remote learning, some of these have been adopted even more quickly over the last 18 months than they might have been otherwise. I certainly think the adoption of things like biometrics, facial detection if not yet facial recognition, and RFID chips and tags has been pretty unique to the UK. And I think some of those have been relatively recent, but in terms of recent, we're talking about the last 10 years. What's more recent, I think, is the ability to share data beyond the school gates: all the information about a child is no longer in their records in a locked cabinet in a staff room, it's now on a cloud-based system and can be shared with any number of people at the click of a button. I think that's what's changed most quickly, most recently, the speed and scale of being able to share stuff with an unlimited number of third parties.
Tim Panton: Gosh, yeah. One of the things that is always a little bit of a warning sign for me is when you talk about freeware, and you wonder where the funding for this is coming from, like these free quizzes and whatever. I suppose some people produce them out of the goodness of their hearts, but in general, there's a financial imperative there. Can you talk a little bit about what the motivation is for producing them?
Jen Persson: Education, I think, is a fascinating sector for that question. Because before I had children at secondary school, I had very little idea myself that any of this existed. So I've looked at this both from a professional perspective, but also wearing my hat as a mom. And it's fascinating to see some of the brilliant teachers, really innovative, creative ideas that come from teachers, who then go on to set up their own apps. So they set up something they've perhaps been using literally in their own class or within their school, and then through word of mouth, and especially, of course, encouraged and promoted now through Twitter, other schools want to use it, and they then promote their app basically just through word of mouth. And so some of those small apps that teachers have built themselves are really designed around: what does a child need? What helps them learn? And what do the teachers want to achieve? Others have similar sorts of stories, but from different countries. There's a particular app that was built in the US, and the website advertised it as having been built by two parents in their basement who were keen to promote children's reading. And so the couple invented this app and sold it locally. But I think something that happens to both of those types of startup, if you like, is that as a parent, and often even as a school, you don't really see where the funding comes from; as you rightly say, private equity often starts to get involved later down the line. One of those apps is now heavily invested in by, for example, a private equity company based in the Cayman Islands and is part of a massive conglomerate. But to look at it, you'd think it was a fairly amateur, small app. And others, of course, become part of a promotional tool of big companies like publishers. So you might get something that's really neat, and actually a very good, pedagogically functional app, from, well, one of the big UK children's book publishers; they create nice apps as well. But it's often really hard to tell. You can't identify where this app was developed, how it was tested, how it got quality tested, whether it works, who gets the data out of it, or whether they're selling it on afterwards. A lot of that's quite opaque when teachers start to use them, because it's not the kind of thing they think about. They'll just see it and think, "Is this going to help me in the classroom?"
Tim Panton: And I suppose there's a degree of, I want to say compulsion, but as a student in those environments, you can't say, "No, I don't want to install this app on my phone," because it's part of the lesson, you know? It's mandatory. So that's an interesting, almost unique situation for children, because in most environments, and I suppose it's also true for work, but in most environments you could say, "I'm not installing that, it's spyware or whatever." They don't have that option.
Jen Persson: Yeah, it's a very important point and a very unique aspect of using technology in a classroom. And in fact, it's something we discussed; I did some work over 2019-2020 with the Council of Europe and their committee for Convention 108, which is responsible for data protection and has been ratified in over 55 countries. Together we supported them in writing guidelines for the data protection of children in an educational setting. One of the things they looked at was how this question of consent works. You've got two questions around it. One is: does the child have any choice if they have to be in school? So that's a sort of nonconsensual, disempowered environment where they really have no autonomy, as you described. And the second part is: even in that situation, can they give consent at all in the very niche data protection sense, consent being one of the six lawful bases for processing data? And both really came out the same way: no, there isn't really any clear ability for children to consent to these types of apps, especially when there's peer pressure. It would be embarrassing to be the only one whose parents say they don't want them to use this. Actually, one of the big questions when COVID hit and schools were physically closed was how it highlighted the digital divide. Children in schools that don't provide a device each are having to use their mobile phones. And contrary to popular belief, not every child has a mobile phone, so the children who were asked to bring out their phone and use it for the app and didn't have one were singled out. And of course, as you rightly say, it then brings up all sorts of questions around permissions, security, and being asked to install something on a personal device, which schools would call BYOD, bring your own device. They then impose all sorts of terms and conditions around the use of that, saying, "We want you to bring your own device into school, but on our terms," and that means introducing safeguarding software, what's commonly called in the UK "safety tech", which brings us back to the surveillance we mentioned earlier. This really is surveillance under the guise of safety, because it monitors everything that children do. So that's sometimes both on bring-your-own devices and on school-owned devices.
Tim Panton: So, would that be on mobile phones or just on laptops?
Jen Persson: Safety tech can be on both. It depends very much. There are about 15 companies in the UK that specialize in that area for the education sector, and only some would market themselves as being compatible with personal mobile devices. But they can also be installed on personal laptops or personal iPads, for example. And increasingly, due to austerity and lack of funding, schools have introduced what they call one-to-one purchase schemes, which means schools will strongly suggest to parents that they opt in; nominally they have a choice, but if their children don't take part, they won't have a device. So you can tell from my phrasing here, really, there's no choice. Parents are told to buy a device through the school's purchase scheme, which means the parents basically have to buy, often, iPads, or sometimes Chromebooks or other laptops, on the school's terms and conditions. Either incrementally over time or in a one-off purchase, the parents have paid for the device and, technically, in terms of the school's description, own it; however, it's only going to be used if the parents or the family agree to have this safety tech installed, for example. And that then monitors the child whenever they're connected to the school network; it doesn't matter if they're in school or at home.
Tim Panton: I know that particularly with iPads, you can administer a corporate fleet of iPads centrally. Would those be administered by the school, or would the parents have their own individual control over the children's devices?
Jen Persson: That's the key question, and it's one that we have discussed with a number of the providers of this type of technology, because it effectively gives the school's network, often the ICT administrator and/or the company, access to devices. So, depending again on which software or which service or which system they're using, they have different functionality. But it will always mean that the content of the screen, and it doesn't matter whether it's incoming or outgoing communications, whether it's a platform, an app, or a live webcam being run, anything that the child is basically seeing or hearing through the device, can be monitored and is being so, constantly scanned for keywords. So it looks for what the technology companies call different types of harm. They're looking for contact, conduct, and risk to others. Is your child bullying others or upsetting people in their content and communications? Are they at risk from others, like grooming or being bullied, that sort of thing? Or are they a risk to themselves, so things like self-harm? And a dozen or so of these flagged areas create profiles and/or flags to alert members of staff at the school if a child has fallen into one of these categories of activity. So it's being monitored sometimes by the person responsible for IT as well as the safeguarding lead at a school, but also sometimes by the companies themselves, depending on which company, system or provider it is. Some do active monitoring and will actually say in their marketing, you know, "We saw X content from a girl when she was using a Microsoft Word document offline. We assume it then tripped the flag when she was reconnected to the school network, and we contacted the school and X, Y and Z intervened to do whatever follow-up they had." So it's all sorts of different kinds of invasive, depending on your perspective on these things.
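(To make the mechanism Jen describes a little more concrete: below is a minimal, hypothetical sketch of how keyword-based flagging and profile building of this kind might work. The harm categories, keyword lists, names and thresholds are illustrative assumptions for this sketch only, not any vendor's actual implementation.)

```python
# Minimal illustrative sketch of keyword-based "safety tech" flagging.
# Categories and keyword lists below are hypothetical examples, not a real product's rules.
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical harm categories and trigger keywords (illustrative only).
KEYWORDS = {
    "bullying": {"loser", "nobody likes you"},
    "self_harm": {"hurt myself", "want to disappear"},
    "extremism": {"join the cause"},
}


@dataclass
class Flag:
    pupil_id: str
    category: str
    matched: str
    context: str  # the snippet of on-screen text that tripped the flag


@dataclass
class PupilProfile:
    # Counts of flags per category accumulate over time into a profile.
    counts: dict = field(default_factory=lambda: defaultdict(int))


profiles = defaultdict(PupilProfile)


def scan_screen_text(pupil_id: str, text: str) -> list:
    """Scan a snapshot of on-screen text for category keywords and
    record any matches against the pupil's running profile."""
    flags = []
    lowered = text.lower()
    for category, terms in KEYWORDS.items():
        for term in terms:
            if term in lowered:
                flags.append(Flag(pupil_id, category, term, text[:80]))
                profiles[pupil_id].counts[category] += 1
    return flags


if __name__ == "__main__":
    # In a real system a flag would typically alert the school's safeguarding lead.
    for f in scan_screen_text("pupil-123", "I just want to disappear"):
        print(f"ALERT {f.category}: pupil {f.pupil_id} matched '{f.matched}'")
```

The point the sketch illustrates is the one raised later in the conversation: the labels end up attached to the pupil's accumulating profile, not just to the content that tripped the match.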
Tim Panton: So, what are the children's responses to this? I mean, I can imagine that quite a few of them, if they have the ability, will have effectively burner phones to avoid all this. [Jen laughs]
Jen Persson: Well, there are two questions there, I think. One is, we don't yet know well from schools how common it is for these to be installed on mobile phones. Only some of the technologies promote themselves as suitable for phones versus laptops, iPads and desktop equipment, because some require the installation of a client and some don't, working as soon as you connect to the network. Part of that question is actually one of the reasons that schools, or the marketing around this tech, give for why it has to be installed at all: children, obviously, if they're not connected to the network, if they're just using mobile data, won't be tripping the systems. So depending again on your perspective, whether you see that as a safety concern or as students being over-surveilled, they will not necessarily trip things if they're using their mobile phone but not connected to the school network. It's more a question, I think, of what the children feel and how they act and respond as a result. And there's been some interesting research by a professor at UCL and a professor at Plymouth University, Dr. Sandra Leaton Gray and Andy Phippen, who put out a book in 2018 and did some research on this. And it wasn't surprising, I think, if you work in this sector or have these concerns: the things that you expect are the things that children report, which is that they are concerned about somebody, effectively, digitally looking over their shoulder. It can put them off looking for content that they think will trip the system, whether rightly or wrongly. And that would include things like sexual health or questions around gender and self-identity, the kinds of questions you'd think typical teens might be interested in using internet searches for. And it's concerning, I think, when you're looking at things like child counseling services and confidentiality. We had a discussion with one of the companies in 2018, and they agreed to stop monitoring children's counseling chats, because we had discussed it with youth groups, who had the impression that if children and young people knew they were being monitored, it might put them off having those really sensitive conversations that they need to have with, say, Childline or the equivalent counseling chats. And for some children or young people, that might be the only safe space they've got. If they're having problems at home in any number of areas, they might not be able to go onto a computer, and/or their parents might be tracking their activity at home. And so these children we'd spoken to said they shouldn't be monitored when they're looking at these confidential counseling services. There needs to be trust that they can talk about these problems when they choose to, with whom they choose to, and not have it imposed on them. But it's still, I think, a really under-researched area. There's not been a lot of academic research on it, partly, I think, because parents and children are not well informed, A, that this technology exists, and B, how it's being used. And there's been no public reporting of what kind of profiles it's building up or how many children are being reported to Prevent as a result, for example, under the UK's counterterrorism obligations imposed on schools, where obviously some of these systems flag up that children have looked up words that the systems, or the companies behind them, have determined have to do with terrorism and extremism.
Tim Panton: I was wondering to what extent you feel that these technologies are more invasive or more active than what one would find on a BT home router that's been installed with the family settings turned on. From what I'm hearing, it sounds to me like it's another level in terms of actually watching screens and stuff like that. Would that be fair?
Jen Persson: I think there's a question of what a family expects of a school's obligations and responsibilities. And I think we have developed technologies in ways that have allowed the boundaries of responsibility between teachers and families to blur without good democratic debate about what that means or what it looks like. A child might now be connected to the school network at 10:00 at night doing homework, or their laptop is connected at 10:00 at night but perhaps their brother, uncle or father is actually using it, browsing the internet for something the school would determine is inappropriate for the child, when it's actually not the child using the device.
Tim Panton: Like alcohol sales or whatever.
Jen Persson: All sorts of things you can imagine a Friday night might be used for. Those things are being monitored for in a way that oversteps what you'd think a teacher would do in the classroom. And I think we're at a really interesting point in time where we're hitting the societal questions around teachers being overworked and facing excessive demands; they're finding it hard to switch off between their professional life and their home life, and part of that is because of this blurring of the boundaries between what school is responsible for and what families are responsible for. And the decision making around that has already been taken away from families. A family can decide they want to install a home monitoring and filtering system and switch that on or off on their own internet at home, but parents and children aren't given any choice over this when it's installed or simply enabled through access to the school network. Now, I think the sensible thing to say is that obviously nobody would suggest it's appropriate for children to be accessing inappropriate content through their school network. Just because they've been given a device by the school, or internet access at school, it's perfectly appropriate for the school to filter and block inappropriate content. I certainly wouldn't want my children coming home saying they'd accessed X, Y and Z through the school network. But there's a question, I think, when it comes to this further, enhanced level of monitoring, which has moved from filtering, blocking and monitoring content to monitoring individual users. Instead of labeling content as inappropriate, they're labeling the children with those content labels. So a child could be building up a profile, and not know it, that says they are a potential gang member or a suicide risk or involved in terrorism and extremism. And who decides what those labels are, and what content trips those flags, is completely opaque. So I think that's the difference between the family deciding whether to switch on filtering, monitoring and blocking at home, versus the school doing it and a permanent profile being built up about your child, or about a pupil, that they might not know exists. And that leads on to much bigger societal questions, I think.
Tim Panton: This is fascinating, because it's just like the debate we're having in a much more open way between contextual advertising and profile-based advertising. And that debate is in the open, it is now anyway, and people like Apple are taking a stand against advertising that is based on building up a massive profile, and tending to encourage more contextual advertising. But what I'm hearing from you is that that debate is not even really happening in the education sector. I'm fascinated to see where Apple is in all this. You would think that their persona, their image, would mean that they would be quite nervous of having their devices seen as being surveilled. Just from a sales point of view, they wouldn't want children to associate iPads with always snooping on you. That wouldn't be a marketing line they would be keen on. So I'm curious to know whether you see any of the big tech players pushing back against this.
Jen Persson: It's a really interesting question. We ourselves haven't spoken to those providers, and I think that's something that's on my to-do list; we've basically been researching the safety tech functionality so far. Only recently I spoke to one CEO who said, and I'm paraphrasing, that basically it's harder for them to use this tech on iPads and Apple devices. Now, I know schools that do employ it and that the monitoring happens, but it's, if you like, less granular than some of the others, where they're saying they'll put it on Chromebooks, for example. So this CEO was disgruntled because he felt that schools using his technology with an Apple device were not able to give police the same level of detailed personal information about a user as they might have done if they were using other devices. So I think you've hit the nail on the head. Devices differ in their technical architecture and operate differently, and how they interact with this software, or these services and systems, varies depending on the functionality of both, and I think it's that kind of debate we aren't seeing at all. You're absolutely right.
Tim Panton: And that leads me to wonder, in the context of the advertising question, where is this data going? I mean, we've talked about it being an input to disciplinary action within the schools, or guidance or counseling, or in some cases even out to social services, or in extreme cases, I guess, out to the police for terrorism and other illegalities, so you've got that spread. But is there anywhere else this data goes that you wouldn't obviously expect?
Jen Persson: I think there are two questions there. One is who it goes to, so what their function is and why they are getting that information. And the second is where, in terms of geography. I'll take the second one first, which I think is an interesting one when you consider what the information might say about a person. At least one of these companies is primarily funded by a Bahraini bank, for example, and when you think of how the technology monitors in only two languages, English and Arabic, and you consider that it's being used for monitoring under the Prevent program in schools, I think it puts a slightly different lens on what its main purpose is and who is getting that information. It's obviously something that the companies themselves are very sensitive about. To be fair about each individual technology, I can only emphasize that they're all different, and some would be very upset if they felt you were suggesting that anyone other than the company got the information. But I think the kinds of questions that are not asked, and are completely opaque right now, are what the safeguards and the governance around these companies are when they are in countries outside the UK governance system. If data about activity is being both monitored and collected in the US, for example, because a company will monitor activity 24/7, 365 days a year, and some of the out-of-hours monitoring is done not by UK-based staff but by US staff in a US office, then there's certainly data going abroad. And then you think, well, what does that mean for the oversight of that data? Because if you really want to get into the nitty gritty of surveillance legislation and regimes: Google or Microsoft or other big companies have transparency reporting, and civil society and others can take a look at how many times those companies were asked to hand over data to police or federal authorities in the States. But for these types of companies, we've got no transparency reporting at all. So there's that geographic question, which I think is of potential concern. And the main reason it's a concern is perhaps not so much for the children being flagged up as potential bullies, but it does raise questions like: could it be used by immigration and visa authorities? Could it be used to identify children from different ethnicities and make inferences about their interests? And how valuable might that be to different parties? The big concern we have is this retention of profiles around Prevent, with inferences that children are interested in, or being subject to, terrorism and extremism. The first report we knew of was last summer, where further education authorities had handed over Prevent records about a teen from their colleges to higher education. Another civil society organization, Medact, had researched that, and it was reported in the media in the UK. So we don't yet know how often these digital activities and digital monitoring records are becoming part of permanent school and educational records, and then how they might be handed over between different institutions, or even go with a child as they leave and move about the system. And because we haven't yet done, or seen others do, that research, that's a question we still have open.
Tim Panton: So, tracking back a long way, almost to the beginning of this conversation, it's suddenly dawned on me that something important is: how are the decisions made about selecting and purchasing this software? Who chooses to buy it? Somebody presumably must pay to buy the software. How is that decision-making done? Who authorizes it?
Jen Persson: It's a complicated question in terms of state procurement, because the types of school settings are all different. Basically, we could compare England with Scotland. Scotland still has a pretty centralized, if distributed, system of school responsibility, because local authorities still have overarching responsibility for the schools within their region, and the local authorities in Scotland have, for want of a better description, a traffic light system that approves or rejects apps, for example, for use in the schools in their region. The States does a similar thing. In the US, under FERPA and school privacy laws, each state will determine which technology is appropriate for their schools and their region, and they will negotiate contracts agreed with companies upfront on the terms and conditions that then apply to every school in their region, and that would include things like privacy terms.
Tim Panton: So you've got a level of local government democratic control, to some extent, over those procurement decisions?
Jen Persson: Exactly, that's Scotland and the US. That's completely missing in England. In England, anyone can decide, basically, depending on what their school's policy is. We know of schools where teachers will bring something into school just on their own decision. Obviously, schools would say that's only possible through IT support and would be backed up through the school's decision-making process, and some, obviously, with academies and a different type of structure in the education system, would have a group purchasing process. But there isn't a consistent quality-controlled, safety-controlled, security-vetted procurement system for technology in England, and that's actually something we're calling for, because it needs to change: basically anyone can buy anything, and at the moment, they do.
Tim Panton: Gosh, that's quite unexpected, let's say. [Jen laughs] I hadn't seen that coming at all. So yeah, that again brings me back to the idea of how we improve this. Some of this stuff needs to be there, and although distance learning is going to scale back somewhat once the pandemic is over, I still think this technology is going to be there. What should a technologist who's newly arrived in an EdTech company, imagine they've been writing software for spreadsheets or something and then they turn up in an EdTech company, what should they know? What should they be doing to improve the situation?
Jen Persson: There are various things, I think, that need to happen from companies themselves in terms of awareness in design and development, and there are other things that need to happen from policy, perhaps regulation, oversight, and human decision-making processes. But if companies right now are looking at starting in EdTech, in the UK at least, they need to be aware of the Information Commissioner, the data regulator. The ICO has a code of practice, effectively, called the age-appropriate design code, which has been around for a while but is only being enforced from September this year. The age-appropriate design code has, I think, 12 principles which centre around what is permissible in terms of data processing for information society services, i.e., as defined in a directive, basically apps or platforms or any software that is web-based and asks for information from the user. So there's some interaction between the system and the user, and data collection and data processing going on. And it puts things like the best interests of the child first. Now, some of that isn't necessarily well defined, and I think it's going to take some time for developers and industry to really see what effect the code has. But certainly its intentions are good in terms of enabling access and children's participation, making sure they are able to access web-based platforms, apps and tech, and at the same time that they're not exposed to commercial exploitation, ads, over-processing and over-retention of their data. So it's meant to be an extension of the data protection laws that would apply to them anyway, but more targeted towards children and towards the targeting of them. Basically: don't target children, and be as thoughtful about the best interests of the child in design as possible. The other question is how proactive a company is being in terms of security from the school's perspective, things that might not affect the company that designed the product but will affect the schools. Recently, we saw an example in Kent in England where several schools had a ransomware attack and their systems were offline, so they actually had to send the children home, because it meant they couldn't access the systems to check whether children were in school or absent, and so they were not able to fulfill their safeguarding duties to children. It had a very localized, pretty devastating impact at that point. And I think we haven't yet got a really good, holistic picture of the workings of industry, the workings of the education system, and children's rights as a three-way relationship, one that needs to develop in understanding how technology and systems best serve children and schools, so that we don't find children serving the system and becoming an advertising, tracking, exploited commodity for companies that are less well-intentioned towards them, or foreign parties that might misuse that to access school systems. So we've got, I think, a three-way discussion that needs to happen that's missing at the moment, and if companies and industry are keen to do that, they could do it potentially through things like the British Educational Suppliers Association, BESA, but I don't think they're yet having those types of discussions at the level we need to be having them.
Tim Panton: That's a shame, because it seems like that would be the kind of body that could do this in a reasonably nuanced and intelligent way. They would know what the practical value of this stuff is, but also what is realistic to do. What I did hear from you, which surprised me a little, is the idea that the age-appropriate design code doesn't make a distinction as to whether an app is for education purposes or just generic. I mean, it's about the age of the user and whether you should target them or not, but there's no distinction about how it's delivered, whether it comes through a school or not, whether it's for school purposes. And therefore, as we said at the beginning, the consent question is quite different. Is there anyone moving in that space?
Jen Persson: It's an interesting question, I think, when it comes to the direction of travel of technology in schools, because increasingly things like AI, machine learning, adaptive learning, and programs and apps designed to predict children's mental health and well-being status, all of these emerging technologies, basically have to be used on the basis of consent if and when they're developing the product. So the question of whether a school can make a child use a certain product is twofold. You've got the consent question in terms of whether children have any autonomy over the kinds of things they use. But then, from the data protection processing perspective, it's: does the school have a public task that requires the use of this particular piece of technology? Is this the least invasive way of doing this process? And most often at the moment, the answer is actually no. There are alternatives that schools are not looking at, because it's awkward, of course, to have perhaps only some of the children in a class use a certain product and not others, you know? They haven't yet got the infrastructure to handle that. Say I said to the school, "I understand that you're using this mental health tracking app, but I don't want my child to use it because I think it's hokum and I think you're selling snake oil." In terms of data processing, we should then be going through a process that exists: you get a right to object, the school has to do a balancing test and tell you why they think it's necessary, and then you could follow up through the next stages of that. But as a parent, it's really difficult to do that. You're disempowered, because you don't want to be the problem parent and you don't want to cause a fuss. Your child has to stay at this school, after all, possibly for many years.
So, it's not only a question of this consent process for the child, but also for the parents, the families, and then the products. And I think that's where consent really falls flat in schools, because a number of companies right now are using the data they get from their products being used as the training data for their AI or machine learning. If that falls very much under the same model as Google DeepMind, where, let's go back to, I think, 2016, the ICO ruled that Google DeepMind had misused patient data from the NHS because it had been used without active informed consent for product development, then I think we might hit a lot of the same questions of legality for artificial intelligence and machine learning in schools as they did in the NHS. So the code is a step towards better data protection for children, and it starts to cross the boundaries of the ICO's remit, because defining what's in the best interests of a child really goes beyond the ICO's data protection remit. But I think that's the kind of discussion we've not yet seen, and I think the code, you're right, doesn't differentiate between what type of app or product it is. It's more about who's using it, whether they're a child, for what purposes, and how it's using them.
Tim Panton: This has been, from my perspective, a somewhat depressing [Jen laughs] story. Is there any good news in the offing? Is there anything I can feel more cheerful about? [Jen laughs]
Jen Persson: I'm sorry about that, that wasn't my intention. So I think there are lots of things going on in schools that are interesting in terms of how children are using technology. I think it's really important to always consider every child using something, and where technology may be of most use is for children with learning disabilities, for accessibility. You see the most amazing stories of children who are able to control the screen with eye movement and eye tracking. And I think we've always got to bear in mind that a use of a program which your white, middle-class, able-bodied, privileged, for want of a better description, average child might find to be excessive surveillance, so I perhaps don't want my daughter's eye movements to be tracked by an app because I would find that excessive, might be a completely opposite experience for a child that is using that as their interface with the program. So I think there are a lot of positive stories around technology. I think a lot has been learned during COVID and home learning which enabled better home-school communications than would otherwise have been possible. But of course, it also exacerbated this divide of who's got access, who's got broadband, who's got equipment and so on, and who hasn't.
So, I think what's positive is that we're seeing more discussion around that; COVID, I think, sped up the discussion about the digital divide. I saw today, in fact, another report out calling for every child in UK schools to be given a device to enable equity of access. Some disagree with that; they say there should be a reduced role for technology in schools. But I think the reality of life is that there is a huge advantage for children who have the skills to participate fully in society, with access to, and an informed ability to understand, how the world works, and that includes things like the basics of algorithmic decision making and being able to understand things like last summer's exams process better [Tim and Jen laugh] and those sorts of things. So I think if we're getting a debate around that, that's a good thing. I'd like to see more of it, and I'd like to see wiser choices being made around what technology is used in schools and why. And I think there's more guidance and information coming out for schools, but to really see positive developments, we would need better, more cohesive planning and policy making from government to enable things like quality control, safety, security, consistency, and better teacher training right from the beginning, so that we're getting a more positive use of all of the technologies that work, and can filter out and not procure those that don't.
Tim Panton: Okay, that's at least a positive-looking direction, even if we haven't necessarily travelled quite far enough down that route yet. I think that's a great place to leave it, actually, on a slightly positive note at least. Thank you so much for doing this, I really do appreciate it. And if you have any links that you think might help people understand the issues more, or might contribute to practice in the area, send them over and I'll drop them into the show notes. It's been an eye opener for me. [Jen laughs]