Tim Panton: This is The Distributed Future Podcast, and I'm Tim Panton. This podcast is aimed at helping you understand what the future might look like by talking with people who are working on creating it. They're working in small areas that will affect how society and technology interact in the future, and that's hopefully a way of helping you understand what that future will look like. If you enjoy this, we encourage you to subscribe, tell your friends, all the usual things. We don't make any money out of this podcast; it's done entirely for our own amusement and hopefully for your education. This episode is essentially about cryptography, but really also about its impact on society. That's what we wanted to explore, and I'm going to ask our guest to introduce themselves.
Sofía Celi: Yes. Hi. Thank you so much for having me here. It's really amazing to have this type of podcast that actually looks into the future of technologies. My name is Sofía Celi, and I'm a cryptographer currently working at Brave. In the past I have worked at other companies, and I have also worked in nonprofit organizations, trying to deliver open source software that preserves the privacy and security of communications to the world. Thank you very much for having me.
Tim Panton: I think that I want to start with just the explosion of cryptography. I'm old enough to remember when cryptography was basically illegal. You couldn't export strong cryptography without a license, and here you are creating open source strong cryptography for, I don't know what the user base of Brave is, but hundreds of millions presumably. That's a huge change. Do you feel a kind of responsibility for that?
Sofía Celi: Yes, it's a very interesting responsibility. Nowadays, cryptography mostly isn't restricted by licenses or patents. Not that those don't exist, there are still some forms of cryptography that have them, but the majority is available for people to take and implement as they see fit in their systems. What that has created is a much bigger array of systems that now rely on cryptography to actually provide the security and privacy of communications. The reason we didn't integrate cryptography into these systems from the beginning is that at the start the internet was thought of more as this kind of little network that was mostly used by academics, or sometimes the government. But with this whole explosion that is now the internet, in which everybody basically uses it, there are many more considerations of what kind of privacy and security implications there are when using these systems. And not only from a technical perspective: we all have certain human rights that are preserved in the communications we have nowadays, and they should also be preserved in our digital communications.
Tim Panton: So do you feel that the cryptography has got ahead of the law or do you think the law is ahead of the internet? I mean, in terms of what privacy rights we have in different countries, what's the ordering there? Or do we have more legal right than we're actually getting or do you think the technology is ahead of law? I'm not expressing that very well, but hopefully you understand what I mean.
Sofía Celi: Yes. It depends on the specific topic. The internet was such a disruptive technology that at the beginning there was actually no law to regulate the different types of communications. There is even the famous case in which the music of different musicians was put online to download, and that was later prohibited because it was considered a copyright violation. First there was the technology, then people realized there was some kind of abuse enabled by it, and then came the law in a specific region. That has happened mostly with copyright and security laws, where the technology came first and the law arrived later to try to either protect security or reduce the abuse of whatever is happening with the technology. In terms of privacy it has been a little different. What we have seen is that the internet has allowed a far bigger amount of surveillance, because the communications were either happening in the clear, or, if they were encrypted, a lot of the metadata and associated communication material was still subject to computation and analysis. With that, you can actually go against the privacy of the users. That is something that only recently certain regions of the world have decided to take into account in their laws. But for a while, it was a big playing field in which no one was regulating the privacy of users. This is interesting because maybe there were no regional laws, but there were interregional rules, for example the human rights declaration, which indeed says that privacy is a human right. But there was the assumption among users that privacy meant non-digital communications.
There was also the misunderstanding of how, when we use the internet, privacy violations also happen there, and that it should be properly regulated to be more in line with the human rights declaration.
Tim Panton: I'm kind of interested you're talking about the internet as a separate communication method, I mean, at the risk of kind of talking about history somewhat, but how does that differ from telephones? Because we've had some understanding of what the rules were about telephone tapping or we thought we did, maybe we didn't actually, but we thought we had that. But you think the internet behavior is different.
Sofía Celi: Yeah. For example, the majority of the communication that happened over the telephone was not so casual. The amount of communication that we now put over the internet is much bigger than what we used to have over the telephone. In the early ages of the telephone, you would only make calls, and only a specific set of people had access to a telephone at all; it was considered, for example, mostly for emergency cases. Then it exploded into every household having a telephone, and it became a major technology. But still, it was a kind of communication you would use for calling in emergencies, or sometimes for long conversations with someone. It wasn't, for example, used for buying something, which is also a kind of communication that you have with a shop: you want to buy these items, you communicate that, and they send the items back to you. That's also a communication process, and there was not much of that over the telephone. There were some ways of doing it, but not much. Also, for example, communications with the bank: we do bank transactions over the internet too. While that is indeed possible over the telephone, it happens to a much smaller degree than on the internet. I think nowadays almost no one actually goes to the bank to do transactions in person; almost everybody uses the browser or some other system to access their bank, communicate with the bank, and make those transactions. Those are all forms of communication.
But what mobile devices have also given us is that we can really easily chat with anybody, instead of having to go to a place where there is a telephone, for example your house, or a specific place on the street where you know there's going to be a telephone booth. Instead of having to search for that, you use your mobile phone, and mostly to chat. Which is really interesting, because even though these are telephones, we use them much more for informal chats over messaging applications than to actually call people or even send SMS; we use WhatsApp more. Maybe because people feel it's more immediate, and maybe also because you send this communication in a much more informal way. When you're talking over a telephone, you expect an immediate answer. With chat-based applications, you send a message immediately, because it's immediately on your mind, and the person on the other side of the communication can respond much later. So in that way, internet communications are very similar to the telephone, but also very similar to postal communication, just in a much more immediate manner.
Tim Panton: Yeah. So I think we are using it so much more and for so much more that even if the same rules applied, it would sort of be more vulnerable to abuse, I suppose, in the sense that there's just so much more data about us going through these devices than there ever was before. Do you think progress is being made in terms of securing that data? Are we going backwards or do you think the future is kind of reasonably bright in terms of stronger cryptography?
Sofía Celi: In a way we have made a lot of advances. Nowadays, a lot of the communication that we have is protected through TLS; communications that we have over the browser, specifically, are most of the time secured with TLS. There are other forms of encryption for chat-based applications: first there was OTR, then Signal, and now major chat providers have also integrated that kind of cryptography. So that's definitely good. But one of the biggest playing fields nowadays is either data that you can access at the client side, or metadata that is associated with the communication. There's also the risk that, for example, TLS is intercepted, or that at the server level the data of the communications is available in the clear, and certain computations can be done on the communication itself. So while we have secured the communications so that we know we are indeed talking to the right entity, for example, we are assured because of TLS that we are talking with the correct bank when we put the URL into the browser, at the same time someone who can intercept the TLS connection can indeed see or read the traffic. That's one of the reasons there has been this rise of end-to-end secure communications, in which, even if you know you're talking to the correct URL and the correct domain, you also know that the data remains encrypted up until the user, who is the only one able to decrypt it. This is great, but currently there are attempts to pass certain kinds of legislation that seem to be threatening this at some level.
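That "you are talking to the correct bank" guarantee comes from certificate verification during the TLS handshake. A minimal sketch with Python's standard library, purely as illustration: the connection itself is only sketched as a function (the host name is a placeholder), and the assertions just show that the default configuration refuses unverified peers.

```python
import socket
import ssl

def open_verified_tls(host: str, port: int = 443) -> ssl.SSLSocket:
    """Connect with certificate-chain and hostname verification enabled."""
    ctx = ssl.create_default_context()  # system CA roots, secure defaults
    sock = socket.create_connection((host, port))
    # wrap_socket raises ssl.SSLCertVerificationError on a bad certificate
    return ctx.wrap_socket(sock, server_hostname=host)

# The default context authenticates the server before any data flows:
ctx = ssl.create_default_context()
assert ctx.check_hostname                    # certificate must name the host
assert ctx.verify_mode == ssl.CERT_REQUIRED  # unauthenticated peers rejected
```

Note this only authenticates the endpoint: a proxy that terminates TLS (with a locally trusted certificate) still sees the plaintext, which is exactly the interception risk described above.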
For example, there are proposals for legislation that would allow scanning traffic for child-safety or terrorism matters. There are also proposals under which you would have to store the metadata of conversations for a much longer time, and store many more fields of metadata than is currently needed, again to check for child safety or terrorism. All of this goes against the idea of having these end-to-end systems that are encrypted and are secure and private for the user. So right now there's a debate about how we could know that the traffic does not contain certain types of abuse, for example that there's no discussion of things against child safety or about terrorism, while at the same time preserving the security and privacy of the users. But there are ways to approach this in cryptography. There is technology such as fully homomorphic encryption, the idea of using cryptographic techniques to do computations, for example to check specific policies, on encrypted data itself. Then you would not need to decrypt the data in order to see if the traffic passes a specific policy; rather, you would be able to evaluate these policies on the encrypted data itself.
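As a toy illustration of "computing on encrypted data", here is a minimal sketch of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, without ever decrypting. This is a textbook scheme with tiny demo primes for readability, not the lattice-based FHE mentioned above, and certainly not production code.

```python
import math
import secrets

# Tiny demo primes; real deployments use primes of 1024+ bits.
p, q = 17, 19
n = p * q
n2 = n * n
g = n + 1                           # standard choice of generator
lam = math.lcm(p - 1, q - 1)        # Carmichael function of n
mu = pow(lam, -1, n)                # modular inverse of lam mod n

def encrypt(m: int) -> int:
    """Encrypt m (0 <= m < n) with fresh randomness r."""
    while True:
        r = secrets.randbelow(n)
        if r > 0 and math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    # L(x) = (x - 1) // n recovers the plaintext after exponentiation by lam
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic property: Enc(a) * Enc(b) decrypts to a + b (mod n).
c = encrypt(5) * encrypt(7) % n2
assert decrypt(c) == 12
```

A server holding only ciphertexts could, for instance, total up encrypted survey answers this way and hand the single aggregate ciphertext back to whoever holds the key.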
Tim Panton: So you're going to have to walk me through that in some more depth. Let me just see what I did understand from that, and then maybe you could put me right. So you are suggesting that on the horizon there are technologies that would allow you to inspect say a WhatsApp message to see whether it violated some standards without necessarily being able to see inside it. So you could run it through some filter which said, "Yeah, this message talks about terrorism," but it doesn't necessarily tell you what it is. You can't read the content, you just know that it has the... What kind of confidence level would you get for that? Because I mean, the risk with those sorts of filtering systems is the false positives.
Sofía Celi: Yes, precisely. To give you a quick example, something that has been proposed is that in certain specific institutions, let's say only schools, you sometimes want to check what kind of traffic people are trying to access, because you want to preserve child safety and you don't want people to access certain domains, certain websites, that go against it. So for this, you actually intercept and monitor the traffic. What you can do in this case is use zero-knowledge proofs to actually ask, "Does this traffic pass all of the policies that have been set up for the system?" The validation is just a boolean that says true or false: if it's true, the traffic complies with the policies and can be allowed through, and if it's false, it gets blocked. Now, the problem with this is false positives. In the specific case I just described it's relatively easy, because you know the domain names, the URLs, that have historically been labeled as going against child safety, and you can have a block list of all of these URLs. But in the case of chat-based applications it's much more difficult, because how are you going to enforce these policies? If people use a specific keyword, maybe you flag the message as something against child safety; or if they send a URL to a specific website that you know goes against child safety, maybe that could be flagged. But those things are much more difficult to pinpoint. The fact that this kind of technology exists shouldn't be the basis for saying that we should simply allow this kind of monitoring of traffic.
Rather, if we are going to create legislation that allows this kind of computation over encrypted data, we should really clarify how it is going to work. Not at the technology level, because at the cryptographic, algorithmic level we know how it works and that it's possible, but rather which specific policies and which specific things we will be looking at in the encrypted data to actually label it as going against child safety.
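To make the block-list case concrete, the sketch below checks a domain against a hashed block list, so the checker stores only digests rather than plaintext URLs. This is a deliberate simplification, not a zero-knowledge proof: the checker here still sees the queried domain, whereas in the proposals discussed above a ZK proof would let a client prove "my destination passes the policy" without revealing the destination at all. The domain names are invented.

```python
import hashlib

def digest(domain: str) -> str:
    """Canonicalize a domain and hash it, so the list stores no plaintext."""
    return hashlib.sha256(domain.strip().lower().encode()).hexdigest()

# Hypothetical blocked domains; a real list would come from a curator.
BLOCKLIST = {digest(d) for d in ["blocked.example", "also-blocked.example"]}

def policy_allows(domain: str) -> bool:
    """True if the domain passes the policy (is not on the block list)."""
    return digest(domain) not in BLOCKLIST

assert policy_allows("example.com")
assert not policy_allows("blocked.example")
```

The boolean returned here plays the role of the true/false policy verdict in the example, which is exactly the part a ZK proof would attest to without disclosure.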
Tim Panton: So that's really interesting, because it reminds me of this sort of slow progress that in particular I think Apple are driving, of apps revealing what it is that they're going to capture about you. So I mean, there's a push back, of course, but there's a push there in terms of an app having to explicitly state that, yes, it collects your images or whatever it is and the specific aspects of the system that it will access and what the privacy risks are and what they will retain. So I guess that's similar in that what you're saying here is that policy should be auditable, it should be something that is out in the open. Do you have a realistic expectation that that might happen?
Sofía Celi: I hope that happens, but I don't want it to be rushed, and that's what I feel is currently happening. People have now realized that this cryptographic technology for doing computations on encrypted data exists, and they think that alone is a solution and should be enough. But it actually has to come with careful consideration of how we are going to properly determine these policies. I hope that takes a while to determine, with a lot of academic studies to properly assess it. That would be my hope, instead of just pushing something through because you think the technology is fine since you are only dealing with encrypted data.
Tim Panton: So now you're treading into, I think, is the heart of the podcast really, which is about the fact that there is an intersection between society and technology. It isn't enough to know that you have a technological solution to a problem, you have to understand how that integrates with society, and society has to understand and accept that that's what you're doing. That interplay is there, and I think as technologists, we quite often feel like it would be convenient to skip that step. I don't know how we train people not to, I mean, the podcast in a way is an effort to do that. In general, what's your experience there? Do you find that people like to try and skip that or that people take it seriously?
Sofía Celi: Yeah, that's one of the things I wish there were more studies on, at least in my community. Sometimes what is more important is how users even understand privacy, or how they understand security: what mental models do they have for these? More usability studies on the matter is something I would definitely like to see, and how it intersects with the sociological or cultural context of whoever is using the technology. I'm Latin American, for example, and one of the things we have sometimes debated as part of the Latin American community is whether the notion of privacy currently pushed in the majority of legislation in other regions of the world is the same notion of privacy that we have culturally in Latin America. Do we understand privacy the same way, or are some things for us not considered so private? There's an anthropological intersection here: how different cultures even understand privacy or security. Sometimes what happens is that we take a technology and push it onto everybody as if it's going to fit every culture on the planet, while in most cases this is certainly not true. This hasn't only happened with cryptography; it also happens with artificial intelligence. For example, we train all of these models, and then we realize that we didn't take into account how these systems are ingrained with deep patterns of racism and discrimination, because we didn't take into account how other people and other cultures use the technology.
Tim Panton: Yeah. I came across an interesting example of that when we were looking at privacy for baby monitors. What we found is that there's a category of privacy which is kind of not well represented, which is "acceptable within the family". Particularly in families with an Asian background, we found there's a very strong concept of secrets that are family secrets and that shouldn't go outside a quite well-defined family group. The kind of American-based technology just doesn't have that. There isn't a token for that. It was a really interesting experience. The reason we tripped over this with the baby monitor is that it's a fairly representative case, because the people you want to be able to see your baby sleeping are pretty much the family group and almost nobody else. So we stumbled over that concept, and it's really, really interesting, because I don't see a WhatsApp group that's... I mean, I suppose people build WhatsApp groups like that, but it's not kind of intrinsic to the technology somehow.
Sofía Celi: Yeah. Something further that I would like more people to take into account: nowadays there's a lot of conversation about passwords, whether we still need passwords, and what kind of new mechanisms there are to replace them. In the general security model, passwords are really insecure, because most of the time they're easy to guess, and a lot of people reuse the same passwords across different services, so passwords are a big security risk. Some people have been thinking about replacing passwords altogether with, for example, YubiKeys or some other kind of external device that can be used for authentication. While that is great, it may not fit everybody. First, because many communities in the global south will not have access to this external device, because it's not available in their market or because it's too costly for them to acquire, and forcing them, forcing the whole world, to buy this technology might not be great. Or, in another case, some other parts of the community, maybe older generations, are more accustomed to the idea of knowing your password and knowing that it's with you, rather than having to use an external device. That's where I would like to see more consideration of how different people actually use passwords, and, if we are going to change them to some other kind of technology, how that would actually work across different cultures and age groups.
Tim Panton: Yeah. I mean, it's funny, kind of part of the origin story of the podcast is actually about keys, because I was helping a friend and I met somebody else, and I said, as you do, "Well, what do you do?" He said, "Well, I'm a modern locksmith." I said, "Well, what's a modern locksmith?" And he explained that in his view the traditional metal key was going to die out as a security mechanism, because you can 3D print them. The moment you can take a picture of it, you can 3D print it. This was maybe five or six years ago. I said, "Well, what about electronic locks?" He said, "Well, if they're radio-based, then they're probably interceptable as well, and so they're insecure." So I said, "Well, what's left then?" He said, "Well, the only thing that works is actually a physical key with contacts that have to physically make an electrical connection in the lock, and then you have a reasonable level of security." I exaggerate that it was the only thing, but he basically said that that was where he thought a lot of future access systems were going to go. I realized that he had just told me what the future looked like five years out, which is kind of why we do the podcast: if you ask the right questions of people who are working in the field, they'll tell you what the future looks like. So that thing about keys, and to an extent passwords, is really interesting for me. I mean, we are playing with a notion, an older notion that predates passwords, that ownership is the thing. You should be able to control things you own. So it's a matter of having a device or a service recognize that you own it. But in the modern world, we don't own much, we rent a lot of things, so that model doesn't really work so well, unfortunately. It does for some things, like physical devices, but for services it kind of doesn't work anymore.
Sofía Celi: Yeah. The idea of passwords is that you have something that you know and that you own. The problems with that are, of course, that you can forget it, which often happens, and also that a representation of this thing that people know usually has to be stored on a server, in a database, and that is often done insecurely. Nowadays people are also thinking more about the concept of owning. The most built-in thing that people own is their fingerprints, or any fingerprint-like mechanism, any kind of biometrics. But the problem with biometrics, not that I'm completely defending passwords, is this: with a password, nobody can really attest that you are the only one who knows it, while a fingerprint or other biometric creates a unique identifier for you forever. If you use this at a regional or governmental level, then you have all of these biometrics from people stored forever, and they could be stolen, or they could be used to build a big surveillance apparatus over these people in different places.
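On storing that "representation of what people know" less insecurely: the usual approach is to keep only a salted, deliberately slow hash of the password, never the password itself. A minimal sketch with Python's standard library; the iteration count is illustrative, and current guidance tends to favor memory-hard functions such as scrypt or Argon2 with properly tuned parameters.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; tune for your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); store both, never the password."""
    salt = os.urandom(16)  # unique per user, defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

The per-user salt and the slow key-derivation function are what make a leaked database expensive to attack, which is precisely the "stored insecurely" failure mode described above.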
Tim Panton: Yeah. It's shockingly unforgiving, using a fingerprint as a kind of password, because if you were to change your identity, be in a witness protection program, or simply want to forget an old life and move on, you can't, because you can't change your fingerprint. Well, we're not going to solve this problem now, but it's interesting to see what the possibilities are. One thing I did want to ask you about, and I didn't see it in your bio, so maybe you'll tell me this isn't something you want to talk about much, is the idea of multiple signers. We've talked about me proving that I'm me by knowing something or having my finger or whatever. But I think there's some cryptography emerging in terms of "four out of five people agree" or "six out of nine" or something, so kind of majority voting, but in, if I understood it correctly, a semi-anonymous way, so you don't know which of the people voted which way. Is that right, and is that actually something that is deployable?
Sofía Celi: Yes, that's precisely right. This is the idea of actually dividing a secret among different participants, which can definitely work. Right now this is not applied specifically to password-based mechanisms, but there is a proposal to standardize one of these systems, and the aim of that standardization process is not passwords but private measurements. For example, when you are in the browser or in whatever system, most of the time you want to send measurements to a server of how people are using your system, for example, that users in this specific country always seem to use this specific feature. But sending that in the clear is obviously a privacy violation, because most of the time these measurements are associated with an IP address. So right now, at the IETF level, they're trying to standardize a system called Prio. What you do there is this: let's say you have a survey, and the survey asks whether you use this system every day, and you answer yes or no. Those are the two options, and let's say you have said yes. You interpret this yes as a number: a one. And instead of sending this private input, your true answer to the question, in the clear, you come up with other numbers. For example, you come up with minus 15 and 16, and if you add those up, you have your one. So instead of sending the one to a server, you send the minus 15 to one server and the 16 to another server, and when they do the computation, the aggregation, they figure out that the answer was a one. But neither server can pinpoint that you specifically answered a one, because what one server got was a minus 15 and the other got a 16.
So that's a way of dividing an answer among different servers, so that no single one knows the input, but the aggregation of their shares gives you the correct answer. That is something that can also be used. These systems are much more efficient nowadays: in the past, these ideas existed mostly from a theoretical perspective, but they were too inefficient to be used. Now they're actually efficient, so this IETF standardization for private measurements may be one of the first efforts pushing this kind of technology further.
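The splitting trick in the example can be sketched in a few lines: each client additively shares its 0/1 answer between two servers, each server sums only the shares it receives, and combining the two partial sums reveals the aggregate. This is a toy illustration of the share-and-aggregate idea behind Prio; the real protocol adds zero-knowledge validity proofs so a client can't submit, say, 1000 instead of 0 or 1, and its privacy depends on the two servers not colluding.

```python
import secrets

MOD = 2**61 - 1  # arithmetic is done modulo a large prime

def share(answer: int) -> tuple[int, int]:
    """Split a 0/1 answer into two random-looking additive shares."""
    s1 = secrets.randbelow(MOD)
    s2 = (answer - s1) % MOD
    return s1, s2  # each share alone reveals nothing about the answer

# Five hypothetical clients answer the yes/no question: 1 = yes.
answers = [1, 0, 1, 1, 0]
shares = [share(a) for a in answers]

# Each server only ever sees its own column of shares.
server1_total = sum(s1 for s1, _ in shares) % MOD
server2_total = sum(s2 for _, s2 in shares) % MOD

# Combining just the two totals reveals the aggregate, not who said yes.
assert (server1_total + server2_total) % MOD == 3
```

The "minus 15 and 16" in the transcript is exactly one run of `share(1)`, just with small numbers instead of random field elements.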
Tim Panton: That's quite exciting. Who's involved in that? Who's behind that effort? Is it the people who want to collect the data, or the civil society privacy people, or both sides working together?
Sofía Celi: It's both sides. The idea is that obviously the majority of companies behind the systems that we use do want measurements of how you use their system. But on the other hand, civil society wants to preserve the privacy of those measurements. So it's a collaboration between both. Hopefully it turns into a really good system.
Tim Panton: That's pretty exciting, actually. You mentioned efficiency, and that reminded me of something else I wanted to talk about which is mentioned in your bio, which is quantum. Now, I've kind of got two questions here, one of which is a really basic one, which is: how long until that starts to impact us? Then the other one is: is the impact going to be equal? Will we get back to the point where only very rich people can do strong cryptography?
Sofía Celi: The first question, how long, is the million-dollar question that a lot of people ask. Recently there was a survey by Mosca, a researcher in Waterloo, who asked a lot of quantum computer scientists when they thought quantum computers would actually arrive. The answers really varied: they were saying 30 to 50 years, maybe a little less, but around that period is what people think. The reason is that quantum computers are very powerful, but because the idea of quantum theory is that when you observe an event you collapse it into one of the specific possibilities, they can be interfered with by the different kinds of noise that exist in the world, because the world itself follows the logic of quantum theory. That's one of the reasons we don't have a quantum computer now. And even if a quantum computer is realized that is indeed powerful enough to do the computations we want, that doesn't mean people are going to be carrying quantum computers around in their pockets. What will probably happen is that certain research centers and maybe certain governments will have these quantum computers for a while before the general public is able to use them. Maybe it will be the same as what happened with computers, where in the early days only research institutions had these gigantic machines, and when modern hardware technology came along, it made it possible for the general public to have computers. Right now there is no such hardware path for quantum computers, so if it happens, it will take really long. The second thing is the impact. Quantum computers are really great for search problems, for example; they will make search problems much faster.
They will allow humanity to mimic nature in a much more realistic way, because if quantum computers follow the logic of quantum theory and the world follows the logic of quantum theory, then the world is easier to represent on a quantum computer. Not that it's impossible on the computers we use nowadays, but those computers are limited in the resources they can access, and therefore, from a time perspective, it's impossible to mimic all of the experiments from quantum theory that we want to do. That's why quantum computers are great. But at the same time, because they are so powerful, they also break a big chunk of the modern cryptography that we use. They specifically break any cryptography that is based on the discrete logarithm or the factorization problem, meaning any RSA- or Diffie-Hellman-based system is basically broken by a quantum computer. But not all of the cryptography that we use nowadays is based on those two problems: hashes are not based on them, and the majority of fully homomorphic encryption, which uses lattices, is not based on them either, so not everything is going to be ruined. Right now the National Institute of Standards and Technology of the United States is running an ongoing process to select the winners, the new cryptography that is going to replace what is broken by a quantum computer. So there's some hope for the future. But the reason people are also so concerned about quantum computing is that someone can be storing, right now, all of the traffic that happens over the internet or whatever system, and when a quantum computer is realized, it will be able to decrypt not only future traffic but also any past traffic.
So if an adversary recorded all of the traffic of the history of the world, I don't know how, but maybe, then a quantum computer will be able to decrypt all of that past traffic. That's why people are very concerned and are trying to push for these new protections against quantum computing: it means that even the traffic we send nowadays is threatened by a quantum computer, even though quantum computers don't exist yet.
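The way a quantum computer breaks factoring-based cryptography like RSA can be illustrated classically: Shor's algorithm only needs the quantum machine to find the period of a^x mod N quickly, and the rest is ordinary arithmetic. Here is a toy sketch (function names are my own) where a brute-force loop stands in for the quantum period-finding step:

```python
from math import gcd

def find_period(a, N):
    """Brute-force the order r of a modulo N: the smallest r > 0 with
    a^r = 1 (mod N). This is the step a quantum computer does
    exponentially faster; classically it is infeasible for large N."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor(N, a):
    """Shor-style classical post-processing: from the period r of a mod N,
    recover a nontrivial factor via gcd(a^(r/2) - 1, N)."""
    r = find_period(a, N)
    if r % 2 == 1:
        return None  # odd period: retry with a different a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None  # trivial square root: retry with a different a
    return gcd(y - 1, N)

print(factor(15, 7))  # period of 7 mod 15 is 4, and this prints the factor 3
```

Because RSA's security rests entirely on factoring being slow, fast period finding collapses the whole scheme; the same quantum subroutine also solves discrete logarithms.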
Tim Panton: So it's kind of threatened in the future?
Sofía Celi: Yeah, it's threatened in the future, but the future will be able to decrypt the past.
Tim Panton: Right, interesting. How confident are we about which algorithms will be more or less immune to that? You said, well, RSA is probably broken. I know broken is a relative term, but are we reasonably confident that we know what quantum computers will be good at or is that still speculative?
Sofía Celi: We know at least that they break the mathematical foundations of RSA and [unintelligible 35.53]. Yes, we have high confidence there: there is an algorithm, Shor's algorithm, that is for sure able to break those, and that seems very feasible. I think some of the experiments run by the big companies working on quantum computers have shown that it's indeed possible. But maybe there will be other kinds of quantum attacks against other types of cryptography, or maybe there will even be attacks on this new kind of cryptography that is supposed to be safe against quantum computers. It's still a young research area, so we don't know if there will be more quantum attacks. There has been a lot of discussion. One of the mathematical foundations proposed nowadays as safe against a quantum computer is basically this idea: if you take a point and you want to find the closest lattice point to it in many dimensions, that's a very difficult problem to solve even for a quantum computer. But there has been recent research suggesting that maybe that's not true, that maybe it's easier for a quantum computer than we thought. So there's still a lot of debate about which mathematical problems are going to be safe against a quantum computer and which are not. We have high assurance that some of the problems are going to be safe; code-based cryptography, for sure, seems to be safe against a quantum computer, but other fields still need to be debated. It's difficult to tell because it's a young area at the moment.
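The lattice problem Sofía describes, finding the closest lattice point to a target, can be brute-forced in tiny dimensions; a minimal sketch (names and the toy basis are my own) that shows why the search space explodes as dimensions grow, which is the hardness lattice-based post-quantum schemes rely on:

```python
import itertools

def closest_lattice_point(basis, target, bound=5):
    """Brute-force the closest vector problem (CVP): try every integer
    combination of the basis vectors with coefficients in [-bound, bound]
    and keep the lattice point nearest the target. The candidate count is
    (2*bound + 1) ** dim, so this approach collapses in high dimensions."""
    dim = len(basis)
    best, best_dist = None, float("inf")
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=dim):
        point = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(dim)]
        dist = sum((p - t) ** 2 for p, t in zip(point, target))
        if dist < best_dist:
            best, best_dist = point, dist
    return best

basis = [[2, 0], [1, 3]]   # two basis vectors spanning a 2-D lattice
target = [4.6, 2.8]
print(closest_lattice_point(basis, target))  # -> [5, 3]
```

In two dimensions this finishes instantly; at the hundreds of dimensions used by real lattice schemes, no known classical or quantum algorithm does much better than this exponential search.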
Tim Panton: Right. So the headline summary of that is that anything you encrypt now is probably going to be broken within 30 years, by corporations if not by governments.
Sofía Celi: Yeah. We don't know if there are governments building quantum computers, there could be, but that is not so much publicly known.
Tim Panton: I'm just assuming. I mean, you just said the first ones are going to be super expensive, so the richer organizations will get them first. Something I was interested in talking about is: do you feel there's going to be an imbalance? At the moment, we're in quite a democratic position in terms of access to encryption. Because of open source cryptographic algorithms, we're all in pretty much the same boat in terms of what we can do; me and IBM probably use the same algorithms. But is that not going to be true in the future, or am I exaggerating the risk?
Sofía Celi: Yeah. So all of these algorithms that are safe against quantum computers don't need to run on a quantum computer to be safe, so that's fine. But there is an imbalance, and it is definitely true that only certain places are going to have access to a quantum computer. The fact that you can mimic nature on a quantum computer means that maybe you can better simulate how molecules work, and that means maybe you will have more advances in the field of medicine or in creating different batteries, things like this, and that will only be available to research centers that have the financial means to have a quantum computer.
Tim Panton: Okay. How do you feel that works out across different populations, nations, and community groups? I mean, one of the things we haven't talked about, and probably should, is the distribution of cryptography to people who really need it. I don't need cryptography much; I'm not threatened by anybody really, and I don't really need it except to protect my bank account from criminals. But there are people who are dissidents, or have abusive partners, or are living under a repressive regime, who really need cryptography. I'm kind of concerned that the push we've made, which has got quite a long way, might fade without anyone worrying too much.
Sofía Celi: So one of the big problems with quantum computers is also that we will have to migrate all of the systems we use nowadays to post-quantum cryptography; cryptography that is safe against quantum attacks is called post-quantum cryptography. We will have to migrate everything, and that will be very difficult. For example, migrating the whole PKI, the public key infrastructure, to post-quantum cryptography is going to be very expensive and is going to be quite a coordination process, and all of the other systems will have to migrate together. If this happens at the organization level, perhaps that would be great; if it has to happen at a regional level, I don't know if that work will happen. Or what may also happen is that the National Institute of Standards and Technology standardizes specific algorithms to be integrated into systems, but other regions decide to standardize different cryptographic algorithms, and then how are all of these algorithms going to interoperate with each other? Or some regions might decide not to standardize anything at all and not use post-quantum cryptography, for whatever reason. Then there are going to be problems with interoperability between different systems.
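One migration strategy widely discussed for exactly this transition period is hybrid key exchange: combine a classical shared secret with a post-quantum one, so a session stays safe as long as either assumption holds. A minimal sketch, assuming stand-in byte strings for the two negotiated secrets; real protocols derive the key with a proper KDF such as HKDF and bind in transcript context:

```python
import hashlib
import secrets

def hybrid_shared_secret(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Derive one session key from both secrets. An attacker must break
    BOTH the classical exchange and the post-quantum KEM to recover it,
    which hedges against weaknesses in the newer, less-studied schemes."""
    return hashlib.sha256(classical_secret + pq_secret).digest()

# Hypothetical stand-ins for the outputs of the two real key exchanges:
classical = secrets.token_bytes(32)  # e.g. from an X25519 exchange
pq = secrets.token_bytes(32)         # e.g. from a post-quantum KEM
session_key = hybrid_shared_secret(classical, pq)
print(len(session_key))  # 32-byte session key
```

This is why the "record now, decrypt later" threat pushes deployments toward hybrids today rather than waiting for the post-quantum algorithms alone to be fully trusted.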
Tim Panton: So again, we come back to the sort of people like the IETF who hopefully can set a lead there. I noticed something that made me laugh, and I now understand why, because at the time it made me laugh without my really understanding why they were doing it. In the QUIC specification, they built it so that you can change the cryptography really easily. What was the word they had? Unossified, or something like that; it wasn't quite that. There was a phrase they were using, saying they wanted to make sure they could move to a new algorithm quickly and efficiently. I guess this is one of the reasons.
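The crypto-agility idea Tim is describing, negotiating algorithms by identifier and dispatching through a registry rather than hard-coding one primitive, can be sketched like this; the registry and names are illustrative, not taken from the QUIC specification:

```python
import hashlib

# Registry of supported algorithms. Adding a future post-quantum-friendly
# entry means registering it here, without touching any calling code.
HASHES = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,
}

def digest(alg_id: str, data: bytes) -> bytes:
    """Dispatch on the negotiated algorithm identifier."""
    try:
        return HASHES[alg_id](data).digest()
    except KeyError:
        raise ValueError(f"unsupported algorithm: {alg_id}")

def negotiate(ours, theirs):
    """Pick the first mutually supported algorithm, in our preference order,
    the way a handshake would."""
    for alg in ours:
        if alg in theirs:
            return alg
    raise ValueError("no common algorithm")

alg = negotiate(["sha3_256", "sha256"], ["sha256", "sha3_256"])
print(alg)  # -> sha3_256, our top preference that the peer also supports
```

Designing the wire format so the identifier travels with the data is what lets a deployed protocol swap in a post-quantum algorithm later without a flag day.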
Sofía Celi: Yes. On systems where it's really easy to switch the algorithms, migration will be fine, but there are kinds of systems that will not be easy to migrate. For example, there are a lot of voting cards and different identity cards which have the cryptographic algorithms burned in at the hardware level; those probably are not going to be able to be migrated. You will probably have to revoke them all and issue new ones.
Tim Panton: So things like passports will be--
Sofía Celi: Yeah, if there's a passport or identity card or something like that that is digital and has a cryptographic algorithm, and you now want one that is post-quantum, you will have to revoke them and reissue.
Tim Panton: Wow. I mean, I suppose these things age anyway, and then you do replace them. But yeah, it sounds like a massive effort that we're going to have to get into. Maybe we should be starting to think now about how we are going to do that; but presumably, hopefully, people are doing that.
Sofía Celi: Yeah. So there are some efforts right now to think about what the actual operational challenges are. Of course, there are a lot of challenges on the theory side, whether something is indeed safe against a quantum computer and then trying to build an algorithm on top of that. But it's also important to think about the operational challenges: how we're going to expire all of these TLS certificates and change their keys to post-quantum cryptographic keys; how we're going to deal with the fact that all of these post-quantum algorithms have bigger [unintelligible 43.32] computational times in their operations, which may slow down the networks we are accustomed to being super fast; how we're going to deal with all of that. Those are the more operational questions that the community has to address.
Tim Panton: Coming back to one of the things we were talking about earlier in terms of the distribution of cryptography, I was reading a piece about the necessity of people who don't need cryptography adopting it, so that the people who do need it become invisible because they're part of the mass adoption. Otherwise, they in effect flag themselves as targets. I notice you've looked at the human rights aspects; have you done any work in that area?
Sofía Celi: Yeah. I have done a little bit of research, and I have also helped people who have been suffering from digital gender-based violence and digital domestic violence, because these are adversaries we most of the time don't think about from a threat-modeling perspective. In situations of domestic violence or gender-based violence, you have a perpetrator, an abuser, who most of the time has complete access to the systems of the person they are abusing. They have access to their phone, they have access to their laptops, because they force this person to give them access. And then nothing actually protects the victim, because they are also forced to hand over their passwords, or their YubiKeys or whatever external authentication device they have. Most of the time people are forced to, because they're living in an abusive situation. Those are kinds of adversaries we sometimes don't think about when we design systems or algorithms, because we think, "Oh, that's so niche, that rarely happens in practice." Well, that's not true: the numbers are horrible; the amount of cases of domestic violence and abusive relationships is very, very big. And that's something that should also be thought about when we create a threat model for all of these different systems: an adversary that has complete access to the system. There is some technology that could help against this. For example, there are operating systems such as Tails or Subgraph that boot a live system on your computer, and once you turn it off, everything is completely and immediately removed. Then even if an attacker has access to your computer, they will not have access to anything, because everything gets removed immediately from the hardware itself.
So that could be one thing. I do wish that we would take more into account the different groups that face a bigger amount of surveillance than we generally think of.
Tim Panton: I suppose the trick there is not to leave any trail that the abuser can see; what they don't know about, they can't get out of you, if you see what I mean. Interesting. How do you think is best to get that specific message to designers of general-purpose systems? Because presumably this works best if the feature is already built in. If it was already in, I don't know, WhatsApp, that would be brilliant, because it's a perfectly normal app that everyone has on their phone, and therefore the feature would be available to anyone who needed it. But how do you think we can go about getting it in? I suppose Brave is working in that direction maybe--
Sofía Celi: So one of the things that happens is that only certain kinds of applications have these services. For example, Signal and Wire do have disappearing messages, which could be great in the case of someone seizing your device, because even then they cannot see the messages; the messages have already disappeared. I don't think WhatsApp has this, but I'm not sure; it would be great if it did. One of the reasons why I think we generally don't consider these other abuses of systems is that, most of the time, these situations disproportionately happen to genders that are not male. Unfortunately, the majority of people who develop the technologies we use nowadays are male, so there's a lack of awareness that these situations even exist, because the industry has a big imbalance of the genders working in technology. So getting to a more diverse workforce in technology could be one thing. The other thing is that we are trying to standardize, at the IETF level, a document of considerations for situations of gender-based violence and abuse. That could at least set in stone a document of things that people should take into consideration.
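The disappearing-messages feature Sofía mentions can be sketched as a store where every message carries an expiry and reads filter out anything past it. This is a minimal illustration with invented names, not how Signal or Wire actually implement it; real apps also securely wipe the underlying storage rather than just dropping references:

```python
import time

class EphemeralStore:
    """A toy disappearing-messages store: each message expires after its
    time-to-live, so a device seized later reveals nothing old."""

    def __init__(self):
        self._messages = []  # list of (text, expiry_timestamp)

    def add(self, text: str, ttl_seconds: float, now=None):
        now = time.time() if now is None else now
        self._messages.append((text, now + ttl_seconds))

    def read(self, now=None):
        """Return live messages and purge expired ones in the same pass."""
        now = time.time() if now is None else now
        self._messages = [(t, exp) for t, exp in self._messages if exp > now]
        return [t for t, _ in self._messages]

store = EphemeralStore()
store.add("meet at noon", ttl_seconds=10, now=0)
store.add("old note", ttl_seconds=1, now=0)
print(store.read(now=5))  # -> ['meet at noon']; the expired note is gone
```

The design point is that deletion happens by policy on every read, so the user never has to remember to clean up, which matters precisely in the coerced-access scenarios described above.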
Tim Panton: So I mean, I'm speaking kind of on behalf of Vim here, who I think at this point would be saying much more about the general value of a diverse workforce, that in general you get better results. But we had a conversation, it must be nearly two, maybe three years ago, with [unintelligible 48.54] about infosec in general, and one of the things she was saying was that there are just very few women in that environment because it's so toxic. I didn't want to dig into that in great depth, but I did want to ask you whether you have found particular environments where that's less true. Are there pockets of good practice?
Sofía Celi: I do agree that it's very, very toxic, and it's very unfriendly to anybody who's not male, and also sometimes to anybody who's not white; that's definitely true. There are communities that we have created, at least as women, to talk about cryptography; in the cryptography area we have had to create these communities and push for that ourselves. Likewise, because I'm Latin American, we have also created a Latin American group that tries to push for more diversity, and in that way we help each other. There are other areas of research in which the imbalance is not as bad; for example, usability or design research sometimes has a more balanced workforce. I originally studied literature and communication, and that was definitely much more balanced at my university than any of the STEM parts of the university. So those are the areas where it is better. Why? Sometimes it's the effort of people actually hiring or searching for people beyond the male pool. It's well known, for example, at least in cryptography, that the reason we have a lot of Latin American cryptographers, specifically from Brazil, is the enormous effort of two or three Brazilian cryptographers who decided it should not keep being the case that we have no Latin American cryptographers. They returned to their home countries, searched for people who would be actively interested in cryptography, and helped introduce them to other universities and other professors. That eventually created a bigger community, but it was an active effort. Something I do not like about this, of course, is that most of the time, when there's an imbalance of diversity in the technology community, the work of solving that imbalance is pushed onto the diverse people themselves. I hope that will stop being the case, but so far it is.
Tim Panton: I think there's some evidence that corporations in particular are seeing that having a diverse board of directors, for example, is more profitable, so it's slowly getting through at that level. But in STEM, and in one of the areas I work in, Voice over IP, that message simply hasn't got through. It's just not there, and it's dispiriting. What I would say, though, is that medicine was like that 25 years ago and it isn't now; that change has happened. I've never really understood it, I've never studied it, but there must be academic papers on how they achieved that in medicine. So I kind of hope that maybe we can do it in STEM, but hope isn't enough; effort has to be made. Efforts are being made by some people, but it's slow, slow progress. I wanted to ask you a very future-looking, five-to-ten-years-out question just to finish with: what excites you about what you think is on the horizon in five to ten years' time? What do you think you'll be either working on or working with that is just wonderful?
Sofía Celi: So I really like the fact that nowadays privacy on the internet, and in digital communication in general, is talked about much more than it was in the past. Nowadays we hear a lot of big companies thinking about whether, when they do measurements, they should be more private, or whether the amount of metadata they send over the connection should be more restricted. That didn't happen a decade ago, so I'm really, really happy that it's starting to happen now. It tells me that, from a cryptographic perspective, all of the theoretical constructions we have can finally land in real-world applications, and we can see whether they're efficient enough, and that pushes the field even further. I definitely love that. I also really love post-quantum cryptography, mainly because the mathematical constructions it is based on are very fascinating. That's more my mathematical perspective: I really love that more people are digging into this mathematics and actually applying it to something. The last thing is that I really love that quantum computers are coming. I know they will break cryptography, but I also think they will mimic nature much better, and that will further our understanding of the physical world. That, for me, is fascinating: to actually be able to experiment on the physics side with something that was only theoretical. For example, recently someone did an actual experiment with a particle, or something like this, and showed in a physical experiment that it exists. And that's fascinating, because a lot of these physicists have made these findings from the theoretical perspective, but they never had the machinery to see at an experimental level that they are indeed true.
So that is what's fascinating about quantum computers: they make all of these theories become true, become palpable.
Tim Panton: I used to hang out with astrophysicists at one point in my life, and basically what they did was use the entire universe as a place to try to find an experiment that was already happening. It was a wonderful way to look at it; the maths was too hard for me, I couldn't stay in that space. But I just loved the idea of, as you say, trying to prove physics by observing it, in their case across the universe. So I want to thank you so much for doing this; it was absolutely wonderful. One thing we do like to try to do: if you can send me links to things you think the audience might want to read, I will put them in the show notes, and then anything that catches people's attention, they can read into in more depth. As I said, I want to thank you very much for doing this, and I also want to thank our listeners for listening. I would encourage them, as I always say, to subscribe, otherwise you'll miss an episode. And tell your friends, because otherwise they'll miss an episode and they won't know what the future holds.
Sofía Celi: Yes. Thank you very much for having me. It was really amazing to talk with you about all of these new, crazy things that are coming.