
The Double-Edged Sword: AI


In this episode of Cybersecurity (Marketing) Unplugged, Abdel-Khalik also discusses:

  • Patterns he’s seen in AI and the ability to automate things that humans understand very well;
  • How the value of AI is skyrocketing around the world in cybersecurity;
  • And cybersecurity as a science: understanding the impact of attacks, and building systems smart enough to recognize when they are being fed false information.

Hany Abdel-Khalik is an associate professor at Purdue University in the School of Nuclear Engineering, where he focuses on data mining applications in the context of reactor safety, economics, and cybersecurity. Hany was born in Alexandria, Egypt, where he obtained his pre-college education and a bachelor's degree in nuclear engineering from Alexandria University about 20 years ago. Immediately after, he accepted an offer to continue his graduate studies at North Carolina State University, where he worked on computational reactor physics for boiling water reactors, receiving his Ph.D. in nuclear engineering.

Abdel-Khalik uses his background and expertise in engineering to weigh in on how AI is tilting the balance toward malicious agents: by reverse-engineering the knowledge of how systems work, AI can be used maliciously to uncover design details that were meant to stay obfuscated or unclear to outsiders.

The main issue with AI right now is that we have developed really good recipes to automate our discovery, or reverse engineering, of how systems work. Because of computing power, we can do that much, much faster. So that's raising the issue about AI: whether we want to use it to improve the economy, improve performance, etc. But we want to be aware of its other side, that it can reverse engineer things that we don't want it to.

Full Transcript

This episode has been automatically transcribed by AI; please excuse any typos or grammatical errors.

Steve King  00:13

So good day, everyone. This is Steve King, the managing director at CyberTheory. Today I'm joined by Hany Abdel-Khalik, who was born in Alexandria, Egypt, where he obtained his pre-college education and a bachelor's degree in nuclear engineering from Alexandria University about 20 years ago. Immediately after that, he accepted an offer to continue his graduate studies at North Carolina State University, where he worked on computational reactor physics for boiling water reactors. One year after obtaining his tenure in 2015, he moved to Purdue's School of Nuclear Engineering, and he's currently focused on data mining applications in the context of reactor safety, economics, and cybersecurity. So welcome, Hany. I'm glad you could join me today.

Hany Abdel-Khalik  01:17

Steve, thank you for having me.

Steve King  01:18

Sure, sure. So let's just jump in here. We're going to talk about AI, artificial intelligence, for a minute, because why not? It's a double-edged sword, right? It can help find patterns to improve performance, but it can also be used as a reverse engineering tool, making stakeholders reluctant to share their data for fear of being exposed, having their vulnerabilities unmasked, or damaging the reputation of their own systems. Imagine if an AI firm announced a vulnerability that was found by analyzing a nuclear power plant. So talk to me about that a little bit, and about what kinds of progress and patterns you've seen on our long-term march to artificial intelligence, which we have to figure out how to embrace if we're going to eventually win this war that we're in.

Hany Abdel-Khalik  02:25

Sure, Steve. Artificial intelligence, you know, the term has been used since the 40s. There have been several waves of AI adoption; several times people got really excited about it, and then that fizzled away after a time. But in the past 10 to 15 years, we've been hearing a lot about AI and what it can do. People now are focusing not on the emulating-human-intelligence part, but on its ability to automate things, things that we understand really well, where we have a very intelligent process recipe, and we can just automate that very efficiently, especially with this amazing computing power that we have today. So in a way, artificial intelligence and machine learning, or data mining in general, you can think of them as really tilting the battleground in favor of some of those malicious agents. Because unlike before, where knowledge was sort of confined or centralized, now we turn knowledge into information, and AI is basically using that information to reverse engineer the knowledge that could be obfuscated or not made very clear. So that's the main issue with AI right now: we have developed really good recipes to automate our discovery, or reverse engineering, of how systems work, and because of computing power, we can do that much, much faster. So that's raising the issue about AI, whether we want to use it to improve the economy, improve performance, etc. But we want to be aware also about its other side, that it can reverse engineer things that we don't want it to.

Steve King  04:01

Yeah, sure. And the Chinese have been very active about doing just that with cars, or getting to that point, I should say. With quantum computing, Hany, whereas we're spending something like one-twelfth, I think, of our national defense budget on quantum computing within computer science, compared to what the Chinese and the Russians are spending, doesn't that concern you?

Hany Abdel-Khalik  04:35

It certainly does, yeah. In fact, just Googling the subject and seeing some of the recent articles that have been written on it, it's interesting the take that some of those countries have on artificial intelligence. As I said early on, even still today, when you hear about AI, the first thing that comes to mind is the ability to emulate human intelligence. That's not really what they're focusing on; they're more focused on automating things that we know how to do very, very well. In fact, one of the key words floating around these days is the concept of intelligentization, and it's hard to even say the word. But what it's basically saying is that we're not really trying to emulate human intelligence; we're trying to take something that we understand very well, knowledge of how some of those systems work, like nuclear reactors, for example, or the electric grid, and use AI to reverse engineer some of the design details that might be obfuscated or not clear to us. This actually brings me to a really important point: we are seeing a big shift from the concept of knowledge to the concept of information. In the past, World War One and World War Two, for example, were all based on centralizing that knowledge and putting a fence around it, so only those who have access can get to it. But with computers being used so much, we're turning that knowledge into information; it's almost like you're diffusing the knowledge to the public. And we're now relying on these decentralized information-sharing approaches, the internet being one form. So it's not a matter right now of who has the knowledge; it's a matter of whether you can really protect the information that's there, and whether you can interpret it in a way that suits your purposes. So I think some of those countries are mainly focusing on using AI techniques to try to reverse engineer as much as possible of the actual knowledge, for example, the design of specific critical infrastructure, or reverse engineering a certain vehicle or drone technology if it gets captured. So the value of AI techniques has really skyrocketed in recent years. And it doesn't have to be only for critical systems; you can think about it being used against a lot of different small manufacturing companies, anything that requires some type of automation. The car industry, for example, relies a lot on artificial intelligence these days. So a lot of different industries, I think, will be vulnerable if AI is being used to basically penetrate and turn on them.

Steve King  07:19

Yeah. And, you know, part of the reason we should be concerned is that we continue to invite the adversarial nation-states into the business of venture capital. I know that on Sand Hill Road, for example, there are at least half a dozen Chinese firms, and their LPs, depending upon each deal, are enabled to take significant chunks of that first, second, or third round of equity. That provides them with the ability to look inside, take IP, take it out, study it, return it, etc. It is essentially saying: here's the IP for this new AI product that we're developing. You guys want to take a look at it? Maybe photocopy it, maybe hand it around to your engineering team. Go for it. So I wonder why we keep doing that. I mean, we've been doing it for at least 12 years or so now. Do you have any insight into that?

Hany Abdel-Khalik  08:35

Yeah, so this is a necessity, basically. You're talking about something that computer scientists have been thinking about for quite some time now. As I said earlier, when what you're trying to protect can be centralized, when you can put it in a box, then all you have to think about is protecting that box. But sometimes you have to share it with other entities to improve performance. AI is an example: when you test AI techniques, you can do your own testing, but the best testing for performance is only possible if you invite others to look at your product. Everybody has their own perspective, and they might be able to find weaknesses and limitations, etc. The computer science community was focused on that put-everything-in-a-safe approach; they called it security through obscurity. But people realized later on that if you obscure things like that, eventually somebody else will find a vulnerability and break in. So it's best to design security approaches that are open, that everybody knows about, but that rely on one secret aspect, like a secret key. And if you make that key really complicated, then you would need a really powerful computer to break it. But as you said earlier, now we have all these quantum computing platforms, so people are starting to question whether this idea of sharing everything and basing your security on the difficulty of breaking a key will remain a useful approach. In general, though, I think putting a fence around things will not be the way to protect valuables or proprietary information. Because instead of trying to play the game, people are going to try to play the man. It's kind of like bribing a guard sitting outside some exclusive club where only you have the access card to get in: eventually, somebody's going to bribe that guard. So the concept of trying to protect things by putting a fence around everything, what computer security calls perimeter defense, will eventually break. We really have to find the root of the problem and fix it at that level, the process level. A good example that I like to make is comparing the COVID restrictions, wearing masks and social distancing, etc., to the design of a vaccine. In a way, the COVID restrictions are a sort of perimeter defense: you don't know how to fix the problem yet, but you try to take some measures to reduce the impact on you, and that's when you limit access. But when you actually design a vaccine, you find the root of the problem, so even if the virus gets into your body, the body's mechanisms basically know how to turn off its effect. So my thought process on this is that perimeter-based defenses, putting a fence around your data to protect your valuables, will eventually fail. We have to think of a way that allows these industrial control systems to think on their own, be basically self-aware, and be able to detect when there is a footprint of an intrusion and correct for it.
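To make the open-design idea concrete, here is a minimal sketch in Python, assuming messages between a sensor and a controller are authenticated with a single shared secret key. The HMAC construction, the message fields, and all names below are illustrative assumptions for this article, not anything Abdel-Khalik specifies; the point is only that the algorithm is fully public and security rests on the key alone.

```python
# A minimal sketch of "open design, one secret" (Kerckhoffs's principle):
# the algorithm (HMAC-SHA256) is public; only the key is secret.
import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)  # the single secret everything relies on

def sign_message(payload: bytes, key: bytes = SECRET_KEY) -> bytes:
    """Return an authentication tag for a sensor/control message."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_message(payload: bytes, tag: bytes, key: bytes = SECRET_KEY) -> bool:
    """Accept the message only if the tag matches; constant-time compare."""
    return hmac.compare_digest(sign_message(payload, key), tag)

# Illustrative message; the field names are made up for this sketch.
reading = b"inlet_flow=412.7;pressure=15.2"
tag = sign_message(reading)
assert verify_message(reading, tag)             # untampered message passes
assert not verify_message(reading + b"!", tag)  # any tampering is detected
```

If the key leaks or can be brute-forced, everything fails, which is exactly the quantum-computing worry raised above: the open design survives only as long as breaking the one secret stays computationally infeasible.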

Steve King  11:51

How many students do you teach today, typically?

Hany Abdel-Khalik  11:56

So, you know, as a professor, we work with both undergraduates and graduate students. I typically teach courses on reactor physics at the undergraduate level, but I also have my own research group, which includes primarily graduate students who are working toward a master's degree or a PhD. I also try to recruit undergraduate students to work with me, starting maybe in their junior year, so they build some muscle by the time they graduate; then they'd be ready for doing graduate research. So in typical numbers, somewhere between five and ten research students, and in the graduate program, class sizes are maybe on the order of 20 to 30 students. We're a small program, because nuclear engineering programs have traditionally been much smaller than other mainstream engineering programs, like industrial, mechanical, aerospace, etc.

Steve King  12:52

Yeah. Is that a recent development, or has it been heading in that direction for some time because of our attempt to move away from nuclear power?

Hany Abdel-Khalik  13:05

So it's been going through waves also. When I joined in the mid 90s, I can safely say it was the bottom, like the stock market bottom for nuclear. In fact, the budget for nuclear research in the US was zero at that time. But with the Bush administration, we started seeing a lot of support for nuclear power, and the support continued with the Obama administration and the Trump administration. So we've also been seeing an increase in enrollment for undergraduate students, and it's been a steady increase since the turn of the century. We're really pleased with that.

Steve King  13:44

Do you see cybersecurity as a science, or is it something else? I mean, look, from your point of view as a science professor, and I mean nuclear science is a pure science in my mind, assuming that's correct, do you look at those two disciplines as the same or similar in terms of their scientific grounding? Or is one more affected by human factors and human influence?

Hany Abdel-Khalik  14:21

Definitely, there is a big part of this that is the human factor, and how you always have to entrust something to a human to, you know, man the gate. But in the last 15 years or so, we've been seeing the terminology "cyber-physical"; you hear that a lot. It's basically telling you that anything that happens in cyberspace right now has a physical consequence. If you think back to the 90s, for example, when your credit card was stolen, there was really no immediate impact. You just called the company; it was a data leak, so maybe the insurance would cover it, and everything was fine. But now we're hearing about somebody hacking into a car and causing an accident, or hacking into a nuclear facility and causing permanent damage to the components of that facility. So we can't think about it as a separate problem anymore, because the cyber world is tied to the physical world. So it's definitely a science right now. And when we talk about how you stop an attack from happening, it's not a matter of just building a fence around the system and requiring passwords and encryption and all that. No, we actually have to understand the impact of the attack on the system, and how the system can be smart enough to figure out that it's being fed false data. So cybersecurity right now is no longer just a computer science problem. It is an engineering problem at heart, because we're concerned about the physical impact on the systems that we're dealing with.

Steve King  15:55

Yeah, so talk to me a little more about that, because the way I look at it is that math and physics have some immutable laws, right? I mean, they don't change; no matter how much human interaction there is, the laws of math and physics don't change. I've been in the cybersecurity world for as long as there's been one, and the laws frequently change based upon the discoveries that humans make.

Hany Abdel-Khalik  16:30

Absolutely. So if I were to put a line in the sand between the two sciences: security, from the computer perspective, starts from the fact that the computer is a slave; you tell it what to do. So for a computer to establish some sort of trustworthiness, you have to tell it what to trust. That's called a protocol. You come up with a way of engaging with an agent that's giving you information, and if the protocol is satisfied, then you say that agent is trustworthy and you work with it. So it is based on rules, and rules, as you said, are defined by humans, and the rules can change. We've seen, for example, the evolution of these trust measures from simple passwords and firewalls and all that, up to what we hear about right now, zero trust, which means you don't trust anything at all in your system; everything has to be verifiable, basically. So that's open to human interpretation, and the human decides whether they really need zero trust, to go all the way to that level of trust verification, or to rely on some basic approach for establishing trust. But on the engineering side, things are different. When we talk about trust, we're asking whether the data we're looking at, the data describing the state of my system, is representative of what I should expect. If I have a nuclear reactor, for example, I know the type of the reactor, and I have a pretty good idea of how the various sensor data should be correlated with each other: inlet flow, pressure, the primary side, the secondary side. Those are all physical quantities; they can't just be random values you decide on. They all follow physical laws. So the issue we worry about when we talk about cybersecurity is an attacker who fools those initial protocols that I talked about, so that it can talk to the system, and then relies on engineering knowledge to fake the data flowing through the system. That faking process will not be just random; it will be based on a good understanding of the physics of the problem. So in the computer community now they talk about the concept of a cyber payload, the ability to bypass and penetrate the perimeter-based defenses and fake all these communication protocols, and then the engineering payload, the part of the virus that tricks the system into doing something it shouldn't be doing. And that part of the virus is engineering; it is not based on human-defined rules, it is based on understanding the physics of the system. That's what we're mostly worried about right now. When we started this podcast, you started talking about AI. AI presents the ultimate weapon here, because it allows you to understand how sensors relate to each other; it even allows you to reverse engineer some of the physics that might not be declared in the operation of some facility. And I think that's where most of the research on cybersecurity needs to focus right now: how do you use your understanding of the engineering system to create self-awareness that can be used to defend against these types of attacks?
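As a hedged illustration of that physics-based notion of trust, here is a small sketch: two sensor readings (flow and pressure drop) that must satisfy a physical relation, with data flagged when the relation breaks. The quadratic model, the coefficient, and the threshold are illustrative assumptions, not parameters of any real plant.

```python
# A sketch of a physics-based consistency check: correlated sensors must
# obey a physical law, so faked values that break the law can be flagged.
K = 8.9e-5        # assumed hydraulic resistance coefficient (illustrative)
THRESHOLD = 0.05  # residual tolerance, e.g. set from sensor noise statistics

def expected_dp(flow: float) -> float:
    """Turbulent-flow approximation: pressure drop scales with flow squared."""
    return K * flow**2

def is_consistent(flow: float, measured_dp: float) -> bool:
    """Flag a flow/pressure-drop pair that violates the physical model."""
    model = expected_dp(flow)
    residual = abs(measured_dp - model) / max(model, 1e-9)
    return residual < THRESHOLD

# Honest reading, within noise of the model: accepted.
print(is_consistent(412.0, expected_dp(412.0) * 1.01))  # True
# Spoofed pressure that ignores the physics: flagged as inconsistent.
print(is_consistent(412.0, expected_dp(412.0) * 0.60))  # False
```

A real engineering payload, as described above, would try to fake data that satisfies these relations too, which is why the check is a floor, not a ceiling, for this kind of defense.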

Steve King  19:43

Yeah. In prior conversations, you and I have talked about the disconnect between physical impact and logical impact, and how engineers think of security as a nuisance. As a result of that, as you described, they don't want any IT folks testing their systems. Do you find that to be true in the OT world as well? So that, for example, the reason we're so far behind in operational technology systems design, and protection from a cybersecurity point of view, is that the folks running the plant will not give control of the system, even for a limited period of time, to the IT folks, because they don't trust the IT folks. How are we going to solve the problem of, for example, programmable logic controllers, things like valve controllers, that not only lack a default password, they lack any password controls at all? And here it is, 2022.

Hany Abdel-Khalik  21:03

Yeah, that's like the million-dollar question, definitely. A lot of researchers are trying to see how they can address it. But let me maybe highlight that the major challenge here is not whether the IT people distrust engineers and the engineers don't trust IT; I think it's just that they have different perspectives on the problem. Take the simplest thing: when engineers sell you a product, they talk to you about warranty, about reliability, when it is likely to fail, how long you can keep it before you have to replace the parts, etc. That's the perspective of their thought process. But when you talk to an IT person, the argument is usually: well, you are using a network that could be hacked, and if you get hacked, you stand to lose a lot of money, so you need to hire me. Okay, and then when you hire them, the question the engineers, or whoever owns the facility, ask is: am I protected? Are we now protected? And the answer you get is not very clear. An engineer can tell you there's a 90% chance this will fail after five years; you don't get metrics like that from security folks. They basically tell you: well, until something else shows up on the market, you're protected. And the interesting thing is, no news is not necessarily good news in that argument, because you could already be hacked while the virus hasn't declared itself. So I think there's just a disconnect in expectations and perspective on the problem between the engineers and the security folks, and that's why they're not really talking very well. So I think, in the future, as we're moving forward, we have to think about what the goal of all this is. As I said, it gets physical; that's the main issue, right? The key thing is that we want to maintain the operation of the physical facility even if an attack is taking place. That's really, at the end of the day, what an engineer is looking for. You can talk about all the zero-day attacks and the viruses and how smart it is to catch the hackers, but what we really care about is that the system continues its function. I would be happy, if I learn that I got hacked, to still continue to operate; then I can hire the forensic team to come and clean up the system. But I don't want to be in a situation where I have to shut down every time something infiltrates my system, and then work with the forensic team while at the same time maybe responding to a ransom that I have to pay, and having to decide which one I should do. So I think future research will have to think that way: when we implement security, it needs to really take advantage of our understanding of the system itself, and every system is going to be different from an engineering point of view. How can you use that knowledge to protect the system? That's a very different way of looking at it from the fence approach that I talked about, where you put a strong fence around everything that's valuable and keep fortifying that fence. Engineers don't want to think that way, because anytime you build fences and fortify things, you are reducing the flexibility of the process. And again, a great social example is the COVID restrictions that we all had to deal with. They were necessary, but at the same time, everybody was just waiting to have a vaccine so we wouldn't have to worry about that anymore. It's the same way engineers look at the current security protocols: they are restricting the freedom of operation, and it would be nice if we could have a vaccine, so that even if we get attacked, we can still continue to work and operate until the virus works its way out of the system.

Steve King  24:52

Yeah. The unfortunate answer is that we won't have a vaccine, because the premise upon which we decide that one exists or doesn't is not like the laws of engineering: if you have humans going in and changing the fundamentals of those laws, then the expectations you have for the engineering system go right out the window. And we do exactly that with cybersecurity. We continually have the ability to interfere with the laws of cybersecurity, and those laws are fairly flexible to begin with, let's call it. You think about this as an engineer, which you are: we've been doing the same thing for at least 40 or 50 years here, and the result has never changed. It's always worse than it was before. So the final question for today, Hany, is: how are we going to put an end to this stuff?

Hany Abdel-Khalik  26:02

So maybe that gives me a chance to self-promote a little bit. We've been working on a really interesting concept, something called self-awareness, or covert cognizance, and it's all engineering-driven. We are basically coming up with algorithms that allow the system to inherently develop self-awareness, so that if it receives information from a compromised communication protocol, like I mentioned earlier, it will be able to detect that on its own, without having to go through certain checks like you would typically get with these zero trust approaches. The idea is very similar to what humans do, really. When you talk about self-awareness, people usually think about the "awareness" part, which means having a record of everything. What we're really focused on is the "self" part, which means that you are the only one who owns that thought. Like we talked about offline before, Steve, it's very similar to when you're having a discussion with somebody: there's information flowing between the two of you, but each one of you is forming their own opinion about the subject. And that opinion, the information that could be embedded into the engineering process, will not be accessible by a rogue AI, or by an insider who understands the system, etc. So if you can develop that self-awareness, you are basically developing that vaccine I keep talking about: the system on its own will be able to detect that there's a virus signature coming in, and it will launch the appropriate defenses, to continue the biology analogy, to neutralize it and continue the operation. I'd love to talk a lot more about that, but that's one area of research that we are very excited about, and I think it will have lots of applications in different fields.
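To illustrate the flavor of the idea, here is a toy construction of our own, not Purdue's actual covert cognizance algorithms: a controller privately perturbs its own commands with a secret low-amplitude sequence and then checks that the sensor feedback still carries that hidden component. Replayed or fabricated sensor data will not correlate with the secret. The plant model, amplitudes, and threshold are all illustrative assumptions.

```python
# Toy sketch: detect falsified feedback via a secret covert perturbation.
import numpy as np

rng = np.random.default_rng(seed=2022)  # the seed is the controller's secret
N = 500
secret = 0.01 * rng.standard_normal(N)  # low-amplitude covert perturbation

base_command = np.ones(N)               # nominal actuation signal
command = base_command + secret         # what is actually sent to the plant

def plant(u: np.ndarray) -> np.ndarray:
    """Toy plant: gain of 2 plus measurement noise."""
    noise = 0.005 * np.random.default_rng(7).standard_normal(N)
    return 2.0 * u + noise

def carries_secret(feedback: np.ndarray, threshold: float = 0.5) -> bool:
    """Check whether the covert component survives in the feedback."""
    corr = np.corrcoef(feedback - feedback.mean(), secret)[0, 1]
    return corr > threshold

honest = plant(command)          # real sensor data: secret present
replayed = plant(base_command)   # attacker's fake: secret missing
print(carries_secret(honest))    # True  -> trust the feedback
print(carries_secret(replayed))  # False -> falsification detected
```

The secret sequence plays the role of the "thought that only you own": it never appears in any protocol, so even an attacker who satisfies every communication check cannot reproduce it.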

Steve King  27:46

Yeah, I'm hopeful as well. So, you know, why don't we plan on getting together in about six months and seeing what history has taught us in the period between now and then?

Hany Abdel-Khalik  28:02

Exactly, exactly.

Steve King  28:05

Thank you, Hany, for taking the time out of your schedule to chat about this intriguing topic that, you know, hasn't changed much. And even if you're not particularly sanguine about it, we all hope for a vaccine, as you call it, that will take away all these vulnerabilities and so forth. Maybe that's going to be found somewhere in artificial intelligence.

Hany Abdel-Khalik  28:31

I'm very optimistic about that. And thank you, Steve, for having me on your podcast. Thank you.

Steve King  28:37

Sure. I thank my listeners for sitting down for another 30 minutes of what I thought was a pretty interesting discussion with the associate professor in the School of Nuclear Engineering at Purdue University, Hany Abdel-Khalik, and we will do this again in a few months. Thanks again, Hany. Thanks to our listeners also for joining us for another one of CyberTheory's unplugged reviews of the wild world of cybersecurity technology and our new digital reality. Until next time, I'm your host, Steve King, signing off.