Everything is Hackable

Ted Harrington is the author of Hackable: How to Do Application Security Right and the executive partner at Independent Security Evaluators. Harrington has overseen security research hacking medical devices, password managers and cryptocurrency wallets, and he’s helped hundreds of companies like Google, Netflix, Amazon and Disney fix tens of thousands of security vulnerabilities. Harrington also leads a team that started and organizes IoT Village, an event whose hacking contest is a three-time DEF CON black badge winner and which represents the discovery of more than 300 zero-day vulnerabilities. Harrington’s work has been featured in more than 100 media outlets, including the New York Times, Financial Times, Wall Street Journal, Washington Post and USA Today.

Inspired by the fact that he repeatedly saw companies – irrespective of company size, geographic location, industry, focus, maturity or sophistication – running into the same ten problems, Ted embarked on a mission to publicize the solution. In the process, he realized that all of the conventional solutions are wrong and wrote a book about how to do it right.

A lot of people don’t even realize they have that problem, because they’re just sort of blind to it. … And so that made me start thinking about what are the conventional solutions to those problems. And that was the moment that I realized everybody has a wrong answer. … And so that motivated me to go write the book. It covers a full spectrum of how to think both programmatically, and specifically about how to actually build better, more secure software systems.

In this episode of Cybersecurity Unplugged, Harrington discusses:

  • The dangers of connected medical devices and how he and his team have identified vulnerabilities in passive medical devices;
  • How most organizations are doing it wrong and don’t understand how to secure software systems;
  • The problem that inspired him to write Hackable;
  • Vulnerabilities in the blockchain and how he discovered an active crypto robbery in progress.

This episode has been transcribed by AI; please excuse any typos or grammatical errors.

Steve King 00:13
Good day, everyone. I’m Steve King, the managing director of CyberTheory. Today’s episode is going to focus on how hackers think. Joining me today is Ted Harrington, the author of Hackable: How to Do Application Security Right, and the executive partner at Independent Security Evaluators. Ted has overseen security research hacking medical devices, password managers and cryptocurrency wallets, and he’s helped hundreds of companies like Google, Netflix, Amazon and Disney fix tens of thousands of security vulnerabilities. Ted also leads a team that started and organizes IoT Village, an event whose hacking contest is a three-time DEF CON black badge winner, and which represents the discovery of more than 300 zero-day vulnerabilities. Ted’s work has been featured in more than 100 media outlets, including the New York Times, Financial Times, Wall Street Journal, Washington Post and USA Today. So welcome, Ted. I’m glad you could join me.

Ted Harrington 01:23
Thanks for having me. Excited to be here.

Steve King 01:25
You and your organization have done a lot of work around hacking medical devices, and maybe no one knows better than you right now the real dangers these medical devices pose. Can you explain to our listeners where things stand today with these things, and what the real danger is there?

Ted Harrington 01:44
Sure. Well, maybe let me start with some good news. The good news is that there are many passionate security researchers focusing on this problem. This is a significant issue that is not being overlooked. It’s not being forgotten. There are a lot of people working really, really hard on this problem. And it’s a complex problem, because it’s not entirely just about security vulnerabilities; it’s about the entire process of how medical devices are approved for use on patients. That’s typically a very long process that involves a lot of checks and balances along the way. And as technology evolves and security evolves, how do you take that evolution and that constant change and deal with something that is much slower to change, which is a highly regulated industry? So that’s the good news: there are a lot of people working really, really hard on solving these problems, some of the smartest security researchers, and I’m really excited about that. The problem as I see it, and where we’ve really been interested over the years, has been in the places that maybe other researchers haven’t focused quite as much. All the emphasis right now is around what are called active medical devices, and that’s actually where the emphasis should be, I believe, because active medical devices are things that do something to the patient. An example would be something like a pacemaker; a pacemaker actively manipulates your heartbeat. So you can understand the logical correlation there: if someone were to manipulate the thing that manipulates a heartbeat, that could hurt a patient. Implied in what I’m saying here is, of course, that patient safety is the most important thing, and that’s something that I advocate for very strongly, very adamantly.
And I think that often gets overlooked in the discussions of health care, because the industry talks a lot about compliance with HIPAA, which has more to do with privacy than patient safety. So number one, the big issue is patient safety. Much of the emphasis is on these active medical devices, but the research that we’ve done has really focused more on what are called passive medical devices, which are devices that don’t do something to the patient; they react to the patient. An example would be the bedside monitor that you have in the hospital room that reports the patient’s vitals. It’s simply reacting to the patient, not doing something to the patient. The reason we’ve been really fascinated with that is because, in the context of healthcare, the question we’ve always been trying to answer is: could an attacker cause harm or fatality to a patient? And how can you cause harm or fatality through something that’s not actively doing something to a patient? That’s a really interesting attack scenario, because it’s something that people don’t typically think of. And that’s why ethical hackers exist, right? Because we’re supposed to think differently. So our research definitely looked at that problem and identified ways that by manipulating these devices, you can actually manipulate the way that care is delivered, like the way that nurses and physicians interact with patients. By manipulating that care, you might either trigger inappropriate care, like making the physician think that a patient is suffering when they’re not, so they intervene in some way, or you prevent needed care: a patient actually needs care from a physician, but they don’t get it because the monitor doesn’t signal that this patient is suffering.
So if we take all of that really complex situation and distill it down, there are a few important ideas that come out of this. One is, we have to be thinking like an attacker thinks. A second is, we have to look at the assumptions that are baked into the way we think systems operate, and then we have to address whether those assumptions are valid and true or not. And the third is, we have to understand the relationship between the way that security evolves and adapts and the very real business constraints that exist in any industry, whether it’s healthcare or something else. So those are, I think, the three big takeaways about this really complex topic.

Steve King 05:57
Yeah, and an example of the latter case you cite might be that a heart monitor could be sending a signal to a nursing station that has been manipulated to indicate that the patient has normal heart activity when in fact the heart activity is abnormal, so the nursing station pays no attention to it, and the patient suffers because of that. That’s one of the ways in which I suppose a hacker might approach it if they wanted to do harm.

Ted Harrington 06:27
Yeah, definitely. Between the two scenarios, that is the much scarier one, the prevention of needed care. And I say that because for the opposite, which is triggering unnecessary care, there are failsafes in healthcare environments, right? There are manual verbal checks that a doctor will administer before actually performing something that might hurt a patient. They’ll, of course, interact with them verbally and say, hey, how are you feeling? And if this patient isn’t really suffering: no, I don’t know why that thing’s going off, I feel fine, don’t hit me with the defibrillator, right? So there are failsafes there. But nevertheless, medical errors do happen, and hospitals are chaotic environments; they usually don’t have enough people on the floor at certain times. So it’s not out of the realm of possibility that it could happen. But the much scarier scenario is someone really needs care, and they don’t get it.

Steve King 07:22
Yeah, what a great way to create chaos in a hospital: take all 337 beds and misrepresent the monitoring at the nurses’ station. Terrific. Ted, it’s been, what, five or six years since we’ve worked together. Give us a little update on what’s happened since then at ISE and your progress in building a company.

Ted Harrington 07:45
Yeah, a lot has happened in six years. I think maybe a couple of the highlights: we’ve certainly continued our core business, which is ethical hacking and security assessment, security consulting; basically, helping companies through a service, helping them identify how attackers might compromise their systems, primarily software and applications, and then advising them on how to fix it. So that’s sort of been the constant. But some of the highlights include, as you mentioned, some of the research we’ve published over the last few years looking at things like cryptocurrency; password managers was another really cool one. A second highlight that’s pretty cool: I wrote a book that talks about the things we’ve learned along the way. It’s called Hackable. This book essentially addresses a problem that most people might not even realize they have, which is that most organizations actually don’t understand how to appropriately secure software systems. Even when they’re trying to, they might not realize that the conventional approach is not the correct approach. So I wrote a book to try to solve that problem, keep it simple, help people say, hey, look, do this, do this, do this, and you’ll be able to build better, more secure software systems. We were really fortunate to see that book actually hit number one bestseller, and just the other day it was nominated for a very prestigious award in the security community. So it’s doing really well; a lot of people are getting a lot out of it. And then the third highlight to mention would be that one of the things we’ve noticed over the years is that not only do people worry about the vulnerabilities that might exist in software they’re building, but large enterprises in particular, and medium enterprises as well, are concerned about the security of their vendors, all these companies that they have to work with.
In some cases they’re working with thousands of companies, and they’re trusting those companies to keep their assets secure. We’ve been really interested in that problem, and so we now have a software product that helps medium to large enterprises manage the security assessment process, so that they can simplify the process of being able to say: hey, can we use this vendor? Have they gone through our security controls, our security process? Do they meet our standards? Can the business unit that wants to use them use them, or do we need to do something else? This product really helps solve that very, very painful problem.

Steve King 10:11
Yeah, Hackable is a great book, by the way, congratulations. Can you describe some of the situations from the book to maybe illustrate how easy it is to pull off a cyber attack, and what defenses folks like our listener audience might mount against these, especially on everyday consumer devices?

Ted Harrington 10:34
Yeah, well, let me start by describing the problem that I saw that motivated me to write this book, because writing a book is no small undertaking. I mean, if you’re gonna commit multiple years of your life to something, you’d better be doing it to help people. There’s no way you’re gonna spend that much time if you’re not solving a problem for somebody. And here’s the problem that I saw. Through the course of this business that we have, which I mentioned before, serving as an advisor to a lot of companies, sort of their outside ethical hackers, I noticed that I kept having the same conversations over and over and over again, whether with current customers, or prospective customers, or maybe people I met after delivering a keynote or something like that. And it seemed like everybody has what I organized as the same ten problems. What was really interesting to me was that everyone has these problems irrespective of company size, geographic location, industry, focus, maturity, or sophistication of the organization. And when I noticed that: hey, not everyone necessarily calls them by the same name, and a lot of people don’t even realize they have that problem, because they’re just sort of blind to it. But everyone has these same ten problems. So that made me start thinking about, well, what are the conventional solutions to those problems? And that was the moment I realized I needed to write this book. Because I realized that the conventional solutions to these problems that everybody has are largely wrong. The way people talk about solving these problems, everybody has a wrong answer. And as I’m thinking about that, I’m like, wow, put yourself in the shoes of the person who has that problem, right? They see a problem that they want to solve in the world, they start building technology to solve that problem.
They realize that security is an important part of that; they realize they have some issues with how they’re going to actually solve for security. So they go and try to solve those security challenges, and the answer they get is wrong. And that, to me, felt unacceptable. So that motivated me to go write the book. That’s really what this book is about. It basically covers a full spectrum of how to think, both programmatically and specifically, about how to actually build better, more secure software systems. The principles apply outside of software, but I wrote it for software, so that’s the emphasis of the book. And all throughout the book, these different problems I talk about are addressed: here was something that people didn’t realize was an issue, here’s how they should do it differently instead, and here’s what you need to do.

Steve King 13:19
So you’re sort of laying open some of the wounds, some of the sort of hidden untruths in the industry, which is great, and we need more of that. Now, you mentioned crypto earlier. Blockchain and crypto are becoming a real part of our economic landscape now, for better or worse, and a lot of hype has surrounded the security of the blockchain and crypto keys and the rest of it. But before folks rush into these markets, can you shed some light on the true security of these technologies? Do you have any experience cracking, hacking rather, crypto yourself?

Ted Harrington 14:03
Yeah, we do. I have a pretty interesting story I can share with you; it’s actually the story I open my book with. But I do want to preface this with the fact that the fundamental way a blockchain works is actually, from a security perspective, an improvement on other models. And I think that even though there are stories, including the one I’m about to tell, about cryptocurrencies having some hacking incidents, I don’t think that inherently means cryptocurrencies in general are bad from a security standpoint. They’re obviously hotly debated, highly politicized, very polarizing, especially because governments do not like them. But from a security standpoint, I personally think they’re good. Now, the story: in some of our research we were looking at some Ethereum wallets. Ethereum, for people who don’t know, is one cryptocurrency, and a wallet is essentially where you’d hold your currency. We were looking at the idea of: could you predict the private key? To give you a sense, a private key is one of the most important things that keeps a cryptocurrency wallet secure. As the name suggests, it’s private; it’s not something that should be known. And it’s protected by this mathematical principle known as statistical improbability. Statistical improbability basically means you can’t get something; not that it’s completely impossible, but it’s statistically improbable, right? And there’s a metaphor to kind of wrap our heads around what statistical improbability means. It’d be like, Steve, you go to the beach, and you bend over and pick up a grain of sand, and then you throw that grain of sand back. And then the next day, a full 24 hours later, I go back to the same beach, and I pick up a single grain of sand. Now, what’s the likelihood that I pick up your grain of sand? Pretty much impossible, right? It could happen.
But it’s almost certainly not going to happen. Now, if you multiply that by every beach on Earth, and multiply that by like a gazillion planet Earths, that gives you a scope of what it would take to predict a cryptocurrency wallet key. It’s just not possible. So the key takeaway here is that even though statistical improbability is what keeps these wallets secure, in our research looking at cryptocurrency wallets, these Ethereum wallets in particular, we were actually able to successfully predict the private key that was protected by this idea. And not just once; we were actually able to predict it 732 times. That’s like: you go to the beach, you pick up a grain of sand, you throw it back, and I pick up your same grain of sand 732 times. It’s impossible. And that was obviously not because we were guessing; it’s because we had identified a vulnerability in the way the software provisions keys. The thing we did next was the logical thing anyone listening to this story is probably already thinking: how much money are we talking about here? Because these Ethereum wallets leverage a blockchain, and the very purpose of a blockchain is that all the transactions are publicly visible, we were able to actually determine how much money was in the 732 vulnerable wallets. And it was a pretty substantial amount at the time; combined between them, it was a little over 54 million US dollars’ worth. Now, Ethereum is like 20 times more valuable today than it was at the time, so you’re talking about a massive, massive amount of money that was exposed. And the way to think about that is: if there was a stack of cash just sitting on the sidewalk, someone’s gonna steal it eventually, right? So we wanted to know, well, okay, what happened to it? And sure enough, every single unit of currency had been stolen from these vulnerable wallets.
And it had all been funneled to a single destination wallet. That clearly signaled to us that this was a theft underway by an individual or group who was exploiting the same vulnerability our research had discovered. The final thing we looked at, which I think is more of a tangential sidebar, but it was interesting: we wanted to see how fast these wallets get looted. The very nature of cryptocurrency wallets is that they’re intended to be anonymous, so it’s not like we could contact anybody to say, hey, just so you know, your fly’s down. So what we did was actually transfer a dollar’s worth of our own Ethereum into one of these vulnerable wallets to see what happened. And almost instantly that currency was transferred out to that same destination wallet. So the story, besides being kind of nuts, in terms of we ran into an active hacking campaign, a thief actively stealing stuff in progress, it’s crazy. But I think there are two key points about what this story tells us. Number one, security vulnerabilities exist. And number two, attackers exploit them. Those are two really, really important points for people to walk away with from a story like this, because all too often we hear stories about security research, or we read headlines, and a lot of times it feels hypothetical: oh, if X happened, then Y would be the result, and that would be bad. And that can lead to a sense of complacency. Complacency happens in many organizations where they think: we’re secure, we’re investing the right way, we’ve got all the smart people, we’re doing X, Y, and Z, we do penetration testing, all this stuff. And they overlook the fact that no, these aren’t just hypotheticals. This stuff is real, and attackers are actively doing this.
So that’s a really vivid story from a research perspective that shows this stuff is really happening.
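The scale Ted describes, and the way a provisioning flaw collapses it, can be sketched with a little arithmetic. This is a back-of-envelope illustration only: the grains-of-sand figure is a rough public estimate, and `weak_key_from_seed` is a hypothetical stand-in for a low-entropy key derivation, since the conversation doesn’t detail the actual flaw ISE found.

```python
import hashlib

# An Ethereum private key is a 256-bit number, so brute-force guessing
# means searching a space of 2**256 possibilities.
KEYSPACE = 2**256

# Rough public estimate of the grains of sand on all of Earth's beaches.
SAND_GRAINS_ON_EARTH = 7.5e18

# How many "Earths' worth of beach sand" the keyspace represents.
earths_of_sand = KEYSPACE / SAND_GRAINS_ON_EARTH
print(f"keyspace is roughly {earths_of_sand:.1e} Earths' worth of sand grains")

def weak_key_from_seed(seed: int) -> int:
    """Hypothetical low-entropy provisioning bug: the key is derived
    deterministically from a small 32-bit seed. The output looks random,
    but only 2**32 distinct keys can ever be produced."""
    digest = hashlib.sha256(seed.to_bytes(4, "big")).digest()
    return int.from_bytes(digest, "big")

# An attacker who suspects such a flaw enumerates all 2**32 seeds,
# derives each candidate key, and checks the corresponding wallets on
# the public blockchain -- feasible on commodity hardware, versus a
# random search of 2**256 that can never complete.
WEAK_SPACE = 2**32
print(f"effective search space shrinks by a factor of {KEYSPACE // WEAK_SPACE:.1e}")
```

This is why Ted stresses that the keys were not guessed: random search against the full keyspace is hopeless, but a flaw in how keys are provisioned shrinks the effective search space to something an attacker can sweep exhaustively.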

Steve King 20:09
Yeah, you’re absolutely right. That’s what it breeds, that sense of complacency, because everything is hypothetical in terms of the way cybersecurity lore gets passed along. It’s also true in the IoT and OT world. Speaking of which, we’ve got this whole universe of Internet-facing things now that we need to worry about, much more so than we did five years ago. We’ve also seen increasing attacks on critical infrastructure throughout the US, with Colonial and JBS and NEW Cooperative and the rest of it, or Molson Coors, etc. You’ve created something called IoT Village, whose intention, I think, is to address these growing threats. Can you tell us a little bit about what’s going on with that initiative?

Ted Harrington 21:05
Yeah, you’re correct. IoT Village was created to build a community that together will collaborate on improving the state of security in the Internet of Things. And the fundamental truth is that pretty much everything is the Internet of Things now. A lot of people think about IoT devices as, like, your Nest thermostat or something, a consumer-grade product that has a way to communicate with it via an app or remotely in some way. But the truth is, any system or device that can be interacted with by another system or device, that’s IoT. So now you’re talking about basically all of transportation; our phones are IoT, our computers; it’s almost everything. Our entire lives are digital. And at the same time, there’s a very real business constraint in that there are varying degrees of computing power in these different devices. If you’re talking about a car, there’s a tremendous amount of computing power, because it’s physically large, it can have larger hardware and equipment on board, and it’s more expensive, so the cost of security can be better integrated into the purchase price. That’s one end of the spectrum. At the other end of the spectrum are really, really small devices that don’t have the form factor to have enough computational power to really process security, or the price point on the device is so cheap that the manufacturer says: we’re just going to call it secure, but we’re not going to actually do anything for it. And that’s very, very common; it’s just called secure, and there’s no explanation of what that would mean. Those are real challenges. So we wanted to create this environment that has hands-on labs and hacking contests and researchers presenting their latest findings, in order to really drive awareness of these issues.
So that as a community, we can really make things better. And ultimately, that’s the whole point of security. That’s the point of security research, the point of ethical hacking: we want to make things better.

Steve King 23:12
And where do you expose that most actively? Is that a DEF CON thing? Is that a Black Hat thing? Where do you get the kind of brand awareness that you’re looking for?

Ted Harrington 23:26
Yeah, so we run it at conferences, and DEF CON was where it started. DEF CON sort of originated this whole village concept, which is almost like a conference within a conference. For anyone who’s been to DEF CON, you’re probably familiar: there’s a crypto village, there’s a lockpicking village, there’s a red teaming village, and we created IoT Village. So DEF CON is certainly one of the tentpoles of our series. We go to RSA with it as well, we go to a number of BSides locations, and we’re starting to run events online. I mean, COVID certainly forced that, which is cool, because now we can make it accessible to people all over the planet by eliminating any sort of travel requirements. But yeah, we’re really aiming it at the security conferences.

Steve King 24:11
Yeah. And you can talk as long as you want, and they have all kinds of guests. I’m not sure what the outcome is going to be here over the next couple of years, because the virtual events have a lot of advantages over physical events, I think. So, I’m conscious of the time here, Ted. I wanted to kind of wrap this up by asking you about the phones. With the launch and popularity of Apple’s new iPhone 13, we’ve seen this technology run right past the desktop, and in some cases raised-floor computing, in terms of compute power, and people use these devices as their go-to personal computers. Apple’s always had a strong reputation for security, but what’s your experience with the security of these devices that we all carry around with us?

Ted Harrington 25:02
Well, they’re certainly getting better over time. That’s a great sign to see. You know, when the iPhone first came out back in 2007, through the research that we published, we wound up being the first company to actually find an exploitable vulnerability in that device. And then when Android OS came out shortly thereafter, we repeated that feat and were first to find an exploitable vulnerability there. Fast forward to today, and it’s pretty hard to exploit the phones now. That’s not to say you don’t read headlines; I read headlines about phones getting hacked all the time. But what’s interesting about those headlines is that the attacks are super exotic, which is what makes them headline-worthy, because they’re cool. It just shows you that the attack requirements are getting extraordinarily high to be able to attack these phones, because they were built more with security in mind than traditional computing systems were, you know, decades ago. And I feel good about that. But a lot of the same security challenges exist no matter the platform, right? When people click links they shouldn’t, or download things they shouldn’t, those same attack vectors are the same on phones as they would be on desktops. So I think the advancements are great. There are huge trade-offs that have happened from a privacy standpoint, of course, because these companies now know pretty much absolutely everything about us, and your only option is to just not have a phone, which is probably not practical for most people. I wouldn’t want people walking away from this feeling scared about using their phone. They should be skeptical. They should turn off all the stuff they don’t need; they should say “don’t allow” every time an app prompts for something it doesn’t really need.
I wouldn’t say that you shouldn’t use your phone.

Steve King 26:49
Yeah, and that advice hasn’t changed in years. So I guess Apple’s new iPhone 13 users can sleep better at night, but not entirely, I suppose. Thanks for that, I appreciate it. The book is called Hackable. We are out of time; I appreciate you taking the time to share the last 25 or 30 minutes of your perspective on things from a hacker’s point of view. I hope we can revisit here in another three or four months and see what’s happened between now and then.

Ted Harrington 27:20
Yeah, definitely, my pleasure. And for anyone who’s listening, if anything we talked about today raised follow-up questions, or you want to figure out where to learn more about the book, or you want to follow me on social media, or you want to reach out to me, or you want to talk to us about how we can help you with security assessments, however I can help you, just go to tedharrington.com. All of that information is there, super easy to find. I’m very responsive, and I appreciate everyone’s time today.

Steve King 27:46
Thanks, Ted, and thanks to our listeners for joining us for another one of CyberTheory’s Cybersecurity Unplugged episodes. We hope you enjoyed it. Until next time, I’m your host Steve King, signing out.

Category: Podcast