Thursday, March 23, 2017

Inverse Law of CVEs

I've started a project to put the CVE data into Elasticsearch and see if there is anything clever we can learn from it. Even if there isn't anything overly clever, it's fun to do. And I get to make pretty graphs, which everyone likes to look at.
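To give a sense of the setup, here's a minimal sketch of the sort of loader I mean, assuming a local Elasticsearch node, the official elasticsearch Python client (8.x here), and an NVD-style JSON feed saved locally. The file name and field paths are assumptions on my part; adjust them to whatever CVE feed you actually pull down.

    import json

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Hypothetical local copy of an NVD JSON feed; the layout below follows
    # the NVD "CVE_Items" schema, which may differ from your source.
    with open("nvdcve-2017.json") as f:
        feed = json.load(f)

    for item in feed.get("CVE_Items", []):
        cve_id = item["cve"]["CVE_data_meta"]["ID"]
        doc = {
            "cve": cve_id,
            "published": item.get("publishedDate"),
            "description": item["cve"]["description"]["description_data"][0]["value"],
        }
        # One document per CVE ID; re-running the loader just overwrites.
        es.index(index="cve", id=cve_id, document=doc)

Once the documents are in, the graphs are just queries and aggregations on top of that index.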

I stuck a few of my early results on Twitter because it seemed like a fun thing to do. One of the graphs I put up compared the 3 BSDs. The image is below.

[Graph: CVE counts over time for FreeBSD, OpenBSD, and NetBSD]
You can see that none of these graphs has enough data to really draw any conclusions from; again, I did this for fun. I did get one response claiming NetBSD is the best because its graph is the smallest. I've actually heard this argument a few times over the past month, so I decided it's time to write about it, especially since I'm sure I'll find many more examples like it while I'm wading through this mountain of CVE data.

Let's make up a new law; I'll call it the "Inverse Law of CVEs". It goes like this: "The fewer CVE IDs something has, the less secure it is."

That doesn't make sense to most people. If you have something that is bad, fewer bad things is certainly better than more bad things. This is generally true for physical concepts our brains can understand. Less crime is good. Fewer accidents are good. When it comes to how many CVE IDs your project or product has, this idea gets turned on its head: fewer is probably bad. There's probably a line somewhere where, once you cross it, things flip back to bad (wait until I get to PHP). We'll call that the security Maginot Line, because the bad security decided to sneak in through the north.

If something has very, very few CVE IDs, it doesn't mean it's secure; it means nobody is looking for security issues. It's easy to understand that if something is used by a large, diverse set of users, it will get more bug reports (some of which will be security bugs), and it will get more security attention from both good guys and bad guys because it's a bigger target. If something has very few users, it's quite likely not much security attention has been paid to it. I suspect what the above graphs really show is that FreeBSD is more popular than OpenBSD, which is more popular than NetBSD. Random internet searches seem to back this up.
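For the curious, counts like the ones in those graphs fall out of a single aggregation. Here's a hedged sketch against the index built above, assuming it also carries a "product" keyword field (an assumption on my part; my real mapping is still evolving):

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Count CVE documents per year for one product. "product" and
    # "published" are assumed field names; rename to match your mapping.
    resp = es.search(
        index="cve",
        query={"term": {"product": "freebsd"}},
        aggs={"per_year": {"date_histogram": {
            "field": "published",
            "calendar_interval": "year",
        }}},
        size=0,  # we only want the buckets, not the documents themselves
    )

    for bucket in resp["aggregations"]["per_year"]["buckets"]:
        print(bucket["key_as_string"], bucket["doc_count"])

Swap "freebsd" for "openbsd" or "netbsd" and you have the three graphs above.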

I'm not entirely sure what to do with all this data yet. Part of the fun is figuring out how to classify it all. I'm not a data scientist, so there will be much learning. If you have any ideas, by all means let me know; I'm quite open to suggestions. Once I have better data I may try to find the point at which a project has enough CVE IDs to be considered on the right path, and the point past which it has so many it's crossed over to the bad place.

Sunday, March 12, 2017

Security, Consumer Reports, and Failure

Last week there was a story about Consumer Reports doing security testing of products.

As one can imagine, there were a fair number of "they'll get it wrong" sort of comments. They will get it wrong at first, but that's not a reason to pick on these guys. They're quite brave to take this task on; it's nearly impossible if you think about the state of security (especially consumer security). But this is how things start. No industry has gone from broken to perfect in one step. It's a long, hard road when you have to deal with systemic problems in an industry. Consumer product security problems may be larger and more complex than anything any other industry has had to solve, thanks to things such as globalization and how inexpensive tiny computers have become.

If you think about the auto industry, you're talking about products that cost thousands of dollars. Safety is easy to justify there because its cost is small relative to the overall cost of the vehicle. Now think about tiny computing devices: you could be talking about chips that cost less than one dollar. If the cost of security and safety exceeds the initial cost of the computing hardware, that cost can be impossible to justify. If adding security doubles the cost of something, manufacturers will try very hard to find ways around having to include such features. There are always bizarre technicalities that can help avoid regulation; groups like Consumer Reports help with accountability.

Here is where Consumer Reports and other testing labs will be incredibly important to this story. Even if there is regulation a manufacturer chooses to ignore, a group like Consumer Reports can still review the product. Consumer Reports will get things very wrong at first; sometimes it will be hilariously wrong. But that's OK, it's how everything starts. If you look back at any sort of safety and security in the consumer space, it took a long time, sometimes decades, to get right. Cybersecurity will be no different; it's going to take a long time just to understand the problem.

Our default reaction to mistakes is often ridicule; this is one of those times we have to be mindful of how dangerous that attitude is. If we see a group trying to do the right thing but getting it wrong, we need to offer advice, not mockery. If we don't engage in a useful and serious way, nobody will take us seriously. There are a lot of smart security folks out there; we can help make the world a better place this time. Things can look hopeless and horrible sometimes, but they will get better. It'll take time, it won't be easy, but things will get better thanks to efforts such as this one.

Thursday, March 2, 2017

What the Oscars can teach us about security

If you watched the 89th Academy Awards you saw a pretty big mistake at the end of the show. The short story: Warren Beatty was handed the wrong envelope, he opened it, looked at it, then gave it to Faye Dunaway to read, which she did. The wrong people came on stage and started giving speeches, confused scrambling happened, and the correct winner was brought on stage. No doubt this will be talked about for many years to come as one of the most interesting and exciting events in the history of the awards ceremony.

People make mistakes, and we won't dwell on how the wrong envelope made it into the presenter's hands. The details of how this error came to be aren't what's important for this discussion. The important lesson for us is to watch Warren Beatty's behavior. He clearly knew something was wrong; if you watch the video of him, you can tell things aren't right. But he just kept going, gave the card to Faye Dunaway, and she read the name of the movie on the card. These people aren't young amateurs, they're seasoned actors. It's not their first rodeo. So why did this happen?

The lesson for us all is to understand that when things start to break down, people will fall back to their instincts. The presenters knew their job was to open the card and read the name. Their job wasn’t to think about it or question what they were handed. As soon as they knew something was wrong, they went on autopilot and did what was expected. This happens with computer security all the time. If people get a scary phishing email, they will often go into autopilot and do things they wouldn’t do if they kept a level head. Most attackers know how this works and they prey on this behavior. It’s really easy to claim you’d never be so stupid as to download that attachment or click on that link, but you’re not under stress. Once you’re under stress, everything changes.

This is why police, firefighters, and soldiers get a lot of training. You want these people to do the right thing when they enter autopilot mode. As soon as a situation starts to get out of hand, training kicks in and these people will do whatever they were trained to do without thinking about it. Training works, there’s a reason they train so much. Most people aren’t trained like this so they generally make poor decisions when under stress.

So what should we take away from all this? The thing we as security professionals need to keep in mind is how this behavior works. If you have a system that isn't essentially "secure by default", anytime someone finds themselves under mental stress they're going to take the path of least resistance. If the path of least resistance leads somewhere dangerous, you're not designing for security. Even security experts have this problem; we don't have superpowers that let us make good choices in times of high stress. It doesn't matter how smart you think you are: when you're under a lot of stress you will go into autopilot, and you will make bad choices if bad choices are the defaults.

Thursday, February 23, 2017

SHA-1 is dead, long live SHA-1!

Unless you’ve been living under a rock, you heard that some researchers managed to create a SHA-1 collision. The short story as to why this matters is the whole purpose of a hashing algorithm is to make it impossible to generate collisions on purpose. Unfortunately though impossible things are usually also impossible so in reality we just make sure it’s really really hard to generate a collision. Thanks to Moore’s Law, hard things don’t stay hard forever. This is why MD5 had to go live on a farm out in the country, and we’re not allowed to see it anymore … because it’s having too much fun. SHA-1 will get to join it soon.

The details of this attack are widely published at this point, but that's not what I want to discuss. I want to bring things up a level and discuss the problem of algorithm deprecation. SHA-1 was already on the way out; we knew this day was coming, we just didn't know when. The attack isn't super practical yet, but give it a few years and I'm sure there will be more interesting breakthroughs against SHA-1. SHA-2 will be next, which is why SHA-3 is a thing now. At the end of the day, this is why we can't have nice things.

A long time ago there weren’t a bunch of expired standards. There were mostly just current standards and what we would call “old” standards. We kept them around because it was less work than telling them we didn’t want to be friends anymore. Sure they might show up and eat a few chips now and then, but nobody really cared. Then researchers started to look at these old algorithms and protocols as a way to attack modern systems. That’s when things got crazy.

It’s a bit like someone bribing one of your old annoying friends to sneak the attacker through your back door during a party. The friend knows you don’t really like him anymore, so it won’t really matter if he gets caught. Thus began the long and horrible journey to start marking things as unsafe. Remember how long it took before MD5 wasn’t used anymore? How about SSL 2 or SSHv1? It’s not easy to get rid of widely used standards even if they’re unsafe. Anytime something works it won't be replaced without a good reason. Good reasons are easier to find these days than they were even a few years ago.

This brings us to the recent SHA-1 news. I think it's going better this time, a lot better. The browsers already have plans to deprecate it, and there are plenty of good replacements ready to go. Did we ever discuss killing off MD5 before it was clearly dead? Not really. It wasn't until a zero-day MD5 attack was made public that it was decided maybe we should stop using it. Everyone knew it was bad for them, but they figured it wasn't that big of a deal. I feel like everyone understands the SHA-1 attack isn't a huge deal yet, but it's time to get rid of it now, while there's still time.

This is the world we live in now. If you can't move quickly you will fail. It's not a competitive advantage, it's a requirement for survival. Old standards no longer ride into the sunset quietly, they get their lunch money stolen, jacket ripped, then hung by a belt loop on the fence.

Sunday, February 12, 2017

Reality Based Security

If I demand you jump off the roof and fly, and you say no, can I call you a defeatist? What would you think? To a reasonable person it would be insane to associate this attitude with being a defeatist. There are certain expectations that fall within the confines of reality. Expecting things to happen outside of those rules is reckless and can often be dangerous.

Yet in the universe of cybersecurity we do this constantly. Anyone who doesn't pretend we can fix problems is labeled a defeatist and part of the problem. We just have to work harder and never claim something can't be done; that's how we'll fix everything! After being called a defeatist during a discussion, I decided to write some things down. We spend a lot of time trying to fly off of roofs instead of looking for practical, realistic solutions to our security problems.

The way cybersecurity works today, someone will say "this is a problem". Maybe it's IoT, or ransomware, or antivirus, or secure coding, or security vulnerabilities; whatever, pick something, there's plenty to choose from. It's rarely in a general context though; it will be somewhat specific, for example "we have to teach developers how to stop adding security flaws to software". Someone else will say "we can't fix that", then they get called a defeatist for being negative, and it's assumed the defeatists are the problem. The real problem is they're not wrong. It can't be fixed. We will never see humans write error-free code; there is no amount of training we can give them. Pretending it can be fixed is what's dangerous. Pretending we can fix problems we can't is lying.

The world isn’t fairy dust and rainbows. We can’t wish for more security and get it. We can’t claim to be working on a problem if we have no clue what it is or how to fix it. I’ll pick on IoT for a moment. How many security IoT “experts” exist now? The number is non trivial. Does anyone have any ideas how to understand the IoT security problems? Talking about how to fix IoT doesn’t make sense today, we don’t even really understand what’s wrong. Is the problem devices that never get updates? What about poor authentication? Maybe managing the devices is the problem? It’s not one thing, it’s a lot of things put together in a martini shaker, shook up, then dumped out in a heap. We can’t fix IoT because we don’t know what it even is in many instances. I’m not a defeatist, I’m trying to live in reality and think about the actual problems. It’s a lot easier to focus on solutions for problems you don’t understand. You will find a solution, those solutions won’t make sense though.

So what do we do now? There isn't a quick answer, and there isn't an easy answer. The first step is to admit you have a problem though. Defeatists are a real thing, there's no question about it. The trick is to look at the people who are claiming something can't be fixed. Are they giving up, or are they trying to reframe the conversation? If you declare them defeatists, the conversation is now over; you killed it. On the other side of the coin, pretending things are fine is more dangerous than giving up; you're living in a fantasy. The only correct solution is reality based security. Have honest and real conversations, don't be afraid to ask hard questions, don't be afraid to declare something unfixable. An unfixable problem is really just one that needs new ideas.

You can't fly off the roof, but trampolines are pretty awesome.

I'm @joshbressers on Twitter, talk to me.

Monday, February 6, 2017

There are no militant moderates in security

There are no militant moderates. Moderates never stand out for having a crazy opinion or idea, and moderates don't pick fights with anyone they can. Moderates get the work done. Look at the current political climate: how many moderate, reasonable views get attention? Exactly. I'm not going to talk about politics; that dumpster fire doesn't need any more attention than it's already getting. I am, however, going to discuss a topic I'm calling "security moderates": the people who are doing the real security work. They are sane, reasonable, smart, and actually doing things that matter. You might be one; you might know one or two. If I had to guess, they're a pretty big group. And they get ignored quite a lot, because they're too busy getting work done to put on a show.

I’m going to split existing security talent into some sort of spectrum. There’s nothing more fun than grouping people together in overly generalized ways. I’m going to use three groups. You have the old guard on one side (I dare not mention left or right lest the political types have a fit). This is the crowd I wrote about last week; The people who want to protect their existing empires. On the other side you have a lot of crazy untested ideas, many of which nobody knows if they work or not. Most of them won’t work, at best they're a distraction, at worst they are dangerous.

Then in the middle we have our moderates. This group is the vast majority of security practitioners. The old guard think these people are a bunch of idiots who can’t possibly know as much as they do. After all, 1999 was the high point of security! The new crazy ideas group thinks these people are wasting their time on old ideas, their new hip ideas are the future. Have you actually seen homomorphic end point at rest encryption antivirus? It’s totally the future!

Now here’s the real challenge. How many conferences and journals have papers about reasonable practices that work? None. They want sensational talks about the new and exciting future, or maybe just new and exciting. In a way I don’t blame them, new and exciting is, well, new and exciting. I also think this is doing a disservice to the people getting work done in many ways. Security has never been an industry that has made huge leaps driven by new technology. It’s been an industry that has slowly marched forward (not fast enough, but that’s another topic). Some industries see huge breakthroughs every now and then. Think about how relativity changed physics overnight. I won’t say security will never see such a breakthrough, but I think we would be foolish to hope for one. The reality is our progress is made slowly and methodically. This is why putting a huge focus on crazy new ideas isn’t helping, it’s distracting. How many of those new and crazy ideas from a year ago are even still ideas anymore? Not many.

What do we do about this sad state of affairs? We have to give the silent majority a voice. Anyone reading this has done something interesting and useful. In some way you’ve moved the industry forward, you may not realize it in all cases because it’s not sensational. You may not want to talk about it because you don’t think it’s important, or you don’t like talking, or you’re sick of the fringe players criticizing everything you do. The first thing you should do is think about what you’re doing that works. We all have little tricks we like to use that really make a difference.

Next, write it down. This is harder than it sounds, but it's important. Most of these ideas aren't going to be full papers, and that's OK. Industry-changing ideas don't really exist; small incremental change is what we need. It could be something simple like adding an extra step during application deployment, or even adding a banned function to your banned.h file. The important part is explaining what you did, why you did it, and what the outcome was (even if it was a failure, sharing things that don't work has value). Some ideas could be conference talks, but you still need to write things down to get a talk accepted. Just writing it down isn't enough though. If nobody ever sees your writing, you're not really writing. Publish your work somewhere; it's never been easier. Blogs are free, and there are plenty of groups to find and interact with (Reddit, forums, Twitter, Facebook). There is literally a security conference every day of the year. Find a venue, tell your story.
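As an example of how small these write-ups can be, here's a hedged sketch of the banned-function idea: a little Python script that flags calls to banned C functions in a source tree. The banned list below is a hypothetical sample; populate it from whatever your project's banned.h actually forbids.

    import re
    import sys
    from pathlib import Path

    # Hypothetical sample entries; copy the real list from your banned.h.
    BANNED = ("strcpy", "strcat", "sprintf", "gets")
    PATTERN = re.compile(r"\b(" + "|".join(BANNED) + r")\s*\(")

    def scan(root):
        """Print every line in *.c/*.h files under root that calls a banned function."""
        hits = 0
        for path in Path(root).rglob("*.[ch]"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if PATTERN.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")
                    hits += 1
        return hits

    if __name__ == "__main__":
        sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)

Wire something like that into CI and you've made exactly the kind of small, boring, incremental improvement worth writing about.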

There are no militant moderates, and this is a good thing. We have enough militants with agendas. What we need more than ever are reasonable, sane moderates with great ideas, making a difference every day. If the sane middle starts to work together, things will get better, and we will see the change we need.

Have an idea how to do this? Let me know: @joshbressers on Twitter

Sunday, January 29, 2017

Everything you know about security is wrong, stop protecting your empire!

Last week I kept running into old school people trying to justify why something that made sense in the past still makes sense today. Usually I ignore these sorts of statements, but I'm seeing them often enough that it's time to write something up. We're in the middle of disruptive change, which means the way security used to work doesn't work anymore (some people think it does), and in the near future it won't work at all. In some instances it will actually be harmful, if it isn't already.

The real reason I'm writing this up is that there are really two types of leaders: those who lead to inspire change, and those who build empires. For empire builders, change is their enemy; they don't welcome the new disrupted future. Here's a list of the four things I ran into this week that gave me heartburn.

  • You need AV
  • You have to give up usability for security
  • Lock it all down then slowly open things up
  • Firewall everything

Let’s start with AV. A long time ago everyone installed an antivirus application. It’s just what you did, sort of like taking your vitamins. Most people can’t say why, they just know if they didn't do this everyone would think they're weird. Here’s the question for you to think about though: How many times did your AV actually catch something? I bet the answer is very very low, like number of times you’ve seen bigfoot low. And how many times have you seen AV not stop malware? Probably more times than you’ve seen bigfoot. Today malware is big business, they likely outspend the AV companies on R&D. You probably have some control in that phone book sized policy guide that says you need AV. That control is quite literally wasting your time and money. It would be in your best interest to get it changed.

Usability vs security is one of my favorite topics these days. Security lost. It's not that usability won; it's that there was never really a battle. Many of us security types don't realize that though. We believe there is some eternal struggle between security and usability, where we make reasonable and sound tradeoffs between improving the security of a system and adding a text field here or an extra button there. What really happened is the designers asked to use the bathroom and snuck out through the window. We're still waiting for them to come back and discuss where to add in all our great ideas on security.

Another fan favorite is the idea that the best way to improve network security is to lock everything down, then slowly open things up as devices try to get out. See the above conversation about usability: if you do this, people just work around you. They'll use their own devices with network access, or just work from home. I've seen employees using the open wifi of the coffee shop downstairs. Don't lock things down, solve problems that matter. If you think this is a neat idea, you're probably the single biggest security threat your organization has today, so at least identifying the problem won't take long.

And lastly let’s talk about the old trusty firewall. Firewalls are the friend who shows up to help you move, drinks all your beer instead of helping, then tells you they helped because now you have less stuff to move. I won’t say they have no value, they’re just not great security features anymore. Most network traffic is encrypted (or should be), and the users have their own phones and tablets connecting to who knows what network. Firewalls only work if you can trust your network, you can’t trust your network. Do keep them at the edge though. Zero trust networking doesn’t mean you should purposely build a hostile network.

We’ll leave it there for now. I would encourage you to leave a comment below or tell me how wrong I am on Twitter. I’d love to keep this conversation going. We’re in the middle of a lot of change. I won’t say I’m totally right, but I am trying really hard to understand where things are going, or need to go in some instances. If my silly ramblings above have put you into a murderous rage, you probably need to rethink some life choices, best to do that away from Twitter. I suspect this will be a future podcast topic at some point, these are indeed interesting times.

How wrong am I? Let me know: @joshbressers on Twitter.