Sunday, April 24, 2016

Can we train our way out of security flaws?

I had a discussion about training developers with some people I work with who are smarter than I am. The usual training suggestions came up, but at the end of the day, and this will no doubt enrage some of you, we can't train developers to write secure code.

It's OK, my Twitter handle is @joshbressers, go tell me how dumb I am, I can handle it.

So anyhow, training. It's a great idea in theory. It works in many instances, but security isn't one of them. If you look at where training is really successful, it's for things like how to use a new device or how to work with a bit of software. Those are single-purpose items, and that's the trick. If you have a device that really only does one thing, you can train a person how to use it; it has a finite scope. Writing software has no scope. To quote myself from this discussion:

You have a Turing complete creature, using a Turing complete machine, writing in a Turing complete language, you're going to end up with Turing complete bugs.

The problem with training in this situation is that you can't train for infinite permutations. By its very definition, training can only cover a finite amount of content. Programming requires you to draw on an infinite amount of content. The two are fundamentally mismatched.

Since you've made it this far, let's come to an understanding. First, training, even training in how to write software, is not a waste of time. You can't train someone to write perfectly secure software, but you can teach them to understand the problem (or a subset of it). The tech industry is notorious for seeing everything as all or nothing. It's a sliding scale.

So what's the point?

My thinking here is about how we can approach the challenge differently. Sometimes you have to understand both the problem and the tools you have before you can find better solutions. We love to worry about teaching everyone to be more secure, when in reality it's about many layers, with a small bit of security in each spot.

I hate car analogies, but this time it sort of makes sense.

We don't proclaim the way to stop people getting killed in road accidents is to train them to be better drivers. In fact, I've never heard anyone claim this is the solution. We have rules that dictate how the road is to be used (which humans ignore). We have cars with lots of safety features (which humans love to disable). We have police patrolling the roads to ensure the rules are being followed. We have safety built into lots of roads, like guard rails and rumble strips. At the end of the day, even with layers of safety built in, there are accidents, lots of accidents, and almost no calls for more training.

You know what the current talk about making things safer is? Self-driving cars. It's ironic that software may be the solution to human safety. The point, though, is that every system reaches a point where the best you can ever do is marginal improvements. Cars are there, and software is there. If we want to see substantial change, we need new technology that changes everything.

In the meantime, we can continue to add layers of safety for software; that's where most of the effort seems to be today. We can leverage our existing knowledge and understanding of problems to work on making things marginally better. Some of this could be training, some of this will be technology. What we really need to do, though, is figure out what's next.

Just as humans are terrible drivers, we are terrible developers. We won't fix auto safety with training any more than we will fix software security with training. Of course there are basic rules everyone needs to understand, which is why some training is useful. But we're not going to see any significant security improvements without some sort of new technology breakthrough. I don't know what that is; nobody does yet. What is self-driving software development going to look like?

Let me know what you think. I'm @joshbressers on Twitter.

Sunday, April 17, 2016

Software end of life matters!

Anytime you work on a software project, the big events are always the new releases. We love getting our updates and seeing what sort of new and exciting things have been added. New versions are exciting; they're the result of months or years of hard work. Who doesn't love to talk about the new cool things going on?

There's a side of software that rarely gets talked about though, and honestly, in the past it just wasn't all that important or exciting. That's end of life. When is it time to kill off the old versions? Or sometimes even kill an entire project? And when you do, what happens to the people using it? These are hard things to decide, there usually aren't good answers, and it's just not a topic we're good at yet.

I bring this up now because apparently Apple has decided that QuickTime on Windows is no longer a thing. I think everyone can agree that expecting users to find some obscure message on the Internet telling them they should uninstall something is pretty far-fetched.

The conversation is way bigger than just Apple though. Google is going to brick some old Nest hardware. What about all those old tablets that still work but have no security updates? What about all those Windows XP machines still out there? I bet there are people still using Windows 95!

In some instances, the software and hardware can be decoupled. If you're running XP you can probably upgrade to something slightly better (maybe). Generally speaking though, you have some level of control. If you think about tablets or IoT-style devices, the software and hardware are basically the same thing. The software will likely reach end of life before the hardware stops working. So what does that mean? In the case of pure software, if you need it to get work done, you're not going to uninstall it. Unfortunately, it's all really complex, which is why nobody has figured this out yet.

In the past, you could keep most "hardware" working almost forever. There are cars out there nearly 100 years old. They still work and can be fixed. That's crazy. The thought of 100-year-old software should frighten you to your core. They may have stopped making your washing machine years ago, but it still works and you can get it fixed. We've all seen the power tools our grandfathers used.

Now what happens when we decide to connect something to the Internet? Now we've chained the hardware to the software. Software has a defined lifecycle: it is born, it lives, it reaches end of life. Physical goods don't have a predetermined end of life (I know, it's complicated, let's keep it simple); they break, and you get a new one. If we add software to this mix, software that creates a problem once it hits the end-of-life stage, what do we do? There are really only two options:

1) End the life of the hardware (brick it).
2) Let the hardware continue to run with the known-bad software.

Neither is ideal. Now, there are some devices where you could just cut off features. A refrigerator, for example. Instead of knowing when to order more pickles, it reverts back to only keeping things cold. While this could create confusion in the pickle industry, at least you still have a working device. Other things would be tricky. An internet-connected smart house isn't very useful if the things in it can't talk to each other. A tablet without internet isn't good for much.
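
To make the "cut off features" option concrete, here's a minimal sketch in Python. Everything in it is hypothetical (the cutoff date, the feature names); the point is just that the core function can outlive the connected extras.

from datetime import date

# Hypothetical support cutoff for the device's software.
SOFTWARE_EOL = date(2020, 1, 1)

def allowed_features(today):
    # The core function needs no network and no updates; it always works.
    features = {"keep_things_cold"}
    if today < SOFTWARE_EOL:
        # The connected extras only run while the software is supported.
        features |= {"order_pickles", "remote_app"}
    return features

print(allowed_features(date(2019, 6, 1)))  # full feature set
print(allowed_features(date(2024, 6, 1)))  # degraded, but still a fridge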

I don't have any answers, just questions. We're still trying to sort out what this all means, I suspect. If you think you know the answer, I imagine you don't understand the question. This one is turtles all the way down.

What do you think? Tell me: @joshbressers

Tuesday, April 12, 2016

What happened with Badlock?

Unless you live under a rock, you've heard of the Badlock security issue. It went public on April 12. Then things got weird.

I wrote about this a bit in a previous post. I mentioned there that this had better be good. If it's not, people will get grumpy. People got grumpy.

The thing is, this is a nice security flaw. Whoever found it is clearly bright, and if you look at the Samba patchset, it wasn't trivial to fix. Hats off both to the researcher who found it and to the Samba team that fixed it.
$ diffstat -s samba-4.4.0-security-2016-04-12-final.patch
 227 files changed, 14582 insertions(+), 5037 deletions(-)

Here's the thing though. It wasn't nearly as good as the hype claimed. It probably couldn't ever be as good as the hype claimed. This is like waiting for a new Star Wars movie. You have memories from being a child and watching the first few. They were like magic back then. Nothing that ever comes out again will be as good. Your brain has created ideas and memories that are too amazing to even describe. Nothing can ever beat the reality you built in your mind.

Badlock is a similar concept.

Humans are squishy, irrational creatures. When we know something is coming, one of two things happens. We imagine the most amazing thing ever, which nothing will live up to (the end result here is being disappointed). Or we imagine something stupid, which almost anything will be better than (the end result here is being pleasantly surprised).

I think most of us were expecting the most amazing thing ever. We had weeks to imagine the worst possible security flaw that affects Samba and Windows, and most of us can imagine some pretty amazing things. We didn't get that though. We didn't get amazing. We got a pretty good security flaw, but not one that will change the world. We expected amazing, we got OK, and now we're angry. If you look at Twitter, the poor guy who discovered this is probably having a bad day. Honestly, nothing could have lived up to the elevated expectations that were set.

All that said, I do think announcing this weeks in advance created the atmosphere. If it had all been quiet until today, we would have been impressed, even if it had a name. Hype isn't something you can usually control. Some try, but by its very nature things get out of hand quickly and easily.

I'll leave you with two bits of wisdom you should remember.

  1. Name your pets, not your security flaws.
  2. Never over-hype security. Always underpromise and overdeliver.

What do you think? Tell me: @joshbressers

Sunday, April 10, 2016

Cybersecurity education isn't good, nobody is shocked

There was a news story published last week about the almost total lack of cybersecurity attention in undergraduate education. Most people in the security industry won't be surprised by this. In the majority of cases when the security folks have to talk to developers, there is a clear lack of understanding about security.

Every now and then I run across someone claiming that our training and education is going great. Sometimes I believe them for a few seconds, then I remember the state of things. Here's the thing: while there are a lot of good training and education opportunities, the ratio of competent security people to developers is without doubt going down. Software engineering positions are growing at more than double the rate of other positions, and it's significantly harder to educate a security person than a developer; the math says there's a problem here (and that disregards the fact that as an industry we do a horrible job of passing on knowledge).
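
To see why the math is a problem, here's a quick back-of-the-envelope sketch in Python. The headcounts and growth rates are made-up illustrations, not real data; the point is only that when one population grows faster than the other, the ratio gets worse every single year.

# Assumed starting headcounts and annual growth rates (illustrative only).
developers, security = 1_000_000, 50_000
for year in range(1, 11):
    developers *= 1.10  # developer positions growing at 10% a year
    security *= 1.04    # security positions growing at 4% a year
    print(f"year {year}: one security person per "
          f"{developers / security:.0f} developers")

Under these made-up numbers, one security person covers 20 developers today and about 35 a decade from now. The gap only widens.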

While it's clear students don't care about security, the question is: should they?

It's always easy to pull out an analogy here, comparing this to car safety, or maybe architects vs civil engineers. Those analogies never really work though; the rules are just too different. The fundamental problem really boils down to the fact that a 12-year-old kid in his basement has access to the exact same tools and technology as the guy working on his PhD at MIT. I'm not sure there has ever been an industry with a similar situation. Generally, those in large organizations had access to significant resources that a normal person doesn't, like building a giant rocket or a bridge.

Here is what we need to think about.

Would we expect a kid learning how to build a game on his dad's computer to also learn security? If I were that kid, I would say no. I want to build a game; security sounds dumb.

What about a college kid interested in computer algorithms? Security sounds uninteresting and is probably a waste of time. Remember when they made you take that phys ed class and all the jocks laughed at you while you whispered to yourself about how they'd all be working at a gas station someday? Yeah, that's us now.

Let's assume that normal people don't care about security and don't want to care about security, what does that mean?

The simple answer would be to "fix the tools", but that's a chicken-and-egg problem. Developers build their own tools at a rather impressive speed these days; you can't really secure that stuff.

What if we sandbox everything? That really only protects the underlying system. Most everything interesting these days is in the database, and you can still steal all of that from a sandbox.
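
Here's a tiny sketch of why the sandbox doesn't save you, using a hypothetical app and an in-memory SQLite database. The attacker never breaks out of the sandbox; the data walks out through the application's own, perfectly legitimate database access.

import sqlite3

# Stand-in for the production database the sandboxed app is allowed to reach.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup(name):
    # Classic injectable query; the sandbox has no opinion about this.
    query = "SELECT name FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

# No file access, no blocked syscalls, nothing for the sandbox to stop.
print(lookup("x' UNION SELECT password FROM users --"))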

Maybe we could ... NO, just stop.

So how can we fix this?

We can't.

It's not that the problems are unfixable; it's that we don't understand them well enough. My best comparison here is the futurists who wondered how New York could possibly deal with all the horse manure if the city kept growing. Clearly they were thinking only in the context of what was available to them at the time. We think this way too. It's not that we're dumb; it's that we don't really understand the problems. The problems aren't insecure code or bad tools. It's something more fundamental than that. Did we expect the people cleaning up after the horses to solve the manure problem?

If we start to think about the fundamentals, what's the level below our current development models? In the horse example it was really about transportation, not horses, but horses are what everyone obsessed over. Our problems aren't really developers, code, and education. It's something more fundamental. What is it though? I don't know.

Do you think you know? Tell me: @joshbressers

Sunday, April 3, 2016

Security is really about Risk vs Reward

Every now and then the conversation erupts about what security really is. There's the old saying that the only secure computer is one that's off (or fill in your favorite quote here, there are hundreds). But the thing is, security isn't a binary concept: you can't simply be secure or insecure. That's not how anything works. Everything is a sliding scale; you are never fully secure, and you are never fully insecure. You're somewhere in the middle. Rather than bumbling around blind to your risk, you need to understand what's going on and plan for that risk.

So this brings us to the idea of risk and reward. Rather than just thinking about security, you have to think about how everything fits together. It doesn't matter if your infrastructure is super secure if nobody can do their jobs. As we've all seen over and over, if security gets in the way, security loses. Every. Single. Time.

I think about this a lot, and I've come up with a graph that I think explains it nicely.

[Graph: a two-by-two matrix with risk on one axis and reward on the other. The quadrants are "Why" (high risk, low reward), "Innovation" (high risk, high reward), "No Brainer" (low risk, high reward), and sustaining day-to-day operations (low risk, low reward).]
Don't think in the context of secure or insecure. Think in the context of how much risk you have. Once you understand what your risks are, you can decide whether the level of risk you're taking on is justified by what that risk will get you. This of course holds true for nearly all decisions, not just security, but we'll focus on security here.

The above graph puts things into four groups. If you have a high level of risk with minimal reward (the "Why" box), you're making a bad decision. Anything you have in that "Why" box probably needs to go away ASAP; you will regret it someday.

Additionally, if your sustaining operations are high risk, you're probably doing something wrong. Risk is hard and drains an organization; you should be conducting your day-to-day operations in a manner that poses low risk, as the day-to-day is generally not where the high reward is.

The place you want to be is in the "Innovation" or "No Brainer" boxes. Accepting a high level of risk isn't always a bad thing, assuming that risk comes with significant rewards. You can imagine deploying a new and untested technology whose business benefits could change everything, or perhaps using a new, untested vendor for the first time.
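
If it helps to make the quadrants concrete, here's a trivial sketch. The 0-to-1 scores and the 0.5 cutoffs are arbitrary illustrations, and I've named the low-risk, low-reward box "Sustaining" after the day-to-day operations above; real risk assessment is messier, but the bucketing is exactly what the graph describes.

def quadrant(risk, reward):
    # Scores assumed normalized to 0..1; 0.5 is an arbitrary dividing line.
    if risk >= 0.5:
        return "Innovation" if reward >= 0.5 else "Why"
    return "No Brainer" if reward >= 0.5 else "Sustaining"

print(quadrant(0.9, 0.1))  # Why -- get rid of this ASAP
print(quadrant(0.8, 0.9))  # Innovation -- risk justified by the reward
print(quadrant(0.1, 0.9))  # No Brainer -- do it
print(quadrant(0.2, 0.2))  # Sustaining -- day-to-day operations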

We have to be careful with risk. Risk can be crippling if you don't understand and manage it. It can also destroy everything you've built if you let it get out of hand. Many of us find ourselves in situations where all risk is seen as bad. Risk isn't always bad, and risk is never zero. It's up to everyone to determine their acceptable level of risk. Never forget, though, that sometimes we need to bump up our level of risk to get to the next level of reward. Just make sure you can bring that risk back under control once you start seeing the outcomes.

What do you think? Let me know: @joshbressers