Picking the right words to describe cloud assets is kind of important

The work of any given IT department is remarkably broad. And within each functional team, the vocabulary around technology can be quite distinct. This is fine when different groups don’t have to work together much, but when they get together to solve problems, one great challenge is making sure specific IT terms mean the same thing to everyone.

And if that isn’t challenging enough, take traditional IT terms and then figure out how they all translate into the ‘cloud’. I’ll give an example. Take the distinction between IaaS and PaaS. The way this is often described is that with PaaS you don’t have to worry about patching an operating system. With IaaS, patching is the customer’s responsibility, not the cloud service provider’s. But the scope of cloud is much bigger than the VM example, and not understanding this can have serious ramifications.

Let’s say you go out into the cloud console for your tenant. (This is the place where you log in to spin up a virtual machine, for example.) Whether you like it or not, the very moment you spin up a VM in the cloud, you’ve created the beginnings of a network topology. Not knowing this can cost you dearly later.
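
To make that concrete, here’s a minimal sketch, assuming Python with the boto3 library and an existing AWS instance (the instance ID is hypothetical). Even a single VM comes with a virtual network, a subnet, and security groups attached:

```python
# A minimal boto3 sketch: every EC2 instance already sits inside a
# network topology, whether you designed one or not.
import boto3

ec2 = boto3.client("ec2")

# "i-0123456789abcdef0" is a hypothetical instance ID.
resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])
instance = resp["Reservations"][0]["Instances"][0]

print("VPC:            ", instance["VpcId"])     # the virtual network
print("Subnet:         ", instance["SubnetId"])  # the segment it lives on
print("Security groups:", [sg["GroupId"] for sg in instance["SecurityGroups"]])
```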

Cloud infrastructure is not just VMs. There’s a whole world of storage, networking, and compute services, too, which we often overlook as part of IaaS. Why does this matter? Because knowing and understanding this is the beginning of securing it. Consider where each of these pieces lives in a traditional on-prem model, and what controls are in place to protect the confidentiality, integrity, and availability of these assets. That same diligence has to be transferred to the cloud. For example, protecting your firewall configurations is not unlike protecting your security group configs on a subnet or VM instance.
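
As one sketch of what that diligence can look like in code (assuming boto3 again; an illustration, not a complete audit), you can sweep your security groups for rules open to the entire internet, much as you would audit an on-prem firewall for any-any rules:

```python
# A rough boto3 sketch: flag security group rules that allow ingress
# from anywhere, the cloud analog of an overly permissive firewall rule.
import boto3

ec2 = boto3.client("ec2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg["IpPermissions"]:
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"{sg['GroupId']} ({sg['GroupName']}) allows "
                      f"protocol {rule.get('IpProtocol')} from anywhere")
```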

Also, how do you track changes to these assets? Whatever diligence you apply in traditional IT models is required in the cloud as well. This includes reviewing and validating the configurations of these virtual assets. Think about what would happen if any one of them, like a subnet or a whole virtual network, were deleted. Where would you be, and what controls do you have in place to keep this from happening? And in the unfortunate case that it does happen, how would you know how it happened and who did it?
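
On AWS, for instance, CloudTrail is one way to answer the “who did it” question. A minimal sketch, assuming boto3 and that CloudTrail logging is already enabled for the account:

```python
# A minimal boto3 sketch: ask CloudTrail who issued DeleteSubnet calls.
# Assumes CloudTrail logging is already enabled for the account.
import boto3

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "DeleteSubnet"}
    ]
)

for event in events["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])
```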

Because it is so much easier to set up infrastructure in the cloud, it is also that much easier to abuse said infrastructure either intentionally or unintentionally. Getting everyone on the same page around the vocabulary for cloud infrastructure is the beginning of fully understanding how to secure this environment. Let’s decide on our critical cloud vocabulary and make sure we all share the same deep understanding of the words we use to describe this environment.

Cybersecurity Risk and a Cadence of Communication

Risk is everywhere. What’s the probability that something bad will happen? And when it does happen, how bad will it be? For folks who work in security these are questions we ask every day, all day.

But it doesn’t stop there. After we get done asking these questions, we have to artfully communicate our approximations to decision makers. Sometimes this works. Mostly it doesn’t.

Part of the challenge is that our calculation of risk involves technology and gobs of technical know-how; the kind of in-the-weeds technical know-how that most business folks don’t find particularly useful. So there’s a translation process. As we translate, the meat of our risk evaluations can get lost. And decision makers don’t have time to get up to speed.

So herein lies the challenge. The business makes risk decisions, like, all the time, but since technological or security risk is hard to understand, they aren’t always arriving at their decision destination with the right knowledge. Is it reasonable to suggest that they can be informed enough to make the right decisions?

I’d say it is. But we can’t presume that a single email or a short briefing will suffice. In order to make communication around risk work, there should be a cadence of communication. A risk report should not be the first time a decision-maker is hearing about a given risk. Security pros can help decision makers build up a baseline of the risk seen in a given environment so that when a risk report does surface, it actually means something. Without regular context for these types of reports, they’re just empty words. In security they may mean something, but that’s as far as the meaning goes.

How can you develop a cadence of communication within your organization?

Wild West Hackin’ Fest: Affordable and Content-Heavy

John Strand, who owns Black Hills Information Security (BHIS), has a way of clearing the fog of what passes for knowledge in the security industry. And he knows how to make his audiences laugh. It’s a kind of cathartic truth-laugh that brings people together. I remember the first time I heard him plug the Wild West Hackin’ Fest (WWHF). I made a mental note. This could be a good, small conference that offers a lot of value. Of course, I knew that there was a lot more to BHIS than its owner, but you can often tell the culture of events from the folks who run them.

So last summer, on our family vacation, I did some recon. We managed to stay a couple nights in Deadwood. Perfect chance to inspect the venue and get a good sense of what a conference here might be like. Yup, I could definitely see this: a security conference in Deadwood.

Not long after that trip I made plans to go. And I convinced a colleague to come with me. It wasn’t fancy. Don’t get me wrong, the Deadwood Mountain Grand Hotel was awesome, but the bulk of the sessions took place in two large rooms and a stage, which were really one large room divided by curtains. But here’s the thing: I don’t need fancy. I need content. And that’s what we got. Session after session was loaded with content.

I remember a talk by Paul Vixie, one of the creators of DNS, that completely sold me on the importance of DNS. And another by Jon Ham, whose passion for forensics made me feel like there was a whole world I’d been skipping over in my infosec career development. Jake Williams was there too. His session was on privilege escalation, and I was like, “Wait, what?” An eye opener indeed. Also memorable was a talk by Annah Waggoner. It was her first talk, and she was inspirational. Giving a talk for the first time at an event like WWHF has to take courage. Which is another thing: WWHF is great about pushing and encouraging folks to present, especially those who haven’t done it before.

I’m not going to rehash every talk, but I do want to encourage people to go to this event. I’m very excited about going again this year! If you want an affordable, content-heavy, hands-on experience, Deadwood in October is the time and place for you!

https://www.wildwesthackinfest.com

How can you be a consultant in your own organization?

We’ve all seen it, especially folks who work in IT, or any area where things are changing faster than they ever have been. We hire consultants to bring value, and they often do, but often not as much as we expect them to.

Just like anyone in our departments, these folks have their specialties and they don’t know everything about everything. The resulting gaps in knowledge can create painful obstacles on the way toward successful project completion. These are the “we don’t know what we don’t know” gaps. Knowledge gaps are challenging, but they also present huge opportunities.

Identifying knowledge gaps and diving into them head first is critical. You don’t know what you don’t know until you start asking yourself what you don’t know. I know, sounds dumb, but that’s where you have to start. If there is no one in your organization who can answer your questions or who can bring value to a high-demand subject area, then it’s time to start diving, digging, reading, watching, learning, asking, etc. This can mean reading books, experimenting with technology, and generally getting out of your comfort zone.

Sure, it’s a lot of work, but if you’re not doing this work, you’re not bringing value to yourself or your organization. As you start to dig, you’re bringing value to yourself, because there are few things more rewarding than learning and then sharing what you’ve learned. You’re bringing value to your organization because they don’t know what they don’t know.

I get it, this process isn’t for everyone. All I’m saying is that the knowledge gap problem is solvable. No training budget? Okay, well, there is seriously more information online than you could digest in a billion lifetimes. Don’t know how to cull through that information? Well, you won’t know how until you start pushing yourself to sort it out. And the thing with learning is that once you learn something, it’s hard to feel like you’ve made any progress, because now you know it and it doesn’t seem like a big deal. So don’t forget to take stock of the things you’re learning. You know more today than you did yesterday!

Also, a big part of learning is sharing what you’ve learned, even if it is nearly immediately after you’ve learned it. It’s like when you share knowledge, the knowledge you share finds a home in your brain.

The more you teach and share, the more you become a consultant in your own organization. You don’t know everything, but neither do your consultants!

Premortem Now!

Apparently, one of the greatest learning experiences a chess player can have occurs once a game is lost. It’s called a postmortem analysis. And it’s hard, miserable work because a player is sitting there with a pile of negative emotions and they have to think through the reasons why they lost…one hateful move at a time. Why is this so important? Because our mistakes have the potential to teach us far more than our successes.

From this concept comes the notion of a “premortem,” which is about getting the benefits of a project’s postmortem analysis well before said project has the chance to fail.

Let’s say your organization is on the verge of a very large project. You’re heading into some significant technological changes which will impact people and processes that have been in place for a very long time. There are so many unknowns that it makes people’s heads spin. How do you make sure groupthink doesn’t prevent critical issues from being resolved ahead of time?

In a word: premortem. Key stakeholders sit around a table and pretend as if the project failed. It went down in flames. The budget was busted. None of the deployments went as planned. Significant damage done and nothing to show for it. At this point you might do a pretend ‘blame game’. Who is to blame for the fact that this project did not succeed?

Which team didn’t do their part? Who didn’t communicate risk the way they were supposed to? What assumptions were made? Or what perceptions did the various teams have of the project? How could we let this happen? Didn’t anyone see those issues coming?

Pretending that a project went the way of Hades is a great way to invite honest discussion without relying on someone to play the role of naysayer. Let’s face it, no one wants to be accused of being overly negative. “Are you on board with this project or not?” A premortem analysis requires that everyone discuss the death of the project and “what went wrong,” not whether it will go wrong. This prevents pitting the “positive people” against the “negative people.”

Even the thought of doing a premortem analysis can cause some folks to feel anxious. Why is this? Is it because it is a lot easier to keep moving than to stop and ask critical questions? Sometimes critical questions lead to uncovering critical issues. Will asking critical questions lead to more work? Will this make an already tight timeline even tighter? No one wants more stress so it’s best to just keep….on…going. Or is it?

I’d like to offer that the time to do a premortem analysis is now. Take a moment amidst planning and discussion meetings to pretend that the project never made it off the ground. Work out all the reasons it failed. Because we can all learn mightily from mistakes. And what could be better than learning from mistakes before they happen?

Study both AWS and Azure for High-level Cloud Understanding

Recently, I’ve found it’s not enough to simply say, “I’m an AWS person” or “I’m an Azure person” when it comes to learning and understanding the cloud. The greatest benefits in learning surface from seeking a high-level understanding of cloud platforms, in general.

These two leading cloud providers, AWS and Azure, are distinct enough from each other, but fundamentally there are features that neither of them can avoid if they are going to maintain market share, or simply be able to provide viable solutions to their customers.

When folks spin up a VM instance at AWS or in the Azure portal, they are likely to overlook a few critical things: 1) these environments are networks unto themselves, 2) where you might’ve been operating a private cloud in the past, which is essentially a traditional data center, you’re now using a public cloud, and 3) where you might envision your life getting simpler with cloud, it’s actually getting more complicated.

With point #1, AWS and Azure both provide ways for their customers to create networks. There may be slight differences in how these networks are set up, but the concepts are the same. Azure calls theirs virtual networks, and AWS calls theirs virtual private clouds, or VPCs. (Oracle calls theirs virtual cloud networks, or VCNs.)
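
To make the sameness concrete, here’s a rough sketch of the AWS flavor using boto3 (the CIDR blocks are arbitrary examples; the Azure portal and SDKs walk you through the equivalent virtual network and subnet steps):

```python
# A rough boto3 sketch: whatever the vendor calls it, you are building
# a network. The CIDR blocks below are arbitrary examples.
import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]  # the network itself
subnet = ec2.create_subnet(                           # a segment within it
    VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24"
)["Subnet"]

print("Created network", vpc["VpcId"], "with subnet", subnet["SubnetId"])
```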

Call it what you want, it is a network. And if you don’t know that, you’re likely in for a world of hurt later when you need to scale or secure your cloud environment. And if you can’t do the same thing in AWS as you do in Azure on this front, then you don’t really understand the fundamentals. And if you try another cloud platform and find you simply can’t fine-tune networks the way you like, then you’ll know why AWS and Azure are leaving the others behind.

With point #2, you are now participating in the use of a public data center. This means it is exceedingly easy to expose your services in ways you maybe didn’t intend. Take a careful look at how AWS and Azure overlap, or don’t, in their approach to exposing services. If you only ever had your head in AWS, you might assume that certain settings are always set to be public, and the same might be true of Azure.

They’re both evolving from month to month in how they handle default public exposure, so I won’t go into too much detail. But if you work with them simultaneously while you’re learning, you’ll make fewer assumptions and ask more critical questions than you would otherwise.
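
As one concrete example of the exposure questions worth asking on the AWS side, here’s a sketch using boto3 that checks whether an S3 bucket blocks public access (the bucket name is hypothetical; Azure storage accounts have analogous settings):

```python
# A sketch of checking one public-exposure setting in AWS: does this
# bucket block public access? The bucket name is hypothetical.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

try:
    config = s3.get_public_access_block(Bucket="example-bucket")
    print(config["PublicAccessBlockConfiguration"])
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
        print("No public access block configured; check exposure manually")
    else:
        raise
```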

With point #3, generally speaking, your life as an IT professional who uses the cloud to build IT infrastructure and services, and to take advantage of pre-built applications, is about to get a bit more complicated. Why? Because now, in addition to all the systems you need to support locally, which help you move data in and out of the cloud, you need to manage the cloud as well. And if you’re using a service that “manages itself,” you need to manage perpetual change just to keep up with it.

Understanding integration in both AWS and Azure will help you make fewer assumptions about the way things should be and, again, think critically about the fact that much of the design is actually up to you. You can’t afford to turn off your critical thinking skills. Having the courage to ask tough questions is even more important than it has been in the past. So if someone asks you, “Should I learn AWS or Azure, or (name your favorite alternative)?” the answer should be “yes.” There is no “or” if you’re going to be the master of your own cloud destiny. 🙂

‘The Cloud’ is Still New

It feels like folks have been talking about ‘the cloud’ forever. But levels of cloud utilization in the form of IaaS, PaaS, etc. have really only ramped up significantly in the last couple years. The tendency is to think that there are ‘cloud’ people who were just born knowing ‘cloud’ and that the chasm between ‘cloud’ and ‘on-prem’ is so great that the ‘on-prem’ folks simply won’t understand this new realm.

Fact is, ‘the cloud’ is still new. And no one is born knowing anything, especially not best-practices around cloud utilization, security, and architecture. Herein lies both risk and opportunity. If we can all just put down our pretensions around cloud know-how and get busy learning, we might actually be able to build, configure and secure our cloud environments in a way that delivers consistent, beautiful results.

But the first step is to remind ourselves how new all of this is, and how revolutionary it is. Organizational leaders, instead of saying, “Hey, what do you know about cloud? Oh, you don’t know anything? Okay, bye,” need to say, “Hey, let’s get learning! See what you can find out about the cloud that will help us meet our goals.” Because the reality is, most of us don’t know everything there is to know about the cloud. It is still new! And it is going to be new for a long time!

If leaders don’t charge their teams with learning, these same leaders will have their business strategies handled entirely by vendors, well-meaning as they may be. And the best solutions and the most remarkable features of ‘the cloud’ will never arrive. Innovation happens with a sense of ownership and dedication. This is less likely to happen when innovative work is attempted by third parties who have ample room to overpromise and underdeliver.

The cloud is still new! Let’s respect that fact and not presume that the best solutions live elsewhere. Bring your teams into this new world and get ready to be blown away. Give them a chance to learn and innovate; don’t write them off. Sometimes the best innovations are right under our noses, but we can’t see them because we’re blinded by the glare of shiny, well-marketed solutions that can be low on substance.

Security Hygiene is Boring and Critical

This has been said many times before by people many times more credentialed than me. There are sexy vulnerabilities out there that take considerable expertise to understand. Then there are vulnerabilities or configurations that are the equivalent of leaving your car door unlocked.

The calculation so often made goes like this: “it hasn’t happened before”, or “I’ll only be gone for a few minutes”.

Oddly, many who have an incredibly honed financial sense about them and who understand that ‘past performance does not equal future results’ have great difficulty extending this concept elsewhere. But nowhere is it more applicable than in security. Past performance does not equal future results! (Or you may have been hacked in the past and not know it.)

The oversight that causes an organization to get hacked in the first place is likely something simple. Are you missing two-factor authentication? Are you still using a default login? Is your password “Spring2019” and do you use it everywhere? These are security concerns that don’t take heaps of expertise to understand; they are boring and critical.

Attackers don’t want to work hard to steal data or install ransomware, so they’re likely to look for simple vulnerabilities or poorly configured networks in order to get the job done. Don’t sweat the small stuff, sweat the simple stuff.

“The Cuckoo’s Egg”: An Old Story – New to Me

Two weekends ago I finished reading “Tribe of Hackers: Cybersecurity Advice from the Best Hackers in the World”. (Please read previous blog entry to learn more.) I was amazed at how many of “Tribe of Hackers” contributors recommended an old book, “The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer Espionage,” which was written by Clifford Stoll in 1989.

The story actually begins at Lawrence Berkeley National Laboratory in 1986. I won’t go into too many details about the setting or the time. In computer years, it was ages ago. So my question: “How could such an old book about tracking down a hacker be so routinely recommended by a slew of highly knowledgeable and well-respected info sec professionals?”

Turns out cybersecurity hasn’t changed much. In “The Cuckoo’s Egg,” the hacker being tracked by Stoll, an astronomer, is aided by the following: 1) default credentials, 2) processes that run as root but shouldn’t, 3) well-known vulnerabilities, 4) the fact that folks can be fooled into entering their credentials into fake sites, 5) the desire of organizations not to share information, 6) the fact that various US agencies described this sort of attack as not their ‘bailiwick’, 7) the fact that various agencies don’t have the expertise to fully comprehend the risk to their data and network infrastructures, and 8) the fact that organizations could not possibly imagine someone actually penetrating their ‘high security’ environments. I’m sure I’m missing a few, but you get the idea.

Besides being a great old book, published when I was a curious, modem-tapping, BBS-surfing adolescent, it’s an excellent primer on the foundations of modern cybersecurity. Sure, the technology has changed, but the fundamentals haven’t moved an inch. Maybe all cybersecurity professionals have heard of this book except for me, but if you haven’t, consider reading it. Even if you’re not after the education, it’s wonderfully entertaining.

Postman API Learning, Testing, and Development

I’m pretty late to the API game. Recently I was on a call with a handful of security engineers, and they explained that they couldn’t afford to have their people staring at console screens anymore. Instead, they rely almost entirely on APIs to automate and streamline their work. I’ve been hearing about API development forever, but I’d never gotten past the first hurdle: how to start. My answer to this is Postman.

Once you have an API you want to consume, you can start making ‘POST’ and ‘GET’ requests pronto and see results immediately. Also, one critical tipping point for me was when I watched a number of the introductory videos that Postman provides. For example, I didn’t understand what the ‘Tests’ section was for. The videos demonstrated that this is where you can write JavaScript to traverse the JSON returned by your requests.
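
For anyone who wants to see the same request-then-inspect cycle outside of Postman, here’s a rough Python sketch using the requests library against a hypothetical endpoint. It does in Python roughly what a Postman ‘Tests’ script does in JavaScript:

```python
# A rough sketch of the request/inspect cycle that Postman streamlines,
# written with Python's requests library. The endpoint is hypothetical.
import requests

resp = requests.get("https://api.example.com/v1/widgets")

# The moral equivalent of a Postman "Tests" script: assert on the status
# code, then traverse the JSON body.
assert resp.status_code == 200, f"unexpected status {resp.status_code}"
for widget in resp.json():
    print(widget.get("id"), widget.get("name"))
```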

Currently, I’m only using a free account. I’m in learning mode, but as I move toward doing more work with APIs in the future, I’ll absolutely be using Postman to test and verify my efforts. It’s also a great introduction to the security advantages and disadvantages of using APIs.

If you have any desire to dig into APIs and consider what they can do to add value to your work, try Postman. And don’t forget to check out a few of their tutorial videos.