Picking the right words to describe cloud assets is kind of important

The work of any given IT department is remarkably broad. And within each functional team, the vocabulary around technology can be quite distinct. This is fine when different groups don’t have to work together much, but when they come together to solve problems, one of the great challenges is making sure specific IT terms mean the same thing to everyone.

And if that isn’t challenging enough, take traditional IT terms and then figure out how they all translate into the ‘cloud’. I’ll give an example: the distinction between IaaS and PaaS. The way this is often described is that with PaaS you don’t have to worry about patching an operating system, while with IaaS, patching is the customer’s responsibility, not the cloud service provider’s. But the scope of cloud is much bigger than the VM example. And not understanding this can have serious ramifications.

Let’s say you go out into the cloud console for your tenant. (This would be the place where you log in to spin up a virtual machine, for example.) Whether you like it or not, the very moment you spin up a VM in the cloud you’ve created the beginnings of a network topology. Not knowing this can cost you dearly later.

Cloud infrastructure is not just VMs. There’s a whole world of storage, networking and compute services, too, which we often overlook when we talk about IaaS. Why does this matter? Because knowing and understanding this is also the beginning of securing it. Consider where each of these pieces lives in a traditional on-prem model, and what controls are in place to protect the confidentiality, integrity and availability of these assets. That same diligence has to be transferred to the cloud. For example, protecting your firewall configurations is not unlike protecting your security group configs on a subnet or VM instance.
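As a minimal sketch of what that diligence can look like in practice, here’s one way to scan for security group rules that are open to the whole internet, using AWS and the boto3 library. The specifics (credentials, region, and what counts as “too open”) are assumptions for illustration; Azure network security groups deserve the same treatment.

```python
import boto3

# A rough sketch: list security groups and flag any rule that allows
# traffic from anywhere (0.0.0.0/0). Assumes AWS credentials and a
# default region are already configured.
ec2 = boto3.client("ec2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"{sg['GroupId']} ({sg['GroupName']}) allows "
                      f"{rule.get('IpProtocol')} from anywhere")
```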

Also, how do you track changes to these assets? Whatever diligence you apply in traditional IT models, the same diligence is required in the cloud. This includes reviewing and validating configurations on these virtual assets. Think about what would happen if any one of these virtual assets, like a subnet or a whole virtual network, were to be deleted. Where would you be, and what controls do you have in place to keep this from happening? And in the unfortunate case that it does happen, how would you know how it happened and who did it?
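On AWS, for example, CloudTrail can answer the “who did it” question, assuming it was enabled before the change happened. Here’s a minimal sketch of looking up subnet deletions with boto3; the event name and fields are the standard CloudTrail ones, but treat this as illustrative rather than a complete audit process.

```python
import boto3

# A rough sketch: ask CloudTrail who deleted a subnet and when.
# Assumes CloudTrail is enabled and credentials are configured.
cloudtrail = boto3.client("cloudtrail")

response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "DeleteSubnet"}
    ],
    MaxResults=10,
)
for event in response["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])
```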

Because it is so much easier to set up infrastructure in the cloud, it is also that much easier to abuse said infrastructure either intentionally or unintentionally. Getting everyone on the same page around the vocabulary for cloud infrastructure is the beginning of fully understanding how to secure this environment. Let’s decide on our critical cloud vocabulary and make sure we all share the same deep understanding of the words we use to describe this environment.

Cybersecurity Risk and a Cadence of Communication

Risk is everywhere. What’s the probability that something bad will happen? And when it does happen, how bad will it be? For folks who work in security these are questions we ask every day, all day.

But it doesn’t stop there. After we get done asking these questions, we have to artfully communicate our approximations to decision makers. Sometimes this works. Mostly it doesn’t.

Part of the challenge is that our calculation of risk involves technology and gobs of technical know-how; the kind of in-the-weeds technical know-how that most business folks don’t find particularly useful. So there’s a translation process. As we translate, the meat of our risk evaluations can get lost. And decision makers don’t have time to get up to speed.

So herein lies the challenge. The business makes risk decisions, like, all the time, but since technological or security risk is hard to understand, they aren’t always arriving at their decision destination with the right knowledge. Is it reasonable to suggest that they can be informed enough to make the right decisions?

I’d say it is. But we can’t presume that a single email or a short briefing will suffice. In order to make communication around risk work, there should be a cadence of communication. A risk report should not be the first time a decision-maker is hearing about a given risk. Security pros can help decision makers build up a baseline of the risk seen in a given environment so that when a risk report does surface, it actually means something. Without regular context for these types of reports, they’re just empty words. To security they may mean something, but that’s as far as the meaning goes.

How can you develop a cadence of communication within your organization?

English Major into Security Analyst

I’ve found it interesting to read about how people arrived in the field of information security. Each person has a unique story to tell — no two paths are exactly the same, and some diverge considerably. Here’s my story.

I got off to a non-traditional start, graduating from college with a major in English. From there I embarked on a random work history: dry cleaner, bakery, greasy spoon grill cook, bus driver, book store, D.C. intern. I won’t go into all the details of all that, but I will take a moment to mark what I view as the true beginning of my IT career.

Hired as a temp worker writing code in Excel VBA (that’s right, Excel Visual Basic for Applications), I designed Excel reports that took loads of data and moved it around in a workbook for charts, graphs, etc. This was object-oriented programming with a miserable IDE. I would have to plan when and how I made changes because it literally took 1-3 minutes to save. I worked on the boss’s daughter’s computer. I can still see the colorful stickers plastered everywhere on the chassis.

I built odd things: reports that changed languages on the fly with the press of a button (within the workbook), an Excel workbook that doubled as a scantron form, and charts and graphs that built themselves dynamically. We delivered reports that ran very complex macros in large corporate network environments. I look back at this now and it seems utterly INSANE…from a security perspective.

Getting used to programming concepts literally made my brain hurt. I spent many a lunch break lying in a lawn chair, holding on to my head. I also did .NET web development and started writing SQL queries, along with building out reports and integrations.

My next job required that I learn C# and do even more SQL Server work. Here’s where I started doing stuff with credit card numbers: encrypting them, storing them, passing them around with APIs, etc. I’m not going to comment on best practices with any of that, but suffice it to say I studied PCI compliance aggressively. I also learned what audits were like. And I learned about things like check digits, electronic check formats, and electronic check processing. All of this was my introduction to cybersecurity. It was my first foray into the imaginative world of threat modeling…and where things can go wrong with data.

After that, I took a job that focused primarily on business intelligence. This involved more SQL Server in the form of SSRS, SSIS, and something new: SSAS (SQL Server Analysis Services), which is basically an Excel pivot table on steroids (a slight oversimplification, but a handy one for quick explanations). Then I did an awkward shift into Oracle’s business intelligence world. This pulled me into data warehouse development and fairly heavy development in OBIEE and the dreaded RPD file. I also did some work around analytics. And, in my spare time, reviewed classes on machine learning.

Through all of this, I remained interested in security, so when a security analyst opportunity showed up, I took it. I landed the job, I think, because of my applications development experience and my full exposure to the world of PCI compliance and threat modeling. Right away I dove into vulnerability management, which has me hitting nearly everything in the environment with packets. In addition to this, I now study cloud infrastructure and security at the same time I study OT/ICS security. And I am working out how to implement both at the same time. These two areas were once incredibly far apart, but in some ways, they seem to be getting closer every day.

Through all of this, I maintain a fascination I’ve had with Linux for like 15 years. Every year Linux gets better and better and better.

I also maintain an interest in pen-testing. There is so much to learn in this area that it keeps a person coming back over and over again to study new tools and approaches to seeing and validating vulnerabilities. So that’s my story for now. Hats off to you if you read this whole post, and good luck on your infosec journey!

Wild West Hackin’ Fest: Affordable and Content-Heavy

John Strand, who owns Black Hills Information Security (BHIS), has a way of clearing the fog of what passes for knowledge in the security industry. And he knows how to make his audiences laugh. It’s a kind of cathartic truth-laugh that brings people together. I remember the first time I heard him plug the Wild West Hackin’ Fest (WWHF). I made a mental note. This could be a good, small conference that offers a lot of value. Of course, I knew that there was a lot more to BHIS than its owner, but you can often tell the culture of events from the folks who run them.

So last summer, on our family vacation, I did some recon. We managed to stay a couple nights in Deadwood. Perfect chance to inspect the venue and get a good sense of what a conference here might be like. Yup, I could definitely see this: a security conference in Deadwood.

Not long after that trip I made plans to go. And I convinced a colleague to come with me. It wasn’t fancy. Don’t get me wrong. The Deadwood Mountain Grand Hotel was awesome, but the bulk of the sessions were basically in two large rooms and a stage, which were really part of one large room divided by curtains. But here’s the thing. I don’t need fancy. I need content. And that’s what we got. Session after session was loaded with content.

I remember a talk by Paul Vixie, a pivotal figure in the development of DNS, that completely tied me in to the importance of DNS. And another talk by Jon Ham where his passion for forensics made me feel like there was a whole world that I’d been skipping over in my infosec career development. And Jake Williams was there too. His session was on privilege escalation. And I was like, “Wait, what?” — an eye opener indeed. Also memorable was a talk by Annah Waggoner. It was her first talk and she was inspirational. Doing a talk for the first time at an event like WWHF has to take courage. Which is another thing: WWHF is great about pushing and encouraging folks to present, especially those who haven’t done it before.

I’m not going to rehash every talk, but I do want to encourage people to go to this event. I’m very excited about going again this year! If you want an affordable, content-heavy, hands-on experience, Deadwood in October is the time and place for you!

https://www.wildwesthackinfest.com

How can you be a consultant in your own organization?

We’ve all seen it, especially folks who work in IT, or any area where things are changing faster than they ever have been. We hire consultants to bring value, and they often do, but often not as much as we expect them to.

Just like anyone in our departments, these folks have their specialties and they don’t know everything about everything. The resulting gaps in knowledge can create painful obstacles on the way toward successful project completion. These are the “we don’t know what we don’t know” gaps. Knowledge gaps are challenging, but they also present huge opportunities.

Identifying knowledge gaps and diving into them head first is critical. You don’t know what you don’t know until you start asking yourself what you don’t know. I know, sounds dumb, but that’s where you have to start. If there is no one in your organization who can answer your questions or who can bring value to a high-demand subject area, then it’s time to start diving, digging, reading, watching, learning, asking, etc. This can mean reading books, experimenting with technology, and generally getting out of your comfort zone.

Sure, it’s a lot of work, but if you’re not doing this work, you’re not bringing value to yourself or your organization. As you start to dig, you’re bringing value to yourself because there are few things more rewarding than learning, and then sharing what you’ve learned. You’re bringing value to your organization because they don’t know what they don’t know.

I get it, this process isn’t for everyone. All I’m saying is that the knowledge gap problem is solvable. No training budget? Okay, well, there is seriously more information online than you could digest in a billion lifetimes. Don’t know how to cull through that information? Well, you won’t know how until you start pushing yourself to sort it out. And the thing with learning is that once you learn something, it’s hard to feel like you’ve made any progress because now you know it and it doesn’t seem like a big deal. So don’t forget to take stock of the things you’re learning. You know more today than you did yesterday!

Also, a big part of learning is sharing what you’ve learned, even if it is nearly immediately after you’ve learned it. It’s like when you share knowledge, the knowledge you share finds a home in your brain.

The more you teach and share, the more you become a consultant in your own organization. You don’t know everything, but neither do your consultants!

Premortem Now!

Apparently, one of the greatest learning experiences a chess player can have occurs once a game is lost. It’s called a postmortem analysis. And it’s hard, miserable work because a player is sitting there with a pile of negative emotions and they have to think through the reasons why they lost…one hateful move at a time. Why is this so important? Because our mistakes have the potential to teach us far more than our successes.

From this concept comes the notion of a “premortem”, which is about getting the benefits of a project’s postmortem analysis well before said project has the chance to fail.

Let’s say your organization is on the verge of a very large project. You’re heading into some significant technological changes which will impact people and processes that have been in place for a very long time. There are so many unknowns that it makes people’s heads spin. How do you make sure groupthink doesn’t prevent critical issues from being resolved ahead of time?

In a word: premortem. Key stakeholders sit around a table and pretend as if the project failed. It went down in flames. The budget was busted. None of the deployments went as planned. Significant damage done and nothing to show for it. At this point you might do a pretend ‘blame game’. Who is to blame for the fact that this project did not succeed?

Which team didn’t do their part? Who didn’t communicate risk the way they were supposed to? What assumptions were made? Or what perceptions did the various teams have of the project? How could we let this happen? Didn’t anyone see those issues coming?

Pretending that a project went the way of Hades is a great way to invite honest discussion without relying on someone to play the role of naysayer. Let’s face it, no one wants to be accused of being overly negative. “Are you on board with this project or not?” A premortem analysis requires that everyone discuss the death of the project and “what went wrong”, not whether it will go wrong. This prevents pitting the “positive people” against the “negative people”.

Even the thought of doing a premortem analysis can cause some folks to feel anxious. Why is this? Is it because it is a lot easier to keep moving than to stop and ask critical questions? Sometimes critical questions lead to uncovering critical issues. Will asking critical questions lead to more work? Will this make an already tight timeline even tighter? No one wants more stress so it’s best to just keep….on…going. Or is it?

I’d like to offer that the time to do a premortem analysis is now. Take a moment amidst planning and discussion meetings to pretend that the project never made it off the ground. Work out all the reasons it failed. Because we can all learn mightily from mistakes. And what could be better than learning from mistakes before they happen?

Study both AWS and Azure for High-level Cloud Understanding

Recently, I’ve found it’s not enough to simply say, “I’m an AWS person” or “I’m an Azure person” when it comes to learning and understanding the cloud. The greatest benefits in learning surface from seeking a high-level understanding of cloud platforms, in general.

These two leading cloud providers, AWS and Azure, are distinct enough from each other, but fundamentally there are features that neither of them can avoid if they are going to maintain market share, or simply be able to provide viable solutions to their customers.

When folks spin up a VM instance at AWS or in the Azure portal, they are likely to overlook a few critical things. 1) These environments are networks unto themselves, 2) Where you might’ve been operating a private cloud in the past, which is essentially a traditional data center, you’re now using a public cloud, and 3) Where you might envision your life getting simpler with cloud, it’s actually getting more complicated.

With point #1, AWS and Azure both provide ways for their customers to create networks. There may be slight differences in how these networks are set up, but the concepts are the same. Azure calls theirs virtual networks and AWS describes them as virtual private cloud or VPC instances. (Oracle calls theirs virtual cloud network or VCNs.)

Call it what you want, it is a network. And if you don’t know that, you’re likely in for a world of hurt later when you need to scale or secure your cloud environment. And if you can’t do the same thing in AWS as you do in Azure on this front, then you don’t really understand the fundamentals. And if you try another cloud platform and find you simply can’t fine-tune networks the way you like, then you’ll know why AWS and Azure are leaving it behind.
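To make the point concrete, here’s a minimal sketch of the AWS side using boto3; the CIDR ranges are arbitrary example values. The moment you create a VPC and a subnet, you have a network topology to scale and secure. The Azure equivalent is a virtual network and subnet created through the Azure SDK or portal, and the concepts carry over directly.

```python
import boto3

# A rough sketch: creating a VPC and a subnet is creating a network.
# CIDR blocks below are arbitrary example values; assumes configured credentials.
ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
print("Created network:", vpc_id, "with subnet:", subnet["Subnet"]["SubnetId"])
```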

With point #2, you are now participating in the use of a public data center. This means it is exceedingly easy to expose your services in ways you maybe didn’t intend. Take a careful look at how AWS and Azure overlap or don’t overlap in their approach to exposing services. If you only ever had your head in AWS, you might assume that certain settings are always set to be public, and the same might be true of Azure.

They’re both evolving from month to month in their approach to how they handle default public exposure, so I won’t go into too much detail. But if you work with them simultaneously while you’re learning, you’ll make fewer assumptions and ask more critical questions than you would otherwise.
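As one hedged example on the AWS side, you can at least check which S3 buckets lack a fully enabled public access block rather than assuming the defaults protect you. Azure’s “allow blob public access” setting on storage accounts deserves the same scrutiny.

```python
import boto3
from botocore.exceptions import ClientError

# A rough sketch: flag S3 buckets with no public access block, or one
# that isn't fully enabled. Defaults shift over time, so verify.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        fully_blocked = all(config.values())
    except ClientError:
        fully_blocked = False  # no public access block configured at all
    if not fully_blocked:
        print(f"Review public exposure settings for bucket: {name}")
```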

With point #3, generally speaking, your life as an IT professional who uses the cloud to build IT infrastructure and services, and to take advantage of pre-built applications, is about to get a bit more complicated. Why? Because now, in addition to all the systems you need to support locally, which help you move data in and out of the cloud, you need to manage the cloud as well. And if you’re using a service that “manages itself”, you need to manage perpetual change just to keep up with it.

Understanding integration in both AWS and Azure will help you make fewer assumptions about the way things should be and, again, think critically about the fact that much of the design is actually up to you. You can’t afford to turn off your critical thinking skills. Having the courage to ask tough questions is even more important than it has been in the past. So if someone asks you, “Should I learn AWS or Azure, or (name your favorite alternative)?”, the answer should be “yes”. There is no “or” if you’re going to be the master of your own cloud destiny. 🙂

“Sapiens” and Economic Value

To some extent, economics is the study of how people produce more (both in variation and volume) when they work together. Most of the time, people have a place in the world’s economy when they provide value, which is measured by money and credit…mostly.

The book “Sapiens: A Brief History of Humankind” by Yuval Noah Harari has me thinking differently about economics. Harari takes us into critical transitions in human history; like the years just before and after the invention of “credit”. According to Harari, “credit” is anchored in the belief that the future will be better than the past. For most of human history, people assumed the reverse. The future was no match for the glory of the past.

Once credit took hold, however, both for good and ill, it allowed for a greater and more frequent transfer of value. Humanity could start to build a future together. And value could begin to be sought out in all corners of the globe. Trade and credit meant that we could do more together. And the more humans worked together to produce what they needed (or wanted) the more the economy grew. With all the benefits of economic growth, humans also witnessed exploitation and abuse of this system. Individuals and institutions figured out how to steal value from others who weren’t in a position to know better or defend themselves.

Unfortunately, trading on stolen value still happens today. But in the greater scheme of things, I find myself wondering about how we’re going to manage value and economic growth in the future. We’re moving from exploiting people to simply eliminating them from the equation altogether. If people are not providing direct value to the global economy, will they be able to participate? Will there be huge swaths of people who can’t take advantage of all the value being created because they won’t have anything to offer in exchange for it?

Think of the countries or societies that are generating value and those that aren’t. Countries that don’t generate value fall victim to crime and exploitation. The further they get from full participation in the global economy, the further they get from the benefits of modern society. Disproportionately they end up on the downside of the world’s value systems.

As a result, with no value accessible to them, citizens in these countries migrate toward countries where value is accessible; where they have a chance of participating and producing value of their own. These value destinations, however, have responded by restricting their borders. Also, they attempt to control the flow of value by forcing their hand in trade deals. But these kinds of restrictions are antithetical to what actually makes a global economy work in the first place. We generate value when we work together.

Sure, there’s competition, but ultimately the real wins happen when we engage countries and societies who have been left out. And we all win when we help them generate value. The more overall participation we get, the better we’ll all be. Both because we’ll benefit from what these countries have to offer and because they won’t become feeders for crime and violence.

‘The Cloud’ is Still New

It feels like folks have been talking about ‘the cloud’ forever. But levels of cloud utilization in the form of IaaS, PaaS, etc. have really only ramped up significantly in the last couple years. The tendency is to think that there are ‘cloud’ people who were just born knowing ‘cloud’ and that the chasm between ‘cloud’ and ‘on-prem’ is so great that the ‘on-prem’ folks simply won’t understand this new realm.

Fact is, ‘the cloud’ is still new. And no one is born knowing anything, especially not best-practices around cloud utilization, security, and architecture. Herein lies both risk and opportunity. If we can all just put down our pretensions around cloud know-how and get busy learning, we might actually be able to build, configure and secure our cloud environments in a way that delivers consistent, beautiful results.

But the first step is to remind ourselves how new all of this is, and how revolutionary it is. Organizational leaders, instead of saying, “Hey, what do you know about cloud? Oh, you don’t know anything? Okay, bye,” need to say, “Hey, let’s get learning! See what you can find out about the cloud that will help us meet our goals.” Because the reality is, most of us don’t know everything there is to know about the cloud. It is still new! And it is going to be new for a long time!

If leaders don’t charge their teams with learning, these same leaders will have their business strategies singularly handled by vendors — well-meaning as they may be. And the best solutions and the most remarkable features of ‘the cloud’ will never arrive. Innovation happens with a sense of ownership and dedication. This is less likely to happen when innovative work is attempted by third parties who have ample room to over-promise and under-deliver.

The cloud is still new! Let’s respect that fact and not presume that the best solutions live elsewhere. Bring your teams into this new world and get ready to be blown away. Give them a chance to learn and innovate; don’t write them off. Sometimes the best innovations are right under our noses, but we can’t see them because we’re blinded by the glare of shiny, well-marketed solutions that can be low on substance.

Security Hygiene is Boring and Critical

This has been said many times before by people many times more credentialed than me. There are sexy vulnerabilities out there that take considerable expertise to understand. Then there are vulnerabilities or configurations that are the equivalent of leaving your car door unlocked.

The calculation so often made goes like this: “it hasn’t happened before”, or “I’ll only be gone for a few minutes”.

Oddly, many who have an incredibly honed financial sense about them, and who understand that ‘past performance does not equal future results’, have great difficulty extending this concept elsewhere. But nowhere is it more applicable than in security. Past performance does not equal future results! (Or you may have been hacked in the past and you don’t know it.)

The oversight that causes an organization to get hacked in the first place is likely something simple. Are you missing two-factor authentication? Are you still using a default login? Is your password “Spring2019” and do you use it everywhere? These are security concerns that don’t take heaps of expertise to understand; they are boring and critical.
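Checks like these can also be automated. As a minimal sketch on AWS with boto3 (assuming credentials that can read IAM), here’s one way to list users who have no MFA device at all; your identity provider or cloud platform may offer an equivalent report.

```python
import boto3

# A rough sketch: list IAM users with no MFA device attached.
# Assumes credentials with iam:ListUsers and iam:ListMFADevices permissions.
iam = boto3.client("iam")

for user in iam.list_users()["Users"]:
    name = user["UserName"]
    if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
        print(f"No MFA device found for user: {name}")
```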

Attackers don’t want to work hard to steal data or install ransomware, so they’re likely to look for simple vulnerabilities or poorly configured networks in order to get the job done. Don’t sweat the small stuff, sweat the simple stuff.