Empathy in Negotiation?

I’m reading “Getting More” by Stuart Diamond…again. I don’t typically read books twice, but there is so much usable content in this book that it was hard not to. It’s a book about negotiation, but it feels like more than that. It’s also about how we get things done together. According to Diamond, what gets in the way of working together are differing mental pictures. You can come to a negotiation, but if you don’t try to see the situation the way the other party sees it, you’ll have a hard time reaching an agreement.

Photo Credit: Sharon Sinclair

Diamond recommends ‘role reversal’ practice as a way to gain knowledge about those with whom you are trying to negotiate. It’s just another way of saying, “Put yourself in the other person’s shoes.” I’ve known some folks to bristle when you tell them to do this, but it is an invaluable exercise. I find it most interesting, oddly enough, when it comes to negotiating agreements with my kids. I try to make sure I understand their position and then see if I can repeat it back to them to make sure they know that I know where they are coming from.

Diamond argues that if you don’t show that you understand the other party’s position, the other party will get stuck in a loop and won’t come out of it. Do this early, he says. If I say, “[Son], it sounds like you’re frustrated that you’re not able to do the same things as your friends. You’re worried that you won’t be able to talk to them about the same things that they’re talking about,” and I can get him to say, “That’s right,” then I know I’m getting somewhere. It may take a few tries, though, because you may not understand at all the reason for his position. But that’s the point.

You really can’t help them meet their goals unless you understand their goals. Traditionally, negotiation has been about you reaching your goals at the expense of the other party. This may work once or twice, but over time you’ll find that you’re not able to make deals any more, argues Diamond. Also, you’ll suffer from a loss of credibility.

What I enjoy most about the concepts in “Getting More” is that they are counter-intuitive. Who knew that you would need so much empathy in order to engage in a successful negotiation? It’s almost like, if you want to negotiate with someone, you need to provide them a service. That service is listening. There is considerable value in gifting the other party with the acknowledgement that they’re being heard. If you don’t provide that service, you’re going to get less because they’ll be crippled by an unmet need. Help them reach that goal and you’ll both get more!

Eliminating the Inefficiency of Work-in-Progress in Cybersecurity

Some time ago I read “The Goal: A Process of Ongoing Improvement” by Eliyahu M. Goldratt. My big takeaway: Work-in-Progress or WIP items slow production. As the theory goes, you can be swimming in “efficiencies”, but if you’re stumbling over excess work-in-progress inventory or you’ve ignored a bottleneck, you’re nowhere near your potential.

This is clear enough in manufacturing. But these concepts can be applied elsewhere.

Photo Credit: Kristin & Adam

Demands on IT departments are growing exponentially. As technological advances accelerate, IT professionals are required to keep up. This isn’t in one area, but in several areas at once. IT pros are pursuing cutting-edge analytics while pushing traditional on-prem infrastructure to the cloud, all while balancing an undercurrent of spurious applications and solutions. Not just balancing, but seeking to meet an expectation of “subject matter expert” level knowledge with each new IT initiative.

This drives inefficiencies into IT. I’ll focus on cybersecurity within IT since I’m a cybersecurity analyst.

In order to win, security teams need a system for how they arrive at priorities. Priorities reduce work-in-progress items; they also minimize bottlenecks. IT departments tend to develop rockstars who don’t do all the work, but significant amounts of work pass through them. When many projects are going on at once, rockstars become “constraints”. (See “The Phoenix Project” by Gene Kim and Kevin Behr.)  The other constraint is tools-in-progress. The tendency is to push for breadth over depth. More tools, less expertise in each tool.
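Goldratt’s constraint idea can be sketched in a few lines of code. This toy simulation (the rates and arrival numbers are invented for illustration, not real team data) shows that total throughput is capped by the slowest “station,” while work-in-progress piles up in front of it no matter how efficient the other stations are:

```python
# Toy theory-of-constraints simulation: work flows through three stations
# (think: teams or rockstars), each able to finish a fixed number of items
# per hour. The slowest station caps throughput and accumulates WIP.

def simulate(rates, arrivals, hours):
    """rates: items/hour each station can finish; arrivals: items/hour entering."""
    queues = [0.0] * len(rates)       # WIP waiting in front of each station
    done = 0.0
    for _ in range(hours):
        incoming = arrivals
        for i, rate in enumerate(rates):
            queues[i] += incoming
            processed = min(queues[i], rate)
            queues[i] -= processed
            incoming = processed      # output of one station feeds the next
        done += incoming
    return done, queues

done, queues = simulate(rates=[10, 3, 10], arrivals=8, hours=40)
print(done)    # 120.0 -- capped at 3 items/hour by the middle station
print(queues)  # [0.0, 200.0, 0.0] -- WIP piles up in front of the constraint
```

Notice that the first and third stations could each do 10 items an hour; their “efficiency” is irrelevant, because everything queues behind the constraint.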

When tools are viewed as 80-90% of the solution, the requirement of analysts’ time is easily overlooked. When it comes to cybersecurity, organizations can easily end up with a myriad of tools. Each of these tools becomes a work-in-progress or tool-in-progress item. Tools can add value, but if there are too many, they can actually lower the aggregate value of a team. The way to overcome this is through a highly effective system of prioritization. Knowing what to prioritize takes time. But for each tool, if there is a sharp focus, the chances of creating value go up considerably.

Challenge teams to not let the perfect be the enemy of the good. Dare to set some things aside in order to arrive at critical priorities. Zero in on these priorities. They may change over time. This isn’t an issue. But if they’re changing too frequently, you’ll get stuck with a stifling inventory of work-in-progress items. Make a best-effort attempt to document this and quantify it so it doesn’t keep happening.

With a clean set of priorities and a careful reduction of WIP items, all things are possible! 

Machine Learning and Human Self-awareness

All the talk around machine learning causes us to reflect on how humans learn. What are the parallels between humans and machines? What can machine learning teach us about our experiences and the actions we take based on our experiences?

ML is a way to provide meaningful experiences to machines. 

Photo Credit: Alan Levine

We convey information to silicon-based entities in a language they understand: “When this happens, this other thing tends to happen.” Or, getting slightly more complicated, “When these four things happen, with some of those things being more significant than others, this other thing has a very big chance of happening.”

What makes machine learning different from run-of-the-mill statistics is that we tend to care less about the process or even the veracity of the data. The outcome is all that matters. If a machine is able to experience enough scenarios and outcomes, there is a fairly good chance it can provide us with a prediction. 
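That “when this happens, this other thing tends to happen” idea can be sketched with nothing more than frequency counting over scenarios and outcomes. The weather/traffic observations below are made up purely for illustration:

```python
# A minimal outcome-predictor: count how often each outcome follows each
# scenario, then predict the most frequent outcome for a given scenario.
from collections import Counter, defaultdict

def fit(observations):
    counts = defaultdict(Counter)          # scenario -> outcome frequencies
    for scenario, outcome in observations:
        counts[scenario][outcome] += 1
    return counts

def predict(counts, scenario):
    # Most frequently observed outcome for this scenario
    return counts[scenario].most_common(1)[0][0]

history = [("rain", "slow traffic"), ("rain", "slow traffic"),
           ("rain", "normal traffic"), ("clear", "normal traffic")]
model = fit(history)
print(predict(model, "rain"))   # "slow traffic" (2 of 3 rainy observations)
```

Real machine learning replaces the counting with statistical models that generalize to scenarios never seen before, but the experience-to-prediction loop is the same.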

If machines learn by experiencing data, there is theoretically no limit to what they can learn. Data is the limit. A machine needs enough of the right kind of data for its predictions or insights to be meaningful. 

Humans learn through experience as well, but the sheer number of datapoints processed through their five senses is astronomical. Think of going for a walk. Every forward leg movement is a vicious, light-speed cycle of inputs and their resulting outputs. Not only are we learning as we walk, but we’re taking into account years of walking/learning experiences. We have multiple models going on at once.

There isn’t a single action humans take that isn’t informed by nearly infinite numbers of data points. Human decisions are the result of a form of ‘supervised learning’. We act, experience, and choose to act again based on an aggregation of results or outcomes. And to add to the permutations of parallelism, we’re impacted by external models (other humans).

What are the experiences and training we’re providing each other? How does abuse impact a person’s expectations of outcomes? How does this impact the actions they take in the future? How does poverty impact the ‘supervised learning’ that humans experience? When a person lands in jail, how did they get there? What models is society using to put them there? When someone does something to contribute positively to society, how do we create responses that affirm these actions and stimulate more of them?

The more we explore machine learning, the more we’ll learn about ourselves. My hope is that this will provide us with a level of enlightenment and self-awareness that we’ve not seen before.

Picking the right words to describe cloud assets is kind of important

The work of any given IT department is remarkably broad. And within each functional team, vocabularies around technology can be quite unique. This is fine when different groups don’t have to work together much, but when they get together to solve problems, one great challenge has to do with making sure specific IT terms mean the same thing to everyone.

And if that isn’t challenging enough, take traditional IT terms and then figure out how they all translate into the ‘cloud’. I’ll give an example. Take the distinction between IaaS and PaaS. The way this is often described is that with PaaS you don’t have to worry about patching an operating system. With IaaS, this is the customers’ responsibility, not the cloud service provider’s. But the scope of cloud is much bigger than the VM example. And not understanding this can have serious ramifications.

Let’s say you go out into the cloud console for your tenant. (This would be the place where you log in to spin up a virtual machine, for example.) Whether you like it or not, the very moment you spin up a VM in the cloud you’ve created the beginnings of a network topology. Not knowing this can cost you dearly later.

Cloud infrastructure is not just VMs. There’s a whole world of storage, networking and compute services, too, which are often overlooked as part of IaaS. Why does this matter? Because knowing and understanding this is also the beginning of securing it. Consider where each of these pieces lives in a traditional on-prem model, and what controls are in place to protect the confidentiality, integrity and availability of these assets. That same diligence has to be transferred to the cloud. For example, protecting your firewall configurations is not unlike protecting your security group configs on a subnet or VM instance.
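The security group comparison can be made concrete with a toy config check. The rule format and port list below are my own invention for illustration, not any provider’s actual API schema:

```python
# Toy security-group audit: flag rules open to the whole internet (0.0.0.0/0)
# on ports that usually shouldn't be world-reachable.
SENSITIVE_PORTS = {22, 3389, 1433, 3306}   # SSH, RDP, common database ports

def risky_rules(rules):
    findings = []
    for rule in rules:
        if rule["source"] == "0.0.0.0/0" and rule["port"] in SENSITIVE_PORTS:
            findings.append(rule)
    return findings

rules = [
    {"name": "web", "port": 443,  "source": "0.0.0.0/0"},    # fine: public web
    {"name": "ssh", "port": 22,   "source": "0.0.0.0/0"},    # world-open SSH
    {"name": "db",  "port": 1433, "source": "10.0.1.0/24"},  # internal only
]
for rule in risky_rules(rules):
    print(rule["name"])   # only "ssh" is flagged
```

The same review you’d give a firewall rule change on-prem applies here; the difference is just where the config lives.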

Also, how do you track changes to these assets? Whatever diligence you apply in traditional IT models, this same diligence is required in the cloud. This includes reviewing and validating configurations on these virtual assets. Think about what would happen if any one of these virtual assets, like a subnet or a whole virtual network were to be deleted. Where would you be and what controls do you have in place to keep this from happening? And in the unfortunate case that it does happen, how would you know how it happened and who did it?

Because it is so much easier to set up infrastructure in the cloud, it is also that much easier to abuse said infrastructure either intentionally or unintentionally. Getting everyone on the same page around the vocabulary for cloud infrastructure is the beginning of fully understanding how to secure this environment. Let’s decide on our critical cloud vocabulary and make sure we all share the same deep understanding of the words we use to describe this environment.

Cybersecurity Risk and a Cadence of Communication

Risk is everywhere. What’s the probability that something bad will happen? And when it does happen, how bad will it be? For folks who work in security these are questions we ask every day, all day.

But it doesn’t stop there. After we get done asking these questions, we have to artfully communicate our approximations to decision makers. Sometimes this works. Mostly it doesn’t.

Part of the challenge is that our calculation of risk involves technology and gobs of technical know-how; the kind of in-the-weeds technical know-how that most business folks don’t find particularly useful. So there’s a translation process. As we translate, the meat of our risk evaluations can get lost. And decision makers don’t have time to get up to speed.
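One common way to translate “how likely times how bad” into a number a decision maker can actually compare is the classic annualized loss expectancy calculation. The dollar figures below are invented examples, not real data:

```python
# Annualized loss expectancy (ALE) = single loss expectancy (SLE, cost of one
# incident) x annual rate of occurrence (ARO, expected incidents per year).

def ale(sle_dollars, aro_per_year):
    return sle_dollars * aro_per_year

# Two invented risks, reduced to one comparable number each:
phishing = ale(sle_dollars=25_000, aro_per_year=2.0)     # frequent, cheaper
ransomware = ale(sle_dollars=400_000, aro_per_year=0.1)  # rare, expensive
print(phishing, ransomware)   # 50000.0 40000.0
```

The point isn’t that the numbers are precise; it’s that a steady cadence of this kind of apples-to-apples framing gives decision makers a baseline to react to.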

So herein lies the challenge. The business makes risk decisions, like, all the time, but since technological or security risk is hard to understand, they aren’t always arriving at their decision destination with the right knowledge. Is it reasonable to suggest that they can be informed enough to make the right decisions?

I’d say it is. But we can’t have the presumption that a single email or a short briefing will suffice. In order to make communication around risk work, there should be a cadence of communication. It should not be the first time that a decision-maker is hearing about a given risk. Security pros can help decision makers build up a baseline of risk seen in a given environment so that when a risk report does surface, it actually means something. Without regular context for these types of reports, they’re just empty words. In security they may mean something, but that’s as far as the meaning goes.

How can you develop a cadence of communication within your organization?

English Major into Security Analyst

I’ve found it interesting to read about how people arrived in the field of information security. Each person has a unique story to tell — no two paths are exactly the same, and some diverge considerably. Here’s my story.

I got off to a non-traditional start graduating from college with a major in English. From there I embarked on a random work history: dry cleaner, bakery, greasy spoon grill, cook, bus driver, book store, D.C. intern. I won’t go into all the details of all that, but I will take a moment to mark what I view as the true beginning of my IT career.

Hired as a temp worker writing code in Excel VBA (that’s right, Excel Visual Basic for Applications), I designed Excel reports that took loads of data and moved it around in a workbook for charts, graphs, etc. This was object-oriented programming with a miserable IDE. I would have to plan when and how I made changes because it literally took 1-3 minutes to save. I worked on the boss’s daughter’s computer. I can still see colorful stickers plastered everywhere on the chassis.

I built odd things: reports that changed languages on the fly with the press of a button (within the workbook), an Excel workbook that doubled as a scantron form, and charts and graphs that built themselves dynamically. We delivered reports that ran very complex macros in large corporate network environments. I look back at this now and it seems utterly INSANE…from a security perspective.

Getting used to programming concepts literally made my brain hurt. I spent many a lunch break lying in a lawn chair holding on to my head. I also did .NET web development and started writing SQL queries, along with building out reports and integrations.

My next job required that I learn C# and even more SQL Server work. Here’s where I started doing stuff with credit card numbers: encrypting them, storing them, passing them around with APIs, etc. I’m not going to comment on best practices with any of that, but suffice it to say I studied PCI compliance aggressively. I also learned what audits were like. And I learned about things like check digits, electronic check formats, and electronic check processing. All of this was my introduction to cybersecurity. It was my first foray into the imaginative world of threat modeling…and where things can go wrong with data.
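For anyone curious, the check digit idea can be shown in miniature with the Luhn algorithm that credit card numbers use — a quick sketch, not production validation code:

```python
# Luhn check: walking from the rightmost digit, double every second digit,
# subtract 9 from any double over 9, and a valid number sums to a multiple
# of 10. Catches most single-digit typos and adjacent transpositions.

def luhn_valid(number: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:            # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # True: the classic Luhn test number
print(luhn_valid("79927398714"))  # False: one digit off fails the check
```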

After that, I took a job that focused primarily on business intelligence. This involved more SQL Server in the form of SSRS, SSIS, and something new: SSAS (SQL Server Analysis Services), which is basically an Excel pivot table on steroids (slight oversimplification, but a handy one for quick explanations). Then I did an awkward shift into Oracle’s business intelligence world. This pulled me into data warehouse development and fairly heavy development in OBIEE and the dreaded RPD file. I also did some work around analytics. And, in my spare time, reviewed classes on machine learning.

Through all of this, I remained interested in security, so when a security analyst opportunity showed up, I took it. I landed the job, I think, because of my applications development experience and my full exposure into the world of PCI compliance and threat modeling. Right away I dove into vulnerability management, which has me hitting nearly everything in the environment with packets. In addition to this, I now study cloud infrastructure and security at the same time I study OT/ICS security. And I am working out how to implement both at the same time. These two areas were once incredibly far apart, but in some ways, seem to be getting closer every day.

Through all of this, I maintain a fascination I’ve had with Linux for like 15 years. Every year Linux gets better and better and better.

I also maintain an interest in pen-testing. There is so much to learn in this area that it keeps a person coming back over and over again to study new tools and approaches to seeing and validating vulnerabilities. So that’s my story for now. Hats off to you if you read this whole post and good luck on your infosec journey!

Wild West Hackin’ Fest: Affordable and Content-Heavy

John Strand, who owns Black Hills Information Security (BHIS), has a way of clearing the fog of what passes for knowledge in the security industry. And he knows how to make his audiences laugh. It’s a kind of cathartic truth-laugh that brings people together. I remember the first time I heard him plug the Wild West Hackin’ Fest (WWHF). I made a mental note. This could be a good, small conference that offers a lot of value. Of course, I knew that there was a lot more to BHIS than its owner, but you can often tell the culture of events from the folks who run them.

So last summer, on our family vacation, I did some recon. We managed to stay a couple nights in Deadwood. Perfect chance to inspect the venue and get a good sense of what a conference here might be like. Yup, I could definitely see this: a security conference in Deadwood.

Not long after that trip I made plans to go. And I convinced a colleague to come with me. It wasn’t fancy. Don’t get me wrong. The Deadwood Mountain Grand Hotel was awesome, but the bulk of the sessions were basically in two large rooms and a stage, which were really part of one large room divided by curtains. But here’s the thing. I don’t need fancy. I need content. And that’s what we got. Session after session was loaded with content.

I remember a talk by Paul Vixie, one of the creators of DNS, that completely tied me in to the importance of DNS. And another talk by Jon Ham where his passion for forensics made me feel like there was a whole world that I’d been skipping over in my infosec career development. And Jake Williams was there too. His session was on privilege escalation. And I was like, “Wait, what?” — an eye opener indeed. Also memorable was a talk by Annah Waggoner. It was her first talk and she was inspirational. Doing a talk for the first time at an event like WWHF has to take courage. Which is another thing, WWHF is great about pushing, encouraging folks to present, especially those who haven’t done it before.

I’m not going to rehash every talk, but I do want to encourage people to go to this event. I’m very excited about going again this year! If you want an affordable, content-heavy, hands-on experience, Deadwood in October is the time and place for you!

https://www.wildwesthackinfest.com

How can you be a consultant in your own organization?

We’ve all seen it, especially folks who work in IT, or any area where things are changing faster than they ever have been. We hire consultants to bring value, and they often do, but often not as much as we expect them to.

Just like anyone in our departments, these folks have their specialties and they don’t know everything about everything. The resulting gaps in knowledge can create painful obstacles on the way toward successful project completion. These are the “we don’t know what we don’t know” gaps. Knowledge gaps are challenging, but they also present huge opportunities.

Identifying knowledge gaps and diving into them head first is critical. You don’t know what you don’t know until you start asking yourself what you don’t know. I know, sounds dumb, but that’s where you have to start. If there is no one in your organization who can answer your questions or who can bring value to a high-demand subject area, then it’s time to start diving, digging, reading, watching, learning, asking, etc. This can mean reading books, experimenting with technology, and generally getting out of your comfort zone.

Sure, it’s a lot of work, but if you’re not doing this work, you’re not bringing value to yourself or your organization. As you start to dig, you’re bringing value to yourself because there are few things more rewarding than learning, and then sharing what you’ve learned. You’re bringing value to your organization because they don’t know what they don’t know.

I get it, this process isn’t for everyone. All I’m saying is that the knowledge gap problem is solvable. No training budget? Okay, well, there is seriously more information online than you could digest in a billion lifetimes. Don’t know how to cull through that information? Well, you won’t know how until you start pushing yourself to sort it out. And the thing with learning is that once you learn something, it’s hard to feel like you’ve made any progress because now you know it and it doesn’t seem like a big deal. So don’t forget to take stock of the things you’re learning. You know more today than you did yesterday!

Also, a big part of learning is sharing what you’ve learned, even if it is nearly immediately after you’ve learned it. It’s like when you share knowledge, the knowledge you share finds a home in your brain.

The more you teach and share, the more you become a consultant in your own organization. You don’t know everything, but neither do your consultants!

Premortem Now!

Apparently, one of the greatest learning experiences a chess player can have occurs once a game is lost. It’s called a postmortem analysis. And it’s hard, miserable work because a player is sitting there with a pile of negative emotions and they have to think through the reasons why they lost…one hateful move at a time. Why is this so important? Because our mistakes have the potential to teach us far more than our successes.

From this concept comes the notion of a “premortem”, which is about getting the benefits of a project’s postmortem analysis well before said project has the chance to fail.

Let’s say your organization is on the verge of a very large project. You’re heading into some significant technological changes which will impact people and processes that have been in place for a very long time. There are so many unknowns that it makes people’s heads spin. How do you make sure groupthink doesn’t prevent critical issues from being resolved ahead of time?

In a word: premortem. Key stakeholders sit around a table and pretend as if the project failed. It went down in flames. The budget was busted. None of the deployments went as planned. Significant damage done and nothing to show for it. At this point you might do a pretend ‘blame game’. Who is to blame for the fact that this project did not succeed?

Which team didn’t do their part? Who didn’t communicate risk the way they were supposed to? What assumptions were made? Or what perceptions did the various teams have of the project? How could we let this happen? Didn’t anyone see those issues coming?

Pretending that a project went the way of Hades is a great way to invite honest discussion without relying on someone to play the role of naysayer. Let’s face it, no one wants to be accused of being overly negative. “Are you on board with this project or not?” A premortem analysis requires that everyone discuss the death of the project and “what went wrong”, not whether it will go wrong. This prevents pitting the “positive people” against the “negative people”.

Even the thought of doing a premortem analysis can cause some folks to feel anxious. Why is this? Is it because it is a lot easier to keep moving than to stop and ask critical questions? Sometimes critical questions lead to uncovering critical issues. Will asking critical questions lead to more work? Will this make an already tight timeline even tighter? No one wants more stress so it’s best to just keep….on…going. Or is it?

I’d like to offer that the time to do a premortem analysis is now. Take a moment amidst planning and discussion meetings to pretend that the project never made it off the ground. Work out all the reasons it failed. Because we can all learn mightily from mistakes. And what could be better than learning from mistakes before they happen?

Study both AWS and Azure for High-level Cloud Understanding

Recently, I’ve found it’s not enough to simply say, “I’m an AWS person” or “I’m an Azure person” when it comes to learning and understanding the cloud. The greatest benefits in learning surface from seeking a high-level understanding of cloud platforms, in general.

These two leading cloud providers, AWS and Azure, are distinct enough from each other, but fundamentally there are features that neither of them can avoid if they are going to maintain market share, or simply be able to provide viable solutions to their customers.

When folks spin up a VM instance at AWS or in the Azure portal, they are likely to overlook a few critical things. 1) These environments are networks unto themselves, 2) Where you might’ve been operating a private cloud in the past, which is essentially a traditional data center, you’re now using a public cloud, and 3) Where you might envision your life getting simpler with cloud, it’s actually getting more complicated.

With point #1, AWS and Azure both provide ways for their customers to create networks. There may be slight differences in how these networks are set up, but the concepts are the same. Azure calls theirs virtual networks and AWS describes them as virtual private cloud or VPC instances. (Oracle calls theirs virtual cloud network or VCNs.)
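Whatever the provider calls it (VPC, virtual network, VCN), it’s a network, and the same IP address planning applies. Here’s a small sketch using Python’s standard library to carve an address space into non-overlapping subnets; the CIDR ranges are arbitrary examples:

```python
# Plan subnets inside a cloud virtual network using plain IP math.
import ipaddress

vnet = ipaddress.ip_network("10.20.0.0/16")      # the virtual network itself
subnets = list(vnet.subnets(new_prefix=24))[:3]  # first three /24 subnets

for s in subnets:
    print(s)   # 10.20.0.0/24, 10.20.1.0/24, 10.20.2.0/24

# Overlap is exactly the kind of scaling pain that bites later,
# e.g. when peering networks or connecting back to on-prem ranges:
other = ipaddress.ip_network("10.20.1.0/25")
print(other.overlaps(subnets[1]))   # True: collides with 10.20.1.0/24
```

This math is identical regardless of the console you happen to be clicking through, which is the point: the fundamentals transfer across providers.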

Call it what you want, it is a network. And if you don’t know that, you’re likely in for a world of hurt later when you need to scale or secure your cloud environment. And if you can’t do the same thing in AWS as you do in Azure on this front, then you don’t really understand the fundamentals. And if you try another cloud platform and find you simply can’t fine-tune networks the way you like, then you’ll know why AWS and Azure are leaving them behind.

With point #2, you are now participating in the use of a public data center. This means it is exceedingly easy to expose your services in ways you maybe didn’t intend. Take a careful look at how AWS and Azure overlap or don’t overlap in their approach to exposing services. If you only ever had your head in AWS you might assume that certain settings are always set to be public, and the same might be true of Azure.

They’re both evolving from month to month on their approach to how they handle default public exposure, so I won’t go into too much detail. But if you work with them simultaneously when you’re learning, you’ll make fewer assumptions and ask more critical questions than you would otherwise.

With point #3, generally speaking, your life as an IT professional who uses the cloud to build IT infrastructure and services and to take advantage of pre-built applications is about to get a bit more complicated. Why? Because now in addition to all the systems you need to support locally, which help you move data in and out of the cloud, you need to manage the cloud as well. And if you’re using a service that “manages itself” you need to manage perpetual change just to keep up with it.

Understanding integration in both AWS and Azure will help you make fewer assumptions about the way things should be and, again, think critically about the fact that much of the design is actually up to you. You can’t afford to turn off your critical thinking skills. Having the courage to ask tough questions is even more important than it has been in the past. So if someone asks you, should I learn AWS or Azure, or (name your favorite alternative), the answer should be “yes”. There is no “or” if you’re going to be the master of your own cloud destiny. 🙂