This morning I read an article in the January 12, 2019 edition of the Economist titled “Shopping for a Caesarean”. The article summarizes the challenges we face in the US around pricing for medical procedures: the true cost of a procedure is lost in reams of arbitrary pricing algorithms.
In an era of “big data”, convoluted pricing presents a great irony. We have data that corresponds to nearly every other facet of our lives, and that data helps businesses predict consumer behavior so they can market the right product to the right consumer at the right time.
In the health care industry, hospitals don’t have to predict consumer needs. Rather, consumers will purchase a procedure when they are sick and/or under “duress” (the word used in the Economist article). They aren’t likely to shop around. This “duress” allows hospitals to use creative pricing, make deals with insurers, and do all sorts of tricks that conceal the true cost of healthcare.
The Economist article argues that price transparency is the first step, but that it won’t solve the problem because of the “duress” faced by those in need of care. What is needed is a big-picture look at pricing, available to all of us, that we can examine when we are not under duress. This way we can identify who exactly is benefiting from these gross inefficiencies. We need “big data” for the masses. We need “big data” that will improve the standard of living for average folks, just as we have “big data” that helps businesses market products. However, as long as the medical industry profits greatly from hidden pricing algorithms, it has little incentive to share its secrets and drive more efficiency into the marketplace.
Originally, this lack of transparency was probably not intentional, but now that it generates so much profit for the healthcare industry there is very little incentive to do anything about it. We need more than transparency around pricing for each procedure; we need “big data” algorithms that will allow us to untangle our current pricing mess.
Perpetual learning is paramount for folks in any profession, but I’ve found that for individuals who work in cyber security it is absolutely critical. A significant part of the work I do involves knowing what risks lurk, both in the wild and internally, that can stand in the way of an organization’s future success. Staying current with these risks, mitigation techniques, and controls is vital.
There are all types of learning that help new concepts find a home in my brain. One comprehensive learning experience that I recommend for anyone in cyber security is an event put out each year by SANS, which is an organization that trains cyber security professionals. The event is called the SANS Holiday Hack Challenge.
This year my 9-year-old son helped me in ways that blew my mind. His little mind went after small details that I thought were insignificant but turned out to be a pretty big deal. He was very excited by what he was able to uncover…and so was I.
The SANS Holiday Hack Challenge introduces cyber security professionals and pen-testers to new technologies and opens their minds to risks and mitigation techniques that they had not previously considered. I greatly enjoy their ‘terminal challenges’, which provide hints toward solving objectives. Never before had I decrypted HTTP/2 traffic using Wireshark and SSL keys. So awesome! Here’s the link for this year’s challenge, which has been a wild ride for me, to say the least: https://www.holidayhackchallenge.com/2018/.
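For the curious, the decryption workflow can be sketched in a few lines. This is a sketch under assumptions, not the challenge’s exact steps: it assumes tshark (Wireshark’s command-line tool) is on the PATH, and that you have a saved capture plus a TLS key log file (browsers write one when the SSLKEYLOGFILE environment variable is set). The file names `capture.pcap` and `keys.log` are hypothetical.

```python
import subprocess

def build_tshark_cmd(pcap_path: str, keylog_path: str) -> list:
    """Build a tshark command that decrypts TLS (and thus HTTP/2) traffic
    using a (Pre)-Master-Secret key log file."""
    return [
        "tshark",
        "-r", pcap_path,                         # read from a saved capture file
        "-o", f"tls.keylog_file:{keylog_path}",  # point tshark at the key log
        "-Y", "http2",                           # display only decrypted HTTP/2 frames
    ]

def decrypt_http2(pcap_path: str, keylog_path: str) -> str:
    """Run tshark and return its decoded output (requires tshark installed)."""
    proc = subprocess.run(build_tshark_cmd(pcap_path, keylog_path),
                          capture_output=True, text=True)
    return proc.stdout
```

In the Wireshark GUI, the equivalent setting lives under Preferences → Protocols → TLS → “(Pre)-Master-Secret log filename” (older versions list it under SSL).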
Stop in and poke around. Solve a terminal challenge or two then put it on your holiday to-do list for next year. You won’t regret it!
For Christmas we got our son an Arduino Uno starter kit. It’s not officially an Arduino, though. The hardware specifications are the same, but it is made by a company called Elegoo. What we purchased was the “Complete Starter Kit”. I highly recommend it. So far we’ve made prototypes for the following: 1) blinking LED lights, 2) a joystick controlling a servo motor, and 3) an ultrasonic sensor that tells us how far objects are from it. There have been a few other things, but those are what come to mind as I write.
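The ultrasonic sensor is a nice example of how simple the underlying math is: the board measures how long the echo pulse takes to return, and distance falls out of the speed of sound. The actual Arduino sketch is C++, but the conversion is the same in any language; here it is as a Python sketch, assuming an HC-SR04-style sensor and the common room-temperature figure of roughly 343 m/s for the speed of sound.

```python
def echo_to_cm(echo_us: float, speed_of_sound_m_s: float = 343.0) -> float:
    """Convert an ultrasonic echo pulse duration (microseconds) to distance in cm.

    The pulse covers the round trip out to the object and back,
    so the one-way travel time is half the measured duration.
    """
    one_way_seconds = (echo_us / 1_000_000) / 2
    return one_way_seconds * speed_of_sound_m_s * 100  # meters -> centimeters
```

A 580-microsecond echo works out to just under 10 cm, which matches what we see holding a hand in front of the sensor.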
Besides being extremely fun and interesting, these prototypes foster a new understanding of all the electronic things we use and how they may be wired. We could have gotten a kit for a robot or a remote-controlled car, but testing out a range of sensors seems to broaden our view of what’s possible. If we decide on a full project, we’ll have a much better idea of what we’ll need and whether it will work.
Also, as a side note, since I’m using my Chromebook for these projects, I’m not using a locally installed IDE. Instead, I’m paying $1 a month to use the cloud service provided by Arduino for building sketches. So far it has worked flawlessly. That said, ChromeOS now has a Linux sandbox, so I’m going to see whether I can install the IDE that way, too.
This morning I read an article in the Economist about a kid who was born without a cerebellum. Learning to walk, among other things, has proven to be much harder for him than it is for other kids his age. Yet he has had more success than kids who merely have damaged cerebellums. This is partly because other parts of his brain have compensated for the part that is missing; compensating for a damaged cerebellum can be harder than compensating for one that is absent entirely.
Another reason why he’s seen success and exceeded the expectations of medical experts is his parents. The Economist article illustrates how his parents acted as a cerebellum for him. Repeatedly, they pushed him to stand up when he would have rather crawled. When he tottered off a trail while walking through the zoo, they pulled him back on. He was momentarily agitated, not entirely sure why, but then he got back on track, mentally.
This is an exaggerated case, but what it and other cases like it show is that if a human brain can use other brains to aid its processing power, it will. And that, as humans, we tend to rely on this distributed processing power. Whether this is in a family, a social group, or even in the workplace, I think it is important to understand our own distributed processing. If groups aren’t communicating or are in separate work silos, this will significantly reduce the value they bring to an organization. On the flip side, if these distributed systems are able to interface with each other, we can expect to see considerable value added to innovation supply chains.
We often relish rugged mental individualism, but by ignoring our distributed models of thinking, we decapitate our true potential of generating value within an organization. It is true that we can and should “put our heads together”. My son calls this “Hive Mind”.
If you’re like many IT professionals who’ve had anything to do with large amounts of data, you’ve become immune to the phrase ‘big data’, mostly because the meaning behind that phrase can vary so wildly.
Processing ‘big data’ can seem out of reach for many organizations, either because of the infrastructure costs required to establish a foothold on this front or because of a lack of organizational expertise. And since the meaning of ‘big data’ can vary so much, you may find that you’re doing ‘big data’ work and then ask yourself, “Is this big data?” Or an observer can suggest that something is ‘big data’ when you know full well that it isn’t.
With my own background in data, I’m ever curious about what’s out there that can make the threshold into ‘big data’ seem less insurmountable. Also, I’m interested in the security considerations around these solutions.
In the last week or so, I’ve gotten more familiar with AWS S3 buckets and a querying service called Amazon Athena. Here’s the truly amazing thing. You can simply drop files in an S3 bucket and query them straight from Amazon Athena. (There are just a couple of steps to go through, but they are mostly trivial.) And for the most part, there’s not much of a limit on how much data you can query and analyze. You can scan 1 TB of data for $5. What? That’s right. And you didn’t have to set up servers, database platforms, or any of that. I’ll be exploring Amazon Athena more and more over the coming weeks. If you have an interest in this sort of thing, I suggest you do the same.
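Because Athena bills by bytes scanned rather than by server time, you can estimate a query’s cost before you run it. Here’s a minimal sketch of that arithmetic, assuming the $5-per-TB figure above and Athena’s documented 10 MB per-query minimum (check current AWS pricing before relying on either number):

```python
PRICE_PER_TB = 5.00  # USD per terabyte scanned (the figure above; verify against current pricing)

def athena_scan_cost(bytes_scanned: int, price_per_tb: float = PRICE_PER_TB) -> float:
    """Estimate the cost of an Athena query from the number of bytes it scans.

    Athena rounds very small queries up to a 10 MB minimum per query.
    """
    min_billable = 10 * 1024**2            # 10 MB per-query minimum
    billed = max(bytes_scanned, min_billable)
    return billed / 1024**4 * price_per_tb  # bytes -> TB, then TB -> dollars
```

This is also why columnar, compressed formats like Parquet matter so much with Athena: fewer bytes scanned means a proportionally smaller bill.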
One note: Google has something similar called BigQuery, so that might be worth a look as well. I’ve explored BigQuery briefly, but I keep coming back to various AWS services since they seem to be holding strong as a dominant leader in emerging cloud technologies. But as we all know, the emerging technology landscape can change very quickly!
For some time, I’ve been interested in learning about the Raspberry Pi. It’s a little bare-bones computer that packs a big punch. And to top it off, it’s quite affordable. Through work I heard about a way to use a Raspberry Pi for an OS called RetroPie. RetroPie is an emulation platform that lets you play scores of old games…if you have the digital files for them, many of which can be found with the help of Google.
I’m not much into modern video games (as in games from the last 20 years or so), but I did play NES games back when I was in jr. high and high school. And I do still have my original NES, but it has a number of issues that make it less than reliable for playing. My kids are interested in the older games because I’ll actually join them when they play. And, quite frankly, because the older games are super fun to play and easy to learn.
Anyway, RetroPie is a great way to learn how to use and get familiar with the Raspberry Pi. You simply burn the RetroPie image to a micro SD card, pop it in the micro SD card slot, and boot it up! There are a few other things you need to know, but that’s the gist of it. Get a few games, a controller or two, have a monitor with an HDMI input handy, and you’re good to go. That’s a bit of an over-simplification, but please do explore RetroPie and the Raspberry Pi if you’re at all interested in this sort of thing and are looking for a good way to get familiar with the Raspberry Pi world.
These days efforts to revamp company culture are in vogue. I’m going to attempt to articulate what I see as a connection between machine learning and efforts to change company culture. Stay with me here a bit because the analogy doesn’t show up until the fourth paragraph and I need to share a little bit of background first. 🙂
One group leading the charge to change company culture is Partners in Leadership (https://www.partnersinleadership.com). They use a tool that identifies the following flow toward changing results. It’s a pyramid that moves from experiences to results in the following steps: EXPERIENCES >> BELIEFS >> ACTIONS >> RESULTS. According to the model, you start with the results you want to see as an organization and then move backward until you’ve arrived at the experiences that you need to create. The thinking is that experiences shape beliefs, which shape actions, which shape results. They maintain that you cannot simply skip ahead to results until the rest of the house is in order first.
As for the experiences, they actually need to be high quality experiences. Partners in Leadership breaks these experiences into four types (big paraphrase here): 1) Easy to interpret, 2) Needing work to interpret, 3) Very little meaning, so there isn’t much to interpret, and 4) Experiences that, well, kind of did the opposite of what they were intended to do.
Now it is time for the machine learning analogy! Boiled down, machine learning is essentially learning from experiences (data) in order to shape beliefs (trained statistical models). These beliefs/models turn into actions (acting on the outcome of a model), which lead to results. Critical to this process is the experiential data and its interpretation (the model). We train our models by feeding data (experiences) into them. Why am I making this connection? Because organizations are really struggling to understand machine learning. Why not piggyback off of something they’re learning already? Results from machine learning algorithms are no different from results gleaned from an organization’s cultural change initiatives. What data do you have that you can use to shape your statistical models? Which actions do you need to take to get results? You can change your culture and understand machine learning at the same time!
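The mapping can be made concrete with a toy learner. This is a deliberately minimal sketch with made-up data, not any particular ML library’s API: experiences (labeled data points) shape a belief (a learned decision threshold), the belief drives an action (a yes/no decision), and the actions produce results we can score.

```python
def train_threshold(experiences):
    """EXPERIENCES -> BELIEF: from (value, outcome) pairs, learn a threshold
    halfway between the average positive and average negative value."""
    positives = [v for v, outcome in experiences if outcome]
    negatives = [v for v, outcome in experiences if not outcome]
    return (sum(positives) / len(positives) + sum(negatives) / len(negatives)) / 2

def act(belief, value):
    """BELIEF -> ACTION: decide yes/no by comparing a value to the threshold."""
    return value >= belief

# Toy experiences: small values led to bad outcomes, large values to good ones.
experiences = [(1, False), (2, False), (8, True), (9, True)]
belief = train_threshold(experiences)  # the learned threshold: 5.0
# ACTION -> RESULTS: score how often acting on the belief matches reality.
results = [act(belief, v) == outcome for v, outcome in experiences]
accuracy = sum(results) / len(results)
```

Swap in real data and a real model and the pyramid still holds: the quality of the experiences you feed in bounds the quality of the beliefs, actions, and results that come out.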
I spend approximately 8-10 hours a day in front of a computer. That’s a lot of time staring at a screen. (I think a lot of other people are probably in the same boat.) And, yes, I’m sitting in front of a screen to write this. 🙂
So I’m mindful of ways where I can dive deeply into the analog world. I’ve found one activity really provides a great escape from all of that: analog music. Yup, an actual musical instrument. Lately, I’ve been playing the violin. It is so incredibly fun and there is so much to learn about it. Granted, if I want a tip from Itzhak Perlman about how to hold my bow, I briefly turn to YouTube for a quick tutorial, but then I’m right back to my purely analog endeavor. I also play guitar, cello and mandolin. All those instruments provide an excellent balance against computing.
For me, the vibration of an actual string, which is caused by fingers, hands and arms…and then the resulting sound dancing off my eardrums…is about as real as it gets. Sure, I can have my head in some sheet music, but I can also close my eyes and visualize the sound and have it connect with actual movements my body is making.
Also, I try to enjoy every note and try not to get too wrapped up in a whole piece or song being completed. Sometimes three notes are all you need, or a couple of measures. Just ask the members of my household. I’m sure there are times when they wish I had a slightly more varied approach to my practicing. In my mind, though, practicing is by definition repetitive. Anyway, something to think about as an antidote to computing. Never too late to start!
How much of the world’s IT infrastructure is in the cloud now, and how much of it will be in the cloud in five years? I’m sure there is solid data somewhere to answer those questions. Regardless, the shift is happening, and it won’t be long until most IT infrastructure is in the cloud.
Oddly, though, in my conversations with other IT professionals, it seems we’re finding we’ve arrived late to the party. With the advent of “the cloud”, organizations are finding that there are all sorts of solutions out there that don’t necessarily need the involvement of traditional IT. In much of the IT world, our perception is that this process is gradual when in fact it is accelerating.
So the real question is not whether “the cloud” is coming, but whether we see it coming. If we want to make sure cloud implementation is done properly and doesn’t completely hose our respective organizations, we must learn as much as we can in a very short period of time.
Nearly every day I find myself reading about cloud security risks right alongside incredible cloud solutions for problems that would normally be much harder to solve. At the same time, many cloud solutions create problems that we’ve never seen before. With the flip of a switch something private can become public: see S3 buckets. And it isn’t so much that the cloud is insecure, but how we connect to the cloud, whether through our API infrastructure or open ports that maybe shouldn’t be…open. The only answer I have for all of this is that we need to learn, learn, learn, learn…and fast.
So, generally, the easiest way for hackers to get into an organization is by convincing users to do something: click on an email attachment or a link, make a phone call, share information, etc. For all the technological advances that have sprung forth in the past decade, this is still among the greatest challenges faced by security professionals: figuring out how to keep people from following hackers’ instructions.
Our biggest vulnerability is also our greatest asset. We can make thoughtful decisions quickly. And sometimes our decisions aren’t so thoughtful because we’re in the midst of doing other things, or generally too distracted to slow down and think through what is being asked of us. This little glitch in our code is all an attacker needs.
By exploiting this human vulnerability, an attacker can get us to act in a way that is not in our best interest. This is the nature of the hacker-victim relationship. But are there other ways that people are getting hacked that maybe aren’t as overt as this? Think of the decisions we make daily. How many of them are in our best interest or the best interest of our friends and family?
We make snap decisions all the time that aren’t really based on sound logic. I bet any one of us can look back over the course of the day and think of an action we took that wasn’t ideal. It’s a given. If we didn’t make decisions relatively quickly, our brains would grind to a halt and we’d become mostly ineffective at making our way through this world. But as technology gets better and better at helping humans hack other humans (think targeted advertising through machine learning algorithms), we should pause to ask ourselves whether we’re on the right track. Will this lead us to a better humanity? Just throwing that question out there. It can go a myriad of different ways. Thanks for reading.