Invasive Species and Security

I just read an article about how invasive species are presenting severe threats to the wildlife in the national parks here in the USA. It’s not just a problem in the USA: regions around the world have to face the consequences of a more interconnected world when those connections bring in a non-native species that begins to take over the environment, destroying delicate ecosystems in the process.

Of course, my thoughts made a connection to IT security. So, I’m going to write about my thoughts. 🙂

What makes an invasive species so invasive and dominant is that it doesn’t have a natural predator in the new region, so it is able to reproduce and consume resources without limit, until the land can’t support it any more. But, at that point, it’s pretty much dominant in that region. If a natural predator of that species is brought in, it could wind up being invasive in and of itself, wiping out other species that were already threatened by that first invasive species.

In IT, we have systems that are created and maintained to provide a particular level of service with a particular level of security. We expect those systems to maintain equilibrium – employees are typically told not to bring in other devices and IT staff have to comply with standardized purchasing and acquisition processes to bring in new gear, typically chosen carefully to work well with all the other systems.

An invasive species in IT is anything, be it a hardware platform, a website, or a piece of software, that allows employees or other users of IT resources to evade security, go around processes, or even create systems of their own that exist outside IT standards.

Once introduced, there’s no stopping these invasive IT elements without some drastic measures. Consider a scenario in which a company wants to improve productivity by blocking YouTube and Facebook on both employee and guest networks. Mobile devices become an invasive species, as employees bring those in and use LTE networks to access the prohibited material. If an employer wants to stop those mobile devices, it’s looking at disciplining the users – which would destroy morale – or installing cell phone signal jammers – which would destroy morale and possibly violate local laws.

While I’m aware that many would want to argue with the wisdom of blocking YouTube and Facebook, we can all agree that employees regularly using resources outside of IT’s control is an eventual trouble spot. What if there is a way to access company data in the cloud via those mobile devices? Then it’s possible for the data, now on those mobiles, to be shared outside the purview of any DLP software that exists on the company-managed laptops and desktops. It’s easier for the employees to share data – properly or improperly – and they’ll keep doing it. Is there a way to restrict cloud access to only company-owned devices? If so, does that negatively impact the flow of business overall? Does this introduce another layer of complexity, and will this new scheme be stable? Scalable? All the other questions we ask about the viability of a solution? Certainly, it’s an additional cost – is it worth it to implement, or does the company just abandon the cloud or DLP solutions altogether?

Abandon DLP? I’m sure some of the readers of that phrase would react with shock, horror, and disappointment. But, if we think like an executive, we have to ask the question, “Why should I pay for something that’s not able to get me what I want?”

When I was a high school teacher, I saw these invasive IT species all the time. I confess even to participating in their spread. I was a user, then, not part of IT security, so I had other concerns on my mind – getting my job done, for example.

We all had to use software purchased by the school district to provide class information. The software allowed teachers to post links to online resources, contact information, class calendars, notes, and a discussion board. The software was also difficult to use and constantly crashed. I posted the bare minimum of information, never updated it, and ran a discussion board on my personal website that had some solid uptime numbers, if I say so myself. My students used it constantly and pretty much didn’t even look at the district system. When the district canned that system after two years and replaced it with a similar one that didn’t let teachers port over their content, the rest of the faculty revolted and either did the bare minimum, used an outside resource, or both.

My school district also blocked YouTube and Facebook. In the days before mobile devices, students using school-provided PCs would go for proxy buster sites. As fast as the district security could block one of those sites, another one would be discovered and quickly utilized. When I wanted to show a documentary on YouTube to my classes, it was much easier to go the route of the proxy buster than to submit the link weeks in advance for an official review. I knew the documentary on economics didn’t have any objectionable material in it, so I just went around the proxy server, just like everyone else did.

When the district just blocked YouTube on district networks, that’s when I brought in my personal PC, joined it to the unscreened guest wireless network, and plugged that into my display projector. Other teachers used their district-issued laptops, but connected them to mobile hotspots, making for the dreaded bridging between the Internet and office networks.

All along, I wasn’t trying to do anything evil. I was just trying to get my job done. Any end-user facing a choice between finishing work and observing security is going to choose finishing work, and that can mean the introduction of an “invasive species” that gets adopted by many other users, once word gets out about how it lets them do their work.

Not all invasive species in IT are themselves IT. How many times have those annual security trainings been foiled by lists of answers for the test at the end of the training? Given a choice between paying attention to the training or just clicking through it while getting real work done, nearly all employees are going to click through with the sound off and then go CBBADECCAE for the test at the end, just like the answer list tells them to do. Jumble up the questions? Not a problem, as the list of letters is annotated with notes like, “Question about mouse hovering – C”. Jumble the answers? “Question about mouse hovering – different link revealed.” Give them an honesty affirmation at the start? That gets clicked through, too, if the pressure is high enough to get stuff done.

So how can we deal with invasive species? All I can think of are proactive measures. Make sure that the only way to interact with the corporate network is with a corporate device, be it through NAC or VPN, or both. For situations where employers want to control online activities of employees, perhaps the solution lies with human resources and one-on-one meetings instead of proxy servers and firewalls. When employees complain about how lack of IT response isn’t letting them get their jobs done, listen to them and respond to their satisfaction. If those complaints stop on their own, it’s too late – they’ve found the invasive species and your security posture is likely compromised, with a high chance it’s a severe compromise.

There are reasons why nations highly dependent upon agriculture will fumigate your checked bags before you’re allowed to collect them. They don’t want any invasive species. We can’t fumigate our employees, so we instead have to be sure that security policies and practices don’t create a need for an employee to introduce an invasive IT species.

Does Security Require Imagination?

I’ll open with my premise: if security does require imagination, then we’re in for trouble. So we’re going to need an answer for that question, and I’m afraid the answer is “yes.” Let me explain…

I was recently chatting with a colleague about how I enjoy my job. I thought I was talking about my passion for security, but he heard differently. He heard how my imagination and curiosity were prerequisites for my successes. He pointed out, “If someone doesn’t have the intuition that you have, how is he going to do security successfully? He can fill out a requirements list, do an audit checklist, follow regulations, but how is a person without that imagination going to be able to go beyond that and really get security done?”

In my role, I sometimes get a chance to deliver training for the product I support at $VENDOR. In those classes, I always enjoy a good discussion, when the participants are lively and engaged. But that’s not every class I’ve taught. I’ve taught classes where I had to help winkle out the answers from the students with leading questions. I’ve had students that may have been innovative and clever, but who did not see their future at the company that paid for their training. Demoralized and discouraged, they had no interest in applying their wits and insight to their current employers’ needs.

So, we need imaginative *and* motivated employees to do security right. Great, that really tightens up on my premise. Adding that “motivated” adjective cuts deep into the “imaginative” group. The imaginative ones tend also to be ones that need the best motivations to stick with their roles in security, so that makes the effective security professional even more of an endangered species, if not an outright unicorn.

I’m not going to go deep into the game theory of career path decisions. If one threatens to quit over an issue at work, one either gets passed over for promotions and opportunities because one is seen as a short-timer, or that threat becomes stale if used more than once or twice. Therefore, one doesn’t threaten to quit, one simply quits and moves on. If firms want to retain the imaginative by keeping them motivated, then those firms have to be proactive.

But back to those imaginative people… do firms really want to retain them? Those imaginative people can be high maintenance types, you know. Is it better to keep the “bread-and-butter” types on the payroll and let vendors, VARs, and outsourcers worry about managing the artistes of our profession? After all, we don’t need imagination all of the time. Quite a lot of work in security is simply painting by numbers. What are the vendor best practice recommendations? Follow those. What are the regulatory requirements? Implement those. Maintaining code blocks, IP address assignments, switch configurations, application stores, document libraries – you and I both know that there’s drudgery in those tasks, and any level 1 tech with a runbook can handle them.

So when, exactly, do we need the imagination? I know we need it when analyzing the data. Yes, algorithms can sort through quite a lot of noise to get to the signal, but what does the algorithm know about things it could not have been programmed to handle? Leaving zero-day exploits aside, we have to know what to do when there’s a new production application in play! It takes imagination and initiative to think of what that new signal might be and who to ask about it so that it can be exempted from blocking rules.

We also need imagination after a breach. There’s chaos and mayhem all around, and it takes some proper cleverness to think of all the other evil that could be taking root as that chaos and mayhem distracts our attention. We need multiple imaginations here, not just one. Different eyes, different minds, different experiences can inform a broad range of responses that build off of each other.

But before the breach, we could certainly use imagination in red and blue teams experimenting with both ways to penetrate and ways to mitigate. Someone has to ask the questions about the environment that lead to fuzz testing and investigations. There’s no way to put “think of something new” in a runbook; the human mind just doesn’t work that way.

There’s also a call for imagination not on the technical side, but on the process and procedure side. We have to be creative in how we submit requests and apply for resources so that we don’t get shot down or delayed. This isn’t out of the box thinking – the people on the other end of the request will reject anything that doesn’t conform to their box. This is inside the box thinking, except with the ability to somehow merge normal spacetime into a singularity that allows for bypassing internal red tape while still, overall, complying with corporate processes and procedures.

So, we’ve got a problem, as I mentioned at the outset. We need creative, imaginative people, and those types simply do not grow on trees. (In point of fact, no humans grow on trees, it’s something to do with our mammalian biology, as I understand…) And while we can encounter a few naturally gifted visionaries in the wild, there simply aren’t enough to go around for all the needs of all the firms in the world.

That leads to the question: can we teach people to be creative?

And if so, who is responsible for that?

While my education experience gives a firm “yes” to the first question, I’ve got no answer from experience for the second. I would suppose that the firm that desires creative people needs to be about the business of teaching them, but I don’t see any programs geared for that. Let’s face it, most security training deals with learning the tools, the technical stuff. Where in our profession do we see training that gets people to think creatively?

As I typed that, the answer came to me – look at our end-user security training. We teach people how to spot phishing attacks, social engineering, things like that. Not everyone passes that training brilliantly, but enough people do to show that it has value not only in and of itself, but also as creativity training. To successfully deal with a phishing attack, for example, we tell people how to analyze certain data and evaluate it. We don’t provide a list of all possible bad links to click, but we do have a few short rules on how to spot them. And, unlike an algorithm, the human mind can adapt and extend lessons to new situations with ease.

Maybe, then, we don’t have trouble. We just have a need to perhaps change our accounting rules and consider people as unique assets that can be improved, not identical widgets that can be swapped interchangeably. But I can guarantee that it’ll take some imagination to close the imagination gap where you work.

Do You Rate Use Cases For Maturity?

More than once, I’ve been in the meeting where someone is questioning whether or not to get a particular security system. This someone asks, “OK, so if someone has the CEO at gunpoint and forces him to log in to his PC and then takes pictures of the documents visible on his screen, then blackmails the CEO to say nothing to the local police as he slips away into the shadows and to a foreign nation where extradition is difficult, will you be able to stop that data exfiltration?”

“Uh, no…”

And then that someone crosses arms and boldly states, “Then why bother with all this trouble if it’s useless against a *real* hacker?”

Now, maybe it’s not exactly that scenario. But whatever’s offered up is an advanced use case that even the tightest of security nets would have trouble catching. And if the current state of the IT environment is where someone could bring a PC from home and copy all the files off the main server, maybe that group of advanced use cases isn’t what anyone should be worrying about right now.

Which is why it’s important to consider such exotic cases, but rate them for what they are – exotic. When someone brings up a basic use case that is well within the capabilities of the security product to restrict, rate that as a basic case that will be among the first to be dealt with as the system is introduced. As the system matures, then the more mature cases can be considered.

I deal with NAC in my role, so I see the range of use cases all the time in my meetings with customers. Block a PC that isn’t part of your firm? This is not difficult to do. Block someone spoofing the MAC address of a printer? Well, that’s more than a basic task. I have to ask how we can tell a legitimate printer apart from a spoofed device. If there is no way to tell, then we have to ask if it’s possible to treat all printers as outsiders and restrict their access. This is where maturity comes into consideration.

Maybe we just proceed forward with the PC use case and think some more about that printer issue. Perhaps once we have the PC use case dealt with, there may have been time enough to set up an SNMPv3 credential to use to log on to legitimate printers. Maybe there was enough time to determine how to set up printer VLANs and restrict them. If so, then we’re ready to deal with that printer issue. While we’re doing that, we could be thinking about how to handle the security camera issue, or something like that.
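If it helps to picture that SNMPv3 check, here’s a minimal sketch. It assumes the net-snmp snmpget command-line tool is installed, and the host, credentials, and printer-model keywords are placeholders for whatever an environment actually uses; it illustrates the idea, not any particular NAC product’s feature.

```python
#!/usr/bin/env python3
"""Minimal sketch: ask a device that claims to be a printer for its SNMPv3
sysDescr. Host, credentials, and the model keywords are illustrative
assumptions, and the check shells out to the net-snmp snmpget tool."""
import subprocess

SYS_DESCR_OID = "1.3.6.1.2.1.1.1.0"  # standard MIB-II sysDescr
PRINTER_HINTS = ("laserjet", "imagerunner", "versalink")  # hypothetical models in our fleet


def looks_like_printer(host: str, user: str, authpass: str, privpass: str) -> bool:
    """A spoofed PC usually can't answer the printer's SNMPv3 credential."""
    cmd = [
        "snmpget", "-v3", "-l", "authPriv",
        "-u", user, "-a", "SHA", "-A", authpass,
        "-x", "AES", "-X", privpass,
        host, SYS_DESCR_OID,
    ]
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
    except subprocess.TimeoutExpired:
        return False  # no answer at all: treat as suspect
    if result.returncode != 0:
        return False  # authentication failure or SNMP timeout: treat as suspect
    return any(hint in result.stdout.lower() for hint in PRINTER_HINTS)


if __name__ == "__main__":
    host = "10.20.30.40"  # the device claiming a printer's MAC address
    if looks_like_printer(host, "printer-ro", "authsecret", "privsecret"):
        print(f"{host} answers like one of our printers")
    else:
        print(f"{host} failed the printer check - quarantine candidate")
```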

Each environment will have different levels of maturity for their use cases. Perhaps at one firm, it is easier to deal with securing PCs than Macs. At the next one, they could have a better handle on their macOS management than they do on their PCs. Maturity could simply be deciding between equally difficult tasks about which one will be done first.

Maturity can also be seen in calling out when a use case goes beyond the capabilities of the product under consideration. A proxy server does not provide its own physical security system, for example. So, if we entertain scenarios in which physical security is defeated, we should be tabling those until we’re looking at a physical security system. By the same token, if a scenario is only plausible after another security system has been defeated, then the argument is really about the safeguards and durability of the system that has to be defeated, not the one under current consideration.

We also see maturity in getting different systems to work together. Being able to automate responses from one system to another gives firms the ability to deal with increasingly advanced threats. All the while, as long as we keep a perspective on how mature our security systems are, we know what level of threat we can deal with.

Home Insecurity

A major reason people don’t want to buy more home automation technology is security. Not only is this a response given by 42% of respondents to the question “Why don’t you want to buy more home automation devices?”, it’s also my response.

When I get a device that will be internet-enabled, I agonize about how soon it will be before that device becomes a botnet host or worse. I do a little pen testing, I change default passwords, and I’m happy to say that my existing devices are either pretty darn secure or at least more secure now than when I first plugged them in. While I’m sure that there’s a person with at least above-average intelligence out there who can pop these devices if given local access, I’m also sure that those devices’ traffic isn’t exposed to the Internet and that I’ve got reasonable security with these things.

That being said, I don’t want to go through that for any other devices. My televisions have to stream content, my security system needs to connect to the monitoring back-end, and, uh… that’s all I’ve got. My robotic vacuum cleaner has no Internet – I paid more for that lack of feature, as it happens. My appliances all keep to themselves. I work from home, so my thermostats are right where I’d like them to be, no need to be online with those. It looks like I’m also in line with the 49% of respondents who indicated that they’re not buying more home automation because they don’t see a use case for the technology.

But even if I did, I’d have to ask, “is it secure?” And that’s not just the device itself. Maybe that new Internet-enabled barbeque grill is locked down tight, but what about the app that runs it? Or the app that runs any other system in my house?

Security doesn’t just mean making sure the kid down the road from me doesn’t pwn me when he does his daily wardriving. It also means that when I do something with it, it doesn’t suddenly affect my Google search results or trigger a Facebook ad. It’s bad enough that when my kid sends me a link to a stupid YouTube video, I have to spend the next few weeks telling algorithms that, no, I am NOT a fan of Korean boy bands. I don’t need this to happen because I change my thermostat or order groceries. Yes, there’s also the concern about private information. And while I can change default passwords and block ports, that does nothing about my info going into advertisers’ data lakes.

In fact, what other reason is there to have an Internet-enabled dishwasher except to send me more ads? I mean, if I forget to run the dishwasher before I leave home for the day, I can run it at night. If it’s before a big vacation, I can text the person that’s going to feed my cats to punch the “start” button. I’m happy to pay more for an airgapped dishwasher precisely because I want informational security, not just device security. Remember my comment about the vacuum cleaner? That applies to any other appliance. I want to keep that stuff to myself, thank you very much.

Check ALL Your RFC 1918 Ranges…

Let me set the scene: a customer asks about being able to track users that bring up unauthorized VMs on Windows machines. He explains that he’d like to look at the 192.168.0.0 RFC 1918 range to see how many addresses we see in that range. That’s OK by me, all I have to do is add that to the scope of the networks we track…

Up to that moment, we had only been looking at 10.0.0.0/8. I added the 192.168.0.0/16 range and we watched the new devices pop up into the discovery window.

And then we watched as those devices started to churn… the IP addresses stayed the same, but the MAC addresses kept changing. Loads of Netgear, Arris, Cisco-Linksys, Belkin, TP-Link devices… what was causing all this?

The horror! The horror of the home networks!

And then it dawned on us: these were all teleworker home networks bleeding into the corporate network estate! The traffic to and from 192.168 networks wasn’t supposed to be routable, but here it was, coming and going and getting picked up on the SPAN session monitoring north-south traffic at the datacenter gateway.

192.168.1.1 and 192.168.0.1 were the addresses that changed MAC addresses most frequently. No surprise there, as those are default gateways on oh-so-many home networking products. 192.168.1.254 changed less often, as that was the default gateway on Arris routers used for AT&T broadband networks (I used to have one, so I know) and only a handful of other home devices. I saw Nest controls, Roku streamers, gaming systems, the works. And all of this was exposed to the customer network, and all of the customer network was exposed to these environments.
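Spotting this, by the way, doesn’t take anything fancy. Here’s a minimal sketch, assuming the discovery data can be exported as IP/MAC pairs; the sample observations and the consumer-router OUI prefixes are made up for illustration.

```python
#!/usr/bin/env python3
"""Minimal sketch: flag RFC 1918 addresses whose MAC address keeps changing,
or whose MACs carry consumer-router OUIs. The observations and the OUI
prefixes are made up for illustration."""
from collections import defaultdict
from ipaddress import ip_address, ip_network

RFC1918 = [
    ip_network("10.0.0.0/8"),
    ip_network("172.16.0.0/12"),
    ip_network("192.168.0.0/16"),
]
# First three octets of some consumer gear (hypothetical sample; use a real OUI list).
HOME_OUIS = {"c0:3f:0e", "00:1f:33"}

# (ip, mac) pairs as they might come out of a discovery or SPAN-session log.
observations = [
    ("192.168.1.1", "c0:3f:0e:aa:bb:01"),
    ("192.168.1.1", "00:1f:33:cc:dd:02"),
    ("192.168.1.1", "c0:3f:0e:ee:ff:03"),
    ("10.5.5.10", "00:50:56:11:22:33"),
]

macs_seen = defaultdict(set)
for ip, mac in observations:
    if any(ip_address(ip) in net for net in RFC1918):
        macs_seen[ip].add(mac.lower())

for ip, macs in sorted(macs_seen.items()):
    churning = len(macs) > 1                               # same IP, many MACs
    home_gear = any(mac[:8] in HOME_OUIS for mac in macs)  # consumer-router vendor
    if churning or home_gear:
        print(f"{ip}: {len(macs)} MACs seen, consumer OUI present: {home_gear}")
```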

Granted, with so many overlapping addresses, routing to any one endpoint for any length of time was going to be a mess, but the IP addresses that were less commonly used were also the ones with the most persistent MAC addresses and connections. The biggest concern was that the customer didn’t allow any guest traffic on the wired network – but here were untold numbers of guest devices, the kind that don’t usually show up on BYOD networks!

Moral of the story? Those teleworker devices for home office networks are part of your perimeter. Make sure you keep an eye on those points of entry, as well as the big one you pay the ISP for.

Security for All Sizes: Remote Management and Monitoring

I remember the first remote management and monitoring (RMM) solution ever, the venerable and wonderful “ping”. We would use it all the time to see if a remote host was up and responding. And then, one day, someone wrote a program for Windows, Whatsup, and the world was changed forever. With that program, we admins could enter multiple IP addresses and that tool would ping them all day and night! It could even be set up to generate alerts.

We thought we had it made until someone asked, “Hey, I know I can ping the SQL server, but is it responding on TCP 1433?” At that point, we knew both that we needed more in our app and that there would be other admins, with other network ports, who would make similar requests. And so began the development of RMM tools.
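For the curious, that “is it answering on TCP 1433?” check is still only a few lines of code today. Here’s a minimal sketch; the hosts and ports are a made-up example.

```python
#!/usr/bin/env python3
"""Minimal sketch of the "is it up AND answering on its port?" check that the
early ping-only tools grew into. The host/port list is a made-up example."""
import socket

CHECKS = [
    ("sql01.example.com", 1433),  # SQL Server
    ("web01.example.com", 443),   # HTTPS front end
    ("mail01.example.com", 25),   # SMTP
]


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port completes within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for host, port in CHECKS:
    state = "responding" if port_open(host, port) else "NOT responding"
    print(f"{host}:{port} is {state}")
```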

At small companies, RMM may very well be not much more than a shareware ping/telnet suite that checks for hosts being up and responding on critical ports. It may involve learning multiple suites of RMM tools, roughly in conjunction with the trial period for one tool ending and a download for the new tool being complete. Most of what goes on is just monitoring, not management (does that mean they consume R_M products?), as there are few enough systems that SSH and RDP sessions to the several devices needing management are sufficient.

Once we get to a medium company with multiple sites, that SSH/RDP solution for everything simply fails to scale. It’s time to lay some money out and actually pay for an RMM solution that will track those uptimes as well as do some kind of configuration management. Everyone makes demands of that config management solution – will it do rollbacks? Will it do point-in-time recovery? Will it track changes made outside the product? Will it enforce certain configuration parameters? Will it integrate with the helpdesk ticketing system?

The answer to all of those questions is either “no” or “yes, at an additional cost.” Nobody rides the RMM train for free.

And it’s not like that RMM will magically never make mistakes. We’re still in a garbage in, garbage out world. More than once, I was working on a project to integrate our routers and switches with a tool by pushing code to them with the RMM solution… only to have that code get overwritten because a different team pushed a change with an outdated template. So what’s the policy and procedure for undoing a change that was done in error? I found that part out the hard way as I waited for the next change window to get my changes put back into the environment.

I’ve seen RMM tools that can’t push version-specific code. Well, they can, but they don’t keep track of versions, so it’s a guess or a logic problem to figure out which devices are on which version. One solution I came up with was to push one line of code to all devices, knowing that it would fail for devices on the older version. The next push checked the config to see if that line I previously pushed was in the config. If so, skip the device. If not, then push a line of code compatible with the older versions. Would I have preferred that the tool have the intelligence to do a version check and then push the appropriate line of code, all in one go? Yes. Yes, I would. The biggest irony to me in this particular case was that the RMM tool was made by the vendor of the devices that the tool couldn’t track the version on. Very disappointing…
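For illustration, that two-pass trick looks something like the sketch below. The config lines, device names, and the push_line()/fetch_config() helpers are hypothetical stand-ins for whatever the RMM or automation tooling actually provides.

```python
#!/usr/bin/env python3
"""Minimal sketch of the two-pass workaround: push the new-syntax line
everywhere (it fails harmlessly on old code), then push the old-syntax line
only where the first one didn't stick. The config lines, device list, and
the push_line()/fetch_config() helpers are hypothetical stand-ins for
whatever the RMM or automation tooling actually provides."""

NEW_SYNTAX = "feature-x enable strict"  # hypothetical line valid only on newer code
OLD_SYNTAX = "feature-x on"             # hypothetical equivalent for older code
DEVICES = ["switch-a", "switch-b", "switch-c"]


def push_line(device: str, line: str) -> None:
    """Placeholder: hand one config line to the push tool."""
    print(f"[{device}] pushing: {line}")


def fetch_config(device: str) -> str:
    """Placeholder: pull the running config back from the device."""
    return ""  # imagine the real config text here


# Pass 1: try the new-syntax line on every device; old-version devices reject it.
for device in DEVICES:
    push_line(device, NEW_SYNTAX)

# Pass 2: devices where pass 1 didn't take get the old-syntax line instead.
for device in DEVICES:
    if NEW_SYNTAX not in fetch_config(device):
        push_line(device, OLD_SYNTAX)
```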

And then there’s RMM at the large corporation. Thousands of switches and routers, some on very dodgy Internet connections, all of them being monitored. This means the poor sap with the on-call phone is constantly answering when the NOC calls in to say that the Dakar site is down. Or the Guadalajara site. Or the Noida site. Or the Ho Chi Minh City site. Or the Chengdu site. Or the Narvik site. Or the Deadhorse site. And the NOC guy reads out the entire device name and IP address, letter by letter and number by number, so one has to sit and wait through it all before saying, “Acknowledged. Please open a ticket with the ISP.” I can’t remember a happier day than when the policy was finally re-done so that the NOC would just open the blasted ticket on their own without requiring acknowledgement from engineering.

Still, we were blessed in that we had nearly every switch under management. This did have one side effect, however… we wouldn’t believe a switch existed if it wasn’t in the RMM tool until we saw it listed as a neighbor on another switch and pinged it. That’s when we discovered that some switches couldn’t be brought into our RMM tool because they didn’t support SNMPv2. Or because nobody could remember the password to get local access and nobody had the nerve to take it to ROMMON mode to break into it. Or because the local support contract kept that gear out of our global tools.

Those problems were relatively straightforward compared to getting gear from specialty vendors into the RMM tool. Not all of them had the same implementation when it came to reporting, even things as simple as disk space and CPU usage. For disk space, does the vendor report total available space, across all volumes, or will it send an alert when one particular volume hits 95% capacity? Will it report overall CPU utilization or will it fire an alert when one of 16 CPUs goes over 90%? The answer is, of course, “It depends.” That means that alerts from some vendors actually aren’t alerts, they’re more like transient conditions of no great importance. It also means that some vendor gear could be in an alert state, but it doesn’t actually report it as such, given how it implements a particular SNMP MIB.
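Here’s a tiny sketch of why “it depends” matters: the same made-up numbers either fire an alert or look perfectly healthy, depending on whether you evaluate each volume and CPU on its own or only look at the aggregate.

```python
#!/usr/bin/env python3
"""Minimal sketch: the same made-up numbers either fire an alert or look
healthy, depending on whether the vendor reports per-volume/per-CPU values
or only the aggregate."""

volumes = {"/": 62.0, "/var": 96.5, "/data": 40.0}  # percent used, per volume
cpus = [12, 8, 95, 10] + [7] * 12                   # percent busy, per core (16 cores)

# Interpretation A: alert when any single volume or core crosses its threshold.
print("per-volume alert:", any(pct >= 95.0 for pct in volumes.values()))  # True: /var is nearly full
print("per-core alert:  ", any(pct >= 90 for pct in cpus))                # True: one core is pegged

# Interpretation B: only the aggregate matters, so the same box looks healthy.
print(f"average volume use: {sum(volumes.values()) / len(volumes):.1f}%")  # ~66%, no alert
print(f"average CPU use:    {sum(cpus) / len(cpus):.1f}%")                 # ~13%, no alert
```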

At all companies, there’s the issue with keeping the tools up-to-date. The day that the tool is launched for general use is such a bright, shining moment in the history of the progress of humanity, with all the devices that need monitoring in that tool, right where they should be. Within a very short time – overnight, in some cases – the information in it is obsolete. New devices aren’t added and decommissioned devices are showing red because nothing is reporting back at that IP address… and then they go green again when that IP is re-used, but we just haven’t realized yet that it’s a security camera now, not a loopback address.

Finally, there’s the issue of access. Even at the small company, not everyone who wants to know if a system is up will have access to the RMM dashboard. At larger and larger companies, access to that dashboard can get limited to the point where even the network engineers can’t look at it… or the tool is so cumbersome, there’s severe mental pain involved in getting information out of it.

And that’s why, even at a massively huge global megacorporation, I still got plenty of use out of running a shareware app that would ping a list of devices, so I’d know if they were up… it wasn’t an official tool with management and headcount assigned to it. It just ran on my desktop and running it meant I wouldn’t have to open a service ticket to ask someone if they could check to see if the RMM had a green dot by my device or not.

Understanding Security: The Spy

First of all, let’s take a look at an actual spy:

That’s John Walker, a US Navy Warrant Officer who, from 1967 until the FBI caught up with him in 1985, had a second career passing cryptographic information to the USSR. And you know what they say about moonlighting without telling your employer…

And you know what, he looks like one of us! This is not James Bond, not Austin Powers, not Jack Ryan, not any of those guys. This is the AIX guru that sits two cubicle rows over. One of us.

The difference between Walker here and a security guy is only in what information is gathered and who it is passed on to. That’s what a spy does, after all. All that Hollywood stuff is just that – make believe for the movies.

If you want a real spy movie that shows the security side of things, watch a 36-minute US Army training film from 1969 about counterintelligence work. It’s set in West Berlin and goes through the steps of gathering intelligence and then using that intelligence to develop operational plans. https://www.youtube.com/watch?v=E3hAUTGm1D8

I watched that short film and it totally clicked with me. The heroes of the film are guys that look like me and my co-workers, doing things me and my co-workers can do. Namely, gathering information and following up on leads. To be sure, the baddies, like Walker up there, also look like me and my co-workers… after all, it’s the admins that outsiders want to turn and get working for them, right? But I digress. Gather information, follow leads, document everything, that’s us.

An important note in the film is that an intelligence operation in which information is passed up to a superior is a successful operation. Think about that. We may think what we have discovered may require immediate action, but it’s not always our call to make. We inform the decision makers and leave it at that.

For what it’s worth, the film underlines the importance in gathering information in such a way as to not alert the target – this helps me to deal with the urge to act immediately. Now, there are routine checks that we do for compliance and such, and I’m sure clever attackers will learn to avoid those patterns, but when we run a check and find something out of the ordinary, we report on the details and then coordinate with other groups to see what kind of follow-up is needed.

In current terms, coordination with other groups often means coordinating data from different systems. Putting all the data together helps to build a complete picture of activity. Packet captures, DNS traces, all that fun stuff – assemble it to show the whole story as far as we can tell. That’s what counterintelligence agents do… and what we do in security.

It’s pretty easy to take old-school information and translate it into updated ideas, especially since the core best practices and procedures remain the same. There are plenty of other training films out there to watch where you get to see how any person, with proper training and expectations, can do security work. You don’t have to be James Bond and you’re not fighting Dr. No. Everyone involved is human.

Thanks to these old training films, when I hear the word “spy”, I don’t think of James Bond. I think of me.

Understanding Security: The US Space Program

“But you said you wouldn’t glamorize the security profession!” I hear some of you thinking. How do I hear you thinking? Let me tell you about the sensors in my company’s product… But seriously, I can’t really hear you thinking and I’m not really glamorizing the security biz. That being said, it’s very much like the US space program, once you take the program in its totality.

Start with the executive sponsor speech after some big events have made headlines. Stuff just happened and we have to take this matter seriously. We don’t do this because it’s easy, we do it because it’s hard. Let’s get a budget together, a project office, and some staff that are willing to make “risk” their middle names.

Everyone has an eye on the pilot programs, but not everyone understands the science behind the project. In fact, probably the only people who fully understand the complexity of the work are those directly connected to it in design and implementation groups. Management is pretty much there to make sure things get done and that they get numbers to prove that things got done.

When a major milestone is reached – that first site comes online! – everyone is ready to send congratulations and have a little party. But after that, interest wanes. People begin to question if we’ve gotten enough out of the project and if money wouldn’t be better spent elsewhere. If there should be a failure, there’s a big chance that the project budget gets cut or the whole thing is paused for a year or more while everyone takes a step back to figure things out. The project could even get shelved at that point.

What keeps the project from getting cut or canceled entirely? Information, my friends. Information. If the project can consistently produce streams of actionable information, it can stay alive. If upper management comes to depend on that information, then the project will become an institution, more or less. It will be operationalized and staff will be put in place for daily tasks and routine maintenance and changes. It will never have as much excitement as that first site coming online, but it will still keep chugging along and will be useful.

Some staff may talk about scaling the project out to truly massive scales. Budget-minded officials will be the first to throw cold water on those dreams. People familiar with the limits of the technology being used will also diminish excitement for the project, as they question if it really will scale out like that. Voices calling for tighter integration with existing systems will win in budget discussions because what was once risky is now a sure thing, and it’s safe to play things conservatively. That’s especially true when budgets and staff are big.

You stare at a screen all day, solve some tricky problems, engineer solutions, pray to God nothing goes wrong, hope the budget doesn’t get cut, and nobody really knows who you are or what you do. Are you in Mission Control or the Security Team?

Understanding Security: Get Your Metaphors Right

Forget any analogies dealing with pitched battles. Security professionals are not generals, foot soldiers, commanders, admirals, missile base commanders, gunfighters, or X-wing squadron leaders. Thinking that we are such things puts us in the wrong frame of mind, where we expect a conventional conflict. Even if such a conflict is edged in trickery or clever deception, it’s simply not how things work in information security. We’re more in a world of trickery and clever deception, sometimes edged with conventional conflict, if anything.

If we want comparisons to professions, we need to look at spies, pest exterminators, librarians, cattle ranchers, and forest rangers. These are people who manipulate knowledge, guard assets, and who deal with hidden threats. If you still want military metaphors, I’ll allow people clearing minefields, sentries, codebreakers and intelligence analysts (although those are technically spies), and military police. Let’s get rid of the glamour and focus on the dirty work, OK?

There are two major reasons to come up with the right metaphors and examples for cybersecurity. One is so that we get ourselves into good habits of mind for dealing with threats. Two is so that we can use real-world explanations to help people outside of the profession understand that we don’t simply identify all the PCs running “Hacker.exe” and then blow them up.

I’ll even dare to say that much of our profession has a connection to organizations that make us all uncomfortable. While I don’t want the NSA to harvest all of my data, I’m perfectly ready to recommend massive data harvesting to organizations wanting to improve security. While I’d hate for my wife and kids to spy on me, I’m always advocating that we set up as many sensors and data collectors as possible in a customer environment, even getting PCs to report on each other.

In other words, you know you’re a security professional when you read 1984 to get ideas about doing your job better.

Now, not everything in this series will go dark like that. Then again, dark is what we all deal with, so don’t be surprised to find metaphors in that region. They may not necessarily be the metaphors you want to share to explain the profession to others, but they could very well be the metaphors that unlock the habits of mind you need to improve your focus.

Security for All Sizes: When Vendors Fall Out

When a security pro gets different vendor solutions to work with each other, it’s a cause for celebration. Unfortunately, most security stories seem like they’re written by George R.R. Martin and they don’t resolve to “happily ever after” conditions. Yes, things can run well for a while, even a good long while, but there comes a day for many a partnership where the parties involved part ways and their products no longer play well with each other.

This isn’t just something in an update breaking a functionality. That gets fixed with a call to tech support and developers writing a hotfix. This is the kind of breakup that gets announced on page 23 of a vendor website or which is mentioned quietly by a sales account manager that can’t renew licensing on an integration package. The vendors, for strategic or other reasons, are no longer on speaking terms.

Vendor A releases a product that competes directly with vendor B.

In this scenario, vendor A launches its new product and presents its customers with a clear choice: adopt our product or do without the integration. This move is possible only if A has a big market share. It doesn’t have to be a dominating share, just a big one. It doesn’t even have to be in the security area – maybe A was eyeing a way it could get into security, and saw this as its market entry opportunity.

At a small company, they’re all ears if A’s solution is cheaper to implement than B’s. If that cost reduction is achieved by discounts over both the old A product and A’s competing product, so be it. Cheaper is cheaper. If the competing product from A delivers most of what they get from B, then the small company can learn to live without the features from B that they no longer will get.

If A’s solution isn’t cheaper, then the small company will learn to live without the direct integration. Maybe some whiz writes a PowerShell script that produces a cool CSV or something to help bring data together, but such whizzes are rare to find at small companies. And if they’re found at small companies, chances are they’re producing code to improve profitability.

Alternately, if there’s a vendor C that does integrate with B – and is cheaper than A – then maybe it’s time to drop A altogether.

At the medium-sized company, it’s more likely that they’ll do a bake-off between the competing products and use features in combination with pricing as determinants about which product they go with. It’s less likely that they’d drop one or the other entirely all at once, but when the products come up for lifecycle renewal, they can make a switch at that time.

For the large company, it may come down to a question of how big A is. If A is truly huge, then it’s bye-bye B and hello A if the company IT leadership wants to standardize on A. If the leadership, however, is wary of A’s size, then it keeps B and A is a non-starter. These are decisions that come down to executive strategy and have little to do with price or features. Not to say that price and features will be mentioned in conversations about keeping or switching, but the underlying rationale will be the large company’s overall relationship with big vendor A.

So why wouldn’t A compete with B if A didn’t have a big market share? It would be because A doesn’t just integrate with B. A integrates with lots of other vendors and, because it can’t control the market, bills itself as being comfortable in multi-vendor environments.

And if A has a minuscule market share, competing with B is what is commonly known as a “mistake” and will result in A going out of business or withdrawing its competing product.

Vendor A terminates an exclusive partnership with B and now works directly with C

This scenario assumes a tight integration between A and B, more so than what is normally offered in an exposed API or a SQL transaction query. Maybe the two companies were drawing closer to each other, with a merger likely, but things changed and now A is with C, not B. This can happen regardless of A’s market share – provided that C is at least as big as B if A is itself small.

In this scenario, pricing is not likely to be a factor. C will likely cost about as much as B, once the per-endpoint licenses are tallied up. This will come down to a question of features and whether or not A+C is, overall, better than A running side by side with B. If yes, then B will be on its way out to make way for C. The only companies keeping B will be the ones that didn’t do any testing and that won’t talk to sales teams.

If no, then the executives at A will have some hard pondering to do when they lose revenue on their software that integrates with B and don’t see enough sales of the C integration to make up for it. How could something like this come to be? Easy. People lie to executives, especially to executives that want to be lied to. If A’s leadership is surrounded by mediocre sycophants, A will make some huge blunders.

Vendor A cuts integration with B because support costs exceed revenue

No hard feelings in this scenario. There just simply aren’t enough people using B to justify the support costs of keeping the connector between A and B up and running.

At the small company, it just means lower overall cost to drop renewal on that product. Since there’s no other product that does B’s job that integrates with A, there’s no compelling story arising out of this scenario to justify replacing any product… unless there’s a cheaper product that does A’s job that integrates with B… Absent that, the company learns that integration is a fleeting thing and may well make a decision to not integrate other products because they don’t want to get burned again.

The medium company may make the same choices, perhaps choosing to have all security systems pump information into a data lake and then try and make sense of things. There’s a good chance that the lake will always be there, but few will swim in it.

At the large company, an interesting mathematical problem emerges: would subsidizing support with a custom agreement be cheaper than living without the integration? If yes, then while the rest of the world lives without the connection, the large company will keep it going… and going… and going… and going… to the point where, ten or twenty years down the line, some new person is shocked to see that software still running somewhere! Think it can’t happen? Just ask Microsoft how many Windows 3.11 support contracts they still have with major customers…