
The Fallacies of Network Security

Like the Fallacies of Distributed Computing, these are assumptions made about security by those who use the network. And, like those other fallacies, these assumptions are made at the peril of both project and productivity.

1. The network can be made completely secure.

2. It hasn’t been a problem before.

3. Monitoring is overkill.

4. Syslog information can be easily reviewed.

5. Alerts are sufficient warning of malicious behavior.

6. Our competition is honest.

7. Our users will not make mistakes that will jeopardize or breach security.

8. A perimeter is sufficient.

9. I don’t need security because nobody would want to hack me.

10. Time correlation amongst devices is not that important.

11. If nobody knows about a vulnerability, it’s not a vulnerability.

Effects of the Fallacies
1. Ignorance of network security leads to poor risk assessment.
2. Lack of monitoring, logging, and correlation hampers or prevents forensic investigation.
3. Failure to view competitors and users with some degree of suspicion will lead to vulnerabilities.
4. Insufficiently deep security measures will allow minimally sophisticated penetrations to succeed in ongoing and undetected criminal activity.

I wrote this list for the purpose of informing, educating, and aiding any non-security person who reads it. Failing that, it serves as something I can fall back on when commiserating with other security guys.

A Grim Observation

Consider the SM-70 anti-personnel mine, devised by East Germany to kill people scaling border fences to escape to the West. Its purpose was not to reduce the number of escape attempts, but to reduce the number of successful attempts. Over time, it did reduce the number of successful escape attempts, but it brought neither the total number of attempts nor the number of successful attempts to zero.

I bring this up to show that, even with the extremes to which the DDR was willing to go to prevent population exfiltration, escape remained an ongoing issue through the entire history of that nation. They killed violators of their policy, the killings were well-known and publicized, and yet the population continued to try to move west. This has implications for corporate security.

Namely, corporations can’t kill off violators of their policies, so those violators will continue to violate. The reward, whatever it may be for them, comes with very little relative risk. Criminal penalties? Those apply only to violators caught by companies unafraid of the negative exposure. In most worst-case scenarios, a violation costs the violator a job. And considering that a big chunk of the people who breach security are already planning to leave the firm, job loss is a threat only insofar as it interferes with the timing of their departure.

While the leaders of the DDR could take a long-term approach to their perimeter issues, most executives answer to a board that wants to see results this quarter, or within the first few quarters after a system goes live. Security is an investment, right? Well, where is the return on this investment?

Security is not playing a hand of poker. It is a game of chess. It is a game of chess in which one must accept the loss of pawns, even knights, bishops, rooks, and maybe even a sacrifice of the queen, in order to attain the ultimate goal. Sadly, chess is not a game that is conducive to quarterly results. Just as the person attacking IT systems may spend months doing reconnaissance before he acts, the person defending IT systems must spend months developing baselines of normal activity and acquiring information on what traffic is legitimate and what is not. The boardroom is not a good place to drive security policy.

But, quite often, the security policy does come from the boardroom, complete with insistence that the hackers be found as soon as the security system is in place. Once it is in place, anything that gets past the security system is seen as a failure of the system. There’s no concept of how many violations would get through without the system in place or how many have been deterred by it, just that security needs to work now, and failure is not an option… and other platitudes that make for good motivational posters.

That’s simply the wrong mentality about security. Going back to the DDR – a lethal system with a long-term perspective and a massive intelligence network behind it – we see a highly effective system that nevertheless was defeated by those who were both determined enough and lucky enough. The leaders of the DDR did not scrap it until the DDR itself was pretty much no longer a going concern. With less ruthless security in place, a lack of long-term perspective, and a failure to orchestrate all available intelligence sources, is it any wonder that IT security is such a problem for companies to get their arms around?

And if companies want to step up their potential penalties to include criminal charges, they cannot do so without first developing a proper concept of security. They will need to train employees in forensic procedures. They will need to get legal and HR involved more closely with IT – and to be more up-to-date on both the technology and the legal environment surrounding it. There will have to be decisions about what breaches must be allowed so as to collect proper evidence, and so on and so forth. We’re talking about the development of a corporate intelligence community.

And, even then, that’s no guarantee. But it’s a start. Most companies’ security policy is as effective as a substitute teacher ignoring all the students in the class. Some step up their game to that of a substitute screaming at all the students in the class. True security needs to have consequences, investigative procedures, and collections of data – and, even then, there will always be breaches. Security will not eliminate the problems, only reduce them.

Wet Economics and Digital Security

A student once unwittingly asked a physicist, “Why did the chicken cross the road?” Immediately, the physicist retreated to his office to work on the problem.

Some days later, the physicist emerged and told the student, “I have a model that explains the chicken’s actions, but it assumes the road is frictionless and that the chicken is both homogeneous and spherical…”

In the last 50 years, economics has increasingly tied its models to frictionless decisions and homogeneous, spherical employees. These employees are as interchangeable with each other as are the widgets a company mass-produces. They show up to work at a certain wage and, since perfect competition in the labor market makes these models work, there is an assumption that the cost of labor is at a point where the market clears – no need to offer any more or less than the going wage rate.

As the world economy moved from regionalization to globalization and digital technologies made employees’ locations no longer tied to where a firm was legally chartered, the idea that costly labor in one market could be replaced with cheaper labor in another market fit well with the notion that employees were homogeneous, spherical physical bodies making frictionless decisions.

The biggest problem with the economic models that have dominated economic thought over the last 50 years is that, while they are great for predicting normal ups and downs in periods of relative calm, they are useless in times of massive upheaval. Put another way, they are like weather forecasting models that see category 5 hurricanes as “an increased chance of rain” or massive blizzards as “snowfall predicted for the weekend”. These models go blind in such unanticipated crises and are particularly useless for crises precipitated out of massive fraud and abuse. We saw the flaws of the models first in 1998, then in 2001, and again in 2008. We may see another round of flaw-spotting very soon, what with the unease afflicting a number of major banks in Germany and Italy…

But the second-biggest problem with the economic models is less obvious, and that’s because it involves the one thing everyone seems to leave out of their thought processes: security. Because employees are not interchangeable spheres and their decisions frequently involve friction, we can see security issues arising out of our reliance on those economic models.

The first is that employees are not widgets to be had at the lowest price: replacing a skilled veteran with many years at a firm, even with someone who has the same amount of experience at another firm, involves a loss of the institutional knowledge the veteran had. The new person simply will not know many of those lessons until they are learned the hard way. In security, that can be costly, if not fatal.

It’s even worse if the new employee has significantly less experience than the veteran. I shudder whenever I hear about “voluntary early retirement” because it means all those people with many, many years at the firm are about to be replaced by people of vastly less experience. Because that experience is not quantified in the models used, it has no value in the accounting calculations that determined cutting the payroll to be the best path to profitability.

Then there’s the matter of the new employees – especially if they’re outsourced – not having initiative to fix things proactively. That lack of initiative, in fact, may be specified in the support contract. Both parties may have their reasons for not wanting to see initiative in third-party contractors, but the end result is less flexibility in dealing with a fluid security issue.

Remember the story of the Little Dutch Boy who spotted a leak in the dike and decided to stop it with his finger, then and there? What sort of catastrophe would have resulted if the Little Dutch Boy had been contracted by the dike owners to monitor the dike, to fill out a trouble ticket if he spotted a breach, for the ticket to go to an incident manager for review, then on to a support queue with a 4-hour SLA to contact the stakeholders, so that they could perform an incident review and assess the potential impact before assigning it the correct priority? There would be a good chance that the incident would resolve itself negatively by the time it was graded as a severity one incident and assigned a major incident management team to set up a call bridge.

Security needs flexibility in order to succeed, and that kind of flexibility has to go along with the ability to exercise initiative. Full-time employees, costly though they may be, are more likely to be authorized to exercise initiative – and, if they’re experienced, more likely to use it.

On the matter of those decisions with friction… at any time, an employee can make an assessment of his or her working conditions and decide that they are no longer optimal. Most employees will then initiate either a job search process or a program of heavy substance abuse to dull the pain brought on by poor life and career choices, but others will choose different paths. It is those others that will create the security issues.

These others may decide that the best thing to do in their particular position is to get even with their employer for having created an undesirable situation. In the film “Office Space”, three of the main characters chose that path and created a significant illegal diversion of funds via their access to financial system code. They also stole and vandalized a laser printer, but that had less impact on their employer than the diversion of funds. In the same film, a fourth employee chose to simply burn down the place of business. Part of the popularity of the film stemmed from the way those acts of vengeance, in particular the vandalism of the printer and the sabotage of the financial system, rang true with the people in the audience.

We have all known an employer that, in our minds, deserved something like what happened in the movie. When I read recently of a network administrator deleting configurations from his firm’s core routers and then texting all his former co-workers that he had struck a blow on their behalf, I saw that such sentiments were alive and seething in more than one mind. As options for future employment in a region diminish because the jobs that once sustained that region have gone elsewhere, that seething resentment will only increase, resulting in ever-bolder acts of defiance, even if they result in the self-destruction of the actors initiating them.

But then there are the others who give even more thought to their actions and see a ray of hope saving them from self-destruction in the form of criminal activity, whether they sell their exfiltrated data for money or post it anonymously on WikiLeaks. The first seeks to act as a leech off of his employer; the second has a motive to make the truth be known. Both actually prefer that the employer’s computer systems be working optimally, so as to facilitate their data exfiltration.

In economic models, this should not be happening. People should be acting rationally and either accept lower wages or retrain for other jobs. In real life, people don’t act rationally, especially in times of high stress. So, what can firms do about this in order to improve security?

The answer lies in the pages of Machiavelli’s “The Prince”. Give employees a stake in the enterprise that requires their loyalty in order to succeed, and then honor that loyalty, even if it means payroll costs don’t go down. It won’t eliminate criminals 100%, but it will go a long way towards limiting the number of criminals in one’s firm and maximizing the incentive for loyal employees to notice, report, and react to suspect behaviors. If a firm were once again a place where people could be comfortable with their job prospects for the years ahead, it would be less of a target in the minds of the unhomogeneous, unspherical employees whose decisions always come with friction. It would be a firm with better retention of institutional knowledge and expertise in dealing with incidents.

Now, will boards and C-level executives see things this way? Not likely, given that the economic models of the past 50 years dominate their thinking. Somehow, the word has to get out that econometric models are not the path to security. Security is not a thing, but a system of behaviors. If we want more security, we have to address the behaviors of security and give employees a reason to embrace them.

The Right to Know and Institutionalized Ignorance

I take the title for this from the Yes, Minister episode in which the bureaucrat, Sir Humphrey Appleby, saves the political career of Jim Hacker by not providing him with full information about an issue. In a nutshell, there are some things that are better for the people at the top to not know. Appleby explains that there is a certain dignity in ignorance, almost an innocence in saying with full honesty, “I did not know that.”

Now, consider your own firm and its security. What if there’s a conduit from the Internet to the DMZ, and from there on to the entire corporate network, including areas segregated for business-critical functions? And what if that conduit has been there for over 10 years? And what if your firm is due for a security audit or in the process of having a security audit? Does anyone in a high position – or any of the auditors, for that matter – personally benefit from this huge flaw being made known?

It’s hugely embarrassing. It’s been there for 10 years, and the network people have known about it all along, but have grown tired of being ignored by the systems people who refuse to re-architect their system with security in mind, since that would significantly impact production. If the people on top and the auditing firm had to deal with this now, I could see more than one person getting fired or put on a remediation plan because of it.

But if nobody officially knows about it, nobody has to officially do anything about it. The audit completes successfully and the auditors retain their contract to provide auditing services. The managers and executives can nod their heads that, yes, they’ve got their arms around this security thing and that things are looking pretty good on that front.

Yes, the execs and auditors have both a right to know and a need to know about that huge problem, but neither has a desire to have such highly embarrassing information made known. There’s a sort of institutionalized ignorance about the situation, to the point where, if there were a breach via that conduit, an executive could legitimately protest to the engineers and developers, “Why didn’t you tell anyone about it?” Never mind that they did, but got ignored, tabled, distracted, re-prioritized, or otherwise sidetracked.

No, if something had been done right away, there’d be no problem. But this has festered and become toxic. It is best for the careers of those closest to it to ignore it. If it does result in a breach, then those at the top have to throw as much blame around as possible so that nobody will try to assign any blame to them, and that blame flows downhill to the very people who tried to raise the issue to begin with.

In the episode, Appleby explains the difference between controversial and courageous:

“Controversial” only means “this will lose you votes”. “Courageous” means “this will lose you the election”!

Similar parallels apply to business. This is why I roll my eyes a little every time I hear an exhortation to innovate and think outside the box. Trust me, if I’m not following a specified process to innovate or doing a proper SOP for thinking outside the box, I’m doing something either controversial or courageous, with associated negative consequences.

It stands to reason that if I were to email a C-level person directly, copy the entire management chain between me and him, and then describe a situation as bad as the above, I’d be doing something highly courageous. If I do less than that, then institutionalized ignorance can keep anyone with a right to know the bad news from actually having to hear it, thereby maintaining their dignity in ignorance.

Apart from being a cautionary tale about not developing a too-cozy relationship with one’s auditors, this is also a very real illustration of why a culture of permitting mistakes has to be in place in order for security to have a chance. Even monumental mistakes such as this 10-year marvel need to be allowed, so that the people responsible for fixing them will actually do something about them other than sweeping them under the carpet and pretending that all is well as they desperately seek employment elsewhere, before the situation blows up.

We’ve got the need to know and the right to know… but are we strong enough to know even when we lack the desire to know?

Manual Override

As the Himynamistan diplomatic convoy made its way to the intersection, the Dassom agent noted their passing as he sat slumped and fetid, like countless other bums on the streets of San Francisco. The convoy made its halt at the stop sign, autonomous brakes holding firm against the gravity of the downward slope.

As the convoy yielded right-of-way to the cross traffic, the Dassom agent, nameless in the shadows of the alleys of dumpsters between glittering financial monuments, lifted a small infrared controller and pointed it at the 18-wheeler loaded with pig iron that was rolling along just behind the convoy.

The Dassom agent pressed a button on the IR device and shot a signal to the 18-wheeler.

You know, how that big truck got to the top of the hill with all that metal in it was a testament to the builders of the engine in that beast of a machine. Well done, lads! Such a shame that the engineering and craftsmanship were going to be wrecked soon after the truck’s driving software interpreted the IR signal as a manual emergency override to disengage all braking systems and to accelerate.

The Dassom agent did not turn to one side or the other, but kept the metallic collision between the truck and the Himynamistan diplomats in their unmoving vehicles to his back. Most of the wreckage went forward, towards the cross street traffic, but a few small ricochets bounced off the back of the agent’s hoodie.

Prioritizing Security Spending

I’ll put on my manager/owner hat, since I have one lying about the house, and will look at the receiving side of my constant cries to emphasize security spending. There, it’s on, although it seems to restrict blood flow to the part of my brain that handles technological details… never mind, let’s get to budgeting!

First off, security is very important. It’s so important, I’ll use a few more “verys” to emphasize that importance. It’s very very very very very important. But, before I can pay for security, I have to pay for a few other things.

Out of my revenue, first to go through are my loan payments. If I don’t keep current on my business loans, I close my doors. That’s a certainty. Ditto for payroll, rent, and utilities. I have to pay those, on time, every month, or I *will* close my doors.

Next up, I have to pay for the materials I use in my business. Whether those materials are solid manufacturing inputs or intangible information, they are what I use to make my stuff. Without those inputs, my business is no more.

Then there’s advertising. I have to have that, right? I also need money for fees, which I pay to local, regional, and national government authorities in order to stay in business. If I don’t pay those, my business will certainly not be able to operate.

Now, I’ve got some money left over. Part of me wants to have a little more for myself, to compensate for all those days I lived out of my office, getting this business off the ground. That’s why I went into business, right, to make a little something for myself, over and above what The Man would pay me in a regular gig? I’ve got a business partner, as well, and we’ve been through everything together, all these years. I’ve got to give him his cut, fair’s fair.

What’s left is my IT budget. Before anyone panics, let me assure you that there’s still quite a lot of money in that pot.

But, before I pay for any security, I need to pay for my existing licenses. If my PCs don’t have an operating system, they don’t run, and I don’t have a business anymore. Then I pay for my productivity software because what’s the point of having PCs if they don’t do anything useful? No, I must have word processors, spreadsheets, and email! No compromise on that!

If I have specialized software for my line of business, you better believe there are some big-time license fees to run that stuff. But, without it, I can’t produce what my customers want. Honestly, security is important to me, you saw how many “verys” I used up there, but I have to first allocate money for what’s core to my business.

But I’m almost to security in my line-items. Let me first cover printing costs, VoIP services, Internet connections, and a new box fan for my server closet. As long as we keep the fans on and the door open, the servers won’t overheat. That’s a good feeling to have, the feeling you get when you know the servers won’t overheat.

Now that I’m ready to buy some security, please don’t bring up the issue of locks on the doors. I can lock the outside doors, but if I lock the door to the server closet, we’re finished as a going concern.

Looking at the budget, there’s not a lot, so maybe I should get the most important piece of security gear and hope it does most of the work I need it to do. I’ll get a firewall and pay for that annual license/maintenance.

Then there’s an antivirus program that’s only $21.95 per workstation when I buy in bulk, so I’ll get that. I don’t know if it’s any good, but it’s at least something.

I need to buy a backup and recovery solution, so that’s going to set me back a bit.

I also have to pay for spam filtering and DDoS protection through my ISP, or I get shut down by spammers and/or DDoSers. This expenditure, in fact, should have come before the backup and recovery.

When I ask the guy who comes in twice a week after lunch to do my IT about what else I should get, he’s got a long list of cool stuff. But when I look at the prices he quotes for them, I have to shake my head. I really can’t afford to spend thousands on a big piece of hardware like a proxy server or an IPS. Maybe if I saved up, I could, but I can’t spend that kind of money right now. And don’t even talk to me about IP protection or UEBA or other big systems like that; there’s no way I can buy one of those solutions.

The thing is, security is a matter of maybe I’ll lose my business if I don’t have it. The other things are a matter of I *WILL* lose my business if I don’t have them. Will beats maybe, every time. That good feeling I have about the servers not overheating is countered by the worry I have that one day, maybe tomorrow, I’m the next small business that gets hit with something that the firewall, antivirus, and/or antispam-antiDDoS can’t deal with. But that’s a maybe, a roll of the dice.

Eventually, I learn to live with “maybe” and I just focus on running my business, the best I can.

And if all my PCs, unbeknownst to me, are secretly mining bitcoins for North Korea or participating in Mafia-run botnets, it’s no concern to me as long as I keep in business. What I don’t know doesn’t impact my bottom line.

I’m not being callous or flippant about wanting to emphasize security but simply not having the budget for it. That’s a reality. And if I get to where the “maybe” doesn’t nag at me anymore, then I can live with myself and my decisions.

I just took off my manager/owner hat and read that over. It does make sense to me. As a security person, I see all the breaches and crashes and outbreaks. But I forget that, for most people, these are only rumors, things that happen to someone else. Daily bashing away at firewalls, constant spam and DDoS, legacy malware trying to infect your PC like it’s 1999 – those are the constants that happen to everyone. Businesses must protect against them. The other stuff, though, is in the realm of “maybe”, and that’s not a strong enough case to justify a major expenditure, particularly one that could cut deep into the profitability of a firm.

Cyberattack Doomsday Prepping

Time to do a little doomsday prepping for the folks on the IT floor. Cyberattacks are happening constantly and, eventually, one will succeed against your organization. There are things that, when that day comes, will be absolutely great to have on hand – and the same things that, if you don’t have them, you will wish real hard that you did. So what should be in the doomsday prepper bug-out kit?

First on that list has to be an external hard drive. I specify an external drive because a PC off the network is too easily left unpatched and could accidentally be connected to a hot network, whereupon all its information gets compromised by the ransomware. A stack of papers in a sealed envelope wouldn’t be a bad way of storing vital information, either.

What would I put on the hard drive? I would start with current copies of network diagrams. Even relatively recent copies will do the job. If a ransomware worm gets into the network share where these are kept, it’s game over as far as sharing intel quickly with first responders.

Likewise, information on what SNMP communities exist and what devices they work with; SNMP v3 information and what devices accept those credentials; working TACACS accounts that are not connected to AD; where network devices still have local accounts, and those credentials; which devices do ssh with keys of length 1024 or greater; which devices are still stuck on telnet. Knowing this does two things: it helps with getting access to determine whether the network devices are compromised, and it allows an educated guess about which devices and credentials are most likely compromised.
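For anyone assembling that inventory, here is a minimal sketch of one way to capture it for the external drive, assuming a simple CSV layout. Every field name, hostname, and value below is a hypothetical placeholder rather than output from any particular tool, and the resulting file should only ever live on the offline drive:

# Sketch: write a device-access inventory to a CSV for the offline "bug-out" drive.
# All field names, hostnames, and credential references are hypothetical placeholders.
import csv

FIELDS = ["hostname", "mgmt_ip", "snmp_version", "snmp_community_or_v3_user",
          "auth_source", "local_account", "ssh_key_bits", "mgmt_protocol"]

devices = [
    {"hostname": "core-sw-01", "mgmt_ip": "10.0.0.1", "snmp_version": "v3",
     "snmp_community_or_v3_user": "nms-ro", "auth_source": "TACACS (non-AD)",
     "local_account": "break-glass", "ssh_key_bits": 2048, "mgmt_protocol": "ssh"},
    {"hostname": "old-idf-sw-17", "mgmt_ip": "10.0.17.2", "snmp_version": "v2c",
     "snmp_community_or_v3_user": "public-ro", "auth_source": "local only",
     "local_account": "admin", "ssh_key_bits": 0, "mgmt_protocol": "telnet"},
]

with open("devices.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(devices)

# Devices still on telnet or with weak/no ssh keys are the first ones to treat
# as suspect after an incident.
suspect = [d["hostname"] for d in devices
           if d["mgmt_protocol"] == "telnet" or d["ssh_key_bits"] < 2048]
print("Review first:", suspect)

Even a spreadsheet maintained by hand serves the same purpose; the point is that the information exists somewhere ransomware cannot reach.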

What else… how about a client installed on each PC that is able to monitor the activity on the PC and also run scripts with local admin or system privileges? This client should be able to access the system independently of AD, which could be compromised in such a situation. Enterprise software distribution tools can be damaged in a major outage, so having the scripting ability means a software install can be invoked from a known clean network share. Granted, the client isn’t on the external drive or in the sealed envelope, but it’s something I’d want in place for my IT doomsday prepping.

I want it because monitoring activity on endpoints is critical. Anything and everything that provides information for reporting is excellent. If it can provide spreadsheets that can be further analyzed, even better. If the client or AD account can reach most machines, but is cut off on a segment of the population, then it’s a good bet that the ones where it has been cut off are compromised. Those monitors might be able to find dual-homed devices that can serve as vectors of contagion. You’ll want to know where those are and maybe shut them down as part of the prepping.
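As a rough illustration of that triage, here is a minimal sketch, assuming the endpoint client can export, per host, whether it is still reachable and which addresses its interfaces hold. The report structure and sample hosts are assumptions, not any specific product’s output:

# Sketch: triage an endpoint report. Hosts the client can no longer reach are
# treated as possibly compromised; hosts with interfaces in more than one
# subnet are flagged as potential dual-homed vectors of contagion.
# The report structure and sample data are hypothetical.
import ipaddress

report = {
    "hr-pc-01":  {"reachable": True,  "addresses": ["10.1.10.21/24"]},
    "eng-pc-07": {"reachable": False, "addresses": ["10.1.20.14/24"]},
    "lab-gw-03": {"reachable": True,  "addresses": ["10.1.20.30/24", "192.168.50.2/24"]},
}

possibly_compromised = [h for h, r in report.items() if not r["reachable"]]

dual_homed = []
for host, r in report.items():
    subnets = {ipaddress.ip_interface(a).network for a in r["addresses"]}
    if len(subnets) > 1:
        dual_homed.append((host, sorted(str(s) for s in subnets)))

print("Cut off from the client (check for compromise):", possibly_compromised)
print("Dual-homed (possible bridge between segments):", dual_homed)

Hosts in the first list get checked for compromise; hosts in the second list are candidates for isolation or shutdown as part of the prepping.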

But I’m just a network guy who does a lot of NAC work. What else would be good to have on that external hard drive? What would be good to have in a sealed envelope? Is there a way to securely store application code in the event that app servers are compromised? Speaking of servers, should we ensure that the server networks are properly segmented from the rest of the network?

In short, what are the things you would put into place if you were brought in to get an organization as prepared as possible for The Big One?

When RoI becomes DoS

Here’s the scenario: a firm purchases a security solution. The firm skimps on professional services and/or rushes the schedule on implementation and/or neglects to maintain the product properly.

Do not be surprised when, one day, that security solution does something that results in a system-wide outage:

Fig. 1: System-wide outage

Why were those decisions made? Because professional services, longer timelines, and proper staffing/coordination are all costs, and we demand better return on investment!

The problem is that many security systems have the capability to shut down the entire network, or kill access to PCs, or other stuff that, well, keeps devices completely safe from threats by denying any access to them whatsoever. And while an enraged executive can satisfy his need to offer up a sacrifice to the shareholders in his firm by kicking out the vendor closest to the outage, there’s still the problem of cleaning up the after-effects. The vendor typically survives to roll out product another day, but the firm is left with the same problem as before – and will have to now go to another vendor whose product can be just as destructive as the first, if implemented incorrectly.

Fig. 2: Vendor making an exit from firm after system-wide outage

Worse, the firm may choose to reject all vendors of a particular solution and instead seek to eliminate all technology that requires such a solution with a Bold Move. “We’re going to get rid of all our Windows workstations and switch over to thin clients that run on burner phones, so we don’t need firewalls anymore.” Yeah. Good luck with that. This much I know: whatever product is mentioned as part of a Bold Move Strategy definitely has an amazing salesperson in that region. Chances are, that Bold Move is going to involve a purchase order that skimps on professional services, compresses timelines, and lacks proper staffing and coordination, which may result not in a system-wide outage, but an undesired result after a lofty promise.

Fig. 3: Undesired result after a lofty promise

This, in turn, can result in the executive who oversaw a failed vendor implementation and a failed Bold Move taking an opportunity at another company. This makes way for a new executive to step in and try his hand at choosing between doing things on the cheap or doing things correctly. Because RoI is much easier to measure than the chance that a botched implementation results in a DoS, my money’s on the cheap.

Fig. 4: Another botched implementation of a security product…

Paying for Network Security, One Line of Code at a Time

Here’s the situation: there’s a company that has handed all operational running of its network over to a third-party integrator. At first, the thought was that the company would save loads of money, but now the truth is known. This integrator charges by the line of code. The only way to save money is to never issue changes to the switch configs.

Along comes an auditor, and the auditor makes a finding that the company needs more network security. His change involves adding just one line of code.

To a switchport config.

To every access port.

In the entire network.

The customer does the multiplication and comes to the conclusion that this auditor *has* to be getting a kickback from the integrator.

It doesn’t matter if the integrator makes the changes by hand or if it automates them: the contract spells it out clearly, and each line of code involves a charge.
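To see why the customer reaches for the calculator, here is a back-of-the-envelope sketch; the switch counts and the per-line charge are invented for illustration and come from no real contract:

# Sketch: what one "harmless" audit finding costs when the integrator bills
# per line of configuration. Every number here is hypothetical.
switches = 400            # access switches in the network
ports_per_switch = 48     # access ports per switch
lines_per_port = 1        # the auditor's single added line
charge_per_line = 15.00   # integrator's per-line change fee, in dollars

total_lines = switches * ports_per_switch * lines_per_port
total_cost = total_lines * charge_per_line
print(f"{total_lines} lines of config, billed at ${total_cost:,.2f}")
# With these placeholder numbers: 19200 lines of config, billed at $288,000.00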

It may come out cheaper to just fire the CISO every year, pay a fine, and never really fix the problem.

What are some other tack-on monetary cost barriers that integrators add that get in the way of security? I’ve seen quotes for a pair of firewalls that, in retail terms, would cost as much as purchasing the same pair once a month for an entire year and still having enough money left over to cover my salary, albeit without benefits. I suppose if they bought the one firewall pair, the cost of the other 11 could be transferred to cover my benefits.

But I did more at my job than manage a single pair of firewalls – how could this be an actual savings? It was only a cost savings if we never purchased the gear in the first place!

And that ain’t good security…

Integrators also introduce non-monetary costs (and if I sound like an economist now, it’s because I used to teach Economics…) in the form of the time and effort it takes for their customers to get the paperwork put together to submit to them whenever a new system is introduced to the network. Does the product also need to access network equipment? Oh dear, oh dear, oh dear, that may be a problem…

… because the integrator uses the same management environment for multiple customers. If my product can access customer AAA in the integrator’s environment, it is only a few lines of configuration away from accessing everything from AAA to ZZZ in that environment.

That also ain’t good security…

Then there’s the time I submitted a request to have a firewall rule added to permit a group of 5 source addresses to talk to a group of 3 destination addresses over a group of 10 TCP ports.

Did the integrator create one rule and three groups?

No.

5 times 3 times 10 equals 150, the actual number of rules created by the integrator for my request.

And we paid for every line…
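For the curious, a minimal sketch of that multiplication, contrasting one rule built on three object groups with the fully expanded rule set the integrator billed for; the addresses, ports, and rule syntax are made up for illustration:

# Sketch: one rule referencing three object groups vs. the fully expanded
# rule set. Addresses, ports, and rule syntax are made up for illustration.
from itertools import product

sources = [f"10.10.1.{i}" for i in range(1, 6)]        # 5 source addresses
destinations = [f"10.20.2.{i}" for i in range(1, 4)]   # 3 destination addresses
tcp_ports = [8000 + i for i in range(10)]              # 10 TCP ports

# With object groups: one rule, three group definitions.
grouped_rule = ("permit tcp object-group SRC-GRP object-group DST-GRP "
                "object-group PORT-GRP")

# Without groups: one rule per (source, destination, port) combination.
expanded_rules = [f"permit tcp host {s} host {d} eq {p}"
                  for s, d, p in product(sources, destinations, tcp_ports)]

print("Rules with groups:    1   e.g.", grouped_rule)
print("Rules without groups:", len(expanded_rules))   # 5 * 3 * 10 = 150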

The Great Unplugging

Walk with me through a thought exercise… let’s say that two nations are at a high level of international tension, just short of a full declaration of war. Let’s also say that one nation’s Internet access is tightly controlled and the other’s is widely available. What happens when the two nations engage in a dark war in cyberspace?

By “dark war”, I mean one in which the nations can be pretty sure about who is sending cyberattacks, but they can’t prove it. They can’t prove attribution, because doing so would mean either revealing sources and methods of intelligence collection or admitting they simply don’t have any permanent, tangible evidence to work with. As such, the attacks go forward, as do the responses, but there is no public attribution of them, so they stay in the dark.

Back to the nations in the hypothetical example, the one that has tightly controlled Internet access is already set for cyberdefense. Its commerce and government likely do not rely upon Internet connectivity in order to run normally. Or, if they do require connectivity, it’s only with internal IP addresses, nothing or very little extraterritorial. As such, it is not much of a target for the other nation. Its networks are difficult to get into and wrecking them is little more than an inconvenience.

For the nation with widely available Internet access, commerce and government services depend upon the Internet as a lifeline. Without it, activity halts and few organizations are prepared for long-term offline activity. It is a target-rich environment in a dark war.

So let us say that attacks against the Internet-rich nation are increasing in frequency and cost. How can it protect itself?

First, it bans traffic from the attacking nation. Nothing allowed to or from it. This then leads the attacking nation to shift to another path, that of compromising systems in other countries, perhaps starting with those in the allies of the nation they’re attacking. This is where the Internet-rich nation then asks all its allies to cut off traffic to and from the attacking nation. Let us assume that they all comply. What next?

Next are neutral nations. Their PCs are compromised, the botnets made of their PCs then launch attacks. This is where things will get complicated, so I’ll start using fictional names for these nations. We’ll call the Internet-rich nation the United States of Shamerica, or Shamerica for short, and the nation with its local networks separated from the world Shiran. Shamerica has its allies Shanada, Shengland, Shermany, Shrance, and The Shetherlands all cut off traffic from Shiran.

But many of those nations have outsourced their IT needs to nations such as Shindia, Shungary, Shulgaria, Shalaysia, and The Shech Republic. If Shiran attacks through those nations, what does Shamerica do if only half of those agree to cut off traffic from Shiran? Companies with outsourced IT in nations that don’t cut that traffic, like Shindia and Shalaysia, will be ruined if their access to those outsourcers is suddenly terminated – and that will be a victory for Shiran.

But if the traffic isn’t blocked, then that will also be a victory for Shiran when it results in yet another major cyberattack successfully getting through.

Meanwhile, firms in Shamerica are dealing with a higher and higher likelihood of cyberattack from Shiran’s indirect methods. At what point is the likelihood of attack high enough to justify their spending appropriately on security? And how much can appropriate security cost before those firms decide to disconnect entirely from the Internet and return to the days of paper ledgers and mail-order business? Would customers be receptive to such things, as they potentially promise less Big Data tracking of their lives and maybe even a lower likelihood of identity theft?

Drastic, I know, to suggest people returning to physical mail and magazines and buying goods in stores, but we have to ask at what point the certainty of a successful attack makes being connected to the Internet too high a risk relative to the costs of mitigation. Would the hypothetical nations of Shussia and Shina have to be involved as well, or can Shiran reach this point all on its own? How do we correctly calculate the risk of being connected to the Internet, or will connectivity always be a given until the Internet becomes so clogged with attack traffic as to be rendered useless?
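One crude way to frame that calculation is a simple expected-loss comparison. The probabilities and dollar figures in this sketch are placeholders chosen only to make the arithmetic visible, not estimates for any real firm:

# Sketch: crude annualized comparison of staying connected vs. unplugging.
# Every figure below is a placeholder, not an estimate for any real firm.
p_successful_attack = 0.30        # chance of a damaging breach this year
breach_cost = 8_000_000           # cleanup, downtime, legal, reputation
mitigation_cost = 1_200_000       # annual spend on "appropriate" security
p_with_mitigation = 0.10          # residual breach probability after spending
unplug_cost = 6_000_000           # lost revenue from going offline/paper

expected_loss_as_is = p_successful_attack * breach_cost
expected_loss_mitigated = mitigation_cost + p_with_mitigation * breach_cost

print(f"Do nothing:        ${expected_loss_as_is:,.0f} expected")
print(f"Spend on security: ${expected_loss_mitigated:,.0f} expected")
print(f"Unplug entirely:   ${unplug_cost:,.0f} certain")
# With these placeholders, spending beats doing nothing; unplugging only wins
# once the expected loss while connected exceeds the cost of going offline.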

Because, in a dark war, the only true equivalent of a bomb shelter is to unplug from the Internet. Any connectivity is making a bet that your defenses are better than the attacker’s weapons. Miscalculate, and you are damaged.