Category Archives: Security

Our Most Important Assets Are…

Frequently, I hear “our employees” as the closer for that sentence. Nice sentiment, but is it backed up by evidence? When we do a risk assessment, we consider our assets and what it would cost us if they were not available or had to be replaced.

I’ve seen firewalls and encryption and data loss prevention systems put in place around databases, source code, and trade secrets. I have yet to see a company that has proactively made similar protective efforts around its employees. Given the efforts some go to in order to hire those same employees in the first place, I find such a lack of protections ironic.

After all, if a company is willing to offer better benefits, higher pay, and better working conditions than another company in order to attract talented employees, it is clearly placing a value on those employees at the time of hiring. But that value seems to be discounted almost immediately through HR practices that limit bonuses and vacation in the first year of work, annual compensation rules that limit increases in pay, and management choices to restrict lateral moves within the company. These practices are endemic, even at companies that think they don’t have these problems.

So, the employee stays with the company for a while and then notices other firms dangling bigger and better opportunities. If a person asks for a raise, however, such requests are frequently met with denial or stalling tactics. The current employer basically encourages its employees to actively seek out better opportunities, secure them, and then come forward with an offer letter and a notice of departure. Only then do the negotiations start in earnest, in the hopes that a matching counter-offer is sufficient to retain the person who already made a decision to leave and found a place to go to.

If a person could actually go to a manager, talk about dissatisfaction with current conditions, and then walk out with those conditions addressed (up to and including an out-of-cycle pay increase) to the point where the person won’t bother to look for a better place to be, then, yes, that is a place where the greatest assets are the employees.

Otherwise, may we please ask that people no longer say “our greatest assets are our employees”? The greatest assets are the ones where investments are made to keep them from walking out the door.

What the SolarWinds Breach Teaches Us

First off, the Russian hacking of SolarWinds to get its cyber eyes and ears inside of sensitive US installations is not an act of war. It’s an extremely successful spy operation, not an attack meant to force the USA to do something against its will.

Next off, if not SolarWinds, then it would have been some other piece of software. The Russians were determined to compromise a tool that was commonly used, and that was the one they found a way in on. Had SolarWinds been too difficult to crack, then the Russians would have shifted efforts to an easier target. That’s how it goes in security.

So the lessons learned are stark and confronting:

  1. We can no longer take for granted that software publishers are presenting us with clean code. In my line of work, I’ve already seen other apps from software vendors with malware baked into them, but which are also whitelisted as permissible apps. SolarWinds is the biggest such vendor thus far, but there are others out there that contain evil in them. We have to put layers around our systems to ensure that they don’t start talking to endpoints that they have no business talking to, or that they don’t start chains of communication that eventually send sensitive data outside.
  2. The firewall is not enough. Neither is the IPS. Or the proxy server. The malware in SolarWinds included code to randomize the intervals used for sending data, and the data was sent to IP addresses in-country, so all those geolocation filters had no impact in this case. We need to look at internal communications and flag whenever a user account is being used to access a resource it really shouldn’t be accessing, like an account from HR trying to reach a payroll server (see the sketch after this list).
  3. Software development needs to reduce its speed and drive forward more safely than it is currently. I know how malware gets into some packages: a developer needs to meet a deadline, so instead of writing the code from scratch, a code snippet posted somewhere finds its way into the software. Well, that code snippet should have been looked at more carefully, because that’s what the malware developers put out there so that time-crunched in-house developers would grab it and use it and make the job of spreading malware that much easier.

    Malware can also get in through bad code that allows external hooks, but there’s nothing to compare with a rushed – or lazy – developer actually putting the malware into the app that’s going to be signed, sealed, and whitelisted at customer sites.
  4. That extended development cycle to give breathing space for in-house developers needs to be stretched further still to allow better penetration testing of the application, so that we can be sure that not only do we not have malware baked in, we also don’t have vulnerable code baked in, either.
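
To make point 2 concrete, here is a minimal sketch of that kind of internal flagging, assuming a simple role-to-resource mapping. The role names, resource names, and event fields here are all hypothetical; a real deployment would pull this from an identity system and a log pipeline, not a hard-coded dictionary:

```python
# Hypothetical mapping of roles to the resources they have business touching.
ROLE_ALLOWED_RESOURCES = {
    "hr": {"hr-portal", "benefits-db"},
    "payroll": {"payroll-server", "benefits-db"},
    "netops": {"syslog-server", "netmon"},
}

def flag_suspect_access(event):
    """Return True if the account's role has no business touching the resource."""
    allowed = ROLE_ALLOWED_RESOURCES.get(event["role"], set())
    return event["resource"] not in allowed

# Example: an HR account reaching for a payroll server should light up.
event = {"user": "jdoe", "role": "hr", "resource": "payroll-server"}
if flag_suspect_access(event):
    print(f"ALERT: {event['user']} ({event['role']}) accessed {event['resource']}")
```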

Those last two are what will start to eat into revenue and profits for development teams. But it’s something we must do in order to survive: constant focus on short-term gains is a guarantee of remaining insecure. We may need to take another look at how we do accounting so that we can have a financial system that allows us the room we need in order to be more secure from the outset. Because, right now, security is a cost, and current accounting practices give incentives to eliminate costs. We can’t afford to make profits that way.

Hell Hath No Fury Like an Admin Scorned

Take a good look at this guy, because he may be more devastating to your company than a major natural disaster. He is an admin, and he’s not happy about going to work every day.

A network admin from Citibank was recently sentenced to 21 months in prison and fined $77,000 for trashing his company’s core routers, taking down 90% of their network. Why did he do it? His manager got after him for poor performance.

I don’t know how the manager delivered his news, but it was enough to cause that admin to think he was about to be fired and that he wanted to take the whole company down to hell with him. Thing is, he could have done much worse.

What if he had decided to sell information about the network? What if he had started to exfiltrate data? What if he had set up a cron job to trash even more network devices after his two-week notice was over? And there could be worse scenarios than those… what can companies do about such threats?

It’s not like watching the admin will keep the admin from going berserk. This guy didn’t care about being watched. He admitted to it and frankly stated that he was getting them before they got him. His manager only reprimanded him – who knew the guy was going to do all that just for a reprimand? But, then, would the company have endured less damage if it had wrongfully terminated the admin, cut him a check for a settlement, and then walked him on out? And what about the other admins still there? Once they find out how things work, they could frown their way into a massive bonus, and we’re heading towards an unsustainable situation in which the IT staff works just long enough to get wrongfully terminated.

So what does a manager do with a poorly-performing employee who’s about to get bad news? Or with an amazingly good employee who, unbeknownst to anyone (including himself), is about 10 minutes away from an experience that will make him flip out? Maybe arranging a lateral transfer for the first guy while everyone changes admin passwords during the meeting… but the second guy… there was no warning. He just snapped.

Turns out, good managers don’t need warnings. Stephen Covey wrote about the emotional bank account, and IT talent needs a lot of deposits because the demands of the job result in a lot of withdrawals. A good manager is alongside her direct reports, and they know she’s fighting battles for them. That means a great deal to an employee. I know it’s meant a great deal to me. My manager doesn’t have to be my buddy, but if my manager stands up for me, I remember that.

Higher up the ladder, there needs to be a realization in the company that it needs to pay the talent what it is worth. I’ve known people who earned their CCIE, expected a significant bump in pay, and got told that company policy does not allow a pay increase of greater than 3% in a year. They leave the company, get paid 20% more to work somewhere else for a year or two, and then their former employer hires them back for 20% more than that. By that time, though, they’re used to following the money instead of growing roots to accrue benefits over time. By contrast, a 20% bump, or maybe even a 15% one, could have kept the employee there all along.

What are the savings? Not just the pay. The firm doesn’t have to go through the costs of training someone to do the job of the person who’s left. The firm retains the talent, the talent is there longer and now has a reason to try to hold on to those benefits, and there’s a sense of loyalty that has a chance to develop.

If an employee has a sense of loyalty, feels like compensation is commensurate with skills, and has a manager that fights real battles, that employee is better able to ride out the storms of the job and not snap without warning. If that manager has to encourage an employee to do better, maybe then he’ll try harder instead of trashing all the routers.

There may be no way to completely prevent these damaging outbursts from happening, but the best solutions for people’s problems aren’t technological. They’re other people, doing what’s right.

A Night at the Outsourcer

Driftwood: All right. It says the, uh, “The first part of the party of the first part shall be known in this contract as the first part of the party of the first part” – look, why should we quarrel about a thing like this? We’ll take it right out, eh?
Fiorello: Yeah, it’s a too long, anyhow. (They both tear off the tops of their contracts.) Now, what do we got left?
Driftwood: Well, I got about a foot and a half.

After talking with people from companies whose experiences with their outsourcing contracts can be best described as “disappointing”, I wonder if they didn’t have the equivalent of the Marx Brothers representing them in their contract negotiations. I’m not saying that the corporate lawyers were idiots, just that they may have been outclassed by the outsourcers’ lawyers. This is a specialized situation, after all.

Like the company doing the outsourcing, the outsourcer wants to maximize profits. Outsourcers are not charitable organizations, offering up low-cost business services to help the hapless firm with IT needs. They want to get paid, Jack! Some may want a long-term, quality relationship with a client, but there are plenty out there that want to sign a contract that, on the surface, looks like it will reduce costs, but that contains hidden standard business practices that will rake the clients over the coals.

One of the biggest gotchas in an outsourcing contract is the fact that the relationship between a company and its IT is no longer one of company to employee, but company to contractually provided service. That means the “one more thing” that managers like to ask for from their employees isn’t an automatic wish that will be granted. Did the contract authorize that one more thing? No? Well, that will cost extra, possibly a lot extra.

Another loss is the ability to say, “I know that’s what I wrote, but what I meant was…” as a preface to correcting a requested change. In-house staff can be more flexible and adapt to the refinement of the request. Outsourced staff? Well, it seems as though the staff were engaged to make a specific change, so there’s a charge for that, even though you decided to cancel the change in the middle of it. Now, the change you requested needs to be defined, submitted, and approved in order for us to arrange staff for the next change window…

There’s also the limit on the time-honored technique of troubleshooting the failed change and then making the troubleshooting part of the change. Consider a firewall change and then discovering that the vendor documentation left out a port needed for the application to work. In-house staff have no problem with adding that port and making things work. Outsourcers? If that change isn’t in writing, forget about it until it is. And, then, it may be a matter of rolling back the change and trying again, come the next change window.

Speaking of firewalls, that brings me to the “per line of code” charge. If the contract pays by the line of code, prepare for some bulky code if the contract does not explicitly state that lines of code must be consolidated whenever possible in order to be considered valid and, therefore, billable. Let me illustrate with an example.

My daughter is 14 and has zero experience with firewall rules. I asked her recently how many rules would be needed for two sources to speak to two destinations over five ports. She said five rules would be needed. I then gave a hint that the firewall help file said that ports could be grouped. Then, she proudly said, “one!”

While that’s the right answer for in-house IT staff, it’s the wrong answer for an outsourcer being paid by the line. 20 is the right answer in that case. It blew her mind when I told her how many different firms I’ve heard about that had 20 rules where one would do. As a teenager with a well-developed sense of justice, she was outraged. So long as contracts are signed that don’t specify when, how, and what to consolidate, she will continue to be outraged.
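
For the curious, the arithmetic is easy to verify. This little sketch (values hypothetical, matching the two-source, two-destination, five-port example above) shows how per-combination rules balloon against a single grouped rule:

```python
# Hypothetical sources, destinations, and ports for the example above.
sources = ["10.1.1.10", "10.1.1.11"]
destinations = ["192.168.5.20", "192.168.5.21"]
ports = [80, 443, 8080, 8443, 9000]

# Unconsolidated: one rule per (source, destination, port) combination.
expanded_rules = [(s, d, p) for s in sources for d in destinations for p in ports]
print(len(expanded_rules))  # 20 -- what per-line billing gives an incentive to write

# Consolidated: group the sources, destinations, and ports into objects
# and write a single rule referencing those groups.
consolidated_rules = [(set(sources), set(destinations), set(ports))]
print(len(consolidated_rules))  # 1 -- what in-house staff would write
```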

I didn’t have the heart to tell her about how some outsourcers contract to provide services like email, but the contract did not outline all the things we take for granted as part of email but which, technically, are not email. Shared calendars? Not email. Permissions for an admin assistant to open a boss’ Inbox? Not email. Spam filtering? Not email. Email is the mail server sending/receiving to other mail servers and allowing clients to access their own inboxes. Everything else is not email, according to the outsourcers’ interpretation of the contract. Email is just one example, and all the other assumptions made about all the other services add up with the above to create a situation in which the outsourcing costs significantly more than keeping the work in-house.

This can have a significant impact on security. Is the outsourcer obligated to upgrade devices for security patching? Is the outsourcer obligated to tune security devices to run optimally? Is the outsourcer required to not use code libraries with security vulnerabilities? If the contract does not specify, then there is zero obligation. Worse, if the contract is a NoOps affair in which the customer has zero visibility into devices or code, then the customer may never know which systems need which vulnerabilities mitigated. There may be a hurried, post-signing negotiation of a new section about getting read rights on the firm’s own devices and code… and that’s going to come at a cost.

Another security angle: who owns the intellectual property in the outsourcing arrangement? Don’t make an assumption, read that contract! If the outsourcer owns the architecture and design, your firm may be in for a rough ride should it ever desire to terminate the contract or let it expire without renewing it.

I’m not even considering the quality of work done by the outsourcer or the potential for insider threat – those can be equal concerns for some in-house staff. The key here is that the contract is harsh, literal, and legally binding. That means vague instructions can have disastrous results. Tell an outsourcer to “make a peanut butter and jelly sandwich,” and do not be surprised if the outsourcer rips open a bag of bread, smashes open the jars of peanut butter and jelly, mashes the masses of PB & J together, shoves the bread into that mass, and then pulls out the bread slices with a glob of peanut butter, jelly, glass, and plastic between them. He gave you what you specified: it’s not his fault that the instructions were vague.

There can be a place for outsourcing, particularly as a staffing solution for entry-level positions with high turnover. But every time I talk with someone from a place that either is currently in or is recovering from an outsourcing contract that went too far, I hear the horror stories. The outsourcers’ lawyers know what they’re doing, and the firm’s lawyers fail to realize how specific they have to be with the contract language to keep from looking like they may as well have been the Marx Brothers.

Driftwood (offering his pen to sign the contract): Now just, uh, just you put your name right down there and then the deal is, uh, legal.
Fiorello: I forgot to tell you. I can’t write.
Driftwood: Well, that’s all right, there’s no ink in the pen anyhow. But listen, it’s a contract, isn’t it?
Fiorello: Oh sure.
Driftwood: We got a contract…
Fiorello: You bet.

Security Policy RIPPED FROM TODAY’S HEADLINES!!!

I had a very sad friend. His company bought all kinds of really cool stuff for security monitoring, detection, and response and told him to point it all at the firm’s offices in the Russian Federation. Because Russia is loaded with hackers, right? That’s where they are, right?

Well, he’d been running the pilot for a week and had nothing to show for it. He knows that the tools have a value, and that his firm would benefit greatly from their widespread deployment, but he’s worried that, because he didn’t find no hackers nowhere in the Hackerland Federation, his executives are going to think that these tools are useless and they won’t purchase them.

So I asked him, “Do you have any guidance from above on what to look for?”

“Hackers. They want me to look for hackers.”

“Right. But did they give you a software whitelist, so that if a process was running that wasn’t on the list, you could report on it?”

“No. No whitelist.”

“What about a blacklist? Forbidden software? It won’t have everything on it, but it’s at least a start.”

“Yes, I have a blacklist.”

“Great! What’s on it?”

“Hacker tools.”

“OK, and what are listed as hacker tools?”

My friend sighed the sigh of a thousand years of angst. “That’s all it says. Hacker tools. I asked for clarification and they said I was the security guy, make a list.”

“Well, what’s on your list?”

“I went to Wikipedia and found some names of programs there. So I put them on the list.”

“And did you find any?”

“Some guys are running the Opera browser, which has a native torrenting client. I figured that was hacker enough.”

Well, security fans, that’s something. We got us a proof of concept: we can find active processes. I described this to my friend, and hoped that he could see the sun peeking around the clouds. But it was of no help.

“They’re not going to spend millions on products that will tell them we’re running Opera on a handful of boxes!”

He had a point, there. Who cares about Opera? That’s not a hacker tool as featured on the hit teevee show with hackers on it. And, to be honest, the Russian offices were pretty much sales staff and a minor production site. The big stashes of intellectual property and major production sites were in the home office, in Metropolis, USA.

So I asked, “Any chance you could point all that stuff at the head office?”

“What do you mean?”

“Well, it’s the Willie Sutton principle.”

“Who was Willie Sutton?”

I smiled. “Willie Sutton was a famous bank robber. His principle was to always rob banks, because that’s where the money was. Still is, for the most part. Russia in your firm is kind of like an ATM at a convenience store. There’s some cash in it, but the big haul is at the main office. Point your gear where the money is – or intellectual property – and see if you don’t get a lot more flashing lights.”

My friend liked that. He also liked the idea of getting a software whitelist so he’d know what was good and be able to flag the rest as suspect. He liked the idea of asking the execs if they had any guidance on what information was most valuable, so that he could really take a hard look at how that was accessed – and who was accessing it.
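
For what it’s worth, the mechanics of that kind of whitelist check are not exotic. Here’s a minimal sketch, assuming a hypothetical approved-software list and using the psutil library to enumerate running processes; a real whitelist would come from the firm’s software inventory, not from a blog post:

```python
import psutil  # third-party: pip install psutil

# Hypothetical approved list -- in practice, built from the firm's inventory.
APPROVED = {"explorer.exe", "chrome.exe", "outlook.exe", "excel.exe"}

def suspect_processes():
    """Yield (pid, name) for every running process not on the approved list."""
    for proc in psutil.process_iter(attrs=["pid", "name"]):
        name = (proc.info["name"] or "").lower()
        if name and name not in APPROVED:
            yield proc.info["pid"], name

# Everything not explicitly good gets flagged as suspect for review.
for pid, name in suspect_processes():
    print(f"Not on whitelist: pid={pid} name={name}")
```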

And maybe there were tons of hackers in Russia, but they weren’t hacking anything actually in Russia. And maybe said hackers weren’t doing anything that was hacking-as-seen-on-television. Maybe they were copying files that they had legitimate access to… just logging on, opening spreadsheets, and then doing “Save As…” to a USB drive. Or sending it to a Gmail account. Or loading it to a cloud share…

The moral of the story is: If your security policy is driven by the popular media, you don’t have a security policy.

The Fallacies of Network Security

Like the Fallacies of Distributed Computing, these are assumptions made about security by those who use the network. And, like those other fallacies, these assumptions are made at the peril of both project and productivity.

1. The network can be made completely secure.

2. It hasn’t been a problem before.

3. Monitoring is overkill.

4. Syslog information can be easily reviewed.

5. Alerts are sufficient warning of malicious behavior.

6. Our competition is honest.

7. Our users will not make mistakes that will jeopardize or breach security.

8. A perimeter is sufficient.

9. I don’t need security because nobody would want to hack me.

10. Time correlation amongst devices is not that important.

11. If nobody knows about a vulnerability, it’s not a vulnerability.

Effects of the Fallacies
1. Ignorance of network security leads to poor risk assessment.
2. Lack of monitoring, logging, and correlation hampers or prevents forensic investigation.
3. Failure to view competitors and users with some degree of suspicion will lead to vulnerabilities.
4. Insufficiently deep security measures will allow minimally sophisticated penetrations to succeed in ongoing and undetected criminal activity.

I wrote this list for the purpose of informing, educating, and aiding any non-security person that reads it. Failing that, it serves as something that I can fall back on when commiserating with other security guys.

A Grim Observation

This is a picture of the SM-70 anti-personnel mine, devised by East Germany to kill people scaling border fences to escape to the West. Its purpose was not to reduce the number of escape attempts, but to reduce the number of successful attempts. Over time, it did reduce the number of successful escape attempts, but it did not bring the total number of attempts to zero, nor did it bring the number of successful attempts to zero.

I bring this up to show that, even with the extremes the DDR was willing to go to in order to prevent population exfiltration, escape remained an ongoing issue through the entire history of that nation. They killed violators of their policy, the killings were well-known and publicized, and yet the population continued to try to move west. This has implications for corporate security.

Namely, corporations can’t kill off violators of their policies, so those violators will continue to violate. The reward, whatever it may be for them, comes with very little relative risk. Criminal penalties? Those are only for those who get caught by companies not afraid of the negative exposure. In most of the worst-case scenarios, a violation means a job loss. Considering that a big chunk of people who breach security are already planning to leave that firm, job loss is a threat only insofar as it interferes with the timing of leaving the firm.

While the leaders of the DDR could take a long-term approach to their perimeter issues, most executives answer to a board that wants to see results this quarter, or within the first few quarters after a system goes live. Security is an investment, right? Well, where is the return on this investment?

Security is not playing a hand of poker. It is a game of chess. It is a game of chess in which one must accept the loss of pawns, even knights, bishops, rooks, and maybe even a sacrifice of the queen, in order to attain the ultimate goal. Sadly, chess is not a game that is conducive to quarterly results. Just as the person attacking IT systems may spend months doing reconnaissance before he acts, the person defending IT systems must spend months developing baselines of normal activity and acquiring information on what traffic is legitimate and what is not. The boardroom is not a good place to drive security policy.

But, quite often, the security policy does come from the boardroom, complete with insistence that the hackers be found as soon as the security system is in place. Once in place, anything that gets past the security system is seen as a failure of the system. There’s no concept of how many violations got through without the system in place or how many have been deterred by it, just that security needs to work now, and failure is not an option… and other platitudes that make good motivational posters.

That’s simply the wrong mentality about security. Going back to the DDR – a lethal system with a long-term perspective and a massive intelligence network behind it – we see a highly effective system that was nevertheless defeated by those both determined enough and lucky enough. The leaders of the DDR did not scrap it until the DDR was pretty much no longer a going concern. With less ruthless security in place, a lack of long-term perspective, and a failure to orchestrate all available intelligence sources, is it any wonder that IT security is such a problem for companies to get their arms around?

And if companies want to step up their potential penalties to include criminal charges, they cannot do so without first developing a proper concept of security. They will need to train employees in forensic procedures. They will need to get legal and HR involved more closely with IT – and to be more up-to-date on both the technology and the legal environment surrounding it. There will have to be decisions about what breaches must be allowed so as to collect proper evidence, and so on and so forth. We’re talking about the development of a corporate intelligence community.

And, even then, that’s no guarantee. But it’s a start. Most companies’ security policy is as effective as a substitute teacher ignoring all the students in the class. Some step up their game to that of a substitute screaming at all the students in the class. True security needs to have consequences, investigative procedures, and collections of data – and, even then, there will always be breaches. Security will not eliminate the problems, only reduce them.

Wet Economics and Digital Security

A student once unwittingly asked a physicist, “Why did the chicken cross the road?” Immediately, the physicist retreated to his office to work on the problem.

Some days later, the physicist emerged and told the student, “I have a model that explains the chicken’s actions, but it assumes the road is frictionless and that the chicken is both homogeneous and spherical…”

In the last 50 years, economics has increasingly tied its models to frictionless decisions and homogeneous, spherical employees. These employees are as interchangeable with each other as are the widgets a company mass-produces. They show up to work at a certain wage and, since perfect competition in the labor market makes these models work, there is an assumption that the cost of labor is at a point where the market clears – no need to offer any more or less than that going wage rate.

As the world economy moved from regionalization to globalization and digital technologies made employees’ locations no longer tied to where a firm was legally chartered, the idea that costly labor in one market could be replaced with cheaper labor in another market fit well with the notion that employees were homogeneous, spherical physical bodies making frictionless decisions.

The biggest problem with the economic models that have dominated economic thought over the last 50 years is that, while they are great for predicting normal ups and downs in periods of relative calm, they are useless in times of massive upheaval. Put another way, they are like weather forecasting models that see category 5 hurricanes as “an increased chance of rain” or massive blizzards as “snowfall predicted for the weekend”. These models go blind in such unanticipated crises and are particularly useless for crises precipitated out of massive fraud and abuse. We saw the flaws of the models first in 1998, then in 2001, and again in 2008. We may see another round of flaw-spotting very soon, what with unease afflicting a number of major banks in Germany and Italy…

But the second-biggest problem with the economic models is less obvious, and that’s because it involves the one thing everyone seems to leave out of their thought processes: security. Because the employees are not interchangeable spheres and their decisions frequently involve friction, we can see security issues arising out of our reliance on those economic models.

The first is that employees are not widgets to be had at the lowest price: changing out a skilled veteran with many years at a firm even for someone with the same amount of experience from another firm involves a loss of institutional knowledge that the veteran had. The new person will simply not know many of those lessons until they are learned the hard way. In security, that can be costly, if not fatal.

It’s even worse if the new employee has significantly less experience than the veteran. I shudder whenever I hear about “voluntary early retirement” because it means all those people with many, many years at the firm are about to be replaced by people of vastly less experience. Because that experience is not quantified in the models used, it has no value in the accounting calculations that determined cutting the payroll to be the best path to profitability.

Then there’s the matter of the new employees – especially if they’re outsourced – not having initiative to fix things proactively. That lack of initiative, in fact, may be specified in the support contract. Both parties may have their reasons for not wanting to see initiative in third-party contractors, but the end result is less flexibility in dealing with a fluid security issue.

Remember the story of the Little Dutch Boy who spotted a leak in the dike and decided to stop it with his finger, then and there? What sort of catastrophe would have resulted if the Little Dutch Boy had been contracted by the dike owners to monitor the dike, to fill out a trouble ticket if he spotted a breach, for the ticket to go to an incident manager for review, then on to a support queue with a 4-hour SLA to contact the stakeholders, so that they could perform an incident review and assess the potential impact before assigning it the correct priority? There would be a good chance that the incident would resolve itself negatively by the time it was graded as a severity one incident and assigned a major incident management team to set up a call bridge.

Security needs flexibility in order to succeed, and that kind of flexibility has to go along with the ability to exercise initiative. Full-time employees, costly though they may be, are more likely to be authorized to exercise initiative – and, if they’re experienced, more likely to use it.

On the matter of those decisions with friction… at any time, an employee can make an assessment of his or her working conditions and decide that they are no longer optimal. Most employees will then initiate either a job search process or a program of heavy substance abuse to dull the pain brought on by poor life and career choices, but others will choose different paths. It is those others that will create the security issues.

These others may decide that the best thing to do in their particular position is to get even with their employer for having created an undesirable situation. In the film “Office Space”, three of the main characters chose that path and created a significant illegal diversion of funds via their access to financial system code. They also stole and vandalized a laser printer, but that had less impact on their employer than the diversion of funds. In the same film, a fourth employee chose to simply burn down the place of business. Part of the popularity of the film stemmed from the way those acts of vengeance, in particular the vandalism of the printer and the sabotage of the financial system, rang true with the people in the audience.

We all knew an employer that, in our minds, deserved something like what happened in the movie. When I read recently of a network administrator deleting configurations from his firm’s core routers and then texting all his former co-workers that he had struck a blow on their behalf, I saw that such sentiments were alive and seething in more than one mind. As options for future employment in a region diminish as the jobs that once sustained that region go elsewhere, that seething resentment will only increase, resulting in ever-bolder acts of defiance, even if they result in the self-destruction of the actors initiating them.

But then there are the others who take even more thought about their actions and see a ray of hope saving them from self-destruction in the form of criminal activity. Some sell their exfiltrated data for money; others post it anonymously on WikiLeaks. The first seeks to act as a leech off of his employer; the second has a motive to make the truth be known. Both actually prefer that the employer’s computer systems be working optimally, so as to facilitate their data exfiltration.

In economic models, this should not be happening. People should be acting rationally and either accept lower wages or retrain for other jobs. In real life, people don’t act rationally, especially in times of high stress. So, what can firms do about this in order to improve security?

The answer lies in the pages of Machiavelli’s “The Prince”. Give them a stake in the enterprise that requires their loyalty in order to succeed, and then honor that loyalty, even if it means payroll costs don’t go down. It won’t eliminate criminals 100%, but it will go a long way not only towards limiting the number of criminals in one’s firm, but also towards maximizing the incentive for loyal employees to notice, report, and react to suspect behaviors. If a firm was once again a place where people could be comfortable with their job prospects for the years ahead, it would be less of a target in the minds of the unhomogeneous, unspherical employees whose decisions always come with friction. It would be a firm that would have better retention of institutional knowledge and expertise in dealing with incidents.

Now, will boards and C-level executives see things this way? Not likely, given that the economic models of the past 50 years dominate their thinking. Somehow, the word has to get out that econometric models are not the path to security. Security is not a thing, but a system of behaviors. If we want more security, then we have to address the behaviors of security and give employees a reason to embrace them.

The Right to Know and Institutionalized Ignorance

I take the title for this from the Yes, Minister episode in which the bureaucrat, Sir Humphrey Appleby, saves the political career of Jim Hacker by not providing him with full information about an issue. In a nutshell, there are some things that are better for the people at the top to not know. Appleby explains that there is a certain dignity in ignorance, almost an innocence in saying with full honesty, “I did not know that.”

Now, consider your own firm and its security. What if there’s a conduit from the Internet to the DMZ, and from there on to the entire corporate network, including areas segregated for business-critical functions? And what if that conduit has been there for over 10 years? And what if your firm is due for a security audit or in the process of having a security audit? Does anyone in a high position – or any of the auditors, for that matter – personally benefit from this huge flaw being made known?

It’s highly and hugely embarrassing. It’s been there for 10 years, and the network people have known about it all along, but have grown tired of being ignored by the systems people who refuse to re-architect their system with security in mind, since that would significantly impact production. If the people on top and the auditing firm had to deal with this now, I could see more than one person getting fired or put on a remediation plan because of it.

But if nobody officially knows about it, nobody has to officially do anything about it. The audit completes successfully and the auditors retain their contract to provide auditing services. The managers and executives can nod their heads that, yes, they’ve got their arms around this security thing and that things are looking pretty good on that front.

Yes, the execs and auditors have both a right to know and a need to know about that huge problem, but neither has a desire to have such highly embarrassing information made known. There’s a sort of institutionalized ignorance about the situation to the point where, if there was a breach via that conduit, an executive could legitimately protest at the engineers and developers, “Why didn’t you tell anyone about it?” Never mind that they did, but got ignored, tabled, distracted, re-prioritized, or otherwise sidetracked.

No, if something had been done right away, there’d be no problem. But this has festered and become toxic. It is best for the careers of those closest to it to ignore it. If it does result in a breach, then those at the top have to throw as much blame around as possible so that nobody will try to assign any blame to them, and that blame flows downhill to the very people that tried to inform about the issue to begin with.

In the episode, Appleby explains the difference between controversial and courageous:

“Controversial” only means “this will lose you votes”. “Courageous” means “this will lose you the election”!

Similar parallels apply to business. This is why I roll my eyes a little every time I hear an exhortation to innovate and think outside the box. Trust me, if I’m not following a specified process to innovate or doing a proper SOP for thinking outside the box, I’m doing something either controversial or courageous, with associated negative consequences.

It stands to reason that if I were to email a C-level person directly, copy all the management chain between me and him, and then describe a situation as bad as the above, I’d be doing something highly courageous. If I do less than that, then institutionalized ignorance can keep anyone with a right to know the bad news from actually having to hear it, thereby maintaining their dignity in ignorance.

Apart from being a cautionary tale about not developing a too-cozy relationship with one’s auditors, this is also a very real argument for why a culture of permitting mistakes has to be in place in order for security to have a chance. Even monumental mistakes such as this 10-year marvel need to be allowed in order for the people responsible for fixing them to actually do something about them, other than sweeping them under the carpet and pretending that all is well as they desperately seek employment elsewhere, before the situation blows up.

We’ve got the need to know and the right to know… but are we strong enough to know even when we lack the desire to know?

Manual Override

As the Himynamistan diplomatic convoy made its way to the intersection, the Dassom agent noted their passing as he sat slumped and fetid, like countless other bums on the streets of San Francisco. The convoy made its halt at the stop sign, autonomous brakes holding firm against the gravity of the downward slope.

As the convoy yielded right-of-way to the cross traffic, the Dassom agent, nameless in the shadows of the alleys of dumpsters between glittering financial monuments, lifted a small infrared controller and pointed it at the 18-wheeler loaded with pig iron that was rolling along just behind the convoy.

The Dassom agent pressed a button on the IR device and shot a signal to the 18-wheeler.

You know, how that big truck got to the top of the hill with all that metal in it was a testament to the builders of the engine in that beast of a machine. Well done, lads! Such a shame that the engineering and craftsmanship were going to be wrecked soon after the truck’s driving software interpreted the IR signal as a manual emergency override to disengage all braking systems and to accelerate.

The Dassom agent did not turn to one side or the other, but kept the metallic collision between the truck and the Himynamistan diplomats in their unmoving vehicles to his back. Most of the wreckage went forward, towards the cross street traffic, but a few small ricochets bounced off the back of the agent’s hoodie.