Monthly Archives: July 2020

Hell Hath No Fury Like an Admin Scorned

Take a good look at this guy, because he may be more devastating to your company than a major natural disaster. He is an admin, and he’s not happy about going to work every day.

A network admin from Citibank was recently sentenced to 21 months in prison and fined $77,000 for trashing his company’s core routers, taking down 90% of its network. Why did he do it? His manager got after him about poor performance.

I don’t know how the manager delivered the news, but it was enough to make that admin think he was about to be fired, and that he wanted to take the whole company down to hell with him. Thing is, he could have done much worse.

What if he had decided to sell information about the network? What if he had started to exfiltrate data? What if he had set up a cron job to trash even more network devices after his two-week notice was over? And there could be worse scenarios than those… what can companies do about such threats?

It’s not like watching the admin will keep the admin from going berserk. This guy didn’t care about being watched. He admitted to it and frankly stated that he was getting them before they got him. His manager only reprimanded him – who knew the guy was going to do all that over a reprimand? But, then, would the company have endured less damage if it had wrongfully terminated the admin, cut him a check for a settlement, and then walked him out? And what about the other admins still there? Once they find out how things work, they could frown their way into a massive bonus, and we’re heading towards an unsustainable situation in which the IT staff works just long enough to get wrongfully terminated.

So what does a manager do with a poorly-performing employee who’s about to get bad news? Or an amazingly good employee who nobody (including himself) knows is about 10 minutes away from an experience that will make him flip out? Maybe arranging a lateral transfer for the first guy while everyone changes admin passwords during the meeting… but the second guy… there was no warning. He just snapped.

Turns out, good managers don’t need warnings. Stephen Covey wrote about the emotional bank account, and IT talent needs a lot of deposits because the demands of the job result in a lot of withdrawals. A good manager is alongside her direct reports, and they know she’s fighting battles for them. That means a great deal to an employee. I know it’s meant a great deal to me. My manager doesn’t have to be my buddy, but if my manager stands up for me, I remember that.

Higher up the ladder, the company needs to realize that it must pay the talent what it is worth. I’ve known people who earned their CCIE, expected a significant bump in pay, and got told that company policy does not allow a pay increase of greater than 3% in a year. They leave the company, get paid 20% more to work somewhere else for a year or two, and then their former employer hires them back for 20% more than that. By that time, though, they’re used to following the money instead of growing roots to earn benefits over time. By contrast, a 20% bump – or even 15% – might have kept the employee there.

What are the savings? Not just the pay. The firm doesn’t have to go through the costs of training someone to do the job of the person who’s left. The firm retains the talent, the talent is there longer and now has a reason to try to hold on to those benefits, and there’s a sense of loyalty that has a chance to develop.

If an employee has a sense of loyalty, feels like compensation is commensurate with skills, and has a manager that fights real battles, that employee is better able to ride out the storms of the job and not snap without warning. If that manager has to encourage an employee to do better, maybe then he’ll try harder instead of trashing all the routers.

There may be no way to completely prevent these damaging outbursts from happening, but the best solutions for people’s problems aren’t technological. They’re other people, doing what’s right.

A Night at the Outsourcer

Driftwood: All right. It says the, uh, “The first part of the party of the first part shall be known in this contract as the first part of the party of the first part shall be known in this contract” – look, why should we quarrel about a thing like this? We’ll take it right out, eh?
Fiorello: Yeah, it’s a too long, anyhow. (They both tear off the tops of their contracts.) Now, what do we got left?
Driftwood: Well, I got about a foot and a half.

After talking with people from companies whose experiences with their outsourcing contracts can best be described as “disappointing”, I wonder if they didn’t have the equivalent of the Marx Brothers representing them in their contract negotiations. I’m not saying that the corporate lawyers were idiots, just that they may have been outclassed by the outsourcers’ lawyers. This is a specialized situation, after all.

Like the company doing the outsourcing, the outsourcer wants to maximize profits. Outsourcers are not charitable organizations, offering up low-cost business services to help the hapless firm with IT needs. They want to get paid, Jack! Some may want a long-term, quality relationship with a client, but there are plenty out there that want to sign a contract that, on the surface, looks like it will reduce costs, but that contains hidden standard business practices that will rake the client over the coals.

One of the biggest gotchas in an outsourcing contract is the fact that the relationship between a company and its IT is no longer one of company to employee, but company to contractually provided service. That means the “one more thing” that managers like to ask for from their employees isn’t an automatic wish that will be granted. Did the contract authorize that one more thing? No? Well, that will cost extra, possibly a lot extra.

Another loss is the ability to say, “I know that’s what I wrote, but what I meant was…” as a preface to correcting a requested change. In-house staff can be more flexible and adapt to the refinement of the request. Outsourced staff? Well, it seems as though the staff were engaged to make a specific change, so there’s a charge for that, even though you decided to cancel the change in the middle of it. Now, the change you requested needs to be defined, submitted, and approved in order for us to arrange staff for the next change window…

There’s also the limit on the time-honored technique of troubleshooting the failed change and then making the troubleshooting part of the change. Consider a firewall change and then discovering that the vendor documentation left out a port needed for the application to work. In-house staff have no problem with adding that port and making things work. Outsourcers? If that change isn’t in writing, forget about it until it is. And, then, it may be a matter of rolling back the change and trying again, come the next change window.

Speaking of firewalls, that brings me to the “per line of code” charge. If the contract pays by the line of code, prepare for some bulky code if the contract does not explicitly state that lines of code must be consolidated whenever possible in order to be considered valid and, therefore, billable. Let me illustrate with an example.

My daughter is 14 and has zero experience with firewall rules. I asked her recently how many rules would be needed for two sources to speak to two destinations over five ports. She said five rules would be needed. I then gave a hint that the firewall help file said that ports could be grouped. Then, she proudly said, “one!”

While that’s the right answer for in-house IT staff, it’s the wrong answer for an outsourcer being paid by the line. 20 is the right answer in that case. It blew her mind when I told her how many different firms I’ve heard about that had 20 rules where one would do. As a teenager with a well-developed sense of justice, she was outraged. So long as contracts are signed that don’t specify when, how, and what to consolidate, she will continue to be outraged.
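Her arithmetic can be sketched in a few lines of Python. This is a hypothetical rule-count model for illustration only, not any particular firewall vendor’s configuration syntax:

```python
def rule_count(n_sources, n_destinations, n_ports, grouping=False):
    """Rules needed to permit every source to every destination on every port."""
    if grouping:
        # With object groups, one rule references a source group,
        # a destination group, and a port group.
        return 1
    # Without groups: one rule per (source, destination, port) combination.
    return n_sources * n_destinations * n_ports

print(rule_count(2, 2, 5))                 # expanded: 20 billable lines
print(rule_count(2, 2, 5, grouping=True))  # consolidated: 1 line
```

The per-line billing incentive is exactly the gap between those two return values.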

I didn’t have the heart to tell her about how some outsourcers contract to provide services like email, where the contract does not outline all the things we take for granted as part of email but which, technically, are not email. Shared calendars? Not email. Permissions for an admin assistant to open a boss’ Inbox? Not email. Spam filtering? Not email. Email is the mail server sending/receiving to other mail servers and allowing clients to access their own inboxes. Everything else is not email, according to the outsourcers’ interpretation of the contract. Email is just one example, and all the other assumptions made about all the other services add up to a situation in which outsourcing costs significantly more than keeping the work in-house.

This can have a significant impact on security. Is the outsourcer obligated to upgrade devices for security patching? Is the outsourcer obligated to tune security devices to run optimally? Is the outsourcer required to avoid code libraries with security vulnerabilities? If the contract does not specify, then there is zero obligation. Worse, if the contract is a NoOps affair in which the customer has zero visibility into devices or code, then the customer may never know which systems need which vulnerabilities mitigated. There may be a hurried, post-signing negotiation of a new section about getting read rights on the firm’s own devices and code… and that’s going to come at a cost.

Another security angle: who owns the intellectual property in the outsourcing arrangement? Don’t make an assumption, read that contract! If the outsourcer owns the architecture and design, your firm may be in for a rough ride should it ever desire to terminate the contract or let it expire without renewing it.

I’m not even considering the quality of work done by the outsourcer or the potential for insider threat – those can be equal concerns for some in-house staff. The key here is that the contract is harsh, literal, and legally binding. That means vague instructions can have disastrous results. Tell an outsourcer to “make a peanut butter and jelly sandwich,” and do not be surprised if the outsourcer rips open a bag of bread, smashes open the jars of peanut butter and jelly, mashes the masses of PB & J together, shoves the bread into that mass, and then pulls out the bread slices with a glob of peanut butter, jelly, glass, and plastic between them. He gave you what you specified: it’s not his fault that the instructions were vague.

There can be a place for outsourcing, particularly as a staffing solution for entry-level positions with high turnover. But every time I talk with someone from a place that either is currently in or is recovering from an outsourcing contract that went too far, I hear the horror stories. The outsourcers’ lawyers know what they’re doing, and the firms’ lawyers fail to realize how specific they have to be with the contract language to keep from looking like they may as well have been the Marx Brothers.

Driftwood (offering his pen to sign the contract): Now just, uh, just you put your name right down there and then the deal is, uh, legal.
Fiorello: I forgot to tell you. I can’t write.
Driftwood: Well, that’s all right, there’s no ink in the pen anyhow. But listen, it’s a contract, isn’t it?
Fiorello: Oh sure.
Driftwood: We got a contract…
Fiorello: You bet.


I had a very sad friend. His company bought all kinds of really cool stuff for security monitoring, detection, and response and told him to point it all at the firm’s offices in the Russian Federation. Because Russia is loaded with hackers, right? That’s where they are, right?

Well, he’d been running the pilot for a week and had nothing to show for it. He knows that the tools have a value, and that his firm would benefit greatly from their widespread deployment, but he’s worried that, because he didn’t find no hackers nowhere in the Hackerland Federation, his executives are going to think that these tools are useless and they won’t purchase them.

So I asked him, “Do you have any guidance from above on what to look for?”

“Hackers. They want me to look for hackers.”

“Right. But did they give you a software whitelist, so that if a process was running that wasn’t on the list, you could report on it?”

“No. No whitelist.”

“What about a blacklist? Forbidden software? It won’t have everything on it, but it’s at least a start.”

“Yes, I have a blacklist.”

“Great! What’s on it?”

“Hacker tools.”

“OK, and what are listed as hacker tools?”

My friend sighed the sigh of a thousand years of angst. “That’s all it says. Hacker tools. I asked for clarification and they said I was the security guy, make a list.”

“Well, what’s on your list?”

“I went to Wikipedia and found some names of programs there. So I put them on the list.”

“And did you find any?”

“Some guys are running the Opera browser, which has a native torrenting client. I figured that was hacker enough.”

Well, security fans, that’s something. We got us a proof of concept: we can find active processes. I described this to my friend, and hoped that he could see the sun peeking around the clouds. But it was of no help.

“They’re not going to spend millions on products that will tell them we’re running Opera on a handful of boxes!”

He had a point, there. Who cares about Opera? That’s not a hacker tool as featured on the hit teevee show with hackers on it. And, to be honest, the Russian offices were pretty much sales staff and a minor production site. The big stashes of intellectual property and major production sites were in the home office, in Metropolis, USA.

So I asked, “Any chance you could point all that stuff at the head office?”

“What do you mean?”

“Well, it’s the Willie Sutton principle.”

“Who was Willie Sutton?”

I smiled. “Willie Sutton was a famous bank robber. His principle was to always rob banks, because that’s where the money was. Still is, for the most part. Russia in your firm is kind of like an ATM at a convenience store. There’s some cash in it, but the big haul is at the main office. Point your gear where the money is – or intellectual property – and see if you don’t get a lot more flashing lights.”

My friend liked that. He also liked the idea of getting a software whitelist so he’d know what was good and be able to flag the rest as suspect. He liked the idea of asking the execs if they had any guidance on what information was most valuable, so that he could really take a hard look at how that was accessed – and who was accessing it.
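The whitelist idea he liked is simple enough to sketch as a set difference. The process names below are made up for illustration; a real deployment would pull the running-process list from an endpoint agent and the approved list from a software inventory:

```python
# Approved-software inventory (illustrative names only).
approved = {"explorer.exe", "outlook.exe", "excel.exe", "chrome.exe"}

# Process names observed on an endpoint (also illustrative).
observed = ["explorer.exe", "outlook.exe", "opera.exe", "utorrent.exe"]

# Anything running that isn't on the approved list gets flagged as suspect.
suspect = sorted(set(observed) - approved)

for name in suspect:
    print(f"ALERT: unapproved process running: {name}")
```

The point isn’t the three lines of code, it’s that the whitelist itself is the hard part – somebody has to own and maintain that `approved` set.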

And maybe there were tons of hackers in Russia, but they weren’t hacking anything actually in Russia. And maybe said hackers weren’t doing anything that was hacking-as-seen-on-television. Maybe they were copying files that they had legitimate access to… just logging on, opening spreadsheets, and then doing “Save As…” to a USB drive. Or sending it to a gmail account. Or loading it to a cloud share…

The moral of the story is: If your security policy is driven by the popular media, you don’t have a security policy.

The Fallacies of Network Security

Like the Fallacies of Distributed Computing, these are assumptions made about security by those that use the network. And, like those other fallacies, these assumptions are made at the peril of both project and productivity.

1. The network can be made completely secure.

2. It hasn’t been a problem before.

3. Monitoring is overkill.

4. Syslog information can be easily reviewed.

5. Alerts are sufficient warning of malicious behavior.

6. Our competition is honest.

7. Our users will not make mistakes that will jeopardize or breach security.

8. A perimeter is sufficient.

9. I don’t need security because nobody would want to hack me.

10. Time correlation amongst devices is not that important.

11. If nobody knows about a vulnerability, it’s not a vulnerability.

Effects of the Fallacies
1. Ignorance of network security leads to poor risk assessment.
2. Lack of monitoring, logging, and correlation hampers or prevents forensic investigation.
3. Failure to view competitors and users with some degree of suspicion will lead to vulnerabilities.
4. Insufficiently deep security measures will allow minimally sophisticated penetrations to succeed in ongoing and undetected criminal activity.
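Fallacy 10 and effect 2 go together, and a short sketch shows why (the timestamps and events here are invented): a device whose clock drifts even 90 seconds makes its events sort after the events they actually preceded, corrupting any timeline an investigator builds from merged logs.

```python
from datetime import datetime, timedelta

# The firewall's connection at 12:00:02 actually preceded the server's
# login at 12:00:05 -- but the firewall's clock runs 90 seconds fast.
drift = timedelta(seconds=90)

server_log = [(datetime(2020, 7, 1, 12, 0, 5), "server", "login accepted")]
firewall_log = [(datetime(2020, 7, 1, 12, 0, 2) + drift, "firewall", "inbound connection")]

# Merging by recorded timestamp puts the cause AFTER its effect.
merged = sorted(server_log + firewall_log)
for ts, device, event in merged:
    print(ts, device, event)
```

Without NTP discipline across devices, every correlated timeline carries this kind of silent error.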

I wrote this list for the purpose of informing, educating, and aiding any non-security person that reads it. Failing that, it serves as something that I can fall back on when commiserating with other security guys.

A Grim Observation

This is a picture of the SM-70 anti-personnel mine, devised by East Germany to kill people scaling border fences to escape to the West. Its purpose was not to reduce the number of escape attempts, but to reduce the number of successful attempts. Over time, it did reduce the number of successful escape attempts, but it did not bring the total number of attempts to zero, nor did it bring the number of successful attempts to zero.

I bring this up to show that, even with the extremes the DDR was willing to go to in order to prevent population exfiltration, it was an ongoing issue through the entire history of that nation. They killed violators of their policy, the killings were well-known and publicized, and yet the population continued to try to move west. This has implications for corporate security.

Namely, corporations can’t kill off violators of their policies, so those violators will continue to violate. The reward, whatever it may be for them, carries very little relative risk. Criminal penalties? Those apply only to violators caught by companies unafraid of the negative exposure. In most worst-case scenarios, the penalty for a violation is job loss. Considering that a big chunk of the people who breach security are already planning to leave the firm, job loss is a threat only insofar as it interferes with the timing of their departure.

While the leaders of the DDR could take a long-term approach to their perimeter issues, most executives answer to a board that wants to see results this quarter, or within the first few quarters after a system goes live. Security is an investment, right? Well, where is the return on this investment?

Security is not playing a hand of poker. It is a game of chess. It is a game of chess in which one must accept the loss of pawns, even knights, bishops, rooks, and maybe even a sacrifice of the queen, in order to attain the ultimate goal. Sadly, chess is not a game that is conducive to quarterly results. Just as the person attacking IT systems may spend months doing reconnaissance before he acts, the person defending IT systems must spend months developing baselines of normal activity and acquiring information on what traffic is legitimate and what is not. The boardroom is not a good place to drive security policy.

But, quite often, the security policy does come from the boardroom, complete with insistence that the hackers be found as soon as the security system is in place. Once in place, anything that gets past the security system is seen as a failure of the system. There’s no concept of how many violations get through without the system in place and how many have been deterred by the system, just that security needs to work now, and failure is not an option… and other platitudes like that that make good motivational posters.

That’s simply the wrong mentality about security. Going back to the DDR – a lethal system with a long-term perspective and a massive intelligence network behind it – we see a highly effective system that was nevertheless defeated by those both determined enough and lucky enough. The leaders of the DDR did not scrap it until the DDR was pretty much no longer a going concern. With less ruthless security in place, a lack of long-term perspective, and a failure to orchestrate all available intelligence sources, is it any wonder that IT security is such a problem for companies to get their arms around?

And if companies want to step up their potential penalties to include criminal charges, they cannot do so without first developing a proper concept of security. They will need to train employees in forensic procedures. They will need to get legal and HR involved more closely with IT – and to be more up-to-date on both the technology and the legal environment surrounding it. There will have to be decisions about what breaches must be allowed so as to collect proper evidence, and so on and so forth. We’re talking about the development of a corporate intelligence community.

And, even then, that’s no guarantee. But it’s a start. Most companies’ security policy is as effective as a substitute teacher ignoring all the students in the class. Some step up their game to that of a substitute screaming at all the students in the class. True security needs to have consequences, investigative procedures, and collections of data – and, even then, there will always be breaches. Security will not eliminate the problems, only reduce them.

Wet Economics and Digital Security

A student once unwittingly asked a physicist, “Why did the chicken cross the road?” Immediately, the physicist retreated to his office to work on the problem.

Some days later, the physicist emerged and told the student, “I have a model that explains the chicken’s actions, but it assumes the road is frictionless and that the chicken is both homogeneous and spherical…”

In the last 50 years, economics has increasingly tied its models to frictionless decisions and homogeneous, spherical employees. These employees are as interchangeable with each other as are the widgets a company mass-produces. They show up to work at a certain wage and, since perfect competition in the labor market makes these models work, there is an assumption that the cost of labor is at a point where the market clears – no need to offer any more or less than that going wage rate.

As the world economy moved from regionalization to globalization and digital technologies made employees’ locations no longer tied to where a firm was legally chartered, the idea that costly labor in one market could be replaced with cheaper labor in another market fit well with the notion that employees were homogeneous, spherical physical bodies making frictionless decisions.

The biggest problem with the economic models that have dominated economic thought over the last 50 years is that, while they are great for predicting normal ups and downs in periods of relative calm, they are useless in times of massive upheaval. Put another way, they are like weather forecasting models that see category 5 hurricanes as “an increased chance of rain” or massive blizzards as “snowfall predicted for the weekend”. These models go blind in such unanticipated crises and are particularly useless for crises precipitated out of massive fraud and abuse. We saw the flaws of the models first in 1998, then in 2001, and again in 2008. We may see another round of flaw-spotting very soon, what with unease afflicting a number of major banks in Germany and Italy…

But the second-biggest problem with the economic models is less obvious, and that’s because it involves the one thing everyone seems to leave out of their thought processes: security. Because employees are not interchangeable spheres and their decisions frequently involve friction, we can see security issues arising out of our reliance on those economic models.

The first is that employees are not widgets to be had at the lowest price: changing out a skilled veteran with many years at a firm even for someone with the same amount of experience from another firm involves a loss of institutional knowledge that the veteran had. The new person will simply not know many of those lessons until they are learned the hard way. In security, that can be costly, if not fatal.

It’s even worse if the new employee has significantly less experience than the veteran. I shudder whenever I hear about “voluntary early retirement” because it means all those people with many, many years at the firm are about to be replaced by people of vastly less experience. Because that experience is not quantified in the models used, it has no value in the accounting calculations that determined cutting the payroll to be the best path to profitability.

Then there’s the matter of the new employees – especially if they’re outsourced – not having initiative to fix things proactively. That lack of initiative, in fact, may be specified in the support contract. Both parties may have their reasons for not wanting to see initiative in third-party contractors, but the end result is less flexibility in dealing with a fluid security issue.

Remember the story of the Little Dutch Boy that spotted a leak in the dike and decided to stop it with his finger, then and there? What sort of catastrophe would have resulted if the Little Dutch Boy was contracted by the dike owners to monitor the dike, to fill out a trouble ticket if he spotted a breach, for the ticket to go to an incident manager for review, then on to a support queue with a 4-hour SLA to contact the stakeholders, so that they could perform an incident review and assess the potential impact before assigning it the correct priority? There would be a good chance that the incident would resolve itself negatively by the time it was graded as a severity one incident and assigned a major incident management team to set up a call bridge.

Security needs flexibility in order to succeed, and that kind of flexibility has to go along with the ability to exercise initiative. Full-time employees, costly though they may be, are more likely to be authorized to exercise initiative – and, if they’re experienced, more likely to use it.

On the matter of those decisions with friction… at any time, an employee can make an assessment of his or her working conditions and decide that they are no longer optimal. Most employees will then initiate either a job search process or a program of heavy substance abuse to dull the pain brought on by poor life and career choices, but others will choose different paths. It is those others that will create the security issues.

These others may decide that the best thing to do in their particular position is to get even with their employer for having created an undesirable situation. In the film “Office Space”, three of the main characters chose that path and created a significant illegal diversion of funds via their access to financial system code. They also stole and vandalized a laser printer, but that had less impact on their employer than the diversion of funds. In the same film, a fourth employee chose to simply burn down the place of business. Part of the popularity of the film stemmed from the way those acts of vengeance, in particular the vandalism of the printer and the sabotage of the financial system, rang true with the people in the audience.

We’ve all known an employer that, in our minds, deserved something like what happened in the movie. When I read recently of a network administrator deleting configurations from his firm’s core routers and then texting all his former co-workers that he had struck a blow on their behalf, I saw that such sentiments were alive and seething in more than one mind. As options for future employment in a region diminish as the jobs that once sustained that region go elsewhere, that seething resentment will only increase, resulting in ever-bolder acts of defiance, even if they result in the self-destruction of the actors initiating them.

But then there are others who think even more about their actions and see, in criminal activity, a ray of hope saving them from self-destruction – whether they sell their exfiltrated data for money or post it anonymously on WikiLeaks. The first seeks to act as a leech off his employer; the second has a motive to make the truth be known. Both actually prefer that the employer’s computer systems be working optimally, so as to facilitate their data exfiltration.

In economic models, this should not be happening. People should be acting rationally and either accept lower wages or retrain for other jobs. In real life, people don’t act rationally, especially in times of high stress. So, what can firms do about this in order to improve security?

The answer lies in the pages of Machiavelli’s “The Prince”. Give them a stake in the enterprise that requires their loyalty in order to succeed, and then honor that loyalty, even if it means payroll costs don’t go down. It won’t eliminate criminals 100%, but it will go a long way towards limiting the number of criminals in one’s firm and towards maximizing the incentive for loyal employees to notice, report, and react to suspect behaviors. If a firm was once again a place where people could be comfortable with their job prospects for the years ahead, it would be less of a target in the minds of the unhomogeneous, unspherical employees whose decisions always come with friction. It would be a firm that would have better retention of institutional knowledge and expertise in dealing with incidents.

Now, will boards and c-level executives see things this way? Not likely, given that the economic models of the past 50 years dominate their thinking. Somehow, the word has to get out that econometric models are not the path to security. Security is not a thing, but a system of behaviors. If we want more security, we then have to address the behaviors of security and give employees a reason to embrace them.

The Right to Know and Institutionalized Ignorance

I take the title for this from the Yes, Minister episode in which the bureaucrat, Sir Humphrey Appleby, saves the political career of Jim Hacker by not providing him with full information about an issue. In a nutshell, there are some things that are better for the people at the top to not know. Appleby explains that there is a certain dignity in ignorance, almost an innocence in saying with full honesty, “I did not know that.”

Now, consider your own firm and its security. What if there’s a conduit from the Internet to the DMZ, and from there on to the entire corporate network, including areas segregated for business-critical functions? And what if that conduit has been there for over 10 years? And what if your firm is due for a security audit or in the process of having a security audit? Does anyone in a high position – or any of the auditors, for that matter – personally benefit from this huge flaw being made known?

It’s hugely embarrassing. It’s been there for 10 years, and the network people have known about it all along, but they have grown tired of being ignored by the systems people who refuse to re-architect their system with security in mind, since that would significantly impact production. If the people on top and the auditing firm had to deal with this now, I could see more than one person getting fired or put on a remediation plan because of it.

But if nobody officially knows about it, nobody has to officially do anything about it. The audit completes successfully and the auditors retain their contract to provide auditing services. The managers and executives can nod their heads that, yes, they’ve got their arms around this security thing and that things are looking pretty good on that front.

Yes, the execs and auditors have both a right to know and a need to know about that huge problem, but neither has a desire to have such highly embarrassing information made known. There’s a sort of institutionalized ignorance about the situation, to the point where, if there was a breach via that conduit, an executive could legitimately protest at the engineers and developers, “Why didn’t you tell anyone about it?” Never mind that they did, but got ignored, tabled, distracted, re-prioritized, or otherwise sidetracked.

No, if something had been done right away, there’d be no problem. But this has festered and become toxic. It is best for the careers of those closest to it to ignore it. If it does result in a breach, then those at the top have to throw as much blame around as possible so that nobody will try to assign any blame to them – and that blame flows downhill to the very people who tried to raise the issue in the first place.

In the episode, Appleby explains the difference between controversial and courageous:

“Controversial” only means “this will lose you votes”. “Courageous” means “this will lose you the election”!

Similar parallels apply to business. This is why I roll my eyes a little every time I hear an exhortation to innovate and think outside the box. Trust me, if I’m not following a specified process to innovate or doing a proper SOP for thinking outside the box, I’m doing something either controversial or courageous, with associated negative consequences.

It stands to reason that if I were to email a C-level person directly, copy the whole management chain between me and them, and then describe a situation as bad as the above, I’d be doing something highly courageous. If I do less than that, then institutionalized ignorance can keep anyone with a right to know the bad news from actually having to hear it, thereby maintaining their dignity in ignorance.

Apart from being a cautionary tale about not developing a too-cozy relationship with one’s auditors, this is also a very real illustration of how a culture of permitting mistakes has to be in place in order for security to have a chance. Even monumental mistakes such as this 10-year marvel need to be admissible in order for the people responsible for fixing them to actually do something about them, rather than sweeping them under the carpet and pretending that all is well as they desperately seek employment elsewhere, before the situation blows up.

We’ve got the need to know and the right to know… but are we strong enough to know even when we lack the desire to know?

Manual Override

As the Himynamistan diplomatic convoy made its way to the intersection, the Dassom agent noted their passing as he sat slumped and fetid, like countless other bums on the streets of San Francisco. The convoy made its halt at the stop sign, autonomous brakes holding firm against the gravity of the downward slope.

As the convoy yielded right-of-way to the cross traffic, the Dassom agent, nameless in the shadows of the alleys of dumpsters between glittering financial monuments, lifted a small infrared controller and pointed it at the 18-wheeler loaded with pig iron that was rolling along just behind the convoy.

The Dassom agent pressed a button on the IR device and shot a signal to the 18-wheeler.

You know, how that big truck got to the top of the hill with all that metal in it was a testament to the builders of the engine in that beast of a machine. Well done, lads! Such a shame that the engineering and craftsmanship were going to be wrecked soon after the truck’s driving software interpreted the IR signal as a manual emergency override to disengage all braking systems and to accelerate.

The Dassom agent did not turn to one side or the other, but kept the metallic collision between the truck and the Himynamistan diplomats in their unmoving vehicles to his back. Most of the wreckage went forward, towards the cross-street traffic, but a few small ricochets bounced off the back of the agent’s hoodie.

Insecure Social Media, Russians, and US Elections

For social media companies, insecurity is an integral part of their business model. It’s all down to how they work. They want to sell advertising, and their rates are determined by the popularity of the pages where the ads run. More popular pages mean higher ad rates, so anything that boosts popularity also boosts revenue for the social media companies.

Of course, when accounts that are liking and following are found to be fraudulent, advertisers cry foul and demand a purging of those fake accounts and also a reduction in their ad rates. This creates an incentive for social media companies to obscure account ownership so that fake accounts are less likely to be discovered. There’s also an incentive to engage in clickfraud, but I’ll pass over that for now. Instead, I’d like to focus on how those fraudulent accounts can do more than just hike up revenues.

The Russian intelligence agency Федеральная служба безопасности Российской Федерации (ФСБ) – FSB to English-speakers – has made use of misinformation and agitprop since it was the FSK, and before that the KGB, and before that the MGB, and before that the NKVD, and before that the NKGB, and before that the Cheka, and before that the Okhrana. One could say that misinformation and agitprop have been hobbies of Russian intelligence agencies for about 130 years. What is new for this age are the avenues available to the FSB to spread its poison messages.

Before social media concerns, Russians wishing to whip up extremist political movements and create internal discord in Western democracies had to buy their own presses and pay for their own mouthpieces, which could be quite expensive. If one of those were unmasked, then the expensive operation would be compromised and that expense and effort would go to waste.

But with Facebook and Twitter and blogs, the FSB now has drastically reduced costs and much higher levels of cover. It’s Agitprop as a Service! Consider how easy it is to run multiple fake online accounts, compared to hiring multiple agents. These accounts generate interest and activity on social media, so they drive up ad rates – the firms that would be policing them in an authoritarian regime are protecting them in a capitalist system.

Even better for the FSB, the ability of extremist groups – particularly the far right – to sequester themselves from other news sources means that, once a message is injected into their media echo chambers, it will be repeated often enough that, per the observation of Josef Goebbels, it will be held up as a truth. What shows up there will be tweeted and retweeted by FSB accounts active in far-right forums and will soon be heralded as non-fake news in outlets such as Fox, ZeroHedge, and Breitbart.

Back when ZeroHedge was more focused on the financial misdeeds of large banks in the wake of the Panic of 2008, I was an avid reader of stories posted there. But something changed over time, particularly in the run-up to the 2016 election in the USA. It drifted from examining financial issues as its primary focus and slid deep, really deep, into pro-Trump positions, with lots of posters on its boards echoing comments that could be classified as pro-Russian, anti-Semitic, racist, neo-fascist, or some combination of the above.

The slide in bias was obvious to me. I’ve been a follower of non-corporate media since the 1980s, and I know the difference between an investigative journalism piece and a partisan propaganda paper. ZeroHedge had definitely lost a lot of the former and had gained a lot of the latter. As the onslaught of Russophilism, antisemitism, racism, and neofascism increased, I felt a need to get out of that news source and seek out alternatives. In so doing, I did a lot of searching. In those searches, I was stunned to see how many other outlets were parroting the sludge from ZeroHedge, like they were sheep from Animal Farm bleating out “four legs good, two legs better!”

From all this agitation in stirring up the far right, Russia knows it is destabilizing America. The heads of the FSB know that the American far right will prove Pushkin right at every turn: it will reject ten thousand truths in order to cling to the lie that justifies itself. This is how I know Judge Moore is highly likely to win the Senate election in Alabama. The Russian Twitter choir is singing his praises and millions of far-right users of social media are echoing those sentiments, actively and belligerently.

Judge Moore, of course, is a hand grenade being lobbed directly at the US Senate. The man has shown a pattern of serial sexual predation against minors. If he wasn’t running as a Republican for the Senate, he’d be the focus of a true crime show right now. Russian tweets and far right echoes claim falsely that his accusers have either forged evidence against him or recanted their claims. Those lies allow his supporters to push hard for his election. If Moore is elected, it will roil the Senate as many senators will demand that he not be seated and that Alabama send a different favorite son to the Capitol. Each house of Congress can do just that, accept or reject the people sent to it – and Moore is ripe for rejection.

If Moore is rejected, it will split the Republican party even deeper. The Republicans are already incapable of putting together a coherent legislative agenda. With a Moore rejection, it will be practically open war between the different halves of the Republican party.

If Moore is not rejected, it will split the Republican party even deeper, but in a different way. This time, instead of Moore’s supporters repeating Russian propaganda that they were robbed, it will be outraged moderates, unable to stomach being in the same political caucus as a sexual predator. Bear in mind that the stalking of multiple daughters of single women, all around the same age, all in roughly similar ways, is an actual pattern of sexual predation. We have documentation of this. We have multiple testimonies to this effect. This is a sexual predator that the Russians, through insecure social media, are helping to force down the GOP’s throat.

When we look back at what happened in Georgia and Estonia in the decade prior to 2016, we see exactly the same thing. We see the social media misinformation. We see the political manipulation of extremists. When we look at Ukraine after the USA toppled a pro-Russian government there, we even see Russia providing armed assistance to extremists. That fact chills me, especially in light of how many on the far right hinted at taking up arms if Trump wasn’t elected in 2016.

I doubt they actually would have taken up arms on their own, but if they were whipped up by their social media echo chamber and shipped a few thousand AK-15s, maybe they would cross that tipping point. If that were to happen, I have no doubt that the US Army would crush that insurrection… and then spend decades dealing with low-level guerrilla warfare, all fueled by the continued echoing of Russian lies in social media echo chambers.

While there is increasing agitation on the left in the form of the antifa movement, there just isn’t as much militancy in the American left, especially after the legacy of peaceful, antiwar protests. These are not minds that will have much fertile soil for violent rhetoric. They’re also more likely to turn out one of their own if he or she is found to have feet of clay. Witness their abandonment of big donors found to be serial sexual harassers. Witness their pressure on their own political caucus to resign from office, rather than persist in running for it or remaining in place.

No, the fertile ground is in the neofascist mind. The Russians make those pushes in Greece, in Germany, and in the USA. And while I find Steve Bannon to be more of an Austrofascist than a Nazi (the strong affinity for Catholicism is a dead giveaway for Austrofascists), I don’t think such fine details matter either to the Russians or to the minds the Russians poison every day with their lies.

So how do we solve this problem? The market won’t solve it. In fact, the free market will fan these flames, because the business model of Twitter and other outlets is to spread misinformation if that means more ad revenue. But in a world of multiple email addresses, how do we limit a person to just one Twitter account? In a world of VPNs and Tor exit nodes, how do we keep hordes of FSB-driven accounts from affecting social media? When these fake accounts started out years ago with softer agendas and have loads of historical content, how do we build an algorithm that can tell a friend from a foe? Or a friend from a foe yet to reveal itself?

Hamilton 68 is a project that, instead of looking for the artillery shells of propaganda, seeks out the guns. While it does not claim to have discovered all sources of Russian disinformation on social media, it has found some significant signals amidst the noise. There’s some hope yet in the intel they are able to derive from extensive signals analysis. This is what any good intel agency does: read all the news to see where stories originated and how they are disseminated.

Right now, the Russian social media barrage is striving to elect Roy Moore to the US Senate. But, merely by getting the Republicans to cling to him like a piece of driftwood in a shipwreck, they’ve already demonstrated their control over that political faction. In the days and weeks to come, be certain that the Russians will continue to tug on that leash and the far right will follow every jerk and tug.

Insecure Social Media, Russians, and US Elections: Agitprop as a Service.

IT Network Managers: Give the Gift of Linux to Your Engineers

‘Tis the season and all that. I have a short holiday message for all the managers of Networks and Network Security: Give your engineers a Linux box this year, and they will have the merriest of Diwalis, Christmases, Hanukkahs, and/or other Winter holidays, as appropriate.

Give this Linux box permission to log on to your network devices, install scripting tools on it, and send your engineers links to websites where there are network configuration scripts for the downloading. They will be responsible and won’t run scripts without testing them first on a switch or three in the lab. But they’ll be ever so happy to have these tools!

The real struggle will be to ensure that the Linux scripting box is under proper management. Secure it so it can only be accessed via a jump host that’s used to access most everything else on your network. That’s easily done. An even bigger struggle may be to introduce a server that’s used almost exclusively by the network and network security teams. This means possible exception documents to file, meetings with the server and/or VM managers about patching and maintenance routines your teams will need to be aware of, and other managerial things of that sort.

After all, isn’t that why managers are called managers? They… manage… resources for the good of the firm. That Linux scripting host is a major IT resource: get on out there and manage away until your charges have one!

There are many Linux distributions out there – ask your engineers which one they’d like if your firm hasn’t yet standardized on a distribution. Once the distribution issue is settled, be ready to fight battles over making sure your engineers have appropriate levels of access and that the Linux box itself has the access it needs to get its scripting job done.

And what a scripting job it *will* do! Multivendor-aware scripts! Version-aware scripts! Little or no expense on annual licensing! Happy engineers learning how to use scripts to do all their work faster and with fewer errors – and for the errors that do crop up, what do you want to wager they’ll be fixable via other scripts? I’d wager rather a lot, but at low odds, because that’s how things are done, you know.
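To make the multivendor-aware idea concrete, here’s a minimal sketch of the dispatch logic such a script might start from. The vendor names and commands below are illustrative placeholders, not a definitive inventory – swap in whatever your own gear actually speaks:

```python
# Hypothetical sketch: map each vendor platform to the command that dumps
# its running configuration. Keys and commands here are examples only.
VENDOR_COMMANDS = {
    "cisco_ios": "show running-config",
    "arista_eos": "show running-config",
    "juniper_junos": "show configuration",
}

def backup_command(vendor: str) -> str:
    """Return the config-dump command for a vendor, or raise if unknown."""
    try:
        return VENDOR_COMMANDS[vendor]
    except KeyError:
        raise ValueError(f"no backup command known for vendor {vendor!r}")
```

From there, the script can hand the right command to whatever transport your shop uses to reach its devices, and unknown platforms fail loudly instead of silently running the wrong command.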

I’ve seen Linux scripting boxes do things that proprietary config management utilities have failed to deliver, and that’s a huge deal. Even if you already have a proprietary solution, this Linux scripting host is going to complement that proprietary solution and give you so much more flexibility. The business case is here; I just wrote it. Copy and paste and modify as needed – that’s my $HOLIDAY gift to you, O Network Manager!

If you read this article on your own or if you got this forwarded to you by your direct reports, please make this holiday season one of the best your firm has ever seen. Picture the scene:

That’s what a network engineer looks like after he’s gotten the paperwork finished that authorizes a Linux scripting host for his team to use. He’s so happy now that he knows that the configurations on those switches and routers and firewalls and all kinds of gear are going to be standardized and, hence, more secure. Why, he could even write a script to parse for unauthorized changes… his joy knows no bounds.
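That script to parse for unauthorized changes can start out very simply. Here’s a hedged sketch using Python’s standard difflib to flag the lines where a running config drifts from a known-good baseline; the sample config lines in the test are made up for illustration:

```python
import difflib

def config_drift(baseline: str, running: str) -> list[str]:
    """Return the added/removed lines where the running config departs from baseline."""
    diff = difflib.unified_diff(
        baseline.splitlines(),
        running.splitlines(),
        fromfile="baseline",
        tofile="running",
        lineterm="",
    )
    # Keep only changed lines, skipping the "---"/"+++" file headers and hunk markers.
    return [line for line in diff
            if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))]
```

Run nightly against the configs the scripting host pulls down, an empty list means all is well, and anything else is a candidate unauthorized change for a human to review.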

Be that manager this year. Be the person forever remembered as the manager who gave the gift of Linux.