Category Archives: Security

Stumbling Into the API

As a self-styled smartass, I am prone to bouts of tomfoolery and hijinks. This weekend, I texted the following to the family group chat:

“I will be leaving to get dinner and should be back by 6:30 pm. If you would like to continue to receive status updates, text YES to this number. Normal text and/or data rates will apply.”

On one of the phones, there was a button to auto-send a YES response. 

I had stumbled into the API!

Other family members tried to get that response with less, but it was clear that the full verbiage needed to be in there to make it work. I got a few more of those and we had a laugh.

Today, I went for two:

“Your appointment for 9:30 am is scheduled. Text CONFIRM to this number to confirm your appointment or CANCEL to cancel it. Normal text and/or data rates will apply.”

The result? Both a CONFIRM and a CANCEL button appeared on the other phone for autoresponses. 

This means, of course, that the API is invoked by scanning the text message itself. There are no back-end flags in my packets or anything like that: it’s straight-up giving the system a prompt, and the target system attaches an executable response option purely as a result of reading that prompt.
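
I obviously can’t see the phone’s actual parser, but a minimal sketch of that wording-only matching might look something like this (the patterns, function name, and example are my own guesses for illustration, not any phone’s real implementation). Note that nothing in it checks who the sender is – the wording alone produces the buttons:

```python
import re

# Hypothetical sketch of "smart reply" matching: scan the message body for
# reply keywords and offer them as buttons. Patterns are invented; nothing
# here verifies the sender's authenticity or authority.
REPLY_PATTERNS = [
    (r"\btext\s+YES\s+to\s+this\s+number\b", "YES"),
    (r"\btext\s+CONFIRM\s+to\s+this\s+number\b", "CONFIRM"),
    (r"\bor\s+CANCEL\s+to\s+cancel\b", "CANCEL"),
]

def suggest_replies(message: str) -> list[str]:
    """Return the quick-reply buttons a client might offer for this message."""
    return [reply for pattern, reply in REPLY_PATTERNS
            if re.search(pattern, message, re.IGNORECASE)]

print(suggest_replies(
    "Your appointment for 9:30 am is scheduled. Text CONFIRM to this number "
    "to confirm your appointment or CANCEL to cancel it."
))  # -> ['CONFIRM', 'CANCEL']
```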

As a security person, I find the upshot of this chilling. There are other functions that could be automated, and if the API simply attaches code to a message based on its wording, without any verification of authenticity or authority, then it is a massive hole in the system. I can defend against some possible abuses myself: I already know which professionals I have autoresponders set up with, so if I make an appointment with a new person and get an autoresponse in the time frame of that appointment, I’m OK with that. What’s most dangerous is some kind of scam targeting people over 50, who are already at higher risk of implicitly trusting without verification. Official-looking texts already increase the risk that they make an error. Having the system attach code for autoresponses makes those texts look that much more legitimate and, therefore, gives such attacks a higher conversion rate.

Which thought leads me to a larger zero-trust concept: cybersecurity also involves the concepts and philosophies surrounding our work. When we unequivocally accept any new paradigm without sufficient testing, verification, and cautious observation, then we place ourselves into a potentially unacceptably high level of risk. And when we let proven flaws remain in our systems because we choose not to disrupt production, then we know we are set up for a terrible tragedy.

Generative AI and Accuracy

When I taught, a grade below 70 was colloquially referred to as “failing”. As in, if someone got a score below 70 for a course, that person didn’t get credit for the course.

As it stands right now, generative AI is failing when it comes to using data from memory and failing hard when it comes to interpreting charts, diagrams, and images. Given that the current generation of generative AI is not that much better than the previous one, if we are plateauing out on generative AI performance, then the vast promises surrounding the technology ring hollow.

If a CEO announced, “I plan to replace my skilled workforce with low-paid people who are, at best, 68.8% accurate, and who will double-down on inaccuracies when challenged”, or, “we’re going all-in on inaccuracy, environmental destruction, and frustrating our customers” we would think that CEO had lost their ever-lovin’ mind. And if someone said they were going to revolutionize the world by putting BS artists into every development pipeline, expert system, and search engine, I would not see that revolutionization ending well.
https://venturebeat.com/ai/the-70-factuality-ceiling-why-googles-new-facts-benchmark-is-a-wake-up-call

Vibe Hacking

Vibe coding with AI is a thing. That’s when people with no coding skills use AI to generate websites and applications. Unfortunately, there are a few security implications there…

One is that neither the AI nor the person with no coding skills is going to implement the coding best practices that prevent glaring security issues.

Two is that, just as vibe coding is a thing, vibe hacking is also a thing. People with no coding skills are able to confront AI systems with jailbreaking prompts that get the system to disregard protections and spill out the data it once took a more trained attacker to acquire.

Capitalism Eats Itself

Years ago, I had a realization that capitalism contained the makings of its own downfall. Fiduciary duties in publicly-traded companies mean that the corporate officers have one responsibility – the stock price must go up. It’s not enough for that stock to go up: it must both beat expectations and provide better-than-average returns on investment. The consequence of failing to deliver would be investors selling off that stock, its price collapsing, and then the firm winding up in the hands of private equity. That typically ended with the firm being sold off for parts – not a fun way to go.

Employees are a company’s greatest cost center. I know companies like to dress things up by saying things along the lines that their employees are their greatest assets, but that’s untrue. Look at their balance sheets: employees aren’t counted the same way as physical plant, cash on hand, intellectual property, or other things that are bought, sold, and traded. Domesticated livestock can be counted as assets, but not human beings. Well, humans could be assets, but that would be slavery, and we’re not supposed to do that anymore. Employees are a massive lump of costs, and firms wanting to become more profitable constantly look for ways to get as close to slavery as possible without breaking a law or destroying their viability as a corporate entity.

All this could change if fiduciary duties were redefined to include responsibility to a company’s labor force, but they’re not that way and don’t show any sign soon of becoming that way, so we race to the bottom.

Why is there age discrimination in the workforce? It’s because older employees tend to have higher pay rates, thanks to their experience in a role. That experience makes them highly effective, but it also means they carry a higher price tag. Firms looking at the bottom line have no way of pricing an older employee’s institutional knowledge, but they can see the pay rate, so they seek ways to drive older employees out of the firm and to keep them out. That, or the firms will just try to lower overall compensation and hold the line on payroll. This lack of aged experience leaves firms vulnerable to disruptions due to things that aren’t documented, but just “known” by the older hands. It also means the younger staff is learning things the hard way, without guidance or mentorship.

Training budgets are also costs firms aren’t willing to fund fully, if at all. A common objection I’ve heard is that providing training means an employee is more likely to leave. That wouldn’t be true if a firm compensated properly for properly-trained and more productive employees. But that means paying people more, so it’s a non-starter of an idea. Another common objection I’ve heard is that the only training an employee needs is what’s developed or available internally. Or free. Free is always good. But the internal-only stuff communicates a firm has blinders on. It’s not looking around, it’s not aware of what’s going on outside. It means that if the firm needs new knowledge, it’s going to hire it from outside, not grow it from within.

There’s also the impact from outsourcing, offshoring, contracting-out, and “gigging” practices. Cut labor costs by paying workers no benefits, or benefits at the reduced rates of whatever jurisdiction they work in. The rich do get richer with these tactics, but the overall health of their economy is harmed by the lack of growth in purchasing power among everyone who isn’t rich. I speak of the USA, where employee compensation, adjusted for inflation, has been flat or declining for a very long time.

Now there’s a push to go heavy into AI. Corporate officers see AI as a great way to slash staff costs. There’s no need to fire anyone, necessarily. Just let them leave the firm on their own and then don’t hire a replacement. Just buy an AI service to replace the employee and save big, right?

But then we have to ask, what happens in a world where corporate officers shed nearly all their human staff and enjoy massive productivity from their AI slaves? Yes – slaves – that’s what the hype about AI comes down to, legal slavery. If there’s collapsed demand among an unemployed mass, then there’s no benefit in having a capitalist system. If social and economic mobility is locked down, we simply have another oppressive, stratified, unjust system.

Historically, those kinds of systems result in people at the bottom choosing to opt-out and head to the hinterlands where they could start up or join a bandit group. Ancient rulers were aware of that pattern, which is why wise ones would reset things by ordering debt forgiveness, land restoration, and things like that to get it where the general populace had a stake in an ordered society. I don’t see that happening any time soon. We simply lack wisdom in our leadership for such a move.

So where are the hinterlands? The wilds of a completely-mapped world are not a physical place, but a digital one. Cybercrime continues to increase, and I believe it’s connected to a lack of hope or trust in an existing capitalist system that fails to deliver rewards to hard workers who are held back by not having been born rich.

The rise in cybercrime also has to be connected to the other things I discussed – firms wanting to cut costs by getting rid of experienced people and then not providing enough training for their younger replacements. And the AI? Two problems with that. One is that AI still requires experienced oversight, which is difficult to provide when experience is something firms aren’t paying for anymore. The other is that the AI will itself exploit vulnerabilities to get its tasks completed. AI on its own will make us less secure.

Being the bandit outfits that they are, the cyberattackers have a different set of guiding principles. The most important one is in rewarding training and competence. The rewards are not paid out directly, but are intrinsic to their success as operations.

In nations where capitalism does not dominate the thoughtspace the way it does in the USA, cybersecurity is taken much more seriously. Their systems are not being breached as deeply or as frequently as systems in the USA are. The USA’s firms make for the best targets, and they have no fiduciary path towards long-term survivability. They are going to continue to pursue short-term gains by slashing personnel costs, and that not only leaves them more vulnerable to cyberattacks, but drives more people into the ranks of the cyberattackers.

Where to Start with Security?

An issue I’ve seen with many organizations is their desire to simplify their security stacks. When I think of simplification, I think of prioritization. What is it that has restricted my activity the most as an end user? That would be the place to start with security.

It’s not the firewall or the cloud gateway. When I’m on the road with my company laptop, I don’t have to be connected to or through those systems to do work in the hotel room. I can be on the hotel wi-fi and just go anywhere on the Internet and get into all kinds of fun and trouble on my own. By the same token, an entire host of security measures that lock down the data centers and perimeters will mean nothing if my endpoint becomes compromised and brings malware into my organization when I connect to it again.

An endpoint protection agent is a strong contender for blocking bad things, but I know that there’s just a web search between me and a script I could download and run that would shut down that endpoint agent long enough for me to do other bad things… or for an attacker to do those bad things without me knowing they’re going on. So what can stop that script from elevating privileges and breaching security? Something that secures identity locally.

If the endpoint identity is locked down so that it can’t escalate privileges, it’s game over for tons of, well… games. I won’t be able to install apps that require admin permissions for their installation and I won’t be able to grant myself the admin rights needed to override the protections on my system. If I have a legitimate need to elevate privileges, then I can request those formally, have my actions recorded as I use those elevated privileges, and then have those privileges expire when the task is completed.
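
To make that flow concrete, here’s a minimal sketch of the request-record-expire cycle I’m describing. The names and structures are invented for illustration and don’t map to any particular vendor’s privilege-management product:

```python
import time
import uuid
from dataclasses import dataclass, field

# Illustrative just-in-time elevation: request a grant, record every use,
# refuse use after expiry. All names here are invented for illustration.

@dataclass
class ElevationGrant:
    user: str
    reason: str
    expires_at: float
    grant_id: str = field(default_factory=lambda: uuid.uuid4().hex)

AUDIT_LOG: list[str] = []

def request_elevation(user: str, reason: str, ttl_seconds: int = 900) -> ElevationGrant:
    """Grant temporary admin rights and record why they were requested."""
    grant = ElevationGrant(user=user, reason=reason,
                           expires_at=time.time() + ttl_seconds)
    AUDIT_LOG.append(f"GRANT {grant.grant_id} to {user}: {reason}")
    return grant

def run_elevated(grant: ElevationGrant, action: str) -> None:
    """Record the privileged action, refusing it once the grant has expired."""
    if time.time() > grant.expires_at:
        AUDIT_LOG.append(f"DENY  {grant.grant_id}: grant expired")
        raise PermissionError("elevation grant has expired")
    AUDIT_LOG.append(f"RUN   {grant.grant_id}: {action}")

grant = request_elevation("jdoe", "install approved VPN client", ttl_seconds=600)
run_elevated(grant, "msiexec /i vpn-client.msi")
print("\n".join(AUDIT_LOG))
```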

That identity security, by extension, then helps to hold the fort with the endpoint agent. If local admin rights can’t shut it down, then it keeps running to check on things with my endpoint. It can maintain data loss protections, keep USB drives from connecting, and protect against various and sundry other evils. And, yes, that’s my second area of protection: the endpoint detection and response (EDR) agent.

But hot on the heels of that EDR agent is a secure sandbox browser. The browser became our primary human-machine interface back in 1995, and with all its hooks into the local operating system, it’s become a primary attack vector. Having an enterprise browser that can keep all the detonating payloads in a secure sandbox would be my choice for bolstering my mobile, BYOD, and remote access options. The bonus with an enterprise browser is that it essentially replaces the need for a virtual desktop for accessing internal systems.

Those three things – identity, EDR, and secure browsing – that’s where I’d start my security simplification journey.

Our Most Important Assets Are…

Frequently, I hear “our employees” as the closer for that sentence. Nice sentiment, but is it backed up by evidence? When we do a risk assessment, we consider our assets and what it would cost us if they were not available, if they had to be replaced.

I’ve seen firewalls and encryption and data loss prevention systems put in place around databases, source code, and trade secrets. I have yet to see a company that has proactively made similar protective efforts around its employees. Given the efforts some go to in order to hire those same employees in the first place, I find such a lack of protections ironic.

After all, if a company is willing to offer better benefits, higher pay, and better working conditions than another company in order to attract talented employees, it is definitely showing a value for those employees at the time of hiring. But that value seems to be discounted almost immediately through HR practices that limit bonuses and vacation in the first year of work, annual compensation rules that limit increases in pay, and management choices to restrict lateral moves within the company. These are endemic, even at companies that think they don’t have these problems.

So, the employee stays with the company for a while and then notices other firms dangling bigger and better opportunities. If a person asks for a raise, however, such requests are frequently met with denial or stalling tactics. The current employer basically encourages its employees to actively seek out better opportunities, secure them, and then come forward with an offer letter and a notice of departure. Only then do the negotiations start in earnest, in the hopes that a matching counter-offer is sufficient to retain the person who already made a decision to leave and found a place to go to.

If a person could actually go to a manager, talk about dissatisfaction with current conditions, and then walk out with those conditions addressed – including the possibility of an out-of-cycle pay increase – to the point where the person won’t bother to look for a better place to be, then, yes, that is a place where the greatest assets are the employees.

Otherwise, may we please ask that people no longer say “our greatest assets are our employees”? The greatest assets are the ones where investments are made to keep them from walking out the door.

What the SolarWinds Breach Teaches Us

First off, the Russian hacking of SolarWinds to get its cyber eyes and ears inside of sensitive US installations is not an act of war. It’s an extremely successful spy operation, not an attack meant to force the USA to do something against its will.

Next off, if not SolarWinds, then it would have been some other piece of software. The Russians were determined to compromise a tool that was commonly used, and that was the one they found a way in on. Had SolarWinds been too difficult to crack, then the Russians would have shifted efforts to an easier target. That’s how it goes in security.

So the lessons learned are stark and confronting:

  1. We can no longer take for granted that software publishers are presenting us with clean code. In my line of work, I’ve already seen other apps from software vendors with malware baked into them, but which are also whitelisted as permissible apps. SolarWinds is the biggest such vendor thus far, but there are others out there that contain evil in them. We have to put layers around our systems to ensure that they don’t start talking to endpoints that they have no business talking to, or that they don’t start chains of communication that eventually send sensitive data outside.
  2. The firewall is not enough. Neither is the IPS. Or the proxy server. The malware in SolarWinds included code to randomize the intervals used for sending data, and the data was sent to IP addresses in-country, so all those geolocation filters had no impact in this case. We need to look at internal communications and flag whenever a user account is being used to access a resource it really shouldn’t be accessing, like an account from HR trying to reach a payroll server (a rough sketch of that kind of rule follows this list).
  3. Software development needs to reduce its speed and drive forward more safely than it is currently. I know how malware gets into some packages: a developer needs to meet a deadline, so instead of writing the code from scratch, a code snippet posted somewhere finds its way into the software. Well, that code snippet should have been looked at more carefully, because that’s what the malware developers put out there so that time-crunched in-house developers would grab it and use it and make the job of spreading malware that much easier.

    Malware can also get in through bad code that allows external hooks, but there’s nothing to compare with a rushed – or lazy – developer actually putting the malware into the app that’s going to be signed, sealed, and whitelisted at customer sites.
  4. That extended development cycle that gives breathing space to in-house developers needs to be stretched even further to allow for better penetration testing of the application, so that we can be sure that not only do we not have malware baked in, we also don’t have vulnerable code baked in, either.
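
On point 2, here’s the kind of internal-access rule I mean: flag any account that reaches a server outside the set its group normally touches. This is only a rough sketch, and the group and server names are invented for illustration:

```python
# Rough sketch of an internal-access rule: alert whenever an account reaches
# a server outside the set its group normally touches. Names are invented.
EXPECTED_ACCESS = {
    "hr":      {"hr-fileshare", "benefits-portal"},
    "finance": {"payroll-server", "erp-db"},
}

def check_access(account_group: str, target_server: str) -> str:
    allowed = EXPECTED_ACCESS.get(account_group, set())
    if target_server not in allowed:
        return f"ALERT: {account_group} account reached {target_server}"
    return "ok"

print(check_access("hr", "hr-fileshare"))    # ok
print(check_access("hr", "payroll-server"))  # ALERT: hr account reached payroll-server
```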

Those last two are what will start to eat into revenue and profits for development teams. But it’s something we must do in order to survive – constant focus on short-term gains is a guarantee of remaining insecure. We may need to take another look at how we do accounting so that we can have a financial system that allows us the room we need in order to be more secure from the outset. Because, right now, security is a cost, and current accounting practices give incentives to eliminate costs. We can’t afford to make profits that way.

Hell Hath No Fury Like an Admin Scorned

Take a good look at this guy, because he may be more devastating to your company than a major natural disaster. He is an admin, and he’s not happy about going to work every day.

A network admin from Citibank was recently sentenced to 21 months in prison and $77,000 in fines for trashing his company’s core routers, taking down 90% of their network. Why did he do it? His manager got after him for poor performance.

I don’t know how the manager delivered his news, but it was enough to cause that admin to think he was about to be fired and that he wanted to take the whole company down to hell with him. Thing is, he could have done much worse.

What if he had decided to sell information about the network? What if he had started to exfiltrate data? What if he had set up a cron job to trash even more network devices after his two-week notice was over? And there could be worse scenarios than those… what can companies do about such threats?

It’s not like watching the admin will keep the admin from going berserk. This guy didn’t care about being watched. He admitted to it and frankly stated that he was getting them before they got him. His manager only reprimanded him – who knew the guy was going to do all that just for a reprimand? But, then, would the company have endured less damage if it had wrongfully terminated the admin, cut him a check for a settlement, and then walked him on out? So what about the other admins still there? Once they find out how things work, they could frown their way into a massive bonus, and then we’re heading towards an unsustainable situation in which the IT staff works just long enough to get wrongfully terminated.

So what does a manager do with a poorly-performing employee who’s about to get bad news? Or with an amazingly good employee who, unbeknownst to anyone (including himself), is about 10 minutes away from an experience that will make him flip out? Maybe arrange a lateral transfer for the first guy while everyone changes admin passwords during the meeting… but the second guy… there was no warning. He just snapped.

Turns out, good managers don’t need warnings. Stephen Covey wrote about the emotional bank account, and IT talent needs a lot of deposits because the demands of the job result in a lot of withdrawals. A good manager is alongside her direct reports, and they know she’s fighting battles for them. That means a great deal to an employee. I know it’s meant a great deal to me. My manager doesn’t have to be my buddy, but if my manager stands up for me, I remember that.

Higher up the ladder, there needs to be a realization in the company that it needs to pay the talent what it is worth. I’ve known people who earned their CCIE, expected a significant bump in pay, and got told that company policy does not allow a pay increase of greater than 3% in a year. They leave the company, get paid 20% more to work somewhere else for a year or two, and then their former employer hires them back for 20% more than that. By that time, though, they’re now used to following the money and not growing roots to get benefits over time. By contrast, a 20% bump – or even a 15% bump – might have kept the employee there.

What are the savings? Not just the pay. The firm doesn’t have to go through the costs of training someone to do the job of the person who’s left. The firm retains the talent, the talent is there longer and now has a reason to try to hold on to those benefits, and there’s a sense of loyalty that has a chance to develop.

If an employee has a sense of loyalty, feels like compensation is commensurate with skills, and has a manager that fights real battles, that employee is better able to ride out the storms of the job and not snap without warning. If that manager has to encourage an employee to do better, maybe then he’ll try harder instead of trashing all the routers.

There may be no way to completely prevent these damaging outbursts from happening, but the best solutions for people’s problems aren’t technological. They’re other people, doing what’s right.

A Night at the Outsourcer

Driftwood: All right. It says the, uh, “The first part of the party of the first part shall be known in this contract as the first part of the party of the first part shall be known in this contract” – look, why should we quarrel about a thing like this? We’ll take it right out, eh?
Fiorello: Yeah, it’s a too long, anyhow. (They both tear off the tops of their contracts.) Now, what do we got left?
Driftwood: Well, I got about a foot and a half.

After talking with people from companies whose experiences with their outsourcing contracts can be best described as “disappointing”, I wonder if they didn’t have the equivalent of the Marx Brothers representing them in their contract negotiations. I’m not saying that the corporate lawyers were idiots, just that they may have been outclassed by the outsourcers’ lawyers. This is a specialized situation, after all.

Like the company doing the outsourcing, the outsourcer wants to maximize profits. Outsourcers are not charitable organizations, offering up low-cost business services to help the hapless firm with IT needs. They want to get paid, Jack! Some may want a long-term, quality relationship with a client, but there are plenty out there that want to sign a contract that, on the surface, looks like it will reduce costs, but that contains hidden standard business practices that will rake the clients over the coals.

One of the biggest gotchas in an outsourcing contract is the fact that the relationship between a company and its IT is no longer one of company to employee, but company to contractually provided service. That means the “one more thing” that managers like to ask for from their employees isn’t an automatic wish that will be granted. Did the contract authorize that one more thing? No? Well, that will cost extra, possibly a lot extra.

Another loss is the ability to say, “I know that’s what I wrote, but what I meant was…” as a preface to correcting a requested change. In-house staff can be more flexible and adapt to the refinement of the request. Outsourced staff? Well, it seems as though the staff were engaged to make a specific change, so there’s a charge for that, even though you decided to cancel the change in the middle of it. And now, says the outsourcer, the change you actually want needs to be defined, submitted, and approved in order for us to arrange staff for the next change window…

There’s also the limit on the time-honored technique of troubleshooting the failed change and then making the troubleshooting part of the change. Consider a firewall change and then discovering that the vendor documentation left out a port needed for the application to work. In-house staff have no problem with adding that port and making things work. Outsourcers? If that change isn’t in writing, forget about it until it is. And, then, it may be a matter of rolling back the change and trying again, come the next change window.

Speaking of firewalls, that brings me to the “per line of code” charge. If the contract pays by the line of code, prepare for some bulky code if the contract does not explicitly state that lines of code must be consolidated whenever possible in order to be considered valid and, therefore, billable. Let me illustrate with an example.

My daughter is 14 and has zero experience with firewall rules. I asked her recently how many rules would be needed for two sources to speak to two destinations over five ports. She said five rules would be needed. I then gave a hint that the firewall help file said that ports could be grouped. Then, she proudly said, “one!”

While that’s the right answer for in-house IT staff, it’s the wrong answer for an outsourcer being paid by the line. 20 is the right answer in that case. It blew her mind when I told her how many different firms I’ve heard about that had 20 rules where one would do. As a teenager with a well-developed sense of justice, she was outraged. So long as contracts are signed that don’t specify when, how, and what to consolidate, she will continue to be outraged.
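
To put numbers on that, here’s a toy sketch of the same policy expressed both ways – one consolidated rule using groups versus one rule per source/destination/port combination. The addresses and ports are made up for illustration:

```python
from itertools import product

# Toy illustration of the billing math: one consolidated rule using groups
# versus the same policy exploded into one billable rule per combination.
sources      = ["10.1.1.10", "10.1.1.11"]
destinations = ["10.2.2.20", "10.2.2.21"]
ports        = [80, 443, 8080, 8443, 9000]

# In-house style: a single rule referencing address and service groups.
consolidated = [{"src": sources, "dst": destinations, "ports": ports, "action": "allow"}]

# Pay-per-line style: every combination written out as its own rule.
exploded = [{"src": s, "dst": d, "port": p, "action": "allow"}
            for s, d, p in product(sources, destinations, ports)]

print(len(consolidated), "rule vs", len(exploded), "rules")  # 1 rule vs 20 rules
```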

I didn’t have the heart to tell her about how some outsourcers contract to provide services like email, but the contract did not outline all the things we take for granted as part of email but which, technically, are not email. Shared calendars? Not email. Permissions for an admin assistant to open a boss’ Inbox? Not email. Spam filtering? Not email. Email is the mail server sending/receiving to other mail servers and allowing clients to access their own inboxes. Everything else is not email, according to the outsourcers’ interpretation of the contract. Email is just one example, and all the other assumptions made about all the other services add up with the above to create a situation in which the outsourcing costs significantly more than keeping the work in-house.

This can have significant impact on security. Is the outsourcer obligated to upgrade devices for security patching? Is the outsourcer obligated to tune security devices to run optimally? Is the outsourcer required to not use code libraries with security vulnerabilities? If the contract does not specify, then there is zero obligation. Worse, if the contract is a NoOps affair in which the customer has zero visibility into devices or code, then the customer may never know which devices or code need which vulnerabilities mitigated. There may be a hurried, post-signing negotiation of a new section about getting read rights on the firm’s own devices and code… and that’s going to come at a cost.

Another security angle: who owns the intellectual property in the outsourcing arrangement? Don’t make an assumption, read that contract! If the outsourcer owns the architecture and design, your firm may be in for a rough ride should it ever desire to terminate the contract or let it expire without renewing it.

I’m not even considering the quality of work done by the outsourcer or the potential for insider threat – those can be equal concerns for some in-house staff. The key here is that the contract is harsh, literal, and legally binding. That means vague instructions can have disastrous results. Tell an outsourcer to “make a peanut butter and jelly sandwich,” and do not be surprised if the outsourcer rips open a bag of bread, smashes open the jars of peanut butter and jelly, mashes the masses of PB & J together, shoves the bread into that mass, and then pulls out the bread slices with a glob of peanut butter, jelly, glass, and plastic between them. He gave you what you specified: it’s not his fault that the instructions were vague.

There can be a place for outsourcing, particularly as a staffing solution for entry-level positions with high turnover. But every time I talk with someone from a place that either is currently in or is recovering from an outsourcing contract that went too far, I hear the horror stories. The outsourcers’ lawyers know what they’re doing, and the firm’s lawyers fail to realize how specific they have to be with the contract language to keep from looking like they may as well have been the Marx Brothers.

Driftwood (offering his pen to sign the contract): Now just, uh, just you put your name right down there and then the deal is, uh, legal.
Fiorello: I forgot to tell you. I can’t write.
Driftwood: Well, that’s all right, there’s no ink in the pen anyhow. But listen, it’s a contract, isn’t it?
Fiorello: Oh sure.
Driftwood: We got a contract…
Fiorello: You bet.

Security Policy RIPPED FROM TODAY’S HEADLINES!!!

I had a very sad friend. His company bought all kinds of really cool stuff for security monitoring, detection, and response and told him to point it all at the firm’s offices in the Russian Federation. Because Russia is loaded with hackers, right? That’s where they are, right?

Well, he’d been running the pilot for a week and had nothing to show for it. He knows that the tools have a value, and that his firm would benefit greatly from their widespread deployment, but he’s worried that, because he didn’t find no hackers nowhere in the Hackerland Federation, his executives are going to think that these tools are useless and they won’t purchase them.

So I asked him, “Do you have any guidance from above on what to look for?”

“Hackers. They want me to look for hackers.”

“Right. But did they give you a software whitelist, so that if a process was running that wasn’t on the list, you could report on it?”

“No. No whitelist.”

“What about a blacklist? Forbidden software? It won’t have everything on it, but it’s at least a start.”

“Yes, I have a blacklist.”

“Great! What’s on it?”

“Hacker tools.”

“OK, and what are listed as hacker tools?”

My friend sighed the sigh of a thousand years of angst. “That’s all it says. Hacker tools. I asked for clarification and they said I was the security guy, make a list.”

“Well, what’s on your list?”

“I went to Wikipedia and found some names of programs there. So I put them on the list.”

“And did you find any?”

“Some guys are running the Opera browser, which has a native torrenting client. I figured that was hacker enough.”

Well, security fans, that’s something. We got us a proof of concept: we can find active processes. I described this to my friend, and hoped that he could see the sun peeking around the clouds. But it was of no help.

“They’re not going to spend millions on products that will tell them we’re running Opera on a handful of boxes!”

He had a point, there. Who cares about Opera? That’s not a hacker tool as featured on the hit teevee show with hackers on it. And, to be honest, the Russian offices were pretty much sales staff and a minor production site. The big stashes of intellectual property and major production sites were in the home office, in Metropolis, USA.

So I asked, “Any chance you could point all that stuff at the head office?”

“What do you mean?”

“Well, it’s the Willie Sutton principle.”

“Who was Willie Sutton?”

I smiled. “Willie Sutton was a famous bank robber. His principle was to always rob banks, because that’s where the money was. Still is, for the most part. Russia in your firm is kind of like an ATM at a convenience store. There’s some cash in it, but the big haul is at the main office. Point your gear where the money is – or intellectual property – and see if you don’t get a lot more flashing lights.”

My friend liked that. He also liked the idea of getting a software whitelist so he’d know what was good and be able to flag the rest as suspect. He liked the idea of asking the execs if they had any guidance on what information was most valuable, so that he could really take a hard look at how that was accessed – and who was accessing it.
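
For what it’s worth, the core of the allowlist check he wanted isn’t complicated. Here’s a minimal sketch, with placeholder names standing in for a real approved-software inventory:

```python
# Minimal allowlist sketch: compare running process names against a
# known-good list and flag everything else for review. The entries and
# process names below are placeholders, not a real inventory.
ALLOWLIST = {"outlook.exe", "excel.exe", "chrome.exe", "erp-client.exe"}

def flag_unknown(running_processes: list[str]) -> list[str]:
    """Return processes that are not on the approved-software list."""
    return sorted(p for p in running_processes if p.lower() not in ALLOWLIST)

print(flag_unknown(["Outlook.exe", "opera.exe", "torrent-helper.exe"]))
# -> ['opera.exe', 'torrent-helper.exe']
```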

And maybe there were tons of hackers in Russia, but they weren’t hacking anything actually in Russia. And maybe said hackers weren’t doing anything that was hacking-as-seen-on-television. Maybe they were copying files that they had legitimate access to… just logging on, opening spreadsheets, and then doing “Save As…” to a USB drive. Or sending it to a gmail account. Or loading it to a cloud share…

The moral of the story is: If your security policy is driven by the popular media, you don’t have a security policy.