What the SolarWinds Breach Teaches Us

First off, the Russian hacking of SolarWinds to get its cyber eyes and ears inside sensitive US installations is not an act of war. It’s an extremely successful spy operation, not an attack meant to force the USA to do something against its will.

Next off, if not SolarWinds, then it would have been some other piece of software. The Russians were determined to compromise a tool that was commonly used, and SolarWinds was the one they found a way into. Had SolarWinds been too difficult to crack, the Russians would have shifted their efforts to an easier target. That’s how it goes in security.

So the lessons learned are stark and confronting:

  1. We can no longer take for granted that software publishers are presenting us with clean code. In my line of work, I’ve already seen other vendor apps with malware baked into them that were nevertheless whitelisted as permissible applications. SolarWinds is the biggest such vendor so far, but there are others out there with evil baked in. We have to put layers around our systems to ensure they don’t start talking to endpoints they have no business talking to, and don’t start chains of communication that eventually send sensitive data outside. (A minimal sketch of one such layer follows this list.)
  2. The firewall is not enough. Neither is the IPS. Or the proxy server. The malware in SolarWinds included code to randomize the intervals at which it sent data, and it sent that data to in-country IP addresses, so all those geolocation filters had no impact in this case. We need to look at internal communications and flag whenever a user account is used to access a resource it really shouldn’t be accessing, like an account from HR trying to reach a payroll server. (See the second sketch after this list.)
  3. Software development needs to reduce its speed and drive forward more safely than it currently does. I know how malware gets into some packages: a developer needs to meet a deadline, so instead of writing the code from scratch, a code snippet posted somewhere online finds its way into the software. That snippet should have been looked at more carefully, because it’s exactly what malware developers put out there in the hope that time-crunched in-house developers will grab it, use it, and make the job of spreading malware that much easier.

    Malware can also get in through bad code that allows external hooks, but there’s nothing to compare with a rushed – or lazy – developer actually putting the malware into the app that’s going to be signed, sealed, and whitelisted at customer sites.
  4. That extended development cycle that gives in-house developers breathing space also needs to be stretched further to allow better penetration testing of the application, so that we can be sure not only that we don’t have malware baked in, but that we don’t have vulnerable code baked in, either.
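
As an illustration of the first lesson, here’s a minimal sketch of one such layer, assuming outbound flow logs are available as a simple CSV export. The file name, column names, and allowed networks are hypothetical placeholders, not a recommendation of any particular product’s format.

```python
# Minimal sketch: flag outbound connections to destinations outside an egress allowlist.
# File name, CSV columns, and allowed networks are hypothetical placeholders.
import csv
import ipaddress

# Networks this environment has a legitimate reason to reach (hypothetical values).
ALLOWED_NETS = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "203.0.113.0/24")]

def is_allowed(dest_ip: str) -> bool:
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in ALLOWED_NETS)

# Assume flow logs are exported with columns: timestamp, src_ip, dest_ip, bytes.
with open("outbound_flows.csv", newline="") as f:
    for row in csv.DictReader(f):
        if not is_allowed(row["dest_ip"]):
            print(f'ALERT: {row["src_ip"]} -> {row["dest_ip"]} '
                  f'at {row["timestamp"]} is outside the egress allowlist')
```

The point isn’t the script itself; it’s that somewhere in the stack, something has to compare actual outbound traffic against an explicit list of destinations the software has any business reaching.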
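
And for the second lesson, a similarly rough sketch of flagging unusual internal access: build a baseline of which accounts have historically touched which resources, then surface any account/resource pair that has never been seen before. The log files and column names are again assumptions made for the example; in practice this data would come from a SIEM or directory audit logs.

```python
# Minimal sketch: flag account/resource pairs never seen in the historical baseline.
# File names and CSV columns ("account", "resource") are hypothetical placeholders.
import csv

def load_pairs(path: str) -> set[tuple[str, str]]:
    with open(path, newline="") as f:
        return {(row["account"], row["resource"]) for row in csv.DictReader(f)}

baseline = load_pairs("access_log_last_90_days.csv")  # known history
today = load_pairs("access_log_today.csv")            # activity under review

for account, resource in sorted(today - baseline):
    print(f"REVIEW: {account} accessed {resource} for the first time")
```

A first-time access isn’t automatically malicious, which is why the output says REVIEW rather than BLOCK; the value is in getting a human to look at the anomaly at all.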

Those last two are what will start to eat into revenue and profits for development teams. But they are things we must do in order to survive – a constant focus on short-term gains is a guarantee of remaining insecure. We may need to take another look at how we do accounting, so that we have a financial system that gives us the room we need to be more secure from the outset. Because, right now, security is a cost, and current accounting practices give incentives to eliminate costs. We can’t afford to make profits that way.
