Security is hard and won’t get much easier

Software systems are complex, and development teams have conflicting goals. Oh, and people are imperfect.

Security is one of the few things that will survive the budget axe should the world plunge into recession, but it’s increasingly clear that we can’t simply spend our way to a secure future.

Indeed, SLSA (Supply-chain Levels for Software Artifacts), Tekton, and other solutions can secure open source supply chains, but the reality is we still mostly rely on developers to do better and “be vigilant,” as Modal Labs founder Erik Bernhardsson points out. Unsurprisingly, this non-strategy keeps failing.

This prompts Bernhardsson’s core question: “why is security so hard in 2022?” One answer is that systems keep getting more complex, leaving holes that hackers can exploit. With this in mind, is there any hope of things getting better?

No panaceas

One major reason security is hard is that it's difficult to secure a system without understanding that system in its entirety.

As open source luminary Simon Willison posits, “writing secure software requires deep knowledge of how everything works.” Without that fundamental understanding, he continues, developers may follow so-called “best practices” without understanding why they are such, which “is a recipe for accidentally making mistakes that introduce new security holes.”

One common rejoinder is that we can automate human error out of development. Simply enforce secure defaults and security issues go away, right?

Nope. “I don’t think the tools can save us,” Willison argues. Why? Because “no matter how good the default tooling is, if engineers don’t understand how it keeps them secure they’ll subvert it—without even meaning to or understanding why what they are doing is bad.” 

Additionally, no matter how good the tool, if it doesn’t fit seamlessly into security-minded processes, it will never be enough. Ultimately, security (as with most things) comes back to people: You can fix software, but until you fix the people behind the software, you haven’t really fixed anything.

Even so, programming languages and other software tools could introduce mechanisms to catch insecure developer code. We have secrets managers such as HashiCorp Vault and better authentication through services such as Auth0, all of which have generally improved security. Still, such defaults for "mass-market" solutions may not cover the cracks in a particular company's security.

As one developer adds, "The most impactful security problems are also unique to each company and their customer base." In other words, however good enforced defaults may be for, say, an app's authentication, security breaches tend to be much more specific to a given company's architecture.

That’s true, but it’s also not quite as persuasive as some suggest. After all, strong, security-oriented defaults in ORMs (object-relational mappers) have largely eliminated SQL injection, once among the most common attacks, as Octavian Costache calls out.
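The mechanism behind that default is worth seeing concretely. Here's a minimal sketch, using Python's standard-library `sqlite3` module and a hypothetical `users` table, of why parameterized queries (the kind ORMs emit under the hood) neutralize injection where string concatenation does not:

```python
import sqlite3

# In-memory database with a sample table (hypothetical data for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")

# Classic injection payload supplied as "user input".
malicious_input = "nobody' OR '1'='1"

# Vulnerable: concatenating input into SQL lets the attacker rewrite the query,
# so the WHERE clause matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious_input + "'"
).fetchall()

# Secure default: a parameterized placeholder treats the input strictly as data,
# never as SQL, so nothing matches.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious_input,)
).fetchall()

print(len(vulnerable), len(safe))  # → 2 0
```

An ORM enforces the safe form by construction, which is exactly the kind of "enforced default" the critique above says can't cover company-specific gaps.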

Security is people

Here’s the perennial problem with features: “Security and innovation is driven by different people with conflicting goals,” notes Scling’s Lars Albertsson.

“Security and risk management will always lose against direct business needs in the long term.” Or, as Socure’s Gordon Shotwell expresses it, “Security almost always has a productivity cost. This cost is often very difficult to justify because security has long-term somewhat theoretical benefits while the productivity cost is real and immediate.”

Otherwise put, the value of security is generally apparent in hindsight but rarely clear in advance.

Not that it must remain this way. As Albertsson suggests, both the QA and ops communities fixed this dissonance through cultural shifts, along with tools and processes that treated development speed as a non-negotiable priority. Once that happens with security, as seems to be underway with the devsecops movement, we should see the chasm between security and new feature development close.

Back to the people problem and holistic system thinking. One of the hard things about security is that “security complexity comes from engineering complexity that itself comes (mostly) from organisation complexity,” according to Bearer founder Guillaume Montard. If development teams and architectures skew smaller, they’ll be better able to understand their system holistically and secure it accordingly.

We keep thinking that security is something we can buy, but really, it’s about how we function as development teams. Security is always a people problem, which is why process-oriented approaches such as devsecops show real promise.
