How observability tools help with legacy software


Legacy software isn't just dusty code on mainframes. It's the stuff you wrote a few months or years ago. Observability tools and good documentation help find and fix problems.


You know what legacy software is. It’s what other people write and what other people use. Right? Wrong. 

“The minute you ship code, you have legacy software,” argues Jean Yang, founder and CEO of Akita Software. “There's stuff that binds you. You can't change it. I don't remember things I did last week, and I think that's true of every single person out there that creates code.”

We normally think of “legacy software” as applications written in COBOL and sitting on some mainframe somewhere. This kind of thinking leads developers to build code myopically, not thinking of who will have to read their code later. As Yang points out, this includes just about everyone, including the original developer of the code.

How can we get smarter about our legacy code?

Network calls and complexity

One problem with code is that it’s never truly static. It never just sits there. As Honeycomb cofounder and CTO Charity Majors highlights in her interview with Yang, “Anytime it hops the network, you're in mystery land. You have no more control over it.” Your application can live in a pristine Garden of Eden, as it were, but the minute you need it to be useful, which generally requires a network call, all hell breaks loose, because you’ve introduced complexity into the application.

Majors argues you can’t really know how your software is going to behave until you push it into production. Only in production do the cracks in that “legacy” code reveal themselves. 

“A little piece of code is a complicated system, but once it's live,” she says, “once it has users and traffic patterns and different infrastructure underneath it, it becomes complex.” 

Complex, yes, and complex in ways that introduce “unknown unknowns.” Majors continues, “You can't predict what's going to happen when you change something. You have to change it and watch and see what happens under some controlled environment.”
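Majors’ “change it and watch” advice is often put into practice with gradual rollouts: ship the change dark, expose it to a small slice of traffic, observe, then ramp up. Here’s a minimal sketch of percentage-based bucketing, with hypothetical names (`in_rollout`, `"new-query-planner"`) chosen for illustration rather than taken from any vendor’s API:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Deterministically place a user in the first `percent` of 100 buckets."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < percent

# Expose the risky change to 5% of users, watch what happens, then ramp up.
if in_rollout("user-42", "new-query-planner", 5):
    pass  # new code path
else:
    pass  # existing (legacy) code path
```

Because the bucketing is a hash of the user and feature, each user sees a consistent experience as the percentage grows, which makes the observed behavior easier to reason about.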

People problems, not code problems

As Majors stresses, “Individual engineers can write software, but only teams can deliver, ship, maintain and own software. The smallest unit of software delivery is the team.” Not sure what she means? Yang explains that we can delude ourselves into thinking the problems (bugs, errors, etc.) are technical issues with our application. 

This misses the point, she says: “It's always a people problem. You always have to trust people. And a lot of what the tooling is doing is helping you do archeology on what people [did] in the past, what they [did] recently, [and] what caused these issues. It's all people.”

This brings us back to legacy. And observability.

Understanding legacy

“The cost of finding and fixing problems in our software goes up exponentially the longer it's been since you wrote it,” notes Majors in her interview with Yang. 

As such, observability tools such as Akita or Honeycomb can be critical, helping developers find and fix problems in minutes by observing code as it runs in production. That lets them debug their legacy code soon after it’s written rather than trying to decipher it months, years, or even decades later.

This is also why good documentation is so essential. Sometimes we think documentation is to help others be more productive with the code we’ve written, and that’s true. 

But as Datasette founder Simon Willison once explained to me, he writes documentation for himself, because he otherwise tends to forget why he wrote code in a certain way. “When I come back to the project in two months, everything works, and I know where everything is,” he says, because he’d written detailed documentation to help him (or anyone else) reorient to the code.

Good docs, good unit tests, and good observability. “Part of development is operating it,” Majors insists, “and seeing how it behaves under different systems and constraints. You're not going to understand your code in the IDE ever.” You have to run it, and then observe it.
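Seeing how code behaves in production usually starts with emitting rich, structured events rather than bare log lines: one wide, queryable record per request, which is the pattern tools like Honeycomb are built around. A minimal sketch, assuming an illustrative field schema and a stand-in `handle_request` function, neither of which comes from any particular tool:

```python
import json
import sys
import time

def handle_request(user_id: str, endpoint: str) -> dict:
    """Handle one request and emit a single 'wide' structured event for it."""
    start = time.monotonic()
    status = 200  # stand-in for real request handling
    event = {
        "timestamp": time.time(),
        "endpoint": endpoint,
        "user_id": user_id,
        "status": status,
        "duration_ms": round((time.monotonic() - start) * 1000, 2),
    }
    # One JSON line per request: cheap to emit now, and queryable later
    # when the unknown unknowns surface in production.
    print(json.dumps(event), file=sys.stderr)
    return event
```

The payoff comes months later: instead of rereading code you no longer remember, you query the events it emitted and see what it actually did.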

What about those looking at your code and trying to run it years or decades later? Even for those who don’t think this applies to them, consider what Avishai Ish-Shalom recently argued: our “modern” infrastructure such as Linux, MySQL, PostgreSQL, etc., is decades old, and even the “modern” clouds are in their middle teens now. 

More worryingly, he said, “This infrastructure, while proving itself remarkably more flexible and stable than our most optimistic predictions, is showing signs of rust and old age, making maintenance and development more challenging year after year.”

Both in terms of new legacy and old legacy, we’re all living in a legacy world. We’re moving from monoliths to microservices (sometimes), shifting from disk to RAM, and doing many more things when we bump up against hardware or software constraints, or we spot opportunities to exploit new advances. 

For those dealing with systems that are years or decades old, it gets worse, as Yang details, channeling her inner Tolstoy: “Every system built in the last year has the same stuff, but every system that was built 5, 10 years ago, they're all old in different ways.... Every legacy system is legacy in its own unique way.”

This is why companies are using Akita for service mapping, as a kind of discovery tool. They’re trying to figure out what their existing systems do and how they work. 

Those same people might go even deeper with Honeycomb. In both cases, these observability tools attempt to “make complexity tractable,” as Majors says, so that they can enable people — teams of people — to understand and deliver even more software.

Of course, this creates even more legacy. But that’s OK, so long as you’re using observability tools to understand and tame that legacy. There’s no way to avoid legacy. There’s now no reason to want to.

