Software development is a challenging discipline built on millions of parameters, variables, libraries, and more that all must be exactly right. If one character is out of place, the entire stack can fall.
And that’s just the technical part. Opinionated programmers, demanding stakeholders, miserly accountants, and meeting-happy managers mix in a political layer that makes a miracle of any software development work happening at all.
Still, it’s impossible to list the endless innovations that software alone has made possible. And much of that depends on the efforts of coders and the people who manage them. Over the years software teams have figured out a few rules for getting the job done. From elaborate methodologies to emergent disciplines and philosophies, the rule books of software development help make it possible for everyone to collaborate and get to the finish line with something that works.
Alas, for all the innovation, there are still failure modes — ways that software developers and their managers get things wrong. Sometimes the methodologies are misapplied. Or the good ideas are taken too far. Sometimes developers just forget, sometimes purposely, what they’re supposed to do.
These sins of software development can derail just about any project. Pay attention: the only way to ensure your team builds great things is to pause and consider the not-so-great code that results when we fall prey to these missteps and temptations.
Choosing the wrong methodology
All software development methodologies have fans who are passionately devoted to the rules that define their favorite way of organising a team. The problem is often choosing the right one for your team.
One big mistake is imposing these rules from the top. If coders are big believers in a different approach, they’ll often grouse and complain with cynical disdain when shoehorned into using another. Another mistake, though, is letting programmers in the trenches choose their favorite, because they may not understand what’s best for the whole team.
Choosing the right methodology won’t fix all problems, but it will reduce the amount of friction that comes from organising the workflow. The team will know their role and they’ll understand just how to code inside of it.
Failing to plan for scale
Some software development issues can be fixed later. Building an application that scales efficiently to handle millions or billions of events isn’t one of them. Creating effective code with no bottlenecks that surprise everyone when the app finally runs at full scale requires plenty of forethought and high-level leadership. It’s not something that can be patched later with a bit of targeted coding and virtual duct tape.
The algorithms and data structures need to be planned from the beginning. That means the architects and the management layer need to think carefully about the data that will be stored and processed for each user. When a million or a billion users show up, which layer does the flood of information overwhelm? How can we plan ahead for those moments?
Sometimes this architectural forethought means killing some great ideas. Sometimes the management layer needs to weigh the benefits with the costs of delivering a feature at scale. Some data analysis just doesn’t work well at large scale. Some formulas grow exponentially with more users. The computations overwhelm the hardware and clog the communications.
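To see how a formula can blow up with user count, consider a hypothetical “similar users” feature. Everything here — the data shape, the tag-overlap metric, the function names — is invented for illustration. Comparing every user to every other is quadratic in the number of users, while indexing by shared attribute first keeps the work proportional to how much users actually overlap:

```python
def pairwise_overlap(users):
    """Naive all-pairs comparison: O(n^2) in the user count.

    Fine in a demo with 100 users; at a million users this is
    on the order of 5 * 10^11 comparisons and will never finish.
    """
    scores = {}
    for i, a in enumerate(users):
        for b in users[i + 1:]:
            shared = len(set(a["tags"]) & set(b["tags"]))
            if shared:
                scores[(a["id"], b["id"])] = shared
    return scores


def bucketed_overlap(users):
    """Index users by tag first, then compare only within a bucket.

    Work is driven by bucket sizes rather than the total user count,
    so it scales far better when each tag is shared by few users.
    """
    by_tag = {}
    for u in users:
        for tag in u["tags"]:
            by_tag.setdefault(tag, []).append(u["id"])
    scores = {}
    for members in by_tag.values():
        for i, a in enumerate(members):
            for b in members[i + 1:]:
                key = tuple(sorted((a, b)))
                scores[key] = scores.get(key, 0) + 1
    return scores
```

Both functions compute the same answer on small inputs; the difference only shows up when the user count climbs, which is exactly why this kind of choice has to be made up front rather than patched in later.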
Developers don’t always want to think about the big picture. It’s too easy to just dive in and start creating. But smart development teams and their managers spend time anticipating these issues, because if they don’t, the failures come later.
Falling for the latest trend
Software developers can be notoriously attracted to new and flashy ideas. Maybe it’s a new kind of database that offers more complex queries. Maybe it’s a new programming language that will fix all the bugs caused by the old one.
Sometimes these ideas have merit. Many times, though, they end up slowing development as everyone tries to learn the new technology. Sometimes the new ideas have hidden flaws that become apparent only after everyone is knee-deep in the muck, just before the project must be delivered.
Caution is often the best rule for adopting new technology. There’s a reason why some of the biggest and oldest companies continue to run software written in COBOL. Trends come and go, but working logic in running code doesn’t wear out.
Retaining too much data
Programmers are natural pack rats. They love to store information in case it’s needed in the future. Keeping it around because “you never know when we’ll need it”, though, can be a recipe for a security leak or a violation of users’ privacy.
The problem can be even greater with personal information like birth dates or other details. Some areas, such as financial records or health records, are heavily regulated, making it all too easy to run afoul of the rules.
Good software architecture involves planning ahead to minimise the amount of data that’s stored. It protects everyone, cuts storage costs, and can even speed up the system by reducing the amount of data in motion.
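The data-minimisation idea can be sketched in a few lines. This is a hypothetical example, not a prescription: the field names, the age brackets, and the choice of a SHA-256 hash are all assumptions about what a particular product might need.

```python
import hashlib
from datetime import date


def minimise_profile(raw):
    """Keep only what the product actually needs.

    Hypothetical example: the app needs a coarse age bracket for
    recommendations and a stable identifier for de-duplication.
    It does not need the raw birth date or the email address.
    """
    birth = date.fromisoformat(raw["birth_date"])
    age = (date.today() - birth).days // 365
    if age < 18:
        bracket = "under_18"
    elif age < 65:
        bracket = "18_to_64"
    else:
        bracket = "65_plus"
    return {
        # Coarse bracket instead of an exact, regulated birth date.
        "age_bracket": bracket,
        # One-way hash lets us spot duplicate accounts without
        # keeping the email address itself on file.
        "email_hash": hashlib.sha256(raw["email"].lower().encode()).hexdigest(),
    }
```

The point is that the record written to storage contains nothing a leak could expose and nothing a regulator could object to, while still supporting the features that justified collecting the data in the first place.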
Outsourcing the wrong work
The debate over building or buying software is a time-honored one with no definitive conclusion. Still, software developers often choose poorly. Maybe there’s a perfectly good solution at a good price and they are too prideful to set aside their custom stack with its expensive in-house team. The opposite also happens. Some managers buy into an outside vendor’s product line only to watch the vendor jack up the prices dramatically when the lock-in is complete.
Unfortunately, deciding just which outside tools to use is a constant challenge for software development teams and their managers. Hiring the right outside source is genius, but adopting the wrong vendor is a ticket to a high-priced prison.
Skimping on testing
Effective software developers and their managers know that testing is a constant challenge and just as much a part of the job as writing recursive code or designing an elegant data structure. Testing should be included from the very beginning, because unit tests and integration tests are vital to keeping code viable throughout the development process.
But testing is also important for handling large loads. It’s too easy to write code that runs smoothly on our desk when we’re the only user. If the application is going to have hundreds, thousands, or maybe hundreds of thousands of users, you need to ensure that the code is efficient and the deployment is able to handle the large scale.
Many teams bring in quality assurance testers who watch for the kinds of mistakes that programmers make. They know how to, say, set a parameter to zero just to see whether it causes a divide-by-zero error. They know to purchase 3.14159 shirts or -4000 socks just to see if it breaks the code. This attention to testing is essential when the use cases get so complicated that it’s hard for any single human to think of all the variations and write clean code that anticipates them all.
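This kind of probing is easy to automate. Below is a sketch under stated assumptions: a made-up `unit_price` helper and the edge-case tests a QA tester might write against it, including the divide-by-zero and fractional-quantity probes described above.

```python
def unit_price(total_cents, quantity):
    """Hypothetical helper: price per item for an order.

    Validates quantity up front so that bad input fails loudly
    instead of producing a ZeroDivisionError or a nonsense price.
    """
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    if quantity != int(quantity):
        raise ValueError("quantity must be a whole number")
    return total_cents // int(quantity)


def test_rejects_zero_quantity():
    # The classic divide-by-zero probe: set the parameter to zero.
    try:
        unit_price(1000, 0)
        assert False, "expected ValueError"
    except ValueError:
        pass


def test_rejects_fractional_and_negative():
    # Buying 3.14159 shirts or -4000 socks should never succeed.
    for qty in (3.14159, -4000):
        try:
            unit_price(1000, qty)
            assert False, "expected ValueError"
        except ValueError:
            pass


def test_happy_path():
    assert unit_price(1000, 4) == 250
```

Each test encodes one tester’s trick permanently, so the next programmer who touches the function gets the same scrutiny for free.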
Underestimating the power of planning
Most code requires some devotion to planning. Alas, most coders just want to jump right in and start machine-gunning code.
One of my friends tells me that it took him several years to recognise that the best step is to stop, plan, test the plans, and plan some more. Writing plans may seem tedious but it can be 10 times faster to try out ideas when thinking abstractly. He’s now a very successful manager.
Planning also means including the input from the other teams and stakeholders. They’re going to be the ones using the code in the future, so spending time discussing the project and learning their needs will save plenty of frustration afterwards. This is the best way to avoid many of the sins listed here.