AI-driven software development is no longer a futuristic dream. With GitHub Copilot, more than 1.2 million developers already rely on artificial intelligence to generate code on their behalf, saving them time (and their companies money).
But while we tend to focus on Copilot for its impressive, if still nascent, code-generation capabilities, there are even better, more immediately impactful opportunities for AI and large language models (LLMs) in software development.
Just ask Jaana Dogan, a distinguished software engineer at GitHub. According to Dogan, “People are too focused on code generation and completely ignore that LLMs are useful for code analysis.” In other words, savvy developers should consider using AI-driven software development less for doing their coding and more for reviewing their coding.
Domo arigato, Mr. Roboto
Developers know they should be testing their code. Yet software testing (and test-driven development) is talked about more than done.
Developers may lack the objectivity to test their own code effectively, or they may simply find testing cumbersome and slow. From unit testing to integration testing to regression testing (and beyond), there are many ways to test, but all come with a cost: They seem to slow development down. Short-term output will likely dip, but good testing leads to faster output over the long run.
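To make the distinction concrete, here is the smallest of those kinds, a unit test, which checks a single function in isolation. (The `slugify` function is hypothetical, invented only to have something small to test.)

```python
import re

def slugify(title: str) -> str:
    # Hypothetical function under test: turn a title into a URL slug.
    s = title.strip().lower()
    return re.sub(r"\s+", "-", s)

def test_slugify():
    # A unit test pins one function to known input/output pairs;
    # integration and regression tests apply the same idea at wider
    # scope: across modules, and across releases.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim   Me  ") == "trim-me"

test_slugify()
print("unit tests passed")
```

Writing these cases is exactly the repetitive, easy-to-skip work developers tend to put off.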
Of course, as I recently wrote, worshiping development speed at the expense of impact is a Really Bad Idea. The goal for any software developer shouldn’t be to write lots of code but to write as little code as possible with the maximum impact. Testing is an essential way to ensure this happens.
This is where LLMs powering things like Copilot can have a massive impact.
Back in the 80s, Styx sang, “Thank you very much, Mr. Roboto / For doing the jobs nobody wants to.” Many developers dislike doing the ugly but necessary work of software testing. What if AI could take care of that for you?
LLMs can help create code by generating boilerplate, for example, so that developers can focus on higher-value work, but it can be nerve-racking to depend on black-box AI.
As Ben Kehoe, former cloud robotics research scientist for iRobot, has stressed, “A lot of the AI takes I see assert that AI will be able to assume the entire responsibility for a given task for a person, and implicitly assume that the person’s accountability for the task will just sort of … evaporate?”
An experienced developer may feel comfortable letting LLMs generate code for her, as she’ll have the expertise to spot when the AI might be wrong. For less experienced developers, by contrast, LLMs can lead them into situations where they’re relying on machines to do their work, perhaps not realising fully that they still have the responsibility for that work.
Having LLMs review code, by contrast, comes with less risk. “I’ve been personally surprised how useful [LLMs] were in identifying missing test cases, unreleased leaking resources, or even telling me what’s wrong with my IAM policy,” notes Dogan.
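Dogan’s “leaking resources” point is easy to picture. Here is a sketch of the kind of bug an LLM reviewer tends to catch (the functions are illustrative, not from any real codebase):

```python
import os
import tempfile

def count_lines_leaky(path):
    # The kind of bug an LLM reviewer might flag: the file handle is
    # never explicitly closed, so it stays open until the garbage
    # collector happens to reclaim it (longer on PyPy, or under load).
    f = open(path)
    return len(f.readlines())

def count_lines(path):
    # The fix a reviewer would suggest: a context manager closes the
    # handle on every path, including exceptions and early returns.
    with open(path) as f:
        return sum(1 for _ in f)

# Tiny demo: both give the same answer; only one cleans up after itself.
fd, demo = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("one\ntwo\nthree\n")
print(count_lines_leaky(demo), count_lines(demo))  # both count 3 lines
os.remove(demo)
```

Asked to review `count_lines_leaky`, an LLM will usually point at the unclosed handle and suggest the `with` form: analysis of existing code, not generation of new code, which is exactly Dogan’s point.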
This isn’t to suggest they’re perfect. LLMs, Dogan continues, “are not highly useful for suggesting optimisations.” Instead, “At a high level, they can talk about what opportunities are there, but results are not in the right priority order. They are not useful for code deletion or cleanups either.”
Still, LLMs can help a developer spot issues in languages they know less well; a Java developer, for example, might use an LLM to review Go code. Or, Dogan says, LLMs can be good for “navigating small blocks of code [or] specific algorithms I’m not familiar with.” AI won’t take any software developer’s job away, not anytime soon, anyway. Done right, it can help developers become much better at their jobs.
In sum, LLMs specifically, and AI generally, can complement developers’ work. Human ingenuity can’t be replaced, but some of the tedious tasks (like testing) will increasingly be handled by machines, freeing developers to spend more of their time being…human.