
Although there may come a time when Generation Z and younger generations never learn to drive an automobile at all, almost all of us who have ridden in an autonomous vehicle learned to drive first.
Airline pilots take the same approach to autopilot functions: every one of them can fly a plane manually, to full human competency, before they consider switching over to digital controls.
As artificial intelligence (AI) now manifests itself in consumer-facing applications at every conceivable level, few users will ever need to understand the ground-level mechanics of the code and data crunching going on in the background to make an app ‘smart’. That level of blissful detachment, however, does not apply to software engineers, developers or programmers.
AI enters the developer zone
Software engineers need to keep this cautionary note in mind because AI-enriched programming controls are springing up fast.
Although so-called ‘code completion’ tools have been around for some time, AI and machine learning (ML) are now automating many of the low-level tasks that developers would previously have shouldered, quite happily, as part of the ‘grunt work’ of building any piece of enterprise software.
So what factors do we need to keep in mind?
At a positive level, we know that software programmers can now use AI to ingest and process the telemetry and log file data that applications and data services create as they run. This allows engineers to predict the root causes of failure proactively and propose workarounds to both systems managers and end users.
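Before any ML model enters the picture, this kind of root-cause work starts with aggregating error signatures out of raw log lines. A minimal sketch of that first step, using invented log lines and a hypothetical `likely_root_causes` helper (the log format, threshold and function names are all illustrative assumptions, not any vendor's API):

```python
from collections import Counter

# Hypothetical sample of application log lines (illustrative only)
log_lines = [
    "2024-05-01T10:00:01 INFO  request served in 12ms",
    "2024-05-01T10:00:02 ERROR db connection timeout",
    "2024-05-01T10:00:03 INFO  request served in 9ms",
    "2024-05-01T10:00:04 ERROR db connection timeout",
    "2024-05-01T10:00:05 ERROR db connection timeout",
]

def error_signature_counts(lines):
    """Count how often each ERROR message appears, ignoring timestamps."""
    counts = Counter()
    for line in lines:
        parts = line.split(maxsplit=2)  # timestamp, level, message
        if len(parts) == 3 and parts[1] == "ERROR":
            counts[parts[2]] += 1
    return counts

def likely_root_causes(lines, threshold=2):
    """Flag error signatures that repeat at least `threshold` times."""
    return [msg for msg, n in error_signature_counts(lines).items() if n >= threshold]

print(likely_root_causes(log_lines))  # ['db connection timeout']
```

A production system would feed these aggregated signatures into a trained model rather than a fixed threshold, but the shape of the pipeline, from raw telemetry to ranked candidate causes, is the same.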
Similarly beneficial is the chance to use AI coding layers to look for ‘signals in the noise’ created by applications. Aggregating this data lets the software engineering team automate common workflows and build new software functionality extensions more quickly and, hopefully, more robustly, because the patterns being reused have already been validated elsewhere.
This subject will inevitably throw up discussion surrounding Generative Pre-trained Transformer 3 (GPT-3). This autoregressive language model, which predicts each new token from the tokens that came before it, uses a form of ML trained on huge volumes of data to produce human-like text and sentences, and it can generate software code as well as prose.
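The word ‘autoregressive’ simply means the model generates one token at a time, with each choice conditioned on what it has already produced. A toy sketch of the idea, using a hand-written bigram table in place of a trained network (the vocabulary and probabilities here are invented for illustration; GPT-3 does the same thing over sub-word tokens at vastly larger scale):

```python
import random

# A toy autoregressive 'language model': given the previous word, it defines
# a probability distribution over the next word.
bigram_model = {
    "the":  [("code", 0.6), ("data", 0.4)],
    "code": [("works", 0.7), ("fails", 0.3)],
    "data": [("flows", 1.0)],
}

def generate(start, steps, rng):
    """Sample one word at a time, each choice conditioned on the last word."""
    words = [start]
    for _ in range(steps):
        options = bigram_model.get(words[-1])
        if not options:  # no known continuation: stop generating
            break
        tokens, weights = zip(*options)
        words.append(rng.choices(tokens, weights=weights, k=1)[0])
    return " ".join(words)

print(generate("the", 3, random.Random(42)))
```

The randomness in the sampling step is why two runs with different seeds can produce different, equally plausible continuations, which is both the charm and the risk of generative text.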
Not everyone is sold on the advantages of GPT-3; there is some distrust around us humans handing the power of the written word over to computers. Did somebody say fake news? Perhaps.
“The application of AI to process automation, whatever the process may be, is usually phased as a matter of caution but also of effectiveness,” said Riccardo Bocci, director of product management at cloud ERP company IFS.
"Rather than an enabler for ‘complete automation at all costs’, AI should be seen as augmentation of expert skills and knowledge. Expert knowledge is what really supports the development of any valuable application of AI and I don’t see it being entirely substituted any time soon."
Bocci insists that a ground-level understanding of how AI and ML techniques work is fundamental to making the right choices when looking into any ready-made toolbox. “I would add that for the successful AI practitioner, this level of technical understanding is not enough, but it needs to be coupled with the ability to see the real business problem at stake,” he added.
The payoff for prudence
If the software industry uses AI advancements in the coding toolset effectively, then development teams can be more solidly assured that they’re using the right tool for the job in any given use case. There’s a virtuous circle here: using the right tools to create the right elements of an enterprise software codebase means that the data, logic and business models in which that software lives will perform better.
“As a data scientist, I could rewrite every algorithm I use from scratch, but this is an extremely inefficient use of time and skills when I can easily use a model by accessing a Python library via an Application Programming Interface (API)," said Adam Lieberman, head of AI and ML at financial services software company Finastra.
"Does this mean I will eventually lose the skills I need to understand and build algorithms? Of course not. If developers want models to write code for them, they must understand what they’re asking the model to do."
It is perhaps a comforting thought to remind ourselves that every airline pilot worth their stripes can disengage the autopilot controls and stop flying by wire when needed.
Perhaps we need to apply that same grading methodology to enterprise software applications (especially mission-critical ones) and be able to stop and ask the programming team: do you know the DNA behind the algorithms driving these functions, or have you just given us the Lego version, quickly snapped together from building blocks?
If your developer team doesn’t know that Lego comes from the first two letters of the Danish words LEG GODT, meaning ‘play well’, then there’s your problem right there.