While the hype around generative AI is palpable, it has so far had little effect on enterprise IT spending.
That is one of the main takeaways from Gartner’s quarterly IT spending forecast, which finds that worldwide IT spending is projected to total $4.7 trillion in 2023, an increase of 4.3 per cent from 2022.
Data centre systems spending is down slightly year-over-year, but most other categories of enterprise spending will increase, including software, IT services and communications services, according to the research firm.
“IT projects are shifting from a focus on external facing deliverables such as revenue and customer experience, to more inward facing efforts focused on optimisation,” said John-David Lovelock, distinguished VP analyst at Gartner, in a statement.
“The software segment will see double-digit growth in 2023 as organisations increase utilisation and reallocate spending to core applications and platforms that support efficiency gains, such as enterprise resource planning (ERP) and customer relationship management (CRM) applications,” Lovelock stated. “Vendor price increases will also continue to bolster software spending through this year.”
As for generative AI, Lovelock said the topic is “top of mind for many business and IT leaders, [but] it is not yet significantly impacting IT spending levels.”
“Generative AI’s best channel to market is through the software, hardware and services that organisations are already using,” Lovelock stated. “Every year, new features are added to tech products and services as add-ons or upgrades. Most enterprises will incorporate generative AI in a slow and controlled manner through upgrades to tools that are already built into IT budgets.”
When it comes to AI this year, organisations can thrive without having AI in production, but they cannot be without a story and a strategy, Lovelock stated.
Meanwhile, other Gartner surveys have found the fever around generative AI and ChatGPT will ultimately result in increased IT spending.
In a Gartner poll of more than 2,500 executive leaders conducted in May, 45 per cent of executives said the publicity around ChatGPT had prompted them to increase AI investments. Seventy per cent of executives said their organisation is in investigation and exploration mode with generative AI, while 19 per cent are in pilot or production mode.
“The generative AI frenzy shows no signs of abating,” said Frances Karamouzis, distinguished VP analyst at Gartner, in a statement. “Organisations are scrambling to determine how much cash to pour into generative AI solutions, which products are worth the investment, when to get started and how to mitigate the risks that come with this emerging technology.”
That same poll found that 68 per cent of executives believe the benefits of generative AI outweigh the risks, compared with just 5 per cent that feel the risks outweigh the benefits.
“Initial enthusiasm for a new technology can give way to more rigorous analysis of risks and implementation challenges,” Karamouzis stated. “Organisations will likely encounter a host of trust, risk, security, privacy and ethical questions as they start to develop and deploy generative AI.”
Another survey, this one published by MIT Technology Review Insights and sponsored by enterprise data management company Databricks, polled 600 senior data and technology executives. It predicted that nearly every industry will find a use for generative AI in the near future. Retailers could use the technology to schedule the delivery and installation of heavier goods, manufacturers could use it as a virtual "co-pilot" for service and repair technicians, and media outlets could use it to write articles and headlines.
Furthermore, there is now a growing expectation that the advent of generative AI will improve existing business AI use cases. Chatbots for customer and employee support, for example, are likely to be improved by wider uptake of generative AI, as are business transformation efforts such as unifying data stores, according to a story in CIO about the survey.
Vendors focus on generative AI
Multiple networking vendors have been rolling out generative AI technology plans in recent weeks.
For example, Juniper Networks said it is looking to simplify the control of enterprise networks by expanding the AI-driven conversational interface of its cloud-based Mist management system and adding a new security access control service.
Juniper is integrating the ChatGPT AI-based large language model (LLM) with Mist’s virtual network assistant, Marvis. Marvis can detect and describe myriad network problems, including persistently failing wired or wireless clients, bad cables, access-point coverage holes, problematic WAN links, and insufficient radio-frequency capacity.
By adding ChatGPT capabilities, Juniper is expanding the role of Marvis and augmenting its documentation and support options to help IT administrators quickly get the necessary assistance with problems or challenges, the vendor stated.
Cisco recently stated it was looking to meld the network and security intelligence it has amassed over the years with the large language models (LLMs) of generative AI to simplify enterprise operations and address threats with practical, effective techniques.
The first fruits of this effort will be directed at the Cisco Security Cloud, the overarching, integrated security platform that includes software such as Duo access control and Umbrella security, as well as firewalls and Talos threat intelligence, all delivered via the cloud.
Cisco said that an AI-based policy engine, Policy Assistant, will be available by the end of the year in Security Cloud. The idea is that the assistant will use natural language interfaces to simplify policy management for admins.
Generative AI will also be part of yet-to-be-disclosed features in the Cisco Networking Cloud, unveiled at the vendor's Cisco Live! event in June. The Networking Cloud will involve a broad range of software and cloud-system integration, with the ultimate goal of converging networking platforms into a unified management platform over time.
IBM, too, has been spreading the AI gospel.
Most recently, IBM said it would integrate AI with its mainframes. The newest z16 Big Iron boasts an AI accelerator built into its core Telum processor that can perform 300 billion deep-learning inferences per day with one-millisecond latency, according to IBM.
The latest version of its z/OS operating system will include a new AI Framework for system operations to optimise IT processes, simplify management, improve performance and reduce skill requirements. The new version will also support technologies to deploy AI workloads co-located with z/OS applications and will feature improved cloud capabilities.
IBM began rolling out the Watsonx system to help organisations accelerate and scale AI. Watsonx includes three components: the Watsonx.ai studio for new foundation models, generative AI and machine learning; the Watsonx.data fit-for-purpose data store; and the Watsonx.governance toolkit to help enable AI workflows to be built with responsibility, transparency and explainability, IBM stated.
Watsonx allows clients and partners to specialise and deploy models for various enterprise use cases or build their own, according to Big Blue.