Cisco CIO sees AI embedded in every product and process

After little more than a year on the job, Cisco CIO Fletcher Previn can already see that AI will create productivity and efficiency gains well worth the money spent on developing domain-specific models to address internal and external business plans.

Less than a year after OpenAI's ChatGPT was released to the public, Cisco Systems is already well into the process of embedding generative artificial intelligence (genAI) into its entire product portfolio and internal backend systems.

The plan is to use it in virtually every corner of the business, from automating network functions and monitoring security to creating new software products.

But Cisco's CIO, Fletcher Previn, is also dealing with a scarcity of IT talent to create and tweak large language model (LLM) platforms for domain-specific AI applications. As a result, IT workers are learning as they go, while discovering new places and ways the ever-evolving technology can create value.

Previn took over as CIO at Cisco in April 2022. Prior to that, he worked at IBM for 15 years — the last four as its CIO. So, Previn is familiar with the competitive landscape and he's aware that every genAI model his company creates is low-hanging fruit for industrial espionage. At the same time, he's concerned about securing proprietary AI technology that costs millions of dollars to create and understands that genAI can sometimes take on a mind of its own. Keeping a human in the loop is always important.

Previn spoke to Computerworld about Cisco's internal AI efforts. The following are excerpts from that interview.

How is Cisco using generative AI and what are your challenges with it? "It’s an exciting time. It’s an especially interesting time to be in IT where now 10 or 11 months after ChatGPT entered the scene, it continues to amaze and terrify in some cases.

"We think of it in ...three categories: how are we going to bring AI to bear for ourselves, for our products, and for our customers?

"In terms of how we’re using it for ourselves, there’s a lot in that. I’m about one year into the job now and spend a lot of time thinking about IT as a culture change and how we bring technology as a force multiplier to our workforce; AI helps in that way.

"If you think about networking — the core business of Cisco — you have this firehose of data and information and it’s the ability to identify things in a timely fashion, make sense of it, and take action based on it where AI excels.

"So, if you think about network monitoring, you can use AI algorithms to analyse huge amounts of data in real time to detect anomalies, detect performance issues, or predict problems. The whole idea of predictive maintenance, using AI to detect when you're going to have a network failure or performance problem and then take preventive action, is huge; then the ability to automate routine network management tasks like configuration management, device provisioning, policy enforcement, reducing manual things in general...."
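The anomaly-detection idea Previn describes can be illustrated with a minimal sketch: flag any telemetry sample that deviates sharply from its recent baseline. This is a simplified stand-in for the statistical and ML methods a real network-monitoring product would use; the function name and threshold are illustrative, not a Cisco API.

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=10, threshold=3.0):
    """Flag indices whose value deviates more than `threshold`
    standard deviations from the trailing window's mean."""
    anomalies = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and abs(samples[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Steady ~20 ms latency with one spike at index 15
latency_ms = [20, 21, 19, 20, 22, 20, 19, 21, 20, 20,
              21, 19, 20, 21, 20, 95, 20, 21]
print(detect_anomalies(latency_ms))  # → [15]
```

A production system would add trend and seasonality handling, but the core pattern — learn a baseline from recent data, alert on deviation — is the same.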

What keeps you up at night in terms of AI? "I think we want to make sure we have a human in the loop at all times. ...I think part of the reason machine learning was slow and difficult is because you had to take a set of data, curate it, and then train the machine learning against that data — and the answer to the question you were asking had to exist in that data. That's not the case if you can reason over data. Then you can start to approach human reading and writing comprehension to answer questions to which there was no previous answer.

"But we need human beings involved to ensure those answers are correct and compatible with our values and business models; hence the need for our responsible and ethical business policies."

How many products have you created so far that have genAI embedded in them? "It's ironic. I'm currently putting together a paper for Cisco's board and it's already 10 pages long. The answer is we will embed AI into the entire portfolio of Cisco products, and we are already well underway in that.

"It's across the entire portfolio. It would be a shorter answer to ask which products are not using AI. It's in every product, and very quickly people are running towards what AI can bring to bear, whether in the collaboration space, the security space, or the networking, routing, and switching space. You can intuitively see how this is helpful for the security portfolio — simplifying things, increasing speed, automating tasks, and understanding what's happening across complex digital estates in real time.

"Those are challenging tasks and AI is a perfect solution to bring to bear on things like ThousandEyes and our Umbrella and Duo SASE [secure access service edge] SD-WAN. How’s the traffic moving? What decisions are being made in how that traffic is getting routed? What anomalies are popping up? Where are problems coming up? Where does something malicious appear to be? If I make changes here, will it propagate to all other places so I don’t need to log into a bunch of other tools to have that outcome I want? That’s the effort currently underway.

"It’s very quickly working into every part of the Cisco portfolio."

What about security? Have you found AI useful for securing networks? "Security is a huge opportunity for us to leverage AI in an interesting way through threat detection and analysing network patterns, identifying and highlighting abnormal behaviour and detecting security threats in real-time.

"Using AI to optimise the flow of traffic and dynamically adjust traffic paths is of high value for an IT organisation like mine — reducing latency and improving performance. AI-driven network management can integrate these IT systems and network orchestration systems to create seamless, unified experiences that are augmented with intelligence from AI.

"Then bringing some of our products to bear on that as well, whether it’s the ThousandEyes platform, Cisco Secure Network Analytics, Catalyst Smart Center Alerts, Nexus Dashboards. You bring all this telemetry together, analyse it, make sense of it, take action on it in real time and automate some of those actions going forward. That’s the Holy Grail of AI-driven network management."

How has AI produced efficiencies for employees? "...If you think about how much inefficiency is in any large organisation — just with the things we need to do to perform our jobs — what we can bring to bear there [is] automating certain tasks, quickly surfacing high-value information, summarising things.

"There’s a lot of AI working its way into the Webex platform. As an example, we’re using AI to summarise a meeting, highlight important moments in a meeting, and analyse body language — and not just words and written language. This happened in the meeting; this person got up and walked away; if you missed a meeting you can have AI send you a short summary of what happened in the meeting and what decisions were made.

"You already have LLMs. Now, you have this idea of LMMs [large multimodal models], and Cisco is going in the direction of being able to understand body language and non-verbal cues and summarise it and make sense of it.

"Being able to have a video meeting in a hybrid world is necessary, but not sufficient. It’s still in some ways less than the in-person experience. We’re right on the precipice of all this exciting innovation that’s going to solve this in a more meaningful way, where people aren’t disadvantaged by way of not being in the office together. ...That’s what we’re starting to see now with advances in AI.

"There are the obvious things with noise cancellation and virtual backgrounds, but now you're getting into summarising meetings: what are the decisions and action items, what is the non-verbal body language?"

How are you using AI for software development? "Then for software development, earlier on there was a feeling the first use cases of AI would be the more menial tasks. But it turns out one of the first broad use cases is software development, which is really interesting because the conventional wisdom was always that you cannot shorten the time it takes to develop software; there's no compression algorithm for software development.

"That's why there was so much focus in the past on testing and release automation. Now, it turns out you can use things like Copilot for GitHub and have AI sit on your shoulder and help you write code more efficiently. That's really interesting in the software development space.

"I think by the end of this year, something like 40% to 60% of all code being checked into GitHub will be augmented in some way by AI. And what impact does that have on your software development pipelines, and how do you properly, responsibly, and ethically document where AI has assisted you in the building of things?"

A concern has been that if you're producing code via AI, certain errors, biases, or even malware can be introduced. Do you see a danger with so much of future code development being augmented with AI? "I don't know that those two things are true. You can have a lot of code that's being checked into GitHub that's augmented by AI without it being uncontrolled, runaway optimisation. So, things like having two human beings review code before it gets published [or] having a requirement to comment and tag any code generated by AI — there are things you can do to be responsible with AI-generated code and software development, and those are the things we're doing."

What about the fact that generative AI has been caught stealing intellectual property for training large language models? One of the edicts of President Biden’s executive order is for a system for watermarking AI-created content. Have you run into this? How do you deal with it? "That’s probably more of an immediate issue for these large language models that are indexing the entire internet. It creates a lot of interesting questions about which artist gets compensated for the use of their intellectual property. The original source? The person who used AI to create something new from the original source? I don’t think the answers to those questions are clear yet, so in some ways, it’s uncharted territory. At what point does something become an original creation, and at what point is it a reuse of someone else’s art?

"I think it’s something we’re going to have to work through as a society. When people take a mashup of a song, even that’s not always clear in the courts. If you sample something from someone else’s song and make a new song out of that, how much of that new song needs to be the theme for it to be considered theft versus something new? It'll probably end up being a similar situation here with AI.

"The large language models are very good for mastery of language, summarising things, and writing things well; it's a form of AI that can look at the words you've written and predict what the next word will be. That's less useful when you want to have industry or company knowledge brought to bear on something.

"For example, if I want to have an AI-assisted network engineer that knows everything about Cisco’s products, including the helpdesk articles and technical support documents, product schematics, and internal things like that, then you’re going to want to create your own proprietary, smaller model for answering those niche, point-solution questions — which is why a lot of people are going to want to build their own AI clusters for the purpose of creating those models.

"That’s something we’re doing here in the IT department, building our own GPU-AI cluster using Cisco’s Ethernet fabric to connect it all, which in our view is the way to go. To build an AI cluster, you need two things: you need GPUs and you need low-latency, high-speed connectivity between those things. Our Ethernet project uses those two things."

How far along are you with your domain-specific LLMs? "You can’t train an LLM, but you can sort of have it interrogate pools of data and summarise it. So you can use it for things like ... optimisation in your intranet or helpdesk articles, where you can have an OpenAI model interrogate the articles you’ve written and come up with the likely questions a person would ask it, for which this would be the best article, and then optimise your search based on those results.

"So, for example, you have it look at a helpdesk article and say, 'These are the questions that I think this article likely answers,' and then tweak your search engine to say if someone asks these questions, this is the answer you should likely show. That’s a sort of easy, initial use case.

"There are also very technical things, like what is the Power over Ethernet budget of this Cisco Catalyst switch versus this other one, and connecting that to our own internal product, helpdesk, and customer service data to be able to interrogate it in natural language. Those things are already well underway, as is the building of our own cluster. We'll eventually make several clusters using our own Ethernet fabric, but using different GPUs and different server blueprints so we can put them out as reference blueprints so others can do the same."
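The search-optimisation use case Previn outlines — asking an LLM which questions each helpdesk article answers, then steering the search engine with those questions — can be sketched roughly as below. The LLM call is stubbed with canned output, and the article IDs and function names are hypothetical, not a Cisco system.

```python
def llm_generate_questions(article_id: str) -> list[str]:
    # Stand-in for an LLM prompt such as:
    #   "List the questions this helpdesk article likely answers."
    canned = {
        "kb-101": ["how do i reset my vpn password",
                   "vpn password expired what now"],
        "kb-202": ["how do i join a webex meeting from my phone"],
    }
    return canned[article_id]

def build_question_index(article_ids):
    # Map each LLM-generated question to the article that answers it.
    index = {}
    for aid in article_ids:
        for q in llm_generate_questions(aid):
            index[q] = aid
    return index

def answer(query, index):
    # Naive exact-match lookup; a real system would use embeddings
    # or fuzzy matching rather than string equality.
    return index.get(query.lower().rstrip("?"))

index = build_question_index(["kb-101", "kb-202"])
print(answer("How do I reset my VPN password?", index))  # → kb-101
```

The key design point is that the expensive LLM work happens offline at indexing time; query-time lookup stays cheap.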

It's very expensive to create and train an LLM. I’ve heard figures as high as $5 million. What have you found? "That’s the value. It’s part of the reason security is so important. The cost and the energy required to train a model are significant, but at the end of it, the output fits on a USB drive. So, you’ve created a perfect incentive for industrial espionage; ...these will become crown-jewel data for companies that need to be protected. If you’re an adversary, you’re going to think, ‘I’m not going to spend millions of dollars to build out my own cluster and train it and come up with a model that takes me six months. I’m just going to steal it.’

"It's not a huge amount of data. It's 50GB. Even from an energy footprint, when we used to build even a dense data centre, you'd put maybe 10kW into a rack, and at the high end for a really dense compute cluster, you'd go 30kW to 40kW. AI clusters need something like 100kW per rack. So, there is significant power and infrastructure required for building out these clusters."

What is Retrieval-Augmented Generation [RAG]? What's its importance? "It's a way to give an LLM access to your proprietary information, without needing to do refining and tuning, where that information doesn't become part of the public training set. That's the value. You can imagine a scenario where there's a topic search for a call centre. ...Can I get the benefit of an LLM for mastery of language, but marry that with what I know about these products that sometimes cause problems for people?"
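The RAG pattern Previn describes can be sketched minimally: retrieve the most relevant proprietary snippets at query time and inject them into the prompt, so the model uses them without ever being trained on them. Retrieval here is crude keyword overlap for brevity, and the document contents are invented examples; real systems use embedding similarity and a live LLM call where the stub prompt is returned.

```python
DOCS = [
    "The Catalyst 9300 supports up to 60W PoE per port.",
    "Webex meeting summaries are enabled in the admin console.",
]

def retrieve(query, docs, k=1):
    # Rank documents by keyword overlap with the query; production
    # systems would use embedding similarity instead.
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=overlap, reverse=True)[:k]

def build_prompt(query, docs):
    # Retrieved snippets are injected into the prompt at query time,
    # so proprietary data never enters the model's training set.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("What is the PoE budget per port?", DOCS))
```

This separation — a general-purpose LLM for language, a private retrieval layer for company knowledge — is exactly the "marry the two" value Previn points to.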

There’s a dearth of talent around AI model creation for training LLMs, prompt engineering, and general knowledge of AI. How are you addressing training for Cisco’s workforce? "Everyone is required to take our responsible AI training. It’s new, so you’re not going to find lots of people in the workforce that have all these skills that are only now being refined. So, you’re just going to have to grow a lot of this talent. And to do that, the best way to learn is through experience.

"...There’s high value in making these tools available to people in a safe way and providing an environment where people can experiment. That includes the physical as well as the software. We need to build these environments so that engineers can get their hands and feet in the data centres to understand how this technology really works. Then make those clusters and the LLMs that will run on them available to our product development teams, our software developers, and others in the company so they can get real-world experience with prompt engineering, but also software development and embedding AI features into the products. It is new, so there’s a learning curve for everyone.

"It's probably the rule of thirds where a third of people are going to struggle, a third will excel and a third will need training and help getting there."

So, how has Cisco addressed internal training? Have you created sandboxes and video classes? "Yes and yes. We’ve created sandboxes where people can experiment and learn on the job. I think the best way to learn something is by solving a difficult problem. ...Having a real problem that you’re trying to solve is a great forcing function to focus and narrow the effort in getting real experience.

"Solve a real problem and deliver a business outcome. We also work with product teams across Cisco — people who've had years of experience working with AI. Our collaboration products use a lot of AI to do things like background noise cancellation, determining who the active speaker in a room is, choosing the best camera angle in a conference room for a meeting, and summarising meetings.

"So, we learn from each other. Then there are certain jobs we always need more of: data, design, user experience, AI, and enterprise architecture. Those disciplines are always in high demand, and now more than ever as everybody moves toward experience-led IT, experience-led products, and having a singular user experience throughout all our products.

"AI has been around for a while. Why did ChatGPT catch on so quickly? It’s because they created a very simple interface for it instead of an API. You can go to a website and chat with this thing."
