When quantum computing moves from the theoretical world into the applied space, it threatens to upend the accepted modus operandi of much of the technology industry – something Hubert Yoshida, CTO of Hitachi Vantara, is keenly aware of.
Search giant Google made a surprise announcement last month that it had reached quantum supremacy, raising serious questions about how organisations can manage and secure data in the future. Nowhere is this more important than in the domain of cryptography.
Where once it could take hundreds of years to crack encryption methods with traditional computing, quantum computing techniques could lower that to just seconds.
"We have to keep one step ahead and find different ways of doing encryption in the face of new technologies," Yoshida told Computerworld, speaking during the Hitachi Next conference at the MGM Grand in Las Vegas.
"We don't know all the new things that are going to happen. But, we do see certain things like quantum computers, 5G, the next generation of computers and communications – so we're trying to anticipate that work."
To prepare, Hitachi is dedicating resources at Vantara – the data specialist vendor that combined Pentaho, Hitachi Data Systems and Hitachi Insight Group in September 2017 – to somewhat future-proofed technologies such as lattice-based encryption. (For a technical rundown, see the Wickr crypto blog.)
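Lattice-based schemes rest on problems such as learning with errors (LWE), which are believed to be hard even for quantum computers. The toy sketch below encrypts a single bit with LWE-style noise to illustrate the idea only – the parameters are far too small to be secure, and this is an illustrative assumption, not any scheme Hitachi has described:

```python
import random

# Toy LWE-style encryption of one bit. Illustrative only -- insecure.
q = 257   # modulus
n = 8     # secret-vector dimension
secret = [random.randrange(q) for _ in range(n)]

def encrypt(bit):
    a = [random.randrange(q) for _ in range(n)]       # public random vector
    e = random.randrange(-4, 5)                       # small noise term
    # Hide the bit in the high-order "half" of the modulus.
    b = (sum(x * s for x, s in zip(a, secret)) + e + bit * (q // 2)) % q
    return a, b

def decrypt(a, b):
    inner = sum(x * s for x, s in zip(a, secret)) % q
    diff = (b - inner) % q
    # diff is (noise) for bit 0, or (q/2 + noise) for bit 1.
    return 1 if q // 4 < diff < 3 * q // 4 else 0
```

The security intuition is that recovering `secret` from many `(a, b)` pairs requires solving a noisy linear system over a lattice, which is what makes the approach a candidate for post-quantum encryption.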
"We have to retain data for 20 years – that's a long time, in which new technologies can emerge and change a lot of what we do for protection, like encryption ... it's going to change the way we do things," he said, adding that the Japanese business principle of kaizen – keeping human beings at the centre of decision-making – could help put a "sanity check" on the technologies that emerge.
In the context of Neven's law – the recently posited counterpart to Moore's law, which suggests quantum computing power is growing at a doubly exponential rate – Yoshida said: "Not very long ago, we were looking at two qubits. Now we're looking at 50 qubits, and we're going to do a million qubits in the not very far off future.
"Of course, with more qubits, you process things much faster: with all that capability, what is going to happen? It's a struggle: you wonder what's going to happen next?"
At the same time, public trust in technology multinationals is at its lowest ebb. In much of the public's eye, big tech has done little to assuage fears that people's data will be mishandled, and, of course, there is only going to be more of that data generated in future.
Businesses are starting to apply machine learning and artificial intelligence techniques to pull ever more patterns and insights from this noise. Can enterprises win back any of this trust – and how?
Education might go some way towards helping. Yoshida said that much of the public simply does not understand the lengths to which technology companies go to make data and results more trustworthy.
"They don't know the process we go through to do an analysis, the exploration, the data models," he said. "For instance, when we do an AI project, we don't use one model, we use two, three, or even four to cross-check the results that come out of that, which is good practice.
"Then we retain everything in an immutable form so that we can always go back and show the regulators, whoever it is, where we got that information and how we developed it."
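The cross-checking practice Yoshida describes could be sketched as a simple agreement check across model outputs – a hypothetical illustration of the idea, not Hitachi's actual pipeline:

```python
from collections import Counter

def cross_check(predictions):
    """Return the majority label and whether every model agreed."""
    counts = Counter(predictions)
    label, votes = counts.most_common(1)[0]
    return label, votes == len(predictions)

# Three of four hypothetical models produce the same label; one dissents.
label, unanimous = cross_check(["fraud", "fraud", "fraud", "legit"])
# label == "fraud", unanimous == False -- a cue to review before acting
```

Disagreement between models does not say which one is wrong, but it flags results that deserve human scrutiny before they are acted on – the "sanity check" role Yoshida alludes to.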
The overwhelming theme of the Hitachi conference was highlighting how emerging technologies can be used for good – see this interview with Hitachi fellow Dr Kazuo Yano on our sister title Techworld for more on that front – but it did not shy away from where things can go wrong in the world of AI when models are let loose, either.
Academic Zeynep Tufekci spoke in her keynote about how YouTube's human-free recommendation engine has a tendency towards sending viewers down extremist rabbit holes, all for the sake of boosting engagement and time spent on the platform.
Hitachi Vantara, for its part, runs stringent explainable AI (XAI) programmes before data is handed over to its data science teams. The firm retains all the data artefacts it uses so it can closely examine how each model was trained and what data went into it.
It then anonymises or pseudonymises the data to comply with regulations such as the EU's GDPR, and makes the data durable – for instance, by hashing it when it is generated so the hashes can be matched at a later date to verify it hasn't been tampered with.
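That hash-at-generation technique can be sketched in a few lines – a minimal illustration using SHA-256, with a made-up record format, rather than Hitachi's implementation:

```python
import hashlib
import json

def fingerprint(record):
    # Serialise deterministically, then hash at generation time.
    blob = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

record = {"sensor": "A1", "reading": 42}
digest = fingerprint(record)          # stored separately, e.g. immutably

# Later: recompute and compare the digest to detect tampering.
assert fingerprint(record) == digest  # untouched record matches
record["reading"] = 43
assert fingerprint(record) != digest  # any change breaks the match
```

Storing the digests in an immutable (e.g. append-only) store is what lets an organisation show a regulator, after the fact, that the data behind an analysis is unchanged.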
"There are some technology things we can check and we can investigate," said Yoshida. "Now, that's assuming that the person doing that is not entering their own biases – biases can happen from the data scientists' view, from the data that we train the models with.
"The models themselves can cheat or push people to the extreme ... you can influence what they see next," he added, referring to that YouTube auto-play feature. "So there's a lot of things that we could do to try to avoid those situations."
But what about that crucial question on public trust?
"My fear is that people will hear all these bad stories, they just won't trust anybody, or, you know, sometimes they have no choice," Yoshida said.
However, regulators in Europe are doing a "very good" job of upholding privacy. "[Europe] stepped in with privacy laws before anybody else was even thinking about that," Yoshida said.
"And the FSA is very good. But, you know, it's getting so fast: regulators are so slow to develop, how do we manage when things are moving so quickly?"