Opaque Systems releases new data security, privacy-preserving features for LLMs

Broader support for confidential AI use cases provides safeguards for machine learning and AI models to execute on encrypted data inside trusted execution environments.

Opaque Systems has announced new features in its confidential computing platform to protect the confidentiality of organisational data during large language model (LLM) use.

Through new privacy-preserving generative AI and zero-trust data clean rooms (DCRs) optimised for Microsoft Azure confidential computing, Opaque said it also now enables organisations to securely analyse their combined confidential data without sharing or revealing the underlying raw data.
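
Conceptually, a zero-trust data clean room computes over the combined datasets inside a trusted boundary and releases only pre-approved results. The minimal Python sketch below illustrates that policy; all names are hypothetical and not Opaque's API, and the real platform enforces this inside hardware enclaves rather than in ordinary code:

```python
# Minimal data-clean-room sketch (hypothetical, not Opaque's API):
# each party contributes rows, but only an approved aggregate leaves
# the trusted boundary -- raw rows are never returned.
from statistics import mean

def clean_room_join(party_a_rows, party_b_rows, metric="avg_spend"):
    """Combine both parties' data and release only the approved aggregate."""
    combined = party_a_rows + party_b_rows          # happens inside the TEE
    if metric != "avg_spend":
        raise PermissionError("metric not in the agreed policy")
    return mean(row["spend"] for row in combined)   # aggregate only

# Each party sees just the result, never the other side's raw records.
result = clean_room_join(
    [{"customer": "a1", "spend": 120.0}],
    [{"customer": "b7", "spend": 80.0}],
)
print(f"combined average spend: {result:.2f}")
```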

Meanwhile, broader support for confidential AI use cases provides safeguards for machine learning and AI models to use encrypted data inside trusted execution environments (TEEs), preventing exposure to unauthorised parties, according to Opaque.
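
The general TEE pattern is that data stays encrypted everywhere outside the enclave and is decrypted only within it. Below is a minimal sketch of that flow using the open-source cryptography package's Fernet cipher as an illustrative stand-in; Opaque's actual key provisioning and attestation are not shown, and all names here are hypothetical:

```python
# Sketch of the encrypt-outside / decrypt-inside-the-TEE pattern.
# Fernet is an illustrative stand-in; a real deployment would provision
# keys to the enclave only after remote attestation (mocked here).
from cryptography.fernet import Fernet

enclave_key = Fernet.generate_key()  # key known only to the enclave

def client_encrypt(record: bytes) -> bytes:
    """Data owner encrypts before anything leaves their environment."""
    return Fernet(enclave_key).encrypt(record)

def enclave_inference(ciphertext: bytes) -> str:
    """Inside the TEE: decrypt, run the model, return only the output."""
    plaintext = Fernet(enclave_key).decrypt(ciphertext)
    return f"model output for {len(plaintext)} plaintext bytes"

token = client_encrypt(b"confidential sales figures: ...")
print(enclave_inference(token))  # the host/provider only ever sees `token`
```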

LLM use can expose businesses to significant security, privacy risks

The potential risks of sharing sensitive business information with generative AI algorithms are well-documented, as are vulnerabilities known to impact LLM applications.

While some LLMs, such as ChatGPT, are trained on public data, their usefulness can skyrocket if they are trained on an organisation’s confidential data without risk of exposure, according to Opaque.

However, if an LLM provider has visibility into the queries sent by its users, access to highly sensitive queries – such as proprietary code – becomes a significant security and privacy issue, as the risk of hacking increases dramatically, Jay Harel, VP of product at Opaque Systems, tells CSO.

Protecting the confidentiality of sensitive data like personally identifiable information (PII) or internal data, such as sales figures, is critical for enabling the expanded use of LLMs in an enterprise setting, he adds.

“Organisations want to fine-tune their models on company data, but in order to do so, they must either give the LLM provider access to their data or allow the provider to deploy the proprietary model within the customer organisation,” Harel says.

“Additionally, when training AI models, the training data is retained regardless of how confidential or sensitive it is. If the host system’s security is compromised, it may lead to the data leaking or landing in the wrong hands.”

Opaque platform leverages multiple layers of protection for sensitive data

By running LLM models within Opaque’s confidential computing platform, customers can ensure that their queries and data remain private and protected – never exposed to the model or service provider, never used in unauthorised ways, and accessible only to authorised parties, Opaque claimed.

“The Opaque platform utilises privacy-preserving technologies to secure LLMs, leveraging multiple layers of protection for sensitive data against potential cyber-attacks and data breaches through a powerful combination of secure hardware enclaves and cryptographic fortification,” Harel says.

For example, the solution allows generative AI models to run inference inside confidential virtual machines (CVMs), he adds. “This enables the creation of secure chatbots that allow organisations to meet regulatory compliance requirements.”
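
Before trusting such a chatbot, a client would typically verify the CVM's attestation evidence and only then send a prompt. A simplified sketch of that gate follows; the measurement value, transport, and function names are hypothetical, not Opaque's interface:

```python
# Simplified attestation gate for a "secure chatbot" client: refuse to
# send a prompt unless the CVM's reported measurement matches the
# expected value. Values and transport here are hypothetical.
import hashlib
import hmac

EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-cvm-image-v1").hexdigest()

def send_prompt(prompt: str, reported_measurement: str) -> str:
    # Constant-time comparison of the attestation measurement.
    if not hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT):
        raise RuntimeError("attestation failed: refusing to send prompt")
    return f"sent {len(prompt)} chars to the attested CVM"

print(send_prompt("What were Q3 sales?", EXPECTED_MEASUREMENT))
```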

