Amazon Web Services (AWS) has released a new service, Amazon Bedrock, that provides multiple foundation models companies can use to customise and build their own generative AI applications, including programs for general commercial use.
Amazon Bedrock provides users with foundation models from AI21 Labs, Anthropic, Stability AI, and Amazon, accessible via an API.
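Bedrock was in private preview at the time of the announcement, so the exact interface was not public; the sketch below shows roughly what a Titan text request could look like, assuming the boto3 `bedrock-runtime` interface and Titan request fields AWS later documented. The model ID, prompt, and field names should be treated as illustrative.

```python
import json

def build_titan_request(prompt, max_tokens=256):
    # Request body layout assumed from the Titan text API AWS later
    # documented; treat these field names as illustrative.
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {"maxTokenCount": max_tokens},
    })

body = build_titan_request("Summarise this support ticket: ...")

# With AWS credentials configured, the call itself would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(
#       modelId="amazon.titan-text-express-v1", body=body)
```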
The service, announced Thursday and now in private preview, comes just a day after Databricks announced its own open-source large language model (LLM), Dolly 2.0. Both follow a similar strategy: helping enterprises sidestep the constraints of closed-loop models, such as ChatGPT, that stop them from building their own customised generative AI applications.
Closed-loop, trained foundation models bar enterprises from creating any form of generative AI that competes with the original.
This constraint has accelerated research into open-source models and other alternatives for generative AI that can be put to commercial use, as enterprises demand customisable models for targeted use cases.
Amazon's foundation models for AI
The Amazon foundation models available via the new service include Amazon’s Titan FMs, which consist of two new LLMs — AI models trained on vast amounts of text to generate human-like responses — that were previewed on Thursday and are expected to become generally available in the coming months.
The first Titan foundation model, according to the company, is a generative LLM for tasks such as summarisation, text generation, classification, open-ended Q&A, and information extraction.
The second is an LLM that translates text inputs (words, phrases or possibly large units of text) into numerical representations (known as embeddings) that contain the semantic meaning of the text, the company said.
“While this LLM will not generate text, it is useful for applications like personalisation and search because by comparing embeddings the model will produce more relevant and contextual responses than word matching,” Swami Sivasubramanian, vice president of data and machine learning at AWS, wrote in a blog post.
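The idea of "comparing embeddings" that Sivasubramanian describes can be sketched in a few lines. The toy four-dimensional vectors and document labels below are hypothetical, not Titan outputs; a real embedding model returns vectors with hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: close to 1.0 means the texts point in the
    # same semantic direction; near 0 means they are unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings for a search query and two candidate documents.
query = [0.9, 0.1, 0.0, 0.2]          # "How do I return an item?"
doc_refund = [0.8, 0.2, 0.1, 0.3]     # refund policy page
doc_shipping = [0.1, 0.9, 0.7, 0.0]   # package-tracking page

# Rank documents by semantic closeness to the query rather than by
# matching literal words -- the refund page wins even though it
# shares no keywords with the query.
scores = {
    "refund_policy": cosine_similarity(query, doc_refund),
    "shipping_info": cosine_similarity(query, doc_shipping),
}
best = max(scores, key=scores.get)
```

This is why embedding search gives more contextual results than word matching: relevance is measured in the model's semantic space, not by shared tokens.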
These foundation models, AWS said, have been tuned to detect and remove harmful content in the enterprise data provided for customisation, and can also filter harmful content from model outputs.
Amazon Bedrock, according to Sivasubramanian, lets customers find the right model for a given use case, customise it with their own data sets, and then integrate and deploy it into their applications using AWS tools.
“Customers simply point Bedrock at a few labelled examples in Amazon S3, and the service can fine-tune the model for a particular task without having to annotate large volumes of data (as few as 20 examples is enough),” Sivasubramanian said.
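The "few labelled examples" would live in S3 as a small training file. A plausible sketch, assuming a JSON-lines prompt/completion layout; the field names, labels, and bucket path are illustrative, not confirmed Bedrock specifics:

```python
import json

# A handful of labelled examples for a classification fine-tune;
# per Sivasubramanian, as few as 20 can be enough.
examples = [
    {"prompt": "Classify the ticket: 'My package never arrived.'",
     "completion": "shipping"},
    {"prompt": "Classify the ticket: 'I was charged twice.'",
     "completion": "billing"},
    {"prompt": "Classify the ticket: 'The app crashes on login.'",
     "completion": "technical"},
]

# One JSON object per line; the file would then be uploaded to an
# S3 bucket (e.g. s3://<your-bucket>/train.jsonl) for Bedrock to
# read when fine-tuning the chosen model.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```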
Training AI models without using enterprise data
“None of the customer’s data is used to train the underlying models, and since all data is encrypted and does not leave a customer’s Virtual Private Cloud (VPC), customers can trust that their data will remain private and confidential,” Sivasubramanian added.
In addition to the Titan foundation models, Bedrock includes the Jurassic-2 family of multilingual LLMs from AI21 Labs, which follow natural language instructions to generate text in Spanish, French, German, Portuguese, Italian, and Dutch.
Anthropic’s LLM, Claude, also included in Bedrock, can perform a wide variety of conversational and text processing tasks and is based on Anthropic’s extensive research into training honest and responsible AI systems, AWS said.
In addition, Bedrock includes a text-to-image foundation model from Stability AI.
AWS also announced the general availability of Amazon CodeWhisperer, an AI-based code generator that competes with Microsoft-owned GitHub’s Copilot, which recently added AI features to aid developers.
In addition, AWS announced the general availability of Inf2 cloud instances powered by AWS Inferentia2 chips. These instances, according to the company, are “optimised specifically for large-scale generative AI applications with models containing hundreds of billions of parameters.”