Reminiscent of the recurring battle between open source and proprietary ecosystems, the AI Alliance includes about 50 members and is expected to help IBM, Meta and AMD challenge the largest players in generative AI.
Generative AI was the watchword at re:Invent 2023.
Amazon Q can do all the tasks that Copilot can and is expected to appeal to IT managers who want to limit the number of generative AI assistants in use at their enterprises.
The capabilities, currently in preview, will allow enterprises to run machine learning on shared data while collaborating with partners and maintaining data privacy and security.
AWS CEO Adam Selipsky and other top executives at the company’s annual re:Invent conference revealed updates and new products aimed at offerings from Microsoft, Oracle, Google, and IBM.
The new integrations include updates to relational and non-relational databases, such as Amazon Aurora PostgreSQL, Amazon DynamoDB, and Amazon RDS for MySQL.
The updates include new foundation models as well as vector capabilities for several databases.
The new pricing tier includes a custom blueprint feature that allows enterprises to encapsulate best practices for application code, workflows, and infrastructure.
The new program allows researchers to bypass wait queues while providing the option to connect with experts for guidance on quantum workloads.
New chips from Microsoft, along with custom chips from the likes of AWS, Google, and Oracle, are expected to provide stiff competition for chipmakers such as Nvidia, AMD, and Intel, though their impact will be limited initially, analysts said.
The largest US maker of chipmaking machinery said it is under investigation by the US Attorney’s Office for the District of Massachusetts.
Nvidia’s generative AI-based foundry services on Microsoft Azure are already being used by several companies, including SAP, Amdocs, and Getty Images.
The chipmaker could be trying to circumvent US restrictions that aim to prevent Chinese companies from stringing together lesser-powered GPUs to build compute capacity for AI-based workloads.
OpenAI is trying to position itself as a viable alternative to build-it-yourself, open-source development efforts by offering cheaper products and advanced capabilities.
Microsoft and Oracle have entered into a multiyear agreement to support inferencing of AI models that are being optimized to power Bing’s conversational searches.