Here’s what’s new at AWS: a massive re:Invent wrap-up

All the major products, services and offerings revealed during Amazon Web Services’ annual re:Invent event in Las Vegas
Andy Jassy - CEO, Amazon Web Services

Amazon Web Services (AWS) is no slouch when it comes to announcing new updates, products and services, with most days seeing at least a few. But as the company settles into the rhythm of its latest re:Invent event in Las Vegas, which kicked off on 2 December and wraps up on 6 December, it has shifted its announcements into overdrive.

Here is a wrap-up of some of the more notable announcements made by the cloud services giant so far during this year’s re:Invent event: 

General availability of AWS Outposts

Starting 3 December 2019, AWS Outposts can be installed and operated in the United States of America, all EU countries, Switzerland, Norway, Australia, Japan, and South Korea.

With the general availability of AWS Outposts, Amazon is touting the offering as a new fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any customer data centre, co-location space, or on-premises facility for a consistent hybrid experience.

“AWS Outposts is ideal for workloads that need low latency access to on-premises applications or systems, local data processing, or for local data storage needs,” the company said.

Users can now run Amazon EC2, Amazon EBS, container-based services such as Amazon ECS and Amazon EKS, database services such as Amazon RDS, and analytics services such as Amazon EMR locally on Outposts.

To order an Outpost, users can log in to the AWS Management Console and select from a range of pre-validated Outposts configurations offering a mix of Amazon EC2 and Amazon EBS capacity that best suits their application needs and site characteristics.

Amazon RDS on Outposts is available in preview

Amazon Relational Database Service (Amazon RDS) on Outposts is now available in Preview, allowing users to deploy fully managed Amazon RDS database instances in their on-premises environments.

According to AWS, the Outposts offering brings native AWS services, infrastructure, and operating models to virtually any data centre, co-location space, or on-premises facility. 

Users can deploy Amazon RDS on Outposts to set up, operate, and scale relational databases on premises, just as they would in the cloud, the company said.

Amazon Braket

Amazon Braket is a fully managed service that makes it easy for scientists, researchers, and developers to build, test, and run quantum computing algorithms. 

As reported by ARN, the new offering is designed to help IT professionals get started learning about quantum computing by providing a development environment to build quantum algorithms, test them on simulated quantum computers, and run them on their choice of different quantum hardware technologies.

“Amazon Braket lets you design your own quantum algorithms from scratch or choose from a set of pre-built algorithms. Once you define your algorithm, Amazon Braket provides a fully managed simulation service to help troubleshoot and verify your implementation,” Amazon said. 

Amazon Detective 

Amazon Detective is a new service, currently in preview, that makes it easy to analyse, investigate, and quickly identify the root cause of potential security issues or suspicious activities.

“Amazon Detective automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data that enables you to easily conduct faster and more efficient security investigations,” Amazon said.

The company said that Amazon Detective can analyse trillions of events from multiple data sources such as Virtual Private Cloud (VPC) Flow Logs, AWS CloudTrail, and Amazon GuardDuty, and automatically creates a unified, interactive view of resources, users, and the interactions between them over time. 

During the preview, it is available in US-East (N. Virginia), US-East (Ohio), US-West (Oregon), EU (Ireland), and Asia Pacific (Tokyo).

Amazon Transcribe Medical - medical speech recognition

A new offering, Amazon Transcribe Medical, is now available. According to Amazon, it is a new speech recognition capability of Amazon Transcribe, designed to convert clinician and patient speech to text. 

“Amazon Transcribe Medical makes it easy for developers to integrate medical transcription into applications that help physicians do clinical documentation efficiently,” the company said. “It can automatically and accurately transcribe physicians’ dictations, as well as their conversations with patients, into text. Moreover, the service enables automatic punctuation and capitalisation, allowing physicians to speak naturally when transcribing voice notes.”

At launch, Amazon Transcribe Medical is Health Insurance Portability and Accountability Act (HIPAA) eligible -- in the United States -- and offers an easy-to-use API that can integrate with voice-enabled applications and any device with a microphone. 

Output transcripts will support word-level time stamps and confidence scores. Users can call the API to open a secure connection over the WebSocket protocol and start passing a stream of audio to the service. In return, users receive a stream of text in real time. The raw text can then be sent into downstream text analytics services such as Amazon Comprehend Medical to extract valuable medical insights.
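For the batch side of the service, the job request can be sketched with a few parameters. The helper below is an illustrative sketch only, assuming boto3's start_medical_transcription_job call shape; the job, bucket and file names are placeholders, and the streaming WebSocket flow additionally requires a signed URL, which is omitted here.

```python
# Hedged sketch: assemble the parameters for a Transcribe Medical batch job.
# The real call would be boto3's transcribe.start_medical_transcription_job(**params);
# all names here are placeholders.
def build_medical_transcription_params(job_name, audio_uri, output_bucket,
                                       job_type="DICTATION"):
    if job_type not in ("DICTATION", "CONVERSATION"):
        raise ValueError("job_type must be DICTATION or CONVERSATION")
    return {
        "MedicalTranscriptionJobName": job_name,
        "LanguageCode": "en-US",          # US English only at launch
        "Specialty": "PRIMARYCARE",
        "Type": job_type,                 # dictation vs clinician-patient conversation
        "Media": {"MediaFileUri": audio_uri},
        "OutputBucketName": output_bucket,
    }

params = build_medical_transcription_params(
    "visit-notes-001",
    "s3://example-bucket/dictation.wav",
    "example-transcripts",
)
print(params["Type"])  # DICTATION
```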


AWS Compute Optimizer

Also freshly announced is AWS Compute Optimizer, a new machine learning-based recommendation service that makes it easy for users to ensure that they’re using optimal AWS Compute resources.  

“Over-provisioning resources can lead to unnecessary infrastructure cost, and under-provisioning can lead to poor application performance,” Amazon said. “AWS Compute Optimizer delivers intuitive and easily actionable Amazon EC2 instance recommendations so that you can identify optimal Amazon EC2 instance types, including those that are part of Auto Scaling groups, for your workloads, without requiring specialized knowledge or investing substantial time and money."

Compute Optimizer delivers EC2 instance type and size recommendations for standalone EC2 instances of the M, C, R, T, and X instance families, the company said. It also delivers recommendations for Auto Scaling groups with a fixed group size, where all member instances are of the same instance type and size. It is now available in five AWS regions at no additional charge.


Amazon S3 Access Points

Amazon S3 Access Points is a new S3 feature that simplifies managing data access at scale for shared data sets on Amazon S3. With S3 Access Points, users can create hundreds of access points per bucket, each with a name and permissions customised for the application. 

According to AWS, this capability represents a new way of provisioning access to shared data sets. Whether creating an access point for data ingestion, transformation, restricted read access, or unrestricted access, using S3 Access Points simplifies the work of creating and maintaining access to shared S3 buckets.

Moreover, S3 Access Point policies can enforce permissions by prefix and object tag, limiting the object data that can be accessed. Any S3 Access Point can also be restricted to a Virtual Private Cloud (VPC) to firewall S3 data access within a user's private networks.
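As an illustration of how an access point is addressed and created, the sketch below composes an access point ARN and the parameters for a VPC-restricted access point; the account, bucket and VPC identifiers are placeholders, and the actual creation would go through the S3 Control API (boto3's s3control client).

```python
# Hedged sketch: compose an S3 Access Point ARN and the create-access-point
# parameters for a VPC-restricted access point. All identifiers are placeholders.
def access_point_arn(region, account_id, name):
    return f"arn:aws:s3:{region}:{account_id}:accesspoint/{name}"

def vpc_access_point_params(account_id, name, bucket, vpc_id):
    return {
        "AccountId": account_id,
        "Name": name,
        "Bucket": bucket,
        # Restricting the access point to a VPC firewalls all requests
        # to the user's private network.
        "VpcConfiguration": {"VpcId": vpc_id},
    }

arn = access_point_arn("us-east-1", "123456789012", "ingest-only")
print(arn)
```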

Amazon Augmented AI

Amazon Augmented AI (Amazon A2I) makes it easy for users to build the workflows required for human review of ML predictions. A2I is designed to bring human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers.  

Broadly, Amazon A2I provides built-in human review workflows for common machine learning use cases, such as content moderation and text extraction from documents, which allows predictions from Amazon Rekognition and Amazon Textract to be reviewed easily. 

Users can also create their own workflows for ML models built on Amazon SageMaker or any other tools. Using A2I, users can allow human reviewers to step in when a model is unable to make a high-confidence prediction, or to audit its predictions on an ongoing basis.

Accelerated Site-to-Site VPN for Improved VPN Performance

Amazon has announced the availability of Accelerated Site-to-Site VPN, which uses AWS Global Accelerator to improve the performance of VPN connections by intelligently routing traffic through the AWS Global Network and AWS edge locations.

“Previously, VPN connections might face inconsistent performance as traffic traverses multiple public networks to reach a VPN endpoint in AWS. Public networks, such as the public internet, can be congested. Each hop between and within public networks can introduce performance risks,” the company said.

Now, when creating an AWS Site-to-Site VPN connection to an AWS Transit Gateway, users can enable Acceleration to take advantage of the performance improvements of the AWS global network.
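The acceleration flag is set when the VPN connection is created. Below is a minimal sketch of the request parameters, assuming the EC2 CreateVpnConnection call shape, with placeholder gateway IDs.

```python
# Hedged sketch: parameters for an accelerated Site-to-Site VPN connection.
# EnableAcceleration is set at creation time and targets a Transit Gateway;
# the real call would be boto3's ec2.create_vpn_connection(**params).
def accelerated_vpn_params(customer_gateway_id, transit_gateway_id):
    return {
        "Type": "ipsec.1",
        "CustomerGatewayId": customer_gateway_id,
        "TransitGatewayId": transit_gateway_id,   # acceleration requires a TGW target
        "Options": {"EnableAcceleration": True},
    }

params = accelerated_vpn_params("cgw-0abc123", "tgw-0def456")
print(params["Options"])  # {'EnableAcceleration': True}
```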

Amazon Redshift data lake export, in Apache Parquet format

With this new functionality, users can now unload the result of an Amazon Redshift query to their Amazon S3 data lake as Apache Parquet, an efficient open columnar storage format for analytics, according to Amazon.

“The Parquet format is up to 2x faster to unload and consumes up to 6x less storage in Amazon S3, compared to text formats,” the company said. “This enables you to save data transformation and enrichment you have done in Amazon Redshift into your Amazon S3 data lake in an open format. 

“You can then analyze your data with Redshift Spectrum and other AWS services such as Amazon Athena, Amazon EMR, and Amazon SageMaker.”
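The export itself boils down to a single UNLOAD statement with the new FORMAT AS PARQUET clause. The helper below is a sketch that assembles such a statement; the bucket, IAM role and table names are placeholders.

```python
# Hedged sketch: build a Redshift UNLOAD statement that exports a query
# result to S3 as Parquet, optionally partitioned by columns.
def unload_to_parquet(query, s3_prefix, iam_role, partition_cols=None):
    stmt = (
        f"UNLOAD ('{query}') "
        f"TO '{s3_prefix}' "
        f"IAM_ROLE '{iam_role}' "
        f"FORMAT AS PARQUET"
    )
    if partition_cols:
        stmt += f" PARTITION BY ({', '.join(partition_cols)})"
    return stmt

sql = unload_to_parquet(
    # Redshift doubles single quotes inside the UNLOAD query string.
    "SELECT * FROM sales WHERE sale_date >= ''2019-01-01''",
    "s3://example-datalake/sales/",
    "arn:aws:iam::123456789012:role/RedshiftUnload",
    partition_cols=["region"],
)
print(sql)
```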

AWS Transit Gateway now supports Inter-Region Peering

AWS Transit Gateway now supports the ability to establish peering connections between Transit Gateways in different AWS Regions. The service enables customers to connect thousands of Amazon Virtual Private Clouds (Amazon VPCs) and their on-premises networks using a single gateway. 

“With AWS Transit Gateway, customers only have to create and manage a single connection from a central regional gateway to each Amazon VPC, on premises data center, or remote office across their networks,” the company said.

Inter-region Transit Gateway peering is available in US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and EU (Frankfurt) AWS Regions. Support for other AWS Regions is coming soon.
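Establishing a peering connection is a two-step handshake: one side requests the attachment and the peer accepts it. Here is a hedged sketch of the request parameters, assuming the EC2 CreateTransitGatewayPeeringAttachment call shape, with placeholder IDs.

```python
# Hedged sketch: parameters for a Transit Gateway inter-region peering
# attachment; the attachment must then be accepted from the peer side.
# The real call would be boto3's ec2.create_transit_gateway_peering_attachment(**params).
def tgw_peering_params(local_tgw, peer_tgw, peer_account, peer_region):
    return {
        "TransitGatewayId": local_tgw,
        "PeerTransitGatewayId": peer_tgw,
        "PeerAccountId": peer_account,
        "PeerRegion": peer_region,   # e.g. one of the five launch regions
    }

params = tgw_peering_params("tgw-0aaa", "tgw-0bbb", "123456789012", "eu-west-1")
print(params["PeerRegion"])  # eu-west-1
```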

AWS DeepComposer

AWS has announced the preview of AWS DeepComposer, which it claims is the world’s first machine learning-enabled keyboard for developers. 

“Get hands-on, literally, with a musical keyboard and the latest machine learning techniques to compose your own music,” the company said. “With AWS DeepComposer keyboard, you can create a melody that will transform into a completely original song in seconds, all powered by AI. 

“AWS DeepComposer includes tutorials, sample code, and training data that can be used to get started building generative models, all without having to write a single line of code,” it said. 

According to Amazon, generative AI is one of the biggest advancements in artificial intelligence technology and, until now, developers interested in growing skills in this area haven’t had an easy way to get started. Developers, regardless of their background in ML or music, can now get started with Generative Adversarial Networks (GANs). 

“This Generative AI technique pits two different neural networks against each other to produce new and original digital works based on sample inputs. With AWS DeepComposer, you can train and optimise GAN models to create original music,” the company said.


Five new features and updated pricing for AWS IoT SiteWise

Also announced by AWS are five new features and updated pricing for AWS IoT SiteWise (preview). 

The updated pricing for AWS IoT SiteWise sees users charged for data ingest and egress from AWS IoT SiteWise based on the number of messages, instead of the amount of data ingested or scanned.

As for the updates, users can now collect data in AWS IoT SiteWise using MQTT or a REST API and store it in a time-series data store. 

Additionally, users can now create virtual representations, or models, of their industrial facilities which can span a hierarchy of hundreds of thousands of assets.

The third update sees users able to create transforms and compute metrics over their equipment data using a built-in library of mathematical and statistical operators.

Fourth, users can now publish a live data stream from within AWS IoT SiteWise that contains measurements and computed metrics linked to their equipment.

And for the fifth update, users can utilise the new SiteWise Monitor feature to create a fully-managed web application that provides enterprise users visibility into equipment data stored in AWS IoT SiteWise.

Amazon EC2 Inf1 Instances

Amazon EC2 Inf1 instances have reached general availability. Built from the ground up to support machine learning inference applications, the Inf1 instances feature up to 16 AWS Inferentia chips, high-performance machine learning inference chips designed and built by AWS. 

“In addition, we’ve coupled the Inferentia chips with the latest custom 2nd Gen Intel Xeon Scalable processors and up to 100 Gbps networking to enable high throughput inference,” Amazon said. 

“This powerful configuration enables Inf1 instances to deliver up to 3x higher throughput and up to 40 per cent lower cost per inference than Amazon EC2 G4 instances, which were already the lowest cost instance for machine learning inference available in the cloud.”

Amazon EC2 Inf1 instances offer high performance and the lowest cost machine learning inference in the cloud, the company claims.

With Inf1 instances, users can run large scale machine learning inference applications like image recognition, speech recognition, natural language processing, personalisation and fraud detection, at the lowest cost in the cloud.  

Amazon EC2 Inf1 instances come in four sizes and are currently available in the US East (N. Virginia) and US West (Oregon) AWS Regions as On-Demand, Reserved, and Spot Instances or as part of a Savings Plan.
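Launching one of the four sizes looks like any other EC2 launch request. The sketch below assumes the run_instances call shape; the size list reflects the launch lineup, and the AMI and subnet IDs are placeholders.

```python
# Hedged sketch: request parameters for launching an Inf1 instance.
# The four sizes cited at launch; the real call would be
# boto3's ec2.run_instances(**params) with real AMI and subnet IDs.
INF1_SIZES = {"inf1.xlarge", "inf1.2xlarge", "inf1.6xlarge", "inf1.24xlarge"}

def inf1_launch_params(size, ami_id, subnet_id):
    instance_type = f"inf1.{size}"
    if instance_type not in INF1_SIZES:
        raise ValueError(f"unknown Inf1 size: {size}")
    return {
        "InstanceType": instance_type,
        "ImageId": ami_id,
        "SubnetId": subnet_id,
        "MinCount": 1,
        "MaxCount": 1,
    }

params = inf1_launch_params("xlarge", "ami-0123456789abcdef0", "subnet-0abc")
print(params["InstanceType"])  # inf1.xlarge
```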

EC2 Image Builder

EC2 Image Builder, a service that makes it easier and faster to build and maintain secure images, is now available. Image Builder simplifies the creation, patching, testing, distribution, and sharing of Linux or Windows Server images, according to Amazon.

“Keeping server images up-to-date can be time consuming, resource intensive, and error-prone. Currently, customers either manually update and snapshot VMs or have teams that build automation scripts to maintain images,” the company said.  

Amazon claims that Image Builder significantly reduces the effort of keeping images up-to-date and secure by providing a simple graphical interface, built-in automation, and AWS-provided security settings. 

“With Image Builder, you can easily build your automated pipeline that customises, tests, and distributes your images in addition to keeping them secure and up-to-date,” it said.

Image Builder is available in all AWS regions and offered at no cost, other than the cost of the underlying AWS resources used to create, store, and share the images. 

Amazon Fraud Detector

The new Amazon Fraud Detector offers a fully managed service for detecting potential online identity and payment fraud in real time, based on the same technology used by Amazon’s consumer business.

According to Amazon, the new service uses historical data of both fraudulent and legitimate transactions to build, train, and deploy machine learning models that provide real-time, low-latency fraud risk predictions. 

“To get started, customers upload transaction data to Amazon Simple Storage Service (S3) to customize the model’s training. Customers only need to provide the email address and IP address associated with a transaction, and can optionally add other data (e.g. billing address, or phone number),” the company said. 

“Based upon the type of fraud customers want to predict (new account or online payment fraud), Amazon Fraud Detector will pre-process the data, select an algorithm, and train a model – using the decades of experience running fraud detection risk analysis at scale at Amazon,” it said. 

Arm-based instances powered by new AWS Graviton2 processors

Also announced are new Arm-based versions of Amazon EC2 M, R, and C instance families, powered by new AWS-designed Graviton2 processors, which the company claims deliver up to 40 per cent better price and performance than current x86 processor-based M5, R5, and C5 instances for a broad spectrum of workloads.

“These new Arm-based instances are powered by the AWS Nitro System, a collection of custom AWS hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage, to reduce customer spend and effort when using AWS,” Amazon said. 

“AWS Graviton2 processors introduce several new performance optimizations versus the first generation. AWS Graviton2 processors use 64-bit Arm Neoverse cores and custom silicon designed by AWS, built using advanced 7 nanometer manufacturing technology,” it said.

According to Amazon, AWS Graviton2 processors provide two times faster floating point performance per core for scientific and high performance computing workloads, optimised instructions for faster machine learning inference, and custom hardware acceleration for compression workloads.

Contact Lens for Amazon Connect

AWS Contact Lens is a set of capabilities for Amazon Connect enabled by machine learning, designed to give contact centres the ability to understand the sentiment, trends, and compliance of customer conversations to improve customer experience and identify crucial customer feedback.

“Amazon Connect is a fully managed cloud contact center service, based on the same technology that powers Amazon’s award-winning customer service,” Amazon said. “Companies like Intuit, GE Appliances, and Dow Jones use Amazon Connect to run their contact centers at lower cost, while easily scaling to thousands of agents. 

“With AWS Contact Lens, customer service supervisors can discover emerging themes and trends from customer conversations, conduct fast, full-text search on call and chat transcripts to troubleshoot customer issues, and improve customer service agents’ performance with call and chat-specific analytics – all from within the Amazon Connect console,” it said.

Coming mid-2020, Contact Lens will also provide the ability for supervisors to be alerted to issues during in-progress calls, giving them the ability to intervene earlier when a customer is having a poor experience.