AWS unveils new Aurora, IoT TwinMaker and Glue offerings

AWS Summit Keynote: Sivasubramanian Unveils New Offerings

Amazon Web Services today launched a slew of offerings around its Aurora Serverless, IoT TwinMaker, Amplify Studio, and AWS Glue cloud solutions at its AWS Summit event in San Francisco.

Swami Sivasubramanian, vice president of data, analytics and machine learning services at AWS, unveiled the new offerings during his keynote.

AWS’ Strategy of Hiring Builders and Inventors

“We hire builders who are always thinking about inventing on behalf of customers, looking for ways to simplify, and they don’t buy into the idea that we have to build things a certain way just because that’s how things have always been done,” Sivasubramanian said onstage during his AWS Summit keynote.

New product launches on Thursday revolved around Amazon Aurora, AWS’ cloud-based relational database service; AWS IoT TwinMaker, which creates digital twins of real-world systems along with management capabilities; Amplify Studio, for rapid creation and delivery of web and mobile applications; and AWS Glue, for serverless data integration.

In his speech, Sivasubramanian extolled how the benefits of AWS services are convincing businesses of all shapes and sizes to migrate to the cloud.

“The benefits of the cloud are so great that virtually all workloads and applications are running in the cloud today or moving to the cloud as quickly as possible. And analytics and big data workloads are no exception,” said Sivasubramanian. “The ability to analyze data at massive scale, quickly and at low cost, is unmatched on AWS.”

Sivasubramanian first joined Amazon in 2006, rising from senior engineer to general manager of AWS’s NoSQL and analytics business within a few years. He became one of AWS’s top leaders for machine learning in 2017, before taking on his current role managing AWS database, analytics, and machine learning services.

Here’s what Sivasubramanian said on stage during his AWS Summit keynote about the launch of five new offerings.

Aurora Serverless v2: On-Demand Autoscaling for Aurora PostgreSQL and MySQL

Given the popularity of relational databases, it’s no surprise that customers use Amazon Aurora heavily. Since launching in 2014, Aurora has grown by leaps and bounds, increasing 10x in the past four years alone, and it continues to be the fastest-growing AWS service.

Today, as customers use Aurora more and more, their applications grow in complexity and size. As they experience this growth, they have asked us for ways to make scaling simpler and easier.

That’s why we’re excited to announce today the general availability of our Amazon Aurora Serverless v2.

Aurora Serverless v2 is the result of extensive conversations with customers, particularly about scaling behavior: “How could we provide a better relational database with a serverless operating model?”

Some of these improvements relate to how we scale in smaller increments. For example, we want to be able to scale to hundreds of thousands of transactions per second in single-digit seconds, not tens of seconds. And we wanted fine-grained scaling increments that deliver just the right amount of database resources as needed.

Aurora Serverless v2 also includes popular features such as global database read replicas and Multi-AZ [Availability Zone] support. We wanted customers of all sizes to experience the benefits of serverless, and with Aurora Serverless v2, we can now support businesses with hundreds, if not thousands, of associated applications and databases.
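The fine-grained scaling described above can be sketched in code. This is a minimal illustration, assuming the 0.5-ACU (Aurora Capacity Unit) increment Serverless v2 uses; the dict mirrors the shape of the `ServerlessV2ScalingConfiguration` parameter accepted by boto3’s `rds.create_db_cluster`, and the helper function is purely illustrative:

```python
# Sketch (not from the keynote): Aurora Serverless v2 scales within a
# capacity range expressed in Aurora Capacity Units (ACUs), in fine-grained
# 0.5-ACU increments. A dict shaped like this is what you would pass as the
# ServerlessV2ScalingConfiguration parameter of boto3's rds.create_db_cluster.
scaling_config = {
    "MinCapacity": 0.5,   # smallest footprint while the database is quiet
    "MaxCapacity": 16.0,  # ceiling Aurora may scale up to under load
}

def acu_steps(cfg, step=0.5):
    """Enumerate the capacity increments Aurora can scale through (illustrative)."""
    n = int((cfg["MaxCapacity"] - cfg["MinCapacity"]) / step) + 1
    return [cfg["MinCapacity"] + i * step for i in range(n)]

steps = acu_steps(scaling_config)
print(len(steps), steps[:3])  # 32 [0.5, 1.0, 1.5]
```

The point of the sketch: instead of jumping between a few fixed instance sizes, the database moves through many small capacity steps, so provisioned resources track demand closely.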


AWS IoT TwinMaker: Create digital twins of real systems

We’re excited to announce the general availability of AWS IoT TwinMaker, our service that makes it easier for developers to create digital twins of real-world systems such as buildings, factories, industrial equipment, and production lines.

TwinMaker allows you to use your existing IoT, video, and enterprise application data where it already resides without the need to ingest or move the data to another location.

[For example,] by importing your 3D models directly into the service, you can create immersive 3D visualizations of your real environment that display real-time data, video and information. TwinMaker also automatically builds a knowledge graph that catalogs the entities and relationships in your environment.

TwinMaker also provides a plugin for Grafana, so you can create a unified view of all relevant data to monitor operations more efficiently. This unified view lets your operators detect, diagnose and respond to problems quickly as they arise, reducing the total cost of equipment failure. We are really looking forward to seeing how TwinMaker helps improve field operations for many of our industrial customers.
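The knowledge graph mentioned above is the core data structure of a digital twin: entities (buildings, equipment) plus typed relationships between them. Here is a toy in-memory sketch of that idea; all names are illustrative and this is not TwinMaker’s actual API:

```python
# Toy sketch of the kind of knowledge graph a digital-twin service maintains:
# entities with properties, plus typed relationships between them.
# Illustrative only -- not AWS IoT TwinMaker's real API.
from collections import defaultdict

class TwinGraph:
    def __init__(self):
        self.entities = {}                  # entity_id -> property dict
        self.relations = defaultdict(list)  # entity_id -> [(relation, target_id)]

    def add_entity(self, entity_id, **props):
        self.entities[entity_id] = props

    def relate(self, source, relation, target):
        self.relations[source].append((relation, target))

    def children(self, source, relation):
        """All targets reachable from `source` via a given relation type."""
        return [t for r, t in self.relations[source] if r == relation]

graph = TwinGraph()
graph.add_entity("factory-1", type="building")
graph.add_entity("mixer-7", type="equipment", telemetry_stream="mixer7/rpm")
graph.relate("factory-1", "contains", "mixer-7")
print(graph.children("factory-1", "contains"))  # ['mixer-7']
```

With the graph cataloged this way, a dashboard (such as the Grafana plugin) can resolve which live telemetry streams belong to which piece of equipment in which space.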


AWS Amplify Studio: Simplifies front-end and back-end development for web and mobile apps

When I talk to front-end web developers and ask them what challenges they face, a few come up again and again. First, they are always looking for ways to add common back-end functionality to their apps, but they don’t have the subject-matter expertise to build it from scratch. Second, they want to speed up their collaboration with UX designers and move from designs to working functionality even faster. They need tools that leverage their existing skills to implement front-end UI designs and build full-featured applications much faster.

To help developers with this task, I’m really excited to announce the general availability of AWS Amplify Studio.

Developers can now use AWS Amplify Studio to set up a scalable backend in hours instead of weeks. As well, [they can] visually create feature-rich user interfaces and connect them to that backend within the studio, with no cloud or AWS expertise required.

Amplify Studio connects app design, development, and product work with the same tools customers love. All UI components are fully customizable in Figma, the popular [collaborative interface] design and prototyping tool, giving designers full control over the visual styling of components. And developers can do it all for free on AWS.


AWS Glue Autoscaling: Scale Workers Based on Workload

Unifying data is not just about storing that data, but also about unifying access. For that, we have AWS Glue, our fully serverless data integration service that lets you discover, prepare, and integrate all your data. When we launched AWS Glue in 2017, we had a vision to make it faster, easier and more cost-effective for companies to scale and modernize ETL [extract, transform and load] pipelines, and to do it all in one place. But when it comes to ETL and data integration, it can be difficult to predict how much capacity a data integration job will actually need.

There are constant fluctuations in data volume and demand over time. Data engineers must constantly monitor and experiment to balance the performance they need against the cost they can actually afford. It is difficult, time-consuming engineering work with an inherent trade-off: either you under-provision your infrastructure, which reduces performance, or you over-provision, which increases costs.

We saw this as heavy undifferentiated work and wanted to relieve customers of this burden. That’s why we’ve automated this process.

We are excited to announce the general availability of AWS Glue Autoscaling.

While Glue is already serverless, choosing the number of workers needed to process an ETL pipeline has been very difficult for data engineers. Now, Autoscaling in AWS Glue dynamically scales the number of workers up and down based on your data pipeline’s needs. This makes it even easier to right-size the infrastructure for your ETL job, so you pay for exactly the resources you need and nothing more.

Glue adds and removes resources based on how the workload can be distributed. You no longer have to worry about over-provisioning or under-provisioning resources, which frees up valuable time for data engineers and reduces compute costs.
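In practice, Glue autoscaling is enabled per job. This sketch assumes Glue 3.0 or later, where autoscaling is turned on via the `--enable-auto-scaling` job argument and `NumberOfWorkers` then acts as an upper bound rather than a fixed fleet size; the job name, role ARN, and script location below are placeholders. The dict mirrors the kwargs you would pass to boto3’s `glue.create_job`:

```python
# Sketch (assumption: AWS Glue 3.0+). Autoscaling is enabled per job with the
# --enable-auto-scaling argument; NumberOfWorkers becomes the maximum the job
# may scale up to. Name, Role, and ScriptLocation are placeholders.
job_config = {
    "Name": "orders-etl",                                   # hypothetical job
    "Role": "arn:aws:iam::123456789012:role/GlueJobRole",   # placeholder ARN
    "GlueVersion": "3.0",
    "WorkerType": "G.1X",
    "NumberOfWorkers": 20,  # upper bound, not a fixed fleet size
    "Command": {
        "Name": "glueetl",
        "ScriptLocation": "s3://my-bucket/etl.py",          # placeholder path
    },
    "DefaultArguments": {"--enable-auto-scaling": "true"},
}

print(job_config["NumberOfWorkers"])  # 20
```

The design point: the engineer declares only a ceiling, and the service decides moment to moment how many of those workers the current stage of the pipeline can actually use.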


AWS Glue sensitive data detection

A common conversation I have with customers is about how they can become more efficient at handling sensitive data. When I say sensitive data, I mean things like tax ID numbers, names, and medical information. However, finding sensitive data in data lakes is like finding a needle in an ever-growing haystack.

Data volumes are constantly growing. Sensitive data can show up anywhere. And finding that sensitive information is genuinely hard: you have to build techniques to detect every type of sensitive data, both in your pipelines and in your data lake.

To address this challenge, I’m really excited to announce the general availability of Sensitive Data Detection in AWS Glue.

This feature can help identify sensitive data while it’s still in your data pipeline so you can even stop that data from landing in your data lake.

It can also identify the locations of sensitive data already present in your database so that you can detect, delete, replace or report this data. The best part is that it works at scale and is completely serverless. Sensitive data detection has never been easier or more cost-effective.
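To make the idea concrete, here is a toy illustration of the kind of scan Glue automates. This is emphatically not Glue’s API, just a small pattern-matching sketch for two common sensitive-data types, checked against records before they land in a data lake:

```python
# Toy illustration of sensitive-data detection -- NOT AWS Glue's actual API.
# A real service recognizes many more entity types with far more robust
# detectors; this just shows the shape of the problem.
import re

DETECTORS = {
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL":  re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def find_sensitive(record: str):
    """Return the sorted entity types detected in a record."""
    return sorted(name for name, rx in DETECTORS.items() if rx.search(record))

print(find_sensitive("contact: jane@example.com, ssn 123-45-6789"))
# ['EMAIL', 'US_SSN']
```

Running detectors like these inside the pipeline, rather than after the fact, is what lets a job quarantine, redact or report a record before it ever reaches the lake.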
