In what is likely one of the longest [but most interesting] stories ever to appear in ChannelBuzz, AWS CEO Andy Jassy outlines the dozens of announcements made so far at Re:Invent.
LAS VEGAS – Transformation isn’t exactly a new theme at Re:Invent, AWS’s annual customer event going on here. CEO Andy Jassy, in his kickoff keynote, emphasized that they chose the event name back in 2012 because it reflected the pace at which both large and small companies were reinventing themselves with the AWS cloud as the linchpin. However, in introducing the surfeit of services AWS is announcing, as he does every year, Jassy tied the new announcements into the broader theme of what transformation requires today, and how AWS is addressing it. The new announcements covered what he referred to as all three layers of the stack: the services on top, SageMaker in the middle, and the foundational ML frameworks and infrastructure at the bottom.
“The theme for the keynote today is transformation,” Jassy declared, emphasizing that successful transformation isn’t about being technical, but about leadership. Companies that successfully transform have four key differentiators.
“They have figured out how to make their senior leadership aligned to make these changes,” he emphasized. “Change is easy to block if you don’t have senior management conviction and alignment.
“You also need an aggressive top-down goal that forces an organization to move faster than it otherwise would,” he stated. “It’s easy to go a long time dipping your toe in the water if you don’t have an aggressive goal.”
Next, Jassy emphasized that you have to train people to drive the change – train your builders.
Finally, he stressed the need to avoid being overwhelmed by the whole process.
“Don’t get paralyzed before you start if you haven’t figured out how to move every last workload,” he said.
Jassy provided his take on where the market stands in its transformation today, highlighting the key trends that he believes work to the advantage of AWS.
“When companies are making transformations in modernization, all bets are off,” he said. “They are moving from mainframes and old guard databases like Oracle and SQL Server.” Oracle remains everyone’s favorite whipping boy for old-timey expensive on-prem deployments, notwithstanding its own cloud initiatives in recent years. Throwing Microsoft into the fuddy-duddy category, despite CEO Satya Nadella’s largely successful redefinition of that company as with-it and cutting edge, came as something of a surprise, however.
“People are trying to get away from SQL Server quickly as well,” Jassy stressed. “For many years, you could Bring Your Own License with SQL Server – and they changed that. So customers are moving to open engines like MySQL, PostgreSQL and MariaDB. It’s hard to get commercial-grade performance from them, however. So we built Aurora in 2015, and it’s 1/10th of the cost of commercial databases. It remains the fastest growing of our services.”
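For a sense of what that move looks like in practice, here is a minimal sketch using Python and boto3 that provisions an Aurora cluster with the MySQL-compatible engine; the cluster name, credentials and instance class are illustrative placeholders, not details from the keynote.

```python
import boto3

# Hypothetical example: provision an Aurora MySQL-compatible cluster.
# All identifiers and credentials below are placeholders.
rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="reinvent-demo-aurora",
    Engine="aurora-mysql",               # MySQL-compatible edition of Aurora
    MasterUsername="admin",
    MasterUserPassword="choose-a-strong-password",
)

# Aurora separates storage from compute, so instances are added to the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="reinvent-demo-aurora-1",
    DBClusterIdentifier="reinvent-demo-aurora",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
)
```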
The other key areas of modernization Jassy highlighted were the move from Windows to Linux, and the vibrant partner ecosystem AWS has assembled – ISVs, SaaS providers, and systems integrators.
The result, he stressed, is that in a cloud world of massive opportunity, where 97% of spend today is still on-prem, AWS is the overwhelming cloud leader, at almost 48%. Microsoft is a distant second at 15.5%.
“We have 4x more instance types today compared to two years ago,” Jassy stated. “Because we reinvented the hypervisor with AWS Nitro, which gives us performance at a much lower cost compared to traditional servers, we can innovate at a much faster rate.”
Jassy emphasized that a big turning point for them was the 2015 acquisition of Israel-based chip designer Annapurna Labs. He used that as a segue into the new announcements – new EC2 instances powered by the ARM-based AWS Graviton2 processors.
“These have 4x the compute and 7x the memory compared to the first generation – and 40% better price performance than the latest x86 chips,” Jassy announced. The M6g instances for EC2 are available today, while the R6g and C6g instances will be available in early 2020.
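For developers wondering how they would pick up the new instances, the change is essentially just an instance type string. Here is a minimal boto3 sketch, with a placeholder arm64 AMI and key pair standing in for real resources.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a Graviton2-based M6g instance. The AMI must be built for the
# arm64 architecture; the IDs below are placeholders, not real resources.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder arm64 AMI
    InstanceType="m6g.large",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair
)

print(response["Instances"][0]["InstanceId"])
```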
Jassy also announced another instance launch, based on the Inferentia chips for high-performance machine learning inference that AWS first previewed last year.
“We are launching our Inf1 instances for EC2, backed by AWS Inferentia chips,” Jassy said. “These have 3x higher throughput and 40% lower cost than the current best inferencing chips, from NVIDIA.”
These are available now for EC2, and will be available for ECS, EKS and SageMaker in early 2020.
The next announcement, around containers, was an expansion of the Fargate serverless compute engine for containers with Amazon Fargate for Amazon EKS. It rounds out customers’ choices between Fargate, ECS and EKS by providing a managed, serverless option for running Kubernetes containers – something Jassy said customers had been asking for.
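Under the new model, customers map Kubernetes namespaces to Fargate through a profile on an existing EKS cluster. Here is a rough boto3 sketch of that call; the cluster name, IAM role and subnets are placeholder values.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Hypothetical example: run pods from a given namespace on Fargate instead of
# on managed worker nodes. Cluster name, role ARN and subnets are placeholders.
eks.create_fargate_profile(
    fargateProfileName="serverless-web",
    clusterName="demo-cluster",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/eks-fargate-pod-role",
    subnets=["subnet-0abc1234", "subnet-0def5678"],
    selectors=[{"namespace": "web"}],   # pods in this namespace go to Fargate
)
```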
Jassy then shifted his focus to the Amazon Redshift cloud data warehouse.
“When we launched Redshift in 2012, it changed the data warehousing space, and was our fastest growing service until Aurora was launched,” he said. “Customers gravitate to it because we continue to iterate at a fast clip.”
The newest iteration is AQUA [Advanced Query Accelerator] for Redshift. AQUA is a multi-tenant cache that sits on top of S3 and rethinks analytics: rather than moving data to the compute, it pushes compute down to the storage layer, processing data before it ever hits the CPU.
“It runs up to 10x faster than any other cloud data warehouse,” Jassy said, indicating that the secret sauce is a Nitro chip adapted to speed up that processing. “It makes processing so much faster that you can do compute on raw data without having to move it.”
Jassy highlighted this as a good example of building something new and then, instead of moving on to another project, relentlessly continuing to innovate around it. AQUA will be available in mid-2020.
Jassy also announced Redshift RA3 Instances with Managed Storage, the next-generation compute instances for Redshift, which let customers optimize their data warehouse by scaling and paying for compute and storage independently.
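In practice, adopting RA3 is mostly a matter of choosing the new node type when creating or resizing a cluster. The sketch below, using boto3, assumes placeholder names and credentials.

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Hypothetical example: create a Redshift cluster on RA3 nodes, which keep hot
# data on local SSD and spill the rest to managed storage backed by S3.
redshift.create_cluster(
    ClusterIdentifier="reinvent-demo-ra3",
    ClusterType="multi-node",
    NodeType="ra3.16xlarge",
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="choose-a-strong-password",
)
```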
Next up was UltraWarm, a fully managed, low-cost, warm storage tier for Amazon Elasticsearch Service, which is now available in preview.
“Most customers just store a few weeks of data on Elasticsearch because storing data is expensive at scale,” Jassy said. “UltraWarm is a new warm tier on steroids for the Elasticsearch Service. Existing warm tiers aren’t used a lot, because they are laggy and durability isn’t very good. This has much better durability, backed by S3. It will save 90% over today’s Elasticsearch storage costs to store the same amount of data.” Customers have up to 900TB of storage available.
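Assuming the warm tier is toggled through the domain’s cluster configuration, enabling it on an existing domain might look something like the boto3 sketch below; the domain name, warm node type and count are placeholders, and the exact configuration keys could vary by SDK version.

```python
import boto3

es = boto3.client("es", region_name="us-east-1")

# Hypothetical example: enable UltraWarm on an existing Amazon Elasticsearch
# Service domain. Domain name, warm node type and count are placeholders.
es.update_elasticsearch_domain_config(
    DomainName="demo-logs",
    ElasticsearchClusterConfig={
        "WarmEnabled": True,
        "WarmType": "ultrawarm1.medium.elasticsearch",
        "WarmCount": 2,
    },
)
```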
Jassy then highlighted AWS’s extensive roster of purpose-built databases, and announced a new addition.
“The day of using one database for everything has come and gone,” he said. “You won’t find this collection of purpose-built databases anywhere else,” he added, pointing to a slide enumerating all the AWS offerings. “Swiss Army knives are hardly ever the best solution for anything other than a simple task. You want the right purpose-built database for the job.”
One database had been missing from the AWS roster, however – Apache Cassandra.
“Cassandra is hard to manage at scale,” Jassy said. “The rollback features are clunky, so people operate on old versions. At scale, many companies move on from Cassandra.”
To give them the option of staying with Cassandra, Jassy introduced the new Amazon Managed Apache Cassandra Service, which is now in preview.
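Because the service is CQL-compatible, existing Cassandra tooling is meant to keep working. The sketch below uses the open-source cassandra-driver package; the service endpoint, port and credentials shown are assumptions for illustration, not published connection details.

```python
from ssl import SSLContext, PROTOCOL_TLSv1_2
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Hypothetical example using the open-source cassandra-driver package.
# The endpoint, port and credentials are illustrative assumptions; the point
# is simply that existing CQL clients keep working against the managed service.
ssl_context = SSLContext(PROTOCOL_TLSv1_2)
auth = PlainTextAuthProvider(username="service-user", password="service-password")

cluster = Cluster(
    ["cassandra.us-east-1.amazonaws.com"],  # assumed service endpoint
    port=9142,                              # assumed TLS port
    ssl_context=ssl_context,
    auth_provider=auth,
)
session = cluster.connect()

# Any standard CQL works from here; list the keyspaces as a quick check.
for row in session.execute("SELECT keyspace_name FROM system_schema.keyspaces"):
    print(row.keyspace_name)
```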
Jassy then made a flurry of announcements around AWS SageMaker, the cloud machine-learning platform AWS launched in November 2017 to help developers create and train machine-learning models.
“TensorFlow is the main framework for this, and 85% of TensorFlow in the cloud runs on AWS,” he said. “Most other cloud providers try to funnel everyone through TensorFlow. But 90% of data scientists use multiple frameworks because people invent algorithms in all frameworks. So we support all three, including PyTorch and MXNet. We will always give you all of the major tools to do your job. But it has to be more accessible. That’s why we built SageMaker and launched over 50 features just last year.”
Jassy then introduced SageMaker Studio, which is what he called the first fully integrated development environment for machine learning, a Web-based IDE for complete machine learning workflows. It has multiple components.
SageMaker Notebooks are one-click notebooks with elastic compute.
“You can spin them up with a click in a second,” Jassy said. “There is no instance to provision. It also automatically copies and transfers Notebook content to new instances.”
SageMaker Experiments, Jassy said, is a much, much easier way to find, search for and share experiments.
“It lets you organize and search every step of building and tuning models,” he said. “You can also search for older experiments by name.”
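At the API level, Experiments boils down to creating named experiments and attaching trials to them so runs can be searched later. A minimal boto3 sketch, with placeholder names:

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Hypothetical example: record training runs as trials inside a named
# experiment so they can be searched and compared later.
sm.create_experiment(
    ExperimentName="churn-model",
    Description="Trials for the customer churn classifier",
)
sm.create_trial(
    TrialName="churn-model-xgboost-run-1",
    ExperimentName="churn-model",
)
```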
After making Notebooks and Experiments easier, Jassy asked how training could be made easier. He then introduced SageMaker Debugger, which lets developers debug and profile their model training to improve accuracy.
“It has feature prioritization, which lets you know what drives the model, and lets you know what dimensions you leave out that cause bad predictions. It’s very useful to help you train, and understand what matters.”
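One common way to use Debugger is to attach built-in rules to a training job through the SageMaker Python SDK. The sketch below assumes current (v2-style) SDK parameters; the container image, IAM role and S3 paths are placeholders.

```python
from sagemaker.debugger import Rule, rule_configs
from sagemaker.estimator import Estimator

# Hypothetical example: attach built-in Debugger rules to a training job so
# issues like a stalled loss or overfitting are flagged during training.
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training:latest",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output",
    rules=[
        Rule.sagemaker(rule_configs.loss_not_decreasing()),
        Rule.sagemaker(rule_configs.overfit()),
    ],
)
estimator.fit("s3://my-bucket/training-data")
```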
Next up were SageMaker Model Monitor, a way to detect concept drift by monitoring models deployed to production, and SageMaker AutoPilot.
“SageMaker AutoPilot gives you AutoML with full visibility and control,” Jassy said. “It selects the right model by training up to 50 different models, then lets you inspect and configure all of them, so you can choose the one you want and deploy it with a single click.” You can choose, for example, between two models that are very similar in accuracy, but where one has lower latency.
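Kicking off an AutoPilot run is a single API call that points at training data in S3 and names the column to predict. A rough boto3 sketch, with placeholder bucket, role and column names:

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Hypothetical example: point AutoPilot at a CSV in S3 and name the target
# column; it then explores candidate models you can inspect and deploy.
sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot",
    InputDataConfig=[{
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://my-bucket/churn/train.csv",
            }
        },
        "TargetAttributeName": "churned",
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/churn/autopilot-output"},
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)
```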
“SageMaker Studio is a giant leap forward, which will make it easier to build ML models,” Jassy emphasized.
“Studio pulls together for the first time dozens of tools into a single pane of glass to put ML in the hands of even more developers and data scientists,” said Dr. Matt Wood, VP, Product Management, AI. “You don’t just train a single model. You train dozens, and pick the best one. For the first time, Studio pulls together tools developers know from traditional software like debuggers.”
After SageMaker – the middle layer of the stack – Jassy turned to the top layer: the services themselves.
“Our priority is not just working on cool tech or something that looks good in a press release,” he said. “We work on things that help you do your job better and change customer experience. What else can we build that will bring value to you?”
Amazon Fraud Detector is a new service that Jassy said fills a void in the market.
“We build a unique model for you, from our experience in our consumer business,” he said.
Amazon CodeGuru is a new service that seemed to impress many customers at the event.
“With code, you write it, have to review it, build and deploy it, measure it and improve it,” Jassy said. “But if there’s a problem with the code you write, the other steps won’t matter, and customers have a bad experience.”
CodeGuru does two things – it automates code reviews and identifies the most expensive lines of code.
“Just reviewing it for adherence to AWS best practices was a game changer in early trials,” Jassy noted.
The second part creates a profile that shows lines of code that can be improved.
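For the review side, the basic workflow is to associate a repository with CodeGuru Reviewer so that pull requests pick up automated comments. A minimal boto3 sketch, with a placeholder repository name:

```python
import boto3

reviewer = boto3.client("codeguru-reviewer", region_name="us-east-1")

# Hypothetical example: associate a CodeCommit repository with CodeGuru
# Reviewer so its pull requests receive automated review comments.
reviewer.associate_repository(
    Repository={"CodeCommit": {"Name": "my-service-repo"}}
)
```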
“We have used this for a couple years internally, and it has led to tens of millions of dollars in savings for us,” Jassy said. The improvements between 2017 and 2018 around their Prime Day increased CPU utilization by 325%, and resulted in 39% lower cost.
Contact Lens for Amazon Connect is a new service, now in preview, that responds to customer requests for call analytics that are easier to store, transcribe and search, that surface sentiment, and that alert on problems. All this could be done before by stringing together a series of AWS tools; this service does it in one step, including things like searching for periods of silence and overtalking. By mid-2020, transcriptions will also be available in real time.
Amazon Kendra is a new service now in preview designed to reinvent enterprise search with machine learning and natural language processing to make it as efficient as consumer searches.
“It will totally change the value of the data you get from enterprise search,” Jassy said.
Matt Wood reappeared on stage to provide more detail.
“Kendra doesn’t require teams to have any ML expertise,” he said. “You can set it up entirely through the AWS console. You configure all the data silos in your organization, and provide optional FAQs. It then syncs and indexes your data, not by using keywords, but through natural language understanding. You can test and refine your queries, and it generates the code for you.”
Wood showed how a query like ‘Where is the IT support desk,’ treated as keywords like ‘IT,’ ‘support’ and ‘desk,’ returned large numbers of worthless responses, while Kendra gave a precise answer to the question.
“Kendra gives a clear answer rather than spurious searches with low-value keyword matches,” Wood said.
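The query itself is a single API call against an existing index. A minimal boto3 sketch, with a placeholder index ID:

```python
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

# Hypothetical example: ask Kendra a natural-language question against an
# existing index. The index ID is a placeholder.
response = kendra.query(
    IndexId="11111111-2222-3333-4444-555555555555",
    QueryText="Where is the IT support desk?",
)

for result in response["ResultItems"]:
    print(result["Type"], result.get("DocumentTitle", {}).get("Text"))
```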
Jassy then announced the general availability of AWS Outposts, first unveiled at last year’s event, which runs AWS infrastructure on-prem for a truly consistent hybrid experience. It’s available in two variants: native AWS, and VMware Cloud on AWS for those who want to use the VMware control plane. The former is available now, the latter in early 2020.
Next up was a variant of Outposts – AWS Local Zones.
“You can build Local Zones in metro areas so you can have single-digit millisecond latency today,” he said. “This is for workloads in certain geographies where you don’t want to have data centres.” An example would be the gaming industry, where big rendering workloads are put into the cloud, but the work of individual artists at distributed locations typically is not. Local Zones give them the same advantages. One catch – only one Local Zone was announced, in LA, which happens to be a gaming industry hotbed.
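Local Zones are opt-in at the account level before instances can be launched into them. The boto3 sketch below shows that opt-in; the LA zone group name is illustrative, and describe_availability_zones can confirm the exact values in a given account.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Hypothetical example: Local Zones must be opted into before use. The group
# name below is illustrative of the Los Angeles zone group.
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1",
    OptInStatus="opted-in",
)

# List all zones, including Local Zones, to confirm the opt-in took effect.
zones = ec2.describe_availability_zones(AllAvailabilityZones=True)
for zone in zones["AvailabilityZones"]:
    print(zone["ZoneName"], zone["ZoneType"], zone["OptInStatus"])
```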
Finally, Jassy introduced AWS Wavelength, which embeds compute and storage at the edge of 5G networks, starting with Verizon, so that traffic makes far fewer hops.
“We think it dramatically changes what customers will be able to get done,” he said.
One major announcement, around quantum computing, was actually made the day before. Amazon Braket, now in preview, is a new, fully managed AWS service that enables scientists, researchers, and developers to begin experimenting with computers from quantum hardware providers (including D-Wave, IonQ, and Rigetti) in a single place. It was accompanied by announcements around a new AWS Center for Quantum Computing, and an Amazon Quantum Solutions Lab program to connect customers with quantum computing experts from Amazon.
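For the curious, the Braket SDK looks much like other circuit-building libraries. The sketch below, assuming the amazon-braket-sdk package, builds a two-qubit Bell-state circuit and submits it to a managed device; the device ARN and S3 results location are placeholders rather than confirmed preview devices.

```python
from braket.circuits import Circuit
from braket.aws import AwsDevice

# Hypothetical example: build a two-qubit Bell-state circuit and run it on a
# managed Braket device. The device ARN and S3 bucket are placeholders.
bell = Circuit().h(0).cnot(0, 1)

device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")
task = device.run(bell, ("my-braket-results-bucket", "bell"), shots=1000)

# Results come back as measurement counts over the two qubits.
print(task.result().measurement_counts)
```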
Does this mean that AWS quantum computing is right around the corner? Probably not.
“The point of Braket right now is development and test,” said Bill Vass, VP, Technology, Storage, Automation and Messaging. “I don’t want to overhype it. The goal of this service right now is to improve our quantum computing capabilities.”
Still, Vass said that it’s more than just an experiment.
“You can do real work on it. Will it be better than a classical computer right now? Probably not, although there are some things that it accelerates.”
“With Braket, maybe we will find that quantum computing can run machine learning – or maybe not,” said Dr. Matt Wood. “But it will be very exciting to find out.”