AWS News – December Round Up


With the endless flood of new products, features and changes from AWS and its surrounding ecosystem, it can be easy to miss an update. Our monthly round-up highlights major AWS news, announcements, product updates and behind-the-scenes changes we think are most relevant.

re:Invent is AWS’ annual exhaustive and exhausting product announcement bonanza. While features and services are released year-round, the best is always held for the week after Thanksgiving. Because the week has to cover the coming year, services are often announced months before they’re ready for preview and often as much as a year before they’re “generally available.” We’ll do our best to highlight when you’re likely to be able to take advantage of each feature announced.

As always, we’re going to skip over machine learning, AI, IoT and gaming because, let’s be honest, you’re not actually using any of those services.

We’re also going to skip talking about the vendor hall this year because QuinnyPig said all there was to say in his thoughtful, moving Expo Nature Walk livetweet.

Fargate Support for EKS

This is one of the cooler announcements out of the entire conference, not just because of its benefits, but because of the difficulty in pulling it off. Kubernetes heavily restricts what cloud providers can do in the code base to keep a consistent cross-platform experience. Fargate is an entirely separate service, and AWS needed to integrate EKS and Fargate together without changing Kubernetes. This makes AWS only the second cloud provider, after GCP, to let you run “serverless” Kubernetes. That’s pretty impressive since Google had every advantage, having built Kubernetes to fit its needs before open-sourcing it. Janakiram MSV did an excellent deep dive over at The New Stack if you want to dig into how AWS pulled this off. Suffice it to say, no other cloud provider is going to figure this out anytime soon.

As impressed as we are, it doesn’t mean we’re ditching our node groups for serverless just like that. As with ECS, Fargate has limitations. What Fargate does get right is matching compute capacity to workload demand. @Clare_Liguori did a brave live demo at re:Invent that showed just how good Fargate is compared to EC2 autoscaling:


Despite this impressive efficiency, there are key reasons not to use Fargate:

  • Cost: Compute comes at a premium in Fargate. If your workload doesn’t scale aggressively, you’re probably better off with EC2 autoscaling.
  • Sidecars and daemonsets are tricky: If you rely on these, keep in mind that your Fargate pods won’t get whatever functionality a daemonset was providing, whether that’s security monitoring, logging, etc.
  • Lack of customization: Many EKS users customize their worker nodes through userdata, say to place a certificate or add a security module. As with Managed Node Groups, this is not possible with Fargate. If you’re not using ECR to hold your images, this can be a very tricky limitation.
  • Complexity: Most people who use Fargate with EKS will only run it for portions of their workload. This requires properly planning out your deployments and effectively using labels, monitoring, etc.
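To make the cost tradeoff concrete, here’s a rough break-even sketch. The prices and the simplified billing models below are illustrative assumptions, not current AWS list prices, so plug in your own region’s numbers before drawing conclusions:

```python
import math

# Rough break-even sketch: when does Fargate beat EC2 autoscaling?
# All prices below are illustrative assumptions, not current AWS list prices.

FARGATE_VCPU_HOUR = 0.04048   # assumed Fargate per-vCPU-hour price
FARGATE_GB_HOUR = 0.004445    # assumed Fargate per-GB-hour price
EC2_M5_LARGE_HOUR = 0.096     # assumed m5.large (2 vCPU, 8 GB) on-demand price

def fargate_hourly(vcpus, memory_gb):
    """Fargate bills only for the resources the pods actually request."""
    return vcpus * FARGATE_VCPU_HOUR + memory_gb * FARGATE_GB_HOUR

def ec2_hourly(vcpus_needed, avg_utilization):
    """EC2 bills for whole instances, so you also pay for idle headroom."""
    instances = math.ceil(vcpus_needed / avg_utilization / 2)  # 2 vCPUs per m5.large
    return instances * EC2_M5_LARGE_HOUR

# A workload averaging 4 vCPUs / 8 GB on a fleet idling at 40% utilization:
print(fargate_hourly(4, 8))   # pay for exactly what the pods request
print(ec2_hourly(4, 0.4))     # pay for the headroom autoscaling leaves behind
```

The takeaway matches the bullet above: the worse your average node utilization, the more Fargate’s per-pod billing closes (or flips) the gap with its per-vCPU premium.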

Given the limitations, we recommend considering EKS with Fargate for very small clusters and for very large clusters where the cost-saving potential is 50 percent or more.

EKS with Fargate is available immediately. The AWS blog post is a great guide.

AWS Outposts. For Real This Time.

AWS Outposts were announced at re:Invent 2018 and spent a full year in preview. What was going on all that time? Well, for one, they had to actually build the thing. It was more of an idea in 2018 than an actual product. As a result, this thing has changed massively based on heavy customer feedback. The result is worth the wait. Outposts support EC2, EBS, VPC, ECS, EKS, RDS and EMR (with S3 support coming in 2020), all through the traditional AWS APIs (meaning your Terraform and CloudFormation templates will work with minimal modifications). All of this runs on top of hardware that includes AWS Nitro, which means ultra-fast performance and strong security.

To give you an idea of what an Outpost deployment might look like, the obviously named “OR-SRG6JS4” configuration gives you capacity for 6 m5.24xlarge instances and 11TB of storage. You can allocate those instances however you like, such as 48 m5.large instances. The instances also can be allocated toward RDS instances (e.g., 4 db.m5.large RDS instances and 44 m5.large EC2 instances). The cabinet comes fully ready to connect to your network, and hardware issues are AWS’ responsibility. This will set you back just under $15k/month. Compared to buying your own kit and running something like VMware vSphere, the cost premium is fairly trivial and allows you to manage your on-premises needs the same as you manage your in-cloud needs.

AWS Outposts are available now, though S3 support is “coming soon.” The GA announcement is available here.

EC2 Nitro Is Really Awesome

There really isn’t a new feature here, except for Enclaves (which really only have narrow use cases). But both Andy Jassy and Werner Vogels devoted significant time to talking about Nitro, and with good reason. Most cloud providers are building on top of the same server design that has been used for the past 30-plus years. With Nitro, AWS started with a new design specifically targeting hyperscale virtualization. That includes obvious changes like providing separate, dedicated buses for storage IO and networking IO (on traditional servers, external storage IO shares the networking IO bus). But it also includes more subtle changes to secure the hypervisor, better isolate running instances from each other and improve performance across the board.

Why does all this matter? Nitro allows AWS to run significantly more workload in the same space as its competitors. This has driven the price per unit of throughput down aggressively over the past five years and allows AWS to expand more rapidly. Azure is still trying to figure out how to offer core infrastructure services in a reliable manner. Google is trying to figure out how to get people to use GCP at all. AWS is building custom hardware to optimize cost and capacity. AWS will be challenged by worthy competitors over the next decade, but it has used its head start to give itself every possible advantage.

Don’t Ignore Graviton2

AWS announced its ARM processor, Graviton, in 2018, available in the “a1” instance family. The latest release, Graviton2, is available across beefier instance types, making it accessible for more workloads. The ARM processors offer modest performance improvements and significant cost savings for most use cases. But it’s ARM, not Intel. That’s scary, right?

No, it isn’t. If you’re running Java (admit it, you are), the JDK is very well tested on the ARM processor family. Most Linux distributions offer (and have always offered) ARM releases. Generally speaking, applications based on Python, Ruby, Node, etc., should just work, often with no customization or changes whatsoever. And Graviton2 works with ECS and EKS.
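The savings compound quickly at fleet scale. Here’s a back-of-the-envelope sketch; the hourly prices are illustrative assumptions (roughly a 20 percent ARM discount), not current AWS list prices:

```python
# Back-of-the-envelope Graviton2 savings estimate. Prices below are
# illustrative assumptions, not current AWS list prices.

X86_HOURLY = 0.096    # assumed m5.large on-demand price
ARM_HOURLY = 0.077    # assumed m6g.large on-demand price, ~20% less
HOURS_PER_MONTH = 730

def monthly_savings(instance_count):
    """Savings from a like-for-like move of a fleet to ARM instances."""
    return instance_count * HOURS_PER_MONTH * (X86_HOURLY - ARM_HOURLY)

# Even a modest 50-instance fleet adds up:
print(round(monthly_savings(50), 2))
```

That’s before factoring in any per-instance performance gains, which would let you shrink the fleet as well.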

Every AWS customer should be heavily evaluating and experimenting with ARM-based instances.

The Graviton2 instance families (M6g, C6g, R6g) are in preview now. They’re expected to be generally available this quarter. Graviton instances are already available. The product release announcement has additional details.

S3 Access Points Change Everything (In S3)

S3 Access Points might be the product announcement that has the most impact on daily life in AWS. Access Points allow you to create arbitrary URLs (or ARNs) for your buckets, with each having its own distinct policy. One of the biggest challenges with properly permissioning bucket access has been the confusion between bucket policies (attached to the buckets themselves) and IAM policies (attached to users, roles, groups, etc.). Bucket policies are limited in size, making them an inflexible choice for many buckets. With Access Points, each potential use case can have its own access point and corresponding policy. This allows policy to be applied and monitored consistently.

This isn’t to say you should never set bucket policies again. In fact, they will still be required in many cases. But bucket permissioning strategies should be rethought in light of access points. They’ll have a place in nearly every AWS account that uses S3.
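The per-use-case addressing is easiest to see in the naming. This sketch follows AWS’s published ARN and endpoint formats for access points; the account ID and access point names are made up for illustration:

```python
# Sketch of the per-use-case addressing Access Points provide. The ARN and
# endpoint formats follow AWS's published scheme; the account ID and names
# below are made up for illustration.

def access_point_arn(region, account_id, name):
    """Every access point is its own ARN, and every ARN gets its own policy."""
    return f"arn:aws:s3:{region}:{account_id}:accesspoint/{name}"

def access_point_endpoint(name, account_id, region):
    """Clients address the access point rather than the bucket itself."""
    return f"{name}-{account_id}.s3-accesspoint.{region}.amazonaws.com"

# One bucket, two use cases, two independently scoped policies:
print(access_point_arn("us-east-1", "123456789012", "analytics-ro"))
print(access_point_arn("us-east-1", "123456789012", "ingest-rw"))
```

Because each consumer gets its own ARN, each one can be granted, audited and revoked independently without touching a monolithic bucket policy.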

The AWS blog post on S3 Access Points is a good starting point to learn more. Access Points are generally available now.

Miscellaneous News

  • Compute Optimizer gives sizing recommendations that are actually useful. For years, we’ve been trained to ignore the sizing recommendations from Trusted Advisor, which nearly universally tell you that you can save 90 percent on your AWS bill by turning every instance you’ve got into a t3.micro. Compute Optimizer is finally a step in the right direction. While you probably don’t want to blindly follow its recommendations, it is a useful tool for reviewing your usage in bulk. Keep in mind that you have to install a service-linked role in your account that will allow AWS to access your instance performance data. Also, if you aren’t running the CloudWatch monitoring agent on your instances, your recommendations will be based solely on CPU usage and will not factor in memory consumption.
  • IAM Access Analyzer is here, and you need to use it. Access Analyzer synthesizes all IAM policies, whether attached to roles, buckets or keys, and it uses them to create a view of the effective permissions on your actual resources. You can pipe this through to S3, SecurityHub or a report that you can download periodically to make your auditors happy. It’s free to use. We look forward to all of the open-source tools that are going to build on top of this data.
  • UltraWarm pushes infrequently accessed ElasticSearch data to S3. This feature is still in preview, and we expect it to be for a while. On the surface, it appears wicked cool: store more of your application logs in ElasticSearch without the crushing storage costs by shifting older logs to S3 automatically. There are a lot of gotchas, though. This won’t be set-it-and-forget-it; some thought and customization is required. And you must be using AWS Managed ElasticSearch, which has a lot of upsides, but also a lot of downsides. Still, this is something to keep an eye on as AWS iterates on it over the coming year. Overall, the strategy of storing warm data in S3 has been used successfully in Redshift, EMR and nearly all of the machine learning services. It stands to reason it will be a winner for ElasticSearch, too. Eventually.
  • Fargate Spot pricing now available for ECS. This lets you combine the serverless-y nature of Fargate with the really cheap nature of Spot pricing. Unfortunately, it is not yet available for EKS, with no timeline for its release.
  • Ingress Routing Now Supported. You can finally route your inbound traffic through virtual appliances. This is incredibly useful for organizations that require secure filtering of inbound traffic. We recommend being careful about how you set this up, though. While ingress routing is free, most virtual appliances are quite pricey, so routing too much traffic through one can get expensive, and the appliance becomes a single point of failure if not set up correctly.
  • Inter-Region Transit Gateway Peering. We couldn’t be happier about this. You can now peer your transit gateways across regions directly, eliminating a sore point for dealing with transit gateways in a multi-region account.
  • Local Zones are now available. Local Zones are mini-availability zones, existing outside of a traditional AWS region. While AWS is experimenting right now with a single local zone in Los Angeles (preview only), the goal is to create hundreds of local zones throughout the world. These zones will have a very limited AWS feature set and are intended to provide edge-like capabilities to AWS customers. If you’re delivering media to consumers or providing a highly responsive mobile app, this could be the feature you’ve been waiting for. But for most of us, it just goes into the “Hey, that’s neat” pile.
  • CodeGuru adds machine learning driven code analysis and profiling. CodeGuru is actually two separate products: Profiler and Reviewer. Profiler is like a typical code profiler, but it also can instrument a wide range of AWS services such as Lambda. Reviewer is like a traditional static code analysis tool, except with the ability to define rules based on analysis of lots and lots of code. Theoretically, Profiler and Reviewer share data on the back end to improve each other, though such magic hasn’t been publicly disclosed yet. Reviewer can be very pricey, and of course you’re giving AWS access to your GitHub repos. Your source code is effectively their machine learning training data. Profiler is a little less expensive and a little less scary, but you’re still ultimately running an agent that sees your entire workload.
  • Redshift RA3 and Aqua continue to keep Redshift relevant in the era of Presto and Snowflake. RA3 instances offer managed storage, local SSD caching and auto-tiering of data back to S3. RA3 instances are generally available now. Aqua provides low-level aggregation and table scanning functions on separate hardware, physically and logically closer to the underlying storage. By offloading this high CPU, high IO activity, AWS claims as much as a 10x improvement in query performance. That means you can do more queries with less Redshift.
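The storage economics behind UltraWarm-style tiering are worth a quick sketch. The per-GB-month prices here are illustrative assumptions, not AWS list prices, but the shape of the savings holds:

```python
# Why tiering old ElasticSearch indices to S3-backed storage matters.
# Per-GB-month prices are illustrative assumptions, not AWS list prices.

HOT_GB_MONTH = 0.135    # assumed EBS-backed hot storage price
WARM_GB_MONTH = 0.024   # assumed S3-backed warm storage price

def monthly_storage_cost(hot_gb, warm_gb):
    """Total monthly cost of a mixed hot/warm log retention layout."""
    return hot_gb * HOT_GB_MONTH + warm_gb * WARM_GB_MONTH

# 10 TB of logs: everything hot vs. only the freshest 1 TB hot:
all_hot = monthly_storage_cost(10_000, 0)
tiered = monthly_storage_cost(1_000, 9_000)
print(round(all_hot, 2), round(tiered, 2))
```

The same math is why the warm-data-in-S3 strategy keeps reappearing in Redshift, EMR and now ElasticSearch.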

© Copyright 2020 Rhythmic Technologies, Inc. All Rights Reserved.