AWS EC2 Image Builder Introduces CloudFormation Support, Simplifying Image Pipeline Deployment
With the endless flood of new products, features and changes from AWS and its surrounding ecosystem, it can be easy to miss an update. Our monthly roundup highlights the major AWS news, announcements, product updates and behind-the-scenes changes we think are most relevant.
May saw a number of features announced last year graduate out of preview. There were also important announcements for EC2, S3 and EKS.
EC2 Image Builder Now Includes Support for AWS CloudFormation
At Rhythmic, we’re all in on EC2 Image Builder and are gradually replacing our Packer build pipelines with it. EC2 Image Builder goes beyond Packer with the ability to automate, test and distribute images. Though it works best with AWS, it supports other providers too. Unfortunately, there was no infrastructure-as-code option for deploying EC2 Image Builder. And if it requires clicking around the console, we’d rather not do it at all. When AWS announced CloudFormation support, we pounced.
Though it’s worthy of a standalone blog post (and that’s on the list!), we didn’t want to wait to share some new Terraform modules we created to automatically generate a CloudFormation template and deploy it. It isn’t that we hate CloudFormation. But the toil required to hand-build a CloudFormation template masked the actual behavior of the pipeline.
This simple Terraform code generates a pipeline that kicks off a new AMI build every month, built on top of Amazon Linux 2, with a simple Rhythmic Ansible component run against it.
module "test_component" { source = "rhythmictech/imagebuilder-component-ansible/aws" component_version = "1.0.0" description = "Testing component" name = "testing-component" playbook_dir = "packer-generic-images/base" playbook_repo = "https://github.com/rhythmictech/packer-generic-images.git" } module "test_recipe" { source = "rhythmictech/imagebuilder-recipe/aws" description = "Testing recipe" name = "test-recipe" parent_image = "arn:aws:imagebuilder:us-east-1:aws:image/amazon-linux-2-x86/x.x.x" recipe_version = "1.0.0" update = true component_arns = [ module.test_component.component_arn, "arn:aws:imagebuilder:us-east-1:aws:component/simple-boot-test-linux/1.0.0/1", "arn:aws:imagebuilder:us-east-1:aws:component/reboot-test-linux/1.0.0/1" ] } module "test_pipeline" { source = "rhythmictech/imagebuilder-pipeline/aws" description = "Testing pipeline" name = "test-pipeline" recipe_arn = module.test_recipe.recipe_arn }
Through SNS topic notifications, you can link builds together and automate your patching process. Build your base image, then automatically kick off the downstream builds. Using AMI aliases, you can even get your external deployment scripts to pick up the latest AMI IDs automatically.
Check out the modules on GitHub:
- https://github.com/rhythmictech/terraform-aws-imagebuilder-pipeline
- https://github.com/rhythmictech/terraform-aws-imagebuilder-recipe
- https://github.com/rhythmictech/terraform-aws-imagebuilder-component-ansible
EC2 Image Builder now includes support for AWS CloudFormation
Amazon ECS and AWS Fargate Support for Amazon EFS File Systems Now Generally Available
In great news for anyone using ECS, EFS is now natively supported. Previously, EFS filesystems could only be mounted on the underlying EC2 instances, via cloud-init or other bootstrapping, and only if your use case supported that. On Fargate, it wasn’t possible at all. Now, you can do it straight from the task definition.
Using the existing volumes and mountPoints properties in the task definition, you can reference your EFS filesystem via the new efsVolumeConfiguration property. Mounting is handled automatically.
{ "family": "busybox", "containerDefinitions": [ { "mountPoints": [ { "containerPath": "/mnt", "sourceVolume": "efs" } ], "name": "busybox", "image": "busybox" } ], "volumes": [ { "name": "efs", "efsVolumeConfiguration": { "fileSystemId": "fs-12345678" } } ], }
Be mindful that the security groups on the EFS mount targets still need to allow NFS traffic (TCP port 2049) from your tasks, or your containers will fail to deploy.
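If you manage your infrastructure with Terraform like we do, that’s a one-liner. Here’s a minimal sketch, assuming hypothetical `aws_security_group.efs` (attached to the mount targets) and `aws_security_group.ecs_service` (attached to the tasks) resources:

```hcl
# Allow NFS traffic from the ECS tasks to the EFS mount targets.
resource "aws_security_group_rule" "nfs_from_ecs" {
  type                     = "ingress"
  from_port                = 2049
  to_port                  = 2049
  protocol                 = "tcp"
  security_group_id        = aws_security_group.efs.id         # SG on the EFS mount targets
  source_security_group_id = aws_security_group.ecs_service.id # SG on the ECS service/tasks
}
```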
Amazon ECS and AWS Fargate support for Amazon EFS File Systems now generally available
Single Sign-On between Okta Universal Directory and AWS
At Rhythmic, we use Okta for everything. It’s how we log into our tools like Confluence and Jira, our payroll provider, and most importantly, our AWS accounts. We use aws-okta, a great tool published by Segment, a cool data platform company that, like us, has to manage authentication and identity across tons of AWS accounts.
Of course, that often left us doing things the cool way (Duo push MFA, no local access keys, etc.) while our clients accessed their accounts using IAM users. Where we can, we move them to Okta too, but for many, the toolchain requirements and the confusion over how cross-account IAM role assumption works make it impractical.
AWS SSO is the answer to that problem, letting you keep familiar constructs thanks to user synchronization with Okta. Users still take on roles, but the management of that process is simplified dramatically. Most importantly, you don’t need to build complicated AWS config profiles. Because AWS SSO is natively integrated with the AWS CLI v2, a simple one-time command (`aws configure sso`) automatically configures a profile for you with STS and role assumption, which means no local access keys.
Single Sign-On between Okta Universal Directory and AWS
Miscellaneous News
- Native parameter support for Amazon Machine Image IDs. The announcement could have been a bit clearer about what this is: you can now store an AMI ID in Parameter Store as a parameter with the `aws:ec2:image` data type, which validates the value, and reference it in your deployment processes (see the first sketch after this list). It’s great seeing AWS continue to integrate Parameter Store more tightly with other services.
- Add enriched metadata to Amazon VPC flow logs published to CloudWatch Logs and S3. The enriched metadata fields make VPC flow logs infinitely more useful, but custom log formats were restricted to the S3 destination until now. This update lets you apply custom formats to flow logs delivered to both S3 and CloudWatch Logs (see the second sketch after this list). That means it is probably time for you to start using them.
- New – Amazon EC2 C5a Instances Powered By 2nd Gen AMD EPYC™ Processors. What if I told you that you could save 10% by typing an additional letter? Is that something you might be interested in? Nitro-powered AMD processors are 10% cheaper, nearly always equally performant, and only require you to add the letter ‘a’ to use them.
- Amazon EC2 M6g instances powered by AWS Graviton2 processors are now generally available. Saving 10% by adding the letter ‘a’ not good enough for you? What about changing a ‘5’ to a ‘6g’ for 25% savings and 40% better performance? The new M6g family does exactly that. The Graviton2 ARM processor is compatible with most languages (Java, Python, Node, Ruby, what have you) and is purpose-built for cloud-based applications. You should be using Graviton2.
- Trusted Advisor is becoming much more useful for cost optimization. If you aren’t doing anything for cost management, consider Trusted Advisor. Or, read our Rapid Cost Savings Checklist blog post (AWS Rapid Cost Savings Checklist).
- Amazon EKS Best Practices Guide for Security. One-stop shopping for EKS security best practices. 95% of what you need to know is here.
- Amazon MSK now supports Apache Kafka version upgrades. Amazon’s managed Kafka service now supports rolling version upgrades. Running Kafka yourself sucks, and this makes it easier to let someone else run it for you.
- Data Lifecycle Manager adds support for scheduling based on cron expressions and additional backup intervals including weekly, monthly and annual schedules. Data Lifecycle Manager is a super easy way to manage EBS snapshots, and cron expressions offer significantly more flexibility (see the final sketch after this list).
- Amazon DynamoDB now supports empty values for non-key String and Binary attributes in DynamoDB tables. DynamoDB finally accepts empty values. AWS wasn’t exactly late to the party on this one, so much as the party ended years ago and now we get to undo all the stupid hacks we made to deal with this. It’s like a time-delayed hangover.
- Kernel Live Patching on Amazon Linux 2. Live kernel patching is hard. If you’re wondering how widely used Amazon Linux is at this point, don’t overlook the fact that AWS saw enough demand to implement a feature that Red Hat doesn’t support.
- Amazon EKS now supports Kubernetes version 1.16. Kubernetes 1.16 has a lot of breaking changes, particularly for early adopters of K8s. Lots of deprecated APIs fall off here. In 12 months, you’ll be forced to make this upgrade. Don’t get caught off guard by it.
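Here’s a minimal Terraform sketch of the Parameter Store AMI pattern mentioned above. The parameter name and AMI ID are made up; the `aws:ec2:image` data type tells Parameter Store to validate that the value is a real AMI ID:

```hcl
# Publish the latest golden AMI ID (e.g., from an Image Builder pipeline).
resource "aws_ssm_parameter" "golden_ami" {
  name      = "/images/base/latest"   # hypothetical parameter name
  type      = "String"
  data_type = "aws:ec2:image"         # validates that the value is an AMI ID
  value     = "ami-0123456789abcdef0" # placeholder AMI ID
}

# Downstream stacks look the AMI up instead of hardcoding it.
data "aws_ssm_parameter" "golden_ami" {
  name = "/images/base/latest"
}

resource "aws_instance" "app" {
  ami           = data.aws_ssm_parameter.golden_ami.value
  instance_type = "t3.micro"
}
```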
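Next, a sketch of a custom-format flow log delivered to CloudWatch Logs, now possible with this update. The VPC, log group and IAM role references are assumptions, and the field list is just an example (note that Terraform requires `$${...}` to escape the literal `${...}` flow log field syntax):

```hcl
resource "aws_flow_log" "vpc" {
  vpc_id               = aws_vpc.main.id                   # hypothetical VPC
  traffic_type         = "ALL"
  log_destination_type = "cloud-watch-logs"
  log_destination      = aws_cloudwatch_log_group.flow.arn # hypothetical log group
  iam_role_arn         = aws_iam_role.flow_logs.arn        # hypothetical delivery role

  # Custom format including enriched metadata fields like instance-id and pkt-srcaddr.
  log_format = "$${version} $${account-id} $${instance-id} $${interface-id} $${srcaddr} $${pkt-srcaddr} $${dstaddr} $${pkt-dstaddr} $${srcport} $${dstport} $${action}"
}
```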
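Finally, a sketch of a Data Lifecycle Manager policy using a cron schedule. The tags, role and retention settings here are illustrative; the interesting bit is `cron_expression` replacing the old fixed intervals:

```hcl
resource "aws_dlm_lifecycle_policy" "weekly" {
  description        = "Weekly EBS snapshots"
  execution_role_arn = aws_iam_role.dlm.arn # hypothetical DLM service role
  state              = "ENABLED"

  policy_details {
    resource_types = ["VOLUME"]

    target_tags = {
      Snapshot = "true" # snapshot any volume tagged Snapshot=true
    }

    schedule {
      name = "weekly"

      create_rule {
        # AWS cron syntax: 09:00 UTC every Monday
        cron_expression = "cron(0 9 ? * MON *)"
      }

      retain_rule {
        count = 4 # keep the last four weekly snapshots
      }

      copy_tags = true
    }
  }
}
```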