
AWS EC2 Image Builder Introduces CloudFormation Support, Simplifying Image Pipeline Deployment

June 9, 2020 · Cris Daniluk

With the endless flood of new products, features, and changes from AWS and its surrounding ecosystem, it can be easy to miss an update. Our monthly roundup highlights the AWS news, announcements, product updates, and behind-the-scenes changes we think are most relevant.

May saw a number of features announced last year graduate out of preview. There were also important announcements for EC2, S3, and EKS.

EC2 Image Builder Now Includes Support for AWS CloudFormation

At Rhythmic, we’re all in on EC2 Image Builder and are gradually replacing our Packer build pipelines with it. EC2 Image Builder goes beyond Packer with the ability to automate, test, and distribute images. Unfortunately, there was no infrastructure-as-code option for deploying EC2 Image Builder pipelines. And if a task requires clicking through the console, we’d rather not do it at all. When AWS announced CloudFormation support, we pounced.

Though it’s worthy of a standalone blog post (and that’s on the list!), we didn’t want to wait to share some new Terraform modules we created that automatically generate a CloudFormation template and deploy it. It isn’t that we hate CloudFormation; rather, the toil required to hand-build a CloudFormation template masked the actual behavior of the pipeline.

This simple Terraform code generates a pipeline that kicks off a new AMI build every month, built on top of Amazon Linux 2, with a simple Rhythmic Ansible playbook run against it.

module "test_component" {
  source  = "rhythmictech/imagebuilder-component-ansible/aws"

  component_version = "1.0.0"
  description       = "Testing component"
  name              = "testing-component"
  playbook_dir      = "packer-generic-images/base"
  playbook_repo     = "https://github.com/rhythmictech/packer-generic-images.git"
}

module "test_recipe" {
  source  = "rhythmictech/imagebuilder-recipe/aws"

  description    = "Testing recipe"
  name           = "test-recipe"
  parent_image   = "arn:aws:imagebuilder:us-east-1:aws:image/amazon-linux-2-x86/x.x.x"
  recipe_version = "1.0.0"
  update         = true

  component_arns = [
    module.test_component.component_arn,
    "arn:aws:imagebuilder:us-east-1:aws:component/simple-boot-test-linux/1.0.0/1",
    "arn:aws:imagebuilder:us-east-1:aws:component/reboot-test-linux/1.0.0/1"
  ]
}

module "test_pipeline" {
  source  = "rhythmictech/imagebuilder-pipeline/aws"

  description = "Testing pipeline"
  name        = "test-pipeline"
  recipe_arn  = module.test_recipe.recipe_arn
}


Through SNS topic notifications, you can link builds together and automate your patching process: build your base image, then automatically kick off the downstream builds. Using AMI aliases, you can even get your external deployment scripts to automatically pick up the latest AMI IDs.
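As a rough sketch of that chaining pattern: a subscriber on the base pipeline's SNS topic (a small Lambda, for example) can start the downstream pipeline with a single AWS CLI call. The pipeline ARN below is a placeholder, not a real resource.

```shell
# Hypothetical ARN -- substitute your downstream pipeline's ARN.
# Run this from an SNS subscriber when the base image build completes:
aws imagebuilder start-image-pipeline-execution \
  --image-pipeline-arn arn:aws:imagebuilder:us-east-1:123456789012:image-pipeline/app-pipeline
```

The same call works from a scheduled job if you prefer time-based rebuilds over event-driven ones.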

Check out the modules on GitHub:

EC2 Image Builder now includes support for AWS CloudFormation

Amazon ECS and AWS Fargate Support Amazon EFS Filesystems Generally Available

In great news for anyone using ECS, EFS is now natively supported. Previously, EFS file systems could only be mounted on EC2 container instances via cloud-init or bootstrap scripts, and only if your use case supported it; on Fargate, it wasn’t possible at all. Now you can do it straight from the task definition.

Using the existing volumes and mountPoints properties in the task definition, you can reference your EFS filesystem. Mounting is handled automatically.

{
    "family": "busybox",
    "containerDefinitions": [
        {
            "mountPoints": [
                {
                    "containerPath": "/mnt",
                    "sourceVolume": "efs"
                }
            ],
            "name": "busybox",
            "image": "busybox"
        }
    ],
    "volumes": [
        {
            "name": "efs",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-12345678"
            }
        }
    ]
}

Be mindful that the security groups on the EFS mount targets still need to allow inbound NFS traffic (TCP port 2049) from your tasks, or your containers will fail to deploy.
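One way to wire that up, assuming you keep the mount targets and the tasks in separate security groups (the group IDs below are placeholders):

```shell
# Hypothetical security group IDs -- substitute your own.
# Allow NFS (TCP 2049) into the EFS mount targets' group
# from the group attached to your ECS tasks:
aws ec2 authorize-security-group-ingress \
  --group-id sg-0efs0example \
  --protocol tcp \
  --port 2049 \
  --source-group sg-0tasks0example
```

Referencing the tasks' security group as the source, rather than a CIDR range, keeps the rule correct as tasks come and go.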

Amazon ECS and AWS Fargate support for Amazon EFS File Systems now generally available

Single Sign-On between Okta Universal Directory and AWS

At Rhythmic, we use Okta for everything. It’s how we log into our tools like Confluence and Jira, our payroll provider, and most importantly, our AWS accounts. We use aws-okta, a great tool published by Segment, a cool data platform company that, like us, has to manage authentication and identity across tons of AWS accounts.

Of course, that often left us doing things the cool way (Duo push MFA, no local access keys, and so on) while our clients accessed their accounts using IAM users. Where we can, we move them to Okta too, but for many, the toolchain requirements and the confusion over how cross-account IAM role assumption works make it an impractical barrier.

AWS SSO is the answer to that problem, letting you keep familiar, IAM-user-like workflows thanks to user synchronization. Users still take on roles, but the management of that process is simplified dramatically. Most importantly, you don’t need to build complicated AWS config profiles. Because AWS SSO is natively integrated with the AWS CLI, a simple one-time command will automatically configure a profile for you with STS and role assumption, which means no local access keys.
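Concretely, that one-time setup looks something like this with AWS CLI v2 (the profile name is a placeholder you choose during setup):

```shell
# One-time interactive setup: prompts for your SSO start URL and region,
# then writes a profile to ~/.aws/config -- no access keys stored locally.
aws configure sso

# Afterwards, refresh short-lived credentials whenever they expire:
aws sso login --profile my-sso-profile
```

Any tool that reads standard AWS profiles can then use `--profile my-sso-profile` (or the AWS_PROFILE environment variable) without ever seeing a long-lived key.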

Single Sign-On between Okta Universal Directory and AWS

Miscellaneous News
