Terraform On a New Cloud – Defining Existing Infrastructure


So, you’ve got a repo set up for your infrastructure, your AWS account is set up, and you’ve logged in through the CLI. Now what? Chances are you aren’t starting from scratch: you have some existing infrastructure in your AWS environment that you would like defined in Terraform, but you don’t want to risk accidentally messing it up. Fortunately for you, I’ve gone ahead and messed up a bunch of infrastructure so you don’t have to!

Do I have to delete all my existing infrastructure?

No. The creators of Terraform, in their infinite wisdom, foresaw that people might already have infrastructure in the cloud they want defined in Terraform and don’t want to recreate it. If you’ve already played with Terraform, you may have noticed there’s an import command, and if you’re familiar with the docs, you’ve probably already noticed it at the bottom of each resource page.

The example we’ll be running with today is the aws_vpc resource, which you can find in the Terraform AWS provider docs. This is mostly because I’m working in my personal account and I’ve deleted everything except the default VPC that comes with every account. The techniques I lay out here have been developed on a multitude of resources and honed in several clouds, so I’m certain they will apply to you (unless you are using one of the unlucky few resources that does not have a way to import). The trick here is not to freak out just because your resource definition doesn’t match what’s already instantiated in the cloud, but to let Terraform go ahead and pull that configuration for you.


In my own Terraform git repo, I’ve created a new project for my networking stuff called network. In the main.tf file, I’ll just copy and paste the basic example from the aws_vpc docs and change the resource name (you have to change it before you import, or you’ll have to do this all over again).

# network/main.tf

resource "aws_vpc" "default" {
  cidr_block = "10.0.0.0/16"
}

As you may have noticed, the configuration I have here does not match the existing VPC (whose CIDR block is actually 172.31.0.0/16). That’s okay, because Terraform will not make any changes to the infrastructure on an import. The syntax for the import is pulled right out of the Terraform docs.

# terraform import [options] <RESOURCE_TYPE>.<RESOURCE_NAME> <RESOURCE_ID>
> terraform import -var-file default.tfvars aws_vpc.default vpc-61c8eb19
aws_vpc.default: Importing from ID "vpc-61c8eb19"...
aws_vpc.default: Import prepared!
Prepared aws_vpc for import
aws_vpc.default: Refreshing state... [id=vpc-61c8eb19]

Import successful!

The resources that were imported are shown above. These resources are now in your Terraform state and will henceforth be managed by Terraform.
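At this point the VPC exists only in state, not in code. If you want to see exactly which attributes Terraform recorded during the import, you can query the state directly. A quick sketch, assuming the same resource address we used above:

```shell
# List everything Terraform is now tracking in this project's state
terraform state list

# Print every attribute recorded for the imported VPC
terraform state show aws_vpc.default
```

The output of `terraform state show` is close to valid HCL, which makes it another handy starting point for writing the resource definition.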

Now, here’s the trick to getting your new Terraform configuration: just run a terraform plan and see what changes Terraform wants to apply. You don’t have to worry about making any changes to the infrastructure on a terraform plan because, as the name implies, it simply generates a plan!

>  terraform plan -var-file default.tfvars                                                                                                     
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

aws_vpc.default: Refreshing state... [id=vpc-61c8eb19]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # aws_vpc.default must be replaced
-/+ resource "aws_vpc" "default" {
      ~ arn                              = "arn:aws:ec2:us-east-1:784384215106:vpc/vpc-61c8eb19" -> (known after apply)
        assign_generated_ipv6_cidr_block = false
      ~ cidr_block                       = "172.31.0.0/16" -> "10.0.0.0/16" # forces replacement
      ~ default_network_acl_id           = "acl-612e851a" -> (known after apply)
      ~ default_route_table_id           = "rtb-acac72d1" -> (known after apply)
      ~ default_security_group_id        = "sg-71834b05" -> (known after apply)
      ~ dhcp_options_id                  = "dopt-81f5e3f8" -> (known after apply)
      ~ enable_classiclink               = false -> (known after apply)
      ~ enable_classiclink_dns_support   = false -> (known after apply)
      ~ enable_dns_hostnames             = true -> (known after apply)
        enable_dns_support               = true
      ~ id                               = "vpc-61c8eb19" -> (known after apply)
        instance_tenancy                 = "default"
      + ipv6_association_id              = (known after apply)
      + ipv6_cidr_block                  = (known after apply)
      ~ main_route_table_id              = "rtb-acac72d1" -> (known after apply)
      ~ owner_id                         = "784384215106" -> (known after apply)
      - tags                             = {} -> null
    }

Plan: 1 to add, 0 to change, 1 to destroy.

If you look above, you might notice that what Terraform has given you is the definition for our aws_vpc resource as it currently exists. We can take this, alter it to remove the extra symbols, and we get the resource definition we’ve been searching for! I present the finished resource to you below, but I have to admit that it did not burst forth from my head fully formed like Athena from Zeus. It took a run or two to get rid of the fields that cannot be set, each of which produces an error like this one: Error: "owner_id": this field cannot be set.

# network/main.tf

resource "aws_vpc" "default" {
  assign_generated_ipv6_cidr_block = false
  cidr_block                       = "172.31.0.0/16"
  enable_classiclink               = false
  enable_classiclink_dns_support   = false
  enable_dns_hostnames             = true
  enable_dns_support               = true
  instance_tenancy                 = "default"
  tags                             = {}
}

Now we have our current state defined in Terraform with no changes to apply! We can run a plan to confirm this.

>  terraform plan -var-file default.tfvars
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

aws_vpc.default: Refreshing state... [id=vpc-61c8eb19]

------------------------------------------------------------------------

No changes. Infrastructure is up-to-date.

That still seems like work. Can Terraform just generate the code for me?

Not quite, although the terraform import command docs indicate that this is a planned feature.

| The current implementation of Terraform import can only import resources into the state. It does not generate configuration. A future version of Terraform will also generate configuration.
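(For readers on newer versions: this did eventually land. Terraform 1.5 and later support config-driven import, where you declare an import block and ask plan to generate the configuration for you. A minimal sketch of that workflow, using the same VPC ID from this post:

```hcl
# imports.tf — config-driven import, Terraform 1.5+ only
import {
  to = aws_vpc.default
  id = "vpc-61c8eb19"
}
```

Running terraform plan -generate-config-out=generated.tf then writes a best-effort resource block to generated.tf, which you can review and clean up much like the plan-diff technique above. Everything that follows was written before that feature existed.)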

There are tools that aim to do exactly this, although we’ve had limited success with them. There are two that I’ve tried personally and can speak to but for a more up-to-date list you should check out github.com/shuaibiyy/awesome-terraform or the terraform topic on GitHub.

dtan4/terraforming

The project I’ve had the most success with, github.com/dtan4/terraforming, is written in Ruby and supports a long list of AWS resources. It can generate Terraform code as well as Terraform state files. Below is the output it gave for my VPC.

>  terraforming vpc                                                                                                                                                                                               
resource "aws_vpc" "vpc-61c8eb19" {
    cidr_block           = "172.31.0.0/16"
    enable_dns_hostnames = true
    enable_dns_support   = true
    instance_tenancy     = "default"

    tags {
    }
}
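Since terraforming can also emit a matching state file via its --tfstate flag, you can skip the manual terraform import step entirely (back up any existing state first, as this overwrites the file):

```shell
# Generate Terraform code for all VPCs in the account
terraforming vpc > vpc.tf

# Generate a matching state file instead of importing by hand
terraforming vpc --tfstate > terraform.tfstate
```

A follow-up terraform plan should then show little or no drift between the generated code and the generated state.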

GoogleCloudPlatform/terraformer

This is a product of the Waze SRE team and is primarily aimed at users of Google Cloud Platform but has limited support for other providers. They provide a good summary on their README but I’ll give a rundown of what I got working here.

First, you have to initialize terraform with the provider you’ll be using. For us, that means creating a file like the one below.

# testing_gcp_terraformer/init.tf

provider "aws" {}

Now we import the VPC configuration, which is all written to files in a newly generated folder.

>  terraformer import aws --resources=vpc --connect=true --profile=default --regions=us-east-1                                                                                                                    
2020/03/09 16:24:30 aws importing region us-east-1
2020/03/09 16:24:30 aws importing... vpc
2020/03/09 16:24:39 Refreshing state... aws_vpc.tfer--vpc-002D-61c8eb19
2020/03/09 16:25:05 aws Connecting.... 
2020/03/09 16:25:05 aws save vpc
2020/03/09 16:25:05 aws save tfstate for vpc
>  tree                                                                                                                                                                                                           
.
├── generated
│   └── aws
│       └── vpc
│           └── us-east-1
│               ├── outputs.tf
│               ├── provider.tf
│               ├── terraform.tfstate
│               └── vpc.tf
└── init.tf
4 directories, 5 files
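Because Terraformer writes both the code and a matching tfstate, you can sanity-check the result by planning inside the generated directory (paths here match the tree above):

```shell
cd generated/aws/vpc/us-east-1
terraform init   # downloads the AWS provider pinned in provider.tf
terraform plan   # should report no changes against the generated tfstate
```

If the plan is clean, you can start moving the generated resources into your real project and renaming them to something friendlier than tfer--vpc-002D-61c8eb19.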

And we have some beautiful new Terraform albeit with some unique names…

# testing_gcp_terraformer/generated/aws/vpc/us-east-1/outputs.tf

output "aws_vpc_tfer--vpc-002D-61c8eb19_id" {
  value = "${aws_vpc.tfer--vpc-002D-61c8eb19.id}"
}
# testing_gcp_terraformer/generated/aws/vpc/us-east-1/vpc.tf

resource "aws_vpc" "tfer--vpc-002D-61c8eb19" {
  assign_generated_ipv6_cidr_block = "false"
  cidr_block                       = "172.31.0.0/16"
  enable_classiclink               = "false"
  enable_classiclink_dns_support   = "false"
  enable_dns_hostnames             = "true"
  enable_dns_support               = "true"
  instance_tenancy                 = "default"
}
# testing_gcp_terraformer/generated/aws/vpc/us-east-1/provider.tf

provider "aws" {
  region  = "us-east-1"
  version = "~>v2.52.0"
}

Conclusion

Of course, you’re going to want to refactor your code into modules, introduce variables, and perhaps even manage multiple environments using Terraform workspaces. But a lot of teams let this be a barrier to getting started at all. Start by importing your state exactly as it is. Then you can incrementally improve your existing code while also getting everything new into Terraform from the start. This way you can also be confident in your ability to reset changes if something goes wrong, because you can just git reset --hard HEAD^ and then terraform apply again! When applying changes to infrastructure that was created by hand, you don’t have the same ability to “undo” changes, and consequently your Recovery Time Objectives (RTOs) will be as long as it takes you to manually recreate that resource.
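That undo workflow can be sketched in two commands (assuming the bad change is the most recent commit and your state is stored remotely or alongside the code):

```shell
# Revert the code to the previous commit (the last known-good definition)
git reset --hard HEAD^

# Converge the infrastructure back to that definition
terraform apply
```

Terraform diffs the reverted code against real infrastructure and produces exactly the changes needed to roll back, which is the “undo” button hand-built infrastructure never gets.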

Delineating code into modules and projects and controlling the “blast radius” of your Terraform project is a complex topic with no clear answers. At Rhythmic, we’ve had success separating code into what we’ve termed “projects”, which live inside the main Terraform repository for the account. It isn’t the sexiest Terraform setup, but its simplicity allows easy onboarding of developers and the ability to quickly look at Terraform code and know what’s deployed into an environment; so really, its simplicity is its strength. Those basic principles, and what the repositories they lead to look like, will have to be a blog post all their own.

© Copyright 2020 Rhythmic Technologies, Inc. All Rights Reserved.