HOW TO CONFIGURE TERRAFORM REMOTE BACKEND ON AWS S3 BUCKET

Okey Ebere
4 min read · Jul 6, 2024


Terraform is an infrastructure as code tool that lets you build, change, and version infrastructure safely and efficiently.

(Image source: devops-mojo)

Imagine building a custom AWS virtual private cloud (VPC) with Terraform. It’s a thing of beauty! But here’s the catch: Terraform records everything it manages in a .tfstate file. This file stores vital details about your environment, like IP addresses and even secret keys. Sharing this file directly is risky, but keeping it local can cause conflicts when your team collaborates.

Think of a remote backend as a secure vault for your .tfstate file. Instead of local storage or a central repository with open access, this backend stores it remotely on services like Terraform Cloud or AWS S3 buckets. This allows your team to access the latest infrastructure state without compromising sensitive information.

Collaboration Made Easy, Security Uncompromised

No matter your infrastructure project, a remote backend ensures smooth collaboration while keeping your secrets safe. It’s like having a designated vault for your digital treasures. In this tutorial, we’ll explore using AWS S3 buckets as your remote backend solution.

PREREQUISITES

  1. Terraform Installed on your Local Machine
  2. AWS CLI configured with IAM credentials that have permissions to create S3 buckets and DynamoDB tables

Steps:

  1. Create an S3 Bucket:
  • Open the AWS Management Console and navigate to the S3 service.
  • Click “Create Bucket” and choose a unique name.

Leave all configurations at their defaults, except for the following:

  • Enable versioning for the bucket to maintain a history of state files.
  • Consider server-side encryption with AWS Key Management Service (KMS) for added security.
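If you prefer the command line, the same bucket setup can be sketched with the AWS CLI. The bucket name and region below mirror the examples used later in this article; substitute your own:

```shell
# Create the bucket (regions other than us-east-1 need an explicit LocationConstraint)
aws s3api create-bucket \
  --bucket terraform-statefile2bo \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2

# Enable versioning so a history of state files is retained
aws s3api put-bucket-versioning \
  --bucket terraform-statefile2bo \
  --versioning-configuration Status=Enabled

# Turn on server-side encryption (SSE-KMS with the default aws/s3 key)
aws s3api put-bucket-encryption \
  --bucket terraform-statefile2bo \
  --server-side-encryption-configuration '{
    "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
  }'
```

These commands require AWS credentials with the corresponding S3 permissions, so treat them as a sketch rather than a copy-paste script.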

2. Set Up Bucket Permissions:

  • Go to the bucket’s “Permissions” tab.
  • Edit the bucket policy using the policy generator.
  • Allow s3:GetObject, s3:PutObject, and s3:ListBucket actions for the authenticated IAM user/group/account that will use Terraform.
  • Deny s3:DeleteBucket to prevent accidental deletion of the bucket (this statement is not included in the example policy below; add it if you want that guardrail).
{
  "Version": "2012-10-17",
  "Id": "Policy1720275396028",
  "Statement": [
    {
      "Sid": "Stmt1720275382031",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxx:user/xxxxx"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::terraform-statefile2bo/*"
    },
    {
      "Sid": "Stmt1720275382032",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxxxxx:user/xxxx"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::terraform-statefile2bo"
    }
  ]
}
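If you save the policy above to a file (say, bucket-policy.json), it can also be attached from the CLI instead of the console's policy editor:

```shell
# Attach the bucket policy from a local JSON file
aws s3api put-bucket-policy \
  --bucket terraform-statefile2bo \
  --policy file://bucket-policy.json
```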

3. Create a DynamoDB Table (Optional):

While S3 buckets are great for storing Terraform state files and enabling collaboration, there’s a potential pitfall: concurrent modifications. Imagine two team members hitting “apply” at the same time — the state file could get corrupted!

This is where DynamoDB comes in. It acts as a locking mechanism, a digital guard for your state file. When someone starts applying changes, DynamoDB ensures only they can write to the state file. Others attempting modifications are politely told to wait their turn. Like a traffic light at an intersection, DynamoDB prevents collisions by letting only one car (one terraform apply) through at a time. This keeps your state file consistent and your infrastructure deployments smooth.

  • A DynamoDB table provides locking functionality to prevent concurrent state modifications.
  • If you choose to use DynamoDB, create a table with a single string partition key named “LockID”. Leave the remaining settings as default and click “Create table”.
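The same table can be created from the CLI. The table name below matches the backend configuration used later in this article; the on-demand billing mode is just one reasonable choice:

```shell
# Create the lock table; LockID must be a string (S) partition key
aws dynamodb create-table \
  --table-name Terraform-statefile \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```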

4. Configure Terraform Backend:

  • Open main.tf in your Terraform project directory.
  • Add the following configuration, replacing the placeholder values with your own:

terraform {
  backend "s3" {
    bucket         = "terraform-statefile2bo"   # your bucket name
    key            = "terraform.tfstate"
    region         = "us-west-2"                # your AWS region
    dynamodb_table = "Terraform-statefile"      # your DynamoDB table name
  }
}

5. Initialize Terraform:

  • Run terraform init in your project directory. This initializes Terraform and configures the remote backend defined in main.tf. Note that Terraform does not create the DynamoDB table for you; it must already exist (see step 3).
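At the command line this looks roughly like the following, assuming the backend block from step 4 is in place:

```shell
terraform init                  # configures and connects to the S3 backend
terraform init -migrate-state   # use this instead if you have existing local state to copy over
```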

6. Apply Terraform:

Now, run terraform apply and wait for the infrastructure to be created.
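To double-check that the state actually landed in the remote backend, a couple of verification commands (using the bucket name from this article):

```shell
# List objects in the bucket; terraform.tfstate should appear under the configured key
aws s3 ls s3://terraform-statefile2bo/

# Ask Terraform itself which resources are tracked in the (now remote) state
terraform state list
```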

Additional Considerations:

  • Multiple Users: For collaboration, configure IAM roles or users with appropriate permissions to access the S3 bucket and DynamoDB table (if used).
  • State Locking: If using DynamoDB for locking, ensure the IAM user/role has the necessary permissions to perform DynamoDB actions.
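For reference, the minimum DynamoDB actions the S3 backend needs for state locking are GetItem, PutItem, and DeleteItem on the lock table. A sketch of such an IAM policy statement (the account ID and region in the ARN are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-west-2:xxxxx:table/Terraform-statefile"
    }
  ]
}
```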

CONCLUSION

  • Terraform’s .tfstate file holds crucial infrastructure details.
  • Sharing this file directly exposes sensitive information.
  • Remote backends (like AWS S3) offer secure storage for collaboration.
  • Your team can access the latest state without compromising security.
