Terraform State Management: Remote State, Workspaces, Locking, Import, and Moved Blocks
Master Terraform state management in production: S3 remote state with DynamoDB locking, workspaces for environment isolation, terraform import for existing resources, moved blocks for refactoring, and state surgery.
Terraform state is the source of truth for what infrastructure exists. Local state works for solo projects; it breaks for teams — concurrent applies overwrite each other, state files get committed to Git accidentally, and there's no audit trail. Production Terraform needs remote state with locking, environment isolation, and the techniques to safely refactor or import existing resources.
This post covers S3 remote state with DynamoDB locking, workspace-based environment isolation, terraform import for brownfield resources, moved blocks for safe refactoring, and state surgery for when things go wrong.
1. S3 Remote State with DynamoDB Locking
```hcl
# infrastructure/bootstrap/main.tf
# Run this ONCE to create the state backend (before other modules)
# Can't use remote state for itself — start with local state, then migrate
terraform {
  required_providers {
    aws = { source = "hashicorp/aws", version = "~> 5.0" }
  }
}

provider "aws" { region = "us-east-1" }

# S3 bucket for state files
resource "aws_s3_bucket" "terraform_state" {
  bucket = "viprasol-terraform-state-${random_id.suffix.hex}"

  lifecycle {
    prevent_destroy = true # Never delete this accidentally
  }
}

resource "random_id" "suffix" { byte_length = 4 }

# Enable versioning — you can restore previous state
resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration { status = "Enabled" }
}

# Server-side encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
  bucket = aws_s3_bucket.terraform_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}

# Block all public access
resource "aws_s3_bucket_public_access_block" "state" {
  bucket                  = aws_s3_bucket.terraform_state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# DynamoDB table for state locking
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "viprasol-terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }

  lifecycle { prevent_destroy = true }
}

output "state_bucket" { value = aws_s3_bucket.terraform_state.bucket }
output "lock_table"   { value = aws_dynamodb_table.terraform_locks.name }
```
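The bootstrap module is a chicken-and-egg problem: the backend must exist before anything can use it. The usual sequence looks like this (a sketch assuming the bootstrap directory above; the bucket name is generated, so read it from the output):

```shell
cd infrastructure/bootstrap
terraform init                 # local state, no backend yet
terraform apply                # creates the bucket + lock table
terraform output state_bucket  # note the generated bucket name

# Optional: move the bootstrap module's own state into the new backend.
# Add a backend "s3" block to this module, then:
terraform init -migrate-state  # copies local terraform.tfstate to S3
```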
```hcl
# All other modules: configure the remote backend
# infrastructure/production/backend.tf
terraform {
  backend "s3" {
    bucket         = "viprasol-terraform-state-a1b2c3d4"
    key            = "production/main.tfstate" # Unique key per module
    region         = "us-east-1"
    dynamodb_table = "viprasol-terraform-locks"
    encrypt        = true
  }

  required_providers {
    aws = { source = "hashicorp/aws", version = "~> 5.0" }
  }
}
```
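Because the bucket name is generated, hard-coding it in every module is awkward. Backend blocks can't reference variables, but they can be left partial and filled in at init time. A sketch of this pattern (the `backend.hcl` file name is an assumption, not a Terraform requirement):

```hcl
# backend.tf — partial configuration; shared settings supplied at init
terraform {
  backend "s3" {
    key     = "production/main.tfstate"
    encrypt = true
  }
}
```

Each module then runs `terraform init -backend-config=backend.hcl`, where the shared `backend.hcl` file holds the `bucket`, `region`, and `dynamodb_table` values in one place.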
2. State Data Sources (Cross-Module References)
```hcl
# Module A outputs the VPC ID and subnets
# infrastructure/networking/outputs.tf
output "vpc_id"             { value = aws_vpc.main.id }
output "private_subnet_ids" { value = aws_subnet.private[*].id }
```

```hcl
# Module B reads Module A's state
# infrastructure/application/main.tf
data "terraform_remote_state" "networking" {
  backend = "s3"
  config = {
    bucket = "viprasol-terraform-state-a1b2c3d4"
    key    = "production/networking.tfstate"
    region = "us-east-1"
  }
}

resource "aws_ecs_cluster" "main" {
  name = "viprasol-production"
}

# Use outputs from the networking module's state
resource "aws_ecs_service" "api" {
  cluster = aws_ecs_cluster.main.arn

  # Reference the VPC/subnets from the networking module
  network_configuration {
    subnets = data.terraform_remote_state.networking.outputs.private_subnet_ids
  }
}
```
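The `vpc_id` output travels the same way. A minimal illustrative use (the security group name is an assumption):

```hcl
# Illustrative: attach a security group to the networking module's VPC
resource "aws_security_group" "api" {
  name   = "api"
  vpc_id = data.terraform_remote_state.networking.outputs.vpc_id
}
```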
3. Workspaces for Environment Isolation
```hcl
# One codebase, multiple environments via workspaces
# infrastructure/main.tf
locals {
  # workspace = "default" (don't use default for prod), "staging", "production"
  env = terraform.workspace == "default" ? "dev" : terraform.workspace

  # Size config per environment
  instance_sizes = {
    dev        = "t3.micro"
    staging    = "t3.small"
    production = "r7g.xlarge"
  }

  db_instance_classes = {
    dev        = "db.t3.micro"
    staging    = "db.t3.small"
    production = "db.r8g.xlarge"
  }
}

resource "aws_instance" "api" {
  instance_type = local.instance_sizes[local.env]
  tags          = { Environment = local.env }
}

resource "aws_db_instance" "main" {
  instance_class      = local.db_instance_classes[local.env]
  multi_az            = local.env == "production"
  deletion_protection = local.env == "production"
  skip_final_snapshot = local.env != "production"
}
```
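One sharp edge: if someone applies from a workspace that isn't a key in these maps, the lookup fails with an index error at plan time. That loud failure is usually what you want, but if a fallback is preferable, `lookup()` takes an explicit default (a sketch; the local name is illustrative):

```hcl
locals {
  # Fall back to the smallest size for unknown workspaces instead of erroring
  api_instance_type = lookup(local.instance_sizes, local.env, "t3.micro")
}
```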
```shell
# Workspace commands
terraform workspace new staging
terraform workspace new production
terraform workspace list
#   default
# * staging
#   production
terraform workspace select production
terraform plan
terraform apply

# State file structure in S3 (workspace states live under env:/):
#   env:/staging/production/main.tfstate
#   env:/production/production/main.tfstate
```
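The `env:/` prefix is the S3 backend's default for workspace states. If your bucket layout needs something else, the backend accepts `workspace_key_prefix` (a sketch reusing the bucket and table names from above):

```hcl
terraform {
  backend "s3" {
    bucket               = "viprasol-terraform-state-a1b2c3d4"
    key                  = "production/main.tfstate"
    region               = "us-east-1"
    dynamodb_table       = "viprasol-terraform-locks"
    encrypt              = true
    workspace_key_prefix = "workspaces" # default is "env:"
  }
}
```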
4. terraform import for Existing Resources
```shell
# You have an existing RDS instance (created manually) and want Terraform to manage it
# Step 1: Write the resource block in your .tf file
# Step 2: Import the existing resource into state
terraform import aws_db_instance.main my-existing-rds-identifier

# For resources with composite IDs:
terraform import aws_s3_bucket_acl.example bucket-name,private
```

```hcl
# Terraform 1.5+ declarative import blocks — preferred over the CLI
# because they are version-controlled and reviewable
# infrastructure/imports.tf
import {
  id = "my-existing-rds-identifier"
  to = aws_db_instance.main
}

import {
  id = "viprasol-existing-bucket"
  to = aws_s3_bucket.static_assets
}

import {
  # aws_iam_role is imported by role name, not ARN
  id = "existing-role"
  to = aws_iam_role.app
}
```

```shell
# Generate resource config from imported state (Terraform 1.5+)
terraform plan -generate-config-out=generated.tf
# Review generated.tf, clean it up, then:
terraform apply # Performs the imports with the generated config
```
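After an import, confirm the written config actually matches the real resource before anyone else applies. `terraform plan -detailed-exitcode` makes this scriptable: exit 0 means no changes, 2 means the config and the imported resource disagree.

```shell
terraform plan -detailed-exitcode > /dev/null
case $? in
  0) echo "clean: config matches the imported resource" ;;
  2) echo "drift: adjust the resource block until the plan is empty" ;;
  *) echo "plan failed" ;;
esac
```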
5. Moved Blocks for Safe Refactoring
```hcl
# Before: resource was named "web"
#   resource "aws_instance" "web" { ... }
# After refactor: renamed to "api"
#   resource "aws_instance" "api" { ... }
#
# Without a moved block: Terraform would DESTROY web + CREATE api (downtime!)
# With a moved block: Terraform updates state only (no infrastructure change)
moved {
  from = aws_instance.web
  to   = aws_instance.api
}

# Also works for moving resources into modules:
moved {
  from = aws_security_group.app
  to   = module.networking.aws_security_group.app
}

# And for changing keys when migrating from count to for_each:
moved {
  from = aws_subnet.private[0]
  to   = aws_subnet.private["us-east-1a"]
}
```
6. State Surgery (Emergency Procedures)
```shell
# List all resources in state
terraform state list

# Show details of a specific resource
terraform state show aws_db_instance.main

# Remove a resource from state (stop managing it — doesn't destroy infrastructure)
# Use when: the resource was deleted manually and you want Terraform to forget it
terraform state rm aws_s3_bucket.legacy_bucket

# Move a resource within state (CLI equivalent of a moved block)
terraform state mv aws_instance.web aws_instance.api

# Pull current state to a local file (for inspection/backup)
terraform state pull > backup.tfstate

# Push state back (dangerous — overwrites remote state)
# Only use if state was corrupted and you have a clean backup
terraform state push backup.tfstate

# Force-unlock a stuck lock (use only if a CI job died while holding it)
terraform force-unlock LOCK_ID
```
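Since Terraform 1.7, `state rm` also has a declarative, reviewable counterpart: the `removed` block. A sketch of forgetting the legacy bucket from above without destroying it:

```hcl
# Terraform 1.7+ — declarative alternative to `terraform state rm`
removed {
  from = aws_s3_bucket.legacy_bucket

  lifecycle {
    destroy = false # forget from state; leave the real bucket untouched
  }
}
```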
7. CI/CD Pipeline Pattern
```yaml
# .github/workflows/terraform.yml
name: Terraform

on:
  push:
    branches: [main]
    paths: ['infrastructure/**']
  pull_request:
    paths: ['infrastructure/**']

jobs:
  plan:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # For OIDC
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS (OIDC — no long-lived keys)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.TF_ROLE_ARN }}
          aws-region: us-east-1

      - uses: hashicorp/setup-terraform@v3
        with: { terraform_version: "1.10.0" }

      - name: Terraform Init
        working-directory: infrastructure/production
        run: terraform init

      - name: Terraform Plan
        id: plan
        working-directory: infrastructure/production
        run: terraform plan -out=tfplan -no-color

      - name: Post plan to PR
        if: github.event_name == 'pull_request' # no issue number on push events
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: `\`\`\`\n${{ steps.plan.outputs.stdout }}\n\`\`\``
            });

  apply:
    needs: plan
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: production # Requires manual approval in GitHub
    permissions:
      id-token: write # OIDC needed here too
      contents: read
    steps:
      - uses: actions/checkout@v4

      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.TF_ROLE_ARN }}
          aws-region: us-east-1

      - uses: hashicorp/setup-terraform@v3

      - run: terraform init
        working-directory: infrastructure/production

      - run: terraform apply -auto-approve
        working-directory: infrastructure/production
```
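One gap worth noting: the apply job re-plans from scratch, so what lands on main can differ from the plan reviewed in the PR. A sketch of carrying the exact saved plan across jobs with artifacts (step placement is illustrative):

```yaml
# In the plan job, after "Terraform Plan":
- uses: actions/upload-artifact@v4
  with:
    name: tfplan
    path: infrastructure/production/tfplan

# In the apply job (after terraform init), instead of a fresh apply:
- uses: actions/download-artifact@v4
  with:
    name: tfplan
    path: infrastructure/production
- run: terraform apply tfplan
  working-directory: infrastructure/production
```

Saved plans go stale if infrastructure changes between plan and apply; Terraform refuses to apply a plan whose state has moved on, which is exactly the safety you want.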
Cost Reference
| State backend | Cost | Notes |
|---|---|---|
| S3 state storage | < $0.01/mo | Tiny files |
| S3 versioning | < $0.01/mo | Historical states |
| DynamoDB locking | < $0.01/mo | Pay-per-request, rarely used |
| Terraform Cloud (free) | $0 | 500 resources, 1 workspace |
| Terraform Cloud (Plus) | $20/user/mo | Unlimited resources, SSO |
See Also
- Terraform Modules: Reusable Infrastructure and Remote State
- AWS Step Functions: State Machines and Lambda Orchestration
- AWS ECS Fargate in Production: Task Definitions and Blue/Green Deploys
- Kubernetes Cost Optimization: Right-Sizing, Spot Nodes, and Autoscaling
- AWS Secrets Manager: Secret Rotation and Lambda Integration
Working With Viprasol
Running Terraform with local state, manual applies, or no environment isolation? We migrate your infrastructure to remote state with S3 + DynamoDB locking, set up workspace-based environment isolation, implement OIDC-based CI/CD pipelines with plan-in-PR and manual approval gates, and import any existing manually created resources so Terraform owns your full infrastructure.
About the Author
Viprasol Tech Team
Custom Software Development Specialists
The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.