Terraform repositories rarely become difficult to work with overnight. They usually start as a small root module that solves one problem, then grow release by release until networking, IAM, storage, and DNS all sit side by side in the same directory.
At that point, the code may still work, but the team starts treating it like glass. Names are inconsistent. Some resources were created manually outside Terraform. Nobody is completely sure which edits are safe, so even routine cleanup feels risky.
That caution is justified. Terraform does not infer that a refactor is only organizational. It tracks infrastructure objects by resource address in state. If you change the address without telling Terraform how to interpret that change, an innocent cleanup can turn into a plan to destroy and recreate a live object. moved blocks exist to prevent that by telling Terraform how to remap addresses during planning.
In this walkthrough, we will safely refactor a live configuration by doing three things:
- Moving an existing resource into a module without recreating it
- Bringing a manually created Route 53 record under Terraform management with an import block
- Adding a small native Terraform test to verify that the refactor preserved the module contract
The starting point
Assume you have a small AWS stack that already exists in production:
- a VPC managed by Terraform
- an application security group
- an IAM role
- an S3 bucket
- a Route 53 record that exists in AWS but was created manually and never imported
The repository is flat:
.
├── main.tf
├── variables.tf
├── outputs.tf
└── terraform.tfvars
This layout is common because it is easy to bootstrap. It also becomes awkward quickly. As more resources accumulate in one root module, ownership boundaries blur, reviews get noisier, and reuse gets harder. The problem is not cosmetic. Every change becomes harder to reason about.
So the refactor goal is narrow and practical:
- Move networking-related code into modules/network
- Preserve the identity of the existing live resources
- Start managing the existing Route 53 record in Terraform
- Add a lightweight test so the next refactor is less stressful
What makes Terraform refactors risky
Terraform matches configuration to real infrastructure through addresses stored in state, not through human intent. That is why “it is still the same security group” is not enough. If the object used to live at one address and now lives at another, Terraform needs to know that both addresses refer to the same remote object.
In practice, three kinds of changes usually introduce risk during a Terraform refactor: renaming a resource, moving a resource into a child module, and starting to manage an existing object that was created outside Terraform.
This article addresses each of those cases with a different mechanism: moved blocks for resource address changes, import blocks for existing unmanaged infrastructure, and terraform test for repeatable validation.
Step 1: Move a resource into a module without replacing it
Start with one small slice of the configuration. A good candidate is the application security group.
Before the refactor
Terraform knows the security group at this address:
aws_security_group.app
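For illustration, suppose the root module defines it like this (the name, description, and tag values here are assumptions, not taken from any particular repository):

```hcl
# main.tf (root module, before the refactor)
resource "aws_security_group" "app" {
  name        = "app-sg" # hypothetical name
  description = "Application security group"
  vpc_id      = aws_vpc.main.id

  tags = {
    "managed-by" = "terraform"
  }
}
```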
After the refactor
Create a child module at modules/network, move the security group there, and call the module from the root module:
.
├── main.tf
├── variables.tf
├── outputs.tf
├── terraform.tfvars
└── modules
    └── network
        ├── main.tf
        ├── variables.tf
        └── outputs.tf
Inside modules/network, place the aws_security_group resource and any related outputs. In the root module, replace the old resource block with a module "network" call and pass in whatever the module needs, such as VPC ID, naming values, or tags.
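A minimal sketch of that module call, with hypothetical variable names standing in for whatever your module actually needs:

```hcl
# main.tf (root module, after the refactor)
module "network" {
  source = "./modules/network"

  # These inputs are illustrative; match them to the
  # variables your modules/network/variables.tf declares.
  vpc_id      = aws_vpc.main.id
  name_prefix = "app"
  common_tags = local.common_tags
}
```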
That changes the resource address from:
aws_security_group.app
to:
module.network.aws_security_group.app
Without extra guidance, Terraform sees one address disappear and another appear. That can look like a destroy-and-create cycle.
Add a moved block in the root module to map the old address to the new one:
moved {
  from = aws_security_group.app
  to   = module.network.aws_security_group.app
}
A moved block tells Terraform to remap the object from the old address to the new one before planning, instead of treating the change as a replacement. This is the core mechanism for safe refactors involving resource and module addresses.
What to look for in the plan
At this stage, terraform plan should show the address remap and little to no infrastructure change.
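Exact formatting varies by Terraform version, but a clean move looks roughly like this in the plan output:

```
Terraform will perform the following actions:

  # aws_security_group.app has moved to
  # module.network.aws_security_group.app
    resource "aws_security_group" "app" {
        name = "app-sg"
        # (unchanged attributes hidden)
    }

Plan: 0 to add, 0 to change, 0 to destroy.
```

If the plan instead shows a destroy and a create for the same object, stop and check the moved block before applying anything.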
That is the pattern to follow for the rest of the refactor:
- move one logical group at a time
- add the corresponding moved block
- run terraform plan
- keep behavior changes out of the same commit
That last point matters. A refactor is much easier to verify when it changes structure, not behavior.
Step 2: Import an existing Route 53 record into Terraform
The second problem is different. Here, the Route 53 record already exists in AWS, but Terraform does not manage it yet.
This is what import blocks are for. In Terraform v1.5 and later, you can declare the resource in configuration and include an import block so the import happens through the normal reviewed workflow instead of as an ad hoc CLI action.
Declare the resource in configuration:
resource "aws_route53_record" "app" {
  zone_id = var.zone_id
  name    = "app.example.com"
  type    = "A"
  ttl     = 300
  records = ["203.0.113.10"]
}
Add the import block:
import {
  to = aws_route53_record.app
  id = "Z1234567890ABC_app.example.com_A"
}
For aws_route53_record, the AWS provider documents the import ID as the hosted zone ID, record name, and record type separated by underscores, with a set identifier appended when needed.
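If writing the matching resource block by hand feels error-prone, Terraform v1.5 and later can also generate a starting configuration for objects declared in import blocks. The generated file is a draft, not a finished module, so review and clean it up before committing:

```shell
# Plan the import and write generated configuration for
# imported resources to generated.tf for review
terraform plan -generate-config-out=generated.tf
```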
Reconcile configuration with reality
This is the step teams often miss: importing does not mean Terraform and AWS now agree. It means Terraform now associates that configuration address with the existing remote object.
If the configuration does not match the live record exactly, the next plan can still show drift or proposed updates. That is normal. The import attaches the object to state. It does not automatically align mismatched arguments.
That makes the workflow:
- Declare the resource
- Import it
- Review the next plan carefully
- Decide whether Terraform should update the record or whether the configuration should be adjusted to match reality
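For example, if the declared ttl does not match the live record, the post-import plan might look roughly like this (illustrative output; exact formatting varies by provider and Terraform version):

```
  # aws_route53_record.app will be updated in-place
  ~ resource "aws_route53_record" "app" {
      ~ ttl = 600 -> 300
        # (unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.
```

An update like this is only correct if you actually want Terraform to change the record; otherwise, edit the configuration to match reality until the plan is empty.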
Step 3: Add a native Terraform test for the refactor
A clean plan is necessary, but it does not always build confidence. Teams still want to know that the module contract stayed intact: do outputs still have the same names, do required tags still exist, and do downstream callers still get what they expect?
Terraform’s native test framework is a good fit for that. Test files use the .tftest.hcl extension, support run blocks with either plan or apply, and let you assert against resource values and outputs. The framework is available in Terraform v1.6 and later.
A simple contract-style test might look like this:
# tests/network.tftest.hcl
run "network_contract" {
  command = plan

  assert {
    condition     = output.app_security_group_name == "app-sg"
    error_message = "Security group name changed unexpectedly."
  }

  assert {
    condition     = output.common_tags["managed-by"] == "terraform"
    error_message = "Required tags are missing."
  }
}
This kind of test is intentionally narrow. It does not try to validate every detail of the module. It checks the promises the module makes to its callers.
One important nuance: terraform test uses separate test state instead of your live workspace state, so it does not directly validate production state. Also, while a plan-based test like the example above is lightweight, Terraform tests can create short-lived real infrastructure when run with apply.
That makes these tests useful for refactors because they turn assumptions into repeatable checks:
- expected outputs still exist
- naming rules still hold
- required tags are still present
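Running the checks is a single command. By default, terraform test discovers .tftest.hcl files in the current directory and in tests/; the -filter flag narrows a run to one file:

```shell
# Run every test file the configuration defines (Terraform v1.6+)
terraform test

# Run only the network contract test
terraform test -filter=tests/network.tftest.hcl
```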
The broader win is cultural. Future cleanup is no longer protected only by caution and tribal knowledge.
What the final plan should look like
By the end of the refactor, the codebase should be easier to understand, but the infrastructure should still be the same infrastructure.
You should expect:
- networking resources to live under modules/network
- the Route 53 record to be tracked in Terraform state
- moved blocks to explain address changes
- a final plan that is empty or contains only small, deliberate alignment changes
That is the real definition of a good Terraform refactor: better structure, clearer ownership, and no surprise churn in production.
Wrapping up
The safest Terraform refactor is not the one that moves the most files. It is the one that makes the code easier to maintain while production stays uneventful.
Use moved blocks when a resource or module address changes so Terraform can remap state instead of planning a replacement. Use import blocks to bring existing infrastructure under management in a reviewable way. Use terraform test to codify the module contract so the next cleanup depends less on memory and caution.
That sequence turns a risky refactor into a controlled one: preserve identity first, bring unmanaged resources into state second, and add guardrails for whatever comes next.