
How to manage multiple infrastructure environments with Terraform

Chafik Belhaoues June 15, 2025
Expert reviewed
Category: Platform Engineering
Audience: developers, DevOps engineers, cloud architects
12 min read
Level: intermediate
Type: guide

Introduction

Have you ever felt like you’re juggling flaming torches while riding a unicycle when managing different infrastructure environments? You’re not alone. Whether you’re dealing with development, staging, and production environments, or perhaps even more complex setups with multiple regions and client-specific configurations, keeping everything in sync can feel like herding cats in a thunderstorm.

Here’s the thing: Terraform, our beloved infrastructure-as-code tool, can either be your best friend or your worst nightmare when it comes to multi-environment management. The difference? How you set it up from the get-go. Let me walk you through the strategies that’ll save you countless hours of head-scratching and late-night debugging sessions.

Why multiple environments matter more than you think

Before we dive into the how-to, let’s talk about the why. Remember when you accidentally pushed that “minor” configuration change directly to production? Yeah, we’ve all been there. Multiple environments aren’t just a nice-to-have; they’re your safety net, your testing ground, and your sanity preserver all rolled into one.

Think of your infrastructure environments like dress rehearsals before opening night. Your development environment is where you experiment freely, staging is your final dress rehearsal, and production? That’s showtime, baby. You wouldn’t want to test your new spotlight configuration for the first time when the audience is already seated, would you?

The workspace strategy: Terraform’s built-in solution

Let’s start with Terraform workspaces – the built-in feature that many developers discover way too late in their journey. Workspaces are like parallel universes for your infrastructure state. Each workspace maintains its own state file, allowing you to deploy the same configuration to multiple environments without the files stepping on each other’s toes.

Here’s how you can leverage workspaces effectively:

# Create a new workspace (this also switches to it)
terraform workspace new staging

# Or switch to an existing workspace
terraform workspace select staging

# Use workspace-specific variables (this part lives in your .tf files)
variable "instance_type" {
  type = map(string)
  default = {
    dev     = "t2.micro"
    staging = "t2.small"
    prod    = "t2.large"
  }
}

resource "aws_instance" "app_server" {
  instance_type = lookup(var.instance_type, terraform.workspace, "t2.micro")
  # ... other configuration
}

The beauty of this approach? You’re using the same code for all environments, which means consistency is baked right in. However, I’ll be honest with you – workspaces aren’t perfect for every scenario. They work brilliantly for simple setups, but when you need significant configuration differences between environments, things can get messy quickly.

Directory structure: The organizational powerhouse

Now, if workspaces feel too constraining for your needs, let’s talk about the directory structure approach. This is where you create separate directories for each environment, giving you ultimate flexibility while maintaining some level of code reuse through modules.

Picture your repository structure like a well-organized closet:

terraform/
├── modules/
│   ├── networking/
│   ├── compute/
│   └── database/
├── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── terraform.tfvars
│   ├── staging/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── terraform.tfvars
│   └── prod/
│       ├── main.tf
│       ├── variables.tf
│       └── terraform.tfvars

Each environment directory becomes its own little kingdom, complete with its own state file and variable definitions. You’re calling the same modules but with different parameters – like using the same recipe but adjusting the spice levels for different taste preferences.
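To make that concrete, here's roughly what an environment's entry point could look like. This is a minimal sketch based on the tree above; the module inputs and the networking output name are assumptions, not prescribed names.

# environments/dev/main.tf - each directory gets its own state and variables
provider "aws" {
  region = var.aws_region
}

module "networking" {
  source      = "../../modules/networking"
  environment = "dev"
  cidr_block  = "10.10.0.0/16"
}

module "compute" {
  source        = "../../modules/compute"
  environment   = "dev"
  instance_type = var.instance_type
  # Assumes the networking module exposes this output
  subnet_ids    = module.networking.private_subnet_ids
}

The staging and prod directories would repeat the same module calls with their own values, which is exactly where the .tfvars files in the next section come in.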

Variable files: The configuration maestros

Whether you’re team workspace or team directory structure, variable files are your secret weapon for environment-specific configurations. Instead of hardcoding values (please, for the love of all that is holy, don’t do this), you create separate .tfvars files for each environment.

Here’s a practical example that’ll make your life easier:

# dev.tfvars
environment = "dev"
instance_count = 1
enable_monitoring = false
database_instance_class = "db.t3.micro"

# prod.tfvars
environment = "prod"
instance_count = 3
enable_monitoring = true
database_instance_class = "db.r5.large"

Then, when you’re ready to deploy, you simply point Terraform to the right file:

terraform apply -var-file="environments/prod.tfvars"

It’s like having different settings profiles on your phone – one for work, one for home, and one for when you’re trying to preserve battery life.
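One thing to keep in mind: those values only take effect for variables that are actually declared in your configuration. A minimal variables.tf sketch matching the files above might look like this (the types and defaults are my assumptions):

# variables.tf
variable "environment" {
  type = string
}

variable "instance_count" {
  type    = number
  default = 1
}

variable "enable_monitoring" {
  type    = bool
  default = false
}

variable "database_instance_class" {
  type    = string
  default = "db.t3.micro"
}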

Remote state management: The collaboration enabler

Let me tell you a horror story. Once upon a time, a team stored their Terraform state files locally. Developer A made changes, Developer B made different changes, and when they tried to reconcile… well, let’s just say the infrastructure had an identity crisis.

Remote state backends are non-negotiable for multi-environment setups. Whether you’re using AWS S3, Azure Blob Storage, or Terraform Cloud, keeping your state files centralized and locked prevents the kind of conflicts that make grown developers cry.

Here’s a rock-solid S3 backend configuration (shown here for the production environment):

terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    # Backend blocks can't interpolate variables, so the key is set
    # per environment (or passed in via -backend-config at init time)
    key    = "env/prod/terraform.tfstate"
    region = "us-east-1"

    # Enable state locking
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}

Notice how the environment name is baked into the key path? That’s how you keep your state files organized and separated, even when they’re all living in the same bucket. One caveat: backend blocks can’t reference Terraform variables, so each environment either hardcodes its own key or passes it in at init time with -backend-config.
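If you’d rather not repeat the bucket and region in every environment, partial backend configuration is a common pattern: declare an empty backend "s3" {} block and supply the values when you run init. A hedged sketch, with a file name that is my own convention:

# backend.hcl - shared settings, committed once
bucket         = "my-terraform-state"
region         = "us-east-1"
dynamodb_table = "terraform-state-lock"
encrypt        = true

# At init time, combine the shared file with an environment-specific key
terraform init \
  -backend-config=backend.hcl \
  -backend-config="key=env/staging/terraform.tfstate"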

Modules: Write once, deploy everywhere

If you find yourself copying and pasting Terraform code between environments, stop right there. You’re doing it wrong, and there’s a better way. Modules are like Lego blocks for your infrastructure – reusable, composable, and incredibly powerful when used correctly.

Create modules for common infrastructure patterns:

# modules/web-app/main.tf
resource "aws_instance" "app" {
  count         = var.instance_count
  instance_type = var.instance_type
  # ami, networking, and other required arguments omitted for brevity

  tags = {
    Name        = "${var.environment}-app-${count.index}"
    Environment = var.environment
  }
}

# environments/staging/main.tf
module "web_app" {
  source = "../../modules/web-app"
  
  environment    = "staging"
  instance_count = 2
  instance_type  = "t3.small"
}

Think of modules as your infrastructure recipes. You wouldn’t rewrite a chocolate cake recipe every time you bake one, right? You’d just adjust the quantities based on how many people you’re serving.
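For completeness, the module would also declare the inputs it expects in its own variables.tf. A short sketch (the defaults are my assumptions):

# modules/web-app/variables.tf
variable "environment" {
  type = string
}

variable "instance_count" {
  type    = number
  default = 1
}

variable "instance_type" {
  type    = string
  default = "t3.micro"
}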

Environment promotion strategies

Now, here’s where the rubber meets the road. How do you actually promote changes from dev to staging to production without breaking into a cold sweat? The key is having a solid promotion strategy that fits your team’s workflow.

One approach I’ve seen work wonders is the branch-based strategy. Each environment gets its own branch:

  • main branch → Production
  • staging branch → Staging
  • develop branch → Development

Changes flow upward through pull requests, getting tested at each level. It’s like a waterfall, but one where you can actually swim back upstream if something goes wrong.

Another approach is using Git tags for production deployments. You test everything in lower environments, and when you’re ready for prime time, you tag the commit and deploy from that tag. It’s like putting a bookmark in your infrastructure’s history – you always know exactly what’s running where.
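In practice, the tag-based flow can be as simple as the following shell sketch (the tag name and directory layout are assumptions):

# Tag the commit that passed staging and push the tag
git tag -a v1.4.0 -m "Promote to production"
git push origin v1.4.0

# In the production pipeline (or locally), deploy from that exact tag
git checkout v1.4.0
terraform -chdir=environments/prod init
terraform -chdir=environments/prod plan -out=prod.tfplan
terraform -chdir=environments/prod apply prod.tfplan

Because the plan is generated and applied from the same tagged commit, what reaches production is exactly what was reviewed.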

Secrets and sensitive data handling

Let’s talk about the elephant in the room – secrets. Managing sensitive data across multiple environments is like trying to keep surprise party plans secret in a house full of kids. It’s challenging, but absolutely crucial.

Never, and I mean never, commit secrets to your repository. Instead, use environment-specific secret management:

# Pull secrets from AWS Secrets Manager
data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = "${var.environment}-db-password"
}

resource "aws_db_instance" "database" {
  # Assumes the secret is stored as a JSON object with a "password" key
  password = jsondecode(data.aws_secretsmanager_secret_version.db_password.secret_string)["password"]
  # ... other configuration
}

Each environment gets its own set of secrets, stored securely and accessed dynamically. It’s like having different keys for different locks – you wouldn’t use your house key for your car, would you?
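The secrets themselves are created and rotated outside of Terraform, so they never touch your repository. For example, with the AWS CLI (the secret name simply follows the convention used above):

# Create the environment-specific secret out of band
aws secretsmanager create-secret \
  --name staging-db-password \
  --secret-string '{"password":"REPLACE_ME"}'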

Testing and validation across environments

Here’s a truth bomb for you: if you’re not testing your Terraform code before applying it, you’re playing Russian roulette with your infrastructure. But testing across multiple environments? That’s where things get interesting.

Start with terraform validate and terraform plan – they’re your first line of defense. But don’t stop there. Consider implementing:

  • Pre-commit hooks that catch issues before they even make it to version control
  • Automated testing with tools like Terratest for your modules
  • Policy as code with Sentinel or Open Policy Agent to enforce standards

Think of it as quality control in a factory. You wouldn’t ship a product without testing it, so why would you ship infrastructure changes without the same rigor?
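As a starting point, the terraform validate and terraform plan checks mentioned above are easy to script across every environment. A minimal shell sketch, assuming the directory layout from earlier:

# Formatting check for the whole repository
terraform fmt -check -recursive

# Validate each environment without touching remote state
for env in dev staging prod; do
  terraform -chdir="environments/${env}" init -backend=false -input=false
  terraform -chdir="environments/${env}" validate
done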

Monitoring and observability

Managing multiple environments isn’t just about deployment – it’s about knowing what’s happening in each environment at all times. Set up environment-specific monitoring and alerting that scales with your infrastructure’s criticality.

Your dev environment might just need basic monitoring, but production? That needs the works – detailed metrics, logs, traces, and alerts that’ll wake you up at 3 AM if something goes sideways (though hopefully, with good practices, those 3 AM wake-up calls become extinct).
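One way to express that difference in criticality directly in your code is to gate the noisier alerts behind the enable_monitoring flag from your .tfvars files. A hedged sketch (the thresholds and names are assumptions):

# Only create the pager-level alarm where monitoring is enabled
resource "aws_cloudwatch_metric_alarm" "high_cpu" {
  count               = var.enable_monitoring ? 1 : 0
  alarm_name          = "${var.environment}-app-high-cpu"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80
  period              = 300
  evaluation_periods  = 3
}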

Conclusion

Managing multiple infrastructure environments with Terraform doesn’t have to feel like you’re conducting an orchestra while blindfolded. With the right strategies – whether it’s workspaces, directory structures, or a hybrid approach – you can create a setup that’s both flexible and maintainable. Remember, the goal isn’t perfection on day one; it’s continuous improvement. Start with what makes sense for your team, and evolve as your needs grow. Your future self (and your teammates) will thank you for taking the time to set things up properly now.

FAQs

What’s the main difference between using Terraform workspaces and separate directories for environments?

Workspaces share the same Terraform configuration but maintain separate state files, making them ideal for environments with minimal configuration differences. Separate directories give you complete flexibility to have entirely different configurations per environment but require more maintenance. Choose workspaces for simplicity when environments are similar, and directories when you need significant configuration variations.

How do I handle provider configurations that differ between environments?

Create provider aliases for different environments and use variables to dynamically configure them. You can pass environment-specific endpoints, regions, or credentials through variable files, ensuring each environment connects to the right resources without hardcoding values in your configuration.
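For example, the region can come straight from each environment’s variable file, with an alias for the cases where one environment spans two regions (a minimal sketch; variable names are assumptions):

variable "aws_region" {
  type    = string
  default = "us-east-1"
}

variable "secondary_region" {
  type    = string
  default = "us-west-2"
}

provider "aws" {
  region = var.aws_region
}

# Aliased provider for resources that live in a second region
provider "aws" {
  alias  = "secondary"
  region = var.secondary_region
}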

Should I use the same modules across all environments or create environment-specific modules?

Always strive to use the same modules across all environments with parameterization handling the differences. This ensures consistency and reduces maintenance overhead. Only create environment-specific modules when there’s a fundamental architectural difference that can’t be handled through variables.

How can I prevent accidental deployments to the wrong environment?

Implement safeguards like required confirmation prompts for production deployments, use separate AWS profiles or credentials for each environment, add environment prefixes to all resource names, and consider using tools like Terraform Cloud or Atlantis that add approval workflows to your deployment process.

What’s the best practice for handling terraform state file migrations when restructuring environments?

Plan state migrations carefully by first backing up all state files, use terraform state mv commands to relocate resources, test migrations in a lower environment first, and document the migration process. Consider using terraform import for complex migrations where moving state directly might be risky.
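A typical migration step, with hypothetical resource addresses, might look like this:

# Back up the current state before touching anything
terraform state pull > backup-$(date +%F).tfstate

# Move a resource to its new address (here, into a module)
terraform state mv aws_instance.app module.web_app.aws_instance.app

# Confirm the plan shows no unexpected changes afterwards
terraform plan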

Chafik Belhaoues

Cloud Architect and Former SRE
LinkedIn

Chafik has more than 20 years of experience in the IT industry, including 12 years as a Senior Site Reliability Engineer (SRE) and 8 years as a Cloud Architect. He is the founder of Brainboard, focusing on cloud infrastructure and DevOps automation.
