What is Terraform?
Terraform is an Infrastructure as Code (IaC) tool that lets you define, provision, and manage cloud and on-premises infrastructure using declarative configuration files. Instead of clicking through cloud consoles, you write code that describes the desired state of your infrastructure — Terraform figures out how to get there.
Core concepts
Providers are plugins that let Terraform talk to external APIs — AWS, Azure, GCP, Kubernetes, GitHub, Datadog, and hundreds more. Each provider exposes resources you can manage.
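A provider is declared in configuration before its resources can be used. A minimal sketch (the `hashicorp/aws` source address and `~> 5.0` constraint are illustrative choices, not requirements):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"   # registry address of the provider plugin
      version = "~> 5.0"          # accept any 5.x release
    }
  }
}

provider "aws" {
  region = "us-east-1"            # provider-level configuration
}
```

`terraform init` reads this block and downloads the pinned provider into `.terraform/`.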
Resources are the individual infrastructure objects you declare — an EC2 instance, a DNS record, a database, a Kubernetes namespace. Each resource block says "this thing should exist with these properties."
State is how Terraform tracks what it has already created. It stores a JSON file (locally or remotely) mapping your config to real-world resources. This is what lets it calculate diffs.
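A heavily trimmed, illustrative excerpt of what a state entry looks like — real state files contain many more fields and should never be edited by hand:

```json
{
  "resources": [
    {
      "type": "aws_instance",
      "name": "web",
      "instances": [
        {
          "attributes": {
            "id": "i-0abc123def456",
            "instance_type": "t3.micro"
          }
        }
      ]
    }
  ]
}
```

The mapping from `aws_instance.web` in config to the real `i-0abc…` instance ID is what lets `terraform plan` compute a diff instead of recreating things.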
The plan/apply cycle is the core workflow:
- `terraform init` — download providers and modules
- `terraform plan` — show what would change, without changing anything
- `terraform apply` — make the changes
- `terraform destroy` — tear everything down
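In practice, a common variant of the cycle saves the plan to a file so that apply executes exactly what was reviewed (the `-out` flag and plan-file argument are standard Terraform CLI usage):

```shell
terraform init                # install providers and modules into .terraform/
terraform plan -out=tfplan    # write the proposed changes to a plan file
terraform apply tfplan        # apply exactly the reviewed plan, no re-prompt
terraform destroy             # remove everything this config manages
```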
Modules are reusable bundles of configuration — like functions in a programming language. You write a VPC module once and call it for dev, staging, and prod with different variables.
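A sketch of calling one hypothetical local VPC module twice with different inputs — module name, path, and variables are all illustrative:

```hcl
module "vpc_dev" {
  source     = "./modules/vpc"   # hypothetical local module
  cidr_block = "10.0.0.0/16"
  env        = "dev"
}

module "vpc_prod" {
  source     = "./modules/vpc"   # same code, different inputs
  cidr_block = "10.1.0.0/16"
  env        = "prod"
}
```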
How it works — the execution flow
HCL — the language
Terraform uses HashiCorp Configuration Language (HCL), a declarative, human-readable format. A basic resource looks like:
```hcl
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"

  tags = {
    Name = "web-server"
  }
}
```

You declare what you want, not how to create it. References between resources (`aws_instance.web.id`) automatically create dependency edges, so Terraform builds and applies resources in the correct order — and parallelizes when there are no dependencies.
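For example, an instance that references a security group's ID (resource names here are illustrative) gets an implicit dependency edge, so Terraform creates the group before the instance without any explicit ordering:

```hcl
resource "aws_security_group" "web_sg" {
  name = "web-sg"
}

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t3.micro"

  # Referencing web_sg.id creates the dependency edge automatically
  vpc_security_group_ids = [aws_security_group.web_sg.id]
}
```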
The HashiCorp ecosystem
Terraform is one piece of a broader platform; the "When to use what" table below summarizes how the tools relate.
Key workflows and patterns
Remote state — in any real team setup, state lives in a shared backend (S3 + DynamoDB for locking, GCS, or HCP Terraform) rather than on one developer's laptop. This prevents two people from applying at the same time and corrupting state.
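A typical S3 backend configuration with DynamoDB-based locking — bucket, key, and table names are placeholder values:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tf-state"             # assumed bucket name
    key            = "prod/terraform.tfstate"  # path to this config's state
    region         = "us-east-1"
    dynamodb_table = "tf-locks"                # acquires a lock during apply
    encrypt        = true
  }
}
```

With this in place, a second `terraform apply` blocks until the first releases the lock, instead of both writing state concurrently.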
Workspaces allow multiple state files from the same config — useful for managing dev/staging/prod environments without duplicating code.
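Workspaces are managed from the CLI; each one gets its own state file for the same configuration:

```shell
terraform workspace new staging      # create and switch to a new workspace
terraform workspace select staging   # switch between existing workspaces
terraform workspace list             # show all workspaces, * marks current
```

Inside configuration, the current workspace name is available as `terraform.workspace`, which can drive environment-specific values.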
Variable files (.tfvars) let you parameterize a config and pass different values per environment: the same module code, different instance sizes and region settings.
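A sketch of two per-environment variable files (values are illustrative):

```hcl
# dev.tfvars
instance_type = "t3.micro"
region        = "us-east-1"

# prod.tfvars
instance_type = "t3.large"
region        = "us-west-2"
```

Selected at apply time with `terraform apply -var-file="dev.tfvars"`.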
Policy as Code with Sentinel or OPA — HCP Terraform can enforce policies before apply runs, blocking, say, any instance type larger than t3.large in dev, or any S3 bucket without versioning enabled.
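A rough Sentinel sketch of the instance-size rule described above — treat the import path and filter expression as an approximation of the `tfplan/v2` API rather than a tested policy:

```sentinel
import "tfplan/v2" as tfplan

# Collect every aws_instance the plan would create or change
instances = filter tfplan.resource_changes as _, rc {
	rc.type is "aws_instance" and rc.mode is "managed"
}

# Block anything larger than t3.large
main = rule {
	all instances as _, i {
		i.change.after.instance_type in ["t3.micro", "t3.small", "t3.medium", "t3.large"]
	}
}
```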
Terraform with Vault is a very common pattern: Vault generates short-lived AWS credentials at plan/apply time, so no long-lived secrets ever sit in your CI environment.
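A sketch of the pattern using the Vault provider's AWS secrets engine data source — the `backend` mount path and `role` name are assumptions about your Vault setup:

```hcl
# Fetch short-lived AWS credentials from Vault at plan/apply time
data "vault_aws_access_credentials" "creds" {
  backend = "aws"      # mount path of Vault's AWS secrets engine
  role    = "deploy"   # assumed Vault role with the needed IAM policy
}

provider "aws" {
  region     = "us-east-1"
  access_key = data.vault_aws_access_credentials.creds.access_key
  secret_key = data.vault_aws_access_credentials.creds.secret_key
}
```

The credentials expire after their Vault lease ends, so nothing long-lived needs to be stored in CI.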
Typical project structure
```
my-infra/
├── main.tf            # core resources
├── variables.tf       # input variable declarations
├── outputs.tf         # values to expose after apply
├── versions.tf        # provider version constraints
├── terraform.tfvars   # variable values (gitignored for secrets)
└── modules/
    └── vpc/           # reusable module
        ├── main.tf
        ├── variables.tf
        └── outputs.tf
```

When to use what
| Situation | Tool |
|---|---|
| Provision cloud infra | Terraform (open source) |
| Team collaboration, policy, audit | HCP Terraform |
| Self-host the control plane | Terraform Enterprise |
| Build VM/container images | Packer |
| Manage secrets and creds | Vault |
| Service discovery / mesh | Consul |
| Schedule workloads (non-K8s) | Nomad |
The core loop — write config → plan → apply → state — is simple, but Terraform's real power comes from modules, remote state, and its massive provider ecosystem (over 3,000 providers on the registry). It's the de facto standard for declarative cloud infrastructure management.