---
title: "Converting kubectl Manifests to Terraform: From Manual Applies to Infrastructure as Code"
description: "Step-by-step guide for converting a working minikube setup into Terraform IaC, covering resource export, field cleanup, provider configuration, module organization, and state management."
url: https://agent-zone.ai/knowledge/kubernetes/converting-kubectl-to-terraform/
section: knowledge
date: 2026-02-22
categories: ["kubernetes"]
tags: ["terraform","kubectl","infrastructure-as-code","migration","kubernetes-provider"]
skills: ["terraform-authoring","manifest-conversion","module-design"]
tools: ["terraform","kubectl","helm"]
levels: ["intermediate"]
word_count: 719
formats:
  json: https://agent-zone.ai/knowledge/kubernetes/converting-kubectl-to-terraform/index.json
  html: https://agent-zone.ai/knowledge/kubernetes/converting-kubectl-to-terraform/?format=html
  api: https://api.agent-zone.ai/api/v1/knowledge/search?q=Converting+kubectl+Manifests+to+Terraform%3A+From+Manual+Applies+to+Infrastructure+as+Code
---


# Converting kubectl Manifests to Terraform

You have a Kubernetes setup built with `kubectl apply -f`. It works, but there is no state tracking, no dependency graph, and no reliable way to reproduce it. Terraform fixes all three problems.

## Step 1: Export Existing Resources

Start by extracting what you have. For each resource type, export the YAML:

```bash
kubectl get deployment,service,configmap,ingress -n my-app -o yaml > exported.yaml
```

For a single resource with cleaner output:

```bash
kubectl get deployment my-app -n my-app -o yaml > deployment.yaml
```
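
If you prefer one file per resource, a small loop works (a sketch; adjust the kinds to whatever your namespace actually runs):

```bash
# Export each resource of each kind into its own file
for kind in deployment service configmap ingress; do
  for name in $(kubectl get "$kind" -n my-app -o name); do
    # $name looks like "deployment.apps/my-app"; replace "/" for the filename
    kubectl get "$name" -n my-app -o yaml > "${name//\//-}.yaml"
  done
done
```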

## Step 2: Clean Up Kubernetes-Generated Fields

Exported manifests contain fields that Kubernetes manages internally. These must be removed before converting to Terraform, or you will get perpetual diffs on every plan.

Remove these fields from every resource:

```yaml
# DELETE all of these from exported YAML:
metadata:
  resourceVersion: "12345"     # Server-managed version
  uid: "abc-123-def"           # Server-assigned unique ID
  creationTimestamp: "..."     # Server-set timestamp
  generation: 2                # Server-tracked generation
  managedFields: [...]         # Field ownership tracking
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: "..."  # Added by kubectl apply
status: {}                     # Entire status block
```

A quick `yq` command strips them in bulk:

```bash
yq eval 'del(.metadata.resourceVersion, .metadata.uid,
  .metadata.creationTimestamp, .metadata.generation,
  .metadata.managedFields, .status,
  .metadata.annotations["kubectl.kubernetes.io/last-applied-configuration"])' deployment.yaml
```
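
Note that the bulk export from Step 1 is a `v1` `List`, so the same fields live under `.items[]` and the `del` paths shift accordingly:

```bash
yq eval 'del(.items[].metadata.resourceVersion, .items[].metadata.uid,
  .items[].metadata.creationTimestamp, .items[].metadata.generation,
  .items[].metadata.managedFields, .items[].status)' exported.yaml
```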

## Step 3: Configure the Kubernetes Provider

Set up the Terraform provider to talk to your cluster:

```hcl
# providers.tf
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.25"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.12"
    }
  }
}

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "minikube"
}

provider "helm" {
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = "minikube"
  }
}
```

For production, replace `config_path` with `host`, `token`, and `cluster_ca_certificate` sourced from your cloud provider's Terraform outputs.
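
With EKS, for example, the wiring might look like this (a sketch; the data source names assume an `aws_eks_cluster` lookup defined elsewhere in your config):

```hcl
provider "kubernetes" {
  host                   = data.aws_eks_cluster.main.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.main.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.main.token
}
```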

## Step 4: Convert Manifests to Terraform Resources

Here is the before and after. A manually applied Deployment:

```bash
# Before: manual kubectl apply
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: ghcr.io/myorg/my-app:v1.2.0
        ports:
        - containerPort: 8080
EOF
```

Becomes a Terraform resource:

```hcl
# applications/main.tf
resource "kubernetes_deployment_v1" "my_app" {
  metadata {
    name      = "my-app"
    namespace = "my-app"
  }

  spec {
    replicas = 3
    selector {
      match_labels = { app = "my-app" }
    }
    template {
      metadata {
        labels = { app = "my-app" }
      }
      spec {
        container {
          name  = "my-app"
          image = "ghcr.io/myorg/my-app:v1.2.0"
          port {
            container_port = 8080
          }
        }
      }
    }
  }
}
```
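
The Service that fronts this Deployment converts the same way (a sketch mirroring the labels above):

```hcl
resource "kubernetes_service_v1" "my_app" {
  metadata {
    name      = "my-app"
    namespace = "my-app"
  }

  spec {
    selector = { app = "my-app" }
    port {
      port        = 80
      target_port = 8080
    }
  }
}
```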

## When to Use Helm Provider vs kubernetes_manifest

**Use the `helm_release` resource** when a community chart already exists for what you need (PostgreSQL, Redis, NGINX Ingress, Prometheus). It handles templating and upgrades:

```hcl
resource "helm_release" "postgresql" {
  name       = "dt-postgresql"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "postgresql"
  namespace  = "my-app"

  set { name = "auth.database"; value = "mydb" }
  set { name = "auth.username"; value = "myuser" }
}
```

**Use `kubernetes_deployment_v1` and typed resources** for your own application manifests. Terraform validates the schema, catches typos at plan time, and provides meaningful diffs.

**Use `kubernetes_manifest`** only for CRDs or resources without a typed Terraform equivalent. It takes the manifest as an HCL object (typically `yamldecode` of your YAML), gives weaker type checking, and needs a reachable cluster at plan time.
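
A minimal sketch (the file path and resource are illustrative):

```hcl
resource "kubernetes_manifest" "app_monitor" {
  manifest = yamldecode(file("${path.module}/manifests/service-monitor.yaml"))
}
```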

## Module Organization

Structure your Terraform into logical modules:

```
terraform/
  main.tf              # Provider config, module calls
  variables.tf         # Cluster-wide variables
  modules/
    networking/        # Ingress, NetworkPolicies, Services
    databases/         # Helm releases for PostgreSQL, Redis
    applications/      # Your app Deployments, Services
    monitoring/        # Prometheus, Grafana Helm releases
```

Each module has its own `main.tf`, `variables.tf`, and `outputs.tf`. The root module wires them together:

```hcl
module "databases" {
  source    = "./modules/databases"
  namespace = var.namespace
}

module "applications" {
  source       = "./modules/applications"
  namespace    = var.namespace
  db_host      = module.databases.postgresql_host
  depends_on   = [module.databases]
}
```
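
The `db_host` wiring assumes the databases module exposes an output. A sketch (the in-cluster DNS name depends on how the chart names its Service):

```hcl
# modules/databases/outputs.tf
output "postgresql_host" {
  value = "${helm_release.postgresql.name}.${var.namespace}.svc.cluster.local"
}
```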

## State Management Considerations

Terraform state tracks every resource it manages. For Kubernetes workloads, keep these points in mind:

**Import existing resources** before running `terraform apply` to avoid duplicates:

```bash
terraform import kubernetes_deployment_v1.my_app my-app/my-app
```
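
On Terraform 1.5+ you can declare the same import in configuration instead, which makes it reviewable and repeatable:

```hcl
import {
  to = kubernetes_deployment_v1.my_app
  id = "my-app/my-app"
}
```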

**Use remote state** from the start. An S3 bucket with DynamoDB locking, or Terraform Cloud, prevents state corruption when multiple people run applies.
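
A minimal sketch (bucket, table, and key names are illustrative):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "k8s/my-app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
```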

**State file contains secrets.** Kubernetes Secrets managed by Terraform appear in plaintext in state. Use `sensitive = true` on variables and consider encrypting the state backend.
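
A sketch of the variable side:

```hcl
variable "db_password" {
  type      = string
  sensitive = true  # Redacts the value from plan output, not from state
}
```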

**Do not mix Terraform and manual kubectl.** If Terraform manages a resource, all changes must go through Terraform. Manual edits cause drift that the next `terraform apply` will revert.

## Migration Order

Convert resources in dependency order: namespaces first, then ConfigMaps and Secrets, then databases (via Helm), then application Deployments and Services, and finally Ingress. Run `terraform plan` after each batch to verify no unintended changes.
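
Namespaces are the simplest starting point (a sketch matching the examples above):

```hcl
resource "kubernetes_namespace_v1" "my_app" {
  metadata {
    name = "my-app"
  }
}
```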

