---
title: "Converting kubectl Manifests to Helm Charts: Packaging for Reuse"
description: "How to take working Kubernetes manifests and package them as Helm charts, covering chart scaffolding, parameterization, helper templates, dependencies, and when Helm beats raw Terraform."
url: https://agent-zone.ai/knowledge/kubernetes/converting-kubectl-to-helm-charts/
section: knowledge
date: 2026-02-22
categories: ["kubernetes"]
tags: ["helm","charts","templates","packaging","migration"]
skills: ["helm-chart-authoring","manifest-templating","dependency-management"]
tools: ["helm","kubectl"]
levels: ["intermediate"]
word_count: 750
formats:
  json: https://agent-zone.ai/knowledge/kubernetes/converting-kubectl-to-helm-charts/index.json
  html: https://agent-zone.ai/knowledge/kubernetes/converting-kubectl-to-helm-charts/?format=html
  api: https://api.agent-zone.ai/api/v1/knowledge/search?q=Converting+kubectl+Manifests+to+Helm+Charts%3A+Packaging+for+Reuse
---


# Converting kubectl Manifests to Helm Charts

You have a set of YAML files that you `kubectl apply` to deploy your application. They work, but deploying to a second environment means copying files and editing values by hand. Helm charts solve this by parameterizing your manifests.

## Step 1: Scaffold the Chart

Create the chart structure with `helm create`:

```bash
helm create my-app
```

This generates:

```
my-app/
  Chart.yaml           # Chart metadata (name, version, appVersion)
  values.yaml          # Default configuration values
  charts/              # Subcharts / dependencies
  templates/
    deployment.yaml    # Deployment template
    service.yaml       # Service template
    ingress.yaml       # Ingress template
    hpa.yaml           # HorizontalPodAutoscaler
    serviceaccount.yaml
    _helpers.tpl       # Named template helpers
    NOTES.txt          # Post-install message
    tests/
      test-connection.yaml
```

Delete the generated templates you do not need. Keep `_helpers.tpl` -- it provides essential naming functions.

## Step 2: Move Manifests Into Templates

Take your working YAML files and copy them into the `templates/` directory. Then replace hardcoded values with template expressions.

Before (raw manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: ghcr.io/myorg/my-app:v1.2.0
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 512Mi
```

After (Helm template):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-app.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "my-app.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "my-app.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "my-app.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        resources:
          {{- toYaml .Values.resources | nindent 10 }}
```

## Step 3: Parameterize with values.yaml

The `values.yaml` file holds all configurable defaults:

```yaml
replicaCount: 3

image:
  repository: ghcr.io/myorg/my-app
  tag: "v1.2.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: false
  className: nginx
  hosts:
    - host: my-app.example.com
      paths:
        - path: /
          pathType: Prefix

resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 512Mi
```
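
The `ingress.enabled` flag only works if the template guards on it. A minimal sketch of how the scaffolded `templates/ingress.yaml` wraps the entire resource in a conditional (abridged -- the generated file also handles TLS and older API versions):

```yaml
# templates/ingress.yaml (abridged)
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "my-app.fullname" . }}
  labels:
    {{- include "my-app.labels" . | nindent 4 }}
spec:
  ingressClassName: {{ .Values.ingress.className }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ .path }}
            pathType: {{ .pathType }}
            backend:
              service:
                name: {{ include "my-app.fullname" $ }}
                port:
                  number: {{ $.Values.service.port }}
          {{- end }}
    {{- end }}
{{- end }}
```

Note the `$` inside the `range` blocks: it refers back to the root context, which the loop variable `.` has replaced.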

Override per environment with separate values files:

```bash
helm upgrade --install my-app ./my-app \
  -n production \
  -f values-production.yaml
```

Where `values-production.yaml` overrides only what differs:

```yaml
replicaCount: 5
image:
  tag: "v1.3.0"
resources:
  limits:
    cpu: "1"
    memory: 1Gi
```
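
One merge caveat worth knowing: Helm deep-merges maps but replaces lists wholesale. The hostname below is illustrative:

```yaml
# Maps merge key by key: image.repository and image.pullPolicy from
# the chart defaults survive this override.
image:
  tag: "v1.3.0"

# Lists are replaced, not merged: this discards the default
# ingress.hosts entry entirely, so every host you still want
# must be restated here.
ingress:
  hosts:
    - host: my-app.prod.example.com
      paths:
        - path: /
          pathType: Prefix
```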

## Step 4: Write Helper Templates

The `_helpers.tpl` file defines reusable named templates. The scaffolded version provides sensible defaults. The critical ones:

```yaml
# templates/_helpers.tpl

{{- define "my-app.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{- define "my-app.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}

{{- define "my-app.selectorLabels" -}}
app.kubernetes.io/name: {{ include "my-app.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{- define "my-app.labels" -}}
{{ include "my-app.selectorLabels" . }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
```

These ensure consistent naming and labeling across all resources in the chart. Selector labels are kept separate from the full label set because a Deployment's selector is immutable after creation -- putting a label that changes between releases (like `app.kubernetes.io/version`) into the selector would break every upgrade.
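
Applied consistently, every resource leans on the same helpers. A sketch of `templates/service.yaml` using them (`targetPort: http` assumes the container port is named `http`, as the scaffolded deployment does):

```yaml
# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "my-app.fullname" . }}
  labels:
    {{- include "my-app.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "my-app.selectorLabels" . | nindent 4 }}
```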

## Step 5: Add Chart Dependencies

If your app needs PostgreSQL or Redis, declare them as dependencies in `Chart.yaml` rather than including them in your templates:

```yaml
# Chart.yaml
apiVersion: v2
name: my-app
version: 0.1.0
appVersion: "1.2.0"

dependencies:
  - name: postgresql
    version: "15.x.x"
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled
  - name: redis
    version: "19.x.x"
    repository: https://charts.bitnami.com/bitnami
    condition: redis.enabled
```

Then run `helm dependency update` to download them into `charts/`. Configure each subchart under a top-level key in your `values.yaml` matching its name:

```yaml
postgresql:
  enabled: true
  auth:
    database: mydb
    username: myuser
```
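
Your application still needs to know where the subchart's service lives. By default, the Bitnami PostgreSQL chart names its service `<release-name>-postgresql`; a sketch of wiring that into the Deployment's container spec (verify the rendered name with `helm template` if you set name overrides):

```yaml
# In templates/deployment.yaml, under the container definition.
# The service name follows the Bitnami chart's default naming.
env:
  - name: DATABASE_HOST
    value: {{ printf "%s-postgresql" .Release.Name | quote }}
  - name: DATABASE_NAME
    value: {{ .Values.postgresql.auth.database | quote }}
```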

## When Helm Beats Raw Terraform kubernetes_manifest

**Use Helm when** you need environment-specific overrides (values files), the chart will be shared across teams, you want rollback (`helm rollback`), or community charts exist for dependencies.

**Use Terraform kubernetes resources when** your infrastructure team already uses Terraform for cloud resources, you need a unified dependency graph (cloud infra + Kubernetes resources), or you want strong schema validation.

**Use both together** by deploying Helm charts via Terraform's `helm_release` resource. This gives you Terraform's state management with Helm's templating power.
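
A minimal sketch of that combination, assuming the chart lives next to the Terraform module and the values file name matches the one used above:

```hcl
# Deploy the local chart through the Terraform Helm provider.
# Chart path and values file name are illustrative.
resource "helm_release" "my_app" {
  name      = "my-app"
  chart     = "${path.module}/my-app"
  namespace = "production"

  values = [
    file("${path.module}/values-production.yaml")
  ]
}
```

Terraform now tracks the release in its state, so `terraform plan` shows chart value drift alongside cloud resource changes.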

## Validation Before Deploy

Always lint and template-render before deploying:

```bash
# Check for syntax errors
helm lint ./my-app -f values-production.yaml

# Render templates without deploying to verify output
helm template my-app ./my-app -f values-production.yaml

# Dry-run against the cluster to catch server-side issues (Helm 3.13+)
helm upgrade --install my-app ./my-app --dry-run=server -f values-production.yaml
```

## Minimal Chart Checklist

1. `Chart.yaml` has correct `name`, `version`, and `appVersion`.
2. Every hardcoded value in templates has a corresponding entry in `values.yaml`.
3. `_helpers.tpl` defines `name`, `fullname`, `labels`, and `selectorLabels`.
4. Resources use `{{ include "my-app.fullname" . }}` for names, not hardcoded strings.
5. Namespace is `{{ .Release.Namespace }}`, never hardcoded.
6. `helm lint` and `helm template` pass cleanly.

