Learn how to set up a cost-effective Kubernetes cluster on Hetzner Cloud with Terraform. Step-by-step guide to deploy your cluster, PostgreSQL, and Redis servers efficiently and securely.
Kubernetes resources are very pricey, especially with one of the big providers like AWS, Google Cloud, or Azure. If you are reading this, you are probably looking to set up a cost-effective Kubernetes cluster with predictable pricing. I know your struggle!
This guide will walk you through installing a Kubernetes cluster on Hetzner Cloud, step by step, using Terraform and the kube-hetzner Terraform module. By the end, you'll have a fully functional Kubernetes cluster along with dedicated servers for PostgreSQL and Redis.
Step 1: Create a New Project on Hetzner Cloud
First things first, you'll need a Hetzner account. If you don't have one yet, head over to hetzner.com and sign up. Once you have an account, follow these steps to create a new project:
- Log in to your Hetzner account.
- Navigate to the "Cloud" section from the dashboard.
- Click on "Projects" in the sidebar.
- Create a new project by clicking the "New Project" button.
- Name your project and set any desired settings, then click "Create Project".
With your project set up, let's get ready to deploy a Kubernetes cluster.
Step 2: Set Up Your Project for Kube-Hetzner
Generate an API Token
To interact with Hetzner Cloud via Terraform, you'll need an API token:
- Go to your project in Hetzner Cloud.
- Click on "Security" in the sidebar.
- Under "API Tokens", click "Generate API Token".
- Give it a name and ensure it has "Read & Write" permissions.
- Copy the token and store it securely.
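To avoid hardcoding the token later, you can hand it to Terraform through an environment variable; the TF_VAR_ prefix maps shell variables to Terraform variables, and Packer reads HCLOUD_TOKEN when building the snapshot (the values below are placeholders):
export TF_VAR_hcloud_token="xxxxxxxxxxx"
export HCLOUD_TOKEN="xxxxxxxxxxx"   # used by Packer for the MicroOS snapshot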
Configure a new Terraform Project
Next, we will set up our Terraform project. Before going further, make sure you have the following tools installed on your machine:
- Terraform or OpenTofu
- Packer (only needed for the initial snapshot creation)
- kubectl CLI
- hcloud, the Hetzner CLI, for convenience
The easiest way to install them is with the Homebrew package manager, which is available on Linux, macOS, and the Windows Subsystem for Linux.
Here are the relevant installation commands for convenience.
brew tap hashicorp/tap
brew install hashicorp/tap/terraform # OR brew install opentofu
brew install packer
brew install kubectl
brew install hcloud
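A quick sanity check that everything is on your PATH:
terraform version   # or: tofu version
packer version
kubectl version --client
hcloud version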
Assuming you have installed the above tools, create a directory for your Terraform configuration files; we will use the hcloud provider to manage the Hetzner resources.
💡 Important: Create your kube.tf file and pull the openSUSE MicroOS snapshot
- After you have created a project in your Hetzner Cloud Console, go to Security > API Tokens of your project to get the API key; it needs to be Read & Write. Remember to save your key!
- Generate an ed25519 SSH key pair without a passphrase for your cluster (see the example command below) and take note of the paths of your private and public keys.
- Inside your project's directory, execute the following command; it will bootstrap a new folder with the required files and prompt you to create the needed MicroOS snapshot.
tmp_script=$(mktemp) && curl -sSL -o "${tmp_script}" https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/scripts/create.sh && chmod +x "${tmp_script}" && "${tmp_script}" && rm "${tmp_script}"
FYI: This script runs the following commands.
mkdir /path/to/your/new/folder
cd /path/to/your/new/folder
curl -sL https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/kube.tf.example -o kube.tf
curl -sL https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/packer-template/hcloud-microos-snapshots.pkr.hcl -o hcloud-microos-snapshots.pkr.hcl
export HCLOUD_TOKEN="your_hcloud_token"
packer init hcloud-microos-snapshots.pkr.hcl
packer build hcloud-microos-snapshots.pkr.hcl
hcloud context create <project-name>
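If you don't already have a dedicated key pair for the cluster, here is one way to generate the ed25519 key mentioned above (the file path is an example; adjust it to your setup):
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519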
- Inside the directory created by this script you will find your kube.tf file, which you can customize to suit your needs.
Step 3: Configure the kube.tf file
We can now finally configure the generated kube.tf file to create our cluster.
Resources Overview
As I mentioned initially, this example uses two dedicated nodes for our databases, Postgres and Redis. These two nodes won't be managed by the Kubernetes control plane at all, but we still declare them in our Terraform setup so we can attach them to our private network, making them reachable from within a Kubernetes service via a local IP address such as 10.128.0.1.
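For context, once the cluster is running you can expose such an external private IP inside Kubernetes with a selector-less Service plus a manually managed Endpoints object. A minimal sketch (the postgres name and IP are assumptions matching this guide's layout):
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres   # must match the Service name
subsets:
  - addresses:
      - ip: 10.128.0.1   # the Postgres server's private IP
    ports:
      - port: 5432
Pods can then reach the database at postgres.<namespace>.svc.cluster.local.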
To create our Kubernetes cluster, we will need the following resources from Hetzner Cloud. The server types you choose are entirely up to you and the needs of your project; this is just an example.
- 2 x CX31 Servers for the k8s agent node pool.
- 1 x CPX11 Server for the k8s Control Plane
- 1 x CCX13 Server for Postgres
- 1 x CX31 Server for Redis
- 1 x LB11 Load Balancer
- 1 x Private Network
- 1 x Firewall
- 1 x Volume for Postgres (optional)
That's a lot of resources, and you are probably having second thoughts while doing price math in the back of your head... Relax! At the time of writing (June 2024), the above stack costs around 60 EUR / 65 USD per month. Compare that to any of the big three providers like AWS and you would pay around 15x more.
Step 4: Configure Terraform using the kube.tf file
Let's now create the required resources by configuring the kube.tf file we generated earlier. Your kube.tf configuration should look something like the following. Please read the comments carefully to understand what each section is for.
locals {
# You have the choice of setting your Hetzner API token here or define the TF_VAR_hcloud_token env
# within your shell, such as: export TF_VAR_hcloud_token=xxxxxxxxxxx
# If you choose to define it in the shell, this can be left as is.
# Your Hetzner token can be found in your Project > Security > API Token (Read & Write is required).
hcloud_token = "xxxxxxxxxxx"
}
# Create a private network
resource "hcloud_network" "k3s_proxy" {
name = "k3s"
ip_range = "10.0.0.0/8"
}
resource "hcloud_network_subnet" "k3s_proxy" {
network_id = hcloud_network.k3s_proxy.id
type = "cloud"
network_zone = "eu-central"
ip_range = "10.128.0.0/9"
}
# Create the Postgres Server
# If you already have an existing resource created on Hetzner Cloud
# use `terraform import hcloud_server.postgres <id>` to import it
resource "hcloud_server" "postgres" {
name = "postgres"
image = "ubuntu-22.04"
server_type = "ccx13" # Define the server type here
location = "fsn1"
delete_protection = true
rebuild_protection = true
# Initialization commands (Read more in the next section)
user_data = file("cloud_init_postgres.yaml")
lifecycle {
prevent_destroy = true
ignore_changes = [user_data, ssh_keys]
}
}
# Optionally attach a volume to Postgres Server
resource "hcloud_volume" "postgres" {
name = "postgres"
size = 10
server_id = hcloud_server.postgres.id
automount = true
format = "ext4"
delete_protection = true
lifecycle {
prevent_destroy = true
}
}
# Make Postgres accessible internally on 10.128.0.1
resource "hcloud_server_network" "postgres" {
depends_on = [
hcloud_server.postgres
]
server_id = hcloud_server.postgres.id
network_id = hcloud_network.k3s_proxy.id
ip = "10.128.0.1"
}
# Create the Redis Server
# If you already have an existing resource created on Hetzner Cloud
# use `terraform import hcloud_server.redis <id>` to import it
resource "hcloud_server" "redis" {
name = "redis"
image = "ubuntu-22.04"
server_type = "cx31"
location = "fsn1"
delete_protection = true
rebuild_protection = true
# Initialization commands (Read more in the next section)
user_data = file("cloud_init_redis.yaml")
lifecycle {
prevent_destroy = true
ignore_changes = [user_data, ssh_keys]
}
}
# Make Redis accessible internally on 10.128.0.2
resource "hcloud_server_network" "redis" {
depends_on = [
hcloud_server.redis
]
server_id = hcloud_server.redis.id
network_id = hcloud_network.k3s_proxy.id
ip = "10.128.0.2"
}
module "kube-hetzner" {
providers = {
hcloud = hcloud
}
hcloud_token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token
version = "2.13.5"
# The path to your ssh public key on your machine
ssh_public_key = file("~/.ssh/id_ed25519.pub")
# The path to your ssh private key on your machine
ssh_private_key = file("~/.ssh/id_ed25519")
# Define the network data center
network_region = "eu-central" # change to `us-east` if location is ash
# Optional. If you have an existing network define it here
# existing_network_id = [hcloud_network.k3s_proxy.id]
# Define your Control plane node pool. You need at least 1.
control_plane_nodepools = [
{
name = "control-plane-cpx11",
server_type = "cpx11",
location = "fsn1",
labels = [],
taints = [],
count = 1
kubelet_args = ["kube-reserved=cpu=250m,memory=1500Mi,ephemeral-storage=1Gi", "system-reserved=cpu=250m,memory=300Mi"]
# Fine-grained control over placement groups (nodes in the same group are spread over different physical servers, 10 nodes per placement group max):
placement_group = "default"
# Enable automatic backups via Hetzner (default: false)
# backups = true
},
]
# Define the agent node pool. Need at least 2 here for High-Availability
agent_nodepools = [
{
name = "agent-cx31",
server_type = "cx31",
location = "fsn1",
labels = []
taints = [],
count = 2
# Control over the placement groups
placement_group = "default"
# Enable automatic backups via Hetzner (default: false)
# backups = true
},
]
# Add custom control plane configuration options here.
# E.g to enable monitoring for etcd, proxy etc:
control_planes_custom_config = {
etcd-expose-metrics = true,
kube-controller-manager-arg = "bind-address=0.0.0.0",
kube-proxy-arg = "metrics-bind-address=0.0.0.0",
kube-scheduler-arg = "bind-address=0.0.0.0",
}
# LB location and type
load_balancer_type = "lb11"
load_balancer_location = "fsn1"
# Additional trusted IPs for traefik.
# Example for Cloudflare:
traefik_additional_trusted_ips = [
"173.245.48.0/20",
"103.21.244.0/22",
"103.22.200.0/22",
"103.31.4.0/22",
"141.101.64.0/18",
"108.162.192.0/18",
"190.93.240.0/20",
"188.114.96.0/20",
"197.234.240.0/22",
"198.41.128.0/17",
"162.158.0.0/15",
"104.16.0.0/13",
"104.24.0.0/14",
"172.64.0.0/13",
"131.0.72.0/22",
"2400:cb00::/32",
"2606:4700::/32",
"2803:f800::/32",
"2405:b500::/32",
"2405:8100::/32",
"2a06:98c0::/29",
"2c0f:f248::/32"
]
# Allow SSH access from the specified networks. Default: ["0.0.0.0/0", "::/0"]
# Add your Office/home static IP here or leave empty
firewall_ssh_source = ["xxx.xxx.xxx.xxx/32"]
# Optional. Add a firewall rule for your Office/home static IP
extra_firewall_rules = [
{
description = "Allow all traffic"
direction = "in"
protocol = "tcp"
port = "any"
source_ips = ["xxx.xxx.xxx.xxx/32"]
destination_ips = []
},
]
# IP Addresses to use for the DNS Servers
dns_servers = [
"1.1.1.1",
"8.8.8.8",
"2606:4700:4700::1111",
]
# It is best practice to turn this off.
create_kubeconfig = false
}
provider "hcloud" {
token = var.hcloud_token != "" ? var.hcloud_token : local.hcloud_token
}
terraform {
required_version = ">= 1.5.0"
required_providers {
hcloud = {
source = "hetznercloud/hcloud"
version = ">= 1.43.0"
}
}
}
output "kubeconfig" {
value = module.kube-hetzner.kubeconfig
sensitive = true
}
variable "hcloud_token" {
sensitive = true
default = ""
}
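One thing to watch: for the cluster nodes to reach Postgres and Redis on their 10.128.x.x addresses, the cluster must join the private network we created above. That is what the commented-out existing_network_id line is for; a sketch of how it could look (verify the exact semantics against the kube.tf.example shipped with your module version):
# Inside the module "kube-hetzner" block
existing_network_id = [hcloud_network.k3s_proxy.id]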
Create the Cloud-Init files for Postgres and Redis
For convenience, I like to have Terraform do some initial work for me on the dedicated nodes for Postgres and Redis, so that I at least have a bare-minimum setup up and running. Then I can take over any additional configuration by connecting directly to those instances over SSH and making the necessary changes.
Create the following two YAML files in the same directory as your kube.tf file.
For PostgreSQL, cloud_init_postgres.yaml:
#cloud-config
packages:
- gnupg2
- wget
- nano
package_update: true
package_upgrade: true
runcmd:
- sed -i -e '/^PasswordAuthentication/s/^.*$/PasswordAuthentication no/' /etc/ssh/sshd_config
- systemctl restart ssh
- sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
- curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/postgresql.gpg
- sudo apt update -y
- sudo apt install postgresql-16 postgresql-contrib-16 -y
- echo "listen_addresses = '*'" >> /etc/postgresql/16/main/postgresql.conf
- echo "track_counts = on" >> /etc/postgresql/16/main/postgresql.conf
- echo "autovacuum = on" >> /etc/postgresql/16/main/postgresql.conf
- sudo systemctl enable postgresql
- sudo systemctl start postgresql
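Note that listen_addresses = '*' alone is not enough for remote connections; PostgreSQL also needs a matching rule in pg_hba.conf. Assuming the 10.0.0.0/8 private network from this guide, you could append something like the following to the runcmd list above (the CIDR and auth method are assumptions; tighten them to your needs):
- echo "host all all 10.0.0.0/8 scram-sha-256" >> /etc/postgresql/16/main/pg_hba.conf
- sudo systemctl restart postgresql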
For Redis, cloud_init_redis.yaml:
#cloud-config
packages:
- lsb-release
- curl
- gpg
- nano
package_update: true
package_upgrade: true
runcmd:
- sed -i -e '/^PasswordAuthentication/s/^.*$/PasswordAuthentication no/' /etc/ssh/sshd_config
- systemctl restart ssh
- curl -fsSL https://packages.redis.io/gpg | gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
- echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | tee /etc/apt/sources.list.d/redis.list
- apt-get update -y
- apt-get install redis -y
- echo "maxmemory 2GB" >> /etc/redis/redis.conf
- echo "maxmemory-policy allkeys-lru" >> /etc/redis/redis.conf
- systemctl enable redis-server
- systemctl restart redis-server
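Similarly, Redis only binds to localhost by default, so the cluster won't reach it on 10.128.0.2 out of the box. A sketch of the extra runcmd entries that would open it up on the private network (consider also setting requirepass before relying on this):
- sed -i -e 's/^bind .*/bind 0.0.0.0/' /etc/redis/redis.conf
- sed -i -e 's/^protected-mode yes/protected-mode no/' /etc/redis/redis.conf
- systemctl restart redis-server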
We reference these files in the hcloud_server resources we created for postgres and redis.
resource "hcloud_server" "postgres" {
...
user_data = file("cloud_init_postgres.yaml")
}
resource "hcloud_server" "redis" {
...
user_data = file("cloud_init_redis.yaml")
}
Initialize and Apply Terraform
All the necessary configuration files are now in place. Let's initialize our Terraform environment.
Back in your terminal:
- Initialize the project with terraform init.
- Apply the configuration with terraform apply.
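Putting it together, a typical first run looks like this (assuming you exported the token as described earlier):
export TF_VAR_hcloud_token="xxxxxxxxxxx"
terraform init --upgrade
terraform apply   # review the plan, then type 'yes' to confirm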
Step 5: Connect And Verify Your Cluster
After Terraform finishes applying the configuration, you can verify your cluster's status by accessing the generated kubeconfig. To do this, save the kubeconfig file and use kubectl to interact with your cluster.
- Export your kubeconfig directly from Terraform with terraform output -raw kubeconfig.
- Save the output to ~/.kube/myproject.kubeconfig.
- In your current terminal, run export KUBECONFIG=~/.kube/myproject.kubeconfig to point kubectl at this cluster.
You should now be able to access your Kubernetes cluster with kubectl:
kubectl get nodes
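As a final check of the private network wiring, you can probe the database nodes from inside the cluster. pg_isready ships with the postgres image and needs no credentials (the image tag and IP follow this guide's assumptions):
kubectl run pg-check -i --rm --restart=Never --image=postgres:16 -- pg_isready -h 10.128.0.1 -p 5432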
Conclusion
Congratulations! 🎉 You've successfully deployed a cost-effective Kubernetes cluster on Hetzner Cloud, complete with dedicated PostgreSQL and Redis servers, a private network, a firewall, and a load balancer. This setup leverages the power of Kubernetes for container orchestration and the reliability of Hetzner's cloud infrastructure while allowing you to stay within budget.