k3s, a lightweight Kubernetes distribution, can easily be set up in a small-scale configuration, starting with just a single PC.
There are two ways to configure a k3s cluster: by specifying startup options for the systemd unit or by using a configuration file.
For setting up multiple devices or performing backups and restores, using configuration files is more portable.
Startup Options
When you install k3s using the official script, it’s set up as a systemd service.
You can customize its startup options by editing /etc/systemd/system/k3s.service. The official documentation describes each option in exactly this format.
Because the YAML syntax discussed later can sometimes be less intuitive, you can always fall back on specifying startup options directly if the YAML approach isn't working as expected.
Following standard systemd procedures, you can apply your changes by running systemctl daemon-reload and then systemctl restart k3s.
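As a rough sketch (the exact unit file generated by the install script can vary between k3s versions), startup options are appended to the ExecStart command, for example:
# excerpt of /etc/systemd/system/k3s.service; the binary path and surrounding lines may differ
ExecStart=/usr/local/bin/k3s \
    server \
    --disable traefik \
    --service-node-port-range 100-9999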
/etc/rancher/k3s/config.yaml
The configuration file is a YAML version of the k3s server options. The typical path on a Linux system is /etc/rancher/k3s/config.yaml.
This file isn’t present by default; you create it when you need to customize your configuration.
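As a minimal sketch (the values below are placeholders, not recommendations), such a file might look like this:
# /etc/rancher/k3s/config.yaml -- placeholder values for illustration
write-kubeconfig-mode: "0644"
node-name: "my-k3s-node"
tls-san:
- "k3s.example.internal"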
Kubelet Configuration
Since k3s includes the kubelet, you can specify standard Kubernetes startup options within config.yaml.
However, there’s a point of confusion: for nested configuration sets like kubelet, you don’t use a standard YAML map. Instead, you specify an array of strings that mimic command-line arguments, as shown below:
kubelet-arg:
- "image-gc-high-threshold=99"
- "image-gc-low-threshold=98"
This example specifies the behavior of garbage collection.
Similar logic applies when specifying options for other Kubernetes processes, such as kube-apiserver or etcd.
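For example, a sketch that passes arguments to kube-apiserver, and to the embedded etcd (which only applies if k3s runs with the etcd datastore rather than the default SQLite), might look like this; the concrete values are placeholders:
kube-apiserver-arg:
- "audit-log-path=/var/log/kubernetes/audit.log"
- "audit-log-maxage=7"
# only meaningful when the embedded etcd datastore is in use
etcd-arg:
- "quota-backend-bytes=4294967296"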
Practical Example: Specifying Garbage Collection Behavior
The above settings are an example of how to deal with container eviction due to disk space shortage.
This is a common issue reported in general Kubernetes setups, not just k3s.
The kubelet's garbage collection also runs when free disk space is low. By default, it kicks in once 92% of the disk has been consumed (image-gc-high-threshold) and reclaims space until usage is back down to 80% (image-gc-low-threshold).
In cases where the hardware is not dedicated to k3s, these default values may not function properly.
For example, if you install it on a 1TB partition, garbage collection will start once free space drops below 80GB, and it will try to bring usage back down to 80%, which means reclaiming roughly 120GB. However, if the disk is filled with other applications' data, deleting unneeded container image caches alone may not free that much. In this situation, Kubernetes gives up on reclaiming disk space and the node goes down.
You can check this error in the event logs using journalctl -e -u k3s or kubectl describe nodes.
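For instance, one way to narrow it down (a sketch, assuming kubectl is pointed at the cluster) is to look for the DiskPressure node condition and for garbage-collection messages in the journal:
# DiskPressure=True means the kubelet could not reclaim enough space
kubectl describe nodes | grep -A 1 DiskPressure
# kubelet messages about image garbage collection and evictions
journalctl -e -u k3s | grep -i -E 'garbage|evict|disk'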
In cases where you're setting up a cluster on a client PC, the main cause of disk exhaustion is often not Kubernetes itself. It is therefore effective to suppress garbage collection to match the actual hardware configuration, as sketched below.
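Building on the kubelet-arg example above, one possible sketch for a machine shared with other applications is to relax the percentage thresholds and move the eviction trigger to absolute free-space values (all numbers are placeholders to tune for your disk):
kubelet-arg:
- "image-gc-high-threshold=99"
- "image-gc-low-threshold=98"
# trigger eviction only below 2Gi free; when evicting, reclaim at least an extra 1Gi
- "eviction-hard=nodefs.available<2Gi,imagefs.available<2Gi"
- "eviction-minimum-reclaim=nodefs.available=1Gi,imagefs.available=1Gi"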
Configurations
A working example of IPv6-oriented settings is as follows:
bind-address: "::"
cluster-cidr: "10.42.0.0/16,2001:cafe:42::/56"
service-cidr: "10.43.0.0/16,2001:cafe:43::/112"
flannel-ipv6-masq: true
disable: traefik
service-node-port-range: "30-9999"
IPv6/IPv4 dualstack
As the official documentation states, these settings are defined as startup options; here is how to express them in YAML:
cluster-cidr: "10.42.0.0/16,2001:cafe:42::/56"
service-cidr: "10.43.0.0/16,2001:cafe:43::/112"
flannel-ipv6-masq: true
For options that simply enable a feature, like --flannel-ipv6-masq, you specify a boolean value.
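One way to confirm the dual-stack setup after restarting k3s (a sketch, assuming kubectl is configured for the cluster) is to check that each node was assigned both an IPv4 and an IPv6 Pod CIDR:
# each node should list two Pod CIDRs, e.g. 10.42.0.0/24 and a /64 from 2001:cafe:42::/56
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'
# Pod and Service addresses can also be inspected directly
kubectl get pods -A -o wide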
Specifying a Well-Known Port for nodePort
By default, Kubernetes allocates a Service's nodePort from the 30000-32767 range. However, you can extend this range to include well-known ports, for example by using --service-node-port-range=100-9999.
The YAML format is as follows:
service-node-port-range: "100-9999"
Like a Pod's hostPort, a Service's nodePort listens on a port on the host. Unlike hostPort, however, nodePort goes through the cluster's Service routing, so requests can be forwarded to multiple backend Pods. On the other hand, requests that reach the Pod appear to come from an internal IP on the container network, which can cause issues for applications that need to see the peer's real IP address.
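As a hypothetical illustration (the name, label, and ports are made up), a Service that exposes Pods labeled app: web directly on host port 443 becomes possible once the range has been extended as above:
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web            # hypothetical Pod label
  ports:
  - port: 443           # cluster-internal Service port
    targetPort: 8443    # port the container listens on
    nodePort: 443       # allowed because service-node-port-range now includes 443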
Removing Ingress
By default, k3s deploys Traefik as the default Ingress controller in the kube-system namespace. This conflicts with port 443 on the host. You can remove Traefik using the --disable traefik option. It should stop after a cluster restart, but if it is still running, you must delete its resources manually.
Even if you want to run your own web server or proxy Pod to accept external HTTPS requests instead of using Ingress, you’ll still need to disable Traefik first.
The YAML format is as follows:
disable: "traefik"
You can likely specify multiple features for the disable option as a comma-separated string rather than a YAML array (this is unverified).
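If leftovers remain, a rough sketch of the manual cleanup follows; recent k3s versions deploy Traefik through HelmChart objects, but the resource names may differ in your version:
# deleting the HelmChart objects lets the k3s helm-controller uninstall the release
kubectl -n kube-system delete helmchart traefik traefik-crd
# anything still left over can be removed directly
kubectl -n kube-system delete deployment,service traefik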
/etc/rancher/k3s/registries.yaml
Here is an example of /etc/rancher/k3s/registries.yaml for compatibility with the registry used by GKE (Google Artifact Registry, docker.pkg.dev):
mirrors:
  docker.pkg.dev:
    endpoint:
    - "https://<internal.registry.host>:<port>/"
    rewrite:
      "^<artifactregistry-project-name>/<subpath>/(.*)": "$1"