k3s

k3s is a lightweight Kubernetes distribution. It is also used as the runtime for Rancher Desktop.

You can install it by running the installation script from its site:

$ curl -sfL https://get.k3s.io | sh -

During this process, the systemd unit is set up, so k3s will start automatically when the system boots.
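To confirm that the service came up, you can check the unit status and list the nodes. k3s generates a root-owned kubeconfig, so the bundled k3s kubectl is run with sudo here:

$ sudo systemctl status k3s
$ sudo k3s kubectl get nodes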

Prerequisites

Although not explicitly stated in the documentation, Kubernetes implicitly assumes that the host has a static IP address.
For dual-stack setups, you must configure both IPv4 and IPv6 with a static IP.

If the host’s IP address changes while you’re running a single-node configuration, the systemd k3s.service fails with an error communicating with the initial node and then repeatedly tries to restart without recovering.

The easiest way to recover from this state is to simply reinstall, but all containers are lost. There is no built-in method to change the node’s IP configuration.
Furthermore, because you will no longer be able to access any of the resources running on the cluster, backing up your running containers is difficult.

Enabling cgroups on Raspberry Pi

In some cases, Raspberry Pi has certain cgroup features disabled. To enable them, you must add a parameter like the following to the kernel options at boot time:

cgroup_enable=memory

On Ubuntu, cmdline.txt is located at /boot/firmware/cmdline.txt. The changes will take effect after a reboot.
You can verify the current kernel command parameters by checking /proc/cmdline.
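As a sketch of that edit (the parameter set here is the one commonly needed for k3s on Raspberry Pi; adjust the path if your image keeps cmdline.txt elsewhere), append the flags to the single-line kernel command line and check after the reboot:

# cmdline.txt must stay on a single line, so append to line 1
$ sudo sed -i '1 s/$/ cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1/' /boot/firmware/cmdline.txt
$ sudo reboot

# After the reboot, confirm the parameters were applied
$ grep -o 'cgroup[^ ]*' /proc/cmdline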

For Kubernetes to function, it requires access to the cpuset and memory cgroup controllers. You can check the capabilities of cgroup v2 through the /sys/ filesystem.

$ cat /sys/fs/cgroup/cgroup.controllers
cpuset cpu io memory hugetlb pids rdma misc dmem

The older cgroup v1 interface, /proc/cgroups, is similar but distinct; it is being deprecated, so you won’t need it.

IPv6 cluster

To support IPv6 dual stack, add options during cluster installation.

$ curl -sfL https://get.k3s.io | sh -s - --cluster-cidr=10.42.0.0/16,2001:cafe:42::/56 --service-cidr=10.43.0.0/16,2001:cafe:43::/112 --flannel-ipv6-masq

If an existing k3s cluster is already set up, you’ll need to uninstall it and reinstall to enable IPv6 dual-stack support. For installation options, refer to Configuration.

Adding the --flannel-ipv6-masq option is crucial, as omitting it will result in a configuration where containers cannot communicate externally.

Also, NodePort is currently unsupported for IPv6. While service objects can be created without issues, connections to ::1 from the host will be dropped.
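To verify that dual stack is active, you can check that both address families were assigned to the node and to a running pod (the pod name below is a placeholder):

$ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDRs}'
$ kubectl get pod <pod-name> -o jsonpath='{.status.podIPs}'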

Upgrade

The k3s upgrade process is the same as the installation: re-run the install script.
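A minimal upgrade run is shown below. If your original installation passed extra flags on the command line (such as the dual-stack options above), it is safest to pass them again, since the installer regenerates the service definition from the arguments it receives (an assumption to verify against your setup):

$ curl -sfL https://get.k3s.io | sh -
$ k3s --version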

Context

A kubeconfig is generated at /etc/rancher/k3s/k3s.yaml after installation.
By merging it into an existing config, you can operate the local k3s cluster by switching contexts with the kubectl config subcommand.

# Personalize config to each user.
$ sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/
$ sudo chown <user>:<user> ~/.kube/k3s.yaml

# Rename the default cluster/context/user to "k3s"
$ sed -i -e 's/default/k3s/g' ~/.kube/k3s.yaml

$ KUBECONFIG=~/.kube/config:~/.kube/k3s.yaml kubectl config view --flatten > ~/.kube/merged
# Replace config
$ mv ~/.kube/merged ~/.kube/config
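
After merging, switch to the new context and confirm that the cluster responds:

$ kubectl config use-context k3s
$ kubectl get nodes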

Private Registry Authentication

Authentication for pulling container images from an external source uses the standard Docker API authentication method.
However, it’s important to note that a ServiceAccount is an object created on a per-namespace basis. You must attach a secret to the default ServiceAccount in the namespace where you are launching your container.
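
As a sketch of that setup (the registry host, credential values, and secret name are placeholders), create a docker-registry secret and attach it to the default ServiceAccount of the target namespace:

$ kubectl create secret docker-registry regcred \
    --docker-server=registry.example.com \
    --docker-username=<user> \
    --docker-password=<password> \
    -n <namespace>
$ kubectl patch serviceaccount default \
    -p '{"imagePullSecrets": [{"name": "regcred"}]}' \
    -n <namespace>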

For cases where you build a mirror registry on a local network, you can refer to /etc/rancher/k3s/registries.yaml settings.
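
For that mirror case, a minimal registries.yaml sketch might look like the following (the endpoint and credentials are placeholders; k3s reads this file at startup, so restart the service after editing it):

$ sudo tee /etc/rancher/k3s/registries.yaml <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
configs:
  "registry.example.com:5000":
    auth:
      username: <user>
      password: <password>
EOF
$ sudo systemctl restart k3s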

Mounting Local Directories

You can also access local directories from within a container using a hostPath volume.
When mounting a type: Directory volume, you’ll often encounter directory permission errors, so you’ll likely need to adjust permissions at container startup.
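
A minimal sketch of such a mount (the pod name, image, host path, and the chown-at-startup workaround are all illustrative assumptions):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-example
spec:
  containers:
  - name: app
    image: busybox
    # Fix ownership of the mounted directory at startup, then keep the pod alive
    command: ["sh", "-c", "chown -R 1000:1000 /data && sleep 3600"]
    volumeMounts:
    - name: local-dir
      mountPath: /data
  volumes:
  - name: local-dir
    hostPath:
      path: /srv/k3s-data
      type: Directory
EOF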

Troubleshooting

Shutdown Failure

Power-off and reboot may be slow after installing k3s.
The issue systemd-shutdown hangs on containerd-shim when k3s-agent running #2400 may be related.

In essence, the container processes need to be cleaned up properly at shutdown.
On that subject, there is a workaround that forces systemd to give up sooner by shortening DefaultTimeoutStopSec in /etc/systemd/system.conf, as shown below.
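
A sketch of that workaround (the 10-second value is an arbitrary example; system.conf ships with the setting commented out):

$ sudo sed -i 's/^#\?DefaultTimeoutStopSec=.*/DefaultTimeoutStopSec=10s/' /etc/systemd/system.conf
$ sudo systemctl daemon-reexec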

However, systemd on Ubuntu 20.04 has this bug where the config is not read, so complete termination always takes 90 seconds.

kube-system startup error

Occasionally, kube-system containers fail to start, in which case containers lose network access.
You can restart the system containers with commands like the following:

$ kubectl rollout restart -nkube-system deploy/coredns
$ kubectl rollout restart -nkube-system deploy/local-path-provisioner
$ kubectl rollout restart -nkube-system deploy/metrics-server

Container network issues

If containers are running and accessible via kubectl but are unreachable from other containers or through NodePort, there may be an issue with the k3s network configuration.
k3s network configuration is likely specified directly in the systemd startup script. Linux distributions often use a layered network configuration that can interfere with settings on other layers.

In one encountered case, the container network became inaccessible when there was a cni0 interface configuration present in Ubuntu’s Netplan. Removing the cni configuration in /etc/netplan/ and rebooting resolved the issue.
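
A quick way to check for that kind of conflict (the grep target and interface name reflect this particular case):

$ grep -rn 'cni0' /etc/netplan/
$ ip addr show cni0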

kubectl connection error

There are cases where kubectl returns an Unauthorized error.
This often occurs when connection information is set in ~/.kube/config but user.client-certificate-data or user.client-key-data has been corrupted, leading to authentication failures.

Additionally, this certificate can expire over time because upgrading k3s does not update each user’s config. When attempting to execute kubectl, you may encounter an error like the following:

E0210 17:25:42.675689   10896 memcache.go:265] couldn't get current server API group list: the server has asked for the client to provide credentials
error: You must be logged in to the server (the server has asked for the client to provide credentials)

The current system credentials are located in /etc/rancher/k3s/k3s.yaml.
Replacing the existing configuration with these credentials resolves the issue. The certificate is a base64-encoded PEM file, which can be decoded to check the expiration date with the openssl x509 command.
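
For example, to check the expiry of the client certificate currently stored in your config (the user name k3s assumes the renaming done in the Context section; adjust the jsonpath filter to your setup):

$ kubectl config view --raw \
    -o jsonpath='{.users[?(@.name=="k3s")].user.client-certificate-data}' \
    | base64 -d | openssl x509 -noout -enddate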

Since ~/.kube/config may contain credentials for other clusters, you’ll want to overwrite only the user section specific to k3s. If you’re not using ~/.kube/config by default, set the config path with export KUBECONFIG=~/.kube/config.

Impression

k3s runs directly on the Linux host without a VM, so it’s more stable than Rancher Desktop.
With some extra tweaks, you can even access k3s containers directly.

For configuration details, please refer to k3s configuration.

May 16, 2022 (updated Sep 16, 2025)
Chuma Takahiro