I didn’t see any options in the config to increase the limits.
You can set the kubelet's `maxPods` setting in the k0s configuration via worker
profiles, which accept a partial `KubeletConfiguration`. Below is an example:
```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  workerProfiles:
    # This is the default profile that's applied to k0s workers
    # that don't specify any worker profile.
    - name: default
      values:
        maxPods: 125
    # This is the "more-pods" profile that can be selected when
    # starting k0s workers with the --profile more-pods flag.
    - name: more-pods
      values:
        maxPods: 200
```
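With this applied, workers started without a `--profile` flag pick up the modified `default` profile (125 pods here), while workers started with `--profile more-pods` get the 200-pod limit, e.g. something along the lines of `k0s worker --profile more-pods ...` (the exact invocation depends on how you join your workers).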
Note: While it's possible to increase the number of pods per node, keep a few
things in mind. Even if a node has enough CPU and memory to schedule more pods,
there can be other bottlenecks:
- Check the maximum number of pod IPs allocatable per node; this is typically
  determined by the size of the per-node pod CIDR (see the sketch after this list).
- Keep an eye on the node's resource usage: CPU, memory, disk, and network can
  all become overloaded, and the kubelet, the CRI (containerd), and the CNI
  (Kube-Router or Calico) will also have to handle a higher workload.
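Regarding the pod IP limit: if your CNI allocates pod IPs from the node's pod CIDR (Kube-Router does; Calico, as far as I know, uses its own IPAM blocks), the relevant knob is kube-controller-manager's per-node CIDR size. Here's a rough sketch, assuming your k0s version exposes `spec.network.podCIDR` and `spec.controllerManager.extraArgs`; the `node-cidr-mask-size` value below is just an illustration, not something you'd need for `maxPods: 200`:

```yaml
# Sketch only: adjust ranges and sizes to your environment.
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  network:
    # Cluster-wide pod CIDR; each node gets a slice of this range.
    podCIDR: 10.244.0.0/16
  controllerManager:
    extraArgs:
      # kube-controller-manager flag controlling the per-node slice.
      # The upstream default of /24 gives ~254 pod IPs per node, which is
      # already enough for maxPods: 200; a /23 roughly doubles that headroom.
      node-cidr-mask-size: "23"
```

Widening the per-node mask means fewer nodes fit into the same `podCIDR`, so size both together.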