Error from konnectivity after node restarts

Hi!

I have a cluster deployed by k0sctl with this config:

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 192.168.100.61
      user: zerum
      port: 22
    role: controller+worker
    noTaints: true
  - ssh:
      address: 192.168.100.62
      user: zerum
      port: 22
    role: controller+worker
    noTaints: true
  - ssh:
      address: 192.168.100.63
      user: zerum
      port: 22
    role: controller+worker
    noTaints: true
  k0s:
    config:
      spec:
        network:
          provider: calico

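For completeness, I deploy the cluster with the standard k0sctl apply command (assuming the config above is saved as k0sctl.yaml):

```shell
# Deploy (or re-apply) the cluster from the k0sctl config file
k0sctl apply --config k0sctl.yaml
```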
The cluster works fine until a node is restarted for any reason. Once the node is back and all pods are running again, I can no longer get any logs. I receive:

Error from server: Get "https://192.168.100.61:10250/containerLogs/default/es-master-1-0/elasticsearch": No agent available
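For reference, the error above comes from a plain kubectl logs call (pod and container names taken from the URL in the error message):

```shell
# Any log request fails the same way, for example:
kubectl logs es-master-1-0 -c elasticsearch -n default
```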

These are the logs I get from k0scontroller.service:

Oct 08 13:01:14 kubernetes-1 k0s[709]: time="2024-10-08 13:01:14" level=info msg="E1008 13:01:14.112462     998 remote_available_controller.go:448] \"Unhandled Error\" err=\"v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.147.225:443/apis/metrics.k8s.io/v1beta1: Get \\\"https://10.96.147.225:443/apis/metrics.k8s.io/v1beta1\\\": No agent available\" logger=\"UnhandledError\"" component=kube-apiserver stream=stderr
Oct 08 13:01:14 kubernetes-1 k0s[709]: time="2024-10-08 13:01:14" level=info msg="E1008 13:01:14.112531    5309 server.go:579] \"Failed to get a backend\" err=\"No agent available\" dialID=9178363639299553498" component=konnectivity stream=stderr
Oct 08 13:01:14 kubernetes-1 k0s[709]: time="2024-10-08 13:01:14" level=info msg="E1008 13:01:14.114942    5309 server.go:579] \"Failed to get a backend\" err=\"No agent available\" dialID=3340364227193805717" component=konnectivity stream=stderr
Oct 08 13:01:14 kubernetes-1 k0s[709]: time="2024-10-08 13:01:14" level=info msg="E1008 13:01:14.114970    5309 server.go:579] \"Failed to get a backend\" err=\"No agent available\" dialID=533201712653154451" component=konnectivity stream=stderr
Oct 08 13:01:14 kubernetes-1 k0s[709]: time="2024-10-08 13:01:14" level=info msg="E1008 13:01:14.115484    5309 server.go:579] \"Failed to get a backend\" err=\"No agent available\" dialID=7854035195788166003" component=konnectivity stream=stderr
Oct 08 13:01:14 kubernetes-1 k0s[709]: time="2024-10-08 13:01:14" level=info msg="E1008 13:01:14.115616    5309 server.go:579] \"Failed to get a backend\" err=\"No agent available\" dialID=8014857067551678884" component=konnectivity stream=stderr
Oct 08 13:01:14 kubernetes-1 k0s[709]: time="2024-10-08 13:01:14" level=info msg="E1008 13:01:14.115672    5309 server.go:579] \"Failed to get a backend\" err=\"No agent available\" dialID=1925891275905775483" component=konnectivity stream=stderr
Oct 08 13:01:14 kubernetes-1 k0s[709]: time="2024-10-08 13:01:14" level=info msg="E1008 13:01:14.116181     998 controller.go:146] \"Unhandled Error\" err=<" component=kube-apiserver stream=stderr
Oct 08 13:01:14 kubernetes-1 k0s[709]: time="2024-10-08 13:01:14" level=info msg="\tError updating APIService \"v1beta1.metrics.k8s.io\" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: error trying to reach service: No agent available" component=kube-apiserver stream=stderr
Oct 08 13:01:14 kubernetes-1 k0s[709]: time="2024-10-08 13:01:14" level=info msg="\t, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]" component=kube-apiserver stream=stderr
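I can also inspect the agent side of the konnectivity tunnel like this (assuming the agents carry the usual k8s-app=konnectivity-agent label), though I'm not sure what to look for there:

```shell
# Check the konnectivity-agent logs for tunnel/connection errors
kubectl -n kube-system logs -l k8s-app=konnectivity-agent --tail=50
```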

The kube-system pods look fine:

$ kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97f5949fb-8p2pw   1/1     Running   0          37m
calico-node-plzpv                         1/1     Running   0          65m
calico-node-qrljg                         1/1     Running   0          65m
calico-node-vc72k                         1/1     Running   0          65m
coredns-679c655b6f-jqcpc                  1/1     Running   0          65m
coredns-679c655b6f-n7xqq                  1/1     Running   0          65m
konnectivity-agent-45wbr                  1/1     Running   0          65m
konnectivity-agent-br58g                  1/1     Running   0          65m
konnectivity-agent-vm47x                  1/1     Running   0          65m
kube-proxy-6kzlm                          1/1     Running   0          65m
kube-proxy-j9pvb                          1/1     Running   0          65m
kube-proxy-lkxv6                          1/1     Running   0          65m
metrics-server-78c4ccbc7f-6vdhv           1/1     Running   0          65m
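
I suspect forcing the agents to reconnect to the konnectivity server might work around it (the daemonset name matches the pod names above), but I'd rather understand the root cause:

```shell
# Possible workaround (untested): make the agents re-register with the server
kubectl -n kube-system rollout restart daemonset/konnectivity-agent
```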

What am I doing wrong?

Thanks in advance!