Hi,
I'm seeing the following errors flooding my system journal. I believe the IP 10.101.213.69 refers to the metrics-server Service (kube-system/metrics-server) in my cluster.
Aug 14 11:23:46 l09853 k0s[441]: time="2023-08-14 11:23:46" level=info msg="E0814 11:23:46.339804 705 controller.go:116] loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable" component=kube-apiserver stream=stderr
Aug 14 11:23:46 l09853 k0s[441]: time="2023-08-14 11:23:46" level=info msg=", Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]" component=kube-apiserver stream=stderr
Aug 14 11:23:46 l09853 k0s[441]: time="2023-08-14 11:23:46" level=info msg="I0814 11:23:46.341048 705 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue." component=kube-apiserver stream=stderr
Aug 14 11:23:50 l09853 k0s[441]: time="2023-08-14 11:23:50" level=info msg="E0814 11:23:50.340588 705 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.213.69:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.213.69:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" component=kube-apiserver stream=stderr
Aug 14 11:23:55 l09853 k0s[441]: time="2023-08-14 11:23:55" level=info msg="E0814 11:23:55.347965 705 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.213.69:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.213.69:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" component=kube-apiserver stream=stderr
Aug 14 11:23:55 l09853 k0s[441]: time="2023-08-14 11:23:55" level=info msg="I0814 11:23:55.396666 705 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: EOF" component=kube-apiserver stream=stderr
Aug 14 11:23:55 l09853 k0s[441]: time="2023-08-14 11:23:55" level=info msg="I0814 11:23:55.396712 705 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager" component=kube-apiserver stream=stderr
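For reference, I assume the ClusterIP could be cross-checked against the metrics-server Service and its endpoints with something like (the Service name kube-system/metrics-server is taken from the DiscoveryManager line above):

  kubectl -n kube-system get svc metrics-server -o wide
  kubectl -n kube-system get endpoints metrics-server
  kubectl -n kube-system get pods -o wide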
However, k0s logs these messages at level=info, even though the embedded kube-apiserver lines (E0814 ...) are errors. So maybe I can just ignore them?
But I'd rather understand and fix the underlying problem.
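In case it matters, I assume whether the metrics API is actually working could be checked with something like:

  kubectl get apiservice v1beta1.metrics.k8s.io
  kubectl top nodes

(if the APIService shows Available=True and kubectl top returns data, I'd guess the errors are only transient).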
Thanks.