Problem when connecting to node shell (2025.2.141554)

After upgrading to Lens version “2025.2.141554-latest”, the “Connect to node shell” option seems to be broken.
Trying to connect to a node results in

E0311 16:01:16.377804   58968 memcache.go:265] couldn't get current server API group list: Get "...": proxyconnect tcp: dial tcp: lookup undefined: no such host
Unable to connect to the server: proxyconnect tcp: dial tcp: lookup undefined: no such host
Terminal will auto-close in 15 seconds ...

This does not happen in the previous version from January. Connecting to a node shell via the kubectl command still works.

Thanks for mentioning this. I thought the problem was me, not Lens.

As a workaround, I got a shell on the node by doing the following:

kubectl debug node/very-special-hostname -it --image=public.ecr.aws/aws-cli/aws-cli:2.24.22 -- /bin/bash

It’s not the same as what Lens creates, but it got the job done.


This is happening for me too.

Happening for me as well.

Found another workaround that solved it for me.
In File → Preferences, section “Proxy”, you will find the option “Lens Internal Proxy” → “Bypass Lens Internal KubeApi Proxy”.
Disabling this option worked for me.

We would like to understand why the default setting does not work here. Is there anything special about the kubeconfig or the environment that could help us resolve the bug?

In my scenario I am connecting to AKS clusters.
Nodes are on version 1.30.6 (OS: linux (amd64), OS Image: Ubuntu 22.04.5 LTS).

I also see in the Lens logs that it creates a temp kubeconfig:

debug: 	[CLUSTER-KUBECONFIG-MAIN]: Created temp kubeconfig "..." at "...AppData\Local\Temp\kubeconfig-direct-9848c74fb5631a671336ca460e3544d4"

The content of this temp kubeconfig looks valid to me. I also tried it out separately by copying this temp kubeconfig to my .kube folder, and it was usable.
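For anyone who wants to repeat that check, this is roughly what the test looks like; the path below is a placeholder for illustration, so substitute the actual temp kubeconfig path from your own Lens log:

```shell
# Query the cluster using only the Lens-generated temp kubeconfig,
# bypassing any kubeconfig merged from ~/.kube/config.
# Replace the placeholder path with the one from your lens.log.
kubectl --kubeconfig "$LOCALAPPDATA/Temp/kubeconfig-direct-<hash>" get nodes
```

If this works while the Lens node shell still fails, the problem is likely in how Lens proxies the connection rather than in the kubeconfig itself.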

Also, when using the node-shell function, I see that a pod “node-shell-(guid)” is created in the “kube-system” namespace. This pod is accessible via pod shell as long as it lives, which is about 30 s. Then the terminal and the pod terminate.
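If you want to observe that short-lived pod yourself while the Lens terminal is open, something like this works (assuming your kubectl context points at the same cluster):

```shell
# Watch for the node-shell-<guid> pod Lens spawns in kube-system;
# it terminates roughly 30 s after the Lens terminal closes.
kubectl -n kube-system get pods --watch | grep node-shell
```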

Made an account just for this issue.
We’re experiencing this on AWS EKS v1.30
Lens version 2025.3.181451-beta

I cannot attach a shell to a pod; is this the same problem? Thanks

Are there any proxies / vpns required for the connectivity to the cluster?

@jhbaconshone For me, connecting to a pod shell worked. Whether it is a similar problem depends on your log message, which you should briefly see in the terminal or in the Lens logs.

No, no specific proxies/VPN. The IP range is restricted, but since the temp kubeconfig is created on my local system, I would not have expected this to be a factor.
As mentioned in the original post the problem did not occur in version “2025.1.161916-latest”, if this helps.

Sorry, but I do not understand how you can log the pod shell. The shell connection to a pod does not show up in my logs.

@jhbaconshone You can check this post to find the application logs for Lens: https://forums.k8slens.dev/t/where-can-i-find-application-logs/98
There you will find a file “lens.log”.

In my case, where pod shell works, I just get an info line stating “info: [LOCAL-SHELL]: PTY for … is started with PID=61292”.
Maybe you see different messages in your case.
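If it helps, a plain grep pulls the shell-related entries out of lens.log. The sample lines below are stand-ins I made up for illustration; run the same grep against your real lens.log instead:

```shell
# Build a tiny sample log for illustration, then filter the
# shell-related entries the same way you would on the real lens.log.
printf '%s\n' \
  'info: [LOCAL-SHELL]: PTY for abc is started with PID=61292' \
  'debug: [CLUSTER-KUBECONFIG-MAIN]: Created temp kubeconfig' \
  > /tmp/lens-sample.log
grep -E 'LOCAL-SHELL|NODE-SHELL' /tmp/lens-sample.log
```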

I can see a line like that, but about 150 ms later I get that the shell has exited for … closed with exitcode=0.

What I see is a quick flicker of the shell that disappears.

I solved my problem on my Mac by removing the kubectl symlink (it pointed to a non-existent version) and the kubectl binary inside the Lens Application Support folder. Lens downloaded kubectl again and now lets me attach a shell to pods again!
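In case anyone wants to check for the same dangling symlink before deleting anything, something like this shows where the Lens-managed kubectl points. The exact path is my assumption for macOS, so adjust it for your install:

```shell
# Show the symlink target; a dangling link points at a kubectl
# version directory that no longer exists (path is an assumption).
ls -l "$HOME/Library/Application Support/Lens/binaries/kubectl"
```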
