In my scenario I am connecting to AKS clusters.
Nodes are on version 1.30.6 (OS: Linux (amd64), OS image: Ubuntu 22.04.5 LTS).
I also see in the Lens logs that it creates a temp kubeconfig:
debug: [CLUSTER-KUBECONFIG-MAIN]: Created temp kubeconfig "..." at "...AppData\Local\Temp\kubeconfig-direct-9848c74fb5631a671336ca460e3544d4"
The content of this temp kubeconfig looks valid to me. I also tried it out separately by copying it to my .kube folder, and it was usable.
Also, when using the node-shell function, I see that it creates a pod “node-shell-(guid)” in the namespace “kube-system”. This pod is accessible via pod shell as long as it lives, which is about 30 s; then the terminal and the pod terminate.
@jhbaconshone for me, logging in to the pod shell worked. Whether it is a similar problem depends on your log message, which you should see shortly in the terminal or in the Lens logs.
No, no specific proxies/VPN. The IP range is restricted, but since the temp kubeconfig is created on my local system, I would not have expected this to be a factor.
As mentioned in the original post, the problem did not occur in version “2025.1.161916-latest”, if this helps.
In my case, where pod shell works, I just get an info line stating “info: [LOCAL-SHELL]: PTY for … is started with PID=61292”.
Maybe you see different messages in your case.
I solved my problem on my Mac by removing the kubectl symlink that pointed to a non-existent version, as well as the kubectl install inside the Lens Application Support folder. Lens downloaded kubectl again and now lets me attach a shell to pods again!
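If you want to check whether you are in the same broken state before deleting anything, here is a small sketch of the dangling-symlink test. It runs against a scratch directory so it is safe to try as-is; on a real install you would point `DIR` at Lens's kubectl binaries folder instead (on macOS, under `~/Library/Application Support/Lens`):

```shell
# Scratch directory stands in for Lens's kubectl binaries folder.
DIR=$(mktemp -d)
ln -s "$DIR/kubectl-1.29" "$DIR/kubectl"   # simulate a link to a version that no longer exists

# A symlink that exists (-L) but resolves to nothing (! -e) is dangling.
if [ -L "$DIR/kubectl" ] && [ ! -e "$DIR/kubectl" ]; then
  echo "dangling kubectl symlink found"
  rm "$DIR/kubectl"                        # Lens re-downloads on next start
fi
```

This prints `dangling kubectl symlink found` and removes the stale link; Lens then fetches a fresh kubectl the next time it needs one.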
Same problem for us.
It stopped working with recent updates.
The workaround of switching proxy settings in Lens solves the issue, but it only works once: when I try it a second time (different node, same cluster), it just hangs.
Then I just restart Lens and it works again. Very annoying.
Do we need a VPN to connect to the cluster? Yes.
Do we have a special kubeconfig? No, it is quite typical.
debug: ▪ [NODE-SHELL]: waiting for node-shell-44548936-ea12-4b92-b4c7-3ac577f1f4a6 to be running +120ms
info: ▪ [INTERNAL-PROXY]: watch (pod-3) started /api/v1/namespaces/kube-system/pods?watch=1&resourceVersion=12111828 +1ms
[59085:0409/193303.001980:ERROR:electron_api_service_impl.cc(46)] Attempted to get the 'ipcNative' object but it was missing
[59085:0409/193303.002261:ERROR:electron_api_service_impl.cc(46)] Attempted to get the 'ipcNative' object but it was missing
[59085:0409/193305.243064:ERROR:electron_api_service_impl.cc(46)] Attempted to get the 'ipcNative' object but it was missing
Lens 2025.4.92142-latest still can’t connect to a node shell, receiving:
Error occurred: Pod creation timed outfailed to open a node shell: failed to create node pod
Hello folks, I was facing the same problem here. When checking the Lens logs I saw that it was trying to download the kubectl client matching the cluster version, which makes sense, but my VPN was blocking this download. So I manually downloaded the client, put it in ~/Library/Application\ Support/Lens/binaries/kubectl/1.31/ (the version varies), and gave it execute permission with chmod; now everything is fine!
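The chmod step matters on its own: even with the binary in the right place, Lens cannot run it without the execute bit. A minimal sketch of that part, using a scratch directory and an empty stand-in file instead of the real download (in practice the destination is "$HOME/Library/Application Support/Lens/binaries/kubectl/&lt;version&gt;" and the file is the kubectl binary you fetched manually):

```shell
# Scratch stand-in for the Lens binaries folder; the version directory varies.
DEST="$(mktemp -d)/kubectl/1.31"
mkdir -p "$DEST"
: > "$DEST/kubectl"        # placeholder for the manually downloaded binary
chmod +x "$DEST/kubectl"   # without this, Lens cannot execute it
[ -x "$DEST/kubectl" ] && echo "kubectl is executable"
```

After the real download, the same `chmod +x` on the real path is all that is needed before restarting Lens.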
Could you please share a few more details to help us investigate?
Your operating system (Windows / macOS / Linux + version)
Kubernetes cluster type (EKS, GKE, Minikube, etc.)
Whether the issue happens for all pods or specific ones
Any errors visible in the Developer Console
We’re also preparing a new release very soon that includes several fixes related to terminal and shell behavior; it might already address what you’re seeing.