**What happened (please include outputs or screenshots)**:
When switching between AWS accounts and gathering information from clusters, the first account works fine but the second account fails, as though some cached data holds a reference that isn't cleared when the objects go out of scope. When run individually against each account, everything works as expected.
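For context, my working assumption is that two independently constructed configurations and clients share no state, so credentials from the first should never leak into the second. A minimal sketch of that assumed model (the hosts and tokens below are placeholders, not real values):

```python
import kubernetes

# Assumed model only, not the failing script: each Configuration/ApiClient
# pair should be independent. Hosts and tokens are placeholders.
cfg_a = kubernetes.client.Configuration(
    host="https://hostA", api_key={'authorization': 'Bearer tokenA'})
cfg_b = kubernetes.client.Configuration(
    host="https://hostB", api_key={'authorization': 'Bearer tokenB'})

client_a = kubernetes.client.ApiClient(configuration=cfg_a)
client_b = kubernetes.client.ApiClient(configuration=cfg_b)

# Expectation: nothing is shared between the two clients.
assert client_a.configuration is not client_b.configuration
```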
When using the example script against the first AWS account, it works:
```
$ python3 bin/k8s_client_test.py AWS_Profile_One
Profile: AWS_Profile_One
==== cluster_name='profile_one_cluster_one' ====
{'build_date': '2025-07-15T04:50:26Z',
.....}
```
When using the example script against the second AWS account, it works:
```
$ python3 bin/k8s_client_test.py AWS_Profile_Two
Profile: AWS_Profile_Two
==== cluster_name='profile_two_cluster_one' ====
{'build_date': '2025-07-15T04:47:57Z',
.....}
```
When using the example script against both AWS accounts, the second one specified always fails:
```
$ python3 bin/k8s_client_test.py AWS_Profile_One AWS_Profile_Two
Profile: AWS_Profile_One
==== cluster_name='profile_one_cluster_one' ====
{'build_date': '2025-07-15T04:50:26Z',
Profile: AWS_Profile_Two
==== cluster_name='profile_two_cluster_one' ====
Traceback (most recent call last):
  File "~/bin/k8s_client_test.py", line 57, in <module>
    pprint(version_api.get_code())
           ^^^^^^^^^^^^^^^^^^^^^^
  File "~/.venv/lib/python3.12/site-packages/kubernetes/client/api/version_api.py", line 61, in get_code
    return self.get_code_with_http_info(**kwargs) # noqa: E501
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/.venv/lib/python3.12/site-packages/kubernetes/client/api/version_api.py", line 128, in get_code_with_http_info
    return self.api_client.call_api(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/.venv/lib/python3.12/site-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/.venv/lib/python3.12/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
    response_data = self.request(
                    ^^^^^^^^^^^^^
  File "~/.venv/lib/python3.12/site-packages/kubernetes/client/api_client.py", line 373, in request
    return self.rest_client.GET(url,
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/.venv/lib/python3.12/site-packages/kubernetes/client/rest.py", line 244, in GET
    return self.request("GET", url,
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/.venv/lib/python3.12/site-packages/kubernetes/client/rest.py", line 238, in request
    raise ApiException(http_resp=r)
kubernetes.client.exceptions.ApiException: (401)
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'abcdefghi-deed-feef-beeb-6bcd6d59cbfd', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Fri, 29 Aug 2025 12:15:19 GMT', 'Content-Length': '129'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
```
I am aware of the connection pooling in the client and have included an explicit `close()`, but this has made no difference.
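For reference, the same shutdown can also be expressed with the client's context-manager form, which calls `close()` on exit; a minimal sketch, reusing the `kconfig` built in the script below:

```python
# Sketch: context-manager form of the same shutdown. ApiClient.__exit__
# calls close(), so the connection pool is torn down when the block ends.
with kubernetes.client.ApiClient(configuration=kconfig) as kubernetes_client:
    version_api = kubernetes.client.VersionApi(kubernetes_client)
    pprint(version_api.get_code())
```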
**What you expected to happen**:
Cluster information is retrieved successfully from each cluster when swapping between multiple AWS accounts.
**How to reproduce it (as minimally and precisely as possible)**:
The minimal script I have been able to put together; it moves on to the next AWS account after finding the first cluster in the available regions:
```python
import os
import sys
import boto3
import eks_token
import tempfile
import base64
import kubernetes
from pprint import pprint

boto3.setup_default_session()

for profile in sys.argv[1:]:
    print(f"Profile: {profile}")
    # need to set the env-var for eks_token to work correctly
    os.environ['AWS_PROFILE'] = profile
    session = boto3.Session()  # profile_name=profile
    ec2 = session.client('ec2')
    regions = [d.get('RegionName') for d in ec2.describe_regions()['Regions']]
    for region in regions:
        eks = session.client('eks', region_name=region)
        should_break = False
        for cluster_name in eks.list_clusters()['clusters']:
            print(f"==== {cluster_name=} ====")
            describe_cluster = eks.describe_cluster(name=cluster_name)['cluster']
            cluster_endpoint = describe_cluster['endpoint']
            cluster_token = eks_token.get_token(cluster_name)['status']['token']
            # write the cluster CA certificate to a temp file for the client
            cluster_cafile = tempfile.NamedTemporaryFile(delete=False)
            cadata = base64.b64decode(describe_cluster['certificateAuthority']['data'])
            cluster_cafile.write(cadata)
            cluster_cafile.flush()
            kconfig = kubernetes.config.kube_config.Configuration(
                host=cluster_endpoint,
                api_key={'authorization': 'Bearer ' + cluster_token},
            )
            kconfig.ssl_ca_cert = cluster_cafile.name
            kubernetes_client = kubernetes.client.ApiClient(configuration=kconfig)
            version_api = kubernetes.client.VersionApi(kubernetes_client)
            pprint(version_api.get_code())
            # forcibly close/shut down the connection pool
            # - makes no difference using this or not
            kubernetes_client.close()
            should_break = True
            break
        if should_break:
            break
```
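If it helps triage, here is a hypothetical debug addition (not part of the failing runs above) that could be placed just before the `pprint(version_api.get_code())` call to confirm that each iteration builds a fresh endpoint and token; variable names match the script:

```python
# Hypothetical debug aid, reusing kconfig / cluster_token from the loop above.
print(f"host: {kconfig.host}")
print(f"token suffix: ...{cluster_token[-8:]}")   # should differ per profile
print(f"api_key object id: {id(kconfig.api_key)}")  # new dict each iteration
```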
**Anything else we need to know?**:
**Environment**:
- Kubernetes version (`kubectl version`):
  Client Version: v1.32.0
  Kustomize Version: v5.5.0
  Server Version: v1.31.10-eks-931bdca
- OS (e.g., MacOS 10.13.6): Ubuntu 24.04.3 LTS
- Python version (`python --version`): Python 3.12.3
- Python client version (`pip list | grep kubernetes`): kubernetes 33.1.0