Problem
When running func deploy with invalid cluster configuration or connectivity issues, the command wastes time building the container image before failing with misleading or technical error messages.
Current Behavior
Scenario 1a: Invalid KUBECONFIG path
Command:
KUBECONFIG=/nonexistent/config func deploy --registry ghcr.io/user
Current Output:
Building function image
Still building
^CError: executing lifecycle: context canceled
Problems:
- Starts building before validating cluster connection
- Wastes time (minutes) building an image that can't be deployed
- Shows confusing "invalid run-image" error instead of cluster connection issue
- User must manually cancel (Ctrl+C) the build
Scenario 1b: Invalid KUBECONFIG path with build and push false
Command:
KUBECONFIG=/nonexistent/config func deploy --registry ghcr.io/user --build=false --push=false
Current Output:
Error: deploy error. failed to create new serving client: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Problems:
- Suggests outdated KUBERNETES_MASTER environment variable instead of KUBECONFIG
- Technical error message doesn't indicate the kubeconfig file path is invalid
- No guidance on how to fix the issue or where the config file should be located
Scenario 2a: Empty kubeconfig or no cluster configuration
Command:
# Create empty kubeconfig
cat > /tmp/empty-kubeconfig.yaml << 'EOF'
apiVersion: v1
kind: Config
clusters: []
contexts: []
users: []
current-context: ""
EOF
KUBECONFIG=/tmp/empty-kubeconfig.yaml func deploy --registry ghcr.io/user
Current Output:
Building function image
Still building
^CError: executing lifecycle: context canceled
Problems:
- Starts building before validating cluster connection
- Wastes time (minutes) building an image that can't be deployed
- Shows confusing "invalid run-image" error instead of cluster connection issue
- User must manually cancel (Ctrl+C) the build
Scenario 2b: Empty kubeconfig or no cluster configuration with build and push false
Command:
# Create empty kubeconfig
cat > /tmp/empty-kubeconfig.yaml << 'EOF'
apiVersion: v1
kind: Config
clusters: []
contexts: []
users: []
current-context: ""
EOF
KUBECONFIG=/tmp/empty-kubeconfig.yaml func deploy --registry ghcr.io/user --build=false --push=false
Current Output:
Error: deploy error. failed to create new serving client: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Problems:
- Identical error message as Scenario 1b, making it impossible to distinguish between missing file and empty configuration
- Suggests deprecated KUBERNETES_MASTER variable instead of modern cluster setup commands (minikube start, kind create cluster)
Scenario 3a: Valid kubeconfig but cluster is down
Command:
# Create kubeconfig with unreachable cluster
cat > /tmp/invalid-cluster-kubeconfig.yaml << 'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: invalid-cluster
contexts:
- context:
    cluster: invalid-cluster
    user: invalid-user
  name: invalid-context
current-context: invalid-context
users:
- name: invalid-user
  user: {}
EOF
KUBECONFIG=/tmp/invalid-cluster-kubeconfig.yaml func deploy --registry ghcr.io/user
Current Output:
Building function image
Still building
^CError: executing lifecycle: context canceled
Problems:
- Starts building before checking if cluster is accessible
- Wastes significant time and resources
- Generic error doesn't indicate cluster connectivity issue
Scenario 3b: Valid kubeconfig but cluster is down with build and push false
Command:
# Create kubeconfig with unreachable cluster
cat > /tmp/invalid-cluster-kubeconfig.yaml << 'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: invalid-cluster
contexts:
- context:
    cluster: invalid-cluster
    user: invalid-user
  name: invalid-context
current-context: invalid-context
users:
- name: invalid-user
  user: {}
EOF
KUBECONFIG=/tmp/invalid-cluster-kubeconfig.yaml func deploy --registry ghcr.io/user --build=false --push=false
Current Output:
Error: deploy error. knative deployer failed to get the Knative Service: Get "https://127.0.0.1:6443/apis/serving.knative.dev/v1/namespaces/default/services/myfunction": tls: failed to verify certificate: x509: certificate signed by unknown authority
Problems:
- Exposes technical error with full URL and certificate details that aren't helpful to most users
- Error message focuses on TLS certificate verification rather than the actual issue (cluster is down/unreachable)
- No guidance on checking cluster status or starting the cluster
Proposed Improvement
Implement early cluster validation with a two-layer error system: validate the kubeconfig first, then check cluster connectivity, before starting the build process.
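A minimal sketch of what this preflight check could look like in Go using k8s.io/client-go. The function name ValidateClusterAccess and the exact error wording are illustrative, not existing func API:

```go
// Illustrative preflight check using k8s.io/client-go; ValidateClusterAccess
// and the message wording are hypothetical, not part of the func codebase.
package deploy

import (
	"fmt"
	"os"
	"time"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

// ValidateClusterAccess runs before any image build and fails fast on the
// three scenarios above: missing kubeconfig file, empty configuration, and
// an unreachable cluster.
func ValidateClusterAccess() error {
	// Layer 1: configuration. Handle the single-path KUBECONFIG case first
	// (colon-separated path lists are omitted here for brevity).
	if path := os.Getenv("KUBECONFIG"); path != "" {
		if _, err := os.Stat(path); err != nil {
			return fmt.Errorf("invalid kubeconfig: the kubeconfig file at %q does not exist or is not accessible", path)
		}
	}
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		rules, &clientcmd.ConfigOverrides{}).ClientConfig()
	if err != nil {
		// Covers the empty-config case: client-go reports "no configuration
		// has been provided" here.
		return fmt.Errorf("cluster not accessible: no valid cluster configuration found: %w", err)
	}

	// Layer 2: connectivity. Ping the API server with a short timeout so a
	// down cluster fails in seconds rather than minutes into a build.
	cfg.Timeout = 5 * time.Second
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return fmt.Errorf("cluster not accessible: %w", err)
	}
	if _, err := dc.ServerVersion(); err != nil {
		return fmt.Errorf("cluster not accessible: cannot connect to Kubernetes cluster: %w", err)
	}
	return nil
}
```

With a check like this wired in at the top of the deploy path, all three scenarios fail in seconds with a cluster-focused message instead of part-way through a build.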
Scenario 1: Invalid KUBECONFIG path
Improved Output:
Error: invalid kubeconfig
The kubeconfig file at '/nonexistent/config' does not exist or is not accessible.
Try this:
export KUBECONFIG=~/.kube/config   # Use default kubeconfig
kubectl config view                # Verify current config
ls -la ~/.kube/config              # Check if config file exists
For more options, run 'func deploy --help'
Scenario 2: Empty kubeconfig or no cluster configuration
Improved Output:
Error: cluster not accessible
Cannot connect to Kubernetes cluster. No valid cluster configuration found.
Try this:
minikube start                # Start Minikube cluster
kind create cluster           # Start Kind cluster
kubectl cluster-info          # Verify cluster is running
kubectl config get-contexts   # List available contexts
For more options, run 'func deploy --help'
Scenario 3: Valid kubeconfig but cluster is down
Improved Output:
Error: cluster not accessible
Cannot connect to Kubernetes cluster. The configured cluster is not responding.
Try this:
minikube start                # Start Minikube cluster
kind create cluster           # Start Kind cluster
kubectl cluster-info          # Verify cluster is running
kubectl config get-contexts   # List available contexts
For more options, run 'func deploy --help'
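For reference, a small sketch of how these suggestion-style messages could be rendered uniformly across all three scenarios; the Hint type and formatClusterError are hypothetical names, not existing func API:

```go
// Hypothetical helper that renders the "Try this:" style messages shown
// above; Hint and formatClusterError are illustrative names only.
package deploy

import (
	"fmt"
	"strings"
)

// Hint pairs a suggested command with a short explanation.
type Hint struct {
	Command string
	Purpose string
}

// formatClusterError builds the error text: title, detail, aligned hints,
// and the closing pointer to --help.
func formatClusterError(title, detail string, hints []Hint) string {
	var b strings.Builder
	fmt.Fprintf(&b, "Error: %s\n%s\n\nTry this:\n", title, detail)
	for _, h := range hints {
		fmt.Fprintf(&b, "  %-28s # %s\n", h.Command, h.Purpose)
	}
	b.WriteString("\nFor more options, run 'func deploy --help'\n")
	return b.String()
}
```

Keeping the message data (title, detail, hints) separate from the rendering would let each scenario supply its own suggestions while the output format stays consistent.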