func deploy: Add early cluster validation to prevent wasted build time #3116

@RayyanSeliya

Description

Problem

When func deploy is run with an invalid cluster configuration or against an unreachable cluster, the command wastes time building the container image and then fails with a misleading or overly technical error message.

Current Behavior

Scenario 1a: Invalid KUBECONFIG path

Command:

KUBECONFIG=/nonexistent/config func deploy --registry ghcr.io/user

Current Output:

Building function image
Still building
^CError: executing lifecycle: context canceled 

Problems:

  • Starts building before validating cluster connection
  • Wastes time (minutes) building an image that can't be deployed
  • Eventually fails with a confusing build-level error rather than reporting the cluster connection issue
  • User must manually cancel (Ctrl+C) the build

Scenario 1b: Invalid KUBECONFIG path with --build=false and --push=false

Command:

KUBECONFIG=/nonexistent/config func deploy --registry ghcr.io/user --build=false --push=false

Current Output:

Error: deploy error. failed to create new serving client: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

Problems:

  • Suggests outdated KUBERNETES_MASTER environment variable instead of KUBECONFIG
  • Technical error message doesn't indicate the kubeconfig file path is invalid
  • No guidance on how to fix the issue or where the config file should be located

Scenario 2a: Empty kubeconfig or no cluster configuration

Command:

# Create empty kubeconfig
cat > /tmp/empty-kubeconfig.yaml << 'EOF'
apiVersion: v1
kind: Config
clusters: []
contexts: []
users: []
current-context: ""
EOF

KUBECONFIG=/tmp/empty-kubeconfig.yaml func deploy --registry ghcr.io/user

Current Output:

Building function image
Still building
^CError: executing lifecycle: context canceled 

Problems:

  • Starts building before validating cluster connection
  • Wastes time (minutes) building an image that can't be deployed
  • Eventually fails with a confusing build-level error rather than reporting the cluster connection issue
  • User must manually cancel (Ctrl+C) the build

Scenario 2b: Empty kubeconfig or no cluster configuration with --build=false and --push=false

Command:

# Create empty kubeconfig
cat > /tmp/empty-kubeconfig.yaml << 'EOF'
apiVersion: v1
kind: Config
clusters: []
contexts: []
users: []
current-context: ""
EOF

KUBECONFIG=/tmp/empty-kubeconfig.yaml func deploy --registry ghcr.io/user --build=false --push=false

Current Output:

Error: deploy error. failed to create new serving client: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

Problems:

  • Identical to the Scenario 1b error message, making it impossible to distinguish a missing file from an empty configuration
  • Suggests the deprecated KUBERNETES_MASTER variable instead of modern cluster setup commands (minikube start, kind create cluster); see the sketch after this list for how the two cases could be told apart
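
As a point of reference, client-go already exposes enough to tell these cases apart. The sketch below uses a hypothetical helper (classifyKubeconfig, not part of the func codebase) to map each case to a distinct message:

package validate

import (
    "fmt"
    "os"

    "k8s.io/client-go/tools/clientcmd"
)

// classifyKubeconfig returns a distinct, actionable error for a missing
// file versus a parseable-but-empty configuration.
func classifyKubeconfig(path string) error {
    // Missing or unreadable file -> Scenario 1.
    if _, err := os.Stat(path); err != nil {
        return fmt.Errorf("invalid kubeconfig: file %q does not exist or is not accessible", path)
    }
    cfg, err := clientcmd.LoadFromFile(path)
    if err != nil {
        return fmt.Errorf("invalid kubeconfig: cannot parse %q: %w", path, err)
    }
    // Parsed but defines no usable cluster -> Scenario 2.
    if len(cfg.Clusters) == 0 || cfg.CurrentContext == "" {
        return fmt.Errorf("cluster not accessible: kubeconfig %q defines no clusters or current context", path)
    }
    return nil
}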

Scenario 3a: Valid kubeconfig but cluster is down

Command:

# Create kubeconfig with unreachable cluster
cat > /tmp/invalid-cluster-kubeconfig.yaml << 'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: invalid-cluster
contexts:
- context:
    cluster: invalid-cluster
    user: invalid-user
  name: invalid-context
current-context: invalid-context
users:
- name: invalid-user
  user: {}
EOF

KUBECONFIG=/tmp/invalid-cluster-kubeconfig.yaml func deploy --registry ghcr.io/user

Current Output:

Building function image
Still building
^CError: executing lifecycle: context canceled

Problems:

  • Starts building before checking if cluster is accessible
  • Wastes significant time and resources
  • Generic error doesn't indicate cluster connectivity issue

Scenario 3b: Valid kubeconfig but cluster is down, with --build=false and --push=false

Command:

# Create kubeconfig with unreachable cluster
cat > /tmp/invalid-cluster-kubeconfig.yaml << 'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: invalid-cluster
contexts:
- context:
    cluster: invalid-cluster
    user: invalid-user
  name: invalid-context
current-context: invalid-context
users:
- name: invalid-user
  user: {}
EOF

KUBECONFIG=/tmp/invalid-cluster-kubeconfig.yaml func deploy --registry ghcr.io/user --build=false --push=false

Current Output:

Error: deploy error. knative deployer failed to get the Knative Service: Get "https://127.0.0.1:6443/apis/serving.knative.dev/v1/namespaces/default/services/myfunction": tls: failed to verify certificate: x509: certificate signed by unknown authority

Problems:

  • Exposes technical error with full URL and certificate details that aren't helpful to most users
  • Error message focuses on TLS certificate verification rather than the actual issue (cluster is down/unreachable)
  • No guidance on checking cluster status or starting the cluster

Proposed Improvement

Implement early cluster validation with a two-layer error system that verifies the kubeconfig and cluster connectivity before the build starts.
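
A minimal sketch, assuming client-go's standard loading rules, of what the two layers could look like; ValidateCluster and the error wording are illustrative rather than existing func code:

package validate

import (
    "fmt"
    "os"
    "time"

    "k8s.io/client-go/discovery"
    "k8s.io/client-go/tools/clientcmd"
)

// ValidateCluster runs before any build step. Layer 1 verifies that a usable
// kubeconfig can be loaded; layer 2 verifies the API server actually answers.
func ValidateCluster() error {
    // Layer 1: an explicitly set KUBECONFIG must point at a readable file.
    if path := os.Getenv("KUBECONFIG"); path != "" {
        if _, err := os.Stat(path); err != nil {
            return fmt.Errorf("invalid kubeconfig: the kubeconfig file at %q does not exist or is not accessible", path)
        }
    }
    cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
        clientcmd.NewDefaultClientConfigLoadingRules(),
        &clientcmd.ConfigOverrides{},
    ).ClientConfig()
    if err != nil {
        return fmt.Errorf("cluster not accessible: no valid cluster configuration found: %w", err)
    }

    // Layer 2: a cheap discovery call with a short timeout catches clusters
    // that are configured but down, before minutes are spent on a build.
    cfg.Timeout = 5 * time.Second
    dc, err := discovery.NewDiscoveryClientForConfig(cfg)
    if err != nil {
        return fmt.Errorf("cluster not accessible: %w", err)
    }
    if _, err := dc.ServerVersion(); err != nil {
        return fmt.Errorf("cluster not accessible: cannot reach the Kubernetes API server at %q: %w", cfg.Host, err)
    }
    return nil
}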

Scenario 1: Invalid KUBECONFIG path

Improved Output:

Error: invalid kubeconfig

The kubeconfig file at '/nonexistent/config' does not exist or is not accessible.        

Try this:
  export KUBECONFIG=~/.kube/config           Use default kubeconfig
  kubectl config view                        Verify current config
  ls -la ~/.kube/config                      Check if config file exists

For more options, run 'func deploy --help'

Scenario 2: Empty kubeconfig or no cluster configuration

Improved Output:

Error: cluster not accessible

Cannot connect to Kubernetes cluster. No valid cluster configuration found.

Try this:
  minikube start                             Start Minikube cluster
  kind create cluster                        Start Kind cluster
  kubectl cluster-info                       Verify cluster is running
  kubectl config get-contexts                List available contexts

For more options, run 'func deploy --help'

Scenario 3: Valid kubeconfig but cluster is down

Improved Output:

Error: cluster not accessible

Cannot connect to the Kubernetes cluster at 'https://127.0.0.1:6443'. The cluster may be down or unreachable.

Try this:
  minikube start                             Start Minikube cluster
  kind create cluster                        Start Kind cluster
  kubectl cluster-info                       Verify cluster is running
  kubectl config get-contexts                List available contexts

For more options, run 'func deploy --help'
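
In terms of wiring, the validation would simply run first in the deploy path. A sketch reusing the hypothetical ValidateCluster from above, where runDeploy stands in for the real command handler:

// runDeploy is a hypothetical stand-in for func's deploy entry point.
func runDeploy() error {
    // Fail fast: validate the cluster before any image is built or pushed.
    if err := ValidateCluster(); err != nil {
        return err
    }
    // ...existing build, push, and deploy steps continue unchanged...
    return nil
}

Keeping the discovery timeout short means the happy path pays only a negligible cost for the extra check.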
