
Commit c67db00

Merge pull request #20 from aroraharsh23/upstream/next_release
Upstream/next release

2 parents f6b0799 + 522401f

11 files changed: +139 -13 lines changed

README.md

Lines changed: 9 additions & 3 deletions
@@ -70,9 +70,15 @@ Refer the [deployment](deploy/README.md) page for running Citrix k8s node contro

 ## Supported CNIs

-- Flannel
-- Calico (only Encapsulated IPIP mode)
-- Cilium
+- flannel
+- calico
+- cilium
+- weave
+- canal
+
+## Troubleshoot
+
+After deploying CNC, if services are in the DOWN state, refer to the [troubleshooting](deploy/troubleshoot.md) page.

 ## Questions

deploy/README.md

Lines changed: 11 additions & 7 deletions
@@ -2,7 +2,7 @@

 This topic provides information on how to deploy Citrix node controller on Kubernetes and establish the route between Citrix ADC and Kubernetes nodes.

-**NOTE:** As part of the configuration, some resources will be created in the "kube-system" namespace. Hence, please make sure that the "kube-system" namespace is configurable.
+Note: CNC creates "kube-cnc-router" pods in host mode on all schedulable nodes. These router pods create a virtual network interface and program iptables on the nodes where they are scheduled. The pods need the NET_ADMIN capability to do so, so the CNC service account must have the NET_ADMIN privilege and the ability to create host-mode pods.

 Perform the following:
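Regarding the NET_ADMIN note above, a minimal sketch of the pod-spec fields such a host-mode pod needs; the names here are illustrative, not an excerpt from the actual kube-cnc-router manifest:

```yaml
# Illustrative host-mode pod requiring NET_ADMIN (hypothetical names)
apiVersion: v1
kind: Pod
metadata:
  name: kube-cnc-router-example     # hypothetical name
spec:
  hostNetwork: true                 # host mode: shares the node's network namespace
  containers:
  - name: router
    image: example.com/router:latest   # placeholder image
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]          # needed to create interfaces and program iptables
```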

@@ -28,12 +28,13 @@ Perform the following:

 | Environment Variable | Mandatory or Optional | Description |
 | -------------------- | --------------------- | ----------- |
-| NS_IP | Mandatory | Citrix k8s node controller uses this IP address to configure the Citrix ADC. The NS_IP can be any one of the following: </br></br> - **NSIP** for standalone Citrix ADC </br>- **SNIP** for high availability deployments (Ensure that management access is enabled) </br> - **CLIP** for cluster deployments |
+| NS_IP | Mandatory | Citrix Kubernetes node controller uses this IP address to configure the Citrix ADC. The NS_IP can be any one of the following: </br></br> - **NSIP** for standalone Citrix ADC </br>- **SNIP** for high availability deployments (Ensure that management access is enabled) </br> - **CLIP** for cluster deployments |
 | NS_USER and NS_PASSWORD | Mandatory | The user name and password of Citrix ADC. Citrix k8s node controller uses these credentials to authenticate with Citrix ADC. You can either provide the user name and password or Kubernetes secrets. If you want to use a non-default Citrix ADC user name and password, you can [create a system user account in Citrix ADC](https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/deploy/deploy-cic-yaml/#create-system-user-account-for-citrix-ingress-controller-in-citrix-adc). </br></br> The deployment file uses Kubernetes secrets; create a secret for the user name and password using the following command: </br></br> `kubectl create secret generic nslogin --from-literal=username='nsroot' --from-literal=password='nsroot'` </br></br> **Note**: If you want to use a different secret name other than `nslogin`, ensure that you update the `name` field in the `citrix-node-controller` definition. |
 | NETWORK | Mandatory | The IP address range (for example, `192.128.1.0/24`) that Citrix node controller uses to configure the VTEP overlay end points on the Kubernetes nodes. </br></br> **Note:** Ensure that the subnet that you provide is different from your Kubernetes cluster.|
 | VNID | Mandatory | A unique VXLAN VNID to create a VXLAN overlay between the Kubernetes cluster and the ingress devices. </br></br>**Note:** Ensure that the VXLAN VNID that you use does not conflict with the Kubernetes cluster or Citrix ADC VXLAN VNID. You can use the `show vxlan` command on your Citrix ADC to view the VXLAN VNID. For example: </br></br> `show vxlan` </br>`1) ID: 500 Port: 9090`</br>`Done` </br> </br>In this case, ensure that you do not use `500` as the VXLAN VNID.|
 | VXLAN_PORT | Mandatory | The VXLAN port that you want to use for the overlay. </br></br>**Note:** Ensure that the VXLAN PORT that you use does not conflict with the Kubernetes cluster or Citrix ADC VXLAN PORT. You can use the `show vxlan` command on your Citrix ADC to view the VXLAN PORT. For example: </br></br> `show vxlan` </br>`1) ID: 500 Port: 9090`</br>`Done` </br> </br>In this case, ensure that you do not use `9090` as the VXLAN PORT.|
 | REMOTE_VTEPIP | Mandatory | The ingress Citrix ADC SNIP. This IP address is used to establish an overlay network between the Kubernetes clusters.|
+| CNI_TYPE | Mandatory | The CNI used in the Kubernetes cluster. Valid values: flannel, calico, canal, weave, cilium |
 | DSR_IP_RANGE | Optional | This IP address range is used for the DSR iptables configuration on nodes. Both IP and subnet must be specified in the format "xx.xx.xx.xx/xx". |
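The new CNI_TYPE variable is set in the controller's env section of the deployment manifest (shown later in this commit with a placeholder value). For example, on a flannel cluster it would look like this; the value here is chosen purely for illustration:

```yaml
        env:
        - name: CNI_TYPE
          value: "flannel"   # one of: flannel, calico, canal, weave, cilium
```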
@@ -62,14 +63,17 @@ The highlights in the screenshot show the VXLAN VNID, VXLAN PORT, SNIP, route, a

 Apart from the "citrix-node-controller" deployment, some other resources are also created.

-- In the "kube-system" namespace:
+- In the namespace where CNC was deployed:
   - For each worker node, a "kube-cnc-router" pod.
   - A configmap "kube-cnc-router".
-  - A serviceaccount "kube-cnc-router"
-  - A clusterrole "kube-cnc-router"
-  - A clusterrolebinding "kube-cnc-router"

-![Verification](../images/k8s_deployments.png)
+<img src="../images/kube_cnc_router.png" width="600" height="300">
+
+On each of the worker nodes, an interface "cncvxlan<hash-of-namespace>" and an iptables rule are created.
+
+<img src="../images/worker-1.png" width="600" height="300">
+<img src="../images/worker-2.png" width="600" height="300">

 # Delete the Citrix K8s node controller
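To confirm the resources listed in the hunk above after deployment, a simple hedged check (plain name matching, since no label selector is documented for these pods):

```
kubectl get pods,configmaps -n <namespace> | grep kube-cnc-router
```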

deploy/citrix-k8s-node-controller.yaml

Lines changed: 5 additions & 3 deletions
@@ -72,14 +72,14 @@ roleRef:
 subjects:
 - kind: ServiceAccount
   name: kube-cnc-router
-  namespace: kube-system
+  namespace: default
 apiVersion: rbac.authorization.k8s.io/v1
 ---
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-cnc-router
-  namespace: kube-system
+  namespace: default

 ---
 apiVersion: apps/v1
@@ -99,7 +99,7 @@ spec:
       serviceAccountName: citrix-node-controller
       containers:
       - name: citrix-node-controller
-        image: "quay.io/citrix/citrix-k8s-node-controller:2.2.1"
+        image: "quay.io/citrix/citrix-k8s-node-controller:2.2.2"
         imagePullPolicy: Always
         env:
        - name: NS_IP
@@ -122,3 +122,5 @@ spec:
           value: "3267"
         - name: VNID
           value: "300"
+        - name: CNI_TYPE
+          value: "xxxx"

deploy/configmap_tolerations.yaml

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: citrix-node-controller
data:
  key: "xxx"
  operator: "xxx"
  value: "xxx"
  effect: "xxx"
  tolerationseconds: "xxx"

deploy/troubleshoot.md

Lines changed: 103 additions & 0 deletions
@@ -0,0 +1,103 @@
# CNC troubleshooting

This document explains how to troubleshoot issues that you may encounter while using the Citrix Kubernetes node controller (CNC). Use it to collect logs, determine causes, and apply workarounds for some of the common issues related to the CNC configuration.

To validate Citrix ADC and the basic node configurations, refer to the image on the [deployment](README.md) page.

### Service status DOWN

To debug issues when the service is in the DOWN state:

1. Verify the logs of the CNC pod using the following command:

   ```
   kubectl logs <cnc-pod> -n <namespace>
   ```

   Check for any 'permission' errors in the logs. CNC creates kube-cnc-router pods, which require the NET_ADMIN privilege to perform the configuration on the nodes. So, the CNC service account must have the NET_ADMIN privilege and the ability to create host-mode kube-cnc-router pods.
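   To check the second requirement quickly, assuming the CNC deployment uses the citrix-node-controller service account from the sample manifest:

   ```
   kubectl auth can-i create pods -n <namespace> \
       --as=system:serviceaccount:<namespace>:citrix-node-controller
   ```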
2. Verify the logs of the kube-cnc-router pod using the following command:

   ```
   kubectl logs <kube-cnc-pod> -n <namespace>
   ```

   Check for any errors in the node configuration. The following is a typical router pod log:

   <img src="../images/router-pod-log.png" width="600" height="300">

3. Verify the kube-cnc-router configmap output using the following command:

   ```
   kubectl get configmaps -n <namespace> kube-cnc-router -o yaml
   ```

   Check for empty fields in the data section of the configmap. The following is a typical two-node data section:

   <img src="../images/router-cmap-data.png" width="600" height="300">
4. Verify the node configuration and make sure of the following:
   - The CNC interface cncvxlan<md5_of_namespace> is created.
   - The assigned VTEP IP address is the same as the corresponding route gateway entry in Citrix ADC.
   - The status of the interface is up and functioning.
   - The iptables rule for the VXLAN port is created.
   - The port is the same as that of the VXLAN created on Citrix ADC.

   A command sketch for these checks follows the screenshot below.

   <img src="../images/worker-1.png" width="600" height="300">
### Service status is UP, but ping from Citrix ADC is not working

If you are not able to ping the service IP address from Citrix ADC even though the services are in an operational state, one reason may be the presence of a PBR entry that directs packets from Citrix ADC with source IP as NSIP to the default gateway.

This does not impact any functionality. You can use the VTEP of Citrix ADC as the source IP address with the -S option of the ping command in the Citrix ADC command line interface. For example:

```
ping <serviceIP> -S <vtepIP>
```

Note: If it is necessary to ping with the NSIP itself, you must remove the PBR entry or add a new PBR entry for the endpoint with higher priority.

### cURL to the pod endpoint or VIP is not working

If services are in the UP state but you still cannot cURL to the pod endpoint, the stateful TCP session to the endpoint is failing. One reason may be that the ns mode 'MBF' (MAC-based forwarding) is enabled. This depends on the deployment and might occur only on certain versions of Citrix ADC.
To resolve this issue, disable the MBF ns mode, or bind a netprofile with MBF disabled to the servicegroup. See the CLI sketch below.
Note: If disabling MBF resolves the issue, it should be kept disabled.
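A sketch of the two remediations on the Citrix ADC CLI; the netprofile name is a placeholder, and the MBF parameter on the netprofile is assumed from standard Citrix ADC configuration:

```
disable ns mode mbf

add netProfile mbf_off -MBF DISABLED
set servicegroup <servicegroup-name> -netProfile mbf_off
```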
## Customer support

For general support, when you raise issues, provide the following details to help with faster debugging.
Run cURL or ping from Citrix ADC to the endpoint, and collect the following:

For the node, provide the output of the following commands:

1. tcpdump capture on the CNC interface on the nodes:
   ```
   tcpdump -i cncvxlan<hash_of_namespace> -w cncvxlan.pcap
   ```
2. tcpdump capture on the node management interface, say "eth0":
   ```
   tcpdump -i eth0 -w mgmt.pcap
   ```
3. tcpdump capture on the CNI interface, say "vxlan.calico":
   ```
   tcpdump -i vxlan.calico -w cni.pcap
   ```
4. Output of "ifconfig -a" on the node.
5. Output of "iptables -L" on the node.

For ADC, provide the output of the following show commands:
1. show ip
2. show vxlan <vxlan_id>
3. show route
4. show arp
5. show bridgetable
6. show ns pbrs
7. show ns bridgetable
8. show ns mode
9. Try and capture an nstrace while running the ping/cURL:
   ```
   start nstrace -size 0 -mode rx new_rx txb tx -capsslkeys ENABLED
   ```
   ```
   stop nstrace
   ```

images/k8s_deployments.png: binary file removed (-591 KB)

images/kube_cnc_router.png: binary file added (142 KB)

images/router-cmap-data.png: binary file added (178 KB)

images/router-pod-log.png: binary file added (205 KB)

images/worker-1.png: binary file added (207 KB)
