ClusterIP under the ipvs proxy mode

In the iptables proxy mode, every Service generates a set of iptables rules; as the number of Services grows, so does the number of rules, which has a considerable impact on performance. The ipvs proxy mode is much simpler than iptables in this regard.

In ipvs proxy mode, kube-proxy creates a virtual interface named kube-ipvs0 on each node and configures the ClusterIP and ExternalIP of every Service object in the cluster on that interface; kube-proxy then generates a Virtual Server definition for each Service.
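
Conceptually, what kube-proxy programs for each Service is roughly equivalent to the following ipvsadm commands (an illustrative sketch only: kube-proxy talks to the kernel directly rather than shelling out to ipvsadm, and the ClusterIP and Pod endpoints below are placeholders taken from the outputs later in this section):

# define a virtual server for the Service ClusterIP with round-robin scheduling
ipvsadm -A -t 10.97.72.1:80 -s rr
# add each ready endpoint as a real server, forwarded via NAT (masquerading)
ipvsadm -a -t 10.97.72.1:80 -r 10.244.1.4:80 -m
ipvsadm -a -t 10.97.72.1:80 -r 10.244.2.3:80 -m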

ipvs forwarding type: NAT is used by default; only a handful of iptables rules are needed on top of it, for things such as source address and port translation.
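
Before switching to ipvs mode it is worth confirming that the ipvs kernel modules are available on every node (a quick sanity check; the exact module list varies with the kernel version):

# show the ipvs-related modules that are already loaded
lsmod | grep -e ip_vs -e nf_conntrack
# load them manually if any are missing
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe $m; done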

Switching the proxy mode from iptables to ipvs

Cloud-native applications support changing configuration directly through environment variables, or dynamically reloading after non-critical settings in a configuration file are modified. Once a change is made, the application reloads the information from a configuration center automatically and users never notice it. All we need to do is provide that configuration center and replace the relevant settings there; after a short while the change takes effect on its own.

Kubernetes provides a very important resource, ConfigMap, which acts as such a configuration center for the applications running in all Pods.
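
For example, a simple ConfigMap can be created imperatively and then consumed by Pods as environment variables or mounted files (myapp-config and its keys are made-up names used purely for illustration):

# create a ConfigMap holding two key/value settings
kubectl create configmap myapp-config --from-literal=log_level=info --from-literal=feature_x=enabled
# inspect the result
kubectl get configmap myapp-config -o yaml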

The proxy mode is a feature of kube-proxy. In a kubeadm-deployed cluster, kube-proxy is deployed as a cluster add-on and its configuration file lives in the kube-system namespace.

# The kube-proxy ConfigMap in the kube-system namespace
root@k8s-master01:~# kubectl get configmaps kube-proxy -n kube-system
NAME DATA AGE
kube-proxy 2 12d

1. Edit the kube-proxy ConfigMap
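
The object can be opened in an editor with kubectl edit (alternatively, dump it with -o yaml, edit it offline and re-apply it):

root@k8s-master01:~# kubectl edit configmap kube-proxy -n kube-system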

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables: # settings specific to the iptables proxy mode
      masqueradeAll: false # false: SNAT only traffic from outside the cluster or whose source address is the node itself; true: SNAT all traffic
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: "" # scheduling algorithm; if not set, Kubernetes defaults to round robin (rr)
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs" # leaving mode empty means iptables; change it to ipvs here
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://kube-api:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  annotations:
    kubeadm.kubernetes.io/component-config.hash: sha256:1ea36ed7ad141bac9baf431205bd45f2383ab0257b11569ca33dbe9255c70197
  creationTimestamp: "2021-06-28T10:53:47Z"
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "274"
  uid: 5e6d2b07-cb9a-4e72-a67b-3ee5305ea559

# Save and exit when done
configmap/kube-proxy edited
# The 'edited' message indicates the change was saved successfully

2. The change takes effect after a short while, at which point the kube-ipvs0 interface becomes visible. If it does not take effect for a long time, delete the kube-proxy Pods manually to force them to re-read the configuration.
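
A rolling restart of the kube-proxy DaemonSet is a gentler way to force the reload (assuming kubectl v1.15 or later, where rollout restart supports DaemonSets); the walkthrough below uses plain Pod deletion instead:

root@k8s-master01:~# kubectl rollout restart daemonset kube-proxy -n kube-system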

# Delete the kube-proxy Pods so that new ones are created with the new configuration
# All of them are deleted at once here; in production, delete them gradually (canary style), or simply set the ipvs proxy mode when the cluster is first deployed
root@k8s-master01:~# kubectl get pods -n kube-system --show-labels -l "k8s-app=kube-proxy"
NAME READY STATUS RESTARTS AGE LABELS
kube-proxy-5splm 1/1 Running 0 12d controller-revision-hash=bb6f59455,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-d67gm 1/1 Running 0 12d controller-revision-hash=bb6f59455,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-p8md8 1/1 Running 0 12d controller-revision-hash=bb6f59455,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-tf5pd 1/1 Running 0 12d controller-revision-hash=bb6f59455,k8s-app=kube-proxy,pod-template-generation=1
root@k8s-master01:~# kubectl delete pods -n kube-system -l "k8s-app=kube-proxy"
pod "kube-proxy-5splm" deleted
pod "kube-proxy-d67gm" deleted
pod "kube-proxy-p8md8" deleted
pod "kube-proxy-tf5pd" deleted

# Check again: the Pods have been recreated
root@k8s-master01:~# kubectl get pods -n kube-system --show-labels -l "k8s-app=kube-proxy"
NAME READY STATUS RESTARTS AGE LABELS
kube-proxy-82ph8 1/1 Running 0 22s controller-revision-hash=bb6f59455,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-lpw4v 1/1 Running 0 21s controller-revision-hash=bb6f59455,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-m89z4 1/1 Running 0 15s controller-revision-hash=bb6f59455,k8s-app=kube-proxy,pod-template-generation=1
kube-proxy-rr8r2 1/1 Running 0 20s controller-revision-hash=bb6f59455,k8s-app=kube-proxy,pod-template-generation=1
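
To confirm that the recreated Pods really came up in ipvs mode, check their logs: on startup kube-proxy reports which proxier it is using (the exact wording of the message varies between versions):

root@k8s-master01:~# kubectl logs -n kube-system -l k8s-app=kube-proxy --tail=50 | grep -i ipvs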

3. Check whether the kube-ipvs0 interface has been created

root@k8s-master01:~# ifconfig kube-ipvs0
kube-ipvs0: flags=130<BROADCAST,NOARP> mtu 1500
inet 10.98.79.128 netmask 255.255.255.255 broadcast 0.0.0.0
ether 8a:0b:01:0b:4f:04 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

4. The address of every Service is configured on the kube-ipvs0 interface

root@k8s-master01:~# ip addr show kube-ipvs0
30: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether 8a:0b:01:0b:4f:04 brd ff:ff:ff:ff:ff:ff
inet 10.98.79.128/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.0.10/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.97.72.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.103.164.125/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.141.139/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 172.16.11.75/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.102.104.28/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.104.238.122/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.98.184.16/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.104.237.74/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.104.57.104/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.101.148.66/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.97.181.65/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.0.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.107.36.136/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.111.8.128/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.104.124.18/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 172.16.11.72/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.97.56.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.98.63.248/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 172.16.11.73/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
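
These addresses can be cross-checked against the ClusterIP and external IP columns of the Service objects themselves:

root@k8s-master01:~# kubectl get svc -A -o wide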

5. Inspect the rules with ipvsadm

# Every Service is now expressed as a set of ipvs rules.
root@k8s-master01:~# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.11.71:30373 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.16.11.71:31156 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.16.11.71:31398 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.16.11.71:31828 rr
-> 10.244.3.54:80 Masq 1 0 0
TCP 172.16.11.72:80 rr
-> 10.244.3.4:8000 Masq 1 0 0
TCP 172.16.11.72:30373 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.16.11.72:31156 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.16.11.72:31398 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.16.11.72:31828 rr
-> 10.244.3.54:80 Masq 1 0 0
TCP 172.16.11.73:80 rr
-> 10.244.3.53:80 Masq 1 0 0
TCP 172.16.11.73:30373 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.16.11.73:31156 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.16.11.73:31398 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.16.11.73:31828 rr
-> 10.244.3.54:80 Masq 1 0 0
TCP 172.16.11.75:80 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.16.11.75:30373 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.16.11.75:31156 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.16.11.75:31398 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.16.11.75:31828 rr
-> 10.244.3.54:80 Masq 1 0 0
TCP 172.17.0.1:30373 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.17.0.1:31156 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 172.17.0.1:31398 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 10.96.0.1:443 rr
-> 172.16.11.71:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 10.244.0.2:53 Masq 1 0 0
-> 10.244.0.3:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.0.2:9153 Masq 1 0 0
-> 10.244.0.3:9153 Masq 1 0 0
TCP 10.96.141.139:80 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 10.97.56.1:80 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 10.97.72.1:80 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 10.97.181.65:80 rr
-> 10.244.3.26:80 Masq 1 0 0
-> 10.244.3.27:80 Masq 1 0 0
TCP 10.98.63.248:80 rr
-> 10.244.3.53:80 Masq 1 0 0
TCP 10.98.79.128:80 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 10.98.184.16:12345 rr
-> 10.244.1.14:12345 Masq 1 0 0
-> 10.244.3.11:12345 Masq 1 0 0
-> 10.244.3.12:12345 Masq 1 0 0
TCP 10.101.148.66:80 rr
-> 10.244.3.54:80 Masq 1 0 0
TCP 10.102.104.28:80 rr
-> 10.244.3.46:80 Masq 1 0 0
TCP 10.103.164.125:9500 rr persistent 10800
-> 10.244.1.5:9500 Masq 1 0 0
-> 10.244.2.4:9500 Masq 1 0 0
-> 10.244.3.3:9500 Masq 1 0 0
TCP 10.104.57.104:12345 rr
-> 10.244.1.13:12345 Masq 1 0 0
-> 10.244.2.11:12345 Masq 1 0 0
-> 10.244.3.9:12345 Masq 1 0 0
TCP 10.104.124.18:80 rr
-> 10.244.3.4:8000 Masq 1 0 0
TCP 10.104.237.74:3306 rr
-> 172.16.11.79:3306 Masq 1 0 0
TCP 10.104.238.122:12345 rr
-> 10.244.1.10:12345 Masq 1 0 0
-> 10.244.1.11:12345 Masq 1 0 0
-> 10.244.3.8:12345 Masq 1 0 0
TCP 10.107.36.136:12345 rr
-> 10.244.1.12:12345 Masq 1 0 0
-> 10.244.2.9:12345 Masq 1 0 0
-> 10.244.2.10:12345 Masq 1 0 0
TCP 10.111.8.128:80 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 10.244.0.0:30373 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 10.244.0.0:31156 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 10.244.0.0:31398 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 10.244.0.0:31828 rr
-> 10.244.3.54:80 Masq 1 0 0
TCP 10.244.0.1:30373 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 10.244.0.1:31156 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 10.244.0.1:31398 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 127.0.0.1:30373 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 127.0.0.1:31156 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 127.0.0.1:31398 rr
-> 172.16.11.81:80 Masq 1 0 0
-> 10.244.1.4:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
-> 10.244.3.2:80 Masq 1 0 0
TCP 127.0.0.1:31828 rr
-> 10.244.3.54:80 Masq 1 0 0
TCP 172.17.0.1:31828 rr
-> 10.244.3.54:80 Masq 1 0 0
TCP 10.244.0.1:31828 rr
-> 10.244.3.54:80 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.0.2:53 Masq 1 0 0
-> 10.244.0.3:53 Masq 1 0 0
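
A single virtual server can also be inspected by specifying its address, for example the kube-apiserver Service listed above:

root@k8s-master01:~# ipvsadm -Ln -t 10.96.0.1:443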

6. Check the iptables rules again

root@k8s-master01:~# iptables -t nat -S 
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-N KUBE-FIREWALL
-N KUBE-KUBELET-CANARY
-N KUBE-LOAD-BALANCER
-N KUBE-MARK-DROP
-N KUBE-MARK-MASQ
-N KUBE-NODE-PORT
-N KUBE-POSTROUTING
-N KUBE-SERVICES
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/24 -j RETURN
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE --random-fully
-A DOCKER -i docker0 -j RETURN
-A KUBE-FIREWALL -j KUBE-MARK-DROP
-A KUBE-LOAD-BALANCER -j KUBE-MARK-MASQ
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODE-PORT -p tcp -m comment --comment "Kubernetes nodeport TCP port for masquerade purpose" -m set --match-set KUBE-NODE-PORT-TCP dst -j KUBE-MARK-MASQ
-A KUBE-POSTROUTING -m comment --comment "Kubernetes endpoints dst ip:port, source ip for solving hairpin purpose" -m set --match-set KUBE-LOOP-BACK dst,dst,src -j MASQUERADE
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SERVICES ! -s 10.244.0.0/16 -m comment --comment "Kubernetes service cluster ip + port for masquerade purpose" -m set --match-set KUBE-CLUSTER-IP dst,dst -j KUBE-MARK-MASQ
-A KUBE-SERVICES -m comment --comment "Kubernetes service external ip + port for masquerade and filter purpose" -m set --match-set KUBE-EXTERNAL-IP dst,dst -j KUBE-MARK-MASQ
-A KUBE-SERVICES -m comment --comment "Kubernetes service external ip + port for masquerade and filter purpose" -m set --match-set KUBE-EXTERNAL-IP dst,dst -m physdev ! --physdev-is-in -m addrtype ! --src-type LOCAL -j ACCEPT
-A KUBE-SERVICES -m comment --comment "Kubernetes service external ip + port for masquerade and filter purpose" -m set --match-set KUBE-EXTERNAL-IP dst,dst -m addrtype --dst-type LOCAL -j ACCEPT
-A KUBE-SERVICES -m addrtype --dst-type LOCAL -j KUBE-NODE-PORT
-A KUBE-SERVICES -m set --match-set KUBE-CLUSTER-IP dst,dst -j ACCEPT
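
Compared with the iptables proxy mode, the rule count no longer grows with the number of Services: instead of one chain per Service, kube-proxy in ipvs mode matches Service addresses through the ipset sets referenced above (KUBE-CLUSTER-IP, KUBE-EXTERNAL-IP, KUBE-NODE-PORT-TCP, KUBE-LOOP-BACK). Their contents can be examined with ipset, for example:

# list all ClusterIP,port pairs tracked by kube-proxy
root@k8s-master01:~# ipset list KUBE-CLUSTER-IP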