Ingress: Usage and Principles
Preface
We know that the Pods at the back end are what actually serve traffic, but for load balancing, for domain names, for ..., the Service was born, and later the Ingress was born. So why do we need Ingress at all? First, let's see what the official docs say:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting.
So Ingress mainly serves four purposes:
1) Give a Service inside the cluster an externally reachable URL, i.e. let clients outside the cluster reach it. (wxy: a NodePort-type Service can do this too, more on that later)
2) Provide proper load balancing, since the load balancing a Service does is still fairly rudimentary.
3) Terminate SSL/TLS. That is, for workloads that do not serve HTTPS themselves, a dedicated component can take care of security so the workload only has to focus on its business logic; you could say it "filters out" SSL/TLS for the backend.
4) Name-based virtual hosting. My understanding is that this is what we mean when we say Ingress works at the application layer: an Ingress is not tied to a single workload/Service, but can tell different "hosts" apart by name.... (wxy: this should become clearer as we read on)
So, are those four functions implemented by the Ingress itself? Actually no (so strictly speaking they are not "what Ingress does"); an Ingress Controller is what actually implements them. The Ingress merely acts as the "spokesperson" for the in-cluster Services and talks to the Ingress Controller: it is essentially a registration with the IC, telling it "forward traffic for me according to these rules", i.e. expose my Service externally in the four ways listed above.
OK, let's see how to actually put Ingress to use.
Part 1: Create the application's Pod/Service
1. Basic information about the application Pod:
# kubectl get pods -ncattle-system-my -oyaml rancher-57f75c44f4-2mrz6
...
  containers:
  - args:
    - --http-listen-port=80
    - --https-listen-port=443
    - --add-local=auto
    ...
    name: rancher
    ports:
    - containerPort: 80   ---this means the application container exposes port 80
      protocol: TCP
...
The actual port used inside the container:
sh-4.4# cat /usr/bin/entrypoint.sh
#!/bin/bash
set -e
exec tini -- rancher --http-listen-port=80 --https-listen-port=443 --audit-log-path=${AUDIT_LOG_PATH} --audit-level=${AUDIT_LEVEL} --audit-log-maxage=${AUDIT_LOG_MAXAGE} --audit-log-maxbackup=${AUDIT_LOG_MAXBACKUP} --audit-log-maxsize=${AUDIT_LOG_MAXSIZE} "${@}"
2. The application's Service at this point:
# kubectl get svc -ncattle-system-my -oyaml
...
  spec:
    clusterIP: 10.105.53.47
    ports:
    - name: http
      port: 80          ---it only needs to serve plain HTTP on port 80, because the ingress will terminate HTTPS for us
      protocol: TCP
      targetPort: 80
    selector:
      app: rancher
    type: ClusterIP     ---note: if you want to use Ingress, ClusterIP is normally enough for the Service type; see the summary later
  status:
    loadBalancer: {}    ---the status at this point, i.e. no load balancing has been set up
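For reference, a minimal manifest that would produce a Service like the one above might look as follows (a sketch only; the names and namespace are taken from this example):
apiVersion: v1
kind: Service
metadata:
  name: rancher
  namespace: cattle-system-my
spec:
  type: ClusterIP          # ClusterIP is enough when an Ingress sits in front of the Service
  selector:
    app: rancher           # must match the labels on the rancher Pods
  ports:
  - name: http
    port: 80               # port exposed on the ClusterIP
    targetPort: 80         # the containerPort of the rancher container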
Part 2: Create the Ingress and configure rules for the application Service
# kubectl get ingress -ncattle-system-my -oyaml
...
  spec:
    rules:
    - host: st.org            ---rule 1: the host, i.e. the domain name, this rule applies to;
      http:                      this rule is for the Service named rancher created above, and traffic will go to port 80 of that Service
        paths:
        - path: /example      ---may be omitted
          backend:
            serviceName: rancher
            servicePort: 80
    - host: bar.foo           ---this one is here to illustrate what "name based virtual hosting" means
      http:
        paths:
        - backend:
            serviceName: service2
            servicePort: 80
    tls:                      ---for HTTPS; the certificate to use lives in the Secret named tls-rancher-ingress
    - hosts:
      - st.org
      secretName: tls-rancher-ingress
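The tls-rancher-ingress Secret referenced above must exist in the same namespace as the Ingress. Assuming the certificate and key are on disk as tls.crt and tls.key (the file names here are just an assumption), it can be created like this:
# kubectl create secret tls tls-rancher-ingress -ncattle-system-my --cert=tls.crt --key=tls.key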
0. First, how the official docs describe the meaning of the Ingress fields:
The Ingress  has all the information needed to configure a load balancer or proxy server.
Ingress resource only supports rules for directing HTTP traffic.
That is: all of this information is consumed by the real load balancer / proxy server, and for now the rules only cover forwarding of HTTP traffic; HTTPS is HTTP + TLS, so it is still HTTP-based.
A rule has two parts: the http rule, and the tls rule (only needed for HTTPS).
1. Each http rule carries three pieces of information:
host (optional): which host this rule applies to. If it is not set, the rule applies to all inbound HTTP traffic through the IP address specified, i.e. every inbound HTTP request arriving at that IP.
A list of paths: each path is paired with a serviceName and a servicePort. When the LB receives incoming traffic, it forwards it to the backend Service only if the content of the request matches both the host and the path. Note that the "path" field may be omitted, in which case it means the root path "/".
backend: the real backend Service, i.e. the Service that serviceName and servicePort point to.
A note on "Name-based virtual hosts support routing HTTP traffic to multiple host names at the same IP address":
a single proxy machine can forward traffic on behalf of multiple services, as long as their hosts differ; "host" here can be understood as the domain name.
2. tls rule
Some points about Ingress TLS:
1) It only supports a single TLS port, 443, and assumes TLS termination.
2) As you can see, the tls section can also specify hosts. If these differ from the hosts in the http rule section, they are multiplexed on the same port according to the hostname specified through the SNI TLS extension (provided the Ingress controller supports SNI).    ---what exactly SNI is, I'll look into later
3) The Secret referenced by tls must contain keys named tls.crt and tls.key, which hold the certificate and the private key to use for TLS.
In addition, the certificate must contain a CN that matches the host configured under hosts in the Ingress.
The certificate in the Ingress's Secret is for the controller: it tells the controller to use this certificate when performing the TLS handshake with clients.
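To double-check that the Secret really carries a certificate whose CN matches the host in the Ingress, something along these lines can be used (namespace and Secret name taken from this example):
# kubectl get secret tls-rancher-ingress -ncattle-system-my -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject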
4) About load balancing:
An Ingress controller is bootstrapped with some load balancing policy settings that it applies to all Ingress, such as the load balancing algorithm, backend weight scheme, and others. More advanced load balancing concepts (e.g. persistent sessions, dynamic weights) are not yet exposed through the Ingress. You can instead get these features through the load balancer used for a Service.
Interpretation: an IC may load some load-balancing policy settings at startup, and that policy then applies to every Ingress behind it.
However, configuring the IC's load-balancing policy through an Ingress is not supported yet; more advanced load-balancing behaviour has to be obtained through the load balancer used for the Service.
wxy: so having the ingress controller do sophisticated load balancing for the workload via the Ingress probably won't work; let the workload's own Service take care of that.
Part 3: Create an Ingress Controller to implement the Ingress
0. The official docs describe the Ingress Controller like this:
You may deploy any number of ingress controllers within a cluster. When you create an ingress, you should
annotate each ingress with the appropriate ingress.class to indicate which ingress controller should be used
if more than one exists within your cluster.
If you do not define a class, your cloud provider may use a default ingress controller.
Ideally, all ingress controllers should fulfill this specification, but the various ingress controllers operate slightly differently.
Note: Make sure you review your ingress controller’s documentation to understand the caveats of choosing it.
An admin can deploy an Ingress controller such that it only satisfies Ingress from a given namespace,
but by default, controllers will watch the entire Kubernetes cluster for unsatisfied Ingress.
Interpretation:
To make an Ingress actually work, the cluster needs an Ingress controller (this kind of controller is different from the controllers managed by kube-controller-manager: how it does its job is user-defined; the controllers currently supported by the project are the GCE and nginx controllers, and such a controller itself typically runs as a Deployment). A cluster may also run several ingress controllers, in which case your Ingress has to use the ingress.class annotation to say which IC it wants; if you do not define a class, your cloud provider may use a default IC. Note that newer versions of Kubernetes have replaced the annotation with the ingressClassName field.
Although an admin can deploy an IC that only satisfies Ingresses from a given namespace, by default a controller watches the entire Kubernetes cluster for unsatisfied Ingresses.
wxy: ingress.class is covered in detail in a later section; since there is only one ingress controller here, we ignore it for now.
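For reference, the two ways of binding an Ingress to a specific controller look roughly like this (a sketch; the class name "nginx" matches the nginx controller used below):
# Before v1.18: an annotation on the Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
# From v1.18 on: an IngressClass object plus the ingressClassName field in the Ingress spec
spec:
  ingressClassName: nginx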
In my hands-on test, the Ingress and the Ingress controller work together like this:
First, the Ingress is created, with its http rule (the path part) and tls rule (the certificate part) ready.
Then an Ingress controller of some type, e.g. nginx, is created; assume this IC is configured to be responsible for the whole cluster (the default configuration from the nginx site does exactly that).
The Ingress controller then watches Ingresses across the whole cluster, reads each Ingress, adds its information to its own proxy rules, and updates the relevant fields of the Ingress.
The experiment in detail:
0. The official nginx controller describes itself like this:
ingress-nginx is an Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer.
Nginx is configured to automatically discover all ingress with the kubernetes.io/ingress.class: "nginx" annotation.
Please note that the ingress resource should be placed inside the same namespace of the backend resource.
Interpretation: the IC automatically discovers every Ingress carrying the "nginx" annotation; also note that the Ingress resource should be placed in the same namespace as the backend (Service) it fronts.
wxy: the IC and the Ingress do not need to share a namespace; the IC is usually cluster-wide.....
1. Create the Ingress Controller and its related resources
#kubectl apply -f ./mandatory.yaml
The main content of mandatory.yaml is as follows:
[root@node213 wxy]# cat mandatory.yaml
---
#0. First create a dedicated namespace
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
#1. Create three ConfigMaps; all of them are consumed by the controller (i.e. nginx) and are empty by default
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  ...
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  ...
---
#2. Create a ServiceAccount for the controller and grant it permissions, including watch on Ingress resources and update on their status
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
...
- apiGroups:
  - "extensions"
  - "networking.k8s.io"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch            ---may watch Ingresses across the whole cluster
- apiGroups:
  - "extensions"
  - "networking.k8s.io"
  resources:
  - ingresses/status
  verbs:
  - update           ---and may update the status of an Ingress
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
...
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
roleRef:
  name: nginx-ingress-role
subjects:
- name: nginx-ingress-serviceaccount
...
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
roleRef:
  name: nginx-ingress-clusterrole
subjects:
- namespace: ingress-nginx
...
---
#3. Create the ingress controller itself, as a Deployment; the Deployment will in turn create the corresponding Pod(s)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
...
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx
        - --annotations-prefix=nginx.ingress.kubernetes.io
        ...
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
---
2. Create the Service for the ingress controller
#kubectl apply -f ./service-nodeport.yaml
The IC is itself just a Pod, so as the "master proxy" it also needs to expose itself to the outside; and precisely because it is the master proxy, its Service is usually of type NodePort or something with an even wider reach.
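service-nodeport.yaml is roughly the following (a sketch based on the upstream example; the actual nodePort values are assigned by the cluster unless you pin them explicitly):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP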
Extra: because this is an internal network, the required files and images have to be downloaded in advance; being behind the firewall, I downloaded the files from GitHub.
1) Deployment files
2) Image used
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
3. The Ingress gets implemented
The ingress controller automatically finds the Ingresses that need to be satisfied, reads their content into its own rules, and updates the Ingress's information.
1) The ingress controller's Service:
# kubectl get svc -ningress-nginx
NAME            TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)                      AGE
ingress-nginx  NodePort  10.106.118.8  <none>        80:56901/TCP,443:25064/TCP  26h
2) Ingress controller logs showing it watching the Ingress:
...:287] updating Ingress wxy-test/rancher status from [] to [{10.106.118.8 }]
...:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"my-cattle-system", Name:"rancher", UID:"b042e1c5-b851-11ea-9fd1-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"937884", FieldPath:...
...:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"my-cattle-system", Name:"rancher", UID:"b042e1c5-b851-11ea-9fd1-286ed488c73f", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"937885", FieldPath:...
3) The Ingress's information gets refreshed
# kubectl get ingress -nmy-cattle-system -oyaml
...
  annotations:
    ---once the Ingress has been implemented, a new annotation appears:
    field.cattle.io/publicEndpoints: '[{"addresses":["10.100.126.179"],"port":443,"protocol":"HTTPS","serviceName":"wxy-test:rancher","ingressName":"wxy-test:rancher","hostname":"","allNodes":false}]'
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"   ---annotations that were already added when the Ingress was created; is it because their prefix matches the nginx ingress controller that nginx "claimed" this Ingress?
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
...
  status:
    loadBalancer:
      ingress:        ---newly added: the address of the ingress controller's Service
      - ip: 10.106.118.8
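A quick way to confirm that the controller has filled in the status (the names and namespace are the ones used in this example) is something like:
# kubectl get ingress rancher -nmy-cattle-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
10.106.118.8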
4. A closer look at nginx's forwarding rules
Step into the nginx instance to see how the forwarding rules and the certificate from the Ingress are applied to nginx:
# kubectl exec -ti -ningress-nginx nginx-ingress-controller-744b8ccf4c-mdnws /bin/sh
$ cat ./f
...
## start server st.org
server {
    server_name st.org ;
    listen 80  ;
    listen [::]:80  ;
    listen 443  ssl http2 ;
    listen [::]:443  ssl http2 ;
    set $proxy_upstream_name "-";
    ssl_certificate_by_lua_block {
        certificate.call()
    }
    location / {
        set $namespace      "my-cattle-system";
        set $ingress_name   "rancher";
        set $service_name   "rancher";
        set $service_port   "80";
        set $location_path  "/";
...
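The generated configuration can also be dumped from outside the Pod, for example (the pod name is the one from this example):
# kubectl exec -ningress-nginx nginx-ingress-controller-744b8ccf4c-mdnws -- cat /etc/nginx/nginx.conf | grep -A 20 'start server st.org'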
Part 4: How to access the workload through the Ingress
Verification method 1: call the API with curl
# IC_HTTPS_PORT=25064   ---the nodePort exposed by the nginx controller's Service for HTTPS
# IC_IP=192.168.48.213  ---the IP at which the nginx controller's Service is exposed; since it is a NodePort Service, any node's IP will do
(Interpretation:
--resolve HOST:PORT:ADDRESS : forces st.org:25064 to be resolved to 192.168.48.213 (port 80 by default if none is given?)
)
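The command I ran was along these lines (a reconstruction, since the exact invocation was not preserved; -k skips verification of the self-signed certificate):
# curl -k --resolve st.org:${IC_HTTPS_PORT}:${IC_IP} https://st.org:${IC_HTTPS_PORT}/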
Result:
{"type":"collection","links":{"self":":25064/"},"actions":{},"pagination":{"limit":1000,"total":4},"sort":{"order":"asc","reverse":":25064/? order=desc"},"resourceType":"apiRoot","data":[{"apiVersion":{"group":"meta.cattle.io","path":"/meta","version":"v1"},"baseType":"apiRoot","links": {"apiRoots":":25064/meta/apiroots","root":"h
...it finally works
Alternatively, on the client machine:
# vi /etc/hosts
192.168.48.213  st.org   ---add a mapping from st.org to a node IP (here the IC_IP from above)
and then the domain can be accessed directly.
Verification method 2: a browser
1) First, since this is a custom domain name that public DNS knows nothing about, the name resolution has to be added on the client machine; in C:\Windows\System32\drivers\etc\hosts add: 192.168.48.214    st.org    # source server
2) Then open it in the browser.
Note:
You must access it via the domain name, otherwise:
# curl 192.168.48.213:42060 -k
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.2</center>
</body>
</html>
That is, accessing it this way means you are talking to nginx directly, and nginx has no way of knowing which backend you actually want.
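If you insist on using the bare IP, the host name has to be supplied some other way, for example via the Host header (a sketch; -k again skips certificate verification, and the IP/port are the NodePort values from above):
# curl -k -H "Host: st.org" https://192.168.48.213:25064/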
Part 5: A closer look at Ingress Class
0. The official docs say:
1) Before Kubernetes 1.18, the ingress.class annotation was used (this is from the nginx docs):
If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress such as GKE,
you need to specify the annotation kubernetes.io/ingress.class: "nginx" in all ingresses that you would like the ingress-nginx controller to claim.
Interpretation: this annotation is consumed by nginx; an nginx-type ingress controller watches for the Ingresses that "want me" and brings them under its "jurisdiction".
2) Since Kubernetes 1.18, the IngressClass object and the ingressClassName field are used instead.
An Ingress can be implemented by different controllers, and different controllers take different configuration. How is that achieved? Through the IngressClass resource. Concretely:
In the Ingress you can specify a class, i.e. the IngressClass resource this Ingress corresponds to.
The IngressClass carries two kinds of parameters:
1) controller: which ingress controller implements this class; in the words of the official docs, "the name of the controller that should implement the class".
2) parameters (optional): a TypedLocalObjectReference, i.e. "a link to a custom resource containing additional configuration for the controller".
In other words it is meant for the controller: if the controller needs extra configuration, these reference fields (apiGroup, kind, name) point to the object that carries it.
Example:
apiVersion: networking.k8s.io/v1beta1
kind: IngressClass
metadata:
  name: external-lb                            ---the name of this class
spec:
  controller: example.com/ingress-controller   ---asks the cluster to use this ingress controller
  parameters:                                  ---when using this controller, additionally reference a CRD named IngressParameters; the instance is called external-lb
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: external-lb
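An Ingress then opts into this class through the ingressClassName field, roughly like this (a sketch that reuses the rancher Service from earlier):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: rancher
  namespace: cattle-system-my
spec:
  ingressClassName: external-lb   # references the IngressClass defined above
  rules:
  - host: st.org
    http:
      paths:
      - backend:
          serviceName: rancher
          servicePort: 80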
1. Manually add the ingress.class annotation to an Ingress and watch how the nginx ingress controller reacts
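The annotation can be flipped back and forth with commands like these (names taken from this example; --overwrite is needed once the annotation already exists):
# kubectl annotate ingress rancher -nmy-cattle-system kubernetes.io/ingress.class="nginx-1" --overwrite
# kubectl annotate ingress rancher -nmy-cattle-system kubernetes.io/ingress.class="nginx" --overwrite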
Without an IngressClass explicitly created, I found that setting kubernetes.io/ingress.class: "nginx-1" on the Ingress removes it from the default IC,
and changing it back to kubernetes.io/ingress.class: "nginx" adds it back in again. The detailed logs are as follows:
# kubectl logs -f -ningress-nginx nginx-ingress-controller-744b8ccf4c-8wnkn
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       0.26.1
  Build:         git-2de5a893a
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: openresty/1.15.8.2
-------------------------------------------------------------------------------
0. Preparation: it loads some configuration, initializes the URL where it can be reached (HTTPS port 443 by default), and also creates a fake certificate (what is that for?)
...:243] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
...:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
...:182] Creating API client for 10.96.0.1:443
...:226] Running in Kubernetes cluster version v1.12 (v1.12.1) - git (clean) commit 4ed3216f3ec431b140b1d899130a69fc671678f4 - platform linux/amd64
...:101] SSL fake certificate created /etc/ingress-controller/ssl/default-fake-certificate.pem
...:105] Using deprecated "k8s.io/api/extensions/v1beta1" package because Kubernetes version is < v1.14.0
