Setting Up a Kubernetes Cluster on CentOS 7 with kubeadm
This article installs Kubernetes with kubeadm, the simplest of the available installation methods.
A quick note before starting: because of environment and network issues, a single walkthrough may not work end to end, so it is worth understanding the overall steps first and reading the official documentation.
Environment

Host 1: k8s-master, 144.34.220.135, 2 CPU / 2 GB RAM, hosted overseas, CentOS 7
Host 2: k8s-node-1, 106.14.141.90, 1 CPU / 1 GB RAM, hosted in mainland China, CentOS 7
Preparation and checks
This part must be done on both hosts.
Set the hostname and hosts file
Set the hostname:
# master
hostnamectl --static set-hostname k8s-master
# node
hostnamectl --static set-hostname k8s-node-1
Check the system info:
hostnamectl status
Edit /etc/hosts; both hosts need this change:
vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 localhost
# add the two lines below
144.34.220.135 k8s-master
106.14.141.90 k8s-node-1
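The two host entries can also be appended idempotently — a minimal sketch; the file path is parameterized so you can try it on a scratch copy before touching the real /etc/hosts:

```shell
# Append each mapping to the hosts file only if it is not already present.
# HOSTS_FILE defaults to a scratch file here; point it at /etc/hosts for real use.
HOSTS_FILE=${HOSTS_FILE:-$(mktemp)}
for entry in "144.34.220.135 k8s-master" "106.14.141.90 k8s-node-1"; do
    grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```

Because of the grep guard, running the snippet twice does not duplicate the entries.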
Disable the swap partition
Swap must be disabled for the kubelet to run correctly.
Use free -h to check whether swap is enabled:
[root@k8s-master ~]# free -h
total used free shared buff/cache available
Mem: 2.0G 80M 334M 16M 1.6G 1.7G
Swap: 511M 0B 511M
How to turn off swap:
swapoff -a
vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Mar 13 20:42:02 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=3437f1a0-f850-4f1b-8a7c-819c5f6a29e4 / ext4 defaults,discard,noatime 1 1
UUID=ad1361f7-4ab4-4252-ba00-5d4e5d8590fb /boot ext3 defaults 1 2
/swap none swap sw 0 0
If /etc/fstab declares a swap entry (like the /swap line above), comment it out so swap stays off after a reboot. Then check again:
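Commenting out the swap entry can be done with one sed command. A sketch, demonstrated on a scratch copy of an fstab — run the same sed against /etc/fstab on the real host:

```shell
# Build a scratch fstab containing a swap entry, then comment out every
# uncommented line whose filesystem type is swap.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
UUID=3437f1a0-f850-4f1b-8a7c-819c5f6a29e4 / ext4 defaults,discard,noatime 1 1
/swap none swap sw 0 0
EOF
sed -E -i 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' "$FSTAB"
cat "$FSTAB"
```

The pattern only matches lines whose type field is literally swap, so the root filesystem entry is left untouched.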
[root@k8s-master ~]# free -h
total used free shared buff/cache available
Mem: 2.0G 80M 333M 16M 1.6G 1.7G
Swap: 0B 0B 0B
Disable SELinux
# Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
This is required so that containers can access the host filesystem, which the pod network needs. You must do this until SELinux support is improved in the kubelet.
Open the ports Kubernetes uses
Here we simply disable the firewall:
# centos 7
systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
systemctl stop firewalld.service # stop firewalld
systemctl disable firewalld.service # keep firewalld from starting at boot
Verify that the MAC address and product_uuid are unique on every node.
Physical machines generally have unique values, but some virtual machines may share them.
MAC address: ip link or ifconfig -a
product_uuid: sudo cat /sys/class/dmi/id/product_uuid
[root@localhost ~]# sudo cat /sys/class/dmi/id/product_uuid
552404A0-F6C4-4A3A-BC3E-A3B4A032501B
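To compare values across nodes, collect the product_uuid (or MAC address) from each machine into one file, one per line, and look for duplicates. A sketch — the second UUID below is a made-up example:

```shell
# Any value printed by `sort | uniq -d` appears on more than one node.
cat > /tmp/uuids.txt <<'EOF'
552404A0-F6C4-4A3A-BC3E-A3B4A032501B
0E32C1D8-9A77-4B62-8D11-5F4E21C0A9B3
EOF
dups=$(sort /tmp/uuids.txt | uniq -d)
if [ -z "$dups" ]; then
    echo "all values unique"
else
    echo "duplicated values: $dups"
fi
```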
Make sure the two hosts can reach each other over the network (for example, with ping).
Install Docker
This part must be done on both hosts.
Since v1.6.0, Kubernetes supports the CRI (Container Runtime Interface). The default container runtime is Docker, enabled through dockershim, the CRI implementation built into the kubelet.
Other supported runtimes include:
containerd (via containerd's built-in CRI plugin)
cri-o
frakti
rkt
Here we use Docker.
Note: this article was written on 2019-08-09; the kubelet version installed at that time is 1.15.2, which supports docker-ce up to 18.09, while Docker itself has already moved on to 19, so you may need to pin the Docker version.
## Set up the repository
### Install required packages.
yum install yum-utils device-mapper-persistent-data lvm2
### Add Docker repository.
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
## Install Docker CE.
yum update && yum install docker-ce-18.06.2.ce
[root@k8s-master ~]# docker --version
Docker version 18.06.2-ce, build 6d37f41
Install kubeadm, kubelet, and kubectl
This part must be done on both hosts.
kubeadm: the command used to bootstrap the cluster.
kubelet: runs on every node in the cluster and starts pods and containers.
kubectl: the command-line tool for talking to the cluster.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
[root@k8s-master ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Fri 2019-08-09 15:08:39 EDT; 740ms ago
Docs: kubernetes.io/docs/
Process: 26821 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 26821 (code=exited, status=255)
Aug 09 15:08:39 k8s-master systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Aug 09 15:08:39 k8s-master systemd[1]: Unit kubelet.service entered failed state.
Aug 09 15:08:39 k8s-master systemd[1]: kubelet.service failed.
The kubelet fails to start here; that is fine for now — it will come up once kubeadm init writes its configuration.
Some RHEL/CentOS 7 users have had network requests routed incorrectly because iptables was being bypassed. You must ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# apply the settings
sysctl --system
Configure the cgroup driver used by the kubelet on the Master node
When using Docker, kubeadm automatically detects the cgroup driver for the kubelet and configures it in /var/lib/kubelet/kubeadm-flags.env at runtime. If you use a different CRI, you must change the cgroup-driver value in /etc/default/kubelet to match, like this:
KUBELET_EXTRA_ARGS=--cgroup-driver=<value>
This file is read by kubeadm init and kubeadm join to supply extra user arguments to the kubelet.
Note that you only need to do this if your cgroup driver is not cgroupfs, because cgroupfs is already the kubelet default.
Then restart the kubelet:
systemctl daemon-reload
systemctl restart kubelet
Check the Docker cgroup driver:
docker info | grep -i cgroup
If nothing is printed, restart Docker (after the kubelet is running):
# restart docker
[root@k8s-master ~]# systemctl start docker && systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-master ~]# docker info | grep -i cgroup
Cgroup Driver: cgroupfs
# restart kubelet
systemctl restart kubelet
Initialize the master
Pull the images
This step is not strictly required — kubeadm init runs the same command — but it is called out separately because hosts in mainland China need a proxy to reach the image registry. Without one, initialization fails with an error like:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.4: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers), error: exit status 1
Workaround for pulling images from mainland China
Note: this master is hosted overseas. A host in mainland China needs a proxy — and remember to unset the proxy afterwards, or startup may fail. Another approach is to pull the images from a mirror registry with docker and retag them to the expected names.
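The retag approach can be sketched as follows. The mirror registry name below (registry.cn-hangzhou.aliyuncs.com/google_containers) and the image list are assumptions for illustration — substitute a mirror you trust. The script only prints the commands so you can review them; pipe the output to sh to actually run them:

```shell
# Emit docker pull/tag/rmi commands that fetch each image from a mirror
# registry and rename it to the k8s.gcr.io name kubeadm expects.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.15.2 kube-controller-manager:v1.15.2 \
           kube-scheduler:v1.15.2 kube-proxy:v1.15.2 \
           pause:3.1 etcd:3.3.10 coredns:1.3.1; do
    echo "docker pull $MIRROR/$img"
    echo "docker tag $MIRROR/$img k8s.gcr.io/$img"
    echo "docker rmi $MIRROR/$img"
done
```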
kubeadm config images pull
[root@k8s-master kubelet.service.d]# kubeadm config images pull
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.15.2
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.15.2
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.15.2
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.15.2
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.3.10
[config/images] Pulled k8s.gcr.io/coredns:1.3.1
Check the pulled images with docker:
[root@k8s-master kubelet.service.d]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
Run kubeadm init
kubeadm init --apiserver-advertise-address=144.34.220.135 --pod-network-cidr=10.244.0.0/16
On success it prints output like this:
[root@k8s-master kubelet.service.d]# kubeadm init --apiserver-advertise-address=144.34.220.135 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [144.34.220.135 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [144.34.220.135 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 144.34.220.135]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 40.514041 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration
for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 9f8hc3.mvm668g4rwmtiwpl
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 144.34.220.135:6443 --token 9f8hc3.mvm668g4rwmtiwpl \
--discovery-token-ca-cert-hash sha256:e828f328183d747f2f9171476ddd3187d380c372080e0776dfd59165c7b99815
[root@k8s-master kubelet.service.d]#
As the console output says, to make kubectl work for a non-root user, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Or, if you are the root user:
export KUBECONFIG=/etc/kubernetes/admin.conf
Record the kubeadm join command printed by kubeadm init — you need it to join nodes to the cluster.
Install a pod network add-on
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
[root@k8s-master kubelet.service.d]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
Once the pod network is installed, confirm it is working by checking that the CoreDNS pods are Running in the output of kubectl get pods --all-namespaces. Once CoreDNS is up and running, you can continue joining your nodes.
[root@k8s-master kubelet.service.d]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5c98db65d4-kcvfp 1/1 Running 0 9m51s
kube-system coredns-5c98db65d4-p8fd8 1/1 Running 0 9m51s
kube-system etcd-k8s-master 1/1 Running 0 9m13s
kube-system kube-apiserver-k8s-master 1/1 Running 0 9m10s
kube-system kube-controller-manager-k8s-master 1/1 Running 0 9m7s
kube-system kube-flannel-ds-amd64-zdsrf 1/1 Running 0 77s
kube-system kube-proxy-dkfq9 1/1 Running 0 9m51s
kube-system kube-scheduler-k8s-master 1/1 Running 0 8m53s
[root@k8s-master kubelet.service.d]#
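Waiting for CoreDNS to reach Running can be scripted with a small poll helper instead of rerunning the command by hand. A sketch — the kubectl predicate shown in the comment is the intended real-world use:

```shell
# Retry a command until it succeeds, at most $1 times, sleeping $2 seconds
# between attempts. Returns 0 on success, 1 if the attempt budget runs out.
# Real use:
#   wait_for 30 5 sh -c 'kubectl get pods -n kube-system | grep coredns | grep -q Running'
wait_for() {
    attempts=$1; delay=$2; shift 2
    i=0
    while [ "$i" -lt "$attempts" ]; do
        if "$@"; then return 0; fi
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}
```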
[root@k8s-master kubelet.service.d]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 12m v1.15.2
Initialize the node
ssh 106.14.141.90 -p <ssh-port>
Pull the images
As on the master, pulling these images requires a proxy; the difference is that the node only needs the kube-proxy and pause images.
If you prefer not to use a proxy, use the mirror workaround described above.
[root@k8s-node-1 docker]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
[root@k8s-node-1 docker]#
Join the cluster
kubeadm join 144.34.220.135:6443 --token 9f8hc3.mvm668g4rwmtiwpl \
--discovery-token-ca-cert-hash sha256:e828f328183d747f2f9171476ddd3187d380c372080e0776dfd59165c7b99815
You can also pin a version here.
This command comes from the kubeadm init output on the master. The token is time-limited, 24 hours by default.
You can first list the existing tokens on k8s-master with kubeadm token list:
[root@k8s-master ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
9f8hc3.mvm668g4rwmtiwpl 6h 2019-08-10T15:39:03-04:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token
[root@k8s-master ~]#
If there is none, create one on k8s-master with kubeadm token create.
If you do not have the value for --discovery-token-ca-cert-hash, you can obtain it by running the following command chain on the master node:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
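The same pipeline can be sanity-checked against a throwaway self-signed certificate standing in for /etc/kubernetes/pki/ca.crt — the value after sha256: is the hex SHA-256 digest of the CA's DER-encoded public key:

```shell
# Generate a throwaway CA cert (a stand-in for /etc/kubernetes/pki/ca.crt),
# then compute the discovery hash the same way kubeadm join expects it.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```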
Join the cluster:
[root@k8s-node-1 docker]# kubeadm join 144.34.220.135:6443 --token qekggk.bqsgueieehkhli45 \
> --discovery-token-ca-cert-hash sha256:16606f1055983847eaaba5f283b634a96160de22fc48a58c72b0045f78de3b31
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node-1 docker]#
Check the cluster state on the master:
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 10m v1.15.2
k8s-node-1 Ready <none> 25s v1.15.2
[root@k8s-master ~]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5c98db65d4-7ghr6 1/1 Running 0 10m
kube-system coredns-5c98db65d4-gl9kw 1/1 Running 0 10m
kube-system etcd-k8s-master 1/1 Running 0 9m42s
kube-system kube-apiserver-k8s-master 1/1 Running 0 9m55s
kube-system kube-controller-manager-k8s-master 1/1 Running 0 9m35s
kube-system kube-flannel-ds-amd64-5bcg4 1/1 Running 0 54s
kube-system kube-flannel-ds-amd64-v7grj 1/1 Running 0 8m12s
kube-system kube-proxy-q52r2 1/1 Running 0 10m
kube-system kube-proxy-z7687 1/1 Running 0 54s
kube-system kube-scheduler-k8s-master 1/1 Running 0 9m27s