K8s Common Problems and Solutions
1. I installed Docker using the officially recommended steps for Kubernetes, and Docker runs fine, but this error appears when starting minikube:
xiaoqu@k8s2:~$ sudo minikube start --driver=none
[sudo] password for xiaoqu:
Sorry, try again.
[sudo] password for xiaoqu:
minikube v1.12.1 on Ubuntu 16.04
Using the none driver based on user configuration
Sorry, Kubernetes 1.18.3 requires conntrack to be installed in root's path
This problem did not occur when I installed minikube on another VM.
Solution:
install conntrack, then start minikube again.
sudo apt-get install conntrack -y
2. Changing Docker's cgroup driver to systemd
# (Install Docker CE)
## Set up the repository:
### Install packages to allow apt to use a repository over HTTPS
apt-get update && apt-get install -y \
  apt-transport-https ca-certificates curl software-properties-common gnupg2
# Add Docker's official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
# Add the Docker apt repository:
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
apt-get update && apt-get install -y \
  containerd.io=1.2.13-2 \
  docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \
  docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
Here comes the key part:
# Set up the Docker daemon
# (exec-opts below is where the cgroup driver is switched to systemd)
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
systemctl daemon-reload
systemctl restart docker
# enable start on boot
sudo systemctl enable docker
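A quick sanity check to go with the steps above: run daemon.json through a JSON parser, since a syntax error there keeps dockerd from starting, and confirm the cgroup driver option made it in. The sketch below works on a copy under /tmp so it is harmless to run anywhere; on a real node you would point the commands at /etc/docker/daemon.json instead.

```shell
# Write the same daemon.json to a scratch location for checking.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# Fails with a parse error if the JSON is malformed:
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
# Confirm the cgroup driver setting is present:
grep cgroupdriver /tmp/daemon.json
```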
sudo kubeadm init fails with the following error:
[ERROR Swap]: running with swap on is not supported. Please disable swap.
Solution:
sudo swapoff -a
To disable swap permanently:
vim /etc/fstab
and comment out the following line:
# /dev/mapper/k8s1--vg-swap_1 none swap sw 0 0
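The fstab edit can also be done non-interactively with sed, which is handy if you have several nodes. A sketch against a throwaway copy (the sample root entry is made up; on a real node you would target /etc/fstab and keep a backup, e.g. sed -i.bak):

```shell
# Build a throwaway fstab containing a swap entry like the one above.
cat > /tmp/fstab <<'EOF'
/dev/mapper/k8s1--vg-root / ext4 errors=remount-ro 0 1
/dev/mapper/k8s1--vg-swap_1 none swap sw 0 0
EOF
# Comment out every uncommented line whose filesystem type is swap.
sed -i 's@^\([^#].* swap .*\)@# \1@' /tmp/fstab
grep swap /tmp/fstab   # the swap entry is now commented out
```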
Problem:
sudo kubelet
failed to run Kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
Solution:
I have not found the proper fix for this yet; the only thing that worked for me was reinstalling docker and kubectl.
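For the record, the fix the kubeadm documentation suggests for this mismatch is to give the kubelet the same cgroup driver as Docker rather than reinstalling. I have not re-verified it on this cluster, so treat the fragment below as a sketch:

```yaml
# /var/lib/kubelet/config.yaml -- make the kubelet use systemd,
# matching Docker's daemon.json, then restart it:
#   systemctl daemon-reload && systemctl restart kubelet
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```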
Problem: the cluster is up, but kubectl get nodes shows an empty ROLES column for a node. How do I set a node's role?
root@k8s2:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s1 Ready <none> 14h v1.18.6
k8s2 Ready master 15h v1.18.6
Answer:
Not recorded yet.
Problem: how do I check the version of a K8s cluster?
Answer: I built my cluster with kubeadm; a cluster built with minikube may differ.
kubeadm version
root@k8s2:~/k8s# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:56:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
The version here is v1.18.6.
How to remove a node that has joined the cluster
Drain the node:
root@k8s2:~# kubectl drain k8s1 --ignore-daemonsets --delete-local-data
node/k8s1 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-bw8mn
node/k8s1 drained
Confirm it is cordoned:
root@k8s2:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s1 Ready,SchedulingDisabled <none> 23h v1.18.6
k8s2 Ready master 23h v1.18.6
Delete the node:
root@k8s2:~# kubectl delete node k8s1
node "k8s1" deleted
root@k8s2:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s2 Ready master 23h v1.18.6
root@k8s2:~#
The node has been removed.
How to specify the pod network CIDR with kubeadm init:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
How to configure the Calico network for a K8s cluster
Prerequisite:
the pod network CIDR was specified with kubeadm init.
CoreDNS status right after kubeadm init:
root@k8s2:~/k8s# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bff467f8-l4lq5 0/1 Pending 0 65s
kube-system coredns-66bff467f8-mqpxc 0/1 Pending 0 65s
kube-system etcd-k8s2 1/1 Running 0 75s
kube-system kube-apiserver-k8s2 1/1 Running 0 75s
kube-system kube-controller-manager-k8s2 1/1 Running 0 75s
kube-system kube-proxy-5twv5 1/1 Running 0 65s
kube-system kube-scheduler-k8s2 1/1 Running 0 75s
Download the calico.yaml manifest.
Edit calico.yaml and change the default pod CIDR to match the one you passed to kubeadm init:
vim calico.yaml   # searching /192.168 jumps straight to the spot
- name: CALICO_IPV4POOL_CIDR
  value: "10.10.0.0/16"
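The same edit can be scripted with sed instead of vim, which helps if you rebuild the cluster often. A sketch against a minimal stub of the manifest; on the real file, replace /tmp/calico.yaml with your downloaded calico.yaml:

```shell
# Stub with just the lines of interest from calico.yaml.
cat > /tmp/calico.yaml <<'EOF'
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
EOF
# Swap the default pool for the CIDR passed to kubeadm init.
sed -i 's@192.168.0.0/16@10.10.0.0/16@' /tmp/calico.yaml
grep value /tmp/calico.yaml   # now shows 10.10.0.0/16
```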
Apply the manifest:
kubectl apply -f calico.yaml
Check the status (startup is slow; allow about 1-2 minutes):
watch kubectl get pods --all-namespaces
root@k8s2:~/k8s# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-75d555c48-x87bq 1/1 Running 0 12m
kube-system calico-node-g2rhd 1/1 Running 0 12m
kube-system coredns-66bff467f8-l4lq5 1/1 Running 0 13m
kube-system coredns-66bff467f8-mqpxc 1/1 Running 0 13m
kube-system etcd-k8s2 1/1 Running 0 13m
kube-system kube-apiserver-k8s2 1/1 Running 0 13m
kube-system kube-controller-manager-k8s2 1/1 Running 0 13m
kube-system kube-proxy-5twv5 1/1 Running 0 13m
kube-system kube-scheduler-k8s2 1/1 Running 0 13m
Installation succeeded.
How to make a K8s service reachable from outside the cluster
Background:
I installed Ubuntu in two VMs, both using bridged networking; the two form a K8s cluster, and I want services deployed in the cluster to be reachable from my physical host machine.
There are five ways to do this:
1. hostNetwork: true — expose the pod directly on the network of the node it is scheduled to, then access it via that node's IP and the port.
apiVersion: v1
kind: Pod
metadata:
  name: hello-kube
spec:
  hostNetwork: true
  containers:
  - name: hello-kube
    image: paulbouwer/hello-kubernetes:1.8
    ports:
    - containerPort: 8080
    env:
    - name: MESSAGE
      value: "hello-kube"
2. hostPort — define the pod's networking directly through a host-to-pod port mapping, much like running a docker container with a port mapping.
apiVersion: v1
kind: Pod
metadata:
  name: hello-kube
spec:
  containers:
  - name: hello-kube
    image: paulbouwer/hello-kubernetes:1.8
    ports:
    - containerPort: 8080
      hostPort: 8081
    env:
    - name: MESSAGE
      value: "hello-kube-host-port"
3. NodePort
NodePort is a widely used way of exposing services in Kubernetes.
By default a Kubernetes Service is of type ClusterIP, which gives it an IP reachable only from inside the cluster; to let external clients reach the Service directly, change its type to NodePort.
apiVersion: v1
kind: Pod
metadata:
  name: hello-kube
  labels:
    name: hello-kube
spec:
  containers:
  - name: hello-kube
    image: paulbouwer/hello-kubernetes:1.8
    ports:
    - containerPort: 8080
    env:
    - name: MESSAGE
      value: "hello-kube-node-port"
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kube
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30001  # must be within the range 30000-32767
  selector:
    name: hello-kube
4. LoadBalancer — can only be defined on a Service.
apiVersion: v1
kind: Pod
metadata:
  name: hello-kube
  labels:
    name: hello-kube
spec:
  containers:
  - name: hello-kube
    image: paulbouwer/hello-kubernetes:1.8
    ports:
    - containerPort: 8080
    env:
    - name: MESSAGE
      value: "hello-kube-load-balancer"
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kube
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    name: hello-kube
5. Ingress
Create the Service and Pod:
apiVersion: v1
kind: Pod
metadata:
  name: hello-kube
  labels:
    name: hello-kube
spec:
  containers:
  - name: hello-kube
    image: paulbouwer/hello-kubernetes:1.8
    ports:
    - containerPort: 8080
    env:
    - name: MESSAGE
      value: "hello-kube-load-balancer"
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kube
spec:
  ports:
  - port: 8080
  selector:
    name: hello-kube
Service status:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-kube ClusterIP 10.111.142.96 <none> 8080/TCP 4s
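The section stops at the ClusterIP Service; to finish the ingress route you additionally need an ingress controller running in the cluster (e.g. ingress-nginx) and an Ingress resource. Below is a sketch of the resource for this service using the v1beta1 API that v1.18 serves; the hostname is a made-up example and the controller installation is assumed:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-kube
spec:
  rules:
  - host: hello.example.local   # hypothetical hostname resolving to a node
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-kube
          servicePort: 8080
```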
After the cluster restarts, kubectl no longer works
The connection to the server 192.168.2.121:6443 was refused - did you specify the right host or port?
1. sudo -i
2. swapoff -a
3. exit
4. strace -eopenat kubectl version
The actual fix was turning swap off manually (swapoff -a) on the master node.
nodePort does not take effect
With the service type set to NodePort, the service should in theory be reachable on any node IP in the cluster, but it can only be reached on the node where the pod is running.
I have not yet found a solution for clusters built with kubeadm.
What exactly are port, nodePort, and targetPort?
port: the port the Service exposes inside the cluster; other services/pods reach the Service through it.
nodePort: the port exposed on the cluster's nodes (the host port) for external access.
targetPort: the port the pod itself exposes; traffic from both inside and outside the cluster ultimately ends up here.
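Mapped onto the NodePort Service from approach 3 above, the three settings line up like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kube
spec:
  type: NodePort
  ports:
  - port: 8080        # in-cluster: other pods reach the Service on 8080
    targetPort: 8080  # pod: traffic is finally delivered to the pod's 8080
    nodePort: 30001   # external: clients use <node-ip>:30001
  selector:
    name: hello-kube
```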
Why does a pod not see a changed ConfigMap value it references?
ConfigMap yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: hello-kube-config
  labels:
    name: hello
data:
  MESSAGE: "message"
  name: "hello"
Service and Deployment yaml:
apiVersion: v1
kind: Service
metadata:
  name: hello-kube-d
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kube-d
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kube-d
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kube-d
  template:
    metadata:
      labels:
        app: hello-kube-d
    spec:
      containers:
      - name: hello-kube-d
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          valueFrom:
            configMapKeyRef:
              name: hello-kube-config
              key: MESSAGE  # this pulls the value from the ConfigMap
Answer:
When the ConfigMap is loaded into the pod as environment variables, a change to a key only takes effect after the pod is restarted. When the ConfigMap is mounted as a volume, changes propagate automatically, but with a delay: they do not appear until the next kubelet sync (TTL) check.

The cluster cannot schedule pods
Solution: a network plugin must be installed; only after it is installed will the CoreDNS pods come up, and only then is the cluster usable.
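Returning to the ConfigMap question above, the volume-mount variant (whose updates propagate without a pod restart, after the sync delay) looks like this. The pod name and mount path are illustrative, not from the original setup:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-kube-cm
spec:
  containers:
  - name: hello-kube-cm
    image: paulbouwer/hello-kubernetes:1.8
    volumeMounts:
    - name: config
      mountPath: /etc/config   # keys appear as files, e.g. /etc/config/MESSAGE
  volumes:
  - name: config
    configMap:
      name: hello-kube-config
```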