Installing a Kubernetes Cluster with kubeadm

Posted by Lucky Xue on 2020-10-18

This post mainly follows the article 《安装kubernetes集群-灵活安装k8s各个版本高可用集群》 published on the WeChat official account 先超. Starting completely from zero, it uses the kubeadm tool to deploy a full Kubernetes cluster on bare metal: the cluster has one Master node and one Worker node, uses Flannel as the container network plugin, uses the Dashboard add-on to provide a visual web UI, and uses Metrics as the container metrics add-on. The kubeadm project greatly simplifies the preparation needed to deploy Kubernetes: preparing and generating configuration files, certificates, and binaries, as well as managing the cluster version, are all handled by kubeadm. Kubernetes' own "everything is a container" design philosophy, together with its good extension mechanisms, makes deploying add-ons very straightforward.

However, during my own testing, access from within China to Google's image registries was poor and image pulls often failed. For Pods that fail to come up because of this, you can download the Docker image files manually and load them offline.

I. Environment Preparation

Note: run the steps in this part on every machine.

1. Machine requirements

CPU: each node must have at least 2 cores, otherwise k8s will not start; memory must be at least 2 GB, otherwise k8s Pods tend to crash
DNS: preferably a DNS server reachable on the local network, otherwise some images cannot be downloaded
Linux kernel: must be newer than 3.10, so the kernel has to be upgraded
k8s-master: the machine used as the k8s master
k8s-node01: the machine used as a k8s worker node
A quick sketch for checking these requirements follows this list.
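These checks are only a convenience sketch using standard Linux tools; the host name used for the network test is just an example of a reachable mirror, not a requirement:
nproc                           # CPU cores, must be >= 2
free -h                         # memory, should be at least 2 GB
uname -r                        # kernel version; upgraded below if it is 3.10
ping -c 1 mirrors.aliyun.com    # rough check that DNS resolution and outbound network work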

2. Dependencies

Install the dependency packages. Note: these dependencies must be installed on every machine.
[root@localhost ~]# yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git iproute lrzsz bash-completion tree bridge-utils unzip bind-utils gcc yum-utils device-mapper-persistent-data lvm2

Install iptables, start it, enable it at boot, flush the rules, and save the current rules as the default rules.
Stop and disable firewalld:
[root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld

Flush the iptables rules:
[root@localhost ~]# yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

Turn off the swap partition (virtual memory) and disable it permanently:
[root@localhost ~]# swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
Disable SELinux:
[root@localhost ~]# setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Upgrade the Linux kernel
Install the ELRepo repository
On CentOS 8 run:
[root@localhost ~]# yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm -y
On CentOS 7 run:
[root@localhost ~]# yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm -y

List all available kernels:
[root@localhost ~]# yum --enablerepo=elrepo-kernel list|grep kernel

Install a newer kernel (here the long-term-support branch, kernel-lt):
[root@localhost ~]# yum --enablerepo=elrepo-kernel install -y kernel-lt

List the installed kernels:
[root@localhost ~]# rpm -qa | grep kernel
List the GRUB boot entries:
[root@localhost ~]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (4.4.227-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-1127.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-415ea3f02941f246af981f6f14e92f33) 7 (Core)

Boot entries are numbered starting from 0 (the current default is CentOS Linux (3.10.0-1127.el7.x86_64) 7); the new kernel is inserted at the top of the list, so select entry 0:
[root@localhost ~]# grub2-set-default 0

Reboot:
[root@localhost ~]# reboot
After the reboot, check that the new kernel is in use:
[root@localhost ~]# uname -r
4.4.227-1.el7.elrepo.x86_64

Remove the old kernels:
[root@localhost ~]# rpm -qa | grep kernel
kernel-3.10.0-1127.10.1.el7.x86_64
kernel-3.10.0-1127.el7.x86_64
kernel-3.10.0-1062.el7.x86_64
kernel-tools-3.10.0-1127.10.1.el7.x86_64
kernel-headers-3.10.0-1127.10.1.el7.x86_64
kernel-lt-4.4.227-1.el7.elrepo.x86_64
kernel-tools-libs-3.10.0-1127.10.1.el7.x86_64
[root@localhost ~]# yum remove kernel-3.10.0-1127.10.1.el7.x86_64 kernel-3.10.0-1127.el7.x86_64 kernel-3.10.0-1062.el7.x86_64

Tune the kernel parameters for k8s:
[root@localhost ~]# cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

Apply the settings immediately:
[root@localhost ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
# Note: the last setting (net.netfilter.nf_conntrack_max=2310720) may fail with an error:
# sysctl: cannot stat /proc/sys/net/netfilter/nf_conntrack_max: No such file or directory
# The conntrack module may not be loaded; check with: lsmod | grep conntrack
# If the output is empty, load it: modprobe ip_conntrack
# Or use net.nf_conntrack_max instead of net.netfilter.nf_conntrack_max = xxxx
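If the two bridge-related settings also report "No such file or directory", the br_netfilter module is probably not loaded yet. A minimal sketch for loading the modules and re-applying the file (assuming these module names are available on this kernel):
[root@localhost ~]# modprobe br_netfilter
[root@localhost ~]# modprobe ip_conntrack
[root@localhost ~]# sysctl -p /etc/sysctl.d/kubernetes.conf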

Set the system time zone (skip if it is already configured)
Set the time zone to Asia/Shanghai:
[root@localhost ~]# timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
[root@localhost ~]# timedatectl set-local-rtc 0
# Restart services that depend on the system time
[root@localhost ~]# systemctl restart rsyslog
[root@localhost ~]# systemctl restart crond

3. Install Docker

Add a stable Docker repository; the configuration is saved to /etc/yum.repos.d/docker-ce.repo:
[root@localhost ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

List the available versions:
[root@localhost ~]# yum list docker-ce --showduplicates | sort -r

Update yum and install Docker CE (without a version, the latest is installed):
[root@localhost ~]# yum update -y && yum install docker-ce -y

Restart the Docker service and enable it at boot:
[root@localhost ~]# systemctl restart docker && systemctl enable docker
[root@localhost system]# docker --version
Docker version 19.03.11, build 42e35e61f3

Update the daemon.json file:
[root@localhost ~]# cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver":"json-file",
"log-opts":{"max-size":"100m"}
}
EOF
Note: watch the file encoding; if Docker fails to restart, journalctl -amu docker will show the error.

Restart Docker:
[root@localhost ~]# systemctl restart docker
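To confirm that daemon.json took effect, in particular the systemd cgroup driver that the kubelet expects, a quick check (the grep is just a convenience):
[root@localhost ~]# docker info | grep -i "cgroup driver"
If the output still shows cgroupfs rather than systemd, re-check the file's encoding and restart Docker again.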

4. Install kubeadm, kubelet, and kubectl

Installing Kubernetes requires the kubelet, kubeadm, and related packages.
The official yum repository is packages.cloud.google.com, which is not reachable from within China, so we use the Alibaba Cloud mirror instead.
[root@localhost ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl
List all available versions:
[root@localhost ~]# yum list kubeadm --showduplicates | sort -r
Install:
[root@localhost ~]# yum install -y kubeadm-1.19.3 kubelet-1.19.3 kubectl-1.19.3

Enable kubelet at boot and start it:
[root@localhost ~]# systemctl enable kubelet && systemctl start kubelet

5. Clone the virtual machine

Both the master and the nodes need everything above, so clone the virtual machine, pick any one copy as the master, and use the others as nodes.

II. Cluster Deployment

1. Configure host names

Set the host name on the master:
[root@localhost ~]# hostnamectl set-hostname k8s-master
Add hosts entries on the master:
[root@localhost ~]# cat >> /etc/hosts << EOF
192.168.112.100 k8s-master
192.168.112.101 k8s-node01
EOF

Set the host name on the node:
[root@localhost ~]# hostnamectl set-hostname k8s-node01

If there are multiple nodes, each node needs its own (unique) host name; then add entries to /etc/hosts on the master so it knows each node's IP and host name.
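For example, for a hypothetical second node k8s-node02 at 192.168.112.102 (the IP is only an illustration), the master's /etc/hosts would get one more line:
[root@localhost ~]# cat >> /etc/hosts << EOF
192.168.112.102 k8s-node02
EOF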

2. Deploy the Kubernetes Master

Run the following on the master:
[root@localhost ~]# ip a
This shows the master IP: 192.168.112.100

Print the default configuration and write it to kubeadm-config.yaml:
[root@localhost ~]# kubeadm config print init-defaults > kubeadm-config.yaml
W0617 15:35:43.386146 13616 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[root@localhost ~]# ls
kubeadm-config.yaml
Edit the generated kubeadm-config.yaml, or simply run the command below to write the file directly.

# Modified initialization configuration kubeadm-config.yaml
cat << EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # the master IP obtained above
  advertiseAddress: 192.168.112.100
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  # this node's name
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
# image repository; the default k8s.gcr.io is replaced by the Alibaba Cloud mirror
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
# must match the installed kubectl/kubelet version
kubernetesVersion: v1.19.3
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  # pod network CIDR used by flannel; this must match the flannel network
  podSubnet: 10.244.0.0/16
scheduler: {}
EOF


Run the initialization. The machine must have more than one CPU core, or the preflight check fails:
[root@localhost ~]# kubeadm init --config=kubeadm-config.yaml


W0725 12:24:53.652756 2081 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

[root@localhost ~]# kubeadm init --config=kubeadm-config.yaml
W0617 15:48:20.619712 13686 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.112.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost localhost] and IPs [192.168.112.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost localhost] and IPs [192.168.112.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0617 15:48:27.092221 13686 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0617 15:48:27.093584 13686 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.002816 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.112.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:5a373d36b57e15736fb02b377d2f11f99451160d6c48a72a99ee709d2fac975d


kubeadm init can be configured through a configuration file rather than command-line flags, and some of the more advanced features are only available through the configuration file. The configuration file is passed with the --config option. (It is also possible to initialize the cluster from the command line without a configuration file; pick one of the two approaches, with the configuration file recommended.)
[root@localhost ~]# kubeadm init \
--apiserver-advertise-address=192.168.112.100 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.19.3 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16

--apiserver-advertise-address is the master's IP
--image-repository is the image registry address; the Alibaba Cloud mirror is specified here
Either way, a successful initialization ends by printing a kubeadm join command, for example:
kubeadm join 192.168.112.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:2e22a7fc4908916bb6921b25106308084559ebe8fa16019dd8b48735eb924164
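Because the control-plane images are pulled over the network during initialization, it can help to pre-pull them before running kubeadm init, as the preflight output above suggests. A sketch reusing the same kubeadm-config.yaml:
[root@localhost ~]# kubeadm config images list --config kubeadm-config.yaml
[root@localhost ~]# kubeadm config images pull --config kubeadm-config.yaml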

3. Save the kubeconfig

Create the directory that holds the cluster connection configuration and credentials:
[root@localhost ~]# mkdir -p $HOME/.kube

Copy the cluster admin configuration file:
[root@localhost ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

Change ownership of the configuration file to the current user:
[root@localhost ~]# chown $(id -u):$(id -g) $HOME/.kube/config

Without this configuration, kubectl commands will fail.

4. Install the flannel network plugin (CNI)

[root@localhost ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This URL is no longer reachable, so download kube-flannel.yml first from

https://github.com/coreos/flannel/tree/master/Documentation

and then run:

[root@k8s-master ~]# kubectl create -f kube-flannel.yml
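After applying the manifest, the flannel pods should start and the nodes should turn Ready; a quick verification sketch:
[root@k8s-master ~]# kubectl get pods -n kube-system | grep flannel
[root@k8s-master ~]# kubectl get nodes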

5. Join nodes to the Kubernetes cluster

Run the following on the node.
To add a new node to the cluster, run the kubeadm join command printed at the end of a successful kubeadm init; the master generates the token during initialization, and nodes join the cluster with this command. If the token has expired, a new join command can be generated on the master as sketched below.
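The bootstrap token in this walkthrough has a ttl of 24h0m0s, so if it has expired by the time a node joins, a fresh join command can be printed on the master (a sketch; the token and hash will differ from the ones below):
[root@k8s-master ~]# kubeadm token create --print-join-command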

[root@localhost ~]# kubeadm join 192.168.112.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:5a373d36b57e15736fb02b377d2f11f99451160d6c48a72a99ee709d2fac975d

A successful join prints output like the following:
W0617 18:57:28.643045 3348 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING Hostname]: hostname "k8s-node01" could not be reached
[WARNING Hostname]: hostname "k8s-node01": lookup k8s-node01 on 202.106.0.20:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


On the master, run kubectl get nodes:
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   5m21s   v1.19.3
k8s-node01   Ready    <none>   4m41s   v1.19.3
k8s-node02   Ready    <none>   4m44s   v1.19.3

A more detailed view shows which node each pod was scheduled to.
List detailed information about the pods in the kube-system namespace:
[root@k8s-master ~]# kubectl get pod -n kube-system -o wide
NAME                                 READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
coredns-7ff77c879f-264fv             1/1     Running   0          9h      10.244.0.2    k8s-master   <none>           <none>
coredns-7ff77c879f-wjnp7             1/1     Running   0          9h      10.244.0.3    k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   2          9h      10.20.33.80   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   2          9h      10.20.33.80   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   2          9h      10.20.33.80   k8s-master   <none>           <none>
kube-flannel-ds-amd64-54b5b          1/1     Running   0          4m47s   10.20.33.80   k8s-master   <none>           <none>
kube-flannel-ds-amd64-6g2s7          1/1     Running   0          4m47s   10.0.2.15     k8s-node01   <none>           <none>
kube-proxy-mvp4j                     1/1     Running   2          9h      10.0.2.15     k8s-node01   <none>           <none>
kube-proxy-xr76t                     1/1     Running   2          9h      10.20.33.80   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   2          9h      10.20.33.80   k8s-master   <none>           <none>

6. Test the Kubernetes cluster

Create a pod in the Kubernetes cluster and verify that it runs correctly:

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
[root@k8s-master ~]# kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-xx4jv   1/1     Running   0          20m

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        9h
service/nginx        NodePort    10.106.216.150   <none>        80:31880/TCP   19m

The svc list contains service/nginx; PORT(S) shows the port mapping: the service's port 80 is mapped to port 31880 on the physical machine (the node). Open http://nodeIP:31880/ and the nginx welcome page confirms that the cluster works.
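For a quick command-line check (using a node IP from this walkthrough; substitute your own), curl should return the nginx welcome page HTML:
[root@k8s-master ~]# curl http://192.168.112.101:31880/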

7. Deploy the Dashboard

Download kubernetes-dashboard.yaml:

# See the README in the kubernetes dashboard project for the recommended manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml -O kubernetes-dashboard.yaml

Start the Dashboard:

[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml

or
[root@k8s-master ~]# kubectl create -f kubernetes-dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Output like this means the dashboard objects were created successfully.

Check whether the dashboard was installed successfully:
kubectl get pods -n kubernetes-dashboard
Output like the following means the dashboard is installed:

NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-6b4884c9d5-wxcnf   1/1     Running   0          23h
kubernetes-dashboard-7bfbb48676-fvk6r        1/1     Running   0          23h

Check the dashboard's front-end service:
kubectl get svc -n kubernetes-dashboard
The output looks like:

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.111.175.220   <none>        8000/TCP   23h
kubernetes-dashboard        ClusterIP   10.102.54.134    <none>        443/TCP    23h

Change the service type to NodePort:
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change type: ClusterIP to type: NodePort, then save and exit.
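Equivalently, the same change can be made non-interactively with kubectl patch (a sketch; it sets the same spec.type field that kubectl edit exposes):
[root@k8s-master ~]# kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'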
kubectl get svc -n kubernetes-dashboard
The output now shows:

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.111.175.220   <none>        8000/TCP        23h
kubernetes-dashboard        NodePort    10.102.54.134    <none>        443:30001/TCP   23h

Access the Dashboard

As shown above, the service type is now NodePort, so the dashboard can be reached at port 30001 on the master node's IP. In my environment the address is
https://192.168.112.100:30001/
The dashboard page appears; log in with the token of the service account created by the yaml file.

1) List the secrets in the kubernetes-dashboard namespace:
kubectl get secret -n kubernetes-dashboard
The output looks like:

NAME                               TYPE                                  DATA   AGE
default-token-5hbbn                kubernetes.io/service-account-token   3      23h
kubernetes-dashboard-certs         Opaque                                2      23h
kubernetes-dashboard-csrf          Opaque                                1      23h
kubernetes-dashboard-key-holder    Opaque                                2      23h
kubernetes-dashboard-token-vr6n5   kubernetes.io/service-account-token   3      23h

2) Find the secret that carries the token, kubernetes-dashboard-token-vr6n5:
kubectl describe secret kubernetes-dashboard-token-vr6n5 -n kubernetes-dashboard
The output looks like:

Data
====
ca.crt: 1025 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6Ilp0em1EZndTQzhOWVlRdzd5enVvVm5HU3BvN0xuLWFEMi1FTGVHSzFlNVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi12cjZuNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA1MDUwMTRiLWQxNTUtNGJiOS1iMDBlLTI0Zjk4ODZmYmYxNiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.FWZBhuIboXqeQ6tj3fkhULZbLQB04STebgd1YBiiMSXkJj-pc-ft284m0Df5xt6qW7lTOeFNgEDevNWbYm_RKX3kicFz8LsfGX3oQ4sSBVyjotJZWs2ow98IrO6ltClJc1X5TJWXBoKiUmBH9cpkv7-_Djjhz-SJTJzfXRlGp71VRLKcRr9NXYdVBBMulhev1eNnhqeL0bAetL1UrffwP0P1yNOXSNtNrxHV2ojVWGAhh0h7ns7Tij1aNfiwBm1BrqRD6HWVqf9cM6v3ST5df3sFW0Y1lyrWkTbUdZNIHQJrJZYAG3KO3NS6d2aKkDRKviqAxcwC-dj-8EEg40529A

Copy the value after token: and paste it into the token field on the dashboard login page:
eyJhbGciOiJSUzI1NiIsImtpZCI6Ilp0em1EZndTQzhOWVlRdzd5enVvVm5HU3BvN0xuLWFEMi1FTGVHSzFlNVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi12cjZuNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA1MDUwMTRiLWQxNTUtNGJiOS1iMDBlLTI0Zjk4ODZmYmYxNiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.FWZBhuIboXqeQ6tj3fkhULZbLQB04STebgd1YBiiMSXkJj-pc-ft284m0Df5xt6qW7lTOeFNgEDevNWbYm_RKX3kicFz8LsfGX3oQ4sSBVyjotJZWs2ow98IrO6ltClJc1X5TJWXBoKiUmBH9cpkv7-_Djjhz-SJTJzfXRlGp71VRLKcRr9NXYdVBBMulhev1eNnhqeL0bAetL1UrffwP0P1yNOXSNtNrxHV2ojVWGAhh0h7ns7Tij1aNfiwBm1BrqRD6HWVqf9cM6v3ST5df3sFW0Y1lyrWkTbUdZNIHQJrJZYAG3KO3NS6d2aKkDRKviqAxcwC-dj-8EEg40529A
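Instead of copying the token out of the kubectl describe output, it can also be printed directly (a sketch; the secret name suffix vr6n5 is specific to this cluster and will differ in yours):
[root@k8s-master ~]# kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-token-vr6n5 -o jsonpath='{.data.token}' | base64 -d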

3) Create an administrator binding so the token can access every namespace:
kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
List the secrets in the kubernetes-dashboard namespace:
kubectl get secret -n kubernetes-dashboard
The output looks like:

NAME                               TYPE                                  DATA   AGE
default-token-5hbbn                kubernetes.io/service-account-token   3      24h
kubernetes-dashboard-certs         Opaque                                2      23h
kubernetes-dashboard-csrf          Opaque                                1      24h
kubernetes-dashboard-key-holder    Opaque                                2      24h
kubernetes-dashboard-token-vr6n5   kubernetes.io/service-account-token   3      24h

4) Find the secret that carries the token, kubernetes-dashboard-token-vr6n5:
kubectl describe secret kubernetes-dashboard-token-vr6n5 -n kubernetes-dashboard
Copy the value after token: into the token field on the login page and click Sign in; this time the resources in every namespace are visible and can be managed.

8. Install the metrics component

Upload metrics-server-amd64_0_3_1.tar.gz and addon.tar.gz to every node and load them with docker load -i as shown below. The images can be downloaded from this Baidu Netdisk link: https://pan.baidu.com/s/1k1heJy8lLnDk2JEFyRyJdA (extraction code: udkj).

docker load -i metrics-server-amd64_0_3_1.tar.gz
docker load -i addon.tar.gz

The metrics-server version is 0.3.1 and uses the image k8s.gcr.io/metrics-server-amd64:v0.3.1
The addon-resizer version is 1.8.4 and uses the image k8s.gcr.io/addon-resizer:1.8.4

Run on the k8s master1 node:
kubectl apply -f metrics.yaml

The content of metrics.yaml can be copied from
https://raw.githubusercontent.com/luckylucky421/kubernetes1.17.3/master/metrics.yaml
If that URL is not reachable, clone or download the repository below and copy the yaml file to master1 manually:
https://github.com/luckylucky421/kubernetes1.17.3
After all of the components above are installed, check that they are healthy; a STATUS of Running means the component is normal, as shown below:

kubectl get pods -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP                NODE   NOMINATED NODE   READINESS GATES
coredns-7ff77c879f-tftsh          1/1     Running   0          24h   10.244.0.2        s100   <none>           <none>
coredns-7ff77c879f-wrzch          1/1     Running   0          24h   10.244.0.3        s100   <none>           <none>
etcd-s100                         1/1     Running   0          24h   192.168.112.100   s100   <none>           <none>
kube-apiserver-s100               1/1     Running   0          24h   192.168.112.100   s100   <none>           <none>
kube-controller-manager-s100      1/1     Running   0          24h   192.168.112.100   s100   <none>           <none>
kube-flannel-ds-2g2wq             1/1     Running   0          24h   192.168.112.101   s101   <none>           <none>
kube-flannel-ds-b6jwp             1/1     Running   0          24h   192.168.112.100   s100   <none>           <none>
kube-proxy-khfkd                  1/1     Running   0          24h   192.168.112.100   s100   <none>           <none>
kube-proxy-x4cs8                  1/1     Running   0          24h   192.168.112.101   s101   <none>           <none>
kube-scheduler-s100               1/1     Running   0          24h   192.168.112.100   s100   <none>           <none>
metrics-server-8459f8db8c-x8ss6   2/2     Running   0          20h   10.244.1.3        s101   <none>           <none>
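Once metrics-server is Running, node and pod resource usage should be queryable with kubectl top (it may take a minute or two before data appears):
[root@k8s-master ~]# kubectl top nodes
[root@k8s-master ~]# kubectl top pods -n kube-system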

III. References

Search WeChat for the article 安装kubernetes集群-灵活安装k8s各个版本高可用集群 (the article this post follows)
Network errors in Kubernetes (blog post)
Mingyang's blog on k8s and cloud native
