Deploying a Kubernetes Cluster on CentOS 7.X
Deployment Environment Preparation
The cluster is deployed as a multi-master highly available cluster with 7 hosts in total: 3 masters, 3 slavers (workers), and 1 client.
| Hostname | OS Version | IP | Host Configuration | Notes |
| --- | --- | --- | --- | --- |
| region-master-1 | | | 2 CPUs / 4 GB RAM | |
| region-master-2 | | | 2 CPUs / 4 GB RAM | |
| region-master-3 | | | 2 CPUs / 4 GB RAM | |
| region-slaver-1 | | | 2 CPUs / 4 GB RAM | |
| region-slaver-2 | | | 2 CPUs / 4 GB RAM | |
| region-slaver-3 | | | 2 CPUs / 4 GB RAM | |
| region-vip | | | 2 CPUs / 4 GB RAM | |
| region-client | | | 2 CPUs / 4 GB RAM | |
System Environment Preparation
Execute the following operations on all master and slaver nodes.
Configure the operating system
Disable the firewall and SELinux, and configure the Aliyun yum repository.
$ systemctl stop firewalld && systemctl disable firewalld
$ setenforce 0
$ vim /etc/selinux/config
SELINUX=disabled
Configure the hostname
Change the hostname:
[root@localhost ~]# hostnamectl set-hostname region-master-1
[root@localhost ~]# more /etc/hostname
Log out and log back in to see the newly set hostname region-master-1.
Modify the hosts file
[root@region-master-1 ~]# cat >> /etc/hosts << EOF
region-master-1
region-master-2
region-master-3
region-slaver-1
region-slaver-2
region-slaver-3
EOF
Disable swap
Temporarily disable:
[root@region-master-1 ~]# swapoff -a
Permanently disable:
After disabling swap, also edit the /etc/fstab configuration file and comment out the swap entry:
[root@region-master-1 ~]# sed -i.bak /swap/s/^/#/ /etc/fstab
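As an optional check (this command is an addition, not part of the original steps), confirm that swap is fully off; the Swap line should show 0:
[root@region-master-1 ~]# free -m | grep -i swap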
Modify kernel parameters
The k8s network in this article uses flannel, which requires the kernel parameter bridge-nf-call-iptables=1; setting this parameter requires the br_netfilter kernel module.
Load the br_netfilter module
Check the br_netfilter module:
[root@region-master-1 ~]# lsmod |grep br_netfilter
If the br_netfilter module is not loaded, run the commands below; otherwise skip this step.
Load the br_netfilter module temporarily:
[root@region-master-1 ~]# modprobe br_netfilter
This does not survive a reboot.
Load the br_netfilter module permanently:
[root@region-master-1 ~]# cat > /etc/rc.sysinit << EOF
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
EOF
[root@region-master-1 ~]# cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
[root@region-master-1 ~]# chmod 755 /etc/sysconfig/modules/br_netfilter.modules
Modify kernel parameters temporarily
[root@region-master-1 ~]# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@region-master-1 ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-ip6tables = 1
Modify kernel parameters permanently
[root@region-master-1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@region-master-1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Add the kubernetes repository
[root@region-master-1 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Update the yum cache
[root@region-master-1 ~]# yum clean all
[root@region-master-1 ~]# yum -y makecache
Passwordless SSH login
Configure passwordless login from region-master-1 to region-master-2 and region-master-3. This step is executed only on region-master-1.
Create the key pair
[root@region-master-1 ~]# ssh-keygen -t rsa
Copy the public key to region-master-2 and region-master-3
[root@region-master-1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@
[root@region-master-1 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@
Test passwordless login
[root@region-master-1 ~]# ssh
[root@region-master-1 ~]# ssh region-master-3
region-master-1 can now log in to region-master-2 and region-master-3 directly, without entering a password.
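Because the target IPs are omitted in the commands above, here is a minimal sketch that copies the key in one loop using the hostnames from /etc/hosts (assuming they resolve to the right addresses):
[root@region-master-1 ~]# for host in region-master-2 region-master-3; do ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host; done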
Install Docker
Execute this section on both the control plane and worker nodes.
Install dependency packages
[root@region-master-1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
Configure the Docker repository
[root@region-master-1 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker CE
List the available docker versions
[root@region-master-1 ~]# yum list docker-ce --showduplicates | sort -r
Install docker
[root@region-master-1 ~]# yum install docker-ce- docker-ce-cli- containerd.io -y
Start Docker
[root@region-master-1 ~]# systemctl start docker
[root@region-master-1 ~]# systemctl enable docker
Command completion
Install bash-completion
[root@region-master-1 ~]# yum -y install bash-completion
Load bash-completion
[root@region-master-1 ~]# source /etc/profile.d/bash_completion.sh
Image acceleration
Because Docker Hub's servers are located overseas, pulling images can be slow, so a registry mirror (accelerator) can be configured. The main options are Docker's official China registry mirror, the Aliyun accelerator and the DaoCloud accelerator; this article uses the Aliyun accelerator as an example.
Log in to the Aliyun Container Registry console
The login address is https://cr.console.aliyun.com ; if you do not have an Aliyun account yet, register one first.
Configure the registry mirror
Configure the daemon.json file
[root@region-master-1 ~]# mkdir -p /etc/docker
[root@region-master-1 ~]# tee /etc/docker/daemon.json <<-EOF
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"]
}
EOF
Restart the service
[root@region-master-1 ~]# systemctl daemon-reload
[root@region-master-1 ~]# systemctl restart docker
The accelerator configuration is complete.
Verification
[root@region-master-1 ~]# docker --version
[root@region-master-1 ~]# docker run hello-world
Verify that docker is installed correctly by checking the docker version and running the hello-world container.
Change the Cgroup Driver
Modify daemon.json
Edit daemon.json and add "exec-opts": ["native.cgroupdriver=systemd"]:
[root@region-master-1 ~]# more /etc/docker/daemon.json
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Reload docker
[root@region-master-1 ~]# systemctl daemon-reload
[root@region-master-1 ~]# systemctl restart docker
The cgroupdriver is changed to eliminate the following warning:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
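As an optional check (not part of the original steps), docker info should now report "Cgroup Driver: systemd":
[root@region-master-1 ~]# docker info | grep -i "cgroup driver"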
Install keepalived
Execute this section on all control plane nodes.
Install keepalived
[root@region-master-1 ~]# yum -y install keepalived
keepalived configuration
keepalived configuration on region-master-1:
[root@region-master-1 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id region-master-1
}
vrrp_instance VI_1 {
state MASTER
interface ens160
virtual_router_id
priority
advert_int 1
authentication {
auth_type PASS
auth_pass
}
virtual_ipaddress {
}
}
keepalived configuration on region-master-2:
[root@region-master-2 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id region-master-2
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id
priority
advert_int 1
authentication {
auth_type PASS
auth_pass
}
virtual_ipaddress {
}
}
keepalived configuration on region-master-3:
[root@region-master-3 ~]# more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
router_id region-master-3
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id
priority
advert_int 1
authentication {
auth_type PASS
auth_pass
}
virtual_ipaddress {
}
}
Start keepalived
Start the keepalived service on all control plane nodes and enable it at boot
[root@region-master-1 ~]# service keepalived start
[root@region-master-1 ~]# systemctl enable keepalived
Check the VIP
[root@region-master-1 ~]# ip a
The VIP is on region-master-1.
Install k8s
Execute this section on both the control plane and worker nodes.
Check available versions
[root@region-master-1 ~]# yum list kubelet --showduplicates | sort -r
The kubelet version installed in this article is , and the docker versions that it supports are , , , , , .
Install kubelet, kubeadm and kubectl
Install the three packages
[root@region-master-1 ~]# yum install -y kubelet- kubeadm- kubectl-
# Adjust the CentOS 7 repository
yum install wget -y
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Adjust the Kubernetes repository
vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
# save and exit vim
# Refresh the repositories
yum clean all
yum makecache
Start kubelet
Start kubelet and enable it at boot
[root@region-master-1 ~]# systemctl enable kubelet && systemctl start kubelet
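Note that until kubeadm init (or kubeadm join) has been run on the node, kubelet keeps restarting in a loop; seeing it in a failed/auto-restart state at this point is expected. An optional status check:
[root@region-master-1 ~]# systemctl status kubelet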
kubectl command completion
[root@region-master-1 ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@region-master-1 ~]# source .bash_profile
Download images
Image download script
Almost all of the Kubernetes installation components and Docker images are hosted on Google's own servers, and direct access may run into network problems. The workaround here is to pull the images from an Aliyun image repository and then retag them locally with the default image tags. This article pulls the images by running the image.sh script.
[root@region-master-1 ~]# more image.sh
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/loong576
version=v1.
images=(`kubeadm config images list --kubernetes-version=$version|awk -F / '{print $2}'`)
for imagename in ${images[@]} ; do
docker pull $url/$imagename
docker tag $url/$imagename k8s.gcr.io/$imagename
docker rmi -f $url/$imagename
done
url is the Aliyun image repository address, and version is the Kubernetes version being installed.
Download the images
Run the image.sh script to download the images for the specified version
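Assuming the script was saved as image.sh in the current directory, make it executable before running it:
[root@region-master-1 ~]# chmod +x image.sh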
[root@region-master-1 ~]# ./image.sh
[root@region-master-1 ~]# docker images
Initialize the Master
Execute this section on the region-master-1 node.
kubeadm-config.yaml
[root@region-master-1 ~]# more kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.
apiServer:
  certSANs:    # list the hostname, IP and VIP of every kube-apiserver node
  - region-master-1
  - region-master-2
  - region-master-3
  - region-slaver-1
  - region-slaver-2
  - region-slaver-3
  -
  -
  -
  -
  -
  -
  -
controlPlaneEndpoint: :
networking:
  podSubnet: /
kubeadm-config.yaml is the configuration file used for initialization.
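As an optional sanity check (not part of the original steps), kubeadm can list the images it will use for this configuration file before the actual initialization:
[root@region-master-1 ~]# kubeadm config images list --config kubeadm-config.yaml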
Initialize the master
[root@region-master-1 ~]# kubeadm init --config=kubeadm-config.yaml
Record the kubeadm join output; this command is needed later to join the worker nodes and the other master nodes to the cluster.
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join : --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join : --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
Initialization failure:
If initialization fails, run kubeadm reset and then initialize again.
[root@region-master-1 ~]# kubeadm reset
[root@region-master-1 ~]# rm -rf $HOME/.kube/config
Load environment variables
[root@region-master-1 ~]# echo export KUBECONFIG=/etc/kubernetes/admin.conf >> ~/.bash_profile
[root@region-master-1 ~]# source .bash_profile
All operations in this article are performed as the root user; for a non-root user, run the following instead:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Install the flannel network
Create the flannel network on region-master-1
[root@region-master-1 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
Due to network issues the download may fail; you can also download the kube-flannel.yml file directly (it is attached at the end of the original post) and then run apply against the local file.
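After applying the manifest, an optional check is to watch the flannel pods come up on every node (the app=flannel label is the one used by the referenced kube-flannel.yml):
[root@region-master-1 ~]# kubectl get pods -n kube-system -l app=flannel -o wide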
Join the other master nodes to the cluster
Certificate distribution
Distribute the certificates from region-master-1:
Run the cert-main-master.sh script on region-master-1 to distribute the certificates to region-master-2 and region-master-3
[root@region-master-1 ~]# ll|grep cert-main-master.sh
-rwxr--r-- 1 root root cert-main-master.sh
[root@region-master-1 ~]# more cert-main-master.sh
USER=root # customizable
CONTROL_PLANE_IPS=
for host in ${CONTROL_PLANE_IPS}; do
scp /etc/kubernetes/pki/ca.crt ${USER}@$host:
scp /etc/kubernetes/pki/ca.key ${USER}@$host:
scp /etc/kubernetes/pki/sa.key ${USER}@$host:
scp /etc/kubernetes/pki/sa.pub ${USER}@$host:
scp /etc/kubernetes/pki/front-proxy-ca.crt ${USER}@$host:
scp /etc/kubernetes/pki/front-proxy-ca.key ${USER}@$host:
scp /etc/kubernetes/pki/etcd/ca.crt ${USER}@$host:etcd-ca.crt
# Quote this line if you are using external etcd
scp /etc/kubernetes/pki/etcd/ca.key ${USER}@$host:etcd-ca.key
done
Move the certificates into place on region-master-2:
Run the cert-other-master.sh script on region-master-2 to move the certificates to the target directories
[root@region-master-2 ~]# pwd
/root
[root@region-master-2 ~]# ll|grep cert-other-master.sh
-rwxr--r-- 1 root root cert-other-master.sh
[root@region-master-2 ~]# more cert-other-master.sh
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Quote this line if you are using external etcd
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
[root@region-master-2 ~]# ./cert-other-master.sh
Move the certificates into place on region-master-3:
Also run the cert-other-master.sh script on region-master-3
[root@region-master-3 ~]# pwd
/root
[root@region-master-3 ~]# ll|grep cert-other-master.sh
-rwxr--r-- 1 root root cert-other-master.sh
[root@region-master-3 ~]# ./cert-other-master.sh
Join region-master-2 to the cluster
kubeadm join : --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
--control-plane
Join region-master-3 to the cluster
kubeadm join : --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
--control-plane
Load environment variables
Load the environment variables on region-master-2 and region-master-3
[root@region-master-2 ~]# scp region-master-1:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@region-master-2 ~]# echo export KUBECONFIG=/etc/kubernetes/admin.conf >> ~/.bash_profile
[root@region-master-2 ~]# source .bash_profile
[root@region-master-3 ~]# scp region-master-1:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@region-master-3 ~]# echo export KUBECONFIG=/etc/kubernetes/admin.conf >> ~/.bash_profile
[root@region-master-3 ~]# source .bash_profile
This step makes it possible to run kubectl commands on region-master-2 and region-master-3 as well.
Check the cluster nodes
[root@region-master-1 ~]# kubectl get nodes
[root@region-master-1 ~]# kubectl get po -o wide -n kube-system
All master nodes are in the Ready state and all system components are running normally.
Join the slave nodes to the cluster
Join region-slaver-1 to the cluster
kubeadm join : --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
Run the worker-node join command that was generated during master initialization.
Join region-slaver-2 to the cluster
kubeadm join : --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
Join region-slaver-3 to the cluster
kubeadm join : --token qbwt6v.rr4hsh73gv8vrcij \
--discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966
Check the cluster nodes
[root@region-master-1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
region-master-1 Ready master 44m v1.
region-master-2 Ready master 33m v1.
region-master-3 Ready master 23m v1.
region-slaver-1 Ready <none> 11m v1.
region-slaver-2 Ready <none> 7m50s v1.
region-slaver-3 Ready <none> 3m4s v1.
Client configuration
Set up the kubernetes repository
Add the kubernetes repository
[root@client ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Update the yum cache
[root@client ~]# yum clean all
[root@client ~]# yum -y makecache
Install kubectl
[root@client ~]# yum install -y kubectl-
Command completion
Install bash-completion
[root@client ~]# yum -y install bash-completion
Load bash-completion
[root@client ~]# source /etc/profile.d/bash_completion.sh
Copy admin.conf
[root@client ~]# mkdir -p /etc/kubernetes
[root@client ~]# scp :/etc/kubernetes/admin.conf /etc/kubernetes/
[root@client ~]# echo export KUBECONFIG=/etc/kubernetes/admin.conf >> ~/.bash_profile
[root@client ~]# source .bash_profile
Load environment variables
[root@client ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@client ~]# source .bash_profile
Test kubectl
[root@client ~]# kubectl get nodes
[root@client ~]# kubectl get cs
[root@client ~]# kubectl get po -o wide -n kube-system
Set up the Dashboard
All steps in this section are performed on the client.
Download the yaml
[root@client ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.-beta8/aio/deploy/recommended.yaml
If the connection times out, retry a few times. The recommended.yaml file is also attached at the end of the original post.
Configure the yaml
Change the image registry
[root@client ~]# sed -i 's/kubernetesui/registry.cn-hangzhou.aliyuncs.com\/loong576/g' recommended.yaml
Because the default image registry is not reachable from this network, the images are switched to the Aliyun mirror.
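An optional check (not part of the original steps) to confirm that every image reference now points at the mirror registry:
[root@client ~]# grep "image:" recommended.yaml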
External access
[root@client ~]# sed -i '/targetPort: /a\ \ \ \ \ \ nodePort: \n\ \ type: NodePort' recommended.yaml
Configure a NodePort so that the Dashboard can be accessed externally at https://NodeIp:NodePort; the port used here is .
Add an administrator account
[root@client ~]# cat >> recommended.yaml << EOF
---
# ------------------- dashboard-admin ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
EOF
Deploy and access
Deploy the Dashboard
[root@client ~]# kubectl apply -f recommended.yaml
Check the status
[root@client ~]# kubectl get all -n kubernetes-dashboard
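To see just the NodePort assigned to the Dashboard (the Service name kubernetes-dashboard is the one defined in the upstream manifest):
[root@client ~]# kubectl get svc -n kubernetes-dashboard kubernetes-dashboard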
Check the token
[root@client ~]# kubectl describe secrets -n kubernetes-dashboard dashboard-admin
The token is:
eyJhbGciOiJSUzI1NiIsImtpZCI6Ikd0NHZ5X3RHZW5pNDR6WEdldmlQUWlFM3IxbGM3aEIwWW1IRUdZU1ZKdWMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNms1ZjYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjk1NDE0ODEtMTUyZS00YWUxLTg2OGUtN2JmMWU5NTg3MzNjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.LAe7N8Q6XR3d0W8w-r3ylOKOQHyMg5UDfGOdUkko_tqzUKUtxWQHRBQkowGYg9wDn-nU9E-rkdV9coPnsnEGjRSekWLIDkSVBPcjvEd0CVRxLcRxP6AaysRescHz689rfoujyVhB4JUfw1RFp085g7yiLbaoLP6kWZjpxtUhFu-MKh1NOp7w4rT66oFKFR-_5UbU3FoetAFBmHuZ935i5afs8WbNzIkM6u9YDIztMY3RYLm9Zs4KxgpAmqUmBSlXFZNW2qg6hxBqDijW_1bc0V7qJNt_GXzPs2Jm1trZR6UU1C2NAJVmYBu9dcHYtTCgxxkWKwR0Qd2bApEUIJ5Wug
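A sketch for printing only the token value, assuming the service account token secret is auto-created (which is the case on this Kubernetes version):
[root@client ~]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa dashboard-admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d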
Access
Open the Dashboard with the Firefox browser:
https://VIP:
Accept the security risk warning.
Cluster high availability test
All steps in this section are performed on the client.
Check which nodes the components are running on
Use the ip command to see which node holds the VIP (and therefore serves the apiserver), and use the leader-elect records to see which nodes host the scheduler and controller-manager:
[root@region-master-1 ~]# ip a|grep
inet / scope global ens160
[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: {holderIdentity:region-master-1_6caf8003-052f-451d-8dce-4516825213ad,leaseDurationSeconds:,acquireTime:-02T09::23Z,renewTime:-03T07::55Z,leaderTransitions:2}
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: {holderIdentity:region-master-1_720d65f9-e425--95d7-e5478ac951f7,leaseDurationSeconds:,acquireTime:-02T09::20Z,renewTime:-03T07::03Z,leaderTransitions:2}
Shut down region-master-1
Shut down region-master-1
[root@region-master-1 ~]# init 0
Check the components
The VIP has floated to region-master-2
[root@region-master-2 ~]# ip a|grep
inet / scope global ens160
The controller-manager and scheduler leaders have also migrated
[root@client ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: {holderIdentity:region-master-2_b3353e8f-a02f--bf17-2f596cd25ba5,leaseDurationSeconds:,acquireTime:-03T08::42Z,renewTime:-03T08::36Z,leaderTransitions:3}
[root@client ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: {holderIdentity:region-master-3_e0a2ec66-c415-44ae-871c-18c73258dc8f,leaseDurationSeconds:,acquireTime:-03T08::56Z,renewTime:-03T08::45Z,leaderTransitions:3}
Cluster functionality test
Query:
[root@client ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
region-master-1 NotReady master 22h v1.
region-master-2 Ready master 22h v1.
region-master-3 Ready master 22h v1.
region-slaver-1 Ready <none> 22h v1.
region-slaver-2 Ready <none> 22h v1.
region-slaver-3 Ready <none> 22h v1.
region-master-1 is in the NotReady state
Create a pod:
[root@client ~]# more nginx-master.yaml
apiVersion: apps/v1        # the manifest follows the apps/v1 Kubernetes API
kind: Deployment           # the resource type to create is a Deployment
metadata:                  # metadata for this resource
  name: nginx-master       # name of the Deployment
spec:                      # specification of the Deployment
  selector:
    matchLabels:
      app: nginx
  replicas: 3              # run 3 replicas
  template:                # Pod template
    metadata:              # Pod metadata
      labels:              # labels
        app: nginx         # label key app with value nginx
    spec:                  # Pod specification
      containers:
      - name: nginx        # container name
        image: nginx:latest   # image used to create the container
[root@client ~]# kubectl apply -f nginx-master.yaml
deployment.apps/nginx-master created
[root@client ~]# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-master-75b7bfdb6b-lnsfh 1/1 Running 0 4m44s region-slaver-3 <none> <none>
nginx-master-75b7bfdb6b-vxfg7 1/1 Running 0 4m44s region-slaver-1 <none> <none>
nginx-master-75b7bfdb6b-wt9kc 1/1 Running 0 4m44s region-slaver-2 <none> <none>
Conclusion
When one master node goes down, the VIP floats to another master and all cluster functionality remains unaffected.
Shut down region-master-2
With region-master-1 already shut down, also shut down region-master-2 to test whether the cluster can still serve requests.
Shut down region-master-2:
[root@region-master-2 ~]# init 0
Check the VIP
[root@region-master-3 ~]# ip a|grep
inet / scope global ens160
The VIP has floated to the only remaining master, region-master-3.
Cluster functionality test
[root@client ~]# kubectl get nodes
Error from server: etcdserver: request timed out
[root@client ~]# kubectl get nodes
The connection to the server : was refused - did you specify the right host or port?
With two of the three etcd members down, the etcd cluster loses quorum (a majority of members is required), so the entire k8s cluster can no longer serve requests.