Adding node worker5 to the k8s dev cluster via script

I. Adding node worker5 to the k8s dev cluster via script

  1. Run sh install_kubelet.sh to install docker, kubelet, kubeadm, kubectl, and related commands.

    /root/.docker/ holds a config.json that lets the node pull the Aliyun business-application images without logging in:

{
  "auths": {
    "registry-vpc.cn-hangzhou.aliyuncs.com": {
      "auth": "5Lq/5qyh572R6IGUOmV0U21lMDcwNA=="
    },
    "registry.cn-hangzhou.aliyuncs.com": {
      "auth": "5Lq/5qyh572R6IGUOmV0U21lMDcwNA=="
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/19.03.11 (linux)"
  }
}
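The "auth" value in this file is just base64("username:password"). If the credential ever needs to be regenerated, it can be produced like this (the username and password below are placeholders, not the real registry account):

```shell
# Encode registry credentials for .docker/config.json.
# "myuser:mypassword" is a placeholder -- substitute the real account.
auth=$(printf '%s' 'myuser:mypassword' | base64)
echo "$auth"
```

Decoding the existing value with base64 -d is a quick way to confirm which account a node is pulling with.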

  2. On the dev-master1 node, generate the join token and command:

[root@etsme-dev-k8s-master ~]# kubeadm token create --print-join-command

kubeadm join apiserver.etsme:6443 --token a5gk11.nnm8lqi6o9ltbf32 --discovery-token-ca-cert-hash sha256:8f8ec67a66db0510620e4e10588a06cba18faab36aa9174d0e7018d4ec392a9b
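When scripting node joins, the token and CA hash can be pulled out of the captured join command rather than copied by hand. A minimal sketch (the join string below is the one printed above; note that tokens created this way expire after 24 hours by default):

```shell
# Extract the --token and --discovery-token-ca-cert-hash values
# from a captured "kubeadm token create --print-join-command" output.
join_cmd='kubeadm join apiserver.etsme:6443 --token a5gk11.nnm8lqi6o9ltbf32 --discovery-token-ca-cert-hash sha256:8f8ec67a66db0510620e4e10588a06cba18faab36aa9174d0e7018d4ec392a9b'
token=$(echo "$join_cmd" | awk '{for (i = 1; i <= NF; i++) if ($i == "--token") print $(i + 1)}')
hash=$(echo "$join_cmd"  | awk '{for (i = 1; i <= NF; i++) if ($i == "--discovery-token-ca-cert-hash") print $(i + 1)}')
echo "$token"
echo "$hash"
```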

  3. On the new node worker5, run the join command:

[root@etsme-dev-k8s-worker5 ~]# kubeadm join apiserver.etsme:6443 --token a5gk11.nnm8lqi6o9ltbf32 --discovery-token-ca-cert-hash sha256:8f8ec67a66db0510620e4e10588a06cba18faab36aa9174d0e7018d4ec392a9b
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster…
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap…

This node has joined the cluster:

  • Certificate signing request was sent to apiserver and a response was received.
  • The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

  4. On dev-master1, check that the new node worker5 is in the Ready state:

[root@etsme-dev-k8s-master .docker]# kubectl get nodes -o wide
NAME                    STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
etsme-dev-k8s-master    Ready    master   100d    v1.19.5   172.16.234.57   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
etsme-dev-k8s-worker1   Ready    <none>   100d    v1.19.5   172.16.234.58   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
etsme-dev-k8s-worker2   Ready    <none>   100d    v1.19.5   172.16.32.99    <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
etsme-dev-k8s-worker3   Ready    <none>   100d    v1.19.5   172.16.50.128   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
etsme-dev-k8s-worker4   Ready    <none>   2d23h   v1.19.5   172.16.234.68   <none>        CentOS Linux 7 (Core)   3.10.0-1160.59.1.el7.x86_64   docker://19.3.11
etsme-dev-k8s-worker5   Ready    <none>   46s     v1.19.5   172.16.234.69   <none>        CentOS Linux 7 (Core)   3.10.0-1160.59.1.el7.x86_64   docker://19.3.11
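Instead of re-running kubectl get nodes by hand, a script can block until the new node reports Ready. A sketch as a dry run (the command is only echoed here; drop the echo to actually execute it against the cluster):

```shell
# Wait up to 5 minutes for worker5 to become Ready (dry run: command is echoed).
NODE=etsme-dev-k8s-worker5
echo kubectl wait --for=condition=Ready "node/${NODE}" --timeout=300s
```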

II. Replacing worker nodes

  1. Migrate the ceph-mon components to the new worker nodes; check which node each mon deployment is pinned to:

kubectl get deploy rook-ceph-mon-a -o yaml -n rook-ceph |grep -i nodeselector -C 2

  nodeSelector:
    kubernetes.io/hostname: etsme-dev-k8s-worker4

kubectl get deploy rook-ceph-mon-d -o yaml -n rook-ceph |grep -i nodeselector -C 2

  nodeSelector:
    kubernetes.io/hostname: etsme-dev-k8s-worker5
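The nodeSelector can be repointed at the new worker with a strategic merge patch instead of editing the deployment by hand. A sketch using the deployment and node names above (echoed as a dry run; verify that mon quorum can tolerate the move before applying it):

```shell
# Build a strategic merge patch pinning the mon deployment to the new node,
# then echo the kubectl command (drop the echo to apply it for real).
DEPLOY=rook-ceph-mon-d
NODE=etsme-dev-k8s-worker5
patch=$(printf '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"%s"}}}}}' "$NODE")
echo kubectl -n rook-ceph patch deploy "$DEPLOY" -p "$patch"
```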

  2. Use cordon to mark node worker2 as unschedulable:

kubectl cordon etsme-dev-k8s-worker2

  3. Run drain to smoothly evict the pods running on worker2 onto other nodes:

kubectl drain etsme-dev-k8s-worker2 --ignore-daemonsets

If drain has no effect, the pods can be evicted with kubectl delete pod $(kubectl get pod -n dev-1 -o wide | grep worker2 | awk '{print $1}') -n dev-1.
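A slightly more careful version of that fallback selects pods by the NODE column of kubectl get pod -o wide (field 7) rather than a bare grep, so pods whose names merely contain "worker2" are not caught:

```shell
# Delete every pod in namespace dev-1 that is scheduled on worker2.
# Field 7 of "kubectl get pod -o wide" output is the NODE column.
ns=dev-1
node=worker2
kubectl get pod -n "$ns" -o wide --no-headers | awk -v n="$node" '$7 ~ n {print $1}' |
  while read -r pod; do
    kubectl delete pod "$pod" -n "$ns"
  done
```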

  4. Use cordon to mark node worker3 as unschedulable:

kubectl cordon etsme-dev-k8s-worker3

  5. Run drain to smoothly evict the pods running on worker3 onto other nodes:

    kubectl drain etsme-dev-k8s-worker3 --ignore-daemonsets

III. Log into ceph-tools and delete the OSDs of the nodes being replaced

[root@etsme-dev-k8s-master ~]# kubectl exec -it rook-ceph-tools-84fc455b76-ffhbk bash -n rook-ceph

ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.44937 root default
-5 0.29300 host etsme-dev-k8s-worker1
0 hdd 0.29300 osd.0 up 1.00000 1.00000
-3 0.03909 host etsme-dev-k8s-worker2
1 hdd 0.03909 osd.1 up 1.00000 1.00000
-7 0.03909 host etsme-dev-k8s-worker3
2 hdd 0.03909 osd.2 up 1.00000 1.00000
-9 0.03909 host etsme-dev-k8s-worker4
3 hdd 0.03909 osd.3 up 1.00000 1.00000
-11 0.03909 host etsme-dev-k8s-worker5
4 hdd 0.03909 osd.4 up 1.00000 1.00000

  1. Mark the OSD as out; metadata synchronization begins at this point:

ceph osd out osd.1

ceph osd down osd.1

  2. The data migration can be followed with ceph -s:

  cluster:
    id:     90c680a2-a05c-44bd-a6fe-1f55e59587de
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,c,d (age 21m)
    mgr: a(active, since 2d)
    mds: myfs:1 {0=myfs-b=up:active} 1 up:standby-replay
    osd: 5 osds: 5 up (since 38m), 4 in (since 2m); 14 remapped pgs

  data:
    pools:   3 pools, 65 pgs
    objects: 8.83k objects, 5.0 GiB
    usage:   18 GiB used, 402 GiB / 420 GiB avail
    pgs:     3595/26478 objects misplaced (13.577%)
             51 active+clean
             13 active+remapped+backfill_wait
             1  active+remapped+backfilling
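A script can watch that "objects misplaced" figure and wait for it to reach zero before purging. Extracting the percentage from captured ceph -s output (the sample line is the one shown above):

```shell
# Pull the misplaced-object percentage out of a "ceph -s" status line.
status_line='pgs:     3595/26478 objects misplaced (13.577%)'
pct=$(echo "$status_line" | sed -n 's/.*(\([0-9.]*\)%).*/\1/p')
echo "$pct"   # -> 13.577
```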

  3. Once the data migration (the backfilling and rebalancing) has completed, purge the OSD:

ceph osd purge 1

  4. Check the OSD tree again; the OSD has been removed:

ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.41028 root default
-5 0.29300 host etsme-dev-k8s-worker1
0 hdd 0.29300 osd.0 up 1.00000 1.00000
-3 0 host etsme-dev-k8s-worker2
-7 0.03909 host etsme-dev-k8s-worker3
2 hdd 0.03909 osd.2 up 1.00000 1.00000
-9 0.03909 host etsme-dev-k8s-worker4
3 hdd 0.03909 osd.3 up 1.00000 1.00000
-11 0.03909 host etsme-dev-k8s-worker5
4 hdd 0.03909 osd.4 up 1.00000 1.00000

  5. Delete the corresponding deployment and remove the matching entry from cluster.yaml:

kubectl delete deploy rook-ceph-osd-1 -n rook-ceph
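The whole OSD removal flow in this section condenses to a few commands. A dry-run sketch (commands are echoed; drop the echo prefixes to execute; the ceph commands run inside the rook-ceph-tools pod, and recent Ceph releases require the --yes-i-really-mean-it flag on purge):

```shell
# Dry-run of the OSD removal sequence; drop the echo prefixes to execute.
OSD_ID=1
echo ceph osd out "osd.${OSD_ID}"
# ... wait here for backfilling/rebalancing to finish (watch "ceph -s") ...
echo ceph osd purge "${OSD_ID}" --yes-i-really-mean-it
echo kubectl -n rook-ceph delete deploy "rook-ceph-osd-${OSD_ID}"
```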

IV. Removing the worker nodes (etsme-dev-k8s-worker2, etsme-dev-k8s-worker3)

  1. On each worker node being removed, run:

kubeadm reset

  2. On the first master node, etsme-dev-k8s-master, run:

kubectl delete node etsme-dev-k8s-worker2

kubectl delete node etsme-dev-k8s-worker3
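These two steps, together with the earlier cordon/drain, can be scripted per node. A dry-run sketch (commands are echoed; drop the echo prefixes to execute; kubeadm reset must run on the worker itself, here assumed reachable over ssh as root):

```shell
# Dry-run removal sequence for one worker; drop the echo prefixes to execute.
NODE=etsme-dev-k8s-worker2
echo kubectl cordon "$NODE"
echo kubectl drain "$NODE" --ignore-daemonsets
echo ssh "root@${NODE}" kubeadm reset -f
echo kubectl delete node "$NODE"
```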

  3. On master01, run kubectl get nodes -o wide to confirm the nodes have been removed:

kubectl get nodes -o wide
NAME                    STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
etsme-dev-k8s-master    Ready    master   106d    v1.19.5   172.16.234.57   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
etsme-dev-k8s-worker1   Ready    <none>   106d    v1.19.5   172.16.234.58   <none>        CentOS Linux 7 (Core)   3.10.0-1127.19.1.el7.x86_64   docker://19.3.11
etsme-dev-k8s-worker4   Ready    <none>   9d      v1.19.5   172.16.234.68   <none>        CentOS Linux 7 (Core)   3.10.0-1160.59.1.el7.x86_64   docker://19.3.11
etsme-dev-k8s-worker5   Ready    <none>   6d16h   v1.19.5   172.16.234.69   <none>        CentOS Linux 7 (Core)   3.10.0-1160.59.1.el7.x86_64   docker://19.3.11
