Kubernetes Collection

Kubernetes Cluster Deployment

This guide installs version 1.29.2 with kubeadm

Preface

Kubernetes startup flow: linux > docker > cri-docker > kubelet > AS/CM/SCH
- AS: apiserver
- CM: controller-manager
- SCH: scheduler
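
Since kubeadm runs these components as containers, they can be checked like any other pod once the cluster built later in this guide is up. A quick sanity check (output lines are illustrative):

kubectl get pods -n kube-system        # apiserver, controller-manager, and scheduler appear as pods
# kube-apiserver-k8s-master            1/1   Running
# kube-controller-manager-k8s-master   1/1   Running
# kube-scheduler-k8s-master            1/1   Running
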
Installation via kubeadm
Components run as containers
Advantages
- Simple
- Self-healing
Disadvantages
- The deployment hides some details, which makes the cluster harder to understand

Binary installation
Components run as system processes
Advantages
- More flexible cluster configuration
Disadvantages
- Harder to understand

1. Preparation

Download the Rocky Linux 9 image

Official site

https://rockylinux.org/zh-CN/download

Choose the image for your system architecture


Aliyun mirror site

https://mirrors.aliyun.com/rockylinux/9/isos/x86_64/?spm=a2c6h.25603864.0.0.29696621VzJej5

2. System Configuration

CPU: 4 cores; memory: 4 GB; disk: 100 GB; one master and two workers

Netmask: 255.255.255.0

Gateway: 192.168.68.1

IPs: master 192.168.68.160, node1 192.168.68.161, node2 192.168.68.162


3. Environment Initialization

Pin a static IP on every machine; only the ipv4 block of the connection profile needs editing

# NIC configuration
# cat /etc/NetworkManager/system-connections/enp6s18.nmconnection
[ipv4]
method=manual
address=192.168.68.160/24,192.168.68.1
dns=223.5.5.5;119.29.29.29


Configure the package mirror

https://developer.aliyun.com/mirror/rockylinux?spm=a2c6h.13651102.0.0.47731b11u4haJL  # Aliyun mirror docs
sed -e 's|^mirrorlist=|#mirrorlist=|g' \
-e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.aliyun.com/rockylinux|g' \
-i.bak \
/etc/yum.repos.d/Rocky-*.repo

dnf makecache

Stop firewalld and enable iptables instead

systemctl stop firewalld
systemctl disable firewalld
yum -y install iptables-services
systemctl start iptables
iptables -F
systemctl enable iptables
service iptables save

Disable SELinux

setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Disable the swap partition

swapoff -a
sed -i 's:/dev/mapper/rl-swap:#/dev/mapper/rl-swap:g' /etc/fstab

Set the hostnames

hostnamectl set-hostname k8s-master   # master
hostnamectl set-hostname k8s-node01   # node01
hostnamectl set-hostname k8s-node02   # node02
bash   # reload the shell so the new name takes effect

Configure local name resolution

vi /etc/hosts
192.168.68.160 k8s-master m1
192.168.68.161 k8s-node01 n1
192.168.68.162 k8s-node02 n2
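
Later steps copy files to n1 and n2 with scp; setting up key-based SSH from the master first avoids repeated password prompts (a minimal sketch, assuming the host aliases from the hosts file above):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # generate a key pair non-interactively
ssh-copy-id root@n1                        # push the public key to node01
ssh-copy-id root@n2                        # push the public key to node02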


Install ipvs

yum install -y ipvsadm
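
kube-proxy's ipvs mode also needs the ipvs kernel modules loaded; the original text does not include this step, so the module list below is an assumption based on a typical ipvs setup:

# load the ipvs kernel modules (assumed standard module names)
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe $m; done
lsmod | grep ip_vs   # verify they are loaded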

Enable IP forwarding

echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p

Load the bridge module

yum install -y epel-release
yum install -y bridge-utils
modprobe br_netfilter
echo 'br_netfilter' >> /etc/modules-load.d/bridge.conf
echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf
sysctl -p

Install Docker

dnf config-manager --add-repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
# Switch to the USTC mirror
sed -i -e 's|download.docker.com|mirrors.ustc.edu.cn/docker-ce|g' /etc/yum.repos.d/docker-ce.repo
# Install Docker
yum -y install docker-ce

# Or use the Aliyun mirror instead
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce

Configure Docker

cat > /etc/docker/daemon.json <<EOF
{
  "data-root": "/data/docker",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "100"
  },
  "insecure-registries": ["harbor.xinxainghf.com"],
  "registry-mirrors": ["https://proxy.1panel.live","https://docker.1panel.top","https://docker.m.daocloud.io","https://docker.1ms.run","https://docker.ketches.cn"]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
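
A quick check that the systemd cgroup driver took effect (the output line is illustrative):

docker info | grep -i "cgroup driver"
# Cgroup Driver: systemd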

Reboot the machines

reboot   # taking a snapshot beforehand is recommended, so the environment can be restored if a later step breaks

Install cri-dockerd

yum -y install wget
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.9/cri-dockerd-0.3.9.amd64.tgz
tar xf cri-dockerd-0.3.9.amd64.tgz   # unpack; the binary lands in ./cri-dockerd/
# Note: the steps below require the local name resolution entries n1 and n2 to be set exactly as above
cp cri-dockerd/cri-dockerd /usr/bin/
scp cri-dockerd/cri-dockerd root@n1:/usr/bin/cri-dockerd
scp cri-dockerd/cri-dockerd root@n2:/usr/bin/cri-dockerd

Configure the cri-docker service (this part runs on the master only)

cat <<"EOF" > /usr/lib/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
cat <<"EOF" > /usr/lib/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF
scp /usr/lib/systemd/system/cri-docker.service root@n1:/usr/lib/systemd/system/cri-docker.service
scp /usr/lib/systemd/system/cri-docker.service root@n2:/usr/lib/systemd/system/cri-docker.service

scp /usr/lib/systemd/system/cri-docker.socket root@n1:/usr/lib/systemd/system/cri-docker.socket
scp /usr/lib/systemd/system/cri-docker.socket root@n2:/usr/lib/systemd/system/cri-docker.socket

Start cri-docker

systemctl daemon-reload
systemctl enable cri-docker
systemctl start cri-docker
systemctl is-active cri-docker
# "active" means success

Add the kubeadm repo

https://v1-29.docs.kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/   # official docs, for reference
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
# exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

Install kubeadm 1.29.2

yum install -y kubelet-1.29.2 kubectl-1.29.2 kubeadm-1.29.2
systemctl enable kubelet.service

After installation, remove the comment from the exclude line so these packages are not upgraded by update

sed -i 's/\# exclude/exclude/' /etc/yum.repos.d/kubernetes.repo


4. Cluster Initialization

Initialize the master

# run on the master node
kubeadm init --apiserver-advertise-address=192.168.68.160 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version 1.29.2 --service-cidr=10.10.0.0/12 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock


Set up the kubeconfig directory

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
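
Optional at this point (not part of the original steps): enabling kubectl tab completion makes the following commands easier to type.

yum -y install bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc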

Join the worker nodes to the cluster

# run on each node; when copying the join command printed by kubeadm init, remember to append --cri-socket unix:///var/run/cri-dockerd.sock to select the cri-docker socket
kubeadm join 192.168.68.160:6443 --token kvuy4b.s4hanac9sc2g9uf5 --discovery-token-ca-cert-hash sha256:0cd92bb4735b5e8444bd58ae9b72eef16888e03e0768323f280cbaf0d711c094 --cri-socket unix:///var/run/cri-dockerd.sock

Verify the nodes joined the cluster

kubectl get node


If the token has expired, request a new one

kubeadm token create --print-join-command

5. Deploying the Calico Network Plugin

Official docs

https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-more-than-50-nodes

Import the images

# run on the master node
wget https://alist.wanwancloud.cn/d/%E8%BD%AF%E4%BB%B6/Kubernetes/Calico/calico.zip?sign=45yzzkju25P3oo3e7-D7w4A38_0Ug50Xag585imOt-0=:0 -O calico.zip
yum -y install unzip
unzip calico.zip
cd calico
tar xf calico-images.tar.gz
scp -r calico-images root@n1:/root
scp -r calico-images root@n2:/root
docker load -i calico-images/calico-cni-v3.26.3.tar
docker load -i calico-images/calico-kube-controllers-v3.26.3.tar
docker load -i calico-images/calico-node-v3.26.3.tar
docker load -i calico-images/calico-typha-v3.26.3.tar

Import the images on the worker nodes

# run on each node
docker load -i calico-images/calico-cni-v3.26.3.tar
docker load -i calico-images/calico-kube-controllers-v3.26.3.tar
docker load -i calico-images/calico-node-v3.26.3.tar
docker load -i calico-images/calico-typha-v3.26.3.tar

Edit the YAML

vi calico-typha.yaml
# Set CALICO_IPV4POOL_CIDR to the pod CIDR, i.e. the address range given at cluster init: --pod-network-cidr=10.244.0.0/16
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"


# Change CALICO_IPV4POOL_IPIP to switch to BGP mode
- name: CALICO_IPV4POOL_IPIP
  value: "Always" # change to "Off"


Apply the Calico network plugin

kubectl apply -f calico-typha.yaml

After a short wait, once every component is Running and every node is Ready, the cluster build is complete

kubectl get pod -A
kubectl get node


Resource Manifests

1. The Resource Manifest Explained

[root@k8s-master ~]# kubectl api-versions   # list supported API group/versions
admissionregistration.k8s.io/v1
apiextensions.k8s.io/v1
apiregistration.k8s.io/v1
apps/v1
authentication.k8s.io/v1
authorization.k8s.io/v1
autoscaling/v1
autoscaling/v2
batch/v1
certificates.k8s.io/v1
coordination.k8s.io/v1
crd.projectcalico.org/v1
discovery.k8s.io/v1
events.k8s.io/v1
flowcontrol.apiserver.k8s.io/v1
flowcontrol.apiserver.k8s.io/v1beta3
networking.k8s.io/v1
node.k8s.io/v1
policy/v1
rbac.authorization.k8s.io/v1
scheduling.k8s.io/v1
storage.k8s.io/v1
v1
# Pod resource manifest, annotated
apiVersion: v1 # API group/version; discover with kubectl explain pod; the core group's v1 is written simply as v1
kind: Pod # resource type: Pod
metadata: # metadata
  name: pod-demo # pod name
  namespace: default # namespace; defaults to default if omitted
  labels: # labels
    app: myapp # a label, useful for grouping and selection
spec: # the desired state, i.e. what you ultimately want; reaching it is not guaranteed, which is what makes this declarative
  # e.g. if you ask for a pod with a nonexistent image it can never be created, but the system keeps trying to reach your desired state
  # hence spec is a desire, not a definition
  containers: # desired child objects: the container list
  - name: myapp-1 # first container, named myapp-1
    image: wangyanglinux/myapp:v1.0 # container image
  - name: buxybox-1 # second container, named buxybox-1
    image: wangyanglinux/tools:busybox # container image
    command: # the command run by the buxybox-1 container
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
status: # status; describes the observed state of this object
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-12-20T08:49:07Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null


2. Creating Resources

Write the manifest

[root@k8s-master ~]# cat >> pod.yml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-1
    image: wangyanglinux/myapp:v1.0
  - name: buxybox-1
    image: wangyanglinux/tools:busybox
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-12-20T08:49:07Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
EOF

Submit the desired-state manifest to the cluster, which then manufactures, i.e. instantiates, the corresponding object for us

[root@k8s-master ~]# kubectl create -f pod.yml
[root@k8s-master ~]# kubectl get pod                     # check pod status
NAME       READY   STATUS              RESTARTS   AGE
pod-demo   0/2     ContainerCreating   0          5s
[root@k8s-master ~]# kubectl get pod
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          19s
[root@k8s-master ~]# kubectl get pod -o wide             # detailed pod info
NAME       READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
pod-demo   2/2     Running   0          28s   10.244.85.201   k8s-node01   <none>           <none>

**From the NODE column in the detailed output above (k8s-node01), we can tell which node the pod is running on**

# Log in to node01 and list the containers: a pod is built from several containers; here pause, k8s_myapp, and k8s_buxybox together form the pod
[root@k8s-node01 ~]# docker ps -a
CONTAINER ID   IMAGE                                               COMMAND                  CREATED              STATUS              PORTS   NAMES
f9a71e2a9bb1   adf0836b2bab                                        "/bin/sh -c 'sleep 3…"   About a minute ago   Up About a minute           k8s_buxybox-1_pod-demo_default_37ceeb28-143a-4f02-9f2f-6cfdbe25dc1f_0
df23f643af98   79fbe47c0ab9                                        "/bin/sh -c 'hostnam…"   About a minute ago   Up About a minute           k8s_myapp-1_pod-demo_default_37ceeb28-143a-4f02-9f2f-6cfdbe25dc1f_0
21b1d02515a3   registry.aliyuncs.com/google_containers/pause:3.8   "/pause"                 About a minute ago   Up About a minute           k8s_POD_pod-demo_default_37ceeb28-143a-4f02-9f2f-6cfdbe25dc1f_0

3. Modifying Container Content Inside a Pod

[root@k8s-master ~]# kubectl get pod -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
pod-demo   2/2     Running   0          28s   10.244.85.201   k8s-node01   <none>           <none>
[root@k8s-master ~]# curl 10.244.85.201                  # curl the pod IP; it returns the line below
www.xinxianghf.com | hello MyAPP | version v1.0


**Use kubectl to modify content inside a container**

# anatomy: exec runs a command in a container; -it allocates an interactive terminal; pod-demo is the pod name; -c myapp-1 selects the container; -- separates the command; /bin/bash is the command to run
[root@k8s-master ~]# kubectl exec -it pod-demo -c myapp-1 -- /bin/bash
pod-demo:/# cd /usr/local/nginx/html/
pod-demo:/usr/local/nginx/html# cd /usr/local/nginx/html/
pod-demo:/usr/local/nginx/html# echo "123321" >> index.html
pod-demo:/usr/local/nginx/html# exit
exit

The appended content has taken effect


4. Viewing Labels

[root@k8s-master ~]# kubectl get pod --show-labels       # show labels of all pods in the default namespace
NAME       READY   STATUS    RESTARTS   AGE   LABELS
pod-demo   2/2     Running   0          14m   app=myapp
[root@k8s-master ~]# kubectl get pod -l app              # pods in default that carry the key "app"
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          14m
[root@k8s-master ~]# kubectl get pod -l app=myapp        # pods in default labeled app=myapp
NAME       READY   STATUS    RESTARTS   AGE
pod-demo   2/2     Running   0          15m
[root@k8s-master ~]# kubectl get pod -l app1             # pods with the key "app1"; none exist, so nothing is listed
No resources found in default namespace.

5. Viewing Container Logs

# pod-demo is the pod name; -c selects the container
[root@k8s-master ~]# kubectl logs pod-demo -c myapp-1
192.168.68.160 - - [20/Dec/2024:14:32:51 +0800] "GET / HTTP/1.1" 200 48 "-" "curl/7.76.1"
192.168.68.160 - - [20/Dec/2024:14:42:31 +0800] "GET / HTTP/1.1" 200 55 "-" "curl/7.76.1"

# if the pod contains only one container, -c is not needed

6. Describing Resource Objects in Detail

[root@k8s-master ~]# cat >> 2.pod.yml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo-1
  namespace: default
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-1
    image: wangyanglinux/myapp:v1.0
  - name: myapp-2
    image: wangyanglinux/myapp:v2.0
EOF
kubectl create -f 2.pod.yml
kubectl describe pod pod-demo-1

7. Pod Characteristics

Continuing from section 6, we uncover a pod characteristic

kubectl describe pod pod-demo-1


# only one of the two containers is ready, and it has already failed 6 times
[root@k8s-master ~]# kubectl get pod
NAME         READY   STATUS             RESTARTS      AGE
pod-demo-1   1/2     CrashLoopBackOff   6 (17s ago)   6m41s

Check the specific container's logs

[root@k8s-master ~]# kubectl  logs pod-demo-1 -c myapp-1
[root@k8s-master ~]# kubectl logs pod-demo-1 -c myapp-2 # container 2 reports that port 80 is already taken
2024/12/20 15:17:40 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2024/12/20 15:17:40 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2024/12/20 15:17:40 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2024/12/20 15:17:40 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2024/12/20 15:17:40 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
2024/12/20 15:17:40 [emerg] 1#1: still could not bind()
nginx: [emerg] still could not bind()
# The image wangyanglinux/myapp:v1.0 is nginx compiled and packaged on a base OS
# Conclusion: containers within the same pod share the network stack, IPC, and PID namespaces
# Once container 1 starts and binds port 80, container 2 can no longer use the occupied port 80
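
The shared network namespace is easy to observe in the two-container pod from section 2 (a sketch; assumes pod-demo is still running and that the busybox image ships wget):

# from the buxybox-1 container, localhost reaches nginx running in myapp-1
kubectl exec pod-demo -c buxybox-1 -- wget -qO- http://127.0.0.1
# www.xinxianghf.com | hello MyAPP | version v1.0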

8. Common Commands

# List pods
$ kubectl get pod
-A, --all-namespaces  resources in all namespaces
-n                    namespace; defaults to default
--show-labels         show labels
-l                    filter by label: key or key=value
-o wide               details: assigned node, IP, etc.

# Run a command inside a pod: pod name, container name
$ kubectl exec -it podName -c cName -- command
-c can be omitted when the pod has a single container

# Show the schema of a resource field
$ kubectl explain pod.spec

# View container logs
$ kubectl logs podName -c cName
-c can be omitted when the pod has a single container

# Create a pod
$ kubectl create -f pod.yml

# Delete a pod
$ kubectl delete pod pod-demo   # by pod name
$ kubectl delete -f pod.yml     # by the manifest that created it

# Describe a resource object in detail
$ kubectl describe pod podName

Pod Lifecycle

The complete journey from creation to termination

Pod Controllers: the Core Soul

Kubernetes runs a set of controllers that keep the cluster's current state in line with the desired state; they are the cluster's management control center, its "central brain".
For example:
The ReplicaSet controller maintains the number of pods running in the cluster.
The node controller monitors node health and runs automated repair when a node fails, keeping the cluster in its expected working state.


1. The ReplicationController

Abbreviated RC, it ensures the number of application replicas always matches the user-defined replica count: if a container exits abnormally, a new pod is created to replace it, and any abnormal extra containers are automatically reclaimed.

Newer Kubernetes versions recommend ReplicaSet in place of ReplicationController.
ReplicaSet is essentially the same as ReplicationController; only the name differs, and ReplicaSet additionally supports set-based selectors.

# Annotated
apiVersion: v1 # core group, v1
kind: ReplicationController # resource kind: ReplicationController
metadata: # metadata
  name: rc-demo # RC name
spec: # desired state
  replicas: 3 # number of pod replicas
  selector: # selector
    app: rc-demo # RC selector label
  template: # pod template
    metadata: # metadata
      labels: # pod labels
        app: rc-demo # must satisfy the RC selector (the selector must be a subset of the pod's labels)
    spec: # desired container state
      containers: # container list
      - name: rc-demo-container # container name
        image: wangyanglinux/myapp:v1.0 # container image
        env: # environment variables
        - name: GET_HOSTS_FROM # define an env var
          value: dns
        - name: zhangsan
          value: "123"
        ports: # container ports
        - containerPort: 80 # port 80

[root@k8s-master ~]# cat >> rc.yml <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-demo
spec:
  replicas: 3
  selector:
    app: rc-demo
  template:
    metadata:
      labels:
        app: rc-demo
    spec:
      containers:
      - name: rc-demo-container
        image: wangyanglinux/myapp:v1.0
        env:
        - name: GET_HOSTS_FROM
          value: dns
        - name: zhangsan
          value: "123"
        ports:
        - containerPort: 80
EOF
[root@k8s-master ~]# kubectl create -f rc.yml

Verify by deleting a pod

[root@k8s-master ~]# kubectl get rc                      # inspect the RC
NAME      DESIRED   CURRENT   READY   AGE
rc-demo   3         3         3       6m26s              # desired 3, current 3, ready 3
[root@k8s-master ~]# kubectl get pod                     # list pods
NAME            READY   STATUS    RESTARTS   AGE
rc-demo-ndx2r   1/1     Running   0          6m32s
rc-demo-nhv9q   1/1     Running   0          6m32s
rc-demo-rtlph   1/1     Running   0          6m32s
[root@k8s-master ~]# kubectl delete pod rc-demo-ndx2r    # delete a pod by hand
pod "rc-demo-ndx2r" deleted
[root@k8s-master ~]# kubectl get pod                     # list pods again
NAME            READY   STATUS    RESTARTS   AGE
rc-demo-ljdhd   1/1     Running   0          11s         # the deleted pod was automatically recreated
rc-demo-nhv9q   1/1     Running   0          7m28s
rc-demo-rtlph   1/1     Running   0          7m28s

Verify the label-subset behavior

[root@k8s-master ~]# kubectl get pod --show-labels       # list pods with labels
NAME            READY   STATUS    RESTARTS   AGE     LABELS
rc-demo-ljdhd   1/1     Running   0          9m58s   app=rc-demo
rc-demo-nhv9q   1/1     Running   0          17m     app=rc-demo
rc-demo-rtlph   1/1     Running   0          17m     app=rc-demo
[root@k8s-master ~]# kubectl label pod rc-demo-ljdhd version=v1   # add a label to pod rc-demo-ljdhd
pod/rc-demo-ljdhd labeled
[root@k8s-master ~]# kubectl get pod --show-labels
NAME            READY   STATUS    RESTARTS   AGE   LABELS
rc-demo-ljdhd   1/1     Running   0          10m   app=rc-demo,version=v1   # version=v1 was added, but the RC selector is still a subset of this pod's labels
rc-demo-nhv9q   1/1     Running   0          17m   app=rc-demo
rc-demo-rtlph   1/1     Running   0          17m   app=rc-demo

[root@k8s-master ~]# kubectl get pod --show-labels       # list pods with labels
NAME            READY   STATUS    RESTARTS   AGE   LABELS
rc-demo-ljdhd   1/1     Running   0          10m   app=rc-demo,version=v1
rc-demo-nhv9q   1/1     Running   0          17m   app=rc-demo
rc-demo-rtlph   1/1     Running   0          17m   app=rc-demo
[root@k8s-master ~]# kubectl label pod rc-demo-nhv9q app=test --overwrite   # change the app label to test
pod/rc-demo-nhv9q labeled
[root@k8s-master ~]# kubectl get pod --show-labels       # now there are 4 pods
NAME            READY   STATUS    RESTARTS   AGE   LABELS
rc-demo-jdnfm   1/1     Running   0          66s   app=rc-demo              # the newly created pod
rc-demo-ljdhd   1/1     Running   0          15m   app=rc-demo,version=v1
rc-demo-nhv9q   1/1     Running   0          23m   app=test                 # app=test no longer satisfies the RC selector, so this pod escapes the RC's management
rc-demo-rtlph   1/1     Running   0          23m   app=rc-demo

[root@k8s-master ~]# kubectl get pod --show-labels       # list pods with labels
NAME            READY   STATUS    RESTARTS   AGE   LABELS
rc-demo-jdnfm   1/1     Running   0          66s   app=rc-demo
rc-demo-ljdhd   1/1     Running   0          15m   app=rc-demo,version=v1
rc-demo-nhv9q   1/1     Running   0          23m   app=test
rc-demo-rtlph   1/1     Running   0          23m   app=rc-demo
[root@k8s-master ~]# kubectl label pod rc-demo-nhv9q app=rc-demo --overwrite   # change the app label back to rc-demo
pod/rc-demo-nhv9q labeled
[root@k8s-master ~]# kubectl get pod --show-labels       # back to 3 pods
# the most recently created pod, rc-demo-jdnfm, has been deleted: any pod matching the RC selector falls under the RC's management
NAME            READY   STATUS    RESTARTS   AGE   LABELS
rc-demo-ljdhd   1/1     Running   0          18m   app=rc-demo,version=v1
rc-demo-nhv9q   1/1     Running   0          25m   app=rc-demo
rc-demo-rtlph   1/1     Running   0          25m   app=rc-demo

Key characteristics of the RC:
- It keeps the replica count as close to the desired count as possible
- Exact convergence is not guaranteed: if node resources are insufficient to create the requested pods, it keeps retrying to approach the desired count

- When there are too many replicas, the most recently created pods are killed first
- When there are too few, new pods are created
- The RC manages pod count through labels

Adjust the RC replica count from the command line

[root@k8s-master ~]# kubectl get pod -o wide             # current pods
NAME            READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
rc-demo-ljdhd   1/1     Running   0          24m   10.244.85.206   k8s-node01   <none>           <none>
rc-demo-nhv9q   1/1     Running   0          31m   10.244.85.205   k8s-node01   <none>           <none>
rc-demo-rtlph   1/1     Running   0          31m   10.244.58.200   k8s-node02   <none>           <none>
[root@k8s-master ~]# kubectl scale rc rc-demo --replicas=10   # scale the RC named rc-demo to 10 pod replicas
replicationcontroller/rc-demo scaled
[root@k8s-master ~]# kubectl get pod -o wide             # list pods
NAME            READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
rc-demo-4l22s   1/1     Running   0          39s   10.244.85.211   k8s-node01   <none>           <none>
rc-demo-52mg4   1/1     Running   0          39s   10.244.58.205   k8s-node02   <none>           <none>
rc-demo-57z42   1/1     Running   0          39s   10.244.58.208   k8s-node02   <none>           <none>
rc-demo-8mzzt   1/1     Running   0          39s   10.244.58.207   k8s-node02   <none>           <none>
rc-demo-gh96h   1/1     Running   0          39s   10.244.58.206   k8s-node02   <none>           <none>
rc-demo-khxrw   1/1     Running   0          39s   10.244.85.213   k8s-node01   <none>           <none>
rc-demo-ljdhd   1/1     Running   0          25m   10.244.85.206   k8s-node01   <none>           <none>
rc-demo-nhv9q   1/1     Running   0          32m   10.244.85.205   k8s-node01   <none>           <none>
rc-demo-rtlph   1/1     Running   0          32m   10.244.58.200   k8s-node02   <none>           <none>
rc-demo-zdzlt   1/1     Running   0          39s   10.244.85.212   k8s-node01   <none>           <none>
[root@k8s-master ~]# kubectl get rc                      # inspect the RC
NAME      DESIRED   CURRENT   READY   AGE
rc-demo   10        10        10      33m               # desired 10, current 10, ready 10

[root@k8s-master ~]# kubectl scale rc rc-demo --replicas=2       # scale the RC named rc-demo down to 2 replicas
replicationcontroller/rc-demo scaled
[root@k8s-master ~]# kubectl get rc                      # inspect the RC
NAME      DESIRED   CURRENT   READY   AGE
rc-demo   2         2         2       35m
[root@k8s-master ~]# kubectl get pod -o wide             # list pods
NAME           READY   STATUS   RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
rc-demo-nhv9q   1/1     Running   0         35m   10.244.85.205   k8s-node01   <none>           <none>
rc-demo-rtlph   1/1     Running   0         35m   10.244.58.200   k8s-node02   <none>           <none>

2. The ReplicaSet Controller

# Annotated
apiVersion: apps/v1 # API group/version for ReplicaSet
kind: ReplicaSet # resource kind: ReplicaSet
metadata: # metadata
  name: rs-ml-demo # RS name
spec: # desired state
  replicas: 3 # replica count
  selector: # label selector
    matchLabels: # equality-based matching against pod labels
      app: rs-ml-demo # RS selector label
      domain: rs-t1 # RS selector label
  template: # pod template
    metadata: # metadata
      labels: # pod labels
        app: rs-ml-demo # pod label
        domain: rs-t1
        version: v1
    spec: # desired container state
      containers: # container list
      - name: rs-ml-demo-container # container name
        image: wangyanglinux/myapp:v1.0 # container image
        env: # container environment variables
        - name: GET_HOSTS_FROM
          value: dns
        ports: # declared ports (optional)
        - containerPort: 80 # the container uses port 80

[root@k8s-master ~]# cat >> rs.yml <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-ml-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rs-ml-demo
      domain: rs-t1
  template:
    metadata:
      labels:
        app: rs-ml-demo
        domain: rs-t1
        version: v1
    spec:
      containers:
      - name: rs-ml-demo-container
        image: wangyanglinux/myapp:v1.0
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80
EOF
[root@k8s-master ~]# kubectl  create -f  rs.yml
[root@k8s-master ~]# kubectl get pod                     # list pods
NAME               READY   STATUS    RESTARTS   AGE
rs-ml-demo-4shl5   1/1     Running   0          2m12s
rs-ml-demo-p9hbt   1/1     Running   0          2m12s
rs-ml-demo-qs9dw   1/1     Running   0          2m12s
[root@k8s-master ~]# kubectl get rs                      # inspect the RS
NAME         DESIRED   CURRENT   READY   AGE
rs-ml-demo   3         3         3       2m57s
[root@k8s-master ~]# kubectl delete pod rs-ml-demo-4shl5 # delete a pod
pod "rs-ml-demo-4shl5" deleted
[root@k8s-master ~]# kubectl get pod                     # list pods
NAME               READY   STATUS    RESTARTS   AGE
rs-ml-demo-7bhfg   1/1     Running   0          12s      # the deleted pod was recreated by the RS
rs-ml-demo-p9hbt   1/1     Running   0          3m41s
rs-ml-demo-qs9dw   1/1     Running   0          3m41s

matchExpressions selectors

selector.matchExpressions
Beyond key-value matchLabels, the RS selector also supports the matchExpressions field, which offers several kinds of matching.
Currently supported operators (see the sketch after this list):
- In: the label's value is in a given list
- NotIn: the label's value is not in a given list
- Exists: the label key exists
- DoesNotExist: the label key does not exist
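
A minimal sketch of the In operator (the name rs-me-in-demo and the version values are illustrative, not from the original):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-me-in-demo            # hypothetical name for illustration
spec:
  selector:
    matchExpressions:
    - key: version
      operator: In               # match pods whose version label is v1 or v2
      values:
      - v1
      - v2
  template:
    metadata:
      labels:
        version: v1              # satisfies the In expression above
    spec:
      containers:
      - name: rs-me-in-demo-container
        image: wangyanglinux/myapp:v1.0
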
# Annotated
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-me-exists-demo
spec:
  selector:
    matchExpressions: # expression-based selector
    - key: app # the label key "app"
      operator: Exists # match any pod that carries the key "app"
  template:
    metadata:
      labels:
        app: spring-k8s # pod label
    spec:
      containers:
      - name: rs-me-exists-demo-container
        image: wangyanglinux/myapp:v1.0
        ports:
        - containerPort: 80

[root@k8s-master ~]# cat >> rs-1.yml <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-me-exists-demo
spec:
  selector:
    matchExpressions:
    - key: app
      operator: Exists
  template:
    metadata:
      labels:
        app: spring-k8s
    spec:
      containers:
      - name: rs-me-exists-demo-container
        image: wangyanglinux/myapp:v1.0
        ports:
        - containerPort: 80
EOF
kubectl create -f rs-1.yml
[root@k8s-master ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
rs-me-exists-demo-v245q   1/1     Running   0          3m12s
rs-ml-demo-7bhfg          1/1     Running   0          23m
rs-ml-demo-p9hbt          1/1     Running   0          26m
rs-ml-demo-qs9dw          1/1     Running   0          26m
[root@k8s-master ~]# kubectl get rs                      # no replica count was declared, so it defaults to 1
NAME                DESIRED   CURRENT   READY   AGE
rs-me-exists-demo   1         1         1       5m45s
rs-ml-demo          3         3         3       29m
[root@k8s-master ~]# kubectl get pod --show-labels       # current pod labels
NAME                      READY   STATUS    RESTARTS   AGE     LABELS
rs-me-exists-demo-v245q   1/1     Running   0          3m20s   app=spring-k8s # the pod created via the Exists key match
rs-ml-demo-7bhfg          1/1     Running   0          23m     app=rs-ml-demo,domain=rs-t1,version=v1
rs-ml-demo-p9hbt          1/1     Running   0          26m     app=rs-ml-demo,domain=rs-t1,version=v1
rs-ml-demo-qs9dw          1/1     Running   0          26m     app=rs-ml-demo,domain=rs-t1,version=v1
[root@k8s-master ~]# kubectl get rs
NAME                DESIRED   CURRENT   READY   AGE
rs-me-exists-demo   1         1         1       3m46s
rs-ml-demo          3         3         3       27m
[root@k8s-master ~]# kubectl label pod rs-me-exists-demo-v245q app=test-1 --overwrite   # change the label value to test-1
pod/rs-me-exists-demo-v245q labeled
[root@k8s-master ~]# kubectl get pod --show-labels       # current pod labels
NAME                      READY   STATUS    RESTARTS   AGE     LABELS
rs-me-exists-demo-v245q   1/1     Running   0          5m40s   app=test-1 # the pod was not deleted
rs-ml-demo-7bhfg          1/1     Running   0          25m     app=rs-ml-demo,domain=rs-t1,version=v1
rs-ml-demo-p9hbt          1/1     Running   0          29m     app=rs-ml-demo,domain=rs-t1,version=v1
rs-ml-demo-qs9dw          1/1     Running   0          29m     app=rs-ml-demo,domain=rs-t1,version=v1

# Conclusion: with matchExpressions Exists, only the key is matched; the value is ignored.
# The RS can manage any pod carrying a matching key, but it will not capture pods created by other resources.
# The RS offers richer label-selection management than the RC.
spec:
  selector:
    matchExpressions:
    - key: app # label key = app
      operator: Exists
  template:
    metadata:
      labels:
        app: spring-k8s # the pod's label key is app, so the RS can manage it

3. The Deployment Controller

Basic concepts

Declarative vs. imperative

A declarative statement describes the end result: it expresses intent rather than the process of achieving it. In Kubernetes terms: "there should be a ReplicaSet with three pods."

An imperative statement acts as a command. Declarative is passive, while imperative is active and direct: "create a ReplicaSet with three pods."
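
The same outcome expressed both ways (a sketch; the deployment name web and file name web-deployment.yml are illustrative):

# imperative: issue explicit commands, step by step
kubectl create deployment web --image=wangyanglinux/myapp:v1.0
kubectl scale deployment web --replicas=3

# declarative: record the desired state in a manifest and let the cluster converge to it
kubectl apply -f web-deployment.yml   # web-deployment.yml declares replicas: 3
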
Comparing replace and apply
Replacement semantics
kubectl replace: completely replaces the existing resource's configuration with the new one. The new configuration overwrites every field and attribute, including unspecified ones; the whole resource is replaced.
kubectl apply: partially updates the existing resource. Based on the supplied file or arguments, it changes only the fields that differ from the new configuration, leaving the rest untouched; updates are at the field level.

kubectl replace: because it is a full replacement, all fields and attributes are overwritten whether or not they appear in the new configuration.
kubectl apply: only fields and attributes that differ from the new configuration are updated; unspecified fields are left alone.

Partial updates
kubectl replace: does not support partial updates; it replaces the entire resource configuration.
kubectl apply: supports partial updates; only the changed parts of the new configuration are applied, and unspecified parts are unaffected.

Interaction with other configuration
kubectl replace: ignores the state of other resource configuration and replaces the resource outright.
kubectl apply: can be combined with -f or -k to read multiple resource configurations from files or directories and update them against the cluster's current state.

# --dry-run -o yaml generates a template without creating the resource
[root@k8s-master ~]# kubectl create deployment myapp --image=wangyanglinux/myapp:v1.0 --dry-run -o yaml > deployment.yml.tmp
W1224 11:38:42.790200  874724 helpers.go:704] --dry-run is deprecated and can be replaced with --dry-run=client.
[root@k8s-master ~]# cat deployment.yml.tmp
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: myapp
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myapp
    spec:
      containers:
      - image: wangyanglinux/myapp:v1.0
        name: myapp
        resources: {}
status: {}

# Annotated
apiVersion: apps/v1 # API group/version for Deployment
kind: Deployment # resource kind
metadata: # metadata
  labels: # deployment labels
    app: deployment-demo # a label
  name: deployment-demo # controller name
spec: # desired state
  selector: # deployment selector
    matchLabels: # equality-based matching
      app: deployment-demo # selector key and value
  template: # pod template
    metadata: # metadata
      labels: # pod labels
        app: deployment-demo # pod label key and value
    spec: # desired container state
      containers: # container list
      - image: wangyanglinux/myapp:v1.0 # image
        name: deployment-demo-container # container name

[root@k8s-master ~]# cat >> deployment.yml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: deployment-demo
  name: deployment-demo
spec:
  selector:
    matchLabels:
      app: deployment-demo
  template:
    metadata:
      labels:
        app: deployment-demo
    spec:
      containers:
      - image: wangyanglinux/myapp:v1.0
        name: deployment-demo-container
EOF

[root@k8s-master ~]# kubectl apply -f deployment.yml     # create the resource
deployment.apps/deployment-demo created
[root@k8s-master ~]# kubectl get deploy                  # inspect the deployment
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
deployment-demo   1/1     1            1           30s
[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
deployment-demo-6995c75668-5g8p7   1/1     Running   0          70s

Inspect the deployment-demo object as stored in etcd

[root@k8s-master ~]# kubectl get deployment deployment-demo -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"deployment-demo"},"name":"deployment-demo","namespace":"default"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"deployment-demo"}},"template":{"metadata":{"labels":{"app":"deployment-demo"}},"spec":{"containers":[{"image":"wangyanglinux/myapp:v2.0","name":"deployment-demo-container"}]}}}}
  creationTimestamp: "2024-12-20T12:48:41Z"
  generation: 1
  labels:
    app: deployment-demo
  name: deployment-demo
  namespace: default
  resourceVersion: "394942"
  uid: da19be8e-1d44-4855-b981-84b142f681d7
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: deployment-demo
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: deployment-demo
    spec:
      containers:
      - image: wangyanglinux/myapp:v1.0
        imagePullPolicy: IfNotPresent
        name: deployment-demo-container
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2024-12-20T12:48:58Z"
    lastUpdateTime: "2024-12-20T12:48:58Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-12-20T12:48:41Z"
    lastUpdateTime: "2024-12-20T12:48:58Z"
    message: ReplicaSet "deployment-demo-6465d4c5c9" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

# Increase the replica count
[root@k8s-master ~]# kubectl scale deployment deployment-demo --replicas=10   # set deployment-demo to 10 pod replicas
deployment.apps/deployment-demo scaled
[root@k8s-master ~]# kubectl get pod -o wide             # detailed pod info
NAME                               READY   STATUS   RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
deployment-demo-6995c75668-7jvfq   1/1     Running   0         66s    10.244.85.238   k8s-node01   <none>           <none>
deployment-demo-6995c75668-94fhp   1/1     Running   0         65s    10.244.85.239   k8s-node01   <none>           <none>
deployment-demo-6995c75668-9v2dx   1/1     Running   0         66s    10.244.58.231   k8s-node02   <none>           <none>
deployment-demo-6995c75668-cscv9   1/1     Running   0         2m3s   10.244.85.236   k8s-node01   <none>           <none>
deployment-demo-6995c75668-gr2l6   1/1     Running   0         65s    10.244.58.232   k8s-node02   <none>           <none>
deployment-demo-6995c75668-l67tk   1/1     Running   0         66s    10.244.58.229   k8s-node02   <none>           <none>
deployment-demo-6995c75668-m4hqc   1/1     Running   0         66s    10.244.85.240   k8s-node01   <none>           <none>
deployment-demo-6995c75668-qq587   1/1     Running   0         66s    10.244.58.228   k8s-node02   <none>           <none>
deployment-demo-6995c75668-s6wxf   1/1     Running   0         66s    10.244.58.230   k8s-node02   <none>           <none>
deployment-demo-6995c75668-vdfgg   1/1     Running   0         66s    10.244.85.237   k8s-node01   <none>           <none>
[root@k8s-master ~]# curl 10.244.85.239                  # curl a pod IP
www.xinxianghf.com | hello MyAPP | version v1.0

Version upgrade: change the image version to v2.0

[root@k8s-master ~]# sed -i 's/myapp:v1.0/myapp:v2.0/g' deployment.yml


[root@k8s-master ~]# kubectl apply -f deployment.yml     # update with apply
[root@k8s-master ~]# kubectl get pod -o wide
NAME                               READY   STATUS   RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
deployment-demo-6465d4c5c9-4j898   1/1     Running   0         2m22s   10.244.58.234   k8s-node02   <none>           <none>
deployment-demo-6465d4c5c9-4wccv   1/1     Running   0         87s     10.244.85.245   k8s-node01   <none>           <none>
deployment-demo-6465d4c5c9-592z4   1/1     Running   0         99s     10.244.58.236   k8s-node02   <none>           <none>
deployment-demo-6465d4c5c9-bzwxz   1/1     Running   0         2m25s   10.244.85.241   k8s-node01   <none>           <none>
deployment-demo-6465d4c5c9-gbnb8   1/1     Running   0         2m24s   10.244.58.233   k8s-node02   <none>           <none>
deployment-demo-6465d4c5c9-j5vh9   1/1     Running   0         2m21s   10.244.85.243   k8s-node01   <none>           <none>
deployment-demo-6465d4c5c9-jr4m8   1/1     Running   0         2m24s   10.244.85.242   k8s-node01   <none>           <none>
deployment-demo-6465d4c5c9-k7dm6   1/1     Running   0         91s     10.244.58.237   k8s-node02   <none>           <none>
deployment-demo-6465d4c5c9-zqb42   1/1     Running   0         104s    10.244.58.235   k8s-node02   <none>           <none>
deployment-demo-6465d4c5c9-zrhbs   1/1     Running   0         92s     10.244.85.244   k8s-node01   <none>           <none>
[root@k8s-master ~]# curl 10.244.58.234                  # updated to v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
The update also reveals a property of the process: a new-version pod is created and reaches Running before an old-version pod is deleted.
apply changes only the declared fields that differ from the running state and leaves everything else alone; e.g. after setting the deployment-demo replica count to 10 with scale, re-applying the manifest still leaves 10 pods.


Rebuilding the resource with replace

[root@k8s-master ~]# sed -i 's/myapp:v2.0/myapp:v3.0/g' deployment.yml   # bump the version to v3.0
[root@k8s-master ~]# kubectl replace -f deployment.yml   # rebuild the resource with replace
deployment.apps/deployment-demo replaced
[root@k8s-master ~]# kubectl get pod -o wide             # only 1 pod remains
NAME                               READY   STATUS    RESTARTS   AGE    IP              NODE         NOMINATED NODE   READINESS GATES
deployment-demo-7dbccb74d6-twd84   1/1     Running   0          111s   10.244.85.246   k8s-node01   <none>           <none>
[root@k8s-master ~]# kubectl get deployment              # the replica count is back to 1 as well
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
deployment-demo   1/1     1            1           40m
[root@k8s-master ~]# curl 10.244.85.246                  # the version is updated too
www.xinxianghf.com | hello MyAPP | version v3.0

Conclusion:
replace rebuilds the object entirely from deployment.yml, overwriting it completely rather than updating only the fields that differ from the previous version.
The replica count dropped to 1 because deployment.yml does not declare replicas, which defaults to 1, so the pods recreated via replace number 1 as well.

Use diff to compare the current deployment.yml manifest against the live, already-created object and see whether they differ

[root@k8s-master ~]# kubectl diff -f deployment.yml      # no output: the live resource matches the yml
[root@k8s-master ~]# sed -i 's/myapp:v3.0/myapp:v4.0/g' deployment.yml   # bump the image version
[root@k8s-master ~]# kubectl diff -f deployment.yml      # compare again
diff -u -N /tmp/LIVE-4200347792/apps.v1.Deployment.default.deployment-demo /tmp/MERGED-2776630918/apps.v1.Deployment.default.deployment-demo
--- /tmp/LIVE-4200347792/apps.v1.Deployment.default.deployment-demo 2024-12-20 21:34:15.119335901 +0800
+++ /tmp/MERGED-2776630918/apps.v1.Deployment.default.deployment-demo 2024-12-20 21:34:15.121335985 +0800
@@ -4,7 +4,7 @@
   annotations:
     deployment.kubernetes.io/revision: "6"
   creationTimestamp: "2024-12-20T12:48:41Z"
- generation: 9
+ generation: 10
   labels:
     app: deployment-demo
   name: deployment-demo
@@ -30,7 +30,7 @@
         app: deployment-demo
     spec:
       containers:
-      - image: wangyanglinux/myapp:v3.0 # the difference: the v3.0 line is removed
+      - image: wangyanglinux/myapp:v4.0 # and the v4.0 line is added
        imagePullPolicy: IfNotPresent
        name: deployment-demo-container
        resources: {}

tip: in production, diff can check whether a manifest file has already been applied: no output means it is applied; any output shows the parts that a re-apply would change.

3.1 Rolling Updates

How a deployment rolls its pods over during an update:
suppose an upgrade is needed while 3 v1 pods are running. The controller kills one pod and, by driving the ReplicaSet, produces a v2 pod; service access is never interrupted while all pods are gradually migrated to v2.
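
The rollout can be watched and controlled from the command line (a sketch using the deployment-demo name from earlier sections):

kubectl rollout status deployment/deployment-demo   # follow the rollout until it completes
kubectl rollout pause deployment/deployment-demo    # pause a rollout in flight
kubectl rollout resume deployment/deployment-demo   # resume a paused rollout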


3.2 Rollback

After a deployment's rolling upgrade completes, the old v1 ReplicaSet's manifest is not deleted; it remains stored in etcd.


A rollback gradually deletes the v2 ReplicaSet's pods and regenerates v1 pods from the retained v1 ReplicaSet.
A subsequent update to v2.1 works like the rolling upgrade above: a v2.1 ReplicaSet is created and its pods gradually replace the v1 pods.
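
A sketch of the corresponding rollback commands (the revision number is illustrative):

kubectl rollout history deployment/deployment-demo               # list the retained revisions
kubectl rollout undo deployment/deployment-demo                  # roll back to the previous revision
kubectl rollout undo deployment/deployment-demo --to-revision=1  # roll back to a specific revision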


3.3 Differences Between kubectl create, apply, and replace

create   is imperative: create the resource from the manifest
apply    is declarative: update whatever differs from the manifest
replace  is imperative: delete the existing resource and recreate it from the manifest

# Creating a resource with create
[root@k8s-master ~]# cat deployment-1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: deployment-demo
  name: deployment-demo
spec:
  selector:
    matchLabels:
      app: deployment-demo
  template:
    metadata:
      labels:
        app: deployment-demo
    spec:
      containers:
      - image: wangyanglinux/myapp:v1.0
        name: deployment-demo-container
[root@k8s-master ~]# kubectl create -f deployment-1.yml
deployment.apps/deployment-demo created
[root@k8s-master ~]# kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
deployment-demo-6995c75668-d7t5j   1/1     Running   0          29s   10.244.85.247   k8s-node01   <none>           <none>
[root@k8s-master ~]# curl 10.244.85.247
www.xinxianghf.com | hello MyAPP | version v1.0
## Edit the manifest
[root@k8s-master ~]# sed -i 's/myapp:v1.0/myapp:v2.0/' deployment-1.yml
[root@k8s-master ~]# kubectl create -f deployment-1.yml   # create cannot be used again on an existing resource
Error from server (AlreadyExists): error when creating "deployment-1.yml": deployments.apps "deployment-demo" already exists

# Creating a resource with apply
[root@k8s-master ~]# kubectl delete -f deployment-1.yml
deployment.apps "deployment-demo" deleted
[root@k8s-master ~]# kubectl get pod -o wide
No resources found in default namespace.
[root@k8s-master ~]# kubectl apply -f deployment-1.yml
deployment.apps/deployment-demo created
[root@k8s-master ~]# kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
deployment-demo-6465d4c5c9-625rx   1/1     Running   0          67s   10.244.85.248   k8s-node01   <none>           <none>
[root@k8s-master ~]# curl 10.244.85.248
www.xinxianghf.com | hello MyAPP | version v2.0
[root@k8s-master ~]# sed -i 's/myapp:v2.0/myapp:v3.0/' deployment-1.yml
[root@k8s-master ~]# kubectl apply -f deployment-1.yml   # updates the live resource where it differs from the manifest
deployment.apps/deployment-demo configured
[root@k8s-master ~]# kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
deployment-demo-7dbccb74d6-brnjv   1/1     Running   0          2m39s   10.244.58.240   k8s-node02   <none>           <none>
[root@k8s-master ~]# curl 10.244.58.240                  # updated from 2.0 to 3.0
www.xinxianghf.com | hello MyAPP | version v3.0

# Recreating a resource with replace
[root@k8s-master ~]# sed -i 's/myapp:v3.0/myapp:v2.0/' deployment-1.yml
[root@k8s-master ~]# kubectl replace -f deployment-1.yml   # deletes the old v3.0 pod outright and recreates a v2.0 pod
deployment.apps/deployment-demo replaced
[root@k8s-master ~]# kubectl get pod -o wide
NAME                               READY   STATUS              RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
deployment-demo-6465d4c5c9-gwntz   0/1     ContainerCreating   0          3s      <none>          k8s-node01   <none>           <none>
deployment-demo-7dbccb74d6-brnjv   1/1     Running             0          4m26s   10.244.58.240   k8s-node02   <none>           <none>
[root@k8s-master ~]# kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
deployment-demo-6465d4c5c9-gwntz   1/1     Running   0          25s   10.244.85.249   k8s-node01   <none>           <none>
[root@k8s-master ~]# curl 10.244.85.249
www.xinxianghf.com | hello MyAPP | version v2.0

3.4 Common Deployment Commands

# --record records the command, making it easy to see what changed in each revision
[root@k8s-master ~]# kubectl create -f deployment.yml --record

# scale: set the replica count of the deployment named nginx-deployment to 10
[root@k8s-master ~]# kubectl scale deployment nginx-deployment --replicas=10

# autoscale: scale pods automatically when CPU usage hits 80%. min guards against a post-trough traffic spike overwhelming the few remaining pods; max guards against attacks or buggy code endlessly scaling pods and wasting resources on non-real traffic
[root@k8s-master ~]# kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80

# set image: set the image of container deployment-demo-container in deployment deployment-demo to wangyanglinux/myapp:v1.0
[root@k8s-master ~]# kubectl set image deployment/deployment-demo deployment-demo-container=wangyanglinux/myapp:v1.0
deployment.apps/deployment-demo image updated


3.5 Rolling Update Experiment

[root@k8s-master ~]# cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: deployment-demo
  name: deployment-demo
spec:
  replicas: 5
  selector:
    matchLabels:
      app: deployment-demo
  template:
    metadata:
      labels:
        app: deployment-demo
    spec:
      containers:
      - image: wangyanglinux/myapp:v1.0
        name: deployment-demo-container
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment-demo created
[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS   RESTARTS   AGE
deployment-demo-6995c75668-25gxf   1/1     Running   0         20s
deployment-demo-6995c75668-2tlhq   1/1     Running   0         20s
deployment-demo-6995c75668-jsz5z   1/1     Running   0         20s
deployment-demo-6995c75668-kdrx8   1/1     Running   0         73s
deployment-demo-6995c75668-qn8jv   1/1     Running   0         20s
[root@k8s-master ~]# kubectl get pod --show-labels
NAME                               READY   STATUS   RESTARTS   AGE     LABELS
deployment-demo-6995c75668-25gxf   1/1     Running   0         113s    app=deployment-demo,pod-template-hash=6995c75668
deployment-demo-6995c75668-2tlhq   1/1     Running   0         113s    app=deployment-demo,pod-template-hash=6995c75668
deployment-demo-6995c75668-jsz5z   1/1     Running   0         113s    app=deployment-demo,pod-template-hash=6995c75668
deployment-demo-6995c75668-kdrx8   1/1     Running   0         2m46s   app=deployment-demo,pod-template-hash=6995c75668
deployment-demo-6995c75668-qn8jv   1/1     Running   0         113s    app=deployment-demo,pod-template-hash=6995c75668
[root@k8s-master ~]# kubectl create svc clusterip deployment-demo --tcp=80:80 # create a ClusterIP service that load-balances across pods carrying the deployment-demo label
service/deployment-demo created
[root@k8s-master ~]# kubectl get svc
NAME             TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
deployment-demo   ClusterIP   10.2.163.19   <none>        80/TCP   5s
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP   3d3h
[root@k8s-master ~]# curl 10.2.163.19                    # curl the load-balanced cluster IP
www.xinxianghf.com | hello MyAPP | version v1.0

# In a second terminal, poll the cluster IP in a loop
[root@k8s-master ~]# while true; do curl 10.2.163.19 && sleep 1 ; done
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0

# Back in the original terminal, update the pods' image version
[root@k8s-master ~]# kubectl set image deployment/deployment-demo deployment-demo-container=wangyanglinux/myapp:v2.0
deployment.apps/deployment-demo image updated
[root@k8s-master ~]# kubectl get pod -o wide
NAME                               READY   STATUS             RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
deployment-demo-6465d4c5c9-2xxnq   0/1     ContainerCreating   0         4s     <none>         k8s-node01   <none>           <none>
deployment-demo-6465d4c5c9-6gdff   1/1     Running             0         18s     10.244.85.254   k8s-node01   <none>           <none>
deployment-demo-6465d4c5c9-6zspk   1/1     Running             0         17s     10.244.85.255   k8s-node01   <none>           <none>
deployment-demo-6465d4c5c9-n4fb5   1/1     Running             0         17s     10.244.58.243   k8s-node02   <none>           <none>
deployment-demo-6465d4c5c9-w748p   0/1     ContainerCreating   0         6s     <none>         k8s-node02   <none>           <none>
deployment-demo-6995c75668-25gxf   1/1     Terminating         0         8m44s   10.244.58.242   k8s-node02   <none>           <none>
deployment-demo-6995c75668-2tlhq   1/1     Terminating         0         8m44s   10.244.85.252   k8s-node01   <none>           <none>
deployment-demo-6995c75668-kdrx8   1/1     Terminating         0         9m37s   10.244.85.251   k8s-node01   <none>           <none>
deployment-demo-6995c75668-qn8jv   1/1     Running             0         8m44s   10.244.58.241   k8s-node02   <none>           <none>
[root@k8s-master ~]# kubectl get pod -o wide
NAME                               READY   STATUS   RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
deployment-demo-6465d4c5c9-2xxnq   1/1     Running   0         27s   10.244.85.192   k8s-node01   <none>           <none>
deployment-demo-6465d4c5c9-6gdff   1/1     Running   0         41s   10.244.85.254   k8s-node01   <none>           <none>
deployment-demo-6465d4c5c9-6zspk   1/1     Running   0         40s   10.244.85.255   k8s-node01   <none>           <none>
deployment-demo-6465d4c5c9-n4fb5   1/1     Running   0         40s   10.244.58.243   k8s-node02   <none>           <none>
deployment-demo-6465d4c5c9-w748p   1/1     Running   0         29s   10.244.58.244   k8s-node02   <none>           <none>


# Back in the polling terminal: v1 responses gradually interleave with v2 until v2 takes over completely, which is the rolling upgrade in action
[root@k8s-master ~]# while true; do curl 10.2.163.19 && sleep 1 ; done
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0

3.6 Deployment: Canary Release

# Continuing from the 3.5 rolling-update experiment: first delete the pods created by the deployment but keep the svc
[root@k8s-master ~]# kubectl delete -f deployment.yml
[root@k8s-master ~]# kubectl get pod
No resources found in default namespace.
[root@k8s-master ~]# kubectl get svc
NAME             TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
deployment-demo   ClusterIP   10.2.163.19   <none>        80/TCP   20h
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP   3d23h
[root@k8s-master ~]# cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: deployment-demo
  name: deployment-demo
spec:
  replicas: 10 # 10 replicas for this demo
  selector:
    matchLabels:
      app: deployment-demo
  template:
    metadata:
      labels:
        app: deployment-demo
    spec:
      containers:
      - image: wangyanglinux/myapp:v1.0
        name: deployment-demo-container

[root@k8s-master ~]# kubectl apply -f deployment.yml     # create the deployment
deployment.apps/deployment-demo created
[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
deployment-demo-6995c75668-crh9g   1/1     Running   0          80s
deployment-demo-6995c75668-fk2fj   1/1     Running   0          80s
deployment-demo-6995c75668-glzph   1/1     Running   0          80s
deployment-demo-6995c75668-ln92q   1/1     Running   0          81s
deployment-demo-6995c75668-lq76f   1/1     Running   0          81s
deployment-demo-6995c75668-m4nmn   1/1     Running   0          80s
deployment-demo-6995c75668-q5969   1/1     Running   0          80s
deployment-demo-6995c75668-rfpl5   1/1     Running   0          81s
deployment-demo-6995c75668-th6jv   1/1     Running   0          80s
deployment-demo-6995c75668-z2tl5   1/1     Running   0          80s
[root@k8s-master ~]# kubectl get svc
NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
deployment-demo   ClusterIP   10.2.163.19   <none>        80/TCP    20h
kubernetes        ClusterIP   10.0.0.1      <none>        443/TCP   4d
[root@k8s-master ~]# kubectl get deployment deployment-demo -o yaml   # dump the deployment-demo object as a manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"deployment-demo"},"name":"deployment-demo","namespace":"default"},"spec":{"replicas":10,"selector":{"matchLabels":{"app":"deployment-demo"}},"template":{"metadata":{"labels":{"app":"deployment-demo"}},"spec":{"containers":[{"image":"wangyanglinux/myapp:v1.0","name":"deployment-demo-container"}]}}}}
  creationTimestamp: "2024-12-21T11:13:06Z"
  generation: 1
  labels:
    app: deployment-demo
  name: deployment-demo
  namespace: default
  resourceVersion: "516213"
  uid: 6a0e236a-c726-46cd-a5f0-4a6411d51297
spec:
  progressDeadlineSeconds: 600
  replicas: 10
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: deployment-demo
  strategy:
    rollingUpdate: # update mode: rolling update
      maxSurge: 25% # how many pods may be created beyond the replica count during a rollout; here 25% of the total
      maxUnavailable: 25% # how many pods may be unavailable during a rollout; here 25% of the total
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: deployment-demo
    spec:
      containers:
      - image: wangyanglinux/myapp:v1.0
        imagePullPolicy: IfNotPresent
        name: deployment-demo-container
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 10
  conditions:
  - lastTransitionTime: "2024-12-21T11:14:01Z"
    lastUpdateTime: "2024-12-21T11:14:01Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2024-12-21T11:13:06Z"
    lastUpdateTime: "2024-12-21T11:14:06Z"
    message: ReplicaSet "deployment-demo-6995c75668" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 10
  replicas: 10
  updatedReplicas: 10


# Patch the deployment-demo rolling-update strategy: at most 1 new-version pod beyond the replica count, 0 unavailable pods
[root@k8s-master ~]# kubectl patch deployment deployment-demo -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
deployment.apps/deployment-demo patched
[root@k8s-master ~]# kubectl get deployment deployment-demo -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
  deployment.kubernetes.io/revision: "1"
  kubectl.kubernetes.io/last-applied-configuration: |
    {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"deployment-demo"},"name":"deployment-demo","namespace":"default"},"spec":{"replicas":10,"selector":{"matchLabels":{"app":"deployment-demo"}},"template":{"metadata":{"labels":{"app":"deployment-demo"}},"spec":{"containers":[{"image":"wangyanglinux/myapp:v1.0","name":"deployment-demo-container"}]}}}}
creationTimestamp: "2024-12-21T11:13:06Z"
generation: 2
labels:
  app: deployment-demo
name: deployment-demo
namespace: default
resourceVersion: "517361"
uid: 6a0e236a-c726-46cd-a5f0-4a6411d51297
spec:
progressDeadlineSeconds: 600
replicas: 10
revisionHistoryLimit: 10
selector:
  matchLabels:
    app: deployment-demo
strategy:
  rollingUpdate:
    maxSurge: 1 # at most 1 pod above the desired count, so the rollout briefly runs 11 pods
    maxUnavailable: 0 # 0 pods may be unavailable: an old pod is deleted only after its replacement is reachable, instead of the default 25% being removed at once
  type: RollingUpdate
template:
  metadata:
    creationTimestamp: null
    labels:
      app: deployment-demo
  spec:
    containers:
     - image: wangyanglinux/myapp:v1.0
      imagePullPolicy: IfNotPresent
      name: deployment-demo-container
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
    dnsPolicy: ClusterFirst
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    terminationGracePeriodSeconds: 30
status:
availableReplicas: 10
conditions:
 - lastTransitionTime: "2024-12-21T11:14:01Z"
  lastUpdateTime: "2024-12-21T11:14:01Z"
  message: Deployment has minimum availability.
  reason: MinimumReplicasAvailable
  status: "True"
  type: Available
 - lastTransitionTime: "2024-12-21T11:13:06Z"
  lastUpdateTime: "2024-12-21T11:14:06Z"
  message: ReplicaSet "deployment-demo-6995c75668" has successfully progressed.
  reason: NewReplicaSetAvailable
  status: "True"
  type: Progressing
observedGeneration: 2
readyReplicas: 10
replicas: 10
updatedReplicas: 10
# kubectl patch deployment deployment-demo -p '{"spec":{"template":{"spec":{"containers":[{"name":"deployment-demo-container","image":"wangyanglinux/myapp:v2.0"}]}}}}' # trigger a rolling update by changing the image to v2.0
# kubectl rollout pause deployment deployment-demo  # pause the rolling update of the deployment-demo Deployment
[root@k8s-master ~]# kubectl patch deployment deployment-demo -p '{"spec":{"template":{"spec":{"containers":[{"name":"deployment-demo-container","image":"wangyanglinux/myapp:v2.0"}]}}}}' && kubectl rollout pause deployment deployment-demo
deployment.apps/deployment-demo patched
deployment.apps/deployment-demo paused
[root@k8s-master ~]# kubectl get pod # the rollout created exactly the number of new pods defined above: 1
NAME                               READY   STATUS             RESTARTS   AGE
deployment-demo-6465d4c5c9-jsnsl   0/1     ContainerCreating   0         7s # this is the newly created pod
deployment-demo-6995c75668-crh9g   1/1     Running             0         22m
deployment-demo-6995c75668-fk2fj   1/1     Running             0         22m
deployment-demo-6995c75668-glzph   1/1     Running             0         22m
deployment-demo-6995c75668-ln92q   1/1     Running             0         22m
deployment-demo-6995c75668-lq76f   1/1     Running             0         22m
deployment-demo-6995c75668-m4nmn   1/1     Running             0         22m
deployment-demo-6995c75668-q5969   1/1     Running             0         22m
deployment-demo-6995c75668-rfpl5   1/1     Running             0         22m
deployment-demo-6995c75668-th6jv   1/1     Running             0         22m
deployment-demo-6995c75668-z2tl5   1/1     Running             0         22m  
[root@k8s-master ~]# kubectl get deploy # the Deployment now runs 11 pods in total, 1 of them up-to-date
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
deployment-demo   11/10   1            11         25m
[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS   RESTARTS   AGE
deployment-demo-6465d4c5c9-jsnsl   1/1     Running   0         2m25s
deployment-demo-6995c75668-crh9g   1/1     Running   0         25m
deployment-demo-6995c75668-fk2fj   1/1     Running   0         25m
deployment-demo-6995c75668-glzph   1/1     Running   0         25m
deployment-demo-6995c75668-ln92q   1/1     Running   0         25m
deployment-demo-6995c75668-lq76f   1/1     Running   0         25m
deployment-demo-6995c75668-m4nmn   1/1     Running   0         25m
deployment-demo-6995c75668-q5969   1/1     Running   0         25m
deployment-demo-6995c75668-rfpl5   1/1     Running   0         25m
deployment-demo-6995c75668-th6jv   1/1     Running   0         25m
deployment-demo-6995c75668-z2tl5   1/1     Running   0         25m

# Open a new terminal and test the Service: requests are load-balanced across all 11 pods
[root@k8s-master ~]# while true ; do curl 10.2.163.19 && sleep 1 ; done
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0

[root@k8s-master ~]# kubectl rollout resume deployment deployment-demo # resume the rolling update
deployment.apps/deployment-demo resumed
[root@k8s-master ~]# kubectl get pod # check again: with the new settings at least 10 pods stay Running, and one new pod is created while one old pod terminates
NAME                               READY   STATUS       RESTARTS   AGE
deployment-demo-6465d4c5c9-jsnsl   1/1     Running       0         8m
deployment-demo-6465d4c5c9-sr8p7   0/1     Pending       0         2s
deployment-demo-6995c75668-crh9g   1/1     Running       0         30m
deployment-demo-6995c75668-fk2fj   1/1     Running       0         30m
deployment-demo-6995c75668-glzph   1/1     Running       0         30m
deployment-demo-6995c75668-ln92q   1/1     Terminating   0         30m
deployment-demo-6995c75668-lq76f   1/1     Running       0         30m
deployment-demo-6995c75668-m4nmn   1/1     Running       0         30m
deployment-demo-6995c75668-q5969   1/1     Running       0         30m
deployment-demo-6995c75668-rfpl5   1/1     Running       0         30m
deployment-demo-6995c75668-th6jv   1/1     Running       0         30m
deployment-demo-6995c75668-z2tl5   1/1     Running       0         30m
[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS             RESTARTS   AGE
deployment-demo-6465d4c5c9-jsnsl   1/1     Running             0         8m4s
deployment-demo-6465d4c5c9-sr8p7   0/1     ContainerCreating   0         6s
deployment-demo-6995c75668-crh9g   1/1     Running             0         30m
deployment-demo-6995c75668-fk2fj   1/1     Running             0         30m
deployment-demo-6995c75668-glzph   1/1     Running             0         30m
deployment-demo-6995c75668-ln92q   1/1     Terminating         0         30m
deployment-demo-6995c75668-lq76f   1/1     Running             0         30m
deployment-demo-6995c75668-m4nmn   1/1     Running             0         30m
deployment-demo-6995c75668-q5969   1/1     Running             0         30m
deployment-demo-6995c75668-rfpl5   1/1     Running             0         30m
deployment-demo-6995c75668-th6jv   1/1     Running             0         30m
deployment-demo-6995c75668-z2tl5   1/1     Running             0         30m
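
Rather than polling kubectl get pod, the rollout can be watched directly; kubectl rollout status blocks until the rollout completes or fails (a supplementary command, not part of the original capture):

kubectl rollout status deployment deployment-demo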

# Rolling downgrade, i.e. rolling back to the previous version
[root@k8s-master ~]# kubectl rollout undo deployment deployment-demo # roll back
deployment.apps/deployment-demo rolled back
[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS             RESTARTS   AGE
deployment-demo-6465d4c5c9-bxmwn   1/1     Running             0         8m52s
deployment-demo-6465d4c5c9-dt6xg   1/1     Running             0         8m17s
deployment-demo-6465d4c5c9-hx252   1/1     Running             0         10m
deployment-demo-6465d4c5c9-jsnsl   1/1     Running             0         18m
deployment-demo-6465d4c5c9-l4kwg   1/1     Running             0         9m42s
deployment-demo-6465d4c5c9-mhm6s   1/1     Running             0         10m
deployment-demo-6465d4c5c9-nzwgb   1/1     Running             0         9m8s
deployment-demo-6465d4c5c9-rn8w2   1/1     Running             0         8m35s
deployment-demo-6465d4c5c9-sr8p7   1/1     Running             0         10m
deployment-demo-6465d4c5c9-tgtqh   1/1     Running             0         9m25s
deployment-demo-6995c75668-zhb75   0/1     ContainerCreating   0         9s # a pod rolled back to v1 is being created

# Verify which version the pods serve
[root@k8s-master ~]# while true ; do curl 10.2.163.19 && sleep 1 ; done
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v2.0
www.xinxianghf.com | hello MyAPP | version v2.0

# After a moment, confirm the rollback finished and all pods are back in the Running state
[root@k8s-master ~]# kubectl get pod  
NAME                               READY   STATUS   RESTARTS   AGE
deployment-demo-6995c75668-25glq   1/1     Running   0         2m22s
deployment-demo-6995c75668-2tfx5   1/1     Running   0         105s
deployment-demo-6995c75668-62m2m   1/1     Running   0         73s
deployment-demo-6995c75668-bmz4m   1/1     Running   0         2m4s
deployment-demo-6995c75668-mcnq5   1/1     Running   0         41s
deployment-demo-6995c75668-tjm65   1/1     Running   0         57s
deployment-demo-6995c75668-v9h8m   1/1     Running   0         90s
deployment-demo-6995c75668-vggmm   1/1     Running   0         2m39s
deployment-demo-6995c75668-x4h8p   1/1     Running   0         23s
deployment-demo-6995c75668-zhb75   1/1     Running   0         2m50s
# Verify that after the rollback every response is v1
[root@k8s-master ~]# while true ; do curl 10.2.163.19 && sleep 1 ; done  
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0
www.xinxianghf.com | hello MyAPP | version v1.0

# Deployments implement rolling upgrades and rollbacks by managing ReplicaSets; the Deployment has created 2 ReplicaSets here
[root@k8s-master ~]# kubectl get rs
NAME                         DESIRED   CURRENT   READY   AGE
deployment-demo-6465d4c5c9   0         0         0       23m # from the steps above, the RS with 0 pods is v2
deployment-demo-6995c75668   10        10        10     46m # from the steps above, the RS with 10 pods is v1
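
Beyond undoing to the immediately previous version, `kubectl rollout undo` can target any retained revision; a quick sketch (the revision numbers are illustrative and depend on your own history):

kubectl rollout history deployment deployment-demo                 # list revisions
kubectl rollout history deployment deployment-demo --revision=1    # details of revision 1
kubectl rollout undo deployment deployment-demo --to-revision=1    # roll back to revision 1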
In production, keep the rolling-update step at or below 25%: do not create more new pods at once than 25% of the current replica count, or the workload may become unstable.
Limiting the number of unavailable pod replicas to 0 avoids deleting large batches of pods at once, which would drop many live user connections and hurt the user experience.
Tune how many pods a rolling update may create, and how many may be unavailable, to match your actual production environment.
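
As a minimal sketch, the conservative settings recommended above look like this inside a Deployment manifest (standard apps/v1 fields):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%      # never create more than 25% extra pods
      maxUnavailable: 0  # never remove a ready pod before its replacement is ready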

3.7 Cleanup Policy

After a Deployment is created from a manifest, every version update or rollback creates a ReplicaSet, and these ReplicaSets are stored in the etcd database
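
The retained revisions can be inspected directly; each one corresponds to a ReplicaSet kept in etcd (by default revisionHistoryLimit is 10, as the dumps above show):

kubectl rollout history deployment deployment-demo   # one entry per retained revision
kubectl get rs -l app=deployment-demo                # one ReplicaSet per retained revision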
# The sections above mostly upgraded and rolled back by changing the image via commands; here the upgrade is done by editing the manifest file
[root@k8s-master ~]# cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: deployment-demo
  name: deployment-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: deployment-demo
  template:
    metadata:
      labels:
        app: deployment-demo
    spec:
      containers:
      - image: wangyanglinux/myapp:v1.0
        name: deployment-demo-container
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment-demo created
[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS   RESTARTS   AGE
deployment-demo-6995c75668-58mrm   1/1     Running   0         13s
deployment-demo-6995c75668-pqzzb   1/1     Running   0         12s
[root@k8s-master ~]# kubectl get rs # the ReplicaSet generated by the Deployment
NAME                         DESIRED   CURRENT   READY   AGE
deployment-demo-6995c75668   2         2         2       65s

# Change the image version
[root@k8s-master ~]# sed -i 's/v1.0/v2.0/' deployment.yml # upgrade the image to v2.0
[root@k8s-master ~]# cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: deployment-demo
  name: deployment-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: deployment-demo
  template:
    metadata:
      labels:
        app: deployment-demo
    spec:
      containers:
      - image: wangyanglinux/myapp:v2.0 # already changed to v2.0
        name: deployment-demo-container
       
# Apply the upgrade and inspect the version history
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment-demo configured
[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS   RESTARTS   AGE
deployment-demo-6465d4c5c9-2lnlc   1/1     Running   0         22s
deployment-demo-6465d4c5c9-688kr   1/1     Running   0         15s
[root@k8s-master ~]# kubectl get rs
NAME                         DESIRED   CURRENT   READY   AGE
deployment-demo-6465d4c5c9   2         2         2       108s # this is the v2.0 RS
deployment-demo-6995c75668   0         0         0       5m49s # this is the v1.0 RS, i.e. the old version
[root@k8s-master ~]# kubectl get svc
NAME             TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
deployment-demo   ClusterIP   10.2.163.19   <none>        80/TCP   3d12h
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP   6d16h
[root@k8s-master ~]# curl 10.2.163.19 # verify access
www.xinxianghf.com | hello MyAPP | version v2.0


# Keep no version history in the etcd database
# First delete all existing Deployments
[root@k8s-master ~]# kubectl delete deployment --all
deployment.apps "deployment-demo" deleted
[root@k8s-master ~]# kubectl get rs
No resources found in default namespace.
[root@k8s-master ~]# kubectl get pod
No resources found in default namespace.
# revisionHistoryLimit: 0 keeps no history
[root@k8s-master ~]# cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: deployment-demo
  name: deployment-demo
spec:
  revisionHistoryLimit: 0 # added to the Deployment spec
  replicas: 2
  selector:
    matchLabels:
      app: deployment-demo
  template:
    metadata:
      labels:
        app: deployment-demo
    spec:
      containers:
      - image: wangyanglinux/myapp:v2.0
        name: deployment-demo-container
[root@k8s-master ~]# kubectl apply -f deployment.yml # create the Deployment that keeps no history
deployment.apps/deployment-demo created
[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS   RESTARTS   AGE
deployment-demo-6465d4c5c9-8zghd   1/1     Running   0         25s
deployment-demo-6465d4c5c9-qw7x6   1/1     Running   0         25s
[root@k8s-master ~]# sed -i 's/v2.0/v3.0/' deployment.yml # change the version
[root@k8s-master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment-demo configured
[root@k8s-master ~]# kubectl get pod
NAME                               READY   STATUS   RESTARTS   AGE
deployment-demo-7dbccb74d6-trbws   1/1     Running   0         62s
deployment-demo-7dbccb74d6-xshg9   1/1     Running   0         54s
[root@k8s-master ~]# curl 10.2.163.19
www.xinxianghf.com | hello MyAPP | version v3.0
[root@k8s-master ~]# kubectl get rs # no historical ReplicaSet is kept
NAME                         DESIRED   CURRENT   READY   AGE
deployment-demo-7dbccb74d6   2         2         2       19s
# With this approach, rollbacks and upgrades are driven entirely by the manifest, and old ReplicaSets are not kept in etcd; note that `kubectl rollout undo` then has no revision to fall back to, so before each change, back up the original manifest and record what changed
# Delete the objects created above so they do not interfere with the next section
[root@k8s-master ~]# kubectl delete deployment --all
deployment.apps "deployment-demo" deleted
[root@k8s-master ~]# kubectl get svc
NAME             TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
deployment-demo   ClusterIP   10.2.163.19   <none>        80/TCP   3d12h
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP   6d16h
[root@k8s-master ~]# kubectl delete svc deployment-demo
service "deployment-demo" deleted

四、DaemonSet Controller

# Create the DaemonSet manifest file
[root@k8s-master ~]# cat > Daemonset.yml <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-demo
  labels:
    app: daemonset-demo
spec:
  selector:
    matchLabels:
      name: daemonset-demo
  template:
    metadata:
      labels:
        name: daemonset-demo
    spec:
      containers:
      - name: daemonset-demo-container
        image: wangyanglinux/myapp:v1.0
EOF
[root@k8s-master ~]# kubectl apply -f Daemonset.yml
daemonset.apps/daemonset-demo created
[root@k8s-master ~]# kubectl get pod -o wide # one pod each on node01 and node02, even though the manifest specifies no replica count
NAME                   READY   STATUS   RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
daemonset-demo-pc97q   1/1     Running   0         12s   10.244.58.196   k8s-node02   <none>           <none>
daemonset-demo-t8blw   1/1     Running   0         12s   10.244.85.221   k8s-node01   <none>           <none>
[root@k8s-master ~]# kubectl delete daemonset daemonset-demo # clean up
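
Note that no pod landed on the master: the control-plane taint keeps DaemonSet pods off it by default. As a hedged sketch (using the standard taint key on recent kubeadm clusters), adding a toleration to the pod template would make the DaemonSet cover the master as well:

    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane  # standard control-plane taint key
        operator: Exists
        effect: NoSchedule
      containers:
      - name: daemonset-demo-container
        image: wangyanglinux/myapp:v1.0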

七、Service Controller

Change the kube-proxy mode

[root@k8s-master ~]# kubectl  edit configmap  kube-proxy -n kube-system

# Set mode: "ipvs"

[root@k8s-master ~]# kubectl get pod -l k8s-app=kube-proxy -A   # list the pods carrying the kube-proxy label
NAMESPACE     NAME               READY   STATUS   RESTARTS     AGE
kube-system   kube-proxy-86hs7   1/1     Running   3 (27h ago)   6d22h
kube-system   kube-proxy-mhzph   1/1     Running   1 (27h ago)   6d22h
kube-system   kube-proxy-vjfsw   1/1     Running   6 (27h ago)   6d22h

# After the change, delete the pods; they are rebuilt automatically with the new mode
[root@k8s-master ~]# kubectl delete pod -n kube-system -l k8s-app=kube-proxy
pod "kube-proxy-86hs7" deleted
pod "kube-proxy-mhzph" deleted
pod "kube-proxy-vjfsw" deleted

# Confirm the rebuild succeeded: Running means success
[root@k8s-master ~]# kubectl get pod -l k8s-app=kube-proxy -A
NAMESPACE     NAME               READY   STATUS   RESTARTS   AGE
kube-system   kube-proxy-c6fht   1/1     Running   0         2m49s
kube-system   kube-proxy-gs7gw   1/1     Running   0         2m48s
kube-system   kube-proxy-jcs4b   1/1     Running   0         2m49s

image-20241224172158226
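
To double-check that the mode really switched, kube-proxy reports its active mode over its metrics port (10249 by default); shown as a supplementary check, not part of the original capture:

curl 127.0.0.1:10249/proxyMode   # should print: ipvs
ipvsadm -Ln                      # the IPVS rule table should now be populated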

        # readiness probe
        readinessProbe:
          httpGet:
            port: 80 # probe port
            path: /index1.html # the pod is Ready only if this file exists; otherwise NotReady
          initialDelaySeconds: 1
          periodSeconds: 3

-------------
apiVersion: v1
kind: Service
metadata:
  name: myapp-clusterip
  namespace: default
spec:
  type: ClusterIP # ClusterIP type
  selector:
    app: myapp
    release: stabel
    svc: clusterip
  ports:
  - name: http
    port: 80 # port the Service load-balances on
    targetPort: 80 # backend container port

7.1 ClusterIP Type

[root@k8s-master ~]# cat > myapp-clusterip-deploy.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-clusterip-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: stabel
      svc: clusterip
  template:
    metadata:
      labels:
        app: myapp
        release: stabel
        env: test
        svc: clusterip
    spec:
      containers:
      - name: myapp-container
        image: wangyanglinux/myapp:v1.0
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        readinessProbe:
          httpGet:
            port: 80
            path: /index1.html
          initialDelaySeconds: 1
          periodSeconds: 3
EOF
[root@k8s-master ~]# kubectl apply -f myapp-clusterip-deploy.yaml
deployment.apps/myapp-clusterip-deploy created
[root@k8s-master ~]# kubectl get pod # all pods show as NotReady (the probe file does not exist yet)
NAME                                     READY   STATUS   RESTARTS   AGE
myapp-clusterip-deploy-5c9cc9b64-2ssbv   0/1     Running   0         41s
myapp-clusterip-deploy-5c9cc9b64-pf6b7   0/1     Running   0         41s
myapp-clusterip-deploy-5c9cc9b64-pg4ww   0/1     Running   0         41s

# Create the Service
[root@k8s-master ~]# cat > myapp-service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: myapp-clusterip
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: myapp
    release: stabel
    svc: clusterip
  ports:
  - name: http
    port: 80
    targetPort: 80
EOF
[root@k8s-master ~]# kubectl apply -f myapp-service.yaml
service/myapp-clusterip created
[root@k8s-master ~]# kubectl get svc
NAME             TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP   6d21h
myapp-clusterip   ClusterIP   10.0.192.93   <none>        80/TCP   7s
[root@k8s-master ~]# curl 10.0.192.93 # the Service IP refuses the connection
curl: (7) Failed to connect to 10.0.192.93 port 80: Connection refused

# Compare the pod labels with the Service selector: the selector labels are a subset of each pod's labels
[root@k8s-master ~]# kubectl get pod --show-labels
NAME                                     READY   STATUS   RESTARTS   AGE     LABELS
myapp-clusterip-deploy-5c9cc9b64-2ssbv   0/1     Running   0         8m24s   app=myapp,env=test,pod-template-hash=5c9cc9b64,release=stabel,svc=clusterip
myapp-clusterip-deploy-5c9cc9b64-pf6b7   0/1     Running   0         8m24s   app=myapp,env=test,pod-template-hash=5c9cc9b64,release=stabel,svc=clusterip
myapp-clusterip-deploy-5c9cc9b64-pg4ww   0/1     Running   0         8m24s   app=myapp,env=test,pod-template-hash=5c9cc9b64,release=stabel,svc=clusterip
[root@k8s-master ~]# kubectl get svc -o wide
NAME             TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE     SELECTOR
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP   6d21h   <none>
myapp-clusterip   ClusterIP   10.0.192.93   <none>        80/TCP   3m20s   app=myapp,release=stabel,svc=clusterip

# Create the readiness probe file
[root@k8s-master ~]# kubectl exec -it myapp-clusterip-deploy-5c9cc9b64-2ssbv -- /bin/bash # exec into the pod
myapp-clusterip-deploy-5c9cc9b64-2ssbv:/# cd /usr/local/nginx/html/
# Redirect the current date into index1.html
myapp-clusterip-deploy-5c9cc9b64-2ssbv:/usr/local/nginx/html# date > index1.html
[root@k8s-master ~]# kubectl get pod # pod myapp-clusterip-deploy-5c9cc9b64-2ssbv is now Ready
NAME                                     READY   STATUS   RESTARTS   AGE
myapp-clusterip-deploy-5c9cc9b64-2ssbv   1/1     Running   0         11m
myapp-clusterip-deploy-5c9cc9b64-pf6b7   0/1     Running   0         11m
myapp-clusterip-deploy-5c9cc9b64-pg4ww   0/1     Running   0         11m
[root@k8s-master ~]# curl 10.0.192.93 # the Service IP is now reachable
www.xinxianghf.com | hello MyAPP | version v1.0
[root@k8s-master ~]# curl 10.0.192.93/hostname.html # this page prints the pod name; only the ready pod is reachable
myapp-clusterip-deploy-5c9cc9b64-2ssbv
# Create the readiness file in a second pod
[root@k8s-master ~]# kubectl exec -it myapp-clusterip-deploy-5c9cc9b64-pf6b7 -- /bin/bash
myapp-clusterip-deploy-5c9cc9b64-pf6b7:/# date > /usr/local/nginx/html/index1.html
myapp-clusterip-deploy-5c9cc9b64-pf6b7:/# exit
exit
[root@k8s-master ~]# kubectl get pod # two pods are now Ready
NAME                                     READY   STATUS   RESTARTS   AGE
myapp-clusterip-deploy-5c9cc9b64-2ssbv   1/1     Running   0         19m
myapp-clusterip-deploy-5c9cc9b64-pf6b7   1/1     Running   0         19m
myapp-clusterip-deploy-5c9cc9b64-pg4ww   0/1     Running   0         19m
[root@k8s-master ~]# curl 10.0.192.93/hostname.html
myapp-clusterip-deploy-5c9cc9b64-pf6b7
[root@k8s-master ~]# curl 10.0.192.93/hostname.html
myapp-clusterip-deploy-5c9cc9b64-2ssbv

# This confirms the two conditions a Service needs before it load-balances to a pod:
# 1. the Service selector must be a subset of (or equal to) the pod's labels
# 2. the pod must be in the Ready state before the Service forwards traffic to it
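
Both conditions can be verified through the Endpoints object that the Service maintains: only the IPs of ready, label-matched pods are listed there (supplementary check):

kubectl get endpoints myapp-clusterip
kubectl describe svc myapp-clusterip | grep Endpoints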

# Inspect the IPVS forwarding rules
[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
 -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.1:443 rr
 -> 192.168.68.160:6443         Masq    1      0          0
TCP  10.0.0.10:53 rr
 -> 10.244.58.204:53             Masq    1      0          0
 -> 10.244.58.209:53             Masq    1      0          0
TCP  10.0.0.10:9153 rr
 -> 10.244.58.204:9153           Masq    1      0          0
 -> 10.244.58.209:9153           Masq    1      0          0
TCP  10.0.192.93:80 rr # this is the rule for the Service we created; rr = round-robin scheduling
 -> 10.244.58.197:80             Masq    1      0          0 # backend pod
 -> 10.244.85.224:80             Masq    1      0          0 # backend pod
TCP  10.6.2.65:5473 rr
 -> 192.168.68.161:5473         Masq    1      0          0
UDP  10.0.0.10:53 rr
 -> 10.244.58.204:53             Masq    1      0          0
 -> 10.244.58.209:53             Masq    1      0          0
[root@k8s-master ~]# kubectl get svc
NAME             TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP   6d22h
myapp-clusterip   ClusterIP   10.0.192.93   <none>        80/TCP   71m
[root@k8s-master ~]# kubectl get pod -o wide
NAME                                     READY   STATUS   RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
myapp-clusterip-deploy-5c9cc9b64-2ssbv   1/1     Running   0         77m   10.244.58.197   k8s-node02   <none>           <none>
myapp-clusterip-deploy-5c9cc9b64-pf6b7   1/1     Running   0         77m   10.244.85.224   k8s-node01   <none>           <none>
myapp-clusterip-deploy-5c9cc9b64-pg4ww   0/1     Running   0         77m   10.244.85.223   k8s-node01   <none>           <none>
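
If pure round-robin is not desired, for example when a client should keep hitting the same backend pod, a ClusterIP Service also supports client-IP session affinity; a minimal sketch using the standard v1 Service fields:

spec:
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800 # stickiness window; 3 hours is the default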

Controller Summary

Controllers
Pod controllers
RC controller
Keeps the current number of Pods equal to the desired count

RS controller
Same job as the RC controller, plus set-based label selector operations; Kubernetes officially recommends RS over RC

Deployment controller
Supports declarative configuration
Supports rolling updates and rollbacks
How it works: Deployment > RS > Pod

DaemonSet controller
Guarantees exactly one pod per node, enforced dynamically

kubectl create / apply / replace
create
Creates a resource object
Creates from a manifest file, but if the object described by the file already exists, re-submitting the file after changing it is not applied
apply
Creates or modifies a resource object
Creates from a manifest file; if the file differs from the running object, only the changed fields in the file are patched onto the object (partial update)
replace
Creates or modifies a resource object
Creates from a manifest file; if the file differs from the running object, the object is rebuilt (full replacement)


svc (Service)
A Service selects a pod for load balancing only if: 1. the pod is in the Ready state; 2. the Service selector is a subset of the pod's labels (within the same namespace)
