Kubernetes Cluster - Harbor

Server inventory

Hostname            IP          hosts entries                               Specs   OS
kubernetes-master   10.10.10.6  kubernetes-master, kubernetes-master.com    2c2g    CentOS 7.9
kubernetes-node1    10.10.10.7  kubernetes-node1, kubernetes-node1.com      2c2g    CentOS 7.9
kubernetes-node2    10.10.10.8  kubernetes-node2, kubernetes-node2.com      2c2g    CentOS 7.9
kubernetes-harbor   10.10.10.9  kubernetes-harbor, kubernetes-harbor.com    2c4g    CentOS 7.9

Host environment setup

1. Configure passwordless SSH login

Generate an SSH key pair

[root@localhost ~]# ssh-keygen

Copy the public key to the other servers

[root@localhost ~]# for i in 7 8 9 
> do
> ssh-copy-id root@10.10.10.$i
> done

Type yes and enter the password when prompted.
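If you prefer a fully non-interactive run, the prompts can be answered automatically with sshpass; a minimal sketch, assuming sshpass is available (e.g. from EPEL) and PASSWORD is replaced with the real root password:

[root@localhost ~]# yum -y install epel-release sshpass
[root@localhost ~]# for i in 7 8 9
do
sshpass -p 'PASSWORD' ssh-copy-id -o StrictHostKeyChecking=no root@10.10.10.$i
done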

2. Configure the firewall and swap

Disable the firewall on the local machine

[root@localhost ~]# systemctl  disable  --now firewalld

Disable the firewall on all other servers

[root@localhost ~]#  for i in  7 8 9
do
ssh root@10.10.10.$i "systemctl disable firewalld && systemctl stop firewalld "
done

Disable SELinux on all servers

[root@localhost ~]# setenforce 0
[root@localhost ~]# for i in 7 8 9
do
ssh root@10.10.10.$i "setenforce 0"
done

This file must be edited manually:

[root@localhost ~]# vim /etc/selinux/config   # set SELINUX=disabled

Copy it to the other servers

[root@localhost ~]# for i in 7 8 9
do
scp /etc/selinux/config root@10.10.10.$i:/etc/selinux/config
done

Disable swap

[root@localhost ~]# swapoff -a   # temporary, until reboot
## To disable permanently, comment out the swap line in /etc/fstab:
#/dev/mapper/centos-swap swap swap defaults 0 0

Disable swap on the node servers. The harbor server can keep swap enabled; it does not join the cluster and only serves as the image registry.

[root@kubernetes-master ~]# for i in  7  8
do
ssh root@10.10.10.$i "swapoff -a"
done

Then, on each node, comment out the swap entry in /etc/fstab as well.
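A quick way to do this on the nodes is a sed one-liner; a minimal sketch, assuming the default CentOS /etc/fstab layout:

[root@kubernetes-master ~]# for i in 7 8
do
ssh root@10.10.10.$i "sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab && free -m | grep -i swap"
done
# a Swap line showing 0 total confirms swap is fully off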

3. Configure local name resolution

[root@localhost ~]# vim /etc/hosts
10.10.10.6 kubernetes-master kubernetes-master.com
10.10.10.7 kubernetes-node1 kubernetes-node1.com
10.10.10.8 kubernetes-node2 kubernetes-node2.com
10.10.10.9 kubernetes-harbor kubernetes-harbor.com

Copy it to the other servers

[root@localhost ~]# for i in 7 8 9 
do
scp /etc/hosts root@10.10.10.$i:/etc/hosts
done

4. Set the hostnames

# master node
hostnamectl set-hostname kubernetes-master
# node1
hostnamectl set-hostname kubernetes-node1
# node2
hostnamectl set-hostname kubernetes-node2
# harbor node
hostnamectl set-hostname kubernetes-harbor

# Reconnect the SSH session so the new hostname takes effect

5. Enable IP forwarding and bridge filtering

## Create the sysctl configuration file
[root@kubernetes-master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
# Copy to the node servers
[root@kubernetes-master ~]# for i in 7 8
do
scp /etc/sysctl.d/k8s.conf root@10.10.10.$i:/etc/sysctl.d/k8s.conf
done

Run the following on every node except the harbor server:

## Load the br_netfilter module and check that it is loaded
[root@kubernetes-master ~]# modprobe br_netfilter && lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 1 br_netfilter

## Apply the bridge-filter and kernel forwarding settings
[root@kubernetes-master ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
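
Note that modprobe does not persist across reboots. A small sketch to load br_netfilter automatically at boot, using the standard systemd modules-load.d mechanism, and to copy it to the nodes:

[root@kubernetes-master ~]# cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
[root@kubernetes-master ~]# for i in 7 8
do
scp /etc/modules-load.d/k8s.conf root@10.10.10.$i:/etc/modules-load.d/k8s.conf
done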

6. Install Docker

# Install on every server
[root@kubernetes-master ~]# curl https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker.repo
[root@kubernetes-master ~]# yum -y install docker-ce

Configure Docker registry mirrors and the cgroup driver

[root@kubernetes-master ~]# cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": [
    "https://docker.mirrors.ustc.edu.cn",
    "https://docker.m.daocloud.io",
    "http://hub-mirrors.c.163.com"
  ],
  "insecure-registries": ["kubernetes-harbor.com"],
  "data-root": "/var/lib/docker",
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Note: "insecure-registries": ["kubernetes-harbor.com"] is the address of the self-hosted Harbor registry.
"data-root": "/var/lib/docker" is Docker's data directory.
"exec-opts": ["native.cgroupdriver=systemd"] means kubelet will use systemd as its underlying cgroup driver.
# Copy daemon.json to the other servers
[root@kubernetes-master ~]# for i in 7 8 9
do
scp /etc/docker/daemon.json root@10.10.10.$i:/etc/docker/daemon.json
done

Start Docker on all nodes and enable it at boot

[root@kubernetes-master ~]# systemctl daemon-reload && systemctl start docker && systemctl enable docker
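
After the restart it is worth confirming that daemon.json was picked up; the expected output (trimmed) looks like this:

[root@kubernetes-master ~]# docker info | grep -i 'cgroup driver'
 Cgroup Driver: systemd
[root@kubernetes-master ~]# docker info | grep -A1 -i 'insecure registries'
 Insecure Registries:
  kubernetes-harbor.com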

7. Install cri-dockerd

## Download the release tarball
[root@kubernetes-master ~]# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4.amd64.tgz
## Extract it
[root@kubernetes-master ~]# tar zxvf cri-dockerd-0.3.4.amd64.tgz
## Copy the binary into place
[root@kubernetes-master ~]# cp cri-dockerd/* /usr/bin/
## Verify
[root@kubernetes-master ~]# cri-dockerd --version
cri-dockerd 0.3.4 (e88b1605)
## Copy to the node servers
[root@kubernetes-master ~]# for i in 7 8
do
scp cri-dockerd/* root@10.10.10.$i:/usr/bin
done

Create the systemd service and socket units

## Manage cri-dockerd with systemd (quote 'EOF' so $MAINPID is not expanded by the shell)
[root@kubernetes-master ~]# cat > /etc/systemd/system/cri-docker.service << 'EOF'
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9 --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --docker-endpoint=unix:///var/run/docker.sock --cri-dockerd-root-directory=/var/lib/docker
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
[root@kubernetes-master ~]# cat > /etc/systemd/system/cri-docker.socket << EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=/var/run/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF
## Copy to the node servers
[root@kubernetes-master ~]# for i in 7 8
do
scp /etc/systemd/system/cri-docker* root@10.10.10.$i:/etc/systemd/system/
done
## Start cri-docker on the master and node servers and enable it at boot
[root@kubernetes-master ~]# systemctl daemon-reload && systemctl start cri-docker && systemctl enable cri-docker
# Check that it is running
[root@kubernetes-master ~]# systemctl status cri-docker
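
A quick sanity check that the CRI endpoint is actually listening; the socket path must match the --cri-socket value passed to kubeadm later:

[root@kubernetes-master ~]# systemctl is-active cri-docker
active
[root@kubernetes-master ~]# ls /var/run/cri-dockerd.sock
/var/run/cri-dockerd.sock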

8. Prepare the Harbor registry

## Install docker-compose
[root@kubernetes-harbor ~]# yum -y install epel-release
[root@kubernetes-harbor ~]# yum -y install docker-compose
[root@kubernetes-harbor ~]# mkdir /home/harbor/data -p && cd /home/harbor
[root@kubernetes-harbor harbor]# wget https://github.com/goharbor/harbor/releases/download/v2.7.3/harbor-offline-installer-v2.7.3.tgz
[root@kubernetes-harbor harbor]# tar xvf harbor-offline-installer-v2.7.3.tgz
[root@kubernetes-harbor harbor]# cd harbor
[root@kubernetes-harbor harbor]# cp harbor.yml.tmpl harbor.yml
[root@kubernetes-harbor harbor]# vim harbor.yml

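The harbor.yml screenshots are not reproduced here. Based on the rest of this walkthrough (plain-HTTP access via kubernetes-harbor.com, admin password 123456, data under /home/harbor/data), the edited file should contain values roughly like the following; this is an assumption from context, and the https: block is taken to be commented out since no TLS certificate is configured:

[root@kubernetes-harbor harbor]# grep -E '^(hostname|harbor_admin_password|data_volume)|^\s+port:' harbor.yml
hostname: kubernetes-harbor.com
  port: 80
harbor_admin_password: 123456
data_volume: /home/harbor/data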

## Install Harbor
[root@kubernetes-harbor harbor]# ./install.sh
## Then browse to the Harbor host's IP address
## and log in with user admin, password 123456

Create a new user and log in with it.

Create a new project to hold the images needed by Kubernetes.

Test the Harbor registry

## Log in to the registry
[root@kubernetes-harbor harbor]# docker login kubernetes-harbor.com -u wanwan -p Jianren123
## Pull a test image
[root@kubernetes-harbor harbor]# docker pull busybox
## Retag it for the local registry
[root@kubernetes-harbor harbor]# docker tag busybox:latest kubernetes-harbor.com/kubernetes/busybox:v0.1
## Push it to the registry
[root@kubernetes-harbor harbor]# docker push kubernetes-harbor.com/kubernetes/busybox:v0.1
## The Harbor web UI now shows the pushed image
# Pull test from the nodes
[root@kubernetes-node1 ~]# docker pull kubernetes-harbor.com/kubernetes/busybox:v0.1
[root@kubernetes-node2 ~]# docker pull kubernetes-harbor.com/kubernetes/busybox:v0.1


Cluster initialization

1. Install the Kubernetes packages

## Add the Kubernetes yum repository (required on every cluster node)
[root@kubernetes-master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
## Refresh the repo metadata
[root@kubernetes-master ~]# yum makecache fast
## Install kubelet, kubeadm and kubectl
[root@kubernetes-master ~]# yum -y install kubelet-1.27.4-0 kubeadm-1.27.4-0 kubectl-1.27.4-0 --disableexcludes=kubernetes
## Start kubelet and enable it at boot
[root@kubernetes-master ~]# systemctl start kubelet
[root@kubernetes-master ~]# systemctl enable kubelet
# On the master node
## Check the kubeadm version
[root@kubernetes-master ~]# kubeadm version
## List the images required for this release
[root@kubernetes-master ~]# kubeadm config images list

2. Prepare the images

## Script to pull the required images and push them to Harbor
[root@kubernetes-master ~]# vim images.sh
#!/bin/bash
docker login -u wanwan -p Jianren123 kubernetes-harbor.com
images=$(kubeadm config images list --kubernetes-version=1.27.4 |awk -F '/' '{print $NF}')
for i in ${images}
do
docker pull registry.aliyuncs.com/google_containers/$i
docker tag registry.aliyuncs.com/google_containers/$i kubernetes-harbor.com/google_containers/$i
docker push kubernetes-harbor.com/google_containers/$i
docker rmi registry.aliyuncs.com/google_containers/$i
done

## Create a public project named google_containers in Harbor first

## Run the script to pull the images and push them to the local registry
[root@kubernetes-master ~]# bash images.sh

3. Initialize the cluster with kubeadm

[root@kubernetes-master ~]# kubeadm  init \
--kubernetes-version=1.27.4 \
--image-repository=kubernetes-harbor.com/google_containers \
--pod-network-cidr=10.10.20.0/24 \
--cri-socket=unix:///var/run/cri-dockerd.sock
[root@kubernetes-master ~]# mkdir -p $HOME/.kube
[root@kubernetes-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kubernetes-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Join the cluster

# Run on each node server
kubeadm join 10.10.10.6:6443 \
--token li4ogb.a9i89lmyi5xzbget \
--discovery-token-ca-cert-hash sha256:8dfcc831ccaa0bb849733e07ebff2693dba70a10046eab8f078d0fc039be6e67 \
--cri-socket unix:///var/run/cri-dockerd.sock
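
The token above comes from this cluster's kubeadm init output and expires after 24 hours by default. If the join command is lost, a new one can be printed on the master; note that the --cri-socket flag still has to be appended manually:

[root@kubernetes-master ~]# kubeadm token create --print-join-command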

5. Check the cluster status

[root@kubernetes-master ~]# kubectl get nodes
NAME                STATUS     ROLES           AGE     VERSION
kubernetes-master   NotReady   control-plane   5m29s   v1.27.4
kubernetes-node1    NotReady   <none>          4m6s    v1.27.4
kubernetes-node2    NotReady   <none>          3m58s   v1.27.4

The nodes stay NotReady until a network plugin has been deployed (step 7).

6. Finishing touches

# Configure command completion
[root@kubernetes-master ~]# vim .bashrc
source <(kubectl completion bash)
source <(kubeadm completion bash)
[root@kubernetes-master ~]# source .bashrc

# kubectl and kubeadm now support tab completion
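
If completion does not work, the completion scripts depend on the bash-completion package, which a minimal CentOS install may lack; a hedged fix:

[root@kubernetes-master ~]# yum -y install bash-completion
[root@kubernetes-master ~]# source /usr/share/bash-completion/bash_completion && source .bashrc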

7. Deploy the network plugin

Pick one of the two options below; Calico is the recommended plugin.

Option 1: flannel

[root@kubernetes-master ~]# mkdir /home/kubernetes/network/flannel -p && cd /home/kubernetes/network/flannel
[root@kubernetes-master flannel]# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# Check which images the manifest references
[root@kubernetes-master flannel]# grep image: kube-flannel.yml
image: docker.io/flannel/flannel:v0.22.3
image: docker.io/flannel/flannel-cni-plugin:v1.2.0
image: docker.io/flannel/flannel:v0.22.3
# Pull the images
[root@kubernetes-master flannel]# docker pull docker.io/flannel/flannel:v0.22.3
[root@kubernetes-master flannel]# docker pull docker.io/flannel/flannel-cni-plugin:v1.2.0
# Retag them for the Harbor registry
[root@kubernetes-master flannel]# docker tag flannel/flannel:v0.22.3 kubernetes-harbor.com/google_containers/flannel:v0.22.3
[root@kubernetes-master flannel]# docker tag flannel/flannel-cni-plugin:v1.2.0 kubernetes-harbor.com/google_containers/flannel-cni-plugin:v1.2.0
# Push them to Harbor
[root@kubernetes-master flannel]# docker push kubernetes-harbor.com/google_containers/flannel:v0.22.3
[root@kubernetes-master flannel]# docker push kubernetes-harbor.com/google_containers/flannel-cni-plugin:v1.2.0

Edit the three image: references in kube-flannel.yml so that they point at the Harbor registry.

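This can be done by hand in vim, or with a one-line sed; a sketch, assuming the images were pushed under the google_containers project as above:

[root@kubernetes-master flannel]# sed -i 's#docker.io/flannel/#kubernetes-harbor.com/google_containers/#g' kube-flannel.yml
[root@kubernetes-master flannel]# grep image: kube-flannel.yml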

[root@kubernetes-master flannel]# kubectl apply -f kube-flannel.yml

Option 2: Calico

Calico documentation: https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart#install-calico

# Download the two manifests (tigera-operator.yaml and custom-resources.yaml) from the page above and upload them to the master
[root@kubernetes-master flannel]# kubectl create -f tigera-operator.yaml
[root@kubernetes-master flannel]# vim custom-resources.yaml
## the cidr in custom-resources.yaml must match the pod network passed to kubeadm init (10.10.20.0/24)
[root@kubernetes-master flannel]# kubectl apply -f custom-resources.yaml

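Whichever option was chosen, the plugin pods take a minute or two to become Ready; they can be watched before re-checking the nodes (namespace names as created by the flannel manifest and the Tigera operator, respectively):

[root@kubernetes-master ~]# kubectl get pods -n kube-flannel     # if flannel was chosen
[root@kubernetes-master ~]# kubectl get pods -n calico-system    # if Calico was chosen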

Verify cluster availability

[root@kubernetes-master ~]# kubectl get node
NAME                STATUS   ROLES           AGE   VERSION
kubernetes-master   Ready    control-plane   92m   v1.27.4
kubernetes-node1    Ready    <none>          90m   v1.27.4
kubernetes-node2    Ready    <none>          89m   v1.27.4

Check the health of the control-plane components; the ideal state looks like this:

[root@kubernetes-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy

Check that the Kubernetes system pods are all running:

[root@kubernetes-master ~]# kubectl get pod -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-7bdc4cb885-bx4cq                    1/1     Running   0          93m
coredns-7bdc4cb885-th6ft                    1/1     Running   0          93m
etcd-kubernetes-master                      1/1     Running   0          93m
kube-apiserver-kubernetes-master            1/1     Running   0          93m
kube-controller-manager-kubernetes-master   1/1     Running   0          93m
kube-proxy-btqbd                            1/1     Running   0          91m
kube-proxy-j4qxl                            1/1     Running   0          90m
kube-proxy-qhx4j                            1/1     Running   0          93m
kube-scheduler-kubernetes-master            1/1     Running   0          93m
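
As a final end-to-end check, a pod can be scheduled from the image pushed to Harbor earlier, which exercises both the cluster and the registry; a hedged example (the pod name is arbitrary):

[root@kubernetes-master ~]# kubectl run busybox-test --image=kubernetes-harbor.com/kubernetes/busybox:v0.1 --restart=Never -- sleep 3600
[root@kubernetes-master ~]# kubectl get pod busybox-test -o wide
[root@kubernetes-master ~]# kubectl delete pod busybox-test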
