Deploying Kubernetes 1.30 with kubeadm

Previous version of this post: https://egonlin.com/?p=6618

I. About the Kubernetes package yum repositories

II. Preparation

0. Prepare three machines

Each machine needs at least 2 GB of RAM.

1. Set hostnames and name resolution (all three nodes)

# 1. Set the hostname (run the matching command on each node)
hostnamectl set-hostname k8s-master-01
hostnamectl set-hostname k8s-node-01
hostnamectl set-hostname k8s-node-02

# 2. Add host entries on all three machines
cat >> /etc/hosts << "EOF"
192.168.71.12 k8s-master-01 m1
192.168.71.13 k8s-node-01 n1
192.168.71.14 k8s-node-02 n2
EOF

2. Disable interfering services (all three nodes)

# 1. Disable SELinux
sed -i 's#enforcing#disabled#g' /etc/selinux/config
setenforce 0

# 2. Disable the firewall, NetworkManager, and postfix
systemctl disable --now firewalld NetworkManager postfix

# 3. Turn off swap
swapoff -a

# Remove the swap entry from /etc/fstab (keeping a backup)
cp /etc/fstab /etc/fstab_bak
sed -i '/swap/d' /etc/fstab

3. Tune the sshd service

# 1. Speed up SSH connections
sed -ri 's@^#UseDNS yes@UseDNS no@g' /etc/ssh/sshd_config 
sed -ri 's#^GSSAPIAuthentication yes#GSSAPIAuthentication no#g' /etc/ssh/sshd_config 
grep ^UseDNS /etc/ssh/sshd_config 
grep ^GSSAPIAuthentication /etc/ssh/sshd_config
systemctl restart sshd

# 2. Key-based login (master node only): makes the remote copies later on more convenient
ssh-keygen
ssh-copy-id -i root@k8s-master-01
ssh-copy-id -i root@k8s-node-01
ssh-copy-id -i root@k8s-node-02

4. Raise the open-file limit (takes effect after logging out and back in)

cat > /etc/security/limits.d/k8s.conf <<'EOF' 
* soft nofile 65535 
* hard nofile 131070 
EOF 

ulimit -Sn 
ulimit -Hn

5. Configure automatic module loading on all nodes. If you skip this step, kubeadm init will fail outright!

modprobe br_netfilter
modprobe ip_conntrack
cat >> /etc/rc.sysinit << 'EOF'   # quote EOF so $file below is written literally, not expanded now
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
EOF
echo "modprobe br_netfilter" >/etc/sysconfig/modules/br_netfilter.modules
echo "modprobe ip_conntrack" >/etc/sysconfig/modules/ip_conntrack.modules
chmod 755 /etc/sysconfig/modules/br_netfilter.modules
chmod 755 /etc/sysconfig/modules/ip_conntrack.modules
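The /etc/rc.sysinit hook above is a legacy SysV-style mechanism. On a systemd-based CentOS 7 the same effect can be had with modules-load.d, which systemd-modules-load.service reads at every boot; this alternative is a sketch of my own (the file name k8s-base.conf is an arbitrary choice, not from the original setup):

```shell
# Sketch of a systemd-native alternative: one module name per line,
# loaded at boot by systemd-modules-load.service
cat > /etc/modules-load.d/k8s-base.conf << 'EOF'
br_netfilter
ip_conntrack
EOF
cat /etc/modules-load.d/k8s-base.conf
```

Either mechanism works; pick one so the modules survive a reboot.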
lsmod | grep br_netfilter

6. Synchronize cluster time

# =====================> chrony server side: you can build your own time server or use a public one directly, so deploying the server side is up to you
# 1. Install
yum -y install chrony

# 2. Replace the configuration file
mv /etc/chrony.conf /etc/chrony.conf.bak

cat > /etc/chrony.conf << EOF
server ntp1.aliyun.com iburst minpoll 4 maxpoll 10
server ntp2.aliyun.com iburst minpoll 4 maxpoll 10
server ntp3.aliyun.com iburst minpoll 4 maxpoll 10
server ntp4.aliyun.com iburst minpoll 4 maxpoll 10
server ntp5.aliyun.com iburst minpoll 4 maxpoll 10
server ntp6.aliyun.com iburst minpoll 4 maxpoll 10
server ntp7.aliyun.com iburst minpoll 4 maxpoll 10
driftfile /var/lib/chrony/drift
makestep 10 3
rtcsync
allow 0.0.0.0/0
local stratum 10
keyfile /etc/chrony.keys
logdir /var/log/chrony
stratumweight 0.05
noclientlog
logchange 0.5

EOF
# 3. Start the chronyd service
systemctl restart chronyd.service # restart is safest: the config is reloaded whether or not the service was already running
systemctl enable chronyd.service
systemctl status chronyd.service

# =====================> chrony client side: install on every machine that needs to sync time from outside; once started it syncs with the server you point it at
# The steps below can be pasted as one block on each client
# 1. Install chrony
yum -y install chrony
# 2. Modify the client configuration file
mv /etc/chrony.conf /etc/chrony.conf.bak
cat > /etc/chrony.conf << EOF
server <server IP or resolvable hostname> iburst
driftfile /var/lib/chrony/drift
makestep 10 3
rtcsync
local stratum 10
keyfile /etc/chrony.keys
logdir /var/log/chrony
stratumweight 0.05
noclientlog
logchange 0.5

EOF
# 3. Start chronyd
systemctl restart chronyd.service
systemctl enable chronyd.service
systemctl status chronyd.service

# 4. Verify
chronyc sources -v

7. Refresh the base yum repositories (all three machines)

# 1. Clean up
rm -rf /etc/yum.repos.d/*
yum remove epel-release -y
rm -rf /var/cache/yum/x86_64/6/epel/

# 2. Install the Aliyun base and epel repos
curl -s -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo 
curl -s -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all && yum makecache

# Huawei's mirrors also work
# curl -o /etc/yum.repos.d/CentOS-Base.repo https://repo.huaweicloud.com/repository/conf/CentOS-7-reg.repo 
# yum install -y https://repo.huaweicloud.com/epel/epel-release-latest-7.noarch.rpm

8. Update system packages (excluding the kernel)

yum update -y --exclude=kernel*

9. Install common base tools

yum -y install expect wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git ntpdate chrony bind-utils rsync unzip

10. Upgrade the kernel (docker is demanding about the kernel version; 4.4+ is recommended). Download on the master node:

wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-5.4.274-1.el7.elrepo.x86_64.rpm
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-devel-5.4.274-1.el7.elrepo.x86_64.rpm

for i in n1 n2 m1 ; do scp kernel-lt-* $i:/opt; done


Note: if the downloads are slow, grab the files from the network drive instead
Link: https://pan.baidu.com/s/1gVyeBQsJPZjc336E8zGjyQ
Extraction code: Egon

Run on all three nodes:

# Install
yum localinstall -y /opt/kernel-lt*

# Make the new kernel the default boot entry
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg 

# Check the current default kernel
grubby --default-kernel

# Reboot
reboot

11. Install IPVS on all three nodes

# 1. Install ipvsadm and related tools
yum -y install ipvsadm ipset sysstat conntrack libseccomp 

# 2. Configure module loading
cat > /etc/sysconfig/modules/ipvs.modules <<"EOF" 
#!/bin/bash 
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack" 

for kernel_module in ${ipvs_modules}; 
do 
	/sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1 
	if [ $? -eq 0 ]; then 
		/sbin/modprobe ${kernel_module} 
	fi 
done 
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

12. Tune kernel parameters on all three machines

cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.netfilter.nf_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

# Apply immediately
sysctl --system

III. Install containerd (required on all three nodes)

Since Kubernetes 1.24, K8S no longer supports docker natively (the dockershim was removed).

containerd originally came out of docker, which later donated it to the Cloud Native Computing Foundation.

Installing docker installs containerd along with it.

# 0. The libseccomp that yum provides on CentOS 7 is version 2.3, which does not meet
# the needs of recent containerd. Upgrade libseccomp to 2.4 or later before installing
# containerd; version 2.5.1 is deployed here.
rpm -e libseccomp-2.3.1-4.el7.x86_64 --nodeps
# The rpmfind mirror is no longer updated; fetch the rpm from the Aliyun mirror instead
wget https://mirrors.aliyun.com/centos/8/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm
rpm -ivh libseccomp-2.5.1-1.el8.x86_64.rpm
rpm -qa | grep libseccomp


# 1. Remove any previous installations
yum remove docker docker-ce containerd docker-common docker-selinux docker-engine -y 

# 2. Install docker prerequisites
yum install -y yum-utils device-mapper-persistent-data lvm2 

# 3. Add the docker yum repo
wget -O /etc/yum.repos.d/docker-ce.repo https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo

# 4. Install docker-ce and containerd
yum install docker-ce -y # installs containerd as well

# 5. Generate containerd's default configuration file
containerd config default > /etc/containerd/config.toml

# 6. Replace the default pause image address
grep sandbox_image /etc/containerd/config.toml
sed -i 's/registry.k8s.io/registry.cn-beijing.aliyuncs.com\/abcdocker/' /etc/containerd/config.toml 
grep sandbox_image /etc/containerd/config.toml

# 7. Use systemd as the container cgroup driver
grep SystemdCgroup /etc/containerd/config.toml
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/' /etc/containerd/config.toml
grep SystemdCgroup /etc/containerd/config.toml

# 8. Enable containerd at boot
# 8.1 Start the containerd service and enable it at boot
systemctl enable --now containerd

# 8.2 Check containerd's status
systemctl status containerd

# 8.3 Check the containerd version
ctr version
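Once the cluster tooling is in place, crictl is the standard CLI for inspecting containers through the CRI. Pointing it at containerd's socket saves typing endpoint flags every time; a sketch of my own, assuming containerd's default socket path (/run/containerd/containerd.sock):

```shell
# Point crictl at containerd's default CRI socket so `crictl ps` / `crictl images`
# work without --runtime-endpoint flags (socket path is containerd's default)
cat > /etc/crictl.yaml << 'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF
```

Afterwards commands like crictl ps and crictl images talk to containerd directly.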
------------------------- docker configuration (you do NOT need to do this part: k8s 1.30 talks to containerd directly)
# 1. Configure docker
# Keep the cgroup driver consistent with kubelet, otherwise kubelet will fail to start later
cat > /etc/docker/daemon.json << "EOF"
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors":["https://reg-mirror.qiniu.com/"]
}
EOF

# 2. Restart docker
systemctl restart docker.service
systemctl enable docker.service

# 3. Verify
[root@k8s-master-01 ~]# docker info |grep -i cgroup
Cgroup Driver: systemd
Cgroup Version: 1

IV. Install Kubernetes

Docs: https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/

1. Add the Kubernetes yum repo on all three machines

cat > /etc/yum.repos.d/kubernetes.repo << "EOF" 
[kubernetes] 
name=Kubernetes 
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/ 
enabled=1 
gpgcheck=1 
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/repodata/repomd.xml.key 
EOF

# Reference: https://developer.aliyun.com/mirror/kubernetes/
setenforce 0
yum install -y kubelet-1.30* kubeadm-1.30* kubectl-1.30*
systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet

2. Master-node steps (not executed on the worker nodes)

Initialize the master node (run only on the master):
# the required images can be listed with kubeadm config images list
[root@k8s-master-01 ~]# kubeadm config images list
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.12-0

 

Deployment method 1: generate a config file, edit it, then deploy from it (recommended: advanced settings can only be given through a config file, e.g. running kube-proxy in ipvs mode; a plain kubeadm init cannot express them)

Deployment method 2: type the kubeadm init command directly on the command line (the proxy mode cannot be chosen this way; you get the default iptables mode)
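A sketch of method 1. The advertise address 192.168.71.12 and pod CIDR 10.244.0.0/16 are assumptions taken from elsewhere in this article (the flannel config below uses the same CIDR), and the Aliyun image repository is an assumption as well; adjust all three to your environment. The file uses kubeadm's v1beta3 config API, which is current for kubeadm 1.30:

```shell
# Method 1 (sketch): a kubeadm config file lets us set what the CLI cannot,
# e.g. running kube-proxy in ipvs mode
cat > kubeadm.yaml << 'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.71.12               # master IP (assumption: from this article)
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0
imageRepository: registry.aliyuncs.com/google_containers   # pull control-plane images from Aliyun
networking:
  podSubnet: 10.244.0.0/16                      # must match flannel's "Network" below
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs                                      # the setting method 2 cannot express
EOF
```

Then run kubeadm init --config kubeadm.yaml on the master. Method 2 would look roughly like kubeadm init --apiserver-advertise-address=192.168.71.12 --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers, which leaves kube-proxy in iptables mode.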

Result

...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.71.12:6443 --token 9hovhy.vxm1l7zs16zr53ve \
	--discovery-token-ca-cert-hash sha256:3b210d53b7f26a43ccf251cfb9f809f280048ab70bf5c1458c69586ed0eb9905

 

Check the nodes: NotReady at first is normal, because the network plugin has not been deployed yet

[root@k8s-master-01 ~]# kubectl get nodes
NAME            STATUS     ROLES           AGE     VERSION
k8s-master-01   NotReady   control-plane   4m26s   v1.30.0

[root@k8s-master-01 ~]# kubectl -n kube-system get pods
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-7c445c467-mfls7                 0/1     Pending   0          6m30s
coredns-7c445c467-zvkkw                 0/1     Pending   0          6m30s
etcd-k8s-master-01                      1/1     Running   0          6m44s
kube-apiserver-k8s-master-01            1/1     Running   0          6m44s
kube-controller-manager-k8s-master-01   1/1     Running   0          6m44s
kube-proxy-jhxrd                        1/1     Running   0          109s
kube-proxy-nh7tj                        1/1     Running   0          33s
kube-proxy-q92mx                        1/1     Running   0          6m30s
kube-scheduler-k8s-master-01            1/1     Running   0          6m44s

3. Join the worker nodes

Run on the other two (worker) nodes. Note that the token is only valid for 24 hours; if it has expired, run kubeadm token create --print-join-command on the master to get a fresh join command.

kubeadm join 192.168.71.12:6443 --token 9hovhy.vxm1l7zs16zr53ve \
	--discovery-token-ca-cert-hash sha256:3b210d53b7f26a43ccf251cfb9f809f280048ab70bf5c1458c69586ed0eb9905

4. Deploy the network plugin

Download the flannel manifest

wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

[root@master01 flannel]# vim kube-flannel.yml 

apiVersion: v1
data:
  ...
  net-conf.json: |
    {
      "Network": "10.244.0.0/16", # must match the --pod-network-cidr given to kubeadm init
      "Backend": {
        "Type": "vxlan"
      }
    }

Deploy (on the master only)

kubectl apply -f kube-flannel.yml
kubectl -n kube-flannel get pods
kubectl -n kube-flannel get pods -w
[root@k8s-master-01 ~]# kubectl get nodes # all nodes Ready
[root@k8s-master-01 ~]# kubectl -n kube-system get pods # both coredns pods are Ready as well

5. Set up kubectl command completion (run on all nodes)

yum install bash-completion* -y

mkdir -p ~/.kube   # on worker nodes this directory may not exist yet
kubectl completion bash > ~/.kube/completion.bash.inc
echo "source '$HOME/.kube/completion.bash.inc'" >> $HOME/.bash_profile
source $HOME/.bash_profile
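On top of completion, a common optional convenience (my addition, not part of the original steps): alias kubectl to k and let tab completion follow the alias. __start_kubectl is the entry function that kubectl completion bash defines:

```shell
# Optional: alias kubectl to k; completion follows the alias via __start_kubectl,
# the function registered by `kubectl completion bash`
cat >> $HOME/.bash_profile << 'EOF'
alias k=kubectl
complete -o default -F __start_kubectl k
EOF
tail -n 2 $HOME/.bash_profile
```

Log out and back in (or source ~/.bash_profile) to activate it.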

6. Miscellaneous

 
