<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <author>
    <name>starttech</name>
  </author>
  <generator uri="https://hexo.io/">Hexo</generator>
  <id>https://jsonhc.github.io/</id>
  <link href="https://jsonhc.github.io/" rel="alternate"/>
  <link href="https://jsonhc.github.io/atom.xml" rel="self"/>
  <rights>All rights reserved 2026, starttech</rights>
  <subtitle>Growing together</subtitle>
  <title>starttech's Personal Blog</title>
  <updated>2026-03-01T13:49:11.709Z</updated>
  <entry>
    <author>
      <name>starttech</name>
    </author>
    <category term="随笔" scheme="https://jsonhc.github.io/categories/%E9%9A%8F%E7%AC%94/"/>
    <category term="随笔" scheme="https://jsonhc.github.io/tags/%E9%9A%8F%E7%AC%94/"/>
    <content>
      <![CDATA[<hr /><p>This article walks through installing a highly available Kubernetes 1.35 cluster on CentOS Stream 9.</p><hr /><p>Node specifications:</p><table><thead><tr><th>Hostname</th><th>IP</th><th>Memory</th><th>Cores</th><th>OS</th></tr></thead><tbody><tr><td>master01</td><td>192.168.213.41</td><td>4 GB</td><td>2</td><td>CentOS Stream 9</td></tr><tr><td>master02</td><td>192.168.213.42</td><td>4 GB</td><td>2</td><td>CentOS Stream 9</td></tr><tr><td>master03</td><td>192.168.213.43</td><td>4 GB</td><td>2</td><td>CentOS Stream 9</td></tr><tr><td>worker01</td><td>192.168.213.44</td><td>10 GB</td><td>4</td><td>CentOS Stream 9</td></tr><tr><td>worker02</td><td>192.168.213.45</td><td>10 GB</td><td>4</td><td>CentOS Stream 9</td></tr></tbody></table><p>High-availability VIP: 192.168.213.40</p><p>This article uses nginx + keepalived for high availability; other schemes such as haproxy + keepalived would work just as well, but are not covered here.</p><h2 id="开始配置操作"><a class="markdownIt-Anchor" href="#开始配置操作"></a> Configuration steps</h2><h3 id="配置aliyun源所有节点都需要"><a class="markdownIt-Anchor" href="#配置aliyun源所有节点都需要"></a> Configure the Aliyun mirror (all nodes)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">tee</span> /etc/yum.repos.d/centos.repo &gt; /dev/null &lt;&lt; <span class="string">&#x27;EOF&#x27;</span></span><br><span class="line">[baseos]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - BaseOS - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/BaseOS/x86_64/os/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=1</span><br><span class="line">[baseos-debuginfo]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - BaseOS Debuginfo - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/BaseOS/x86_64/debug/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span 
class="line">enabled=0</span><br><span class="line">[baseos-source]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - BaseOS Source - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/BaseOS/source/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line">[appstream]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - AppStream - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/AppStream/x86_64/os/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=1</span><br><span class="line">[appstream-debuginfo]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - AppStream Debuginfo - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/AppStream/x86_64/debug/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line">[appstream-source]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - AppStream Source - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/AppStream/source/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><h3 id="配置主机名"><a class="markdownIt-Anchor" href="#配置主机名"></a> Set the hostnames</h3><figure class="highlight 
bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# hostnamectl set-hostname master01</span><br><span class="line">[root@master01 ~]# hostname</span><br><span class="line">master01</span><br><span class="line">[root@master02 ~]# hostname</span><br><span class="line">master02</span><br><span class="line">[root@master03 ~]# hostname</span><br><span class="line">master03</span><br><span class="line">[root@worker01 ~]# hostname</span><br><span class="line">worker01</span><br><span class="line">[root@worker02 ~]# hostname</span><br><span class="line">worker02</span><br></pre></td></tr></table></figure><h3 id="配置节点ip"><a class="markdownIt-Anchor" href="#配置节点ip"></a> Configure node IP addresses</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# nmcli connection modify <span class="string">&quot;ens160&quot;</span> ipv4.method manual ipv4.addresses 192.168.213.41/24 ipv4.gateway 192.168.213.2 ipv4.dns <span class="string">&quot;192.168.213.2&quot;</span></span><br><span class="line">[root@master01 ~]# nmcli connection up <span class="string">&quot;ens160&quot;</span></span><br><span class="line">Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master02 ~]# nmcli connection modify <span class="string">&quot;ens160&quot;</span> ipv4.method manual ipv4.addresses 192.168.213.42/24 ipv4.gateway 192.168.213.2 ipv4.dns <span class="string">&quot;192.168.213.2&quot;</span></span><br><span class="line">[root@master02 ~]# nmcli connection up <span class="string">&quot;ens160&quot;</span></span><br><span class="line">Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span 
class="line">[root@master03 ~]# nmcli connection modify <span class="string">&quot;ens160&quot;</span> ipv4.method manual ipv4.addresses 192.168.213.43/24 ipv4.gateway 192.168.213.2 ipv4.dns <span class="string">&quot;192.168.213.2&quot;</span></span><br><span class="line">[root@master03 ~]# nmcli connection up <span class="string">&quot;ens160&quot;</span></span><br><span class="line">Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@worker01 ~]# nmcli connection modify <span class="string">&quot;ens160&quot;</span> ipv4.method manual ipv4.addresses 192.168.213.44/24 ipv4.gateway 192.168.213.2 ipv4.dns <span class="string">&quot;192.168.213.2&quot;</span></span><br><span class="line">[root@worker01 ~]# nmcli connection up <span class="string">&quot;ens160&quot;</span></span><br><span class="line">Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@worker02 ~]# nmcli connection modify <span class="string">&quot;ens160&quot;</span> ipv4.method manual ipv4.addresses 192.168.213.45/24 ipv4.gateway 192.168.213.2 ipv4.dns <span class="string">&quot;192.168.213.2&quot;</span></span><br><span class="line">[root@worker02 ~]# nmcli connection up <span class="string">&quot;ens160&quot;</span></span><br><span class="line">Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)</span><br></pre></td></tr></table></figure><h3 id="关闭防火墙和设置selinux所有节点都需要"><a class="markdownIt-Anchor" href="#关闭防火墙和设置selinux所有节点都需要"></a> Disable the firewall and set SELinux to permissive (all nodes)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# systemctl stop 
firewalld.service</span><br><span class="line">[root@master01 ~]# systemctl <span class="built_in">disable</span> firewalld.service</span><br><span class="line">Removed <span class="string">&quot;/etc/systemd/system/multi-user.target.wants/firewalld.service&quot;</span>.</span><br><span class="line">Removed <span class="string">&quot;/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service&quot;</span>.</span><br><span class="line">[root@master01 ~]# setenforce 0</span><br><span class="line">[root@master01 ~]# sed -i <span class="string">&#x27;s/^SELINUX=enforcing$/SELINUX=permissive/&#x27;</span> /etc/selinux/config</span><br></pre></td></tr></table></figure><h3 id="关闭swap所有节点都需要"><a class="markdownIt-Anchor" href="#关闭swap所有节点都需要"></a> Disable swap (all nodes)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# sed -ri <span class="string">&#x27;s/.*swap.*/#&amp;/&#x27;</span> /etc/fstab</span><br><span class="line">[root@master01 ~]# swapoff -a</span><br></pre></td></tr></table></figure><h3 id="配置epel源并安装nginx-keepalived只需要在master三个节点安装"><a class="markdownIt-Anchor" href="#配置epel源并安装nginx-keepalived只需要在master三个节点安装"></a> Configure the EPEL repository and install nginx and keepalived (the three master nodes only)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# yum install epel-release</span><br><span class="line">[root@master01 ~]# yum install nginx keepalived</span><br><span class="line">[root@master01 ~]# <span class="built_in">cp</span> /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak</span><br><span class="line">[root@master01 ~]# yum install nginx-mod-stream</span><br><span class="line">[root@master01 ~]# <span class="built_in">cat</span> /etc/nginx/nginx.conf</span><br><span class="line">user nginx;</span><br><span class="line">worker_processes auto;</span><br><span class="line">error_log /var/log/nginx/error.log;</span><br><span class="line">pid /run/nginx.pid;</span><br><span class="line">include 
/usr/share/nginx/modules/*.conf;</span><br><span class="line">events &#123;</span><br><span class="line">    worker_connections 1024;</span><br><span class="line">&#125;</span><br><span class="line">stream &#123;</span><br><span class="line">    log_format  main  <span class="string">&#x27;$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent&#x27;</span>;</span><br><span class="line">    access_log  /var/log/nginx/k8s-access.log  main;</span><br><span class="line">    upstream k8s-apiserver &#123;</span><br><span class="line">       server 192.168.213.41:6443;   <span class="comment"># k8s-master01 APISERVER IP:PORT</span></span><br><span class="line">       server 192.168.213.42:6443;   <span class="comment"># k8s-master02 APISERVER IP:PORT</span></span><br><span class="line">       server 192.168.213.43:6443;   <span class="comment"># k8s-master03 APISERVER IP:PORT</span></span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    server &#123;</span><br><span class="line">       listen 16443; <span class="comment"># nginx runs on the master nodes themselves, so this port must not be 6443 or it would clash with kube-apiserver</span></span><br><span class="line">       proxy_pass k8s-apiserver;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line">http &#123;</span><br><span class="line">    log_format  main  <span class="string">&#x27;$remote_addr - $remote_user [$time_local] &quot;$request&quot; &#x27;</span></span><br><span class="line">                      <span class="string">&#x27;$status $body_bytes_sent &quot;$http_referer&quot; &#x27;</span></span><br><span class="line">                      <span class="string">&#x27;&quot;$http_user_agent&quot; &quot;$http_x_forwarded_for&quot;&#x27;</span>;</span><br><span class="line">    access_log  /var/log/nginx/access.log  main;</span><br><span class="line">    sendfile            on;</span><br><span class="line">    tcp_nopush          on;</span><br><span class="line">   
 tcp_nodelay         on;</span><br><span class="line">    keepalive_timeout   65;</span><br><span class="line">    types_hash_max_size 4096;</span><br><span class="line">    include             /etc/nginx/mime.types;</span><br><span class="line">    default_type        application/octet-stream;</span><br><span class="line">    <span class="comment"># Load modular configuration files from the /etc/nginx/conf.d directory.</span></span><br><span class="line">    <span class="comment"># See http://nginx.org/en/docs/ngx_core_module.html#include</span></span><br><span class="line">    <span class="comment"># for more information.</span></span><br><span class="line">    include /etc/nginx/conf.d/*.conf;</span><br><span class="line">    server &#123;</span><br><span class="line">        listen       80;</span><br><span class="line">        listen       [::]:80;</span><br><span class="line">        server_name  _;</span><br><span class="line">        root         /usr/share/nginx/html;</span><br><span class="line">        <span class="comment"># Load configuration files for the default server block.</span></span><br><span class="line">        include /etc/nginx/default.d/*.conf;</span><br><span class="line">        error_page 404 /404.html;</span><br><span class="line">        location = /404.html &#123;</span><br><span class="line">        &#125;</span><br><span class="line">        error_page 500 502 503 504 /50x.html;</span><br><span class="line">        location = /50x.html &#123;</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line">[root@master01 ~]# nginx -t</span><br><span class="line">nginx: the configuration file /etc/nginx/nginx.conf syntax is ok</span><br><span class="line">nginx: configuration file /etc/nginx/nginx.conf <span class="built_in">test</span> is successful</span><br><span class="line">[root@master01 ~]# <span class="built_in">cp</span> 
/etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak</span><br><span class="line">[root@master01 ~]# <span class="built_in">cat</span> /etc/keepalived/keepalived.conf</span><br><span class="line">global_defs &#123;</span><br><span class="line">   notification_email &#123;</span><br><span class="line">     acassen@firewall.loc</span><br><span class="line">     failover@firewall.loc</span><br><span class="line">     sysadmin@firewall.loc</span><br><span class="line">   &#125;</span><br><span class="line">   notification_email_from Alexandre.Cassen@firewall.loc</span><br><span class="line">   smtp_server 127.0.0.1</span><br><span class="line">   smtp_connect_timeout 30</span><br><span class="line">   router_id NGINX_MASTER</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">vrrp_script check_nginx &#123;</span><br><span class="line">    script <span class="string">&quot;/etc/keepalived/check_nginx.sh&quot;</span></span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">vrrp_instance VI_1 &#123;</span><br><span class="line">    state MASTER</span><br><span class="line">    interface ens160  <span class="comment"># change to the actual NIC name</span></span><br><span class="line">    mcast_src_ip 192.168.213.41</span><br><span class="line">    virtual_router_id 51 <span class="comment"># VRRP router ID; must be unique per instance</span></span><br><span class="line">    priority 100    <span class="comment"># priority; set a lower value (e.g. 90) on backup servers</span></span><br><span class="line">    advert_int 1    <span class="comment"># VRRP advertisement (heartbeat) interval; default is 1 second</span></span><br><span class="line">    authentication &#123;</span><br><span class="line">        auth_type PASS</span><br><span class="line">        auth_pass 1111</span><br><span class="line">    &#125;</span><br><span class="line">    <span class="comment"># Virtual IP</span></span><br><span class="line">    virtual_ipaddress &#123;</span><br><span class="line">        
192.168.213.40/24</span><br><span class="line">    &#125;</span><br><span class="line">    track_script &#123;</span><br><span class="line">        check_nginx</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line">[root@master01 ~]# <span class="built_in">cat</span> /etc/keepalived/check_nginx.sh</span><br><span class="line"><span class="comment">#!/bin/bash</span></span><br><span class="line">count=$(ps -ef |grep nginx | grep sbin | egrep -cv <span class="string">&quot;grep|$$&quot;</span>)</span><br><span class="line"><span class="keyword">if</span> [ <span class="string">&quot;<span class="variable">$count</span>&quot;</span> -eq 0 ];<span class="keyword">then</span></span><br><span class="line">    systemctl stop keepalived</span><br><span class="line"><span class="keyword">fi</span></span><br><span class="line">[root@master01 ~]# <span class="built_in">chmod</span> +x  /etc/keepalived/check_nginx.sh</span><br><span class="line">[root@master01 ~]# systemctl daemon-reload &amp;&amp; systemctl start nginx keepalived &amp;&amp; systemctl <span class="built_in">enable</span> nginx keepalived</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master02 ~]# yum install epel-release</span><br><span class="line">[root@master02 ~]# yum install nginx keepalived</span><br><span class="line">[root@master02 ~]# <span class="built_in">cp</span> /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak</span><br><span class="line">[root@master02 ~]# yum install nginx-mod-stream</span><br><span class="line">[root@master02 ~]# <span class="built_in">cat</span> /etc/nginx/nginx.conf</span><br><span class="line">user nginx;</span><br><span class="line">worker_processes auto;</span><br><span class="line">error_log /var/log/nginx/error.log;</span><br><span class="line">pid /run/nginx.pid;</span><br><span class="line">include 
/usr/share/nginx/modules/*.conf;</span><br><span class="line">events &#123;</span><br><span class="line">    worker_connections 1024;</span><br><span class="line">&#125;</span><br><span class="line">stream &#123;</span><br><span class="line">    log_format  main  <span class="string">&#x27;$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent&#x27;</span>;</span><br><span class="line">    access_log  /var/log/nginx/k8s-access.log  main;</span><br><span class="line">    upstream k8s-apiserver &#123;</span><br><span class="line">       server 192.168.213.41:6443;   <span class="comment"># k8s-master01 APISERVER IP:PORT</span></span><br><span class="line">       server 192.168.213.42:6443;   <span class="comment"># k8s-master02 APISERVER IP:PORT</span></span><br><span class="line">       server 192.168.213.43:6443;   <span class="comment"># k8s-master03 APISERVER IP:PORT</span></span><br><span class="line">    &#125;</span><br><span class="line">    server &#123;</span><br><span class="line">       listen 16443; <span class="comment"># nginx runs on the master nodes themselves, so this port must not be 6443 or it would clash with kube-apiserver</span></span><br><span class="line">       proxy_pass k8s-apiserver;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line">http &#123;</span><br><span class="line">    log_format  main  <span class="string">&#x27;$remote_addr - $remote_user [$time_local] &quot;$request&quot; &#x27;</span></span><br><span class="line">                      <span class="string">&#x27;$status $body_bytes_sent &quot;$http_referer&quot; &#x27;</span></span><br><span class="line">                      <span class="string">&#x27;&quot;$http_user_agent&quot; &quot;$http_x_forwarded_for&quot;&#x27;</span>;</span><br><span class="line">    access_log  /var/log/nginx/access.log  main;</span><br><span class="line">    sendfile            on;</span><br><span class="line">    tcp_nopush          on;</span><br><span class="line">    tcp_nodelay         
on;</span><br><span class="line">    keepalive_timeout   65;</span><br><span class="line">    types_hash_max_size 4096;</span><br><span class="line">    include             /etc/nginx/mime.types;</span><br><span class="line">    default_type        application/octet-stream;</span><br><span class="line">    <span class="comment"># Load modular configuration files from the /etc/nginx/conf.d directory.</span></span><br><span class="line">    <span class="comment"># See http://nginx.org/en/docs/ngx_core_module.html#include</span></span><br><span class="line">    <span class="comment"># for more information.</span></span><br><span class="line">    include /etc/nginx/conf.d/*.conf;</span><br><span class="line">    server &#123;</span><br><span class="line">        listen       80;</span><br><span class="line">        listen       [::]:80;</span><br><span class="line">        server_name  _;</span><br><span class="line">        root         /usr/share/nginx/html;</span><br><span class="line">        <span class="comment"># Load configuration files for the default server block.</span></span><br><span class="line">        include /etc/nginx/default.d/*.conf;</span><br><span class="line">        error_page 404 /404.html;</span><br><span class="line">        location = /404.html &#123;</span><br><span class="line">        &#125;</span><br><span class="line">        error_page 500 502 503 504 /50x.html;</span><br><span class="line">        location = /50x.html &#123;</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line">[root@master02 ~]# nginx -t</span><br><span class="line">nginx: the configuration file /etc/nginx/nginx.conf syntax is ok</span><br><span class="line">nginx: configuration file /etc/nginx/nginx.conf <span class="built_in">test</span> is successful</span><br><span class="line">[root@master02 ~]# <span class="built_in">cp</span> /etc/keepalived/keepalived.conf 
/etc/keepalived/keepalived.conf.bak</span><br><span class="line">[root@master02 ~]# <span class="built_in">cat</span> /etc/keepalived/keepalived.conf</span><br><span class="line">global_defs &#123;</span><br><span class="line">   notification_email &#123;</span><br><span class="line">     acassen@firewall.loc</span><br><span class="line">     failover@firewall.loc</span><br><span class="line">     sysadmin@firewall.loc</span><br><span class="line">   &#125;</span><br><span class="line">   notification_email_from Alexandre.Cassen@firewall.loc</span><br><span class="line">   smtp_server 127.0.0.1</span><br><span class="line">   smtp_connect_timeout 30</span><br><span class="line">   router_id NGINX_BACKUP</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">vrrp_script check_nginx &#123;</span><br><span class="line">    script <span class="string">&quot;/etc/keepalived/check_nginx.sh&quot;</span></span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">vrrp_instance VI_1 &#123;</span><br><span class="line">    state BACKUP</span><br><span class="line">    interface ens160</span><br><span class="line">    mcast_src_ip 192.168.213.42</span><br><span class="line">    virtual_router_id 51 <span class="comment"># VRRP router ID; must be unique per instance</span></span><br><span class="line">    priority 99</span><br><span class="line">    advert_int 1</span><br><span class="line">    authentication &#123;</span><br><span class="line">        auth_type PASS</span><br><span class="line">        auth_pass 1111</span><br><span class="line">    &#125;</span><br><span class="line">    virtual_ipaddress &#123;</span><br><span class="line">        192.168.213.40/24</span><br><span class="line">    &#125;</span><br><span class="line">    track_script &#123;</span><br><span class="line">        check_nginx</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line">[root@master02 
~]# <span class="built_in">cat</span> /etc/keepalived/check_nginx.sh</span><br><span class="line"><span class="comment">#!/bin/bash</span></span><br><span class="line">count=$(ps -ef |grep nginx | grep sbin | egrep -cv <span class="string">&quot;grep|$$&quot;</span>)</span><br><span class="line"><span class="keyword">if</span> [ <span class="string">&quot;<span class="variable">$count</span>&quot;</span> -eq 0 ];<span class="keyword">then</span></span><br><span class="line">    systemctl stop keepalived</span><br><span class="line"><span class="keyword">fi</span></span><br><span class="line">[root@master02 ~]# <span class="built_in">chmod</span> +x /etc/keepalived/check_nginx.sh</span><br><span class="line">[root@master02 ~]# systemctl daemon-reload &amp;&amp; systemctl start nginx keepalived &amp;&amp; systemctl <span class="built_in">enable</span> nginx keepalived</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master03 ~]# yum install epel-release</span><br><span class="line">[root@master03 ~]# yum install nginx keepalived</span><br><span class="line">[root@master03 ~]# <span class="built_in">cp</span> /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak</span><br><span class="line">[root@master03 ~]# yum install nginx-mod-stream</span><br><span class="line">[root@master03 ~]# <span class="built_in">cat</span> /etc/nginx/nginx.conf</span><br><span class="line">user nginx;</span><br><span class="line">worker_processes auto;</span><br><span class="line">error_log /var/log/nginx/error.log;</span><br><span class="line">pid /run/nginx.pid;</span><br><span class="line">include /usr/share/nginx/modules/*.conf;</span><br><span class="line">events &#123;</span><br><span class="line">    worker_connections 1024;</span><br><span class="line">&#125;</span><br><span class="line">stream &#123;</span><br><span class="line">    log_format  main  <span class="string">&#x27;$remote_addr $upstream_addr 
- [$time_local] $status $upstream_bytes_sent&#x27;</span>;</span><br><span class="line">    access_log  /var/log/nginx/k8s-access.log  main;</span><br><span class="line">    upstream k8s-apiserver &#123;</span><br><span class="line">       server 192.168.213.41:6443;   <span class="comment"># k8s-master01 APISERVER IP:PORT</span></span><br><span class="line">       server 192.168.213.42:6443;   <span class="comment"># k8s-master02 APISERVER IP:PORT</span></span><br><span class="line">       server 192.168.213.43:6443;   <span class="comment"># k8s-master03 APISERVER IP:PORT</span></span><br><span class="line">    &#125;</span><br><span class="line"></span><br><span class="line">    server &#123;</span><br><span class="line">       listen 16443; <span class="comment"># nginx runs on the master nodes themselves, so this port must not be 6443 or it would clash with kube-apiserver</span></span><br><span class="line">       proxy_pass k8s-apiserver;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line">http &#123;</span><br><span class="line">    log_format  main  <span class="string">&#x27;$remote_addr - $remote_user [$time_local] &quot;$request&quot; &#x27;</span></span><br><span class="line">                      <span class="string">&#x27;$status $body_bytes_sent &quot;$http_referer&quot; &#x27;</span></span><br><span class="line">                      <span class="string">&#x27;&quot;$http_user_agent&quot; &quot;$http_x_forwarded_for&quot;&#x27;</span>;</span><br><span class="line">    access_log  /var/log/nginx/access.log  main;</span><br><span class="line">    sendfile            on;</span><br><span class="line">    tcp_nopush          on;</span><br><span class="line">    tcp_nodelay         on;</span><br><span class="line">    keepalive_timeout   65;</span><br><span class="line">    types_hash_max_size 4096;</span><br><span class="line">    include             /etc/nginx/mime.types;</span><br><span class="line">    default_type        application/octet-stream;</span><br><span 
class="line">    <span class="comment"># Load modular configuration files from the /etc/nginx/conf.d directory.</span></span><br><span class="line">    <span class="comment"># See http://nginx.org/en/docs/ngx_core_module.html#include</span></span><br><span class="line">    <span class="comment"># for more information.</span></span><br><span class="line">    include /etc/nginx/conf.d/*.conf;</span><br><span class="line">    server &#123;</span><br><span class="line">        listen       80;</span><br><span class="line">        listen       [::]:80;</span><br><span class="line">        server_name  _;</span><br><span class="line">        root         /usr/share/nginx/html;</span><br><span class="line">        <span class="comment"># Load configuration files for the default server block.</span></span><br><span class="line">        include /etc/nginx/default.d/*.conf;</span><br><span class="line">        error_page 404 /404.html;</span><br><span class="line">        location = /404.html &#123;</span><br><span class="line">        &#125;</span><br><span class="line">        error_page 500 502 503 504 /50x.html;</span><br><span class="line">        location = /50x.html &#123;</span><br><span class="line">        &#125;</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line">[root@master03 ~]# nginx -t</span><br><span class="line">nginx: the configuration file /etc/nginx/nginx.conf syntax is ok</span><br><span class="line">nginx: configuration file /etc/nginx/nginx.conf <span class="built_in">test</span> is successful</span><br><span class="line">[root@master03 ~]# <span class="built_in">cp</span> /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak</span><br><span class="line">[root@master03 ~]# <span class="built_in">cat</span> /etc/keepalived/keepalived.conf</span><br><span class="line">global_defs &#123;</span><br><span class="line">   notification_email &#123;</span><br><span class="line">     
acassen@firewall.loc</span><br><span class="line">     failover@firewall.loc</span><br><span class="line">     sysadmin@firewall.loc</span><br><span class="line">   &#125;</span><br><span class="line">   notification_email_from Alexandre.Cassen@firewall.loc</span><br><span class="line">   smtp_server 127.0.0.1</span><br><span class="line">   smtp_connect_timeout 30</span><br><span class="line">   router_id NGINX_BACKUP</span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">vrrp_script check_nginx &#123;</span><br><span class="line">    script <span class="string">&quot;/etc/keepalived/check_nginx.sh&quot;</span></span><br><span class="line">&#125;</span><br><span class="line"></span><br><span class="line">vrrp_instance VI_1 &#123;</span><br><span class="line">    state BACKUP</span><br><span class="line">    interface ens160</span><br><span class="line">    mcast_src_ip 192.168.213.43</span><br><span class="line">    virtual_router_id 51 <span class="comment"># VRRP router ID; must be unique per instance</span></span><br><span class="line">    priority 98</span><br><span class="line">    advert_int 1</span><br><span class="line">    authentication &#123;</span><br><span class="line">        auth_type PASS</span><br><span class="line">        auth_pass 1111</span><br><span class="line">    &#125;</span><br><span class="line">    virtual_ipaddress &#123;</span><br><span class="line">        192.168.213.40/24</span><br><span class="line">    &#125;</span><br><span class="line">    track_script &#123;</span><br><span class="line">        check_nginx</span><br><span class="line">    &#125;</span><br><span class="line">&#125;</span><br><span class="line">[root@master03 ~]# <span class="built_in">cat</span> /etc/keepalived/check_nginx.sh</span><br><span class="line"><span class="comment">#!/bin/bash</span></span><br><span class="line">count=$(ps -ef |grep nginx | grep sbin | egrep -cv <span class="string">&quot;grep|$$&quot;</span>)</span><br><span 
class="line"><span class="keyword">if</span> [ <span class="string">&quot;<span class="variable">$count</span>&quot;</span> -eq 0 ];<span class="keyword">then</span></span><br><span class="line">    systemctl stop keepalived</span><br><span class="line"><span class="keyword">fi</span></span><br><span class="line">[root@master03 ~]# <span class="built_in">chmod</span> +x /etc/keepalived/check_nginx.sh</span><br><span class="line">[root@master03 ~]# systemctl daemon-reload &amp;&amp; systemctl start nginx keepalived &amp;&amp; systemctl <span class="built_in">enable</span> nginx keepalived</span><br></pre></td></tr></table></figure><p>After all three master nodes are configured, check where the VIP lives<br /><img src="/img/article/k8s/1.png" alt="1" /></p><p>Next, stop the nginx service on a node to verify that the VIP keeps serving traffic</p><ul><li>After stopping nginx on master01:</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# systemctl stop nginx</span><br></pre></td></tr></table></figure><p><img src="/img/article/k8s/2.png" alt="2" /></p><p>The VIP has moved away, and the keepalived service has stopped as well:<br /><img src="/img/article/k8s/3.png" alt="3" /></p><p>The VIP now shows up on master02:<br /><img src="/img/article/k8s/4.png" alt="4" /></p><ul><li>After starting the nginx and keepalived services on master01 again:</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# systemctl start nginx</span><br><span class="line">[root@master01 ~]# systemctl start keepalived</span><br></pre></td></tr></table></figure><p>The VIP has floated back to master01:<br /><img src="/img/article/k8s/5.png" alt="5" /></p><h3 id="修改hosts解析所有节点都进行配置"><a class="markdownIt-Anchor" href="#修改hosts解析所有节点都进行配置"></a> Configure hosts resolution (on all nodes)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# <span class="built_in">cat</span> /etc/hosts</span><br><span class="line">127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4</span><br><span class="line">::1         localhost localhost.localdomain localhost6 localhost6.localdomain6</span><br><span class="line"></span><br><span class="line">192.168.213.41  master01</span><br><span class="line">192.168.213.42  master02</span><br><span class="line">192.168.213.43  master03</span><br><span class="line">192.168.213.44  worker01</span><br><span class="line">192.168.213.45  worker02</span><br></pre></td></tr></table></figure><h3 id="安装内核相关的包文件所有节点都需要"><a class="markdownIt-Anchor" href="#安装内核相关的包文件所有节点都需要"></a> Install the kernel-related packages (on all nodes)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# dnf install kernel-devel-$(<span class="built_in">uname</span> -r)</span><br></pre></td></tr></table></figure><h3 id="启用内核模块配置桥接iptables所有节点都需要"><a class="markdownIt-Anchor" href="#启用内核模块配置桥接iptables所有节点都需要"></a> Enable kernel modules and bridged iptables (on all nodes)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# modprobe br_netfilter</span><br><span class="line">[root@master01 ~]# modprobe ip_vs</span><br><span class="line">[root@master01 ~]# modprobe ip_vs_rr</span><br><span class="line">[root@master01 ~]# modprobe ip_vs_wrr</span><br><span class="line">[root@master01 ~]# modprobe ip_vs_sh</span><br><span class="line">[root@master01 ~]# modprobe overlay</span><br></pre></td></tr></table></figure><p>The commands above load the kernel modules Kubernetes needs to run correctly. Loading them ensures the server is ready for the Kubernetes installation and can effectively handle networking and load-balancing tasks within the cluster</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# <span class="built_in">cat</span> &gt; /etc/modules-load.d/kubernetes.conf &lt;&lt; <span class="string">EOF</span></span><br><span class="line"><span class="string">br_netfilter</span></span><br><span class="line"><span class="string">ip_vs</span></span><br><span class="line"><span class="string">ip_vs_rr</span></span><br><span class="line"><span class="string">ip_vs_wrr</span></span><br><span class="line"><span class="string">ip_vs_sh</span></span><br><span 
class="line"><span class="string">overlay</span></span><br><span class="line"><span class="string">EOF</span></span><br></pre></td></tr></table></figure><p>This makes the modules load automatically at boot</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# <span class="built_in">cat</span> &gt; /etc/sysctl.d/kubernetes.conf &lt;&lt; <span class="string">EOF</span></span><br><span class="line"><span class="string">net.ipv4.ip_forward = 1</span></span><br><span class="line"><span class="string">net.bridge.bridge-nf-call-ip6tables = 1</span></span><br><span class="line"><span class="string">net.bridge.bridge-nf-call-iptables = 1</span></span><br><span class="line"><span class="string">EOF</span></span><br></pre></td></tr></table></figure><ul><li>net.bridge.bridge-nf-call-ip6tables: lets ip6tables see bridged IPv6 traffic</li><li>net.bridge.bridge-nf-call-iptables: lets iptables see bridged IPv4 traffic</li><li>net.ipv4.ip_forward: enables IPv4 packet forwarding</li></ul><p>Setting these sysctl parameters ensures the system is configured to support Kubernetes networking requirements and forward traffic within the cluster. They are essential for the smooth operation of the Kubernetes networking components</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# sysctl --system</span><br></pre></td></tr></table></figure><p>That completes the OS-level initialization of the k8s nodes; the remaining setup follows<br />This guide uses Docker as the container runtime for k8s in place of containerd (required on all nodes)</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# <span class="built_in">cd</span> /etc/yum.repos.d/ &amp;&amp; wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo</span><br><span class="line">[root@master01 yum.repos.d]# <span class="built_in">cd</span></span><br><span class="line">[root@master01 ~]# yum install docker-ce -y</span><br><span class="line">[root@master01 ~]# systemctl start docker</span><br><span class="line">[root@master01 ~]# systemctl <span class="built_in">enable</span> docker</span><br></pre></td></tr></table></figure><ul><li>Configure registry mirrors for Docker</li></ul><figure class="highlight bash"><table><tr><td 
class="code"><pre><span class="line">[root@master01 ~]# <span class="built_in">cat</span> &lt;&lt; <span class="string">EOF &gt; /etc/docker/daemon.json</span></span><br><span class="line"><span class="string">&#123;</span></span><br><span class="line"><span class="string">    &quot;registry-mirrors&quot;: [</span></span><br><span class="line"><span class="string">     &quot;https://docker.1ms.run&quot;,</span></span><br><span class="line"><span class="string">     &quot;https://docker.aityp.com&quot;,</span></span><br><span class="line"><span class="string">     &quot;https://docker.m.daocloud.io&quot;</span></span><br><span class="line"><span class="string">     ]</span></span><br><span class="line"><span class="string">&#125;</span></span><br><span class="line"><span class="string">EOF</span></span><br><span class="line">[root@master01 ~]# systemctl daemon-reload</span><br><span class="line">[root@master01 ~]# systemctl restart docker</span><br></pre></td></tr></table></figure><h3 id="安装cri-dockerd所有节点都需要"><a class="markdownIt-Anchor" href="#安装cri-dockerd所有节点都需要"></a> Install cri-dockerd (on all nodes)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.16/cri-dockerd-0.3.16.amd64.tgz</span><br><span class="line">[root@master01 ~]# tar xf cri-dockerd-0.3.16.amd64.tgz</span><br><span class="line">[root@master01 ~]# <span class="built_in">cp</span> cri-dockerd/cri-dockerd /usr/bin/</span><br><span class="line">[root@master01 ~]# <span class="built_in">cat</span> /etc/systemd/system/cri-docker.service</span><br><span class="line">[Unit]</span><br><span class="line">Description=CRI Interface <span class="keyword">for</span> Docker Application Container Engine</span><br><span class="line">Documentation=https://docs.mirantis.com</span><br><span class="line">After=network-online.target firewalld.service docker.service</span><br><span 
class="line">Wants=network-online.target</span><br><span class="line">Requires=cri-docker.socket</span><br><span class="line">[Service]</span><br><span class="line">Type=notify</span><br><span class="line">ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://</span><br><span class="line">ExecReload=/bin/kill -s HUP <span class="variable">$MAINPID</span></span><br><span class="line">TimeoutSec=0</span><br><span class="line">RestartSec=2</span><br><span class="line">Restart=always</span><br><span class="line"><span class="comment"># Note that StartLimit* options were moved from &quot;Service&quot; to &quot;Unit&quot; in systemd 229.</span></span><br><span class="line"><span class="comment"># Both the old, and new location are accepted by systemd 229 and up, so using the old location</span></span><br><span class="line"><span class="comment"># to make them work for either version of systemd.</span></span><br><span class="line">StartLimitBurst=3</span><br><span class="line"><span class="comment"># Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.</span></span><br><span class="line"><span class="comment"># Both the old, and new name are accepted by systemd 230 and up, so using the old name to make</span></span><br><span class="line"><span class="comment"># this option work for either version of systemd.</span></span><br><span class="line">StartLimitInterval=60s</span><br><span class="line"><span class="comment"># Having non-zero Limit*s causes performance problems due to accounting overhead</span></span><br><span class="line"><span class="comment"># in the kernel. 
We recommend using cgroups to do container-local accounting.</span></span><br><span class="line">LimitNOFILE=infinity</span><br><span class="line">LimitNPROC=infinity</span><br><span class="line">LimitCORE=infinity</span><br><span class="line"><span class="comment"># Comment TasksMax if your systemd version does not support it.</span></span><br><span class="line"><span class="comment"># Only systemd 226 and above support this option.</span></span><br><span class="line">TasksMax=infinity</span><br><span class="line">Delegate=<span class="built_in">yes</span></span><br><span class="line">KillMode=process</span><br><span class="line">[Install]</span><br><span class="line">WantedBy=multi-user.target</span><br><span class="line">[root@master01 ~]# <span class="built_in">cat</span> /etc/systemd/system/cri-docker.socket</span><br><span class="line">[Unit]</span><br><span class="line">Description=CRI Docker Socket <span class="keyword">for</span> the API</span><br><span class="line">PartOf=cri-docker.service</span><br><span class="line">[Socket]</span><br><span class="line">ListenStream=%t/cri-dockerd.sock</span><br><span class="line">SocketMode=0660</span><br><span class="line">SocketUser=root</span><br><span class="line">SocketGroup=docker</span><br><span class="line">[Install]</span><br><span class="line">WantedBy=sockets.target</span><br><span class="line">[root@master01 ~]# systemctl <span class="built_in">enable</span> cri-docker --now</span><br><span class="line">Created symlink /etc/systemd/system/multi-user.target.wants/cri-docker.service → /etc/systemd/system/cri-docker.service.</span><br><span class="line">[root@master01 ~]# systemctl <span class="built_in">enable</span> cri-docker.socket --now</span><br><span class="line">Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /etc/systemd/system/cri-docker.socket.</span><br><span class="line">[root@master01 ~]# systemctl status cri-docker</span><br></pre></td></tr></table></figure><h3 
id="配置k8s-repo所有节点都需要"><a class="markdownIt-Anchor" href="#配置k8s-repo所有节点都需要"></a> Configure the k8s repo (on all nodes)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# <span class="built_in">cat</span> &lt;&lt; <span class="string">EOF &gt; /etc/yum.repos.d/kubernetes.repo</span></span><br><span class="line"><span class="string">[kubernetes]</span></span><br><span class="line"><span class="string">name=Kubernetes</span></span><br><span class="line"><span class="string">baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.35/rpm/</span></span><br><span class="line"><span class="string">enabled=1</span></span><br><span class="line"><span class="string">gpgcheck=1</span></span><br><span class="line"><span class="string">gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.35/rpm/repodata/repomd.xml.key</span></span><br><span class="line"><span class="string">exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni</span></span><br><span class="line"><span class="string">EOF</span></span><br></pre></td></tr></table></figure><h3 id="安装kubernetes相关软件"><a class="markdownIt-Anchor" href="#安装kubernetes相关软件"></a> Install the Kubernetes packages</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes</span><br></pre></td></tr></table></figure><p><img src="/img/article/k8s/6.png" alt="6" /></p><p>Enable the kubelet service (all nodes)</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# systemctl <span class="built_in">enable</span> --now kubelet.service</span><br></pre></td></tr></table></figure><p>Once all of the configuration above is complete and verified, the next step is initializing the k8s cluster</p><ul><li>On master01:</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# kubeadm init --apiserver-advertise-address=192.168.213.40 --control-plane-endpoint 192.168.213.40:16443 --image-repository 
registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock</span><br></pre></td></tr></table></figure><p>Here 192.168.213.40 is the VIP configured earlier, and port 16443 is the listen port configured in nginx<br /><img src="/img/article/k8s/7.png" alt="7" /></p><p>A successful initialization looks like the screenshot above. Next, initialize the other master nodes<br />Before operating on the other master nodes, sync the certificates on master01 to each of them</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master02 ~]# <span class="built_in">mkdir</span> /etc/kubernetes/pki</span><br><span class="line">[root@master02 ~]# <span class="built_in">mkdir</span> /etc/kubernetes/pki/etcd/</span><br><span class="line">[root@master01 ~]# scp /etc/kubernetes/pki/ca.* master02:/etc/kubernetes/pki/</span><br><span class="line">[root@master01 ~]# scp /etc/kubernetes/pki/sa.* master02:/etc/kubernetes/pki/</span><br><span class="line">[root@master01 ~]# scp /etc/kubernetes/pki/front-proxy-c* master02:/etc/kubernetes/pki/</span><br><span class="line">[root@master01 ~]# scp /etc/kubernetes/pki/etcd/ca.* master02:/etc/kubernetes/pki/etcd/</span><br></pre></td></tr></table></figure><ul><li>With the certificates synced, initialize master02</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master02 ~]# kubeadm <span class="built_in">join</span> 192.168.213.40:16443 --token xwcjjp.jnl94i2ez1ej9ocm --discovery-token-ca-cert-hash sha256:2dcdcc678c105bff36d7f6614db324b1d871e836929e03897731f18df318b7c1 --control-plane --cri-socket unix:///var/run/cri-dockerd.sock</span><br></pre></td></tr></table></figure><p><img src="/img/article/k8s/8.png" alt="8" /></p><p>A successful join looks like the above</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# kubectl get nodes</span><br><span class="line">NAME       STATUS     ROLES           AGE     VERSION</span><br><span class="line">master01   NotReady   control-plane   8m17s   v1.35.1</span><br><span class="line">master02   NotReady   
control-plane   64s     v1.35.1</span><br></pre></td></tr></table></figure><ul><li>Next, sync the certificates to master03 in the same way and initialize it</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master03 ~]# <span class="built_in">mkdir</span> /etc/kubernetes/pki</span><br><span class="line">[root@master03 ~]# <span class="built_in">mkdir</span> /etc/kubernetes/pki/etcd/</span><br><span class="line">[root@master01 ~]# scp /etc/kubernetes/pki/ca.* master03:/etc/kubernetes/pki/</span><br><span class="line">[root@master01 ~]# scp /etc/kubernetes/pki/sa.* master03:/etc/kubernetes/pki/</span><br><span class="line">[root@master01 ~]# scp /etc/kubernetes/pki/etcd/ca.* master03:/etc/kubernetes/pki/etcd/</span><br><span class="line">[root@master01 ~]# scp /etc/kubernetes/pki/front-proxy-c* master03:/etc/kubernetes/pki/</span><br><span class="line"></span><br><span class="line">[root@master03 ~]# kubeadm <span class="built_in">join</span> 192.168.213.40:16443 --token xwcjjp.jnl94i2ez1ej9ocm --discovery-token-ca-cert-hash sha256:2dcdcc678c105bff36d7f6614db324b1d871e836929e03897731f18df318b7c1 --control-plane --cri-socket unix:///var/run/cri-dockerd.sock</span><br></pre></td></tr></table></figure><p><img src="/img/article/k8s/9.png" alt="9" /></p><ul><li>Once all master nodes are initialized, add the worker nodes:</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@worker01 ~]# kubeadm <span class="built_in">join</span> 192.168.213.40:16443 --token xwcjjp.jnl94i2ez1ej9ocm --discovery-token-ca-cert-hash sha256:2dcdcc678c105bff36d7f6614db324b1d871e836929e03897731f18df318b7c1 --cri-socket unix:///var/run/cri-dockerd.sock</span><br><span class="line">[root@worker02 ~]# kubeadm <span class="built_in">join</span> 192.168.213.40:16443 --token xwcjjp.jnl94i2ez1ej9ocm --discovery-token-ca-cert-hash sha256:2dcdcc678c105bff36d7f6614db324b1d871e836929e03897731f18df318b7c1 --cri-socket 
unix:///var/run/cri-dockerd.sock</span><br></pre></td></tr></table></figure><p>After adding the two worker nodes, the cluster state looks like this</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# kubectl get nodes</span><br><span class="line">NAME       STATUS     ROLES           AGE     VERSION</span><br><span class="line">master01   NotReady   control-plane   15m     v1.35.1</span><br><span class="line">master02   NotReady   control-plane   8m46s   v1.35.1</span><br><span class="line">master03   NotReady   control-plane   2m52s   v1.35.1</span><br><span class="line">worker01   NotReady   &lt;none&gt;          63s     v1.35.1</span><br><span class="line">worker02   NotReady   &lt;none&gt;          54s     v1.35.1</span><br></pre></td></tr></table></figure><ul><li>Next, install the Calico network plugin to provide networking between the Pods in the cluster; start by deploying the Tigera Operator for Calico</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml</span><br></pre></td></tr></table></figure><p>Deploy it as follows:</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# kubectl create -f tigera-operator.yaml</span><br><span class="line">namespace/tigera-operator created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created</span><br><span 
class="line">customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created</span><br><span 
class="line">customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created</span><br><span class="line">serviceaccount/tigera-operator created</span><br><span class="line">clusterrole.rbac.authorization.k8s.io/tigera-operator created</span><br><span class="line">clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created</span><br><span class="line">deployment.apps/tigera-operator created</span><br></pre></td></tr></table></figure><p>Download the Calico custom resources manifest</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml</span><br><span class="line">[root@master01 ~]# sed -i <span class="string">&#x27;s/cidr: 192\.168\.0\.0\/16/cidr: 10.244.0.0\/16/g&#x27;</span> custom-resources.yaml</span><br><span class="line">[root@master01 ~]# kubectl create -f custom-resources.yaml</span><br><span class="line">installation.operator.tigera.io/default created</span><br><span class="line">apiserver.operator.tigera.io/default created</span><br></pre></td></tr></table></figure><p>After the steps above, check the Pod status<br /><img src="/img/article/k8s/10.png" alt="10" /></p><p>That completes the installation of the highly available Kubernetes 1.35 cluster<br />Check the cluster info</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# kubectl cluster-info</span><br><span class="line">Kubernetes control plane is running at https://192.168.213.40:16443</span><br><span class="line">CoreDNS is running at https://192.168.213.40:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy</span><br><span class="line"></span><br><span class="line">To further debug and diagnose cluster problems, use <span class="string">&#x27;kubectl cluster-info dump&#x27;</span>.</span><br></pre></td></tr></table></figure><p>Create a simple nginx application as a test</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# <span class="built_in">cat</span> &gt; 
nginx-deploy.yaml &lt;&lt; <span class="string">&#x27;EOF&#x27;</span></span><br><span class="line">apiVersion: apps/v1</span><br><span class="line">kind: Deployment</span><br><span class="line">metadata:</span><br><span class="line">  name: nginx-deployment</span><br><span class="line">  labels:</span><br><span class="line">    app: nginx</span><br><span class="line">spec:</span><br><span class="line">  replicas: 1</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: nginx</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: nginx</span><br><span class="line">    spec:</span><br><span class="line">      containers:</span><br><span class="line">      - name: nginx</span><br><span class="line">        image: nginx:latest</span><br><span class="line">        ports:</span><br><span class="line">        - containerPort: 80</span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# kubectl create -f nginx-deploy.yaml</span><br><span class="line">deployment.apps/nginx-deployment created</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# kubectl get pod</span><br><span class="line">NAME                                READY   STATUS    RESTARTS   AGE</span><br><span class="line">nginx-deployment-59f86b59ff-dhr7t   1/1     Running   0          31s</span><br></pre></td></tr></table></figure><p>Expose the deployed nginx application to the outside network</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# <span class="built_in">cat</span> &gt; nginx-svc.yaml &lt;&lt; <span class="string">&#x27;EOF&#x27;</span></span><br><span class="line">apiVersion: v1</span><br><span 
class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  name: nginx-service</span><br><span class="line">spec:</span><br><span class="line">  selector:</span><br><span class="line">    app: nginx</span><br><span class="line">  ports:</span><br><span class="line">  - protocol: TCP</span><br><span class="line">    port: 80</span><br><span class="line">    targetPort: 80</span><br><span class="line">  <span class="built_in">type</span>: NodePort</span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# kubectl apply -f nginx-svc.yaml</span><br><span class="line">service/nginx-service created</span><br></pre></td></tr></table></figure><p>Check the exposed NodePort</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master01 ~]# kubectl get svc|grep nginx</span><br><span class="line">nginx-service   NodePort    10.96.9.115   &lt;none&gt;        80:30934/TCP   36s</span><br></pre></td></tr></table></figure><p>Verify access in a browser<br /><img src="/img/article/k8s/11.png" alt="11" /></p>]]>
    </content>
    <id>https://jsonhc.github.io/posts/1243066720/</id>
    <link href="https://jsonhc.github.io/posts/1243066720/"/>
    <published>2026-03-01T13:00:00.000Z</published>
    <summary>
      <![CDATA[<hr />
<p>This article walks through installing a highly available Kubernetes 1.35 cluster on CentOS 9</p>
<hr />
<p>The node resources are as follows:</p>
<table>
<thead>
<tr>
<th>Hostname</th>
<th>IP</th>
<th>Memory</th>
<th>核]]>
    </summary>
    <title>Deploying a Highly Available Kubernetes 1.35 Cluster on CentOS 9</title>
    <updated>2026-03-01T13:49:11.709Z</updated>
  </entry>
  <entry>
    <author>
      <name>starttech</name>
    </author>
    <category term="随笔" scheme="https://jsonhc.github.io/categories/%E9%9A%8F%E7%AC%94/"/>
    <category term="随笔" scheme="https://jsonhc.github.io/tags/%E9%9A%8F%E7%AC%94/"/>
    <content>
      <![CDATA[<hr /><p>This article mainly covers how to create an Alibaba Cloud ACK cluster, deploy a simple application, and access it via a domain name</p><hr /><h3 id="在阿里云控制平台创建ack集群负载均衡架构为alb-ingress"><a class="markdownIt-Anchor" href="#在阿里云控制平台创建ack集群负载均衡架构为alb-ingress"></a> Create an ACK cluster in the Alibaba Cloud console, with ALB Ingress as the load-balancing architecture</h3><p><img src="/img/article/ACK1.png" alt="ACK1" /></p><p>Confirm the configuration and wait for the cluster to be created</p><p><img src="/img/article/ACK2.png" alt="ACK2" /></p><p>When creation completes, it looks like this<br /><img src="/img/article/ACK3.png" alt="ACK3" /></p><p>Manage the cluster through Workbench:<br /><img src="/img/article/ACK5.png" alt="ACK5" /></p><h3 id="创建测试应用"><a class="markdownIt-Anchor" href="#创建测试应用"></a> Create a test application</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">apiVersion: apps/v1</span><br><span class="line">kind: Deployment</span><br><span class="line">metadata:</span><br><span class="line">  name: nginx-app</span><br><span class="line">  namespace: default</span><br><span class="line">spec:</span><br><span class="line">  replicas: 1</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: nginx</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: nginx</span><br><span class="line">    spec:</span><br><span class="line">      containers:</span><br><span class="line">      - name: nginx</span><br><span class="line">        image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6</span><br><span class="line">        ports:</span><br><span class="line">        - containerPort: 80</span><br><span class="line">        resources:</span><br><span class="line">          requests:</span><br><span class="line">            memory: <span class="string">&quot;128Mi&quot;</span></span><br><span class="line">            cpu: <span class="string">&quot;100m&quot;</span></span><br><span class="line">          limits:</span><br><span class="line">            memory: <span 
class="string">&quot;256Mi&quot;</span></span><br><span class="line">            cpu: <span class="string">&quot;200m&quot;</span></span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  name: nginx-service</span><br><span class="line">  namespace: default</span><br><span class="line">spec:</span><br><span class="line">  selector:</span><br><span class="line">    app: nginx</span><br><span class="line">  ports:</span><br><span class="line">  - port: 80</span><br><span class="line">    targetPort: 80</span><br><span class="line">  <span class="built_in">type</span>: ClusterIP</span><br></pre></td></tr></table></figure><p>部署之后<br /><img src="/img/article/application1.png" alt="application1" /><br /><img src="/img/article/application2.png" alt="application2" /></p><h3 id="创建-alb-ingress-规则"><a class="markdownIt-Anchor" href="#创建-alb-ingress-规则"></a> 创建 ALB Ingress 规则</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">apiVersion: networking.k8s.io/v1</span><br><span class="line">kind: Ingress</span><br><span class="line">metadata:</span><br><span class="line">  name: nginx-alb-ingress</span><br><span class="line">  namespace: default</span><br><span class="line">  annotations:</span><br><span class="line">    <span class="comment"># 指定使用 ALB Ingress Class（必须）</span></span><br><span class="line">    alb.ingress.kubernetes.io/switch: <span class="string">&quot;true&quot;</span></span><br><span class="line">    </span><br><span class="line">    <span class="comment"># ALB 实例 ID（可选，不指定则自动创建新 ALB）</span></span><br><span class="line">    <span class="comment"># alb.ingress.kubernetes.io/id: &quot;alb-xxxxxx&quot;</span></span><br><span class="line">    </span><br><span class="line">    <span class="comment"># 监听端口配置</span></span><br><span 
class="line">    alb.ingress.kubernetes.io/listen-ports: <span class="string">&#x27;[&#123;&quot;HTTP&quot;: 80&#125;, &#123;&quot;HTTPS&quot;: 443&#125;]&#x27;</span></span><br><span class="line">    </span><br><span class="line">    <span class="comment"># Whether to create the HTTPS certificate automatically (via the Alibaba Cloud SSL certificate service)</span></span><br><span class="line">    alb.ingress.kubernetes.io/certificate-ids: <span class="string">&quot;your-cert-id&quot;</span>  <span class="comment"># optional: ID of an existing certificate</span></span><br><span class="line">    </span><br><span class="line">    <span class="comment"># Rewrite configuration (optional)</span></span><br><span class="line">    alb.ingress.kubernetes.io/rewrite-target: /</span><br><span class="line">    </span><br><span class="line">    <span class="comment"># Health check configuration</span></span><br><span class="line">    alb.ingress.kubernetes.io/healthcheck-path: <span class="string">&quot;/&quot;</span></span><br><span class="line">    alb.ingress.kubernetes.io/healthcheck-interval-seconds: <span class="string">&quot;2&quot;</span></span><br><span class="line">    alb.ingress.kubernetes.io/healthy-threshold-count: <span class="string">&quot;3&quot;</span></span><br><span class="line">    </span><br><span class="line">spec:</span><br><span class="line">  ingressClassName: alb  <span class="comment"># must be alb</span></span><br><span class="line">  rules:</span><br><span class="line">  - host: www.jsonjsonstart.dpdns.org  <span class="comment"># replace with your own domain</span></span><br><span class="line">    http:</span><br><span class="line">      paths:</span><br><span class="line">      - path: /</span><br><span class="line">        pathType: Prefix</span><br><span class="line">        backend:</span><br><span class="line">          service:</span><br><span class="line">            name: nginx-service</span><br><span class="line">            port:</span><br><span class="line">              number: 80</span><br></pre></td></tr></table></figure><p><img src="/img/article/ingress1.png" alt="ingress1" 
/></p><p>查看创建的ingress</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">alicloud:/# kubectl get ingress nginx-alb-ingress </span><br><span class="line">NAME                CLASS   HOSTS                         ADDRESS   PORTS   AGE</span><br><span class="line">nginx-alb-ingress   alb     www.jsonjsonstart.dpdns.org             80      104s</span><br></pre></td></tr></table></figure><p>查看 ALB 控制台<br /><img src="/img/article/ALB1.png" alt="ALB1" /></p><h3 id="访问测试"><a class="markdownIt-Anchor" href="#访问测试"></a> 访问测试</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">C:\Users\admin&gt;curl -H <span class="string">&quot;Host: www.jsonjsonstart.dpdns.org&quot;</span> http://alb-n2uzltobvzhbr32cq4.cn-hangzhou.alb.aliyuncsslb.com</span><br><span class="line">C:\Users\admin&gt;curl -H <span class="string">&quot;Host: www.jsonjsonstart.dpdns.org&quot;</span> http://47.97.243.72/ --resolve www.jsonjsonstart.dpdns.org:80:47.97.243.72</span><br></pre></td></tr></table></figure><p>将上面的DNS记录配置到自己的域名<br />由于我这里有一个空闲未使用的域名：<a href="http://jsonjsonstart.dpdns.org">jsonjsonstart.dpdns.org</a>，然后到cloudflare平台设置dns记录<br /><img src="/img/article/dns1.png" alt="dns1" /></p><p>最后通过浏览器访问</p><p><img src="/img/article/result.png" alt="result" /></p><p>由于该域名是免费申请的，未在国内备案，所以提示如上，但是验证应用访问是没有问题的</p><p>当然域名配置最好配置https：<a href="http://alb.ingress.kubernetes.io/certificate-ids:">alb.ingress.kubernetes.io/certificate-ids:</a> “your-cert-id”，去阿里云证书管理平台创建或者使用免费的证书机构cert-manager</p>]]>
    </content>
    <id>https://jsonhc.github.io/posts/1243066715/</id>
    <link href="https://jsonhc.github.io/posts/1243066715/"/>
    <published>2026-02-28T12:44:00.000Z</published>
    <summary>
      <![CDATA[<hr />
<p>This article mainly describes how to create an Alibaba Cloud ACK cluster, deploy a simple application, and access it through a domain name</p>
<hr />
<h3 id="在阿里云控制平台创建ack集群负载均衡架构为alb-ingress"><a class="markdownIt-Anchor" href="#在]]>
    </summary>
<title>Configuring ALB Ingress on an Alibaba Cloud ACK cluster for domain-based access</title>
    <updated>2026-03-01T13:49:11.709Z</updated>
  </entry>
  <entry>
    <author>
      <name>starttech</name>
    </author>
    <category term="随笔" scheme="https://jsonhc.github.io/categories/%E9%9A%8F%E7%AC%94/"/>
    <category term="随笔" scheme="https://jsonhc.github.io/tags/%E9%9A%8F%E7%AC%94/"/>
    <content>
<![CDATA[<hr /><p>Hello, fellow tech enthusiasts! This article covers installing Kubernetes on a CentOS 9 system. This guide walks you step by step through getting the cluster up and running, from installation to configuration, with nothing left out</p><hr /><h3 id="先决条件"><a class="markdownIt-Anchor" href="#先决条件"></a> Prerequisites</h3><p>Before starting the installation, make sure you have the following prerequisites:</p><ul><li>At least three nodes (one master node and two worker nodes) running CentOS 9.</li><li>Each node should have at least 2GB of memory and 2 CPU cores.</li><li>If no DNS is set up, configure name resolution for every node in each node's /etc/hosts file</li></ul><p>This article is a deployment tutorial, so only two CentOS 9 nodes are set up: one master and one worker</p><table><thead><tr><th>Hostname</th><th>IP</th><th>Memory (GB)</th><th>Cores</th><th>OS</th></tr></thead><tbody><tr><td>master</td><td>192.168.213.30</td><td>16</td><td>4</td><td>centos9</td></tr><tr><td>worker1</td><td>192.168.213.31</td><td>16</td><td>4</td><td>centos9</td></tr></tbody></table><p>Installing the Kubernetes cluster on CentOS 9</p><h3 id="配置ip"><a class="markdownIt-Anchor" href="#配置ip"></a> Configure IP addresses</h3><ul><li>master node</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# nmcli connection modify <span class="string">&quot;ens160&quot;</span> ipv4.method manual ipv4.addresses 192.168.213.30/24 ipv4.gateway 192.168.213.2 ipv4.dns <span class="string">&quot;192.168.213.2&quot;</span></span><br><span class="line">[root@localhost ~]# nmcli connection up <span class="string">&quot;ens160&quot;</span></span><br><span class="line">Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)</span><br></pre></td></tr></table></figure><ul><li>worker1 node</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# nmcli connection modify <span class="string">&quot;ens160&quot;</span> ipv4.method manual ipv4.addresses 192.168.213.31/24 ipv4.gateway 192.168.213.2 ipv4.dns <span class="string">&quot;192.168.213.2&quot;</span></span><br><span class="line">[root@localhost ~]# nmcli connection up <span class="string">&quot;ens160&quot;</span></span><br><span class="line">Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)</span><br></pre></td></tr></table></figure><p>After the IPs are configured, set up the hosts file (required on both nodes)</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# <span class="built_in">cat</span> /etc/hosts</span><br><span class="line">127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4</span><br><span class="line">::1         localhost localhost.localdomain localhost6 localhost6.localdomain6</span><br><span class="line"></span><br><span class="line">192.168.213.30  master</span><br><span class="line">192.168.213.31  worker1</span><br></pre></td></tr></table></figure><h3 id="配置主机名"><a class="markdownIt-Anchor" href="#配置主机名"></a> Configure hostnames</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# hostnamectl set-hostname master</span><br><span class="line">[root@localhost ~]# hostnamectl set-hostname worker1</span><br></pre></td></tr></table></figure><h3 id="关闭防火墙和设置selinux两个节点都需要"><a class="markdownIt-Anchor" href="#关闭防火墙和设置selinux两个节点都需要"></a> Disable the firewall and set SELinux to permissive (both nodes)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# systemctl stop firewalld.service</span><br><span class="line">[root@master ~]# systemctl <span class="built_in">disable</span> firewalld.service</span><br><span class="line">Removed <span class="string">&quot;/etc/systemd/system/multi-user.target.wants/firewalld.service&quot;</span>.</span><br><span class="line">Removed <span class="string">&quot;/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service&quot;</span>.</span><br><span class="line">[root@master ~]# setenforce 0</span><br><span class="line">[root@master ~]# sed -i <span class="string">&#x27;s/^SELINUX=enforcing$/SELINUX=permissive/&#x27;</span> /etc/selinux/config</span><br></pre></td></tr></table></figure><h3 id="关闭swap两个节点都需要"><a class="markdownIt-Anchor" href="#关闭swap两个节点都需要"></a>
Disable swap (both nodes)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# sed -ri <span class="string">&#x27;s/.*swap.*/#&amp;/&#x27;</span> /etc/fstab</span><br><span class="line">[root@master ~]# swapoff -a</span><br></pre></td></tr></table></figure><h3 id="配置aliyun源两个节点都需要"><a class="markdownIt-Anchor" href="#配置aliyun源两个节点都需要"></a> Configure the Aliyun repositories (both nodes)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">tee</span> /etc/yum.repos.d/centos.repo &gt; /dev/null &lt;&lt; <span class="string">&#x27;EOF&#x27;</span></span><br><span class="line">[baseos]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - BaseOS - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/BaseOS/x86_64/os/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=1</span><br><span class="line">[baseos-debuginfo]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - BaseOS Debuginfo - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/BaseOS/x86_64/debug/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line">[baseos-source]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - BaseOS Source - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/BaseOS/source/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line">[appstream]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - AppStream - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/AppStream/x86_64/os/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=1</span><br><span class="line">[appstream-debuginfo]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - AppStream Debuginfo - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/AppStream/x86_64/debug/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line">[appstream-source]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - AppStream Source - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/AppStream/source/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><h3 id="安装内核相关的包文件两个节点都需要"><a class="markdownIt-Anchor" href="#安装内核相关的包文件两个节点都需要"></a> Install kernel-related packages (both nodes)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# dnf install kernel-devel-$(<span class="built_in">uname</span> -r)</span><br></pre></td></tr></table></figure><h3 id="启用内核模块配置桥接iptables两个节点都需要"><a class="markdownIt-Anchor" href="#启用内核模块配置桥接iptables两个节点都需要"></a> Enable kernel modules and configure bridged iptables (both nodes)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# modprobe
br_netfilter</span><br><span class="line">[root@master ~]# modprobe ip_vs</span><br><span class="line">[root@master ~]# modprobe ip_vs_rr</span><br><span class="line">[root@master ~]# modprobe ip_vs_wrr</span><br><span class="line">[root@master ~]# modprobe ip_vs_sh</span><br><span class="line">[root@master ~]# modprobe overlay</span><br></pre></td></tr></table></figure><p>What the commands above do: they load the kernel modules required for Kubernetes to run properly. By loading these modules, you ensure the server is ready for the Kubernetes installation and can effectively handle networking and load-balancing tasks within the cluster</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cat</span> &gt; /etc/modules-load.d/kubernetes.conf &lt;&lt; <span class="string">EOF</span></span><br><span class="line"><span class="string">br_netfilter</span></span><br><span class="line"><span class="string">ip_vs</span></span><br><span class="line"><span class="string">ip_vs_rr</span></span><br><span class="line"><span class="string">ip_vs_wrr</span></span><br><span class="line"><span class="string">ip_vs_sh</span></span><br><span class="line"><span class="string">overlay</span></span><br><span class="line"><span class="string">EOF</span></span><br></pre></td></tr></table></figure><p>This makes sure the modules are loaded again at system boot</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cat</span> &gt; /etc/sysctl.d/kubernetes.conf &lt;&lt; <span class="string">EOF</span></span><br><span class="line"><span class="string">net.ipv4.ip_forward = 1</span></span><br><span class="line"><span class="string">net.bridge.bridge-nf-call-ip6tables = 1</span></span><br><span class="line"><span class="string">net.bridge.bridge-nf-call-iptables = 1</span></span><br><span class="line"><span class="string">EOF</span></span><br></pre></td></tr></table></figure><ul><li>net.bridge.bridge-nf-call-ip6tables: lets iptables process bridged IPv6 traffic</li><li>net.bridge.bridge-nf-call-iptables: lets iptables process bridged IPv4 traffic</li><li>net.ipv4.ip_forward: enables IPv4 packet forwarding<br />Setting these sysctl parameters ensures the system is configured to support Kubernetes networking requirements and traffic forwarding inside the cluster. They are essential for the smooth operation of the Kubernetes networking components</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# sysctl --system</span><br></pre></td></tr></table></figure><p>That completes the OS-level initialization of the k8s nodes; next comes the rest of the configuration.<br />Here Docker is still used as the container runtime for k8s instead of containerd (both nodes)</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# <span class="built_in">cd</span> /etc/yum.repos.d/ &amp;&amp; wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo</span><br><span class="line">[root@master yum.repos.d]# <span class="built_in">cd</span></span><br><span class="line">[root@master ~]# yum install docker-ce -y</span><br><span class="line">[root@master ~]# systemctl start docker</span><br><span class="line">[root@master ~]# systemctl status docker</span><br><span class="line">[root@master ~]# systemctl <span class="built_in">enable</span> docker</span><br></pre></td></tr></table></figure><ul><li>Configure Docker registry mirrors</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cat</span> &lt;&lt; <span class="string">EOF &gt; /etc/docker/daemon.json</span></span><br><span class="line"><span class="string">&#123;</span></span><br><span class="line"><span class="string">    &quot;registry-mirrors&quot;: [</span></span><br><span class="line"><span class="string">     &quot;https://docker.1ms.run&quot;,</span></span><br><span class="line"><span class="string">     &quot;https://docker.aityp.com&quot;,</span></span><br><span class="line"><span class="string">     &quot;https://docker.m.daocloud.io&quot;</span></span><br><span class="line"><span class="string">     ]</span></span><br><span class="line"><span class="string">&#125;</span></span><br><span class="line"><span class="string">EOF</span></span><br><span class="line"><span class="string">systemctl daemon-reload</span></span><br><span class="line"><span class="string">systemctl restart
docker</span></span><br></pre></td></tr></table></figure><ul><li>Test pulling an image</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@worker1 ~]# docker pull nginx</span><br></pre></td></tr></table></figure><ul><li>If pulling images still fails, it is recommended to configure a proxy for Docker, or to download the required images offline in advance. Docker proxy configuration:</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"></span><br><span class="line">[root@master ~]# <span class="built_in">mkdir</span> -p /etc/systemd/system/docker.service.d</span><br><span class="line">[root@master ~]# vim /etc/systemd/system/docker.service.d/http-proxy.conf</span><br><span class="line">[root@master ~]# <span class="built_in">cat</span> /etc/systemd/system/docker.service.d/http-proxy.conf</span><br><span class="line">[Service]</span><br><span class="line">Environment=<span class="string">&quot;HTTPS_PROXY=http://192.168.3.31:7897&quot;</span></span><br><span class="line">[root@master ~]# systemctl daemon-reload</span><br><span class="line">[root@master ~]# systemctl restart docker</span><br><span class="line">[root@master ~]# systemctl show --property=Environment docker</span><br><span class="line">Environment=HTTPS_PROXY=http://192.168.3.31:7897</span><br></pre></td></tr></table></figure><h3 id="安装cri-dockerd两个节点都需要"><a class="markdownIt-Anchor" href="#安装cri-dockerd两个节点都需要"></a> Install cri-dockerd (both nodes)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.16/cri-dockerd-0.3.16.amd64.tgz</span><br><span class="line">[root@master ~]# tar xf cri-dockerd-0.3.16.amd64.tgz</span><br><span class="line">[root@master ~]# <span class="built_in">cp</span> cri-dockerd/cri-dockerd /usr/bin/</span><br><span class="line">[root@master ~]# systemctl <span class="built_in">enable</span> cri-docker --now</span><br><span class="line">[root@master ~]# systemctl <span class="built_in">enable</span> cri-docker.socket
--now</span><br><span class="line">[root@master ~]# systemctl status cri-docker</span><br></pre></td></tr></table></figure><ul><li>The services referenced above are cri-docker.service and cri-docker.socket; their unit files are as follows</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"></span><br><span class="line">[root@master ~]# <span class="built_in">cat</span> /etc/systemd/system/cri-docker.service</span><br><span class="line">[Unit]</span><br><span class="line">Description=CRI Interface <span class="keyword">for</span> Docker Application Container Engine</span><br><span class="line">Documentation=https://docs.mirantis.com</span><br><span class="line">After=network-online.target firewalld.service docker.service</span><br><span class="line">Wants=network-online.target</span><br><span class="line">Requires=cri-docker.socket</span><br><span class="line">[Service]</span><br><span class="line">Type=notify</span><br><span class="line">ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://</span><br><span class="line">ExecReload=/bin/kill -s HUP <span class="variable">$MAINPID</span></span><br><span class="line">TimeoutSec=0</span><br><span class="line">RestartSec=2</span><br><span class="line">Restart=always</span><br><span class="line"><span class="comment"># Note that StartLimit* options were moved from &quot;Service&quot; to &quot;Unit&quot; in systemd 229.</span></span><br><span class="line"><span class="comment"># Both the old, and new location are accepted by systemd 229 and up, so using the old location</span></span><br><span class="line"><span class="comment"># to make them work for either version of systemd.</span></span><br><span class="line">StartLimitBurst=3</span><br><span class="line"><span class="comment"># Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.</span></span><br><span class="line"><span class="comment"># Both the old, and new name are accepted by systemd 230 and up, so using the old name to
make</span></span><br><span class="line"><span class="comment"># this option work for either version of systemd.</span></span><br><span class="line">StartLimitInterval=60s</span><br><span class="line"><span class="comment"># Having non-zero Limit*s causes performance problems due to accounting overhead</span></span><br><span class="line"><span class="comment"># in the kernel. We recommend using cgroups to do container-local accounting.</span></span><br><span class="line">LimitNOFILE=infinity</span><br><span class="line">LimitNPROC=infinity</span><br><span class="line">LimitCORE=infinity</span><br><span class="line"><span class="comment"># Comment TasksMax if your systemd version does not support it.</span></span><br><span class="line"><span class="comment"># Only systemd 226 and above support this option.</span></span><br><span class="line">TasksMax=infinity</span><br><span class="line">Delegate=<span class="built_in">yes</span></span><br><span class="line">KillMode=process</span><br><span class="line">[Install]</span><br><span class="line">WantedBy=multi-user.target</span><br></pre></td></tr></table></figure><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# <span class="built_in">cat</span> /etc/systemd/system/cri-docker.socket</span><br><span class="line">[Unit]</span><br><span class="line">Description=CRI Docker Socket <span class="keyword">for</span> the API</span><br><span class="line">PartOf=cri-docker.service</span><br><span class="line">[Socket]</span><br><span class="line">ListenStream=%t/cri-dockerd.sock</span><br><span class="line">SocketMode=0660</span><br><span class="line">SocketUser=root</span><br><span class="line">SocketGroup=docker</span><br><span class="line">[Install]</span><br><span class="line">WantedBy=sockets.target</span><br></pre></td></tr></table></figure><p>Once the cri-dockerd service has been verified as healthy on both nodes, you can continue with the next steps</p><h3 id="安装kubernetes组件两个节点都需要"><a class="markdownIt-Anchor"
href="#安装kubernetes组件两个节点都需要"></a> Install the Kubernetes components (both nodes)</h3><ul><li>Configure the Kubernetes 1.35 repository</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">cat</span> &lt;&lt; <span class="string">EOF &gt; /etc/yum.repos.d/kubernetes.repo</span></span><br><span class="line"><span class="string">[kubernetes]</span></span><br><span class="line"><span class="string">name=Kubernetes</span></span><br><span class="line"><span class="string">baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.35/rpm/</span></span><br><span class="line"><span class="string">enabled=1</span></span><br><span class="line"><span class="string">gpgcheck=1</span></span><br><span class="line"><span class="string">gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.35/rpm/repodata/repomd.xml.key</span></span><br><span class="line"><span class="string">exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni</span></span><br><span class="line"><span class="string">EOF</span></span><br></pre></td></tr></table></figure><ul><li>Install the Kubernetes packages</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes</span><br></pre></td></tr></table></figure><p>The --disableexcludes=kubernetes flag ensures that packages from the Kubernetes repository are not excluded during installation</p><ul><li>Start and enable the kubelet service</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# systemctl <span class="built_in">enable</span> --now kubelet.service</span><br></pre></td></tr></table></figure><h3 id="初始化kubernetes控制平面master节点操作"><a class="markdownIt-Anchor" href="#初始化kubernetes控制平面master节点操作"></a> Initialize the Kubernetes control plane (master node)</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# kubeadm init --apiserver-advertise-address=192.168.213.30 --image-repository registry.aliyuncs.com/google_containers
--service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock</span><br></pre></td></tr></table></figure><p>The initialization process:</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# kubeadm init --apiserver-advertise-address=192.168.213.30 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/16 --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock</span><br><span class="line">[init] Using Kubernetes version: v1.35.0</span><br><span class="line">[preflight] Running pre-flight checks</span><br><span class="line">        [WARNING ContainerRuntimeVersion]: You must update your container runtime to a version that supports the CRI method RuntimeConfig. Falling back to using cgroupDriver from kubelet config will be removed <span class="keyword">in</span> 1.36. For more information, see https://git.k8s.io/enhancements/keps/sig-node/4033-group-driver-detection-over-cri</span><br><span class="line">        [WARNING SystemVerification]: kernel release 5.14.0-665.el9.x86_64 is unsupported. Supported LTS versions from the 5.x series are 5.4, 5.10 and 5.15. Any 6.x version is also supported.
For cgroups v2 support, the recommended version is 5.10 or newer</span><br><span class="line">        [WARNING Hostname]: hostname <span class="string">&quot;master&quot;</span> could not be reached</span><br><span class="line">        [WARNING Hostname]: hostname <span class="string">&quot;master&quot;</span>: lookup master on 192.168.213.2:53: no such host</span><br><span class="line">[preflight] Pulling images required <span class="keyword">for</span> setting up a Kubernetes cluster</span><br><span class="line">[preflight] This might take a minute or two, depending on the speed of your internet connection</span><br><span class="line">[preflight] You can also perform this action beforehand using <span class="string">&#x27;kubeadm config images pull&#x27;</span></span><br><span class="line">W0131 22:35:18.560702   59468 checks.go:906] detected that the sandbox image <span class="string">&quot;registry.k8s.io/pause:3.9&quot;</span> of the container runtime is inconsistent with that used by kubeadm. 
It is recommended to use <span class="string">&quot;registry.aliyuncs.com/google_containers/pause:3.10.1&quot;</span> as the CRI sandbox image.</span><br><span class="line">[certs] Using certificateDir folder <span class="string">&quot;/etc/kubernetes/pki&quot;</span></span><br><span class="line">[certs] Generating <span class="string">&quot;ca&quot;</span> certificate and key</span><br><span class="line">[certs] Generating <span class="string">&quot;apiserver&quot;</span> certificate and key</span><br><span class="line">[certs] apiserver serving cert is signed <span class="keyword">for</span> DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.213.30]</span><br><span class="line">[certs] Generating <span class="string">&quot;apiserver-kubelet-client&quot;</span> certificate and key</span><br><span class="line">[certs] Generating <span class="string">&quot;front-proxy-ca&quot;</span> certificate and key</span><br><span class="line">[certs] Generating <span class="string">&quot;front-proxy-client&quot;</span> certificate and key</span><br><span class="line">[certs] Generating <span class="string">&quot;etcd/ca&quot;</span> certificate and key</span><br><span class="line">[certs] Generating <span class="string">&quot;etcd/server&quot;</span> certificate and key</span><br><span class="line">[certs] etcd/server serving cert is signed <span class="keyword">for</span> DNS names [localhost master] and IPs [192.168.213.30 127.0.0.1 ::1]</span><br><span class="line">[certs] Generating <span class="string">&quot;etcd/peer&quot;</span> certificate and key</span><br><span class="line">[certs] etcd/peer serving cert is signed <span class="keyword">for</span> DNS names [localhost master] and IPs [192.168.213.30 127.0.0.1 ::1]</span><br><span class="line">[certs] Generating <span class="string">&quot;etcd/healthcheck-client&quot;</span> certificate and key</span><br><span class="line">[certs] 
Generating <span class="string">&quot;apiserver-etcd-client&quot;</span> certificate and key</span><br><span class="line">[certs] Generating <span class="string">&quot;sa&quot;</span> key and public key</span><br><span class="line">[kubeconfig] Using kubeconfig folder <span class="string">&quot;/etc/kubernetes&quot;</span></span><br><span class="line">[kubeconfig] Writing <span class="string">&quot;admin.conf&quot;</span> kubeconfig file</span><br><span class="line">[kubeconfig] Writing <span class="string">&quot;super-admin.conf&quot;</span> kubeconfig file</span><br><span class="line">[kubeconfig] Writing <span class="string">&quot;kubelet.conf&quot;</span> kubeconfig file</span><br><span class="line">[kubeconfig] Writing <span class="string">&quot;controller-manager.conf&quot;</span> kubeconfig file</span><br><span class="line">[kubeconfig] Writing <span class="string">&quot;scheduler.conf&quot;</span> kubeconfig file</span><br><span class="line">[etcd] Creating static Pod manifest <span class="keyword">for</span> <span class="built_in">local</span> etcd <span class="keyword">in</span> <span class="string">&quot;/etc/kubernetes/manifests&quot;</span></span><br><span class="line">[control-plane] Using manifest folder <span class="string">&quot;/etc/kubernetes/manifests&quot;</span></span><br><span class="line">[control-plane] Creating static Pod manifest <span class="keyword">for</span> <span class="string">&quot;kube-apiserver&quot;</span></span><br><span class="line">[control-plane] Creating static Pod manifest <span class="keyword">for</span> <span class="string">&quot;kube-controller-manager&quot;</span></span><br><span class="line">[control-plane] Creating static Pod manifest <span class="keyword">for</span> <span class="string">&quot;kube-scheduler&quot;</span></span><br><span class="line">[kubelet-start] Writing kubelet environment file with flags to file <span class="string">&quot;/var/lib/kubelet/kubeadm-flags.env&quot;</span></span><br><span 
class="line">[kubelet-start] Writing kubelet configuration to file <span class="string">&quot;/var/lib/kubelet/instance-config.yaml&quot;</span></span><br><span class="line">[patches] Applied patch of <span class="built_in">type</span> <span class="string">&quot;application/strategic-merge-patch+json&quot;</span> to target <span class="string">&quot;kubeletconfiguration&quot;</span></span><br><span class="line">[kubelet-start] Writing kubelet configuration to file <span class="string">&quot;/var/lib/kubelet/config.yaml&quot;</span></span><br><span class="line">[kubelet-start] Starting the kubelet</span><br><span class="line">[wait-control-plane] Waiting <span class="keyword">for</span> the kubelet to boot up the control plane as static Pods from directory <span class="string">&quot;/etc/kubernetes/manifests&quot;</span></span><br><span class="line">[kubelet-check] Waiting <span class="keyword">for</span> a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s</span><br><span class="line">[kubelet-check] The kubelet is healthy after 1.002242613s</span><br><span class="line">[control-plane-check] Waiting <span class="keyword">for</span> healthy control plane components. 
This can take up to 4m0s</span><br><span class="line">[control-plane-check] Checking kube-apiserver at https://192.168.213.30:6443/livez</span><br><span class="line">[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz</span><br><span class="line">[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez</span><br><span class="line">[control-plane-check] kube-controller-manager is healthy after 8.506861505s</span><br><span class="line">[control-plane-check] kube-scheduler is healthy after 9.943662123s</span><br><span class="line">[control-plane-check] kube-apiserver is healthy after 12.004015281s</span><br><span class="line">[upload-config] Storing the configuration used <span class="keyword">in</span> ConfigMap <span class="string">&quot;kubeadm-config&quot;</span> <span class="keyword">in</span> the <span class="string">&quot;kube-system&quot;</span> Namespace</span><br><span class="line">[kubelet] Creating a ConfigMap <span class="string">&quot;kubelet-config&quot;</span> <span class="keyword">in</span> namespace kube-system with the configuration <span class="keyword">for</span> the kubelets <span class="keyword">in</span> the cluster</span><br><span class="line">[upload-certs] Skipping phase. 
Please see --upload-certs</span><br><span class="line">[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]</span><br><span class="line">[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]</span><br><span class="line">[bootstrap-token] Using token: eydjwt.nrde0uu9mslzfc13</span><br><span class="line">[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles</span><br><span class="line">[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes</span><br><span class="line">[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs <span class="keyword">in</span> order <span class="keyword">for</span> nodes to get long term certificate credentials</span><br><span class="line">[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token</span><br><span class="line">[bootstrap-token] Configured RBAC rules to allow certificate rotation <span class="keyword">for</span> all node client certificates <span class="keyword">in</span> the cluster</span><br><span class="line">[bootstrap-token] Creating the <span class="string">&quot;cluster-info&quot;</span> ConfigMap <span class="keyword">in</span> the <span class="string">&quot;kube-public&quot;</span> namespace</span><br><span class="line">[kubelet-finalize] Updating <span class="string">&quot;/etc/kubernetes/kubelet.conf&quot;</span> to point to a rotatable kubelet client certificate and key</span><br><span class="line">[addons] Applied essential addon: CoreDNS</span><br><span class="line">[addons] Applied essential addon: kube-proxy</span><br><span class="line">Your Kubernetes control-plane has initialized successfully!</span><br><span class="line">To start using your 
cluster, you need to run the following as a regular user:</span><br><span class="line">    <span class="built_in">mkdir</span> -p <span class="variable">$HOME</span>/.kube</span><br><span class="line">    <span class="built_in">sudo</span> <span class="built_in">cp</span> -i /etc/kubernetes/admin.conf <span class="variable">$HOME</span>/.kube/config</span><br><span class="line">    <span class="built_in">sudo</span> <span class="built_in">chown</span> $(<span class="built_in">id</span> -u):$(<span class="built_in">id</span> -g) <span class="variable">$HOME</span>/.kube/config</span><br><span class="line">Alternatively, <span class="keyword">if</span> you are the root user, you can run:</span><br><span class="line">    <span class="built_in">export</span> KUBECONFIG=/etc/kubernetes/admin.conf</span><br><span class="line">You should now deploy a pod network to the cluster.</span><br><span class="line">Run <span class="string">&quot;kubectl apply -f [podnetwork].yaml&quot;</span> with one of the options listed at:</span><br><span class="line">    https://kubernetes.io/docs/concepts/cluster-administration/addons/</span><br><span class="line">Then you can <span class="built_in">join</span> any number of worker nodes by running the following on each as root:</span><br><span class="line">kubeadm <span class="built_in">join</span> 192.168.213.30:6443 --token eydjwt.nrde0uu9mslzfc13 \</span><br><span class="line">        --discovery-token-ca-cert-hash sha256:288c17b1953041849daa0fd2f2c6ddf3d717c4ae994afbf555eef880e036228b</span><br></pre></td></tr></table></figure><p>没有报错，执行结果很丝滑，按照提示做相关操作</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# <span class="built_in">mkdir</span> -p <span class="variable">$HOME</span>/.kube</span><br><span class="line">[root@master ~]# <span class="built_in">sudo</span> <span class="built_in">cp</span> -i /etc/kubernetes/admin.conf <span 
class="variable">$HOME</span>/.kube/config</span><br><span class="line">[root@master ~]# <span class="built_in">sudo</span> <span class="built_in">chown</span> $(<span class="built_in">id</span> -u):$(<span class="built_in">id</span> -g) <span class="variable">$HOME</span>/.kube/config</span><br></pre></td></tr></table></figure><p>查看node状态</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# kubectl get nodes</span><br><span class="line">NAME     STATUS     ROLES           AGE     VERSION</span><br><span class="line">master   NotReady   control-plane   2m41s   v1.35.0</span><br></pre></td></tr></table></figure><p>master节点没问题后，加入工作节点worker1，通过提示的命令进行操作</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@worker1 ~]# kubeadm <span class="built_in">join</span> 192.168.213.30:6443 --token eydjwt.nrde0uu9mslzfc13 --discovery-token-ca-cert-hash sha256:288c17b1953041849daa0fd2f2c6ddf3d717c4ae994afbf555eef880e036228b --cri-socket unix:///var/run/cri-dockerd.sock</span><br></pre></td></tr></table></figure><p>查看加入的worker1节点：</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# kubectl get nodes</span><br><span class="line">NAME      STATUS     ROLES           AGE   VERSION</span><br><span class="line">master    NotReady   control-plane   26m   v1.35.0</span><br><span class="line">worker1   NotReady   &lt;none&gt;          35s   v1.35.0</span><br></pre></td></tr></table></figure><h3 id="安装网络插件calico"><a class="markdownIt-Anchor" href="#安装网络插件calico"></a> 安装网络插件calico</h3><ul><li>实现集群中各个 Pod 之间的联网，为 Calico 部署 Tigera Operator</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml</span><br><span class="line">[root@master ~]# kubectl create -f 
tigera-operator.yaml</span><br></pre></td></tr></table></figure><ul><li>下载自定义 Calico 资源清单</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml</span><br><span class="line">[root@master ~]# sed -i <span class="string">&#x27;s/cidr: 192\.168\.0\.0\/16/cidr: 10.244.0.0\/16/g&#x27;</span> custom-resources.yaml</span><br></pre></td></tr></table></figure><ul><li>上面对 pod 子网的修改是为了和之前 kubeadm init 初始化时指定的子网保持一致</li></ul>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# kubectl create -f custom-resources.yaml</span><br><span class="line">installation.operator.tigera.io/default created</span><br><span class="line">apiserver.operator.tigera.io/default created</span><br></pre></td></tr></table></figure><p>验证工作节点</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# kubectl get nodes</span><br><span class="line">NAME      STATUS   ROLES           AGE   VERSION</span><br><span class="line">master    Ready    control-plane   37m   v1.35.0</span><br><span class="line">worker1   Ready    &lt;none&gt;          12m   v1.35.0</span><br></pre></td></tr></table></figure><p>至此 kubernetes1.35 集群安装成功，创建一个 nginx 应用进行测试</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# <span class="built_in">cat</span> nginx-deploy.yaml</span><br><span class="line">apiVersion: apps/v1</span><br><span class="line">kind: Deployment</span><br><span class="line">metadata:</span><br><span class="line">  name: nginx-deployment</span><br><span class="line">  labels:</span><br><span class="line">    app: nginx</span><br><span class="line">spec:</span><br><span class="line">  replicas: 1</span><br><span class="line">  selector:</span><br><span class="line">    matchLabels:</span><br><span class="line">      app: nginx</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: nginx</span><br><span class="line">    spec:</span><br><span class="line">      containers:</span><br><span class="line">      - name: nginx</span><br><span class="line">        image: nginx:latest</span><br><span class="line">        ports:</span><br><span class="line">        - containerPort: 80</span><br></pre></td></tr></table></figure><p>然后进行部署</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# kubectl create -f nginx-deploy.yaml</span><br></pre></td></tr></table></figure><p>查看部署的 nginx 的状态</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# kubectl get pod</span><br><span class="line">NAME                                READY   STATUS    RESTARTS   AGE</span><br><span class="line">nginx-deployment-59f86b59ff-hm8d5   1/1     Running   0          32s</span><br></pre></td></tr></table></figure><p>将部署的应用暴露给外部网络</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# <span class="built_in">cat</span> nginx-svc.yaml</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  name: nginx-service</span><br><span class="line">spec:</span><br><span class="line">  selector:</span><br><span class="line">    app: nginx</span><br><span class="line">  ports:</span><br><span class="line">  - protocol: TCP</span><br><span class="line">    port: 80</span><br><span class="line">    targetPort: 80</span><br><span class="line">  <span class="built_in">type</span>: NodePort</span><br><span class="line">[root@master ~]# kubectl apply -f nginx-svc.yaml</span><br></pre></td></tr></table></figure><p>查询服务状态</p>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@master ~]# kubectl get svc</span><br><span class="line">NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE</span><br><span class="line">kubernetes      ClusterIP   10.96.0.1      &lt;none&gt;        443/TCP        43m</span><br><span class="line">nginx-service   NodePort    10.96.74.125   &lt;none&gt;        80:32694/TCP   4s</span><br></pre></td></tr></table></figure><p>通过 NodePort 暴露的 32694 端口进行访问：<a href="http://192.168.213.30:32694">http://192.168.213.30:32694</a></p><p>本指南介绍了在 CentOS 9 系统上安装 Kubernetes 的基本步骤</p>]]>
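<![CDATA[<p>补充一点说明（非原文步骤，仅为示意）：NodePort 默认由集群在 30000-32767 范围内随机分配端口（上面分配到的是 32694）。如果希望对外端口固定，可以在 Service 中显式指定 nodePort 字段，例如下面这个片段（假设 30080 端口尚未被占用）：</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  name: nginx-service</span><br><span class="line">spec:</span><br><span class="line">  selector:</span><br><span class="line">    app: nginx</span><br><span class="line">  type: NodePort</span><br><span class="line">  ports:</span><br><span class="line">  - protocol: TCP</span><br><span class="line">    port: 80</span><br><span class="line">    targetPort: 80</span><br><span class="line">    # nodePort 必须位于集群的 NodePort 区间内（默认 30000-32767）</span><br><span class="line">    nodePort: 30080</span><br></pre></td></tr></table></figure>]]>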
    </content>
    <id>https://jsonhc.github.io/posts/1243066713/</id>
    <link href="https://jsonhc.github.io/posts/1243066713/"/>
    <published>2026-02-27T13:00:00.000Z</published>
    <summary>
      <![CDATA[<hr />
<p>各位技术爱好者们，大家好！本文介绍在 CentOS 9 系统上安装 Kubernetes。本指南将一步步指导你完成集群从安装到配置的全过程，直至启动和运行，全程无遗漏</p>
<hr />
<h3 id="先决条件"><a class="markdownIt-Anc]]>
    </summary>
    <title>在 CentOS9上安装 Kubernetes1.35集群</title>
    <updated>2026-03-01T13:49:11.709Z</updated>
  </entry>
  <entry>
    <author>
      <name>starttech</name>
    </author>
    <category term="随笔" scheme="https://jsonhc.github.io/categories/%E9%9A%8F%E7%AC%94/"/>
    <category term="随笔" scheme="https://jsonhc.github.io/tags/%E9%9A%8F%E7%AC%94/"/>
    <content>
      <![CDATA[<h1 id="install-k3s-on-centos9"><a class="markdownIt-Anchor" href="#install-k3s-on-centos9"></a> install k3s on centos9</h1><h3 id="配置aliyun仓库"><a class="markdownIt-Anchor" href="#配置aliyun仓库"></a> 配置aliyun仓库</h3><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">tee /etc/yum.repos.d/centos.repo &gt; /dev/null &lt;&lt; &#x27;EOF&#x27;</span><br><span class="line">[baseos]</span><br><span class="line">name=CentOS Stream $releasever - BaseOS - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/BaseOS/x86_64/os/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=1</span><br><span class="line"></span><br><span class="line">[baseos-debuginfo]</span><br><span class="line">name=CentOS Stream $releasever - BaseOS Debuginfo - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/BaseOS/x86_64/debug/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line"></span><br><span class="line">[baseos-source]</span><br><span class="line">name=CentOS Stream $releasever - BaseOS Source - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/BaseOS/source/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line"></span><br><span class="line">[appstream]</span><br><span class="line">name=CentOS Stream $releasever - AppStream - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/AppStream/x86_64/os/</span><br><span 
class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=1</span><br><span class="line"></span><br><span class="line">[appstream-debuginfo]</span><br><span class="line">name=CentOS Stream $releasever - AppStream Debuginfo - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/AppStream/x86_64/debug/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line"></span><br><span class="line">[appstream-source]</span><br><span class="line">name=CentOS Stream $releasever - AppStream Source - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/AppStream/source/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><h3 id="配置ip"><a class="markdownIt-Anchor" href="#配置ip"></a> 配置ip</h3><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">nmcli connection modify &quot;ens160&quot; ipv4.method manual ipv4.addresses 192.168.213.110/24 ipv4.gateway 192.168.213.2 ipv4.dns &quot;192.168.213.2&quot;</span><br><span class="line">nmcli connection up &quot;ens160&quot;</span><br></pre></td></tr></table></figure><h3 id="安装k3s"><a class="markdownIt-Anchor" href="#安装k3s"></a> 安装k3s</h3><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# curl -sfL https://get.k3s.io | sh</span><br></pre></td></tr></table></figure><p><img src="/img/article/k3s1.png" alt="k3s1" /></p><h3 id="查看k3s状态"><a class="markdownIt-Anchor" href="#查看k3s状态"></a> 查看k3s状态</h3><figure class="highlight 
plaintext"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# systemctl status k3s</span><br><span class="line">[root@localhost ~]# kubectl get pod -A</span><br><span class="line">[root@localhost ~]# crictl image ls</span><br></pre></td></tr></table></figure><ul><li>由于网络原因镜像下载失败，Pod 状态都是 failed，需要配置镜像源加速</li></ul><h3 id="配置镜像源加速"><a class="markdownIt-Anchor" href="#配置镜像源加速"></a> 配置镜像源加速</h3>
<figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# cat &lt;&lt;EOF &gt; /etc/rancher/k3s/registries.yaml</span><br><span class="line">mirrors:</span><br><span class="line">  &quot;docker.io&quot;:</span><br><span class="line">    endpoint:</span><br><span class="line">      - &quot;https://docker.1ms.run&quot;</span><br><span class="line">      - &quot;https://docker-0.unsee.tech&quot;</span><br><span class="line">      - &quot;https://registry-1.docker.io&quot;</span><br><span class="line">  &quot;registry.k8s.io&quot;:</span><br><span class="line">    endpoint:</span><br><span class="line">      - &quot;https://k8s.m.daocloud.io&quot;</span><br><span class="line">EOF</span><br><span class="line">[root@localhost ~]# systemctl restart k3s</span><br><span class="line">[root@localhost ~]# systemctl status k3s</span><br></pre></td></tr></table></figure><p><img src="/img/article/k3s2.png" alt="k3s2" /></p><h3 id="通过k3s部署nginx进行测试"><a class="markdownIt-Anchor" href="#通过k3s部署nginx进行测试"></a> 通过k3s部署nginx进行测试</h3>
<figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# cat nginx.yaml</span><br><span class="line">---</span><br><span class="line">apiVersion: apps/v1</span><br><span class="line">kind: Deployment</span><br><span class="line">metadata:</span><br><span class="line">  name: nginx</span><br><span class="line">  labels:</span><br><span class="line">    app: nginx</span><br><span class="line">spec:</span><br><span class="line">  selector:</span><br><span
class="line">    matchLabels:</span><br><span class="line">      app: nginx</span><br><span class="line">  template:</span><br><span class="line">    metadata:</span><br><span class="line">      labels:</span><br><span class="line">        app: nginx</span><br><span class="line">    spec:</span><br><span class="line">      containers:</span><br><span class="line">      - name: nginx</span><br><span class="line">        image: nginx:latest</span><br><span class="line">        ports:</span><br><span class="line">        - containerPort: 80</span><br><span class="line">---</span><br><span class="line">apiVersion: v1</span><br><span class="line">kind: Service</span><br><span class="line">metadata:</span><br><span class="line">  name: nginx</span><br><span class="line">  labels:</span><br><span class="line">    app: nginx</span><br><span class="line">spec:</span><br><span class="line">  ports:</span><br><span class="line">    - protocol: TCP</span><br><span class="line">      port: 8081</span><br><span class="line">      targetPort: 80</span><br><span class="line">  selector:</span><br><span class="line">    app: nginx</span><br><span class="line">  type: LoadBalancer</span><br><span class="line">[root@localhost ~]# kubectl apply -f nginx.yaml</span><br><span class="line">[root@localhost ~]# kubectl get pods</span><br></pre></td></tr></table></figure><ul><li>当nginx部署成功后，通过如下进行访问</li></ul><h3 id="访问nginx服务"><a class="markdownIt-Anchor" href="#访问nginx服务"></a> 访问nginx服务</h3><figure class="highlight plaintext"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# kubectl get svc</span><br><span class="line">NAME         TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)          AGE</span><br><span class="line">kubernetes   ClusterIP      10.43.0.1       &lt;none&gt;            443/TCP          18m</span><br><span class="line">nginx        LoadBalancer   10.43.179.184   192.168.213.110   8081:30768/TCP   
56s</span><br></pre></td></tr></table></figure><p><img src="/img/article/k3s3.png" alt="k3s3" /></p><h3 id="设置k8s-config"><a class="markdownIt-Anchor" href="#设置k8s-config"></a> 设置k8s config</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# <span class="built_in">mkdir</span> -p ~/.kube</span><br><span class="line">[root@localhost ~]# <span class="built_in">cp</span> /etc/rancher/k3s/k3s.yaml ~/.kube/config</span><br><span class="line">[root@localhost ~]# <span class="built_in">chown</span> $(<span class="built_in">id</span> -u):$(<span class="built_in">id</span> -g) ~/.kube/config</span><br></pre></td></tr></table></figure><h3 id="安装helm"><a class="markdownIt-Anchor" href="#安装helm"></a> 安装helm</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash</span><br><span class="line">  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current</span><br><span class="line">                                 Dload  Upload   Total   Spent    Left  Speed</span><br><span class="line">100 11929  100 11929    0     0  23810      0 --:--:-- --:--:-- --:--:-- 23810</span><br><span class="line">[WARNING] Could not find git. It is required <span class="keyword">for</span> plugin installation.</span><br><span class="line">Downloading https://get.helm.sh/helm-v3.19.4-linux-amd64.tar.gz</span><br><span class="line">Verifying checksum... 
Done.</span><br><span class="line">Preparing to install helm into /usr/local/bin</span><br><span class="line">helm installed into /usr/local/bin/helm</span><br></pre></td></tr></table></figure><h3 id="添加rancher-repo"><a class="markdownIt-Anchor" href="#添加rancher-repo"></a> 添加rancher repo</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# helm repo add rancher-stable https://releases.rancher.com/server-charts/stable</span><br><span class="line">[root@localhost ~]# kubectl create namespace cattle-system</span><br></pre></td></tr></table></figure><h3 id="安装cert-manager"><a class="markdownIt-Anchor" href="#安装cert-manager"></a> 安装cert-manager</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.crds.yaml</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created</span><br><span class="line">customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created</span><br><span class="line">[root@localhost ~]# kubectl create namespace cert-manager</span><br><span class="line">namespace/cert-manager created</span><br><span class="line">[root@localhost ~]# helm repo add jetstack https://charts.jetstack.io</span><br><span class="line"><span class="string">&quot;jetstack&quot;</span> has been added to your repositories</span><br><span 
class="line">[root@localhost ~]# helm repo update</span><br><span class="line">Hang tight <span class="keyword">while</span> we grab the latest from your chart repositories...</span><br><span class="line">...Successfully got an update from the <span class="string">&quot;jetstack&quot;</span> chart repository</span><br><span class="line">...Successfully got an update from the <span class="string">&quot;rancher-stable&quot;</span> chart repository</span><br><span class="line">Update Complete. ⎈Happy Helming!⎈</span><br><span class="line">[root@localhost ~]# helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.11.0</span><br></pre></td></tr></table></figure><p><img src="/img/article/k3s4.png" alt="k3s4" /></p><h3 id="安装rancher"><a class="markdownIt-Anchor" href="#安装rancher"></a> 安装rancher</h3>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# helm install rancher rancher-stable/rancher --namespace cattle-system --<span class="built_in">set</span> hostname=rancher.kolukisa.org --<span class="built_in">set</span> bootstrapPassword=admin --<span class="built_in">set</span> ingress.tls.source=letsEncrypt --<span class="built_in">set</span> letsEncrypt.email=mail@kolukisa.org --<span class="built_in">set</span> replicas=1</span><br></pre></td></tr></table></figure><p><img src="/img/article/k3s5.png" alt="k3s5" /></p><ul><li>安装完成后如下</li></ul><p><img src="/img/article/k3s6.png" alt="k3s6" /></p><ul><li>将 rancher 的暴露方式修改为 LoadBalancer</li></ul>
<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">kubectl -n cattle-system edit svc rancher</span><br></pre></td></tr></table></figure><ul><li>将 type: ClusterIP 修改为 LoadBalancer，将 port: 443 修改为 8443，port: 80 修改为 8080</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# kubectl -n cattle-system get svc</span><br><span class="line">NAME                        TYPE     
      CLUSTER-IP      EXTERNAL-IP       PORT(S)                         AGE</span><br><span class="line">cm-acme-http-solver-jmrjt   NodePort       10.43.212.21    &lt;none&gt;            8089:31676/TCP                  12m</span><br><span class="line">imperative-api-extension    ClusterIP      10.43.131.234   &lt;none&gt;            6666/TCP                        11m</span><br><span class="line">rancher                     LoadBalancer   10.43.14.7      192.168.213.110   8080:31266/TCP,8443:32137/TCP   13m</span><br><span class="line">rancher-webhook             ClusterIP      10.43.215.43    &lt;none&gt;            443/TCP                         9m29s</span><br><span class="line">[root@localhost ~]# kubectl -n cattle-system get ingress</span><br><span class="line">NAME                        CLASS     HOSTS                  ADDRESS           PORTS     AGE</span><br><span class="line">cm-acme-http-solver-47h5w   traefik   rancher.kolukisa.org   192.168.213.110   80        12m</span><br><span class="line">rancher                     traefik   rancher.kolukisa.org   192.168.213.110   80, 443   13m</span><br></pre></td></tr></table></figure><h3 id="访问方式"><a class="markdownIt-Anchor" href="#访问方式"></a> 访问方式</h3><ul><li>通过ip+port访问</li></ul><p><img src="/img/article/k3s7.png" alt="k3s7" /></p><ul><li>通过域名访问</li></ul><p><img src="/img/article/k3s8.png" alt="k3s8" /></p>]]>
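<![CDATA[<p>补充一点说明（非原文步骤，仅为示意）：上面用 kubectl edit 手工修改 rancher Service 的方式，其中"改为 LoadBalancer"这一步也可以用 kubectl patch 一条命令完成，便于写进脚本；端口的修改仍需按实际 Service 的字段调整：</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"># 将 Service 类型改为 LoadBalancer</span><br><span class="line">kubectl -n cattle-system patch svc rancher -p <span class="string">&#x27;{&quot;spec&quot;:{&quot;type&quot;:&quot;LoadBalancer&quot;}}&#x27;</span></span><br><span class="line"># 确认修改结果</span><br><span class="line">kubectl -n cattle-system get svc rancher</span><br></pre></td></tr></table></figure>]]>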
    </content>
    <id>https://jsonhc.github.io/posts/1243066710/</id>
    <link href="https://jsonhc.github.io/posts/1243066710/"/>
    <published>2026-02-25T16:00:00.000Z</published>
    <summary>
      <![CDATA[<h1 id="install-k3s-on-centos9"><a class="markdownIt-Anchor" href="#install-k3s-on-centos9"></a> install k3s on centos9</h1>
<h3 id="配置aliyu]]>
    </summary>
    <title>install k3s on centos9</title>
    <updated>2026-03-01T13:49:11.709Z</updated>
  </entry>
  <entry>
    <author>
      <name>starttech</name>
    </author>
    <category term="随笔" scheme="https://jsonhc.github.io/categories/%E9%9A%8F%E7%AC%94/"/>
    <category term="随笔" scheme="https://jsonhc.github.io/tags/%E9%9A%8F%E7%AC%94/"/>
    <content>
      <![CDATA[<h1 id="install-microk8s-on-centos9"><a class="markdownIt-Anchor" href="#install-microk8s-on-centos9"></a> install microk8s on centos9</h1><h3 id="配置aliyun仓库"><a class="markdownIt-Anchor" href="#配置aliyun仓库"></a> 配置aliyun仓库</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line"><span class="built_in">tee</span> /etc/yum.repos.d/centos.repo &gt; /dev/null &lt;&lt; <span class="string">&#x27;EOF&#x27;</span></span><br><span class="line">[baseos]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - BaseOS - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/BaseOS/x86_64/os/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=1</span><br><span class="line"></span><br><span class="line">[baseos-debuginfo]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - BaseOS Debuginfo - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/BaseOS/x86_64/debug/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line"></span><br><span class="line">[baseos-source]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - BaseOS Source - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/BaseOS/source/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line"></span><br><span class="line">[appstream]</span><br><span class="line">name=CentOS Stream <span 
class="variable">$releasever</span> - AppStream - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/AppStream/x86_64/os/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=1</span><br><span class="line"></span><br><span class="line">[appstream-debuginfo]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - AppStream Debuginfo - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/AppStream/x86_64/debug/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line"></span><br><span class="line">[appstream-source]</span><br><span class="line">name=CentOS Stream <span class="variable">$releasever</span> - AppStream Source - mirrors.aliyun.com</span><br><span class="line">baseurl=https://mirrors.aliyun.com/centos-stream/9-stream/AppStream/source/tree/</span><br><span class="line">gpgcheck=1</span><br><span class="line">gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial</span><br><span class="line">enabled=0</span><br><span class="line">EOF</span><br></pre></td></tr></table></figure><h3 id="配置ip"><a class="markdownIt-Anchor" href="#配置ip"></a> 配置ip</h3><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">nmcli connection modify <span class="string">&quot;ens160&quot;</span> ipv4.method manual ipv4.addresses 192.168.213.110/24 ipv4.gateway 192.168.213.2 ipv4.dns <span class="string">&quot;192.168.213.2&quot;</span></span><br><span class="line">nmcli connection up <span class="string">&quot;ens160&quot;</span></span><br></pre></td></tr></table></figure><h3 id="系统初始化"><a class="markdownIt-Anchor" href="#系统初始化"></a> 系统初始化</h3><figure 
class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# systemctl stop firewalld.service</span><br><span class="line">[root@localhost ~]# systemctl <span class="built_in">disable</span> firewalld.service</span><br><span class="line">Removed <span class="string">&quot;/etc/systemd/system/multi-user.target.wants/firewalld.service&quot;</span>.</span><br><span class="line">Removed <span class="string">&quot;/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service&quot;</span>.</span><br><span class="line">[root@localhost ~]# setenforce 0</span><br><span class="line">[root@localhost ~]# sed -i <span class="string">&#x27;s/^SELINUX=enforcing$/SELINUX=permissive/&#x27;</span> /etc/selinux/config</span><br><span class="line">[root@localhost ~]#</span><br><span class="line">[root@localhost ~]#</span><br><span class="line">[root@localhost ~]# modprobe ip_tables</span><br><span class="line">[root@localhost ~]# modprobe ip_conntrack</span><br><span class="line">[root@localhost ~]# modprobe iptable_filter</span><br><span class="line">[root@localhost ~]# modprobe ipt_state</span><br></pre></td></tr></table></figure><h3 id="使用snap安装microk8s"><a class="markdownIt-Anchor" href="#使用snap安装microk8s"></a> 使用snap安装microk8s</h3><ul><li>安装snap</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# yum install epel-release</span><br><span class="line">[root@localhost ~]# yum install snapd</span><br><span class="line">[root@localhost ~]# systemctl <span class="built_in">enable</span> --now snapd.socket</span><br><span class="line">[root@localhost ~]# <span class="built_in">ln</span> -s /var/lib/snapd/snap /snap</span><br></pre></td></tr></table></figure><ul><li>安装microk8s</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# snap install microk8s --classic</span><br><span class="line">[root@localhost ~]# snap install microk8s 
--classic</span><br><span class="line">2026-01-21T21:59:26+08:00 INFO Waiting <span class="keyword">for</span> automatic snapd restart...</span><br><span class="line">Warning: /var/lib/snapd/snap/bin was not found <span class="keyword">in</span> your <span class="variable">$PATH</span>. If you<span class="string">&#x27;ve not restarted your session</span></span><br><span class="line"><span class="string">         since you installed snapd, try doing that. Please see https://forum.snapcraft.io/t/9469</span></span><br><span class="line"><span class="string">         for more details.</span></span><br><span class="line"><span class="string"></span></span><br><span class="line"><span class="string">microk8s (1.32/stable) v1.32.9 from Canonical✓ installed</span></span><br></pre></td></tr></table></figure><ul><li>MicroK8s 会创建一个用户组，以便无缝使用需要管理员权限的命令。要将当前用户添加到该用户组并获得对 .kube 缓存目录的访问权限</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# usermod -a -G microk8s <span class="variable">$USER</span></span><br><span class="line">[root@localhost ~]# <span class="built_in">mkdir</span> -p ~/.kube</span><br><span class="line">[root@localhost ~]# <span class="built_in">chmod</span> 0700 ~/.kube</span><br><span class="line">[root@localhost ~]# su - <span class="variable">$USER</span></span><br></pre></td></tr></table></figure><ul><li>检查microk8s状态</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# microk8s status</span><br><span class="line">[root@localhost ~]# microk8s kubectl get pod -A</span><br><span class="line">[root@localhost ~]# microk8s kubectl get nodes</span><br></pre></td></tr></table></figure><p><img src="/img/article/1.png" alt="1" /></p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# journalctl -u snap.microk8s.daemon-kubelite -n 100 -f</span><br><span class="line">[root@localhost ~]# systemctl 
<![CDATA[status snap.microk8s.daemon-containerd</span><br></pre></td></tr></table></figure><p>You can see that pulling the registry.k8s.io/pause:3.10 image failed. The systemctl status snap.microk8s.daemon-containerd output points to the registry mirror configuration file<br />/var/snap/microk8s/current/args/containerd-template.toml</p><ul><li>Configure registry mirrors for MicroK8s</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# ll /var/snap/microk8s/current/args/certs.d/</span><br><span class="line">[root@localhost ~]# <span class="built_in">mkdir</span> -p /var/snap/microk8s/current/args/certs.d/registry.k8s.io</span><br><span class="line">[root@localhost ~]# <span class="built_in">cat</span> /var/snap/microk8s/current/args/certs.d/registry.k8s.io/hosts.toml</span><br><span class="line">server = <span class="string">&quot;https://registry.k8s.io&quot;</span></span><br><span class="line"></span><br><span class="line">[host.<span class="string">&quot;https://k8s.m.daocloud.io&quot;</span>]</span><br><span class="line">  capabilities = [<span class="string">&quot;pull&quot;</span>, <span class="string">&quot;resolve&quot;</span>]</span><br><span class="line">  skip_verify = <span class="literal">false</span></span><br><span class="line">[host.<span class="string">&quot;https://mirror.azure.cn&quot;</span>]</span><br><span class="line">  capabilities = [<span class="string">&quot;pull&quot;</span>, <span class="string">&quot;resolve&quot;</span>]</span><br><span class="line">  skip_verify = <span class="literal">false</span></span><br><span class="line">[host.<span class="string">&quot;https://registry-1.docker.io&quot;</span>]</span><br><span class="line">  capabilities = [<span class="string">&quot;pull&quot;</span>, <span class="string">&quot;resolve&quot;</span>]</span><br><span class="line">  skip_verify = <span class="literal">false</span></span><br><span ]]>
<![CDATA[class="line">[root@localhost ~]# <span class="built_in">cat</span> /var/snap/microk8s/current/args/certs.d/docker.io/hosts.toml</span><br><span class="line">server = <span class="string">&quot;https://docker.io&quot;</span></span><br><span class="line"></span><br><span class="line">[host.<span class="string">&quot;https://docker.1ms.run&quot;</span>]</span><br><span class="line">  capabilities = [<span class="string">&quot;pull&quot;</span>, <span class="string">&quot;resolve&quot;</span>]</span><br><span class="line">[root@localhost ~]# snap restart microk8s</span><br></pre></td></tr></table></figure><ul><li>List the downloaded images:</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# microk8s ctr images <span class="built_in">ls</span></span><br></pre></td></tr></table></figure><ul><li>MicroK8s status:</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# microk8s kubectl get pod -A</span><br><span class="line">NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE</span><br><span class="line">kube-system   calico-kube-controllers-5947598c79-92cff   1/1     Running   0          12m</span><br><span class="line">kube-system   calico-node-k2qsr                          1/1     Running   0          12m</span><br><span class="line">kube-system   coredns-79b94494c7-jl6rk                   1/1     Running   0          12m</span><br><span class="line">[root@localhost ~]# microk8s kubectl get nodes</span><br><span class="line">NAME                    STATUS   ROLES    AGE   VERSION</span><br><span class="line">localhost.localdomain   Ready    &lt;none&gt;   12m   v1.32.9</span><br></pre></td></tr></table></figure><ul><li>Enable basic add-ons</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# microk8s <span class="built_in">enable</span> dns</span><br><span class="line">Infer repository core <span ]]>
<![CDATA[class="keyword">for</span> addon dns</span><br><span class="line">Addon core/dns is already enabled</span><br><span class="line">[root@localhost ~]# microk8s <span class="built_in">enable</span> hostpath-storage</span><br><span class="line">Infer repository core <span class="keyword">for</span> addon hostpath-storage</span><br><span class="line">Enabling default storage class.</span><br><span class="line">WARNING: Hostpath storage is not suitable <span class="keyword">for</span> production environments.</span><br><span class="line">         A hostpath volume can grow beyond the size <span class="built_in">limit</span> <span class="built_in">set</span> <span class="keyword">in</span> the volume claim manifest.</span><br><span class="line"></span><br><span class="line">deployment.apps/hostpath-provisioner created</span><br><span class="line">storageclass.storage.k8s.io/microk8s-hostpath created</span><br><span class="line">serviceaccount/microk8s-hostpath created</span><br><span class="line">clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created</span><br><span class="line">clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created</span><br><span class="line">Storage will be available soon.</span><br><span class="line">[root@localhost ~]# microk8s kubectl get sc</span><br><span class="line">NAME                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE</span><br><span class="line">microk8s-hostpath (default)   microk8s.io/hostpath   Delete          WaitForFirstConsumer   <span class="literal">false</span>                  5s</span><br></pre></td></tr></table></figure><p>More add-ons: <a href="https://canonical.com/microk8s/docs/addons">https://canonical.com/microk8s/docs/addons</a></p><h3 id="microk8s集群添加worker节点"><a class="markdownIt-Anchor" href="#microk8s集群添加worker节点"></a> Add a worker node to the MicroK8s cluster</h3><ul><li>Provision a new worker node and install MicroK8s on it; the version must match the existing cluster</li></ul><figure class="highlight bash"><table><tr><td ]]>
<![CDATA[class="code"><pre><span class="line">[root@localhost ~]# microk8s kubectl get nodes</span><br><span class="line">NAME                    STATUS   ROLES    AGE   VERSION</span><br><span class="line">localhost.localdomain   Ready    &lt;none&gt;   90m   v1.32.9</span><br><span class="line"></span><br><span class="line">[root@localhost ~]# snap install microk8s --classic --channel=1.32</span><br><span class="line">2026-01-21T23:43:08+08:00 INFO Waiting <span class="keyword">for</span> automatic snapd restart...</span><br><span class="line">Warning: /var/lib/snapd/snap/bin was not found <span class="keyword">in</span> your <span class="variable">$PATH</span>. If you<span class="string">&#x27;ve not restarted your session</span></span><br><span class="line"><span class="string">         since you installed snapd, try doing that. Please see https://forum.snapcraft.io/t/9469</span></span><br><span class="line"><span class="string">         for more details.</span></span><br><span class="line"><span class="string"></span></span><br><span class="line"><span class="string">microk8s (1.32/stable) v1.32.9 from Canonical✓ installed</span></span><br><span class="line">[root@localhost ~]# hostnamectl set-hostname microk8s-worker</span><br><span class="line">[root@microk8s-worker ~]# microk8s kubectl get nodes</span><br><span class="line">NAME                    STATUS     ROLES    AGE     VERSION</span><br><span class="line">localhost.localdomain   NotReady   &lt;none&gt;   2m41s   v1.32.9</span><br></pre></td></tr></table></figure><ul><li>Configure registry mirrors on the worker node, as described above</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@microk8s-worker ~]# microk8s kubectl get nodes</span><br><span class="line">NAME                    STATUS     ROLES    AGE     VERSION</span><br><span class="line">localhost.localdomain   NotReady   &lt;none&gt;   ]]>
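<![CDATA[7m34s   v1.32.9</span><br></pre></td></tr></table></figure><p>Rather than hand-editing each hosts.toml on the worker, the mirror files can be generated with a small script. This is a sketch under the assumption that the worker should use the same mirrors as the master; <code>write_mirror</code> and the <code>CERTS_DIR</code> override are our own names, and on a real node CERTS_DIR would be /var/snap/microk8s/current/args/certs.d:</p>

```shell
#!/usr/bin/env sh
# Generate containerd hosts.toml mirror files for MicroK8s.
# CERTS_DIR defaults to a local directory so the script can be
# dry-run; point it at /var/snap/microk8s/current/args/certs.d on a
# real node and then run: snap restart microk8s
CERTS_DIR=${CERTS_DIR:-./certs.d}

write_mirror() {
  registry="$1" mirror="$2"
  mkdir -p "$CERTS_DIR/$registry"
  cat > "$CERTS_DIR/$registry/hosts.toml" <<EOF
server = "https://$registry"

[host."$mirror"]
  capabilities = ["pull", "resolve"]
EOF
}

# The same mirrors used earlier in this post:
write_mirror docker.io https://docker.1ms.run
write_mirror registry.k8s.io https://k8s.m.daocloud.io
```

<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">localhost.localdomain   NotReady   &lt;none&gt;   ]]>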
<![CDATA[7m34s   v1.32.9</span><br><span class="line">microk8s-worker         Ready      &lt;none&gt;   2m44s   v1.32.9</span><br><span class="line">[root@microk8s-worker ~]# microk8s remove-node localhost.localdomain</span><br><span class="line">[root@microk8s-worker ~]# microk8s kubectl get nodes</span><br><span class="line">NAME              STATUS   ROLES    AGE     VERSION</span><br><span class="line">microk8s-worker   Ready    &lt;none&gt;   2m55s   v1.32.9</span><br><span class="line">[root@microk8s-worker ~]# microk8s kubectl get pod -A</span><br><span class="line">NAMESPACE     NAME                                       READY   STATUS     RESTARTS   AGE</span><br><span class="line">kube-system   calico-kube-controllers-5947598c79-zp6wc   1/1     Running    0          7m41s</span><br><span class="line">kube-system   calico-node-h2rzw                          1/1     Running    0          116s</span><br><span class="line">kube-system   coredns-79b94494c7-zpd9n                   1/1     Running    0          7m41s</span><br></pre></td></tr></table></figure><ul><li>Add worker node 192.168.213.111 from 192.168.213.110.<br />Get the join token on the master:</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# microk8s add-node</span><br><span class="line">From the node you wish to <span class="built_in">join</span> to this cluster, run the following:</span><br><span class="line">microk8s <span class="built_in">join</span> 192.168.213.110:25000/3261736fb5043b8367253a69ee907dc5/7f32c8813166</span><br><span class="line"></span><br><span class="line">Use the <span class="string">&#x27;--worker&#x27;</span> flag to <span class="built_in">join</span> a node as a worker not running the control plane, eg:</span><br><span class="line">microk8s <span class="built_in">join</span> 192.168.213.110:25000/3261736fb5043b8367253a69ee907dc5/7f32c8813166 --worker</span><br><span class="line"></span><br><span class="line">If the node you are adding is not ]]>
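<![CDATA[reachable through the default interface you can use one of the following:</span><br></pre></td></tr></table></figure><p>The join token printed by add-node is, by default, only valid for a limited time, so it is convenient to capture the join command programmatically. A sketch that parses sample output (captured from the transcript above); on the master you would capture the real <code>microk8s add-node</code> output instead of the hard-coded string:</p>

```shell
#!/usr/bin/env sh
# Pull the worker join command out of `microk8s add-node` output.
# `out` holds a sample of that output; on the master you would use:
#   out=$(microk8s add-node)
out='From the node you wish to join to this cluster, run the following:
microk8s join 192.168.213.110:25000/3261736fb5043b8367253a69ee907dc5/7f32c8813166'

# The first line starting with "microk8s join" is the join command.
join_cmd=$(printf '%s\n' "$out" | grep '^microk8s join' | head -n 1)
printf '%s --worker\n' "$join_cmd"
```

<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">If the node you are adding is not ]]>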
<![CDATA[reachable through the default interface you can use one of the following:</span><br><span class="line">microk8s <span class="built_in">join</span> 192.168.213.110:25000/3261736fb5043b8367253a69ee907dc5/7f32c8813166</span><br></pre></td></tr></table></figure><ul><li>On the worker node, run the following:</li></ul><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@microk8s-worker ~]# microk8s kubectl get nodes</span><br><span class="line">NAME              STATUS   ROLES    AGE    VERSION</span><br><span class="line">microk8s-worker   Ready    &lt;none&gt;   4m3s   v1.32.9</span><br><span class="line">[root@microk8s-worker ~]# microk8s <span class="built_in">join</span> 192.168.213.110:25000/3261736fb5043b8367253a69ee907dc5/7f32c8813166 --worker</span><br><span class="line">Contacting cluster at 192.168.213.110</span><br><span class="line"></span><br><span class="line">The node has joined the cluster and will appear <span class="keyword">in</span> the nodes list <span class="keyword">in</span> a few seconds.</span><br><span class="line"></span><br><span class="line">This worker node gets automatically configured with the API server endpoints.</span><br><span class="line">If the API servers are behind a loadbalancer please <span class="built_in">set</span> the <span class="string">&#x27;--refresh-interval&#x27;</span> to <span class="string">&#x27;0s&#x27;</span> <span class="keyword">in</span>:</span><br><span class="line">    /var/snap/microk8s/current/args/apiserver-proxy</span><br><span class="line">and replace the API server endpoints with the one provided by the loadbalancer <span class="keyword">in</span>:</span><br><span class="line">    /var/snap/microk8s/current/args/traefik/provider.yaml</span><br><span class="line"></span><br><span class="line">Successfully joined the cluster.</span><br></pre></td></tr></table></figure><p>After the join succeeds, query from the 110 node:</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# ]]>
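<![CDATA[microk8s kubectl get nodes</span><br></pre></td></tr></table></figure><p>After the join you usually want to wait until every node reports Ready before scheduling workloads. A minimal sketch; <code>all_ready</code> is our own helper name, not a MicroK8s command:</p>

```shell
#!/usr/bin/env sh
# all_ready: succeed only when every node row in `kubectl get nodes`
# output (read from stdin) reports STATUS "Ready".
all_ready() {
  awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }'
}

# On the master you could poll until the new worker is Ready:
#   until microk8s kubectl get nodes | all_ready; do sleep 5; done
```

<figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# ]]>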
<![CDATA[microk8s kubectl get nodes</span><br><span class="line">NAME                    STATUS   ROLES    AGE    VERSION</span><br><span class="line">localhost.localdomain   Ready    &lt;none&gt;   138m   v1.32.9</span><br><span class="line">microk8s-worker         Ready    &lt;none&gt;   21s    v1.32.9</span><br></pre></td></tr></table></figure><p><img src="/img/article/2.png" alt="2" /></p><p>Note: in a MicroK8s cluster, all cluster-level management operations must be performed on a master (control-plane) node. A worker node's job is to run workloads, not to manage the cluster itself.<br />Add-ons therefore only need to be enabled on a master node; if the master already enabled certain add-ons before the worker joined, they do not need to be enabled again.</p><figure class="highlight bash"><table><tr><td class="code"><pre><span class="line">[root@localhost ~]# microk8s <span class="built_in">enable</span> dns</span><br><span class="line">Infer repository core <span class="keyword">for</span> addon dns</span><br><span class="line">Addon core/dns is already enabled</span><br><span class="line">[root@localhost ~]# microk8s <span class="built_in">enable</span> hostpath-storage</span><br><span class="line">Infer repository core <span class="keyword">for</span> addon hostpath-storage</span><br><span class="line">Addon core/hostpath-storage is already enabled</span><br></pre></td></tr></table></figure>]]>
    </content>
    <id>https://jsonhc.github.io/posts/1243066712/</id>
    <link href="https://jsonhc.github.io/posts/1243066712/"/>
    <published>2026-02-25T16:00:00.000Z</published>
    <summary>
      <![CDATA[<h1 id="install-microk8s-on-centos9"><a class="markdownIt-Anchor" href="#install-microk8s-on-centos9"></a> install microk8s on centos9</h1>]]>
    </summary>
    <title>install microk8s on centos9</title>
    <updated>2026-03-01T13:49:11.709Z</updated>
  </entry>
</feed>
