Pre-installation setup (all nodes)
Correct the system time
```bash
sudo dnf install chrony -y
sudo systemctl enable --now chronyd
sudo chronyc makestep
date
```
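A quick way to confirm the clock is actually synchronizing, using chrony's standard status subcommands:

```bash
# Show sync status; "Leap status: Normal" and a small offset indicate a healthy sync
chronyc tracking
# List NTP sources and their reachability
chronyc sources -v
```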
Install Docker (optional)

Create a user
```bash
$ sudo useradd -m -s /bin/bash centos
$ sudo usermod -aG docker centos
$ sudo usermod -aG wheel centos
$ su - centos
```
Upgrade the kernel
A Kubernetes cluster created by kubeadm depends on software that uses kernel features. For the kernel versions required by cluster nodes, see the Linux kernel version requirements. The kubeadm project supports LTS kernels; see the list of LTS kernels.
```bash
# Check the current kernel version
$ uname -r
4.18.0-553.el8_10.x86_64
$ sudo yum list kernel --showduplicates

# Add the ELRepo repository and install the LTS kernel (kernel-lt)
$ sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
$ sudo yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
$ sudo yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
$ sudo yum --enablerepo=elrepo-kernel install kernel-lt.x86_64

$ ls -lh /boot/vmlinuz-* /boot/initramfs-* | grep "5.4"
-rw------- 1 root root 29M Jun 29 17:15 /boot/initramfs-5.4.295-1.el8.elrepo.x86_64.img
-rwxr-xr-x 1 root root 9.5M Jun 28 01:21 /boot/vmlinuz-5.4.295-1.el8.elrepo.x86_64

# Make the new kernel the default boot entry and reboot
$ sudo grubby --info=ALL | grep ^kernel
kernel="/boot/vmlinuz-5.4.295-1.el8.elrepo.x86_64"
kernel="/boot/vmlinuz-4.18.0-553.el8_10.x86_64"
kernel="/boot/vmlinuz-0-rescue-88f75739047993488aacc30b9cd25ca0"
$ sudo grubby --default-kernel
/boot/vmlinuz-5.4.295-1.el8.elrepo.x86_64
$ sudo grubby --set-default /boot/vmlinuz-5.4.295-1.el8.elrepo.x86_64
$ sudo reboot

$ uname -r
5.4.295-1.el8.elrepo.x86_64
```
Set SELinux to permissive mode (effectively disabling it)
```bash
$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```
Disable firewalld
```bash
sudo systemctl stop firewalld
sudo systemctl disable firewalld
```
If you keep firewalld running, which ports need to be opened? (In practice you may still run into all sorts of problems.)
Control plane nodes:

| Port | Protocol | Description |
| --- | --- | --- |
| 6443 | TCP | kube-apiserver; used by kubectl to communicate with the cluster |
| 2379-2380 | TCP | etcd cluster communication (only if you deploy etcd yourself) |
| 10250 | TCP | kubelet listening port, used by the apiserver to reach the node |
| 10259 | TCP | kube-scheduler |
| 10257 | TCP | kube-controller-manager |

Worker nodes:

| Port | Protocol | Description |
| --- | --- | --- |
| 10250 | TCP | kubelet ↔ apiserver communication |
| 30000-32767 | TCP | Default NodePort service range |
| 10255 | TCP | kubelet read-only port (disabled by default; may be left closed) |

Calico (BGP mode):

| Port | Protocol | Description |
| --- | --- | --- |
| 179 | TCP | BGP port for routing between Calico nodes (when using BGP mode) |

VXLAN overlay:

| Port | Protocol | Description |
| --- | --- | --- |
| 8472 | UDP | VXLAN data traffic |

Ingress:

| Port | Protocol | Description |
| --- | --- | --- |
| 80 / 443 | TCP | HTTP/HTTPS access to Ingress services |
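Before relying on the firewall rules below, it can help to verify that a given port is reachable from another node. A minimal check using nc (assuming the nmap-ncat package is installed, as it usually is on EL8):

```bash
# From a worker, verify the apiserver port on the control plane (10.211.55.11)
nc -vz 10.211.55.11 6443
# From the control plane, verify a worker's kubelet port
nc -vz 10.211.55.15 10250
```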
With firewalld kept running, open the ports permanently and reload:

```bash
sudo systemctl start firewalld
sudo systemctl enable firewalld
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10259/tcp
sudo firewall-cmd --permanent --add-port=10257/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --permanent --add-port=179/tcp
sudo firewall-cmd --permanent --add-port=8472/udp
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --permanent --add-port=443/tcp
# Protocol 4 (IP-in-IP) must be allowed for Calico's IPIP mode
sudo firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p 4 -j ACCEPT
sudo firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 0 -p 4 -j ACCEPT
# 4789/udp: VXLAN; 5473/tcp: Calico Typha; 51820-51821/udp: WireGuard
sudo firewall-cmd --permanent --add-port=4789/udp
sudo firewall-cmd --permanent --add-port=5473/tcp
sudo firewall-cmd --permanent --add-port=51820/udp
sudo firewall-cmd --permanent --add-port=51821/udp
sudo firewall-cmd --reload

$ sudo firewall-cmd --list-ports
80/tcp 179/tcp 443/tcp 2377/tcp 2379-2380/tcp 6443/tcp 7946/tcp 10250/tcp 10257/tcp 10259/tcp 30000-32767/tcp 4789/udp 7946/udp 8472/udp
$ sudo firewall-cmd --direct --get-all-rules
ipv4 filter INPUT 0 -p 4 -j ACCEPT
```
Disable swap
```bash
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
```
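To double-check that no swap remains active:

```bash
# Prints nothing once all swap devices are off
swapon --show
# The Swap line should show 0B used and 0B total
free -h
```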
Load kernel modules
```bash
sudo modprobe overlay
sudo modprobe br_netfilter
```
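modprobe only loads the modules for the current boot. A minimal sketch to make them persist across reboots (the file name `k8s.conf` is an arbitrary choice):

```bash
# Load overlay and br_netfilter automatically on every boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
```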
Set kernel parameters
```bash
cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
```
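To verify the parameters are active after `sysctl --system`:

```bash
# All three should print "= 1"
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# br_netfilter must be loaded for the bridge settings to exist
lsmod | grep br_netfilter
```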
Install containerd
```bash
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y containerd.io
# Generate the default config and switch the cgroup driver to systemd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl enable --now containerd
```
Install cri-dockerd (optional)
Download the rpm that matches your distribution (the fc35 package targets Fedora; the el8 package is the one for CentOS/RHEL 8):

```bash
$ curl -LO https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.19/cri-dockerd-0.3.19-3.fc35.x86_64.rpm
$ sudo dnf install -y ./cri-dockerd-0.3.19-3.fc35.x86_64.rpm

$ curl -LO https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.14/cri-dockerd-0.3.14-3.el8.x86_64.rpm
$ sudo dnf install -y ./cri-dockerd-0.3.14-3.el8.x86_64.rpm

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now cri-docker
```
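If you go with cri-dockerd instead of containerd, two CRI sockets will exist on the host, so kubeadm must be told explicitly which one to use. A sketch, assuming cri-dockerd's default socket path:

```bash
# Point kubeadm (and later kubeadm join) at cri-dockerd rather than containerd
sudo kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock \
    --pod-network-cidr=10.244.0.0/16
```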
Install kubelet, kubeadm, and kubectl
```bash
$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key
EOF

$ sudo yum clean all && sudo yum makecache
```
```bash
$ sudo yum list --showduplicates kubeadm
$ sudo yum install -y kubelet kubeadm kubectl
$ sudo systemctl enable --now kubelet

# Point crictl at the containerd socket
$ sudo crictl config runtime-endpoint unix:///run/containerd/containerd.sock

$ kubeadm version
$ kubelet --version
$ kubectl version --client
```
```bash
# bash completion for kubectl requires the bash-completion package
$ type _init_completion
$ sudo dnf install bash-completion

$ echo 'source <(kubectl completion bash)' >> ~/.bashrc
$ kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
$ sudo chmod a+r /etc/bash_completion.d/kubectl

$ echo 'alias k=kubectl' >> ~/.bashrc
$ echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
$ source ~/.bashrc
```
Create the cluster (master node)
```bash
$ sudo kubeadm init \
    --apiserver-advertise-address=10.211.55.11 \
    --kubernetes-version v1.33.2 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=10.244.0.0/16

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.211.55.11:6443 --token sqwk6v.lxlnf0ibtbgr4i27 \
        --discovery-token-ca-cert-hash sha256:c43f8b6d0e7081a76ab1d8ca8d3c5fb1ef3b21afcd81874566d7840167809412
```
```bash
$ k get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   ok
```
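Since `kubectl get cs` is deprecated (as the warning notes), control plane health can also be checked through the apiserver's health endpoints:

```bash
# Verbose readiness and liveness reports from the apiserver
kubectl get --raw='/readyz?verbose'
kubectl get --raw='/livez?verbose'
```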
```bash
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
```bash
$ kubectl get pods -A
NAMESPACE     NAME                          READY   STATUS    RESTARTS      AGE
kube-system   coredns-674b8bbfcf-8xllc      0/1     Pending   0             116m
kube-system   coredns-674b8bbfcf-w2sxz      0/1     Pending   0             116m
kube-system   etcd-k8s                      1/1     Running   2 (74m ago)   116m
kube-system   kube-apiserver-k8s            1/1     Running   2 (74m ago)   116m
kube-system   kube-controller-manager-k8s   1/1     Running   2 (74m ago)   116m
kube-system   kube-proxy-94zqw              1/1     Running   1 (74m ago)   116m
kube-system   kube-scheduler-k8s            1/1     Running   2 (74m ago)   116m

# Install Calico; CoreDNS stays Pending until a pod network is deployed
$ curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/calico.yaml
$ kubectl apply -f calico.yaml

$ kubectl get pods -A -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS      AGE     IP             NODE   NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-7bfdc5b57c-9qv9m   1/1     Running   0             6m23s   10.244.77.1    k8s    <none>           <none>
kube-system   calico-node-m7wc5                          1/1     Running   0             6m23s   10.211.55.11   k8s    <none>           <none>
kube-system   coredns-674b8bbfcf-8xllc                   1/1     Running   0             123m    10.244.77.3    k8s    <none>           <none>
kube-system   coredns-674b8bbfcf-w2sxz                   1/1     Running   0             123m    10.244.77.2    k8s    <none>           <none>
kube-system   etcd-k8s                                   1/1     Running   2 (82m ago)   123m    10.211.55.11   k8s    <none>           <none>
kube-system   kube-apiserver-k8s                         1/1     Running   2 (82m ago)   123m    10.211.55.11   k8s    <none>           <none>
kube-system   kube-controller-manager-k8s                1/1     Running   2 (82m ago)   123m    10.211.55.11   k8s    <none>           <none>
kube-system   kube-proxy-94zqw                           1/1     Running   1 (82m ago)   123m    10.211.55.11   k8s    <none>           <none>
kube-system   kube-scheduler-k8s                         1/1     Running   2 (82m ago)   123m    10.211.55.11   k8s    <none>           <none>
```
After installing Calico, things may not work correctly; for example, a calico-node-xxx pod may never reach a running state. In that case, try reinstalling Calico:
```bash
$ kubectl delete -f calico.yaml
$ sudo rm -rf /etc/cni/net.d/
$ sudo rm -rf /var/lib/calico
$ kubectl apply -f calico.yaml
```
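If reinstalling does not fix it, inspecting the failing pod usually reveals the cause. A generic debugging sketch (the pod name is a placeholder for your own):

```bash
# Events often show image-pull or CNI errors directly
kubectl -n kube-system describe pod calico-node-xxx
# Logs of the main calico-node container
kubectl -n kube-system logs calico-node-xxx -c calico-node
```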
Add nodes (worker nodes)
```bash
$ sudo kubeadm join 10.211.55.11:6443 --token sqwk6v.lxlnf0ibtbgr4i27 --discovery-token-ca-cert-hash sha256:c43f8b6d0e7081a76ab1d8ca8d3c5fb1ef3b21afcd81874566d7840167809412
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[preflight] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.004135788s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
If the original token has expired (bootstrap tokens are valid for 24 hours by default), generate a new join command on the control plane:

```bash
$ kubeadm token create --print-join-command
kubeadm join 10.211.55.11:6443 --token 5o3p2i.gj95aopph0xbrcig --discovery-token-ca-cert-hash sha256:c43f8b6d0e7081a76ab1d8ca8d3c5fb1ef3b21afcd81874566d7840167809412
```
```bash
$ kubectl get nodes
NAME          STATUS   ROLES           AGE     VERSION
k8s           Ready    control-plane   3h33m   v1.33.2
k8s-worker1   Ready    <none>          115s    v1.33.2

$ kubectl get pods -A -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-7bfdc5b57c-q5xwp   1/1     Running   0          37m     10.244.235.193   k8s-master    <none>           <none>
kube-system   calico-node-7pbbq                          1/1     Running   0          4m51s   10.211.55.15     k8s-worker1   <none>           <none>
kube-system   calico-node-w47qq                          1/1     Running   0          37m     10.211.55.11     k8s-master    <none>           <none>
kube-system   coredns-674b8bbfcf-2tvld                   1/1     Running   0          37m     10.244.235.195   k8s-master    <none>           <none>
kube-system   coredns-674b8bbfcf-h6kx7                   1/1     Running   0          37m     10.244.235.194   k8s-master    <none>           <none>
kube-system   etcd-k8s-master                            1/1     Running   2          37m     10.211.55.11     k8s-master    <none>           <none>
kube-system   kube-apiserver-k8s-master                  1/1     Running   4          37m     10.211.55.11     k8s-master    <none>           <none>
kube-system   kube-controller-manager-k8s-master         1/1     Running   4          37m     10.211.55.11     k8s-master    <none>           <none>
kube-system   kube-proxy-nkbns                           1/1     Running   0          4m51s   10.211.55.15     k8s-worker1   <none>           <none>
kube-system   kube-proxy-plqw8                           1/1     Running   0          37m     10.211.55.11     k8s-master    <none>           <none>
kube-system   kube-scheduler-k8s-master                  1/1     Running   4          38m     10.211.55.11     k8s-master    <none>           <none>
```
To use kubectl from the worker node, copy the kubeconfig over:

```bash
scp ~/.kube/config k8s-worker1:/tmp

# Then, on the worker node:
mkdir -p ~/.kube
mv /tmp/config ~/.kube/config
```
To remove a worker node from the cluster, reset it and clean up its iptables rules, then delete the node object on the control plane:

```bash
$ sudo kubeadm reset
$ sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X

# On the control plane:
$ kubectl delete node k8s-worker1
```
Test: deploy Nginx on Kubernetes
```bash
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=NodePort --port=80

$ kubectl get pod,svc -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP              NODE          NOMINATED NODE   READINESS GATES
pod/nginx-5869d7778c-95z74   1/1     Running   0          19m   10.244.194.65   k8s-worker1   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        61m   <none>
service/nginx        NodePort    10.96.48.156   <none>        80:30291/TCP   14m   app=nginx

kubectl delete deployment nginx
kubectl delete service nginx
```
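To confirm the service actually responds, curl the NodePort on any node's IP before deleting the deployment (30291 is the port assigned in the output above; yours will differ):

```bash
# Should return the Nginx welcome page headers
curl -I http://10.211.55.15:30291
```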
Uninstall Kubernetes via kubeadm
```bash
$ sudo kubeadm reset

rm -rf $HOME/.kube
sudo rm -rf /var/lib/etcd
sudo rm -rf /etc/cni/net.d
sudo rm -rf /var/lib/cni/
sudo rm -rf /var/lib/kubelet/*
sudo rm -rf /var/lib/calico

sudo iptables -F
sudo iptables -X
sudo iptables -t nat -F
sudo iptables -t nat -X
sudo iptables -t mangle -F
sudo iptables -t mangle -X
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
```
Stop and disable the kubelet service
```bash
sudo systemctl stop kubelet
sudo systemctl disable kubelet
```
Upgrade a kubeadm cluster

Upgrade steps

Upgrade the master node
```bash
$ sudo yum list --showduplicates kubeadm
$ sudo yum install -y kubeadm-1.33.3
$ kubeadm version

$ sudo kubeadm upgrade plan
$ sudo kubeadm upgrade apply v1.33.3

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.33.3". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
```
```bash
sudo yum install -y kubelet-1.33.3 kubectl-1.33.3
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```
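A quick check that the control plane really runs the new version (the node's reported version updates once its kubelet has been upgraded and restarted):

```bash
kubectl get nodes
kubectl version
```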
Upgrade worker nodes
```bash
$ sudo yum list --showduplicates kubeadm
$ sudo yum install -y kubeadm-1.33.3
$ kubeadm version
```
Draining the node ensures that workloads running on it are not affected during the upgrade:
```bash
$ kubectl drain <node-to-drain> --ignore-daemonsets
```
```bash
$ sudo kubeadm upgrade node
```
```bash
$ sudo yum install -y kubelet-1.33.3 kubectl-1.33.3
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
```
```bash
$ kubectl uncordon <node-to-uncordon>
```
Renew certificates
```bash
$ kubectl -n kube-system get cm kubeadm-config -o yaml
$ openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text
```
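To print just the expiry date rather than the full certificate dump, standard openssl flags can be used:

```bash
# Show only the notAfter timestamp of the apiserver certificate
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate
```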
```bash
$ sudo kubeadm certs check-expiration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 Jun 29, 2026 14:41 UTC   363d            ca                      no
apiserver                  Jun 29, 2026 14:41 UTC   363d            ca                      no
apiserver-etcd-client      Jun 29, 2026 14:41 UTC   363d            etcd-ca                 no
apiserver-kubelet-client   Jun 29, 2026 14:41 UTC   363d            ca                      no
controller-manager.conf    Jun 29, 2026 14:41 UTC   363d            ca                      no
etcd-healthcheck-client    Jun 29, 2026 14:41 UTC   363d            etcd-ca                 no
etcd-peer                  Jun 29, 2026 14:41 UTC   363d            etcd-ca                 no
etcd-server                Jun 29, 2026 14:41 UTC   363d            etcd-ca                 no
front-proxy-client         Jun 29, 2026 14:41 UTC   363d            front-proxy-ca          no
scheduler.conf             Jun 29, 2026 14:41 UTC   363d            ca                      no
super-admin.conf           Jun 29, 2026 14:41 UTC   363d            ca                      no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jun 27, 2035 14:41 UTC   9y              no
etcd-ca                 Jun 27, 2035 14:41 UTC   9y              no
front-proxy-ca          Jun 27, 2035 14:41 UTC   9y              no
```
```bash
# Back up certificates and etcd data first
sudo cp -rf /etc/kubernetes/ /etc/kubernetes.bak
sudo cp -rf /var/lib/etcd/ /var/lib/etcd.bak

$ sudo kubeadm certs renew all
[renew] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[renew] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
certificate embedded in the kubeconfig file for the super-admin renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

$ kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS        AGE
calico-kube-controllers-7bfdc5b57c-q5xwp   1/1     Running   3 (5h37m ago)   41h
calico-node-7pbbq                          1/1     Running   3 (5h36m ago)   41h
calico-node-v4hzr                          1/1     Running   2 (5h37m ago)   19h
calico-node-w47qq                          1/1     Running   3 (5h37m ago)   41h
coredns-674b8bbfcf-2tvld                   1/1     Running   3 (5h37m ago)   41h
coredns-674b8bbfcf-h6kx7                   1/1     Running   3 (5h37m ago)   41h
etcd-k8s-master                            1/1     Running   5 (5h37m ago)   41h
kube-apiserver-k8s-master                  1/1     Running   7 (5h37m ago)   41h
kube-controller-manager-k8s-master         1/1     Running   7 (5h37m ago)   41h
kube-proxy-nkbns                           1/1     Running   3 (5h36m ago)   41h
kube-proxy-plqw8                           1/1     Running   3 (5h37m ago)   41h
kube-proxy-sbgh6                           1/1     Running   2 (5h37m ago)   19h
kube-scheduler-k8s-master                  1/1     Running   7 (5h37m ago)   41h

# Restart the static pods so they pick up the new certificates
kubectl delete pod -n kube-system kube-apiserver-k8s-master kube-controller-manager-k8s-master kube-scheduler-k8s-master etcd-k8s-master

sudo kubeadm certs check-expiration
```
Kubernetes node components
| Role | Component | Description |
| --- | --- | --- |
| Master Node | kube-apiserver | Entry point for all Kubernetes API requests; handles all REST requests and coordinates the other components. |
| | kube-scheduler | The scheduler; decides which node each Pod should run on. |
| | kube-controller-manager | Runs multiple controllers (NodeController, ReplicationController, DeploymentController, etc.) that drive the cluster toward its desired state. |
| | etcd | Distributed key-value store holding all cluster state; only the API server accesses it directly. |
| Worker Node | kubelet | Communicates with the master, carries out the Pod management tasks it assigns, and controls container lifecycles. |
| | kube-proxy | Maintains network rules on the node; supports service load balancing and network communication. |
| | container runtime | The container runtime (e.g. Docker, containerd, CRI-O) that actually runs containers. |
crictl commands
| Operation | docker command | crictl command | Notes |
| --- | --- | --- | --- |
| List running containers | `docker ps` | `crictl ps` | |
| List all containers (incl. stopped) | `docker ps -a` | `crictl ps -a` | |
| List images | `docker images` | `crictl images` | |
| View container logs | `docker logs <container_id>` | `crictl logs <container_id>` | |
| Exec into a container | `docker exec -it <id> sh` | `crictl exec -it <id> sh` | |
| Inspect a container | `docker inspect <container_id>` | `crictl inspect <container_id>` | |
| Inspect a Pod | ❌ (not supported) | `crictl inspectp <pod_id>` | K8s-specific |
| Remove a container | `docker rm <container_id>` | `crictl rm <container_id>` | |
| Remove an image | `docker rmi <image_id>` | `crictl rmi <image_id>` | |
| Pull an image | `docker pull nginx` | `crictl pull nginx` | |
| Run a container (non-K8s) | `docker run -it nginx` | ❌ (not supported) | crictl does not run containers; it only debugs existing ones |
| Show runtime info | `docker info` | `crictl info` | crictl's output is a condensed version |
| Config file | `~/.docker/config.json` | `/etc/crictl.yaml` | e.g. to set the endpoint |
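For reference, a typical `/etc/crictl.yaml` pointing crictl at containerd might look like this (the values are an example matching the containerd setup above):

```bash
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
EOF
```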