K8S Exam Notes
Setting up the environment at the start of the exam improves operating efficiency; the exam environment itself comes preconfigured. In vim, run :set paste before pasting to avoid mangled indentation.
alias k=kubectl
export do="--dry-run=client -o yaml"
export now="--force --grace-period 0"
Adjust vim settings; the config file is ~/.vimrc:
set tabstop=2
set expandtab
set shiftwidth=2
Be familiar with these commands: grep, tr, sed
ip
netstat -anp | grep etcd
netstat -nplt
ps -aux
Familiarity with the above commands improves operating efficiency.
Study material: https://github.com/David-VTUK/CKA-StudyGuide/tree/master
Exam tasks:
1. RBAC access control
kubectl config use-context k8s
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
kubectl -n app-team1 create serviceaccount cicd-token
kubectl -n app-team1 create rolebinding cicd-token-rolebinding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
kubectl -n app-team1 describe rolebinding cicd-token-rolebinding
# Verify; note the -n flag matters — without it the check runs against the default namespace:
kubectl auth can-i create deployment --as system:serviceaccount:app-team1:cicd-token
kubectl auth can-i create deployment -n app-team1 --as system:serviceaccount:app-team1:cicd-token
2. Find the pod using the most CPU
kubectl config use-context k8s
kubectl top pod -l name=cpu-loader --sort-by=cpu -A
echo "<pod name found above>" > /opt/KUTR000401/KUTR00401.txt
cat /opt/KUTR000401/KUTR00401.txt
3. Configure a NetworkPolicy
kubectl get ns --show-labels
kubectl label ns echo project=echo
vim networkpolicy.yaml
kubectl apply -f networkpolicy.yaml
kubectl describe networkpolicy -n my-app
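A minimal sketch of networkpolicy.yaml for the typical version of this task (allow traffic into namespace my-app on one port, only from the namespace labeled above). The policy name, namespace, and port number here are assumptions; take the real values from the exam prompt.

```yaml
# networkpolicy.yaml — sketch; name, namespace, and port per the exam prompt
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  podSelector: {}            # applies to all pods in my-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: echo      # the label set with `kubectl label ns echo project=echo`
    ports:
    - protocol: TCP
      port: 9000             # assumed port; check the prompt
```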
4. Expose a service
kubectl config use-context
kubectl get deployment front-end -o wide
kubectl edit deployment front-end
kubectl expose deployment front-end --type=NodePort --port=80 --target-port=80 --name=front-end-svc
kubectl get svc front-end-svc -o wide
kubectl get deployment front-end -o wide
kubectl edit svc front-end-svc
kubectl get pod,svc -o wide
curl <IP or hostname of the node the pod runs on>:30938
curl <svc IP>:80
5. Create an Ingress (docs: Concepts → Services, Load Balancing, and Networking → Ingress)
kubectl config use-context k8s    # switch to this cluster
vim ingressclass.yaml
kubectl apply -f ingressclass.yaml
vim ingress.yaml
kubectl apply -f ingress.yaml
kubectl get ingress -n ing-internal
curl <ingress IP>/hello
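A sketch of ingress.yaml for the usual form of this task (route /hello to a backend service in the ing-internal namespace). The ingress name, class name, backend service name, and port below are assumptions modeled on the common exam wording; the real values come from the prompt.

```yaml
# ingress.yaml — sketch; name, class, service, and port per the exam prompt
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx    # must match the IngressClass created above
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello      # assumed backend service name
            port:
              number: 5678   # assumed port
```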
6. Scale a deployment's replica count (see kubectl scale deployment -h)
kubectl config use-context k8s
kubectl get deployments presentation -o wide
kubectl get pod -l app=presentation
kubectl scale deployment presentation --replicas=4
kubectl get deployments presentation -o wide
kubectl get pod -l app=presentation
7. Schedule a pod to a specific node (docs: Tasks → Configure Pods and Containers → Assign Pods to Nodes)
kubectl config use-context k8s
kubectl get pod -A | grep nginx-kusc00401
kubectl get nodes --show-labels | grep 'disk=ssd'
vim pod-disk-ssd.yaml
kubectl apply -f pod-disk-ssd.yaml
kubectl get pod nginx-kusc00401 -o wide
# Alternatively, generate a starting manifest and edit it:
kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > pod.yaml
vi pod.yaml
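A sketch of pod-disk-ssd.yaml: the nodeSelector must match the node label found above (disk=ssd); pod name and image are taken from the commands in this section, but confirm them against the prompt.

```yaml
# pod-disk-ssd.yaml — sketch; nodeSelector matches the disk=ssd node label
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disk: ssd
```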
8. Count available (schedulable) nodes
kubectl config use-context k8s
kubectl get nodes
kubectl describe nodes | grep -i Taints
kubectl describe nodes | grep -i Taints | grep -vc NoSchedule
echo "<number found above>" > /opt/KUSC00402/kusc00402.txt
cat /opt/KUSC00402/kusc00402.txt
9. Create a multi-container pod (docs: Concepts → Workloads → Pods)
In the YAML file, write two "- name:" / "image:" entries under containers.
kubectl config use-context k8s
vim pod-kucc.yaml
kubectl apply -f pod-kucc.yaml
kubectl get pod kucc8
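A sketch of pod-kucc.yaml with two containers. The images below (nginx, redis) are placeholders; the exam prompt lists the exact images to use.

```yaml
# pod-kucc.yaml — sketch; the prompt specifies the exact container images
apiVersion: v1
kind: Pod
metadata:
  name: kucc8
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
```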
10. Create a PV (docs: Tasks → Configure Pods and Containers → Configure a Pod to Use a PersistentVolume for Storage)
kubectl config use-context k8s
vim pv.yaml
kubectl apply -f pv.yaml
kubectl get pv
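A sketch of pv.yaml. The PV name, capacity, access mode, and hostPath below are typical of this task but are assumptions; take the real values from the prompt.

```yaml
# pv.yaml — sketch; name, size, accessModes, and path per the exam prompt
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /srv/app-data
```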
11. Create a PVC (docs: Tasks → Configure Pods and Containers → Configure a Pod to Use a PersistentVolume for Storage)
kubectl config use-context ok8s
vim pvc.yaml
kubectl apply -f pvc.yaml
kubectl get pvc
vim pvc-pod.yaml
kubectl apply -f pvc-pod.yaml
kubectl get pod web-server
kubectl edit pvc pv-volume --record
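Sketches of pvc.yaml and pvc-pod.yaml. The storage class, claim size, and mount path below are assumptions based on the typical wording of this task; confirm them against the prompt.

```yaml
# pvc.yaml — sketch; storageClassName and size per the exam prompt
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
---
# pvc-pod.yaml — pod mounting the claim above
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html   # assumed mount path
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pv-volume
```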
12. View pod logs
kubectl config use-context k8s
kubectl logs foo | grep "RLIMIT_NOFILE" > /opt/KUTR00101/foo
cat /opt/KUTR00101/foo
13. Use a sidecar container to stream logs (docs: Concepts → Cluster Administration → Logging Architecture)
kubectl config use-context k8s
kubectl get pod 11-factor-app -o yaml > varlog.yaml
cp varlog.yaml varlog-bak.yaml
vim varlog.yaml
kubectl delete pod 11-factor-app
kubectl get pod 11-factor-app
kubectl apply -f varlog.yaml
kubectl logs 11-factor-app sidecar
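A sketch of the additions to make inside varlog.yaml: a busybox sidecar tailing the existing container's log file through a shared emptyDir volume. The log file path and volume name are assumptions; the real path appears in the existing container's command in the exported spec.

```yaml
# Additions inside the exported pod spec (fragment, not a full manifest);
# also add the same volumeMount to the existing container
spec:
  containers:
  - name: sidecar
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/11-factor-app.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
```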
14. Upgrade the cluster (docs: Tasks → Administer a Cluster → Administration with kubeadm → Upgrading kubeadm clusters)
kubectl config use-context mk8s
kubectl get nodes
kubectl cordon master01
kubectl drain master01 --ignore-daemonsets
ssh master01
sudo -i
apt-get update
apt-cache show kubeadm | grep 1.28.1
apt-get install kubeadm=1.28.1-00
kubeadm version
kubeadm upgrade apply v1.28.1 --etcd-upgrade=false
apt-get install kubelet=1.28.1-00
systemctl restart kubelet    # pick up the new kubelet version
kubectl version
exit
exit
kubectl uncordon master01
kubectl get node
15. Back up and restore etcd (docs: Tasks → Administer a Cluster → Operating etcd clusters for Kubernetes)
kubectl config use-context xxxx
export ETCDCTL_API=3
etcdctl --endpoints=https://11.0.1.111:2379 --cacert="/opt/KUIN00601/ca.crt" --cert="/opt/KUIN00601/etcd-client.crt" --key="/opt/KUIN00601/etcd-client.key" snapshot save /var/lib/backup/etcd-snapshot.db
etcdctl snapshot status /var/lib/backup/etcd-snapshot.db -w table
sudo ETCDCTL_API=3 etcdctl --endpoints=https://11.0.1.111:2379 --cacert="/opt/KUIN00601/ca.crt" --cert="/opt/KUIN00601/etcd-client.crt" --key="/opt/KUIN00601/etcd-client.key" snapshot restore /data/backup/etcd-snapshot-previous.db
16. Troubleshoot a failed node in the cluster
kubectl get nodes
ssh node02
sudo -i
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet
systemctl status kubelet
exit
exit
kubectl get nodes
17. Node maintenance
kubectl get node
kubectl cordon node02
kubectl get node
kubectl drain node02 --ignore-daemonsets
kubectl get node
kubectl get pod -A -o wide | grep node02