10. Upgrade a Cluster Node
You must connect to the correct host. Failure to do so may result in a score of zero.
[candidate@base] $ ssh cks000034
Context
A cluster configured with kubeadm was recently upgraded, but one node was kept on a slightly older version because of a workload compatibility issue.
Task
Upgrade the cluster node node02 to match the version of the control plane node.

Use the command shown below to connect to the compute node:

[candidate@cks000034] $ ssh node02

Note: Do not modify any running workloads in the cluster. Do not forget to exit the compute node when you are done.

Solution

1. Check the node versions:

kubectl get node

2. Search for the available kubelet versions:

apt search kubelet
Sorting... Done
Full Text Search... Done
kubelet/unknown 1.32.3-1.1 ppc64el
  Node agent for Kubernetes clusters

3. Upgrade kubelet to the same version as master01:

apt install kubelet=1.32.2-1.1
systemctl daemon-reload
systemctl restart kubelet

11. Generate an SPDX Document with the bom Tool

You must connect to the correct host. Failure to do so may result in a score of zero.

[candidate@base] $ ssh cks000035

Task

The alpine Deployment in the alpine namespace has three containers running different versions of the alpine image.

First, find out which version of the alpine image contains the libcrypto3 package at version 3.1.4-r5.

Next, use the pre-installed bom tool to create an SPDX document at ~/alpine.spdx for the image version you found.

Finally, update the alpine Deployment and remove the container that uses that image version. The Deployment's manifest file can be found at ~/alipine-deployment.yaml.

Note: Do not modify any other container of the Deployment.

Approach

1. Find the alpine image that contains the specified package; note that alpine uses apk as its package manager.
2. Use the bom tool to generate an SPDX document for that image version.
3. Remove the container that uses that image version from the manifest and re-apply the Deployment.

Solution

1. Inspect the images; the match is the alpine-b container:

candidate@master01:~$ kubectl exec -it -n alpine alpine-6cbf67f985-hp8jb -c alpine-a -- apk list | grep libcrypto3
libcrypto3-3.3.0-r2 x86_64 {openssl} (Apache-2.0) [installed]
candidate@master01:~$ kubectl exec -it -n alpine alpine-6cbf67f985-hp8jb -c alpine-b -- apk list | grep libcrypto3
libcrypto3-3.1.4-r5 x86_64 {openssl} (Apache-2.0) [installed]
candidate@master01:~$ kubectl exec -it -n alpine alpine-6cbf67f985-hp8jb -c alpine-c -- apk list | grep libcrypto3

2. Generate the SPDX document with bom (no need to memorize the flags; --help will walk you through them):

bom generate --image registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.19.1 --output ~/alpine.spdx

3. Delete the container that uses the found image version and re-apply:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: alpine
  name: alpine
  namespace: alpine
spec:
  replicas: 1
  selector:
    matchLabels:
      run: alpine
  template:
    metadata:
      labels:
        run: alpine
    spec:
      containers:
      - name: alpine-a
        image: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.20.0
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - while true; do sleep 360000; done
      - name: alpine-c
        image: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.16.9
        imagePullPolicy: IfNotPresent
        args:
        - /bin/sh
        - -c
        - while true; do sleep 360000; done
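The container checks above boil down to grepping `apk list` output for the package version. As a small helper sketch (the function name and sample line are illustrative, not part of the exam environment), the version extraction looks like this:

```shell
# Hypothetical helper (not part of the exam): extract the installed
# libcrypto3 version from `apk list` output in the format shown above.
extract_libcrypto3_version() {
  grep '^libcrypto3-' | sed -E 's/^libcrypto3-([^ ]+) .*/\1/'
}

# In the exam you would pipe the real command into it, e.g.:
#   kubectl exec -n alpine <pod> -c alpine-b -- apk list | extract_libcrypto3_version
echo 'libcrypto3-3.1.4-r5 x86_64 {openssl} (Apache-2.0) [installed]' | extract_libcrypto3_version
# prints: 3.1.4-r5
```

Lines for other packages are filtered out by the leading `grep`, so the function prints nothing for containers that do not ship libcrypto3.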
12. Restrictive Pod Security Standards

You must connect to the correct host. Failure to do so may result in a score of zero.

[candidate@base] $ ssh cks000036

Context

To meet compliance requirements, all user namespaces enforce the restricted Pod Security Standard.

Task

The confidential namespace contains a Deployment that does not conform to the restricted Pod Security Standard, so its Pods cannot be scheduled. Modify this Deployment to conform to the standard and verify that its Pods run correctly.

Note: The Deployment's manifest file can be found at ~/nginx-unprivileged.yaml.

Approach

Because the namespace is restricted, delete the existing Deployment, apply the manifest again, and add whatever the error messages ask for.

Solution

candidate@master01:~$ kubectl delete -f nginx-unprivileged.yaml
deployment.apps "nginx-unprivileged-deployment" deleted
candidate@master01:~$ kubectl apply -f nginx-unprivileged.yaml
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
deployment.apps/nginx-unprivileged-deployment created

The warning printed on re-creation points out the following problems:

1. Container "nginx" must set securityContext.allowPrivilegeEscalation=false.
2. Container "nginx" must set securityContext.capabilities.drop=["ALL"].
3. The pod or container "nginx" must set securityContext.runAsNonRoot=true.
4. The pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost".

Modify the Deployment according to the messages:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-unprivileged-deployment
  namespace: confidential
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginxinc/nginx-unprivileged
        imagePullPolicy: IfNotPresent
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
          runAsNonRoot: true
          seccompProfile:
            type: RuntimeDefault
        ports:
        - containerPort: 8080

kubectl apply -f nginx-unprivileged.yaml
deployment.apps/nginx-unprivileged-deployment configured
candidate@master01:~$ kubectl get deployments.apps -n confidential
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
nginx-unprivileged-deployment   1/1     1            1           16m
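For background, the restricted enforcement described in the Context is normally switched on per namespace through Pod Security Admission labels. The label keys and values below are the standard ones; the exact manifest used by the exam environment is an assumption:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: confidential
  labels:
    # Reject Pods that violate the restricted Pod Security Standard.
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```

This is why the non-conforming Pods fail to schedule at admission time rather than at runtime.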
13. Docker Daemon

You must connect to the correct host. Failure to do so may result in a score of zero.

[candidate@base] $ ssh cks000037

Task

Perform the following tasks to secure the cluster node cks000037.

Remove the user developer from the docker group.
Note: Do not remove the user from any other group.

Reconfigure and restart the Docker daemon to ensure that the socket file at /var/run/docker.sock is owned by the root group.

Reconfigure and restart the Docker daemon to ensure that it does not listen on any TCP port.

Note: When you have finished, make sure the Kubernetes cluster remains healthy.

Solution

1. Remove the user developer from the docker group:

root@node02:/home/candidate# id developer
uid=1001(developer) gid=0(root) groups=0(root),40(src),100(users),998(docker)
root@node02:/home/candidate# gpasswd -d developer docker
Removing user developer from group docker
root@node02:/home/candidate# id developer
uid=1001(developer) gid=0(root) groups=0(root),40(src),100(users)

2. Check the status of the Docker service:

systemctl status docker
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2025-10-27 17:12:36 CST; 3min 23s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 1491 (dockerd)
      Tasks: 9
     Memory: 50.9M
     CGroup: /system.slice/docker.service
             └─1491 /usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock

Oct 27 17:12:36 node02 dockerd[1491]: time="2025-10-27T17:12:36+08:00" level=info msg="Loading containers: done."
Oct 27 17:12:36 node02 dockerd[1491]: time="2025-10-27T17:12:36+08:00" level=warning msg="[DEPRECATION NOTICE]: API is accessible on http://0.0.0.0:2375 without encryption..."
Oct 27 17:12:36 node02 dockerd[1491]: time="2025-10-27T17:12:36+08:00" level=warning msg="WARNING: No swap limit support"
Oct 27 17:12:36 node02 dockerd[1491]: time="2025-10-27T17:12:36+08:00" level=info msg="Docker daemon" commit=bbd0a17 containerd-snapshotter=false storage-driver=overlay2
Oct 27 17:12:36 node02 dockerd[1491]: time="2025-10-27T17:12:36+08:00" level=info msg="Initializing buildkit"
Oct 27 17:12:36 node02 dockerd[1491]: time="2025-10-27T17:12:36+08:00" level=info msg="Completed buildkit initialization"
Oct 27 17:12:36 node02 dockerd[1491]: time="2025-10-27T17:12:36+08:00" level=info msg="Daemon has completed initialization"
Oct 27 17:12:36 node02 dockerd[1491]: time="2025-10-27T17:12:36+08:00" level=info msg="API listen on [::]:2375"
Oct 27 17:12:36 node02 systemd[1]: Started Docker Application Container Engine.

The daemon is currently started with -H tcp://0.0.0.0:2375, and the API listens on [::]:2375.

3. Edit the unit files:

cd /lib/systemd/system/
ll docker*
-rw-r--r-- 1 root root 1749 Mar 15  2025 docker.service
-rw-r--r-- 1 root root  295 Mar 15  2025 docker.socket

cat docker.socket
[Unit]
Description=Docker Socket for the API

[Socket]
# If /var/run is not implemented as a symlink to /run, you may need to
# specify ListenStream=/var/run/docker.sock instead.
ListenStream=/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=root      # changed to root

[Install]
WantedBy=sockets.target

cat docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
#ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutStartSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from Service to Unit in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl restart docker
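As a quick sanity check on the edited unit file, the "no TCP port" requirement comes down to whether the active (uncommented) ExecStart line still carries a `-H tcp://...` flag. A minimal sketch (the helper name is made up; on the exam node you would feed it the output of `systemctl cat docker` instead of the sample lines):

```shell
# Hypothetical check (illustrative): does an active dockerd ExecStart line
# still configure a TCP listener via `-H tcp://...`? Commented lines
# (starting with #) are ignored because the pattern is anchored at ^ExecStart=.
has_tcp_listener() {
  grep -Eq '^ExecStart=.*-H[ =]tcp://'
}

# Sample line standing in for the hardened unit file:
if printf 'ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock\n' | has_tcp_listener; then
  echo "still listening on TCP"
else
  echo "no TCP listener"
fi
# prints: no TCP listener
```

After `systemctl restart docker`, `ss -tlnp` on the node should likewise show nothing bound to port 2375.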
14. Cilium Network Policy

You must connect to the correct host. Failure to do so may result in a score of zero.

[candidate@base] $ ssh cks000039

Context

For this question, refer to the CiliumNetworkPolicy documentation: https://docs.cilium.io/en/stable/network/kubernetes/policy/#ciliumnetworkpolicy

Task

Use Cilium to perform the following tasks to secure the internal and external network traffic of an existing application.

Note: You may use a browser to access Cilium's documentation.

First, create an L4 CiliumNetworkPolicy named nodebb in the nodebb namespace and configure it as follows:
⚫ Allow all Pods running in the ingress-nginx namespace to access the Pods of the nodebb Deployment.
⚫ Require mutual authentication.

Then extend the policy created in the previous step as follows:
⚫ Allow the host to access the Pods of the nodebb Deployment.
⚫ Do not use mutual authentication.

Approach

I did not get this question in my exam, so treat this as background only; the URL given in the question contains reference policies. Points worth remembering:

1. The namespace selector label is k8s:io.kubernetes.pod.namespace: ingress-nginx; unlike the selector in a plain NetworkPolicy, it must start with the k8s: prefix.
2. The requirements themselves are not complicated; the field names you need can be looked up directly with kubectl explain, so it is enough to remember their names and meanings.

Solution

cat cilium.yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: nodebb
  namespace: nodebb
spec:
  endpointSelector:
    matchLabels:
      app: nodebb
  ingress:
  - fromEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: ingress-nginx
    authentication:
      mode: required
  - fromEntities:
    - host
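For contrast with the k8s:-prefixed label: in a plain Kubernetes NetworkPolicy, allowing ingress from a namespace is expressed with a namespaceSelector on the standard kubernetes.io/metadata.name label instead. A sketch for comparison only; plain NetworkPolicy has no counterpart to Cilium's mutual authentication or host entity:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nodebb
  namespace: nodebb
spec:
  podSelector:
    matchLabels:
      app: nodebb
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Selects the ingress-nginx namespace by its automatic metadata.name label.
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx
```

Keeping the two label conventions straight is the main trap in this question.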