k8s Training Handbook
deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world            # Pod label; this matches the Service's label selector
  revisionHistoryLimit: 5
  progressDeadlineSeconds: 300
  minReadySeconds: 10             # wait 10s between each Pod update
  strategy:
    type: RollingUpdate           # update using the RollingUpdate strategy
    rollingUpdate:
      maxUnavailable: 1           # never more than 1 Pod below the desired count
      maxSurge: 1                 # never more than 1 Pod above the desired count
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: nigelpoulton/k8sbook:1.0
        ports:
        - containerPort: 8080
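With replicas: 10, maxSurge: 1 and maxUnavailable: 1, the rollout keeps the live Pod count between 9 and 11 at all times. The arithmetic can be sketched in a few lines (an illustrative model only, not the actual Deployment controller logic):

```python
# Illustrative model of how maxSurge / maxUnavailable bound the Pod count
# during a RollingUpdate. Not the real controller, just the arithmetic.

def rollout_bounds(replicas: int, max_surge: int, max_unavailable: int) -> tuple:
    """Return (min_pods, max_pods) allowed at any point during the update."""
    return replicas - max_unavailable, replicas + max_surge

low, high = rollout_bounds(10, max_surge=1, max_unavailable=1)
print(f"Pod count stays between {low} and {high}")  # between 9 and 11
```

Both fields also accept percentages in real manifests; the sketch covers only the absolute-number case used here.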
Create the matching Service
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-world
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
    protocol: TCP
  selector:
    app: hello-world              # label selector: the Service looks for Pods with app=hello-world
Watch the rolling update
kubectl rollout status deployment hello-deploy
View the revision history
kubectl rollout history deployment hello-deploy
View the ReplicaSets after the update
kubectl get rs
Roll back to a specific revision
kubectl rollout undo deployment hello-deploy --to-revision=1
Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-ctr
        image: nigelpoulton/k8sbook:latest
        ports:
        - containerPort: 8080
Create a Service from the command line
kubectl expose deployment web-deploy --name=hello-svc --target-port=8080 --type=NodePort
Inspect the Service
[root@master svc]# kubectl describe svc hello-svc
Name: hello-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=hello-world # the label defined in the label selector
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.12.205.3 # the Service's internal ClusterIP (VIP)
IPs: 10.12.205.3
Port: <unset> 8080/TCP
TargetPort: 8080/TCP # the Pod port the application listens on
NodePort: <unset> 32458/TCP # the Service port reachable from outside the cluster
Endpoints: 10.244.104.59:8080,10.244.104.60:8080,10.244.104.61:8080 + 7 more... # dynamic list of healthy Pod IPs matching the label selector
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Create the Service declaratively
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    chapter: services
spec:
  # ipFamilyPolicy: PreferDualStack
  # ipFamilies:
  # - IPv4
  # - IPv6
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
    targetPort: 8080
    protocol: TCP
  selector:
    app: hello-world
Inspect the Service
kubectl get svc hello-svc
kubectl describe svc hello-svc
View the Endpoints
kubectl get ep hello-svc
Rolling updates with labels
Initial state
| Service app=biz1 zone=prod |
During the update
Pods carry a ver label; while ver is absent from the Service selector, both old- and new-version Pods receive Service traffic
| Service app=biz1 zone=prod |
| Pod1 app=biz1 zone=prod ver=4.1 |
| Pod2 app=biz1 zone=prod ver=4.1 |
| Pod3 app=biz1 zone=prod ver=4.2 |
| Pod4 app=biz1 zone=prod ver=4.2 |
After the update
Add ver=4.2 to the Service selector and traffic flows only to new-version Pods; change it to ver=4.1 to route traffic back to the old version
| Service app=biz1 zone=prod ver=4.2 |
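The switch works because a Service forwards traffic only to Pods whose labels contain every key/value pair in its selector. A small sketch of that subset match (hypothetical Pod data, not Kubernetes code):

```python
# A Service selects a Pod when every selector key/value appears in the Pod's labels.
def selects(selector: dict, pod_labels: dict) -> bool:
    return all(pod_labels.get(k) == v for k, v in selector.items())

pods = {
    "pod1": {"app": "biz1", "zone": "prod", "ver": "4.1"},
    "pod3": {"app": "biz1", "zone": "prod", "ver": "4.2"},
}
# During the update the selector has no ver label: both versions get traffic.
during = {"app": "biz1", "zone": "prod"}
# After adding ver=4.2, only the new Pods get traffic.
after = {"app": "biz1", "zone": "prod", "ver": "4.2"}

print([n for n, l in pods.items() if selects(during, l)])  # ['pod1', 'pod3']
print([n for n, l in pods.items() if selects(after, l)])   # ['pod3']
```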
Service discovery and registration
kubectl get service
kubectl get endpoints
Service registration
- POST the Service configuration to the API Server
- A ClusterIP is allocated
- The configuration is persisted to the cluster store
- Endpoints containing the Pod IPs are created and maintained
- The cluster DNS discovers the new Service
- DNS records are created
- kube-proxy pulls the Service configuration
- IPVS rules are created for load balancing
Service discovery
- The client asks DNS to resolve the Service name
- It receives the ClusterIP
- It sends traffic to the ClusterIP
- No route exists, so traffic goes to the container's default gateway
- Traffic is forwarded to the node
- No route exists, so traffic goes to the node's default gateway
- Traffic is processed by the node's kernel
- It is trapped by an IPVS rule
- The destination IP is rewritten to a Pod IP
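The last two steps, trapping traffic on the ClusterIP and rewriting the destination, can be modelled as a DNAT table plus a round-robin backend pick (an illustrative sketch only, not real IPVS; the IPs reuse the Service example above):

```python
import itertools

# ClusterIP -> backend Pod IPs, as kube-proxy would program into IPVS (illustrative).
ipvs_rules = {"10.12.205.3": ["10.244.104.59", "10.244.104.60", "10.244.104.61"]}
_rr = {vip: itertools.cycle(backends) for vip, backends in ipvs_rules.items()}

def rewrite_destination(dst_ip: str) -> str:
    """If dst_ip is a known ClusterIP, rewrite it to the next Pod IP (round robin)."""
    if dst_ip in _rr:
        return next(_rr[dst_ip])
    return dst_ip  # not a Service VIP: leave the packet alone

print(rewrite_destination("10.12.205.3"))  # 10.244.104.59
print(rewrite_destination("10.12.205.3"))  # 10.244.104.60
print(rewrite_destination("8.8.8.8"))      # 8.8.8.8
```

Real IPVS supports several schedulers (round robin is only the default) and also rewrites ports; the sketch shows just the destination-IP rewrite.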
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: enterprise
  namespace: dev
  labels:
    app: enterprise
spec:
  selector:
    matchLabels:
      app: enterprise
  replicas: 2
  template:
    metadata:
      labels:
        app: enterprise
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - image: nigelpoulton/k8sbook:text-dev
        name: enterprise-ctr
        ports:
        - containerPort: 8080
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: enterprise
  namespace: prod
  labels:
    app: enterprise
spec:
  selector:
    matchLabels:
      app: enterprise
  replicas: 2
  template:
    metadata:
      labels:
        app: enterprise
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - image: nigelpoulton/k8sbook:text-prod
        name: enterprise-ctr
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: ent
  namespace: dev
spec:
  ports:
  - port: 8080
  selector:
    app: enterprise
---
apiVersion: v1
kind: Service
metadata:
  name: ent
  namespace: prod
spec:
  ports:
  - port: 8080
  selector:
    app: enterprise
---
apiVersion: v1
kind: Pod
metadata:
  name: jump
  namespace: dev
spec:
  terminationGracePeriodSeconds: 5
  containers:
  - image: ubuntu
    name: jump
    tty: true
    stdin: true
[root@master ~]# kubectl get all -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pod/enterprise-76fc64bd9-h5gqg   1/1     Running   0          3h20m
pod/enterprise-76fc64bd9-kpxh9   1/1     Running   0          3h20m
pod/jump                         1/1     Running   0          3h20m
NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/ent   ClusterIP   10.7.27.61   <none>        8080/TCP   3h20m
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/enterprise   2/2     2            2           3h20m
NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/enterprise-76fc64bd9   2         2         2       3h20m
[root@master ~]# kubectl get all -n prod
NAME                              READY   STATUS    RESTARTS   AGE
pod/enterprise-5cfcd578d7-lknbj   1/1     Running   0          3h27m
pod/enterprise-5cfcd578d7-mwzcb   1/1     Running   0          3h27m
NAME          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/ent   ClusterIP   10.2.20.188   <none>        8080/TCP   3h27m
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/enterprise   2/2     2            2           3h27m
NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/enterprise-5cfcd578d7   2         2         2       3h27m
[root@master ~]# kubectl exec -it jump -n dev -- bash
root@jump:/# cat /etc/resolv.conf
nameserver 10.0.0.10
search dev.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
root@jump:/# apt-get update && apt-get install curl -y
root@jump:/# curl ent:8080
Hello from the DEV Namespace!
Hostname: enterprise-76fc64bd9-h5gqg
root@jump:/# curl ent.dev.svc.cluster.local:8080
Hello from the DEV Namespace!
Hostname: enterprise-76fc64bd9-h5gqg
root@jump:/# curl ent.prod.svc.cluster.local:8080
Hello from the PROD Namespace!
Hostname: enterprise-5cfcd578d7-mwzcb
# curl another Pod's Service and port from inside a Pod:
# serviceName.namespace.svc.cluster.local:port
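The curl commands above follow that fixed naming pattern; a one-line helper that builds the same FQDNs (assuming the default cluster.local cluster domain):

```python
def service_fqdn(service: str, namespace: str, port: int,
                 cluster_domain: str = "cluster.local") -> str:
    """Build the fully qualified Service name used in the curl tests above."""
    return f"{service}.{namespace}.svc.{cluster_domain}:{port}"

print(service_fqdn("ent", "dev", 8080))   # ent.dev.svc.cluster.local:8080
print(service_fqdn("ent", "prod", 8080))  # ent.prod.svc.cluster.local:8080
```

The search list in /etc/resolv.conf is what lets the short name ent resolve from inside the dev namespace without the full suffix.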
Troubleshooting cluster DNS
- Pods: managed by the coredns Deployment
- Service: a ClusterIP Service named kube-dns, listening on TCP/UDP port 53
- Endpoints: also named kube-dns
All cluster-DNS objects carry the label k8s-app=kube-dns
- First check that the coredns Deployment and the Pods it manages are running
[root@master ~]# kubectl get deploy -n kube-system -l k8s-app=kube-dns
NAME READY UP-TO-DATE AVAILABLE AGE
coredns 2/2 2 2 33d
[root@master ~]# kubectl get pods -n kube-system -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
coredns-857d9ff4c9-6cb2b 1/1 Running 28 (3d5h ago) 33d
coredns-857d9ff4c9-tvrff 1/1 Running 28 (3d5h ago) 33d
[root@master ~]# kubectl logs -n kube-system coredns-857d9ff4c9-6cb2b
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2
- Check the Service and Endpoints objects; make sure the ClusterIP has an IP address and is listening on TCP/UDP port 53
[root@master ~]# kubectl get svc kube-dns -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP,9153/TCP 33d
[root@master ~]# kubectl get ep kube-dns -n kube-system
NAME ENDPOINTS AGE
kube-dns 10.244.219.68:53,10.244.219.69:53,10.244.219.68:53 + 3 more... 33d
- Once the DNS components are confirmed healthy, use the gcr.io/kubernetes-e2e-test-images/dnsutils:latest image
The image contains the ping, traceroute, curl, dig and nslookup commands
apt install iputils-ping -y
apt install dnsutils          # provides nslookup and dig
apt install traceroute
root@ubuntu-pod:/# nslookup kubernetes
# returns
;; Got recursion not available from 10.0.0.10
Server:         10.0.0.10
Address:        10.0.0.10#53
Name:   kubernetes.default.svc.cluster.local
Address: 10.0.0.1
;; Got recursion not available from 10.0.0.10
volume
nfs
yum install -y nfs-common nfs-utils rpcbind
mkdir /nfsdata
chmod 666 /nfsdata
chown nfsnobody /nfsdata
chgrp nfsnobody /nfsdata # if the nfsnobody user does not exist, use nobody
cat /etc/exports
/nfsdata *(rw,no_root_squash,no_all_squash,sync)
systemctl restart nfs-server
systemctl restart rpcbind
[root@master script]# ssh node1
[root@node1 ~]# mount -t nfs master:/nfsdata /nfsdata
[root@node1 ~]# cat /etc/fstab
10.0.17.100:/nfsdata /nfsdata nfs defaults,_netdev 0 0
pv
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv0                            # PV name
spec:
  capacity:                               # capacity
    storage: 10Gi                         # storage size
  accessModes:                            # access modes
  - ReadWriteOnce                         # one node mounts the volume read-write; block storage only supports RWO
# - ReadWriteMany                         # many nodes mount the volume read-write, e.g. NFS
# - ReadOnlyMany                          # read-only, bindable by multiple PVCs
  persistentVolumeReclaimPolicy: Recycle  # reclaim policy
  storageClassName: nfs                   # StorageClass name
  nfs:
    path: /nfsdata/share                  # NFS export path
    server: 10.0.17.100                   # NFS server address
[root@master volume]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
nfspv0 10Gi RWO Recycle Bound default/nfspvc0 nfs <unset>
pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfspvc0                           # PVC name
spec:
  accessModes:                            # access modes
  - ReadWriteOnce                         # one node mounts the volume read-write; block storage only supports RWO
# - ReadWriteMany                         # many nodes mount the volume read-write, e.g. NFS
# - ReadOnlyMany                          # read-only, bindable by multiple PVCs
  storageClassName: nfs                   # StorageClass name
  resources:
    requests:
      storage: 5Gi
[root@master volume]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
nfspvc0 Bound nfspv0 10Gi RWO nfs <unset> 11m
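The 5Gi claim bound to the 10Gi volume because binding only requires a PV with the same StorageClass, a compatible access mode, and at least the requested capacity; the claim then gets the whole PV, which is why its CAPACITY shows 10Gi. A simplified model of that matching (illustrative only, not the real binding controller):

```python
def gi(size: str) -> int:
    """Parse a size like '10Gi' into an integer number of GiB (illustrative)."""
    return int(size.rstrip("Gi"))

def can_bind(pv: dict, pvc: dict) -> bool:
    """A PV satisfies a PVC if class, access mode and capacity all match."""
    return (pv["storageClassName"] == pvc["storageClassName"]
            and pvc["accessMode"] in pv["accessModes"]
            and gi(pv["capacity"]) >= gi(pvc["request"]))

pv = {"name": "nfspv0", "capacity": "10Gi",
      "accessModes": ["ReadWriteOnce"], "storageClassName": "nfs"}
pvc = {"name": "nfspvc0", "request": "5Gi",
       "accessMode": "ReadWriteOnce", "storageClassName": "nfs"}

print(can_bind(pv, pvc))  # True
```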
Create an Ubuntu Pod whose /data directory is bound to nfspvc0
apiVersion: v1
kind: Pod
metadata:
  name: volpod
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfspvc0                  # the PVC to use
  containers:
  - name: ubuntu-ctr
    image: ubuntu:latest
    command:
    - /bin/bash
    - "-c"
    - "sleep 60m"
    volumeMounts:
    - mountPath: /data                    # mount point inside the Ubuntu container
      name: data
[root@master volume]# kubectl exec -it volpod -- bash
root@volpod:/# cd /data/
root@volpod:/data# ls
1 3 8716283 876 default
# /data contains the files from /nfsdata/share on the NFS server
StorageClass
yum install -y nfs-utils rpcbind
mkdir /nfsdata/share
chown nobody /nfsdata/share
echo "/nfsdata/share *(rw,sync,no_subtree_check)" >> /etc/exports
systemctl enable nfs-server && systemctl enable rpcbind
systemctl restart nfs-server && systemctl restart rpcbind
showmount -e master
Deploy nfs-client-provisioner (the nfs-storageclass namespace must exist first)
vim nfs-client-provisioner.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: nfs-storageclass
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        # image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
        # image: k8s.dockerproxy.com/sig-storage/nfs-subdir-external-provisioner:v4.0.2
        image: registry.cn-beijing.aliyuncs.com/blice_haiwai/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          # value: <YOUR NFS SERVER HOSTNAME>
          value: 10.0.17.100
        - name: NFS_PATH
          # value: /var/nfs
          value: /nfsdata/share
      volumes:
      - name: nfs-client-root
        nfs:
          # server: <YOUR NFS SERVER HOSTNAME>
          server: 10.0.17.100
          # NFS share path
          path: /nfsdata/share
RBAC authorization
vim RBAC.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with the namespace where the provisioner is deployed
  namespace: nfs-storageclass
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with the namespace where the provisioner is deployed
    namespace: nfs-storageclass
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with the namespace where the provisioner is deployed
  namespace: nfs-storageclass
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with the namespace where the provisioner is deployed
  namespace: nfs-storageclass
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with the namespace where the provisioner is deployed
    namespace: nfs-storageclass
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Create the StorageClass
vim StorageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client   # StorageClass is cluster-scoped and takes no namespace
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  pathPattern: "${.PVC.namespace}/${.PVC.name}"
  onDelete: delete   # deletion mode
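With this pathPattern, each PVC gets a subdirectory named <namespace>/<name> under the NFS export, which is exactly the /nfsdata/share/default/test-claim path that shows up later. A sketch of the substitution (illustrative only, not the provisioner's actual template engine):

```python
def expand_path_pattern(pattern: str, pvc_namespace: str, pvc_name: str) -> str:
    """Expand the provisioner's pathPattern placeholders (illustrative)."""
    return (pattern
            .replace("${.PVC.namespace}", pvc_namespace)
            .replace("${.PVC.name}", pvc_name))

path = expand_path_pattern("${.PVC.namespace}/${.PVC.name}", "default", "test-claim")
print(path)  # default/test-claim
```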
Test Pod
vim test.yaml
kind: PersistentVolumeClaim   # create the PVC
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
  - ReadWriteMany              # read-write on multiple nodes
  resources:
    requests:
      storage: 1Mi             # request 1 MiB of storage
  storageClassName: nfs-client # StorageClass name
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: wangyanglinux/myapp:v1.0
    volumeMounts:
    - name: nfs-pvc
      mountPath: "/usr/local/nginx/html"
  restartPolicy: "Never"
  volumes:                     # define the volumes
  - name: nfs-pvc              # volume backed by the PVC
    persistentVolumeClaim:
      claimName: test-claim
[root@master ~]# ls /nfsdata/share/default/test-claim/
hostname.html pppppp
cm
A ConfigMap is typically used to store non-sensitive data such as
- environment variable values
- whole configuration files (e.g. web server or database configs)
- hostnames
- service ports
- account names
Format:
key: value
Ways to inject data into the main container
- environment variables
- container startup command arguments
- files on a volume (the most flexible approach)
Create from the command line
# --from-literal supplies a literal key=value pair
kubectl create configmap test1map \
--from-literal shortname=msb.com \
--from-literal longname=magicsandbox.com
[root@master ~]# kubectl get cm test1map
NAME       DATA   AGE
test1map   2      14s
[root@master ~]# kubectl describe cm test1map
Name:         test1map
Namespace:    default
Labels:       <none>
Annotations:  <none>
Data
====
longname:
----
magicsandbox.com
shortname:
----
msb.com
BinaryData
====
Events:  <none>
# --from-file creates the ConfigMap from a file
[root@master cm]# kubectl create cm testmap2 --from-file test.txt
configmap/testmap2 created
[root@master cm]# kubectl describe cm testmap2
Name: testmap2
Namespace: default
Labels: <none>
Annotations:  <none>
Data
====
test.txt:
----
ConfigMap,HelloWorld!
BinaryData
====
Events:  <none>
[root@master cm]# kubectl get cm testmap2 -o yaml
apiVersion: v1
data:
  test.txt: |
    ConfigMap,HelloWorld!
kind: ConfigMap
metadata:
  creationTimestamp: "2024-09-25T01:34:00Z"
  name: testmap2
  namespace: default
  resourceVersion: "719604"
  uid: 6c0fd794-5e89-40de-b3a3-74c02799f9cd
Declarative creation
kind: ConfigMap
apiVersion: v1
metadata:
  name: multimap
data:
  given: Nigel
  family: Poulton
[root@master cm]# kubectl apply -f multimap.yaml
configmap/multimap created
[root@master cm]# kubectl describe cm multimap
Name: multimap
Namespace: default
Labels: <none>
Annotations:  <none>
Data
====
family:
----
Poulton
given:
----
Nigel
BinaryData
====
Events:  <none>
Define a map with a single entry
# the entry is test.conf; everything after the | is treated as a single literal value
key: test.conf
value:
  env = plex-test
  endpoint = 0.0.0.0:31001
  char = utf8
  vault = PLEX/test
  log-size = 512M
kind: ConfigMap
apiVersion: v1
metadata:
  name: singlemap
data:
  test.conf: |
    env = plex-test
    endpoint = 0.0.0.0:31001
    char = utf8
    vault = PLEX/test
    log-size = 512M
[root@master cm]# kubectl describe cm singlemap
Name: singlemap
Namespace: default
Labels: <none>
Annotations:  <none>
Data
====
test.conf:
----
env = plex-test
endpoint = 0.0.0.0:31001
char = utf8
vault = PLEX/test
log-size = 512M
BinaryData
====
Events:  <none>
As environment variables
kind: ConfigMap
apiVersion: v1
metadata:
  name: multimap
data:
  given: Nigel
  family: Poulton
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    chapter: configmaps
  name: envpod
spec:
  containers:
  - name: ctr1
    image: busybox
    command: ["sleep"]
    args: ["infinity"]
    env:                        # environment variable configuration
    - name: FIRSTNAME           # environment variable name
      valueFrom:
        configMapKeyRef:
          name: multimap        # ConfigMap to reference
          key: given            # key to reference; its value is Nigel
    - name: LASTNAME
      valueFrom:
        configMapKeyRef:
          name: multimap
          key: family
Inspect the Pod's environment variables
[root@master cm]# kubectl exec envpod -- env | grep NAME
HOSTNAME=envpod
FIRSTNAME=Nigel
LASTNAME=Poulton
As container startup command arguments
kind: ConfigMap
apiVersion: v1
metadata:
  name: multimap
data:
  given: Nigel
  family: Poulton
---
apiVersion: v1
kind: Pod
metadata:
  name: startup-pod
  labels:
    chapter: configmaps
spec:
  restartPolicy: OnFailure
  containers:
  - name: args1
    image: busybox
    command: [ "/bin/sh", "-c", "echo First name $(FIRSTNAME) last name $(LASTNAME)", "wait" ]  # print $(FIRSTNAME) and $(LASTNAME) from the environment
    env:                        # injected as environment variables
    - name: FIRSTNAME
      valueFrom:
        configMapKeyRef:
          name: multimap
          key: given
    - name: LASTNAME
      valueFrom:
        configMapKeyRef:
          name: multimap
          key: family
[root@master cm]# kubectl describe pod startup-pod
Environment:
  FIRSTNAME:  <set to the key 'given' of config map 'multimap'>   Optional: false
  LASTNAME:   <set to the key 'family' of config map 'multimap'>  Optional: false
[root@master cm]# kubectl logs startup-pod
First name Nigel last name Poulton
ConfigMaps and volumes
Steps
- Create the ConfigMap
- Define a ConfigMap volume in the Pod template
- Mount the ConfigMap volume into the container
- Each ConfigMap entry appears as a separate file inside the container
kind: ConfigMap
apiVersion: v1
metadata:
  name: multimap
data:
  given: Nigel
  family: Poulton
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    chapter: configmaps
  name: volmap
spec:
  volumes:
  - name: volmap
    configMap:
      name: multimap
  containers:
  - name: ctr1
    image: ubuntu
    command: [ "sleep" ]
    args: [ "3600" ]
    volumeMounts:
    - name: volmap
      mountPath: /etc/name
kubectl exec -it volmap -- bash
root@volmap:~# ll /etc/name
total 0
drwxrwxrwx 3 root root 87 Sep 25 02:49 ./
drwxr-xr-x 1 root root 18 Sep 25 02:49 ../
drwxr-xr-x 2 root root 33 Sep 25 02:49 ..2024_09_25_02_49_39.2473126764/
lrwxrwxrwx 1 root root 32 Sep 25 02:49 ..data -> ..2024_09_25_02_49_39.2473126764/
lrwxrwxrwx 1 root root 13 Sep 25 02:49 family -> ..data/family
lrwxrwxrwx 1 root root 12 Sep 25 02:49 given -> ..data/given
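The listing shows each entry as a symlink through a ..data link into a timestamped directory. That is how the kubelet makes ConfigMap volume updates atomic: it writes the new files into a fresh directory, then swaps the ..data symlink in a single operation. A minimal Python reproduction of the same pattern (illustrative only; the directory names are hypothetical, and this is not kubelet code):

```python
import os
import tempfile

root = tempfile.mkdtemp()  # stands in for the mounted volume directory

def publish(version_dir: str, entries: dict) -> None:
    """Write entries into a new dir, then atomically repoint the ..data symlink."""
    vdir = os.path.join(root, version_dir)
    os.mkdir(vdir)
    for key, value in entries.items():
        with open(os.path.join(vdir, key), "w") as f:
            f.write(value)
    tmp_link = os.path.join(root, "..data_tmp")
    os.symlink(version_dir, tmp_link)
    os.rename(tmp_link, os.path.join(root, "..data"))  # atomic swap on POSIX

publish("..2024_09_25", {"given": "Nigel", "family": "Poulton"})
publish("..2024_09_26", {"given": "Updated", "family": "Poulton"})
with open(os.path.join(root, "..data", "given")) as f:
    print(f.read())  # Updated
```

Readers always see a complete old or complete new set of files, never a half-written mix, which is why mounted ConfigMaps can be updated in place.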
StatefulSet (for stateful applications: session data, databases)
StatefulSet characteristics
- Pod names are predictable and stable
- DNS hostnames are predictable and stable
- volume bindings are predictable and stable
Creating a StatefulSet
First create the StorageClass as in the volume chapter
Then create the governing headless Service, which manages the DNS subdomains for all Pods of the StatefulSet
# Headless Service for StatefulSet Pod DNS names
apiVersion: v1
kind: Service
metadata:
  name: dullahan
  labels:
    app: web
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: web
Deploy the StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: tkb-sts                  # StatefulSet name; all Pod names are based on tkb-sts
spec:
  replicas: 3                    # three replicas: tkb-sts-0, tkb-sts-1, tkb-sts-2, created in order
  selector:
    matchLabels:
      app: web
  serviceName: "dullahan"        # the governing Service created above
  template:                      # Pod template
    metadata:
      labels:
        app: web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: ctr-web
        image: nginx:latest
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: webroot
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:          # claim template: a PVC is created and named automatically for each new Pod
  - metadata:
      name: webroot
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-client"   # StorageClass name
      resources:
        requests:
          storage: 1Gi
[root@master calico]# kubectl get sts
NAME READY AGE
tkb-sts 3/3 14m
[root@master calico]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tkb-sts-0 1/1 Running 0 11m 10.244.166.130 node1 <none> <none>
tkb-sts-1 1/1 Running 0 7m47s 10.244.104.0 node2 <none> <none>
tkb-sts-2 1/1 Running 0 7m10s 10.244.104.1 node2 <none> <none>
[root@master calico]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pvc-381d656d-342f-4276-aa95-7af89ea75ea3 1Gi RWO Delete Bound default/webroot-tkb-sts-2 nfs-client <unset> 9m50s
pvc-c5f418d8-90b7-40aa-adeb-e5cb63ce9cf6 1Gi RWO Delete Bound default/webroot-tkb-sts-0 nfs-client <unset> 14m
pvc-d9004ba6-3a5f-4251-b741-21e2adc8a9a3 1Gi RWO Delete Bound default/webroot-tkb-sts-1 nfs-client <unset> 10m
[root@master calico]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
webroot-tkb-sts-0 Bound pvc-c5f418d8-90b7-40aa-adeb-e5cb63ce9cf6 1Gi RWO nfs-client <unset> 14m
webroot-tkb-sts-1 Bound pvc-d9004ba6-3a5f-4251-b741-21e2adc8a9a3 1Gi RWO nfs-client <unset> 10m
webroot-tkb-sts-2 Bound pvc-381d656d-342f-4276-aa95-7af89ea75ea3 1Gi RWO nfs-client <unset> 10m
[root@master ~]# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
dullahan ClusterIP None <none> 80/TCP 53m app=web
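The Pod and PVC names above are fully determined by the StatefulSet name, the claim-template name and the ordinal, which is what makes the volume binding stable across restarts. A sketch of the naming rule:

```python
def sts_names(sts: str, claim_template: str, replicas: int):
    """Derive the predictable Pod and PVC names for a StatefulSet (naming rule only)."""
    pods = [f"{sts}-{i}" for i in range(replicas)]
    pvcs = [f"{claim_template}-{pod}" for pod in pods]
    return pods, pvcs

pods, pvcs = sts_names("tkb-sts", "webroot", 3)
print(pods)  # ['tkb-sts-0', 'tkb-sts-1', 'tkb-sts-2']
print(pvcs)  # ['webroot-tkb-sts-0', 'webroot-tkb-sts-1', 'webroot-tkb-sts-2']
```

When tkb-sts-1 is recreated later in this chapter, it gets the same name and therefore reattaches to the same webroot-tkb-sts-1 claim.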
Endpoint testing
Deploy a jump Pod
apiVersion: v1
kind: Pod
metadata:
  name: jump-pod
spec:
  terminationGracePeriodSeconds: 1
  containers:
  - image: nigelpoulton/curl:1.0
    name: jump-ctr
    tty: true
    stdin: true
Test from inside the jump Pod
[root@master ~]# kubectl exec -it jump-pod -- bash
root@jump-pod:/# dig SRV dullahan.default.svc.cluster.local
;; ANSWER SECTION:
dullahan.default.svc.cluster.local. 30 IN SRV 0 33 80 tkb-sts-0.dullahan.default.svc.cluster.local.
dullahan.default.svc.cluster.local. 30 IN SRV 0 33 80 tkb-sts-1.dullahan.default.svc.cluster.local.
dullahan.default.svc.cluster.local. 30 IN SRV 0 33 80 tkb-sts-2.dullahan.default.svc.cluster.local.
If the names do not resolve, delete the coredns Pods and let Kubernetes self-heal them
Create a new Ubuntu Pod
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-pod
  labels:
    app: web                    # matches the Service selector
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep"]
    args: ["3600"]
    ports:
    - containerPort: 80
      name: web
root@jump-pod:/# dig SRV dullahan.default.svc.cluster.local
tkb-sts-1.dullahan.default.svc.cluster.local. 30 IN A 10.244.104.0
tkb-sts-0.dullahan.default.svc.cluster.local. 30 IN A 10.244.166.130
tkb-sts-2.dullahan.default.svc.cluster.local. 30 IN A 10.244.104.1
10-244-166-133.dullahan.default.svc.cluster.local. 30 IN A 10.244.166.133
Scaling a StatefulSet
Edit the StatefulSet
replicas: 2
[root@master ~]# kubectl edit sts tkb-sts
statefulset.apps/tkb-sts edited
[root@master ~]# kubectl get sts
NAME READY AGE
tkb-sts 2/2 19h
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
tkb-sts-0 1/1 Running 0 19h
tkb-sts-1 1/1 Running 0 19h
# there are still 3 PVCs: scaling down or up does not delete the PVCs associated with Pod replicas
[root@master ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
webroot-tkb-sts-0 Bound pvc-c5f418d8-90b7-40aa-adeb-e5cb63ce9cf6 1Gi RWO nfs-client <unset> 19h
webroot-tkb-sts-1 Bound pvc-d9004ba6-3a5f-4251-b741-21e2adc8a9a3 1Gi RWO nfs-client <unset> 19h
webroot-tkb-sts-2 Bound pvc-381d656d-342f-4276-aa95-7af89ea75ea3 1Gi RWO nfs-client <unset> 19h
# check which Pod uses each PVC
[root@master ~]# kubectl describe pvc webroot-tkb-sts-0 | grep Used
Used By: tkb-sts-0
[root@master ~]# kubectl describe pvc webroot-tkb-sts-1 | grep Used
Used By: tkb-sts-1
[root@master ~]# kubectl describe pvc webroot-tkb-sts-2 | grep Used
Used By: <none>
# change the StatefulSet replica count to 4
[root@master ~]# kubectl get sts
NAME READY AGE
tkb-sts 4/4 19h
[root@master ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
webroot-tkb-sts-0 Bound pvc-c5f418d8-90b7-40aa-adeb-e5cb63ce9cf6 1Gi RWO nfs-client <unset> 19h
webroot-tkb-sts-1 Bound pvc-d9004ba6-3a5f-4251-b741-21e2adc8a9a3 1Gi RWO nfs-client <unset> 19h
webroot-tkb-sts-2 Bound pvc-381d656d-342f-4276-aa95-7af89ea75ea3 1Gi RWO nfs-client <unset> 19h
webroot-tkb-sts-3 Bound pvc-aa15e166-d805-4820-a592-1350bcfcb179 1Gi RWO nfs-client <unset> 43s
# check which Pod uses each PVC
[root@master ~]# kubectl describe pvc webroot-tkb-sts-0 | grep Used
Used By: tkb-sts-0
[root@master ~]# kubectl describe pvc webroot-tkb-sts-1 | grep Used
Used By: tkb-sts-1
[root@master ~]# kubectl describe pvc webroot-tkb-sts-2 | grep Used
Used By: tkb-sts-2
[root@master ~]# kubectl describe pvc webroot-tkb-sts-3 | grep Used
Used By: tkb-sts-3
# list the directories under the NFS export
[root@master ~]# ls /nfsdata/share/default/
webroot-tkb-sts-0 webroot-tkb-sts-1 webroot-tkb-sts-2 webroot-tkb-sts-3
Pod start and stop ordering
spec.podManagementPolicy controls how Pods are started and stopped
- OrderedReady: ordered management (the default)
- Parallel: Pods are created and deleted in parallel
Performing a rolling upgrade
Upgrades run from the Pod with the highest ordinal down to the lowest, updating one Pod at a time
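The ordering rules above can be summarized in a few lines (an illustrative model of the ordinal ordering, not controller code):

```python
def startup_order(replicas: int) -> list:
    """OrderedReady creates Pods one at a time, from ordinal 0 upward."""
    return list(range(replicas))

def rolling_update_order(replicas: int) -> list:
    """Rolling upgrades replace Pods from the highest ordinal down to 0."""
    return list(range(replicas - 1, -1, -1))

print(startup_order(3))         # [0, 1, 2]
print(rolling_update_order(3))  # [2, 1, 0]
```

Scale-down follows the same reverse order as the upgrade: the highest ordinal is terminated first.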
Simulating a failure
Delete a Pod and watch its state
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
tkb-sts-0 1/1 Running 0 19h
tkb-sts-1 1/1 Running 0 19h
tkb-sts-2 1/1 Running 0 12m
tkb-sts-3 1/1 Running 0 12m
[root@master ~]# kubectl describe pod tkb-sts-1
Name: tkb-sts-1
Namespace: default
Status: Running
IP: 10.244.104.0
Volumes:
  webroot:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  webroot-tkb-sts-1
[root@master ~]# kubectl delete pod tkb-sts-1
[root@master ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
tkb-sts-0 1/1 Running 0 19h
tkb-sts-1 0/1 Terminating 0 1s
tkb-sts-2 1/1 Running 0 15m
tkb-sts-3 1/1 Running 0 15m
tkb-sts-1 0/1 Terminating 0 2s
tkb-sts-1 0/1 Terminating 0 2s
tkb-sts-1 0/1 Terminating 0 2s
tkb-sts-1 0/1 Pending 0 0s
tkb-sts-1 0/1 Pending 0 0s
tkb-sts-1 0/1 ContainerCreating 0 0s
tkb-sts-1 0/1 ContainerCreating 0 0s
[root@master ~]# kubectl describe pod tkb-sts-1 | grep ClaimName
    ClaimName:  webroot-tkb-sts-1
Deleting a StatefulSet
Scale to zero first so the Pods shut down in order
[root@master ~]# kubectl scale statefulset tkb-sts --replicas=0
statefulset.apps/tkb-sts scaled
[root@master ~]# kubectl get sts tkb-sts
NAME READY AGE
tkb-sts 0/0 19h
[root@master ~]# kubectl delete sts tkb-sts
statefulset.apps "tkb-sts" deleted
[root@master ~]# kubectl delete svc dullahan
service "dullahan" deleted
[root@master ~]# kubectl delete pvc webroot-tkb-sts-0 webroot-tkb-sts-1 webroot-tkb-sts-2 webroot-tkb-sts-3
persistentvolumeclaim "webroot-tkb-sts-0" deleted
persistentvolumeclaim "webroot-tkb-sts-1" deleted
persistentvolumeclaim "webroot-tkb-sts-2" deleted
persistentvolumeclaim "webroot-tkb-sts-3" deleted
# because a StorageClass was used, each PV is deleted automatically when its PVC is deleted, but the files remain on the NFS export, so the data persists
[root@master ~]# ls /nfsdata/share/default/
webroot-tkb-sts-0  webroot-tkb-sts-2
webroot-tkb-sts-1  webroot-tkb-sts-3