
Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world   # Pod label; matches the Service's label selector
  revisionHistoryLimit: 5
  progressDeadlineSeconds: 300
  minReadySeconds: 10    # wait 10s between Pod update actions
  strategy:
    type: RollingUpdate  # update using the RollingUpdate strategy
    rollingUpdate:
      maxUnavailable: 1  # never more than one Pod below the desired replica count
      maxSurge: 1        # never more than one Pod above the desired replica count
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: nigelpoulton/k8sbook:1.0
        ports:
        - containerPort: 8080

Create the matching Service

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-world
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
    protocol: TCP
  selector:
    app: hello-world   # label selector: the Service targets Pods carrying app=hello-world

Watch the rolling update

kubectl rollout status deployment hello-deploy

View the revision history

kubectl rollout history deployment hello-deploy

View the ReplicaSets after the update

kubectl get rs

Roll back to a given revision

kubectl rollout undo deployment hello-deploy --to-revision=1

Service

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-ctr
        image: nigelpoulton/k8sbook:latest
        ports:
        - containerPort: 8080

Create a Service from the command line

kubectl expose deployment web-deploy --name=hello-svc --target-port=8080 --type=NodePort

Inspect the Service

[root@master svc]# kubectl describe svc hello-svc
Name:                     hello-svc
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=hello-world # label defined by the label selector
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.12.205.3 # the Service's internal ClusterIP (VIP)
IPs:                      10.12.205.3
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP # the Pod port the application listens on
NodePort:                 <unset>  32458/TCP # the Service port reachable from outside the cluster
Endpoints:                10.244.104.59:8080,10.244.104.60:8080,10.244.104.61:8080 + 7 more... # dynamic list of healthy Pod IPs matching the label selector
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Create the Service declaratively

apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    chapter: services
spec:
  # ipFamilyPolicy: PreferDualStack
  # ipFamilies:
  # - IPv4
  # - IPv6
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
    targetPort: 8080
    protocol: TCP
  selector:
    app: hello-world

Inspect the Service

kubectl get svc hello-svc
kubectl describe svc hello-svc

Inspect the Endpoints

kubectl get ep hello-svc

Rolling updates with label selectors

Initial state

Service: app=biz1 zone=prod
Pod1: app=biz1 zone=prod ver=4.1
Pod2: app=biz1 zone=prod ver=4.1

During the update

The Pods carry a ver label but the Service does not select on it, so both old and new Pods receive traffic from the Service.

Service: app=biz1 zone=prod
Pod1: app=biz1 zone=prod ver=4.1
Pod2: app=biz1 zone=prod ver=4.1
Pod3: app=biz1 zone=prod ver=4.2
Pod4: app=biz1 zone=prod ver=4.2

After the update

Add ver=4.2 to the Service's selector and traffic flows only to the new Pods; change it to ver=4.1 and traffic flows back to the old Pods.

Service: app=biz1 zone=prod ver=4.2
Pod1: app=biz1 zone=prod ver=4.1
Pod2: app=biz1 zone=prod ver=4.1
Pod3: app=biz1 zone=prod ver=4.2
Pod4: app=biz1 zone=prod ver=4.2
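The selector change described above can be sketched as a Service manifest. This is only an illustration: the Service name is hypothetical, and the labels are taken from the example.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: biz1-svc            # hypothetical name for illustration
spec:
  selector:
    app: biz1
    zone: prod
    ver: "4.2"              # pin traffic to the new version; switch to "4.1" to send it back
  ports:
  - port: 8080
```

Because the selector is an AND of all listed labels, only Pods carrying every one of them become Endpoints.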

Service discovery and registration

kubectl get service
kubectl get endpoints

Service registration

  1. POST the Service config to the API Server
  2. A ClusterIP is allocated
  3. The config is persisted to the cluster store
  4. An Endpoints object holding the Pod IPs is created
  5. The cluster DNS discovers the new Service
  6. DNS records are created
  7. kube-proxy pulls the Service config
  8. IPVS rules are created for load balancing

Service discovery

  1. Ask DNS to resolve the Service name
  2. Receive the ClusterIP
  3. Send traffic to the ClusterIP
  4. No route, so traffic goes to the container's default gateway
  5. Forwarded to the node
  6. No route, so traffic goes to the node's default gateway
  7. Processed by the node's kernel
  8. Trapped by an IPVS rule
  9. The destination IP is rewritten to a Pod IP

apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: enterprise
  namespace: dev
  labels:
    app: enterprise
spec:
  selector:
    matchLabels:
      app: enterprise
  replicas: 2
  template:
    metadata:
      labels:
        app: enterprise
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - image: nigelpoulton/k8sbook:text-dev
        name: enterprise-ctr
        ports:
        - containerPort: 8080
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: enterprise
  namespace: prod
  labels:
    app: enterprise
spec:
  selector:
    matchLabels:
      app: enterprise
  replicas: 2
  template:
    metadata:
      labels:
        app: enterprise
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - image: nigelpoulton/k8sbook:text-prod
        name: enterprise-ctr
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: ent
  namespace: dev
spec:
  ports:
  - port: 8080
  selector:
    app: enterprise
---
apiVersion: v1
kind: Service
metadata:
  name: ent
  namespace: prod
spec:
  ports:
  - port: 8080
  selector:
    app: enterprise
---
apiVersion: v1
kind: Pod
metadata:
  name: jump
  namespace: dev
spec:
  terminationGracePeriodSeconds: 5
  containers:
  - image: ubuntu
    name: jump
    tty: true
    stdin: true
[root@master ~]# kubectl get all -n dev
NAME                             READY   STATUS    RESTARTS   AGE
pod/enterprise-76fc64bd9-h5gqg   1/1     Running   0          3h20m
pod/enterprise-76fc64bd9-kpxh9   1/1     Running   0          3h20m
pod/jump                         1/1     Running   0          3h20m

NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/ent   ClusterIP   10.7.27.61   <none>        8080/TCP   3h20m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/enterprise   2/2     2            2           3h20m

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/enterprise-76fc64bd9   2         2         2       3h20m

[root@master ~]# kubectl get all -n prod
NAME                              READY   STATUS    RESTARTS   AGE
pod/enterprise-5cfcd578d7-lknbj   1/1     Running   0          3h27m
pod/enterprise-5cfcd578d7-mwzcb   1/1     Running   0          3h27m

NAME          TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/ent   ClusterIP   10.2.20.188   <none>        8080/TCP   3h27m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/enterprise   2/2     2            2           3h27m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/enterprise-5cfcd578d7   2         2         2       3h27m

[root@master ~]# kubectl exec -it jump -n dev -- bash
root@jump:/# cat /etc/resolv.conf
nameserver 10.0.0.10
search dev.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
root@jump:/# apt-get update && apt-get install curl -y
root@jump:/# curl ent:8080
Hello from the DEV Namespace!
Hostname: enterprise-76fc64bd9-h5gqg
root@jump:/# curl ent.dev.svc.cluster.local:8080
Hello from the DEV Namespace!
Hostname: enterprise-76fc64bd9-h5gqg
root@jump:/# curl ent.prod.svc.cluster.local:8080
Hello from the PROD Namespace!
Hostname: enterprise-5cfcd578d7-mwzcb
# curl another Pod's Service and port from inside a Pod:
# serviceName.namespace.svc.cluster.local:port
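The naming rule in the comment above can be sketched locally. A minimal sketch, using the Service, namespace, and port from the example:

```shell
svc=ent
ns=prod
port=8080
echo "${svc}.${ns}.svc.cluster.local:${port}"
# → ent.prod.svc.cluster.local:8080
```

Within the same namespace the short name (`curl ent:8080`) resolves because the Pod's resolv.conf search list tries `<name>.dev.svc.cluster.local` first.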

Troubleshooting the cluster DNS

  • Pods: managed by the coredns Deployment
  • Service: a ClusterIP Service named kube-dns, listening on TCP/UDP port 53
  • Endpoints: also named kube-dns
    All objects related to the cluster DNS carry the label k8s-app=kube-dns

  1. First check that the coredns Deployment and the Pods it manages are running
[root@master ~]# kubectl get deploy -n kube-system -l k8s-app=kube-dns
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           33d

[root@master ~]# kubectl get pods -n kube-system -l k8s-app=kube-dns
NAME                       READY   STATUS    RESTARTS        AGE
coredns-857d9ff4c9-6cb2b   1/1     Running   28 (3d5h ago)   33d
coredns-857d9ff4c9-tvrff   1/1     Running   28 (3d5h ago)   33d

[root@master ~]# kubectl logs -n kube-system coredns-857d9ff4c9-6cb2b
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.11.1
linux/amd64, go1.20.7, ae2bbc2

  2. Check the Service and its Endpoints object; make sure the ClusterIP has an IP address and listens on TCP/UDP port 53
[root@master ~]# kubectl get svc kube-dns -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP,9153/TCP   33d

[root@master ~]# kubectl get ep kube-dns -n kube-system
NAME       ENDPOINTS                                                        AGE
kube-dns   10.244.219.68:53,10.244.219.69:53,10.244.219.68:53 + 3 more...   33d

  3. Once the DNS components look healthy, use the gcr.io/kubernetes-e2e-test-images/dnsutils:latest image,
     which includes the ping, traceroute, curl, dig, and nslookup commands
apt install iputils-ping -y
apt install dnsutils
apt install traceroute
root@ubuntu-pod:/# nslookup kubernetes
# returns
;; Got recursion not available from 10.0.0.10
Server:         10.0.0.10
Address:        10.0.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.0.0.1
;; Got recursion not available from 10.0.0.10

volume

nfs

yum install -y nfs-common nfs-utils rpcbind
mkdir /nfsdata
chmod 666 /nfsdata
chown nfsnobody /nfsdata
chgrp nfsnobody /nfsdata # if the nfsnobody user does not exist, use nobody
cat /etc/exports
/nfsdata *(rw,no_root_squash,no_all_squash,sync)
systemctl restart nfs-server
systemctl restart rpcbind
[root@master script]# ssh node1
[root@node1 ~]# mount -t nfs master:/nfsdata /nfsdata
[root@node1 ~]# cat /etc/fstab
10.0.17.100:/nfsdata /nfsdata nfs defaults,_netdev 0 0

pv

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv0                           # PV name
spec:
  capacity:                              # capacity
    storage: 10Gi                        # storage size
  accessModes:                           # access modes
  - ReadWriteOnce                        # single-node read-write: mountable read-write by one node; block storage only supports RWO
  # - ReadWriteMany                      # multi-node read-write: mountable read-write by many nodes; suits NFS
  # - ReadOnlyMany                       # read-only, can be bound by multiple PVCs
  persistentVolumeReclaimPolicy: Recycle # reclaim policy
  storageClassName: nfs                  # StorageClass name
  nfs:
    path: /nfsdata/share                 # NFS export path
    server: 10.0.17.100                  # NFS server address

[root@master volume]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
nfspv0   10Gi       RWO            Recycle          Bound    default/nfspvc0   nfs            <unset>

pvc

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfspvc0                          # PVC name
spec:
  accessModes:                           # access modes
  - ReadWriteOnce                        # single-node read-write; block storage only supports RWO
  # - ReadWriteMany                      # multi-node read-write; suits NFS
  # - ReadOnlyMany                       # read-only, can be bound by multiple PVCs
  storageClassName: nfs                  # StorageClass name
  resources:
    requests:
      storage: 5Gi

[root@master volume]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
nfspvc0   Bound    nfspv0   10Gi       RWO            nfs            <unset>                 11m

Create an Ubuntu Pod with /data bound to nfspvc0

apiVersion: v1
kind: Pod
metadata:
  name: volpod
spec:
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfspvc0                 # the PVC to use
  containers:
  - name: ubuntu-ctr
    image: ubuntu:latest
    command:
    - /bin/bash
    - "-c"
    - "sleep 60m"
    volumeMounts:
    - mountPath: /data                   # mount point inside the Ubuntu container
      name: data

[root@master volume]# kubectl exec -it volpod -- bash
root@volpod:/# cd /data/
root@volpod:/data# ls
1  3  8716283  876  default
# inside /data we find the files from /nfsdata/share

StorageClass

yum install -y nfs-utils rpcbind
mkdir /nfsdata/share
chown nobody /nfsdata/share
echo "/nfsdata/share   *(rw,sync,no_subtree_check)" >> /etc/exports
systemctl enable nfs-server && systemctl enable rpcbind
systemctl restart nfs-server && systemctl restart rpcbind
showmount -e master

Deploy nfs-client-provisioner

vim nfs-client-provisioner.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: nfs-storageclass
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        # image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
        # image: k8s.dockerproxy.com/sig-storage/nfs-subdir-external-provisioner:v4.0.2
        image: registry.cn-beijing.aliyuncs.com/blice_haiwai/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          # value: <YOUR NFS SERVER HOSTNAME>
          value: 10.0.17.100
        - name: NFS_PATH
          # value: /var/nfs
          value: /nfsdata/share
      volumes:
      - name: nfs-client-root
        nfs:
          # server: <YOUR NFS SERVER HOSTNAME>
          server: 10.0.17.100
          # shared NFS path
          path: /nfsdata/share

RBAC

vim RBAC.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-storageclass
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-storageclass
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-storageclass
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-storageclass
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-storageclass
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Create the StorageClass

vim StorageClass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client   # StorageClass is cluster-scoped, so no namespace is needed
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  pathPattern: ${.PVC.namespace}/${.PVC.name}
  onDelete: delete   # delete mode
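As a rough illustration of how the pathPattern above maps a claim onto a directory under the export, the substitution amounts to the following (namespace and PVC name taken from the test PVC later in these notes):

```shell
ns=default
pvc=test-claim
echo "/nfsdata/share/${ns}/${pvc}"
# → /nfsdata/share/default/test-claim
```

This matches the directory that appears on the NFS server once the test Pod below has written to its volume.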

Test Pod

vim test.yaml

kind: PersistentVolumeClaim              # create the PVC
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
  - ReadWriteMany                        # multi-node read-write
  resources:
    requests:
      storage: 1Mi                       # request 1MiB of storage
  storageClassName: nfs-client           # StorageClass name
---
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: wangyanglinux/myapp:v1.0
    volumeMounts:
    - name: nfs-pvc
      mountPath: "/usr/local/nginx/html"
  restartPolicy: "Never"
  volumes:                               # define the volumes
  - name: nfs-pvc                        # volume backed by the PVC
    persistentVolumeClaim:
      claimName: test-claim

[root@master ~]# ls /nfsdata/share/default/test-claim/
hostname.html  pppppp

cm

A ConfigMap is typically used to store non-sensitive data such as:

  • environment variable values
  • entire configuration files (for example web server and database configs)
  • hostnames
  • service ports
  • account names

Format:
key: value

Ways to consume a ConfigMap in a container:

  • as environment variables
  • as container startup command arguments
  • as files on a volume (the most flexible way)
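Beyond referencing keys one by one (as the env examples below do), a whole ConfigMap can be injected at once with envFrom. A minimal sketch, reusing the multimap ConfigMap defined later; the Pod name here is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: envfrom-pod          # hypothetical name for illustration
spec:
  containers:
  - name: ctr1
    image: busybox
    command: ["sleep", "infinity"]
    envFrom:
    - configMapRef:
        name: multimap       # every key in the ConfigMap becomes an environment variable
```

With multimap this would yield the variables given=Nigel and family=Poulton without listing each key.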

Create from the command line

# --from-literal supplies a literal key=value
kubectl create configmap test1map \
--from-literal shortname=msb.com \
--from-literal longname=magicsandbox.com

[root@master ~]# kubectl get cm test1map
NAME       DATA   AGE
test1map   2      14s

[root@master ~]# kubectl describe cm test1map
Name:         test1map
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
longname:
----
magicsandbox.com
shortname:
----
msb.com

BinaryData
====

Events:  <none>

# --from-file creates the ConfigMap from a file
[root@master cm]# kubectl create cm testmap2 --from-file test.txt
configmap/testmap2 created
[root@master cm]# kubectl describe cm testmap2
Name:         testmap2
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
test.txt:
----
ConfigMap,HelloWorld!

BinaryData
====

Events:  <none>

[root@master cm]# kubectl get cm testmap2 -o yaml
apiVersion: v1
data:
  test.txt: |
    ConfigMap,HelloWorld!
kind: ConfigMap
metadata:
  creationTimestamp: "2024-09-25T01:34:00Z"
  name: testmap2
  namespace: default
  resourceVersion: "719604"
  uid: 6c0fd794-5e89-40de-b3a3-74c02799f9cd

Create declaratively

kind: ConfigMap
apiVersion: v1
metadata:
  name: multimap
data:
  given: Nigel
  family: Poulton

[root@master cm]# kubectl apply -f multimap.yaml
configmap/multimap created

[root@master cm]# kubectl describe cm multimap
Name:         multimap
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
family:
----
Poulton
given:
----
Nigel

BinaryData
====

Events:  <none>

Define a map with a single entry

# the entry is test.conf; everything after the | is treated as one literal value
kind: ConfigMap
apiVersion: v1
metadata:
  name: singlemap
data:
  test.conf: |
    env = plex-test
    endpoint = 0.0.0.0:31001
    char = utf8
    vault = PLEX/test
    log-size = 512M

[root@master cm]# kubectl describe cm singlemap
Name:         singlemap
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
test.conf:
----
env = plex-test
endpoint = 0.0.0.0:31001
char = utf8
vault = PLEX/test
log-size = 512M

BinaryData
====

Events:  <none>

As environment variables

kind: ConfigMap
apiVersion: v1
metadata:
  name: multimap
data:
  given: Nigel
  family: Poulton
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    chapter: configmaps
  name: envpod
spec:
  containers:
  - name: ctr1
    image: busybox
    command: ["sleep"]
    args: ["infinity"]
    env:                        # environment variable configuration
    - name: FIRSTNAME           # environment variable name
      valueFrom:
        configMapKeyRef:
          name: multimap        # the ConfigMap to reference
          key: given            # the key to reference; its value is Nigel
    - name: LASTNAME
      valueFrom:
        configMapKeyRef:
          name: multimap
          key: family

Check the Pod's environment variables

[root@master cm]# kubectl exec envpod -- env | grep NAME
HOSTNAME=envpod
FIRSTNAME=Nigel
LASTNAME=Poulton

As the container startup command

kind: ConfigMap
apiVersion: v1
metadata:
  name: multimap
data:
  given: Nigel
  family: Poulton
---
apiVersion: v1
kind: Pod
metadata:
  name: startup-pod
  labels:
    chapter: configmaps
spec:
  restartPolicy: OnFailure
  containers:
  - name: args1
    image: busybox
    command: [ "/bin/sh", "-c", "echo First name $(FIRSTNAME) last name $(LASTNAME)", "wait" ] # print $(FIRSTNAME) and $(LASTNAME) from the environment
    env:                        # injected as environment variables
    - name: FIRSTNAME
      valueFrom:
        configMapKeyRef:
          name: multimap
          key: given
    - name: LASTNAME
      valueFrom:
        configMapKeyRef:
          name: multimap
          key: family

[root@master cm]# kubectl describe pod startup-pod
Environment:
  FIRSTNAME:  <set to the key 'given' of config map 'multimap'>   Optional: false
  LASTNAME:   <set to the key 'family' of config map 'multimap'>  Optional: false

[root@master cm]# kubectl logs startup-pod
First name Nigel last name Poulton

ConfigMaps and volumes

Steps

  1. Create the ConfigMap
  2. Define a ConfigMap volume in the Pod template
  3. Mount the ConfigMap volume into the container
  4. Each entry in the ConfigMap appears as a separate file in the container

kind: ConfigMap
apiVersion: v1
metadata:
  name: multimap
data:
  given: Nigel
  family: Poulton
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    chapter: configmaps
  name: volmap
spec:
  volumes:
  - name: volmap
    configMap:
      name: multimap
  containers:
  - name: ctr1
    image: ubuntu
    command: [ "sleep" ]
    args: [ "3600" ]
    volumeMounts:
    - name: volmap
      mountPath: /etc/name

kubectl exec -it volmap -- bash
root@volmap:~# ll /etc/name
total 0
drwxrwxrwx 3 root root 87 Sep 25 02:49 ./
drwxr-xr-x 1 root root 18 Sep 25 02:49 ../
drwxr-xr-x 2 root root 33 Sep 25 02:49 ..2024_09_25_02_49_39.2473126764/
lrwxrwxrwx 1 root root 32 Sep 25 02:49 ..data -> ..2024_09_25_02_49_39.2473126764/
lrwxrwxrwx 1 root root 13 Sep 25 02:49 family -> ..data/family
lrwxrwxrwx 1 root root 12 Sep 25 02:49 given -> ..data/given

StatefulSet (for stateful apps: session data, databases)

StatefulSet properties

  • Pod names are predictable and stable
  • DNS hostnames are predictable and stable
  • Volume bindings are predictable and stable

Create a StatefulSet

First create the StorageClass from the volume chapter.

Then create the governing headless Service, which manages all DNS subdomains for the StatefulSet:

# Headless Service for StatefulSet Pod DNS names
apiVersion: v1
kind: Service
metadata:
  name: dullahan
  labels:
    app: web
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: web

Deploy the StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: tkb-sts                          # StatefulSet name; all Pod names are based on tkb-sts
spec:
  replicas: 3                            # three replicas: tkb-sts-0, tkb-sts-1, tkb-sts-2, created in order
  selector:
    matchLabels:
      app: web
  serviceName: "dullahan"                # the governing Service created above
  template:                              # Pod template
    metadata:
      labels:
        app: web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: ctr-web
        image: nginx:latest
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: webroot
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:                  # claim template: a PVC is created and named automatically for each new Pod
  - metadata:
      name: webroot
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-client"     # StorageClass name
      resources:
        requests:
          storage: 1Gi
[root@master calico]# kubectl get sts
NAME      READY   AGE
tkb-sts   3/3     14m
[root@master calico]# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
tkb-sts-0   1/1     Running   0          11m     10.244.166.130   node1   <none>           <none>
tkb-sts-1   1/1     Running   0          7m47s   10.244.104.0     node2   <none>           <none>
tkb-sts-2   1/1     Running   0          7m10s   10.244.104.1     node2   <none>           <none>
[root@master calico]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-381d656d-342f-4276-aa95-7af89ea75ea3   1Gi        RWO            Delete           Bound    default/webroot-tkb-sts-2   nfs-client     <unset>                          9m50s
pvc-c5f418d8-90b7-40aa-adeb-e5cb63ce9cf6   1Gi        RWO            Delete           Bound    default/webroot-tkb-sts-0   nfs-client     <unset>                          14m
pvc-d9004ba6-3a5f-4251-b741-21e2adc8a9a3   1Gi        RWO            Delete           Bound    default/webroot-tkb-sts-1   nfs-client     <unset>                          10m

[root@master calico]# kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
webroot-tkb-sts-0   Bound    pvc-c5f418d8-90b7-40aa-adeb-e5cb63ce9cf6   1Gi        RWO            nfs-client     <unset>                 14m
webroot-tkb-sts-1   Bound    pvc-d9004ba6-3a5f-4251-b741-21e2adc8a9a3   1Gi        RWO            nfs-client     <unset>                 10m
webroot-tkb-sts-2   Bound    pvc-381d656d-342f-4276-aa95-7af89ea75ea3   1Gi        RWO            nfs-client     <unset>                 10m

[root@master ~]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE   SELECTOR
dullahan     ClusterIP   None           <none>        80/TCP           53m   app=web

Connectivity test
Deploy a jump Pod

apiVersion: v1
kind: Pod
metadata:
  name: jump-pod
spec:
  terminationGracePeriodSeconds: 1
  containers:
  - image: nigelpoulton/curl:1.0
    name: jump-ctr
    tty: true
    stdin: true

Test from inside the jump Pod


[root@master ~]# kubectl exec -it jump-pod -- bash
root@jump-pod:/# dig SRV dullahan.default.svc.cluster.local
;; ANSWER SECTION:
dullahan.default.svc.cluster.local. 30 IN SRV   0 33 80 tkb-sts-0.dullahan.default.svc.cluster.local.
dullahan.default.svc.cluster.local. 30 IN SRV   0 33 80 tkb-sts-1.dullahan.default.svc.cluster.local.
dullahan.default.svc.cluster.local. 30 IN SRV   0 33 80 tkb-sts-2.dullahan.default.svc.cluster.local.

If names fail to resolve, delete the coredns Pods and let Kubernetes self-heal.
Create a new Ubuntu Pod:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-pod
  labels:
    app: web                 # matches the Service's selector
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep"]
    args: ["3600"]
    ports:
    - containerPort: 80
      name: web

root@jump-pod:/# dig SRV dullahan.default.svc.cluster.local
tkb-sts-1.dullahan.default.svc.cluster.local. 30 IN A 10.244.104.0
tkb-sts-0.dullahan.default.svc.cluster.local. 30 IN A 10.244.166.130
tkb-sts-2.dullahan.default.svc.cluster.local. 30 IN A 10.244.104.1
10-244-166-133.dullahan.default.svc.cluster.local. 30 IN A 10.244.166.133
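The per-Pod DNS names above follow the pattern `<pod-name>.<governing-service>.<namespace>.svc.cluster.local`, which can be sketched locally with the names from this example:

```shell
pod=tkb-sts-0
svc=dullahan
ns=default
echo "${pod}.${svc}.${ns}.svc.cluster.local"
# → tkb-sts-0.dullahan.default.svc.cluster.local
```

This is why clients can address an individual replica directly, something a regular (non-headless) Service does not offer.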

Scaling a StatefulSet

Edit the StatefulSet:

replicas: 2
[root@master ~]# kubectl edit sts tkb-sts
statefulset.apps/tkb-sts edited
[root@master ~]# kubectl get sts
NAME      READY   AGE
tkb-sts   2/2     19h
[root@master ~]# kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
tkb-sts-0    1/1     Running   0          19h
tkb-sts-1    1/1     Running   0          19h

# there are still 3 PVCs: scaling up or down does not delete the PVCs tied to Pod replicas
[root@master ~]# kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
webroot-tkb-sts-0   Bound    pvc-c5f418d8-90b7-40aa-adeb-e5cb63ce9cf6   1Gi        RWO            nfs-client     <unset>                 19h
webroot-tkb-sts-1   Bound    pvc-d9004ba6-3a5f-4251-b741-21e2adc8a9a3   1Gi        RWO            nfs-client     <unset>                 19h
webroot-tkb-sts-2   Bound    pvc-381d656d-342f-4276-aa95-7af89ea75ea3   1Gi        RWO            nfs-client     <unset>                 19h

# check which Pods use the claims
[root@master ~]# kubectl describe pvc webroot-tkb-sts-0 | grep Used
Used By:       tkb-sts-0
[root@master ~]# kubectl describe pvc webroot-tkb-sts-1 | grep Used
Used By:       tkb-sts-1
[root@master ~]# kubectl describe pvc webroot-tkb-sts-2 | grep Used
Used By:       <none>

# change the StatefulSet replica count to 4
[root@master ~]# kubectl get sts
NAME      READY   AGE
tkb-sts   4/4     19h

[root@master ~]# kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
webroot-tkb-sts-0   Bound    pvc-c5f418d8-90b7-40aa-adeb-e5cb63ce9cf6   1Gi        RWO            nfs-client     <unset>                 19h
webroot-tkb-sts-1   Bound    pvc-d9004ba6-3a5f-4251-b741-21e2adc8a9a3   1Gi        RWO            nfs-client     <unset>                 19h
webroot-tkb-sts-2   Bound    pvc-381d656d-342f-4276-aa95-7af89ea75ea3   1Gi        RWO            nfs-client     <unset>                 19h
webroot-tkb-sts-3   Bound    pvc-aa15e166-d805-4820-a592-1350bcfcb179   1Gi        RWO            nfs-client     <unset>                 43s

# check which Pods use the claims
[root@master ~]# kubectl describe pvc webroot-tkb-sts-0 | grep Used
Used By:       tkb-sts-0
[root@master ~]# kubectl describe pvc webroot-tkb-sts-1 | grep Used
Used By:       tkb-sts-1
[root@master ~]# kubectl describe pvc webroot-tkb-sts-2 | grep Used
Used By:       tkb-sts-2
[root@master ~]# kubectl describe pvc webroot-tkb-sts-3 | grep Used
Used By:       tkb-sts-3

# look at the directories under the NFS mount point
[root@master ~]# ls /nfsdata/share/default/
webroot-tkb-sts-0  webroot-tkb-sts-1  webroot-tkb-sts-2  webroot-tkb-sts-3

Pod ordering

spec.podManagementPolicy controls the order in which Pods are started and stopped:

  • OrderedReady — ordered management (the default)
  • Parallel — Pods are created and deleted in parallel
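A minimal sketch of where the field sits, with the selector and template abbreviated from the full StatefulSet manifest earlier in these notes:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: tkb-sts
spec:
  podManagementPolicy: Parallel   # default is OrderedReady
  serviceName: "dullahan"
  replicas: 3
  # selector, template, and volumeClaimTemplates as in the full manifest above
```

Note that podManagementPolicy only affects scaling (creation and deletion); rolling updates still proceed in ordinal order.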

Rolling upgrades

Upgrades proceed from the Pod with the highest ordinal, one at a time, down to the Pod with the lowest ordinal.
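For the 4-replica example above, the order can be sketched as a countdown over the ordinals (3 down to 0):

```shell
sts=tkb-sts
for i in 3 2 1 0; do
  echo "updating ${sts}-${i}"
done
```

Each step waits for the replacement Pod to become Ready (respecting minReadySeconds) before moving to the next lower ordinal.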

Simulating a failure

Delete a Pod and watch the status:

[root@master ~]# kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
tkb-sts-0   1/1     Running   0          19h
tkb-sts-1   1/1     Running   0          19h
tkb-sts-2   1/1     Running   0          12m
tkb-sts-3   1/1     Running   0          12m

[root@master ~]# kubectl describe pod tkb-sts-1
Name:             tkb-sts-1
Namespace:        default
Status:           Running
IP:               10.244.104.0
Volumes:
  webroot:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  webroot-tkb-sts-1

[root@master ~]# kubectl delete pod tkb-sts-1
[root@master ~]# kubectl get pods -w
NAME        READY   STATUS        RESTARTS   AGE
tkb-sts-0   1/1     Running       0          19h
tkb-sts-1   0/1     Terminating   0          1s
tkb-sts-2   1/1     Running       0          15m
tkb-sts-3   1/1     Running       0          15m
tkb-sts-1   0/1     Terminating   0          2s
tkb-sts-1   0/1     Terminating   0          2s
tkb-sts-1   0/1     Terminating   0          2s
tkb-sts-1   0/1     Pending       0          0s
tkb-sts-1   0/1     Pending       0          0s
tkb-sts-1   0/1     ContainerCreating   0          0s
tkb-sts-1   0/1     ContainerCreating   0          0s

[root@master ~]# kubectl describe pod tkb-sts-1 | grep ClaimName
ClaimName:  webroot-tkb-sts-1

Deleting a StatefulSet

Shut the Pods down in order:

[root@master ~]# kubectl scale statefulset tkb-sts --replicas=0
statefulset.apps/tkb-sts scaled
[root@master ~]# kubectl get sts tkb-sts
NAME      READY   AGE
tkb-sts   0/0     19h

[root@master ~]# kubectl delete sts tkb-sts
statefulset.apps "tkb-sts" deleted

[root@master ~]# kubectl delete svc dullahan
service "dullahan" deleted

[root@master ~]# kubectl delete pvc webroot-tkb-sts-0 webroot-tkb-sts-1 webroot-tkb-sts-2 webroot-tkb-sts-3
persistentvolumeclaim "webroot-tkb-sts-0" deleted
persistentvolumeclaim "webroot-tkb-sts-1" deleted
persistentvolumeclaim "webroot-tkb-sts-2" deleted
persistentvolumeclaim "webroot-tkb-sts-3" deleted

# because a StorageClass was used, the PVs are deleted automatically along with the PVCs, but the files remain on disk for persistence
[root@master ~]# ls /nfsdata/share/default/
webroot-tkb-sts-0  webroot-tkb-sts-2
webroot-tkb-sts-1  webroot-tkb-sts-3
