K8s Deployment Only
This guide is intended for users who already have some experience with K8s. It does not cover how to set up a K8s cluster or how to use commands such as kubectl, and it uses K8s terminology that assumes a certain amount of background knowledge.
This article focuses on deploying GZCTF in a K8s cluster; for configuring GZCTF itself, see Quick Start.
Deployment Notes
- GZCTF supports multi-instance deployment, but in our testing the only consistently stable setup so far is a single instance running on the same node as the database, so this article uses a single-instance deployment as its example.
- For a multi-instance deployment, shared storage must be mounted as the PV for all instances, or file consistency cannot be guaranteed; Redis must also be deployed, or cache consistency across instances cannot be guaranteed.
- For a multi-instance deployment, the load balancer must be configured with sticky sessions, or the WebSocket connections used to fetch real-time data will not work.
- If you'd rather avoid the hassle, just deploy a single instance!
Deploy GZCTF
- Create the namespace and configuration files (a stringData alternative to hand-encoding the Secrets is sketched after the manifest)
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: gzctf-server
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: gzctf-config
  namespace: gzctf-server
data:
  appsettings.json: |
    { ... } # contents of appsettings.json
---
apiVersion: v1
kind: Secret
metadata:
  name: gzctf-k8sconfig
  namespace: gzctf-server
type: Opaque
data:
  k8sconfig: ... # base64-encoded k8s connection (kubeconfig) file
---
apiVersion: v1
kind: Secret
metadata:
  name: gzctf-tls
  namespace: gzctf-server
type: kubernetes.io/tls
data:
  tls.crt: ... # base64-encoded TLS certificate
  tls.key: ... # base64-encoded TLS private key
```
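If you'd rather not base64-encode the Secret values by hand, Kubernetes Secrets also accept plain text through the stringData field, which the API server encodes into data on write. A minimal sketch for the kubeconfig Secret; the same approach works for gzctf-tls with the PEM contents pasted in:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gzctf-k8sconfig
  namespace: gzctf-server
type: Opaque
stringData:
  # plain-text value; the API server stores it base64-encoded under `data`
  k8sconfig: |
    ... # paste the contents of your kubeconfig file here
```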
- Create local PVs (if multiple instances need shared storage, adjust the configuration accordingly; an NFS-backed sketch follows the manifest)
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gzctf-files-pv
  namespace: gzctf-server
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce # change to ReadWriteMany for multi-instance deployments
  hostPath:
    path: /mnt/path/to/gzctf/files # local path
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gzctf-db-pv
  namespace: gzctf-server
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/path/to/gzctf/db # local path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gzctf-files
  namespace: gzctf-server
spec:
  accessModes:
    - ReadWriteOnce # change to ReadWriteMany for multi-instance deployments
  resources:
    requests:
      storage: 2Gi
  volumeName: gzctf-files-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gzctf-db
  namespace: gzctf-server
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: gzctf-db-pv
```
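As noted above, a multi-instance deployment needs a PV that supports ReadWriteMany, which a node-local hostPath cannot provide. A minimal sketch using an NFS-backed PV instead; the server address and export path are placeholders for your own NFS setup:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gzctf-files-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany # NFS lets all instances share the same files
  nfs:
    server: 192.168.1.100 # placeholder: your NFS server address
    path: /exports/gzctf/files # placeholder: your NFS export path
```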
- Create the GZCTF Deployments (a sketch of the optional log volume follows the manifest)
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gzctf
  namespace: gzctf-server
  labels:
    app: gzctf
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: gzctf
  template:
    metadata:
      labels:
        app: gzctf
    spec:
      nodeSelector:
        kubernetes.io/hostname: xxx # pin to a specific node, forcing co-location with the database
      containers:
        - name: gzctf
          image: gztime/gzctf:latest
          imagePullPolicy: Always
          env:
            - name: GZCTF_ADMIN_PASSWORD
              value: xxx # admin password
          ports:
            - containerPort: 80
              name: http
          volumeMounts:
            - name: gzctf-files
              mountPath: /app/files
            - name: gzctf-config
              mountPath: /app/appsettings.json
              subPath: appsettings.json
            - name: gzctf-k8sconfig
              mountPath: /app/k8sconfig.yaml
              subPath: k8sconfig
            # To persist .log files, mount an extra volume at /app/log
            # for each GZCTF instance. GZCTF manages the log files itself,
            # rotating and compressing them automatically.
          resources:
            requests:
              cpu: 1000m
              memory: 384Mi
      volumes:
        - name: gzctf-files
          persistentVolumeClaim:
            claimName: gzctf-files
        - name: gzctf-config
          configMap:
            name: gzctf-config
        - name: gzctf-k8sconfig
          secret:
            secretName: gzctf-k8sconfig
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gzctf-redis
  namespace: gzctf-server
  labels:
    app: gzctf-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gzctf-redis
  template:
    metadata:
      labels:
        app: gzctf-redis
    spec:
      containers:
        - name: gzctf-redis
          image: redis:alpine
          imagePullPolicy: Always
          ports:
            - containerPort: 6379
              name: redis
          resources:
            requests:
              cpu: 10m
              memory: 64Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gzctf-db
  namespace: gzctf-server
  labels:
    app: gzctf-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gzctf-db
  template:
    metadata:
      labels:
        app: gzctf-db
    spec:
      nodeSelector:
        kubernetes.io/hostname: xxx # pin to a specific node, forcing co-location with GZCTF
      containers:
        - name: gzctf-db
          image: postgres:alpine
          imagePullPolicy: Always
          ports:
            - containerPort: 5432
              name: postgres
          env:
            - name: POSTGRES_PASSWORD
              value: xxx # database password; must match the one in appsettings.json
          volumeMounts:
            - name: gzctf-db
              mountPath: /var/lib/postgresql/data
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
      volumes:
        - name: gzctf-db
          persistentVolumeClaim:
            claimName: gzctf-db
```
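The comment in the gzctf Deployment above mentions persisting logs. A minimal sketch of the extra mount, assuming a hypothetical gzctf-logs PVC defined the same way as gzctf-files:

```yaml
# added under the gzctf container's volumeMounts:
          volumeMounts:
            - name: gzctf-logs
              mountPath: /app/log # GZCTF rotates and compresses log files here on its own
# ...and under the pod spec's volumes:
      volumes:
        - name: gzctf-logs
          persistentVolumeClaim:
            claimName: gzctf-logs # hypothetical PVC; define it like gzctf-files above
```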
- Create the Service and Ingress (an ingress-nginx sticky-session alternative follows the manifest)
```yaml
apiVersion: v1
kind: Service
metadata:
  name: gzctf
  namespace: gzctf-server
  annotations:
    # enable Traefik sticky sessions
    traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
    traefik.ingress.kubernetes.io/service.sticky.cookie.name: "LB_Session"
    traefik.ingress.kubernetes.io/service.sticky.cookie.httponly: "true"
spec:
  selector:
    app: gzctf
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: gzctf-db
  namespace: gzctf-server
spec:
  selector:
    app: gzctf-db
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: gzctf-redis
  namespace: gzctf-server
spec:
  selector:
    app: gzctf-redis
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gzctf
  namespace: gzctf-server
  annotations:
    # some Traefik TLS settings; adjust them to your needs
    kubernetes.io/ingress.class: "traefik"
    traefik.ingress.kubernetes.io/router.tls: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - ctf.example.com # domain
      secretName: gzctf-tls # certificate Secret name; created in the first step above
  rules:
    - host: ctf.example.com # domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gzctf
                port:
                  number: 80
```
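If your cluster uses ingress-nginx rather than Traefik, the sticky sessions required for WebSocket support are configured with annotations on the Ingress instead of the Service. A sketch under that assumption, using ingress-nginx's cookie-affinity annotations:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gzctf
  namespace: gzctf-server
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie" # cookie-based session affinity
    nginx.ingress.kubernetes.io/session-cookie-name: "LB_Session"
spec:
  rules:
    - host: ctf.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gzctf
                port:
                  number: 80
```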
- Additional Traefik configuration
For GZCTF to obtain users' real IPs through XFF, Traefik must add the XFF header correctly. Note that the following may be out of date and may not apply to every Traefik version; it is shown here as Helm values, so please look up the current configuration method yourself.
```yaml
service:
  spec:
    externalTrafficPolicy: Local # must be Local for XFF to work correctly
deployment:
  kind: DaemonSet
ports:
  web:
    redirectTo: websecure # redirect HTTP to HTTPS
additionalArguments:
  - "--entryPoints.web.proxyProtocol.insecure"
  - "--entryPoints.web.forwardedHeaders.insecure"
  - "--entryPoints.websecure.proxyProtocol.insecure"
  - "--entryPoints.websecure.forwardedHeaders.insecure"
```
Deployment Tips
- To have GZCTF create the administrator account automatically during initialization, make sure to pass the GZCTF_ADMIN_PASSWORD environment variable; otherwise you will need to create the administrator account manually.
- Use the system log page to check whether users' real IPs are being captured correctly; if not, verify your Traefik configuration.
- If you need monitoring, deploy Prometheus and Grafana yourself and enable Traefik's Prometheus support; you can also use node exporter to monitor the resource usage of challenge containers.
- To automatically roll out a new GZCTF deployment when its configuration files change, see Reloader; a sketch follows this list.
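A minimal sketch of how this usually looks, assuming Reloader is installed in the cluster and using the auto-reload annotation from Reloader's documentation on the gzctf Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gzctf
  namespace: gzctf-server
  annotations:
    # Reloader restarts the Deployment whenever a mounted ConfigMap or Secret changes
    reloader.stakater.com/auto: "true"
```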