Prerequisites

```shell
# download and extract the chart source
tar -zxvf apache-dolphinscheduler-3.1.5-src.tar.gz
cd apache-dolphinscheduler-3.1.5-src/deploy/kubernetes/dolphinscheduler

# add the chart repositories
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add bitnami-full-index https://raw.githubusercontent.com/bitnami/charts/archive-full-index/bitnami

# resolve chart dependencies and install
helm dependency update .
helm install dolphinscheduler . --set image.tag=3.1.8

# expose the API/UI service locally
kubectl port-forward --address 0.0.0.0 svc/dolphinscheduler-api 12345:12345
```

Modify the values.yaml configuration

```yaml
persistentVolumeClaim:
  storageClassName: "-"
```

Change this to:

```yaml
persistentVolumeClaim:
  storageClassName: "openebs-hostpath"
```
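The change above can also be applied non-interactively; a minimal `sed` sketch (the demo operates on a copy of the snippet at `/tmp/values-demo.yaml`, which is an assumption for illustration — in practice run the `sed` line against the chart's `values.yaml`):

```shell
# demo input: the relevant fragment of values.yaml (placeholder copy)
cat > /tmp/values-demo.yaml <<'EOF'
persistentVolumeClaim:
  storageClassName: "-"
EOF

# switch the storage class in place
sed -i 's/storageClassName: "-"/storageClassName: "openebs-hostpath"/' /tmp/values-demo.yaml
cat /tmp/values-demo.yaml
```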

If the configuration is wrong, uninstall and then reinstall:

```shell
helm uninstall dolphinscheduler
# also delete the PVCs left behind by the release
kubectl delete pvc -l app.kubernetes.io/instance=dolphinscheduler
```

Create a namespace

```shell
kubectl create namespace dolphinscheduler
```

Issues

1. Unable to update from the bitnami repo; host the index locally instead

Download https://raw.githubusercontent.com/bitnami/charts/archive-full-index/bitnami/index.yaml separately, then serve it locally through Docker:

```shell
# serve the current directory (containing index.yaml) with nginx
docker run -p 80:8080 -v `pwd`/:/app bitnami/nginx
helm repo add bitnami-full http://localhost
```
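If Docker is unavailable as well, any static file server can host the index; a sketch using Python's built-in `http.server` (port 8080 and the one-line placeholder index are assumptions for the demo — serve the real downloaded index.yaml):

```shell
# placeholder index for the demo; in practice this is the downloaded index.yaml
echo 'apiVersion: v1' > index.yaml

# serve the current directory in the background
python3 -m http.server 8080 --directory . &
SERVER_PID=$!
sleep 1

# verify the index is reachable, then stop the server
RESP=$(curl -s http://localhost:8080/index.yaml)
echo "$RESP"
kill $SERVER_PID
```

`helm repo add bitnami-full http://localhost:8080` would then point Helm at this server instead of the nginx container.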

2. Unable to create a tenant: storage problems

2.1 How is shared storage supported between the Master, Worker, and API services?

```yaml
# modify values.yaml
common:
  sharedStoragePersistence:
    enabled: true
    mountPath: "/opt/soft"
    accessModes:
      - "ReadWriteMany"
    storageClassName: "-"
    storage: "20Gi"
```

`storageClassName` and `storage` need to be changed to actual values.
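For example, with an NFS-backed provisioner that supports `ReadWriteMany` (the class name `nfs-client` is an assumption — substitute a class from `kubectl get storageclass` in your cluster):

```yaml
common:
  sharedStoragePersistence:
    enabled: true
    mountPath: "/opt/soft"
    accessModes:
      - "ReadWriteMany"
    storageClassName: "nfs-client"  # assumption: an RWX-capable class in your cluster
    storage: "20Gi"
```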

Note: `storageClassName` must support the access mode `ReadWriteMany`.

  1. Copy Hadoop into the directory /opt/soft
  2. Make sure `$HADOOP_HOME` and `$HADOOP_CONF_DIR` are correct
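Inside the containers, step 2 amounts to pointing the Hadoop variables at the shared mount; a sketch (both paths below are assumptions about where Hadoop was unpacked under /opt/soft):

```shell
# assumption: Hadoop was copied to /opt/soft/hadoop on the shared volume
export HADOOP_HOME=/opt/soft/hadoop
export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
```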

2.2 How to support local file storage instead of HDFS and S3?

```yaml
# modify values.yaml
...
resource.storage.upload.base.path: /tmp/dolphinscheduler
...
resource.hdfs.fs.defaultFS: file:///
```

Reference deployment plan

(screenshot omitted: local Typora image image-20231201132837643, reference deployment diagram)