fix: Use dev release for stacks #122

Merged · 1 commit · Nov 18, 2024
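For context: each entry in stacks-v2.yaml pins the Stackable release whose operators the stack is installed against, and this PR moves every stack from the 24.7 release to the dev channel. A minimal sketch of the shape involved, using the monitoring stack from the first hunk below (the trailing comment is an assumption about what the dev channel means, not verbatim file content):

```yaml
stacks:
  monitoring:
    description: Stack containing Prometheus and Grafana
    stackableRelease: dev # was 24.7; dev presumably tracks in-development operator builds
    stackableOperators:
      - commons
      - listener
```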
stacks/stacks-v2.yaml (32 changes: 16 additions & 16 deletions)
@@ -2,7 +2,7 @@
 stacks:
   monitoring:
     description: Stack containing Prometheus and Grafana
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -25,7 +25,7 @@ stacks:
         default: adminadmin
   logging:
     description: Stack containing OpenSearch, OpenSearch Dashboards (Kibana) and Vector aggregator
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -85,7 +85,7 @@ stacks:
         default: adminadmin
   airflow:
     description: Stack containing Airflow scheduling platform
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -112,7 +112,7 @@ stacks:
         default: airflowSecretKey
   data-lakehouse-iceberg-trino-spark:
     description: Data lakehouse using Iceberg lakehouse on S3, Trino as query engine, Spark for streaming ingest and Superset for data visualization
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -169,7 +169,7 @@ stacks:
         default: supersetSecretKey
   hdfs-hbase:
     description: HBase cluster using HDFS as underlying storage
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -192,7 +192,7 @@ stacks:
     parameters: []
   nifi-kafka-druid-superset-s3:
     description: Stack containing NiFi, Kafka, Druid, MinIO and Superset for data visualization
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -238,7 +238,7 @@ stacks:
         default: adminadmin
   spark-trino-superset-s3:
     description: Stack containing MinIO, Trino and Superset for data visualization
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -283,7 +283,7 @@ stacks:
         default: supersetSecretKey
   trino-superset-s3:
     description: Stack containing MinIO, Trino and Superset for data visualization
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -325,7 +325,7 @@ stacks:
         default: supersetSecretKey
   trino-iceberg:
     description: Stack containing Trino using Apache Iceberg as an S3 data lakehouse
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -359,7 +359,7 @@ stacks:
         default: adminadmin
   jupyterhub-pyspark-hdfs:
     description: Jupyterhub with PySpark and HDFS integration
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -389,7 +389,7 @@ stacks:
         default: adminadmin
   dual-hive-hdfs-s3:
     description: Dual stack Hive on HDFS and S3 for Hadoop/Hive to Trino migration
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -426,7 +426,7 @@ stacks:
       The bind user credentials are: ldapadmin:ldapadminpassword.
       No AuthenticationClass is configured; the AuthenticationClass is created manually in the tutorial.
       Use the 'openldap' Stack for an OpenLDAP with an AuthenticationClass already installed.
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -449,7 +449,7 @@ stacks:
       The bind user credentials are: ldapadmin:ldapadminpassword.
       The LDAP AuthenticationClass is called 'ldap' and the SecretClass for the bind credentials is called 'ldap-bind-credentials'.
       The stack already creates an appropriate Secret, so referring to the 'ldap' AuthenticationClass in your ProductCluster should be enough.
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -475,7 +475,7 @@ stacks:
       3 users are created in Keycloak: admin:adminadmin, alice:alicealice, bob:bobbob. admin and alice are admins with
       full authorization in Druid and Trino, bob is not authorized.
       This is a proof-of-concept and the mechanisms used here are subject to change.
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -541,7 +541,7 @@ stacks:

       Note that this stack is tightly coupled with the demo.
       So if you install the stack you will get demo-specific parts (such as Keycloak users or regorules).
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
@@ -611,7 +611,7 @@ stacks:
   signal-processing:
     description: >-
       A stack used for creating, streaming and processing in-flight data and persisting it to TimescaleDB before it is displayed in Grafana
-    stackableRelease: 24.7
+    stackableRelease: dev
     stackableOperators:
       - commons
       - listener
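Assuming this file is the stack catalog consumed by stackablectl, as the repo layout suggests, the change can be exercised with something like the following. The commands are based on stackablectl's documented interface, not on anything in this PR:

```sh
# Point stackablectl at the modified stack catalog and list what it defines
stackablectl --stack-file stacks/stacks-v2.yaml stack list

# Install one of the stacks touched above; with this PR merged it should
# resolve operators from the dev release rather than 24.7
stackablectl --stack-file stacks/stacks-v2.yaml stack install monitoring
```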
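The openldap stack description above says the AuthenticationClass is named 'ldap' and its bind-credentials SecretClass 'ldap-bind-credentials'. For readers unfamiliar with those objects, here is a hedged sketch of the typical shape, following the AuthenticationClass CRD as documented by Stackable; hostname, port, and searchBase are placeholders, not values taken from this stack:

```yaml
apiVersion: authentication.stackable.tech/v1alpha1
kind: AuthenticationClass
metadata:
  name: ldap
spec:
  provider:
    ldap:
      # Placeholder endpoint; the stack's actual OpenLDAP service will differ
      hostname: openldap.default.svc.cluster.local
      port: 1389
      searchBase: ou=users,dc=example,dc=org
      bindCredentials:
        # SecretClass named in the stack description above
        secretClass: ldap-bind-credentials
```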