Added documentation on writing file run output to S3 storage and on logging. Mentions the new runtime environment variable ELYRA_GENERIC_NODES_ENABLE_SCRIPT_OUTPUT_TO_S3 (#123)
pipelines/run-generic-pipelines-on-apache-airflow/README.md (+18 -1)
@@ -53,7 +53,24 @@ Elyra currently supports Apache Airflow deployments that utilize GitHub or GitHu
- Branch in named repository, e.g. `test-dags`. This branch must exist.
- [Personal access token](https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token) that Elyra can use to push DAGs to the repository, e.g. `4d79206e616d6520697320426f6e642e204a616d657320426f6e64`
-Elyra utilizes S3-compatible cloud storage to make data available to notebooks and Python scripts while they are executed. Any kind of cloud storage should work (e.g. IBM Cloud Object Storage or Minio) as long as it can be accessed from the machine where JupyterLab is running and the Apache Airflow cluster. Collect the following information:
+Elyra utilizes S3-compatible cloud storage to make data available to Jupyter notebooks and R or Python scripts while they are executed. Any kind of cloud storage should work (e.g. IBM Cloud Object Storage or Minio) as long as it can be accessed from the machine where JupyterLab is running and the Apache Airflow cluster.
+
+Elyra also writes the STDOUT (including STDERR) run output to a file when the environment variable `ELYRA_GENERIC_NODES_ENABLE_SCRIPT_OUTPUT_TO_S3` is set to `true` or is not present in the runtime container, which is the default.
+This happens in addition to logging and writing to STDOUT and STDERR at runtime.
+
+`ipynb` file execution run/STDOUT output is written to S3-compatible object storage in the following files:
+- `<notebook name>-output.ipynb`
+- `<notebook name>.html`
+
+`.r` and `.py` file execution run/STDOUT output is written to S3-compatible object storage in the following file:
+- `<r or python filename>.log`
+
+Note: If you prefer to use S3-compatible storage only for the transfer of files between pipeline steps and **not for logging information / run output of R, Python, and Jupyter Notebook files**,
+either set the environment variable **`ELYRA_GENERIC_NODES_ENABLE_SCRIPT_OUTPUT_TO_S3`** to **`false`** in your runtime container builds or pass that value explicitly in the env section of the pipeline editor,
+either at Pipeline Properties - Generic Node Defaults - Environment Variables or at
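
For illustration, here is a minimal Python sketch of the switch the added text describes. This is not Elyra's actual implementation; it only mirrors the documented default (output is also uploaded to object storage unless the variable is explicitly set to `false`), and the helper name is made up:

```python
# Simplified sketch of the documented default -- not Elyra's actual code.
import os

def script_output_to_s3_enabled() -> bool:
    # Enabled when the variable is set to "true" or not present at all (the default).
    value = os.environ.get("ELYRA_GENERIC_NODES_ENABLE_SCRIPT_OUTPUT_TO_S3", "true")
    return value.strip().lower() == "true"

if script_output_to_s3_enabled():
    print("Run output (STDOUT/STDERR) will also be uploaded to S3-compatible storage.")
else:
    print("Run output stays in the runtime logs only.")
```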
pipelines/run-generic-pipelines-on-kubeflow-pipelines/README.md (+18 -1)
@@ -47,7 +47,24 @@ Collect the following information for your Kubeflow Pipelines installation:
- Password, for a multi-user, auth-enabled Kubeflow installation, e.g. `passw0rd`
- Workflow engine type, which should be `Argo` or `Tekton`. Contact your administrator if you are unsure which engine your deployment utilizes.
-Elyra utilizes S3-compatible cloud storage to make data available to notebooks and scripts while they are executed. Any kind of cloud storage should work (e.g. IBM Cloud Object Storage or Minio) as long as it can be accessed from the machine where JupyterLab is running and from the Kubeflow Pipelines cluster. Collect the following information:
+Elyra utilizes S3-compatible cloud storage to make data available to Jupyter notebooks and R or Python scripts while they are executed. Any kind of cloud storage should work (e.g. IBM Cloud Object Storage or Minio) as long as it can be accessed from the machine where JupyterLab is running and from the Kubeflow Pipelines cluster.
+
+Elyra also writes the STDOUT (including STDERR) run output to a file when the environment variable `ELYRA_GENERIC_NODES_ENABLE_SCRIPT_OUTPUT_TO_S3` is set to `true` or is not present in the runtime container, which is the default.
+This happens in addition to logging and writing to STDOUT and STDERR at runtime.
+
+`ipynb` file execution run/STDOUT output is written to S3-compatible object storage in the following files:
+- `<notebook name>-output.ipynb`
+- `<notebook name>.html`
+
+`.r` and `.py` file execution run/STDOUT output is written to S3-compatible object storage in the following file:
+- `<r or python filename>.log`
+
+Note: If you prefer to use S3-compatible storage only for the transfer of files between pipeline steps and **not for logging information / run output of R, Python, and Jupyter Notebook files**,
+either set the environment variable **`ELYRA_GENERIC_NODES_ENABLE_SCRIPT_OUTPUT_TO_S3`** to **`false`** in your runtime container builds or pass that value explicitly in the env section of the pipeline editor,
+either at Pipeline Properties - Generic Node Defaults - Environment Variables or at
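
To make the "in addition to logging and writing to STDOUT and STDERR at runtime" part concrete, here is a small self-contained Python sketch (not Elyra code) of capturing a script's merged STDOUT/STDERR into a `.log` file while still printing it; the script and log file names are made-up examples:

```python
# Illustrative only: run a hypothetical script, stream its output to the console,
# and also capture it in a .log file -- similar in spirit to the behavior described above.
import subprocess
import sys

script = "process_data.py"     # hypothetical pipeline node script
log_path = "process_data.log"  # captured run output

with open(log_path, "w") as log:
    proc = subprocess.Popen(
        [sys.executable, script],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # merge STDERR into STDOUT
        text=True,
    )
    for line in proc.stdout:
        sys.stdout.write(line)     # still visible in the runtime logs
        log.write(line)            # and captured in the log file
    proc.wait()
```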
pipelines/run-pipelines-on-apache-airflow/README.md (+16 -1)
@@ -52,7 +52,22 @@ Collect the following information for your Apache Airflow installation:
Detailed instructions for setting up a DAG repository and generating an access token can be found in [the User Guide](https://elyra.readthedocs.io/en/latest/recipes/configure-airflow-as-a-runtime.html#setting-up-a-dag-repository-on-github).
-Elyra utilizes S3-compatible cloud storage to make data available to notebooks and scripts while they are executed. Any kind of S3-based cloud storage should work (e.g. IBM Cloud Object Storage or Minio) as long as it can be accessed from the machine where JupyterLab/Elyra is running and from the Apache Airflow cluster.
+Elyra utilizes S3-compatible cloud storage to make data available to Jupyter notebooks and R or Python scripts while they are executed. Any kind of S3-based cloud storage should work (e.g. IBM Cloud Object Storage or Minio) as long as it can be accessed from the machine where JupyterLab/Elyra is running and from the Apache Airflow cluster.
+
+Elyra also writes the STDOUT (including STDERR) run output to a file when the environment variable `ELYRA_GENERIC_NODES_ENABLE_SCRIPT_OUTPUT_TO_S3` is set to `true` or is not present in the runtime container, which is the default.
+This happens in addition to logging and writing to STDOUT and STDERR at runtime.
+
+`ipynb` file execution run/STDOUT output is written to S3-compatible object storage in the following files:
+- `<notebook name>-output.ipynb`
+- `<notebook name>.html`
+
+`.r` and `.py` file execution run/STDOUT output is written to S3-compatible object storage in the following file:
+- `<r or python filename>.log`
+
+Note: If you prefer to use S3-compatible storage only for the transfer of files between pipeline steps and **not for logging information / run output of R, Python, and Jupyter Notebook files**,
+either set the environment variable **`ELYRA_GENERIC_NODES_ENABLE_SCRIPT_OUTPUT_TO_S3`** to **`false`** in your runtime container builds or pass that value explicitly in the env section of the pipeline editor,
+either at Pipeline Properties - Generic Node Defaults - Environment Variables or at
pipelines/run-pipelines-on-kubeflow-pipelines/README.md (+17 -1)
@@ -52,7 +52,23 @@ Collect the following information for your Kubeflow Pipelines installation:
- Password, for a multi-user, auth-enabled Kubeflow installation, e.g. `passw0rd`
- Workflow engine type, which should be `Argo` or `Tekton`. Contact your administrator if you are unsure which engine your deployment utilizes.
-Elyra utilizes S3-compatible cloud storage to make data available to notebooks and scripts while they are executed. Any kind of S3-based cloud storage should work (e.g. IBM Cloud Object Storage or Minio) as long as it can be accessed from the machine where JupyterLab/Elyra is running and from the Kubeflow Pipelines cluster.
+Elyra utilizes S3-compatible cloud storage to make data available to Jupyter notebooks and R or Python scripts while they are executed. Any kind of S3-based cloud storage should work (e.g. IBM Cloud Object Storage or Minio) as long as it can be accessed from the machine where JupyterLab/Elyra is running and from the Kubeflow Pipelines cluster.
+
+Elyra also writes the STDOUT (including STDERR) run output to a file when the environment variable `ELYRA_GENERIC_NODES_ENABLE_SCRIPT_OUTPUT_TO_S3` is set to `true` or is not present in the runtime container, which is the default.
+This happens in addition to logging and writing to STDOUT and STDERR at runtime.
+
+`ipynb` file execution run/STDOUT output is written to S3-compatible object storage in the following files:
+- `<notebook name>-output.ipynb`
+- `<notebook name>.html`
+
+`.r` and `.py` file execution run/STDOUT output is written to S3-compatible object storage in the following file:
+- `<r or python filename>.log`
+
+Note: If you prefer to use S3-compatible storage only for the transfer of files between pipeline steps and **not for logging information / run output of R, Python, and Jupyter Notebook files**,
+either set the environment variable **`ELYRA_GENERIC_NODES_ENABLE_SCRIPT_OUTPUT_TO_S3`** to **`false`** in your runtime container builds or pass that value explicitly in the env section of the pipeline editor,
+either at Pipeline Properties - Generic Node Defaults - Environment Variables or at
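
As a usage note, the uploaded run-output files can be fetched from the cloud object storage bucket with any S3 client. Below is a minimal boto3 sketch; the endpoint, credentials, bucket name, and object prefix are placeholders, not values from this tutorial, and must be replaced with your own runtime configuration:

```python
# Minimal example of downloading the run-output files with boto3.
# All connection details below are placeholders, not values from this tutorial.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://minio-service:9000",  # placeholder S3-compatible endpoint
    aws_access_key_id="minio",                 # placeholder credentials
    aws_secret_access_key="minio123",
)

bucket = "my-pipeline-bucket"  # placeholder bucket name
prefix = "my-pipeline-run/"    # placeholder per-run object prefix

# Download <notebook name>-output.ipynb, <notebook name>.html, and *.log files.
for obj in s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get("Contents", []):
    key = obj["Key"]
    if key.endswith(("-output.ipynb", ".html", ".log")):
        s3.download_file(bucket, key, key.split("/")[-1])
        print(f"downloaded {key}")
```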