Merge branch 'master' into port-cli-docs
llewellyn-sl authored Jul 30, 2024
2 parents 010f27e + db0494f commit 6435707
Showing 72 changed files with 895 additions and 1,039 deletions.
29 changes: 29 additions & 0 deletions .github/workflows/check-internal-links.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,29 @@
name: "Internal link checking"

on: [pull_request]

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
check-internal-links:
runs-on: ubuntu-latest
steps:
# Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
- name: Check out repo
uses: actions/checkout@v2
# Node is required for npm
- name: Set up Node
uses: actions/setup-node@v2
with:
node-version: "18"
- name: Pull & update submodules recursively
run: |
git submodule update --init --recursive
# Fail the build in PRs, but not for Netlify previews or production builds
- name: Replace credentials
run: |
sed -Ei 's/onBroken([A-Za-z]+): "warn"/onBroken\1: "throw"/g' docusaurus.config.js
# Install and build Docusaurus website
- name: Build Docusaurus website
run: |
npm install
npm run build
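The `Replace credentials` step above flips Docusaurus's broken-link handling from `"warn"` to `"throw"` so that pull-request builds fail on broken internal links. The substitution can be sanity-checked locally against a sample config (file contents illustrative, not the real `docusaurus.config.js`):

```shell
# Create a sample config containing the two options the regex matches
cfg=$(mktemp)
printf 'onBrokenLinks: "warn",\nonBrokenMarkdownLinks: "warn",\n' > "$cfg"
# Same substitution as the workflow step (GNU sed: -E extended regex, -i in place)
sed -Ei 's/onBroken([A-Za-z]+): "warn"/onBroken\1: "throw"/g' "$cfg"
cat "$cfg"
```

Both `onBrokenLinks` and `onBrokenMarkdownLinks` are rewritten to `"throw"` by the single capture-group pattern.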
29 changes: 29 additions & 0 deletions .github/workflows/links.yml
@@ -0,0 +1,29 @@
name: Links

# https://github.com/lycheeverse/lychee-action

on:
repository_dispatch:
workflow_dispatch:
schedule:
- cron: "00 18 * * *"

jobs:
linkChecker:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4

- name: Link Checker
id: lychee
uses: lycheeverse/lychee-action@v1
with:
args: --base . --verbose --no-progress -s https -s http './**/*.mdx'

- name: Create Issue From File
if: env.lychee_exit_code != 0
uses: peter-evans/create-issue-from-file@v4
with:
title: Link Checker Report
content-filepath: ./lychee/out.md
labels: report, automated issue
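The issue-creation step only runs when the link check fails: the lychee action exports `lychee_exit_code`, and a non-zero value triggers `create-issue-from-file`. The gating is the shell equivalent of the following sketch (`run_link_check` is a hypothetical stand-in for the real lychee invocation):

```shell
# Stand-in for the lychee run; pretend two links failed
run_link_check() { return 2; }
run_link_check
lychee_exit_code=$?
# Mirrors `if: env.lychee_exit_code != 0` in the workflow
if [ "$lychee_exit_code" -ne 0 ]; then
  echo "link failures detected; a report issue would be filed"
fi
```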
11 changes: 6 additions & 5 deletions fusion_docs/guide.mdx
@@ -6,14 +6,15 @@ title: User guide

Fusion is a virtual, lightweight, distributed file system designed to optimise the data access of Nextflow data pipelines.

Fusion enables seamless filesystem I/O to cloud object stores via a standard POSIX interface resulting in simpler
pipeline logic and faster, more efficient pipeline execution.
Fusion enables seamless filesystem I/O to cloud object stores via a standard POSIX interface resulting in simpler pipeline logic and faster, more efficient pipeline execution.

:::note
Fusion requires a license for use beyond limited testing and validation within Seqera Platform or directly within Nextflow. [Contact Seqera](https://seqera.io/contact-us/) for more details.
:::

## Getting started

Fusion smoothly integrates with Nextflow and does not require any installation or change in pipeline code.
It only requires to use of container runtime or a container computing service such as Kubernetes, AWS Batch and
Google Cloud Batch.
Fusion smoothly integrates with Nextflow and does not require any installation or change in pipeline code. It only requires the use of a container runtime or a container computing service such as Kubernetes, AWS Batch, or Google Cloud Batch.

:::note

4 changes: 4 additions & 0 deletions fusion_docs/index.mdx
@@ -11,6 +11,10 @@ Fusion is a virtual, lightweight, distributed file system that bridges the gap b
storage. Fusion enables seamless filesystem I/O to cloud object stores via a standard POSIX interface resulting in
simpler pipeline logic and faster, more efficient pipeline execution.

:::note
Fusion requires a license for use beyond limited testing and validation within Seqera Platform or directly within Nextflow. [Contact Seqera](https://seqera.io/contact-us/) for more details.
:::

## Features

### Transparent, automated installation
@@ -126,7 +126,7 @@ Once the AWS resources are set up, we can add a new **AWS Batch** environment in
:::

:::note
From version 22.3, Tower supports the use of credentials for container registry services. These credentials can be created from the [Credentials](../credentials/overview.mdx/#container-registry-credentials) tab.
From version 22.3, Tower supports the use of credentials for container registry services. These credentials can be created from the [Credentials](../credentials/overview.mdx) tab.
:::

7. Select a **Region**, for example "eu-west-1 - Europe (Ireland)".
@@ -117,7 +117,7 @@ Once the Azure resources are set up, we can add a new **Azure Batch** environmen
:::

:::note
From version 22.3, Tower supports the use of credentials for container registry services. These credentials can be created from the [Credentials](../credentials/overview.mdx/#container-registry-credentials) tab.
From version 22.3, Tower supports the use of credentials for container registry services. These credentials can be created from the [Credentials](../credentials/overview.mdx) tab.
:::

7. Select a **Region**, for example "eastus (East US)".
4 changes: 2 additions & 2 deletions platform_versioned_docs/version-22.4.0/compute-envs/eks.mdx
@@ -11,7 +11,7 @@ Tower offers native support for AWS EKS clusters and streamlines the deployment

## Requirements

You need to have an EKS cluster up and running. Make sure you have followed the [cluster preparation](../k8s/#cluster-preparation) instructions to create the cluster resources required by Tower. In addition to the generic Kubernetes instructions, you will need to make a few modifications specific to EKS.
You need to have an EKS cluster up and running. Make sure you have followed the [cluster preparation](../compute-envs/k8s.mdx#cluster-preparation) instructions to create the cluster resources required by Tower. In addition to the generic Kubernetes instructions, you will need to make a few modifications specific to EKS.

**Assign service account role to IAM user.** You will need to associate the service account role with the AWS user that Tower uses to access the EKS cluster.
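On EKS this association is typically made in the cluster's `aws-auth` ConfigMap. A minimal sketch, with the account ID, user name, and group name as illustrative placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    # Illustrative mapping: grant the Tower IAM user the launcher group
    - userarn: arn:aws:iam::<ACCOUNT-ID>:user/tower-launcher-user
      username: tower-launcher-user
      groups:
        - tower-launcher-role
```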

@@ -67,7 +67,7 @@ For more details, refer to the [AWS documentation](https://docs.aws.amazon.com/e
:::

:::note
From version 22.3, Tower supports the use of credentials for container registry services. These credentials can be created from the [Credentials](../credentials/overview.mdx/#container-registry-credentials) tab.
From version 22.3, Tower supports the use of credentials for container registry services. These credentials can be created from the [Credentials](../credentials/overview.mdx) tab.
:::

5. Select a **Region**, for example "eu-west-1 - Europe (Ireland)".
6 changes: 3 additions & 3 deletions platform_versioned_docs/version-22.4.0/compute-envs/gke.mdx
@@ -11,9 +11,9 @@ Tower offers native support for Google GKE clusters and streamlines the deployme

### Requirements

Refer to the [Google Cloud](../google-cloud/#configure-google-cloud) section for instructions on how to set up your Google Cloud account and any other services (e.g. Cloud Storage) that you intend to use.
Refer to the [Google Cloud](./google-cloud-batch.mdx#configure-google-cloud) section for instructions on how to set up your Google Cloud account and any other services (e.g. Cloud Storage) that you intend to use.

You need to have a GKE cluster up and running. Make sure you have followed the [cluster preparation](../k8s/#cluster-preparation) instructions to create the cluster resources required by Tower. In addition to the generic Kubernetes instructions, you will need to make a few modifications specific to GKE.
You need to have a GKE cluster up and running. Make sure you have followed the [cluster preparation](../compute-envs/k8s.mdx#cluster-preparation) instructions to create the cluster resources required by Tower. In addition to the generic Kubernetes instructions, you will need to make a few modifications specific to GKE.

**Assign service account role to IAM user.** You will need to grant the cluster access to the service account used to authenticate the Tower compute environment. This can be done by updating the _role binding_ as shown below:
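The role-binding manifest itself is collapsed in this diff view. A minimal sketch of such a binding, assuming the `tower-launcher-role` Role and namespace from the generic cluster preparation, with an illustrative service-account e-mail:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tower-launcher-rolebind
  namespace: tower-nf
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tower-launcher-role
subjects:
  # The GCP service account used by the Tower compute environment
  - kind: User
    name: <SERVICE-ACCOUNT-EMAIL>
    apiGroup: rbac.authorization.k8s.io
```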

@@ -59,7 +59,7 @@ For more details, refer to the [Google documentation](https://cloud.google.com/k
:::

:::note
From version 22.3, Tower supports the use of credentials for container registry services. These credentials can be created from the [Credentials](../credentials/overview.mdx/#container-registry-credentials.mdx) tab.
From version 22.3, Tower supports the use of credentials for container registry services. These credentials can be created from the [Credentials](../credentials/overview.mdx) tab.
:::

7. Select the **Location** of your GKE cluster.
@@ -123,7 +123,7 @@ To create a new compute environment for Google Cloud in Tower:
:::

:::note
From version 22.3, Tower supports the use of credentials for container registry services. These credentials can be created from the [Credentials](../credentials/overview.mdx/#container-registry-credentials) tab.
From version 22.3, Tower supports the use of credentials for container registry services. These credentials can be created from the [Credentials](../credentials/overview.mdx) tab.
:::

7. Select the [**Region** and **Zones**](https://cloud.google.com/compute/docs/regions-zones#available) where you'd like to execute pipelines. You can leave the **Location** empty and the Cloud Life Sciences API will use the closest available location.
@@ -9,7 +9,7 @@ description: "Step-by-step instructions to set up a Nextflow Tower compute envir

Tower streamlines the deployment of Nextflow pipelines into Kubernetes both for cloud-based and on-prem clusters.

The following instructions are for a **generic Kubernetes** distribution. If you are using [Amazon EKS](../eks/) or [Google GKE](../gke/), see the corresponding documentation pages.
The following instructions are for a **generic Kubernetes** distribution. If you are using [Amazon EKS](eks.mdx) or [Google GKE](gke.mdx), see the corresponding documentation pages.

### Cluster Preparation

@@ -91,7 +91,7 @@ The following metadata fields are collected and stored by the Tower backend duri

| Name | Description |
| -------------- | ---------------------------------------------------------------------------------------------- |
| `attempt` | Number of execution attempt of the task |
| `attempt` | Number of Nextflow execution attempts of the task |
| `cloud_zone` | Cloud zone where the task execution was allocated |
| `complete` | Task execution completion timestamp |
| `container` | Container image name used to execute the task |
@@ -11,7 +11,7 @@ description: "Manage users and teams for an organization."
Organizations consist of members, while workspaces consist of participants.

:::note
A workspace participant may be a member of the workspace organization or a collaborator within that workspace only. Collaborators count toward the total number of workspace participants. See [Usage limits](/docs/limits/limits.mdx).
A workspace participant may be a member of the workspace organization or a collaborator within that workspace only. Collaborators count toward the total number of workspace participants. See [Usage limits](../limits/limits.mdx).
:::

### Create a new workspace
@@ -111,7 +111,7 @@ When the compute environment is created with Forge, the following resources will

At execution time, when the jobs are submitted to Batch, the requests are set up to propagate tags to all the instances created by the head job.

The [`forge-policy.json`](/docs/_templates/aws-batch/forge-policy.json) file contains the roles needed for Batch Forge-created AWS compute environments to tag AWS resources. Specifically, the required roles are `iam:TagRole`, `iam:TagInstanceProfile`, and `batch:TagResource`.
The [`forge-policy.json`](../_templates/aws-batch/forge-policy.json) file contains the roles needed for Batch Forge-created AWS compute environments to tag AWS resources. Specifically, the required roles are `iam:TagRole`, `iam:TagInstanceProfile`, and `batch:TagResource`.
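As an illustration (not the complete `forge-policy.json`), a statement granting those three actions might look like the following; the `"Resource": "*"` scope is illustrative and the actual policy may be narrower:

```json
{
  "Sid": "TowerForgeTagging",
  "Effect": "Allow",
  "Action": [
    "iam:TagRole",
    "iam:TagInstanceProfile",
    "batch:TagResource"
  ],
  "Resource": "*"
}
```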

To view and manage the resource labels applied to AWS resources by Tower and Nextflow, navigate to the [AWS Tag Editor](https://docs.aws.amazon.com/tag-editor/latest/userguide/find-resources-to-tag.html) (as an administrative user) and follow these steps:

@@ -15,10 +15,10 @@ To enable Fusion in Tower:

- Use Nextflow version `22.10.0` or later. The latest version of Nextflow is used in Tower by default, but a particular version can be specified using `NXF_VER` in the Nextflow config file field (**Advanced options -> Nextflow config file** under Pipeline settings on the launch page).

- Enable the [Wave containers service](https://www.nextflow.io/docs/latest/wave.html#wave-page) during [AWS Batch](/docs/compute-envs/aws-batch.mdx) compute environment creation.
- Enable the [Wave containers service](https://www.nextflow.io/docs/latest/wave.html#wave-page) during [AWS Batch](../../compute-envs/aws-batch.mdx) compute environment creation.

- Select **Enable Fusion v2** during compute environment creation.

- (Optional) Select **Enable fast instance storage** to make use of NVMe instance storage to further increase performance.

See the [AWS Batch](/docs/compute-envs/aws-batch.mdx#) compute environment page for detailed instructions.
See the [AWS Batch](../../compute-envs/aws-batch.mdx) compute environment page for detailed instructions.
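For reference, enabling the same features directly in Nextflow (outside Tower) uses the `fusion` and `wave` configuration scopes. A minimal sketch — Tower sets the equivalent options automatically when Fusion v2 and Wave are selected in the compute environment:

```groovy
// Illustrative only — Tower applies equivalent settings for you
fusion.enabled = true
wave.enabled   = true
```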
@@ -92,7 +92,7 @@ The following metadata fields are collected and stored by the Tower backend duri

| Name | Description |
| -------------- | ---------------------------------------------------------------------------------------------- |
| `attempt` | Number of execution attempt of the task |
| `attempt` | Number of Nextflow execution attempts of the task |
| `cloud_zone` | Cloud zone where the task execution was allocated |
| `complete` | Task execution completion timestamp |
| `container` | Container image name used to execute the task |
@@ -10,6 +10,9 @@ spec:
http:
paths:
- path: /*
pathType: ImplementationSpecific
backend:
serviceName: frontend
servicePort: 80
service:
name: frontend
port:
number: 80
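The edit above migrates the Ingress backend from the deprecated `extensions/v1beta1` fields (`serviceName`, `servicePort`) to the `networking.k8s.io/v1` schema, where the backend is a nested `service` object. With indentation restored, the updated rule reads:

```yaml
# networking.k8s.io/v1 Ingress backend form
- path: /*
  pathType: ImplementationSpecific
  backend:
    service:
      name: frontend
      port:
        number: 80
```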
@@ -17,14 +17,20 @@ metadata:
access_logs.s3.prefix=YOUR-LOGS-PREFIX
spec:
rules:
- host: YOUR-TOWER-HOST-NAME
- host: <YOUR-TOWER-HOST-NAME>
http:
paths:
- path: /*
pathType: ImplementationSpecific
backend:
serviceName: ssl-redirect
servicePort: use-annotation
service:
name: ssl-redirect
port:
name: use-annotation
- path: /*
pathType: ImplementationSpecific
backend:
serviceName: frontend
servicePort: 80
service:
name: frontend
port:
number: 80
@@ -412,7 +412,7 @@ tags: [changelog]
- Fix Unable to download execution log from a workflow with working directory specified just as "bucket" name [d025917c]
- Fix Prevent the creation of Spot fleet role [95acea2c]
- Fix Prevent deletion of an active workflow run [ba1f1ce9]
- Fix Prevent XSS attacks when uploading a datatable file [#2944](6d98210c)
- Fix Prevent XSS attacks when uploading a datatable file [#2944] (6d98210c)
- 8759d92e - Validate launch/re-launch action depending on the user role
- d6113805 - Bump base image nf-jdk:corretto-11.0.14_2 [ci fast]

@@ -12,7 +12,7 @@ tags: [workspaces, teams, users, administration]
Organizations consist of members, while workspaces consist of participants.

:::note
A workspace participant may be a member of the workspace organization or a collaborator within that workspace only. Collaborators count toward the total number of workspace participants. See [Usage limits](/docs/limits/limits.mdx).
A workspace participant may be a member of the workspace organization or a collaborator within that workspace only. Collaborators count toward the total number of workspace participants. See [Usage limits](../limits/limits.mdx).
:::

### Create a new workspace
@@ -5,7 +5,7 @@ date: "21 Apr 2023"
tags: [agent, credentials]
---

[Tower Agent](../agent.mdx) enables Tower to launch pipelines on HPC clusters that do not allow direct access through an SSH client. Tower Agent authenticates a secure connection with Tower using a Tower Agent credential.
[Tower Agent](../supported_software/agent/agent.mdx) enables Tower to launch pipelines on HPC clusters that do not allow direct access through an SSH client. Tower Agent authenticates a secure connection with Tower using a Tower Agent credential.

### Tower Agent sharing

@@ -92,7 +92,7 @@ The following metadata fields are collected and stored by the Tower backend duri

| Name | Description |
| -------------- | ---------------------------------------------------------------------------------------------- |
| `attempt` | Number of execution attempt of the task |
| `attempt` | Number of Nextflow execution attempts of the task |
| `cloud_zone` | Cloud zone where the task execution was allocated |
| `complete` | Task execution completion timestamp |
| `container` | Container image name used to execute the task |
@@ -10,6 +10,9 @@ spec:
http:
paths:
- path: /*
pathType: ImplementationSpecific
backend:
serviceName: frontend
servicePort: 80
service:
name: frontend
port:
number: 80
@@ -17,14 +17,20 @@ metadata:
access_logs.s3.prefix=YOUR-LOGS-PREFIX
spec:
rules:
- host: YOUR-TOWER-HOST-NAME
- host: <YOUR-TOWER-HOST-NAME>
http:
paths:
- path: /*
pathType: ImplementationSpecific
backend:
serviceName: ssl-redirect
servicePort: use-annotation
service:
name: ssl-redirect
port:
name: use-annotation
- path: /*
pathType: ImplementationSpecific
backend:
serviceName: frontend
servicePort: 80
service:
name: frontend
port:
number: 80
