Add more GCVE documentation #8146

Merged
102 changes: 27 additions & 75 deletions infra/gcp/terraform/k8s-infra-gcp-gcve/README.md
@@ -1,87 +1,39 @@
# Setup
# Overview

## Creation of GCVE
The code in `k8s-infra-gcp-gcve` sets up the infra required to allow prow jobs to create VMs on vSphere, e.g. to allow testing of the [Cluster API provider vSphere (CAPV)](https://github.com/kubernetes-sigs/cluster-api-provider-vsphere).

```sh
gcloud auth application-default login
terraform init
terraform apply
```
![Overview](./docs/images/overview.jpg)

## Setup jumphost/vpn for further configuration
Prow container settings are managed outside of this folder, but understanding the high-level components helps to understand how `k8s-infra-gcp-gcve` is set up and consumed.

See [maintenance-jumphost/README.md](./maintenance-jumphost/README.md).
More specifically, to allow prow jobs to create VMs on vSphere, a few resources are made available to a prow container; as of today this is only the case in the `k8s-infra-prow-build` prow cluster.

## Manual creation of a user and other IAM configuration in vSphere
- A secret, added via the `preset-gcve-e2e-config` [preset](https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes-sigs/cluster-api-provider-vsphere/cluster-api-provider-vsphere-presets.yaml), that provides the vSphere URL and vSphere credentials
- A set of Boskos resources of type `gcve-vsphere-project` (see the acquire sketch after this list), each allowing access to:
  - a vSphere folder and a vSphere resource pool in which to run VMs during a test.
  - a reserved IP range to be used for the test, e.g. for the kube-vip load balancer in a CAPV cluster (VMs instead get their IPs via DHCP).
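
As a minimal sketch of how such a resource can be leased, assuming the `boskosctl` client from [kubernetes-sigs/boskos](https://github.com/kubernetes-sigs/boskos) and a reachable `BOSKOS_HOST`:

```sh
# Sketch only: lease a free gcve-vsphere-project resource from Boskos,
# then hand it back as dirty so the reaper cleans it up.
boskosctl acquire \
  --server-url "http://${BOSKOS_HOST}" \
  --owner-name "manual-debugging" \
  --type gcve-vsphere-project \
  --state free \
  --target-state busy

boskosctl release \
  --server-url "http://${BOSKOS_HOST}" \
  --owner-name "manual-debugging" \
  --name k8s-infra-e2e-gcp-gcve-project-001 \
  --target-state dirty
```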

> **Note:**
> The configuration described here cannot be done via terraform because the required provider functionality does not exist.

Also, the network of the prow container is peered to the VMware Engine network, allowing access to both the GCVE management network and the NSX-T network where all the VMs are running.
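
For illustration, such a peering can be created with `gcloud`; the names below are placeholders, not the values used by this project:

```sh
# Illustrative only: peer a VPC network with a VMware Engine network.
gcloud vmware network-peerings create example-peering \
  --location=global \
  --vmware-engine-network=example-ven \
  --peer-network=projects/example-project/global/networks/example-vpc \
  --peer-network-type=STANDARD
```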

First we generate a password for the user which will be used in prow and set it as an environment variable:
The `k8s-infra-gcp-gcve` project sets up the infrastructure that actually runs the VMs created from the prow container. These are the main components of this infrastructure:
```sh
export GCVE_PROW_CI_PASSWORD="SomePassword"
```
The terraform manifest in this folder, which is applied by test-infra automation (Atlantis), uses the GCP terraform provider to create:
- A VMware Engine instance
- The network infrastructure required for vSphere and for communication between vSphere and the prow container.
  - The network used is `192.168.32.0/21`
    - Usable host IP range: `192.168.32.1 - 192.168.39.254`
    - DHCP range: `192.168.32.11 - 192.168.33.255`
    - IP pool for 40 projects with 16 IPs each: `192.168.35.0 - 192.168.37.127`
- The network infrastructure used for maintenance.

And set credentials for `govc`:
See inline comments for more details.

```sh
export GOVC_URL="$(gcloud vmware private-clouds describe k8s-gcp-gcve-pc --location us-central1-a --format='get(vcenter.fqdn)')"
export GOVC_USERNAME='[email protected]'
export GOVC_PASSWORD="$(gcloud vmware private-clouds vcenter credentials describe --private-cloud=k8s-gcp-gcve-pc [email protected] --location=us-central1-a --format='get(password)')"
```
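
A quick sanity check that the credentials work (optional):

```sh
# Confirm govc can authenticate against vCenter.
govc about
# Print the vCenter certificate thumbprint (also used for a secret below).
govc about.cert -json | jq -r '.thumbprintSHA256'
```
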
The terraform manifest in the `/maintenance-jumphost` folder uses the GCP terraform provider to set up a jumphost VM used for setting up vSphere or for maintenance purposes. See:
- [maintenance-jumphost](./maintenance-jumphost/README.md)

Run the script to set up the users, groups and IAM in vSphere.
The terraform manifest in the `/vsphere` folder uses the vSphere and the NSX terraform providers to set up e.g. content libraries, templates, folders,
resource pools and other vSphere components required when running tests. See:
- [vsphere](./vsphere/README.md)

```sh
./vsphere/scripts/ensure-users-groups.sh
```
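
To verify the script's result, the SSO users and groups can be listed with `govc` (a hedged check; the exact user and group names are created by the script):

```sh
# List SSO users and groups to confirm the prow CI user and its groups exist.
govc sso.user.ls
govc sso.group.ls
```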

Create the relevant secrets in Secret Manager:

```sh
gcloud secrets describe k8s-gcp-gcve-ci-url 2>/dev/null || echo "$GOVC_URL" | gcloud secrets create k8s-gcp-gcve-ci-url --data-file=-
gcloud secrets describe k8s-gcp-gcve-ci-username 2>/dev/null || echo "[email protected]" | gcloud secrets create k8s-gcp-gcve-ci-username --data-file=-
gcloud secrets describe k8s-gcp-gcve-ci-password 2>/dev/null || echo "${GCVE_PROW_CI_PASSWORD}" | gcloud secrets create k8s-gcp-gcve-ci-password --data-file=-
gcloud secrets describe k8s-gcp-gcve-ci-thumbprint 2>/dev/null || echo "$(govc about.cert -json | jq -r '.thumbprintSHA256')" | gcloud secrets create k8s-gcp-gcve-ci-thumbprint --data-file=-
```
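
To double-check what was stored, each secret can be read back:

```sh
# Read back the stored secrets to verify their contents.
gcloud secrets versions access latest --secret=k8s-gcp-gcve-ci-url
gcloud secrets versions access latest --secret=k8s-gcp-gcve-ci-username
gcloud secrets versions access latest --secret=k8s-gcp-gcve-ci-thumbprint
```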

* `k8s-gcp-gcve-ci-username` with value `[email protected]`
* `k8s-gcp-gcve-ci-password` with the value set above for `GCVE_PROW_CI_PASSWORD`
* `k8s-gcp-gcve-ci-url` with the value set above for `GOVC_URL`
* `k8s-gcp-gcve-ci-thumbprint` with the vCenter certificate thumbprint from `govc about.cert`

> **Note:** Changing the GCVE CI user's password
>
> 1. Set GOVC credentials as above.
> 2. Run the govc command to update the password: `govc sso.user.update -p "${GCVE_PROW_CI_PASSWORD}" prow-ci-user`
> 3. Update the secret `k8s-gcp-gcve-ci-password` in Secret Manager: `echo "${GCVE_PROW_CI_PASSWORD}" | gcloud secrets versions add k8s-gcp-gcve-ci-password --data-file=-`

## Configuration of GCVE

```sh
export [email protected]
export TF_VAR_vsphere_password="$(gcloud vmware private-clouds vcenter credentials describe --private-cloud=k8s-gcp-gcve-pc [email protected] --location=us-central1-a --format='get(password)')" # gcloud command
export TF_VAR_vsphere_server="$(gcloud vmware private-clouds describe k8s-gcp-gcve-pc --location us-central1-a --format='get(vcenter.fqdn)')"
export TF_VAR_nsxt_user=admin
export TF_VAR_nsxt_password="$(gcloud vmware private-clouds nsx credentials describe --private-cloud k8s-gcp-gcve-pc --location us-central1-a --format='get(password)')"
export TF_VAR_nsxt_server="$(gcloud vmware private-clouds describe k8s-gcp-gcve-pc --location us-central1-a --format='get(nsx.fqdn)')"
export GOVC_URL="${TF_VAR_vsphere_server}"
export GOVC_USERNAME="${TF_VAR_vsphere_user}"
export GOVC_PASSWORD="${TF_VAR_vsphere_password}"
```

```sh
cd vsphere
terraform init
terraform apply
./scripts/ensure-users-permissions.sh
```
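
After the apply, the created vSphere objects can be inspected with `govc`; a rough check using the naming documented in [docs/boskos.md](./docs/boskos.md):

```sh
# Rough check: list the per-project folders and resource pools created above.
govc ls /Datacenter/vm/prow
govc find / -type p -name 'k8s-infra-e2e-gcp-gcve-project-*'
```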

## Initialize Boskos resources with project information

The script [boskos-userdata.sh](vsphere/scripts/boskos-userdata.sh) calculates and initializes the Boskos resources required for the project.

```sh
BOSKOS_HOST=""
vsphere/scripts/boskos-userdata.sh
```
25 changes: 25 additions & 0 deletions infra/gcp/terraform/k8s-infra-gcp-gcve/docs/boskos.md
@@ -0,0 +1,25 @@
# Boskos

Boskos resources of type `gcve-vsphere-project` allow each test run to use a subset of vSphere resources.

Boskos configuration is split into three parts:

- The definition of the resource type in the [boskos-reaper](https://github.com/kubernetes/k8s.io/blob/main/kubernetes/gke-prow-build/prow/boskos-reaper.yaml) Deployment
  - search for e.g. `gcve-vsphere-project`
- A static list of resources in the [boskos-resources-configmap](https://github.com/kubernetes/k8s.io/blob/main/kubernetes/gke-prow-build/prow/boskos-resources-configmap.yaml)
  - As of today we have 40 Boskos resources (from `k8s-infra-e2e-gcp-gcve-project-001` to `k8s-infra-e2e-gcp-gcve-project-040`)
- Setting up user data for each resource (see the search sketch after this list).
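
To locate the first two pieces in a checkout of this repo, a plain search is enough:

```sh
# Find where the gcve-vsphere-project resource type is referenced.
grep -n "gcve-vsphere-project" \
  kubernetes/gke-prow-build/prow/boskos-reaper.yaml \
  kubernetes/gke-prow-build/prow/boskos-resources-configmap.yaml
```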

The last step requires access to the Boskos instance running in prow.

Once you have access, run the following script:

```sh
BOSKOS_HOST=""
vsphere/scripts/boskos-userdata.sh
```
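
One possible way to get that access, assuming Boskos is exposed as the `boskos` Service in the `test-pods` namespace of the prow build cluster (both names are assumptions for illustration, not verified against the actual deployment):

```sh
# Assumption: the namespace and service names below may differ in the real cluster.
kubectl port-forward -n test-pods svc/boskos 8080:80 &
BOSKOS_HOST="localhost:8080" vsphere/scripts/boskos-userdata.sh
```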

This script adds user data to each of the above resources. E.g. for `k8s-infra-e2e-gcp-gcve-project-001` we are going to set the following user data, linking to some of the objects previously set up in vSphere for prow tests:
- A vSphere folder, e.g. `/Datacenter/vm/prow/k8s-infra-e2e-gcp-gcve-project-001`
- A vSphere resource pool, e.g. `/Datacenter/host/k8s-gcve-cluster/Resources/prow/k8s-infra-e2e-gcp-gcve-project-001`
- An IP pool with 16 addresses, e.g. `192.168.35.0-192.168.35.15`, the corresponding gateway, e.g. `192.168.32.1`, and the CIDR prefix length, e.g. `21`
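
To confirm the objects a resource's user data points to actually exist, `govc` can be queried with the paths above (hedged example):

```sh
# Verify the vSphere folder and resource pool for one Boskos project.
govc folder.info /Datacenter/vm/prow/k8s-infra-e2e-gcp-gcve-project-001
govc pool.info /Datacenter/host/k8s-gcve-cluster/Resources/prow/k8s-infra-e2e-gcp-gcve-project-001
```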