Introduce scenarios for 17.1 env for adoption
Introduce a scenarios folder that will contain the input needed to
deploy a 17.1 environment using the cifmw role added in [1]. A
scenario is defined by a variable file with undercloud-specific
parameters, overcloud-specific parameters, hooks that can be called
before or after the undercloud and overcloud deployments, and two
maps that relate the inventory groups produced by the playbook that
created the infra to Roles and role hostnames, to make it easier to
work with different roles in different scenarios.

[1] openstack-k8s-operators/ci-framework#2297
cescgina committed Oct 7, 2024
1 parent d39856d commit 6afb8d5
Showing 9 changed files with 648 additions and 0 deletions.
130 changes: 130 additions & 0 deletions scenarios/README.md
@@ -0,0 +1,130 @@
# OSP 17.1 scenarios

The files stored in this folder define the different OSP 17.1 deployments to be
tested with adoption. Each scenario consists of a `<scenario_name>.yaml` file
and a folder with the same name. The yaml file contains the variables used to
customize the scenario, while the folder contains files used in the deployment
(network_data, role files, etc.).

This scenario definition assumes that all parameters relevant to the deployment
are known in advance, with the exception of infra-dependent values like IPs or
hostnames.

## Scenario definition file

The scenario definition file (`<scenario_name>.yaml`) has the following
top-level sections:

- `undercloud`
- `stacks`
- `cloud_domain`
- `hostname_groups_map`
- `roles_groups_map`
- `hooks`

### Undercloud section

The undercloud section contains the following parameters (all optional):

- `config`: a list of options to set in the `undercloud.conf` file; each entry
is a dictionary with the fields `section`, `option` and `value`.
- `undercloud_parameters_override`: path to a file containing hieradata
overrides for the undercloud setup; it is passed through the
`hieradata_override` option in `undercloud.conf`.
- `undercloud_parameters_defaults`: path to a file containing
`parameter_defaults` for the undercloud; it is passed through the
`custom_env_files` option in `undercloud.conf`.
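
As a sketch, a minimal `undercloud` section might look like the following (the
two file paths are hypothetical and would live inside the scenario's folder):

```yaml
undercloud:
  config:
    # Each entry sets one option in undercloud.conf
    - section: DEFAULT
      option: undercloud_timezone
      value: UTC
    - section: DEFAULT
      option: container_cli
      value: podman
  # Hypothetical paths, relative to the scenarios folder
  undercloud_parameters_override: "my_scenario/hieradata_overrides_undercloud.yaml"
  undercloud_parameters_defaults: "my_scenario/undercloud_parameter_defaults.yaml"
```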

### Stacks section

The stacks section contains the list of stacks to be deployed. Typically this
will be just one, commonly known as `overcloud`, but there can be arbitrarily
many. For each entry the following parameters can be passed:

- `stackname`: name of the stack deployment, defaults to `overcloud`.
- `args`: list of CLI arguments to use when deploying the stack.
- `vars`: list of environment files to use when deploying the stack.
- `network_data_file`: path to the network_data file that defines the networks
to use in the stack, required. This file can be a yaml or jinja file. If it
ends with `j2`, it will be treated as a template; otherwise it will be copied
as is.
- `vips_data_file`: path to the file defining the virtual IPs to use in the
stack, required.
- `roles_file`: path to the file defining the roles of the different nodes
used in the stack, required.
- `config_download_file`: path to the config-download file used to pass
environment variables to the stack, required.
- `ceph_osd_spec_file`: path to the OSD spec file used to deploy Ceph when
applicable, optional.
- `deploy_command`: string with the stack deploy command to run verbatim;
if defined, the `vars` and `args` fields are ignored, optional.
- `stack_nodes`: list of inventory groups containing the nodes that will be
part of the stack, required. These groups must be a subset of the groups used
as keys in `hostname_groups_map` and `roles_groups_map`.
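
Putting these parameters together, a hypothetical stack entry could look like
this (all paths are illustrative; `scenarios/hci.yaml` in this commit is a
complete, real example):

```yaml
stacks:
  - stackname: "overcloud"
    args:
      - "--templates /usr/share/openstack-tripleo-heat-templates"
      - "--timeout 90"
    vars:
      - "/usr/share/openstack-tripleo-heat-templates/environments/podman.yaml"
    # Paths below are relative to the scenarios folder (illustrative names)
    network_data_file: "my_scenario/network_data.yaml.j2"
    vips_data_file: "my_scenario/vips_data.yaml"
    roles_file: "my_scenario/roles.yaml"
    config_download_file: "my_scenario/config_download.yaml"
    # Inventory groups whose nodes form this stack; must also appear as keys
    # in hostname_groups_map and roles_groups_map
    stack_nodes:
      - osp-computes
      - osp-controllers
```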

### Cloud domain

Name of the DNS domain used for the overcloud, particularly relevant for TLS
everywhere (TLS-E) environments.
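
This is a single scalar value; for example, the `hci` scenario in this commit
sets:

```yaml
cloud_domain: "localdomain"
```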

### Hostname groups map

Map that relates the ansible groups in the inventory produced by the infra
creation to the role hostname format of the 17.1 deployment. This makes it
possible to tell which nodes belong to the overcloud without relying on
specific naming, and it is used to build the `hostnamemap`. For example,
assuming an inventory with a group called `osp-computes` that contains the
computes and a group called `osp-controllers` that contains the controllers, a
possible map would look like:

```yaml
hostname_groups_map:
  osp-computes: "overcloud-novacompute"
  osp-controllers: "overcloud-controller"
```

### Roles groups map

Map that relates the ansible groups in the inventory produced by the infra
creation to OSP roles. This makes it possible to build a
tripleo-ansible-inventory, which is used, for example, to deploy Ceph.
Continuing with the example from the previous section, a possible value for
this map would be:

```yaml
roles_groups_map:
  osp-computes: "Compute"
  osp-controllers: "Controller"
```

### Hooks

Hooks are a mechanism used in the ci-framework to run external code without
modifying the project's playbooks. See the [ci-framework
docs](https://ci-framework.readthedocs.io/en/latest/roles/run_hook.html) for
more details about how hooks are used in the ci-framework.

For the deployment of OSP 17.1, the following hooks are available:

- `pre_uc_run`, runs before deploying the undercloud
- `post_uc_run`, runs after deploying the undercloud
- `pre_oc_run`, runs before deploying the overcloud, but after provisioning
networks and virtual ips
- `post_oc_run`, runs after deploying the overcloud

Hooks provide flexibility to users without adding too much complexity to the
ci-framework. An example use case here is deploying Ceph for the scenarios that
require it: instead of adding a flag in the code to select whether to deploy
it, we can deploy it with a `pre_oc_run` hook like this:

```yaml
pre_oc_run:
  - name: Deploy Ceph
    type: playbook
    source: "adoption_deploy_ceph.yml"
```

Since the `source` attribute is not an absolute path, this example assumes that
the `adoption_deploy_ceph.yml` playbook exists in the ci-framework (it was
introduced alongside the role that consumes the scenarios defined here by
[this PR](https://github.com/openstack-k8s-operators/ci-framework/pull/2297)).
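
Hooks can also pass extra variables to the playbook they run. For instance, the
`hci` scenario in this commit tells the Ceph deployment playbook which stack to
operate on by passing the stack's index:

```yaml
pre_oc_run:
  - name: Deploy Ceph
    type: playbook
    source: "adoption_deploy_ceph.yml"
    extra_vars:
      # Index into the `stacks` list; 0 selects the first (only) stack
      stack_index: 0
```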
71 changes: 71 additions & 0 deletions scenarios/hci.yaml
@@ -0,0 +1,71 @@
---
undercloud:
  config:
    - section: DEFAULT
      option: undercloud_hostname
      value: undercloud.localdomain
    - section: DEFAULT
      option: undercloud_timezone
      value: UTC
    - section: DEFAULT
      option: undercloud_debug
      value: true
    - section: DEFAULT
      option: container_cli
      value: podman
    - section: DEFAULT
      option: undercloud_enable_selinux
      value: false
    - section: DEFAULT
      option: generate_service_certificate
      value: false
  undercloud_parameters_override: "hci/hieradata_overrides_undercloud.yaml"
  undercloud_parameters_defaults: "hci/undercloud_parameter_defaults.yaml"
ctlplane_vip: 192.168.122.99
cloud_domain: "localdomain"
hostname_groups_map:
  # map ansible groups in the inventory to role hostname format for
  # 17.1 deployment
  osp-computes: "overcloud-computehci"
  osp-controllers: "overcloud-controller"
roles_groups_map:
  # map ansible groups to tripleo Role names
  osp-computes: "ComputeHCI"
  osp-controllers: "Controller"
stacks:
  - stackname: "overcloud"
    args:
      - "--override-ansible-cfg /home/zuul/ansible_config.cfg"
      - "--templates /usr/share/openstack-tripleo-heat-templates"
      - "--libvirt-type qemu"
      - "--timeout 90"
      - "--overcloud-ssh-user zuul"
      - "--deployed-server"
      - "--validation-warnings-fatal"
      - "--disable-validations"
      - "--heat-type pod"
      - "--disable-protected-resource-types"
    vars:
      - "/home/zuul/deployed_ceph.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/podman.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/debug.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/enable-legacy-telemetry.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml"
    network_data_file: "hci/network_data.yaml.j2"
    vips_data_file: "hci/vips_data.yaml"
    roles_file: "hci/roles.yaml"
    ceph_osd_spec_file: "hci/osd_spec.yaml"
    config_download_file: "hci/config_download.yaml"
    stack_nodes:
      - osp-computes
      - osp-controllers
pre_oc_run:
  - name: Deploy Ceph
    type: playbook
    source: "adoption_deploy_ceph.yml"
    extra_vars:
      stack_index: 0
80 changes: 80 additions & 0 deletions scenarios/hci/config_download.yaml
@@ -0,0 +1,80 @@
---
resource_registry:
  # yamllint disable rule:line-length
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  OS::TripleO::OVNMacAddressNetwork: OS::Heat::None
  OS::TripleO::OVNMacAddressPort: OS::Heat::None
  OS::TripleO::ComputeHCI::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_internal_api.yaml
  OS::TripleO::ComputeHCI::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage.yaml
  OS::TripleO::ComputeHCI::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage_mgmt.yaml
  OS::TripleO::ComputeHCI::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_tenant.yaml
  OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_internal_api.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage_mgmt.yaml
  OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage.yaml
  OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_tenant.yaml
  OS::TripleO::Services::CeilometerAgentCentral: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-central-container-puppet.yaml
  OS::TripleO::Services::CeilometerAgentNotification: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-notification-container-puppet.yaml
  OS::TripleO::Services::CeilometerAgentIpmi: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-ipmi-container-puppet.yaml
  OS::TripleO::Services::ComputeCeilometerAgent: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-compute-container-puppet.yaml
  OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml
  OS::TripleO::Services::MetricsQdr: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/qdr-container-puppet.yaml
  OS::TripleO::Services::OsloMessagingRpc: /usr/share/openstack-tripleo-heat-templates/deployment/rabbitmq/rabbitmq-messaging-rpc-pacemaker-puppet.yaml
  OS::TripleO::Services::OsloMessagingNotify: /usr/share/openstack-tripleo-heat-templates/deployment/rabbitmq/rabbitmq-messaging-notify-shared-puppet.yaml
  OS::TripleO::Services::HAproxy: /usr/share/openstack-tripleo-heat-templates/deployment/haproxy/haproxy-pacemaker-puppet.yaml
  OS::TripleO::Services::Pacemaker: /usr/share/openstack-tripleo-heat-templates/deployment/pacemaker/pacemaker-baremetal-puppet.yaml
  OS::TripleO::Services::PacemakerRemote: /usr/share/openstack-tripleo-heat-templates/deployment/pacemaker/pacemaker-remote-baremetal-puppet.yaml
  OS::TripleO::Services::Clustercheck: /usr/share/openstack-tripleo-heat-templates/deployment/pacemaker/clustercheck-container-puppet.yaml
  OS::TripleO::Services::Redis: /usr/share/openstack-tripleo-heat-templates/deployment/database/redis-pacemaker-puppet.yaml
  OS::TripleO::Services::Rsyslog: /usr/share/openstack-tripleo-heat-templates/deployment/logging/rsyslog-container-puppet.yaml
  OS::TripleO::Services::MySQL: /usr/share/openstack-tripleo-heat-templates/deployment/database/mysql-pacemaker-puppet.yaml
  OS::TripleO::Services::CinderBackup: /usr/share/openstack-tripleo-heat-templates/deployment/cinder/cinder-backup-pacemaker-puppet.yaml
  OS::TripleO::Services::CinderVolume: /usr/share/openstack-tripleo-heat-templates/deployment/cinder/cinder-volume-pacemaker-puppet.yaml
  OS::TripleO::Services::HeatApi: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-container-puppet.yaml
  OS::TripleO::Services::HeatApiCfn: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cfn-container-puppet.yaml
  OS::TripleO::Services::HeatApiCloudwatch: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cloudwatch-disabled-puppet.yaml
  OS::TripleO::Services::HeatEngine: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-engine-container-puppet.yaml
parameter_defaults:
  RedisVirtualFixedIPs:
    - ip_address: 192.168.122.110
      use_neutron: false
  OVNDBsVirtualFixedIPs:
    - ip_address: 192.168.122.111
      use_neutron: false
  ControllerExtraConfig:
    nova::compute::libvirt::services::libvirt_virt_type: qemu
    nova::compute::libvirt::virt_type: qemu
  ComputeExtraConfig:
    nova::compute::libvirt::services::libvirt_virt_type: qemu
    nova::compute::libvirt::virt_type: qemu
  Debug: true
  DockerPuppetDebug: true
  ContainerCli: podman
  ControllerCount: 3
  ComputeHCICount: 3
  NeutronGlobalPhysnetMtu: 1350
  CinderLVMLoopDeviceSize: 20480
  CloudName: overcloud.localdomain
  CloudNameInternal: overcloud.internalapi.localdomain
  CloudNameStorage: overcloud.storage.localdomain
  CloudNameStorageManagement: overcloud.storagemgmt.localdomain
  CloudNameCtlplane: overcloud.ctlplane.localdomain
  CloudDomain: localdomain
  NetworkConfigWithAnsible: false
  ControllerHostnameFormat: '%stackname%-controller-%index%'
  ComputeHCIHostnameFormat: '%stackname%-computehci-%index%'
  CtlplaneNetworkAttributes:
    network:
      dns_domain: localdomain
      mtu: 1500
      name: ctlplane
      tags:
        - 192.168.122.0/24
    subnets:
      ctlplane-subnet:
        cidr: 192.168.122.0/24
        dns_nameservers: 192.168.122.10
        gateway_ip: 192.168.122.10
        host_routes: []
        name: ctlplane-subnet
        ip_version: 4
11 changes: 11 additions & 0 deletions scenarios/hci/hieradata_overrides_undercloud.yaml
@@ -0,0 +1,11 @@
---
parameter_defaults:
  UndercloudExtraConfig:
    ironic::disk_utils::image_convert_memory_limit: 2048
    ironic::conductor::heartbeat_interval: 20
    ironic::conductor::heartbeat_timeout: 120

    # Ironic defaults to using `qemu:///system`. When running libvirtd
    # unprivileged we need to use `qemu:///session`. This allows us to pass
    # the value of libvirt_uri into /etc/ironic/ironic.conf.
    ironic::drivers::ssh::libvirt_uri: 'qemu:///session'
60 changes: 60 additions & 0 deletions scenarios/hci/network_data.yaml.j2
@@ -0,0 +1,60 @@
---
- name: Storage
  mtu: 1500
  vip: true
  name_lower: storage
  dns_domain: storage.{{ cloud_domain }}.
  service_net_map_replace: storage
  subnets:
    storage_subnet:
      vlan: 21
      ip_subnet: '172.18.0.0/24'
      allocation_pools: [{'start': '172.18.0.120', 'end': '172.18.0.250'}]

- name: StorageMgmt
  mtu: 1500
  vip: true
  name_lower: storage_mgmt
  dns_domain: storagemgmt.{{ cloud_domain }}.
  service_net_map_replace: storage_mgmt
  subnets:
    storage_mgmt_subnet:
      vlan: 23
      ip_subnet: '172.20.0.0/24'
      allocation_pools: [{'start': '172.20.0.120', 'end': '172.20.0.250'}]

- name: InternalApi
  mtu: 1500
  vip: true
  name_lower: internal_api
  dns_domain: internal-api.{{ cloud_domain }}.
  service_net_map_replace: internal_api
  subnets:
    internal_api_subnet:
      vlan: 20
      ip_subnet: '172.17.0.0/24'
      allocation_pools: [{'start': '172.17.0.120', 'end': '172.17.0.250'}]

- name: Tenant
  mtu: 1500
  vip: false  # Tenant network does not use VIPs
  name_lower: tenant
  dns_domain: tenant.{{ cloud_domain }}.
  service_net_map_replace: tenant
  subnets:
    tenant_subnet:
      vlan: 22
      ip_subnet: '172.19.0.0/24'
      allocation_pools: [{'start': '172.19.0.120', 'end': '172.19.0.250'}]

- name: External
  mtu: 1500
  vip: true
  name_lower: external
  dns_domain: external.{{ cloud_domain }}.
  service_net_map_replace: external
  subnets:
    external_subnet:
      vlan: 44
      ip_subnet: '172.21.0.0/24'
      allocation_pools: [{'start': '172.21.0.120', 'end': '172.21.0.250'}]
3 changes: 3 additions & 0 deletions scenarios/hci/osd_spec.yaml
@@ -0,0 +1,3 @@
---
data_devices:
  all: true