
Commit 6afb8d5

Introduce scenarios for 17.1 env for adoption
Introduce a scenarios folder that will contain the input needed to deploy a 17.1 environment using the cifmw role added in [1]. A scenario is defined by a variable file with undercloud-specific parameters, overcloud-specific parameters, hooks that can be called before or after both the undercloud and overcloud deployment, and two maps that relate the inventory groups produced by the playbook that created the infra to roles and role hostnames, making it easier to work with different roles in different scenarios. [1] openstack-k8s-operators/ci-framework#2297
1 parent d39856d commit 6afb8d5

9 files changed: +648 −0 lines

scenarios/README.md

Lines changed: 130 additions & 0 deletions
# OSP 17.1 scenarios

The files stored in this folder define different OSP 17.1 deployments to be
tested with adoption. For each scenario, we have a <scenario_name>.yaml file
and a folder with the same name. The yaml file contains variables that will be
used to customize the scenario, while the folder contains files that will be
used in the deployment (network_data, role files, etc.).

This scenario definition assumes that all parameters relevant to the
deployment are known, with the exception of infra-dependent values such as IPs
or hostnames.
13+
## Scenario definition file
14+
15+
The scenario definition file (the <scenario_name>.yaml) has the following top
16+
level sections:
17+
18+
- `undercloud`
19+
- `stacks`
20+
- `cloud_domain`
21+
- `hostname_groups_map`
22+
- `roles_groups_map`
23+
- `hooks`
24+

### Undercloud section

The undercloud section contains the following parameters (all optional):

- `config`: a list of options to set in the `undercloud.conf` file; each
entry is a dictionary with the fields `section`, `option` and `value`.
- `undercloud_parameters_override`: path to a file that contains parameters
for the undercloud setup; it is passed through the `hieradata_override`
option in `undercloud.conf`.
- `undercloud_parameters_defaults`: path to a file that contains
parameter_defaults for the undercloud; it is passed through the
`custom_env_files` option in `undercloud.conf`.
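As an illustrative sketch, an `undercloud` section combining these parameters
could look as follows (the `myscenario/` file paths are hypothetical and would
point at files inside the scenario folder):

```
undercloud:
  config:
    - section: DEFAULT
      option: undercloud_debug
      value: true
    - section: DEFAULT
      option: container_cli
      value: podman
  undercloud_parameters_override: "myscenario/hieradata_overrides_undercloud.yaml"
  undercloud_parameters_defaults: "myscenario/undercloud_parameter_defaults.yaml"
```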

### Stacks section

The stacks section contains the list of stacks to be deployed. Typically this
will be just one, commonly known as `overcloud`, but there can be arbitrarily
many. For each entry the following parameters can be passed:

- `stackname`: name of the stack deployment, default is `overcloud`.
- `args`: list of CLI arguments to use when deploying the stack.
- `vars`: list of environment files to use when deploying the stack.
- `network_data_file`: path to the network_data file that defines the networks
to use in the stack, required. This file can be a yaml or jinja file. If it
ends with `j2`, it will be treated as a template; otherwise it will be copied
as is.
- `vips_data_file`: path to the file defining the virtual IPs to use in the
stack, required.
- `roles_file`: path to the file defining the roles of the different nodes
used in the stack, required.
- `config_download_file`: path to the config-download file used to pass
environment variables to the stack, required.
- `ceph_osd_spec_file`: path to the OSD spec file used to deploy Ceph when
applicable, optional.
- `deploy_command`: string with the stack deploy command to run verbatim;
if defined, the `vars` and `args` fields are ignored, optional.
- `stack_nodes`: list of inventory groups containing the nodes that will be
part of the stack, required. These groups must be a subset of the groups used
as keys in `hostname_groups_map` and `roles_groups_map`.
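Tying these parameters together, a sketch of a `stacks` entry could look like
this (the `myscenario/` paths and the inventory group names are illustrative):

```
stacks:
  - stackname: "overcloud"
    args:
      - "--templates /usr/share/openstack-tripleo-heat-templates"
      - "--timeout 90"
    vars:
      - "/usr/share/openstack-tripleo-heat-templates/environments/podman.yaml"
    network_data_file: "myscenario/network_data.yaml.j2"
    vips_data_file: "myscenario/vips_data.yaml"
    roles_file: "myscenario/roles.yaml"
    config_download_file: "myscenario/config_download.yaml"
    stack_nodes:
      - osp-computes
      - osp-controllers
```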

### Cloud domain

Name of the DNS domain used for the overcloud, particularly relevant for TLS
everywhere (TLS-e) environments.
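For instance, the hci scenario in this folder sets:

```
cloud_domain: "localdomain"
```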

### Hostname groups map

Map that relates the ansible groups in the inventory produced by the infra
creation to the role hostname format of the 17.1 deployment. This makes it
possible to tell which nodes belong to the overcloud without relying on a
specific naming convention, and is used to build the hostnamemap. For example,
assuming an inventory with a group called `osp-computes` that contains the
computes and a group called `osp-controllers` that contains the controllers,
a possible map would look like:

```
hostname_groups_map:
  osp-computes: "overcloud-novacompute"
  osp-controllers: "overcloud-controller"
```

### Roles groups map

Map that relates the ansible groups in the inventory produced by the infra
creation to OSP roles. This allows building a tripleo-ansible-inventory, which
is used, for example, to deploy Ceph. Continuing the example from the previous
section, a possible value for this map would be:

```
roles_groups_map:
  osp-computes: "Compute"
  osp-controllers: "Controller"
```

### Hooks

Hooks are a mechanism used in the ci-framework to run external code without
modifying the project's playbooks. See the [ci-framework
docs](https://ci-framework.readthedocs.io/en/latest/roles/run_hook.html) for
more details about how hooks are used in the ci-framework.

For the deployment of OSP 17.1, the following hooks are available:

- `pre_uc_run`: runs before deploying the undercloud
- `post_uc_run`: runs after deploying the undercloud
- `pre_oc_run`: runs before deploying the overcloud, but after provisioning
networks and virtual IPs
- `post_oc_run`: runs after deploying the overcloud

Hooks provide flexibility to users without adding too much complexity to the
ci-framework. One example use case for hooks here is deploying Ceph for the
scenarios that require it. Instead of having a flag in the code to select
whether to deploy it or not, we can deploy it using the `pre_oc_run` hook,
like this:

```
pre_oc_run:
  - name: Deploy Ceph
    type: playbook
    source: "adoption_deploy_ceph.yml"
```

Since the `source` attribute is not an absolute path, this example assumes
that the `adoption_deploy_ceph.yml` playbook exists in the ci-framework (it is
introduced alongside the role that consumes the scenarios defined here by
[this PR](https://github.com/openstack-k8s-operators/ci-framework/pull/2297)).

scenarios/hci.yaml

Lines changed: 71 additions & 0 deletions
---
undercloud:
  config:
    - section: DEFAULT
      option: undercloud_hostname
      value: undercloud.localdomain
    - section: DEFAULT
      option: undercloud_timezone
      value: UTC
    - section: DEFAULT
      option: undercloud_debug
      value: true
    - section: DEFAULT
      option: container_cli
      value: podman
    - section: DEFAULT
      option: undercloud_enable_selinux
      value: false
    - section: DEFAULT
      option: generate_service_certificate
      value: false
  undercloud_parameters_override: "hci/hieradata_overrides_undercloud.yaml"
  undercloud_parameters_defaults: "hci/undercloud_parameter_defaults.yaml"
ctlplane_vip: 192.168.122.99
cloud_domain: "localdomain"
hostname_groups_map:
  # map ansible groups in the inventory to role hostname format for
  # 17.1 deployment
  osp-computes: "overcloud-computehci"
  osp-controllers: "overcloud-controller"
roles_groups_map:
  # map ansible groups to tripleo Role names
  osp-computes: "ComputeHCI"
  osp-controllers: "Controller"
stacks:
  - stackname: "overcloud"
    args:
      - "--override-ansible-cfg /home/zuul/ansible_config.cfg"
      - "--templates /usr/share/openstack-tripleo-heat-templates"
      - "--libvirt-type qemu"
      - "--timeout 90"
      - "--overcloud-ssh-user zuul"
      - "--deployed-server"
      - "--validation-warnings-fatal"
      - "--disable-validations"
      - "--heat-type pod"
      - "--disable-protected-resource-types"
    vars:
      - "/home/zuul/deployed_ceph.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/podman.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/debug.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/enable-legacy-telemetry.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml"
      - "/usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml"
    network_data_file: "hci/network_data.yaml.j2"
    vips_data_file: "hci/vips_data.yaml"
    roles_file: "hci/roles.yaml"
    ceph_osd_spec_file: "hci/osd_spec.yaml"
    config_download_file: "hci/config_download.yaml"
    stack_nodes:
      - osp-computes
      - osp-controllers
pre_oc_run:
  - name: Deploy Ceph
    type: playbook
    source: "adoption_deploy_ceph.yml"
    extra_vars:
      stack_index: 0

scenarios/hci/config_download.yaml

Lines changed: 80 additions & 0 deletions
---
resource_registry:
  # yamllint disable rule:line-length
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  OS::TripleO::OVNMacAddressNetwork: OS::Heat::None
  OS::TripleO::OVNMacAddressPort: OS::Heat::None
  OS::TripleO::ComputeHCI::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_internal_api.yaml
  OS::TripleO::ComputeHCI::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage.yaml
  OS::TripleO::ComputeHCI::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage_mgmt.yaml
  OS::TripleO::ComputeHCI::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_tenant.yaml
  OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_internal_api.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage_mgmt.yaml
  OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage.yaml
  OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_tenant.yaml
  OS::TripleO::Services::CeilometerAgentCentral: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-central-container-puppet.yaml
  OS::TripleO::Services::CeilometerAgentNotification: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-notification-container-puppet.yaml
  OS::TripleO::Services::CeilometerAgentIpmi: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-ipmi-container-puppet.yaml
  OS::TripleO::Services::ComputeCeilometerAgent: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-compute-container-puppet.yaml
  OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml
  OS::TripleO::Services::MetricsQdr: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/qdr-container-puppet.yaml
  OS::TripleO::Services::OsloMessagingRpc: /usr/share/openstack-tripleo-heat-templates/deployment/rabbitmq/rabbitmq-messaging-rpc-pacemaker-puppet.yaml
  OS::TripleO::Services::OsloMessagingNotify: /usr/share/openstack-tripleo-heat-templates/deployment/rabbitmq/rabbitmq-messaging-notify-shared-puppet.yaml
  OS::TripleO::Services::HAproxy: /usr/share/openstack-tripleo-heat-templates/deployment/haproxy/haproxy-pacemaker-puppet.yaml
  OS::TripleO::Services::Pacemaker: /usr/share/openstack-tripleo-heat-templates/deployment/pacemaker/pacemaker-baremetal-puppet.yaml
  OS::TripleO::Services::PacemakerRemote: /usr/share/openstack-tripleo-heat-templates/deployment/pacemaker/pacemaker-remote-baremetal-puppet.yaml
  OS::TripleO::Services::Clustercheck: /usr/share/openstack-tripleo-heat-templates/deployment/pacemaker/clustercheck-container-puppet.yaml
  OS::TripleO::Services::Redis: /usr/share/openstack-tripleo-heat-templates/deployment/database/redis-pacemaker-puppet.yaml
  OS::TripleO::Services::Rsyslog: /usr/share/openstack-tripleo-heat-templates/deployment/logging/rsyslog-container-puppet.yaml
  OS::TripleO::Services::MySQL: /usr/share/openstack-tripleo-heat-templates/deployment/database/mysql-pacemaker-puppet.yaml
  OS::TripleO::Services::CinderBackup: /usr/share/openstack-tripleo-heat-templates/deployment/cinder/cinder-backup-pacemaker-puppet.yaml
  OS::TripleO::Services::CinderVolume: /usr/share/openstack-tripleo-heat-templates/deployment/cinder/cinder-volume-pacemaker-puppet.yaml
  OS::TripleO::Services::HeatApi: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-container-puppet.yaml
  OS::TripleO::Services::HeatApiCfn: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cfn-container-puppet.yaml
  OS::TripleO::Services::HeatApiCloudwatch: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cloudwatch-disabled-puppet.yaml
  OS::TripleO::Services::HeatEngine: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-engine-container-puppet.yaml
parameter_defaults:
  RedisVirtualFixedIPs:
    - ip_address: 192.168.122.110
      use_neutron: false
  OVNDBsVirtualFixedIPs:
    - ip_address: 192.168.122.111
      use_neutron: false
  ControllerExtraConfig:
    nova::compute::libvirt::services::libvirt_virt_type: qemu
    nova::compute::libvirt::virt_type: qemu
  ComputeExtraConfig:
    nova::compute::libvirt::services::libvirt_virt_type: qemu
    nova::compute::libvirt::virt_type: qemu
  Debug: true
  DockerPuppetDebug: true
  ContainerCli: podman
  ControllerCount: 3
  ComputeHCICount: 3
  NeutronGlobalPhysnetMtu: 1350
  CinderLVMLoopDeviceSize: 20480
  CloudName: overcloud.localdomain
  CloudNameInternal: overcloud.internalapi.localdomain
  CloudNameStorage: overcloud.storage.localdomain
  CloudNameStorageManagement: overcloud.storagemgmt.localdomain
  CloudNameCtlplane: overcloud.ctlplane.localdomain
  CloudDomain: localdomain
  NetworkConfigWithAnsible: false
  ControllerHostnameFormat: '%stackname%-controller-%index%'
  ComputeHCIHostnameFormat: '%stackname%-computehci-%index%'
  CtlplaneNetworkAttributes:
    network:
      dns_domain: localdomain
      mtu: 1500
      name: ctlplane
      tags:
        - 192.168.122.0/24
    subnets:
      ctlplane-subnet:
        cidr: 192.168.122.0/24
        dns_nameservers: 192.168.122.10
        gateway_ip: 192.168.122.10
        host_routes: []
        name: ctlplane-subnet
        ip_version: 4
Lines changed: 11 additions & 0 deletions
---
parameter_defaults:
  UndercloudExtraConfig:
    ironic::disk_utils::image_convert_memory_limit: 2048
    ironic::conductor::heartbeat_interval: 20
    ironic::conductor::heartbeat_timeout: 120

    # Ironic defaults to using `qemu:///system`. When running libvirtd
    # unprivileged we need to use `qemu:///session`. This allows us to pass
    # the value of libvirt_uri into /etc/ironic/ironic.conf.
    ironic::drivers::ssh::libvirt_uri: 'qemu:///session'

scenarios/hci/network_data.yaml.j2

Lines changed: 60 additions & 0 deletions
---
- name: Storage
  mtu: 1500
  vip: true
  name_lower: storage
  dns_domain: storage.{{ cloud_domain }}.
  service_net_map_replace: storage
  subnets:
    storage_subnet:
      vlan: 21
      ip_subnet: '172.18.0.0/24'
      allocation_pools: [{'start': '172.18.0.120', 'end': '172.18.0.250'}]

- name: StorageMgmt
  mtu: 1500
  vip: true
  name_lower: storage_mgmt
  dns_domain: storagemgmt.{{ cloud_domain }}.
  service_net_map_replace: storage_mgmt
  subnets:
    storage_mgmt_subnet:
      vlan: 23
      ip_subnet: '172.20.0.0/24'
      allocation_pools: [{'start': '172.20.0.120', 'end': '172.20.0.250'}]

- name: InternalApi
  mtu: 1500
  vip: true
  name_lower: internal_api
  dns_domain: internal-api.{{ cloud_domain }}.
  service_net_map_replace: internal_api
  subnets:
    internal_api_subnet:
      vlan: 20
      ip_subnet: '172.17.0.0/24'
      allocation_pools: [{'start': '172.17.0.120', 'end': '172.17.0.250'}]

- name: Tenant
  mtu: 1500
  vip: false  # Tenant network does not use VIPs
  name_lower: tenant
  dns_domain: tenant.{{ cloud_domain }}.
  service_net_map_replace: tenant
  subnets:
    tenant_subnet:
      vlan: 22
      ip_subnet: '172.19.0.0/24'
      allocation_pools: [{'start': '172.19.0.120', 'end': '172.19.0.250'}]

- name: External
  mtu: 1500
  vip: true
  name_lower: external
  dns_domain: external.{{ cloud_domain }}.
  service_net_map_replace: external
  subnets:
    external_subnet:
      vlan: 44
      ip_subnet: '172.21.0.0/24'
      allocation_pools: [{'start': '172.21.0.120', 'end': '172.21.0.250'}]

scenarios/hci/osd_spec.yaml

Lines changed: 3 additions & 0 deletions
---
data_devices:
  all: true
