From 6afb8d5086f98cc5c7ecbe50b31283938a91e1d2 Mon Sep 17 00:00:00 2001
From: jgilaber
Date: Tue, 10 Sep 2024 16:05:14 +0200
Subject: [PATCH] Introduce scenarios for 17.1 env for adoption

Introduce a scenarios folder that will contain the input needed to
deploy a 17.1 environment using the cifmw role added in [1].

The scenario is defined by a variable file, with undercloud-specific
parameters, overcloud-specific parameters, hooks that can be called
before or after both the undercloud and overcloud deployments, and two
maps that relate the groups in the inventory produced by the playbook
that created the infra to Roles and role hostnames, to make it easier
to work with different roles in different scenarios.

[1] https://github.com/openstack-k8s-operators/ci-framework/pull/2297
---
 scenarios/README.md                           | 130 +++++++++
 scenarios/hci.yaml                            |  71 +++++
 scenarios/hci/config_download.yaml            |  80 ++++++
 .../hci/hieradata_overrides_undercloud.yaml   |  11 +
 scenarios/hci/network_data.yaml.j2            |  60 ++++
 scenarios/hci/osd_spec.yaml                   |   3 +
 scenarios/hci/roles.yaml                      | 257 ++++++++++++++++++
 .../hci/undercloud_parameter_defaults.yaml    |  14 +
 scenarios/hci/vips_data.yaml                  |  22 ++
 9 files changed, 648 insertions(+)
 create mode 100644 scenarios/README.md
 create mode 100644 scenarios/hci.yaml
 create mode 100644 scenarios/hci/config_download.yaml
 create mode 100644 scenarios/hci/hieradata_overrides_undercloud.yaml
 create mode 100644 scenarios/hci/network_data.yaml.j2
 create mode 100644 scenarios/hci/osd_spec.yaml
 create mode 100644 scenarios/hci/roles.yaml
 create mode 100644 scenarios/hci/undercloud_parameter_defaults.yaml
 create mode 100644 scenarios/hci/vips_data.yaml

diff --git a/scenarios/README.md b/scenarios/README.md
new file mode 100644
index 000000000..abc9cc232
--- /dev/null
+++ b/scenarios/README.md
@@ -0,0 +1,130 @@
+# OSP 17.1 scenarios
+
+The files stored in this folder define different OSP 17.1 deployments to be
+tested with adoption.
For each scenario, we have a `.yaml` file
+and a folder with the same name. The YAML file contains variables that will be
+used to customize the scenario, while the folder contains files that will be
+used in the deployment (network_data, role files, etc.).
+
+This scenario definition assumes that all parameters relevant to the
+deployment are known, with the exception of infra-dependent values like IPs or
+hostnames.
+
+## Scenario definition file
+
+The scenario definition file (the `.yaml`) has the following top-level
+sections:
+
+- `undercloud`
+- `stacks`
+- `cloud_domain`
+- `hostname_groups_map`
+- `roles_groups_map`
+- `hooks`
+
+### Undercloud section
+
+The undercloud section contains the following parameters (all optional):
+
+- `config`: a list of options to set in the `undercloud.conf` file; each entry
+is a dictionary with the fields `section`, `option` and `value`.
+- `undercloud_parameters_override`: path to a file that contains some
+parameters for the undercloud setup; it is passed through the
+`hieradata_override` option in `undercloud.conf`.
+- `undercloud_parameters_defaults`: path to a file that contains
+parameter_defaults for the undercloud; it is passed through the
+`custom_env_files` option in `undercloud.conf`.
+
+### Stacks section
+
+The stacks section contains a list of stacks to be deployed. Typically there
+will be just one, commonly known as `overcloud`, but there can be arbitrarily
+many. For each entry the following parameters can be passed:
+
+- `stackname`: name of the stack deployment, default is `overcloud`.
+- `args`: list of CLI arguments to use when deploying the stack.
+- `vars`: list of environment files to use when deploying the stack.
+- `network_data_file`: path to the network_data file that defines the networks
+to use in the stack, required. This file can be a YAML or Jinja file. If it
+ends with `j2`, it will be treated as a template; otherwise it will be copied
+as is.
+- `vips_data_file`: path to the file defining the virtual IPs to use in the
+stack, required.
+- `roles_file`: path to the file defining the roles of the different nodes
+used in the stack, required.
+- `config_download_file`: path to the config-download file used to pass
+environment variables to the stack, required.
+- `ceph_osd_spec_file`: path to the OSD spec file used to deploy Ceph when
+applicable, optional.
+- `deploy_command`: string with the stack deploy command to run verbatim;
+if defined, the `vars` and `args` fields are ignored, optional.
+- `stack_nodes`: list of inventory groups that contain the nodes that will be
+part of the stack, required. These groups must be a subset of the groups
+used as keys in `hostname_groups_map` and `roles_groups_map`.
+
+### Cloud domain
+
+Name of the DNS domain used for the overcloud, particularly relevant for
+TLS-everywhere (tlse) environments.
+
+### Hostname groups map
+
+Map that relates Ansible groups in the inventory produced by the infra
+creation to the role hostname format for the 17.1 deployment. This makes it
+possible to tell which nodes belong to the overcloud without relying on
+specific naming, and it is used to build the `HostnameMap`. For example,
+let's assume that we have an inventory with a group called `osp-computes`
+that contains the computes, and a group called `osp-controllers` that
+contains the controllers; then a possible map would look like:
+
+```
+hostname_groups_map:
+  osp-computes: "overcloud-novacompute"
+  osp-controllers: "overcloud-controller"
+```
+
+### Roles groups map
+
+Map that relates Ansible groups in the inventory produced by the infra
+creation to OSP roles. This makes it possible to build a
+tripleo-ansible-inventory, which is used, for example, to deploy Ceph.
Continuing from the example in the
+previous section, a possible value for this map would be:
+
+```
+roles_groups_map:
+  osp-computes: "Compute"
+  osp-controllers: "Controller"
+```
+
+### Hooks
+
+Hooks are a mechanism used in the ci-framework to run external code without
+modifying the project's playbooks. See the [ci-framework
+docs](https://ci-framework.readthedocs.io/en/latest/roles/run_hook.html) for
+more details about how hooks are used in the ci-framework.
+
+For the deployment of OSP 17.1, the following hooks are available:
+
+- `pre_uc_run`, runs before deploying the undercloud
+- `post_uc_run`, runs after deploying the undercloud
+- `pre_oc_run`, runs before deploying the overcloud, but after provisioning
+networks and virtual IPs
+- `post_oc_run`, runs after deploying the overcloud
+
+Hooks provide flexibility to the users without adding too much complexity to
+the ci-framework. An example use case for hooks here is deploying Ceph for
+the scenarios that require it. Instead of having a flag in the code to
+select whether we should deploy it or not, we can deploy it using the
+`pre_oc_run` hook, like this:
+
+```
+pre_oc_run:
+  - name: Deploy Ceph
+    type: playbook
+    source: "adoption_deploy_ceph.yml"
+```
+
+Since the `source` attribute is not an absolute path, this example assumes
+that the `adoption_deploy_ceph.yml` playbook exists in the ci-framework (it
+is introduced alongside the role that consumes the scenarios defined here by
+[this PR](https://github.com/openstack-k8s-operators/ci-framework/pull/2297)).
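+
+Putting the sections together, a minimal scenario definition could look like
+the sketch below. This is an illustrative example, not a shipped scenario: the
+`minimal/` file paths and the `osp-computes`/`osp-controllers` group names are
+hypothetical, and only a subset of the optional fields described above is
+shown.
+
+```
+---
+undercloud:
+  config:
+    # options written into undercloud.conf
+    - section: DEFAULT
+      option: undercloud_timezone
+      value: UTC
+cloud_domain: "localdomain"
+hostname_groups_map:
+  # inventory group -> role hostname format
+  osp-computes: "overcloud-novacompute"
+  osp-controllers: "overcloud-controller"
+roles_groups_map:
+  # inventory group -> OSP role name
+  osp-computes: "Compute"
+  osp-controllers: "Controller"
+stacks:
+  - stackname: "overcloud"
+    args:
+      - "--templates /usr/share/openstack-tripleo-heat-templates"
+    vars:
+      - "/usr/share/openstack-tripleo-heat-templates/environments/podman.yaml"
+    # hypothetical files under a scenarios/minimal/ folder
+    network_data_file: "minimal/network_data.yaml.j2"
+    vips_data_file: "minimal/vips_data.yaml"
+    roles_file: "minimal/roles.yaml"
+    config_download_file: "minimal/config_download.yaml"
+    stack_nodes:
+      - osp-computes
+      - osp-controllers
+```
+
+The `hci.yaml` scenario in this folder follows the same shape, adding the
+Ceph-related files and a `pre_oc_run` hook.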
diff --git a/scenarios/hci.yaml b/scenarios/hci.yaml new file mode 100644 index 000000000..dd52750cc --- /dev/null +++ b/scenarios/hci.yaml @@ -0,0 +1,71 @@ +--- +undercloud: + config: + - section: DEFAULT + option: undercloud_hostname + value: undercloud.localdomain + - section: DEFAULT + option: undercloud_timezone + value: UTC + - section: DEFAULT + option: undercloud_debug + value: true + - section: DEFAULT + option: container_cli + value: podman + - section: DEFAULT + option: undercloud_enable_selinux + value: false + - section: DEFAULT + option: generate_service_certificate + value: false + undercloud_parameters_override: "hci/hieradata_overrides_undercloud.yaml" + undercloud_parameters_defaults: "hci/undercloud_parameter_defaults.yaml" + ctlplane_vip: 192.168.122.99 +cloud_domain: "localdomain" +hostname_groups_map: + # map ansible groups in the inventory to role hostname format for + # 17.1 deployment + osp-computes: "overcloud-computehci" + osp-controllers: "overcloud-controller" +roles_groups_map: + # map ansible groups to tripleo Role names + osp-computes: "ComputeHCI" + osp-controllers: "Controller" +stacks: + - stackname: "overcloud" + args: + - "--override-ansible-cfg /home/zuul/ansible_config.cfg" + - "--templates /usr/share/openstack-tripleo-heat-templates" + - "--libvirt-type qemu" + - "--timeout 90" + - "--overcloud-ssh-user zuul" + - "--deployed-server" + - "--validation-warnings-fatal" + - "--disable-validations" + - "--heat-type pod" + - "--disable-protected-resource-types" + vars: + - "/home/zuul/deployed_ceph.yaml" + - "/usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml" + - "/usr/share/openstack-tripleo-heat-templates/environments/podman.yaml" + - "/usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml" + - "/usr/share/openstack-tripleo-heat-templates/environments/debug.yaml" + - "/usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml" + - 
"/usr/share/openstack-tripleo-heat-templates/environments/enable-legacy-telemetry.yaml" + - "/usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml" + - "/usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml" + network_data_file: "hci/network_data.yaml.j2" + vips_data_file: "hci/vips_data.yaml" + roles_file: "hci/roles.yaml" + ceph_osd_spec_file: "hci/osd_spec.yaml" + config_download_file: "hci/config_download.yaml" + stack_nodes: + - osp-computes + - osp-controllers + pre_oc_run: + - name: Deploy Ceph + type: playbook + source: "adoption_deploy_ceph.yml" + extra_vars: + stack_index: 0 diff --git a/scenarios/hci/config_download.yaml b/scenarios/hci/config_download.yaml new file mode 100644 index 000000000..75e0dfdf5 --- /dev/null +++ b/scenarios/hci/config_download.yaml @@ -0,0 +1,80 @@ +--- +resource_registry: + # yamllint disable rule:line-length + OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml + OS::TripleO::OVNMacAddressNetwork: OS::Heat::None + OS::TripleO::OVNMacAddressPort: OS::Heat::None + OS::TripleO::ComputeHCI::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_internal_api.yaml + OS::TripleO::ComputeHCI::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage.yaml + OS::TripleO::ComputeHCI::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage_mgmt.yaml + OS::TripleO::ComputeHCI::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_tenant.yaml + OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_external.yaml + OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_internal_api.yaml + OS::TripleO::Controller::Ports::StorageMgmtPort: 
/usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage_mgmt.yaml + OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage.yaml + OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_tenant.yaml + OS::TripleO::Services::CeilometerAgentCentral: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-central-container-puppet.yaml + OS::TripleO::Services::CeilometerAgentNotification: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-notification-container-puppet.yaml + OS::TripleO::Services::CeilometerAgentIpmi: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-ipmi-container-puppet.yaml + OS::TripleO::Services::ComputeCeilometerAgent: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-compute-container-puppet.yaml + OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml + OS::TripleO::Services::MetricsQdr: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/qdr-container-puppet.yaml + OS::TripleO::Services::OsloMessagingRpc: /usr/share/openstack-tripleo-heat-templates/deployment/rabbitmq/rabbitmq-messaging-rpc-pacemaker-puppet.yaml + OS::TripleO::Services::OsloMessagingNotify: /usr/share/openstack-tripleo-heat-templates/deployment/rabbitmq/rabbitmq-messaging-notify-shared-puppet.yaml + OS::TripleO::Services::HAproxy: /usr/share/openstack-tripleo-heat-templates/deployment/haproxy/haproxy-pacemaker-puppet.yaml + OS::TripleO::Services::Pacemaker: /usr/share/openstack-tripleo-heat-templates/deployment/pacemaker/pacemaker-baremetal-puppet.yaml + OS::TripleO::Services::PacemakerRemote: /usr/share/openstack-tripleo-heat-templates/deployment/pacemaker/pacemaker-remote-baremetal-puppet.yaml + OS::TripleO::Services::Clustercheck: 
/usr/share/openstack-tripleo-heat-templates/deployment/pacemaker/clustercheck-container-puppet.yaml + OS::TripleO::Services::Redis: /usr/share/openstack-tripleo-heat-templates/deployment/database/redis-pacemaker-puppet.yaml + OS::TripleO::Services::Rsyslog: /usr/share/openstack-tripleo-heat-templates/deployment/logging/rsyslog-container-puppet.yaml + OS::TripleO::Services::MySQL: /usr/share/openstack-tripleo-heat-templates/deployment/database/mysql-pacemaker-puppet.yaml + OS::TripleO::Services::CinderBackup: /usr/share/openstack-tripleo-heat-templates/deployment/cinder/cinder-backup-pacemaker-puppet.yaml + OS::TripleO::Services::CinderVolume: /usr/share/openstack-tripleo-heat-templates/deployment/cinder/cinder-volume-pacemaker-puppet.yaml + OS::TripleO::Services::HeatApi: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-container-puppet.yaml + OS::TripleO::Services::HeatApiCfn: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cfn-container-puppet.yaml + OS::TripleO::Services::HeatApiCloudwatch: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cloudwatch-disabled-puppet.yaml + OS::TripleO::Services::HeatEngine: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-engine-container-puppet.yaml +parameter_defaults: + RedisVirtualFixedIPs: + - ip_address: 192.168.122.110 + use_neutron: false + OVNDBsVirtualFixedIPs: + - ip_address: 192.168.122.111 + use_neutron: false + ControllerExtraConfig: + nova::compute::libvirt::services::libvirt_virt_type: qemu + nova::compute::libvirt::virt_type: qemu + ComputeExtraConfig: + nova::compute::libvirt::services::libvirt_virt_type: qemu + nova::compute::libvirt::virt_type: qemu + Debug: true + DockerPuppetDebug: true + ContainerCli: podman + ControllerCount: 3 + ComputeHCICount: 3 + NeutronGlobalPhysnetMtu: 1350 + CinderLVMLoopDeviceSize: 20480 + CloudName: overcloud.localdomain + CloudNameInternal: overcloud.internalapi.localdomain + CloudNameStorage: 
overcloud.storage.localdomain + CloudNameStorageManagement: overcloud.storagemgmt.localdomain + CloudNameCtlplane: overcloud.ctlplane.localdomain + CloudDomain: localdomain + NetworkConfigWithAnsible: false + ControllerHostnameFormat: '%stackname%-controller-%index%' + ComputeHCIHostnameFormat: '%stackname%-computehci-%index%' + CtlplaneNetworkAttributes: + network: + dns_domain: localdomain + mtu: 1500 + name: ctlplane + tags: + - 192.168.122.0/24 + subnets: + ctlplane-subnet: + cidr: 192.168.122.0/24 + dns_nameservers: 192.168.122.10 + gateway_ip: 192.168.122.10 + host_routes: [] + name: ctlplane-subnet + ip_version: 4 diff --git a/scenarios/hci/hieradata_overrides_undercloud.yaml b/scenarios/hci/hieradata_overrides_undercloud.yaml new file mode 100644 index 000000000..fa9807a5b --- /dev/null +++ b/scenarios/hci/hieradata_overrides_undercloud.yaml @@ -0,0 +1,11 @@ +--- +parameter_defaults: + UndercloudExtraConfig: + ironic::disk_utils::image_convert_memory_limit: 2048 + ironic::conductor::heartbeat_interval: 20 + ironic::conductor::heartbeat_timeout: 120 + + # Ironic defaults to using `qemu:///system`. When running libvirtd + # unprivileged we need to use `qemu:///session`. This allows us to pass + # the value of libvirt_uri into /etc/ironic/ironic.conf. + ironic::drivers::ssh::libvirt_uri: 'qemu:///session' diff --git a/scenarios/hci/network_data.yaml.j2 b/scenarios/hci/network_data.yaml.j2 new file mode 100644 index 000000000..1fcfa71fc --- /dev/null +++ b/scenarios/hci/network_data.yaml.j2 @@ -0,0 +1,60 @@ +--- +- name: Storage + mtu: 1500 + vip: true + name_lower: storage + dns_domain: storage.{{ cloud_domain }}. + service_net_map_replace: storage + subnets: + storage_subnet: + vlan: 21 + ip_subnet: '172.18.0.0/24' + allocation_pools: [{'start': '172.18.0.120', 'end': '172.18.0.250'}] + +- name: StorageMgmt + mtu: 1500 + vip: true + name_lower: storage_mgmt + dns_domain: storagemgmt.{{ cloud_domain }}. 
+ service_net_map_replace: storage_mgmt + subnets: + storage_mgmt_subnet: + vlan: 23 + ip_subnet: '172.20.0.0/24' + allocation_pools: [{'start': '172.20.0.120', 'end': '172.20.0.250'}] + +- name: InternalApi + mtu: 1500 + vip: true + name_lower: internal_api + dns_domain: internal-api.{{ cloud_domain }}. + service_net_map_replace: internal_api + subnets: + internal_api_subnet: + vlan: 20 + ip_subnet: '172.17.0.0/24' + allocation_pools: [{'start': '172.17.0.120', 'end': '172.17.0.250'}] + +- name: Tenant + mtu: 1500 + vip: false # Tenant network does not use VIPs + name_lower: tenant + dns_domain: tenant.{{ cloud_domain }}. + service_net_map_replace: tenant + subnets: + tenant_subnet: + vlan: 22 + ip_subnet: '172.19.0.0/24' + allocation_pools: [{'start': '172.19.0.120', 'end': '172.19.0.250'}] + +- name: External + mtu: 1500 + vip: true + name_lower: external + dns_domain: external.{{ cloud_domain }}. + service_net_map_replace: external + subnets: + external_subnet: + vlan: 44 + ip_subnet: '172.21.0.0/24' + allocation_pools: [{'start': '172.21.0.120', 'end': '172.21.0.250'}] diff --git a/scenarios/hci/osd_spec.yaml b/scenarios/hci/osd_spec.yaml new file mode 100644 index 000000000..0433ffab1 --- /dev/null +++ b/scenarios/hci/osd_spec.yaml @@ -0,0 +1,3 @@ +--- +data_devices: + all: true diff --git a/scenarios/hci/roles.yaml b/scenarios/hci/roles.yaml new file mode 100644 index 000000000..6911fd472 --- /dev/null +++ b/scenarios/hci/roles.yaml @@ -0,0 +1,257 @@ +--- +############################################################################### +# Role: Controller # +############################################################################### +- name: Controller + description: | + Controller role that has all the controller services loaded and handles + Database, Messaging and Network functions.
+ CountDefault: 1 + tags: + - primary + - controller + # Create external Neutron bridge for SNAT (and floating IPs when using + # ML2/OVS without DVR) + - external_bridge + networks: + InternalApi: + subnet: internal_api_subnet + Storage: + subnet: storage_subnet + StorageMgmt: + subnet: storage_mgmt_subnet + Tenant: + subnet: tenant_subnet + External: + subnet: external_subnet + # For systems with both IPv4 and IPv6, you may specify a gateway network for + # each, such as ['ControlPlane', 'External'] + default_route_networks: ['ControlPlane'] + HostnameFormatDefault: '%stackname%-controller-%index%' + RoleParametersDefault: + OVNCMSOptions: "enable-chassis-as-gw" + # Deprecated & backward-compatible values (FIXME: Make parameters consistent) + # Set uses_deprecated_params to True if any deprecated params are used. + uses_deprecated_params: true + deprecated_param_extraconfig: 'controllerExtraConfig' + deprecated_param_flavor: 'OvercloudControlFlavor' + deprecated_param_image: 'controllerImage' + deprecated_nic_config_name: 'controller.yaml' + update_serial: 1 + ServicesDefault: + - OS::TripleO::Services::Aide + - OS::TripleO::Services::AodhApi + - OS::TripleO::Services::AodhEvaluator + - OS::TripleO::Services::AodhListener + - OS::TripleO::Services::AodhNotifier + - OS::TripleO::Services::AuditD + - OS::TripleO::Services::BarbicanApi + - OS::TripleO::Services::BarbicanBackendSimpleCrypto + - OS::TripleO::Services::BarbicanBackendDogtag + - OS::TripleO::Services::BarbicanBackendKmip + - OS::TripleO::Services::BarbicanBackendPkcs11Crypto + - OS::TripleO::Services::BootParams + - OS::TripleO::Services::CACerts + - OS::TripleO::Services::CeilometerAgentCentral + - OS::TripleO::Services::CeilometerAgentNotification + - OS::TripleO::Services::CephClient + - OS::TripleO::Services::CephExternal + - OS::TripleO::Services::CephGrafana + - OS::TripleO::Services::CephMds + - OS::TripleO::Services::CephMgr + - OS::TripleO::Services::CephMon + - OS::TripleO::Services::CephNfs + 
- OS::TripleO::Services::CephRbdMirror + - OS::TripleO::Services::CephRgw + - OS::TripleO::Services::CinderApi + - OS::TripleO::Services::CinderBackendDellSc + - OS::TripleO::Services::CinderBackendDellEMCPowerFlex + - OS::TripleO::Services::CinderBackendDellEMCPowermax + - OS::TripleO::Services::CinderBackendDellEMCPowerStore + - OS::TripleO::Services::CinderBackendDellEMCSc + - OS::TripleO::Services::CinderBackendDellEMCUnity + - OS::TripleO::Services::CinderBackendDellEMCVMAXISCSI + - OS::TripleO::Services::CinderBackendDellEMCVNX + - OS::TripleO::Services::CinderBackendDellEMCVxFlexOS + - OS::TripleO::Services::CinderBackendDellEMCXtremio + - OS::TripleO::Services::CinderBackendNetApp + - OS::TripleO::Services::CinderBackendPure + - OS::TripleO::Services::CinderBackendScaleIO + - OS::TripleO::Services::CinderBackendNVMeOF + - OS::TripleO::Services::CinderBackup + - OS::TripleO::Services::CinderHPELeftHandISCSI + - OS::TripleO::Services::CinderScheduler + - OS::TripleO::Services::CinderVolume + - OS::TripleO::Services::Clustercheck + - OS::TripleO::Services::Collectd + - OS::TripleO::Services::ContainerImagePrepare + - OS::TripleO::Services::DesignateApi + - OS::TripleO::Services::DesignateCentral + - OS::TripleO::Services::DesignateProducer + - OS::TripleO::Services::DesignateWorker + - OS::TripleO::Services::DesignateMDNS + - OS::TripleO::Services::DesignateSink + - OS::TripleO::Services::DesignateBind + - OS::TripleO::Services::Etcd + - OS::TripleO::Services::ExternalSwiftProxy + - OS::TripleO::Services::Frr + - OS::TripleO::Services::GlanceApi + - OS::TripleO::Services::GlanceApiInternal + - OS::TripleO::Services::GnocchiApi + - OS::TripleO::Services::GnocchiMetricd + - OS::TripleO::Services::GnocchiStatsd + - OS::TripleO::Services::HAproxy + - OS::TripleO::Services::HeatApi + - OS::TripleO::Services::HeatApiCloudwatch + - OS::TripleO::Services::HeatApiCfn + - OS::TripleO::Services::HeatEngine + - OS::TripleO::Services::Horizon + - 
OS::TripleO::Services::IpaClient + - OS::TripleO::Services::Ipsec + - OS::TripleO::Services::IronicApi + - OS::TripleO::Services::IronicConductor + - OS::TripleO::Services::IronicInspector + - OS::TripleO::Services::IronicPxe + - OS::TripleO::Services::IronicNeutronAgent + - OS::TripleO::Services::Iscsid + - OS::TripleO::Services::Kernel + - OS::TripleO::Services::Keystone + - OS::TripleO::Services::LoginDefs + - OS::TripleO::Services::ManilaApi + - OS::TripleO::Services::ManilaBackendCephFs + - OS::TripleO::Services::ManilaBackendFlashBlade + - OS::TripleO::Services::ManilaBackendIsilon + - OS::TripleO::Services::ManilaBackendNetapp + - OS::TripleO::Services::ManilaBackendPowerMax + - OS::TripleO::Services::ManilaBackendUnity + - OS::TripleO::Services::ManilaBackendVNX + - OS::TripleO::Services::ManilaBackendVMAX + - OS::TripleO::Services::ManilaScheduler + - OS::TripleO::Services::ManilaShare + - OS::TripleO::Services::Memcached + - OS::TripleO::Services::MetricsQdr + - OS::TripleO::Services::Multipathd + - OS::TripleO::Services::MySQL + - OS::TripleO::Services::MySQLClient + - OS::TripleO::Services::NeutronApi + - OS::TripleO::Services::NeutronBgpVpnApi + - OS::TripleO::Services::NeutronSfcApi + - OS::TripleO::Services::NeutronCorePlugin + - OS::TripleO::Services::NeutronDhcpAgent + - OS::TripleO::Services::NeutronL2gwAgent + - OS::TripleO::Services::NeutronL2gwApi + - OS::TripleO::Services::NeutronL3Agent + - OS::TripleO::Services::NeutronLinuxbridgeAgent + - OS::TripleO::Services::NeutronMetadataAgent + - OS::TripleO::Services::NeutronOvsAgent + - OS::TripleO::Services::NeutronVppAgent + - OS::TripleO::Services::NeutronAgentsIBConfig + - OS::TripleO::Services::NovaApi + - OS::TripleO::Services::NovaConductor + - OS::TripleO::Services::NovaIronic + - OS::TripleO::Services::NovaMetadata + - OS::TripleO::Services::NovaScheduler + - OS::TripleO::Services::NovaVncProxy + - OS::TripleO::Services::ContainersLogrotateCrond + - OS::TripleO::Services::OctaviaApi + - 
OS::TripleO::Services::OctaviaDeploymentConfig + - OS::TripleO::Services::OctaviaHealthManager + - OS::TripleO::Services::OctaviaHousekeeping + - OS::TripleO::Services::OctaviaWorker + - OS::TripleO::Services::OpenStackClients + - OS::TripleO::Services::OVNDBs + - OS::TripleO::Services::OVNController + - OS::TripleO::Services::Pacemaker + - OS::TripleO::Services::PlacementApi + - OS::TripleO::Services::OsloMessagingRpc + - OS::TripleO::Services::OsloMessagingNotify + - OS::TripleO::Services::Podman + - OS::TripleO::Services::Redis + - OS::TripleO::Services::Rhsm + - OS::TripleO::Services::Rsyslog + - OS::TripleO::Services::RsyslogSidecar + - OS::TripleO::Services::Securetty + - OS::TripleO::Services::Snmp + - OS::TripleO::Services::Sshd + - OS::TripleO::Services::SwiftProxy + - OS::TripleO::Services::SwiftDispersion + - OS::TripleO::Services::SwiftRingBuilder + - OS::TripleO::Services::SwiftStorage + - OS::TripleO::Services::Timesync + - OS::TripleO::Services::Timezone + - OS::TripleO::Services::TripleoFirewall + - OS::TripleO::Services::TripleoPackages + - OS::TripleO::Services::Tuned + - OS::TripleO::Services::Unbound + - OS::TripleO::Services::Vpp +############################################################################### +# Role: ComputeHCI # +############################################################################### +- name: ComputeHCI + description: | + Compute Node role hosting Ceph OSD too + tags: + - compute + networks: + InternalApi: + subnet: internal_api_subnet + Tenant: + subnet: tenant_subnet + Storage: + subnet: storage_subnet + StorageMgmt: + subnet: storage_mgmt_subnet + RoleParametersDefault: + FsAioMaxNumber: 1048576 + TunedProfileName: "throughput-performance" + NovaComputeStartupDelay: 180 + # CephOSD present so serial has to be 1 + update_serial: 1 + ServicesDefault: + - OS::TripleO::Services::Aide + - OS::TripleO::Services::AuditD + - OS::TripleO::Services::BootParams + - OS::TripleO::Services::CACerts + - 
OS::TripleO::Services::CephClient + - OS::TripleO::Services::CephExternal + - OS::TripleO::Services::CephOSD + - OS::TripleO::Services::Collectd + - OS::TripleO::Services::ComputeCeilometerAgent + - OS::TripleO::Services::CeilometerAgentIpmi + - OS::TripleO::Services::ComputeNeutronCorePlugin + - OS::TripleO::Services::ComputeNeutronL3Agent + - OS::TripleO::Services::ComputeNeutronMetadataAgent + - OS::TripleO::Services::ComputeNeutronOvsAgent + - OS::TripleO::Services::Frr + - OS::TripleO::Services::IpaClient + - OS::TripleO::Services::Ipsec + - OS::TripleO::Services::Iscsid + - OS::TripleO::Services::Kernel + - OS::TripleO::Services::LoginDefs + - OS::TripleO::Services::MetricsQdr + - OS::TripleO::Services::Multipathd + - OS::TripleO::Services::MySQLClient + - OS::TripleO::Services::NeutronBgpVpnBagpipe + - OS::TripleO::Services::NeutronLinuxbridgeAgent + - OS::TripleO::Services::NeutronVppAgent + - OS::TripleO::Services::NovaAZConfig + - OS::TripleO::Services::NovaCompute + - OS::TripleO::Services::NovaLibvirt + - OS::TripleO::Services::NovaLibvirtGuests + - OS::TripleO::Services::NovaMigrationTarget + - OS::TripleO::Services::ContainersLogrotateCrond + - OS::TripleO::Services::Podman + - OS::TripleO::Services::Rhsm + - OS::TripleO::Services::Rsyslog + - OS::TripleO::Services::RsyslogSidecar + - OS::TripleO::Services::Securetty + - OS::TripleO::Services::Snmp + - OS::TripleO::Services::Sshd + - OS::TripleO::Services::Timesync + - OS::TripleO::Services::Timezone + - OS::TripleO::Services::TripleoFirewall + - OS::TripleO::Services::TripleoPackages + - OS::TripleO::Services::Tuned + - OS::TripleO::Services::Vpp + - OS::TripleO::Services::OVNController + - OS::TripleO::Services::OVNMetadataAgent diff --git a/scenarios/hci/undercloud_parameter_defaults.yaml b/scenarios/hci/undercloud_parameter_defaults.yaml new file mode 100644 index 000000000..64e2481da --- /dev/null +++ b/scenarios/hci/undercloud_parameter_defaults.yaml @@ -0,0 +1,14 @@ +--- +{ + 
"parameter_defaults": { + "MasqueradeNetworks": { + "10.0.0.1/24": [ + "10.0.0.1/24" + ], + "192.168.122.0/24": [ + "192.168.122.0/24" + ] + } + }, + "resource_registry": {} +} diff --git a/scenarios/hci/vips_data.yaml b/scenarios/hci/vips_data.yaml new file mode 100644 index 000000000..3aff3a399 --- /dev/null +++ b/scenarios/hci/vips_data.yaml @@ -0,0 +1,22 @@ +--- +- name: internal_api_vip + network: internal_api + subnet: internal_api_subnet + dns_name: overcloud +- name: storage_vip + network: storage + subnet: storage_subnet + dns_name: overcloud +- name: storage_mgmt_vip + network: storage_mgmt + subnet: storage_mgmt_subnet + dns_name: overcloud +- name: external_vip + network: external + subnet: external_subnet + dns_name: overcloud +- name: ctlplane_vip + network: ctlplane + ip_address: 192.168.122.99 + subnet: ctlplane-subnet + dns_name: overcloud