---
sidebar_label: Pre Installation Checklist
---

# Pre Installation Checklist

:::warning

This checklist is currently **a work in progress and incomplete**.

:::

This list describes some aspects (without claiming to be exhaustive) that should be clarified before a pilot installation and, at the latest, before a production installation.

The aim of this list is to reduce:

- Projects that are less successful than they could be
- Long project waiting and implementation times
- Unexpected errors or difficulties
- Major restructuring work shortly after the system was initially put into operation
- Unexpected issues that have a major impact on costs

*Open source benefits from the collaboration of its users and developers.*

For this reason, we are collecting questions, important topics to be clarified, and hints that make it easier for users of the Sovereign Cloud Stack to succeed with it.
Therefore, we would be very pleased if users, implementers, and operators [contributed](https://github.com/SovereignCloudStack/docs/docs/01-getting-started/preinstall-checklist.md) their specific experiences to this list.

## General

### Availability and Support

- What requirements do you have for the availability of the system?
- What gradations or requirements apply to resolving the different types of problems?
  - Example problem scenarios:
    - complete cloud service outage or downtime
    - performance problems
    - application problems
    - ...
- Where should rollouts and changes to the system be tested or prepared, and does a dedicated environment make sense for this?

### Hardware Definition

- Are there defined hardware standards for the target data center, and what are the general conditions?
- How should the systems be provisioned with an operating system?
- Decide which base operating system will be used (e.g. RHEL or Ubuntu) and whether it fits the hardware support, strategy, upgrade support, and cost structure.
- How many failure domains, environments, and availability zones are required?

### Required IP Networks

Estimate the expected number of IP addresses and plan sufficient reserves so that no adjustments to the networks become necessary at a later date
(a sanity-check sketch follows after this list).
The installation can be carried out via IPv4, IPv6, or a hybrid of both.

- Frontend Access: A dedicated IP address space / network for services published by the cloud platform and its users
  - this is in most cases a public IPv4 network
  - at least TCP port 443 should be reachable from other networks for all addresses of this network
- Node Communication: A dedicated private IP address space / network for the internal communication between the nodes
  - every node needs a dedicated IP
  - a DHCP range for installation might be useful, but is not mandatory
  - all nodes in this network should have access to the NTP server
  - all nodes should have access to public DNS servers and HTTP/HTTPS servers
- In some cases, it may make sense to operate Ceph in a dedicated network or multiple dedicated networks (public, cluster).
  Methods for high-performance and scalable access to the storage:
  - very high-performance routing (layer 3), for example via the switch infrastructure
  - dedicated network adapters in the compute nodes for direct access to the storage network
- Management: A private IP address space / network for the hardware out-of-band management of the nodes
  - every node needs a dedicated management IP
  - a DHCP range for installation might be useful, but is not mandatory
- Manager Access: Dedicated IP addresses for access to the manager nodes
  - Every manager gets a dedicated external address for SSH and WireGuard access
  - The IP addresses should not be part of the "Frontend Access" network
  - At least ports 443/TCP and 51820/UDP should be reachable from external networks
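
A minimal sketch (using only Python's standard `ipaddress` module) of how such an estimate can be sanity-checked; all subnets, host counts, and growth factors below are hypothetical examples, not SCS defaults:

```python
# Sanity-check planned subnet sizes against expected node counts plus a
# growth reserve. All networks and numbers are hypothetical examples.
import ipaddress

# (name, network, expected hosts today, growth factor)
plans = [
    ("frontend-access",    ipaddress.ip_network("203.0.113.0/24"), 150, 1.5),
    ("node-communication", ipaddress.ip_network("10.10.0.0/20"),   300, 2.0),
    ("management",         ipaddress.ip_network("10.20.0.0/22"),   300, 2.0),
]

for name, net, hosts, growth in plans:
    needed = int(hosts * growth)
    usable = net.num_addresses - 2  # minus network and broadcast address
    status = "ok" if usable >= needed else "TOO SMALL"
    print(f"{name:20} {net}  usable={usable}  needed={needed}  {status}")
```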

### Identity Management of the Platform

How should access to the administration of the environment (e.g. OpenStack) be managed?

Should there only be local access, or should the system be linked to one or more identity providers via OIDC or SAML (identity brokering)?

### Network configuration of nodes and tenant networks

TBD:

- It must be decided how the networks of the tenants should be separated in OpenStack (Neutron).
- It must be decided how the underlay network of the cloud platform should be designed
  (e.g. native layer 2, layer 2 underlay with tenant VLANs, layer 3 underlay).
  - Layer 3 underlay
    - FRR routing on the nodes?
    - ASN naming scheme (one possible numbering sketch follows below)
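
For a layer-3 underlay, one option is to derive a private 32-bit ASN (RFC 6996) per node from its rack and node position. The following is only an illustrative sketch; the base value and offsets are arbitrary assumptions, not an SCS convention:

```python
# Illustrative ASN numbering scheme for a layer-3 underlay: derive a
# private 32-bit ASN (RFC 6996) from rack and node position.
PRIVATE_ASN_BASE = 4_200_000_000  # start of the 32-bit private ASN range

def node_asn(rack: int, node: int) -> int:
    """One ASN per node: 100 slots per rack (example convention)."""
    asn = PRIVATE_ASN_BASE + rack * 100 + node
    assert asn <= 4_294_967_294, "outside the private 32-bit ASN range"
    return asn

for rack in (1, 2):
    for node in (1, 2, 3):
        print(f"rack {rack} node {node}: AS{node_asn(rack, node)}")
```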

### Domains and Hosts

- Cloud Domain: A dedicated subdomain used for the cloud environment
  (e.g. `*.zone1.landscape.scs.community`)
- Internal API endpoint: A hostname for the internal API endpoint which points to an address in the "Node Communication" network
  (e.g. `api-internal.zone1.landscape.scs.community`)
- External API endpoint: A hostname for the external API endpoint which points to an address in the "Frontend Access" network
  (e.g. `api.zone1.landscape.scs.community`)
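
A minimal sketch for verifying that the two endpoint hostnames resolve into the intended networks; the hostnames are the examples from above, the subnets are hypothetical:

```python
# Check that the API hostnames resolve into the intended networks.
import ipaddress
import socket

checks = {
    "api-internal.zone1.landscape.scs.community": ipaddress.ip_network("10.10.0.0/20"),
    "api.zone1.landscape.scs.community": ipaddress.ip_network("203.0.113.0/24"),
}

for host, net in checks.items():
    try:
        addrs = {info[4][0] for info in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)}
    except socket.gaierror as err:
        print(f"{host}: does not resolve ({err})")
        continue
    for addr in addrs:
        inside = ipaddress.ip_address(addr) in net
        print(f"{host} -> {addr}: {'in' if inside else 'NOT in'} {net}")
```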

### TLS Certificates

Since not all domains used for the environment will be publicly accessible, "Let's Encrypt" certificates cannot generally be obtained without problems.
We therefore recommend having official TLS certificates available for at least the two API endpoints.
Either a multi-domain certificate (with SANs) or a wildcard certificate (a wildcard on the first level of the cloud domain) can be used for this.
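
A minimal sketch (standard library only) for checking which names a deployed certificate actually covers; the hostname is the example from above:

```python
# Fetch the TLS certificate served at the external API endpoint and print
# its subject alternative names (SANs), to verify both endpoints are covered.
import socket
import ssl

host = "api.zone1.landscape.scs.community"
ctx = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

for kind, value in cert.get("subjectAltName", ()):
    print(kind, value)
```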

### Access to installation resources

For the download of installation data such as container images, operating system packages, etc.,
either access to publicly accessible networks must be provided, or a caching proxy or a dedicated
repository server must be reachable directly from the "Node Communication" network.

The [Configuration Guide](https://docs.scs.community/docs/iaas/guides/configuration-guide/proxy) provides more detailed information on how this can be configured.

TBD:

- Proxy requirements
- Are authenticated proxies possible?
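
A minimal sketch for testing whether typical installation resources are reachable through a caching proxy; the proxy address is a placeholder and the target URLs are merely illustrative:

```python
# Verify that example download sources are reachable through a caching proxy.
import urllib.request

proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example.com:3128",   # placeholder proxy address
    "https": "http://proxy.example.com:3128",
})
opener = urllib.request.build_opener(proxy)

for url in ("https://archive.ubuntu.com", "https://quay.io"):
    try:
        with opener.open(url, timeout=10) as resp:
            print(f"{url}: HTTP {resp.status}")
    except Exception as err:
        print(f"{url}: FAILED ({err})")
```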

### Git Repository

- A private Git repository for the [configuration repository](https://osism.tech/docs/guides/configuration-guide/configuration-repository)

### Access management

- What requirements are needed or defined for the administration of the system?
- The public keys of all administrators

### Monitoring and On-Call/On-Duty

- Connection and integration into existing operational monitoring

- What kind of On-Call/On-Duty coverage do you need?
- How quickly must work on a problem be started?
- What downtimes are tolerable in extreme cases?
- Does a log aggregation system already exist, and does it make sense to use it for the new environment?

## NTP Infrastructure

- The deployed nodes should have permanent access to at least three NTP servers
- It has proven advantageous for the three control nodes to have access to external NTP servers
  and to provide NTP service for the other nodes of the SCS installation.
- The NTP servers used should not run on virtual hardware
  (depending on the architecture and the virtualization platform, this can otherwise cause minor or major problems in special situations).
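
A minimal sketch for checking that the configured NTP servers answer at all, using a bare SNTP (RFC 4330) client request; the server names are only examples:

```python
# Send a minimal SNTP request to each server and print its reported time.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def query_ntp(server: str) -> float:
    packet = b"\x23" + 47 * b"\x00"  # leap=0, version=4, mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(3)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(512)
    seconds = struct.unpack("!I", data[40:44])[0]  # transmit timestamp
    return seconds - NTP_EPOCH_OFFSET

for server in ("0.pool.ntp.org", "1.pool.ntp.org", "2.pool.ntp.org"):
    try:
        print(server, time.ctime(query_ntp(server)))
    except OSError as err:
        print(server, "unreachable:", err)
```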

## OpenStack

### Hardware Concept

TBD:

- How many compute nodes are needed? (a rough sizing sketch follows below)
- Are local NVMe drives needed?
- Are GPUs needed?
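
A back-of-the-envelope sketch for the compute node question; every number here (vCPU demand, overcommit ratio, host reserve, headroom) is an assumption to be replaced with your own figures:

```python
# Rough compute node sizing from expected vCPU demand.
import math

expected_vcpus = 2000   # total vCPUs the tenants are expected to request
cpu_overcommit = 3.0    # planned vCPU:pCPU overcommit ratio (site-specific choice)
cores_per_node = 64     # schedulable cores (or SMT threads) per compute node
host_reserve = 4        # cores kept back for hypervisor and host services
failure_headroom = 1    # spare nodes for maintenance and failures

usable = (cores_per_node - host_reserve) * cpu_overcommit
nodes = math.ceil(expected_vcpus / usable) + failure_headroom
print(f"{usable:.0f} schedulable vCPUs per node -> {nodes} compute nodes")
```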

## Ceph Storage

### General

TBD:

- Amount of usable storage (see the capacity sketch after this list)
- External Ceph storage installation?
- What is the purpose of your storage?
  - Fast NVMe disks?
  - More read/write-intensive workloads or mixed?
  - Huge amounts of data, but performance is a second-level requirement?
  - Object storage?
  - ...
- What kind of network storage is needed?
  - Spinners (HDDs)
  - NVMe/SSD
- Dedicated Ceph environment or hyperconverged setup?
- CRUSH / failure domain properties
  - Failure domains?
  - Erasure coded?
  - Inter-datacenter replication?
  - ...
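
A minimal sketch of the usable-capacity arithmetic for a replicated and an erasure-coded pool; the raw capacity and the fill limit are example assumptions:

```python
# Rough usable-capacity estimate for replicated and erasure-coded Ceph pools.
raw_tb = 12 * 10 * 7.68  # e.g. 12 nodes x 10 NVMe drives x 7.68 TB each

replicated = raw_tb / 3              # replicated pool with size=3
ec_k, ec_m = 4, 2                    # erasure coding profile k=4, m=2
erasure = raw_tb * ec_k / (ec_k + ec_m)

fill_limit = 0.85                    # stay below Ceph's default nearfull ratio
print(f"raw capacity:            {raw_tb:.0f} TB")
print(f"usable (3x replication): {replicated * fill_limit:.0f} TB")
print(f"usable (EC {ec_k}+{ec_m}):         {erasure * fill_limit:.0f} TB")
```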

### Disk Storage

- What use cases can be expected, and on what scale?

### Object Storage

- RADOS Gateway setup
