Upgrade from 2018 #101
Conversation
This will allow the convert plan to be used to update trusted certificate extensions without enforcing node group changes. Such a capability is useful for upgrading from 2018.1 to 2019.7
Puppet 5 doesn't have the `puppet ssl` command.
New style compilers don't have a peadm_role cert extension anymore; they only have a pp_auth_role.
So that compilers classify successfully and can run Puppet
When using the orchestrator transport there is a problem with the built-in service task when the orchestrator is upgraded but the pxp-agents are not. Switching to run_command and `systemctl stop` during this time avoids the problem (a sketch of this approach follows these notes).
Otherwise, the certs potentially can't be signed due to having authorization extensions.
If it was stopped before, it should still be stopped after.
We now use this plan at a time when the master is 2019.x but agents could be 2018.x. So, make it compatible.
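A rough sketch of that approach is below. It is illustrative only: the `$compiler_targets` variable and the `is-active` state check are assumptions for the example, not code from this PR.

```puppet
# Hypothetical sketch: stop pe-puppetdb with run_command so the step works
# over PCP even while agents are still on 2018.x, and remember which targets
# were running it so the original state can be restored afterwards.
$status      = run_command('systemctl is-active pe-puppetdb', $compiler_targets, '_catch_errors' => true)
$was_running = $status.ok_set.targets

run_command('systemctl stop pe-puppetdb', $compiler_targets)

# ... upgrade steps ...

# Start the service again only where it was running before.
unless $was_running.empty {
  run_command('systemctl start pe-puppetdb', $was_running)
}
```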
@reidmv there is 1 rubocop issue (single quotes preferred over double quotes) breaking the syntax check.
@reidmv Can you recommend a straightforward way of testing this?
I don't think you can install PE 2018.1 using this module at all, only upgrade.
@timidri the 0.4.x branch, in which the module is named "pe_xl" rather than "peadm", is the only one that can be used to actually install 2018.1. I've been using the autope project to deploy test stacks, modifying the plan to change out "peadm" for "pe_xl". The 2.x version of peadm supports installing 2019.7 (nothing older), upgrading from 2018.1.x, and upgrading from 2019.1.0 or newer.
@reidmv I did the same, but using the aws provider pe_xl fails with the error we have since fixed:

And by the way, I've created a symlink from peadm to pe_xl and the plan worked under its old name.
Yeah, the 0.4.x version insisted on hostnames exactly matching the inventory names used to connect. I can confirm that in GCP that condition is met, so the deployment seems to go smoothly.
I use the docker examples in this repo to create the 2018 stack. It will be necessary to switch to the 0.4.x version to do so. @vchepkov @timidri see the docker examples.
@@ -13,6 +13,9 @@
  # Common Configuration
  String        $compiler_pool_address = $master_host,
  Array[String] $dns_alt_names         = [ ],

  # Options
  Boolean       $configure_node_groups = true,
I think the $configure_node_groups value should be generated dynamically rather than relying on a human. Create a task to find the PE version in order to compute the boolean.
I think the option not to create additional classification is very useful. There is no need for it in a standard configuration with only a primary and a replica. Also, the module doesn't provide a plan to promote a replica, and one would have issues removing the classifications added by the module when the primary is not available to use the standard promotion procedure.
$pe_version = run_task('peadm::read_file', $master_target,
  path => '/opt/puppetlabs/server/pe_version',
)[0][content].chomp
Oh, we have the PE version here already, so my comment above should use this value.
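For example (only a sketch; the 2019.0.0 cutoff and the exact semantics are assumptions, not something agreed in this thread), the flag could be computed from the value read above instead of being passed in by the operator:

```puppet
# Hypothetical sketch: derive $configure_node_groups from the PE version read
# via the peadm::read_file task. The cutoff is illustrative only; the idea is
# to skip node group enforcement when upgrading from an older (2018.1) stack.
$configure_node_groups = versioncmp($pe_version, '2019.0.0') >= 0
```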
)
# Shut down PuppetDB on CMs that use the PM's PDB PG. Use run_command instead
# of run_task(service, ...) so that upgrading from 2018.1 works over PCP.
run_command('systemctl stop pe-puppetdb', $compiler_m1_targets)
This assumes systemctl is available on the host. Since PE 2018.1 supports RHEL 6, that won't be true in all cases.
https://puppet.com/docs/pe/2018.1/supported_operating_systems.html#supported_operating_systems
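One portable option (just a sketch, and an assumption rather than something settled in this thread) would be the SysV-style `service` wrapper, which exists on RHEL 6 and forwards to systemctl on systemd platforms:

```puppet
# Hypothetical alternative: 'service' works on RHEL 6 (SysV init) and is a
# redirecting shim on systemd-based platforms, so one command covers both.
run_command('service pe-puppetdb stop', $compiler_m1_targets)
```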
@@ -13,6 +13,9 @@
  # Common Configuration
  String        $compiler_pool_address = $master_host,
  Array[String] $dns_alt_names         = [ ],

  # Options
  Boolean       $configure_node_groups = true,
Would like to see this dynamically calculated instead. We can't trust humans to figure this out.
# Shut down PuppetDB on CMs that use the replica's PDB PG. Use run_command
# instead of run_task(service, ...) so that upgrading from 2018.1 works
# over PCP.
run_command('systemctl stop pe-puppetdb', $compiler_m2_targets)
Same systemctl issue here
I don't want to muddy the waters here; I can open a ticket if v0.4 is still supported, but the module doesn't work for me. I use Vagrant and this JSON:

The plan fails:
I had the same result as @vchepkov when using autope in GCP.
@vchepkov I talked to @timidri and he found one issue on 0.4.x which might be the same one you're running into. When the plan fails, it is failing right after …

Because the PE installer is expected to fail on first install (Puppet can't run successfully before the database node is installed as well), the plan doesn't halt there. In 0.4.x particularly, any failure is ignored and the plan proceeds. It looks like the … The bug @timidri found in 0.4.x is that … If you are running from a Mac OSX machine you may have seen the same failure. If you are not, try running with the …
Yes, I am indeed using a Mac, and I discovered that the installation file isn't being copied, so I transferred it manually and the plan failed when provisioning the replica instead.
@vchepkov ah, understood. If you don't need to deploy 2018.1 yourself, then I would say let's not worry about it. I definitely don't have any unsolved need to deploy it; the only reason it's coming up is @timidri trying to help me test the ability to upgrade from it. New deployments of 2018.1 are not considered to be supported. 😄
@reidmv Unfortunately I couldn't complete my test run (with a 2018 environment created by pe_xl v0.4.2). The famous last words from my …
Otherwise it seems there's a chance it'll re-create files we need to be absent.
This PR adds the ability to upgrade from PE 2018.1 to PE 2019.7.