Pgpool2 showing all nodes in standby mode in "show pool_nodes" output #73
I don't think the minor version differences are causing this issue. Could you ensure that the two standby servers are connected to the primary and receiving WAL from it?
And could you show us the output of pg_stat_replication?
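A minimal way to check both, assuming psql access as the postgres superuser (host names and users are placeholders):

```sh
# On the primary: one row per connected standby, with its sync state.
psql -U postgres -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"

# On each standby: status 'streaming' means WAL is being received.
psql -U postgres -c "SELECT status, sender_host FROM pg_stat_wal_receiver;"
```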
On the backend it was working smoothly; there was no issue with replication. The issue was at pgpool: pgpool was unable to find the primary node. We had done a security update using "yum update --security" on node2 and node3, after which we found the line mentioned below.
Inside the file /usr/lib/tmpfiles.d/pgpool-II-pg13.conf we found the line below. We followed the steps below for the security update:
For node2:
When we tried to attach the 2nd node (node2), it became stuck and exited with a pgpool "terminating connection" error. We also observed that port 9898 does not come up immediately when we restart pgpool. This is the output of pg_stat_replication:
[pg_stat_replication output elided]
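For reference, re-attaching a node and checking the PCP port can be done roughly like this (a sketch; the host, PCP user, and node id are assumptions):

```sh
# Re-attach the 2nd backend via the PCP interface on port 9898
# (node id 1 here is an assumption; check your node numbering first).
pcp_attach_node -h localhost -p 9898 -U pgpool -n 1

# Verify the PCP port is actually listening after a pgpool restart.
ss -tlnp | grep 9898
```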
In pgpool.conf you need to make sure you have the right credentials for the streaming replication check and the health check, or else you will see an error similar to the one below:
[error message elided]
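The credentials in question are the sr_check and health_check parameters in pgpool.conf; a minimal excerpt (the user and password values are placeholders):

```
# pgpool.conf: user pgpool uses to query pg_stat_replication on the backends
sr_check_user = 'pgpool'
sr_check_password = 'secret'

# user pgpool uses for periodic backend health checks
health_check_user = 'pgpool'
health_check_password = 'secret'
```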
I suddenly found pgpool showing pg_role as standby for all nodes, while on the database side node1 is the primary and node2 and node3 are standbys.
I tried removing the pgpool status file, but that did not work either. Later I stopped all services, removed all status files, and started the pgpool service again on all nodes. Then the primary showed as primary in pg_role, but when I attached the standby node, the pgpool connection was terminated for a long time. After a few minutes I observed all nodes showing as standby again. Even pg_promote_node did not work.
Finally, once we recovered node2 and node3, the issue was solved, but I found that replication_state and replication_sync_state are not showing.
We checked all the configuration against the other pgpool nodes and it was found to be OK. One thing I did notice is that on node1 the pgpool version is 4.2.3 while on the other two nodes it is 4.2.2. Could that be the reason for this mismatch of information?
Where does pgpool store that pg_role status information?
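For what it's worth: as far as I know, the attached/detached node status is cached in the pgpool_status file under the directory set by logdir, while pg_role is fetched live from each backend by the streaming replication check. A sketch for inspecting and discarding the cache (the path below is an assumption and depends on your logdir setting):

```sh
# Inspect the cached backend status; the path depends on logdir in pgpool.conf.
cat /var/log/pgpool/pgpool_status

# Start pgpool while discarding the cached status file.
pgpool -D
```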
@pengbo0328
@codeforall
@chen-5033
Pgpool at a stable state
At the time of the pgpool2 error
After stopping all pgpool services, removing all status files, and restarting the pgpool service
After recovering pgpool2, with replication_state and replication_sync_state missing