NiFi v2 Behind Openshift Route #750
Comments
Hi @greg-pendlebury, not sure I completely follow, but could you please have a look at https://docs.stackable.tech/home/stable/nifi/usage_guide/security/#host-header-check? Especially the part about the proxy settings. Otherwise, could you provide the errors from the NiFi pods and any other relevant logs / events? Malte
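For reference, the override that section describes would look roughly like this on the `NifiCluster` resource (a sketch, assuming the current CRD layout; the hostname and resource name are placeholders):

```yaml
apiVersion: nifi.stackable.tech/v1alpha1
kind: NifiCluster
metadata:
  name: simple-nifi                # placeholder name
spec:
  nodes:
    configOverrides:
      nifi.properties:
        # Allow-list for NiFi's host header check; values are host[:port].
        # "nifi.apps.example.com" stands in for the real route hostname.
        nifi.web.proxy.host: "nifi.apps.example.com:443,nifi.apps.example.com"
    roleGroups:
      default:
        replicas: 1
```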
Thanks @maltesander, I have tried those proxy settings and they weren't what I was looking for. TBH my hunch is that I am hitting the same issue noted in 697, where NiFi v2+ (or Jetty v10+) is very strict about the SSL handshake, because the cert generated inside the pod (or somewhere else in the operator?) only uses SANs that are either 1) internal cluster addresses (various) or 2) the host IP, so the public route hostname never matches. The pod logs something very minimal about the failed connection, but I think it is Jetty rejecting things, not NiFi, and the client receives the "Invalid SNI" error. All my research suggests everyone migrating to v2+ is challenged by this with Docker. I have got v2 working perfectly using the host IP as the public access pathway (just to test).

The heart of the above issue (I think) is that OpenShift routes using 'passthrough' (i.e. don't touch the SSL, the pod can handle it) worked fine for v1 NiFi but cannot work for v2... at least, all the dead ends I tried yesterday suggest that. I suspect I could switch to 're-encrypt', but then I need to lift all the SSL details out of the pod and build them into the route config... academically doable, but I was already concerned when I saw the host IP on the cert SANs, because it of course means the pod is inherently fragile and couldn't handle being moved to a new worker node unless the cert got re-issued... which in turn means the cert inside the route config would be stale and I would be chasing my own tail. Maybe... I am off in theory-land on this one. And the third option for routes (edge termination) didn't look like something supported either, but I didn't explore it much; I want HTTPS working internally.

Eventually I gave up yesterday when I was considering implementing your cert-manager to manage the SSL outside the pod (I guess that is what it does), but an admin shut me down because they want to use the OpenShift cert-manager they are already controlling (and that the routes leverage). We decided we would come back and re-assess the operator once v2 is a bit more mature. I noticed the Docker image is only something like 3 weeks old at this stage, and your team has likely only just hit the first set of teething issues to bed this down.
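For context, the passthrough route I am using looks roughly like this (hostname, service name, and port name are placeholders):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: nifi
spec:
  host: nifi.apps.example.com   # placeholder public hostname
  to:
    kind: Service
    name: simple-nifi           # placeholder NiFi service
  port:
    targetPort: https           # placeholder port name on the service
  tls:
    termination: passthrough    # router does not touch TLS; the pod terminates it
```

With passthrough, the client's TLS handshake reaches the pod unmodified, so the SNI value (the route host) has to match a SAN on the pod's certificate; that appears to be exactly the check being enforced here.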
Thank you @greg-pendlebury for the detailed steps! Regarding the SSL/DNS issues I would ping @Jimvin, who may have some insights. Now I have to throw some "experimentals" at you :)
Hopefully this is resolved by the next release, but I cannot promise anything. Cheers, Malte
Thanks @maltesander, unfortunately I am not going to get time to revisit this until next week, but I will try to see what I can do by wiring up the cert-manager.
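If I do get to it, the rough idea would be a cert-manager `Certificate` whose SANs cover both the internal service name and the public route host; a sketch, with placeholder names and issuer:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: nifi-public-tls
spec:
  secretName: nifi-public-tls                 # Secret the pods would mount instead of the operator-issued cert
  dnsNames:
    - nifi.apps.example.com                   # public route hostname (placeholder)
    - simple-nifi.default.svc.cluster.local   # internal service DNS name (placeholder)
  issuerRef:
    name: openshift-issuer                    # placeholder; whichever issuer the admins control
    kind: ClusterIssuer
```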
Perhaps this is just a request for guidance (if I am wrong), but I can't work out how to gracefully get a NiFi v2 container online behind an OpenShift Route because of the "Invalid SNI" issue.
This issue (697) gave some hints using an Ingress, but I haven't found a way using Routes.
From tracing through the current solution, what I think is happening is:

- Startup logic in the `StatefulSet` forces `NODE_ADDRESS` to exist during startup, but builds the value itself based on an internal address.
- `NODE_ADDRESS` is set using a `ConfigMap`.
- `nifi.properties` loads `NODE_ADDRESS` into both `nifi.cluster.node.address` and `nifi.web.https.host`, with the whole file being drawn from a `ConfigMap`.
- Edits to `nifi.properties` in the `ConfigMap` are overwritten by the operator.
- Edits to the `StatefulSet` to adjust the CLI setting of `NODE_ADDRESS` are overwritten by the operator.
- Values set directly in the `ConfigMap` are not overwritten.

Would it be as simple as allowing `nifi.web.https.host` to be overwritten by a new variable, like `PUBLIC_ADDRESS`, that we can set in the `ConfigMap`? I gather this is what is causing Jetty to reject the traffic originating from the public route.

For NiFi v1.27.0, using a pass-through route 'just worked', but both upgrading and a fresh install of v2 are failing. It is unclear to me why the internal pod wants to perform this validation... and TBH I would love to disable it, but maybe it adds value. My last gasp is going to be tracking down an admin that knows cert-fu and trying to switch to a re-encrypt route, but I am not looking forward to that.
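For reference, the re-encrypt route I would be attempting looks roughly like this (all names are placeholders, and the CA bundle is a stub):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: nifi
spec:
  host: nifi.apps.example.com      # placeholder public hostname
  to:
    kind: Service
    name: simple-nifi              # placeholder NiFi service
  tls:
    termination: reencrypt
    # CA that signed the pod's serving certificate, so the router can
    # verify the backend; this is the piece that would go stale if the
    # in-pod cert chain were re-issued under a different CA.
    destinationCACertificate: |
      -----BEGIN CERTIFICATE-----
      (placeholder)
      -----END CERTIFICATE-----
```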