
WIP: don't forget to request the pull scope #1

Draft · wants to merge 1 commit into master

Conversation

kingdonb

Harbor will deny tags/list if you did not request the pull scope

(I do not know how scopes are supposed to be assigned, but when I debugged the conversation that happens in kuby build today, I came up against this issue on my public harbor registry)

This brilliant fellow explains the conversation and how it's intended to happen, and the conversation continues in the gist linked from there... https://stackoverflow.com/a/51921869/661659

I am not sure what the correct way to determine the scope is, or how best to pass the repo in so the repo scope can be requested, but this version resulted in a 200 OK for me.

Also, hey! LTNS

Harbor will deny tags/list if you did not request the pull scope

Signed-off-by: Kingdon Barrett <[email protected]>
@kingdonb
Author

kingdonb commented Jul 23, 2022

In case you wanted to see where it's working and for what purpose:

https://github.com/kingdonb/scrob-web/actions/workflows/kuby.yml

(This is my weird fork of kuby that's designed to work on GitOps, with GitHub actions building and pushing images and manifests directed from my fork of the kuby cli!)

I haven't quite caught up with your branches yet, I went looking to see if you had support for ingress_class in them yet and it looked like you didn't, so I stuck with my weird fork for now

Edit: and here is the build where bundle update ran inside of GitHub Actions. You can see it's all hooked up to prebundler, which makes bundle update fast when not much has changed (almost as fast as when nothing has changed), even inside of a container build pipeline!

❤️ 🥰

@kingdonb
Author

I was looking for help with this issue and I googled only to find this discussion we had once before, with most of the information that I needed in it:

https://githubmemory.com/repo/getkuby/kuby-core/issues/72

@camertron
Member

Hey @kingdonb, thanks for submitting this :)

It's my understanding that the registry should include the necessary scopes in the scope section of the www-authenticate response header. It's probably OK to include the repository:<repo>:pull scope by default, but I'm worried other container registries will error if they don't support it for whatever reason. Are you able to test with GitHub, GitLab, Docker Hub, etc.?
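As a sketch of what that handshake looks like (a hypothetical helper, not kuby's actual code): a Docker-compatible registry answers an unauthenticated request with a 401 whose Www-Authenticate header carries the realm, service, and scope the client should request a token for. Parsing that challenge might look like:

```ruby
# Hypothetical helper (not kuby's actual API): parse the Bearer challenge a
# Docker-compatible registry returns in its Www-Authenticate header on a 401,
# e.g. Bearer realm="https://...",service="...",scope="repository:foo:pull"
def parse_bearer_challenge(header)
  params = {}
  header.sub(/\ABearer\s+/, '').scan(/(\w+)="([^"]*)"/) do |key, value|
    params[key.to_sym] = value
  end
  params
end

challenge = 'Bearer realm="https://harbor.example.com/service/token",' \
            'service="harbor-registry",scope="repository:myapp:pull"'
parse_bearer_challenge(challenge)
```

If Harbor really does advertise the scope this way, the client could simply echo whatever it was handed back in the token request rather than hard-coding a default.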

      end

      def token
        @token ||= begin
          http = Net::HTTP.new(realm.host, realm.port)
          http.use_ssl = true if realm.scheme == 'https'

-         url_params = { service: service }
+         url_params = { service: service, scope: "repository:#{repo}:pull" }
Member

The :scope hash entry will get replaced by the body of the if statement below; is that OK? Should the repository:<repo>:pull scope be added to the scopes in www-authenticate, or just be a default?
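One way to answer that question, sketched with illustrative names (`token_url_params`, `challenge_scope`, and `repo` are assumptions, not kuby's actual variables): prefer whatever scope the registry advertised in the Www-Authenticate challenge, and fall back to a default pull scope only when none was given, so the default never clobbers a registry-supplied value.

```ruby
# Illustrative sketch, not kuby's actual code: prefer the scope from the
# registry's Www-Authenticate challenge, falling back to a default pull
# scope only when the registry didn't supply one.
def token_url_params(service, repo, challenge_scope)
  { service: service, scope: challenge_scope || "repository:#{repo}:pull" }
end
```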

Author

I'm not sure what the function of the body of the if statement below is, but it wasn't getting replaced for me. I'm using a Harbor registry, and it seemed like the Bearer token was getting requested with no scopes. I used Pixie to inspect the conversation, but I'm not sure I have all the details right; all I can say for sure is that several processes of different kinds (image-reflector-controller and others) were having this conversation, and the one that was failing was here.

When I added the repository scope, it no longer received a 401 on the request that uses the bearer token. (In between building the app image and starting the build for the assets image, it needs to confirm whether any prior asset image exists; it tries to list the tags and fails with a 401.)

I wouldn't really suggest this change in particular, as it's poorly thought out (this token will undoubtedly get used for other things besides pulling a specific repo); it was just the quickest way for me to confirm and resolve what was wrong in this one specific conversation, which seems to be the only place this gem and token get used by me, at least for now.
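For reference, the failing call is the standard tags/list endpoint from the Docker Registry HTTP API. A minimal sketch of building that request with the bearer token (a hypothetical helper; the names are assumptions, not kuby's code) looks like:

```ruby
require 'net/http'
require 'uri'

# Hypothetical helper (names assumed): build the GET /v2/<repo>/tags/list
# request that was failing with a 401 until the token carried the pull scope.
def build_tags_request(registry_host, repo, bearer_token)
  uri = URI("https://#{registry_host}/v2/#{repo}/tags/list")
  req = Net::HTTP::Get.new(uri)
  req['Authorization'] = "Bearer #{bearer_token}"
  [uri, req]
end
```

Sending it through Net::HTTP with a token that lacks the repository pull scope is what produces the 401 described above.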

@camertron
Member

Also, hey! LTNS

👋 😄

In case you wanted to see where it's working and for what purpose

Cool! Looks like it's working nicely :) I didn't know you could pass --push to docker build, TIL!

I haven't quite caught up with your branches yet, I went looking to see if you had support for ingress_class in them yet and it looked like you didn't, so I stuck with my weird fork for now

Oh, like the ability to use something other than ingress-nginx?

Edit: and here is the build where bundle update ran inside of GitHub Actions. You can see it's all hooked up to prebundler, which makes bundle update fast when not much has changed (almost as fast as when nothing has changed), even inside of a container build pipeline!

Sweet! I looked at the build output, looks a-ok!

@kingdonb
Author

Oh, like the ability to use something other than ingress-nginx?

I just want to set ingress_class. I have three nginx ingresses running, all with different classes, and one Traefik as well. This is a feature of the v1 Ingress that seemed to still be missing from the latest kuby-core and kube-dsl, but I might not have fully upgraded (I had some other customizations in my fork and didn't want to break everything trying to catch up with your branches, and I also upgraded to the latest Ruby and Rails... so I kept my old stable versions as much as possible for now).

looked at the build output, looks a-ok!

Yeah, this is working amazingly! I think I might even use it for my CNCF livestream next month. We're showing off image update automation and the new OCI config repo approach, which allows users of Flux to avoid adding many sprawling Git repositories to their cluster; they can just use a lightweight, flexible OCI repository to handle the config instead.

So I needed an approach to config that generates YAML manifests, and there are basically two extremes on the manifest generation train: there's something simple like envsubst, which just takes a list of variables and substitutes them in at build time, and then there's something like Kuby, which models the entire domain and universe so the user doesn't have to write or interact with a single line of YAML, only the domain representations in your chosen language.

I don't quite know of "something like Kuby" in any other language; it's a pretty unique thing IMHO, but maybe I will point this out to 1000 people on the CNCF livestream and they will tell me about similar things for Go, Scala, Rust...

The important thing for me was to get build/CI times down to the point where they can be almost as fast as Flux's CD when fully instrumented with webhooks. The whole build/deploy takes place in about 60-120 seconds, with at least 1/3 of that time usually spent waiting for build robots to pick up the job, and another 1/8 or so spent waiting for the app to pass health checks 👍
