Build FCOS using podman build
#1861
Comments
Ongoing discussions about this are happening in https://gitlab.com/fedora/bootc/tracker/-/issues/32 and coreos/rpm-ostree#5221. The other approach being explored is rechunking a derived container. It would look something like this:

```dockerfile
# build our rootfs
FROM quay.io/fedora/fedora-bootc:rawhide as rootfs
RUN --mount=type=bind,target=/run/src rpm-ostree compose apply /run/src/manifest.yaml

# rechunk
FROM quay.io/fedora/fedora-bootc:rawhide as builder
RUN --mount=type=bind,rw=true,dst=/run/src,bind-propagation=shared \
    --mount=from=rootfs,dst=/rootfs \
    rpm-ostree compose build-chunked-oci --ostree --rootfs=/rootfs --output /run/src/out.oci

# output rechunked OCI
FROM oci:./out.oci
LABEL containers.bootc 1
# <any other container-native metadata here>

# Need to reference builder here to force ordering. But since we have to run
# something anyway, we might as well clean up after ourselves.
RUN --mount=type=bind,from=builder,src=.,target=/var/tmp \
    --mount=type=bind,rw=true,dst=/buildcontext,bind-propagation=shared \
    rm -f /buildcontext/out.oci
```

If we can do coreos/rpm-ostree#5221 (comment), then it reduces to:

```dockerfile
# build our rootfs
FROM quay.io/fedora/fedora-bootc:rawhide as rootfs
RUN --mount=type=bind,rw=true,dst=/run/src,bind-propagation=shared \
    rpm-ostree compose apply /run/src/manifest.yaml && \
    rpm-ostree compose build-chunked-oci --ostree --rootfs=/ --output /run/src/out.oci

# output rechunked OCI
FROM oci:./out.oci
LABEL containers.bootc 1
# <any other container-native metadata here>

# Need to reference rootfs here to force ordering. But since we have to run
# something anyway, we might as well clean up after ourselves.
RUN --mount=type=bind,from=rootfs,src=.,target=/var/tmp \
    --mount=type=bind,rw=true,src=.,dst=/buildcontext,bind-propagation=shared \
    rm -f /buildcontext/out.oci
```
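A Containerfile like the second one above could then be built with a plain `podman build` invocation. A minimal sketch (the image tag is illustrative, and it assumes the Containerfile is saved as `Containerfile` in the current directory):

```shell
# Sketch: build the rechunked image from the Containerfile above.
# The build context must be writable, since out.oci is written back
# into it and cleaned up by the final RUN step.
podman build -t localhost/fcos-derived:test -f Containerfile .
```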
We discussed this during the community meeting today. Overall, our understanding is that the goal of this issue/request is to change our build process for FCOS to use `podman build`.
Further discussion happened about some of the nuance in how to implement this, summarized by @jlebon as:
from which we're still digesting the options, but we're generally agreed that
So when deriving, one major area that'll need to be reworked is lockfile handling. For the base tier-x image, we can pin by digest and bump that. For the layered packages, we'll still need lockfiles. Unfortunately, lockfile support in dnf5 is not ready yet, so we'll probably have to re-implement this ourselves for now (see also rpm-software-management/libpkgmanifest#6 (comment)), e.g. as part of
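For the digest-pinning piece, one way to resolve the current digest of the base image locally is with `skopeo`. A sketch, assuming `skopeo` is installed and the registry is reachable:

```shell
# Resolve the current manifest digest of the base image so a
# Containerfile can pin it as quay.io/fedora/fedora-bootc@<digest>
# instead of the moving :rawhide tag.
digest=$(skopeo inspect --format '{{.Digest}}' \
    docker://quay.io/fedora/fedora-bootc:rawhide)
echo "FROM quay.io/fedora/fedora-bootc@${digest}"
```

Bumping the pin then becomes re-running the query and committing the new digest.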
Note that a large intersection here is Fedora Konflux and the Konflux RPM lockfiles, as used by e.g. centos-bootc today.
Is there a story around honouring that in a local dev environment as well? Whatever we land on has to work in both Konflux and locally.
Note that the future viability of the workflow needed for this (as detailed in the description) is in question, so we'll need to resolve that before continuing down this path.
We are investing in supporting a post-processing flow to add rechunking as an optional secondary step, so I don't think it should be considered a blocker for switching to
Describe the enhancement
This tracking issue is the FCOS side of https://gitlab.com/fedora/bootc/tracker/-/issues/32
The way we use treefiles and git submodules is not container native and not something I'd like to support widely.
Effectively, the goal here is that the FCOS container build becomes:
Also, coreos-assembler (or the FCOS build pipeline) would need to switch to `podman build` (or equivalent).
System details
No response
Additional information
No response