This now matches the cloud images (7c760864fd), fixing bootloader
updates in the buildd images and also fixing compatibility with using
devtmpfs for losetup.
Add a build.info file to /etc/cloud with the serial information.
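A minimal sketch of what the hook could write; the IMAGE_SERIAL
variable name is an assumption for illustration:

    # Hypothetical: record the build serial inside the chroot.
    mkdir -p chroot/etc/cloud
    echo "serial: ${IMAGE_SERIAL}" > chroot/etc/cloud/build.info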
Signed-off-by: Samir Akarioh <samir.akarioh@canonical.com>
(cherry picked from commit 105acdebc7)
Commit 245f7772bd added code to abort the build if a snap wants to
install "core" (the 16.04 runtime). That's great, but there are still
some CPC-maintained image builds that use snaps based on "core". So
make it possible to continue the build if the "ALLOW_CORE_SNAP"
environment variable is set.
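A sketch of the guard this implies; the snap and snap_base variable
names are hypothetical:

    # Abort on "core"-based snaps unless the escape hatch is set.
    if [ "$snap_base" = "core" ] && [ -z "${ALLOW_CORE_SNAP:-}" ]; then
        echo "snap $snap is based on core; set ALLOW_CORE_SNAP to continue" >&2
        exit 1
    fi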
(cherry picked from commit 34735684d5)
Due to how the `disk-image` file is structured, it builds BIOS and
UEFI images at the same time. However, certain images (e.g., GCE
images) require only the UEFI image to be built; the BIOS image is
simply discarded. This results in longer build times.
Splitting out `disk-image-uefi` allows such images to use it instead
of `disk-image` and thus avoid building unused BIOS images.
`disk-image` now depends on `disk-image-uefi` for backward
compatibility.
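A hedged sketch of how the split could look in the hook series files;
the exact series syntax shown here is an assumption:

    # series/disk-image-uefi (hypothetical)
    base/disk-image-uefi.binary

    # series/disk-image keeps working by depending on the UEFI target
    depends disk-image-uefi
    base/disk-image.binary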
(cherry picked from commit b40ce74fd6)
This fixes integrity monitoring failures for GCE shielded VM instances
on focal and later. Our images are built with an empty
/boot/grub/grubenv file; however, after the first boot
`initrdless_boot_fallback_triggered` is set to 0. This change in
`grubenv` results in an integrity monitoring `lateBootReportEvent`
error.
It seems that the only thing that's checking for this `grubenv` variable
is `grub-common.service`, and it is looking specifically for a `1`
value:
    if grub-editenv /boot/grub/grubenv list | grep -q initrdless_boot_fallback_triggered=1; then
        echo "grub: GRUB_FORCE_PARTUUID set, initrdless boot paniced, fallback triggered."
    fi
Unsetting this variable instead of setting it to 0 would prevent issues
with integrity monitoring.
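For illustration, removing the variable can be done with
grub-editenv's unset subcommand:

    # Remove the variable entirely instead of writing a 0 value.
    grub-editenv /boot/grub/grubenv unset initrdless_boot_fallback_triggered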
LP: 1960537 illustrates an issue where the calls to e2fsck in the
umount_partition call fail due to an open file handle. At this time,
we are unable to find a root cause, and it's causing many builds to
fail for CPC. Add a sleep 30 as a workaround, since the file handle is
released within that timeframe. This does not address the root cause.
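A minimal sketch of the workaround's shape, assuming umount_partition
runs e2fsck shortly after unmounting; the variable names and the
function body are assumptions, not the real implementation:

    umount_partition() {
        local mountpoint=$1
        umount "$mountpoint"
        # LP: 1960537 workaround: the open file handle is released
        # within roughly 30 seconds; not a root-cause fix.
        sleep 30
        e2fsck -f -y "$rootfs_dev"
    }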
Initialize passwords from sources.list.
Use urllib everywhere. This way authentication is added to all the
required requests, incoming headers are passed to the outgoing
requests, all the response headers are passed back to the original
client, and all the TCP & HTTP errors are passed back to the client.
This should avoid hanging requests upon failure.
Also rewrite the URI when requesting things. This allows using
private-ppa.buildd outside of Launchpad.
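For illustration, the passwords being initialized come from
sources.list entries with embedded credentials; the path and
credentials below are placeholders:

    deb https://user:secret@private-ppa.buildd/ubuntu/ppa/ubuntu focal main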
Signed-off-by: Dimitri John Ledkov <xnox@ubuntu.com>
(cherry picked from commit dc2a472871)
* Simplify how the subiquity client is run on the serial console in the live
server environment, breaking a unit cycle that sometimes prevents
subiquity from starting up at all. (LP: #1888497)
* Do not set the password for the installer user via cloud-init as subiquity
can now do this itself. (LP: #1933523)
With that, the Dockerfile modifications[0] previously done externally
are now done here. That means the created rootfs tarball can be used
directly within a Dockerfile to create a container from scratch:
FROM scratch
ADD livecd.ubuntu-oci.rootfs.tar.gz /
CMD ["/bin/bash"]
[0]
https://github.com/tianon/docker-brew-ubuntu-core/blob/master/update.sh
(cherry picked from commit a81972a58b)
This is a copy of the ubuntu-base project.
Currently ubuntu-base is used as a base for the docker/OCI container
images. The rootfs tarball that is created with ubuntu-base is
published under [0]. That tarball is used in the FROM statement of the
Dockerfile as base and then a couple of modifications are done inside
of the Dockerfile[1].
The ubuntu-oci project will include the changes that are currently
done in the Dockerfile. With that:
1) a Dockerfile using that tarball will be just a few lines:
FROM scratch
ADD ubuntu-hirsute-core-cloudimg-amd64-root.tar.gz /
CMD ["/bin/bash"]
2) Ubuntu has full control over the build process of the docker/OCI
container. No external sources (like [1]) need to be modified anymore.
3) Ubuntu can publish containers without depending on the official
Docker Hub containers[2]. Currently the containers for the AWS ECR
registry[3] use the official Docker Hub containers as a base[4].
That's no longer needed, because a container just needs a Dockerfile
as described in 1).
When the ubuntu-oci project has the modifications from [1] included,
we'll also update [1] to use the ubuntu-oci rootfs tarball as a base
and drop the modifications done at [1].
Note: Creating a new ubuntu-oci project instead of using ubuntu-base
will make sure that we don't break users who are currently using
ubuntu-base rootfs tarballs for doing their own thing.
[0] https://partner-images.canonical.com/core/
[1]
https://github.com/tianon/docker-brew-ubuntu-core/blob/master/update.sh
[2] https://hub.docker.com/_/ubuntu
[3] https://gallery.ecr.aws/ubuntu/ubuntu
[4]
https://launchpad.net/~ubuntu-docker-images/ubuntu-docker-images/+oci/ubuntu/+recipe/ubuntu-20.04
(cherry picked from commit ac4a95b931)
I recently pulled the initramfs logic out of the base build hook and
dropped it into the `replace_kernel` function. Any cloud image that
does not use the generic virtual kernel is expected to call
`replace_kernel` to pull in a custom kernel. That function also
disables initramfs boot for images that use a custom kernel.
Minimal cloud images on amd64 use the linux-kvm kernel, but the build
hook did not use the `replace_kernel` function; instead, the kernel
flavor was set in `auto/config`. I pulled that logic out of
`auto/config` and now call `replace_kernel` in the build hook.
I also moved the call that generates the package list so that it picks
up the change to the linux-kvm kernel.
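A hedged sketch of the resulting hook call, assuming replace_kernel
takes a mount point and a kernel package name:

    # Hypothetical excerpt from the minimal amd64 build hook.
    if [ "$ARCH" = "amd64" ]; then
        replace_kernel chroot linux-kvm
    fi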
Change the ESP mount option for ubuntu-cpc images from "defaults" to
"umask=0077". ESP partitions might contain sensitive data, and
non-root users shouldn't have read access to them.
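For illustration, the resulting fstab entry would look roughly like
this; the label and mount point are assumptions:

    LABEL=UEFI  /boot/efi  vfat  umask=0077  0  1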