fuse3 was previously installed through Recommends, but with minimized
images we no longer install recommended packages. It is only required
when preseeding snaps, so it does not need to be present in all
minimized images and therefore does not need to be in the cloud-minimal
seed.
During the Realtime kernel image build, snap seed validation failed:
derivative images copied the 5.19 apparmor features, which cannot be
validated when the Realtime kernel (5.15) is installed [0].
To prevent this, tie the apparmor features used for validation to the
installed kernel version.
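A minimal sketch of the idea, assuming per-series feature directories
(all paths and names below are hypothetical, not the actual
livecd-rootfs layout):

    # Pick the apparmor feature set that matches the kernel installed
    # in the chroot (assumes a single installed kernel).
    kver="$(basename chroot/boot/vmlinuz-*)"
    kver="${kver#vmlinuz-}"                    # e.g. 5.15.0-1034-realtime
    series="$(echo "$kver" | cut -d. -f1-2)"   # e.g. 5.15
    cp -a "apparmor-features/$series/." chroot/usr/lib/snapd/apparmor-features/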
[0] https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/2024639
(cherry picked from commit 6b54faa6be)
Canonical Public Cloud's project seems a bad place to build images for
hardware devices; however, this is how things were done and we now need
to maintain it.
The recent change to mount the ESP on /boot breaks those images.
Instead of adding more hacks to the hook, create a dedicated target for
those images and use a different hook to build UEFI images.
This is driven by online encryption scenarios. In order to efficiently
encrypt the root filesystem without modifying the partition layout, the
kernel should sit in an unencrypted /boot partition. Instead of
creating a new partition that would change the default partition layout,
we mount the ESP on /boot. We also need to then bind mount /boot on
/boot/efi because that's where Grub expects the ESP to be located.
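Conceptually the mount stack ends up as follows (a sketch; the device
label is illustrative):

    # Mount the ESP itself on /boot so the kernel lives on the
    # unencrypted partition, then bind-mount it where GRUB expects it.
    mount /dev/disk/by-label/UEFI /boot
    mount --bind /boot /boot/efi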
kpartx on riscv64 appears to be racy. Rather than trying to debug these
fraught races somewhere between udev and libdevmapper, we can use
losetup, which should be simpler and less error-prone.
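For comparison, a losetup-based flow looks roughly like this (image
path and mountpoint are illustrative):

    # -P asks the kernel to scan the partition table, giving ${loop}p1, ...
    loop="$(losetup --show -f -P binary/disk.img)"
    mount "${loop}p1" mountpoint
    # ... populate the filesystem ...
    umount mountpoint
    losetup -d "$loop"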
Cloud-init cannot write directly to
/etc/NetworkManager/system-connections because subiquity may
need to emit config to /etc/netplan/00-installer.yaml and call
netplan apply for autoinstall.network use-cases.
When cloud-init's config is written directly to
/etc/NetworkManager, neither netplan nor subiquity has knowledge of
this config, and this results in namespace collisions in NetworkManager
due to `netplan-` named connections and `cloud-init` connection ids
fighting over which config owns a given interface name.
Deleting this config overlay allows subiquity to manage all network
setup when it needs to with netplan directly.
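In hook terms this is just removing the overlay file; the filename
below is hypothetical:

    # Drop the overlay that made cloud-init write NetworkManager
    # keyfiles directly; without it, cloud-init renders netplan config.
    rm -f chroot/etc/cloud/cloud.cfg.d/99-nm-renderer.cfg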
Subiquity already has logic to rename any unwanted netplan
configuration when it intends to write config and run netplan apply [1].
This should allow subiquity full control of network config when needed.
[1] https://github.com/canonical/subiquity/blob/92ac6544cdfedfd332d8cd94dbcfad0aab994575/subiquitycore/controllers/network.py#L267
LP: #2015605
Autoinstall directives can be provided on the grub cmdline to
cloud-init via kernel parameters like the following:
    autoinstall 'ds=nocloud-net;s=http://somedomain/'
In order to support DNS resolution for the NoCloud datasource at
datasource discovery time, cloud-init.service needs to be ordered
after NetworkManager.service and NetworkManager-wait-online.service,
which will have brought up the applicable NICs.
Since NetworkManager is After=dbus.service, cloud-init.service avoids
systemd ordering cycles by also dropping Before=sysinit.target when it
adds After=NetworkManager.service and
After=NetworkManager-wait-online.service.
Add this file overlay for /lib/systemd/system/cloud-init.service
because systemd drop-in files can only add constraints and cannot
drop preexisting service constraints.
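A sketch of deriving that overlay from the packaged unit (the sed
script is illustrative, not the shipped hook):

    # A drop-in could add the After= lines but never remove
    # Before=sysinit.target, hence the full-file overlay.
    sed -e '/^Before=sysinit.target$/d' \
        -e '/^\[Unit\]$/a After=NetworkManager.service' \
        -e '/^\[Unit\]$/a After=NetworkManager-wait-online.service' \
        /lib/systemd/system/cloud-init.service \
        > chroot/lib/systemd/system/cloud-init.service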
Also add an AUTOMATION_HEADER comment to any generated files to
aid discoverability in the event of future bugs/concerns.
LP: #2008952
We add an ubuntu user inside the image because we want to have an
operational non-root user and also to be aligned with the other
Ubuntu images.
Signed-off-by: Samir Akarioh <samir.akarioh@canonical.com>
Commit 245f7772bd added code to abort the build if a snap wants to
install "core" (the 16.04 runtime). That's great, but there are still
some CPC-maintained image builds that use snaps based on "core", so
make it possible to continue the build if the "ALLOW_CORE_SNAP"
environment variable is set.
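The escape hatch presumably amounts to something like this (variable
names are illustrative):

    # Abort on the EOL "core" base unless the build explicitly opts in.
    if [ "$base" = "core" ] && [ -z "${ALLOW_CORE_SNAP:-}" ]; then
        echo "snap $snap is based on 'core'; set ALLOW_CORE_SNAP to proceed" >&2
        exit 1
    fi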
This fixes GCE shielded VM instances integrity monitoring failures on
focal and later. Our images are built with an empty /boot/grub/grubenv
file, however after the first boot `initrdless_boot_fallback_triggered`
is set to 0. This change in `grubenv` results in integrity monitoring
`lateBootReportEvent` error.
It seems that the only thing that's checking for this `grubenv` variable
is `grub-common.service`, and it is looking specifically for a `1`
value:
    if grub-editenv /boot/grub/grubenv list | grep -q initrdless_boot_fallback_triggered=1; then
        echo "grub: GRUB_FORCE_PARTUUID set, initrdless boot paniced, fallback triggered."
    fi
Unsetting this variable instead of setting it to 0 would prevent issues
with integrity monitoring.
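That is, instead of writing a 0, run the equivalent of:

    # Remove the variable entirely so grubenv matches the pristine file
    # shipped in the image.
    grub-editenv /boot/grub/grubenv unset initrdless_boot_fallback_triggered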
LP: 1960537 illustrates an issue where the calls to e2fsck in
umount_partition fail due to an open file handle. At this time we are
unable to find a root cause, and it is causing many builds to fail for
CPC. Add a sleep 30 as a workaround, since the file handle is released
within that timeframe. This does not address the root cause.
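Schematically (function body simplified):

    umount "$mountpoint"
    sleep 30            # workaround: let the stray file handle close
    e2fsck -fy "$dev"   # previously failed here with an open file handle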
livecd-rootfs creates non-private mounts. When building locally using
the auto/build script, unmounting fails.
To unmount dev/pts it is insufficient to make that mount private; its
parents must be private too. Change teardown_mountpoint() accordingly.
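A sketch of the idea (the exact list of mountpoints is illustrative):

    # Propagation must be private on the parents as well, or unmounting
    # dev/pts inside the chroot fails.
    for mp in "$rootdir" "$rootdir/dev" "$rootdir/dev/pts"; do
        mount --make-private "$mp"
    done
    umount "$rootdir/dev/pts"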
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
LP: 1944004 described an issue where a libc transition caused snapd
seccomp profiles to reference a path that no longer existed, leading to
permission denied errors. The committed fix for snapd then raised an
issue where running `snapd debug seeding` would present a
preseed-system-key and seed-restart-system-key due to a mismatch
between the running kernel capabilities and the profiles being loaded by
snapd. By mounting a cgroup2 type to /sys/fs/cgroup, the capabilities
match for snapd as mounted in the chroot. This is done similarly to
live-build/functions:138-140, where apparmor and seccomp actions are
mounted after updating the buildd.
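That is, before preseeding, the hook does the equivalent of:

    # Mount cgroup2 in the chroot so snapd sees the same cgroup
    # capabilities during preseeding as it will on the booted system.
    mount -t cgroup2 cgroup2 chroot/sys/fs/cgroup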
With that, the Dockerfile modifications[0] currently done externally
are now done here. That means the created rootfs tarball can be
used directly within a Dockerfile to create a container from scratch:
    FROM scratch
    ADD livecd.ubuntu-oci.rootfs.tar.gz /
    CMD ["/bin/bash"]
[0] https://github.com/tianon/docker-brew-ubuntu-core/blob/master/update.sh
This is a copy of the ubuntu-base project.
Currently ubuntu-base is used as a base for the docker/OCI container
images. The rootfs tarball that is created with ubuntu-base is
published under [0]. That tarball is used in the FROM statement of the
Dockerfile as the base, and then a couple of modifications are done
inside the Dockerfile[1].
The ubuntu-oci project will include the changes that are currently
done in the Dockerfile. With that:
1) a Dockerfile using that tarball becomes just a few lines:
    FROM scratch
    ADD ubuntu-hirsute-core-cloudimg-amd64-root.tar.gz /
    CMD ["/bin/bash"]
2) Ubuntu has full control over the build process of the
docker/OCI container. No external sources (like [1]) need to be
modified anymore.
3) Ubuntu can publish containers without depending on the official
dockerhub containers[2]. Currently the containers for the AWS ECR
registry[3] use the official dockerhub containers as a base[4]. That's
no longer needed, because a container just needs the Dockerfile
described in 1).
Once the ubuntu-oci project includes the modifications from [1],
we'll also update [1] to use the ubuntu-oci rootfs tarball as a base
and drop the modifications done there.
Note: Creating a new ubuntu-oci project instead of using ubuntu-base
will make sure that we don't break users who are currently using
ubuntu-base rootfs tarballs for doing their own thing.
[0] https://partner-images.canonical.com/core/
[1] https://github.com/tianon/docker-brew-ubuntu-core/blob/master/update.sh
[2] https://hub.docker.com/_/ubuntu
[3] https://gallery.ecr.aws/ubuntu/ubuntu
[4] https://launchpad.net/~ubuntu-docker-images/ubuntu-docker-images/+oci/ubuntu/+recipe/ubuntu-20.04
One can call divert_grub; replace_kernel; undivert_grub.
replace_kernel calls into force_boot_without_initramfs, which under
certain conditions calls divert_grub and undivert_grub itself,
resulting in undivert_grub being called twice in a row.
When undivert_grub is called twice in a row it wipes the
systemd-detect-virt binary from disk: if the systemd package is
installed, the rm call is not guarded by a check that there is
something to undivert, and if the systemd package is not installed,
it does not check that the systemd-detect-virt file is in fact the
one divert_grub created.
Add a guard to check that systemd-detect-virt is the placeholder one
before removing it.
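A sketch of the guard (the placeholder marker is illustrative; the
real check matches whatever divert_grub wrote):

    # Only remove systemd-detect-virt when it is our placeholder, never
    # the real binary shipped by the systemd package.
    if [ -f chroot/usr/bin/systemd-detect-virt ] && \
       grep -q "placeholder installed by divert_grub" \
           chroot/usr/bin/systemd-detect-virt; then
        rm chroot/usr/bin/systemd-detect-virt
    fi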
LP: #1902260