Canonical Public Cloud's project seems a bad place to build images for
hardware devices; however, this is how things were done and we now need
to maintain it.
The recent change to mount the ESP on /boot breaks those images.
Instead of adding more hacky things to the hook, create a dedicated
target for those images and use a different hook to build UEFI images.
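A minimal sketch of the shape of the change (the target and hook names
here are hypothetical, not the actual ones):

  # auto/config: give device images a dedicated target so the generic
  # cloud hook can follow the ESP-on-/boot change unhindered.
  case $IMAGE_TARGETS in
      *device-uefi*)
          # separate hook that assembles the UEFI device image
          cp "$HOOKS_EXTRA/build-device-uefi-image.binary" config/hooks/
          ;;
  esac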
live-build/auto/config:
- for Ubuntu Server live images and the arm64+tegra full arch, build a
tegra variant with nvidia-tegra as the flavor and linux-nvidia-tegra
as the kernel meta-package
- default to nvidia-$SUBARCH as the kernel flavor for all images using
arm64+tegra as full arch
hooks/03-kernel-metapkg.chroot_early:
- use linux-nvidia-tegra as kernel meta-package for the nvidia-tegra
flavor
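A sketch of the resulting selection logic, following auto/config's
existing case-statement style (variable names per live-build
convention; the exact lines may differ):

  # live-build/auto/config
  case $ARCH+$SUBARCH in
      arm64+tegra)
          KERNEL_FLAVOURS="nvidia-$SUBARCH"   # i.e. nvidia-tegra
          ;;
  esac

  # hooks/03-kernel-metapkg.chroot_early
  case $FLAVOUR in
      nvidia-tegra)
          METAPKG=linux-nvidia-tegra
          ;;
  esac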
SUBARCH=visionfive2 is used to build images for the StarFive VisionFive 2
boards. For the device-tree we assume board revision 1.3B.
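As an illustration, the revision assumption boils down to picking the
matching device-tree from the kernel tree (the dtb path exists in the
upstream kernel; the surrounding shell is a sketch):

  case $SUBARCH in
      visionfive2)
          # board revision 1.3B device-tree
          DTB=starfive/jh7110-starfive-visionfive-2-v1.3b.dtb
          ;;
  esac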
Signed-off-by: Heinrich Schuchardt <heinrich.schuchardt@canonical.com>
Evolve the seed to ship only the parts useful to WSL users. This
allows us to trim down the image size.
Co-authored-by: Jean-Baptiste Lallement <jean-baptiste@ubuntu.com>
So that people without network access can download the package and
install it using a USB drive, for example.
Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com>
This reverts commit d5a7d6655f.
The Wi-Fi driver package is in universe and can't be promoted in time
for the release, so revert this.
Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com>
The LicheeRV Dock board comes with only 512MB of DRAM, so the only
difference from a Nezha image is that we have to remove the
cryptsetup-initramfs package, which otherwise makes the initrd too big
for the board to boot.
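A sketch of the corresponding chroot-hook logic (the hook context is
illustrative):

  # Keep the initrd small enough for the LicheeRV Dock's 512MB of DRAM.
  apt-get purge -y cryptsetup-initramfs
  update-initramfs -u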
Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com>
For now, all RISC-V hardware is an SBC-like board that embeds a Wi-Fi
chipset, so install wpasupplicant by default. We'll certainly split
the seeds between server and embedded hardware later.
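In seed terms this is a one-line addition (placement within the seed
is illustrative):

  * wpasupplicant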
Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com>
The image created uses a UEFI bootflow, so we install grub for this
board only. We also need flash-kernel to install the dtb where grub
can find it.
This image is specifically architected so that it can be installed on
a "factory" board, meaning one using the U-Boot firmware originally
implemented for Fedora, so we need the p3 partition that embeds a
uEnv.txt file telling U-Boot what and where to load as the next stage.
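A hypothetical uEnv.txt along these lines (load and bootefi are
standard U-Boot commands; the exact values shipped may differ):

  # Read by the factory U-Boot from partition p3; chains into the UEFI
  # next stage (grub) on the ESP.
  bootcmd=load mmc 0:2 ${kernel_addr_r} /EFI/BOOT/BOOTRISCV64.EFI && bootefi ${kernel_addr_r}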
Signed-off-by: Alexandre Ghiti <alexandre.ghiti@canonical.com>
Germinate doesn't take very long at all to run, but downloading the
indices it operates on can take a while, and nothing else in
auto/config takes that long, so not doing it every time you run
"lb config" can be a real time saver.
The code that invokes germinate already checked whether the output was
already there, but the output had been unconditionally deleted by the
time control got to that point.
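A sketch of the resulting guard (the path and the run_germinate helper
are illustrative):

  # Only download indices and run germinate on a cold cache;
  # crucially, stop deleting the old output before this check runs.
  if [ ! -d config/germinate-output ]; then
      run_germinate
  fi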
For the live server build, I want to make a layer to install the
kernel into, but I do not want the layer itself to be published.
The implementation is a bit clunky, but it works.
Some packages are in universe at release time and are then promoted to
main via the -updates pocket during the release lifecycle.
These packages should be considered by germinate when the root fs is
built (LP: #1921862)
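A sketch of the corresponding invocation (flags modeled on germinate's
CLI; the dist and component values are illustrative):

  # Let germinate see -updates too, so packages promoted to main after
  # release are still resolved into the right component.
  germinate -S "$SEED_SOURCE" -m "$MIRROR" \
      -d "$SUITE,$SUITE-updates" -c main,universe -a "$ARCH"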
Co-authored-by: Didier Roche <didrocks@ubuntu.com>
This is a copy of the ubuntu-base project.
Currently ubuntu-base is used as a base for the docker/OCI container
images. The rootfs tarball that is created with ubuntu-base is
published under [0]. That tarball is used in the FROM statement of the
Dockerfile as base, and then a couple of modifications are made inside
the Dockerfile[1].
The ubuntu-oci project will include the changes that are currently
done in the Dockerfile. With that:
1) a Dockerfile using that tarball will be just a three-line thing:

  FROM scratch
  ADD ubuntu-hirsute-core-cloudimg-amd64-root.tar.gz /
  CMD ["/bin/bash"]
2) Ubuntu has full control over the build process of the
docker/OCI container. No external sources (like [1]) need to be
modified anymore.
3) Ubuntu can publish containers without depending on the official
dockerhub containers[2]. Currently the containers for the AWS ECR
registry[3] use as a base[4] the official dockerhub containers. That's
no longer needed because a container just needs the Dockerfile
described in 1)
When the ubuntu-oci project has the modifications from [1] included,
we'll also update [1] to use the ubuntu-oci rootfs tarball as a base
and drop the modifications done at [1].
Note: Creating a new ubuntu-oci project instead of using ubuntu-base
will make sure that we don't break users who are currently using
ubuntu-base rootfs tarballs for doing their own thing.
[0] https://partner-images.canonical.com/core/
[1] https://github.com/tianon/docker-brew-ubuntu-core/blob/master/update.sh
[2] https://hub.docker.com/_/ubuntu
[3] https://gallery.ecr.aws/ubuntu/ubuntu
[4] https://launchpad.net/~ubuntu-docker-images/ubuntu-docker-images/+oci/ubuntu/+recipe/ubuntu-20.04
shim-signed depends on grub-efi-amd64-signed, which in turn has an
alternative dependency on `grub-efi-amd64 | grub-pc`. However, to
support booting both via shim and signed GRUB (UEFI) and via BIOS, the
choice must be made to install grub-pc, not grub-efi-amd64.
This makes the images consistent with the Ubuntu Desktop, Live Server,
and bootable buildd images, all of which already install grub-pc and
shim-signed.
LP: #1901906
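The fix amounts to making apt's alternative choice explicit when the
bootloader packages are installed (a sketch; the hook context is
illustrative):

  # Installing grub-pc in the same transaction satisfies the
  # "grub-efi-amd64 | grub-pc" alternative, so apt no longer pulls in
  # grub-efi-amd64 by default.
  apt-get install -y grub-pc shim-signed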
When desktop-preinstalled image options were added in
38157b3748 for the raspi subarch, the
options listed there were not scoped to the raspi subarch. This
resulted in those options also being applied to the HYPERV
ubuntu:desktop-preinstalled image.
Thus, scope the newly added options under the raspi subarch case only.
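A sketch of the intended scoping in auto/config (the option shown is
purely illustrative):

  case $ARCH+$SUBARCH in
      arm64+raspi)
          # desktop-preinstalled options that are raspi-specific
          OPTS="${OPTS} --initramfs none"
          ;;
  esac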
A regression was introduced in 38157b3748 when the
desktop-preinstalled code branch was added: it dropped adding the
ubuntu-desktop task. Instead it added the ubuntu-desktop-raspi task,
only for the raspi subarch, which depends on ubuntu-desktop. The
hyperv case thus ended up without the ubuntu-desktop task.
It looks like the introduction of "desktop-preinstalled" assumed that
it was for raspi only, when in fact that code path is now also used
for the hyperv gallery image.
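A sketch of the fixed task selection (add_task follows livecd-rootfs's
auto/config convention; the branch layout is illustrative):

  case $SUBARCH in
      raspi)
          # pulls in ubuntu-desktop via its dependency
          add_task install ubuntu-desktop-raspi
          ;;
      *)
          # covers the hyperv gallery image path
          add_task install ubuntu-desktop
          ;;
  esac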