mirror of
https://git.launchpad.net/livecd-rootfs
synced 2026-03-24 18:51:10 +00:00
Compare commits
No commits in common. "ubuntu/master" and "26.04.13" have entirely different histories.
ubuntu/master
...
26.04.13
@ -1,12 +0,0 @@
pipeline:
- [lint]

jobs:
  lint:
    series: noble
    architectures: amd64
    packages:
      - black
      - mypy
      - python3-flake8
    run: ./check-lint
@ -46,8 +46,6 @@ These variables can be set for both lb config and lb build:

PROJECT (mandatory, comes from "project" in the metadata)
ARCH (set to the abi tag of the distroarchseries being built for)
-ARCH_VARIANT (set to the isa tag of the distroarchseries being built for if it is
-    different from the abi tag)
SUBPROJECT (optional, comes from "subproject" in the metadata)
SUBARCH (optional, comes from "subarch" in the metadata)
CHANNEL (optional, comes from "channel" in the metadata)
@ -76,8 +74,6 @@ are some things set for lb config only? no idea):
    "extra_ppas" is a list. EXTRA_PPAS is set to " ".join(extra_ppas))
EXTRA_SNAPS (optional, comes from "extra_snaps" in the metadata
    "extra_snaps" is a list. EXTRA_SNAPS is set to " ".join(extra_snaps))
-BUILD_TYPE (optional, the "type" (i.e. Daily or Release) of ISO being built,
-    goes into .disk/info on the ISO, defaults to Daily)

Here is an opinionated and slightly angry attempt to describe what
each of these is for:
@ -100,14 +96,6 @@ The architecture being built for. This is always the same as `dpkg

It's kind of redundant but it's not really a problem that this exists.

-ARCH_VARIANT
-------------
-
-The "variant" being built for, i.e. the ISA tag of the
-distroarchseries. Only set if this is different from the ABI tag.
-
-This is definitely needed to be able to build images for variants.
-
SUBPROJECT
----------

@ -246,18 +234,3 @@ EXTRA_SNAPS
-----------

Extra snaps to include (but only for ubuntu-image based builds).
-
-BUILD_TYPE
-----------
-
-Before release, the .disk/info on an ISO looks like:
-
-    Ubuntu-Server 26.04 LTS "Resolute Raccoon" - Daily amd64 (20260210)
-
-after release it looks like:
-
-    Ubuntu-Server 26.04 LTS "Resolute Raccoon" - Release amd64 (20270210)
-
-We could do a livecd-rootfs upload to change this (it only changes
-once per cycle), but it's quicker and easier to manage this from the
-code that triggers the livefs builds.
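As a sketch of how a .disk/info line like the ones above is assembled, here is a hypothetical helper (the real string is produced by the gen-iso-ids/isobuild tooling; this function and its parameter names are illustrative only):

```python
def disk_info(capproject, version, codename, build_type, arch, serial):
    # BUILD_TYPE ("Daily" or "Release") is the only part that changes at release
    return f'{capproject} {version} "{codename}" - {build_type} {arch} ({serial})'

print(disk_info("Ubuntu-Server", "26.04 LTS", "Resolute Raccoon",
                "Daily", "amd64", "20260210"))
```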
check-lint
@ -1,11 +0,0 @@
#!/bin/sh

set -eux

export MYPYPATH=live-build
mypy live-build/isobuilder live-build/isobuild
mypy live-build/gen-iso-ids

black --check live-build/isobuilder live-build/isobuild live-build/gen-iso-ids

python3 -m flake8 --max-line-length 88 --ignore E203 live-build/isobuilder live-build/isobuild live-build/gen-iso-ids
debian/changelog (vendored)
@ -1,126 +1,3 @@
livecd-rootfs (26.04.25) resolute; urgency=medium

  * bake LIVECD_ROOTFS_ROOT into config/functions, fixing some build failures
    (for at least ubuntu and some ubuntu-cpc configurations).

 -- Michael Hudson-Doyle <michael.hudson@ubuntu.com>  Fri, 20 Mar 2026 06:47:44 +1300

livecd-rootfs (26.04.24) resolute; urgency=medium

  [ Allen Abraham ]
  * Added a hook to produce a working minimal Ubuntu image using imagecraft

  [ Michael Hudson-Doyle ]
  * Various quality of life improvements for hacking on livecd-rootfs:
    - Add a "ubuntu-test-iso" project that builds a not very useful ISO in
      2-5 minutes.
    - Add a build-livefs script that takes care of copying the auto scripts
      and invoking lb clean/config/build with the right environment.
    - Add a build-livefs-lxd script to run the above script in a lxd vm.

 -- Michael Hudson-Doyle <michael.hudson@ubuntu.com>  Mon, 16 Mar 2026 11:05:13 +1300

livecd-rootfs (26.04.23) resolute; urgency=medium

  [ Tobias Heider ]
  * Fix ISO builds when KERNEL_FLAVOUR != generic.

 -- Michael Hudson-Doyle <michael.hudson@ubuntu.com>  Mon, 02 Mar 2026 10:51:47 +1300

livecd-rootfs (26.04.22) resolute; urgency=medium

  [ Oliver Gayot ]
  * Pull the model from Launchpad's lp:canonical-models repo, instead of
    having it uploaded as part of livecd-rootfs. This indirection makes it
    possible to update the models without requiring a new upload of
    livecd-rootfs every time.

  [ Michael Hudson-Doyle ]
  * Fix two more problems with livefs-built ISOs:
    - Generate the for-iso squashfs in the right place for Kubuntu.
    - Fix confusion about the kernel path on the ISO on riscv64.

  [ Tobias Heider ]
  * Fix pool generation when using extra_ppas.

 -- Michael Hudson-Doyle <michael.hudson@ubuntu.com>  Thu, 26 Feb 2026 10:56:42 +1300

livecd-rootfs (26.04.21) resolute; urgency=medium

  [ Dan Bungert ]
  * Update new signed models to ship latest nvidia drivers for ubuntu hybrid.

 -- Didier Roche-Tolomelli <didrocks@ubuntu.com>  Wed, 25 Feb 2026 08:38:32 +0100

livecd-rootfs (26.04.20) resolute; urgency=medium

  [ Michael Raymond ]
  * Bug-fix: Only use main archive keyring when building with debootstrap
    so EOL release signatures can be verified after EOL.

  [ Allen Abraham ]
  * Make SBOM generation optional in create_manifest function.

  [ Michael Hudson-Doyle ]
  * 030-ubuntu-live-system-seed.binary: do not run if there is no layer to
    install the system, in particular on arm64.
  * Fix some path confusion in the new isobuilder.boot package and refactor
    grub config generation to be more string based.

 -- Michael Hudson-Doyle <michael.hudson@ubuntu.com>  Fri, 20 Feb 2026 12:45:41 +1300

livecd-rootfs (26.04.19) resolute; urgency=medium

  * Translate the debian-cd tools/boot/$series/boot-$arch scripts to Python
    and use that to make ISOs bootable rather than cloning debian-cd.

 -- Michael Hudson-Doyle <michael.hudson@ubuntu.com>  Tue, 17 Feb 2026 11:16:43 +1300

livecd-rootfs (26.04.18) resolute; urgency=medium

  [ Michael Hudson-Doyle ]
  * document ARCH_VARIANT and BUILD_TYPE in README.parameters
  * isobuilder: pass ignore_dangling_symlinks=True when copying apt config

 -- Utkarsh Gupta <utkarsh@ubuntu.com>  Mon, 16 Feb 2026 16:14:03 +0530

livecd-rootfs (26.04.17) resolute; urgency=medium

  * desktop: build the stable ISO using the stable model - essentially
    reverting all the hacks.
  * desktop: update the stable model to the latest. It has:
    - components defined for the 6.19 kernel (nvidia 580 series)
    - no core26: for TPM/FDE recovery testing, please install the core26
      snap from edge.

 -- Olivier Gayot <olivier.gayot@canonical.com>  Thu, 12 Feb 2026 10:25:15 +0100

livecd-rootfs (26.04.16) resolute; urgency=medium

  * Rename ISO_STATUS to BUILD_TYPE for image builds.

 -- Utkarsh Gupta <utkarsh@debian.org>  Thu, 12 Feb 2026 01:41:11 +0530

livecd-rootfs (26.04.15) resolute; urgency=medium

  [ Matthew Hagemann ]
  * desktop: delay display manager starting until snapd seeding completes

  [ Michael Hudson-Doyle ]
  * Make an ISO in the livefs build when building an installer.

 -- Michael Hudson-Doyle <michael.hudson@ubuntu.com>  Wed, 11 Feb 2026 10:04:37 +1300

livecd-rootfs (26.04.14) resolute; urgency=medium

  [ Olivier Gayot ]
  * desktop: build stable image with snapd from beta. Snapd 2.74 has just
    been uploaded to beta. Let's stop using the version declared in the
    dangerous model.

  [ Didier Roche-Tolomelli ]
  * desktop: add (commented out) config to force reexecution of snapd snap
    version

 -- Olivier Gayot <olivier.gayot@canonical.com>  Thu, 22 Jan 2026 10:13:36 +0100

livecd-rootfs (26.04.13) resolute; urgency=medium

  * Bootstrap and install variant packages if ARCH_VARIANT is set.

debian/control (vendored)
@ -37,7 +37,6 @@ Depends: ${misc:Depends},
         procps,
         python3,
         python3-apt,
-        python3-click,
         python3-launchpadlib [!i386],
         python3-yaml,
         qemu-utils [!i386],

debian/livecd-rootfs.links (vendored)
@ -1 +0,0 @@
usr/share/livecd-rootfs/live-build/build-livefs usr/bin/build-livefs
@ -208,22 +208,6 @@ EOF
	undivert_update_initramfs
	undivert_grub chroot
fi
-if [ "${MAKE_ISO}" = yes ]; then
-	isobuild init --disk-info "$(cat config/iso-ids/disk-info)" --series "${LB_DISTRIBUTION}" --arch "${ARCH}"
-	# Determine which chroot directory has the apt configuration to use.
-	# Layered builds (PASSES set) create overlay directories named
-	# "overlay.base", "overlay.live", etc. - we use the first one (base).
-	# Single-pass builds use the "chroot" directory directly.
-	if [ "${PASSES}" ]; then
-		CHROOT="overlay.$(set -- $PASSES; echo $1)"
-	else
-		CHROOT=chroot
-	fi
-	isobuild setup-apt --chroot $CHROOT
-	if [ -n "${POOL_SEED_NAME}" ]; then
-		isobuild generate-pool --package-list-file "config/germinate-output/${POOL_SEED_NAME}"
-	fi
-fi
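The `overlay.$(set -- $PASSES; echo $1)` idiom above selects the first word of PASSES. The same rule can be sketched as a small Python helper (hypothetical, not part of the build scripts):

```python
def chroot_dir(passes: str) -> str:
    """Mirror of the shell logic: layered builds (PASSES set) read apt
    configuration from "overlay.<first pass>"; single-pass builds use
    the plain "chroot" directory."""
    words = passes.split()
    return f"overlay.{words[0]}" if words else "chroot"

print(chroot_dir("base live"))  # overlay.base
print(chroot_dir(""))           # chroot
```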

if [ -d chroot/etc/apt/preferences.d.save ]; then
	# https://mastodon.social/@scream@botsin.space
@ -375,7 +359,7 @@ EOF
	(cd chroot && find usr/share/doc -maxdepth 1 -type d | xargs du -s | sort -nr)
	echo END docdirs

-	${LIVECD_ROOTFS_ROOT}/minimize-manual chroot
+	/usr/share/livecd-rootfs/minimize-manual chroot

	clean_debian_chroot
fi
@ -443,6 +427,13 @@ if [ -e config/manifest-minimal-remove ]; then
	cp config/manifest-minimal-remove "$PREFIX.manifest-minimal-remove"
fi

+for ISO in binary.iso binary.hybrid.iso; do
+	[ -e "$ISO" ] || continue
+	ln "$ISO" "$PREFIX.iso"
+	chmod 644 "$PREFIX.iso"
+	break
+done
+
if [ -e "binary/$INITFS/filesystem.dir" ]; then
	(cd "binary/$INITFS/filesystem.dir/" && tar -c --sort=name --xattrs *) | \
		gzip -9 --rsyncable > "$PREFIX.rootfs.tar.gz"
@ -567,28 +558,3 @@ case $PROJECT in
ubuntu-cpc)
	config/hooks.d/remove-implicit-artifacts
esac
-
-if [ "${MAKE_ISO}" = "yes" ]; then
-	# Link build artifacts with "for-iso." prefix for isobuild to consume.
-	# Layered builds create squashfs via lb_binary_layered (which already
-	# creates for-iso.*.squashfs files). Single-pass builds only have
-	# ${PREFIX}.squashfs, which does not contain cdrom.sources, so we
-	# create a for-iso.filesystem.squashfs that does.
-	if [ -z "$PASSES" ]; then
-		isobuild generate-sources --mountpoint=/cdrom > chroot/etc/apt/sources.list.d/cdrom.sources
-		create_squashfs chroot ${PWD}/for-iso.filesystem.squashfs
-	fi
-	# Link kernel and initrd files. The ${thing#${PREFIX}} expansion strips
-	# the PREFIX, so "livecd.ubuntu-server.kernel-generic" becomes
-	# "for-iso.kernel-generic".
-	for thing in ${PREFIX}.kernel-* ${PREFIX}.initrd-*; do
-		for_iso_path=for-iso${thing#${PREFIX}}
-		if [ ! -f $for_iso_path ]; then
-			ln -v $thing $for_iso_path
-		fi
-	done
-	isobuild add-live-filesystem --artifact-prefix for-iso.
-	isobuild make-bootable --project "${PROJECT}" --capproject "$(cat config/iso-ids/capproject)" \
-		${SUBARCH:+--subarch "${SUBARCH}"}
-	isobuild make-iso --volid "$(cat config/iso-ids/vol-id)" --dest ${PREFIX}.iso
-fi

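The comment above describes the `${thing#${PREFIX}}` expansion; in Python terms it is a prefix strip. An illustrative sketch (helper name is hypothetical):

```python
def for_iso_name(artifact: str, prefix: str) -> str:
    # ${artifact#${prefix}}: remove prefix from the front, keep the rest
    return "for-iso" + artifact.removeprefix(prefix)

print(for_iso_name("livecd.ubuntu-server.kernel-generic", "livecd.ubuntu-server"))
```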
@ -1,8 +1,6 @@
#!/bin/bash
set -e

-LIVECD_ROOTFS_ROOT=${LIVECD_ROOTFS_ROOT:-/usr/share/livecd-rootfs}
-
case $ARCH:$SUBARCH in
	amd64:|amd64:generic|amd64:intel-iot|\
	arm64:|arm64:generic|arm64:raspi|arm64:snapdragon|arm64:nvidia|\
@ -49,14 +47,12 @@ if [ -z "$MIRROR" ]; then
fi

mkdir -p config
-echo "LIVECD_ROOTFS_ROOT=\"$LIVECD_ROOTFS_ROOT\"" > config/functions
-chmod --reference=${LIVECD_ROOTFS_ROOT}/live-build/functions config/functions
-cat ${LIVECD_ROOTFS_ROOT}/live-build/functions >> config/functions
-cp -af ${LIVECD_ROOTFS_ROOT}/live-build/lb_*_layered config/
-cp -af ${LIVECD_ROOTFS_ROOT}/live-build/snap-seed-parse.py config/snap-seed-parse
-cp -af ${LIVECD_ROOTFS_ROOT}/live-build/snap-seed-missing-providers.py config/snap-seed-missing-providers
-cp -af ${LIVECD_ROOTFS_ROOT}/live-build/expand-task config/expand-task
-cp -af ${LIVECD_ROOTFS_ROOT}/live-build/squashfs-exclude-files config/
+cp -af /usr/share/livecd-rootfs/live-build/functions config/functions
+cp -af /usr/share/livecd-rootfs/live-build/lb_*_layered config/
+cp -af /usr/share/livecd-rootfs/live-build/snap-seed-parse.py config/snap-seed-parse
+cp -af /usr/share/livecd-rootfs/live-build/snap-seed-missing-providers.py config/snap-seed-missing-providers
+cp -af /usr/share/livecd-rootfs/live-build/expand-task config/expand-task
+cp -af /usr/share/livecd-rootfs/live-build/squashfs-exclude-files config/

mkdir -p config/package-lists

@ -394,7 +390,7 @@ if [ -z "${IMAGEFORMAT:-}" ]; then
		;;
	esac
	;;
-ubuntu-server:live|ubuntu-mini-iso:|ubuntu-test-iso:|ubuntu-core-installer:*)
+ubuntu-server:live|ubuntu-mini-iso:|ubuntu-core-installer:*)
	IMAGEFORMAT=plain
	;;
esac
@ -430,7 +426,7 @@ case $IMAGEFORMAT in
	ubuntu-server:live|ubuntu-core-installer:*)
		touch config/universe-enabled
		;;
-	ubuntu-mini-iso:|ubuntu-test-iso:)
+	ubuntu-mini-iso:)
		fs=none
		;;
	*)
@ -640,7 +636,7 @@ case $PROJECT in
esac

case $PROJECT in
-	ubuntu-mini-iso|ubuntu-test-iso)
+	ubuntu-mini-iso)
		COMPONENTS='main'
		;;
	edubuntu|ubuntu-budgie|ubuntucinnamon|ubuntukylin)
@ -657,13 +653,6 @@ case $SUBPROJECT in
	;;
esac

-case $PROJECT in
-ubuntu-test-iso)
-	# ubuntu-test-iso uses only add_package (not add_task) and has no
-	# pool, so germinate output is never needed.
-	touch config/germinate-output/structure
-	;;
-*)
	if ! [ -e config/germinate-output/structure ]; then
		echo "Running germinate..."
		if [ -n "$COMPONENTS" ]; then
@ -673,29 +662,11 @@ case $PROJECT in
			-S $SEEDMIRROR -m $MIRROR -d $SUITE,$SUITE-updates \
			-s $FLAVOUR.$SUITE $GERMINATE_ARG -a ${ARCH_VARIANT:-$ARCH})
	fi
-	;;
-esac

-# ISO build configuration. These defaults are overridden per-project below.
-#
-# MAKE_ISO: Set to "yes" to generate an installer ISO at the end of the build.
-# This triggers isobuild to run in auto/build.
-MAKE_ISO=no
-# POOL_SEED_NAME: The germinate output file defining packages for the ISO's
-# package pool (repository). Different flavors use different seeds:
-# - "ship-live" for most desktop images
-# - "server-ship-live" for Ubuntu Server (includes server-specific packages)
-# - "" (empty) for images without a pool, like Ubuntu Core Installer
-POOL_SEED_NAME=ship-live
-# SQUASHFS_COMP: compression algorithm for squashfs images. lz4 is ~10x
-# faster than xz and useful for test builds that don't need small images.
-SQUASHFS_COMP=xz

# Common functionality for layered desktop images
common_layered_desktop_image() {
	touch config/universe-enabled
	PASSES_TO_LAYERS="true"
-	MAKE_ISO=yes

	if [ -n "$HAS_MINIMAL" ]; then
		if [ -z "$MINIMAL_TASKS" ]; then
@ -820,7 +791,7 @@ do_layered_desktop_image() {
	DEFAULT_KERNEL="linux-$KERNEL_FLAVOURS"

	if [ "$LOCALE_SUPPORT" != none ]; then
-		${LIVECD_ROOTFS_ROOT}/checkout-translations-branch \
+		/usr/share/livecd-rootfs/checkout-translations-branch \
			https://git.launchpad.net/subiquity po \
			config/catalog-translations
	fi
@ -926,7 +897,6 @@ case $PROJECT in
		add_task install minimal standard
		add_task install kubuntu-desktop
		LIVE_TASK='kubuntu-live'
-		MAKE_ISO=yes
		add_chroot_hook remove-gnome-icon-cache
		;;

@ -953,7 +923,6 @@ case $PROJECT in
	ubuntu-unity)
		add_task install minimal standard ${PROJECT}-desktop
		LIVE_TASK=${PROJECT}-live
-		MAKE_ISO=yes
		;;

	lubuntu)
@ -1028,8 +997,6 @@ case $PROJECT in
	live)
		OPTS="${OPTS:+$OPTS }--bootstrap-flavour=minimal"
		PASSES_TO_LAYERS=true
-		MAKE_ISO=yes
-		POOL_SEED_NAME=server-ship-live
		add_task ubuntu-server-minimal server-minimal
		add_package ubuntu-server-minimal lxd-installer
		add_task ubuntu-server-minimal.ubuntu-server minimal standard server
@ -1140,7 +1107,7 @@ case $PROJECT in
		NO_SQUASHFS_PASSES=ubuntu-server-minimal.ubuntu-server.installer.$flavor.netboot

		DEFAULT_KERNEL="$kernel_metapkg"
-		${LIVECD_ROOTFS_ROOT}/checkout-translations-branch \
+		/usr/share/livecd-rootfs/checkout-translations-branch \
			https://git.launchpad.net/subiquity po config/catalog-translations
		;;
	*)
@ -1158,12 +1125,10 @@ case $PROJECT in
		# created in ubuntu-core-installer/hooks/05-prepare-image.binary, which
		# subiquity knows how to install.
		if [ ${SUBPROJECT} == "desktop" ]; then
-			cp ${LIVECD_ROOTFS_ROOT}/live-build/${PROJECT}/ubuntu-core-desktop-24-amd64.model-assertion config/
+			cp /usr/share/livecd-rootfs/live-build/${PROJECT}/ubuntu-core-desktop-24-amd64.model-assertion config/
		fi
		OPTS="${OPTS:+$OPTS }--bootstrap-flavour=minimal"
		PASSES_TO_LAYERS=true
-		MAKE_ISO=yes
-		POOL_SEED_NAME=
		add_task base server-minimal server
		add_task base.live server-live
		add_package base.live linux-image-generic
@ -1172,7 +1137,7 @@ case $PROJECT in
		USE_BRIDGE_KERNEL=false
		DEFAULT_KERNEL="snap:pc-kernel"

-		${LIVECD_ROOTFS_ROOT}/checkout-translations-branch \
+		/usr/share/livecd-rootfs/checkout-translations-branch \
			https://git.launchpad.net/subiquity po config/catalog-translations
		;;

@ -1182,7 +1147,6 @@ case $PROJECT in
		OPTS="${OPTS:+$OPTS }--linux-packages=none --initramfs=none"
		KERNEL_FLAVOURS=none
		BINARY_REMOVE_LINUX=false
-		MAKE_ISO=yes

		add_package install mini-iso-tools linux-generic
		case $ARCH in
@ -1195,22 +1159,6 @@ case $PROJECT in
	esac
	;;

-ubuntu-test-iso)
-	OPTS="${OPTS:+$OPTS }--bootstrap-flavour=minimal"
-	KERNEL_FLAVOURS=virtual
-	BINARY_REMOVE_LINUX=false
-	MAKE_ISO=yes
-	POOL_SEED_NAME=
-	SQUASHFS_COMP=lz4
-	PASSES_TO_LAYERS=true
-	add_package base linux-$KERNEL_FLAVOURS
-	add_package base.live casper
-	case $ARCH in
-	amd64) ;;
-	*) echo "ubuntu-test-iso only supports amd64"; exit 1 ;;
-	esac
-	;;
-
ubuntu-base|ubuntu-oci)
	OPTS="${OPTS:+$OPTS }--bootstrap-flavour=minimal"
	;;
@ -1310,7 +1258,7 @@ case $SUBPROJECT in
		# and a variety of things fail without it.
		add_package install tzdata

-		cp -af ${LIVECD_ROOTFS_ROOT}/live-build/make-lxd-metadata.py config/make-lxd-metadata
+		cp -af /usr/share/livecd-rootfs/live-build/make-lxd-metadata.py config/make-lxd-metadata
		;;
esac

@ -1429,19 +1377,13 @@ if [ -n "$PASSES" ] && [ -z "$LIVE_PASSES" ]; then
		"Either set \$LIVE_PASSES or add a pass ending with '.live'."
fi

echo "DEBOOTSTRAP_OPTIONS=\"--keyring=/usr/share/keyrings/ubuntu-archive-keyring.gpg\"" >> config/bootstrap

echo "LB_CHROOT_HOOKS=\"$CHROOT_HOOKS\"" >> config/chroot
echo "SUBPROJECT=\"${SUBPROJECT:-}\"" >> config/chroot
echo "LB_DISTRIBUTION=\"$SUITE\"" >> config/chroot
echo "IMAGEFORMAT=\"$IMAGEFORMAT\"" >> config/chroot
-echo "LIVECD_ROOTFS_ROOT=\"$LIVECD_ROOTFS_ROOT\"" >> config/common
if [ -n "$PASSES" ]; then
	echo "PASSES=\"$PASSES\"" >> config/common
fi
-echo "MAKE_ISO=\"$MAKE_ISO\"" >> config/common
-echo "POOL_SEED_NAME=\"$POOL_SEED_NAME\"" >> config/common
-echo "SQUASHFS_COMP=\"$SQUASHFS_COMP\"" >> config/common
if [ -n "$NO_SQUASHFS_PASSES" ]; then
	echo "NO_SQUASHFS_PASSES=\"$NO_SQUASHFS_PASSES\"" >> config/common
fi
@ -1477,7 +1419,7 @@ rm -fv /etc/ssl/private/ssl-cert-snakeoil.key \
EOF

case $PROJECT in
-ubuntu-cpc|ubuntu-core|ubuntu-base|ubuntu-oci|ubuntu-wsl|ubuntu-mini-iso|ubuntu-test-iso)
+ubuntu-cpc|ubuntu-core|ubuntu-base|ubuntu-oci|ubuntu-wsl|ubuntu-mini-iso)
	# ubuntu-cpc gets this added in 025-create-groups.chroot, and we do
	# not want this group in projects that are effectively just chroots
	;;
@ -1565,11 +1507,11 @@ fi

case $PROJECT:${SUBPROJECT:-} in
ubuntu-cpc:*|ubuntu-server:live|ubuntu:desktop-preinstalled| \
-ubuntu-wsl:*|ubuntu-mini-iso:*|ubuntu-test-iso:*|ubuntu:|ubuntu:dangerous|ubuntu-oem:*| \
+ubuntu-wsl:*|ubuntu-mini-iso:*|ubuntu:|ubuntu:dangerous|ubuntu-oem:*| \
ubuntustudio:*|edubuntu:*|ubuntu-budgie:*|ubuntucinnamon:*|xubuntu:*| \
ubuntukylin:*|ubuntu-mate:*|ubuntu-core-installer:*|lubuntu:*)
	# Ensure that most things e.g. includes.chroot are copied as is
-	for entry in ${LIVECD_ROOTFS_ROOT}/live-build/${PROJECT}/*; do
+	for entry in /usr/share/livecd-rootfs/live-build/${PROJECT}/*; do
		case $entry in
		*hooks*)
			# But hooks are shared across the projects with symlinks
@ -1604,11 +1546,11 @@ esac
case $PROJECT in
ubuntu-oem|ubuntustudio|edubuntu|ubuntu-budgie|ubuntucinnamon| \
xubuntu|ubuntukylin|ubuntu-mate|lubuntu)
-	cp -af ${LIVECD_ROOTFS_ROOT}/live-build/ubuntu/includes.chroot \
+	cp -af /usr/share/livecd-rootfs/live-build/ubuntu/includes.chroot \
		config/includes.chroot

	LIVE_LAYER=${LIVE_PREFIX}live
-	cp -af ${LIVECD_ROOTFS_ROOT}/live-build/ubuntu/includes.chroot.minimal.standard.live \
+	cp -af /usr/share/livecd-rootfs/live-build/ubuntu/includes.chroot.minimal.standard.live \
		config/includes.chroot.$LIVE_LAYER

	if [ $PROJECT != ubuntu-oem ]; then
@ -1624,7 +1566,7 @@ esac

case $SUBPROJECT in
buildd)
-	cp -af ${LIVECD_ROOTFS_ROOT}/live-build/buildd/* config/
+	cp -af /usr/share/livecd-rootfs/live-build/buildd/* config/
	;;
esac

@ -1648,7 +1590,7 @@ if [ "$EXTRA_PPAS" ]; then
			extra_ppa=${extra_ppa%:*}
			;;
		esac
-		extra_ppa_fingerprint="$(${LIVECD_ROOTFS_ROOT}/get-ppa-fingerprint "$extra_ppa")"
+		extra_ppa_fingerprint="$(/usr/share/livecd-rootfs/get-ppa-fingerprint "$extra_ppa")"

		cat >> config/archives/extra-ppas.list.chroot <<EOF
deb https://ppa.launchpadcontent.net/$extra_ppa/ubuntu @DISTRIBUTION@ main
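The `${extra_ppa%:*}` expansion above removes a trailing colon-separated suffix from the PPA spec. A Python sketch of that expansion (hypothetical helper, for illustration only):

```python
def strip_colon_suffix(extra_ppa: str) -> str:
    # ${extra_ppa%:*}: remove the shortest ":..." suffix from the end,
    # i.e. everything from the last ":" onwards
    head, sep, _ = extra_ppa.rpartition(":")
    return head if sep else extra_ppa

print(strip_colon_suffix("owner/ppa:suffix"))  # owner/ppa
print(strip_colon_suffix("owner/ppa"))         # owner/ppa
```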
@ -1735,22 +1677,3 @@ apt-get -y download $PREINSTALL_POOL
EOF
	fi
fi
-
-if [ "${MAKE_ISO}" = "yes" ]; then
-	# XXX should pass --build-type here.
-	${LIVECD_ROOTFS_ROOT}/live-build/gen-iso-ids \
-		--project $PROJECT ${SUBPROJECT:+--subproject $SUBPROJECT} \
-		--arch $ARCH ${SUBARCH:+--subarch $SUBARCH} ${NOW+--serial $NOW} \
-		--output-dir config/iso-ids/
-fi
-
-if [ -n "$http_proxy" ]; then
-	mkdir -p /etc/systemd/system/snapd.service.d/
-	cat > /etc/systemd/system/snapd.service.d/snap_proxy.conf <<EOF
-[Service]
-Environment="HTTP_PROXY=${http_proxy}"
-Environment="HTTPS_PROXY=${http_proxy}"
-EOF
-	systemctl daemon-reload
-	systemctl restart snapd.service
-fi
@ -1,218 +0,0 @@
#!/usr/bin/python3

import configparser
import os
import pathlib
import platform
import subprocess

import click


_CONFIG_FILE = pathlib.Path.home() / ".config" / "livecd-rootfs" / "build-livefs.conf"


def _read_config() -> dict[str, str]:
    """Read default values from the user config file if it exists.

    The config file uses INI format with a [defaults] section, e.g.:

        [defaults]
        http-proxy = http://squid.internal:3128/
        mirror = http://ftpmaster.internal/ubuntu/
    """
    cp = configparser.ConfigParser()
    cp.read(_CONFIG_FILE)
    return dict(cp["defaults"]) if "defaults" in cp else {}
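The [defaults] section documented in the docstring above parses with configparser exactly as _read_config reads it. A self-contained illustration (the sample values are the ones from the docstring):

```python
import configparser

sample = """\
[defaults]
http-proxy = http://squid.internal:3128/
mirror = http://ftpmaster.internal/ubuntu/
"""

cp = configparser.ConfigParser()
cp.read_string(sample)
# Same shape as _read_config's return value
defaults = dict(cp["defaults"]) if "defaults" in cp else {}
print(defaults["http-proxy"])  # http://squid.internal:3128/
```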

_MACHINE_TO_ARCH = {
    "x86_64": "amd64",
    "aarch64": "arm64",
    "ppc64le": "ppc64el",
    "s390x": "s390x",
    "riscv64": "riscv64",
    "armv7l": "armhf",
}


def _default_arch():
    machine = platform.machine()
    try:
        return _MACHINE_TO_ARCH[machine]
    except KeyError:
        raise click.UsageError(
            f"Cannot determine default arch for machine {machine!r}; use --arch"
        )


@click.command()
@click.option(
    "--work-dir",
    default=".",
    type=click.Path(file_okay=False, path_type=pathlib.Path),
    help="Working directory for the build (default: current directory)",
)
@click.option("--project", required=True, help="Project name (e.g. ubuntu, ubuntu-cpc)")
@click.option("--suite", required=True, help="Ubuntu suite/series (e.g. noble)")
@click.option("--arch", default=None, help="Target architecture (default: host arch)")
@click.option("--arch-variant", default=None, help="Architecture variant")
@click.option("--subproject", default=None, help="Subproject")
@click.option("--subarch", default=None, help="Sub-architecture")
@click.option("--channel", default=None, help="Channel")
@click.option(
    "--image-target",
    "image_targets",
    multiple=True,
    help="Image target (may be repeated)",
)
@click.option("--repo-snapshot-stamp", default=None, help="Repository snapshot stamp")
@click.option(
    "--snapshot-service-timestamp", default=None, help="Snapshot service timestamp"
)
@click.option("--cohort-key", default=None, help="Cohort key")
@click.option("--datestamp", default=None, help="Datestamp (sets NOW)")
@click.option("--image-format", default=None, help="Image format (sets IMAGEFORMAT)")
@click.option(
    "--proposed",
    is_flag=True,
    default=False,
    help="Enable proposed pocket (sets PROPOSED=1)",
)
@click.option(
    "--extra-ppa", "extra_ppas", multiple=True, help="Extra PPA (may be repeated)"
)
@click.option(
    "--extra-snap", "extra_snaps", multiple=True, help="Extra snap (may be repeated)"
)
@click.option("--build-type", default=None, help="Build type")
@click.option(
    "--http-proxy",
    default=None,
    help="HTTP proxy (sets http_proxy, HTTP_PROXY, LB_APT_HTTP_PROXY)",
)
@click.option(
    "--mirror",
    default=None,
    help="Ubuntu archive mirror URL (sets MIRROR)",
)
@click.option(
    "--debug", is_flag=True, default=False, help="Enable debug mode (set -x in lb scripts)"
)
def main(
    work_dir,
    project,
    suite,
    arch,
    arch_variant,
    subproject,
    subarch,
    channel,
    image_targets,
    repo_snapshot_stamp,
    snapshot_service_timestamp,
    cohort_key,
    datestamp,
    image_format,
    proposed,
    extra_ppas,
    extra_snaps,
    build_type,
    http_proxy,
    mirror,
    debug,
):
    cfg = _read_config()
    if http_proxy is None:
        http_proxy = cfg.get("http-proxy")
    if mirror is None:
        mirror = cfg.get("mirror")

    if arch is None:
        arch = _default_arch()

    # Locate auto/ scripts relative to this script, following symlinks.
    # Works for: git checkout, installed deb, and /usr/bin/build-livefs symlink.
    live_build_dir = pathlib.Path(__file__).resolve().parent
    auto_source = live_build_dir / "auto"

    # base_env is passed to both lb config and lb build
    base_env = {
        "PROJECT": project,
        "ARCH": arch,
        "LIVECD_ROOTFS_ROOT": str(live_build_dir.parent),
    }
    if arch_variant is not None:
        base_env["ARCH_VARIANT"] = arch_variant
    if subproject is not None:
        base_env["SUBPROJECT"] = subproject
    if subarch is not None:
        base_env["SUBARCH"] = subarch
    if channel is not None:
        base_env["CHANNEL"] = channel
    if image_targets:
        base_env["IMAGE_TARGETS"] = " ".join(image_targets)
    if repo_snapshot_stamp is not None:
        base_env["REPO_SNAPSHOT_STAMP"] = repo_snapshot_stamp
    if snapshot_service_timestamp is not None:
        base_env["SNAPSHOT_SERVICE_TIMESTAMP"] = snapshot_service_timestamp
    if cohort_key is not None:
        base_env["COHORT_KEY"] = cohort_key
    if http_proxy is not None:
        base_env["http_proxy"] = http_proxy
        base_env["HTTP_PROXY"] = http_proxy
        base_env["LB_APT_HTTP_PROXY"] = http_proxy

    # config_env adds lb-config-only vars on top of base_env
    config_env = {
        **base_env,
        "SUITE": suite,
    }
    if datestamp is not None:
        config_env["NOW"] = datestamp
    if image_format is not None:
        config_env["IMAGEFORMAT"] = image_format
    if proposed:
        config_env["PROPOSED"] = "1"
    if extra_ppas:
        config_env["EXTRA_PPAS"] = " ".join(extra_ppas)
    if extra_snaps:
        config_env["EXTRA_SNAPS"] = " ".join(extra_snaps)
    if build_type is not None:
        config_env["BUILD_TYPE"] = build_type
    if mirror is not None:
        config_env["MIRROR"] = mirror

    work_dir = work_dir.resolve()
    work_dir.mkdir(parents=True, exist_ok=True)

    # Create/replace auto/ symlinks
    auto_dir = work_dir / "auto"
    auto_dir.mkdir(exist_ok=True)
    for script in ("config", "build", "clean"):
        link = auto_dir / script
        if link.is_symlink() or link.exists():
            link.unlink()
        link.symlink_to(auto_source / script)

    # Write debug.sh if requested
    if debug:
        debug_dir = work_dir / "local" / "functions"
        debug_dir.mkdir(parents=True, exist_ok=True)
        (debug_dir / "debug.sh").write_text("set -x\n")

    def run(cmd, env_extra):
        env = os.environ.copy()
        env.update(env_extra)
        if os.getuid() != 0:
            env_args = [f"{k}={v}" for k, v in env_extra.items()]
            cmd = ["sudo", "env"] + env_args + cmd
        subprocess.run(cmd, cwd=work_dir, env=env, check=True)
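The run() helper above re-executes under sudo when not running as root, forwarding the per-invocation variables through env(1) so they survive sudo's environment reset. The command rewrite can be sketched as a standalone function (hypothetical, for illustration):

```python
def sudo_wrap(cmd, env_extra, uid):
    # Non-root: prepend "sudo env VAR=value ..." so lb sees the extra variables
    if uid != 0:
        return ["sudo", "env"] + [f"{k}={v}" for k, v in env_extra.items()] + cmd
    return cmd

print(sudo_wrap(["lb", "build"], {"PROJECT": "ubuntu"}, 1000))
print(sudo_wrap(["lb", "build"], {"PROJECT": "ubuntu"}, 0))
```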
|
||||
run(["lb", "clean", "--purge"], base_env)
|
||||
run(["lb", "config"], config_env)
|
||||
run(["lb", "build"], base_env)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
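The `run()` helper above has to re-inject the computed environment when escalating with sudo, because sudo resets the environment by default. As an illustration only (the wrapping logic is lifted from the script; `wrap_for_sudo` is a made-up name for this sketch):

```python
# Sketch of the sudo wrapping done by run(): when not running as root, the
# extra environment variables are passed through `sudo env K=V ... cmd`,
# since sudo would otherwise strip them.
def wrap_for_sudo(cmd, env_extra, uid):
    if uid != 0:
        env_args = [f"{k}={v}" for k, v in env_extra.items()]
        cmd = ["sudo", "env"] + env_args + cmd
    return cmd

print(wrap_for_sudo(["lb", "config"], {"SUITE": "noble"}, uid=1000))
# ['sudo', 'env', 'SUITE=noble', 'lb', 'config']
```

When already root, the command is returned unchanged and the environment is inherited directly.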
@ -1,150 +0,0 @@
#!/usr/bin/env python3

import configparser
import pathlib
import subprocess
import time

import click


_CONFIG_FILE = pathlib.Path.home() / ".config" / "livecd-rootfs" / "build-livefs.conf"


def _read_config() -> dict[str, str]:
    cp = configparser.ConfigParser()
    cp.read(_CONFIG_FILE)
    return dict(cp["defaults"]) if "defaults" in cp else {}


@click.command(
    context_settings={"ignore_unknown_options": True, "allow_extra_args": True}
)
@click.option("--suite", required=True, help="Ubuntu suite/series (e.g. noble)")
@click.option(
    "--vm-name",
    default=None,
    help="LXD VM name (default: livefs-builder-{suite})",
)
@click.option(
    "--http-proxy",
    default=None,
    help="HTTP proxy URL for apt inside the VM (also read from build-livefs.conf)",
)
@click.argument("extra_args", nargs=-1, type=click.UNPROCESSED)
def main(suite, vm_name, http_proxy, extra_args):
    livecd_rootfs_root = pathlib.Path(__file__).resolve().parent.parent
    vm_name = vm_name or f"livefs-builder-{suite}"
    host_conf = (
        pathlib.Path.home() / ".config" / "livecd-rootfs" / "build-livefs.conf"
    )

    if http_proxy is None:
        http_proxy = _read_config().get("http-proxy")

    result = subprocess.run(["lxc", "info", vm_name], capture_output=True)
    if result.returncode != 0:
        launch_cmd = [
            "lxc", "launch", f"ubuntu-daily:{suite}", vm_name, "--vm",
            "--config", "limits.cpu=4",
            "--config", "limits.memory=8GiB",
            "--device", "root,size=100GiB",
        ]
        user_data = "#cloud-config\npackage_update: true\n"
        if http_proxy is not None:
            user_data += (
                "apt:\n"
                f" http_proxy: {http_proxy}\n"
                f" https_proxy: {http_proxy}\n"
            )
        launch_cmd += ["--config", f"user.user-data={user_data}"]
        subprocess.run(launch_cmd, check=True)

    device_info = subprocess.run(
        ["lxc", "config", "device", "show", vm_name],
        capture_output=True,
        text=True,
        check=True,
    ).stdout
    if "livecd-rootfs" not in device_info:
        subprocess.run(
            [
                "lxc",
                "config",
                "device",
                "add",
                vm_name,
                "livecd-rootfs",
                "disk",
                f"source={livecd_rootfs_root}",
                "path=/srv/livecd-rootfs",
            ],
            check=True,
        )

    info = subprocess.run(
        ["lxc", "info", vm_name], capture_output=True, text=True, check=True
    ).stdout
    if "Status: STOPPED" in info:
        subprocess.run(["lxc", "start", vm_name], check=True)

    for _ in range(30):
        result = subprocess.run(
            ["lxc", "exec", vm_name, "--", "true"], capture_output=True
        )
        if result.returncode == 0:
            break
        time.sleep(2)
    else:
        raise click.ClickException(f"VM {vm_name!r} did not become ready in time")

    subprocess.run(
        ["lxc", "exec", vm_name, "--", "cloud-init", "status", "--wait"], check=True
    )

    subprocess.run(
        ["lxc", "exec", vm_name, "--", "apt-get", "install", "-y", "livecd-rootfs"],
        check=True,
    )

    if host_conf.exists():
        subprocess.run(
            [
                "lxc",
                "exec",
                vm_name,
                "--",
                "mkdir",
                "-p",
                "/root/.config/livecd-rootfs",
            ],
            check=True,
        )
        subprocess.run(
            [
                "lxc",
                "file",
                "push",
                str(host_conf),
                f"{vm_name}/root/.config/livecd-rootfs/build-livefs.conf",
            ],
            check=True,
        )

    subprocess.run(
        [
            "lxc",
            "exec",
            vm_name,
            "--",
            "/srv/livecd-rootfs/live-build/build-livefs",
            "--suite",
            suite,
            *extra_args,
        ],
        check=True,
    )


if __name__ == "__main__":
    main()
@ -44,7 +44,6 @@ create_manifest() {
    local base_default_sbom_name="ubuntu-cloud-image-$(grep "VERSION_ID" $chroot_root/etc/os-release | cut --delimiter "=" --field 2 | tr -d '"')-${ARCH}-$(date +%Y%m%dT%H:%M:%S)"
    local sbom_file_name=${3:-"${base_default_sbom_name}.spdx"}
    local sbom_document_name=${4:-"${base_default_sbom_name}"}
    local should_include_sbom=${5:-"true"}
    local sbom_log=${sbom_document_name}.log
    echo "create_manifest chroot_root: ${chroot_root}"
    dpkg-query --show --admindir="${chroot_root}/var/lib/dpkg" > ${target_file}
@ -55,7 +54,6 @@ create_manifest() {
    echo "create_manifest creating file listing."
    local target_filelist=${2%.manifest}.filelist
    (cd "${chroot_root}" && find -xdev) | sort > "${target_filelist}"
    if [ "$should_include_sbom" = "true" ]; then
        # only creating sboms for CPC project at this time
        if [[ ! $(which cpc-sbom) ]]; then
            # ensure the tool is installed
@ -72,9 +70,6 @@ create_manifest() {
            else
                echo "SBOM generation succeeded. see ${sbom_log} for details"
            fi
        else
            echo "SBOM generation skipped"
        fi
    fi
    echo "create_manifest finished"
}
@ -188,8 +183,8 @@ setup_mountpoint() {
    mount sysfs-live -t sysfs "$mountpoint/sys"
    mount securityfs -t securityfs "$mountpoint/sys/kernel/security"
    # Provide more up to date apparmor features, matching target kernel
    mount -o bind ${LIVECD_ROOTFS_ROOT}/live-build/apparmor/generic "$mountpoint/sys/kernel/security/apparmor/features/"
    mount -o bind ${LIVECD_ROOTFS_ROOT}/live-build/seccomp/generic.actions_avail "$mountpoint/proc/sys/kernel/seccomp/actions_avail"
    mount -o bind /usr/share/livecd-rootfs/live-build/apparmor/generic "$mountpoint/sys/kernel/security/apparmor/features/"
    mount -o bind /usr/share/livecd-rootfs/live-build/seccomp/generic.actions_avail "$mountpoint/proc/sys/kernel/seccomp/actions_avail"
    # cgroup2 mount for LP: 1944004
    mount -t cgroup2 none "$mountpoint/sys/fs/cgroup"
    mount -t tmpfs none "$mountpoint/tmp"
@ -408,7 +403,7 @@ create_squashfs() {
    squashfs_file="$2"
    config_dir="$PWD/config"
    (cd $rootfs_dir &&
        mksquashfs . $squashfs_file -no-progress -xattrs -comp "${SQUASHFS_COMP:-xz}" \
        mksquashfs . $squashfs_file -no-progress -xattrs -comp xz \
            -ef "$config_dir/squashfs-exclude-files")

}
@ -860,7 +855,7 @@ snap_validate_seed() {
    fi
    if [ ${boot_filename} != undefined ]; then # we have a known boot file so we can proceed with checking for features to mount
        kern_major_min=$(readlink --canonicalize --no-newline ${CHROOT_ROOT}/boot/${boot_filename} | grep --extended-regexp --only-matching --max-count 1 '[0-9]+\.[0-9]+')
        if [ -d ${LIVECD_ROOTFS_ROOT}/live-build/apparmor/${kern_major_min} ]; then
        if [ -d /usr/share/livecd-rootfs/live-build/apparmor/${kern_major_min} ]; then
            # if an Ubuntu version has different kernel apparmor features between LTS and HWE kernels
            # a snap pre-seeding issue can occur, where the incorrect apparmor features are reported
            # basic copy of a directory structure overriding the "generic" feature set
@ -868,7 +863,7 @@ snap_validate_seed() {

            # Bind kernel apparmor directory to feature directory for snap preseeding
            umount "${CHROOT_ROOT}/sys/kernel/security/apparmor/features/"
            mount --bind ${LIVECD_ROOTFS_ROOT}/live-build/apparmor/${kern_major_min} "${CHROOT_ROOT}/sys/kernel/security/apparmor/features/"
            mount --bind /usr/share/livecd-rootfs/live-build/apparmor/${kern_major_min} "${CHROOT_ROOT}/sys/kernel/security/apparmor/features/"
        fi
    fi

@ -894,7 +889,7 @@ snap_validate_seed() {
    # mount generic apparmor feature again (cleanup)
    if [ -d /build/config/hooks.d/extra/apparmor/${kern_major_min} ]; then
        umount "${CHROOT_ROOT}/sys/kernel/security/apparmor/features/"
        mount -o bind ${LIVECD_ROOTFS_ROOT}/live-build/apparmor/generic "${CHROOT_ROOT}/sys/kernel/security/apparmor/features/"
        mount -o bind /usr/share/livecd-rootfs/live-build/apparmor/generic "${CHROOT_ROOT}/sys/kernel/security/apparmor/features/"
    fi

}
@ -1254,7 +1249,7 @@ setup_cidata() {
    local mountpoint=$(mktemp -d)
    mkfs.vfat -F 32 -n CIDATA ${cidata_dev}
    mount ${cidata_dev} ${mountpoint}
    cp ${LIVECD_ROOTFS_ROOT}/live-build/cidata/* ${mountpoint}
    cp /usr/share/livecd-rootfs/live-build/cidata/* ${mountpoint}
    cat >>${mountpoint}/meta-data.sample <<END
#instance-id: iid-$(openssl rand -hex 8)

@ -1449,10 +1444,3 @@ gpt_root_partition_uuid() {

    echo "${ROOTFS_PARTITION_TYPE}"
}

# Wrapper for the isobuild tool. Sets PYTHONPATH so the isobuilder module
# is importable, and uses config/iso-dir as the standard working directory
# for ISO metadata and intermediate files.
isobuild () {
    PYTHONPATH=${LIVECD_ROOTFS_ROOT}/live-build/ ${LIVECD_ROOTFS_ROOT}/live-build/isobuild --workdir config/iso-dir "$@"
}

@ -1,198 +0,0 @@
#!/usr/bin/python3

# Compute various slightly obscure IDs and labels used by ISO builds.
#
# * ISO9660 images have a "volume id".
# * Our ISOs contain a ".disk/info" file that is read by various
#   other things (casper, the installer) and is generally used as a
#   record of where an installation came from.
# * The code that sets up grub for the ISO needs a "capitalized
#   project name" or capproject.
#
# All of these are derived from other build parameters (and/or
# information in etc/os-release) in slightly non-obvious ways so the
# logic to do so is confined to this file to avoid it cluttering
# anywhere else.

import pathlib
import platform
import time

import click


# Be careful about the values here. They end up in .disk/info, which is read by
# casper to create the live session user, so if there is a space in the
# capproject things go a bit wonky.
#
# It will also be used by make_vol_id to construct an ISO9660 volume ID as
#
#     "$(CAPPROJECT) $(DEBVERSION) $(ARCH)",
#
# e.g. "Ubuntu 14.10 amd64". The volume ID is limited to 32 characters. This
# therefore imposes a limit on the length of project_map values of 25 - (length
# of longest relevant architecture name).
project_to_capproject_map = {
    "edubuntu": "Edubuntu",
    "kubuntu": "Kubuntu",
    "lubuntu": "Lubuntu",
    "ubuntu": "Ubuntu",
    "ubuntu-base": "Ubuntu-Base",
    "ubuntu-budgie": "Ubuntu-Budgie",
    "ubuntu-core-installer": "Ubuntu-Core-Installer",
    "ubuntu-mate": "Ubuntu-MATE",
    "ubuntu-mini-iso": "Ubuntu-Mini-ISO",
    "ubuntu-test-iso": "Ubuntu-Test-ISO",
    "ubuntu-oem": "Ubuntu OEM",
    "ubuntu-server": "Ubuntu-Server",
    "ubuntu-unity": "Ubuntu-Unity",
    "ubuntu-wsl": "Ubuntu WSL",
    "ubuntucinnamon": "Ubuntu-Cinnamon",
    "ubuntukylin": "Ubuntu-Kylin",
    "ubuntustudio": "Ubuntu-Studio",
    "xubuntu": "Xubuntu",
}


def make_disk_info(
    os_release: dict[str, str],
    arch: str,
    subarch: str,
    capproject: str,
    subproject: str,
    build_type: str,
    serial: str,
) -> str:
    # os-release VERSION is _almost_ what goes into .disk/info...
    # it can be
    #     VERSION="24.04.3 LTS (Noble Numbat)"
    # or
    #     VERSION="25.10 (Questing Quokka)"
    # We want the Adjective Animal to be in quotes, not parentheses, e.g.
    # 'Ubuntu 24.04.3 LTS "Noble Numbat"'. This format is expected by casper
    # (which parses .disk/info to set up the live session) and the installer.
    version = os_release["VERSION"]
    version = version.replace("(", '"')
    version = version.replace(")", '"')

    capsubproject = ""
    if subproject == "minimal":
        capsubproject = " Minimal"

    fullarch = arch
    if subarch:
        fullarch += "+" + subarch

    return f"{capproject}{capsubproject} {version} - {build_type} {fullarch} ({serial})"


def make_vol_id(os_release: dict[str, str], arch: str, capproject: str) -> str:
    # ISO9660 volume IDs are limited to 32 characters. The volume ID format is
    # "CAPPROJECT VERSION ARCH", e.g. "Ubuntu 24.04.3 LTS amd64". Longer arch
    # names like ppc64el and riscv64 can push us over the limit, so we shorten
    # them here. This is why capproject names are also kept short (see the
    # comment above project_to_capproject_map).
    arch_for_volid_map = {
        "ppc64el": "ppc64",
        "riscv64": "riscv",
    }
    arch_for_volid = arch_for_volid_map.get(arch, arch)

    # from
    #     VERSION="24.04.3 LTS (Noble Numbat)"
    # or
    #     VERSION="25.10 (Questing Quokka)"
    # we want "24.04.3 LTS" or "25.10", i.e. everything up to the first "(" (apart
    # from the whitespace).
    version = os_release["VERSION"].split("(")[0].strip()

    volid = f"{capproject} {version} {arch_for_volid}"

    # If still over 32 characters (e.g. long capproject + LTS version), fall
    # back to shorter forms. amd64 gets "x64" since it's widely recognized and
    # fits; other architectures just drop the arch entirely since multi-arch
    # ISOs are less common for non-amd64 platforms.
    if len(volid) > 32:
        if arch == "amd64":
            volid = f"{capproject} {version} x64"
        else:
            volid = f"{capproject} {version}"
    return volid


@click.command()
@click.option(
    "--project",
    type=str,
    required=True,
)
@click.option(
    "--subproject",
    type=str,
    default=None,
)
@click.option(
    "--arch",
    type=str,
    required=True,
)
@click.option(
    "--subarch",
    type=str,
    default=None,
)
@click.option(
    "--serial",
    type=str,
    default=time.strftime("%Y%m%d"),
)
@click.option(
    "--build-type",
    type=str,
    default="Daily",
)
@click.option(
    "--output-dir",
    type=click.Path(file_okay=False, resolve_path=True, path_type=pathlib.Path),
    required=True,
    help="working directory",
)
def main(
    project: str,
    subproject: str,
    arch: str,
    subarch: str,
    serial: str,
    build_type: str,
    output_dir: pathlib.Path,
):
    output_dir.mkdir(exist_ok=True)
    capproject = project_to_capproject_map[project]

    os_release = platform.freedesktop_os_release()

    with output_dir.joinpath("disk-info").open("w") as fp:
        disk_info = make_disk_info(
            os_release,
            arch,
            subarch,
            capproject,
            subproject,
            build_type,
            serial,
        )
        print(f"disk_info: {disk_info!r}")
        fp.write(disk_info)

    with output_dir.joinpath("vol-id").open("w") as fp:
        vol_id = make_vol_id(os_release, arch, capproject)
        print(f"vol_id: {vol_id!r} {len(vol_id)}")
        fp.write(vol_id)

    with output_dir.joinpath("capproject").open("w") as fp:
        print(f"capproject: {capproject!r}")
        fp.write(capproject)


if __name__ == "__main__":
    main()
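The volume-ID and .disk/info rules in the script above can be exercised standalone; this is a small sketch restating `make_vol_id`'s truncation logic and `make_disk_info`'s parenthesis-to-quote substitution (the sample VERSION string comes from the comments in the script, not from a real build):

```python
def vol_id(version_field, arch, capproject):
    # Shorten long architecture names, as make_vol_id does, to help stay
    # inside the 32-character ISO9660 volume-ID limit.
    arch_short = {"ppc64el": "ppc64", "riscv64": "riscv"}.get(arch, arch)
    # Everything before the first "(" is the version, e.g. "24.04.3 LTS".
    version = version_field.split("(")[0].strip()
    volid = f"{capproject} {version} {arch_short}"
    if len(volid) > 32:
        # amd64 falls back to "x64"; other arches drop the arch entirely.
        volid = f"{capproject} {version} x64" if arch == "amd64" else f"{capproject} {version}"
    return volid

v = "24.04.3 LTS (Noble Numbat)"
print(vol_id(v, "amd64", "Ubuntu"))           # Ubuntu 24.04.3 LTS amd64
# .disk/info swaps parentheses for quotes around the Adjective Animal:
print(v.replace("(", '"').replace(")", '"'))  # 24.04.3 LTS "Noble Numbat"
```

Both outputs here stay under the 32-character limit, which is the constraint the capproject table is sized around.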
@ -1,221 +0,0 @@
#!/usr/bin/python3

# Building an ISO requires knowing:
#
# * The architecture and series we are building for
# * The address of the mirror to pull packages from the pool from and the
#   components of that mirror to use
# * The list of packages to include in the pool
# * Where the squashfs files that contain the rootfs and other metadata layers
#   are
# * Where to put the final ISO
# * All the bits of information that end up in .disk/info on the ISO and in the
#   "volume ID" for the ISO
#
# It's not completely trivial to come up with a nice-feeling interface between
# livecd-rootfs and this tool. There are about 13 parameters that are needed to
# build the ISO and having a tool take 13 arguments seems a bit overwhelming. In
# addition some steps need to run before the layers are made into squashfs files
# and some after. It felt nicer to have a tool with a few subcommands (7, in the
# end), each taking arguments relevant to its step:
#
# $ isobuild --workdir "" init --disk-info "" --series "" --arch ""
#
# Set up the workdir for later steps. Create the skeleton file layout of the
# ISO, populate .disk/info etc, create the gpg key referred to above. Store
# series and arch somewhere that later steps can refer to.
#
# $ isobuild --workdir "" setup-apt --chroot ""
#
# Set up apt for use by later steps, using the configuration from the passed
# chroot.
#
# $ isobuild --workdir "" generate-pool --package-list-file ""
#
# Create the pool from the passed germinate output file.
#
# $ isobuild --workdir "" generate-sources --mountpoint ""
#
# Generate an apt deb822 source for the pool, assuming it is mounted at the
# passed mountpoint, and output it on stdout.
#
# $ isobuild --workdir "" add-live-filesystem --artifact-prefix ""
#
# Copy the relevant artifacts to the casper directory (and extract the uuids
# from the initrds).
#
# $ isobuild --workdir "" make-bootable --project "" --capproject ""
#   --subarch ""
#
# Set up the bootloader etc so that the ISO can boot (for this it clones
# debian-cd and runs the tools/boot/$series-$arch script, but those should be
# folded into isobuild fairly promptly IMO).
#
# $ isobuild --workdir "" make-iso --volid "" --dest ""
#
# Generate the checksum file and run xorriso to build the final ISO.


import pathlib
import shlex

import click

from isobuilder.builder import ISOBuilder


@click.group()
@click.option(
    "--workdir",
    type=click.Path(file_okay=False, resolve_path=True, path_type=pathlib.Path),
    required=True,
    help="working directory",
)
@click.pass_context
def main(ctxt, workdir):
    ctxt.obj = ISOBuilder(workdir)
    cwd = pathlib.Path().cwd()
    if workdir.is_relative_to(cwd):
        workdir = workdir.relative_to(cwd)
    ctxt.obj.logger.log(f"isobuild starting, workdir: {workdir}")


def subcommand(f):
    """Decorator that converts a function into a Click subcommand with logging.

    This decorator:
    1. Converts function name from snake_case to kebab-case for the CLI
    2. Wraps the function to log the subcommand name and all parameters
    3. Registers it as a Click command under the main command group
    4. Extracts the ISOBuilder instance from the context and passes it as first arg
    """
    name = f.__name__.replace("_", "-")

    def wrapped(ctxt, **kw):
        # Build a log message showing the subcommand and all its parameters.
        # We use ctxt.params (Click's resolved parameters) rather than **kw
        # because ctxt.params includes path resolution and type conversion.
        # Paths are converted to relative form to keep logs readable and avoid
        # exposing full filesystem paths in build artifacts.
        msg = f"subcommand {name}"
        cwd = pathlib.Path().cwd()
        for k, v in sorted(ctxt.params.items()):
            if isinstance(v, pathlib.Path):
                if v.is_relative_to(cwd):
                    v = v.relative_to(cwd)
            v = shlex.quote(str(v))
            msg += f" {k}={v}"
        with ctxt.obj.logger.logged(msg):
            f(ctxt.obj, **kw)

    return main.command(name=name)(click.pass_context(wrapped))


@click.option(
    "--disk-info",
    type=str,
    required=True,
    help="contents of .disk/info",
)
@click.option(
    "--series",
    type=str,
    required=True,
    help="series being built",
)
@click.option(
    "--arch",
    type=str,
    required=True,
    help="architecture being built",
)
@subcommand
def init(builder, disk_info, series, arch):
    builder.init(disk_info, series, arch)


@click.option(
    "--chroot",
    type=click.Path(
        file_okay=False, resolve_path=True, path_type=pathlib.Path, exists=True
    ),
    required=True,
)
@subcommand
def setup_apt(builder, chroot: pathlib.Path):
    builder.setup_apt(chroot)


@click.option(
    "--package-list-file",
    type=click.Path(
        dir_okay=False, exists=True, resolve_path=True, path_type=pathlib.Path
    ),
    required=True,
)
@subcommand
def generate_pool(builder, package_list_file: pathlib.Path):
    builder.generate_pool(package_list_file)


@click.option(
    "--mountpoint",
    type=str,
    required=True,
)
@subcommand
def generate_sources(builder, mountpoint: str):
    builder.generate_sources(mountpoint)


@click.option(
    "--artifact-prefix",
    type=click.Path(dir_okay=False, resolve_path=True, path_type=pathlib.Path),
    required=True,
)
@subcommand
def add_live_filesystem(builder, artifact_prefix: pathlib.Path):
    builder.add_live_filesystem(artifact_prefix)


@click.option(
    "--project",
    type=str,
    required=True,
)
@click.option("--capproject", type=str, required=True)
@click.option(
    "--subarch",
    type=str,
    default="",
)
@subcommand
def make_bootable(builder, project: str, capproject: str | None, subarch: str):
    # capproject is the "capitalized project name" used in GRUB menu entries,
    # e.g. "Ubuntu" or "Kubuntu". It should come from gen-iso-ids (which uses
    # project_to_capproject_map for proper formatting like "Ubuntu-MATE"), but
    # we provide a simple .capitalize() fallback for cases where the caller
    # doesn't have the pre-computed value.
    if capproject is None:
        capproject = project.capitalize()
    builder.make_bootable(project, capproject, subarch)


@click.option(
    "--dest",
    type=click.Path(dir_okay=False, resolve_path=True, path_type=pathlib.Path),
    required=True,
)
@click.option(
    "--volid",
    type=str,
    default=None,
)
@subcommand
def make_iso(builder, dest: pathlib.Path, volid: str | None):
    builder.make_iso(dest, volid)


if __name__ == "__main__":
    main()
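The `subcommand` decorator above does two small transformations that are worth seeing in isolation: deriving the CLI name from the function name, and building a shell-quoted, path-relativized log line from the resolved parameters. A toy sketch of just those pieces (no Click required; the sample parameters are made up for illustration):

```python
import pathlib
import shlex

def cli_name(func_name):
    # snake_case function names become kebab-case subcommand names
    return func_name.replace("_", "-")

def log_message(name, params, cwd):
    # Mirrors the decorator's logging: paths relativized to cwd,
    # all values shell-quoted so the log line is copy-pasteable.
    msg = f"subcommand {name}"
    for k, v in sorted(params.items()):
        if isinstance(v, pathlib.Path) and v.is_relative_to(cwd):
            v = v.relative_to(cwd)
        msg += f" {k}={shlex.quote(str(v))}"
    return msg

cwd = pathlib.Path("/build")
print(cli_name("add_live_filesystem"))  # add-live-filesystem
print(log_message("make-iso", {"dest": cwd / "out.iso", "volid": "Ubuntu 25.10 amd64"}, cwd))
# subcommand make-iso dest=out.iso volid='Ubuntu 25.10 amd64'
```

The `shlex.quote` step is what keeps values with spaces (like a volume ID) unambiguous in the log.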
@ -1 +0,0 @@
#
@ -1,117 +0,0 @@
import dataclasses
import os
import pathlib
import shutil
import subprocess
from typing import Iterator


@dataclasses.dataclass
class PackageInfo:
    package: str
    filename: str
    architecture: str
    version: str

    @property
    def spec(self) -> str:
        return f"{self.package}:{self.architecture}={self.version}"


def check_proc(proc, ok_codes=(0,)) -> None:
    proc.wait()
    if proc.returncode not in ok_codes:
        raise Exception(f"{proc} failed")


class AptStateManager:
    """Maintain and use an apt state directory to access package info and debs."""

    def __init__(self, logger, series: str, apt_dir: pathlib.Path):
        self.logger = logger
        self.series = series
        self.apt_root = apt_dir.joinpath("root")
        self.apt_conf_path = apt_dir.joinpath("apt.conf")

    def _apt_env(self) -> dict[str, str]:
        return dict(os.environ, APT_CONFIG=str(self.apt_conf_path))

    def setup(self, chroot: pathlib.Path):
        """Set up the manager by copying the apt configuration from `chroot`."""
        for path in "etc/apt", "var/lib/apt":
            tgt = self.apt_root.joinpath(path)
            tgt.parent.mkdir(parents=True, exist_ok=True)
            shutil.copytree(chroot.joinpath(path), tgt, ignore_dangling_symlinks=True)
        self.apt_conf_path.write_text(f'Dir "{self.apt_root}/";\n')
        with self.logger.logged("updating apt indices"):
            self.logger.run(["apt-get", "update"], env=self._apt_env())

    def show(self, pkgs: list[str]) -> Iterator[PackageInfo]:
        """Return information about the binary packages named by `pkgs`.

        Parses apt-cache output, which uses an RFC822-like format: field names
        followed by ": " and values, with multi-line values indented with
        leading whitespace. We skip continuation lines (starting with a space)
        since PackageInfo only needs single-line fields.

        The `fields` set (derived from PackageInfo's dataclass fields) acts as
        a filter - we only extract fields we care about, ignoring others like
        Description.
        """
        proc = subprocess.Popen(
            ["apt-cache", "-o", "APT::Cache::AllVersions=0", "show"] + pkgs,
            stdout=subprocess.PIPE,
            encoding="utf-8",
            env=self._apt_env(),
        )
        assert proc.stdout is not None
        fields = {f.name for f in dataclasses.fields(PackageInfo)}
        params: dict[str, str] = {}
        for line in proc.stdout:
            if line == "\n":
                yield PackageInfo(**params)
                params = {}
                continue
            if line.startswith(" "):
                continue
            field, value = line.split(": ", 1)
            field = field.lower()
            if field in fields:
                params[field] = value.strip()
        check_proc(proc)
        if params:
            yield PackageInfo(**params)

    def download(self, rootdir: pathlib.Path, pkg_info: PackageInfo) -> None:
        """Download the package specified by `pkg_info` under `rootdir`.

        The package is saved to the same path under `rootdir` as it is
        at in the archive it comes from.
        """
        target_dir = rootdir.joinpath(pkg_info.filename).parent
        target_dir.mkdir(parents=True, exist_ok=True)
        self.download_direct(pkg_info.spec, target_dir)

    def download_direct(self, spec: str, target: pathlib.Path) -> None:
        """Download the package specified by spec to target directory.

        The package is downloaded using apt-get download and saved directly
        in the target directory.
        """
        self.logger.run(
            ["apt-get", "download", spec],
            cwd=target,
            check=True,
            env=self._apt_env(),
        )

    def in_release_path(self) -> pathlib.Path:
        """Return the path to the InRelease file.

        Raises an error unless there is exactly one matching file.
        """
        [path] = self.apt_root.joinpath("var/lib/apt/lists").glob(
            f"*ubuntu.com*_dists_{self.series}_InRelease"
        )
        return path
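`show()` above streams RFC822-style stanzas out of `apt-cache show`; the same parsing loop can be demonstrated on a canned string (the sample stanza below is illustrative, not real archive data):

```python
import dataclasses

@dataclasses.dataclass
class PackageInfo:
    package: str
    filename: str
    architecture: str
    version: str

    @property
    def spec(self):
        return f"{self.package}:{self.architecture}={self.version}"

def parse(text):
    # Same loop as AptStateManager.show(): blank line ends a stanza,
    # indented continuation lines are skipped, unknown fields are filtered
    # out via the set of PackageInfo field names.
    fields = {f.name for f in dataclasses.fields(PackageInfo)}
    params = {}
    for line in text.splitlines(keepends=True):
        if line == "\n":
            yield PackageInfo(**params)
            params = {}
            continue
        if line.startswith(" "):
            continue
        field, value = line.split(": ", 1)
        if (field := field.lower()) in fields:
            params[field] = value.strip()
    if params:
        yield PackageInfo(**params)

sample = (
    "Package: hello\n"
    "Architecture: amd64\n"
    "Version: 2.10-3\n"
    "Filename: pool/main/h/hello/hello_2.10-3_amd64.deb\n"
    "Description: example\n"
    " folded continuation line\n"
)
[info] = parse(sample)
print(info.spec)  # hello:amd64=2.10-3
```

The `Description` field and its folded continuation line are dropped, and the resulting `spec` is exactly the `pkg:arch=version` form that `apt-get download` accepts.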
@ -1,49 +0,0 @@
"""Boot configuration package for ISO builder.

This package contains architecture-specific boot configurators for building
bootable ISOs for different architectures.
"""

import pathlib
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from ..apt_state import AptStateManager
    from ..builder import Logger
    from .base import BaseBootConfigurator


def make_boot_configurator_for_arch(
    arch: str,
    logger: "Logger",
    apt_state: "AptStateManager",
    workdir: pathlib.Path,
    iso_root: pathlib.Path,
) -> "BaseBootConfigurator":
    """Factory function to create boot configurator for a specific architecture."""
    match arch:
        case "amd64":
            from .amd64 import AMD64BootConfigurator

            return AMD64BootConfigurator(logger, apt_state, workdir, iso_root)
        case "arm64":
            from .arm64 import ARM64BootConfigurator

            return ARM64BootConfigurator(logger, apt_state, workdir, iso_root)
        case "ppc64el":
            from .ppc64el import PPC64ELBootConfigurator

            return PPC64ELBootConfigurator(logger, apt_state, workdir, iso_root)
        case "riscv64":
            from .riscv64 import RISCV64BootConfigurator

            return RISCV64BootConfigurator(logger, apt_state, workdir, iso_root)
        case "s390x":
            from .s390x import S390XBootConfigurator

            return S390XBootConfigurator(logger, apt_state, workdir, iso_root)
        case _:
            raise ValueError(f"Unsupported architecture: {arch}")


__all__ = ["make_boot_configurator_for_arch"]
@ -1,216 +0,0 @@
"""AMD64/x86_64 architecture boot configuration."""

import pathlib
import shutil

from .base import default_kernel_params
from .grub import copy_grub_modules
from .uefi import UEFIBootConfigurator


CALAMARES_PROJECTS = ["kubuntu", "lubuntu"]


class AMD64BootConfigurator(UEFIBootConfigurator):
    """Boot setup for AMD64/x86_64 architecture."""

    efi_suffix = "x64"
    grub_target = "x86_64"
    arch = "amd64"

    def mkisofs_opts(self) -> list[str | pathlib.Path]:
        # Boring mkisofs options that should be set somewhere architecture-independent.
        opts: list[str | pathlib.Path] = ["-J", "-joliet-long", "-l"]

        # Generalities on booting
        #
        # There is a 2x2 matrix of boot modes we care about: legacy or UEFI
        # boot modes and having the installer be on a cdrom or a disk. Booting
        # from cdrom uses the El Torito standard and booting from disk expects
        # an MBR or GPT partition table.
        #
        # https://wiki.osdev.org/El-Torito has a lot more background on this.

        # ## Set up the mkisofs options for legacy boot.

        # Set the El Torito boot image "name", i.e. the path on the ISO
        # containing the bootloader for legacy-cdrom boot.
        opts.extend(["-b", "boot/grub/i386-pc/eltorito.img"])

        # Back in the day, El Torito booting worked by emulating a floppy
        # drive. This hasn't been a useful way of operating for a long time.
        opts.append("-no-emul-boot")

        # Misc options to make the legacy-cdrom boot work.
        opts.extend(["-boot-load-size", "4", "-boot-info-table", "--grub2-boot-info"])

        # The bootloader to write to the MBR for legacy-disk boot.
        #
        # We use the grub stage1 bootloader, boot_hybrid.img, which then jumps
        # to the eltorito image based on the information xorriso provides it
        # via the --grub2-boot-info option.
        opts.extend(
            [
                "--grub2-mbr",
                self.scratch.joinpath("boot_hybrid.img"),
            ]
        )

        # ## Set up the mkisofs options for UEFI boot.
        opts.extend(self.get_uefi_mkisofs_opts())

        return opts

    def extract_files(self) -> None:
        with self.logger.logged("extracting AMD64 boot files"):

            # Extract UEFI files (common with ARM64)
            self.extract_uefi_files()

            # AMD64-specific: add BIOS/legacy boot files
            with self.logger.logged("adding BIOS/legacy boot files"):
                grub_pc_pkg_dir = self.scratch.joinpath("grub-pc-pkg")
                self.download_and_extract_package("grub-pc-bin", grub_pc_pkg_dir)

                grub_boot_dir = self.iso_root.joinpath("boot", "grub", "i386-pc")
                grub_boot_dir.mkdir(parents=True, exist_ok=True)

                src_grub_dir = grub_pc_pkg_dir.joinpath("usr", "lib", "grub", "i386-pc")

                shutil.copy(src_grub_dir.joinpath("eltorito.img"), grub_boot_dir)
                shutil.copy(src_grub_dir.joinpath("boot_hybrid.img"), self.scratch)

                copy_grub_modules(
                    grub_pc_pkg_dir,
                    self.iso_root,
                    "i386-pc",
                    ["*.mod", "*.lst", "*.o"],
                )

    def generate_grub_config(self) -> str:
        """Generate grub.cfg content for AMD64."""
        result = self.grub_header()

        if self.project == "ubuntu-mini-iso":
            result += """\
menuentry "Choose an Ubuntu version to install" {
    set gfxpayload=keep
    linux /casper/vmlinuz iso-chooser-menu ip=dhcp ---
    initrd /casper/initrd
}
"""
            return result

        kernel_params = default_kernel_params(self.project)

        # Main menu entry
        result += f"""\
menuentry "Try or Install {self.humanproject}" {{
    set gfxpayload=keep
    linux /casper/vmlinuz {kernel_params}
    initrd /casper/initrd
}}
"""

        # All but server get safe-graphics mode
        if self.project != "ubuntu-server":
            result += f"""\
menuentry "{self.humanproject} (safe graphics)" {{
    set gfxpayload=keep
    linux /casper/vmlinuz nomodeset {kernel_params}
    initrd /casper/initrd
|
||||
}}
|
||||
"""
|
||||
|
||||
# ubiquity based projects get OEM mode
|
||||
if "maybe-ubiquity" in kernel_params:
|
||||
oem_kernel_params = kernel_params.replace(
|
||||
"maybe-ubiquity", "only-ubiquity oem-config/enable=true"
|
||||
)
|
||||
result += f"""\
|
||||
menuentry "OEM install (for manufacturers)" {{
|
||||
set gfxpayload=keep
|
||||
linux /casper/vmlinuz {oem_kernel_params}
|
||||
initrd /casper/initrd
|
||||
}}
|
||||
"""
|
||||
|
||||
# Calamares-based projects get OEM mode
|
||||
if self.project in CALAMARES_PROJECTS:
|
||||
result += f"""\
|
||||
menuentry "OEM install (for manufacturers)" {{
|
||||
set gfxpayload=keep
|
||||
linux /casper/vmlinuz {kernel_params} oem-config/enable=true
|
||||
initrd /casper/initrd
|
||||
}}
|
||||
"""
|
||||
|
||||
# Currently only server is built with HWE, hence no safe-graphics/OEM
|
||||
if self.hwe:
|
||||
result += f"""\
|
||||
menuentry "{self.humanproject} with the HWE kernel" {{
|
||||
set gfxpayload=keep
|
||||
linux /casper/hwe-vmlinuz {kernel_params}
|
||||
initrd /casper/hwe-initrd
|
||||
}}
|
||||
"""
|
||||
|
||||
# UEFI Entries (wrapped in grub_platform check for dual BIOS/UEFI support)
|
||||
uefi_menu_entries = self.uefi_menu_entries()
|
||||
|
||||
result += f"""\
|
||||
grub_platform
|
||||
if [ "$grub_platform" = "efi" ]; then
|
||||
{uefi_menu_entries}\
|
||||
fi
|
||||
"""
|
||||
|
||||
return result
|
||||
|
||||
@staticmethod
|
||||
def generate_loopback_config(grub_content: str) -> str:
|
||||
"""Derive loopback.cfg from grub.cfg content.
|
||||
|
||||
Strips the header (up to menu_color_highlight) and the UEFI
|
||||
trailer (from grub_platform to end), and adds iso-scan/filename
|
||||
to linux lines.
|
||||
"""
|
||||
lines = grub_content.split("\n")
|
||||
start_idx = 0
|
||||
for i, line in enumerate(lines):
|
||||
if "menu_color_highlight" in line:
|
||||
start_idx = i + 1
|
||||
break
|
||||
|
||||
end_idx = len(lines)
|
||||
for i, line in enumerate(lines):
|
||||
if "grub_platform" in line:
|
||||
end_idx = i
|
||||
break
|
||||
|
||||
loopback_lines = lines[start_idx:end_idx]
|
||||
loopback_lines = [
|
||||
(
|
||||
line.replace("---", "iso-scan/filename=${iso_path} ---")
|
||||
if "linux" in line
|
||||
else line
|
||||
)
|
||||
for line in loopback_lines
|
||||
]
|
||||
|
||||
return "\n".join(loopback_lines)
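
A quick sketch of what this derivation does to a trimmed-down grub.cfg. The helper below inlines the same logic as a free function purely for illustration; the sample config is a minimal stand-in, not a real grub.cfg:

```python
def loopback_from_grub(grub_content: str) -> str:
    # Same derivation as generate_loopback_config above, inlined for illustration.
    lines = grub_content.split("\n")
    start_idx = 0
    for i, line in enumerate(lines):
        if "menu_color_highlight" in line:
            start_idx = i + 1
            break
    end_idx = len(lines)
    for i, line in enumerate(lines):
        if "grub_platform" in line:
            end_idx = i
            break
    return "\n".join(
        line.replace("---", "iso-scan/filename=${iso_path} ---")
        if "linux" in line
        else line
        for line in lines[start_idx:end_idx]
    )


sample_grub_cfg = """\
set menu_color_highlight=black/light-gray
menuentry "Try or Install Ubuntu" {
	linux /casper/vmlinuz  ---
	initrd /casper/initrd
}
grub_platform
"""
# Header and UEFI trailer are stripped; the linux line gains iso-scan/filename.
print(loopback_from_grub(sample_grub_cfg))
```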

    def make_bootable(
        self,
        project: str,
        capproject: str,
        subarch: str,
        hwe: bool,
    ) -> None:
        """Make the ISO bootable, including generating loopback.cfg."""
        super().make_bootable(project, capproject, subarch, hwe)
        grub_cfg = self.iso_root.joinpath("boot", "grub", "grub.cfg")
        grub_content = grub_cfg.read_text()
        self.iso_root.joinpath("boot", "grub", "loopback.cfg").write_text(
            self.generate_loopback_config(grub_content)
        )
@ -1,76 +0,0 @@
"""ARM 64-bit architecture boot configuration."""

import pathlib

from .uefi import UEFIBootConfigurator
from .base import default_kernel_params


class ARM64BootConfigurator(UEFIBootConfigurator):
    """Boot setup for ARM 64-bit architecture."""

    efi_suffix = "aa64"
    grub_target = "arm64"
    arch = "arm64"

    def mkisofs_opts(self) -> list[str | pathlib.Path]:
        """Return mkisofs options for ARM64."""
        opts: list[str | pathlib.Path] = [
            "-J",
            "-joliet-long",
            "-l",
            "-c",
            "boot/boot.cat",
        ]
        # Add common UEFI options
        opts.extend(self.get_uefi_mkisofs_opts())
        # ARM64-specific: partition cylinder alignment
        opts.extend(["-partition_cyl_align", "all"])
        return opts

    def extract_files(self) -> None:
        """Download and extract bootloader packages for ARM64."""
        with self.logger.logged("extracting ARM64 boot files"):
            self.extract_uefi_files()

    def generate_grub_config(self) -> str:
        """Generate grub.cfg for ARM64."""
        kernel_params = default_kernel_params(self.project)

        result = self.grub_header()

        # ARM64-specific: Snapdragon workarounds
        result += f"""\
set cmdline=
smbios --type 4 --get-string 5 --set proc_version
regexp "Snapdragon.*" "$proc_version"
if [ $? = 0 ]; then
	# Work around Snapdragon X firmware bug. cutmem is not allowed in lockdown mode.
	if [ $lockdown != "y" ]; then
		cutmem 0x8800000000 0x8fffffffff
	fi
	# arm64.nopauth works around 8cx Gen 3 firmware bug
	cmdline="clk_ignore_unused pd_ignore_unused arm64.nopauth"
fi

menuentry "Try or Install {self.humanproject}" {{
	set gfxpayload=keep
	linux /casper/vmlinuz $cmdline {kernel_params} console=tty0
	initrd /casper/initrd
}}
"""

        # HWE kernel option if available
        result += self.hwe_menu_entry(
            "vmlinuz",
            f"{kernel_params} console=tty0",
            extra_params="$cmdline ",
        )

        # Note: ARM64 HWE also includes $dtb in the original shell script,
        # but it's not actually set anywhere in the grub.cfg, so we omit it here

        # UEFI Entries (ARM64 is UEFI-only, no grub_platform check needed)
        result += self.uefi_menu_entries()

        return result
@ -1,98 +0,0 @@
"""Base classes and helper functions for boot configuration."""

import pathlib
import subprocess
import tempfile
from abc import ABC, abstractmethod

from ..builder import Logger
from ..apt_state import AptStateManager


def default_kernel_params(project: str) -> str:
    if project == "ubuntukylin":
        return (
            "file=/cdrom/preseed/ubuntu.seed locale=zh_CN "
            "keyboard-configuration/layoutcode?=cn quiet splash --- "
        )
    if project == "ubuntu-server":
        return " --- "
    return " --- quiet splash"
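
For reference, the defaults this helper produces per project (a minimal check, with the function inlined from above so it runs standalone):

```python
def default_kernel_params(project: str) -> str:
    # Inlined from the helper above for a self-contained demonstration.
    if project == "ubuntukylin":
        return (
            "file=/cdrom/preseed/ubuntu.seed locale=zh_CN "
            "keyboard-configuration/layoutcode?=cn quiet splash --- "
        )
    if project == "ubuntu-server":
        return " --- "
    return " --- quiet splash"


# Server boots without splash; other flavours get quiet splash;
# Ubuntu Kylin additionally preseeds a Chinese locale and keyboard.
assert default_kernel_params("ubuntu-server") == " --- "
assert default_kernel_params("ubuntu") == " --- quiet splash"
assert "locale=zh_CN" in default_kernel_params("ubuntukylin")
```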


class BaseBootConfigurator(ABC):
    """Abstract base class for architecture-specific boot configurators.

    Subclasses must implement:
    - extract_files(): Download and extract bootloader packages
    - mkisofs_opts(): Return mkisofs command-line options
    """

    def __init__(
        self,
        logger: Logger,
        apt_state: AptStateManager,
        workdir: pathlib.Path,
        iso_root: pathlib.Path,
    ) -> None:
        self.logger = logger
        self.apt_state = apt_state
        self.scratch = workdir.joinpath("boot-stuff")
        self.iso_root = iso_root

    def download_and_extract_package(
        self, pkg_name: str, target_dir: pathlib.Path
    ) -> None:
        """Download a Debian package and extract its contents to target directory."""
        self.logger.log(f"downloading and extracting {pkg_name}")
        target_dir.mkdir(exist_ok=True, parents=True)
        with tempfile.TemporaryDirectory() as tdir_str:
            tdir = pathlib.Path(tdir_str)
            self.apt_state.download_direct(pkg_name, tdir)
            [deb] = tdir.glob("*.deb")
            dpkg_proc = subprocess.Popen(
                ["dpkg-deb", "--fsys-tarfile", deb], stdout=subprocess.PIPE
            )
            tar_proc = subprocess.Popen(
                ["tar", "xf", "-", "-C", target_dir], stdin=dpkg_proc.stdout
            )
            assert dpkg_proc.stdout is not None
            dpkg_proc.stdout.close()
            tar_proc.communicate()
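
The two-process wiring above is the subprocess equivalent of the shell pipeline `dpkg-deb --fsys-tarfile pkg.deb | tar xf - -C target`. A minimal sketch of the same pattern with generic commands (`printf` and `tr` stand in for `dpkg-deb` and `tar`):

```python
import subprocess

# Equivalent to: printf 'hello' | tr a-z A-Z
producer = subprocess.Popen(["printf", "hello"], stdout=subprocess.PIPE)
consumer = subprocess.Popen(
    ["tr", "a-z", "A-Z"], stdin=producer.stdout, stdout=subprocess.PIPE
)
assert producer.stdout is not None
# Close the parent's copy of the pipe so the consumer sees EOF
# once the producer exits -- same reason as in the method above.
producer.stdout.close()
out, _ = consumer.communicate()
print(out.decode())  # HELLO
```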

    @abstractmethod
    def extract_files(self) -> None:
        """Download and extract bootloader packages to the boot tree.

        Each architecture must implement this to set up its specific bootloader files.
        """
        ...

    @abstractmethod
    def mkisofs_opts(self) -> list[str | pathlib.Path]:
        """Return mkisofs command-line options for this architecture.

        Returns:
            List of command-line options to pass to mkisofs/xorriso.
        """
        ...

    def post_process_iso(self, iso_path: pathlib.Path) -> None:
        """Post-process the ISO image after xorriso creates it."""

    def make_bootable(
        self,
        project: str,
        capproject: str,
        subarch: str,
        hwe: bool,
    ) -> None:
        """Make the ISO bootable by extracting bootloader files."""
        self.project = project
        self.humanproject = capproject.replace("-", " ")
        self.subarch = subarch
        self.hwe = hwe
        self.scratch.mkdir(exist_ok=True)
        with self.logger.logged("configuring boot"):
            self.extract_files()
@ -1,104 +0,0 @@
"""GRUB boot configuration for multiple architectures."""

import pathlib
import shutil
from abc import abstractmethod

from .base import BaseBootConfigurator


def copy_grub_common_files(grub_pkg_dir: pathlib.Path, iso_root: pathlib.Path) -> None:
    fonts_dir = iso_root.joinpath("boot", "grub", "fonts")
    fonts_dir.mkdir(parents=True, exist_ok=True)

    src = grub_pkg_dir.joinpath("usr", "share", "grub", "unicode.pf2")
    dst = fonts_dir.joinpath("unicode.pf2")
    shutil.copy(src, dst)


def copy_grub_modules(
    grub_pkg_dir: pathlib.Path,
    iso_root: pathlib.Path,
    grub_target: str,
    patterns: list[str],
) -> None:
    """Copy GRUB module files matching given patterns from src to dest."""
    src_dir = grub_pkg_dir.joinpath("usr", "lib", "grub", grub_target)
    dest_dir = iso_root.joinpath("boot", "grub", grub_target)
    dest_dir.mkdir(parents=True, exist_ok=True)

    for pat in patterns:
        for file in src_dir.glob(pat):
            shutil.copy(file, dest_dir)
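
A self-contained sketch of how this helper behaves, using throwaway temp directories (the helper is inlined, untyped, purely for the demonstration):

```python
import pathlib
import shutil
import tempfile


def copy_grub_modules(grub_pkg_dir, iso_root, grub_target, patterns):
    # Inlined from the helper above for a self-contained demonstration.
    src_dir = grub_pkg_dir.joinpath("usr", "lib", "grub", grub_target)
    dest_dir = iso_root.joinpath("boot", "grub", grub_target)
    dest_dir.mkdir(parents=True, exist_ok=True)
    for pat in patterns:
        for file in src_dir.glob(pat):
            shutil.copy(file, dest_dir)


with tempfile.TemporaryDirectory() as tmp:
    pkg = pathlib.Path(tmp, "grub-pc-pkg")
    iso = pathlib.Path(tmp, "iso")
    src = pkg.joinpath("usr", "lib", "grub", "i386-pc")
    src.mkdir(parents=True)
    src.joinpath("linux.mod").write_bytes(b"")
    src.joinpath("command.lst").write_text("")
    src.joinpath("README").write_text("")  # matches no pattern, so not copied
    copy_grub_modules(pkg, iso, "i386-pc", ["*.mod", "*.lst"])
    copied = sorted(p.name for p in iso.joinpath("boot", "grub", "i386-pc").iterdir())
    print(copied)  # ['command.lst', 'linux.mod']
```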


class GrubBootConfigurator(BaseBootConfigurator):
    """Base class for architectures that use GRUB (all except S390X).

    Common GRUB functionality shared across AMD64, ARM64, PPC64EL, and RISC-V64.
    Subclasses must implement generate_grub_config().
    """

    def grub_header(self, include_loadfont: bool = True) -> str:
        """Return common GRUB config header (timeout, colors).

        Args:
            include_loadfont: Whether to include 'loadfont unicode'
                (not needed for RISC-V)
        """
        result = "set timeout=30\n\n"
        if include_loadfont:
            result += "loadfont unicode\n\n"
        result += """\
set menu_color_normal=white/black
set menu_color_highlight=black/light-gray

"""
        return result

    def hwe_menu_entry(
        self,
        kernel_name: str,
        kernel_params: str,
        extra_params: str = "",
    ) -> str:
        """Return HWE kernel menu entry if HWE is enabled.

        Args:
            kernel_name: Kernel binary name (vmlinuz or vmlinux)
            kernel_params: Kernel parameters to append
            extra_params: Additional parameters (e.g., console=tty0, $cmdline)
        """
        if not self.hwe:
            return ""
        return f"""\
menuentry "{self.humanproject} with the HWE kernel" {{
	set gfxpayload=keep
	linux /casper/hwe-{kernel_name} {extra_params}{kernel_params}
	initrd /casper/hwe-initrd
}}
"""

    @abstractmethod
    def generate_grub_config(self) -> str:
        """Generate grub.cfg content.

        Each GRUB-based architecture must implement this to return the
        GRUB configuration.
        """
        ...

    def make_bootable(
        self,
        project: str,
        capproject: str,
        subarch: str,
        hwe: bool,
    ) -> None:
        """Make the ISO bootable by extracting files and generating GRUB config."""
        super().make_bootable(project, capproject, subarch, hwe)
        with self.logger.logged("generating grub config"):
            content = self.generate_grub_config()
            grub_dir = self.iso_root.joinpath("boot", "grub")
            grub_dir.mkdir(parents=True, exist_ok=True)
            grub_dir.joinpath("grub.cfg").write_text(content)
@ -1,74 +0,0 @@
"""PowerPC 64-bit Little Endian architecture boot configuration."""

import pathlib
import shutil

from .grub import (
    copy_grub_common_files,
    copy_grub_modules,
    GrubBootConfigurator,
)
from .base import default_kernel_params


class PPC64ELBootConfigurator(GrubBootConfigurator):
    """Boot setup for PowerPC 64-bit Little Endian architecture."""

    def mkisofs_opts(self) -> list[str | pathlib.Path]:
        """Return mkisofs options for PPC64EL."""
        return []

    def extract_files(self) -> None:
        """Download and extract bootloader packages for PPC64EL."""
        self.logger.log("extracting PPC64EL boot files")

        grub_pkg_dir = self.scratch.joinpath("grub-pkg")

        # Download and extract bootloader packages
        self.download_and_extract_package("grub2-common", grub_pkg_dir)
        self.download_and_extract_package("grub-ieee1275-bin", grub_pkg_dir)

        # Add common files for GRUB to tree
        copy_grub_common_files(grub_pkg_dir, self.iso_root)

        # Add IEEE1275 ppc boot files
        ppc_dir = self.iso_root.joinpath("ppc")
        ppc_dir.mkdir()

        src_grub_dir = grub_pkg_dir.joinpath("usr", "lib", "grub", "powerpc-ieee1275")

        # Copy bootinfo.txt to ppc directory
        shutil.copy(
            src_grub_dir.joinpath("bootinfo.txt"), ppc_dir.joinpath("bootinfo.txt")
        )

        # Copy eltorito.elf to boot/grub as powerpc.elf
        shutil.copy(
            src_grub_dir.joinpath("eltorito.elf"),
            self.iso_root.joinpath("boot", "grub", "powerpc.elf"),
        )

        # Copy GRUB modules
        copy_grub_modules(
            grub_pkg_dir, self.iso_root, "powerpc-ieee1275", ["*.mod", "*.lst"]
        )

    def generate_grub_config(self) -> str:
        """Generate grub.cfg for PPC64EL."""
        kernel_params = default_kernel_params(self.project)

        result = self.grub_header()

        # Main menu entry
        result += f"""\
menuentry "Try or Install {self.humanproject}" {{
	set gfxpayload=keep
	linux /casper/vmlinux quiet {kernel_params}
	initrd /casper/initrd
}}
"""

        # HWE kernel option if available
        result += self.hwe_menu_entry("vmlinux", kernel_params, extra_params="quiet ")

        return result
@ -1,207 +0,0 @@
"""RISC-V 64-bit architecture boot configuration."""

import pathlib
import shutil

from .grub import GrubBootConfigurator, copy_grub_common_files, copy_grub_modules


def copy_unsigned_monolithic_grub(
    grub_pkg_dir: pathlib.Path,
    efi_suffix: str,
    grub_target: str,
    iso_root: pathlib.Path,
) -> None:
    efi_boot_dir = iso_root.joinpath("EFI", "boot")
    efi_boot_dir.mkdir(parents=True, exist_ok=True)

    shutil.copy(
        grub_pkg_dir.joinpath(
            "usr",
            "lib",
            "grub",
            grub_target,
            "monolithic",
            f"gcd{efi_suffix}.efi",
        ),
        efi_boot_dir.joinpath(f"boot{efi_suffix}.efi"),
    )

    copy_grub_modules(grub_pkg_dir, iso_root, grub_target, ["*.mod", "*.lst"])


class RISCV64BootConfigurator(GrubBootConfigurator):
    """Boot setup for RISC-V 64-bit architecture."""

    def mkisofs_opts(self) -> list[str | pathlib.Path]:
        """Return mkisofs options for RISC-V64."""
        efi_img = self.scratch.joinpath("efi.img")

        return [
            "-joliet",
            "on",
            "-compliance",
            "joliet_long_names",
            "--append_partition",
            "2",
            "0xef",
            efi_img,
            "-boot_image",
            "any",
            "partition_offset=10240",
            "-boot_image",
            "any",
            "partition_cyl_align=all",
            "-boot_image",
            "any",
            "efi_path=--interval:appended_partition_2:all::",
            "-boot_image",
            "any",
            "appended_part_as=gpt",
            "-boot_image",
            "any",
            "cat_path=/boot/boot.cat",
            "-fs",
            "64m",
        ]

    def extract_files(self) -> None:
        """Download and extract bootloader packages for RISC-V64."""
        self.logger.log("extracting RISC-V64 boot files")
        u_boot_dir = self.scratch.joinpath("u-boot-sifive")

        grub_pkg_dir = self.scratch.joinpath("grub-pkg")

        # Download and extract bootloader packages
        self.download_and_extract_package("grub2-common", grub_pkg_dir)
        self.download_and_extract_package("grub-efi-riscv64-bin", grub_pkg_dir)
        self.download_and_extract_package("grub-efi-riscv64-unsigned", grub_pkg_dir)
        self.download_and_extract_package("u-boot-sifive", u_boot_dir)

        # Add GRUB to tree
        copy_grub_common_files(grub_pkg_dir, self.iso_root)

        copy_unsigned_monolithic_grub(
            grub_pkg_dir, "riscv64", "riscv64-efi", self.iso_root
        )

        # Extract DTBs to tree
        self.logger.log("extracting device tree files")
        kernel_layer = self.scratch.joinpath("kernel-layer")
        squashfs_path = self.iso_root.joinpath(
            "casper", "ubuntu-server-minimal.squashfs"
        )

        # Extract device tree firmware from squashfs
        self.logger.run(
            [
                "unsquashfs",
                "-no-xattrs",
                "-d",
                kernel_layer,
                squashfs_path,
                "usr/lib/firmware",
            ],
            check=True,
        )

        # Copy DTBs if they exist
        dtb_dir = self.iso_root.joinpath("dtb")
        dtb_dir.mkdir(parents=True, exist_ok=True)

        firmware_dir = kernel_layer.joinpath("usr", "lib", "firmware")

        for dtb_file in firmware_dir.glob("*/device-tree/*"):
            if dtb_file.is_file():
                shutil.copy(dtb_file, dtb_dir)

        # Create ESP image with GRUB and dtbs
        efi_img = self.scratch.joinpath("efi.img")
        self.logger.run(
            ["mkfs.msdos", "-n", "ESP", "-C", "-v", efi_img, "32768"], check=True
        )

        # Add EFI files to ESP
        efi_dir = self.iso_root.joinpath("EFI")
        self.logger.run(["mcopy", "-s", "-i", efi_img, efi_dir, "::/."], check=True)

        # Add DTBs to ESP
        self.logger.run(["mcopy", "-s", "-i", efi_img, dtb_dir, "::/."], check=True)

    def generate_grub_config(self) -> str:
        """Generate grub.cfg for RISC-V64."""
        result = self.grub_header(include_loadfont=False)

        # Main menu entry
        result += f"""\
menuentry "Try or Install {self.humanproject}" {{
	set gfxpayload=keep
	linux /casper/vmlinux efi=debug sysctl.kernel.watchdog_thresh=60 ---
	initrd /casper/initrd
}}
"""

        # HWE kernel option if available
        result += self.hwe_menu_entry(
            "vmlinux",
            "---",
            extra_params="efi=debug sysctl.kernel.watchdog_thresh=60 ",
        )

        return result

    def post_process_iso(self, iso_path: pathlib.Path) -> None:
        """Add GPT partitions with U-Boot for the SiFive Unmatched board.

        The SiFive Unmatched board needs a GPT table containing U-Boot in
        order to boot. U-Boot does not currently support booting from a CD,
        so the GPT table also contains an entry pointing to the ESP so that
        U-Boot can find it.
        """
        u_boot_dir = self.scratch.joinpath(
            "u-boot-sifive", "usr", "lib", "u-boot", "sifive_unmatched"
        )
        self.logger.run(
            [
                "sgdisk",
                iso_path,
                "--set-alignment=2",
                "-d",
                "1",
                "-n",
                "1:2082:10273",
                "-c",
                "1:loader2",
                "-t",
                "1:2E54B353-1271-4842-806F-E436D6AF6985",
                "-n",
                "3:10274:12321",
                "-c",
                "3:loader1",
                "-t",
                "3:5B193300-FC78-40CD-8002-E86C45580B47",
                "-c",
                "2:ESP",
                "-r=2:3",
            ],
        )
        self.logger.run(
            [
                "dd",
                f"if={u_boot_dir / 'u-boot.itb'}",
                f"of={iso_path}",
                "bs=512",
                "seek=2082",
                "conv=notrunc",
            ],
        )
        self.logger.run(
            [
                "dd",
                f"if={u_boot_dir / 'u-boot-spl.bin'}",
                f"of={iso_path}",
                "bs=512",
                "seek=10274",
                "conv=notrunc",
            ],
        )
@ -1,206 +0,0 @@
"""IBM S/390 architecture boot configuration."""

import pathlib
import shutil
import struct

from .base import BaseBootConfigurator


README_dot_boot = """\
About the S/390 installation CD
===============================

It is possible to "boot" the installation system off this CD using
the files provided in the /boot directory.

Although you can boot the installer from this CD, the installation
itself is *not* actually done from the CD. Once the initrd is loaded,
the installer will ask you to configure your network connection and
uses the network-console component to allow you to continue the
installation over SSH. The rest of the installation is done over the
network: all installer components and Debian packages are retrieved
from a mirror.

Instead of SSH, one can also use the ASCII terminal available in HMC.

Exporting full .iso contents (including the hidden .disk directory)
allows one to use the result as a valid mirror for installation.
"""

ubuntu_dot_exec = """\
/* REXX EXEC TO IPL Ubuntu for */
/* z Systems FROM THE VM READER. */
/* */
'CP CLOSE RDR'
'PURGE RDR ALL'
'SPOOL PUNCH * RDR'
'PUNCH KERNEL UBUNTU * (NOHEADER'
'PUNCH PARMFILE UBUNTU * (NOHEADER'
'PUNCH INITRD UBUNTU * (NOHEADER'
'CHANGE RDR ALL KEEP NOHOLD'
'CP IPL 000C CLEAR'
"""

ubuntu_dot_ins = """\
* Ubuntu for IBM Z (default kernel)
kernel.ubuntu 0x00000000
initrd.off 0x0001040c
initrd.siz 0x00010414
parmfile.ubuntu 0x00010480
initrd.ubuntu 0x01000000
"""


def gen_s390_cd_kernel(
    kernel: pathlib.Path, initrd: pathlib.Path, cmdline: str, outfile: pathlib.Path
) -> None:
    """Generate a bootable S390X CD kernel image.

    This is a Python translation of gen-s390-cd-kernel.pl from debian-cd.
    It creates a bootable image for S/390 architecture by combining kernel,
    initrd, and boot parameters in a specific format.
    """
    # Calculate sizes
    initrd_size = initrd.stat().st_size

    # The initrd is placed at a fixed offset of 16 MiB
    initrd_offset = 0x1000000

    # Calculate total boot image size (rounded up to 4K blocks)
    boot_size = ((initrd_offset + initrd_size) >> 12) + 1
    boot_size = boot_size << 12

    # Validate cmdline length (max 896 bytes)
    if len(cmdline) >= 896:
        raise ValueError(f"Kernel commandline too long ({len(cmdline)} bytes)")

    # Create output file and fill with zeros
    with outfile.open("wb") as out_fh:
        # Fill entire file with zeros
        out_fh.write(b"\x00" * boot_size)

        # Copy kernel to offset 0
        out_fh.seek(0)
        with kernel.open("rb") as kernel_fh:
            out_fh.write(kernel_fh.read())

        # Copy initrd to offset 0x1000000 (16 MiB)
        out_fh.seek(initrd_offset)
        with initrd.open("rb") as initrd_fh:
            out_fh.write(initrd_fh.read())

        # Write boot loader control value at offset 4
        # This tells the S/390 boot loader where to find the kernel
        out_fh.seek(4)
        out_fh.write(struct.pack("!I", 0x80010000))

        # Write kernel command line at offset 0x10480
        out_fh.seek(0x10480)
        out_fh.write(cmdline.encode("utf-8"))

        # Write initrd parameters
        # Initrd offset at 0x1040C
        out_fh.seek(0x1040C)
        out_fh.write(struct.pack("!I", initrd_offset))

        # Initrd size at 0x10414
        out_fh.seek(0x10414)
        out_fh.write(struct.pack("!I", initrd_size))
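
Two details of this layout are easy to verify in isolation: the control words are big-endian 32-bit values (`struct.pack("!I", ...)`, matching the hex offsets in `ubuntu_dot_ins`), and the image size is rounded up to the next 4 KiB block. A small sanity-check sketch:

```python
import struct

# Big-endian 32-bit words: the 16 MiB initrd offset serialises as 01 00 00 00,
# and the boot loader control value at offset 4 as 80 01 00 00.
assert struct.pack("!I", 0x1000000) == b"\x01\x00\x00\x00"
assert struct.pack("!I", 0x80010000) == b"\x80\x01\x00\x00"


def round_up_4k(n: int) -> int:
    # Same rounding expression as in gen_s390_cd_kernel; note it always
    # adds one extra block, even for exact 4 KiB multiples.
    return ((n >> 12) + 1) << 12


assert round_up_4k(0x1000000 + 5) == 0x1001000
assert round_up_4k(0x1000) == 0x2000
```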


class S390XBootConfigurator(BaseBootConfigurator):
    """Boot setup for IBM S/390 architecture."""

    def mkisofs_opts(self) -> list[str | pathlib.Path]:
        """Return mkisofs options for S390X."""
        return [
            "-J",
            "-no-emul-boot",
            "-b",
            "boot/ubuntu.ikr",
        ]

    def extract_files(self) -> None:
        """Set up boot files for S390X."""
        self.logger.log("extracting S390X boot files")
        boot_dir = self.iso_root.joinpath("boot")
        boot_dir.mkdir(parents=True, exist_ok=True)

        # Copy static .ins & exec scripts, docs from data directory
        self.iso_root.joinpath("README.boot").write_text(README_dot_boot)
        boot_dir.joinpath("ubuntu.exec").write_text(ubuntu_dot_exec)
        boot_dir.joinpath("ubuntu.ins").write_text(ubuntu_dot_ins)

        # Move kernel image to the name used in .ins & exec scripts
        kernel_src = self.iso_root.joinpath("casper", "vmlinuz")
        kernel_dst = boot_dir.joinpath("kernel.ubuntu")
        kernel_src.replace(kernel_dst)

        # Move initrd to the name used in .ins & exec scripts
        initrd_src = self.iso_root.joinpath("casper", "initrd")
        initrd_dst = boot_dir.joinpath("initrd.ubuntu")
        initrd_src.replace(initrd_dst)

        # Compute initrd offset & size, store in files used by .ins & exec scripts
        # Offset is always 0x1000000 (16 MiB)
        initrd_offset_file = boot_dir.joinpath("initrd.off")
        with initrd_offset_file.open("wb") as f:
            f.write(struct.pack("!I", 0x1000000))

        # Size is the actual size of the initrd
        initrd_size = initrd_dst.stat().st_size
        initrd_size_file = boot_dir.joinpath("initrd.siz")
        with initrd_size_file.open("wb") as f:
            f.write(struct.pack("!I", initrd_size))

        # Compute cmdline, store in parmfile used by .ins & exec scripts
        parmfile = boot_dir.joinpath("parmfile.ubuntu")
        with parmfile.open("w") as f:
            f.write(" --- ")

        # Generate secondary top-level ubuntu.ins file
        # This transforms lines not starting with * by prepending "boot/"
        ubuntu_ins_src = boot_dir.joinpath("ubuntu.ins")
        ubuntu_ins_dst = self.iso_root.joinpath("ubuntu.ins")
        if ubuntu_ins_src.exists():
            self.logger.run(
                ["sed", "-e", "s,^[^*],boot/&,g", ubuntu_ins_src],
                stdout=ubuntu_ins_dst.open("w"),
                check=True,
            )
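
The sed expression `s,^[^*],boot/&,g` prefixes every non-comment line (lines not starting with `*`) with `boot/`; the `&` re-inserts the matched first character. A pure-Python sketch of the same transformation, run on a shortened copy of the `.ins` content:

```python
import re

ins_content = """\
* Ubuntu for IBM Z (default kernel)
kernel.ubuntu 0x00000000
initrd.ubuntu 0x01000000
"""

# Equivalent to: sed -e 's,^[^*],boot/&,g'
# ([^*\n] rather than [^*] so an empty line is left untouched, as sed would).
rewritten = re.sub(r"^([^*\n])", r"boot/\1", ins_content, flags=re.M)
print(rewritten)
```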

        # Generate QEMU-KVM boot image using gen_s390_cd_kernel
        cmdline = parmfile.read_text().strip()
        ikr_file = boot_dir.joinpath("ubuntu.ikr")
        gen_s390_cd_kernel(kernel_dst, initrd_dst, cmdline, ikr_file)

        # Extract bootloader signing certificate
        installed_pem = pathlib.Path("/usr/lib/s390-tools/stage3.pem")
        squashfs_root = self.iso_root.joinpath("squashfs-root")
        squashfs_path = self.iso_root.joinpath(
            "casper", "ubuntu-server-minimal.squashfs"
        )

        if squashfs_path.exists():
            self.logger.run(
                [
                    "unsquashfs",
                    "-no-xattrs",
                    "-i",
                    "-d",
                    squashfs_root,
                    squashfs_path,
                    installed_pem,
                ],
                check=True,
            )

            # Move certificate to iso root
            cert_src = squashfs_root.joinpath(str(installed_pem).lstrip("/"))
            cert_dst = self.iso_root.joinpath("ubuntu.pem")
            if cert_src.exists():
                cert_src.replace(cert_dst)

            # Clean up squashfs extraction
            shutil.rmtree(squashfs_root)
@ -1,168 +0,0 @@
"""UEFI boot configuration for AMD64 and ARM64 architectures."""

import pathlib
import shutil

from ..builder import Logger
from .grub import copy_grub_common_files, GrubBootConfigurator


def copy_signed_shim_grub(
    shim_pkg_dir: pathlib.Path,
    grub_pkg_dir: pathlib.Path,
    efi_suffix: str,
    grub_target: str,
    iso_root: pathlib.Path,
) -> None:
    efi_boot_dir = iso_root.joinpath("EFI", "boot")
    efi_boot_dir.mkdir(parents=True, exist_ok=True)

    shutil.copy(
        shim_pkg_dir.joinpath(
            "usr", "lib", "shim", f"shim{efi_suffix}.efi.signed.latest"
        ),
        efi_boot_dir.joinpath(f"boot{efi_suffix}.efi"),
    )
    shutil.copy(
        shim_pkg_dir.joinpath("usr", "lib", "shim", f"mm{efi_suffix}.efi"),
        efi_boot_dir.joinpath(f"mm{efi_suffix}.efi"),
    )
    shutil.copy(
        grub_pkg_dir.joinpath(
            "usr",
            "lib",
            "grub",
            f"{grub_target}-efi-signed",
            f"gcd{efi_suffix}.efi.signed",
        ),
        efi_boot_dir.joinpath(f"grub{efi_suffix}.efi"),
    )

    grub_boot_dir = iso_root.joinpath("boot", "grub", f"{grub_target}-efi")
    grub_boot_dir.mkdir(parents=True, exist_ok=True)

    src_grub_dir = grub_pkg_dir.joinpath("usr", "lib", "grub", f"{grub_target}-efi")
    for mod_file in src_grub_dir.glob("*.mod"):
        shutil.copy(mod_file, grub_boot_dir)
    for lst_file in src_grub_dir.glob("*.lst"):
        shutil.copy(lst_file, grub_boot_dir)


def create_eltorito_esp_image(
    logger: Logger, iso_root: pathlib.Path, target_file: pathlib.Path
) -> None:
    logger.log("creating El Torito ESP image")
    efi_dir = iso_root.joinpath("EFI")

    # Calculate size: du -s --apparent-size --block-size=1024 + 1024
    result = logger.run(
        ["du", "-s", "--apparent-size", "--block-size=1024", efi_dir],
        capture_output=True,
        text=True,
        check=True,
    )
    size_kb = int(result.stdout.split()[0]) + 1024

    # Create filesystem: mkfs.msdos -n ESP -C -v
    logger.run(
        ["mkfs.msdos", "-n", "ESP", "-C", "-v", target_file, str(size_kb)],
        check=True,
    )

    # Copy files: mcopy -s -i target_file EFI ::/.
    logger.run(["mcopy", "-s", "-i", target_file, efi_dir, "::/."], check=True)


class UEFIBootConfigurator(GrubBootConfigurator):
    """Base class for UEFI-based architectures (AMD64, ARM64).

    Subclasses should set:
    - efi_suffix: EFI binary suffix (e.g., "x64", "aa64")
    - grub_target: GRUB target name (e.g., "x86_64", "arm64")
    """

    # Subclasses must override these
    efi_suffix: str = ""
    grub_target: str = ""
    arch: str = ""

    def get_uefi_grub_packages(self) -> list[str]:
        """Return list of UEFI GRUB packages to download."""
        return [
            "grub2-common",
            f"grub-efi-{self.arch}-bin",
            f"grub-efi-{self.arch}-signed",
        ]

    def extract_uefi_files(self) -> None:
        """Extract common UEFI files to boot tree."""
        shim_pkg_dir = self.scratch.joinpath("shim-pkg")
        grub_pkg_dir = self.scratch.joinpath("grub-pkg")

        # Download UEFI packages
        self.download_and_extract_package("shim-signed", shim_pkg_dir)
        for pkg in self.get_uefi_grub_packages():
            self.download_and_extract_package(pkg, grub_pkg_dir)

        # Add common files for GRUB to tree
        copy_grub_common_files(grub_pkg_dir, self.iso_root)

        # Add EFI GRUB to tree
        copy_signed_shim_grub(
            shim_pkg_dir,
            grub_pkg_dir,
            self.efi_suffix,
            self.grub_target,
            self.iso_root,
        )

        # Create ESP image for El-Torito catalog and hybrid boot
        create_eltorito_esp_image(
            self.logger, self.iso_root, self.scratch.joinpath("cd-boot-efi.img")
|
||||
)
|
||||
|
||||
def uefi_menu_entries(self) -> str:
|
||||
"""Return UEFI firmware menu entries."""
|
||||
return """\
|
||||
menuentry 'Boot from next volume' {
|
||||
exit 1
|
||||
}
|
||||
menuentry 'UEFI Firmware Settings' {
|
||||
fwsetup
|
||||
}
|
||||
"""
|
||||
|
||||
def get_uefi_mkisofs_opts(self) -> list[str | pathlib.Path]:
|
||||
"""Return common UEFI mkisofs options."""
|
||||
# To make our ESP / El-Torito image compliant with MBR/GPT standards,
|
||||
# we first append it as a partition and then point the El Torito at
|
||||
# it. See https://lists.debian.org/debian-cd/2019/07/msg00007.html
|
||||
opts: list[str | pathlib.Path] = [
|
||||
"-append_partition",
|
||||
"2",
|
||||
"0xef",
|
||||
self.scratch.joinpath("cd-boot-efi.img"),
|
||||
"-appended_part_as_gpt",
|
||||
]
|
||||
|
||||
# Some BIOSes ignore removable disks with no partitions marked bootable
|
||||
# in the MBR. Make sure our protective MBR partition is marked bootable.
|
||||
opts.append("--mbr-force-bootable")
|
||||
|
||||
# Start a new entry in the el torito boot catalog
|
||||
opts.append("-eltorito-alt-boot")
|
||||
|
||||
# Specify where the el torito UEFI boot image "name". We use a special
|
||||
# syntax available in latest xorriso to point at our newly-created
|
||||
# partition.
|
||||
opts.extend(["-e", "--interval:appended_partition_2:all::"])
|
||||
|
||||
# Whether to emulate a floppy or not is a per-boot-catalog-entry
|
||||
# thing, so we need to say it again.
|
||||
opts.append("-no-emul-boot")
|
||||
|
||||
# Create a partition table entry that covers the iso9660 filesystem
|
||||
opts.extend(["-partition_offset", "16"])
|
||||
|
||||
return opts
|
||||
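The ESP sizing step in `create_eltorito_esp_image` shells out to `du -s --apparent-size --block-size=1024` and pads the result by 1024 KiB before running `mkfs.msdos`. A pure-Python sketch of that calculation (a hypothetical helper, not part of the deleted module; it counts regular files only, whereas `du` also charges for directory entries):

```python
import pathlib

def esp_image_size_kb(efi_dir: pathlib.Path) -> int:
    """Apparent size of the EFI tree in 1 KiB blocks, plus 1024 KiB of
    slack, mirroring the `du ... + 1024` sizing used for the ESP image."""
    total = 0
    for path in efi_dir.rglob("*"):
        if path.is_file() and not path.is_symlink():
            total += path.stat().st_size
    # round up to whole 1 KiB blocks, then add 1 MiB of headroom
    return -(-total // 1024) + 1024
```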
@@ -1,368 +0,0 @@
import contextlib
import json
import pathlib
import shlex
import shutil
import subprocess
import sys

from isobuilder.apt_state import AptStateManager
from isobuilder.boot import make_boot_configurator_for_arch
from isobuilder.gpg_key import EphemeralGPGKey
from isobuilder.pool_builder import PoolBuilder

# Constants
PACKAGE_BATCH_SIZE = 200
MAX_CMD_DISPLAY_LENGTH = 80


def package_list_packages(package_list_file: pathlib.Path) -> list[str]:
    # Parse germinate output to extract package names. Germinate is Ubuntu's
    # package dependency resolver that outputs dependency trees for seeds (like
    # "ship-live" or "server-ship-live").
    #
    # Germinate output format has 2 header lines at the start and 2 footer lines
    # at the end (showing statistics), so we skip them with [2:-2].
    # Each data line starts with the package name followed by whitespace and
    # dependency info. This format is stable but if germinate ever changes its
    # header/footer count, this will break silently.
    lines = package_list_file.read_text().splitlines()[2:-2]
    return [line.split(None, 1)[0] for line in lines]


def make_sources_text(
    series: str, gpg_key: EphemeralGPGKey, components: list[str], mountpoint: str
) -> str:
    """Generate a deb822-format apt source file for the ISO's package pool.

    deb822 is the modern apt sources format (see sources.list(5) and deb822(5)).
    It uses RFC822-style fields where multi-line values must be indented with a
    leading space, and empty lines within a value are represented as " ."
    (space-dot). This format is required for inline GPG keys in the Signed-By
    field.
    """
    key = gpg_key.export_public()
    quoted_key = []
    for line in key.splitlines():
        if not line:
            quoted_key.append(" .")
        else:
            quoted_key.append(" " + line)
    return f"""\
Types: deb
URIs: file://{mountpoint}
Suites: {series}
Components: {" ".join(components)}
Check-Date: no
Signed-By:
""" + "\n".join(
        quoted_key
    )


class Logger:

    def __init__(self):
        self._indent = ""

    def log(self, msg):
        print(self._indent + msg, file=sys.stderr)

    @contextlib.contextmanager
    def logged(self, msg, done_msg=None):
        self.log(msg)
        self._indent += "  "
        try:
            yield
        finally:
            self._indent = self._indent[:-2]
            if done_msg is not None:
                self.log(done_msg)

    def msg_for_cmd(self, cmd, limit_length=True, cwd=None) -> str:
        if cwd is None:
            _cwd = pathlib.Path().cwd()
        else:
            _cwd = cwd
        fmted_cmd = []
        for arg in cmd:
            if isinstance(arg, pathlib.Path):
                if arg.is_relative_to(_cwd):
                    arg = arg.relative_to(_cwd)
            arg = str(arg)
            fmted_cmd.append(shlex.quote(arg))
        fmted_cmd_str = " ".join(fmted_cmd)
        if len(fmted_cmd_str) > MAX_CMD_DISPLAY_LENGTH and limit_length:
            fmted_cmd_str = fmted_cmd_str[:MAX_CMD_DISPLAY_LENGTH] + "..."
        msg = f"running `{fmted_cmd_str}`"
        if cwd is not None:
            msg += f" in {cwd}"
        return msg

    def run(
        self, cmd: list[str | pathlib.Path], *args, limit_length=True, check=True, **kw
    ):
        with self.logged(
            self.msg_for_cmd(cmd, cwd=kw.get("cwd"), limit_length=limit_length)
        ):
            return subprocess.run(cmd, *args, check=check, **kw)


class ISOBuilder:

    def __init__(self, workdir: pathlib.Path):
        self.workdir = workdir
        self.logger = Logger()
        self.iso_root = workdir.joinpath("iso-root")
        self._config: dict | None = None
        self._gpg_key = self._apt_state = None

    # UTILITY STUFF

    def _read_config(self):
        with self.workdir.joinpath("config.json").open() as fp:
            self._config = json.load(fp)

    @property
    def config(self):
        if self._config is None:
            self._read_config()
        return self._config

    def save_config(self):
        with self.workdir.joinpath("config.json").open("w") as fp:
            json.dump(self._config, fp)

    @property
    def arch(self):
        return self.config["arch"]

    @property
    def series(self):
        if self._config is None:
            self._read_config()
        return self._config["series"]

    @property
    def gpg_key(self):
        if self._gpg_key is None:
            self._gpg_key = EphemeralGPGKey(
                self.logger, self.workdir.joinpath("gpg-home")
            )
        return self._gpg_key

    @property
    def apt_state(self):
        if self._apt_state is None:
            self._apt_state = AptStateManager(
                self.logger, self.series, self.workdir.joinpath("apt-state")
            )
        return self._apt_state

    # COMMANDS

    def init(self, disk_info: str, series: str, arch: str):
        self.logger.log("creating directories")
        self.workdir.mkdir(exist_ok=True)
        self.iso_root.mkdir()
        dot_disk = self.iso_root.joinpath(".disk")
        dot_disk.mkdir()

        self.logger.log("saving config")
        self._config = {"arch": arch, "series": series}
        self.save_config()

        self.logger.log("populating .disk")
        dot_disk.joinpath("base_installable").touch()
        dot_disk.joinpath("cd_type").write_text("full_cd/single\n")
        dot_disk.joinpath("info").write_text(disk_info)
        self.iso_root.joinpath("casper").mkdir()

        self.gpg_key.create()

    def setup_apt(self, chroot: pathlib.Path):
        self.apt_state.setup(chroot)

    def generate_pool(self, package_list_file: pathlib.Path):
        # do we need any of the symlinks we create here??
        self.logger.log("creating pool skeleton")
        self.iso_root.joinpath("ubuntu").symlink_to(".")
        if self.arch not in ("amd64", "i386"):
            self.iso_root.joinpath("ubuntu-ports").symlink_to(".")
        self.iso_root.joinpath("dists", self.series).mkdir(parents=True)

        builder = PoolBuilder(
            self.logger,
            series=self.series,
            rootdir=self.iso_root,
            apt_state=self.apt_state,
        )
        pkgs = package_list_packages(package_list_file)
        # XXX include 32-bit deps of 32-bit packages if needed here
        with self.logger.logged("adding packages"):
            for i in range(0, len(pkgs), PACKAGE_BATCH_SIZE):
                builder.add_packages(
                    self.apt_state.show(pkgs[i : i + PACKAGE_BATCH_SIZE])
                )
        builder.make_packages()
        release_file = builder.make_release()
        self.gpg_key.sign(release_file)
        for name in "stable", "unstable":
            self.iso_root.joinpath("dists", name).symlink_to(self.series)

    def generate_sources(self, mountpoint: str):
        components = [p.name for p in self.iso_root.joinpath("pool").iterdir()]
        print(
            make_sources_text(
                self.series, self.gpg_key, mountpoint=mountpoint, components=components
            )
        )

    def _extract_casper_uuids(self):
        # Extract UUID files from initrd images for casper (the live boot system).
        # Each initrd contains a conf/uuid.conf with a unique identifier that
        # casper uses at boot time to locate the correct root filesystem. These
        # UUIDs must be placed in .disk/casper-uuid-<flavor> on the ISO so casper
        # can verify it's booting from the right media.
        with self.logger.logged("extracting casper uuids"):
            casper_dir = self.iso_root.joinpath("casper")
            dot_disk = self.iso_root.joinpath(".disk")
            for initrd in casper_dir.glob("*initrd"):
                initrddir = self.workdir.joinpath("initrd")
                with self.logger.logged(
                    f"unpacking {initrd.name} ...", done_msg="... done"
                ):
                    self.logger.run(["unmkinitramfs", initrd, initrddir])
                # unmkinitramfs can produce different directory structures:
                # - Platforms with early firmware: subdirs like "main/" or "early/"
                #   containing conf/uuid.conf
                # - Other platforms: conf/uuid.conf directly in the root
                # Try to find uuid.conf in both locations.
                confs = list(initrddir.glob("*/conf/uuid.conf"))
                if confs:
                    [uuid_conf] = confs
                elif initrddir.joinpath("conf/uuid.conf").exists():
                    uuid_conf = initrddir.joinpath("conf/uuid.conf")
                else:
                    raise Exception("uuid.conf not found")
                self.logger.log(f"found {uuid_conf.relative_to(initrddir)}")
                if initrd.name == "initrd":
                    suffix = "generic"
                elif initrd.name == "hwe-initrd":
                    suffix = "generic-hwe"
                else:
                    raise Exception(f"unexpected initrd name {initrd.name}")
                uuid_conf.rename(dot_disk.joinpath(f"casper-uuid-{suffix}"))
                shutil.rmtree(initrddir)

    def add_live_filesystem(self, artifact_prefix: pathlib.Path):
        casper_dir = self.iso_root.joinpath("casper")
        artifact_dir = artifact_prefix.parent
        filename_prefix = artifact_prefix.name

        def link(src: pathlib.Path, target_name: str):
            target = casper_dir.joinpath(target_name)
            self.logger.log(
                f"creating link from $ISOROOT/casper/{target_name} to $src/{src.name}"
            )
            target.hardlink_to(src)

        kernel_name = "vmlinuz"
        if self.arch in ("ppc64el", "riscv64"):
            kernel_name = "vmlinux"

        with self.logger.logged(
            f"linking artifacts from {casper_dir} to {artifact_dir}"
        ):
            for ext in "squashfs", "squashfs.gpg", "size", "manifest", "yaml":
                for path in artifact_dir.glob(f"{filename_prefix}*.{ext}"):
                    newname = path.name[len(filename_prefix) :]
                    link(path, newname)

            for kernel_path in artifact_dir.glob(f"{filename_prefix}kernel*"):
                suffix = kernel_path.name[len(filename_prefix) + len("kernel") :]
                prefix = "hwe-" if suffix.endswith("-hwe") else ""
                link(
                    artifact_dir.joinpath(f"{filename_prefix}kernel{suffix}"),
                    f"{prefix}{kernel_name}",
                )
                link(
                    artifact_dir.joinpath(f"{filename_prefix}initrd{suffix}"),
                    f"{prefix}initrd",
                )
        self._extract_casper_uuids()

    def make_bootable(self, project: str, capproject: str, subarch: str):
        configurator = make_boot_configurator_for_arch(
            self.arch,
            self.logger,
            self.apt_state,
            self.workdir,
            self.iso_root,
        )
        configurator.make_bootable(
            project,
            capproject,
            subarch,
            self.iso_root.joinpath("casper/hwe-initrd").exists(),
        )

    def checksum(self):
        # Generate md5sum.txt for ISO integrity verification.
        # - Symlinks are excluded because their targets are already checksummed
        # - Files are sorted for deterministic, reproducible output across builds
        # - Paths use "./" prefix and we run md5sum from iso_root so the output
        #   matches what users get when they verify with "md5sum -c" from the ISO
        all_files = []
        for dirpath, dirnames, filenames in self.iso_root.walk():
            filepaths = [dirpath.joinpath(filename) for filename in filenames]
            all_files.extend(
                "./" + str(filepath.relative_to(self.iso_root))
                for filepath in filepaths
                if not filepath.is_symlink()
            )
        self.iso_root.joinpath("md5sum.txt").write_bytes(
            self.logger.run(
                ["md5sum"] + sorted(all_files),
                cwd=self.iso_root,
                stdout=subprocess.PIPE,
            ).stdout
        )

    def make_iso(self, dest: pathlib.Path, volid: str | None):
        # xorriso with "-as mkisofs" runs in mkisofs compatibility mode.
        # -r enables Rock Ridge extensions for Unix metadata (permissions, symlinks).
        # -iso-level 3 (amd64 only) allows files >4GB which some amd64 ISOs need.
        # mkisofs_opts comes from the boot configurator and contains architecture-
        # specific options for boot sectors, EFI images, etc.
        self.checksum()
        configurator = make_boot_configurator_for_arch(
            self.arch,
            self.logger,
            self.apt_state,
            self.workdir,
            self.iso_root,
        )
        mkisofs_opts = configurator.mkisofs_opts()
        cmd: list[str | pathlib.Path] = ["xorriso"]
        if self.arch == "riscv64":
            # For $reasons, xorriso is not run in mkisofs mode on riscv64 only.
            cmd.extend(["-rockridge", "on", "-outdev", dest])
            if volid:
                cmd.extend(["-volid", volid])
            cmd.extend(mkisofs_opts)
            cmd.extend(["-map", self.iso_root, "/"])
        else:
            # xorriso with "-as mkisofs" runs in mkisofs compatibility mode on
            # other architectures. -r enables Rock Ridge extensions for Unix
            # metadata (permissions, symlinks). -iso-level 3 (amd64 only)
            # allows files >4GB which some amd64 ISOs need.
            cmd.extend(["-as", "mkisofs", "-r"])
            if self.arch == "amd64":
                cmd.extend(["-iso-level", "3"])
            if volid:
                cmd.extend(["-V", volid])
            cmd.extend(mkisofs_opts + [self.iso_root, "-o", dest])
        with self.logger.logged("running xorriso"):
            self.logger.run(cmd, cwd=self.workdir, check=True, limit_length=False)
        configurator.post_process_iso(dest)
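The `make_sources_text` helper relies on deb822's continuation-line rules: every line of a multi-line value gets a leading space, and blank lines become " ." so apt does not treat them as the end of the field. That quoting can be isolated into a tiny standalone function (hypothetical name, same logic):

```python
def deb822_quote_multiline(value: str) -> str:
    """Quote a multi-line value (e.g. an armoured GPG key) for use in a
    deb822 field such as Signed-By: continuation lines are indented with
    one space, and empty lines are written as " " + "." (space-dot)."""
    quoted = []
    for line in value.splitlines():
        quoted.append(" ." if not line else " " + line)
    return "\n".join(quoted)
```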
@@ -1,58 +0,0 @@
import pathlib
import subprocess

key_conf = """\
%no-protection
Key-Type: eddsa
Key-Curve: Ed25519
Key-Usage: sign
Name-Real: Ubuntu ISO One-Time Signing Key
Name-Email: noone@nowhere.invalid
Expire-Date: 0
"""


class EphemeralGPGKey:

    def __init__(self, logger, gpghome):
        self.logger = logger
        self.gpghome = gpghome

    def _run_gpg(self, cmd, **kwargs):
        return self.logger.run(
            ["gpg", "--homedir", self.gpghome] + cmd, check=True, **kwargs
        )

    def create(self):
        with self.logger.logged("creating gpg key ...", done_msg="... done"):
            self.gpghome.mkdir(mode=0o700)
            self._run_gpg(
                ["--gen-key", "--batch"],
                input=key_conf,
                text=True,
            )

    def sign(self, path: pathlib.Path):
        with self.logger.logged(f"signing {path}"):
            with path.open("rb") as inp:
                with pathlib.Path(str(path) + ".gpg").open("wb") as outp:
                    self._run_gpg(
                        [
                            "--no-options",
                            "--batch",
                            "--no-tty",
                            "--armour",
                            "--digest-algo",
                            "SHA512",
                            "--detach-sign",
                        ],
                        stdin=inp,
                        stdout=outp,
                    )

    def export_public(self) -> str:
        return self._run_gpg(
            ["--export", "--armor"],
            stdout=subprocess.PIPE,
            text=True,
        ).stdout
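For reference, the full argument vector that `sign` hands to gpg can be assembled without invoking gpg at all (a hypothetical helper for illustration; the real code streams the Release file through stdin/stdout via `Logger.run`):

```python
import pathlib

def gpg_sign_cmd(gpghome: pathlib.Path) -> list[str]:
    """Argument vector for the detached, armoured, SHA-512 signature
    produced by EphemeralGPGKey.sign (the signed data arrives on stdin,
    the .gpg signature is written to stdout)."""
    return [
        "gpg", "--homedir", str(gpghome),
        "--no-options", "--batch", "--no-tty",
        "--armour", "--digest-algo", "SHA512",
        "--detach-sign",
    ]
```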
@@ -1,166 +0,0 @@
import pathlib
import subprocess
import tempfile

from isobuilder.apt_state import AptStateManager, PackageInfo


generate_template = """
Dir::ArchiveDir "{root}";
Dir::CacheDir "{scratch}/apt-ftparchive-db";

TreeDefault::Contents " ";

Tree "dists/{series}" {{
  FileList "{scratch}/filelist_$(SECTION)";
  Sections "{components}";
  Architectures "{arches}";
}}
"""


class PoolBuilder:

    def __init__(
        self, logger, series: str, apt_state: AptStateManager, rootdir: pathlib.Path
    ):
        self.logger = logger
        self.series = series
        self.apt_state = apt_state
        self.rootdir = rootdir
        self.arches: set[str] = set()
        self._present_components: set[str] = set()

    def add_packages(self, pkglist: list[PackageInfo]):
        for pkg_info in pkglist:
            if pkg_info.architecture != "all":
                self.arches.add(pkg_info.architecture)
            self.apt_state.download(self.rootdir, pkg_info)

    def make_packages(self) -> None:
        with self.logger.logged("making Packages files"):
            with tempfile.TemporaryDirectory() as tmpdir:
                scratchdir = pathlib.Path(tmpdir)
                with self.logger.logged("scanning for packages"):
                    for component in ["main", "restricted", "universe", "multiverse"]:
                        if not self.rootdir.joinpath("pool", component).is_dir():
                            continue
                        self._present_components.add(component)
                        for arch in self.arches:
                            self.rootdir.joinpath(
                                "dists", self.series, component, f"binary-{arch}"
                            ).mkdir(parents=True)
                        proc = self.logger.run(
                            ["find", f"pool/{component}"],
                            stdout=subprocess.PIPE,
                            cwd=self.rootdir,
                            encoding="utf-8",
                            check=True,
                        )
                        scratchdir.joinpath(f"filelist_{component}").write_text(
                            "\n".join(sorted(proc.stdout.splitlines()))
                        )
                with self.logger.logged("writing apt-ftparchive config"):
                    scratchdir.joinpath("apt-ftparchive-db").mkdir()
                    generate_path = scratchdir.joinpath("generate-binary")
                    generate_path.write_text(
                        generate_template.format(
                            arches=" ".join(self.arches),
                            series=self.series,
                            root=self.rootdir.resolve(),
                            scratch=scratchdir.resolve(),
                            components=" ".join(self._present_components),
                        )
                    )
                with self.logger.logged("running apt-ftparchive generate"):
                    self.logger.run(
                        [
                            "apt-ftparchive",
                            "--no-contents",
                            "--no-md5",
                            "--no-sha1",
                            "--no-sha512",
                            "generate",
                            generate_path,
                        ],
                        check=True,
                    )

    def make_release(self) -> pathlib.Path:
        # Build the Release file by merging metadata from the mirror with
        # checksums for our pool. We can't just use apt-ftparchive's Release
        # output directly because:
        # 1. apt-ftparchive doesn't know about Origin, Label, Suite, Version,
        #    Codename, etc. - these come from the mirror and maintain package
        #    provenance
        # 2. We keep the mirror's Date (when packages were released) rather than
        #    apt-ftparchive's Date (when we ran the command)
        # 3. We need to override Architectures/Components to match our pool
        #
        # There may be a cleaner way (apt-get indextargets?) but this works.
        with self.logger.logged("making Release file"):
            in_release = self.apt_state.in_release_path()
            cp_mirror_release = self.logger.run(
                ["gpg", "--verify", "--output", "-", in_release],
                stdout=subprocess.PIPE,
                encoding="utf-8",
                check=False,
            )
            if cp_mirror_release.returncode not in (0, 2):
                # gpg returns code 2 when the public key the InRelease is
                # signed with is not available, which is most of the time.
                raise Exception("gpg failed")
            mirror_release_lines = cp_mirror_release.stdout.splitlines()
            release_dir = self.rootdir.joinpath("dists", self.series)
            af_release_lines = self.logger.run(
                [
                    "apt-ftparchive",
                    "--no-contents",
                    "--no-md5",
                    "--no-sha1",
                    "--no-sha512",
                    "release",
                    ".",
                ],
                stdout=subprocess.PIPE,
                encoding="utf-8",
                cwd=release_dir,
                check=True,
            ).stdout.splitlines()
            # Build the final Release file by merging mirror metadata with pool
            # checksums.
            # Strategy:
            # 1. Take metadata fields (Suite, Origin, etc.) from the mirror's InRelease
            # 2. Override Architectures and Components to match what's actually in our
            #    pool
            # 3. Skip the mirror's checksum sections (MD5Sum, SHA256, etc.) because they
            #    don't apply to our pool
            # 4. Skip Acquire-By-Hash since we don't use it
            # 5. Append checksums from apt-ftparchive (but not the Date field)
            release_lines = []
            skipping = False
            for line in mirror_release_lines:
                if line.startswith("Architectures:"):
                    line = "Architectures: " + " ".join(sorted(self.arches))
                elif line.startswith("Components:"):
                    line = "Components: " + " ".join(sorted(self._present_components))
                elif line.startswith("MD5") or line.startswith("SHA"):
                    # Start of a checksum section - skip this and indented lines below
                    # it
                    skipping = True
                elif not line.startswith(" "):
                    # Non-indented line means we've left the checksum section if we were
                    # in one.
                    skipping = False
                if line.startswith("Acquire-By-Hash"):
                    continue
                if not skipping:
                    release_lines.append(line)
            # Append checksums from apt-ftparchive, but skip its Date field
            # (we want to keep the Date from the mirror release)
            for line in af_release_lines:
                if not line.startswith("Date"):
                    release_lines.append(line)
            release_path = release_dir.joinpath("Release")
            release_path.write_text("\n".join(release_lines))
            return release_path
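The line-filtering core of `make_release` can be sketched as a standalone function operating on plain line lists (hypothetical helper name; the real method additionally shells out to gpg and apt-ftparchive to obtain the two inputs):

```python
def merge_release(mirror_lines, af_lines, arches, components):
    """Merge mirror Release metadata with apt-ftparchive checksums: keep
    the mirror's metadata fields, override Architectures/Components to
    match the pool, drop the mirror's checksum sections and
    Acquire-By-Hash, then append apt-ftparchive output minus its Date."""
    merged, skipping = [], False
    for line in mirror_lines:
        if line.startswith("Architectures:"):
            line = "Architectures: " + " ".join(sorted(arches))
        elif line.startswith("Components:"):
            line = "Components: " + " ".join(sorted(components))
        elif line.startswith(("MD5", "SHA")):
            skipping = True   # checksum section header: skip it and what follows
        elif not line.startswith(" "):
            skipping = False  # a non-indented line ends any checksum section
        if line.startswith("Acquire-By-Hash"):
            continue
        if not skipping:
            merged.append(line)
    # pool checksums come from apt-ftparchive; keep the mirror's Date
    merged.extend(line for line in af_lines if not line.startswith("Date"))
    return merged
```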
@@ -180,22 +180,10 @@ build_layered_squashfs () {
        # Operate on the upperdir directly, so that we are only
        # modifying mtime on files that are actually changed in
        # this layer. LP: #2107332
        ${LIVECD_ROOTFS_ROOT}/sync-mtime chroot "$overlay_dir"
        /usr/share/livecd-rootfs/sync-mtime chroot "$overlay_dir"
    fi

    create_squashfs "${overlay_dir}" ${squashfs_f}
    # Create a "for-iso" variant of the squashfs for ISO builds. For
    # the root layer (the base system) when building with a pool, we
    # need to include cdrom.sources so casper can access the ISO's
    # package repository. This requires regenerating the squashfs with
    # that file included, then removing it (so it doesn't pollute the
    # regular squashfs). Non-root layers (desktop environment, etc.)
    # and builds without pools can just hardlink to the regular squashfs.
    if [ -n "${POOL_SEED_NAME}" ] && $(is_root_layer $pass); then
        isobuild generate-sources --mountpoint=/cdrom > ${overlay_dir}/etc/apt/sources.list.d/cdrom.sources
        create_squashfs "${overlay_dir}" ${PWD}/for-iso.${pass}.squashfs
        rm ${overlay_dir}/etc/apt/sources.list.d/cdrom.sources
    fi

    if [ -f config/$pass.catalog-in.yaml ]; then
        echo "Expanding catalog entry template for $pass"
@@ -206,7 +194,7 @@ build_layered_squashfs () {
        if [ -f config/seeded-languages ]; then
            usc_opts="$usc_opts --langs $(cat config/seeded-languages)"
        fi
        ${LIVECD_ROOTFS_ROOT}/update-source-catalog source $usc_opts
        /usr/share/livecd-rootfs/update-source-catalog source $usc_opts
    else
        echo "No catalog entry template for $pass"
    fi
@@ -227,7 +215,7 @@ done

if [ -n "$DEFAULT_KERNEL" -a -f livecd.${PROJECT_FULL}.install-sources.yaml ]; then
    write_kernel_yaml "$DEFAULT_KERNEL" "$BRIDGE_KERNEL_REASONS"
    ${LIVECD_ROOTFS_ROOT}/update-source-catalog merge \
    /usr/share/livecd-rootfs/update-source-catalog merge \
        --output livecd.${PROJECT_FULL}.install-sources.yaml \
        --template config/kernel.yaml
fi
@@ -239,11 +227,3 @@ if [ -n "$(ls livecd.${PROJECT_FULL}.*install.live.manifest.full 2>/dev/null)" ]
fi

chmod 644 *.squashfs *.manifest* *.size

prefix=livecd.${PROJECT_FULL}
for artifact in ${prefix}.*; do
    for_iso_path=for-iso${artifact#${prefix}}
    if [ ! -f $for_iso_path ]; then
        ln -v $artifact $for_iso_path
    fi
done
@@ -237,7 +237,7 @@ create_chroot_pass () {
    lb chroot_interactive ${*}

    # Misc ubuntu cleanup and post-layer configuration
    ${LIVECD_ROOTFS_ROOT}/minimize-manual chroot
    /usr/share/livecd-rootfs/minimize-manual chroot
    clean_debian_chroot

    Chroot chroot "dpkg-query -W" > chroot.packages.${pass}
@@ -11,7 +11,6 @@ case ${PASS:-} in
esac

. config/binary
. config/common
. config/functions

case ${SUBPROJECT} in
@@ -57,4 +56,4 @@ PROJECT_FULL=$PROJECT${SUBARCH:+-$SUBARCH}
usc_opts="--output livecd.${PROJECT_FULL}.install-sources.yaml \
    --template config/edge.catalog-in.yaml \
    --size 0"
${LIVECD_ROOTFS_ROOT}/update-source-catalog source $usc_opts
/usr/share/livecd-rootfs/update-source-catalog source $usc_opts
@@ -1 +0,0 @@
datasource_list: [ OpenStack, None ]
@@ -1,2 +0,0 @@
dsmode: local
instance_id: ubuntu-server
@@ -1,104 +0,0 @@
name: ubuntu-minimal
version: "0.1"
base: bare
build-base: devel
summary: Minimal Ubuntu image for CPC
description: A minimal Ubuntu image to be built using livecd-rootfs by CPC

platforms:
  amd64:

volumes:
  pc:
    schema: gpt
    structure:
      # 1. BIOS Boot
      - name: bios-boot
        type: 21686148-6449-6E6F-744E-656564454649
        role: system-boot
        filesystem: vfat
        size: 4M
        partition-number: 14
      # 2. EFI System Partition
      - name: efi
        type: C12A7328-F81F-11D2-BA4B-00A0C93EC93B
        filesystem: vfat
        filesystem-label: UEFI
        role: system-boot
        size: 106M
        partition-number: 15
      # 3. Linux Extended Boot
      - name: boot
        type: 0FC63DAF-8483-4772-8E79-3D69D8477DE4
        filesystem: ext4
        filesystem-label: BOOT
        role: system-data
        size: 1G
        partition-number: 13
      # 4. Root Filesystem
      - name: rootfs
        type: 0FC63DAF-8483-4772-8E79-3D69D8477DE4
        filesystem: ext4
        filesystem-label: cloudimg-rootfs
        role: system-data
        size: 3G
        partition-number: 1

filesystems:
  default:
    - mount: "/"
      device: "(volume/pc/rootfs)"
    - mount: "/boot"
      device: "(volume/pc/boot)"
    - mount: "/boot/efi"
      device: "(volume/pc/efi)"

parts:
  rootfs:
    plugin: nil
    build-packages: ["mmdebstrap"]
    override-build: |
      mmdebstrap --arch $CRAFT_ARCH_BUILD_FOR \
        --mode=sudo \
        --format=dir \
        --variant=minbase \
        --include=apt \
        resolute \
        $CRAFT_PART_INSTALL/ \
        http://archive.ubuntu.com/ubuntu/
      rm -r $CRAFT_PART_INSTALL/dev/*
      mkdir $CRAFT_PART_INSTALL/boot/efi
    organize:
      '*': (overlay)/

  packages:
    plugin: nil
    overlay-packages:
      - ubuntu-server-minimal
      - grub2-common
      - grub-pc
      - shim-signed
      - linux-image-generic
    overlay-script: |
      rm $CRAFT_OVERLAY/etc/cloud/cloud.cfg.d/90_dpkg.cfg

  snaps:
    plugin: nil
    after: [packages]
    overlay-script: |
      env SNAPPY_STORE_NO_CDN=1 snap prepare-image --classic \
        --arch=amd64 --snap snapd --snap core24 "" $CRAFT_OVERLAY

  fstab:
    plugin: nil
    after: [snaps]
    overlay-script: |
      cat << EOF > $CRAFT_OVERLAY/etc/fstab
      LABEL=cloudimg-rootfs / ext4 discard,errors=remount-ro 0 1
      LABEL=BOOT /boot ext4 defaults 0 2
      LABEL=UEFI /boot/efi vfat umask=0077 0 1
      EOF

  cloud-init:
    plugin: dump
    source: cloud-init/
@@ -1,81 +0,0 @@
#!/bin/bash -eux

. config/functions

ARCH="${ARCH:-}"
SUBPROJECT="${SUBPROJECT:-}"

# We want to start off imagecraft builds with just amd64 support right now
case $ARCH in
    amd64)
        ;;
    *)
        echo "imagecraft build is currently not implemented for ARCH=${ARCH:-unset}."
        exit 0
        ;;
esac

case ${SUBPROJECT} in
    minimized)
        ;;
    *)
        echo "imagecraft build is currently not implemented for SUBPROJECT=${SUBPROJECT:-unset}."
        exit 0
        ;;
esac

_src_d=$(dirname $(readlink -f ${0}))

snap install imagecraft --classic --channel latest/edge

cp -r "$_src_d"/imagecraft-configs/* .

CRAFT_BUILD_ENVIRONMENT=host imagecraft --verbosity debug pack

# We are using this function instead of mount_disk_image from functions
# because imagecraft doesn't currently support XBOOTLDR's GUID and
# mount_disk_image has an explicit check for the XBOOTLDR GUID
# TODO: Use mount_disk_image once imagecraft supports XBOOTLDR's GUID
mount_image_partitions() {
    mount_image "${disk_image}" "$ROOT_PARTITION"

    # Making sure that the loop device is ready
    partprobe "${loop_device}"
    udevadm settle
    mount_partition "${rootfs_dev_mapper}" "$mountpoint"
    mount "${loop_device}p13" "$mountpoint/boot"
    mount "${loop_device}p15" "$mountpoint/boot/efi"
}

install_grub_on_image() {
    divert_grub "$mountpoint"
    chroot "$mountpoint" grub-install --target=i386-pc "${loop_device}"
    chroot "$mountpoint" update-grub
    undivert_grub "$mountpoint"

    echo "GRUB for BIOS boot installed successfully."
}

unmount_image_partitions() {
    umount "$mountpoint/boot/efi"
    umount "$mountpoint/boot"

    umount_partition "$mountpoint"
    rmdir "$mountpoint"
}

disk_image="pc.img"
ROOT_PARTITION=1
mountpoint=$(mktemp -d)

mount_image_partitions

install_grub_on_image
create_manifest "$mountpoint/" "$PWD/livecd.ubuntu-cpc.imagecraft.manifest" "$PWD/livecd.ubuntu-cpc.imagecraft.spdx" "cloud-image-$ARCH-$(date +%Y%m%dT%H:%M:%S)" "false"

unmount_image_partitions

clean_loops
trap - EXIT

qemu-img convert -f raw -O qcow2 "${disk_image}" livecd.ubuntu-cpc.imagecraft.img
@ -6,4 +6,3 @@ depends qcow2
depends vmdk
depends vagrant
depends wsl
depends imagecraft-image

@ -1,5 +0,0 @@
base/imagecraft-image.binary

provides livecd.ubuntu-cpc.imagecraft.img
provides livecd.ubuntu-cpc.imagecraft.manifest
provides livecd.ubuntu-cpc.imagecraft.filelist
@ -1,9 +1,5 @@
#!/bin/sh

# Create kernel/initrd artifacts for isobuilder to consume.
# The standard MAKE_ISO flow in auto/build expects files named
# ${PREFIX}.kernel-${flavour} and ${PREFIX}.initrd-${flavour}.

set -eu

case $ARCH in
@ -14,7 +10,68 @@ case $ARCH in
        ;;
esac

PREFIX="livecd.${PROJECT}"
. config/binary

cp chroot/boot/vmlinuz "${PREFIX}.kernel-generic"
cp chroot/boot/initrd.img "${PREFIX}.initrd-generic"
KERNEL=chroot/boot/vmlinuz
INITRD=chroot/boot/initrd.img

git clone https://git.launchpad.net/~ubuntu-cdimage/debian-cd/+git/ubuntu debian-cd
export BASEDIR=$(readlink -f debian-cd) DIST=$LB_DISTRIBUTION

cat > apt.conf <<EOF
Dir "$(pwd)/chroot";
EOF

case $ARCH in
    amd64)
        mkdir -p "ubuntu-mini-iso/amd64/tree/casper"
        cp "$KERNEL" ubuntu-mini-iso/amd64/tree/casper/filesystem.kernel-generic
        cp "$INITRD" ubuntu-mini-iso/amd64/tree/casper/filesystem.initrd-generic
        APT_CONFIG_amd64=$(pwd)/apt.conf $BASEDIR/tools/boot/$LB_DISTRIBUTION/boot-amd64 1 $(readlink -f ubuntu-mini-iso/amd64/tree)
        # Overwrite the grub.cfg that debian-cd generates by default
        cat > ubuntu-mini-iso/amd64/tree/boot/grub/grub.cfg <<EOF
menuentry "Choose an Ubuntu version to install" {
	set gfxpayload=keep
	linux /casper/vmlinuz iso-chooser-menu ip=dhcp ---
	initrd /casper/initrd
}
EOF
        rm -f ubuntu-mini-iso/amd64/tree/boot/grub/loopback.cfg ubuntu-mini-iso/amd64/tree/boot/memtest*.bin
        ;;
esac

mkdir -p ubuntu-mini-iso/$ARCH/tree/.disk

touch ubuntu-mini-iso/$ARCH/tree/.disk/base_installable

tmpdir=$(mktemp -d)
unmkinitramfs $INITRD $tmpdir
if [ -e $tmpdir/*/conf/uuid.conf ]; then
    uuid_conf=$tmpdir/*/conf/uuid.conf
elif [ -e "$tmpdir/conf/uuid.conf" ]; then
    uuid_conf="$tmpdir/conf/uuid.conf"
else
    echo "uuid.conf not found"
    exit 1
fi
cp $uuid_conf ubuntu-mini-iso/$ARCH/tree/.disk/casper-uuid-generic
rm -fr $tmpdir

cat > ubuntu-mini-iso/$ARCH/tree/.disk/cd_type <<EOF
full_cd/single
EOF

version=$(distro-info --fullname --series=$LB_DISTRIBUTION \
    | sed s'/^Ubuntu/ubuntu-mini-iso/')

cat > ubuntu-mini-iso/$ARCH/tree/.disk/info <<EOF
$version - $ARCH ($BUILDSTAMP)
EOF

dest="${PWD}/livecd.${PROJECT}.iso"

cd ubuntu-mini-iso/$ARCH
xorriso -as mkisofs $(cat 1.mkisofs_opts) tree -o $dest
cd ../..

rm -rf ubuntu-mini-iso
@ -1,8 +0,0 @@
#!/bin/sh

set -eu

mkdir -p "etc/initramfs-tools/conf.d"
cat > etc/initramfs-tools/conf.d/casperize.conf <<EOF
export CASPER_GENERATE_UUID=1
EOF

@ -1,15 +0,0 @@
#!/bin/sh
# Copy kernel/initrd artifacts for isobuilder to consume.
# The MAKE_ISO flow in auto/build expects ${PREFIX}.kernel-* and
# ${PREFIX}.initrd-* files. With --linux-packages=none live-build won't
# create them, so we do it here (mirroring ubuntu-mini-iso's approach).
# This hook runs for every pass; exit early when the kernel isn't present.

set -eu

[ -e chroot/boot/vmlinuz ] || exit 0
[ -e chroot/boot/initrd.img ] || exit 0

PREFIX="livecd.${PROJECT}"
cp chroot/boot/vmlinuz "${PREFIX}.kernel-generic"
cp chroot/boot/initrd.img "${PREFIX}.initrd-generic"
@ -2,7 +2,7 @@

# create the system seed for TPM-backed FDE in the live layer of the installer.

set -eu
set -eux

case ${PASS:-} in
    *.live)
@ -13,15 +13,8 @@ case ${PASS:-} in
esac

. config/binary
. config/common
. config/functions

set -x

if ! echo $PASSES | grep --quiet enhanced-secureboot; then
    # Only run this hook if there is going to be a layer that installs it...
    exit 0
fi

# Naive conversion from YAML to JSON. This is needed because yq is in universe
# (but jq is not).
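The converter the comment above refers to is elided by this diff hunk. As an illustration only (a hypothetical sketch, not the script's actual shell implementation), "naive" YAML-to-JSON conversion without yq usually means handling just the subset the config actually uses — nested `key: value` mappings with consistent indentation and scalar values:

```python
import json

def naive_yaml_to_json(text):
    # Hypothetical sketch: parse a tiny YAML subset (nested "key: value"
    # mappings, space indentation, scalar string values). No lists,
    # anchors, or multi-line scalars -- just enough for simple configs.
    root = {}
    stack = [(-1, root)]  # (indent level, container dict)
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith("#"):
            continue  # skip blank lines and comments
        indent = len(line) - len(line.lstrip(" "))
        key, _, value = line.strip().partition(":")
        value = value.strip()
        # Pop back out to the container this indentation level belongs to.
        while indent <= stack[-1][0]:
            stack.pop()
        container = stack[-1][1]
        if value:
            container[key] = value
        else:
            child = {}
            container[key] = child
            stack.append((indent, child))
    return json.dumps(root)
```

A subset parser like this trades correctness on full YAML for having no dependency outside main, which is exactly the yq-in-universe constraint the comment describes.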
@ -133,25 +126,8 @@ get_components()

# env SNAPPY_STORE_NO_CDN=1 snap known --remote model series=16 brand-id=canonical model=ubuntu-classic-2410-amd64 > config/classic-model.model
#

# We used to have the models included in livecd-rootfs itself, but now we pull
# them from the Launchpad git mirror.
canonical_models_tree=$(mktemp -d)
git clone --depth 1 https://git.launchpad.net/canonical-models -- "${canonical_models_tree}"

cleanup_repo()
{
    rm -rf -- "${canonical_models_tree}"
}

trap cleanup_repo EXIT

echo 'Checked out canonical-models revision' "$(git -C "${canonical_models_tree}" rev-parse HEAD)"

model_version=$(release_ver | sed 's/\.//')

dangerous_model="${canonical_models_tree}"/ubuntu-classic-"${model_version}"-amd64-dangerous.model
stable_model="${canonical_models_tree}"/ubuntu-classic-"${model_version}"-amd64.model
dangerous_model=/usr/share/livecd-rootfs/live-build/${PROJECT}/ubuntu-classic-amd64-dangerous.model
stable_model=/usr/share/livecd-rootfs/live-build/${PROJECT}/ubuntu-classic-amd64.model

prepare_args=()

@ -172,27 +148,23 @@ if [ "$SUBPROJECT" = "dangerous" ]; then
        components+=("$comp")
    done
else
    model="${stable_model}"
    # If we need to override anything from the model, we need grade: dangerous.
    # And if so, uncomment the below to use the dangerous model and set the
    # snaps_from_dangerous and snaps_from_beta variables to still use snaps
    # from the stable model.
    #model="${dangerous_model}"
    snaps_from_dangerous=()
    # For these snaps, we ignore the model entirely.
    snaps_from_beta=()
    for snap in "${snaps_from_beta[@]}"; do
        prepare_args+=("--snap=$snap=beta")
    done
    # snaps that we are special casing.
    _exclude=("${snaps_from_dangerous[@]}" "${snaps_from_beta[@]}")

    if [ "$model" = "$dangerous_model" ]; then
        for snap_arg in $(get_snaps_args_excluding "$stable_model" "${_exclude[@]}"); do
    # Normally we use the stable model here. Use the dangerous one for now
    # until we get snaps on stable 26.04 tracks and channels.
    #model="${stable_model}"
    model="${dangerous_model}"
    # We're currently using the dangerous model for the stable image because it
    # allows us to override snaps. But we don't want all snaps from edge like
    # the dangerous model has, we want most of them from stable excluding:
    # * snapd (for TPM/FDE)
    # * snapd-desktop-integration (for TPM/FDE)
    # * firmware-updater (for TPM/FDE)
    # * desktop-security-center (for TPM/FDE)
    snaps_from_dangerous=(snapd snapd-desktop-integration firmware-updater desktop-security-center)
    for snap_arg in $(get_snaps_args_excluding "$stable_model" "${snaps_from_dangerous[@]}"); do
        prepare_args+=("$snap_arg")
    done
    fi
    for comp in $(get_components_excluding "$stable_model" "${_exclude[@]}"); do

    for comp in $(get_components_excluding "$stable_model" "${snaps_from_dangerous[@]}"); do
        components+=("$comp")
    done
    for comp in $(get_components "$dangerous_model" "${snaps_from_dangerous[@]}"); do
@ -1,8 +0,0 @@
# When booting the live ISO, snapd seeding takes a while to complete, which
# can cause GDM to start before the Ubuntu installer is seeded and ready to be
# launched. This leads to a confusing delay between the user leaving Plymouth
# and seeing the desktop wallpaper and the installer launching.
# This drop-in delays display-manager.service until snapd seeding completes, so
# the installer launches within seconds of Plymouth disappearing.
[Unit]
After=snapd.seeded.service

@ -1,6 +0,0 @@
# force reexecuting the snapd snap version on the live system
# while developing features that only land on edge, even if the
# deb version is higher.
# This allows automated tests to always run what's next.
#[Service]
#Environment="SNAP_REEXEC=force"
109
live-build/ubuntu/ubuntu-classic-amd64-dangerous.model
Normal file
@ -0,0 +1,109 @@
type: model
authority-id: canonical
series: 16
brand-id: canonical
model: ubuntu-classic-2604-amd64-dangerous
architecture: amd64
base: core24
classic: true
distribution: ubuntu
grade: dangerous
snaps:
  -
    default-channel: classic-26.04/edge
    id: UqFziVZDHLSyO3TqSWgNBoAdHbLI4dAH
    name: pc
    type: gadget
  -
    components:
      nvidia-580-uda-ko:
        presence: optional
      nvidia-580-uda-user:
        presence: optional
    default-channel: 26.04/beta
    id: pYVQrBcKmBa0mZ4CCN7ExT6jH8rY1hza
    name: pc-kernel
    type: kernel
  -
    default-channel: latest/edge
    id: amcUKQILKXHHTlmSa7NMdnXSx02dNeeT
    name: core22
    type: base
  -
    default-channel: latest/edge
    id: dwTAh7MZZ01zyriOZErqd1JynQLiOGvM
    name: core24
    type: base
  -
    default-channel: latest/edge
    id: cUqM61hRuZAJYmIS898Ux66VY61gBbZf
    name: core26
    type: base
  -
    default-channel: latest/edge
    id: PMrrV4ml8uWuEUDBT8dSGnKUYbevVhc4
    name: snapd
    type: snapd
  -
    default-channel: latest/edge
    id: EISPgh06mRh1vordZY9OZ34QHdd7OrdR
    name: bare
    type: base
  -
    default-channel: latest/edge
    id: HyhSEBPv3vHsW6uOHkQR384NgI7S6zpj
    name: mesa-2404
    type: app
  -
    default-channel: 1/edge
    id: EI0D1KHjP8XiwMZKqSjuh6W8zvcowUVP
    name: firmware-updater
    type: app
  -
    default-channel: 1/edge
    id: FppXWunWzuRT2NUT9CwoBPNJNZBYOCk0
    name: desktop-security-center
    type: app
  -
    default-channel: 1/edge
    id: aoc5lfC8aUd2VL8VpvynUJJhGXp5K6Dj
    name: prompting-client
    type: app
  -
    default-channel: 2/edge
    id: gjf3IPXoRiipCu9K0kVu52f0H56fIksg
    name: snap-store
    type: app
  -
    default-channel: latest/edge
    id: jZLfBRzf1cYlYysIjD2bwSzNtngY0qit
    name: gtk-common-themes
    type: app
  -
    default-channel: latest/edge
    id: 3wdHCAVyZEmYsCMFDE9qt92UV8rC8Wdk
    name: firefox
    type: app
  -
    default-channel: latest/edge
    id: ew7OxpbRTxfK7ImpIygRR85lkxvU7Pzt
    name: gnome-46-2404
    type: app
  -
    default-channel: latest/edge
    id: IrwRHakqtzhFRHJOOPxKVPU0Kk7Erhcu
    name: snapd-desktop-integration
    type: app
timestamp: 2025-12-09T12:00:00.0Z
sign-key-sha3-384: 9tydnLa6MTJ-jaQTFUXEwHl1yRx7ZS4K5cyFDhYDcPzhS7uyEkDxdUjg9g08BtNn

AcLBXAQAAQoABgUCaUFt7QAKCRDgT5vottzAEhdnD/92LBcQm3iw/kPao4KqGE0OhfXDFd7Z6+Qv
A1Dlzz6Cw0tuj0r5aZH7vJQCx4kC1Eaoi8apg3XhqAyhr74/MsIwMhPPL8qcSNv8ZWruoGwFp/rx
M6NSBKc6hrYqACYfEkBwfq9SgmIDQKFeBVudwswLK2SN58wrDNJjuWz/eJ5hUIIe3ga5ScfzO4Jr
jTWS4kh5lpttCPFX8ouLkMgLUxijQpxFbHoF1trXJndFvavStT0yuC0y5TXzb3wJbbiF/MXZWyjV
/4U+oQLodO77MhaD01kk2y5bZ62YuQ3MPL0fQGypon12GPHeNNcEcYWRZlFv+JkWAduWlnuefj1D
dVWV8dQQmSZGZNiGTsIJxkY9+4B+t/OhosGDc6jEmEZcKNVi9fnl0+awkzK6scNNmupZ8NwJl8ZR
mJSsfaBcH4paYV1x31y4uTELv+OuDWAJ3D0RoCR8H0djTBxRhsF2/JpSJasxVmSbzWHPSeM3f1aO
ChZGwbD6J2SpzsrdogUP/9z6o8YuVnJkOxoBYuXhT1pEYTd93/hE++j3MpOqey/xw8UDbYmq5oJf
uKaYLOMphqDm5hUCZmxQp8gTzDleZGjxYS2fOS4qFUJlvyVwsSoJMXU+6YfA6tgEQ4Dbh6zp6r78
MjEqfWn4lL16xW2Zzr6e8xWwUrM7T3Gp4WTA7/xOeA==
108
live-build/ubuntu/ubuntu-classic-amd64.model
Normal file
@ -0,0 +1,108 @@
type: model
authority-id: canonical
series: 16
brand-id: canonical
model: ubuntu-classic-2604-amd64
architecture: amd64
base: core24
classic: true
distribution: ubuntu
grade: signed
snaps:
  -
    default-channel: classic-26.04/stable
    id: UqFziVZDHLSyO3TqSWgNBoAdHbLI4dAH
    name: pc
    type: gadget
  -
    components:
      nvidia-550-erd-ko:
        presence: optional
      nvidia-550-erd-user:
        presence: optional
      nvidia-570-erd-ko:
        presence: optional
      nvidia-570-erd-user:
        presence: optional
    default-channel: 26.04/stable
    id: pYVQrBcKmBa0mZ4CCN7ExT6jH8rY1hza
    name: pc-kernel
    type: kernel
  -
    default-channel: latest/stable
    id: amcUKQILKXHHTlmSa7NMdnXSx02dNeeT
    name: core22
    type: base
  -
    default-channel: latest/stable
    id: dwTAh7MZZ01zyriOZErqd1JynQLiOGvM
    name: core24
    type: base
  -
    default-channel: latest/stable
    id: PMrrV4ml8uWuEUDBT8dSGnKUYbevVhc4
    name: snapd
    type: snapd
  -
    default-channel: latest/stable
    id: EISPgh06mRh1vordZY9OZ34QHdd7OrdR
    name: bare
    type: base
  -
    default-channel: latest/stable/ubuntu-26.04
    id: HyhSEBPv3vHsW6uOHkQR384NgI7S6zpj
    name: mesa-2404
    type: app
  -
    default-channel: 1/stable/ubuntu-26.04
    id: EI0D1KHjP8XiwMZKqSjuh6W8zvcowUVP
    name: firmware-updater
    type: app
  -
    default-channel: 1/stable/ubuntu-26.04
    id: FppXWunWzuRT2NUT9CwoBPNJNZBYOCk0
    name: desktop-security-center
    type: app
  -
    default-channel: 1/stable/ubuntu-26.04
    id: aoc5lfC8aUd2VL8VpvynUJJhGXp5K6Dj
    name: prompting-client
    type: app
  -
    default-channel: 2/stable/ubuntu-26.04
    id: gjf3IPXoRiipCu9K0kVu52f0H56fIksg
    name: snap-store
    type: app
  -
    default-channel: latest/stable/ubuntu-26.04
    id: jZLfBRzf1cYlYysIjD2bwSzNtngY0qit
    name: gtk-common-themes
    type: app
  -
    default-channel: latest/stable/ubuntu-26.04
    id: 3wdHCAVyZEmYsCMFDE9qt92UV8rC8Wdk
    name: firefox
    type: app
  -
    default-channel: latest/stable/ubuntu-26.04
    id: ew7OxpbRTxfK7ImpIygRR85lkxvU7Pzt
    name: gnome-46-2404
    type: app
  -
    default-channel: latest/stable/ubuntu-26.04
    id: IrwRHakqtzhFRHJOOPxKVPU0Kk7Erhcu
    name: snapd-desktop-integration
    type: app
timestamp: 2025-11-06T12:00:00.0Z
sign-key-sha3-384: 9tydnLa6MTJ-jaQTFUXEwHl1yRx7ZS4K5cyFDhYDcPzhS7uyEkDxdUjg9g08BtNn

AcLBXAQAAQoABgUCaSatwAAKCRDgT5vottzAElN8EAC81ZgmWYxnh9l2UrGl8I3WIa2yPrblQB4m
2qdfj35umxfNtZdhBux74g6UpXttX5djcf2qfrK2VAk0tf3lolSprAfPeIoBxthl2Ig0CfWOD7Qa
sJAiUZ2CVY0gX53tTxc+Lsaj2CCdmEVnlG5Lbzk6DDr6OYQ1jf+SyntSlaB4mvuy+YO89sA/E8X9
xaYhZpS7NU+J5nfc9hB8xf/f7UvXVrcRmkX1t5Pra1T/eQ+3hgLzp+fLvFbwMRcEGqwE2KXTWwm1
F191SI2UazuS4lWv0yJ40uljd26q53E8edKPmtPlmWEY0GwbofvcXKM3tw8gf9ZwZMlewjNYYHGu
V1FsI+6GdULFPMoQptmEhQmZNOiAE706D+HVTgDvWfv/yw1fOmTUbFaT/dmUb8dSmndouRt2AF0c
WivlBgo3fKjRZg/sPyZX3FwhggglmuCRiiYK9xu1b4wsplv090fAF3q33o9wLB+G6A4DE9QDzhfu
7y5ABm/cG15nKDkanpbCFWwYEq7ANlzz3y6/KctQnFms3+qa5p5bdd+Q4mpqcJcNXMWFnb3b+lSp
TITMdTf9afNKHFTbwBABoNVLDYelkNCYD99ukuSIS8MeiIHEXxUV9lNaEPTKoXgv3LETI8Wd43Qs
Msb1UuoDShZo2gfDOlb8P0W7gxz79QbjMcSBBoqVew==
@ -7,12 +7,10 @@

set -e

. config/common

chroot_directory=$1

auto_packages=$(${LIVECD_ROOTFS_ROOT}/auto-markable-pkgs $chroot_directory)
auto_packages=$(/usr/share/livecd-rootfs/auto-markable-pkgs $chroot_directory)
if [ -n "$auto_packages" ]; then
    chroot $chroot_directory apt-mark auto $auto_packages
fi
[ -z "$(${LIVECD_ROOTFS_ROOT}/auto-markable-pkgs $chroot_directory 2> /dev/null)" ]
[ -z "$(/usr/share/livecd-rootfs/auto-markable-pkgs $chroot_directory 2> /dev/null)" ]