Compare commits

..

21 Commits

Author SHA1 Message Date
Olivier Gayot
3645bdf230 Release livecd-rootfs 26.04.17
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
2026-02-12 10:25:26 +01:00
Olivier Gayot
c3671c739d ubuntu: update model to latest stable model
Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
2026-02-12 10:25:02 +01:00
Olivier Gayot
733ad14e33 ubuntu: for the stable image, use the stable model
Let's stop leaning on overrides for now.

Signed-off-by: Olivier Gayot <olivier.gayot@canonical.com>
2026-02-12 10:24:30 +01:00
Utkarsh Gupta
e26de340e2 Merge build-status into ubuntu/master [a=utkarsh] [r=]
Rename ISO_STATUS to BUILD_TYPE for image builds

MP: https://code.launchpad.net/~utkarsh/livecd-rootfs/+git/livecd-rootfs/+merge/500253

* build-status:
  Update d/ch for 26.04.16 release
  Rename ISO_STATUS to BUILD_TYPE for image builds
2026-02-12 01:53:18 +05:30
Utkarsh Gupta
7f1c505f20 Update d/ch for 26.04.16 release 2026-02-12 01:41:28 +05:30
Utkarsh Gupta
6d954c975d Rename ISO_STATUS to BUILD_TYPE for image builds 2026-02-12 01:41:06 +05:30
michael.hudson@canonical.com
73035c0b19
releasing package livecd-rootfs version 26.04.15 2026-02-11 10:07:53 +13:00
michael.hudson@canonical.com
84760de4da
rename the Daily|Release in .disk/info from "official" to "iso_status" 2026-02-11 09:42:44 +13:00
michael.hudson@canonical.com
2c2f7d5e5c
fix xorriso -map to include target path for riscv64
The -map option requires two arguments: the source filesystem path and
the target path in the ISO. Without the "/" target, xorriso fails.
This only affects riscv64, which uses native xorriso mode rather than
mkisofs compatibility mode.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-11 09:42:43 +13:00
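The fix described in this commit boils down to giving `-map` both of its arguments. A minimal sketch of the corrected argument shape, in Python for illustration only (the `iso-root` and `output.iso` paths are hypothetical stand-ins, not the real build paths):

```python
# Sketch of the corrected native-mode xorriso invocation described above.
# In native xorriso mode (no "-as mkisofs"), -map takes TWO arguments:
# the source filesystem path and the target path inside the ISO.
# Omitting the target ("/") makes xorriso fail.
def xorriso_map_args(source_dir: str, target_path: str = "/") -> list[str]:
    return ["-map", source_dir, target_path]

# Hypothetical command line; only the -map shape is the point here.
cmd = ["xorriso", "-outdev", "output.iso"] + xorriso_map_args("iso-root")
print(cmd)
```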
michael.hudson@canonical.com
45aa1e4550
run add_riscv_gpt on riscv64 2026-02-11 09:42:42 +13:00
michael.hudson@canonical.com
c1edc22c24
xorriso is not run with -as mkisofs for whatever reason 2026-02-11 09:42:40 +13:00
michael.hudson@canonical.com
9add6d4ab8
do not truncate xorriso invocation in output 2026-02-11 09:42:39 +13:00
michael.hudson@canonical.com
acd63ee3e4
Make sure the unlayered ISO has a cdrom.sources in as well. 2026-02-11 09:42:38 +13:00
michael.hudson@canonical.com
ab2b82e3c2
a more generic way to make sure all artefacts get a for-iso path 2026-02-11 09:42:37 +13:00
michael.hudson@canonical.com
9a9ca07a76
Copy-edit Claude's comments a bit. 2026-02-11 09:42:36 +13:00
michael.hudson@canonical.com
4d8cfd89b8
Update changelog for ISO build support 2026-02-11 09:42:20 +13:00
michael.hudson@canonical.com
ce809612c4
Add CI lint checks for Python code
Add a lint job to the Launchpad CI pipeline that runs mypy, black, and
flake8 on the new Python code (gen-iso-ids, isobuild, isobuilder).
2026-02-11 09:41:08 +13:00
michael.hudson@canonical.com
b3fdc4e615
Add isobuild tool to build installer ISOs
This adds a new tool, isobuild, which replaces the ISO-building
functionality previously provided by live-build and cdimage. It is
invoked from auto/build when MAKE_ISO=yes.

The tool supports:
 - Layered desktop images (Ubuntu Desktop, flavors)
 - Non-layered images (Kubuntu, Ubuntu Unity)
 - Images with package pools (most installers)
 - Images without pools (Ubuntu Core Installer)

The isobuild command has several subcommands:
 - init: Initialize the ISO build directory structure
 - setup-apt: Configure APT for package pool generation
 - generate-pool: Create the package pool from a seed
 - generate-sources: Generate cdrom.sources for the installed system
 - add-live-filesystem: Add squashfs and kernel/initrd to the ISO
 - make-bootable: Add GRUB and other boot infrastructure
 - make-iso: Generate the final ISO image

auto/config is updated to:
 - Set MAKE_ISO=yes for relevant image types
 - Set POOL_SEED_NAME for images that need a package pool
 - Invoke gen-iso-ids to compute ISO metadata

auto/build is updated to:
 - Remove old live-build ISO handling code
 - Invoke isobuild at appropriate points in the build

lb_binary_layered is updated to create squashfs files with
cdrom.sources included for use in the ISO.
2026-02-11 09:41:06 +13:00
michael.hudson@canonical.com
3112c5f175
Add gen-iso-ids tool to compute ISO metadata
Add a script to compute the values for .disk/info, the ISO volume ID,
and the "capproject" (capitalized project name) used in various places
in the ISO boot configuration.

This replaces the logic that was previously scattered across live-build
and cdimage.
2026-02-11 09:41:01 +13:00
Matthew Hagemann
8e26b08f59
changelog 2026-02-05 13:27:01 +02:00
Matthew Hagemann
7cbabf55d5
ubuntu: delay display manager until snapd seeding completes
Add systemd drop-in to wait for snapd seeding completion before starting the
display manager. This improves the user experience as users now wait in
Plymouth for the installer to finish being seeded, instead of in GDM with only
the wallpaper visible. When GDM starts, the installer launches with minimal
delay.
2026-02-05 13:25:28 +02:00
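The drop-in itself is not shown in this diff; a minimal sketch of what such a drop-in could look like follows. The unit name (`gdm.service`), file path, and ordering target (`snapd.seeded.service`) are assumptions for illustration, not taken from the change:

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/gdm.service.d/wait-for-seeding.conf
# (unit and path names assumed; the actual diff is not included above)
[Unit]
# Do not start the display manager until snapd reports seeding complete.
After=snapd.seeded.service
Wants=snapd.seeded.service
```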
18 changed files with 1320 additions and 42 deletions

.launchpad.yaml (new file, +12)

@ -0,0 +1,12 @@
pipeline:
- [lint]

jobs:
  lint:
    series: noble
    architectures: amd64
    packages:
      - black
      - mypy
      - python3-flake8
    run: ./check-lint

check-lint (new executable file, +11)

@ -0,0 +1,11 @@
#!/bin/sh
set -eux
export MYPYPATH=live-build
mypy live-build/isobuilder live-build/isobuild
mypy live-build/gen-iso-ids
black --check live-build/isobuilder live-build/isobuild live-build/gen-iso-ids
python3 -m flake8 --max-line-length 88 --ignore E203 live-build/isobuilder live-build/isobuild live-build/gen-iso-ids

debian/changelog (vendored, +27)

@ -1,3 +1,30 @@
livecd-rootfs (26.04.17) resolute; urgency=medium

  * desktop: build the stable ISO using the stable model - essentially
    reverting all the hacks.
  * desktop: update the stable model to the latest. It has:
    - components defined for the 6.19 kernel (nvidia 580 series)
    - no core26: for TPM/FDE recovery testing, please install the core26 snap
      from edge.

 -- Olivier Gayot <olivier.gayot@canonical.com>  Thu, 12 Feb 2026 10:25:15 +0100

livecd-rootfs (26.04.16) resolute; urgency=medium

  * Rename ISO_STATUS to BUILD_TYPE for image builds.

 -- Utkarsh Gupta <utkarsh@debian.org>  Thu, 12 Feb 2026 01:41:11 +0530

livecd-rootfs (26.04.15) resolute; urgency=medium

  [ Matthew Hagemann ]
  * desktop: delay display manager starting until snapd seeding completes

  [ Michael Hudson-Doyle ]
  * Make an ISO in the livefs build when building an installer.

 -- Michael Hudson-Doyle <michael.hudson@ubuntu.com>  Wed, 11 Feb 2026 10:04:37 +1300

livecd-rootfs (26.04.14) resolute; urgency=medium

  [ Olivier Gayot ]

debian/control (vendored, +1)

@ -37,6 +37,7 @@ Depends: ${misc:Depends},
procps,
python3,
python3-apt,
python3-click,
python3-launchpadlib [!i386],
python3-yaml,
qemu-utils [!i386],


@ -208,6 +208,22 @@ EOF
    undivert_update_initramfs
    undivert_grub chroot
fi

if [ "${MAKE_ISO}" = yes ]; then
    isobuild init --disk-info "$(cat config/iso-ids/disk-info)" --series "${LB_DISTRIBUTION}" --arch "${ARCH}"
    # Determine which chroot directory has the apt configuration to use.
    # Layered builds (PASSES set) create overlay directories named
    # "overlay.base", "overlay.live", etc. - we use the first one (base).
    # Single-pass builds use the "chroot" directory directly.
    if [ "${PASSES}" ]; then
        CHROOT="overlay.$(set -- $PASSES; echo $1)"
    else
        CHROOT=chroot
    fi
    isobuild setup-apt --chroot $CHROOT
    if [ -n "${POOL_SEED_NAME}" ]; then
        isobuild generate-pool --package-list-file "config/germinate-output/${POOL_SEED_NAME}"
    fi
fi

if [ -d chroot/etc/apt/preferences.d.save ]; then
    # https://mastodon.social/@scream@botsin.space
@ -427,13 +443,6 @@ if [ -e config/manifest-minimal-remove ]; then
    cp config/manifest-minimal-remove "$PREFIX.manifest-minimal-remove"
fi

for ISO in binary.iso binary.hybrid.iso; do
    [ -e "$ISO" ] || continue
    ln "$ISO" "$PREFIX.iso"
    chmod 644 "$PREFIX.iso"
    break
done

if [ -e "binary/$INITFS/filesystem.dir" ]; then
    (cd "binary/$INITFS/filesystem.dir/" && tar -c --sort=name --xattrs *) | \
        gzip -9 --rsyncable > "$PREFIX.rootfs.tar.gz"
@ -558,3 +567,28 @@ case $PROJECT in
ubuntu-cpc)
    config/hooks.d/remove-implicit-artifacts
esac

if [ "${MAKE_ISO}" = "yes" ]; then
    # Link build artifacts with "for-iso." prefix for isobuild to consume.
    # Layered builds create squashfs via lb_binary_layered (which already
    # creates for-iso.*.squashfs files). Single-pass builds only have
    # ${PREFIX}.squashfs, which does not contain cdrom.sources, so we
    # create a for-iso.filesystem.squashfs that does.
    if [ -z "$PASSES" ]; then
        isobuild generate-sources --mountpoint=/cdrom > chroot/etc/apt/sources.list.d/cdrom.sources
        create_squashfs chroot for-iso.filesystem.squashfs
    fi
    # Link kernel and initrd files. The ${thing#${PREFIX}} expansion strips
    # the PREFIX, so "livecd.ubuntu-server.kernel-generic" becomes
    # "for-iso.kernel-generic".
    for thing in ${PREFIX}.kernel-* ${PREFIX}.initrd-*; do
        for_iso_path=for-iso${thing#${PREFIX}}
        if [ ! -f $for_iso_path ]; then
            ln -v $thing $for_iso_path
        fi
    done
    isobuild add-live-filesystem --artifact-prefix for-iso.
    isobuild make-bootable --project "${PROJECT}" --capproject "$(cat config/iso-ids/capproject)" \
        ${SUBARCH:+--subarch "${SUBARCH}"}
    isobuild make-iso --volid "$(cat config/iso-ids/vol-id)" --dest ${PREFIX}.iso
fi


@ -663,10 +663,23 @@ if ! [ -e config/germinate-output/structure ]; then
        -s $FLAVOUR.$SUITE $GERMINATE_ARG -a ${ARCH_VARIANT:-$ARCH})
fi

# ISO build configuration. These defaults are overridden per-project below.
#
# MAKE_ISO: Set to "yes" to generate an installer ISO at the end of the build.
# This triggers isobuild to run in auto/build.
MAKE_ISO=no

# POOL_SEED_NAME: The germinate output file defining packages for the ISO's
# package pool (repository). Different flavors use different seeds:
#  - "ship-live" for most desktop images
#  - "server-ship-live" for Ubuntu Server (includes server-specific packages)
#  - "" (empty) for images without a pool, like Ubuntu Core Installer
POOL_SEED_NAME=ship-live

# Common functionality for layered desktop images
common_layered_desktop_image() {
    touch config/universe-enabled
    PASSES_TO_LAYERS="true"
    MAKE_ISO=yes

    if [ -n "$HAS_MINIMAL" ]; then
        if [ -z "$MINIMAL_TASKS" ]; then
@ -897,6 +910,7 @@ case $PROJECT in
        add_task install minimal standard
        add_task install kubuntu-desktop
        LIVE_TASK='kubuntu-live'
        MAKE_ISO=yes

        add_chroot_hook remove-gnome-icon-cache
        ;;
@ -923,6 +937,7 @@ case $PROJECT in
    ubuntu-unity)
        add_task install minimal standard ${PROJECT}-desktop
        LIVE_TASK=${PROJECT}-live
        MAKE_ISO=yes
        ;;

    lubuntu)
@ -997,6 +1012,8 @@ case $PROJECT in
    live)
        OPTS="${OPTS:+$OPTS }--bootstrap-flavour=minimal"
        PASSES_TO_LAYERS=true
        MAKE_ISO=yes
        POOL_SEED_NAME=server-ship-live
        add_task ubuntu-server-minimal server-minimal
        add_package ubuntu-server-minimal lxd-installer
        add_task ubuntu-server-minimal.ubuntu-server minimal standard server
@ -1129,6 +1146,8 @@ case $PROJECT in
        fi
        OPTS="${OPTS:+$OPTS }--bootstrap-flavour=minimal"
        PASSES_TO_LAYERS=true
        MAKE_ISO=yes
        POOL_SEED_NAME=
        add_task base server-minimal server
        add_task base.live server-live
        add_package base.live linux-image-generic
@ -1384,6 +1403,8 @@ echo "IMAGEFORMAT=\"$IMAGEFORMAT\"" >> config/chroot
if [ -n "$PASSES" ]; then
    echo "PASSES=\"$PASSES\"" >> config/common
fi
echo "MAKE_ISO=\"$MAKE_ISO\"" >> config/common
echo "POOL_SEED_NAME=\"$POOL_SEED_NAME\"" >> config/common
if [ -n "$NO_SQUASHFS_PASSES" ]; then
    echo "NO_SQUASHFS_PASSES=\"$NO_SQUASHFS_PASSES\"" >> config/common
fi
@ -1677,3 +1698,11 @@ apt-get -y download $PREINSTALL_POOL
EOF
fi
fi

if [ "${MAKE_ISO}" = "yes" ]; then
    # XXX should pass --build-type here.
    /usr/share/livecd-rootfs/live-build/gen-iso-ids \
        --project $PROJECT ${SUBPROJECT:+--subproject $SUBPROJECT} \
        --arch $ARCH ${SUBARCH:+--subarch $SUBARCH} ${NOW+--serial $NOW} \
        --output-dir config/iso-ids/
fi


@ -1444,3 +1444,10 @@ gpt_root_partition_uuid() {
    echo "${ROOTFS_PARTITION_TYPE}"
}

# Wrapper for the isobuild tool. Sets PYTHONPATH so the isobuilder module
# is importable, and uses config/iso-dir as the standard working directory
# for ISO metadata and intermediate files.
isobuild () {
    PYTHONPATH=/usr/share/livecd-rootfs/live-build/ /usr/share/livecd-rootfs/live-build/isobuild --workdir config/iso-dir "$@"
}

live-build/gen-iso-ids (new executable file, +197)

@ -0,0 +1,197 @@
#!/usr/bin/python3

# Compute various slightly obscure IDs and labels used by ISO builds.
#
# * ISO9660 images have a "volume id".
# * Our ISOs contain a ".disk/info" file that is read by various
#   other things (casper, the installer) and is generally used as a
#   record of where an installation came from.
# * The code that sets up grub for the ISO needs a "capitalized
#   project name" or capproject.
#
# All of these are derived from other build parameters (and/or
# information in etc/os-release) in slightly non-obvious ways so the
# logic to do so is confined to this file to avoid it cluttering
# anywhere else.

import pathlib
import platform
import time

import click

# Be careful about the values here. They end up in .disk/info, which is read by
# casper to create the live session user, so if there is a space in the
# capproject things go a bit wonky.
#
# It will also be used by make_vol_id to construct an ISO9660 volume ID as
#
#     "$(CAPPROJECT) $(DEBVERSION) $(ARCH)",
#
# e.g. "Ubuntu 14.10 amd64". The volume ID is limited to 32 characters. This
# therefore imposes a limit on the length of project_map values of 25 - (length
# of longest relevant architecture name).
project_to_capproject_map = {
    "edubuntu": "Edubuntu",
    "kubuntu": "Kubuntu",
    "lubuntu": "Lubuntu",
    "ubuntu": "Ubuntu",
    "ubuntu-base": "Ubuntu-Base",
    "ubuntu-budgie": "Ubuntu-Budgie",
    "ubuntu-core-installer": "Ubuntu-Core-Installer",
    "ubuntu-mate": "Ubuntu-MATE",
    "ubuntu-mini-iso": "Ubuntu-Mini-ISO",
    "ubuntu-oem": "Ubuntu OEM",
    "ubuntu-server": "Ubuntu-Server",
    "ubuntu-unity": "Ubuntu-Unity",
    "ubuntu-wsl": "Ubuntu WSL",
    "ubuntucinnamon": "Ubuntu-Cinnamon",
    "ubuntukylin": "Ubuntu-Kylin",
    "ubuntustudio": "Ubuntu-Studio",
    "xubuntu": "Xubuntu",
}


def make_disk_info(
    os_release: dict[str, str],
    arch: str,
    subarch: str,
    capproject: str,
    subproject: str,
    build_type: str,
    serial: str,
) -> str:
    # os-release VERSION is _almost_ what goes into .disk/info...
    # it can be
    #     VERSION="24.04.3 LTS (Noble Numbat)"
    # or
    #     VERSION="25.10 (Questing Quokka)"
    # We want the Adjective Animal to be in quotes, not parentheses, e.g.
    # 'Ubuntu 24.04.3 LTS "Noble Numbat"'. This format is expected by casper
    # (which parses .disk/info to set up the live session) and the installer.
    version = os_release["VERSION"]
    version = version.replace("(", '"')
    version = version.replace(")", '"')
    capsubproject = ""
    if subproject == "minimal":
        capsubproject = " Minimal"
    fullarch = arch
    if subarch:
        fullarch += "+" + subarch
    return f"{capproject}{capsubproject} {version} - {build_type} {fullarch} ({serial})"


def make_vol_id(os_release: dict[str, str], arch: str, capproject: str) -> str:
    # ISO9660 volume IDs are limited to 32 characters. The volume ID format is
    # "CAPPROJECT VERSION ARCH", e.g. "Ubuntu 24.04.3 LTS amd64". Longer arch
    # names like ppc64el and riscv64 can push us over the limit, so we shorten
    # them here. This is why capproject names are also kept short (see the
    # comment above project_to_capproject_map).
    arch_for_volid_map = {
        "ppc64el": "ppc64",
        "riscv64": "riscv",
    }
    arch_for_volid = arch_for_volid_map.get(arch, arch)
    # from
    #     VERSION="24.04.3 LTS (Noble Numbat)"
    # or
    #     VERSION="25.10 (Questing Quokka)"
    # we want "24.04.3 LTS" or "25.10", i.e. everything up to the first "("
    # (apart from the whitespace).
    version = os_release["VERSION"].split("(")[0].strip()
    volid = f"{capproject} {version} {arch_for_volid}"
    # If still over 32 characters (e.g. long capproject + LTS version), fall
    # back to shorter forms. amd64 gets "x64" since it's widely recognized and
    # fits; other architectures just drop the arch entirely since multi-arch
    # ISOs are less common for non-amd64 platforms.
    if len(volid) > 32:
        if arch == "amd64":
            volid = f"{capproject} {version} x64"
        else:
            volid = f"{capproject} {version}"
    return volid


@click.command()
@click.option(
    "--project",
    type=str,
    required=True,
)
@click.option(
    "--subproject",
    type=str,
    default=None,
)
@click.option(
    "--arch",
    type=str,
    required=True,
)
@click.option(
    "--subarch",
    type=str,
    default=None,
)
@click.option(
    "--serial",
    type=str,
    default=time.strftime("%Y%m%d"),
)
@click.option(
    "--build-type",
    type=str,
    default="Daily",
)
@click.option(
    "--output-dir",
    type=click.Path(file_okay=False, resolve_path=True, path_type=pathlib.Path),
    required=True,
    help="working directory",
)
def main(
    project: str,
    subproject: str,
    arch: str,
    subarch: str,
    serial: str,
    build_type: str,
    output_dir: pathlib.Path,
):
    output_dir.mkdir(exist_ok=True)
    capproject = project_to_capproject_map[project]
    os_release = platform.freedesktop_os_release()
    with output_dir.joinpath("disk-info").open("w") as fp:
        disk_info = make_disk_info(
            os_release,
            arch,
            subarch,
            capproject,
            subproject,
            build_type,
            serial,
        )
        print(f"disk_info: {disk_info!r}")
        fp.write(disk_info)
    with output_dir.joinpath("vol-id").open("w") as fp:
        vol_id = make_vol_id(os_release, arch, capproject)
        print(f"vol_id: {vol_id!r} {len(vol_id)}")
        fp.write(vol_id)
    with output_dir.joinpath("capproject").open("w") as fp:
        print(f"capproject: {capproject!r}")
        fp.write(capproject)


if __name__ == "__main__":
    main()
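To make the two VERSION rewrites above concrete, here is a small standalone sketch. The os-release value, capproject, build type, and serial are invented example inputs, not output from a real build:

```python
# Standalone illustration of the transformations in make_disk_info and
# make_vol_id above; all input values here are invented examples.
os_release = {"VERSION": "24.04.3 LTS (Noble Numbat)"}

# .disk/info: parentheses around the Adjective Animal become double quotes.
version = os_release["VERSION"].replace("(", '"').replace(")", '"')
disk_info = f"Ubuntu {version} - Daily amd64 (20260211)"

# Volume ID: everything before the first "(", trimmed, then the arch.
vol_version = os_release["VERSION"].split("(")[0].strip()
vol_id = f"Ubuntu {vol_version} amd64"

print(disk_info)  # Ubuntu 24.04.3 LTS "Noble Numbat" - Daily amd64 (20260211)
print(vol_id)     # Ubuntu 24.04.3 LTS amd64
assert len(vol_id) <= 32  # the ISO9660 limit discussed above
```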

live-build/isobuild (new executable file, +221)

@ -0,0 +1,221 @@
#!/usr/bin/python3

# Building an ISO requires knowing:
#
# * The architecture and series we are building for
# * The address of the mirror to pull packages from the pool from and the
#   components of that mirror to use
# * The list of packages to include in the pool
# * Where the squashfs files that contain the rootfs and other metadata layers
#   are
# * Where to put the final ISO
# * All the bits of information that end up in .disk/info on the ISO and in the
#   "volume ID" for the ISO
#
# It's not completely trivial to come up with a nice-feeling interface between
# livecd-rootfs and this tool. There are about 13 parameters that are needed to
# build the ISO and having a tool take 13 arguments seems a bit overwhelming. In
# addition some steps need to run before the layers are made into squashfs files
# and some after. It felt nicer to have a tool with a few subcommands (7, in the
# end), each taking the arguments relevant to its step:
#
# $ isobuild --workdir "" init --disk-info "" --series "" --arch ""
#
#   Set up the workdir for later steps. Create the skeleton file layout of the
#   ISO, populate .disk/info etc, and create the ephemeral gpg key used to sign
#   the pool. Store series and arch somewhere that later steps can refer to.
#
# $ isobuild --workdir "" setup-apt --chroot ""
#
#   Set up apt for use by later steps, using the configuration from the passed
#   chroot.
#
# $ isobuild --workdir "" generate-pool --package-list-file ""
#
#   Create the pool from the passed germinate output file.
#
# $ isobuild --workdir "" generate-sources --mountpoint ""
#
#   Generate an apt deb822 source for the pool, assuming it is mounted at the
#   passed mountpoint, and output it on stdout.
#
# $ isobuild --workdir "" add-live-filesystem --artifact-prefix ""
#
#   Copy the relevant artifacts to the casper directory (and extract the uuids
#   from the initrds).
#
# $ isobuild --workdir "" make-bootable --project "" --capproject "" --subarch ""
#
#   Set up the bootloader etc so that the ISO can boot (for this it clones
#   debian-cd and runs the tools/boot/$series-$arch script, but those should be
#   folded into isobuild fairly promptly IMO).
#
# $ isobuild --workdir "" make-iso --volid "" --dest ""
#
#   Generate the checksum file and run xorriso to build the final ISO.

import pathlib
import shlex

import click

from isobuilder.builder import ISOBuilder


@click.group()
@click.option(
    "--workdir",
    type=click.Path(file_okay=False, resolve_path=True, path_type=pathlib.Path),
    required=True,
    help="working directory",
)
@click.pass_context
def main(ctxt, workdir):
    ctxt.obj = ISOBuilder(workdir)
    cwd = pathlib.Path().cwd()
    if workdir.is_relative_to(cwd):
        workdir = workdir.relative_to(cwd)
    ctxt.obj.logger.log(f"isobuild starting, workdir: {workdir}")


def subcommand(f):
    """Decorator that converts a function into a Click subcommand with logging.

    This decorator:
    1. Converts the function name from snake_case to kebab-case for the CLI
    2. Wraps the function to log the subcommand name and all parameters
    3. Registers it as a Click command under the main command group
    4. Extracts the ISOBuilder instance from the context and passes it as
       the first argument
    """
    name = f.__name__.replace("_", "-")

    def wrapped(ctxt, **kw):
        # Build a log message showing the subcommand and all its parameters.
        # We use ctxt.params (Click's resolved parameters) rather than **kw
        # because ctxt.params includes path resolution and type conversion.
        # Paths are converted to relative form to keep logs readable and avoid
        # exposing full filesystem paths in build artifacts.
        msg = f"subcommand {name}"
        cwd = pathlib.Path().cwd()
        for k, v in sorted(ctxt.params.items()):
            if isinstance(v, pathlib.Path):
                if v.is_relative_to(cwd):
                    v = v.relative_to(cwd)
            v = shlex.quote(str(v))
            msg += f" {k}={v}"
        with ctxt.obj.logger.logged(msg):
            f(ctxt.obj, **kw)

    return main.command(name=name)(click.pass_context(wrapped))


@click.option(
    "--disk-info",
    type=str,
    required=True,
    help="contents of .disk/info",
)
@click.option(
    "--series",
    type=str,
    required=True,
    help="series being built",
)
@click.option(
    "--arch",
    type=str,
    required=True,
    help="architecture being built",
)
@subcommand
def init(builder, disk_info, series, arch):
    builder.init(disk_info, series, arch)


@click.option(
    "--chroot",
    type=click.Path(
        file_okay=False, resolve_path=True, path_type=pathlib.Path, exists=True
    ),
    required=True,
)
@subcommand
def setup_apt(builder, chroot: pathlib.Path):
    builder.setup_apt(chroot)


@click.option(
    "--package-list-file",
    type=click.Path(
        dir_okay=False, exists=True, resolve_path=True, path_type=pathlib.Path
    ),
    required=True,
)
@subcommand
def generate_pool(builder, package_list_file: pathlib.Path):
    builder.generate_pool(package_list_file)


@click.option(
    "--mountpoint",
    type=str,
    required=True,
)
@subcommand
def generate_sources(builder, mountpoint: str):
    builder.generate_sources(mountpoint)


@click.option(
    "--artifact-prefix",
    type=click.Path(dir_okay=False, resolve_path=True, path_type=pathlib.Path),
    required=True,
)
@subcommand
def add_live_filesystem(builder, artifact_prefix: pathlib.Path):
    builder.add_live_filesystem(artifact_prefix)


@click.option(
    "--project",
    type=str,
    required=True,
)
@click.option("--capproject", type=str, default=None)
@click.option(
    "--subarch",
    type=str,
    default="",
)
@subcommand
def make_bootable(builder, project: str, capproject: str | None, subarch: str):
    # capproject is the "capitalized project name" used in GRUB menu entries,
    # e.g. "Ubuntu" or "Kubuntu". It should come from gen-iso-ids (which uses
    # project_to_capproject_map for proper formatting like "Ubuntu-MATE"), but
    # we provide a simple .capitalize() fallback for cases where the caller
    # doesn't have the pre-computed value.
    if capproject is None:
        capproject = project.capitalize()
    builder.make_bootable(project, capproject, subarch)


@click.option(
    "--dest",
    type=click.Path(dir_okay=False, resolve_path=True, path_type=pathlib.Path),
    required=True,
)
@click.option(
    "--volid",
    type=str,
    default=None,
)
@subcommand
def make_iso(builder, dest: pathlib.Path, volid: str | None):
    builder.make_iso(dest, volid)


if __name__ == "__main__":
    main()


@ -0,0 +1 @@
#


@ -0,0 +1,109 @@
import dataclasses
import os
import pathlib
import shutil
import subprocess
from typing import Iterator


@dataclasses.dataclass
class PackageInfo:
    package: str
    filename: str
    architecture: str
    version: str

    @property
    def spec(self) -> str:
        return f"{self.package}:{self.architecture}={self.version}"


def check_proc(proc, ok_codes=(0,)) -> None:
    proc.wait()
    if proc.returncode not in ok_codes:
        raise Exception(f"{proc} failed")


class AptStateManager:
    """Maintain and use an apt state directory to access package info and debs."""

    def __init__(self, logger, series: str, apt_dir: pathlib.Path):
        self.logger = logger
        self.series = series
        self.apt_root = apt_dir.joinpath("root")
        self.apt_conf_path = apt_dir.joinpath("apt.conf")

    def _apt_env(self) -> dict[str, str]:
        return dict(os.environ, APT_CONFIG=str(self.apt_conf_path))

    def setup(self, chroot: pathlib.Path):
        """Set up the manager by copying the apt configuration from `chroot`."""
        for path in "etc/apt", "var/lib/apt":
            tgt = self.apt_root.joinpath(path)
            tgt.parent.mkdir(parents=True, exist_ok=True)
            shutil.copytree(chroot.joinpath(path), tgt)
        self.apt_conf_path.write_text(f'Dir "{self.apt_root}/"; \n')
        with self.logger.logged("updating apt indices"):
            self.logger.run(["apt-get", "update"], env=self._apt_env())

    def show(self, pkgs: list[str]) -> Iterator[PackageInfo]:
        """Return information about the binary packages named by `pkgs`.

        Parses apt-cache output, which uses an RFC822-like format: field names
        followed by ": " and values, with multi-line values indented with
        leading whitespace. We skip continuation lines (starting with a space)
        since PackageInfo only needs single-line fields.

        The `fields` set (derived from PackageInfo's dataclass fields) acts as
        a filter - we only extract fields we care about, ignoring others like
        Description.
        """
        proc = subprocess.Popen(
            ["apt-cache", "-o", "APT::Cache::AllVersions=0", "show"] + pkgs,
            stdout=subprocess.PIPE,
            encoding="utf-8",
            env=self._apt_env(),
        )
        assert proc.stdout is not None
        fields = {f.name for f in dataclasses.fields(PackageInfo)}
        params: dict[str, str] = {}
        for line in proc.stdout:
            if line == "\n":
                yield PackageInfo(**params)
                params = {}
                continue
            if line.startswith(" "):
                continue
            field, value = line.split(": ", 1)
            field = field.lower()
            if field in fields:
                params[field] = value.strip()
        check_proc(proc)
        if params:
            yield PackageInfo(**params)

    def download(self, rootdir: pathlib.Path, pkg_info: PackageInfo):
        """Download the package specified by `pkg_info` under `rootdir`.

        The package is saved to the same path under `rootdir` as it is
        at in the archive it comes from.
        """
        target_dir = rootdir.joinpath(pkg_info.filename).parent
        target_dir.mkdir(parents=True, exist_ok=True)
        self.logger.run(
            ["apt-get", "download", pkg_info.spec],
            cwd=target_dir,
            check=True,
            env=self._apt_env(),
        )

    def in_release_path(self) -> pathlib.Path:
        """Return the path to the InRelease file.

        This assumes exactly one InRelease file matches the pattern and
        raises ValueError if there are 0 or multiple matches.
        """
        [path] = self.apt_root.joinpath("var/lib/apt/lists").glob(
            f"*_dists_{self.series}_InRelease"
        )
        return path
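The record-splitting logic that `show()`'s docstring describes can be exercised standalone. The following is a simplified re-statement for illustration (the sample apt-cache output is invented, and unlike the real method this version tolerates a trailing blank line), not the module itself:

```python
import dataclasses

@dataclasses.dataclass
class PackageInfo:
    package: str
    filename: str
    architecture: str
    version: str

def parse_show_output(text: str) -> list[PackageInfo]:
    # Mirrors the parsing in AptStateManager.show(): records are separated
    # by blank lines, continuation lines (leading whitespace) are skipped,
    # and only the dataclass's fields are kept.
    fields = {f.name for f in dataclasses.fields(PackageInfo)}
    records: list[PackageInfo] = []
    params: dict[str, str] = {}
    for line in text.splitlines(keepends=True):
        if line == "\n":
            if params:
                records.append(PackageInfo(**params))
            params = {}
            continue
        if line.startswith(" "):
            continue
        field, value = line.split(": ", 1)
        field = field.lower()
        if field in fields:
            params[field] = value.strip()
    if params:
        records.append(PackageInfo(**params))
    return records

# Invented sample record; "Description" is dropped by the field filter and
# the indented line is skipped as a continuation.
sample = (
    "Package: hello\n"
    "Architecture: amd64\n"
    "Version: 2.10-3\n"
    "Filename: pool/main/h/hello/hello_2.10-3_amd64.deb\n"
    "Description: example entry\n"
    " continuation line, skipped\n"
    "\n"
)
print(parse_show_output(sample))
```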


@ -0,0 +1,386 @@
import contextlib
import json
import os
import pathlib
import shlex
import shutil
import subprocess
import sys

from isobuilder.apt_state import AptStateManager
from isobuilder.gpg_key import EphemeralGPGKey
from isobuilder.pool_builder import PoolBuilder

# Constants
PACKAGE_BATCH_SIZE = 200
MAX_CMD_DISPLAY_LENGTH = 80


def package_list_packages(package_list_file: pathlib.Path) -> list[str]:
    # Parse germinate output to extract package names. Germinate is Ubuntu's
    # package dependency resolver that outputs dependency trees for seeds (like
    # "ship-live" or "server-ship-live").
    #
    # Germinate output format has 2 header lines at the start and 2 footer
    # lines at the end (showing statistics), so we skip them with [2:-2].
    # Each data line starts with the package name followed by whitespace and
    # dependency info. This format is stable but if germinate ever changes its
    # header/footer count, this will break silently.
    lines = package_list_file.read_text().splitlines()[2:-2]
    return [line.split(None, 1)[0] for line in lines]


def make_sources_text(
    series: str, gpg_key: EphemeralGPGKey, components: list[str], mountpoint: str
) -> str:
    """Generate a deb822-format apt source file for the ISO's package pool.

    deb822 is the modern apt sources format (see sources.list(5) and deb822(5)).
    It uses RFC822-style fields where multi-line values must be indented with a
    leading space, and empty lines within a value are represented as " ."
    (space-dot). This format is required for inline GPG keys in the Signed-By
    field.
    """
    key = gpg_key.export_public()
    quoted_key = []
    for line in key.splitlines():
        if not line:
            quoted_key.append(" .")
        else:
            quoted_key.append(" " + line)
    return f"""\
Types: deb
URIs: file://{mountpoint}
Suites: {series}
Components: {" ".join(components)}
Check-Date: no
Signed-By:
""" + "\n".join(
        quoted_key
    )


class Logger:
    def __init__(self):
        self._indent = ""

    def log(self, msg):
        print(self._indent + msg, file=sys.stderr)

    @contextlib.contextmanager
    def logged(self, msg, done_msg=None):
        self.log(msg)
        self._indent += "  "
        try:
            yield
        finally:
            self._indent = self._indent[:-2]
            if done_msg is not None:
                self.log(done_msg)

    def msg_for_cmd(self, cmd, limit_length=True, cwd=None) -> str:
        if cwd is None:
            _cwd = pathlib.Path().cwd()
        else:
            _cwd = cwd
        fmted_cmd = []
        for arg in cmd:
            if isinstance(arg, pathlib.Path):
                if arg.is_relative_to(_cwd):
                    arg = arg.relative_to(_cwd)
            arg = str(arg)
            fmted_cmd.append(shlex.quote(arg))
        fmted_cmd_str = " ".join(fmted_cmd)
        if len(fmted_cmd_str) > MAX_CMD_DISPLAY_LENGTH and limit_length:
            fmted_cmd_str = fmted_cmd_str[:MAX_CMD_DISPLAY_LENGTH] + "..."
        msg = f"running `{fmted_cmd_str}`"
        if cwd is not None:
            msg += f" in {cwd}"
        return msg

    def run(
        self, cmd: list[str | pathlib.Path], *args, limit_length=True, check=True, **kw
    ):
        with self.logged(
            self.msg_for_cmd(cmd, cwd=kw.get("cwd"), limit_length=limit_length)
        ):
            return subprocess.run(cmd, *args, check=check, **kw)


class ISOBuilder:
    def __init__(self, workdir: pathlib.Path):
        self.workdir = workdir
        self.logger = Logger()
        self.iso_root = workdir.joinpath("iso-root")
        self._series = self._arch = self._gpg_key = self._apt_state = None

    # UTILITY STUFF

    def _read_config(self):
        with self.workdir.joinpath("config.json").open() as fp:
            data = json.load(fp)
        self._series = data["series"]
        self._arch = data["arch"]

    @property
    def arch(self):
        if self._arch is None:
            self._read_config()
        return self._arch

    @property
    def series(self):
        if self._series is None:
            self._read_config()
        return self._series

    @property
    def gpg_key(self):
        if self._gpg_key is None:
            self._gpg_key = EphemeralGPGKey(
                self.logger, self.workdir.joinpath("gpg-home")
            )
        return self._gpg_key

    @property
    def apt_state(self):
        if self._apt_state is None:
            self._apt_state = AptStateManager(
                self.logger, self.series, self.workdir.joinpath("apt-state")
            )
        return self._apt_state

    # COMMANDS

    def init(self, disk_info: str, series: str, arch: str):
        self.logger.log("creating directories")
        self.workdir.mkdir(exist_ok=True)
        self.iso_root.mkdir()
        dot_disk = self.iso_root.joinpath(".disk")
        dot_disk.mkdir()
        self.logger.log("saving config")
        with self.workdir.joinpath("config.json").open("w") as fp:
            json.dump({"arch": arch, "series": series}, fp)
        self.logger.log("populating .disk")
        dot_disk.joinpath("base_installable").touch()
        dot_disk.joinpath("cd_type").write_text("full_cd/single\n")
        dot_disk.joinpath("info").write_text(disk_info)
        self.iso_root.joinpath("casper").mkdir()
        self.gpg_key.create()

    def setup_apt(self, chroot: pathlib.Path):
        self.apt_state.setup(chroot)

    def generate_pool(self, package_list_file: pathlib.Path):
        # do we need any of the symlinks we create here??
        self.logger.log("creating pool skeleton")
        self.iso_root.joinpath("ubuntu").symlink_to(".")
        if self.arch not in ("amd64", "i386"):
            self.iso_root.joinpath("ubuntu-ports").symlink_to(".")
        self.iso_root.joinpath("dists", self.series).mkdir(parents=True)
        builder = PoolBuilder(
            self.logger,
            series=self.series,
            rootdir=self.iso_root,
            apt_state=self.apt_state,
        )
        pkgs = package_list_packages(package_list_file)
        # XXX include 32-bit deps of 32-bit packages if needed here
        with self.logger.logged("adding packages"):
            for i in range(0, len(pkgs), PACKAGE_BATCH_SIZE):
                builder.add_packages(
                    self.apt_state.show(pkgs[i : i + PACKAGE_BATCH_SIZE])
                )
        builder.make_packages()
        release_file = builder.make_release()
        self.gpg_key.sign(release_file)
        for name in "stable", "unstable":
self.iso_root.joinpath("dists", name).symlink_to(self.series)
def generate_sources(self, mountpoint: str):
components = [p.name for p in self.iso_root.joinpath("pool").iterdir()]
print(
make_sources_text(
self.series, self.gpg_key, mountpoint=mountpoint, components=components
)
)
def _extract_casper_uuids(self):
# Extract UUID files from initrd images for casper (the live boot system).
# Each initrd contains a conf/uuid.conf with a unique identifier that
# casper uses at boot time to locate the correct root filesystem. These
# UUIDs must be placed in .disk/casper-uuid-<flavor> on the ISO so casper
# can verify it's booting from the right media.
with self.logger.logged("extracting casper uuids"):
casper_dir = self.iso_root.joinpath("casper")
prefix = "filesystem.initrd-"
dot_disk = self.iso_root.joinpath(".disk")
for initrd in casper_dir.glob(f"{prefix}*"):
initrddir = self.workdir.joinpath("initrd")
with self.logger.logged(
f"unpacking {initrd.name} ...", done_msg="... done"
):
self.logger.run(["unmkinitramfs", initrd, initrddir])
# unmkinitramfs can produce different directory structures:
# - Platforms with early firmware: subdirs like "main/" or "early/"
# containing conf/uuid.conf
# - Other platforms: conf/uuid.conf directly in the root
# Try to find uuid.conf in both locations. The [uuid_conf] = confs
# unpacking asserts exactly one match; multiple matches would
# indicate an unexpected initrd structure.
confs = list(initrddir.glob("*/conf/uuid.conf"))
if confs:
[uuid_conf] = confs
elif initrddir.joinpath("conf/uuid.conf").exists():
uuid_conf = initrddir.joinpath("conf/uuid.conf")
else:
raise Exception("uuid.conf not found")
self.logger.log(f"found {uuid_conf.relative_to(initrddir)}")
uuid_conf.rename(
dot_disk.joinpath("casper-uuid-" + initrd.name[len(prefix) :])
)
shutil.rmtree(initrddir)
def add_live_filesystem(self, artifact_prefix: pathlib.Path):
# Link build artifacts into the ISO's casper directory. We use hardlinks
# (not copies) for filesystem efficiency - they reference the same inode.
#
# Artifacts come from the layered build with names like "for-iso.base.squashfs"
# and need to be renamed for casper. The prefix is stripped, so:
# for-iso.base.squashfs -> base.squashfs
# for-iso.kernel-generic -> filesystem.kernel-generic
#
# Kernel and initrd get the extra "filesystem." prefix because debian-cd
# expects names like filesystem.kernel-* and filesystem.initrd-*.
casper_dir = self.iso_root.joinpath("casper")
artifact_dir = artifact_prefix.parent
filename_prefix = artifact_prefix.name
def link(src, target_name):
target = casper_dir.joinpath(target_name)
self.logger.log(
f"creating link from $ISOROOT/casper/{target_name} to $src/{src.name}"
)
target.hardlink_to(src)
with self.logger.logged(
f"linking artifacts from {artifact_dir} to {casper_dir}"
):
for ext in "squashfs", "squashfs.gpg", "size", "manifest", "yaml":
for path in artifact_dir.glob(f"{filename_prefix}*.{ext}"):
newname = path.name[len(filename_prefix) :]
link(path, newname)
for item in "kernel", "initrd":
for path in artifact_dir.glob(f"{filename_prefix}{item}-*"):
newname = "filesystem." + path.name[len(filename_prefix) :]
link(path, newname)
self._extract_casper_uuids()
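The renaming convention spelled out in the comments above can be condensed into a sketch (the helper `casper_name` is hypothetical, not part of isobuilder):

```python
def casper_name(artifact: str, prefix: str = "for-iso.") -> str:
    # Strip the build prefix; kernel and initrd artifacts additionally
    # gain a "filesystem." prefix, since debian-cd expects names like
    # filesystem.kernel-* and filesystem.initrd-*.
    stripped = artifact[len(prefix):]
    if stripped.startswith(("kernel-", "initrd-")):
        return "filesystem." + stripped
    return stripped
```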
def make_bootable(self, project: str, capproject: str, subarch: str):
# debian-cd is Ubuntu's CD/ISO image build system. It contains
# architecture and series-specific boot configuration scripts that set up
# GRUB, syslinux, EFI boot, etc. The tools/boot/$series/boot-$arch script
# knows how to make an ISO bootable for each architecture.
#
# TODO: The boot configuration logic should eventually be ported directly
# into isobuilder to avoid this external dependency and git clone.
debian_cd_dir = self.workdir.joinpath("debian-cd")
with self.logger.logged("cloning debian-cd"):
self.logger.run(
[
"git",
"clone",
"--depth=1",
"https://git.launchpad.net/~ubuntu-cdimage/debian-cd/+git/ubuntu",
debian_cd_dir,
],
)
# Override apt-selection to use our ISO's apt configuration instead of
# debian-cd's default. This ensures the boot scripts get packages from
# the correct repository when installing boot packages.
apt_selection = debian_cd_dir.joinpath("tools/apt-selection")
with self.logger.logged("overwriting apt-selection"):
apt_selection.write_text(
"#!/bin/sh\n" f'APT_CONFIG={self.apt_state.apt_conf_path} apt-get "$@"\n'
)
env = dict(
os.environ,
BASEDIR=str(debian_cd_dir),
DIST=self.series,
PROJECT=project,
CAPPROJECT=capproject,
SUBARCH=subarch,
)
tool_name = f"tools/boot/{self.series}/boot-{self.arch}"
with self.logger.logged(f"running {tool_name} ...", done_msg="... done"):
self.logger.run(
[
debian_cd_dir.joinpath(tool_name),
"1",
self.iso_root,
],
env=env,
)
def checksum(self):
# Generate md5sum.txt for ISO integrity verification.
# - Symlinks are excluded because their targets are already checksummed
# - Files are sorted for deterministic, reproducible output across builds
# - Paths use "./" prefix and we run md5sum from iso_root so the output
# matches what casper-md5check expects.
all_files = []
for dirpath, dirnames, filenames in self.iso_root.walk():
filepaths = [dirpath.joinpath(filename) for filename in filenames]
all_files.extend(
"./" + str(filepath.relative_to(self.iso_root))
for filepath in filepaths
if not filepath.is_symlink()
)
self.iso_root.joinpath("md5sum.txt").write_bytes(
self.logger.run(
["md5sum"] + sorted(all_files),
cwd=self.iso_root,
stdout=subprocess.PIPE,
).stdout
)
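The md5sum.txt layout described above can be approximated in pure Python (a sketch using hashlib; the real code shells out to md5sum, but the output format — hash, two spaces, "./"-prefixed path — is the same):

```python
import hashlib
import pathlib

def md5sum_lines(root: pathlib.Path) -> list[str]:
    # "<md5>  ./relative/path", sorted for reproducible output;
    # symlinks are skipped because their targets are already checksummed.
    lines = []
    for path in sorted(root.rglob("*")):
        if path.is_symlink() or not path.is_file():
            continue
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        lines.append(f"{digest}  ./{path.relative_to(root)}")
    return lines
```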
def make_iso(self, dest: pathlib.Path, volid: str | None):
# 1.mkisofs_opts is generated by debian-cd's make_bootable step. The "1"
# refers to "pass 1" of the build (a legacy naming convention). It contains
# architecture-specific xorriso options for boot sectors, EFI images, etc.
mkisofs_opts = shlex.split(self.workdir.joinpath("1.mkisofs_opts").read_text())
self.checksum()
cmd: list[str | pathlib.Path] = ["xorriso"]
if self.arch == "riscv64":
# For $reasons, xorriso is not run in mkisofs mode on riscv64 only.
cmd.extend(["-rockridge", "on", "-outdev", dest])
if volid:
cmd.extend(["-volid", volid])
cmd.extend(mkisofs_opts)
cmd.extend(["-map", self.iso_root, "/"])
else:
# xorriso with "-as mkisofs" runs in mkisofs compatibility mode on
# other architectures. -r enables Rock Ridge extensions for Unix
# metadata (permissions, symlinks). -iso-level 3 (amd64 only)
# allows files >4GB which some amd64 ISOs need.
cmd.extend(["-as", "mkisofs", "-r"])
if self.arch == "amd64":
cmd.extend(["-iso-level", "3"])
if volid:
cmd.extend(["-V", volid])
cmd.extend(mkisofs_opts + [self.iso_root, "-o", dest])
with self.logger.logged("running xorriso"):
self.logger.run(cmd, cwd=self.workdir, check=True, limit_length=False)
if self.arch == "riscv64":
debian_cd_dir = self.workdir.joinpath("debian-cd")
add_riscv_gpt = debian_cd_dir.joinpath("tools/add_riscv_gpt")
self.logger.run([add_riscv_gpt, dest], cwd=self.workdir)


@@ -0,0 +1,58 @@
import pathlib
import subprocess
key_conf = """\
%no-protection
Key-Type: eddsa
Key-Curve: Ed25519
Key-Usage: sign
Name-Real: Ubuntu ISO One-Time Signing Key
Name-Email: noone@nowhere.invalid
Expire-Date: 0
"""
class EphemeralGPGKey:
def __init__(self, logger, gpghome):
self.logger = logger
self.gpghome = gpghome
def _run_gpg(self, cmd, **kwargs):
return self.logger.run(
["gpg", "--homedir", self.gpghome] + cmd, check=True, **kwargs
)
def create(self):
with self.logger.logged("creating gpg key ...", done_msg="... done"):
self.gpghome.mkdir(mode=0o700)
self._run_gpg(
["--gen-key", "--batch"],
input=key_conf,
text=True,
)
def sign(self, path: pathlib.Path):
with self.logger.logged(f"signing {path}"):
with path.open("rb") as inp:
with pathlib.Path(str(path) + ".gpg").open("wb") as outp:
self._run_gpg(
[
"--no-options",
"--batch",
"--no-tty",
"--armour",
"--digest-algo",
"SHA512",
"--detach-sign",
],
stdin=inp,
stdout=outp,
)
def export_public(self) -> str:
return self._run_gpg(
["--export", "--armor"],
stdout=subprocess.PIPE,
text=True,
).stdout


@@ -0,0 +1,166 @@
import pathlib
import subprocess
import tempfile
from isobuilder.apt_state import AptStateManager, PackageInfo
generate_template = """
Dir::ArchiveDir "{root}";
Dir::CacheDir "{scratch}/apt-ftparchive-db";
TreeDefault::Contents " ";
Tree "dists/{series}" {{
FileList "{scratch}/filelist_$(SECTION)";
Sections "{components}";
Architectures "{arches}";
}}
"""
class PoolBuilder:
def __init__(
self, logger, series: str, apt_state: AptStateManager, rootdir: pathlib.Path
):
self.logger = logger
self.series = series
self.apt_state = apt_state
self.rootdir = rootdir
self.arches: set[str] = set()
self._present_components: set[str] = set()
def add_packages(self, pkglist: list[PackageInfo]):
for pkg_info in pkglist:
if pkg_info.architecture != "all":
self.arches.add(pkg_info.architecture)
self.apt_state.download(self.rootdir, pkg_info)
def make_packages(self) -> None:
with self.logger.logged("making Packages files"):
with tempfile.TemporaryDirectory() as tmpdir:
scratchdir = pathlib.Path(tmpdir)
with self.logger.logged("scanning for packages"):
for component in ["main", "restricted", "universe", "multiverse"]:
if not self.rootdir.joinpath("pool", component).is_dir():
continue
self._present_components.add(component)
for arch in self.arches:
self.rootdir.joinpath(
"dists", self.series, component, f"binary-{arch}"
).mkdir(parents=True)
proc = self.logger.run(
["find", f"pool/{component}"],
stdout=subprocess.PIPE,
cwd=self.rootdir,
encoding="utf-8",
check=True,
)
scratchdir.joinpath(f"filelist_{component}").write_text(
"\n".join(sorted(proc.stdout.splitlines()))
)
with self.logger.logged("writing apt-ftparchive config"):
scratchdir.joinpath("apt-ftparchive-db").mkdir()
generate_path = scratchdir.joinpath("generate-binary")
generate_path.write_text(
generate_template.format(
arches=" ".join(self.arches),
series=self.series,
root=self.rootdir.resolve(),
scratch=scratchdir.resolve(),
components=" ".join(self._present_components),
)
)
with self.logger.logged("running apt-ftparchive generate"):
self.logger.run(
[
"apt-ftparchive",
"--no-contents",
"--no-md5",
"--no-sha1",
"--no-sha512",
"generate",
generate_path,
],
check=True,
)
def make_release(self) -> pathlib.Path:
# Build the Release file by merging metadata from the mirror with
# checksums for our pool. We can't just use apt-ftparchive's Release
# output directly because:
# 1. apt-ftparchive doesn't know about Origin, Label, Suite, Version,
# Codename, etc. - these come from the mirror and maintain package
# provenance
# 2. We keep the mirror's Date (when packages were released) rather than
# apt-ftparchive's Date (when we ran the command)
# 3. We need to override Architectures/Components to match our pool
#
# There may be a cleaner way (apt-get indextargets?) but this works.
with self.logger.logged("making Release file"):
in_release = self.apt_state.in_release_path()
cp_mirror_release = self.logger.run(
["gpg", "--verify", "--output", "-", in_release],
stdout=subprocess.PIPE,
encoding="utf-8",
check=False,
)
if cp_mirror_release.returncode not in (0, 2):
# gpg returns code 2 when the public key the InRelease is
# signed with is not available, which is most of the time.
raise Exception(f"gpg --verify failed with code {cp_mirror_release.returncode}")
mirror_release_lines = cp_mirror_release.stdout.splitlines()
release_dir = self.rootdir.joinpath("dists", self.series)
af_release_lines = self.logger.run(
[
"apt-ftparchive",
"--no-contents",
"--no-md5",
"--no-sha1",
"--no-sha512",
"release",
".",
],
stdout=subprocess.PIPE,
encoding="utf-8",
cwd=release_dir,
check=True,
).stdout.splitlines()
# Build the final Release file by merging mirror metadata with pool
# checksums.
# Strategy:
# 1. Take metadata fields (Suite, Origin, etc.) from the mirror's InRelease
# 2. Override Architectures and Components to match what's actually in our
# pool
# 3. Skip the mirror's checksum sections (MD5Sum, SHA256, etc.) because they
# don't apply to our pool
# 4. Skip Acquire-By-Hash since we don't use it
# 5. Append checksums from apt-ftparchive (but not the Date field)
release_lines = []
skipping = False
for line in mirror_release_lines:
if line.startswith("Architectures:"):
line = "Architectures: " + " ".join(sorted(self.arches))
elif line.startswith("Components:"):
line = "Components: " + " ".join(sorted(self._present_components))
elif line.startswith("MD5") or line.startswith("SHA"):
# Start of a checksum section - skip this and indented lines below
# it
skipping = True
elif not line.startswith(" "):
# Non-indented line means we've left the checksum section if we were
# in one.
skipping = False
if line.startswith("Acquire-By-Hash"):
continue
if not skipping:
release_lines.append(line)
# Append checksums from apt-ftparchive, but skip its Date field
# (we want to keep the Date from the mirror release)
for line in af_release_lines:
if not line.startswith("Date"):
release_lines.append(line)
release_path = release_dir.joinpath("Release")
release_path.write_text("\n".join(release_lines))
return release_path


@@ -184,6 +184,18 @@ build_layered_squashfs () {
    fi

    create_squashfs "${overlay_dir}" ${squashfs_f}

+   # Create a "for-iso" variant of the squashfs for ISO builds. For
+   # the root layer (the base system) when building with a pool, we
+   # need to include cdrom.sources so casper can access the ISO's
+   # package repository. This requires regenerating the squashfs with
+   # that file included, then removing it (so it doesn't pollute the
+   # regular squashfs). Non-root layers (desktop environment, etc.)
+   # and builds without pools can just hardlink to the regular squashfs.
+   if [ -n "${POOL_SEED_NAME}" ] && $(is_root_layer $pass); then
+       isobuild generate-sources --mountpoint=/cdrom > ${overlay_dir}/etc/apt/sources.list.d/cdrom.sources
+       create_squashfs "${overlay_dir}" ${PWD}/for-iso.${pass}.squashfs
+       rm ${overlay_dir}/etc/apt/sources.list.d/cdrom.sources
+   fi
+
    if [ -f config/$pass.catalog-in.yaml ]; then
        echo "Expanding catalog entry template for $pass"
@@ -227,3 +239,11 @@ if [ -n "$(ls livecd.${PROJECT_FULL}.*install.live.manifest.full 2>/dev/null)" ]
 fi

 chmod 644 *.squashfs *.manifest* *.size
+
+prefix=livecd.${PROJECT_FULL}
+for artifact in ${prefix}.*; do
+    for_iso_path=for-iso${artifact#${prefix}}
+    if [ ! -f $for_iso_path ]; then
+        ln -v $artifact $for_iso_path
+    fi
+done


@@ -148,31 +148,26 @@ if [ "$SUBPROJECT" = "dangerous" ]; then
        components+=("$comp")
    done
 else
-   # Normally we use the stable model here. Use the dangerous one for now
-   # until we get snaps on stable 26.04 tracks and channels.
-   #model="${stable_model}"
-   model="${dangerous_model}"
-   # We're currently using the dangerous model for the stable image because it
-   # allows us to override snaps. But we don't want all snaps from edge like
-   # the dangerous model has, we want most of them from stable excluding:
-   # * snapd-desktop-integration (for TPM/FDE)
-   # * firmware-updater (for TPM/FDE)
-   # * desktop-security-center (for TPM/FDE)
-   snaps_from_dangerous=(snapd-desktop-integration firmware-updater desktop-security-center)
+   model="${stable_model}"
+   # If we need to override anything from the model, we need grade: dangerous.
+   # And if so, uncomment the below to use the dangerous model and set the
+   # snaps_from_dangerous and snaps_from_beta variables to still use snaps
+   # from the stable model.
+   #model="${dangerous_model}"
+   snaps_from_dangerous=()
    # For these snaps, we ignore the model entirely.
-   snaps_from_beta=(snapd)
+   snaps_from_beta=()
    for snap in "${snaps_from_beta[@]}"; do
        prepare_args+=("--snap=$snap=beta")
    done
    # snaps that we are special casing.
    _exclude=("${snaps_from_dangerous[@]}" "${snaps_from_beta[@]}")
+   if [ "$model" = "$dangerous_model" ]; then
    for snap_arg in $(get_snaps_args_excluding "$stable_model" "${_exclude[@]}"); do
        prepare_args+=("$snap_arg")
    done
+   fi
    for comp in $(get_components_excluding "$stable_model" "${_exclude[@]}"); do
        components+=("$comp")
    done


@@ -0,0 +1,8 @@
# When booting the live ISO, snapd seeding takes a while to complete, which
# can cause GDM to start before the Ubuntu installer is seeded and ready to be
# launched. This leads to a confusing delay between the user leaving Plymouth
# and seeing the desktop wallpaper and the installer launching.
# This drop-in delays display-manager.service until snapd seeding completes, so
# the installer launches within seconds of Plymouth disappearing.
[Unit]
After=snapd.seeded.service


@@ -16,13 +16,9 @@ snaps:
      type: gadget
  -
    components:
-     nvidia-550-erd-ko:
-       presence: optional
-     nvidia-550-erd-user:
-       presence: optional
-     nvidia-570-erd-ko:
-       presence: optional
-     nvidia-570-erd-user:
-       presence: optional
+     nvidia-580-uda-ko:
+       presence: optional
+     nvidia-580-uda-user:
+       presence: optional
    default-channel: 26.04/stable
    id: pYVQrBcKmBa0mZ4CCN7ExT6jH8rY1hza
@@ -93,16 +89,16 @@ snaps:
    id: IrwRHakqtzhFRHJOOPxKVPU0Kk7Erhcu
    name: snapd-desktop-integration
    type: app
-timestamp: 2025-11-06T12:00:00.0Z
+timestamp: 2025-12-09T12:00:00.0Z
 sign-key-sha3-384: 9tydnLa6MTJ-jaQTFUXEwHl1yRx7ZS4K5cyFDhYDcPzhS7uyEkDxdUjg9g08BtNn

-AcLBXAQAAQoABgUCaSatwAAKCRDgT5vottzAElN8EAC81ZgmWYxnh9l2UrGl8I3WIa2yPrblQB4m
-2qdfj35umxfNtZdhBux74g6UpXttX5djcf2qfrK2VAk0tf3lolSprAfPeIoBxthl2Ig0CfWOD7Qa
-sJAiUZ2CVY0gX53tTxc+Lsaj2CCdmEVnlG5Lbzk6DDr6OYQ1jf+SyntSlaB4mvuy+YO89sA/E8X9
-xaYhZpS7NU+J5nfc9hB8xf/f7UvXVrcRmkX1t5Pra1T/eQ+3hgLzp+fLvFbwMRcEGqwE2KXTWwm1
-F191SI2UazuS4lWv0yJ40uljd26q53E8edKPmtPlmWEY0GwbofvcXKM3tw8gf9ZwZMlewjNYYHGu
-V1FsI+6GdULFPMoQptmEhQmZNOiAE706D+HVTgDvWfv/yw1fOmTUbFaT/dmUb8dSmndouRt2AF0c
-WivlBgo3fKjRZg/sPyZX3FwhggglmuCRiiYK9xu1b4wsplv090fAF3q33o9wLB+G6A4DE9QDzhfu
-7y5ABm/cG15nKDkanpbCFWwYEq7ANlzz3y6/KctQnFms3+qa5p5bdd+Q4mpqcJcNXMWFnb3b+lSp
-TITMdTf9afNKHFTbwBABoNVLDYelkNCYD99ukuSIS8MeiIHEXxUV9lNaEPTKoXgv3LETI8Wd43Qs
-Msb1UuoDShZo2gfDOlb8P0W7gxz79QbjMcSBBoqVew==
+AcLBXAQAAQoABgUCaYzP9QAKCRDgT5vottzAEus2D/4jJVutpoPmDrLjNQLn2KNf/f1L2zU8ESSe
+VpFjy+9Ff7AxXckALM4eEy/J5mc+UNhHQ/7Thp4XYy2NiH14n9Lv5kVqZCz8udiEfcfLy5gGveio
+oXyGX7J5x9sq3YXV1IHS84aqJS0si80TTLCRQXUN8oUZIVRkgFOGIVVneQkn1ppNs87kNgvBT1ow
+nwr9fVvZnt5bTprCxs4R5cEUlWTJMN4l96Eh530Q+wqCjFxbTs6FADUYielsFnBDl/Q1M0fozg4F
+Ct4gBbvFGWZhp8LXiCbJvTd3PAAV1HYAgtKDKZT0NQp8qaU5DpgTDiUzIjaAJP7feSU5AYDLuVSH
+V3zD8sosg1nmPvVtuSi2q5Z+/zd6gmG+vLn5d16whNqELDnX0O9Hxarc/3DD3ANZrrbXlq/PEJNB
+Lor5osHLN4utW7CUC5MIEQ5/Z/6cSuav6rQ+bBiAOzQSHRCbhfyCGSMMINX2CE3ePw3moi9gwXeh
+vKw1iItEOxywEKbeBNEvddnGsvmzoqf9Jg53/X0yrQQVZTHYFsQlTRk9ggajdZnPjJMTqlAqjXnP
+QCsgnprvln0akW4IfEzc+IgoF5eiShJd4IidkBbbdNXRRYlHfmOG7ZvR9upJwe1M73Zfu1nQFEvT
+fly59e2Vw8O50ljOVW3jT5fW36z8h1+ttxkKwVsQJg==