Merge branch 'ubuntu/master' of git+ssh://git.launchpad.net/livecd-rootfs into u-i-disk-info

Łukasz 'sil2100' Zemczak 5 years ago
commit 95772fd9df

debian/changelog

@@ -1,9 +1,129 @@
livecd-rootfs (2.635) UNRELEASED; urgency=medium
livecd-rootfs (2.650) UNRELEASED; urgency=medium
* Support generating a .disk/info file via ubuntu-image from the passed-in
datestamp parameter (using the $NOW environment variable).
-- Łukasz 'sil2100' Zemczak <lukasz.zemczak@ubuntu.com> Fri, 13 Dec 2019 18:12:12 +0100
-- Łukasz 'sil2100' Zemczak <lukasz.zemczak@ubuntu.com> Fri, 06 Mar 2020 11:12:12 +0100
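For context on the entry above: the datestamp arrives in the $NOW environment variable, and .disk/info is a one-line stamp inside the image. The following is a purely illustrative sketch of producing such a stamp; ROOTFS, SUITE and the exact wording are placeholders, and the real hook lives in the ubuntu-image integration, which is not part of this diff:

#!/bin/sh -e
# Illustrative sketch only: write a .disk/info stamp from the passed-in
# datestamp. ROOTFS and the stamp wording are placeholders, not the actual
# livecd-rootfs/ubuntu-image implementation.
ROOTFS="${ROOTFS:-chroot}"
if [ -n "${NOW:-}" ]; then
    mkdir -p "$ROOTFS/.disk"
    echo "Ubuntu ${SUITE:-focal} ${NOW}" > "$ROOTFS/.disk/info"
fi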
livecd-rootfs (2.649) focal; urgency=medium
* Fix autoinstall-extracting runcmd in the case no user-data is passed.
-- Michael Hudson-Doyle <michael.hudson@ubuntu.com> Thu, 05 Mar 2020 15:36:25 +0100
livecd-rootfs (2.648) focal; urgency=medium
* Enable cloud-init in live server installer live session on all
architectures.
* Remove code for old design for getting autoinstall.yaml.
* Add runcmd to extract autoinstall.yaml from user-data.
-- Michael Hudson-Doyle <michael.hudson@ubuntu.com> Wed, 04 Mar 2020 16:10:35 +0100
livecd-rootfs (2.647) focal; urgency=medium
* Address snap base regression after snap-tool removal
-- Robert C Jennings <robert.jennings@canonical.com> Tue, 25 Feb 2020 16:15:48 -0600
livecd-rootfs (2.646) focal; urgency=medium
* Pass --verbose to `snap info` so that it includes the base.
-- Iain Lane <iain.lane@canonical.com> Mon, 24 Feb 2020 11:22:21 +0000
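The base lookup that --verbose enables is used further down in this diff when preseeding snaps; a minimal sketch of the pipeline, with the snap name as an example only:

# Query the store for a snap's metadata; --verbose is needed so the
# 'base:' field is included in the output.
snap_info=$(snap info --verbose subiquity)
# Extract the base snap name, falling back to 'core' when none is listed.
core_snap=$(echo "$snap_info" | grep '^base:' | awk '{print $2}')
core_snap=${core_snap:-core}
echo "base snap to seed: $core_snap"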
livecd-rootfs (2.645) focal; urgency=medium
[ Robert C Jennings ]
* Use snap cli rather than custom snap-tool
-- Steve Langasek <steve.langasek@ubuntu.com> Fri, 21 Feb 2020 13:02:43 -0800
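With snap-tool gone, the download step uses the stock snap client directly; a minimal sketch of the equivalent invocation, mirroring the _snap_preseed hunk later in this diff (CHROOT_ROOT, COHORT_KEY, CHANNEL and SNAP_NAME are placeholders):

# Download a snap plus its assertions with the regular snap CLI,
# bypassing the CDN as the removed snap-tool used to do.
cd "$CHROOT_ROOT/var/lib/snapd/seed"
SNAPPY_STORE_NO_CDN=1 snap download \
    --cohort="${COHORT_KEY:-}" \
    --channel="$CHANNEL" "$SNAP_NAME"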
livecd-rootfs (2.644) focal; urgency=medium
* Rename the raspi3 SUBARCH to raspi, since we have long been generating
universal generic Pi images.
-- Łukasz 'sil2100' Zemczak <lukasz.zemczak@ubuntu.com> Fri, 21 Feb 2020 12:37:02 +0100
livecd-rootfs (2.643) focal; urgency=medium
* subiquity:
- drop ds-identify policy, not needed with improved cloud config
- drop disabling network, doesn't work with ip=
- fixup setting up the INSTALLER_ROOT mountpoint
-- Dimitri John Ledkov <xnox@ubuntu.com> Mon, 10 Feb 2020 23:50:16 +0000
livecd-rootfs (2.642) focal; urgency=medium
* Set uc20 image size to 10G.
-- Dimitri John Ledkov <xnox@ubuntu.com> Mon, 10 Feb 2020 12:43:44 +0000
livecd-rootfs (2.641) focal; urgency=medium
* Configure a better nocloud datasource for subiquity cloud-init.
* Encode CHANNEL specification in the UC20 model names.
-- Dimitri John Ledkov <xnox@ubuntu.com> Fri, 07 Feb 2020 22:18:11 +0000
livecd-rootfs (2.640) focal; urgency=medium
* Although the requested flavour to install is oem-20.04, it is really
called just oem on disk. Override the flavour name from oem-20.04 to
oem when renaming built artefacts. This also means that ubuntu-cdimage
needs to simply download 'oem' vmlinuz+initrd pairs, not 'oem-20.04'.
-- Dimitri John Ledkov <xnox@ubuntu.com> Thu, 30 Jan 2020 11:52:32 +0000
livecd-rootfs (2.639) focal; urgency=medium
* On s390x subiquity:
- enable cloud-init
- make cloud-init handle the default/baked in networking configuration
- install and enable openssh-server for the installation only
- provide cloud.cfg that generates random installer user password
- disable subiquity on sclp_line0 line based console
-- Dimitri John Ledkov <xnox@ubuntu.com> Wed, 29 Jan 2020 14:16:09 +0000
livecd-rootfs (2.638) focal; urgency=medium
* Install oem-20.04 kernel flavour on Ubuntu Desktop builds.
-- Dimitri John Ledkov <xnox@ubuntu.com> Tue, 28 Jan 2020 15:06:02 +0000
livecd-rootfs (2.637) focal; urgency=medium
* Ensure seed partition is mounted on no-cloud images which use system-boot
as their seed (LP: #1860046)
* Have getty wait for cloud-init to complete to ensure that the default
user exists before presenting a login prompt
-- Dave Jones <dave.jones@canonical.com> Fri, 24 Jan 2020 15:17:56 +0000
livecd-rootfs (2.636) focal; urgency=medium
* Stop trying to install linux-oem. It has been dropped, and trying to
install it causes Ubuntu images to fail to build. It is due to be replaced
by linux-oem-20.04 (currently built from linux-...-5.4), but that is stuck
in focal-proposed at the minute, so there is nothing to transition to until
it migrates.
* Drop linux-signed-generic for flavours too - follow up from 2.630 which
handled this for Ubuntu. (LP: #1859146)
* Ditto for ubuntu-core:system-image - move from linux-signed-image-generic
to linux-image-generic.
-- Iain Lane <iain@orangesquash.org.uk> Fri, 10 Jan 2020 12:11:02 +0000
livecd-rootfs (2.635) focal; urgency=medium
* Preserve apt preferences created by any package we install (i.e.
ubuntu-advantage-tools) against live-build's attempt to delete them.
(LP: #1855354)
-- Michael Hudson-Doyle <michael.hudson@ubuntu.com> Sat, 14 Dec 2019 21:00:45 +1300
livecd-rootfs (2.634) focal; urgency=medium

debian/install

@@ -4,4 +4,3 @@ get-ppa-fingerprint usr/share/livecd-rootfs
minimize-manual usr/share/livecd-rootfs
magic-proxy usr/share/livecd-rootfs
lp-in-release usr/share/livecd-rootfs
snap-tool usr/share/livecd-rootfs

@@ -314,6 +314,12 @@ EOF
undivert_grub chroot
fi
if [ -d chroot/etc/apt/preferences.d.save ]; then
# https://twitter.com/infinite_scream
mv chroot/etc/apt/preferences.d.save/* chroot/etc/apt/preferences.d/
rmdir chroot/etc/apt/preferences.d.save
fi
# Keep all configuration for non multi-layered projects here.
# If those are moving to a multi-layer layout, this needs to be
# done in chroot hooks.
@@ -600,7 +606,7 @@ case $PROJECT:${SUBPROJECT:-} in
linux_package="linux-image-$devarch"
case $ARCH in
amd64)
linux_package="linux-signed-image-generic"
linux_package="linux-image-generic"
;;
arm64)
if [ "$devarch" = "dragonboard" ]; then
@@ -813,10 +819,14 @@ for FLAVOUR in $LB_LINUX_FLAVOURS; do
if [ -z "$LB_LINUX_FLAVOURS" ] || [ "$LB_LINUX_FLAVOURS" = "none" ]; then
continue
fi
if [ "$FLAVOUR" = "virtual" ]; then
# The virtual kernel is named generic in /boot
case $FLAVOUR in
virtual)
FLAVOUR="generic"
fi
;;
oem-*)
FLAVOUR="oem"
;;
esac
KVERS="$( (cd "binary/$INITFS"; ls vmlinu?-* 2>/dev/null || true) | (fgrep -v .efi || true) | sed -n "s/^vmlinu.-\\([^-]*-[^-]*-$FLAVOUR\\)$/\\1/p" )"
if [ -z "$KVERS" ]; then
if [ -e "binary/$INITFS/vmlinuz" ]; then
@@ -861,7 +871,7 @@ if [ "$NUMFLAVOURS" = 1 ] && [ "$LB_LINUX_FLAVOURS" != "none" ]; then
fi
case $SUBARCH in
raspi2|raspi3)
raspi|raspi2)
# copy the kernel and initrd to a predictable directory for
# ubuntu-image consumption. In some cases, like in pi2/3
# u-boot, the bootloader needs to contain the kernel and initrd,

@@ -280,7 +280,7 @@ if [ -z "${IMAGEFORMAT:-}" ]; then
case $PROJECT:${SUBPROJECT:-} in
ubuntu-cpc:*|ubuntu:desktop-preinstalled)
case $SUBARCH in
raspi3|imx6)
raspi|imx6)
IMAGEFORMAT=ubuntu-image
;;
*)
@@ -326,10 +326,14 @@ case $IMAGEFORMAT in
MODEL=pc-i386 ;;
arm64+snapdragon)
MODEL=dragonboard ;;
armhf+raspi)
MODEL=pi ;;
armhf+raspi2)
MODEL=pi2 ;;
armhf+raspi3)
MODEL=pi3 ;;
arm64+raspi)
MODEL=pi-arm64 ;;
arm64+raspi3)
MODEL=pi3-arm64 ;;
armhf+cm3)
@@ -371,11 +375,23 @@ case $IMAGEFORMAT in
UBUNTU_IMAGE_ARGS="$UBUNTU_IMAGE_ARGS -c $CHANNEL"
;;
*)
UBUNTU_IMAGE_ARGS="--image-size 10G"
# Ubuntu Core 20
# XXX: Currently uc20 assertions do not support global
# channel overrides.
# Currently uc20 assertions do not support global
# channel overrides, instead we have per-channel models
case $CHANNEL in
stable)
MODEL="ubuntu-core-20-${MODEL#pc-}"
;;
candidate|beta|edge|dangerous)
MODEL="ubuntu-core-20-${MODEL#pc-}-${CHANNEL}"
;;
*)
echo "Unknown CHANNEL ${CHANNEL} specification for ${SUITE}"
exit 1
;;
esac
;;
esac
case "$ARCH+${SUBARCH:-}" in
@@ -399,7 +415,7 @@ case $IMAGEFORMAT in
# Certain models have different names but are built from the same source gadget tree
case $MODEL in
pi3-arm64)
pi-arm64|pi3-arm64)
MODEL=pi3 ;;
esac
@@ -559,7 +575,7 @@ case $PROJECT in
LIVE_TASK='ubuntu-live'
add_task install minimal standard ubuntu-desktop
add_task live ubuntu-desktop-minimal-default-languages ubuntu-desktop-default-languages
KERNEL_FLAVOURS='generic oem'
KERNEL_FLAVOURS='generic oem-20.04'
;;
esac
;;
@@ -568,9 +584,6 @@ case $PROJECT in
add_task install minimal standard
add_task install kubuntu-desktop
LIVE_TASK='kubuntu-live'
case $ARCH in
amd64) add_package live linux-signed-generic ;;
esac
COMPONENTS='main restricted universe'
add_chroot_hook remove-gnome-icon-cache
;;
@@ -597,9 +610,6 @@ case $PROJECT in
edubuntu|edubuntu-dvd)
add_task install minimal standard ubuntu-desktop edubuntu-desktop-gnome
LIVE_TASK='edubuntu-live'
case $ARCH in
amd64) add_package live linux-signed-generic ;;
esac
COMPONENTS='main restricted universe'
;;
@@ -607,9 +617,6 @@ case $PROJECT in
add_task install minimal standard xubuntu-desktop
add_package install xterm
LIVE_TASK='xubuntu-live'
case $ARCH in
amd64) add_package live linux-signed-generic ;;
esac
COMPONENTS='main restricted universe multiverse'
case $ARCH in
amd64|i386) KERNEL_FLAVOURS=generic ;;
@@ -624,18 +631,12 @@ case $PROJECT in
mythbuntu)
add_task install minimal standard mythbuntu-desktop
LIVE_TASK='mythbuntu-live'
case $ARCH in
amd64) add_package live linux-signed-generic ;;
esac
COMPONENTS='main restricted universe multiverse'
;;
lubuntu)
add_task install minimal standard lubuntu-desktop
LIVE_TASK='lubuntu-live'
case $ARCH in
amd64) add_package live linux-signed-generic ;;
esac
COMPONENTS='main restricted universe multiverse'
case $ARCH in
amd64|i386) KERNEL_FLAVOURS=generic ;;
@@ -645,27 +646,18 @@ case $PROJECT in
ubuntu-gnome)
add_task install minimal standard ubuntu-gnome-desktop
LIVE_TASK='ubuntu-gnome-live'
case $ARCH in
amd64) add_package live linux-signed-generic ;;
esac
COMPONENTS='main restricted universe'
;;
ubuntu-budgie)
add_task install minimal standard ubuntu-budgie-desktop
LIVE_TASK='ubuntu-budgie-live'
case $ARCH in
amd64) add_package live linux-signed-generic ;;
esac
COMPONENTS='main restricted universe'
;;
ubuntu-mate)
add_task install minimal standard ubuntu-mate-core ubuntu-mate-desktop
LIVE_TASK='ubuntu-mate-live'
case $ARCH in
amd64) add_package live linux-signed-generic ;;
esac
COMPONENTS='main restricted universe multiverse'
;;
@@ -681,9 +673,6 @@ case $PROJECT in
add_task install minimal standard ubuntukylin-desktop
add_package install ubuntukylin-default-settings
LIVE_TASK='ubuntukylin-live'
case $ARCH in
amd64) add_package live linux-signed-generic ;;
esac
COMPONENTS='main restricted universe'
;;
@@ -917,7 +906,8 @@ case $ARCH in
add_package install linux-firmware-raspi2 u-boot-rpi flash-kernel u-boot-tools wpasupplicant
BINARY_REMOVE_LINUX=false
;;
raspi3)
raspi)
# Generic Raspberry Pi images
COMPONENTS='main restricted universe multiverse'
KERNEL_FLAVOURS=raspi2
add_package install linux-firmware-raspi2 u-boot-rpi flash-kernel u-boot-tools wpasupplicant
@@ -1022,7 +1012,7 @@ EOF
esac
case $ARCH+$SUBARCH in
armhf+raspi2|armhf+raspi3|arm64+raspi3)
armhf+raspi2|armhf+raspi|arm64+raspi)
cat > config/hooks/01-firmware-directory.chroot_early <<EOF
#!/bin/sh -ex
mkdir -p /boot/firmware
@@ -1103,6 +1093,19 @@ rm -f /etc/fstab
EOF
fi
if [ $PROJECT != ubuntu-cpc ]; then
cat > config/hooks/100-preserve-apt-prefs.chroot <<\EOF
#! /bin/sh -ex
# live-build "helpfully" removes /etc/apt/preferences.d/* so we put a
# copy somewhere it won't touch it.
if [ -n "$(ls -A /etc/apt/preferences.d)" ]; then
cp -a /etc/apt/preferences.d /etc/apt/preferences.d.save
fi
EOF
fi
if [ $PROJECT = ubuntukylin ]; then
cat > config/hooks/100-ubuntukylin.chroot <<EOF
#! /bin/sh

@@ -482,18 +482,19 @@ _snap_preseed() {
return
fi
# Pre-seed snap's base
case $SNAP_NAME in
snapd)
# snapd is self-contained, ignore base
;;
core|core[0-9][0-9])
# core and core## are self-contained, ignore base
;;
*)
# Determine if and what core snap is needed
# Determine which core snap is needed
local snap_info
snap_info=$(/usr/share/livecd-rootfs/snap-tool info \
--cohort-key="${COHORT_KEY:-}" \
--channel="$CHANNEL" "${SNAP_NAME}" \
)
snap_info=$(snap info --verbose "${SNAP_NAME}")
if [ $? -ne 0 ]; then
echo "Failed to retrieve base of $SNAP_NAME!"
@@ -502,19 +503,18 @@ _snap_preseed() {
local core_snap=$(echo "$snap_info" | grep '^base:' | awk '{print $2}')
# If $core_snap is not the empty string then SNAP itself is not a core
# snap and we must additionally seed the core snap.
if [ -n "$core_snap" ]; then
# If snap info does not list a base use 'core'
core_snap=${core_snap:-core}
_snap_preseed $CHROOT_ROOT $core_snap stable
fi
;;
esac
sh -c "
set -x;
cd \"$CHROOT_ROOT/var/lib/snapd/seed\";
SNAPPY_STORE_NO_CDN=1 /usr/share/livecd-rootfs/snap-tool download \
--cohort-key=\"${COHORT_KEY:-}\" \
SNAPPY_STORE_NO_CDN=1 snap download \
--cohort="${COHORT_KEY:-}" \
--channel=\"$CHANNEL\" \"$SNAP_NAME\"" || snap_download_failed=1
if [ $snap_download_failed = 1 ] ; then
echo "If the channel ($CHANNEL) includes '*/ubuntu-##.##' track per "

@@ -0,0 +1,8 @@
#! /bin/sh -ex
# live-build "helpfully" removes /etc/apt/preferences.d/* so we put a
# copy somewhere it won't touch it.
if [ -n "$(ls -A /etc/apt/preferences.d)" ]; then
cp -a /etc/apt/preferences.d /etc/apt/preferences.d.save
fi

@@ -19,5 +19,19 @@ datasource_list: [ NoCloud, None ]
datasource:
NoCloud:
fs_label: system-boot
EOF
mkdir -p /etc/systemd/system/cloud-init-local.service.d
cat << EOF > /etc/systemd/system/cloud-init-local.service.d/mount-seed.conf
# Ensure our customized seed location is mounted prior to execution
[Unit]
RequiresMountsFor=/boot/firmware
EOF
mkdir -p /etc/systemd/system/cloud-config.service.d
cat << EOF > /etc/systemd/system/cloud-config.service.d/getty-wait.conf
# Wait for cloud-init to finish (creating users, etc.) before running getty
[Unit]
Before=getty.target
EOF
fi

@@ -35,7 +35,7 @@ mkdir -p "$INSTALLER_ROOT" "$OVERLAY_ROOT"
# Create an installer squashfs layer
mount_overlay "$FILESYSTEM_ROOT/" "$OVERLAY_ROOT/" "$INSTALLER_ROOT/"
setup_mountpoint binary/boot/squashfs.dir
setup_mountpoint "$INSTALLER_ROOT"
# Override JobRunningTimeoutSec to 0s on the .device unit that
# subiquity_config.mount depends on to avoid a 5s delay on switching
@@ -50,20 +50,21 @@ JobRunningTimeoutSec=0s
Wants=subiquity_config.mount
EOF
AUTOINSTALL_DEVICE_UNIT='dev-disk-by\x2dlabel-autoinstall.device'
mkdir -p "$INSTALLER_ROOT/etc/systemd/system/$AUTOINSTALL_DEVICE_UNIT.d"
cat > "$INSTALLER_ROOT/etc/systemd/system/$AUTOINSTALL_DEVICE_UNIT.d/override.conf" <<EOF
[Unit]
JobRunningTimeoutSec=0s
Wants=subiquity_autoinstall.mount
EOF
# Prepare installer layer.
# Install casper for live session magic.
chroot $INSTALLER_ROOT apt-get -y install lupin-casper
# Install linux-firmware for kernel to upload into hardware.
chroot $INSTALLER_ROOT apt-get -y install linux-firmware
# Install:
# 1. linux-firmware for kernel to upload into hardware.
# 2. casper for live session magic.
# 3. openssh-server to enable the "ssh into live session" feature
chroot $INSTALLER_ROOT apt-get -y install linux-firmware lupin-casper openssh-server
# Make sure NoCloud is last
values=$(echo get cloud-init/datasources | chroot $INSTALLER_ROOT debconf-communicate | sed 's/^0 //;s/NoCloud, //;s/None/NoCloud, None/')
printf "%s\t%s\t%s\t%s\n" \
cloud-init cloud-init/datasources multiselect "$values" |
chroot $INSTALLER_ROOT debconf-set-selections
chroot $INSTALLER_ROOT dpkg-reconfigure --frontend=noninteractive cloud-init
if [ `dpkg --print-architecture` = s390x ]; then
chroot $INSTALLER_ROOT apt-get -y install s390-tools-zkey
fi
@@ -73,27 +74,12 @@ chroot $INSTALLER_ROOT apt-get clean
# "helpful" casper script that mounts any swap partitions it finds.
rm -f $INSTALLER_ROOT/usr/share/initramfs-tools/scripts/casper-bottom/*swap
# Don't let cloud-init run in the live session.
touch $INSTALLER_ROOT/etc/cloud/cloud-init.disabled
# Preseed subiquity into installer layer
snap_prepare $INSTALLER_ROOT
snap_preseed $INSTALLER_ROOT subiquity/classic
# Drop lxd from the installer layer preseed
sed -i -e'N;/name: lxd/,+2d' $INSTALLER_ROOT/var/lib/snapd/seed/seed.yaml
# Add initramfs hook to copy /autoinstall.yaml from the initrd to
# /run/initrd-autoinstall.yaml
cat <<EOF > "$INSTALLER_ROOT"/etc/initramfs-tools/scripts/init-bottom/copy-autoinstall
#!/bin/sh
case \$1 in
prereqs) exit 0;;
esac
[ -f /autoinstall.yaml ] && cp /autoinstall.yaml /run/initrd-autoinstall.yaml
EOF
chmod +x "$INSTALLER_ROOT"/etc/initramfs-tools/scripts/init-bottom/copy-autoinstall
teardown_mountpoint "$INSTALLER_ROOT"
squashfs_f="${PWD}/livecd.${PROJECT}.installer.squashfs"

@@ -0,0 +1,131 @@
# The top level settings are used as module
# and system configuration.
# A set of users which may be applied and/or used by various modules
# when a 'default' entry is found it will reference the 'default_user'
# from the distro configuration specified below
users:
- default
# If this is set, 'root' will not be able to ssh in and they
# will get a message to login instead as the default $user
disable_root: true
# This will cause the set+update hostname module to not operate (if true)
preserve_hostname: true
ssh_pwauth: yes
chpasswd:
expire: false
list:
- installer:RANDOM
# This is the initial network config.
# It can be overwritten by cloud-init or subiquity.
network:
version: 2
ethernets:
all-en:
match:
name: "en*"
dhcp4: true
all-eth:
match:
name: "eth*"
dhcp4: true
final_message: "## template: jinja\nCloud-init v. {{version}} finished at {{timestamp}}. Datasource {{datasource}}. Up {{uptime}} seconds\n\n\nWelcome to Ubuntu Server Installer!\n\nAbove you will find SSH host keys and a random password set for the `installer` user. You can use these credentials to ssh-in and complete the installation. If you provided SSH keys in the cloud-init datasource, they were also provisioned to the installer user.\n\nIf you have access to the graphical console, like TTY1 or HMC ASCII terminal you can complete the installation there too."
# Example datasource config
# datasource:
# Ec2:
# metadata_urls: [ 'blah.com' ]
# timeout: 5 # (defaults to 50 seconds)
# max_wait: 10 # (defaults to 120 seconds)
# The modules that run in the 'init' stage
cloud_init_modules:
- bootcmd
- write-files
- ca-certs
- rsyslog
- users-groups
- ssh
# The modules that run in the 'config' stage
cloud_config_modules:
# Emit the cloud config ready event
# this can be used by upstart jobs for 'start on cloud-config'.
- ssh-import-id
- set-passwords
- timezone
- disable-ec2-metadata
- runcmd
# The modules that run in the 'final' stage
cloud_final_modules:
- scripts-per-once
- scripts-user
- ssh-authkey-fingerprints
- keys-to-console
- phone-home
- final-message
# System and/or distro specific settings
# (not accessible to handlers/transforms)
system_info:
# This will affect which distro class gets used
distro: ubuntu
# Default user name + that default user's groups (if added/used)
default_user:
name: installer
lock_passwd: false
gecos: Ubuntu
groups: [adm, audio, cdrom, dialout, dip, floppy, lxd, netdev, plugdev, sudo, video]
sudo: ["ALL=(ALL) NOPASSWD:ALL"]
shell: /usr/bin/subiquity-shell
# Automatically discover the best ntp_client
ntp_client: auto
# Other config here will be given to the distro class and/or path classes
paths:
cloud_dir: /var/lib/cloud/
templates_dir: /etc/cloud/templates/
upstart_dir: /etc/init/
package_mirrors:
- arches: [i386, amd64]
failsafe:
primary: http://archive.ubuntu.com/ubuntu
security: http://security.ubuntu.com/ubuntu
search:
primary:
- http://%(ec2_region)s.ec2.archive.ubuntu.com/ubuntu/
- http://%(availability_zone)s.clouds.archive.ubuntu.com/ubuntu/
- http://%(region)s.clouds.archive.ubuntu.com/ubuntu/
security: []
- arches: [arm64, armel, armhf]
failsafe:
primary: http://ports.ubuntu.com/ubuntu-ports
security: http://ports.ubuntu.com/ubuntu-ports
search:
primary:
- http://%(ec2_region)s.ec2.ports.ubuntu.com/ubuntu-ports/
- http://%(availability_zone)s.clouds.ports.ubuntu.com/ubuntu-ports/
- http://%(region)s.clouds.ports.ubuntu.com/ubuntu-ports/
security: []
- arches: [default]
failsafe:
primary: http://ports.ubuntu.com/ubuntu-ports
security: http://ports.ubuntu.com/ubuntu-ports
ssh_svcname: ssh
runcmd:
- - "python3"
- "-c"
- |
import subprocess, sys, yaml
user_data = yaml.safe_load(subprocess.run([
"cloud-init", "query", "userdata"],
check=True, stdout=subprocess.PIPE, encoding='utf-8').stdout)
if user_data is not None and 'autoinstall' in user_data:
with open("/autoinstall.yaml", "w") as fp:
yaml.dump(user_data['autoinstall'], fp)

@@ -1,13 +0,0 @@
# This is the initial network config.
# It can be overwritten by cloud-init or subiquity.
network:
version: 2
ethernets:
all-en:
match:
name: "en*"
dhcp4: true
all-eth:
match:
name: "eth*"
dhcp4: true

@@ -1,4 +0,0 @@
[Mount]
What=/dev/disk/by-label/autoinstall
Where=/autoinstall
Type=ext4

@@ -1,603 +0,0 @@
#!/usr/bin/python3
#-*- encoding: utf-8 -*-
"""
This script can be used instead of the traditional `snap` command to download
snaps and accompanying assertions. It uses the new store API (v2) which allows
creating temporary snapshots of the channel map.
To create such a snapshot run
snap-tool cohort-create
This will print a "cohort-key" to stdout, which can then be passed to future
invocations of `snap-tool download`. Whenever a cohort key is provided, the
store will provide a view of the channel map as it existed when the key was
created.
"""
from textwrap import dedent
import argparse
import base64
import binascii
import getopt
import hashlib
import json
import os
import re
import shutil
import subprocess
import sys
import time
import urllib.error
import urllib.request
EXIT_OK = 0
EXIT_ERR = 1
class SnapError(Exception):
"""Generic error thrown by the Snap class."""
pass
class SnapCraftError(SnapError):
"""Error thrown on problems with the snapcraft APIs."""
pass
class SnapAssertionError(SnapError):
"""Error thrown on problems with the assertions API."""
pass
class ExpBackoffHTTPClient:
"""This class is an abstraction layer on top of urllib with additional
retry logic for more reliable downloads."""
class Request:
"""This is a convenience wrapper around urllib.request."""
def __init__(self, request, do_retry, base_interval, num_tries):
"""
:param request:
An urllib.request.Request instance.
:param do_retry:
Whether to enable the exponential backoff and retry logic.
:param base_interval:
The initial interval to sleep after a failed attempt.
:param num_tries:
How many attempts to make.
"""
self._request = request
self._do_retry = do_retry
self._base_interval = base_interval
self._num_tries = num_tries
self._response = None
def open(self):
"""Open the connection."""
if not self._response:
self._response = self._retry_urlopen()
def close(self):
"""Close the connection."""
if self._response:
self._response.close()
self._response = None
def data(self):
"""Return the raw response body."""
with self:
return self.read()
def json(self):
"""Return the deserialized response body interpreted as JSON."""
return json.loads(self.data(), encoding="utf-8")
def text(self):
"""Return the response body as a unicode string."""
encoding = "utf-8"
with self:
content_type = self._response.getheader("Content-Type", "")
if content_type == "application/json":
encoding = "utf-8"
else:
m = re.match(r"text/\S+;\s*charset=(?P<charset>\S+)",
content_type)
if m:
encoding=m.group("charset")
return self.read().decode(encoding)
def read(self, size=None):
"""Read size bytes from the response. If size if not set, the
complete response body is read in."""
return self._response.read(size)
def __enter__(self):
"""Make this class a context manager."""
self.open()
return self
def __exit__(self, type, value, traceback):
"""Make this class a context manager."""
self.close()
def _retry_urlopen(self):
"""Try to open the HTTP connection as many times as configured
through the constructor. Every time an error occurs, double the
time to wait until the next attempt."""
for attempt in range(self._num_tries):
try:
return urllib.request.urlopen(self._request)
except Exception as e:
if isinstance(e, urllib.error.HTTPError) and e.code < 500:
raise
if attempt >= self._num_tries - 1:
raise
sys.stderr.write(
"WARNING: failed to open URL '{}': {}\n"
.format(self._request.full_url, str(e))
)
else:
break
sleep_interval = self._base_interval * 2**attempt
sys.stderr.write(
"Retrying HTTP request in {} seconds...\n"
.format(sleep_interval)
)
time.sleep(sleep_interval)
def __init__(self, do_retry=True, base_interval=2, num_tries=8):
"""
:param do_retry:
Whether to enable the retry logic.
:param base_interval:
The initial interval to sleep after a failed attempt.
:param num_tries:
How many attempts to make.
"""
self._do_retry = do_retry
self._base_interval = base_interval
self._num_tries = num_tries if do_retry else 1
def get(self, url, headers=None):
"""Create a GET request that can be used to retrieve the resource
at the given URL.
:param url:
An HTTP URL.
:param headers:
A dictionary of extra headers to send along.
:return:
An ExpBackoffHTTPClient.Request instance.
"""
return self._prepare_request(url, headers=headers)
def post(self, url, data=None, json=None, headers=None):
"""Create a POST request that can be used to submit data to the
endpoint at the given URL."""
return self._prepare_request(
url, data=data, json_data=json, headers=headers
)
def _prepare_request(self, url, data=None, json_data=None, headers=None):
"""Prepare a Request instance that can be used to retrieve data from
and/or send data to the endpoint at the given URL.
:param url:
An HTTP URL.
:param data:
Raw binary data to send along in the request body.
:param json_data:
A Python data structure to be serialized and sent out in JSON
format.
:param headers:
A dictionary of extra headers to send along.
:return:
An ExpBackoffHTTPClient.Request instance.
"""
if data is not None and json_data is not None:
raise ValueError(
"Parameters 'data' and 'json_data' are mutually exclusive."
)
if json_data:
data = json.dumps(json_data, ensure_ascii=False)
if headers is None:
headers = {}
headers["Content-Type"] = "application/json"
if isinstance(data, str):
data = data.encode("utf-8")
return ExpBackoffHTTPClient.Request(
urllib.request.Request(url, data=data, headers=headers or {}),
self._do_retry,
self._base_interval,
self._num_tries
)
class Snap:
"""This class provides methods to retrieve information about a snap and
download it together with its assertions."""
def __init__(self, name, channel="stable", arch="amd64", series=16,
cohort_key=None, assertion_url="https://assertions.ubuntu.com",
snapcraft_url="https://api.snapcraft.io", **kwargs):
"""
:param name:
The name of the snap.
:param channel:
The channel to operate on.
:param arch:
The Debian architecture of the snap (e.g. amd64, armhf, arm64, ...).
:param series:
The device series. This should always be 16.
:param cohort_key:
A cohort key to access a snapshot of the channel map.
"""
self._name = name
self._channel = channel
self._arch = arch
self._series = series
self._cohort_key = cohort_key
self._assertion_url = assertion_url
self._snapcraft_url = snapcraft_url
self._details = None
self._assertions = {}
@classmethod
def cohort_create(cls):
"""Get a cohort key for the current moment. A cohort key is valid
across all snaps, channels and architectures."""
return Snap("core")\
.get_details(cohort_create=True)\
.get("cohort-key")
def download(self, download_assertions=True):
"""Download the snap container. If download_assertions is True, the
corresponding assertions will be downloaded, as well."""
snap = self.get_details()
snap_name = snap["name"]
snap_revision = snap["revision"]
publisher_id = snap["publisher"]["id"]
snap_download_url = snap["download"]["url"]
snap_byte_size = snap["download"]["size"]
filename = snap_name + "_" + str(snap_revision)
snap_filename = filename + ".snap"
assert_filename = filename + ".assert"
skip_snap_download = False
if os.path.exists(snap_filename) and os.path.getsize(snap_filename) \
== snap_byte_size:
skip_snap_download = True
headers = {}
if os.environ.get("SNAPPY_STORE_NO_CDN", "0") != "0":
headers.update({
"X-Ubuntu-No-Cdn": "true",
"Snap-CDN": "none",
})
if not skip_snap_download:
http_client = ExpBackoffHTTPClient()
response = http_client.get(snap_download_url, headers=headers)
with response, open(snap_filename, "wb+") as fp:
shutil.copyfileobj(response, fp)
if os.path.getsize(snap_filename) != snap_byte_size:
raise SnapError(
"The downloaded snap does not have the expected size."
)
if not download_assertions:
return
required_assertions = [
"account-key",
"account",
"snap-declaration",
"snap-revision",
]
if publisher_id == "canonical":
required_assertions.remove("account")
for assertion_name in required_assertions:
attr_name = "get_assertion_" + assertion_name.replace("-", "_")
# This will populate self._assertions[<assertion_name>].
getattr(self, attr_name)()
with open(assert_filename, "w+", encoding="utf-8") as fp:
fp.write("\n".join(self._assertions[a] for a in
required_assertions))
def get_details(self, cohort_create=False):
"""Get details about the snap. On subsequent calls, the cached results
are returned. If cohort_create is set to True, a cohort key will be
created and included in the result."""
if self._details and not cohort_create:
return self._details
if self.is_core_snap() and self._channel.startswith("stable/ubuntu-"):
sys.stderr.write(
"WARNING: switching channel from '{}' to 'stable' for '{}' "
"snap.\n".format(self._channel, self._name)
)
self._channel = "stable"
path = "/v2/snaps/refresh"
data = {
"context": [],
"actions": [
{
"action": "download",
"instance-key": "0",
"name": self._name,
"channel": self._channel,
}
],
"fields": [
"base",
"created-at",
"download",
"license",
"name",
"prices",
"publisher",
"revision",
"snap-id",
"summary",
"title",
"type",
"version",
],
}
# These are mutually exclusive.
if cohort_create:
data["actions"][0]["cohort-create"] = True
elif self._cohort_key:
data["actions"][0]["cohort-key"] = self._cohort_key
try:
response_dict = self._do_snapcraft_request(path, json_data=data)
except SnapCraftError as e:
raise SnapError("failed to get details for '{}': {}"
.format(self._name, str(e)))
snap_data = response_dict["results"][0]
if snap_data.get("result") == "error":
raise SnapError(
"failed to get details for '{}': {}"
.format(self._name, snap_data.get("error", {}).get("message"))
)
# Have "base" initialized to something meaningful.
if self.is_core_snap():
snap_data["snap"]["base"] = ""
elif snap_data["snap"].get("base") is None:
snap_data["snap"]["base"] = "core"
# Copy the key into the snap details.
if "cohort-key" in snap_data:
snap_data["snap"]["cohort-key"] = snap_data["cohort-key"]
if "error" in snap_data:
raise SnapError(
"failed to get details for '{}' in '{}' on '{}': {}"
.format(self._name, self._channel, self._arch,
snap_data["error"]["message"])
)
self._details = snap_data["snap"]
return self._details
def get_assertion_snap_revision(self):
"""Download the snap-revision assertion associated with this snap. The
assertion is returned as a string."""
if "snap-revision" in self._assertions:
return self._assertions["snap-revision"]
snap = self.get_details()
snap_sha3_384 = base64.urlsafe_b64encode(
binascii.a2b_hex(snap["download"]["sha3-384"])
).decode("us-ascii")
data = self._do_assertion_request("/v1/assertions/snap-revision/{}"
.format(snap_sha3_384))
self._assertions["snap-revision"] = data
return data
def get_assertion_snap_declaration(self):
"""Download the snap-declaration assertion associated with this snap.
The assertion is returned as a string."""
if "snap-declaration" in self._assertions:
return self._assertions["snap-declaration"]
snap = self.get_details()
series = self._series
snap_id = snap["snap-id"]
data = self._do_assertion_request(
"/v1/assertions/snap-declaration/{}/{}"
.format(series, snap_id))
self._assertions["snap-declaration"] = data
return data
def get_assertion_account(self):
"""Download the account assertion associated with this snap. The
assertion is returned as a string."""
if "account" in self._assertions:
return self._assertions["account"]
snap = self.get_details()
publisher_id = snap["publisher"]["id"]
data = self._do_assertion_request("/v1/assertions/account/{}"
.format(publisher_id))
self._assertions["account"] = data
return data
def get_assertion_account_key(self):
"""Download the account-key assertion associated with this snap. The
assertion will be returned as a string."""
if "account-key" in self._assertions:
return self._assertions["account-key"]
declaration_data = self.get_assertion_snap_declaration()
sign_key_sha3 = None
for line in declaration_data.splitlines():
if line.startswith("sign-key-sha3-384:"):
sign_key_sha3 = line.split(":")[1].strip()
data = self._do_assertion_request("/v1/assertions/account-key/{}"
.format(sign_key_sha3))
self._assertions["account-key"] = data
return data
def is_core_snap(self):
return re.match(r"^core\d*$", self._name) != None
def _do_assertion_request(self, path):
url = self._assertion_url + path
headers = {
"Accept": "application/x.ubuntu.assertion",
}
http_client = ExpBackoffHTTPClient()
try:
with http_client.get(url, headers=headers) as response:
return response.text()
except urllib.error.HTTPError as e:
raise SnapAssertionError(str(e))
def _do_snapcraft_request(self, path, json_data=None):
url = self._snapcraft_url + "/" + path
headers = {
"Snap-Device-Series": str(self._series),
"Snap-Device-Architecture": self._arch,
}
http_client = ExpBackoffHTTPClient()
try:
response = http_client.post(url, json=json_data, headers=headers)
with response:
return response.json()
except urllib.error.HTTPError as e:
raise SnapCraftError(str(e))
class SnapCli:
def __call__(self, args):
"""Parse the command line arguments and execute the selected command."""
options = self._parse_opts(args)
try:
options.func(getattr(options, "snap", None), **vars(options))
except SnapError as e:
sys.stderr.write("snap-tool {}: {}\n".format(
options.command, str(e)))
return EXIT_ERR
return EXIT_OK
@staticmethod
def _get_host_deb_arch():
result = subprocess.run(["dpkg", "--print-architecture"],
stdout=subprocess.PIPE, stderr=subprocess.PIPE,
universal_newlines=True, check=True)
return result.stdout.strip()
def _parse_opts(self, args):
main_parser = argparse.ArgumentParser()
subparsers = main_parser.add_subparsers(dest="command")
parser_cohort_create = subparsers.add_parser("cohort-create",
help="Create a cohort key for the snap store channel map.")
parser_cohort_create.set_defaults(func=self._cohort_create)
parser_download = subparsers.add_parser("download",
help="Download a snap from the store.")
parser_download.set_defaults(func=self._download)
parser_info = subparsers.add_parser("info",
help="Retrieve information about a snap.")
parser_info.set_defaults(func=self._info)
# Add common parameters.
for parser in [parser_download, parser_info]:
parser.add_argument("--cohort-key", dest="cohort_key",
help="A cohort key to pin the channel map to.", type=str)
parser.add_argument("--channel", dest="channel",
help="The publication channel to query (default: stable).",
type=str, default="stable")
parser.add_argument("--series", dest="series",
help="The device series (default: 16)",
type=int, default=16)
parser.add_argument("--arch", dest="arch",
help="The Debian architecture (default: amd64).",
type=str, default=self._get_host_deb_arch())
parser.add_argument("snap", help="The name of the snap.")
if not args:
main_parser.print_help()
sys.exit(EXIT_ERR)
return main_parser.parse_args(args)
def _cohort_create(self, _, **kwargs):
print(Snap.cohort_create())
def _download(self, snap_name, **kwargs):
Snap(snap_name, **kwargs).download()
def _info(self, snap_name, **kwargs):
snap = Snap(snap_name, **kwargs)
info = snap.get_details()
print(dedent("""\
name: {}
summary: {}
arch: {}
base: {}
channel: {}
publisher: {}
license: {}
snap-id: {}
revision: {}"""
.format(
snap_name,
info.get("summary", ""),
snap._arch,
info.get("base"),
snap._channel,
info.get("publisher", {}).get("display-name", ""),
info.get("license", ""),
info.get("snap-id", ""),
info.get("revision", "")
))
)
if __name__ == "__main__":
try:
rval = SnapCli()(sys.argv[1:])
except KeyboardInterrupt:
sys.stderr.write("snap-tool: caught keyboard interrupt, exiting.\n")
sys.exit(EXIT_ERR)
sys.exit(rval)