* magic-proxy: Replace http.client with urllib calls. live-build/auto/build:
change iptables calls to query rules and quickly check that connectivity
works after the transparent proxy has been installed. (LP: #1917920)
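The check itself lives in live-build/auto/build and is shell; purely as
an illustration (the rule match and URL below are assumptions, not the
real hook), the same idea expressed in Python would be roughly:
  import subprocess
  import urllib.request

  # Query the NAT rules to confirm the transparent-proxy REDIRECT is in place.
  rules = subprocess.run(["iptables", "-t", "nat", "-S", "OUTPUT"],
                         check=True, capture_output=True, text=True).stdout
  if "REDIRECT" not in rules:
      raise SystemExit("transparent proxy rule not found")

  # Quick connectivity check through the (now transparent) proxy.
  with urllib.request.urlopen("http://archive.ubuntu.com/ubuntu/dists/",
                              timeout=10) as resp:
      resp.read(1)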
* magic-proxy: fix TypeError when trying to call get_uri() (LP: #1944906)
Currently the uri that is passed into urllib.parse.urlparse() is not
prefixed with "http(s)://", which leads urlparse() to return a wrong
scheme/netloc/path. It currently looks like:
  ParseResult(scheme='', netloc='',
              path='de.archive.ubuntu.com/ubuntu/dists/impish-backports/InRelease',
              params='', query='', fragment='')
That's wrong: the path should look like
'ubuntu/dists/impish-backports/InRelease'.
Prefixing the 'host' header with 'http://' when it is not already
present fixes the problem.
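As a standalone illustration (not the magic-proxy code itself), the
difference is easy to reproduce with urllib.parse directly:
  from urllib.parse import urlparse

  uri = "de.archive.ubuntu.com/ubuntu/dists/impish-backports/InRelease"
  print(urlparse(uri))
  # -> ParseResult(scheme='', netloc='',
  #                path='de.archive.ubuntu.com/ubuntu/dists/...')

  # Prefix the scheme when it is missing, as the fix does for the
  # 'host' header, and urlparse() splits the URI as expected.
  if not uri.startswith(("http://", "https://")):
      uri = "http://" + uri
  print(urlparse(uri))
  # -> ParseResult(scheme='http', netloc='de.archive.ubuntu.com',
  #                path='/ubuntu/dists/impish-backports/InRelease', ...)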
This fixes:
Traceback (most recent call last):
File "/usr/lib/python3.9/socketserver.py", line 683, in process_request_thread
self.finish_request(request, client_address)
File "/usr/lib/python3.9/socketserver.py", line 360, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/usr/lib/python3.9/socketserver.py", line 747, in __init__
self.handle()
File "/usr/lib/python3.9/http/server.py", line 427, in handle
self.handle_one_request()
File "/usr/lib/python3.9/http/server.py", line 415, in handle_one_request
method()
File "/home/tom/devel/livecd-rootfs/./magic-proxy", line 787, in do_GET
File "/home/tom/devel/livecd-rootfs/./magic-proxy", line 838, in __get_request
File "/home/tom/devel/livecd-rootfs/./magic-proxy", line 84, in get_uri
TypeError: can only concatenate str (not "NoneType") to str
(cherry picked from commit 3559153c7d)
Initialize passwords from sources.list.
Use urllib everywhere. This way authentication is added to all the
required requests, incoming headers are passed on to the outgoing
requests, all response headers are passed back to the original client,
and all TCP & HTTP errors are passed back to the client as well. This
should avoid hanging requests upon failure.
Also rewrite the URI when requesting things. This allows using
private-ppa.buildd outside of Launchpad.
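A rough sketch of the approach (hypothetical helper names, not the
actual magic-proxy implementation): credentials embedded in
sources.list URIs are registered with a urllib opener, and requests
are forwarded with the incoming headers so authentication and errors
propagate instead of hanging:
  import re
  import urllib.request

  def build_opener_from_sources_list(path="/etc/apt/sources.list"):
      password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
      cred_re = re.compile(r"(https?://)([^:@/\s]+):([^@/\s]+)@(\S+)")
      with open(path) as f:
          for line in f:
              m = cred_re.search(line)
              if m:
                  scheme, user, password, rest = m.groups()
                  # Register user/password for the credential-stripped URI.
                  password_mgr.add_password(None, scheme + rest, user, password)
      return urllib.request.build_opener(
          urllib.request.HTTPBasicAuthHandler(password_mgr))

  def forward(opener, url, incoming_headers):
      # Pass the incoming headers along; URLError/HTTPError surface to
      # the caller instead of leaving the client request hanging.
      req = urllib.request.Request(url, headers=dict(incoming_headers))
      return opener.open(req, timeout=30)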
Signed-off-by: Dimitri John Ledkov <xnox@ubuntu.com>
(cherry picked from commit dc2a472871)
* Simplify how the subiquity client is run on the serial console in the live
server environment, breaking a unit cycle that sometimes prevents
subiquity from starting up at all. (LP: #1888497)
* Do not set the password for the installer user via cloud-init as subiquity
can now do this itself. (LP: #1933523)
With that, the Dockerfile modifications[0] currently done externally
are now done here. That means that the created rootfs tarball can be
used directly within a Dockerfile to create a container from scratch:
FROM scratch
ADD livecd.ubuntu-oci.rootfs.tar.gz /
CMD ["/bin/bash"]
[0] https://github.com/tianon/docker-brew-ubuntu-core/blob/master/update.sh
(cherry picked from commit a81972a58b)
This is a copy of the ubuntu-base project.
Currently ubuntu-base is used as a base for the docker/OCI container
images. The rootfs tarball that is created with ubuntu-base is
published under [0]. That tarball is used in the FROM statement of
the Dockerfile as the base, and then a couple of modifications are
done inside the Dockerfile[1].
The ubuntu-oci project will include the changes that are currently
done in the Dockerfile. With that:
1) a Dockerfile using that tarball will be just a three-line thing:
FROM scratch
ADD ubuntu-hirsute-core-cloudimg-amd64-root.tar.gz /
CMD ["/bin/bash"]
2) Ubuntu has full control over the build process of the
docker/OCI container. No external sources (like [1]) need to be
modified anymore.
3) Ubuntu can publish containers without depending on the official
dockerhub containers[2]. Currently the containers for the AWS ECR
registry[3] use the official dockerhub containers as a base[4]. That
is no longer needed because a container just needs the Dockerfile
described in 1).
When the ubuntu-oci project has the modifications from [1] included,
we'll also update [1] to use the ubuntu-oci rootfs tarball as a base
and drop the modifications done in [1].
Note: Creating a new ubuntu-oci project instead of using ubuntu-base
will make sure that we don't break users who are currently using
ubuntu-base rootfs tarballs for doing their own thing.
[0] https://partner-images.canonical.com/core/
[1] https://github.com/tianon/docker-brew-ubuntu-core/blob/master/update.sh
[2] https://hub.docker.com/_/ubuntu
[3] https://gallery.ecr.aws/ubuntu/ubuntu
[4] https://launchpad.net/~ubuntu-docker-images/ubuntu-docker-images/+oci/ubuntu/+recipe/ubuntu-20.04
(cherry picked from commit ac4a95b931)
I recently pulled initramfs logic out of the base build hook, and
dropped that into the `replace_kernel` function. Any cloud image that
does not leverage the generic virtual kernel was expected to call
`replace_kernel` to pull in a custom kernel. That function will
disable initramfs boot for images that use a custom kernel.
Minimal cloud images on amd64 use the linux-kvm kernel, but the build
hook does not utilize the `replace_kernel` function. Instead, the
kernel flavor is set in `auto/config`. I pulled that logic out of
`auto/config` and am now calling `replace_kernel` in the build hook.
I also moved a call to generate the package list so that it will pick
up the change to the linux-kvm kernel.
Change the ESP mount option for ubuntu-cpc images from "defaults" to
"umask=0077". ESP partitions might contain sensitive data and
non-root users shouldn't have read access to them.
With this change, when we attempt to boot without an initramfs and
fail, initrdless_boot_fallback_triggered is set to non-zero in the
grubenv. This value can be checked after boot by looking in
/boot/grub/grubenv or by using the grub-editenv list command.
Addresses LP: #1870189
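For example, a post-boot test could look for the marker like this (an
illustrative sketch; grub-editenv ships with the grub2 tools):
  import subprocess

  out = subprocess.run(["grub-editenv", "list"],
                       check=True, capture_output=True, text=True).stdout
  triggered = any(line.startswith("initrdless_boot_fallback_triggered=")
                  and not line.rstrip().endswith("=0")
                  for line in out.splitlines())
  print("fallback to initramfs boot was triggered" if triggered
        else "no fallback")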
Initramfs-less boot, which is a boot optimization, should only be
applied where we know it could work for users and provide an improved
boot experience; images with custom kernels are candidates for that.