If reverse dependencies of this package still exist on the architecture,
they will already be caught as uninstallable; we don't need to additionally
trigger tests that can never succeed, that would tell us nothing about the
package even if they did, and that would only create more work hinting away
the results.
The update of gcc to gcc-9 introduced a regression in buildability of
anything relying on kernel headers. This could have been caught by the
kernel's standard rebuild autopkgtest, but we currently only trigger the
linux autopkgtest for source packages named gcc-N, which excludes
gcc-defaults.
Include gcc-defaults in the list of packages that trigger a linux rebuild
test.
Bug-Ubuntu: https://bugs.launchpad.net/bugs/1836100
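A minimal sketch of the kind of matching this implies; the helper name and
the regular expression are illustrative, not the actual trigger code:

    import re

    # Sources whose upload should also trigger the kernel's rebuild
    # autopkgtest.  gcc-N (gcc-8, gcc-9, ...) was already covered;
    # gcc-defaults is the newly added case.
    GCC_LIKE = re.compile(r'^gcc-(\d+|defaults)$')

    def triggers_linux_rebuild(source_name):
        """True if an upload of source_name should also request the
        linux rebuild autopkgtest."""
        return bool(GCC_LIKE.match(source_name))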
The apt version comparison sorts 'blacklisted' greater than most version
numbers, which means that we accidentally apply force hints for version
'blacklisted' to practically all uploads. Since this is the only case of a
hacked version number, let's special-case it so that 'blacklisted' hints
only match packages whose version is literally 'blacklisted'.
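A sketch of the intended rule, assuming hint versions are compared with
apt_pkg and that a hint for version V is meant to cover V and anything
older (the function name and that assumption are illustrative):

    import apt_pkg

    apt_pkg.init_system()

    def hint_version_matches(pkg_version, hint_version):
        """'blacklisted' is a hacked pseudo-version, so a hint for it must
        only match packages whose version literally is 'blacklisted';
        everything else keeps the normal apt comparison semantics."""
        if hint_version == 'blacklisted':
            return pkg_version == 'blacklisted'
        return apt_pkg.version_compare(pkg_version, hint_version) <= 0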
We're supposed to synthesise an "unknown" version for these results, but a
bug in the worker meant we didn't do that in some cases, and the broken
results leaked into swift. Let's repair them client-side.
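A minimal sketch of the repair, treating a downloaded result as a dict with
a hypothetical 'version' field (the real result format differs):

    def fix_missing_version(result):
        """Client-side repair: if the worker failed to synthesise a
        version, fill in 'unknown' ourselves so later code can rely on
        it being present."""
        if not result.get('version'):
            result['version'] = 'unknown'
        return result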
This means 'all tests were skipped' or 'no tests in this package'; the
former case is new since our recent autopkgtest rebase.
Upstream britney (624b185ba60709f1686fbaa2a7623a94c5cb23ef) handles this
as "neutral", which doesn't influence the result and is a more explicit
(better) way of expressing this situation. Once we rebase, we should
take that approach.
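For context, autopkgtest reports this situation with exit code 8; a sketch
of the neutral handling we should adopt once rebased (the verdict names are
made up):

    def verdict_for_exit_code(code):
        """Map an autopkgtest exit code to a verdict (sketch only).
        Exit code 8 used to mean just 'no tests in this package'; since
        the rebase it also covers 'all tests were skipped'.  Treat it as
        neutral, i.e. it neither passes nor fails the excuse."""
        if code in (0, 2):      # all tests passed (2: some were skipped)
            return 'PASS'
        if code == 8:           # no tests, or all tests skipped
            return 'NEUTRAL'
        return 'FAIL'           # everything else counts as a failure here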
We just had the autopkgtest queues DoSed because britney was crashing
after requesting tests for every reverse dependency of a perl upload, but
before it had written out pending.json, so it didn't know what not to
request again. That was 25,000 requests per arch...
Let's write pending.json straight after sending each request, so that
the next run - even after a crash - won't re-request the same things
again.
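A sketch of that ordering; the AMQP helper and the pending.json layout used
here are illustrative:

    import json
    import os

    def send_amqp_request(src, arch, trigger):
        """Placeholder for the real AMQP publish; not the point here."""

    def request_tests(requests, pending, pending_file='pending.json'):
        """Send the requests one at a time, persisting pending.json after
        every single request so that a crash mid-loop cannot make the
        next run re-request thousands of tests."""
        for (src, arch, trigger) in requests:
            send_amqp_request(src, arch, trigger)
            pending.setdefault(trigger, {}).setdefault(arch, []).append(src)
            with open(pending_file + '.new', 'w') as f:
                json.dump(pending, f, indent=2)
            os.rename(pending_file + '.new', pending_file)  # atomic-ish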
We should only run autopkgtests for testsuite triggers if the source
package has binaries on the relevant architecture; otherwise the test can
only be expected to fail.
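A sketch of the guard, assuming the source's binaries are available as
(name, architecture) pairs (the real data structure is richer):

    def should_request_trigger_tests(source, arch):
        """Only request tests for a testsuite trigger on architectures
        where the triggering source actually has binaries; everywhere
        else the test could only fail."""
        return any(bin_arch == arch
                   for (_name, bin_arch) in source.binaries)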
Currently we trigger tests for all reverse binary dependencies of a package,
including binary packages built from the same source. We already
explicitly trigger the source's own tests if they still exist in unstable
- don't also consider the source itself when looking at reverse dependencies.
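A sketch of the filter; the three helpers are passed in purely to keep the
example self-contained and stand in for the real archive lookups:

    def reverse_dep_tests(trigger_src, binaries_of, rdeps_of, source_of):
        """Collect the sources whose tests should run for an upload of
        trigger_src, skipping anything built from trigger_src itself:
        its own tests are already triggered explicitly elsewhere."""
        wanted = set()
        for binary in binaries_of(trigger_src):
            for rdep in rdeps_of(binary):
                src = source_of(rdep)
                if src == trigger_src:
                    continue        # same source: don't re-add it here
                wanted.add(src)
        return wanted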
Add a new autopkgtest policy: it determines the autopkgtests for a
source package (its own tests, those of its direct reverse binary
dependencies, and those declared via Testsuite-Triggers), requests tests via
AMQP, fetches results from swift, and keeps track of pending tests between
runs. It also caches the downloaded results from swift, as re-downloading
them all is very expensive.
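A rough skeleton of that flow; the stub methods stand in for the real
archive, cache, swift and AMQP plumbing, and all names are illustrative:

    class AutopkgtestPolicy:
        """Flow sketch only, not the real interface."""

        def tests_for(self, src):
            # own tests + direct reverse binary deps + Testsuite-Triggers
            return []

        def cached_or_downloaded(self, test, arch, trigger):
            return None             # None = no result known yet

        def is_pending(self, test, arch, trigger):
            return False

        def request_via_amqp(self, test, arch, trigger):
            pass

        def apply(self, src):
            """Return known results for every test of interest; tests
            without a result yet are requested (at most once) and
            reported as None."""
            results = {}
            for (test, arch) in self.tests_for(src):
                result = self.cached_or_downloaded(test, arch, src)
                if result is None and not self.is_pending(test, arch, src):
                    self.request_via_amqp(test, arch, src)
                results[(test, arch)] = result
            return results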
This introduces two new hints (a sketch of their semantics follows the list):
* force-badtest pkg/ver[/arch]: Failing results for that package will be
ignored. This is useful for dealing with broken tests that get imported from
Debian or come from under-maintained packages, or that broke due to some
infrastructure change. These hints are usually long-lived.
* force-skiptest pkg/ver: Test results *triggered by* that package (i.e.
reverse dependencies) will be ignored. This is mostly useful for landing
packages that trigger a huge number of tests (glibc, perl) where some tests
are just too flaky to get them all passing, and one just wants to land the
upload once the remaining failures have been checked. This should be used
rarely, and such hints should be removed again promptly.
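A self-contained sketch of the hint semantics described above, using plain
tuples in place of the real parsed hints (the actual matching rules are
richer, e.g. around version handling):

    # ('badtest',  pkg, ver, arch)   arch may be 'all'
    # ('skiptest', pkg, ver)

    def can_ignore_failure(hints, test_src, test_ver, arch,
                           trigger_src, trigger_ver):
        """True if a failing test result may be disregarded by a hint."""
        for hint in hints:
            if hint[0] == 'badtest':
                _, pkg, ver, harch = hint
                if (pkg == test_src and ver == test_ver
                        and harch in (arch, 'all')):
                    return True     # known-broken or flaky test
            elif hint[0] == 'skiptest':
                _, pkg, ver = hint
                if pkg == trigger_src and ver == trigger_ver:
                    return True     # ignore whatever this upload triggered
        return False

For example, can_ignore_failure([('skiptest', 'perl', '5.30.0-9')],
'libfoo-perl', '1.0-1', 'amd64', 'perl', '5.30.0-9') returns True, because
the failure was triggered by the hinted (hypothetical) perl upload.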
Add integration tests that call britney in various scenarios on constructed
fake archives, with mocked AMQP and Swift results.