We currently skip running autopkgtests when there is an installability
problem, but in a few cases the depends policy only notes such problems
and doesn't block migration on them.
In these cases, let's try to run the autopkgtests anyway. There will be
a few instances of uninstallability here, but since these packages
migrate regardless, we should give them a chance to be tested.
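A minimal sketch of the intended gate, with a made-up DependsVerdict
enum standing in for britney's real policy verdicts:

    from enum import Enum

    class DependsVerdict(Enum):
        PASS = 1      # no installability problem
        NOTED = 2     # problem noted only; migration not blocked
        BLOCKED = 3   # problem blocks migration

    def should_request_tests(verdict):
        # Only skip the autopkgtest request when the depends policy
        # really blocks migration.  A merely-noted problem still gets
        # its tests, since the package is going to migrate regardless.
        return verdict is not DependsVerdict.BLOCKED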
We currently concatenate all triggers together into a string, but the
AMQP consumer expects this to be a list.
When using AMQP, keep the triggers as a list. Ensure that the "real"
trigger (the package being tested) is kept first, as before.
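A small sketch of the shape of the change; the helper name and the
version strings are illustrative, the point is that AMQP requests now
carry a list with the real trigger first instead of one joined string:

    def build_triggers(real_trigger, extra_triggers):
        # The package being tested stays first, as before; the rest
        # keep their order but are no longer joined into a string.
        return [real_trigger] + [t for t in extra_triggers
                                 if t != real_trigger]

    # build_triggers('perl/5.36.0-7', ['glibc/2.37-0ubuntu1'])
    # -> ['perl/5.36.0-7', 'glibc/2.37-0ubuntu1']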
If we have this situation:
- Binary in target taken over by new source
- Not cleaned up yet
- New source updated in source suite
we can have *three* copies of a source package visible to britney at
once. Handle this case by recording the source package a pkgid was
previously seen in, and trying to remove it from both that source and
the current one.
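Roughly, the bookkeeping could look like this (the data structures are
simplified stand-ins, not britney's real ones):

    from dataclasses import dataclass, field

    @dataclass
    class SourceEntry:
        binaries: set = field(default_factory=set)

    seen_in = {}   # pkgid -> SourceEntry the binary was last seen in

    def record_binary(pkgid, source):
        source.binaries.add(pkgid)
        seen_in[pkgid] = source

    def remove_binary(pkgid, current_source):
        # With a takeover that isn't cleaned up yet plus a fresh
        # upload, up to three copies of the source can be visible at
        # once, so try to drop the binary from both the source it was
        # recorded in and the one we are processing now.
        previous = seen_in.pop(pkgid, None)
        for source in (previous, current_source):
            if source is not None:
                source.binaries.discard(pkgid)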
On some distros (Ubuntu), arch:all packages are built along with one of
the real architectures. We shouldn't list 'all' as its own architecture
in this case. Instead, filter out the arch:all binaries everywhere
except on the 'all_buildarch'.
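A sketch of the filtering, assuming binaries arrive as
(name, architecture) pairs and that 'all_buildarch' is the configured
architecture arch:all is built alongside (amd64 below is only an
example):

    def binaries_for_arch(binaries, arch, all_buildarch='amd64'):
        # 'all' never appears as an architecture of its own; arch:all
        # binaries are only listed on the architecture they are built
        # alongside.
        return [(name, binarch) for name, binarch in binaries
                if binarch != 'all' or arch == all_buildarch]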
We just had the autopkgtest queues DoSed because britney was crashing
after requesting a test for each reverse dependency of a perl upload,
but before it had written out pending.json, which is how it knows what
not to request again. This was 25,000 requests per arch...
Let's write pending.json straight after sending each request, so that
the next run - even after a crash - won't re-request the same things
again.
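A sketch of the fix; send_request() stands in for the real AMQP
submission and the pending structure is simplified, but the important
part is the json.dump() inside the loop:

    import json

    def request_tests(requests, pending, pending_path='pending.json',
                      send_request=lambda *args: None):
        for package, arch, trigger in requests:
            send_request(package, arch, trigger)
            pending.setdefault(trigger, []).append([package, arch])
            # Persist straight away: a crash between requests must not
            # make the next run re-request everything already sent.
            with open(pending_path, 'w') as f:
                json.dump(pending, f)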
These have a hash appended.
We don't actually use the baseline retrying, which is where the ID
parsing is used, but we might as well handle this, not least so we don't
crash.
The apt version comparison sorts 'blacklisted' greater than most version
numbers, which means that we accidentally apply force hints for version
'blacklisted' to all uploads. Since this is the only case of a hacked
version number, let's special-case it so that 'blacklisted' hints only
match packages whose version is literally 'blacklisted'.
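A sketch of the special case around python-apt's comparison;
apt_pkg.version_compare is real, but the helper and the direction of
the comparison are illustrative:

    import apt_pkg
    apt_pkg.init_system()

    def hint_version_matches(hint_version, package_version):
        # apt sorts the hacked version string 'blacklisted' above most
        # real version numbers, so without a special case a force hint
        # for 'blacklisted' matches nearly every upload.  Require an
        # exact match for that string and only compare otherwise.
        if 'blacklisted' in (hint_version, package_version):
            return hint_version == package_version
        return apt_pkg.version_compare(hint_version,
                                       package_version) >= 0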
We're supposed to synthesise an "unknown" version for these, but a bug
in the worker meant we didn't do that in some cases, and they leaked
into swift. Let's repair them client-side.
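The client-side repair can be tiny; here it is sketched for results
represented as dicts with a possibly missing 'version' field, which is
an assumption about the shape of the data:

    def repair_result(result):
        # The worker should have synthesised "unknown", but some
        # results leaked into swift without a version; patch them up
        # when reading them back.
        if not result.get('version'):
            result['version'] = 'unknown'
        return result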
This is a test carried over from Ubuntu which ensures that a package
that builds no binaries on an arch doesn't have tests requested.
It was disabled; enable it.
Most DKMS packages do not declare Testsuite: autopkgtest-pkg-dkms, but
we can detect them anyway, and this way we can enforce that the module
is buildable.
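Detection could look roughly like this; the helper and its naive
dependency check are only a sketch, not the actual heuristic:

    def is_dkms_source(testsuite, binary_packages):
        # binary_packages: iterable of (name, depends_string) pairs.
        # Most DKMS sources never declare the Testsuite field, but a
        # binary named or depending on dkms gives them away, so we can
        # still run the dkms test and check that the module builds.
        if testsuite and 'autopkgtest-pkg-dkms' in testsuite:
            return True
        return any(name.endswith('-dkms') or 'dkms' in (depends or '')
                   for name, depends in binary_packages)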
It works like this: we wait until all tests have finished running and
then grab their results. If there are any regressions, we mail each bug
with a link to pending-sru.html. A state file records the mails we've
sent out, so that we don't mail the same bug multiple times.
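In outline, with send_mail() and the result/bug mappings as
placeholders; the state file is what prevents mailing a bug twice:

    import json

    def mail_regressions(results, bugs, pending_sru_url,
                         state_path='mailed.json',
                         send_mail=lambda bug, body: None):
        # results: package -> verdict ('PASS', 'REGRESSION', ...),
        # collected only once every test has finished running.
        # bugs: package -> bug numbers fixed by that upload.
        try:
            with open(state_path) as f:
                mailed = set(json.load(f))
        except FileNotFoundError:
            mailed = set()

        for package, verdict in results.items():
            if verdict != 'REGRESSION':
                continue
            for bug in bugs.get(package, []):
                if bug in mailed:
                    continue   # already notified about this bug
                send_mail(bug, 'autopkgtest regression, see %s'
                               % pending_sru_url)
                mailed.add(bug)

        with open(state_path, 'w') as f:
            json.dump(sorted(mailed), f)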
We want to treat linux-$flavor and linux-meta-$flavor as one set in
britney which goes in together or not at all. We never want to promote
linux-$flavor without the accompanying linux-meta-$flavor.
Add a new LinuxPolicy that runs after most of the other policies and
invalidates linux-foo if linux-meta-foo is invalid.
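In outline, with a simplified excuse object (just a source name and a
validity flag; the real interface differs).  This covers the ordering
where linux-meta-$flavor has already been evaluated; the deferred
ordering is sketched after the external-invalidation example below.

    from dataclasses import dataclass

    @dataclass
    class Excuse:
        source: str
        is_valid: bool = True

    def apply_linux_policy(excuse, excuses_by_source):
        # Pair linux-$flavor with linux-meta-$flavor and refuse to let
        # the kernel in on its own.
        src = excuse.source
        if not src.startswith('linux-') or src.startswith('linux-meta-'):
            return
        meta = excuses_by_source.get('linux-meta-'
                                     + src[len('linux-'):])
        if meta is not None and not meta.is_valid:
            excuse.is_valid = False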
Sometimes (for the LinuxPolicy), excuses don't have enough information
when they are processed. They need to defer some of their work until
another excuse has been evaluated and then perhaps get invalidated.
Introduce the concept of "external invalidation" to deal with this. A
policy can keep references to other excuses around, and then invalidate
them later (when processing another one). The excuse finder needs to
know this happened so that it can remove the excuse from the list of
"actionable excuses".
For example, the LinuxPolicy invalidates linux-foo if linux-meta-foo is
invalid. If linux-foo is encountered first, we will not yet know if
linux-meta-foo is going to be invalid. We effectively defer the
evaluation until after linux-meta-foo has been dealt with, and then we
know whether we need to invalidate linux-foo or not.
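A sketch of that deferral, again with simplified excuse objects: the
policy remembers which excuses are waiting on a linux-meta-$flavor
excuse and, once that excuse has been processed, invalidates the
waiters and reports them so the excuse finder can drop them from the
actionable excuses.

    from dataclasses import dataclass

    @dataclass
    class Excuse:
        source: str
        is_valid: bool = True

    class DeferredLinuxInvalidation:
        def __init__(self):
            self.waiting_on = {}   # meta source -> deferred excuses

        def process(self, excuse, externally_invalidated):
            src = excuse.source
            if src.startswith('linux-meta-'):
                # Now we know whether linux-meta-foo is valid, so
                # settle every linux-foo excuse deferred on it.
                for waiter in self.waiting_on.pop(src, []):
                    if not excuse.is_valid:
                        waiter.is_valid = False
                        # Report it so the excuse finder removes it
                        # from the "actionable excuses".
                        externally_invalidated.append(waiter)
            elif src.startswith('linux-'):
                # linux-foo seen before linux-meta-foo: defer the
                # check (the opposite ordering is the simple case
                # handled directly, as in the previous sketch).
                meta = 'linux-meta-' + src[len('linux-'):]
                self.waiting_on.setdefault(meta, []).append(excuse)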