Sometimes it's desirable to force a test while it is still in progress,
perhaps because of an infrastructure bug, or simply because you don't want to
wait that long for some other reason.
We usually bump force-badtest versions for the devel series; this can cause
apparent regressions in tests for stable series, or when running tests against
devel-release after bumping the version for devel-proposed.
While a test is still running on all architectures, the tested version is not
very useful and often even wrong, as we can't yet predict which version will
actually be used (sometimes from testing, sometimes from unstable, depending
on apt pinning). So only show the version once we have at least one result.
Revert commit 551/bug 1522893 again, as we still want to see test results even
for force-skip'ed packages in some cases.
In cases where we really don't want to run the tests at all, we need to flush
the AMQP queues manually and remove pending.json after the package has been
accepted.
This separates them from Ubuntu and upstream test requests, prevents any of
the three from completely starving the other two, and makes the queues easier
to manage.
Some tests are known-broken on a particular architecture only.
force-badtest'ing the entire version is overzealous as it hides regressions on
the other architectures that we expect to work. It's also hard to maintain as
the version has to be bumped constantly.
Support hints of the form "force-badtest srcpkg/architecture/all" (e.g.
"force-badtest chromium-browser/armhf/all"). The special version "all"
matches any version.
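For illustration, a hints file could then contain lines like these (the
chromium-browser line is from above; "mypkg" is a made-up example):

    # ignore armhf failures of chromium-browser for any version
    force-badtest chromium-browser/armhf/all
    # existing form: ignore failures of mypkg 1.2-1 on all architectures
    force-badtest mypkg/1.2-1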
Add new state "IGNORE-FAIL" for regressions which have a 'force' or
'force-badtest' hint. In the HTML, show them as yellow "Ignored failure"
(without a retry link) instead of "Regression", and drop the separate
"Should wait for ..." reason, as that is hard to read for packages with a long
list of tests.
This also makes retry-autopkgtest-regressions more useful, as it will now only
retry the "real" regressions.
This is needed by the CI train, where we
(1) don't want to cache intermediate results for PPA runs, as they might
"accidentally" pass in between and fail again for the final silo,
(2) want to seed britney with the Ubuntu results.cache, to detect regressions
relative to Ubuntu.
Introduce an ADT_SHARED_RESULTS_CACHE option which can point to an existing
results.cache path; britney will then not update that file.
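A hypothetical britney.conf snippet (the path is an example):

    ADT_SHARED_RESULTS_CACHE = /srv/ubuntu-archive/britney/ubuntu/results.cache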
This was a giant copy&paste job; it was disabled four months ago, and the
infrastructure for it has ceased to exist.
If this comes back, the AutoPackageTest class should be generalized to also
issue phone boot tests (exposed as new architectures, which should then be
called "platforms"), to avoid all this duplicated code.
Generate https://autopkgtest.ubuntu.com/retry.cgi links for re-running tests
that regressed.
Change Excuse.html() back to the usual % string formatting to be consistent
with the rest of the code.
If we have a result, directly link to the log file on swift in excuses.html.
The architecture name still leads to the package history as before.
If the result is still pending, link to the "running tests" page instead.
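The generated links look roughly like this (the exact CGI parameters are an
assumption here; the trigger is URL-encoded):

    https://autopkgtest.ubuntu.com/retry.cgi?release=xenial&arch=amd64&package=gedit&trigger=glib2.0%2F2.46.1-1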
- Invert the map to go from triggers to tested packages, instead of the other
way around. This is the lookup and update mode that we usually want, which
simplifies the code and speeds up lookups. The one exception is for fetching
results (as that is per tested source package, not per trigger), but there
is a FIXME to get rid of the "triggers" argument completely.
- Stop tracking tested package versions. We don't actually care about it
anywhere, as the important piece of data is the trigger.
- Drop our home-grown pending.txt format and write pending.json instead (see
  the sketch below).
ATTENTION: This changes the on-disk cache format for pending tests, so
pending.txt needs to be cleaned up manually and any pending tests at the time
of upgrading to this revision will be re-run.
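A sketch of the resulting pending.json layout, assuming the schema described
above (trigger → tested source package → architectures; package names are
examples):

    {
      "glib2.0/2.46.1-1": {
        "gedit": ["amd64", "i386"],
        "nautilus": ["amd64"]
      }
    }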
- Invert the map to go from triggers to tested versions, instead of from
tested versions to triggers. This is the lookup and update mode that we
usually want (except for determining "ever passed"), thus this simplifies
the code and speeds up lookups.
- Drop "latest_stamp" in favor of tracking individual run IDs for every
result. This allows us in the future to directly expose the run IDs on
excuses.{yaml,html}, e. g. by providing direct links to result logs.
- Drop "ever_passed" flag as we can compute this from the individual results.
- Don't track multiple package versions for a package and a particular
  trigger. We are only interested in the latest (passing) version and don't
  otherwise use the tested version except for display.
This requires adjusting the test_dkms_results_per_kernel_old_results test, as
we now consistently ignore "ever passed" for kernel tests also for "RUNNING"
vs. "RUNNING-ALWAYSFAILED", not just for "PASS" vs. "ALWAYSFAIL".
Also fix a bug in results() when checking whether a test for which we don't
have results yet is currently running: check against the correct trigger, not
against the current source package version. This most probably already fixes
LP: #1494786.
Also upgrade the "result is neither known nor pending" warning to a grave
error, to make remaining problems with this easier to debug.
ATTENTION: This changes the on-disk format of results.cache, and thus this
needs to be dropped and rebuilt when rolling this out.
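For illustration, the new results.cache layout then looks roughly like this
(the exact field order and run ID format are assumptions based on the points
above: trigger → tested source → architecture → [passed, version, run ID]):

    {
      "glib2.0/2.46.1-1": {
        "gedit": {
          "amd64": [true, "3.18.2-1", "20151215_104823"],
          "i386": [false, "3.18.2-1", "20151215_110402"]
        }
      }
    }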
We don't want to accept a result for an older package version than what the
trigger says.
Also drop two repeated (and thus unnecessary) results in set_results().
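A minimal sketch of the intended check, using apt_pkg for Debian version
comparison (function and variable names are illustrative):

    import apt_pkg
    apt_pkg.init_system()

    def result_acceptable(result_version, trigger_version):
        """Reject results for a version older than the trigger's."""
        return apt_pkg.version_compare(result_version, trigger_version) >= 0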
For using britney on PPAs we need to add the "ppas" test parameter to AMQP
autopkgtest requests. Add ADT_PPAS britney.conf option which gets passed
through to test requests.
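A hypothetical britney.conf snippet (the PPA names are examples, and the
space-separated list format is an assumption):

    ADT_PPAS = ci-train-ppa-service/landing-001 ci-train-ppa-service/landing-002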
In situations where we don't have an up-to-date existing results.cache, the
fallback for handling test results would attribute old test results to new
requests, as we don't have a solid way to map them. This is detrimental for ad
hoc britney instances, like for testing PPAs, and also when we need to rebuild
our cache.
Ignore test results without a package trigger, and drop the code for handling
those.
The main risk here is that if we need to rebuild the cache from scratch we
might miss historic "PASS" results which haven't run since we switched to
recording triggers two months ago. But in these two months most of the
interesting packages have run, in particular for the development series and for
stable kernels, and non-kernel SRUs are not auto-promoted anyway.
It does not make sense to have first a passing and then a failing result for
the same package version and trigger. We should prefer old passing results over
new failed ones for the same version (this isn't done yet, but will be soon).
We're going to modify britney so that RUNNING tests don't block promotion if
they have never passed before. For this we will need to change a few tests so
that the packages being tested have passed before, meaning that there could
potentially be a regression.
Stop special-casing the kernel and move to one test run per trigger. This
allows us to only install the triggering package from unstable and run the rest
out of testing, which gives much better isolation.
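The per-trigger test requests sent over AMQP then look roughly like this
(queue naming and parameter encoding are assumptions):

    queue: debci-xenial-amd64
    body:  gedit {"triggers": ["glib2.0/2.46.1-1"]}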
For linux* themselves we don't want to trigger tests -- these should all come
from linux-meta*. A new kernel ABI without a corresponding -meta won't be
installed and thus we can't sensibly run tests against it. This caused
spurious regressions and unnecessary runs (such as linux-goldfish being
triggered by linux).
We want to treat linux-$flavor and linux-meta-$flavor as one set in britney
which goes in together or not at all. We never want to promote linux-$flavor
without the accompanying linux-meta-$flavor.
Introduce a synthetic linux* → linux-meta* dependency to enforce this grouping.
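A minimal sketch of injecting such a synthetic dependency (the data structures
are illustrative, not britney's actual ones):

    def add_linux_meta_dependency(src_name, depends):
        """Group linux-$flavor with linux-meta-$flavor for migration."""
        if src_name.startswith('linux') and \
                not src_name.startswith('linux-meta'):
            # e.g. linux -> linux-meta, linux-goldfish -> linux-meta-goldfish
            depends.append(src_name.replace('linux', 'linux-meta', 1))
        return depends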