Sometimes it's desirable to force a test while it is still in progress - perhaps
because of an infrastructure bug, or simply because you don't want to wait that
long for another reason.
We usually bump force-badtest versions for the devel series; this can cause
apparent regressions in stable tests, or when running tests against
devel-release after the version was bumped for devel-proposed.
If a particular test is running on all architectures, the tested version is not
very useful and often even wrong, as we can't yet predict which version will
actually be used (sometimes from testing, sometimes from unstable depending on
apt pinning). So only show it once we have at least one result.
Dial back commit 570 a bit: we should tolerate 404s, as they just mean that
something went wrong when uploading the result. We still want to fail hard on
any other error, as these would indicate infra problems.
This separates them from Ubuntu and upstream test requests, prevents any of
those from completely starving the other two, and makes the queues easier to
manage.
Some tests are known-broken on a particular architecture only.
force-badtest'ing the entire version is overzealous as it hides regressions on
the other architectures that we expect to work. It's also hard to maintain as
the version has to be bumped constantly.
Support hints of the form "force-badtest srcpkg/architecture/all" (e. g.
"force-badtest chromium-browser/armhf/all"). The special version "all" will
match any version.
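A minimal sketch of the matching rule, assuming hints are parsed into
(srcpkg, architecture, version) tuples; the helper name is hypothetical:

    def badtest_hint_matches(hint, src, arch, version):
        # the special version "all" matches any tested version
        h_src, h_arch, h_ver = hint
        return h_src == src and h_arch == arch and h_ver in ('all', version)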
Add new state "IGNORE-FAIL" for regressions which have a 'force' or
'force-badtest' hint. In the HTML, show them as yellow "Ignored failure"
(without a retry link) instead of "Regression", and drop the separate
"Should wait for ..." reason, as that is hard to read for packages with a long
list of tests.
This also makes retry-autopkgtest-regressions more useful as this will now only
run the "real" regressions.
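Roughly, as an illustrative sketch (the hint lookup API shown here is
simplified, not necessarily britney's actual one):

    def final_state(state, src, hints):
        # downgrade a regression to an ignored failure if a matching
        # 'force' or 'force-badtest' hint exists
        if state == 'REGRESSION' and (hints.search('force', package=src) or
                                      hints.search('force-badtest', package=src)):
            return 'IGNORE-FAIL'
        return state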
When using a shared results cache with PPAs (silos) we cannot rely on the
latest time stamp from the distro's results.cache. As soon as there is a new
run for a package in Ubuntu proper, that updated time stamp hides all previous
results for the PPA, and causes tests to be re-requested unnecessarily.
In the CI train we sometimes run into transient "HTTP Error 502: Proxy Error".
As we don't keep a results.cache there, this leads to retrying tests for which
we already have a result in swift, but can't download it. Treat this as a hard
failure now, to let the next britney run try again. This will also tell us if
we need to handle any status code other than 200, 204 (empty result), or 401
(container does not exist).
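A minimal sketch of that status handling, assuming urllib; the helper is
hypothetical:

    import urllib.request
    from urllib.error import HTTPError

    def fetch_result(url):
        try:
            resp = urllib.request.urlopen(url)
        except HTTPError as e:
            if e.code == 401:
                return None  # container does not exist -> no result yet
            raise            # e.g. 502 Proxy Error: fail hard, retry next run
        if resp.getcode() == 204:
            return None      # empty result
        return resp.read()   # 200: actual result data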
This is needed by the CI train, where we
(1) don't want to cache intermediate results for PPA runs, as they might
"accidentally" pass in between and fail again for the final silo,
(2) want to seed britney with the Ubuntu results.cache, to detect regressions
relative to Ubuntu.
Introduce ADT_SHARED_RESULTS_CACHE option which can point to a path to
results.cache. This will then not be updated by britney.
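A sketch of that behavior (ADT_SHARED_RESULTS_CACHE is the option from this
change; the helpers around it are illustrative):

    import json
    import os

    def load_results(state_dir, shared_cache=None):
        # shared_cache is the value of ADT_SHARED_RESULTS_CACHE, if set
        path = shared_cache or os.path.join(state_dir, 'results.cache')
        if os.path.exists(path):
            with open(path) as f:
                return json.load(f)
        return {}

    def save_results(state_dir, results, shared_cache=None):
        if shared_cache:
            return  # a shared cache is only seeded from, never updated
        with open(os.path.join(state_dir, 'results.cache'), 'w') as f:
            json.dump(results, f)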
If we have a result, directly link to the log file on swift in excuses.html.
The architecture name still leads to the package history as before.
If the result is still pending, link to the "running tests" page instead.
Don't clobber passed run IDs with newer failed results. This is potentially a
bit more expensive as we might re-fetch failed results at every run after a
PASS, but the IDs in our cache will be correct so that we can expose them in
the UI.
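The update rule, as a sketch (the cache entry structure is hypothetical):

    def update_result(cache, key, passed, run_id):
        old = cache.get(key)
        if old and old['passed'] and not passed:
            return  # keep the passing run ID rather than the newer failure
        cache[key] = {'passed': passed, 'run_id': run_id}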
Splitting up the processing into request(), submit(), and collect() makes our
data structures, housekeeping, and code unnecessarily complicated. Drop the
latter two and do all of it in just request(). This avoids a separate
requested_test map, avoids fetching test results twice, and gets rid of some
state keeping.
This could have led to (re-)fetching results more than once when we only got
the latest ID for the few triggers we were currently looking at, not for all
possible triggers of a package. Drop this kludge, and replace it with a proper
full iteration and caching.
- Invert the map to go from triggers to tested packages, instead of the other
way around. This is the lookup and update mode that we usually want, which
simplifies the code and speeds up lookups. The one exception is for fetching
results (as that is per tested source package, not per trigger), but there
is a FIXME to get rid of the "triggers" argument completely.
- Stop tracking tested package versions. We don't actually care about it
anywhere, as the important piece of data is the trigger.
- Drop our home-grown pending.txt format and write pending.json instead.
ATTENTION: This changes the on-disk cache format for pending tests, so
pending.txt needs to be cleaned up manually and any pending tests at the time
of upgrading to this revision will be re-run.
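As a sketch, the new pending.json could be shaped roughly like this (the exact
structure is illustrative):

    import json

    # trigger ("srcpkg/version") -> tested source package -> pending arches
    pending = {
        'glib2.0/2.46.0-1': {
            'gvfs': ['amd64', 'i386'],
        },
    }
    with open('pending.json', 'w') as f:
        json.dump(pending, f, indent=2)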
- Invert the map to go from triggers to tested versions, instead of from
tested versions to triggers. This is the lookup and update mode that we
usually want (except for determining "ever passed"), thus this simplifies
the code and speeds up lookups.
- Drop "latest_stamp" in favor of tracking individual run IDs for every
result. This allows us in the future to directly expose the run IDs on
excuses.{yaml,html}, e. g. by providing direct links to result logs.
- Drop "ever_passed" flag as we can compute this from the individual results.
- Don't track multiple package versions for a package and a particular
trigger. We are only interested in the latest (passing) version and don't
otherwise use the tested version except for displaying.
This requires adjusting the test_dkms_results_per_kernel_old_results test, as
we now consistently ignore "ever passed" for kernel tests also for "RUNNING"
vs. "RUNNING-ALWAYSFAILED", not just for "PASS" vs. "ALWAYSFAIL".
Also fix a bug in results() when checking if a test for which we don't have
results yet is currently running: Check for correct trigger, not for the
current source package version. This most probably already fixes LP: #1494786.
Also upgrade the warning about "result is neither known nor pending" to a grave
error, to make remaining problems with this more obvious to debug.
ATTENTION: This changes the on-disk format of results.cache, and thus this
needs to be dropped and rebuilt when rolling this out.
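As an illustration, the reworked cache could be shaped like this (field names
are made up for the sketch, not taken from the actual code):

    # trigger -> tested source package -> architecture -> latest result
    results = {
        'linux-meta/4.2.0.16.18': {
            'systemd': {
                'amd64': {'passed': True,
                          'version': '227-2',  # latest (passing) version
                          'run_id': '20151023_143102'},
            },
        },
    }

    def ever_passed(results, src, arch):
        # derived from the individual results instead of a stored flag
        return any(r.get(src, {}).get(arch, {}).get('passed', False)
                   for r in results.values())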
For using britney on PPAs we need to add the "ppas" test parameter to AMQP
autopkgtest requests. Add ADT_PPAS britney.conf option which gets passed
through to test requests.
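For illustration, the request parameters might be extended like this (the
message format shown is a simplification):

    import json

    def build_request_params(trigger, ppas=None):
        params = {'triggers': [trigger]}
        if ppas:  # from the ADT_PPAS britney.conf option
            params['ppas'] = ppas  # e.g. ['ci-train-ppa-service/landing-001']
        return json.dumps(params)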
In situations where we don't have an up to date existing results.cache, the
fallback for handling test results would attribute old test results to new
requests, as we don't have a solid way to map them. This is detrimental for ad
hoc britney instances, like for testing PPAs, and also when we need to rebuild
our cache.
Ignore test results without a package trigger, and drop the code for handling
those.
The main risk here is that if we need to rebuild the cache from scratch we
might miss historic "PASS" results which haven't run since we switched to
recording triggers two months ago. But in these two months most of the
interesting packages have run, in particular for the development series and for
stable kernels, and non-kernel SRUs are not auto-promoted anyway.
python3-amqplib already exists in trusty, but python3-kombu does not. This
makes it possible to run britney on a standard trusty without manual backports.
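A minimal sketch of submitting a request with python3-amqplib (the queue name
and message body are illustrative):

    from amqplib import client_0_8 as amqp

    conn = amqp.Connection('localhost')
    ch = conn.channel()
    ch.basic_publish(
        amqp.Message('glib2.0 {"triggers": ["glib2.0/2.46.0-1"]}'),
        routing_key='debci-wily-amd64')
    ch.close()
    conn.close()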
Stop special-casing the kernel and move to one test run per trigger. This
allows us to only install the triggering package from unstable and run the rest
out of testing, which gives much better isolation.
For linux* themselves we don't want to trigger tests -- these should all come
from linux-meta*. A new kernel ABI without a corresponding -meta won't be
installed and thus we can't sensibly run tests against it. This caused
unnecessary and wrong regressions, and unnecessary runs (like linux-goldfish
being triggered by linux).
Test requesting: Don't re-request a test if we already have a result for it for
that trigger (for a relevant version), but there is a new version of the tested
package. First, this unnecessarily delays propagation, as the test will go back
to "in progress"; second, if it fails in the next run, this isn't the fault of
the original trigger, but of the new version of the tested package.
Result finding: Don't limit acceptable results to the latest version of the
tested package. It's perfectly fine if an earlier version (like the one in
testing, or an earlier upload) was run and gave a result for the requesting
trigger. If it's a PASS, then we are definitively done; if it's a failure,
there is the "Checking for new results for failed test..." logic in collect()
which will also accept newer results.
In addition to not reading ever_passed for kernel triggers, we must also not
write it for those. Otherwise we introduce false regressions for e. g. "dkms"
when some DKMS package always failed on the main kernel but succeeds on one
flavour.
We trigger independent tests for every linux/linux-meta* reverse dependency,
as they run under the triggering kernel. Thus "ever passed" is rather
meaningless for these as we don't want to track this on a per-trigger basis (as
it would be wrong for everything else but kernels). This led to a lot of false
regressions, as some DKMS modules only work on some kernel flavours.
The kernel team is doing per-kernel regression analysis of the test results, so
we don't need to duplicate this logic in britney. Thus effectively disable the
"Regression" state for kernel reverse dependencies, and rely on the kernel
test machinery to untag the tracking bug only if there are no actual
regressions.
When fetching a result with explicit triggers, always update self.results, not
just when we have a pending trigger for it. Otherwise satisfied_triggers will
be empty after reading the first result, and we clobber test results for all
triggers with the latest result.
When we need to blow away and rebuild results.cache we want to avoid
re-triggering all tests. Thus collect already existing results for requested
tests before submitting new requests.
This is rather hackish, as fetch_one_result() now has to deal with both
self.requested_tests and self.pending_tests. The code should be refactored to
eliminate one of these maps.
When downloading results, check the ADT_TEST_TRIGGERS var in testinfo.json to
get back the original trigger that previous britney runs requested the test run
for. This allows us to much more precisely map a test result to an original
test request.
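Sketch of that recovery step, assuming testinfo.json inside result.tar records
the test's environment variables (details are illustrative):

    import io
    import json
    import tarfile

    def result_triggers(result_tar_bytes):
        with tarfile.open(fileobj=io.BytesIO(result_tar_bytes)) as tar:
            info = json.loads(tar.extractfile('testinfo.json').read().decode())
        for var in info.get('custom_environment', []):
            if var.startswith('ADT_TEST_TRIGGERS='):
                return var.split('=', 1)[1].split()
        return []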
In order to tell apart test results which pass/fail depending on the package
which triggers them (in particular, if that is a kernel), we must keep track of
pass/fail results on a per-trigger granularity. Rearrange the results map
accordingly.
Keep the old "latest test result applies to all triggers of this pkg/version"
logic around as long as we still have existing test results without
testinfo.json.
ATTENTION: This breaks the format of results.cache, so this needs to be removed
when rolling this out.
gccgo-5 exists in Ubuntu 15.04 only and builds all binary packages of gcc-5.
Triggering all tests is pointless and a big waste of test resources, so trim
down the list to actually useful ones. This can be dropped when 15.04 goes EOL.
We often get "tmpfail" results (repeated failure to start cloud instance, etc.)
with no package/version at all. Stop attributing them to the latest pending
request for that package, as that has already messed up some results. With
moving to tracking test triggers in testinfo.json and running multiple test
requests for each triggering kernel version it becomes completely impossible to
interpret anything into a tmpfail result without testpkg-version, so just
ignore them.
This will leave some orphaned entries in pending.txt and thus require manual
retries after fixing the tmpfail reason. But those retries need to happen
anyway, so this does not complicate operations; it merely shows such tests as
"in progress" instead of "regression".
So far we only added the triggering test name. Add the version as well, so that
we'll retain the complete trigger information in result.tar's testinfo.json in
swift. This will allow us to completely reconstruct our results.cache from
scratch without losing any trigger information.
This isn't significantly harder to parse from shell either (in tests): You can
still iterate over $ADT_TEST_TRIGGERS with a "for" loop and split package and
version on '/'.
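The same in Python, for comparison:

    import os

    for trigger in os.environ.get('ADT_TEST_TRIGGERS', '').split():
        package, version = trigger.split('/')
        # ... act on each triggering package/version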