Splitting up the processes of request(), submit(), and collect() makes our data
structures, housekeeping, and code unnecessarily complicated. Drop the latter
two and do all of it in request(). This avoids a separate requested_tests map,
avoids fetching test results twice, and gets rid of some state keeping.
This could lead to (re-)fetching results more than once, as we only got the
latest ID for the few triggers we were currently looking at, not for all
possible triggers of a package. Drop this kludge and replace it with a proper
full iteration and caching.
- Invert the map to go from triggers to tested packages, instead of the other
way around. This is the lookup and update mode that we usually want, which
simplifies the code and speeds up lookups. The one exception is for fetching
results (as that is per tested source package, not per trigger), but there
is a FIXME to get rid of the "triggers" argument completely.
- Stop tracking tested package versions. We don't actually care about them
anywhere, as the important piece of data is the trigger.
- Drop our home-grown pending.txt format and write pending.json instead.
ATTENTION: This changes the on-disk cache format for pending tests, so
pending.txt needs to be cleaned up manually and any pending tests at the time
of upgrading to this revision will be re-run.
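As a rough illustration (the trigger, the package names, and the exact nesting
below are made up, not the real cache contents), the structure that gets dumped
to pending.json might look like this in Python:

    # hypothetical shape of the pending-tests map written to pending.json;
    # field nesting and names are illustrative only
    pending_tests = {
        'glib2.0/2.46.0-1': {               # trigger: source package/version
            'amd64': ['udisks2', 'gvfs'],   # tested source packages, no versions
            'i386': ['udisks2'],
        },
    }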
- Invert the map to go from triggers to tested versions, instead of from
tested versions to triggers. This is the lookup and update mode that we
usually want (except for determining "ever passed"), which simplifies the
code and speeds up lookups.
- Drop "latest_stamp" in favor of tracking individual run IDs for every
result. This allows us in the future to directly expose the run IDs on
excuses.{yaml,html}, e. g. by providing direct links to result logs.
- Drop "ever_passed" flag as we can compute this from the individual results.
- Don't track multiple package versions for a package and a particular
trigger. We are only interested in the latest (passing) version and don't
otherwise use the tested version except for displaying.
This requires adjusting the test_dkms_results_per_kernel_old_results test, as
we now consistently ignore "ever passed" for kernel tests for "RUNNING" vs.
"RUNNING-ALWAYSFAILED" as well, not just for "PASS" vs. "ALWAYSFAIL".
Also fix a bug in results() when checking whether a test for which we don't
have results yet is currently running: check for the correct trigger, not for
the current source package version. This most probably already fixes
LP: #1494786.
Also upgrade the warning about "result is neither known nor pending" to a grave
error, to make remaining problems with this more obvious to debug.
ATTENTION: This changes the on-disk format of results.cache, which thus needs
to be dropped and rebuilt when rolling this out.
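Correspondingly, a rough sketch of the per-trigger results layout described in
the list above (again with made-up names and nesting, not the actual
results.cache schema):

    # hypothetical shape of the in-memory results map written to results.cache:
    # per trigger, per architecture, one latest result per tested source, with
    # an individual run ID and no "ever_passed" flag or "latest_stamp"
    test_results = {
        'glib2.0/2.46.0-1': {                 # trigger: source package/version
            'amd64': {
                # (passed, latest tested version, run ID)
                'udisks2': (True, '2.1.6-1', '20151022_101010'),
            },
        },
    }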
To use britney on PPAs we need to add the "ppas" test parameter to AMQP
autopkgtest requests. Add an ADT_PPAS britney.conf option which gets passed
through to test requests.
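As a hedged illustration of how that parameter might end up in a request body
(the exact message format, the trigger, and the PPA name below are assumptions,
not the real protocol):

    import json

    # sketch only: build a test request body carrying triggers and PPAs
    adt_ppas = ['joe/myppa']                    # parsed from the ADT_PPAS option
    params = {'triggers': ['glib2.0/2.46.0-1']}
    if adt_ppas:
        params['ppas'] = adt_ppas
    request_body = 'udisks2 %s' % json.dumps(params)  # tested source + parameters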
In situations where we don't have an up-to-date results.cache, the fallback for
handling test results would attribute old test results to new requests, as we
don't have a solid way to map them. This is detrimental for ad hoc britney
instances, such as for testing PPAs, and also when we need to rebuild our
cache.
Ignore test results without a package trigger, and drop the code for handling
those.
The main risk here is that if we need to rebuild the cache from scratch we
might miss historic "PASS" results which haven't run since we switched to
recording triggers two months ago. But in these two months most of the
interesting packages have run, in particular for the development series and for
stable kernels, and non-kernel SRUs are not auto-promoted anyway.
python3-amqplib already exists in trusty, while python3-kombu does not, so
using amqplib makes it possible to run britney on a stock trusty system without
manual backports.
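For reference, publishing a request with python3-amqplib is straightforward;
the host, credentials, queue name, and message body below are placeholders, not
the actual infrastructure details:

    from amqplib import client_0_8 as amqp

    # minimal publishing sketch with python3-amqplib
    conn = amqp.Connection(host='amqp.example.com', userid='guest',
                           password='guest')
    ch = conn.channel()
    ch.basic_publish(amqp.Message('udisks2 {"triggers": ["glib2.0/2.46.0-1"]}'),
                     routing_key='debci-series-amd64')
    ch.close()
    conn.close()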
Stop special-casing the kernel and move to one test run per trigger. This
allows us to only install the triggering package from unstable and run the rest
out of testing, which gives much better isolation.
For linux* themselves we don't want to trigger tests -- these should all come
from linux-meta*. A new kernel ABI without a corresponding -meta won't be
installed and thus we can't sensibly run tests against it. This caused
unnecessary and wrong regressions, and unnecessary runs (like linux-goldfish
being triggered by linux).
Test requesting: Don't re-request a test if we already have a result for it for
that trigger (for a relevant version), but there is a new version of the tested
package. First, this unnecessarily delays propagation as the test will go back
to "in progress"; second, if it fails in the next run this isn't the fault of
the original trigger, but of the new version of the tested package.
Result finding: Don't limit acceptable results to the latest version of the
tested package. It's perfectly fine if an earlier version (like the one in
testing, or an earlier upload) was run and gave a result for the requesting
trigger. If it's PASS, then we are definitively done, and if it's a failure,
the "Checking for new results for failed test..." logic in collect() will also
accept newer results.
In addition to not reading ever_passed for kernel triggers, we must also not
write it for those. Otherwise we introduce false regressions for e. g. "dkms"
when some DKMS package always failed on the main kernel but succeeds on one
flavour.
We trigger independent tests for every linux/linux-meta* reverse dependency, as
they run under the triggering kernel. Thus "ever passed" is rather meaningless
for these, as we don't want to track it on a per-trigger basis (that would be
wrong for everything but kernels). This led to a lot of false regressions, as
some DKMS modules only work on some kernel flavours.
The kernel team is doing per-kernel regression analysis of the test results, so
we don't need to duplicate this logic in britney. Thus effectively disable the
"Regression" state for kernel reverse dependencies, and rely on the kernel
test machinery to untag the tracking bug only if there are no actual
regressions.
When fetching a result with explicit triggers, always update self.results, not
just when we have a pending trigger for it. Otherwise satisfied_triggers will
be empty after reading the first result, and we clobber test results for all
triggers with the latest result.
When we need to blow away and rebuild results.cache we want to avoid
re-triggering all tests. Thus, collect already existing results for requested
tests before submitting new requests.
This is rather hackish, as fetch_one_result() now has to deal with both
self.requested_tests and self.pending_tests. The code should be refactored to
eliminate one of these maps.
When downloading results, check the ADT_TEST_TRIGGERS variable in testinfo.json
to recover the original trigger for which previous britney runs requested the
test. This allows us to map a test result to its original test request much
more precisely.
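A minimal sketch of reading that back from a downloaded result.tar, assuming
the trigger is recorded in testinfo.json's environment list as
"ADT_TEST_TRIGGERS=<source>/<version> ..." (member and field names may differ
in the real artifacts):

    import json
    import tarfile

    def result_triggers(result_tar_path):
        '''Return the list of "source/version" triggers recorded in a result.'''
        with tarfile.open(result_tar_path) as tar:
            info = json.loads(tar.extractfile('testinfo.json').read().decode())
        for var in info.get('custom_environment', []):
            if var.startswith('ADT_TEST_TRIGGERS='):
                return var.split('=', 1)[1].split()
        return []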
In order to tell apart test results which pass/fail depending on the package
which triggers them (in particular, if that is a kernel), we must keep track of
pass/fail results on a per-trigger granularity. Rearrange the results map
accordingly.
Keep the old "latest test result applies to all triggers of this pkg/version"
logic around as long as we still have existing test results without
testinfo.json.
ATTENTION: This breaks the format of results.cache, so this needs to be removed
when rolling this out.
gccgo-5 exists in Ubuntu 15.04 only and builds all binary packages of gcc-5.
Triggering all tests is pointless and a big waste of test resources, so trim
down the list to actually useful ones. This can be dropped when 15.04 goes EOL.
We often get "tmpfail" results (repeated failure to start cloud instance, etc.)
with no package/version at all. Stop attributing them to the latest pending
request for that package, as that has already messed up some results. With
moving to tracking test triggers in testinfo.jar and running multiple test
requests for each triggering kernel version it becomes completely impossible to
interpret anything into a tmpfail result without testpkg-version, so just
ignore them.
This will leave some orphaned entries in pending.txt and thus require manual
retries after fixing the tmpfail reason. But this needs to happen anyway, so
this does not complicate operation but instead shows those as "in progress"
instead of "regression".
So far we only added the triggering source package name. Add the version as
well, so that we'll retain the complete trigger information in result.tar's
testinfo.json in swift. This will allow us to completely reconstruct our
results.cache from scratch without losing any trigger information.
This isn't significantly harder to parse from shell either (in tests): You can
still iterate over $ADT_TEST_TRIGGERS with a "for" loop and split package and
version on '/'.
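The equivalent iteration in a Python-based test (the shell version is just a
"for t in $ADT_TEST_TRIGGERS" loop); the linux-meta check is merely an example
of what a test might do with the trigger:

    import os

    # split each "source/version" trigger handed to the test
    for trig in os.environ.get('ADT_TEST_TRIGGERS', '').split():
        src, ver = trig.split('/', 1)
        if src.startswith('linux-meta'):
            print('triggered by kernel %s' % ver)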
In collect(), check if there are new results for failed tests on a
per-architecture level. This updates results while tests for other
architectures are still in progress (i. e. in self.pending_tests).
So far we've only calculated the reverse dependencies on amd64. This breaks
when triggering packages which do not exist on some architectures, like
bcmwl-kernel-source. It also makes it impossible to e. g. trigger DKMS tests on
armhf only for an ARM-only kernel like linux-ti-omap4.
Through the usual reverse dependency triggering, gcc-* usually triggers many
hundreds of (mostly universe) tests via libgccN. But:
- This does not help to prevent compiler regressions: as all packages are
built in -proposed anyway, the new compiler is being used immediately, so we
can't hold it back in -proposed.
- It does not trigger toolchain tests which actually are affected, most
importantly binutils and linux.
- This puts enormous stress onto our test infrastructure.
So special-case gcc by triggering binutils and linux, plus fglrx-installer as a
typical (and important) example of a DKMS package which also needs a compiler,
and libreoffice as our favourite toolchain stress test to cover libgccN.
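A sketch of what that special case might look like; the helper names here are
made up for illustration, not the actual britney functions:

    def reverse_dependencies(src):
        '''Stand-in for the real reverse-dependency lookup.'''
        return []

    def tests_for_trigger(src):
        '''Return the source packages whose tests a trigger should run.'''
        if src.startswith('gcc-'):
            # skip the huge libgccN reverse-dependency fan-out; test a small,
            # representative set instead
            return ['binutils', 'linux', 'fglrx-installer', 'libreoffice']
        return reverse_dependencies(src)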
If a package gets triggered by several sources, we can ordinarily run just one
test for all triggers. But for proposed kernels we want to run a separate test
for each, so that the test can run under that particular kernel.
With this, tests can do special things when they get triggered by a particular
package. E. g. "linux" or "gcc" could skip their "rebuild myself" test if they
were triggered by a new version of themselves (as opposed to a new binutils).
This is particularly aimed at DKMS tests which need to install the triggering
kernel (e. g. -generic vs. -generic-lts-backport-XXX).
New kernels are prone to break LXC. In https://bugs.debian.org/779559 there is
a proposal for a flexible approach to add extra "reverse test dependencies";
until that gets implemented, hardcode triggering lxc's tests on new kernels.
At the kernel team's request we want to trigger DKMS package tests on new
kernel uploads, to ensure that we don't regress them with newer kernels.
Pretend that linux-meta builds the "dkms" binary, so that the existing reverse
dependency magic takes care of the actual triggering.
Note that this needs to be "linux-meta", not "linux", so that tests will
actually use the new kernel (via dist-upgrade).
Add Excuse.addtest() for adding a test type/package/arch/result, so that the
excuses YAML will get structured test results instead of pre-formatted HTML.
Move the HTML rendering into Excuse.html() instead.
This supports a "test type" whose only value is "autopkgtest" right now, but
we will have "bootest", perhaps "piuparts" and other tests in the future.
Drop the "(<ver> is unbuilt/uninstallable)" note from excuses.html as this is
really a per-architecture property, not a per-tested-source one. This needs to
be re-thought and generalized.
r472 added >= version matching to the results evaluation. We must also do this
in add_test_request() so that we avoid requesting a test for the testing
version over and over again if we only get results for the unstable version.
Here it is enough to check only the requested version and the unstable version
(if that's higher).
If a result.tar does not contain a testpkg-version, we must still match it
against pending.txt, but we must not add it to the results cache. Otherwise it
ends up under a "null" version key (JSON's serialization of None), which
becomes an actual version string once the cache is read back.
There are scenarios when britney requests a package test for a particular
version but we actually get a result for a later version:
* When britney runs the later version is not built yet and thus it is in
excludes; but at the time when the test actually runs the package is built.
* We don't support running tests for a given older (source) version yet, tests
always get run from the latest unstable source even if that isn't built yet.
Thus we need to consider results >= the requested version. However, we prefer a
successful result for the originally requested version so that we can still
remove a broken version from unstable. This is already covered by
TestAutoPkgTest.test_remove_from_unstable.
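The acceptance rule boils down to a dpkg-style version comparison; a minimal
sketch using python3-apt (the function name is made up, and the preference for
an exact match described above is left out):

    import apt_pkg

    apt_pkg.init()

    def acceptable_result(result_version, requested_version):
        '''True if a result for result_version can satisfy the request.'''
        # accept the requested version or anything newer
        return apt_pkg.version_compare(result_version, requested_version) >= 0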
Disabling AMQP requests with "ADT_ENABLE = yes" but ADT_AMQP unset made sense
while we still supported adt-britney. But as that's gone now, let's use the
ADT_ENABLE switch only, and if it's on, require ADT_AMQP and ADT_SWIFT_URL to
be set.
This simplifies the code a bit and is less confusing.
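A britney.conf excerpt with the three options might then look like this (all
values below are placeholders):

    ADT_ENABLE    = yes
    ADT_AMQP      = amqp://user:password@amqp.example.com
    ADT_SWIFT_URL = https://swift.example.com/v1/AUTH_example/autopkgtest-results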
We already handle the exclusions in tests_for_source() (and run the testing
version if appropriate), so don't unconditionally skip requests for those.
Adjust the TestAutoPkgTest.test_rdepends_unbuilt case to catch that: the "run
britney once to pick up previous results" step was a thinko, as this already
satisfies all tests for green 2.
The previous commit introduced a KeyError crash in tests_for_source() for
packages which are unbuilt/uninstallable and only present in unstable.
Ignore these in tests_for_source() as they can't possibly be a regression for
their dependencies, and there is no sensible way to run a test for them.
Commit 463 ("Don't promote packages with unbuilt reverse dependencies") turned
out to be too strict: This holds up too many innocent packages in -proposed.
If unstable has an unbuilt/uninstallable reverse dependency D of a package P,
trigger a test anyway (which will then most likely run against the testing
version of D). If that succeeds, the unstable P did not break D and can be
accepted. If it fails, D needs to be fixed.
Ideally we would set up some clever apt pinning to force installation of
testing-D, to avoid running into the uninstallability of unstable-D, but this
is tricky and error prone.
Drop the temporary "UNINST" state from commit 466 again. Instead, excuses.html
will now show a test against the testing version of D together with a note that
the unstable version is unbuilt/uninstallable.
This should ideally clear up all cases where a requested result is neither
present nor pending. Log an error if that still happens (this will be checked
in the next couple of runs), and ensure in the tests that we don't trigger any
outstanding "FIXME" log messages.