75 Commits

Author SHA1 Message Date
Martin Pitt
2ba7fd223c Autopkgtest: Trigger LXC from linux only on valid architectures
If the linux source package does not build any binaries on the given
architecture, don't trigger an LXC test for it.
2015-09-23 15:35:37 +02:00
Martin Pitt
effdc25263 autopkgtest.py: Drop unused itertools 2015-09-21 16:30:12 +02:00
Martin Pitt
622115a2fb Autopkgtest: Fix updating results with explicit triggers
When fetching a result with explicit triggers, always update self.results, not
just when we have a pending trigger for it. Otherwise satisfied_triggers will
be empty after reading the first result, and we clobber test results for all
triggers with the latest result.
2015-09-21 16:27:49 +02:00
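
An illustrative sketch of the fixed bookkeeping described above; the data
structures and the record_result() helper are hypothetical, not the actual
britney code:

    def record_result(results, pending, src, arch, trigger, passed):
        # Before the fix, this update only happened while 'trigger' still had a
        # pending entry; results for already-satisfied triggers were dropped and
        # the latest fetched result clobbered all triggers.
        results.setdefault(trigger, {}).setdefault(src, {})[arch] = passed
        # Only the pending bookkeeping remains conditional on the trigger.
        if trigger in pending:
            pending[trigger].get(src, set()).discard(arch)

    results, pending = {}, {'glib2.0/2.45.2-1': {'gdk-pixbuf': {'amd64'}}}
    record_result(results, pending, 'gdk-pixbuf', 'amd64', 'glib2.0/2.45.2-1', True)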
Martin Pitt
9b70fe361d Autopkgtest: Collect results for requested tests before submitting
When we need to blow away and rebuild results.cache we want to avoid
re-triggering all tests. Thus collect already existing results for requested
tests before submitting new requests.

This is rather hackish for now, as fetch_one_result() has to deal with both
self.requested_tests and self.pending_tests. The code should be refactored to
eliminate one of these maps.
2015-09-18 06:47:28 +02:00
Martin Pitt
19cd69cb47 Autopkgtest: Track/cache results by triggering package
When downloading results, check the ADT_TEST_TRIGGERS var in testinfo.json to
get back the original trigger that previous britney runs requested the test run
for. This allows us to much more precisely map a test result to an original
test request.

In order to tell apart test results which pass/fail depending on the package
which triggers them (in particular, if that is a kernel), we must keep track of
pass/fail results on a per-trigger granularity. Rearrange the results map
accordingly.

Keep the old "latest test result applies to all triggers of this pkg/version"
logic around as long as we still have existing test results without
testinfo.json.

ATTENTION: This breaks the format of results.cache, so the existing cache needs
to be removed when rolling this out.
2015-09-18 06:46:34 +02:00
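
A rough sketch of recovering the original trigger from a downloaded result and
using it as the cache key; the exact layout of testinfo.json and the helper
below are assumptions, only the ADT_TEST_TRIGGERS name is from the commit:

    import io, json, tarfile

    def triggers_from_result(result_tar_bytes):
        # Assumed layout: testinfo.json at the top level of result.tar, with the
        # triggers as a space-separated string such as
        # "linux-meta/4.2.0.10.11 glib2.0/2.45.2-1".
        with tarfile.open(fileobj=io.BytesIO(result_tar_bytes)) as tar:
            info = json.loads(tar.extractfile('testinfo.json').read().decode())
        return info.get('ADT_TEST_TRIGGERS', '').split()

    # The per-trigger results map then looks roughly like
    #     results[trigger][src][arch] = (passed, version, run_id)
    # instead of the previous results[src][arch] = (passed, version) layout.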
Martin Pitt
0eddac8476 Autopkgtest: Trim triggered tests for gccgo-5
gccgo-5 exists in Ubuntu 15.04 only and builds all binary packages of gcc-5.
Triggering all tests is pointless and a big waste of test resources, so trim
down the list to actually useful ones. This can be dropped when 15.04 goes EOL.
2015-09-18 06:43:39 +02:00
Martin Pitt
775274ca89 Autopkgtest: Ignore results without package/version
We often get "tmpfail" results (repeated failure to start cloud instance, etc.)
with no package/version at all. Stop attributing them to the latest pending
request for that package, as that has already messed up some results. With the
move to tracking test triggers in testinfo.json and to running multiple test
requests for each triggering kernel version, a tmpfail result without a
testpkg-version cannot be interpreted sensibly at all, so just ignore them.

This will leave some orphaned entries in pending.txt and thus require manual
retries after fixing the tmpfail reason. But that needs to happen anyway, so
this does not complicate operation; it merely shows those tests as "in
progress" rather than "regression".
2015-09-17 12:31:13 +02:00
Martin Pitt
6af2e9c1dc Autopkgtest: Include triggering package version in test request params
So far we only added the name of the triggering package. Add the version as well, so that
we'll retain the complete trigger information in result.tar's testinfo.json in
swift. This will allow us to completely reconstruct our results.cache from
scratch without losing any trigger information.

This isn't significantly harder to parse from shell either (in tests): You can
still iterate over $ADT_TEST_TRIGGERS with a "for" loop and split package and
version on '/'.
2015-09-17 11:56:29 +02:00
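
The "split on '/'" parsing mentioned above, written here as a small Python
sketch rather than shell; the environment variable name is from the commit,
the surrounding code is hypothetical:

    import os

    for token in os.environ.get('ADT_TEST_TRIGGERS', '').split():
        package, version = token.split('/', 1)
        print('triggered by %s at version %s' % (package, version))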
Martin Pitt
50673de2d8 autopkgtest: Check for new results on a per-architecture granularity
In collect(), check if there are new results for failed tests on a
per-architecture level. This updates results while tests for other
architectures are still in progress (i. e. in self.pending_tests).
2015-09-17 09:32:35 +02:00
Martin Pitt
49260078e4 autopkgtest: Make Linux -> DKMS triggering arch specific
Only trigger DKMS tests for architectures on which the given kernel actually
exists.
2015-09-16 22:59:05 +02:00
Martin Pitt
76751fff88 autopkgtest: Make tests_for_source() arch specific
So far we've only calculated the reverse dependencies on amd64. This breaks
when triggering packages which do not exist on some architectures, like
bcmwl-kernel-source. It also makes it impossible to e. g. trigger DKMS tests on
armhf only for an ARM-only kernel like linux-ti-omap4.
2015-09-16 17:07:19 +02:00
Martin Pitt
21fec5d92a Only trigger autopkgtests for some key packages for gcc-*
Through the usual reverse dependency triggering, gcc-* usually triggers many
hundreds of (mostly universe) tests via libgccN. But:

 - This does not help to prevent compiler regressions: as all packages are
   built in -proposed anyway, the new compiler is being used immediately, so we
   can't hold it back in -proposed.

 - It does not trigger toolchain tests which actually are affected, most
   importantly binutils and linux.

 - This puts enormous stress onto our test infrastructure.

So special-case gcc by triggering binutils and linux, plus fglrx-installer as a
typical (and important) example of a DKMS package which also needs a compiler,
and libreoffice as our favourite toolchain stress test to cover libgccN.
2015-09-16 08:03:17 +02:00
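
A hedged sketch of what such a special case could look like; the package list
mirrors the commit, but the function and data structures are illustrative only:

    # Instead of expanding gcc-* into hundreds of libgccN reverse dependencies,
    # trigger only a hand-picked set of representative tests.
    GCC_KEY_TESTS = ['binutils', 'linux', 'fglrx-installer', 'libreoffice']

    def tests_for_trigger(src, reverse_deps):
        if src.startswith('gcc-'):
            return GCC_KEY_TESTS
        return reverse_deps.get(src, [])

    print(tests_for_trigger('gcc-5', {}))   # -> the four key packages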
Martin Pitt
6591d67c47 autopkgtest: Trigger lxc tests from linux-meta*, not linux
This is more consistent with the DKMS triggers, and will make things easier for
the kernel status matrix.
2015-09-15 15:10:49 +02:00
Martin Pitt
2aa0948250 autopkgtest: Request separate tests for linux-meta* triggers
If a package gets triggered by several sources, we can ordinarily run just one
test for all triggers. But for proposed kernels we want to run a separate test
for each, so that the test can run under that particular kernel.
2015-09-14 16:07:44 +02:00
Martin Pitt
16f4d7c8b2 AutoPackageTest.submit(): Factor out calculation of requests 2015-09-14 15:51:46 +02:00
Martin Pitt
90596ff8b0 autopkgtest: Include triggering packages in AMQP requests
With this, tests can do special things when they get triggered by a particular
package. E. g. "linux" or "gcc" could skip their "rebuild myself" test if they
were triggered by a new version of themselves (as opposed to a new binutils).
This is particularly aimed at DKMS tests which need to install the triggering
kernel (e. g. -generic vs. -generic-lts-backport-XXX).
2015-08-31 14:30:07 +02:00
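
A sketch of a test request that carries its triggering packages; the queue
naming and message format shown here are assumptions, only the idea of passing
the trigger along is from the commit:

    import json

    def build_request(release, arch, src, triggers):
        queue = 'debci-%s-%s' % (release, arch)       # hypothetical queue name
        params = json.dumps({'triggers': triggers})   # e.g. ["linux-meta/4.2.0.10.11"]
        return queue, '%s %s' % (src, params)

    queue, body = build_request('wily', 'amd64', 'bcmwl-kernel-source',
                                ['linux-meta/4.2.0.10.11'])
    print(queue, body)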
Martin Pitt
c195f87ba5 autopkgtest: Trigger lxc tests for linux updates
New kernels are prone to break LXC. In https://bugs.debian.org/779559 there is
a proposal for a flexible approach to add extra "reverse test dependencies".
Hardcode this trigger until this gets implemented.
2015-08-28 06:58:12 +02:00
Martin Pitt
ec83f7aaff autopkgtest: Trigger DKMS packages for linux-meta-* backports too 2015-08-28 06:44:12 +02:00
Martin Pitt
78aa12994c autopkgtest: Trigger DKMS packages for new linux-meta uploads
At the kernel team's request we want to trigger DKMS package tests on new
kernel uploads, to ensure that we don't regress them with newer kernels.

Pretend that linux-meta builds the "dkms" binary, so that the existing reverse
dependency magic takes care of the actual triggering.

Note that this needs to be "linux-meta", not "linux", so that tests will
actually use the new kernel (via dist-upgrade).
2015-08-26 16:24:13 +02:00
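
A rough illustration of the "pretend linux-meta builds dkms" trick; all names
and data structures below are hypothetical, only the idea is from the commit:

    # Binaries built by each source package, as britney might see them.
    builds = {
        'linux-meta': ['linux-generic', 'linux-image-generic'],
        'dkms': ['dkms'],
    }
    # Test packages depending on the 'dkms' binary.
    rdeps_of_binary = {'dkms': ['bcmwl-kernel-source', 'virtualbox']}

    # Injecting a fake 'dkms' binary into linux-meta makes the ordinary
    # reverse-dependency expansion trigger all DKMS package tests for it.
    builds['linux-meta'].append('dkms')

    triggered = {r for b in builds['linux-meta'] for r in rdeps_of_binary.get(b, [])}
    print(sorted(triggered))   # -> ['bcmwl-kernel-source', 'virtualbox']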
Martin Pitt
71b07bc66a Add structured test results to Excuse objects
Add Excuse.addtest() for adding a test type/package/arch/result, so that the
excuses YAML will get structured test results instead of pre-formatted HTML.
Move the HTML rendering into Excuse.html() instead.

This supports a "test type" whose only value is "autopkgtest" right now, but
we will have "boottest", and perhaps "piuparts" and other tests, in the future.

Drop the "(<ver> is unbuilt/uninstallable)" note from excuses.html as this is
really a per-architecture property, not a per-tested-source one. This needs to
be re-thought and generalized.
2015-08-25 12:21:51 +02:00
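
A minimal sketch of the structured-results idea; the addtest()/html() names
follow the commit, but the signature, storage layout, and HTML output are
assumptions:

    class Excuse:
        def __init__(self, name):
            self.name = name
            self.tests = {}   # test type -> package -> arch -> result

        def addtest(self, type_, package, arch, result):
            self.tests.setdefault(type_, {}).setdefault(package, {})[arch] = result

        def html(self):
            lines = ['<li>%s</li>' % self.name]
            for type_, pkgs in sorted(self.tests.items()):
                for pkg, arches in sorted(pkgs.items()):
                    for arch, result in sorted(arches.items()):
                        lines.append('<li>%s for %s/%s: %s</li>'
                                     % (type_, pkg, arch, result))
            return '\n'.join(lines)

    e = Excuse('glib2.0')
    e.addtest('autopkgtest', 'gdk-pixbuf', 'amd64', 'PASS')
    print(e.html())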
Martin Pitt
32f33baf09 Merge with trunk, port to Python 3 2015-08-24 20:46:42 +02:00
Martin Pitt
c59033afae autopkgtest: Check for existing test results for unstable version too
r472 added >= version matching to the results evaluation. But we also must do
this in add_test_request() so that we avoid requesting a test for the testing
version over and over again if we get results for the unstable version only.
But here it is enough to only check the requested version and the unstable
version (if that's higher).
2015-08-24 10:58:27 +02:00
Martin Pitt
6db26ca1c6 autopkgtest: Don't cache results for undefined versions
If a result.tar does not contain a testpkg-version, we must still match it
against pending.txt, but we must not add it to the results cache. Otherwise it
ends up as a "null" version key (JSON's serialization of None), which becomes
an actual version string once the cache is read back.
2015-08-18 22:53:14 +02:00
Martin Pitt
380e3fca64 autopkgtest: Check for test results from newer package version than the requested one
There are scenarios when britney requests a package test for a particular
version but we actually get a result for a later version:

 * When britney runs, the later version is not built yet and thus is in
   excludes; but by the time the test actually runs, the package is built.

 * We don't support running tests for a given older (source) version yet, tests
   always get run from the latest unstable source even if that isn't built yet.

Thus we need to consider results >= the requested version. However, we prefer a
successful result for the originally requested version so that we can continue
to remove a broken version from unstable. This is already covered by
TestAutoPkgTest.test_remove_from_unstable.
2015-08-17 21:55:18 +02:00
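
A sketch of the ">= requested version, but prefer an exact pass" rule, using
python-apt for Debian version comparison; the surrounding data structures are
illustrative:

    import functools
    import apt_pkg
    apt_pkg.init_system()

    def pick_result(requested_ver, results):
        """results: list of (version, passed). Return the result to use, or None."""
        usable = [(v, p) for (v, p) in results
                  if apt_pkg.version_compare(v, requested_ver) >= 0]
        if not usable:
            return None
        # Prefer a pass for the exact requested version, so that removing a
        # broken unstable version can still let the testing version migrate.
        for v, p in usable:
            if v == requested_ver and p:
                return (v, p)
        usable.sort(key=functools.cmp_to_key(
            lambda a, b: apt_pkg.version_compare(a[0], b[0])))
        return usable[-1]

    print(pick_result('1.0-1', [('1.0-1', True), ('1.1-1', False)]))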
Martin Pitt
e85c59b46a Always require ADT_{AMQP,SWIFT_URL} with ADT_ENABLE
Disabling AMQP requests with "ADT_ENABLE = yes" but ADT_AMQP unset made sense
while we still supported adt-britney. But as that's gone now, let's use the
ADT_ENABLE switch only, and if it's on, require ADT_AMQP and ADT_SWIFT_URL be
set.

This simplifies the code a bit and is less confusing.
2015-08-14 09:54:18 +02:00
Martin Pitt
42e1ac635d Autopkgtest.request(): Don't ignore excluded packages
We already handle the exclusions in tests_for_source() (and run the testing
version if appropriate), so don't unconditionally skip requests for those.

Adjust the TestAutoPkgTest.test_rdepends_unbuilt case to catch that: The "run
britney once to pick up previous results" was a thinko as this already
satisfies all tests for green 2.
2015-08-14 09:39:26 +02:00
Martin Pitt
6c3dd0a3e2 Fix KeyError crash for sources which are only in unstable
The previous commit introduced a KeyError crash in tests_for_source() for
packages which are unbuilt/uninstallable and only present in unstable.

Ignore these in tests_for_source() as they can't possibly be a regression for
their dependencies, and there is no sensible way to run a test for them.
2015-08-13 09:36:47 +02:00
Martin Pitt
c9173b3ca3 Promote packages with unbuilt reverse dependencies if testing version succeeds
Commit 463 ("Don't promote packages with unbuilt reverse dependencies") turned
out to be too strict: This holds up too many innocent packages in -proposed.

If unstable has an unbuilt/uninstallable reverse dependency D of a package P,
trigger a test anyway (which will then most likely run against the testing
version of D). If that succeeds, the unstable P did not break D and can be
accepted. If it fails, D needs to be fixed.

Ideally we would set up some clever apt pinning to force installation of
testing-D, to avoid running into the uninstallability of unstable-D, but this
is tricky and error prone.

Drop the temporary "UNINST" state from commit 466 again. Instead, excuses.html
will now show a test against the testing version of D together with a note that
the unstable version is unbuilt/uninstallable.

This should ideally clear up all cases where a requested result is neither
present nor pending. Log an error if that still happens (will be checked in the
next couple of runs), and ensure in the tests that we don't trigger any
outstanding "FIXME" log messages.
2015-08-13 08:31:55 +02:00
Martin Pitt
65c6e4df2a Clarify status of excluded reverse dependencies
Commit 463 introduced waiting on reverse dependencies which are not built or
installable yet, but set their status as "RUNNING". This is confusing as there
is no actual test in progress yet.

Instead, set their status to a new UNINST value, displaying as
"Unbuilt/uninstallable".
2015-08-12 16:52:15 +02:00
Martin Pitt
ee0d5096de Fix KeyError from last commit, needs more thorough debugging when that happens 2015-08-11 17:19:02 +02:00
Martin Pitt
ca9987def8 Don't promote packages with unbuilt reverse dependencies
If a reverse dependency D of a package P is not built yet, then D will be in
"exclusions" as we can't sensibly run D's tests at that time. In that case,
don't just ignore the missing test result but consider D's test as "in
progress".

Note that this might lead to stalling an innocent P if a broken (FTBFS) D gets
uploaded at the same time. This can/should be handled by overrides if fixing
D isn't appropriate, but this is better than allowing P to break D in that
situation.
2015-08-11 08:01:04 +02:00
Martin Pitt
4e5ed1739d merge trunk 2015-08-04 07:36:42 +02:00
Martin Pitt
027404b6e7 Run autopkgtests for DKMS packages
They are being tested through autodep8.
2015-08-03 18:16:18 +02:00
Martin Pitt
e32af66634 autopkgtest: Wait for Swift results for correct triggering package
Swift results were considered for older versions of triggers instead of waiting
for results for the actual package/version that triggered a new test.

This broke due to two reasons:

 * When evaluating the test results we need to check whether we have a result
   for the tested package/version that got triggered by the current excuse, not
   just for any older excuse.

 * AutoPackageTest.fetch_swift_results() re-downloaded all results for a
   package due to a wrong "marker" value: The marker needs to be the
   complete object path, not just the timestamp suffix. This caused old test
   results to be considered as "newer than the given marker".
2015-07-31 12:42:07 +02:00
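
A sketch of a Swift object listing using a correct marker; prefix and marker
are standard Swift listing parameters, while the container layout and object
names shown are assumptions:

    import urllib.parse
    import urllib.request

    def list_new_results(swift_url, container, prefix, last_object=None):
        query = {'prefix': prefix}
        if last_object:
            # The marker must be the complete object path of the newest result we
            # already processed, e.g. 'wily/amd64/g/glib2.0/20150730_123456/result.tar',
            # not just its timestamp suffix.
            query['marker'] = last_object
        url = '%s/%s?%s' % (swift_url, container, urllib.parse.urlencode(query))
        with urllib.request.urlopen(url) as response:
            if response.status == 204:   # container has no (new) objects
                return []
            return response.read().decode().splitlines()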
Martin Pitt
48905892c8 Drop obsolete adt-britney autopkgtest code
Now that we look at autopkgtest results from swift we can drop the
adt-britney/lp:auto-package-testing code from autopkgtest.py.
Note that we still need it for boottest.py.

Adjust TestBoottestEnd2End.test_with_adt() for cloud results.
2015-07-31 09:49:01 +02:00
Martin Pitt
31e647f113 Switch autopkgtest evaluation to cloud results
Change AutoPackageTest.results() to evaluate the Swift results instead of the
adt-britney ones.

TODO:
 - Add more tests (like for adt-britney)
 - Drop triggering of adt-britney tests
 - Drop adt-britney tests (which fail now)
 - Adjust TestBoottestEnd2End.test_with_adt
2015-07-31 09:41:51 +02:00
Martin Pitt
89be9112d3 Alphabetically sort cloud autopkgtest results 2015-07-31 09:18:17 +02:00
Martin Pitt
269b156def Drop obsolete adt-britney autopkgtest code
Now that we look at autopkgtest results from swift we can drop the
adt-britney/lp:auto-package-testing code from autopkgtest.py.

Note that we still need it for boottest.py.
2015-07-28 11:46:17 +02:00
Martin Pitt
2bf6eb5652 Switch autopkgtest evaluation to cloud results
Change AutoPackageTest.results() to evaluate the Swift results instead of the
adt-britney ones.

TODO:
 - Add more tests (like for adt-britney)
 - Drop triggering of adt-britney tests
 - Drop adt-britney tests (which fail now)
2015-07-28 11:04:34 +02:00
Martin Pitt
76287b50ca Track "ever passed" in results cache
Add a bool recording whether there is any successful test of src/arch for any
version. This will be used for detecting "regression" vs. "always failed".

WARNING: This changes the results.cache format, so results.cache has to be
removed and recreated before deploying this.
2015-07-28 10:46:30 +02:00
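
The rough shape of a results.cache entry with the new flag; the field names
below are illustrative, only the "ever passed" bool itself is from the commit:

    cache_entry = {
        'glib2.0': {
            'amd64': {
                'ever_passed': True,   # some version passed on this arch at least once
                'latest': {'version': '2.45.2-1', 'passed': False},
            }
        }
    }

    def verdict(src, arch, cache):
        entry = cache[src][arch]
        if entry['latest']['passed']:
            return 'PASS'
        return 'REGRESSION' if entry['ever_passed'] else 'ALWAYS FAILED'

    print(verdict('glib2.0', 'amd64', cache_entry))   # -> REGRESSION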
Martin Pitt
967dc07c21 Consider manually re-run failed tests for reverse dependencies
Commit 446 only considered a package's own tests. But we also need to check for
newer results of failed reverse dependency tests. Introduce a new
failed_tests_for_trigger() helper which computes the failed (src, arch) tests
for a given package, and fetch new results for all of them.
2015-07-15 09:49:06 +02:00
Martin Pitt
3e7c808e1c Consider manually re-run failed tests
When collecting results, not only check pending tests, but also new results for
failed tests. This picks up new test results from manual retries which might
now have succeeded.
2015-07-15 08:22:37 +02:00
Martin Pitt
e3ad79bdfb Don't ignore incomplete result.tar files
These usually stem from repeatedly tmpfailing runs where we did not even get as
far as unpacking the source (e. g. repeatedly hitting the ceiling of max
allowed instances/CPUs/etc.). In that case, consider this run a tmpfail result,
instead of ignoring it, as otherwise we end up with that entry being orphaned
in pending.txt.
2015-07-14 18:11:39 +02:00
Martin Pitt
ffe0a99db1 swift result download: Correctly handle "204 No content" status 2015-07-14 08:34:46 +02:00
Martin Pitt
ce775eeb5d Add test results from swift
Until now, autopkgtest results were triggered via an external "adt-britney"
command from lp:auto-package-testing. This required a lot of state files and
duplicated effort, used hardcoded absolute paths to these external tools, and
was quite hard to understand and maintain. We also want to move away from
Jenkins and rsyncing state files.

Directly retrieve autopkgtest results from a publicly readable and browsable
Swift container, with a debci-compatible layout
(https://wiki.debian.org/debci/DistributedSpec). This now tracks both requests
and results on a per-architecture granularity, so that we can track
per-architecture regressions/always-failed.

Introduce a new ADT_SWIFT_URL config option that sets the swift base URL. If
this key is not set, the behaviour does not change compared to previous
versions, and no results will be retrieved from the cloud.

This still keeps the old adt-britney requests/results as the authoritative
data and for now merely shows the swift results in addition. With that we can
compare the results and run the cloud testing in parallel to find/fix problems
until we switch over. Due to that, the code added to britney.py is temporary,
does *not* use AutoPackageTest.results(), and instead just reads the internal
results map.
2015-07-10 06:21:46 +02:00
Martin Pitt
6e167a0343 Track architectures in requested/pending tests
This is necessary so that we can properly match requested to received results
when the latter arrive in different runs for different architectures.

This also opens up the possibility of per-arch blacklisting later.
2015-07-07 11:59:07 +02:00
Martin Pitt
bf470c6da0 Use current reverse dep version instead of None/-
This makes tracking test results easier and removes some special cases.
2015-07-07 11:11:44 +02:00
Martin Pitt
9c59f35af4 AutoPkgTest.tests_for_source(): Avoid reporting duplicate results 2015-07-07 08:05:19 +02:00
Martin Pitt
49500104ae AutoPkgTest.tests_for_source(): Don't trip over NBS binaries 2015-07-06 15:02:21 +02:00
Martin Pitt
e0c4ec15b6 AutoPkgTest.update_pending_tests(): Reset self.requested_tests after merging into self.pending_tests 2015-07-06 11:31:21 +02:00