We just had the autopkgtest queues DoSed because britney was crashing
after requesting a test for each reverse dependency of a perl upload, but before
it had written pending.json out so it knew what not to request again.
This was 25,000 requests per arch...
Let's write pending.json straight after sending each request, so that
the next run - even after a crash - won't re-request the same things
again.
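Roughly, the idea is the following (a sketch only; send_test_request and the
pending layout are illustrative, not the real code):

    import json

    def request_tests(requests, pending, pending_file, send_test_request):
        # pending maps trigger -> list of [package, arch] pairs already requested
        for package, arch, trigger in requests:
            if [package, arch] in pending.get(trigger, []):
                continue  # already requested by a previous (possibly crashed) run
            send_test_request(package, arch, trigger)
            pending.setdefault(trigger, []).append([package, arch])
            # persist immediately so a crash cannot lose what was already sent
            with open(pending_file, 'w') as f:
                json.dump(pending, f)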
These have a hash appended.
We don't actually use the baseline retrying, which is where the ID
parsing is used, but we might as well handle this, not least so we don't
crash.
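Something along these lines (the exact ID format here is an assumption; the
point is to tolerate the suffix rather than crash):

    import re

    def parse_run_id(run_id):
        # run IDs now carry a hash suffix; take the leading timestamp part and
        # return None for anything unexpected instead of raising
        m = re.match(r'^(\d{8}_\d{6})', run_id)
        return m.group(1) if m else None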
The apt version comparison sorts 'blacklisted' greater than most version
numbers, which means that we accidentally apply force hints for version
'blacklisted' to all uploads. Since this is the only case of a hacked
version number, let's special case it so that 'blacklisted' hints only
match packages with 'blacklisted' version.
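A sketch of the special case (the helper name is illustrative; apt_pkg
provides the version comparison):

    import apt_pkg
    apt_pkg.init()

    def force_hint_applies(hint_version, candidate_version):
        # 'blacklisted' is a fake version that apt sorts above most real
        # version numbers, so a plain comparison would match nearly every upload
        if hint_version == 'blacklisted':
            return candidate_version == 'blacklisted'
        return apt_pkg.version_compare(candidate_version, hint_version) <= 0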
We're supposed to synthesise an "unknown" version for these, but a bug
in the worker meant we didn't do that in some cases and these leaked
into swift. Let's repair it client-side.
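Client-side, that amounts to something like (the field name is an assumption):

    def result_version(result):
        # the worker bug left the version out of some results that leaked into
        # swift; synthesise "unknown" client-side when reading them back
        return result.get('version') or 'unknown'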
Most DKMS packages do not declare Testsuite: autopkgtest-pkg-dkms, but
we can detect this anyway, and this way we can enforce that the module
is buildable.
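Detection can be a heuristic along these lines (illustrative; the real check
may differ):

    def is_dkms_source(binaries):
        # treat the source as a DKMS package if any of its binaries depends on
        # dkms, even without an explicit Testsuite: autopkgtest-pkg-dkms field
        for binary in binaries:
            depends = binary.get('Depends', '')
            names = [alt.split()[0]
                     for dep in depends.split(',') if dep.strip()
                     for alt in dep.split('|') if alt.strip()]
            if 'dkms' in names:
                return True
        return False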
It works like this. We wait until all tests have finished running and
then grab their results. If there are any regressions, we mail each bug
with a link to pending-sru.html. There's a state file which records the
mails we've sent out, so that we don't mail the same bug multiple times.
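Sketch of the flow (send_mail and the file layout are illustrative):

    import json, os

    def mail_regressions(regressions, statefile, send_mail, url):
        # regressions maps bug number -> source package; the state file records
        # bugs we have already mailed so reruns never mail the same bug twice
        sent = set()
        if os.path.exists(statefile):
            with open(statefile) as f:
                sent = set(json.load(f))
        for bug, source in sorted(regressions.items()):
            if bug in sent:
                continue
            send_mail(bug, 'autopkgtest regression for %s, see %s' % (source, url))
            sent.add(bug)
        with open(statefile, 'w') as f:
            json.dump(sorted(sent), f)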
We want to treat linux-$flavor and linux-meta-$flavor as one set in
britney which goes in together or not at all. We never want to promote
linux-$flavor without the accompanying linux-meta-$flavor.
Add a new LinuxPolicy which runs after most of the other policies, which
invalidates linux-foo if linux-meta-foo is invalid.
Sometimes (for the LinuxPolicy), excuses don't have enough information
when they are processed. They need to defer some of their work until
another excuse has been evaluated and then perhaps get invalidated.
Introduce the concept of "external invalidation" to deal with this. A
policy can keep references to other excuses around, and then invalidate
them later (when processing another one). The excuse finder needs to
know this happened to remove the excuse from the list of "actionable
excuses".
For example, the Linux policy invalidates linux-foo if linux-meta-foo is
invalid. If linux-foo is encountered first, we will not yet know if
linux-meta-foo is going to be invalid. We effectively defer the
evaluation until after linux-meta-foo has been dealt with, and then we
know whether we need to invalidate linux-foo or not.
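In sketch form, the deferral looks like this (class and attribute names are
illustrative, not the real britney objects):

    class Excuse:
        def __init__(self, source):
            self.source = source
            self.is_valid = True

    class LinuxPolicy:
        def __init__(self):
            self.waiting = {}   # 'linux-meta-foo' -> deferred excuse for 'linux-foo'

        def process(self, excuse):
            src = excuse.source
            if src.startswith('linux-meta-'):
                deferred = self.waiting.pop(src, None)
                if deferred is not None and not excuse.is_valid:
                    # external invalidation: linux-foo was processed earlier,
                    # so mark it invalid now and let the excuse finder drop it
                    # from the actionable excuses
                    deferred.is_valid = False
            elif src.startswith('linux-'):
                # cannot decide yet: defer until linux-meta-$flavour has been
                # evaluated (this sketch only handles the linux-foo-first order)
                self.waiting['linux-meta-' + src[len('linux-'):]] = excuse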
For packages with lots of reverse dependencies, new versions of those reverse
dependencies may keep on showing up in testing. If migration is blocked until
there are results for these new versions, migration may take extremely long. If
there are results for the current trigger, but for the previous version of the
reverse dependency, use those until the fresh results are available.
The same applies to the reference runs.
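Roughly (the result layout and helper name are illustrative):

    import apt_pkg
    apt_pkg.init()

    def usable_result(results, package, current_version, trigger):
        # prefer a result for the current version of the reverse dependency,
        # but fall back to the newest result for an older version against the
        # same trigger until the fresh result is available
        best = None
        for result in results:
            if result['package'] != package or result['trigger'] != trigger:
                continue
            if result['version'] == current_version:
                return result
            if best is None or apt_pkg.version_compare(result['version'],
                                                       best['version']) > 0:
                best = result
        return best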
Hint to allow smooth update, even if the section isn't allowed in the
configuration.
Note that this takes the source name and the source version IN TESTING
of the binaries that must be allowed to stay around to allow a smooth
update.
Currently, britney only schedules reference runs when they don't exist. It does
strip out runs against older versions of the autopkgtest, but the current version
may exist for a while and the reference run can be old. So, add an option to
ignore old results.
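With the new option it could look roughly like this (option and threshold
names are made up):

    from datetime import datetime, timedelta

    def reference_run_usable(run_date, ignore_old_results,
                             max_age=timedelta(days=14)):
        # with the option enabled, an existing reference run that is too old
        # no longer counts, so a fresh reference run gets scheduled instead
        if not ignore_old_results:
            return True
        return datetime.utcnow() - run_date <= max_age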
When the architectures are read from the Release file, they are not
defined in the config file.
Adding 'all' as an architecture will not give the correct result. To
avoid confusion, explicitly check for this and error out if it is added.
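The check amounts to something like (names are illustrative):

    def parse_architectures(value):
        # value is the space-separated architecture list from the config file
        architectures = value.split()
        if 'all' in architectures:
            raise ValueError(
                "'all' must not be listed as an architecture; arch:all "
                "binaries are handled through the real architectures")
        return architectures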
We only want to add packages which conflict in testing, but don't
conflict in unstable. For those, the newer version (from unstable)
probably fixes the conflict.
When an arch:all binary is uninstallable on a non-nobreakall arch, don't
block it, but still mark it as uninstallable, so the autopkgtest policy
knows not to schedule tests for it.
Currently, when a package is uninstallable on an arch, no autopkgtests for that arch are
triggered and the autopkgtest policy blocks migration. However, it's not the job of the
autopkgtest policy to judge uninstallability, and packages that build an arch:all package
that just isn't installable on the autopkgtest arch should not be blocked for this.
Closes: #918620
Use the info based on the binaries in the Suite object, instead of the
inst_tester. This should also include packages that are in testing, but
are not installable.
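The intent, as a sketch (names are illustrative; the real code works on the
Suite and excuse objects):

    def record_uninstallable(excuse, binary_arch, arch,
                             nobreakall_arches, break_arches):
        # always record the uninstallability so the autopkgtest policy knows
        # not to schedule tests for this architecture
        excuse.uninstallable_on.add(arch)
        # but only block migration where breaking installability matters
        if binary_arch == 'all' and arch not in nobreakall_arches:
            return
        if arch in break_arches:
            return
        excuse.is_valid = False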
Rework the dependencies between excuses.
Dependencies are specified based on packages (source or binary). Based
on these packages, dependencies on other excuses (that have these
packages) are calculated.
As with package dependencies, dependencies between excuses can contain
alternatives.
Each alternative can satisfy the dependency, so the excuse only becomes
invalid if all of the alternatives of a specific dependency are
invalidated.
Tracking of the alternatives and their validity is moved to separate
objects.
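Roughly, such an object looks like this (simplified; the real names differ):

    class DependencySpec:
        # one dependency of an excuse; each alternative is another excuse that
        # can satisfy it, so the depending excuse only becomes invalid once
        # all alternatives have been invalidated
        def __init__(self, depending_excuse, alternatives):
            self.depending_excuse = depending_excuse
            self.alternatives = list(alternatives)

        def is_satisfiable(self):
            return any(e.is_valid for e in self.alternatives)

        def invalidate_if_needed(self):
            if not self.is_satisfiable():
                self.depending_excuse.is_valid = False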
Adding a source parameter to the SourcePackage allows finding the source
the object is referring to, so that parameter doesn't have to be passed
along separately.
Signed-off-by: Ivo De Decker <ivodd@debian.org>
Because autopkgtest failure in a package is now a serious bug (RC), packages
that don't opt-in should not be tested. Due to the way autopkgtest and autodep8
work together, packages that don't declare they have an autopkgtest can still be
tested. However, maintainers should not be forced to say their package doesn't
work with autodep8 by introducing a fake test.
When there is no test in testing, and a failing test in unstable, don't
allow the package to migrate. Doing so would make it instantly RC-buggy.
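The verdict logic is roughly the following (result labels are illustrative):

    def verdict(result_in_unstable, result_in_testing):
        if result_in_unstable != 'FAIL':
            return 'PASS'
        if result_in_testing == 'FAIL':
            # the test already failed in testing: not a regression, don't block
            return 'ALWAYSFAIL'
        # no test in testing (None) or a previously passing one: letting the
        # package in would make it instantly RC-buggy, so treat it as a regression
        return 'REGRESSION'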
Signed-off-by: Ivo De Decker <ivodd@debian.org>
If a new pseudo-essential package was added to testing *and* it is
mutually-exclusive with an existing member, britney would incorrectly
flag it as uninstallable (unless another change triggered a cache
invalidation of the pseudo-essential set).
With this commit, the cache invalidation is now done based on whether
we add something that *might* be in the (effective) pseudo-essential
set. It is an overapproximation as there are cases where the
invalidation is unnecessary, but at the moment it is not a performance
concern.
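Simplified, the new invalidation condition is something like this (the data
structures are illustrative):

    def register_binary(binary, essential_like_names, cached_pseudo_essential):
        # invalidate the cached pseudo-essential set whenever the new binary
        # *might* belong to it; an overapproximation, but cheap in practice
        if binary.is_essential or binary.name in essential_like_names:
            cached_pseudo_essential.clear()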
Closes: https://bugs.debian.org/944190
Signed-off-by: Niels Thykier <niels@thykier.net>
The optimizations rely on assumptions that are not valid with the allow-uninst
hint and have not been corrected for it. When these assumptions are violated,
we end up with invalid nuninst counters.
Signed-off-by: Niels Thykier <niels@thykier.net>
The spec for Release files says that (at least) one of "Suite" and
"Codename" will be defined. This implies that "Suite" is optional
in some cases and SuiteLoader now correctly falls back to using
"Codename" in that case.
Closes: https://bugs.debian.org/942104
Signed-off-by: Niels Thykier <niels@thykier.net>
When an allow-uninst hint is added for an unversioned binary package, items
are allowed to migrate, even if they make that binary package uninstallable
(on the architecture specified in the hint, if one was specified, or on all
architectures otherwise).
Using this hint should avoid using force-hint to allow specific breakage.
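For example (the exact hint syntax shown here is an assumption):

    allow-uninst some-binary/armhf
    allow-uninst some-binary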
Signed-off-by: Ivo De Decker <ivodd@debian.org>
The code that was supposed to check uninstallability of existing binaries was
broken:
- it only checked arch: all on nobreakall archs, not arch any
- it checked if the new binary was in testing (this never happens)
- it didn't work when binaries from both unstable and tpu are needed
Note that _excuse_unsat_deps() now handles nobreakall correctly.
Don't show information about unsatisfiable depends that is not blocking
migration:
* arch all not on nobreakall arch
* arch any or arch all on breakarch