Where possible, avoid creating a list only to discard immediately
afterwards. Example:
"""
for x in sorted([x for x in ...]):
    ...
"""
This creates a list and passes it to sorted(), which copies it into a
new list and sorts the copy. Since sorted() accepts any iterable, we can
avoid the "inner" list and simply pass it a generator expression
instead.
Signed-off-by: Niels Thykier <niels@thykier.net>
By moving the package loop inside register_reverses, the function is
invoked far less often, reducing the overhead of function calls.
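A rough sketch of the change (names and arguments are illustrative):
"""
# before: one function invocation per package
for pkg in packages:
    register_reverses(pkg, ...)

# after: a single invocation; the loop over the packages moves inside
register_reverses(packages, ...)
"""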
Signed-off-by: Niels Thykier <niels@thykier.net>
Apart from some "minor differences" they were computing the same "tree"
(read: "graph"), so merge them into one (get_reverse_tree) and
properly document the return value and special cases.
Signed-off-by: Niels Thykier <niels@thykier.net>
Rewrite the arguments of find_upgraded_binaries so that it does not take
an instance of MigrationItem. We want to call it at a point where no
MigrationItems have been created yet.
Signed-off-by: Niels Thykier <niels@thykier.net>
A removal hint will generate both source and per-arch excuses if the
version of the source package differs between testing and unstable. If
the source versions are the same then only the per-arch excuses will
be generated.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
In rare cases with hints with overlapping virtual packages provided by
different sources, this can make a difference.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
If there are multiple versions of an arch:all package in unstable (due
to outdated or no longer built arch:any packages) then only one of them
should be recorded in the list of binary packages built from the source
package. Otherwise we may try to remove the binary package from various
lists multiple times, leading to crashes.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
Given a source which provides two packages and has different versions
in testing and unstable, binNMUs in unstable corresponding to the
older source version should not be considered as migration candidates.
For example:
testing
-------
source   1
  bin    1      arch1
  bin    1      arch2

unstable
--------
source   2
  bin    2      arch1
  bin    1+b1   arch2
The binary migration on arch2 should not be considered a candidate.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
Although this should never happen, rather than crashing if one of the
versions is None, simply indicate that they are unequal.
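A minimal sketch of the idea, using plain string equality for
illustration:
"""
def same_version(a, b):
    # a missing version is reported as unequal instead of crashing
    if a is None or b is None:
        return False
    return a == b
"""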
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
Although this isn't an issue during normal runs, the excuses might be
built multiple times during a hint-tester run and should not accumulate
during the run.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
For those hints which don't cause an immediate run (i.e. other than
easy, hint and force-hint), re-build the excuses after adding the
hint so that the actions are accounted for in later hints.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
A binNMU does not rebuild architecture:all packages. For migrations via
unstable this is not a problem as the packages corresponding to the
source upload are still present. However, for *pu migrations, the set of
packages considered only includes architecture-specific packages. In
order to avoid installability issues with packages in testing which
depend on the arch:all packages, we leave the existing arch:all packages
in testing and only consider the arch-specific packages for migration.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
The test only needs to consider whether any binaries exist on a given
arch, not how many of them there are (or indeed which binaries they are).
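In Python terms the presence check can short-circuit instead of
building and counting a list; a sketch with illustrative names:
"""
# stops at the first binary found on the architecture; no list, no count
has_binaries = any(b.architecture == arch for b in binaries)
"""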
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
When checking whether a tpu source has built on a particular arch, we
should only consider binaries produced by the latest version of the
source package in tpu.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
Originally when binNMUs for packages in testing were scheduled, the
binaries would be installed into tpu with no accompanying source. This
allowed the "removed binary" portions of should_upgrade_srcarch() to be
skipped (as britney had generated a faux source record).
dak now adds the source package to tpu in such cases, which leads to the
"removed binary" checks being applied to binNMUs in tpu with potentially
destructive consequences. For example, if a package with amd64 and i386
binaries in testing were binNMUed on just amd64, britney would notice
that there were no i386 binaries in tpu and subsequently remove the i386
binaries from testing as well.
In order to resolve this, we skip the check for removed binaries when
building excuses for a binary-only migration via *pu.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
The primary difference between the parsing / output of excuses for *pu
and unstable unblocks lies in the messages displayed. We can therefore remove
some duplication by having the same code handle both, outputting the
appropriate message.
Where a *pu package is also the subject of a "block" (most likely during
a freeze) we only supply the "needs approval" or "approved" message;
previously both "needs approval" and "not touching due to block" were
output, which is redundant. We ensure that there is always a dummy
"block" hint for *pu packages to provide the "needs approval" behaviour.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
An "approve" hint is effectively an unblock for tpu packages and britney
is already quite happy to parse "unblock $pkg/$tpuversion".
We allow the old name to be used for compatibility and replace it with
"unblock" internally.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
A dependency on an arch-specific package which is not a valid candidate
should lead to the depending package not being a candidate.
For now we ensure that the generated excuses output remains the same,
so that we don't have to wait for consumers to adapt to a new format.
Changing the output format should be revisited at a later point.
See Debian bug #693068.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
The code using the variables was refactored in 694d614b. As a result
they were still set in iter_packages() but never subsequently used.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
Previously a package which became obsolete during a run would not be
automatically removed until the next run. This was due to the fact that
sources[][BINARIES] is not updated during the run. Instead, we build a
list of source packages which produce at least one binary and then
remove any packages not in that list.
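A sketch of the approach with illustrative names:
"""
# collect every source that still produces at least one binary
active_sources = set()
for binary in all_binaries:
    active_sources.add(binary.source)

# any source not in that set has become obsolete during this run
for src in list(sources):
    if src not in active_sources:
        del sources[src]
"""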
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
"not force and not earlyabort" simplifies to "not earlyabort" rather
than "not force", as an easy hint would set "earlyabort" but not
"force".
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
All callers of get_reverse_tree compute the same modification of its
return value, so move that computation into get_reverse_tree.
Signed-off-by: Niels Thykier <niels@thykier.net>
When processing a hint of the form "easy pkgX libX" where libX would be
a candidate for smooth updates because pkgX/testing depends on it but
pkgX/unstable does not, and there are no other reverse dependencies,
the old binary from libX can simply be dropped straight away.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
The only test currently implemented is to ensure that any prospective
hint contains at least one item beyond the hint name. This prevents
lines in a hint file consisting simply of e.g. "easy" being added to
the hint list and causing later processing to abort with an error.
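A minimal sketch of the check (illustrative, not the actual parser):
"""
def hint_is_sane(line):
    # "easy foo/1.2-3" passes; a bare "easy" is rejected
    parts = line.split()
    return len(parts) >= 2
"""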
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
This causes Multi-arch dependencies like "pkg:i386" to show up as
unsatisfiable in excuses.
Previously, the dependency would be checked on the wrong architecture
(if available), causing the package to become a valid candidate. The
package would still be prevented from migrating as the installability
checker does not know of the "pkg:i386" package.
Signed-off-by: Niels Thykier <niels@thykier.net>
In the unsat_deps case, it was used to update a field in the excuse,
but the field was never read anywhere.
Signed-off-by: Niels Thykier <niels@thykier.net>
Use is_valid in the "html" method to determine whether to write "Valid
candidate" or not. This avoids the occasional:
* Valid candidate
* Invalidated by dependency
* Not considered
Signed-off-by: Niels Thykier <niels@thykier.net>
generate_package_list had the unintended side-effect of regenerating
self.excuses (on top of the original excuses).
Signed-off-by: Niels Thykier <niels@thykier.net>
Presumably there once was a reason for having a notion of "depth",
but these days only three "values" are given as "maxdepth":
* "easy" (for easy hint)
* 0 (for "main run" or in a "hint"-hint)
* -1 (for force-hint)
Signed-off-by: Niels Thykier <niels@thykier.net>
This allows britney to load a python2.7 variant of the C module when
run under python2.7.
Note that for python3, we add "python3" rather than "python3.Y". This is
to reflect the include path in the python3 package in the archive.
Signed-off-by: Niels Thykier <niels@thykier.net>
"affected" is not allowed to contain duplicates. Since the current
method of removing any such duplicates is
affected = list(set(affected))
the order of "affected" is not important.
As a side effect, make get_reverse_tree return a set instead of a
list as its return value is always inserted into affected.
Signed-off-by: Niels Thykier <niels@thykier.net>
Features like the auto-hinter, smooth-upgrades and removal of obsolete
source packages are now unconditionally enabled.
Signed-off-by: Niels Thykier <niels@thykier.net>
An "obsolete source" is one which produces no binaries. This situation
generally arises when all of the binaries which used to be produced by
the source package are now built by other sources.
Sooner or later such sources will probably be auto-crufted from unstable,
but there's no real reason to keep them in testing in the meantime.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
The initial packages of a hint are HintItems, whereas other packages
considered whilst processing the hint will be MigrationItems. In
either case, the version information is irrelevant during the output
of hint processing and only displaying it for some items is confusing
and distracting. Therefore, whilst processing a hint we always use
unversioned names.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
Remove unused "excluded" argument from get_dependency_solvers and
excuse_unsat_deps. The argument was always either the default value or
an empty list.
Signed-off-by: Niels Thykier <niels@thykier.net>
Simply updating the include in britney-py.c, rebuilding and changing
the shebang of britney.py appears to be enough to make the switch in my
tests, so we just do that for now at least.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
If a hint item references unstable but its version is not correct for
that suite, we compare the version to the (t)pu version (if any) and
if a match is found, update the item as if it had been explicitly
specified as applying to that suite originally.
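A sketch of the retargeting with hypothetical helper names:
"""
unstable_ver = version_of(item.package, 'unstable')
if item.suite == 'unstable' and item.version != unstable_ver:
    for suite in ('tpu', 'pu'):
        if item.version == version_of(item.package, suite):
            # treat the hint as if it had named this suite originally
            item.suite = suite
            break
"""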
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
The feature is used to remove binaries left by smooth-updates and is not
exposed as an available hint type.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
Commit 94071b1649 excluded intra-source
dependencies from the determination as to whether a binary package was
eligible for smooth updates. Whilst this works in many cases, there
are situations where it breaks migration. For instance:
foo depends on libdropped1
libdropped1 depends on libdropped2
libdropped1 and libdropped2 are built from the same source; foo from
another source
libdropped2 is otherwise a leaf package in testing
In order to resolve this, we build a list of all packages which might
be eligible and filter out those which have reverse-dependencies outside
of their source package. For each remaining package, we consider it
eligible if its intra-source reverse-dependencies are within the list
of packages already determined to be eligible.
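A sketch of the two-stage filter described above (helper names are
hypothetical):
"""
candidates = possible_smooth_update_binaries()

# drop anything with reverse-dependencies outside its own source package
same_source_only = [p for p in candidates
                    if not has_rdeps_outside_source(p)]

# a package is eligible if its intra-source reverse-dependencies are
# themselves in the filtered list
eligible = [p for p in same_source_only
            if all(r in same_source_only for r in intra_source_rdeps(p))]
"""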
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
The minimal set comprises only the first level of (reverse)
dependencies, before any further iterations of packages are added to
the set. In some cases, the result of the full iteration will contain
packages which cause problems when migrated but the minimal set,
although possibly a less optimal solution, may be able to migrate
successfully.
It is assumed that migrating the larger set of packages will be
preferred if possible, so minimal sets are tried later.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
In order to make a number of the changes required for the migration simpler,
we also complete the previous migration to using {Hint,Migration}Item rather
than passing around strings representing packages and converting between
the two forms in several places.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
The use of MigrationItem allows us to centralise the parsing and splitting of
package names and architectures, avoiding duplication and simplifying a
number of conditions.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
Rather than only considering pairs of packages, we start from a "leaf"
package (i.e. one with an excuse which declares no dependencies on
other packages' excuses) and recursively build a list of packages
which are the dependency or reverse dependency of a package already
in the list.
Any list which is a subset of another list is ignored and the remaining
items are then processed as "easy" hints.
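A sketch of the grouping (hypothetical helpers; the real auto-hinter
works on excuses):
"""
groups = []
for leaf in leaf_excuses():        # excuses that depend on no other excuse
    group, stack = set(), [leaf]
    while stack:
        e = stack.pop()
        if e in group:
            continue
        group.add(e)
        stack.extend(deps_of(e))   # excuses this one depends on
        stack.extend(rdeps_of(e))  # excuses that depend on this one
    groups.append(group)

# drop any group that is a subset of another; try the rest as "easy" hints
candidates = [g for g in groups if not any(g < other for other in groups)]
"""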
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
Previously we could not reliably detect whether an excuse's dependency
from a source package to a binNMU was valid, as the excuse did not
contain sufficient information to determine the set of architecture(s)
on which the dependency existed.
By modifying the representation of the dependency list in the excuse to
include an architecture list we can walk the relationships in reverse
in order to sanity-check the source -> binNMU dependency.
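For illustration, the richer representation might map each dependency to
the architectures on which it applies (attribute names here are
hypothetical):
"""
excuse.deps = {
    'libfoo': ['amd64', 'i386'],   # dependency present on these arches
    'libbar': ['armel'],           # architecture-specific dependency
}
"""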
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
When considering an excuse for pkg1/arch, a dependency on either
pkg2/source or pkg2/arch should be considered acceptable so long
as there is a corresponding excuse.
Dependencies from pkg1/source to pkg2/arch will still be considered
"impossible", as pkg1's excuse does not contain any information
regarding the architecture(s) on which its dependency on pkg2 exists.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
This allows the consideration of packages from proposed-updates to be
{dis,en}abled depending on whether the configuration file specifies
the path to the packages / sources files.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
This is most likely to be useful near the beginning of a release cycle,
when the versions of a package in stable and testing are the same and
the new version of the package is unable to migrate from unstable for
some reason.
Signed-off-by: Adam D. Barratt <adam@adam-barratt.org.uk>
The permissions issues which led to the writing being disabled no longer
exist, and not persisting the date list makes b2 unsuitable for use as a
primary implementation.