Bug#851350: ITP: jitescript -- Java API for generating JVM bytecode

2017-01-14 Thread Miguel Landaeta
Package: wnpp
Severity: wishlist
Owner: Miguel Landaeta 

* Package name: jitescript
  Version : 0.4.1
  Upstream Author : Douglas Campos 
* URL : https://github.com/qmx/jitescript
* License : Apache-2.0
  Programming Lang: Java
  Description : Java API for generating JVM bytecode

 jitescript provides a convenient domain-specific language around the
 popular ASM Java library for bytecode generation. It is modeled after
 a Ruby library called BiteScript with similar functionality.
 .
 The goal is to offer a Java library with a similar API so that
 bytecode generation can be as nice in Java as BiteScript makes
 it in JRuby.

 - It is a dependency for JRuby 9.1.x.x.
 - It will be maintained in the Debian Java team.

-- 
Miguel Landaeta, nomadium at debian.org
secure email with PGP 0x6E608B637D8967E9 available at http://miguel.cc/key.
"Faith means not wanting to know what is true." -- Nietzsche




Re: Test instance of our infrastructure

2017-01-14 Thread Bálint Réczey
Hi,

2016-12-08 23:24 GMT+01:00 Paul Wise :
> On Mon, Nov 28, 2016 at 8:04 PM, Ian Jackson wrote:
>
>> Should we not have public test instances of all these things ?
>
> If this will increase the bus factor of Debian services, that would be great.
> If this will just be a time sink for the people involved, that would
> be less great.
> On balance, it sounds like the vagrant suggestion is the best trade-off here.

The DSA team already relies on puppet for setting up new machines [1].

Migrating all the configuration to puppet could help DSA's work and would
also enable setting up private/public test instances in an automated way.

The puppet packages seem to be in good shape in Debian and adding
a few more modules could cover most of DSA's needs. I would happily
contribute by packaging a few modules.

Providing a puppet module could be a criterion for accepting new services,
but I'm not part of the DSA team, this is just a suggestion. :-)

Cheers,
Balint

[1] https://dsa.debian.org/howto/new-machine/



installing kernel debug symbols on stretch?

2017-01-14 Thread Daniel Pocock


I notice the dbg package for the kernel was moved, but it doesn't appear
to be installable.


I've added the necessary entry to /etc/apt/sources.list:


deb http://debug.mirrors.debian.org/debian-debug/ stretch-debug main non-free contrib


and then I try to get the package:


# apt-get install -t stretch-debug linux-image-amd64-dbgsym
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 linux-image-amd64-dbgsym : Depends: linux-image-4.8.0-2-amd64-dbgsym
but it is not installable
E: Unable to correct problems, you have held broken packages.



It looks like the kernel was built over a week ago:

# uname -a
Linux srv1 4.8.0-2-amd64 #1 SMP Debian 4.8.15-2 (2017-01-04) x86_64 GNU/Linux

so would the dbgsym package still be in Incoming?

Regards,

Daniel



Re: Auto reject if autopkgtest of reverse dependencies fail or cause FTBFS

2017-01-14 Thread Ole Streicher
Paul Gevers  writes:
> One can always file bug reports against the release.debian.org pseudo
> package to ask for britney to ignore the autopkgtest result.

This would again concentrate work on a relatively small team.

> One other thing that I can envision (but maybe too early to agree on or
> set in stone) is that we lower the NMU criteria for fixing (or
> temporarily disabling) autopkgtest in ones reverse dependencies. In
> the end, personally I don't think this is up to the "relevant
> maintainers" but up to the release team. And I assume that badly
> maintained autopkgtest will just be a good reason to kick a package
> out of testing.

I already brought an example where the autopkgtest is well maintained but
keeps failing.

And I think that it is the package maintainers who have the experience
of whether a CI test failure is critical or not.

BTW, at the moment the CI tests are done in unstable -- if you want to
kick a package out of *testing*, you need to test the new unstable
package against it, which would require some change in the logic of
autopkgtest.

>> What is the reason not to use automated bug reports here? This would
>> allow to use all the tools the bug system has: severities, reassigning
>> closing etc.
>
> The largest reason is that it didn't cross my mind yet and nobody else
> except you has raised the idea so far.

I already fail to understand this with the piuparts blocker: we have an
established workflow for problems with packages that need some
intervention, and that is bugs.d.o. This has a lot of very nice
features, like:

 * discussion of the problem attached to the problem itself and stored
   for reference
 * formal documentation of problem solving in the changelog (Closes: #)
 * severities, tags, re-assignments, affects etc.
 * maintainer notifications, migration blocks, autoremovals etc.
 * documented manual intervention possible

I don't see a feature that one would need for piuparts complaints or for
CI test failures that is not in our bug system. And (I am not sure)
aren't package conflict bugs already autogenerated?

I would really prefer to use the bug system instead of something else.

> One caveat that I see though is which system should hold the
> logic. The current idea is that it is britney that determines which
> combinations need to be tested and thus can use the result straight
> away for the migration decision.

> As Martin Pitt described in the thread I referenced in my first reply,
> Ubuntu already experimented with this and they came to the conclusion
> that it didn't really work if two entities have to try and keep the
> logic in sync.

I don't see the need to keep things in sync: If a new failure is
detected, it creates an RC bug against the migration candidate, with an
"affects" to the package that failed the test. The maintainer then has
the following options:

 * solve the problem in his own package, upload a new revision, and close
   the bug there

 * re-assign the problem to the package that failed the test if the
   problem lies there. In this case, that maintainer can decide whether
   the problem is RC, and if not, then lower the severity.

In any case, the maintainers can follow the established workflow, and if
one needs to look up the problems a year later, one can just search for
the bug.

What else would you need to keep in sync?

>>> Possible autopkgtest extension: "Restrictions: unreliable"?
>> 
>> This is not specific enough. Often you have some tests that are
>> unreliable, and others that are important. Since one usually takes the
>> upstream test suite (which may be huge), one has to look manually first
>> to decide about the further processing.
>
> Then maybe change the wording: not-blocking, for-info, too-sensitive,
> ignore or 

The problem is that a test suite is not that homogeneous, and often one
doesn't know that ahead of time. For example, the test suite of one of my
packages (python-astropy) has almost 9000 individual tests. Some of them
are critical and influence the behaviour of the whole package, but others
are for a small subsystem and/or a very special case. I have no
documentation of the importance of each individual test; this I decide
on when I see a failure (in cooperation with upstream). What's more: these
9000 tests are combined into *one* autopkgtest result. What should I put
there?

> If you know your test suite needs investigation, you can have it not
> automatically block. But depending on the outcome of the
> investigation, you can still file (RC) bugs.

But then we are where we are already today: almost all tests of my
packages are "a bit complex", so I would just mark them all as
non-blocking. But then I would need to file the bugs myself, and
especially then there would be no formal link between the test failure
and the bug.

> Why I am so motivated on doing this is because I really believe this is
> going to improve the quality of the release and the release process.

As I already wrote: I really appreciate autopkgtest, and I would like to
h

Re: Auto reject if autopkgtest of reverse dependencies fail or cause FTBFS

2017-01-14 Thread Ole Streicher
Colin Watson  writes:
> On Fri, Jan 13, 2017 at 07:35:10PM +, Simon McVittie wrote:
>> Possible autopkgtest extension: "Restrictions: unreliable"?
>
> May as well just use "Restrictions: allow-stderr" and "... || true".
> That's easier to do on a more fine-grained level, too.
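For reference, Colin's suggestion would look roughly like this in a
package's debian/tests/control (stanza layout per DEP-8; the test names
here are hypothetical):

```
Tests: critical-subset
Depends: @

Tests: noisy-subset
Restrictions: allow-stderr
Depends: @
```

Inside the noisy-subset script, individual commands that are allowed to
fail would then be run as `some-test || true`.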

As in my deprecation example: I *want* autopkgtest to fail when
a deprecation warning appears. I also want to keep the failure until it
is solved, so I would not like to just override it.

It is just a non-critical CI test failure.

BTW, this was just the simplest example. Others (in python-astropy, for
example) are internal checks that no warnings were emitted during a
certain test. These will fail if a deprecation warning pops up (even if
it is not written to stderr), but they are still non-critical.

If I had to limit the CI tests to critical ones, I would probably
switch them off completely: most of the failures I have experienced so
far are not critical at all. But that would be counterproductive.

Best regards

Ole



Bug#851358: ITP: python-pybedtools -- Python wrapper around BEDTools for bioinformatics work

2017-01-14 Thread Michael R. Crusoe
Package: wnpp
Severity: wishlist
Owner: Debian Med team 

* Package name: python-pybedtools
  Version : 0.7.8
  Upstream Author : Ryan Dale 
* URL : https://github.com/daler/pybedtools
* License : GPL 2+
  Programming Lang: Python, Cython
  Description : Python wrapper around BEDTools for bioinformatics work


 The BEDTools suite of programs is widely used for genomic interval
 manipulation or “genome algebra”. pybedtools wraps and extends BEDTools
 and offers feature-level manipulations from within Python.

Dependency for bcbio; it will be team-maintained by Debian Med.


ITP: toil -- cross-platform workflow engine

2017-01-14 Thread Steffen Möller
Package: wnpp
Severity: wishlist
Owner: Steffen Moeller 

* Package name: toil
  Version : 3.5.0~alpha
* URL : https://github.com/BD2KGenomics/toil
* License : Apache
  Programming Lang: Python
  Description : cross-platform workflow engine

The package will be maintained in the Debian Med git repository.
Toil works together with the Common Workflow Language to distribute
processes in distributed environments, meaning clouds and
clusters alike.



Re: Auto reject if autopkgtest of reverse dependencies fail or cause FTBFS

2017-01-14 Thread Adam D. Barratt
On Sat, 2017-01-14 at 11:05 +0100, Ole Streicher wrote:
> I don't see the need to keep things in sync: If a new failure is
> detected, it creates an RC bug against the migration candidate, with an
> "affects" to the package that failed the test. The maintainer then has
> the possibilities:
> 
>  * solve the problem in his own package, upload a new revision, and close
>the bug there
> 
>  * re-assign the problem to the package that failed the test if the
>    problem lies there. In this case, that maintainer can decide if the
>    problem is RC, and if not, then lower the severity.
> 
> In any case, the maintainers can follow the established workflow, and if
> one needs to look up the problems a year later, one can just search for
> the bug.

You missed the (not at all hypothetical) case where the maintainer:

 * downgrades the bug, regardless of the practical impact of the failure,
   just so her package can migrate.

Regards,

Adam



Re: Auto reject if autopkgtest of reverse dependencies fail or cause FTBFS

2017-01-14 Thread Ian Jackson
Paul Gevers writes ("Re: Auto reject if autopkgtest of reverse dependencies 
fail or cause FTBFS"):
> On 01/13/17 21:05, Ole Streicher wrote:
> > Simon McVittie  writes:
> >> On Fri, 13 Jan 2017 at 18:22:53 +, Ian Jackson wrote:
> >>> Maybe an intermediate position would be to respond to a CI failure by:
> >>>  * Increasing the migration delay for the affecting package
> 
> I like this and will suggest it to the release team. Especially for the
> start up time.

I definitely think we should start with this.  It provides a good
incentive to add tests to one's package: namely, advance notice of
problems which occur with newer dependencies.
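As a sketch of that intermediate position: the delay values below mirror
the traditional per-urgency testing ages, but the function and the
penalty are hypothetical, not britney's actual implementation.

```python
# Sketch of "increase the migration delay on CI failure".
# DEFAULT_DELAYS mirrors the traditional urgency ages (days);
# CI_FAILURE_PENALTY is a made-up number for illustration.

DEFAULT_DELAYS = {"low": 10, "medium": 5, "high": 2}
CI_FAILURE_PENALTY = 10  # extra days when a reverse-dependency test fails

def required_age(urgency: str, ci_failed: bool) -> int:
    """Days a package must sit in unstable before migrating to testing."""
    delay = DEFAULT_DELAYS.get(urgency, DEFAULT_DELAYS["medium"])
    if ci_failed:
        # Do not block outright; just give maintainers advance notice
        # and time to react before the package reaches testing.
        delay += CI_FAILURE_PENALTY
    return delay
```

So a medium-urgency upload that breaks a reverse dependency's tests would
wait 15 days instead of 5, rather than being rejected outright.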

But there are a lot of things that I think we are going to have to
work out.  Some of them have been mentioned in this thread.

At the moment we (Debian) (1) have very little experience of how
autopkgtests will work in practice and (2) haven't really tackled any of
the social questions.  The Ubuntu experience is valuable for (1) but
Ubuntu has a very different social structure, so doesn't tell us much
about (2).

Questions which will come to the fore include: if a new version of a
core package A breaks an "unimportant" leaf package B, such that B
becomes RC-buggy, is that an RC bug in A ?  The only coherent answer
is "yes" but if B is just "too wrong" or unfixable, at some point
something will have to give.  I think our social structures will come
under some additional strain.

> One can always file bug reports against the release.debian.org pseudo
> package to ask for britney to ignore the autopkgtest result.

I think that if autopkgtests are a success, there will be far too much
of this for the release team to be involved in first-line response.

Since the autopkgtests are controlled by the depending package, I
suggest that there should be a way for the depending package
maintainer to provide this information and control the way the tests
affect migrations.

The information should be kept not in the depending package's source
tree but in some kind of management system, because uploads are
disruptive in this context.  We could use the BTS: one way
would be for the autopkgtest analyser to look for a bug with a new
kind of tag "this bug causes broken tests".  Ideally there would be a
way to specify the specific failing tests.

If the bug is actually in the dep package, but the maintainer of the
rdep with the failing tests wants it not to block migration of the
dep, they would still file a bug against the rdep and mark it blocked
in the bts by the bug in the dep.

This way our existing rule that the maintainer of a package is (at
least in the first instance) in charge of the bugs against their
package extends naturally to giving the rdep first instance control
over migration of deps which cause test failures.

That is consistent with the principle of providing an incentive for
adding tests.  It also provides a way to work around broken tests that
is not throwing the package out of the release.  That is very
important because otherwise adding tests is a risky move: your package
might be removed from testing as a result of your excess of zeal.

The release team would become involved if the dep maintainer and the
rdep maintainer disagree.  Ie, if the dep maintainer wants such a
"broken test" bug to exist, and the rdep maintainer does not, then
the rdep maintainer would ask release@.  The existing principle that
the release team are the first escalation point for disagreements
about testing migration (currently, RC bug severity) extends naturally
to this case.

> > What is the reason not to use automated bug reports here? This would
> > allow to use all the tools the bug system has: severities, reassigning
> > closing etc.

The difficulty with automated bug reports is this: how do you tell
whether something is the same bug or not ?

If you're not careful, a test which fails 50% of the time will result
in an endless stream of new bugs from the CI system which then get
auto-closed...

(If there are bugs, we want them to auto-close because no matter how
hard we try, test failures due to "weather" will always occur some of
the time.  Closing such bugs by hand would be annoying.)

Thanks,
Ian.



Re: Auto reject if autopkgtest of reverse dependencies fail or cause FTBFS

2017-01-14 Thread Ian Jackson
Ole Streicher writes ("Re: Auto reject if autopkgtest of reverse dependencies 
fail or cause FTBFS"):
> I don't see the need to keep things in sync: If a new failure is
> detected, it creates an RC bug against the migration candidate, with an
> "affects" to the package that failed the test.

I prefer my other suggestion, that humans should write bugs if
necessary to unblock migration.  Because:

 * It eliminates a timing problem, where the testing migration
   infrastructure[1] needs to somehow decide whether the tests have
   been run.  (This is needed because in the future we may want to
   accelerate migration, perhaps dramatically when there are lots of
   tests; and then, the testing queue may be longer than the minimum
   migration delay.)

 * See my other mail about the problems I anticipate with
   automatically opened bug reports.

> > Then maybe change the wording: not-blocking, for-info, too-sensitive,
> > ignore or 
> 
> The problem is that a test suite is not that homogeneous, and often one
> doesn't know that ahead of time. For example, the test suite of one of my
> packages (python-astropy) has almost 9000 individual tests. Some of them
> are critical and influence the behaviour of the whole package, but others
> are for a small subsystem and/or a very special case. I have no
> documentation of the importance of each individual test; this I decide
> on when I see a failure (in cooperation with upstream). What's more: these
> 9000 tests are combined into *one* autopkgtest result. What should I put
> there?

You should help enhance autopkgtest so that a single test script can
report the results of multiple tests.  This will involve some new protocol
for those test scripts.
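One existing convention such a protocol could borrow from is TAP (the
Test Anything Protocol): a plan line followed by one "ok"/"not ok" line
per sub-test. A minimal sketch of the reporting side (illustrative only,
not a current autopkgtest feature):

```python
# Sketch: report many sub-test results from one test script using
# TAP-style output (plan line, then one result line per sub-test).

def report(results):
    """results: sequence of (name, passed) pairs -> list of TAP lines."""
    lines = ["1..%d" % len(results)]  # the TAP plan line
    for i, (name, ok) in enumerate(results, 1):
        status = "ok" if ok else "not ok"
        lines.append("%s %d - %s" % (status, i, name))
    return lines
```

A consumer on the britney side could then treat each "not ok" line as an
individually overridable result instead of one all-or-nothing failure.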

Ian.

-- 
Ian JacksonThese opinions are my own.

If I emailed you from an address @fyvzl.net or @evade.org.uk, that is
a private address which bypasses my fierce spamfilter.



Re: "not authorised" doing various desktoppy things

2017-01-14 Thread Ian Jackson
Martín Ferrari writes ("Re: "not authorised" doing various desktoppy things"):
> On 03/01/17 17:05, Ian Jackson wrote:
> > Recently, my nm-applet can no longer control my network-manager
> > daemon.  I get a message saying[1]:
> 
> Did you ever get to the root of this?

No.  I have been too busy dealing with bugs in my own packages.  I
lost an awful lot of time to the dgit corrupted commits bug :-/ and am
now very behind on almost everything.

> I am having the same kind of problem (no way to control networking,
> removable media, power settings) and a similar set-up (sysvinit,
> lightdm, cinnamon). I have been experiencing these kinds of problems for
> months, but usually they would go away after an apt-get upgrade. Not
> now, and I am afraid that we will ship stretch with this problem.

Indeed.  I may get some time tomorrow to look at this properly, but
please don't wait for me.

Ian.

-- 
Ian JacksonThese opinions are my own.

If I emailed you from an address @fyvzl.net or @evade.org.uk, that is
a private address which bypasses my fierce spamfilter.



Bug#851378: ITP: python-h5netcdf -- netCDF4 support for Python via h5py

2017-01-14 Thread Ghislain Antony Vaillant
Package: wnpp
Severity: wishlist
Owner: Ghislain Antony Vaillant 

* Package name: python-h5netcdf
  Version : 0.3.1
  Upstream Author : Stephan Hoyer 
* URL : https://github.com/shoyer/h5netcdf
* License : BSD
  Programming Lang: Python
  Description : netCDF4 support for Python via h5py

Long-Description:
 A Python interface for the netCDF4 file format that reads and writes
 HDF5 files directly via h5py, without relying on the Unidata netCDF
 library.

This package will be maintained by the Debian Science Team. It is a
dependency of the future src:python-xarray.



Re: Auto reject if autopkgtest of reverse dependencies fail or cause FTBFS

2017-01-14 Thread Ole Streicher
Ian Jackson  writes:
> Ole Streicher writes ("Re: Auto reject if autopkgtest of reverse
> dependencies fail or cause FTBFS"):
>> I don't see the need to keep things in sync: If a new failure is
>> detected, it creates an RC bug against the migration candidate, with an
>> "affects" to the package that failed the test.
>
> I prefer my other suggestion, that humans should write bugs if
> necessary to unblock migration.  Because:
>
>  * It eliminates a timing problem, where the testing migration
>    infrastructure[1] needs to somehow decide whether the tests have
   ^^^ reference/footnote not found
>    been run.  (This is needed because in the future we may want to
>    accelerate migration, perhaps dramatically when there are lots of
>    tests; and then, the testing queue may be longer than the minimum
>    migration delay.)

I would not see this as a big problem: the bug can also be filed against
a migrated package, as with any other bug. Also, humans sometimes fail
to write bug reports during the sid quarantine, resulting in
autoremovals. I see no difference here.

>  * See my other mail about the problems I anticipate with
>automatically opened bug reports.

Just copying from your other mail:
> The difficulty with automated bug reports is this: how do you tell
> whether something is the same bug or not ?
>
> If you're not careful, a test which fails 50% of the time will result
> in an endless stream of new bugs from the CI system which then get
> auto-closed...

Just allow only one bug report per version pair, report only changes,
and don't report if another bug for the package pair is still
open. Either keep a local database with the required information, or
store this as metadata in the bug reports and query the BTS before
sending.

Basically the same procedure as one would do manually.
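That manual procedure can be sketched in Python, with a plain dict
standing in for the local database (a hypothetical helper, not an
existing tool; the real implementation would query the BTS instead):

```python
# Sketch of "one bug report per version pair": only file when this exact
# (dep_version, rdep_version) combination has not been reported yet and
# no earlier report for the same package pair is still open.

reported = {}  # (dep, rdep) -> {"versions": set of pairs, "open": bool}

def should_file(dep, dep_ver, rdep, rdep_ver):
    entry = reported.setdefault((dep, rdep), {"versions": set(), "open": False})
    pair = (dep_ver, rdep_ver)
    if pair in entry["versions"] or entry["open"]:
        return False  # already reported, or an older report is still open
    entry["versions"].add(pair)
    entry["open"] = True
    return True

def bug_closed(dep, rdep):
    """Record that the open report for this package pair was closed."""
    if (dep, rdep) in reported:
        reported[(dep, rdep)]["open"] = False
```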

> (If there are bugs, we want them to auto-close because no matter how
> hard we try, test failures due to "weather" will always occur some of
> the time.  Closing such bugs by hand would be annoying.)

I just had a lengthy (and unresolved) discussion with Santiago Vila
about weather-dependent build-time failures: https://bugs.debian.org/848859
While I disagree that those are RC, IMO occasional failures are useful
to report to the maintainer, without autoclosing.

>> > Then maybe change the wording: not-blocking, for-info, too-sensitive,
>> > ignore or 
>> 
>> The problem is that a test suite is not that homogeneous, and often one
>> doesn't know that ahead of time. For example, the test suite of one of my
>> packages (python-astropy) has almost 9000 individual tests. Some of them
>> are critical and influence the behaviour of the whole package, but others
>> are for a small subsystem and/or a very special case. I have no
>> documentation of the importance of each individual test; this I decide
>> on when I see a failure (in cooperation with upstream). What's more: these
>> 9000 tests are combined into *one* autopkgtest result. What should I put
>> there?
>
> You should help enhance autopkgtest so that a single test script can
> report the results of multiple tests.  This will involve some new protocol
> for those test scripts.

Sorry, but I can't evaluate all 9000 tests and categorize which are
RC and which are not -- this will not work. It is also not realistic to
force upstream to do so. The only thing I can do is reactively tag a
certain failure as RC or not.

Often the test infrastructure doesn't even have a way to mark a test as
xfail (like cmake), and even upstream can't say definitively which tests
are xfailing, so I already have enough to do keeping the tests in
good shape.

Best regards

Ole



soft freeze

2017-01-14 Thread Ralf Treinen
Hi, I was under the impression that during the soft freeze (i.e., now) new
upstream versions of packages that are already in testing are blocked
from migrating. However, I can't find anything to this effect in the
announcements by the release team. Can someone in the know please
confirm or correct this?

-Ralf.



Re: soft freeze

2017-01-14 Thread Sebastiaan Couwenberg
On 01/14/2017 03:31 PM, Ralf Treinen wrote:
> Hi, I was under the impression that during the soft freeze (i.e., now) new
> upstream versions of packages that are already in testing are blocked
> from migrating. However, I can't find anything to this effect in the
> announcements by the release team. Can someone in the know please
> confirm or correct this?

No new source packages are allowed into stretch; new upstream versions
of packages which are already in testing are still allowed.

"
 We have passed the 5th of January, which means that no new source
 packages will enter stretch as announced in [1].  This also applies to
 packages that have been (or will be) removed from stretch.
"

https://lists.debian.org/debian-devel-announce/2017/01/msg2.html

Kind Regards,

Bas

-- 
 GPG Key ID: 4096R/6750F10AE88D4AF1
Fingerprint: 8182 DE41 7056 408D 6146  50D1 6750 F10A E88D 4AF1



Re: soft freeze

2017-01-14 Thread Pirate Praveen

Sebastiaan Couwenberg  wrote:
> No new source packages are allowed into stretch, new upstream
> versions of packages which are in testing are still allowed.

Is this applicable to new dependencies required for updating an
existing package as well? For example, diaspora 0.6.0.0 is
already in testing, I'd like to update it to 0.6.2.0, but the new
upstream release depends on two packages not in testing
(useragent and secure_headers). Would this be allowed?



Re: soft freeze

2017-01-14 Thread Sebastiaan Couwenberg
On 01/14/2017 03:47 PM, Pirate Praveen wrote:
> Sebastiaan Couwenberg  wrote:
>> No new source packages are allowed into stretch, new upstream
>> versions of packages which are in testing are still allowed.
> 
> Is this applicable to new dependencies required for updating an
> existing package as well? For example, diaspora 0.6.0.0 is
> already in testing, I'd like to update it to 0.6.2.0, but the new
> upstream release depends on two packages not in testing
> (useragent and secure_headers). Would this be allowed?

No, because the source packages for the two dependencies are not already
in testing.

Kind Regards,

Bas

-- 
 GPG Key ID: 4096R/6750F10AE88D4AF1
Fingerprint: 8182 DE41 7056 408D 6146  50D1 6750 F10A E88D 4AF1





Bug#851392: ITP: truffleruby -- high performance Java implementation of the Ruby programming language

2017-01-14 Thread Miguel Landaeta
Package: wnpp
Severity: wishlist
Owner: Miguel Landaeta 

* Package name: truffleruby
  Version : TBD
  Upstream Author : Chris Seaton 
* URL : https://github.com/graalvm/truffleruby
* License : EPL-1.0/GPL-2.0/LGPL-2.1
  Programming Lang: Java/Ruby
  Description : high performance Java implementation of the Ruby 
programming language

 truffleruby (previously known as the Truffle runtime of JRuby) is an
 experimental implementation of an interpreter for JRuby using the
 Truffle AST interpreting framework and the Graal compiler.
 .
 It’s an alternative to the IR interpreter and bytecode compiler.
 The goal is to be significantly faster, simpler and to have more
 functionality than other implementations of Ruby.

 - It will be maintained in the Debian Java team.
 - Originally this was part of JRuby 9.x.x.x releases but the
   project has matured enough to be a project on its own.

-- 
Miguel Landaeta, nomadium at debian.org
secure email with PGP 0x6E608B637D8967E9 available at http://miguel.cc/key.
"Faith means not wanting to know what is true." -- Nietzsche




Re: Auto reject if autopkgtest of reverse dependencies fail or cause FTBFS

2017-01-14 Thread Steve Langasek
Hi Ole,

On Sat, Jan 14, 2017 at 11:05:48AM +0100, Ole Streicher wrote:
> > One other thing that I can envision (but maybe too early to agree on or
> > set in stone) is that we lower the NMU criteria for fixing (or
> > temporarily disabling) autopkgtest in ones reverse dependencies. In
> > the end, personally I don't think this is up to the "relevant
> > maintainers" but up to the release team. And I assume that badly
> > maintained autopkgtest will just be a good reason to kick a package
> > out of testing.

> I already brought an example where the autopkgtest is well maintained but
> keeps failing.

> And I think that it is the package maintainers who have the experience
> of whether a CI test failure is critical or not.

If the failure of the test is not critical, then it should not be used as a
gate for CI.  Which means you, as the package maintainer who knows that this
test failure is not critical, should fix your autopkgtest to not fail when
the non-critical test case fails.

Quite to the contrary of the claims in this thread that gating on
autopkgtests will create a bottleneck in the release team for overriding
test failures, this will have the effect of holding maintainers accountable
for the state of their autopkgtest results.  CI tests are only useful if you
have a known good baseline.  If your tests are flaky, or otherwise produce
failures that you think don't matter, then those test results are not useful
to anyone but yourself.  Please help us make the autopkgtests useful for
the whole project.

And the incentive for maintainers to keep their autopkgtests in place
instead of removing them altogether is that packages with succeeding
autopkgtests can have their testing transition time decreased from the
default.  (The release team agreed to this policy once upon a time, but I'm
not sure if this is wired up or if that will happen as part of Paul's work?)


> BTW, at the moment the CI tests are done in unstable -- if you want to
> kick a package out of *testing*, you need to test the new unstable
> package against it, which would require some change in the logic of
> autopkgtest.

The autopkgtest policy in Ubuntu's britney deployment includes all the logic
to do this.  Hopefully Paul can make good use of this when integrating into
Debian.

> >>> Possible autopkgtest extension: "Restrictions: unreliable"?

> >> This is not specific enough. Often you have some tests that are
> >> unreliable, and others that are important. Since one usually takes the
> >> upstream test suite (which may be huge), one has to look manually first
> >> to decide about the further processing.

> > Then maybe change the wording: not-blocking, for-info, too-sensitive,
> > ignore or 

> The problem is that a test suite is not that homogeneous, and often one
> doesn't know that ahead of time. For example, the test suite of one of my
> packages (python-astropy) has almost 9000 individual tests. Some of them
> are critical and influence the behaviour of the whole package, but others
> are for a small subsystem and/or a very special case. I have no
> documentation of the importance of each individual test; this I decide
> on when I see a failure (in cooperation with upstream). What's more: these
> 9000 tests are combined into *one* autopkgtest result. What should I put
> there?

The result of the autopkgtest should be whatever you as the maintainer think
is the appropriate level for gating.  Frankly, I think it's sophistry to
argue both that you care about seeing the results of the tests, and that you
don't want a failure of those tests to gate because they only apply to
"special cases".  We should all strive to continually raise the quality of
Debian releases, and using automated CI tests is an effective tool for this.
Bear in mind that this is as much about preventing someone else's package
from silently breaking yours in the release, as it is about your package
being blocked in unstable.  This is a bidirectional contract, which works
precisely if your autopkgtest is constructed to be a meaningful gate.
Having a clear gate is the only way to meaningfully scale out CI for the
number of components in Debian and have it actually drive quality of the
distribution.
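
For reference, a debian/tests/control along the following lines could split a
large upstream suite into a gating part and an informational part. Test names
and dependencies here are hypothetical; recent autopkgtest versions document a
"flaky" restriction for tests whose failures should not gate:

```
# Hypothetical split of a large upstream suite: the first stanza gates
# migration, the second only reports.
Tests: core-tests
Depends: @, python3-astropy

Tests: special-case-tests
Depends: @, python3-astropy
Restrictions: flaky
```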

I will say that looking at Ubuntu autopkgtest results for packages you're
the maintainer of, I see quite a few recent autopkgtest failures of packages
that are reverse-dependencies of python-astropy.  From my POV, that's a good
thing, and I'm happy that there were autopkgtests there that gated
python-astropy rather than letting it into the Ubuntu release in a state
that broke many of its reverse-dependencies (or at least, broke the tests).


> > If you know your test suite needs investigation, you can have it not
> > automatically block. But depending on the outcome of the
> > investigation, you can still file (RC) bugs.

> But then we are where we are already today: Almost all tests of my
> packages are "a bit complex", so I would just mark them all as
> non-blocking. But then I would n

Re: how to mount /(dev|run)/shm properly? (was Re: Auto reject if autopkgtest of reverse dependencies fail or cause FTBFS)

2017-01-14 Thread Steve Langasek
On Fri, Jan 13, 2017 at 03:54:30PM +, Simon McVittie wrote:
> If I'm reading the initscripts code correctly, sysvinit does the reverse
> by default, for some reason (/run/shm is the mount point and /dev/shm the
> symlink). I think the motivation might have been to be able to use the
> same tmpfs for /run and /run/shm,

I recall this being a misguided attempt to move it out of /dev "because it's
not a device".  The migration did not go well, especially in the face of
chroots that need to have it mounted, and since systemd did not handle this
the same way sysvinit had, we effectively now have a mess in the other
direction.

We should fix it so that everything again treats /dev/shm as the mountpoint.

-- 
Steve Langasek   Give me a lever long enough and a Free OS
Debian Developer   to set it on, and I can move the world.
Ubuntu Developerhttp://www.debian.org/
slanga...@ubuntu.com vor...@debian.org




Bug#851427: sysvinit makes /dev/shm a symlink to /run/shm, should be other way round

2017-01-14 Thread Simon McVittie
Package: initscripts
Version: 2.88dsf-59.8
Severity: normal

On Sat, 14 Jan 2017 at 11:00:51 -0800, Steve Langasek wrote:
> On Fri, Jan 13, 2017 at 03:54:30PM +, Simon McVittie wrote:
> > If I'm reading the initscripts code correctly, sysvinit does the reverse
> > by default, for some reason (/run/shm is the mount point and /dev/shm the
> > symlink). I think the motivation might have been to be able to use the
> > same tmpfs for /run and /run/shm,
> 
> I recall this being a misguided attempt to move it out of /dev "because it's
> not a device".  The migration did not go well, especially in the face of
> chroots that need to have it mounted, and since systemd did not handle this
> the same way sysvinit had, we effectively now have a mess in the other
> direction.
> 
> We should fix it so that everything again treats /dev/shm as the mountpoint.

Let's have a bug number for that, then. Please escalate its severity if you
think that's correct.

Steps to reproduce:

* install Debian (I used vmdebootstrap according to autopkgtest-virt-qemu(1))
* apt install sysvinit-core
* reboot
* mount
* ls -al /dev/shm /run/shm

Expected result:

* /dev/shm is a tmpfs
* /run/shm is a symlink with target /dev/shm

Actual result:

* /dev/shm is a symlink with target /run/shm
* /run/shm is a tmpfs



This might also be related to #697003, #818442.
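
As a quick way to check which layout a given system has, a small sketch using
only the Python standard library (the classification logic here is my own, not
taken from initscripts or systemd):

```python
#!/usr/bin/env python3
# Report which of /dev/shm and /run/shm is the real mount point and
# which is the compatibility symlink, as discussed in this thread.
import os

def classify(path):
    # Order matters: a symlink to a mount point is still reported as a
    # symlink, which is the distinction this bug is about.
    if os.path.islink(path):
        return "symlink -> " + os.readlink(path)
    if os.path.ismount(path):
        return "mount point"
    if os.path.isdir(path):
        return "plain directory"
    return "missing"

for p in ("/dev/shm", "/run/shm"):
    print(p, "is a", classify(p))
```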



Bug#851431: ITP: bart-view -- Image viewer for multi-dimensional data (add-on to BART)

2017-01-14 Thread Martin Uecker
Package: wnpp
Severity: wishlist
Owner: Martin Uecker 

* Package name: bart-view
  Version : 0.0.01
  Upstream Author : Martin Uecker 
* URL : https://github.com/mrirecon/view/
* License : BSD
  Programming Lang: C
  Description : Image viewer for multi-dimensional data (add-on to BART)

This is a viewer for multi-dimensional complex-valued
data. It is useful as an add-on to the Berkeley Advanced
Reconstruction Toolbox (BART) for computational Magnetic
Resonance Imaging which has been packaged for Debian
already. This package will be maintained as part of
the Debian Med team.



Re: soft freeze

2017-01-14 Thread Jonas Smedegaard
Quoting Ralf Treinen (2017-01-14 15:31:19)
> Hi, I was under the impression that during the soft freeze (i.e., now)
> new upstream versions of packages that are already in testing are
> blocked from migrating. However, I can't find anything to this effect 
> in the announcements by the release team. Can someone in the know
> please confirm, or correct?

I believe all official statements are collected here: 
https://release.debian.org/


 - Jonas

-- 
 * Jonas Smedegaard - idealist & Internet-arkitekt
 * Tlf.: +45 40843136  Website: http://dr.jones.dk/

 [x] quote me freely  [ ] ask before reusing  [ ] keep private





Re: how to mount /(dev|run)/shm properly? (was Re: Auto reject if autopkgtest of reverse dependencies fail or cause FTBFS)

2017-01-14 Thread Michael Biebl
Am 14.01.2017 um 20:00 schrieb Steve Langasek:
> On Fri, Jan 13, 2017 at 03:54:30PM +, Simon McVittie wrote:
>> If I'm reading the initscripts code correctly, sysvinit does the reverse
>> by default, for some reason (/run/shm is the mount point and /dev/shm the
>> symlink). I think the motivation might have been to be able to use the
>> same tmpfs for /run and /run/shm,
> 
> I recall this being a misguided attempt to move it out of /dev "because it's
> not a device".  The migration did not go well, especially in the face of
> chroots that need to have it mounted, and since systemd did not handle this
> the same way sysvinit had, we effectively now have a mess in the other
> direction.

The /run/shm symlink in systemd was added to minimize breakage when
doing the switch from sysvinit to systemd.

> We should fix it so that everything again treats /dev/shm as the mountpoint.

Nod, I'd be more than happy to drop the /run/shm symlink again from systemd.


-- 
Why is it that all of the instruments seeking intelligent life in the
universe are pointed away from Earth?





Bug#851444: ITP: python-xarray -- N-D labeled arrays and datasets in Python

2017-01-14 Thread Ghislain Antony Vaillant
Package: wnpp
Severity: wishlist
Owner: Ghislain Antony Vaillant 

* Package name: python-xarray
  Version : 0.8.2
  Upstream Author : xarray Developers
* URL : http://xarray.pydata.org
* License : Apache-2.0
  Programming Lang: Python
  Description : N-D labeled arrays and datasets in Python

Long-Description:
 xarray (formerly xray) is an open source project and Python package
 that aims to bring the labeled data power of pandas to the physical
 sciences, by providing N-dimensional variants of the core pandas data
 structures.
 .
 It provides a pandas-like and pandas-compatible toolkit for analytics
 on multi-dimensional arrays, rather than the tabular data for which
 pandas excels.

This package will be maintained by the Debian Science Team.
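
For illustration, a minimal example of the labeled selection the description
refers to (assumes the xarray and numpy packages are installed; the data
values and names are made up):

```python
# A 2-D field with named dimensions and coordinate labels, instead of
# the bare positional axes of a plain numpy array.
import numpy as np
import xarray as xr

temps = xr.DataArray(
    np.array([[11.2, 12.5], [10.8, 13.1]]),
    dims=("time", "station"),
    coords={"time": ["2017-01-13", "2017-01-14"],
            "station": ["A", "B"]},
)

# Selection by label rather than by integer index, pandas-style:
print(float(temps.sel(time="2017-01-14", station="B")))  # -> 13.1
```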



Re: how to mount /(dev|run)/shm properly? (was Re: Auto reject if autopkgtest of reverse dependencies fail or cause FTBFS)

2017-01-14 Thread Simon McVittie
On Sun, 15 Jan 2017 at 01:18:00 +0100, Michael Biebl wrote:
> Am 14.01.2017 um 20:00 schrieb Steve Langasek:
> > I recall this being a misguided attempt to move it out of /dev "because it's
> > not a device".  The migration did not go well, especially in the face of
> > chroots that need to have it mounted, and since systemd did not handle this
> > the same way sysvinit had, we effectively now have a mess in the other
> > direction.
> 
> The /run/shm symlink in systemd was added to minimize breakage when
> doing the switch from sysvinit to systemd

If I understand correctly, the objection was to how sysvinit behaves
(for which I have now opened #851427) - it puts the symlink at /dev/shm and
the real mount at /run/shm.

I don't think systemd is doing anything wrong here. Upstream systemd is
correct to mount the actual filesystem on /dev/shm, and IMO it's also
valid for Debian systemd to make the symlink.

> > We should fix it so that everything again treats /dev/shm as the mountpoint.
> 
> Nod, I'd be more than happy to drop the /run/shm symlink again from systemd.

This sounds like a job for post-stretch. Let's not remove low-cost
compatibility symlinks right now :-)

S