Bug#1002452: ITP: fq -- jq for binary formats

2021-12-22 Thread Daniel Milde
Package: wnpp
Severity: wishlist
Owner: Daniel Milde 
X-Debbugs-Cc: debian-devel@lists.debian.org

* Package name: fq
  Version : 0.0.2
  Upstream Author : Mattias Wadman
* URL : https://github.com/wader/fq
* License : Expat
  Programming Lang: Go
  Description : jq for binary formats 

Tool, language and decoders for inspecting binary data
in a similar manner to what jq does for JSON.

This package will be maintained under the Debian Go Team umbrella.



Bug#1002470: RFP: opensearch -- search engine, fork of Elasticsearch

2021-12-22 Thread Andrius Merkys
Package: wnpp
Severity: wishlist
X-Debbugs-Cc: debian-devel@lists.debian.org
Control: block -1 by 926714

* Package name: opensearch
  Version : 1.1.0
  Upstream Author : Amazon Web Services
* URL : https://www.opensearch.org
* License : Apache-2.0
  Programming Lang: Java
  Description : F/LOSS search and analytics suite

Starting with v7.11, Elasticsearch is dual-licensed under the SSPL and the
Elastic License, neither of which is DFSG-compatible. Amazon Web
Services forked the Apache-2.0 Elasticsearch source and released
OpenSearch as a F/LOSS fork of it.

Since some initial compatibility with Elasticsearch is promised,
OpenSearch might serve as a drop-in replacement for it. Thus some
F/LOSS software which depended on Elasticsearch (RDF4J, for example)
could be made to rely on OpenSearch instead.
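
For illustration, a minimal sketch assuming the opensearch-py client (which
mirrors the elasticsearch-py API); host, port and index name below are
placeholders:

    from opensearchpy import OpenSearch

    # Connect to a local node; adjust host/port as needed.
    client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

    # Index and fetch a document exactly as one would with elasticsearch-py.
    client.index(index="documents", id="1", body={"title": "hello"})
    print(client.get(index="documents", id="1")["_source"])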

The biggest blocker for now is gradle (see #926714): the build system
requires at least v6.6.

Andrius



Bug#1002480: ITP: pyxrd -- python implementation of the matrix algorithm for computer modeling of X-ray diffraction (XRD) patterns of disordered lamellar structures.

2021-12-22 Thread Roland Mas
Package: wnpp
Severity: wishlist
Owner: Roland Mas 
X-Debbugs-Cc: debian-devel@lists.debian.org

* Package name: pyxrd
  Version : 0.8.4
  Upstream Author : Mathijs Dumon 
* URL : https://github.com/PyXRD/PyXRD
* License : BSD-2-clause
  Programming Lang: Python
  Description : modeling of X-ray diffraction (XRD) patterns of disordered 
lamellar structures.

The full short description of the software is "python implementation
of the matrix algorithm for computer modeling of X-ray diffraction
(XRD) patterns of disordered lamellar structures."

This package will be maintained under the Debian Science Team
umbrella, and will be useful for the Debian Photons and Neutrons Team.



releasing major library change to unstable without coordination

2021-12-22 Thread Jonas Smedegaard
Hi fellow developers,

Is it normal and ok to upload a new major release of a library to 
unstable, without either a) testing that reverse dependencies do not 
break, or b) coordinating with maintainers of reverse dependencies 
_before_ such an upload?

Sure, accidents happen - but does the label "unstable" only mean that 
accidents can happen or also that coordination/warning is optional?

The reason for my question is bug#1001591, where (apart from my failures in 
getting my points across as clearly as I would have desired) the involved 
package maintainer and I seem to have very different views on the 
matter, and I would like to understand more generally whether I am living in 
some fantasy world different from common practices in Debian.


Regards,

 - Jonas

-- 
 * Jonas Smedegaard - idealist & Internet-arkitekt
 * Tlf.: +45 40843136  Website: http://dr.jones.dk/

 [x] quote me freely  [ ] ask before reusing  [ ] keep private



Re: releasing major library change to unstable without coordination

2021-12-22 Thread Samuel Thibault
Jonas Smedegaard, on Thu, 23 Dec 2021 00:45:23 +0100, wrote:
> Is it normal and ok to upload a new major release of a library to 
> unstable, without either a) testing that reverse dependencies do not 
> break, or b) coordinating with maintainers of reverse dependencies 
> _before_ such an upload?

Usually I'd upload to experimental first, for people to easily check and fix
their rdep packages, and notify them with an "important" bug. Then, after
some time, raise the severity to "serious" and upload to unstable.

Samuel



Re: releasing major library change to unstable without coordination

2021-12-22 Thread Rene Engelhard
Hi,

On 23.12.21 at 00:45, Jonas Smedegaard wrote:
> Is it normal and ok to upload a new major release of a library to 
> unstable, without either a) testing that reverse dependencies do not 
> break, or b) coordinating with maintainers of reverse dependencies 
> _before_ such an upload?

People are expected to do so (coordination/testing etc).


- Mistakes happen.


BUT:


- Apparently some people forgot this and deliberately don't follow it (and
I don't mean the accidents that can happen).

(In the specific case I have in mind, the maintainer just added a Breaks:
without telling anyone, so the "communication" was d-d-c and/or failing
autopkgtests...)


> Sure, accidents happen - but does the label "unstable" only mean that 
> accidents can happen or also that coordination/warning is optional?

I don't think it is (optional).


Regards,


Rene



Re: releasing major library change to unstable without coordination

2021-12-22 Thread Sandro Tosi
> People are expected to do so (coordination/testing etc).
>
>
> - Mistakes happen.
>
>
> BUT:
>
>
> - Apparently some people forgot this and deliberately don't follow it (and
> I don't mean the accidents that can happen).
>
> (In the specific case I have in mind, the maintainer just added a Breaks:
> without telling anyone, so the "communication" was d-d-c and/or failing
> autopkgtests...)

There's also a problem of resources: let's take the example of numpy,
which has 500+ rdeps. Am I expected to:

* rebuild all its reverse dependencies with the new version (see the rough
sketch further below)
* evaluate which packages failed, and whether those failures are due to the
new version of numpy or to an already existing/independent cause
* provide fixes that are compatible with both the current version and the
new one (because we can't break what we currently have and we need to
prepare for the new version)
* wait for all of the packages with issues to have applied the patch
and been uploaded to unstable
* finally upload the new version of numpy to unstable

?

That's unreasonably long, time-consuming and work-intensive, for several reasons:

* first and foremost, rebuilding 500 packages takes hardware resources not
every DD is expected to have at hand (or pay for, like a cloud
account), so until there's a ratt-as-a-service
(https://github.com/Debian/ratt) kind of solution available to every DD,
do not expect that for any sizable package, but maybe only for the
ones with the smallest package "networks" (which are also the ones
causing the least "damage" if something goes wrong),
* one maintainer vs many maintainers, one for each affected pkg;
distribute the load (pain?)
* upload to experimental and use autopkgtests, you say? Sure, that's one
way, but tracker.d.o currently doesn't show experimental excuses
(#944737, #991237), so you don't immediately see which packages failed,
and many packages still don't have autopkgtests, so that's not really
covering everything anyway
* sometimes I ask Lucas to do an archive rebuild with a new version,
but that's still relying on a single person to run the tests, parse
the build logs, and then open bugs for the failed packages; maybe most
of it is automated, but not all of it (and you can't really do this for
every pkg in Debian, because the archive rebuild tool needs 2 config
files for each package you want to test: 1. how to set up the build env
to use the new package, 2. the list of packages to rebuild in that
env).
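
(To make "rebuild all the rdeps" concrete, a rough sketch only, assuming a
sid system with deb-src entries and the build-dependencies of each rdep
already installed; "python3-numpy" is just an example:)

    import subprocess

    def reverse_deps(binary_pkg):
        # Ask apt for the reverse dependencies of the library.
        out = subprocess.run(["apt-cache", "rdepends", binary_pkg],
                             capture_output=True, text=True, check=True).stdout
        # Skip the two header lines and strip the "|" or-dependency markers.
        return sorted({line.strip().lstrip("|")
                       for line in out.splitlines()[2:] if line.strip()})

    failed = []
    for rdep in reverse_deps("python3-numpy"):
        # "apt-get source --compile" fetches the source package and runs
        # dpkg-buildpackage on it in the current environment.
        build = subprocess.run(["apt-get", "source", "--compile", rdep])
        if build.returncode != 0:
            failed.append(rdep)

    print(len(failed), "reverse dependencies need a closer look:", failed)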

What exactly are you expecting from other DDs?

Unstable is unstable for a reason, breakage will happen, nobody wants
to break other people's work/packages intentionally (I hope?), but
until we come up with a simple, effective technical solution to the
"build the rdeps and see what breaks" issue, we will upload to
unstable and see what breaks *right there*.

Maybe it's just laziness on my part, but there needs to be a cutoff
between making changes/progress and dealing with the consequences, and
walking on eggshells every time there's a new upstream release (or
even a patch!) and you need to upload a new pkg.

I choose making progress.

Cheers,
-- 
Sandro "morph" Tosi
My website: http://sandrotosi.me/
Me at Debian: http://wiki.debian.org/SandroTosi
Twitter: https://twitter.com/sandrotosi



Re: releasing major library change to unstable without coordination

2021-12-22 Thread Scott Kitterman



On December 23, 2021 12:24:16 AM UTC, Sandro Tosi  wrote:
>> People are expected to do so (coordination/testing etc).
>>
>>
>> - Mistakes happen.
>>
>>
>> BUT:
>>
>>
>> - Apparently some people forgot this and deliberately don't follow it (and
>> I don't mean the accidents that can happen).
>>
>> (In the specific case I have in mind, the maintainer just added a Breaks:
>> without telling anyone, so the "communication" was d-d-c and/or failing
>> autopkgtests...)
>
>There's also a problem of resources: let's take the example of numpy,
>which has 500+ rdeps. Am I expected to:
>
>* rebuild all its reverse dependencies with the new version
>* evaluate which packages failed, and whether those failures are due to the
>new version of numpy or to an already existing/independent cause
>* provide fixes that are compatible with both the current version and the
>new one (because we can't break what we currently have and we need to
>prepare for the new version)
>* wait for all of the packages with issues to have applied the patch
>and been uploaded to unstable
>* finally upload the new version of numpy to unstable
>
>?
>
>That's unreasonably long, time-consuming and work-intensive, for several reasons:
>
>* first and foremost, rebuilding 500 packages takes hardware resources not
>every DD is expected to have at hand (or pay for, like a cloud
>account), so until there's a ratt-as-a-service
>(https://github.com/Debian/ratt) kind of solution available to every DD,
>do not expect that for any sizable package, but maybe only for the
>ones with the smallest package "networks" (which are also the ones
>causing the least "damage" if something goes wrong),
>* one maintainer vs many maintainers, one for each affected pkg;
>distribute the load (pain?)
>* upload to experimental and use autopkgtests, you say? Sure, that's one
>way, but tracker.d.o currently doesn't show experimental excuses
>(#944737, #991237), so you don't immediately see which packages failed,
>and many packages still don't have autopkgtests, so that's not really
>covering everything anyway
>* sometimes I ask Lucas to do an archive rebuild with a new version,
>but that's still relying on a single person to run the tests, parse
>the build logs, and then open bugs for the failed packages; maybe most
>of it is automated, but not all of it (and you can't really do this for
>every pkg in Debian, because the archive rebuild tool needs 2 config
>files for each package you want to test: 1. how to set up the build env
>to use the new package, 2. the list of packages to rebuild in that
>env).
>
>What exactly are you expecting from other DDs?
>
>Unstable is unstable for a reason, breakage will happen, nobody wants
>to break other people's work/packages intentionally (I hope?), but
>until we come up with a simple, effective technical solution to the
>"build the rdeps and see what breaks" issue, we will upload to
>unstable and see what breaks *right there*.
>
>Maybe it's just laziness on my part, but there needs to be a cutoff
>between making changes/progress and dealing with the consequences, and
>walking on eggshells every time there's a new upstream release (or
>even a patch!) and you need to upload a new pkg.

It's not an either or.

Generally, the Release Team should coordinate timing of transitions.  New 
libraries should be staged in Experimental first.  Maintainers of rdepends 
should be alerted to the impending transition so they can check if they are 
ready.

Debian is developed by a team and we should work together to move things 
forward.  Particularly for a big transition like numpy, we all need to work 
together to get the work done.

It's true that breakage will happen in unstable.  We shouldn't be afraid of it, 
but we should also work to keep it manageable.

Scott K



Re: releasing major library change to unstable without coordination

2021-12-22 Thread Sandro Tosi
> It's not an either or.
>
> Generally, the Release Team should coordinate timing of transitions.  New 
> libraries should be staged in Experimental first.  Maintainers of rdepends 
> should be alerted to the impending transition so they can check if they are 
> ready.
>
> Debian is developed by a team and we should work together to move things 
> forward.  Particularly for a big transition like numpy, we all need to work 
> together to get the work done.
>
> It's true that breakage will happen in unstable.  We shouldn't be afraid of 
> it, but we should also work to keep it manageable.

Let's not get hung up on the details of numpy; what if the package to
update is a small library, with say 20 rdeps, but one of them is llvm
or gcc or libreoffice, and maybe only for their docs? Are we really
asking the maintainer of that library to rebuild all the rdeps, which
can require considerable time, memory and disk space not readily
available (we can assume the rdeps maintainers have figured out their
resource availability and so they'd be able to rebuild their packages
easily)?

And let's use numpy once again: two days ago I uploaded 1.21.5 to
replace 1.21.4 in unstable. Should I have instead uploaded to
experimental and asked the RT for a transition slot? How do I know if
a transition is required, in this and in all other cases, for all
packages? While it is only a patch release, there's a non-zero chance
that a regression or an incompatible change was released with it,
which can only be discovered by a rdeps rebuild, and so we go back to my
previous mail.

Regards,
-- 
Sandro "morph" Tosi
My website: http://sandrotosi.me/
Me at Debian: http://wiki.debian.org/SandroTosi
Twitter: https://twitter.com/sandrotosi



Re: releasing major library change to unstable without coordination

2021-12-22 Thread Niels Thykier
Sandro Tosi:
> And let's use numpy once again: two days ago I uploaded 1.21.5 to
> replace 1.21.4 in unstable. [...]
> 
> Regards,

Hi,

If you feel discussing patch releases is worth a topic of its own, I
think we should start a separate thread for that because the process is
likely to be considerably different compared to a *major library change*
(which is what Jonas asked for).

Thanks,
~Niels



Re: releasing major library change to unstable without coordination

2021-12-22 Thread Scott Kitterman
On Wednesday, December 22, 2021 11:07:51 PM EST Sandro Tosi wrote:
> > It's not an either or.
> > 
> > Generally, the Release Team should coordinate timing of transitions.  New
> > libraries should be staged in Experimental first.  Maintainers of rdepends
> > should be alerted to the impending transition so they can check if they
> > are ready.
> > 
> > Debian is developed by a team and we should work together to move things
> > forward.  Particularly for a big transition like numpy, we all need to
> > work together to get the work done.
> > 
> > It's true that breakage will happen in unstable.  We shouldn't be afraid
> > of it, but we should also work to keep it manageable.
> Let's not get hung up on the details of numpy; what if the package to
> update is a small library, with say 20 rdeps, but one of them is llvm
> or gcc or libreoffice, and maybe only for their docs? Are we really
> asking the maintainer of that library to rebuild all the rdeps, which
> can require considerable time, memory and disk space not readily
> available (we can assume the rdeps maintainers have figured out their
> resource availability and so they'd be able to rebuild their packages
> easily)?
> 
> And let's use numpy once again: two days ago I uploaded 1.21.5 to
> replace 1.21.4 in unstable. Should I have instead uploaded to
> experimental and asked the RT for a transition slot? How do I know if
> a transition is required, in this and in all other cases, for all
> packages? While it is only a patch release, there's a non-zero chance
> that a regression or an incompatible change was released with it,
> which can only be discovered by a rdeps rebuild, and so we go back to my
> previous mail.

For things like major Python packages (other languages too), it's not as simple 
as for C (and to a lesser extent C++) libraries, where either it's binary 
compatible, with no SONAME change and no transition needed, or it isn't.

I sympathize, really.  I think that for things that are supposed to be 
backward compatible, uploading to unstable is generally fine.  More extensive 
work is appropriate for larger, more "major" upgrades.

Scott K



Bug#1002489: ITP: berry -- Minimal window manager

2021-12-22 Thread Yuri Musachio Montezuma da Cruz
Package: wnpp
Severity: wishlist
Owner: Yuri Musachio Montezuma da Cruz 
X-Debbugs-Cc: debian-devel@lists.debian.org, yuri.musac...@gmail.com

* Package name: berry
  Version : 0.1.9
  Upstream Author : Josh Ervin 
* URL : https://berrywm.org/
* License : MIT
  Programming Lang: C
  Description : Minimal window manager

Its main features include:

* Controlled via a powerful command-line client, allowing users to control
windows via a hotkey daemon such as sxhkd or to expand functionality via
shell scripts.
* Extensible theming options with double borders, title bars, and window text.
* Intuitive placement of new windows in unoccupied spaces.
* Virtual desktops.



Re: chromium: Update to version 94.0.4606.61 (security-fixes)

2021-12-22 Thread Andres Salomon



On 12/13/21 5:31 PM, Moritz Muehlenhoff wrote:

On Sun, Dec 12, 2021 at 08:11:00PM -0500, Andres Salomon wrote:

On 12/5/21 6:41 AM, Moritz Mühlenhoff wrote:

On Sun, Dec 05, 2021 at 10:53:56AM +0100, Paul Gevers wrote:
Exactly that.

I'd suggest anyone who's interested in seeing Chromium supported to first
update it in unstable (and then work towards updates in bullseye-security).

I started doing just that: https://salsa.debian.org/dilinger/chromium (v96
and misc-fixes branches).

As a side note: If any of the system/* patches cause issues, feel free to
switch to the vendored copies. Vendoring in general is frowned upon since it
requires that a fix in a library spreads out to all vendored copies, but for
Chromium there's a steady stream of Chromium-internal security issues anyway,
so for all practical purposes it doesn't make a difference if the Chromium
security releases also include a fix for a vendored lib like ICU.

Cheers,
 Moritz



I've got 96.0.4664.110 building on both bullseye and sid, and am currently
debugging some crashes. The only thing I had to vendor was some nodejs
libraries, although it's very tempting to take a chainsaw through the various
patches and re-vendor a bunch of other libraries as Jeff suggested. Still on
the v96 branch of https://salsa.debian.org/dilinger/chromium



Bug#1002492: ITP: node-d3-flame-graph -- D3.js plugin that produces flame graphs from hierarchical data

2021-12-22 Thread Yadd
Package: wnpp
Severity: wishlist
Owner: Yadd 
X-Debbugs-Cc: debian-devel@lists.debian.org, 996...@bugs.debian.org

* Package name: node-d3-flame-graph
  Version : 4.1.3
  Upstream Author : Martin Spier 
* URL : https://github.com/spiermar/d3-flame-graph
* License : Apache-2.0
  Programming Lang: JavaScript
  Description : D3.js plugin that produces flame graphs from hierarchical 
data

Flame graphs are a visualization of profiled software, allowing the most
frequent code-paths to be identified quickly and accurately.
node-d3-flame-graph is a D3.js plugin that produces flame graphs from
hierarchical data.
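
For illustration, a rough sketch of the hierarchical input such a plugin
consumes, assuming the usual nested name/value/children JSON layout (the
stack samples are made up):

    import json

    # Folded stack samples: "outer;inner" call path -> sample count.
    samples = {"main;parse;read": 40, "main;parse;tokenize": 25,
               "main;render": 35}

    root = {"name": "root", "value": 0, "children": []}
    for stack, count in samples.items():
        root["value"] += count
        node = root
        for frame in stack.split(";"):
            # Find or create the child node for this stack frame.
            child = next((c for c in node["children"] if c["name"] == frame),
                         None)
            if child is None:
                child = {"name": frame, "value": 0, "children": []}
                node["children"].append(child)
            child["value"] += count
            node = child

    print(json.dumps(root, indent=2))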

This package is required to fix #996839 (linux-perf-5.14).