Re: Concern for: A humble draft policy on "deep learning v.s. freedom"

2019-06-09 Thread Osamu Aoki
Hi Mo,

On Sat, Jun 08, 2019 at 10:07:13PM -0700, Mo Zhou wrote:
> Hi Osamu,
> 
> On 2019-06-08 18:43, Osamu Aoki wrote:
> >> This draft is conservative and overkilling, and currently
> >> only focuses on software freedom. That's exactly where we
> >> start, right?
> > 
> > OK, but it can't be where we end up.
> 
> That's why I said the two words "conservative" and "overkilling".
> In my blueprint we can actually loosen these restrictions bit
> by bit with further case study.

Yes, we agree here!

> > Before scientific "deep learning" data, we already have practical "deep
> > learning" data in our archive.
> 
> Thanks for pointing them out. They are good case studies
> for me to revise the DL-Policy.
> 
> > Please note one of the most popular Japanese input method mozc will be
> > kicked out from main as a starter if we start enforcing this new
> > guideline.
> 
> I'm in no position to irresponsibly enforce an experimental
> policy without having finished enough case study.

I noticed it since you were thinking deeply enough, but I saw some danger
of other people making decisions too quickly based on the "Labeling".

Please check our history on the following GRs:
 https://www.debian.org/vote/2004/vote_003
 https://www.debian.org/vote/2006/vote_004

We are stuck with "Further discussion" at this moment.

> >> Specifically, I defined 3 types of pre-trained machine
> >> learning models / deep learning models:
> >>
> >>   Free Model, ToxicCandy Model. Non-free Model
> >>
> >> Developers who'd like to touch DL software should be
> >> cautious to the "ToxicCandy" models. Details can be
> >> found in my draft.
> > 
> > With a labeling like "ToxicCandy Model" for the situation, it makes a bad
> > impression on people and I am afraid people may not make rational
> > decisions.  Is this characterization a correct and sane one?  At least,
> > it looks to me that this is changing the status quo of our policy and
> > practice severely.  So it is worth evaluating the idea without labeling.
> 
> My motivation for the naming "ToxicCandy" is pure: to warn developers
> about this special case, as it may lead to very difficult copyright
> or software freedom questions. I admit that this name does not look
> quite friendly. Maybe "SemiFree" looks better?

Although I understand the intent of "SemiFree" or "Tainted" (by Yao), I
don't think these are a good choice.  We need to draw a line between
FREE(=main) and NON-FREE(non-free) as an organization.  I think there are
2 FREE models we are allowing for "main" as the current practice.

 * Pure  Free Model from pure free pre-train data only
 * Sanitized Free Model from free and non-free mixed pre-train data

And, we don't allow Non-Free Model in "main"

The question is when we can call it "sanitized" (or "distilled"), i.e.
clean enough to qualify for "main" ;-)

> > As long as the "data" comes in the form which allows us to modify it and
> > re-train it to make it better with a set of free software tools to do it,
> > we shouldn't make it non-free, for sure.  That is my position and I
> > think this was what we operated as the project.  We never asked how they
> > are originally made.  The touchy question is how easy it should be to
> > modify and re-train, etc.
> >
> > Let's list analogy cases.  We allow a photo of something in our archive
> > as wallpaper etc.  We don't require the object of the photo or the tool
> > used to make it to be FREE.  The Debian logo is one example, which was
> > created with Photoshop as I understand.  Another analogy to consider is
> > how we allow independent copyright and license for dictionary-like data,
> > which must have been processed from previous copyrighted (possibly
> > non-free) texts by human brains and maybe with some script processing.
> > Packages such as opendict, *spell-*, dict-freedict-all, ... are in main.

...

> Thank you Osamu. These cases inspired me in finding a better
> balance point for DL-Policy. I'll add them to the case
> study section, and I'm going to add the following points to DL-Policy:
> 
> 1. Free datasets used to train a FreeModel are not required to be uploaded
>    to our main section, for example those Osamu mentioned and the wikipedia
>    dump. We are not a scientific data archiving organization, and this data
>    would blow up our infra if we uploaded too much.
> 
> 2. It's not required to re-train a FreeModel with our infra, because
>    the outcome/cost ratio is impractical. The outcome is nearly zero
>    compared to directly using a pre-trained FreeModel, while the cost
>    is increased carbon dioxide in our atmosphere and wasted developer
>    time. (Deep learning is producing much more carbon dioxide than we
>    thought.)
> 
>    For classical probabilistic graph models such as MRF or the mentioned
>    CRF, the training process might be trivial, but re-training is still
>    not required.

... but re-training is highly desirable, in line with the spirit of
free software.

> For SemiFreeModel  I still hesitate to make any decision. Once we let
  

Re: Debian, so ugly and unwieldy!

2019-06-09 Thread Chris Lamb
Adam Borowski wrote:

> This is about GUI appearance and ergonomics.
> 
> I'll concentrate at XFCE, as I consider GNOME3's UI a lost cause, thus I'd
> find it hard to bring constructive arguments there.
> 
> I also hate with a passion so-called "UX designers".  Those are folks who
> created Windows 8's Metro tiles, lightgray-on-white "Material Design" flat
> unmarked controls, and so on.  They work from a Mac while not having to
> actually use what they produce.

I empathise, understand and agree with many of the concerns that you
raised in your message. It is therefore particularly tragic that you
chose to open your remarks and suggestions in such a manner.

Expressing disdain for the status quo and then compounding that by
passing judgement on the people who may be in the very position to
improve it seems, at best, unlikely to achieve our shared aims. It
furthermore frames any discussion in an unnecessarily negative light,
filtering responses to those who are willing to engage and contribute
to the conversation on combative terms, ensuring a systematic
observation bias in the outcome.

As others have mentioned, I hope that Debian remains a project that
is evermore welcoming to individuals who can advance our aims, and
that we are able to continue to discuss sensitive topics in a
collective and constructive manner. In that light, I would gently ask
that all well-meaning and sincere proposals to our lists take pains to
avoid any possibility of being accused of the aforementioned sins.


Best wishes,

-- 
  ,''`.
 : :'  : Chris Lamb
 `. `'`  la...@debian.org 🍥 chris-lamb.co.uk
   `-



Bug#930246: ITP: node-rollup-plugin-sourcemaps -- Rollup plugin for grabbing source maps from sourceMappingURLs

2019-06-09 Thread Pirate Praveen

Package: wnpp
Severity: wishlist
Owner: Pirate Praveen 
X-Debbugs-CC: debian-devel@lists.debian.org

* Package name : node-rollup-plugin-sourcemaps
 Version : 0.4.2
 Upstream Author : Max Davidson 
* URL : https://github.com/maxdavidson/rollup-plugin-sourcemaps#readme
* License : Expat
 Programming Lang: JavaScript
 Description : Rollup plugin for grabbing source maps from sourceMappingURLs
Useful for working with precompiled modules with existing source maps,
without resorting to sorcery (another library).
.
Inspired by webpack/source-map-loader library.
.
Node.js is an event-based server-side JavaScript engine.



Re: Debian, so ugly and unwieldy!

2019-06-09 Thread Adam Borowski
On Sun, Jun 09, 2019 at 09:46:37AM +0100, Chris Lamb wrote:
> Adam Borowski wrote:
> > This is about GUI appearance and ergonomics.
[...] 
> > I also hate with a passion so-called "UX designers".  Those are folks who
> > created Windows 8's Metro tiles, lightgray-on-white "Material Design" flat
> > unmarked controls, and so on.  They work from a Mac while not having to
> > actually use what they produce.
> 
> I empathise, understand and agree with many of the concerns that you
> raised in your message. It is therefore particularly tragic that you
> chose to open your remarks and suggestions in such a manner.
> 
> Expressing distain for the status quo and then compounding that by
> passing judgement on the people who may be in the very position to
> improve it seems, at best, unlikely to achieve our shared aims.

It is not my intention to bash any individuals (apologies if I sounded this
way) -- what I hate are two, quite prevalent, attitudes.

One is design that ignores usability and ergonomics.  The current trend
ignores not just grumbling users but even actual scientists.  Unmarked
controls -- in big words, those "lacking strong signifiers" -- make people
take a lot longer to complete whatever task they were doing; eye tracking
shows their attention wanders all around the window/page looking for
interactable controls.  This might actually be a goal -- it makes people
notice adverts more and for a longer time -- but it is a strong countergoal
for us.  The theme doesn't need to be skeuomorphic -- for example, marking
interactable controls in flat red is okay as long as you don't use red
anywhere else.

And especially (the second problem), the interface needs to be consistent.
This is why I hate poor integration between GTK and QT[1] so much, and CSD
even more.  The latter gives every program its own unique interface, not
obeying the rest of the system.  Heck, if your theme has the close button
on the left or in the center, a CSD-using program will have it on the
right.  And look different, and behave differently, and so on.  The design
movement pabs linked to, "Don't Theme Our Apps", is exactly the thing I
wish to stop.  I want Debian to be well-integrated and usable, while they
want "brand recognition" and to showcase their newest design, not letting
the user have things his/her way.

> It furthermore frames any discussion in an unnecessarily negative light,
> filtering responses to those who are willing to engage and contribute to
> the conversation on combative terms, ensuring a systematic observation
> bias in the outcome.

I don't observe designs much, yeah.  I notice things only when they stand
in my way.  For example, when faced with eye-hurting whiteness I fixed and
packaged a dark theme (there were none in Debian at the time) as that was
my itch to scratch.  But I don't do so in an organized way, just lashing
out at a particular problem.  Such as tiny scrollbars being nasty both for
fat fingers on a 450dpi screen (Gemini) and for the crap touchpad on a
low-resolution Pinebook.  Or the poorly hinted default font looking like
crap on that low-resolution Pinebook (and sub-pixel rendering was set to
off!).

I think a good UI person would want to at least know complaints from mere
users like me, and that's why I started this thread.
 
> As others have mentioned, I hope that Debian remains a project that
> makes it evermore welcoming to individuals

Yeah but one of our core values is "we do not hide problems".  I'd rather
lash out and voice my gripes _with actionable ideas_ than to stay silent to
avoid hurting someone's ego.

And I'd also prefer to avoid pushing my way in a back alley, thus I'm asking
for ideas and consensus before filing bugs.


Meow!

[1]. GTK vs QT seems trivial to fix, at least for GTK2-capable themes.  You
plop "export QT_QPA_PLATFORMTHEME=gtk2" into .xsessionrc and that's it. 
GTK2 going out might be a problem -- but in my naive view, why not make that
setting the default then maybe improve it later?
-- 
⢀⣴⠾⠻⢶⣦⠀ Latin:   meow 4 characters, 4 columns,  4 bytes
⣾⠁⢠⠒⠀⣿⡁ Greek:   μεου 4 characters, 4 columns,  8 bytes
⢿⡄⠘⠷⠚⠋  Runes:   ᛗᛖᛟᚹ 4 characters, 4 columns, 12 bytes
⠈⠳⣄ Chinese: 喵   1 character,  2 columns,  3 bytes <-- best!



Re: Concern for: A humble draft policy on "deep learning v.s. freedom"

2019-06-09 Thread Mo Zhou
Hi Osamu,

On 2019-06-09 08:28, Osamu Aoki wrote:
> Although I understand the intent of "SemiFree" or "Tainted" (by Yao), I
> don't think these are a good choice.  We need to draw a line between
> FREE(=main) and NON-FREE(non-free) as a organization.  I think there are

There is no such line, as a big grey area exists. Pure-free models plus
pure-non-free models don't cover all the possible cases, but
Free + SemiFree + NonFree covers all possible cases.

SemiFree lies in a grey area because the ways people interpret it vary:

1. If one regards a model as a sort of human artifact such as artwork
   or a font, a free software licensed SemiFreeModel is free even if
   it's trained from non-free data. (Ah, yes, there is an MIT license!
   It's a free blob made by a human.)

2. If one regards a model as the product of a mathematical process
   such as training or compilation, a free software licensed
   SemiFreeModel is actually non-free. (Oops, where did these
   MIT-licensed digits come from, and how can I reproduce them?
   Can I trust the source? What if the MIT-licensed model is
   trained from evil data but we don't know?)

I'm not going to draw a line across this grey area, or rather minefield.
Personally I prefer the second interpretation.

> 2 FREE models we are allowing for "main" as the current practice.
> 
>  * Pure  Free Model from pure free pre-train data only
>  * Sanitized Free Model from free and non-free mixed pre-train data

Please don't make the definition of FreeModel complicated.
A FreeModel should be literally and purely free.
We can divide SemiFreeModel into several categories according to
future case study and make DL-Policy properly match the practice.

> And, we don't allow Non-Free Model in "main"

I think no one would argue about NonFreeModel.

> Question is when do you call it "sanitized" (or "distilled") to be clean
> enough to qualify for "main" ;-)

I expect a model, once sanitized, to be purely free, for example by
removing all non-free data from the training dataset and using only
free training data. Any single piece of non-free data pulls the model
into the minefield.
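The sanitization step described here can be sketched in a few lines. This is my illustration, not part of the thread, and the per-sample license field is an invented assumption (real corpora rarely carry such clean metadata):

```python
# Hypothetical sketch of "sanitizing" a training corpus: keep only samples
# with a known free license, since a single non-free sample pulls the
# resulting model out of main. Field names here are invented for
# illustration.

FREE_LICENSES = {"Expat", "BSD-3-Clause", "GPL-2.0", "CC-BY-SA-4.0"}

def sanitize(corpus):
    """Return only samples whose license is on the free list.

    Samples with unknown or non-free licensing are dropped.
    """
    return [s for s in corpus if s.get("license") in FREE_LICENSES]

corpus = [
    {"text": "free sentence", "license": "Expat"},
    {"text": "scraped blog post", "license": None},     # provenance unknown
    {"text": "proprietary manual", "license": "EULA"},  # non-free
]
clean = sanitize(corpus)
# Only the Expat-licensed sample survives; training would proceed on `clean`.
```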

>> 2. It's not required to re-train a FreeModel with our infra, because
>>the outcome/cost ratio is impractical. The outcome is nearly zero
>>compared to directly using a pre-trained FreeModel, while the cost
>>is increased carbon dioxide in our atmosphere and wasted developer
>>time. (Deep learning is producing much more carbon dioxide than we
>>thought).
>>
>>For classical probablistic graph models such as MRF or the mentioned
>>CRF, the training process might be trivial, but re-training is still
>>not required.
> 
> ... but re-training is highly desirable in line with the spirit of the
> free software.

I guess you didn't catch my point. In my definition of FreeModel and the
SemiFree/ToxicCandy model, providing a training script is mandatory. Any
model without a training script must be non-free. This requirement also
implies that the upstream must provide all information about the datasets
and the training process. Software freedom can be guaranteed even if
we don't always re-train the free models, as that would only waste
electricity. On the other hand, developers should check whether a model
provides such freedom, and local re-training as a verification step
is encouraged.
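The "local re-training as a verification step" idea can be illustrated with a toy sketch (mine, not from the thread): train twice from the same seed and data and check that the runs agree. A real check would run the upstream-provided training script rather than this toy fit.

```python
# Toy illustration: if upstream ships data, a training procedure, and a
# seed, anyone can regenerate the model and verify determinism. Uses only
# the standard library; the model is a deliberately trivial y = w*x fit.
import random

def train(data, seed, steps=200, lr=0.1):
    """Fit y = w*x by seeded stochastic gradient descent."""
    rng = random.Random(seed)
    w = rng.uniform(-1.0, 1.0)          # random initial weight
    for _ in range(steps):
        x, y = rng.choice(data)
        w -= lr * (w * x - y) * x       # gradient step on squared error
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
w1 = train(data, seed=42)
w2 = train(data, seed=42)
# With the same seed and data, the two runs coincide exactly, so the model
# is verifiable; a shipped blob that cannot be regenerated this way is not.
```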

Enforcing re-training would be a painful decision and would drive
energetic contributors away, especially when the contributor refuses
to use Nvidia suckware.

> Let's use SanitizedModel to be neutral.

Once sanitized, a model should turn into a free model. If it doesn't,
then why does one sanitize the model?

> We need to have some guideline principle for this sanitization process.
> (I don't have an answer now)

I need case study at this point.

> This sanitization mechanism shouldn't be used to include obfuscated
> binary blob equivalents.  It's worse than FIRMWARE case since it runs on
> the same CPU as the program code.
> 
> Although "Further Discussion" was the outcome, B in
> https://www.debian.org/vote/2006/vote_004 is worth looking at:
>   Strongly recommends that all non-programmatic works distribute the form
>   that the copyright holder or upstream developer would actually use for
>   modification. Such forms need not be distributed in the orig.tar.gz
>   (unless required by license) but should be made available on upstream
>   websites and/or using Debian project resources.
> 
> Please note this is "Strongly recommends ... should be made
> available..." and not "must be made available ...".

Umm

> Aside from the Policy/Guideline for the FREE/NON-FREE discussion, we also
> need to address the spirit of the reproducible build.  It is nice to have
> a checking mechanism for the validity and health of these MODELs.  I know
> one of the Japanese keyboard input methods, "Anthy", is suffering some
> regression in the upcoming release.  The fix was found too late so I
> uploaded to experimental since it contained too many cha

Bug#930262: ITP: node-apollo-link -- Flexible, lightweight transport layer for GraphQL

2019-06-09 Thread Pirate Praveen

Package: wnpp
Severity: wishlist
Owner: Pirate Praveen 
X-Debbugs-CC: debian-devel@lists.debian.org

* Package name : node-apollo-link
 Version : 1.2.11
 Upstream Author : Evans Hauser 
* URL : https://github.com/apollographql/apollo-link#readme
* License : Expat
 Programming Lang: JavaScript
 Description : Flexible, lightweight transport layer for GraphQL
This library is a standard interface for modifying control flow of GraphQL
requests and fetching GraphQL results, designed to provide a simple GraphQL
client that is capable of extensions.
.
The high level use cases of `apollo-link` are highlighted below:
 * fetch queries directly without normalized cache
 * network interface for Apollo Client
 * network interface for Relay Modern
 * fetcher for GraphiQL
.
The apollo link interface is designed to make links composable and easy to
share, each with a single purpose. In addition to the core, this package
contains links for the most common fetch methods—http, local schema,
websocket—and common control flow manipulations, such as retrying and
polling.
.
Node.js is an event-based server-side JavaScript engine.

This package is a dependency of gitlab. Since it uses rollup and
typescript to generate code, it is not suitable for embedding.





Re: Concern for: A humble draft policy on "deep learning v.s. freedom"

2019-06-09 Thread Osamu Aoki
Hi,

Let's think in a bit different perspective.

What is the outcome of "Deep Learning"?  That's "knowledge".

If the dictionary of "knowledge" is expressed in a freely usable
software format with free license, isn't it enough?

If you want more for your package, that's fine.  Please promote such a
program for your project.  (FYI: the reason I spent my time fixing
"anthy" for Japanese text input is that I didn't like the way "mozc"
looked -- a sort of dump-ware by Google containing the free license
dictionary of "knowledge" without free base training data.)  But placing
some kind of fancy purist "Policy" wording to police other software
doesn't help FREE SOFTWARE.  We got rid of Netscape from Debian because
we now have a good functional free alternative.

If you can make a model without any reliance on non-free base training
data for your project, that's great.

I think it's a dangerous and counterproductive thing to deprive users of
access to useful functionality of software by requesting that only free
data be used to obtain "knowledge".

Please note that re-training will not erase "knowledge".  It usually
just mixes new "knowledge" into the existing dictionary of "knowledge".
So the resulting dictionary of "knowledge" is not completely free of
the original training data.  We really need to treat this kind of
dictionary of "knowledge" in line with artwork --- not as software
code.

The training process itself may be mathematical, but the preparation of
training data and the iterative process of providing the re-calibrating
data set involve huge human input.

> Enforcing re-training will be a painful decision...

Hmmm... this may depend on what kind of re-training.

At least for unidic-mecab, re-training to add many new words to be
recognized by the morphological analyzer is an easier task.  People have
used unidic-mecab and a web crawler to create an even bigger dictionary
with minimal re-training work (mostly automated, I guess):
  https://github.com/neologd/mecab-unidic-neologd/

I can't imagine re-creating the original core dictionary of "knowledge"
for Japanese text processing purely by training with newly provided free
data, since it would take too much human work, and I agree it is
unrealistic without a serious government or corporate sponsorship project.

Also, the "knowledge" for Japanese text processing should be able to
cover non-free texts.  Without using non-free texts as input data, how
do you know it works on them?

> Isn't this checking mechanism a part of upstream work? When developing
> machine learning software, model reproducibility (two different runs
> should produce very similar results) is important.

Do you always have the luxury of relying on such a friendly/active
upstream?  If so, I see no problem.  But what should we do if not?

Anthy's upstream is practically Debian repo now.

Osamu



Bug#930265: ITP: node-rollup-plugin-invariant -- Rollup plugin to strip invariant(condition, message) strings in production

2019-06-09 Thread Pirate Praveen

Package: wnpp
Severity: wishlist
Owner: Pirate Praveen 
X-Debbugs-CC: debian-devel@lists.debian.org
Control: block 930262 by -1

* Package name : node-rollup-plugin-invariant
 Version : 0.5.6
 Upstream Author : Ben Newman 
* URL : https://github.com/apollographql/invariant-packages
* License : Expat
 Programming Lang: JavaScript
 Description : Rollup plugin to strip invariant(condition, message) strings in production
Packages for working with invariant(condition, message) assertions.
.
This package includes the ts-invariant and rollup-plugin-invariant
libraries.
.
Node.js is an event-based server-side JavaScript engine.

This package is a build dependency of apollo-link, which is a dependency
of gitlab. Since it uses rollup and typescript to generate code, it is
not suitable for embedding.




Re: Concern for: A humble draft policy on "deep learning v.s. freedom"

2019-06-09 Thread Mo Zhou
Hi Osamu,

On 2019-06-09 13:48, Osamu Aoki wrote:
> Let's think in a bit different perspective.
> 
> What is the outcome of "Deep Learning"?  That's "knowledge".

Don't mix everything into the single obscure word "knowledge".
That thing is not representable through a programming language
or mathematical language, because we cannot define what
"knowledge" is in an unambiguous way. Squashing everything
into "knowledge" does exactly the inverse of what I'm doing.

> If the dictionary of "knowledge" is expressed in a freely usable
> software format with free license, isn't it enough?

A free license doesn't solve all my concerns. If we just treat
models as a sort of artwork, what if:

1. upstream happened to license a model trained from non-free
   data under the GPL. Is upstream violating the GPL by not releasing
   the "source" (or the material necessary to reproduce the work)?

2. upstream trained a model on a private dataset that contains
   deliberately evil data, and released it under the MIT license.
   (Then malware just sneaked into main?)

I have to consider all possible models and applications in
the whole machine learning and deep learning area. The experience
learned from input methods cannot cover all possible cases.

A pile of digits from a classical machine learning model is
generally interpretable. That means humans can understand what
each digit means (e.g. conditional probability, frequency, etc).

A pile of digits from a deep neural network is basically not
interpretable -- humans cannot fully understand them. Something
malicious could hide in this pile of digits due to the complexity
of the non-linear mapping that neural networks have learned.
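To illustrate the contrast, here is a toy example (mine, not from the thread):

```python
# Toy contrast: the parameters of a classical probabilistic model are
# human-readable, while a neural network's weights are an opaque pile of
# digits. All data here is invented for illustration.
from collections import Counter

# Classical model: a unigram frequency table, as an input method might use.
# Every number has a direct meaning a reviewer can audit.
counts = Counter(["hello", "world", "hello", "debian"])
total = sum(counts.values())
unigram = {w: c / total for w, c in counts.items()}
# unigram["hello"] == 0.5 literally means "hello was half the corpus".

# Neural model: even one small weight matrix is just digits; no single
# entry corresponds to an auditable statement about the training data.
weights = [[0.137, -0.982, 0.441],
           [-0.605, 0.223, 0.871]]
```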

Proposed updates:

1. If a SemiFreeModel won't raise any security concern, we
   can accept it into the main section. For an imagined example,
   upstream foobar wrote an input method and trained a probabilistic
   model based on the developer's personal diary. The upstream released
   the model under a free license but didn't release his/her diary.
   Such a model is fine as it doesn't incur any security problem.

2. A security-sensitive SemiFreeModel is prohibited from entering
   the main section. Why should we trust it if we cannot inspect
   everything about it?

Let me emphasize this again: don't forget security when talking
about machine learning models and deep learning models. Data
used to train an input method doesn't harm in any way, but data
used to train a model that controls authentication is ...
Security concerns are inevitable along with the industrial
application of deep learning.

Maybe I'm just too sensitive after reading ~100 papers about
attacking/fooling machine learning models. Here is a ridiculous
example: [Adversarial Reprogramming of Neural Networks]
(https://arxiv.org/abs/1806.11146)

> If you want more for your package, that's fine.  Please promote such
> program for your project.  (FYI: the reason I spent my time for fixing
> "anthy" for Japanese text input is I didn't like the way "mozc" looked
> as a sort of dump-ware by Google containing the free license dictionary
> of "knowledge" without free base training data.)  But placing some kind
> of fancy purist "Policy" wording to police other software doesn't help
> FREE SOFTWARE.  We got rid of Netscape from Debian because we now have
> good functional free alternative.
> 
> If you can make model without any reliance to non-free base training
> data for your project, that's great.

I'll create a subcategory under SemiFreeModel as an umbrella for input
methods and the like, to reduce the overkill level of DL-Policy, after
reviewing the code by myself. It may take some time because I have
to understand how things work.

> I think it's a dangerous and counter productive thing to do to deprive
> access to useful functionality of software by requesting to use only
> free data to obtain "knowledge".

The policy needs to balance not only usefulness/productivity but also
software freedom (as per the definition), reproducibility, security,
feasibility, and difficulty.

The first priority is software freedom instead of productivity
when we can only choose one, even if users will complain.
That's why our official ISO cannot ship the ZFS kernel module,
very useful non-free firmware, or the like.

> Please note that the re-training will not erase "knowledge".  It usually
> just mix-in new "knowledge" to the existing dictionary of "knowledge".
> So the resulting dictionary of "knowledge" is not completely free of
> the original training data.  We really need to treat this kind of
> dictionary of "knowledge" in line with artwork --- not as a software
> code.

My interpretation of "re-train" is "train from scratch again" instead
of "train incrementally". For neural networks the "incremental training"
process is called "fine-tuning".

I understand that you don't wish DL-Policy to kick out input methods
and the like and demoralize developers, and this will be sorted out
soon...

> Training process itself may be mathematical, but the prepa

Re: Bug#930219: ITP: node-dagre-layout - should other dagre users switch?

2019-06-09 Thread Rebecca N. Palmer
Changes from dagre to this fork are summarized at [0].  It looks like
Gitlab uses this fork because Mermaid does [1] and the fork author is a
major Mermaid contributor [2].


A few packages currently use the original dagre(-d3), but none run its 
full build process:


firefox/firefox-esr/thunderbird - embed already concatenated but 
non-minified source devtools/client/shared/vendor/dagre-d3.js

snakemake - snakemake/gui.html loads dagre-d3 from upstream website
theano - starts from the full source, but doesn't use its build script: 
runs browserify-lite + uglifyjs directly from d/rules


ssreflect used to embed dagre, but stopped doing so without apparent 
replacement, possibly (I haven't checked) disabling the documentation 
graph that uses it.


I previously [3] raised the question of whether dagre-* should be 
packaged separately rather than embedded, without reply.  I don't know 
whether it would be practical or desirable for the above packages to 
switch to this fork.


[0] 
https://github.com/tylingsoft/dagre-layout#changes-compared-to-dagrejsdagre
[1] 
https://gitlab.com/gitlab-org/gitlab-ce/commit/131e74d10dafbf2b781ab5d5517e42a18e20a587
[2] 
https://github.com/knsv/mermaid/commit/7b935823da2058243dfc32f7c2a533ae233a9d1e#diff-8ee2343978836a779dc9f8d6b794c3b2
[3] 
https://alioth-lists-archive.debian.net/pipermail/pkg-mozilla-maintainers/2017-September/029645.html




Re: ZFS in Buster

2019-06-09 Thread Thomas Goirand
On 6/6/19 5:16 PM, Ondřej Surý wrote:
> If we cannot make the ZoL in Buster safe for our users it needs to be removed 
> from the release

My understanding is that the issue only affects the performance of
the encryption part of ZFS, right? If so, I don't see why we should
remove ZFS from Buster: ZFS continues to work perfectly as long as
one doesn't use the encryption feature. Some warning lines in the
release notes, explicitly saying that we recommend against using ZFS
encryption, would IMO be enough.

Please let me know if I'm wrong.

Cheers,

Thomas Goirand (zigo)



Re: Debian, so ugly and unwieldy!

2019-06-09 Thread Chris Lamb
Adam Borowski wrote:

> > As others have mentioned, I hope that Debian remains a project that
> > makes it evermore welcoming to individuals
> 
> Yeah but one of our core values is "we do not hide problems".  I'd rather
> lash out and voice my gripes _with actionable ideas_ than to stay silent to
> avoid hurting someone's ego.

We share the same goal; I merely want to point out that your
confrontational approach to discussing this issue is counterproductive
in that it will alienate or otherwise push away the people with the
technical skills to remedy it.

> And I'd also prefer to avoid pushing my way in a back alley, thus I'm asking
> for ideas and consensus before filing bugs.

It is a slip on your part to represent my practical suggestions for
achieving our mutual aims in any way as an attempt on my part to
suppress what you can say.


Regards,

-- 
  ,''`.
 : :'  : Chris Lamb
 `. `'`  la...@debian.org 🍥 chris-lamb.co.uk
   `-



Re: Debian, so ugly and unwieldy!

2019-06-09 Thread Niels Thykier
Adam Borowski:
> On Sun, Jun 09, 2019 at 09:46:37AM +0100, Chris Lamb wrote:
>> [...]
>  
>> As others have mentioned, I hope that Debian remains a project that
>> makes it evermore welcoming to individuals
> 
> Yeah but one of our core values is "we do not hide problems".  I'd rather
> lash out and voice my gripes _with actionable ideas_ than to stay silent to
> avoid hurting someone's ego.
> 
> [...]
> 

Hi Adam,

You have some technical improvements that may be useful and they may
very well be actionable or simple to implement.

However, at no point in this can I understand how highlighting disdain
for certain people (or their "title") would help with anything in
this endeavour (or any other cause for that matter).  From my PoV, it
just seems needlessly unprofessional and hostile towards others - plus
now we get to spend time dissecting that rather than focusing on what
you wanted to change.


Consider the opening part of the mail (2nd + 3rd paragraph):

> I'll concentrate at XFCE, as I consider GNOME3's UI a lost cause, thus I'd
> find it hard to bring constructive arguments there.
> 
> I also hate with a passion so-called "UX designers".  Those are folks who
> created Windows 8's Metro tiles, lightgray-on-white "Material Design" flat
> unmarked controls, and so on.  They work from a Mac while not having to
> actually use what they produce.

Rewritten as:

"""
My suggestions below will concentrate on XFCE because I use that myself.
 Though most of these changes are probably useful to other desktop
environments as well.
"""

(Replace "use" with "tested" if you do not actually use XFCE regularly;
I assumed you did but apologies if I guessed wrong)


This rewrite would have made a world of difference for how I perceived
the email.

Thanks,
~Niels