Re: unsigned repositories

2019-08-05 Thread David Kalnischkies
On Mon, Jul 29, 2019 at 10:53:45AM +0200, Johannes Schauer wrote:
> squeeze ended, we finally were able to remove a few hundred lines of code from

Julian is hoping that removing support for unsigned repositories would
do the same for us, with the added benefit that for apt these lines are
security-related … 😉


So far all use cases mentioned here seem to be local repositories
though. Nobody seems to be pulling unsigned repositories over the
network [for good reasons]. So perhaps we can agree on dropping support
for unsigned repositories for everything except copy/file sources?

The other thing is repositories without a Release file, which seems to
be something used (legitimately) only by the same class of repositories, too.
That is in my opinion the more useful drop, as the logic to decide whether
a file can be acquired with(out) hashes is very annoying and
would probably benefit a lot from an "if not-local: return must-hashes".


These should at least help with the security aspect, even if I am not sure
yet how that could be refactored to work [but that code area needs lots of
love anyhow, as in recent years I was just busy adding jetpacks and
nitro-injection to this horse-drawn vehicle to keep it afloat; it would be
nice if we could retire at least the horses eventually].


> > Both sbuild and autopkgtest are designed to target multiple Debian releases
> > including the oldest release that still attracts uploads (currently jessie,
> > for LTS), so relying on "apt-get install --with-source" is undesirable.
> > sbuild also uses aptitude instead of apt (for its more-backports-friendly
> > resolver) in some configurations, and that doesn't have --with-source.

Well, we are now building the tools we will still be using in ten years
on what will by then be a really old and clunky bullseye LTS release,
rushing for a time machine so that we will would have had done this or
that. Let's pretend for a minute we could avoid that (or: … will be
could have had? …).

What is it that you need? Sure, a local repository works, but it
sounds painful and clunky to set up and like a workaround already, so in
effect you don't like it and we don't like it either; it just happens to
work so-so for both of us for the time being.


> Yes. In sbuild we also cannot use other apt features like "apt-get build-dep"
> because sbuild allows one to mangle the build dependencies, so it works with
> dummy packages. So sbuild will have to keep creating its own repository.

Julian did "apt satisfy" recently, and build-dep supports dsc files as
input, so naively speaking, could sbuild just write a dsc file the same
way it now writes a Sources file? Also, --with-source actually
allows adding Packages/Sources files as well; I use them for
simulations only, but in theory they should work "for real", too.
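
Something along these lines (an untested sketch; the package and file
names are made up, and the exact minimum apt versions are from memory):

   # resolve a plain dependency string (needs a recent apt with "apt satisfy")
   apt satisfy 'libfoo-dev (>= 1.0), bar | baz'

   # or point build-dep at a .dsc written by sbuild
   apt-get build-dep ./hello_2.10-1.dsc

   # or feed an sbuild-generated index in directly
   apt-get --with-source=./Packages install sbuild-build-depends-dummy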


Best regards

David Kalnischkies




Re: unsigned repositories

2019-08-05 Thread David Kalnischkies
On Mon, Jul 29, 2019 at 08:01:47AM +0100, Simon McVittie wrote:
> sbuild also uses aptitude instead of apt (for its more-backports-friendly
> resolver) in some configurations, and that doesn't have --with-source.

JFTR: aptitude (and all other libapt-based frontends) can make use of
that feature via the config option APT::Sources::With; the command-line
flag is just syntactic sugar.

So, since aptitude has -o (I think), you could e.g. say
   -o APT::Sources::With::=/path/to/file.deb
or, if all else fails, use a config file of course.
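
Such a config file would look roughly like this (hypothetical file name;
the trailing "::" appends to the list):

   # /etc/apt/apt.conf.d/99with-source
   APT::Sources::With:: "/path/to/file.deb";
   APT::Sources::With:: "/path/to/Packages";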


Best regards

David Kalnischkies




Re: What compiler flags are allowed in unstable/testing

2019-08-05 Thread Simon Richter
Hi Joël,

> What compiler flags are allowed for optimization with gcc in debian
> unstable/testing?

I'd go with "none". The default CPU architecture in the compiler is the
architecture baseline.

> My project has got support for builtin vector functions. And we don't
> enable any compiler flags to architecture specific CPU extensions.

On amd64, SSE2 is part of the baseline; 64-bit POWER should have some
variant of AltiVec.

If you want to use instructions that are not in this set, you can use
ld.so's HWCAP feature to load shared libraries depending on flags in
/proc/cpuinfo.
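
Roughly like this (a sketch; libfoo is a made-up library, and the exact
set of hwcap/platform subdirectory names depends on the glibc version,
see ld.so(8)):

   # baseline build, usable on every amd64 machine
   /usr/lib/x86_64-linux-gnu/libfoo.so.1

   # optimised build, only loaded by ld.so on CPUs that advertise the
   # matching capability/platform (directory name is illustrative)
   /usr/lib/x86_64-linux-gnu/haswell/libfoo.so.1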

   Simon



Re: unsigned repositories

2019-08-05 Thread Johannes Schauer
Hi,

Quoting David Kalnischkies (2019-08-05 10:09:09)
> So far all usecases mentioned here seem to be local repositories though.
> Nobody seems to be pulling unsigned repositories over the network [for good
> reasons]. So perhaps we can agree on dropping support for unsigned
> repositories for everything expect copy/file sources?

This would work for sbuild.

> The other thing is repositories without a Release file, which seems to be
> something used (legally) by the same class of repositories only, too.  That
> is in my opinion the more useful drop as the logic to decide if a file can be
> acquired with(out) hashes or not is very annoying and would probably benefit
> a lot from an "if not-local: return must-hashes"

From the sbuild perspective it would be nice not to have to generate the
hashes we need to create a Release file for the local repository anymore.
But sbuild could only implement this feature once even the apt in oldstable
supports it. By that time there will probably be more interesting ways for
sbuild to satisfy the dependencies it generates (see below).
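
For context, the hash generation we would love to drop is essentially the
usual apt-ftparchive dance for a local repository (a rough sketch, not
sbuild's actual code):

   cd /path/to/local-repo
   apt-ftparchive packages . > Packages
   apt-ftparchive release . > Release   # computes the MD5/SHA* sums apt expects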

> What is it what you need? Sure, a local repository works, but that sounds
> painful and clunky to setup and like a workaround already, so in effect you
> don't like it and we don't like it either, it just happens to work so-so for
> both of us for the time being.

Yes, it would be nice not to have to set up that local repository and
instead just ask apt to satisfy the dependencies we generate. But again,
any feature of apt we use must also be available in oldstable.

> > Yes. In sbuild we also cannot use other apt features like "apt-get
> > build-dep" because sbuild allows one to mangle the build dependencies, so
> > it works with dummy packages. So sbuild will have to keep creating its own
> > repository.
> Julian did "apt satisfy" recently and build-dep supports dsc files as input,
> so naively speaking, could sbuild just write a dsc file the same way it is
> now writing a Sources file? Also, --with-source actually allows to add
> Packages/Sources files as well, I use them for simulations only, but in
> theory they should work "for real", too.

Yes, that's all great! Now we just have to wait a couple of releases until
oldstable supports these features. Until then, please don't break what we
currently use without an alternative that *also* works well on oldstable.

Thanks!

cheers, josch




Re: unsigned repositories

2019-08-05 Thread Simon McVittie
On Mon, 05 Aug 2019 at 10:09:09 +0200, David Kalnischkies wrote:
> So far all usecases mentioned here seem to be local repositories
> though. Nobody seems to be pulling unsigned repositories over the
> network [for good reasons].

On CI systems at work, I've often found it useful to use [trusted=yes]
over https: relying on CA-cartel-signed https certificates provides weaker
security guarantees than a signed repository, but it is a lot easier to
set up for experiments like "use the derivative's official apt repository,
but overlay smcv's latest test packages, so we can test the upgrade path
before putting them into production".
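
Concretely that is just a one-line sources.list entry (the host and suite
names here are made up):

   deb [trusted=yes] https://apt.example.com/smcv-overlay experimental-overlay main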

I also currently use [trusted=yes] over the virtual network link between
the host system and a test/build VM, as a way to satisfy dependencies that
are not satisfiable in the target suite yet (testing against packages that
are stuck in NEW or not uploaded yet) or have been selectively mirrored
from backports or experimental (where pinning would normally prevent
use of backports or experimental packages, unless we use apt-cudf like
the Debian buildds do, or replace apt with aptitude like the -backports
buildds do).

While I *could* use a GPG-signed repository for both of these, that
would require generating GPG keys (draining the system entropy pool
in the process) and installing the public keys as trusted on the test
system, and I'd have to be careful to make sure that generating and using
these test GPG keys doesn't have side-effects on the more important GPG
configuration that I use to sign uploads.

Equally, I *could* make the entire repository available as file:/// in the
VM, but the autopkgtest virtualization interface (which I currently use
for virtualization) doesn't provide direct filesystem access from qemu
VMs to directories on the host (only recursive copying via
tar | ssh | tar), and ideally I don't want to have to copy *everything*
(e.g. there's no need to copy i386 packages when I'm building for amd64
and vice versa).

> The other thing is repositories without a Release file, which seems to
> be something used (legally) by the same class of repositories only, too.

For anything beyond quick experiments I normally use reprepro, so I have
a package pool and an unsigned Release file at least.
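
The reprepro side is little more than a conf/distributions stanza; leave
out SignWith: and you get exactly that unsigned Release file. A minimal
sketch with made-up values:

   Codename: my-overlay
   Architectures: amd64 source
   Components: main
   Description: throwaway overlay repository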

However, when the Open Build Service (which we use a lot at work)
exports projects as apt archives, reprepro requires per-repository
configuration (to tie together potentially multiple OBS projects into
one apt repository, and select an appropriate signing key), so the
normal case seems to be that permanent/long-term/sysadmin-controlled
apt repositories use reprepro, but anything short-term or
purely automatic (like personal branches used to test a batch
of changes) uses flat repositories, typically something like
"deb https://build.example.com/home:/smcv:/branches:/mydistro:/1.x:/main:/my-topic-branch ./".
I'll check whether OBS generates an unsigned Release file for those,
or no Release at all.

Because reprepro is designed for a stable, persistent repository that
is accessed by production apt clients, it isn't very happy about having
version numbers "go backwards", having binary packages disappear, etc.,
but tolerating those is often necessary while you are getting prerelease
packages right, especially if not everyone in the team is a Debian
expert. For test-builds that are expected to be superseded multiple
times, have some of their changes rejected at code review, etc., it's a
lot more straightforward to have the repository be recreated anew every
time: yes, this can break apt clients, but if those apt clients are all
throwaway test systems anyway (as they ought to be if you are testing
unreviewed/untested code), then that doesn't actually matter.

> What is it what you need? Sure, a local repository works, but that
> sounds painful and clunky to setup and like a workaround already, so in
> effect you don't like it and we don't like it either, it just happens to
> work so-so for both of us for the time being.

Here are some use-cases, variously from my own Debian contributions, my
contributions to salsa-ci-pipeline and my day job:

* Build a package that has (build-)dependencies in a "NotAutomatic: yes"
  suite (i.e. experimental), and put it through a
  build / autopkgtest / piuparts / etc. pipeline, ideally without needing
  manual selection of the packages that have to be taken from experimental,
  and ideally as close as possible to the behaviour of official experimental
  buildds, so that it does not succeed in local testing only to FTBFS on the
  official infrastructure.
  (gtk+4.0, which needs graphene from experimental, is a good example.
  sbuild can be configured to use apt-cudf, but autopkgtest and piuparts
  cannot; I need to open wishlist bugs about those. At the moment I use
  a local apt repo with selected packages taken from experimental as
  a workaround.)

* Build a package that has (build-)dependencies in a "NotAutomatic: yes",
  "ButAutomaticUpdates: yes" suite (i

Re: unsigned repositories

2019-08-05 Thread Simon McVittie
On Mon, 05 Aug 2019 at 10:11:07 +0200, David Kalnischkies wrote:
> On Mon, Jul 29, 2019 at 08:01:47AM +0100, Simon McVittie wrote:
> > sbuild also uses aptitude instead of apt (for its more-backports-friendly
> > resolver) in some configurations, and that doesn't have --with-source.
> 
> JFTR: aptitude (and all other libapt-based frontends) can make use of
> that feature via the config option APT::Sources::With, the commandline
> flag is just syntactic sugar.
> 
> So, as aptitude has I think -o you could e.g. say
>-o APT::Sources::With::=/path/to/file.deb
> or if all else fails a config file of course.

Thanks, I'll try this. This might provide a way to teach piuparts and
autopkgtest how to test proposed packages for experimental and -backports
without having to know ahead of time which dependencies need to come from
the overlay.

smcv



duplicate popularity-contest ID

2019-08-05 Thread Bill Allombert
Dear Debian developers,

Each Debian popularity-contest submitter is supposed to have
a different random 128-bit popcon ID.
However, the popularity-contest server
receives a lot of submissions with identical popcon IDs, which causes them
to be treated as a single submission.

I am not quite sure what the reason for this problem is.
Maybe people use prebuilt system images with a pregenerated
/etc/popularity-contest.conf file (instead of it being generated
by the popcon postinst).

I am not sure what to do about this.

Cheers,
-- 
Bill. 

Imagine a large red swirl here. 



Re: duplicate popularity-contest ID

2019-08-05 Thread Yao Wei (魏銘廷)


> On Aug 5, 2019, at 20:29, Bill Allombert  wrote:
> 
> I am not quite sure what it is the reason for this problem.
> Maybe people use prebuild system images with a pregenerated
> /etc/popularity-contest.conf file (instead of being generated
> by popcon postinst).

Could this be caused by Debian-live installer based on Calamares?

Yao Wei

(This email is sent from a phone; sorry for HTML email if it happens.)

Re: duplicate popularity-contest ID

2019-08-05 Thread Jonathan Carter
Hey Yao and Bill

On 2019/08/05 14:31, "Yao Wei (魏銘廷)" wrote:
>> I am not quite sure what it is the reason for this problem.
>> Maybe people use prebuild system images with a pregenerated
>> /etc/popularity-contest.conf file (instead of being generated
>> by popcon postinst).
> 
> Could this be caused by Debian-live installer based on Calamares?

Very unlikely: we don't install popularity-contest on live media, and
it's not added or removed at any point by Calamares, so installing
popularity-contest on a Calamares-live-installed system is essentially
the same as installing it on any other Debian system that didn't have
it before.

I also just double-checked whether any /etc/popularity-contest.conf
exists on Debian live images, and can confirm that it doesn't.

Bill, it might also be a good idea to ask on the debian-derivatives
mailing list; perhaps someone there might know. I don't suppose there
are any server logs with IPs that you could use to deduce which country
it's coming from?

-Jonathan

-- 
  ⢀⣴⠾⠻⢶⣦⠀  Jonathan Carter (highvoltage) 
  ⣾⠁⢠⠒⠀⣿⡁  Debian Developer - https://wiki.debian.org/highvoltage
  ⢿⡄⠘⠷⠚⠋   https://debian.org | https://jonathancarter.org
  ⠈⠳⣄  Be Bold. Be brave. Debian has got your back.



Re: duplicate popularity-contest ID

2019-08-05 Thread Andrey Rahmatullin
On Mon, Aug 05, 2019 at 02:29:33PM +0200, Bill Allombert wrote:
> Dear Debian developers,
> 
> Each Debian popularity-contest submitter is supposed to have
> a different random 128bit popcon ID.
> However, the popularity-constest server 
> receives a lot of submissions with identical popcon ID, which cause them
> to be treated as a single submission.
Do you mean just one ID or several IDs with multiple submissions each?

-- 
WBR, wRAR




Re: duplicate popularity-contest ID

2019-08-05 Thread merkys
On 2019-08-05 15:29, Bill Allombert wrote:
> However, the popularity-constest server 
> receives a lot of submissions with identical popcon ID, which cause them
> to be treated as a single submission.

I would suspect cloned VMs to have identical popcon IDs. In this case
the collation of identical IDs would be a desirable property, IMO.

Best,
Andrius

-- 
Andrius Merkys
Vilnius University Institute of Biotechnology, Saulėtekio al. 7, room V325
LT-10257 Vilnius, Lithuania



Bug#933965: ITP: ecbuild -- ECMWF build system based on CMake

2019-08-05 Thread Alastair McKinstry
Package: wnpp
Severity: wishlist
Owner: Alastair McKinstry 

* Package name: ecbuild
  Version : 3.0.3
  Upstream Author : ECMWF
* URL : https://github.com/ecmwf/ecbuild
* License : Apache
  Programming Lang: CMake
  Description : ECMWF build system based on CMake

ecBuild is a build system based on CMake macros, used to build ECMWF
software.
ECMWF is the European Centre for Medium-Range Weather Forecasts.

It is a build dependency of other ECMWF software being packaged.



Re: do packages depend on lexical order or {daily,weekly,monthly} cron jobs?

2019-08-05 Thread Ian Jackson
Marc Haber writes ("Re: do packages depend on lexical order or 
{daily,weekly,monthly} cron jobs?"):
> On Mon, 29 Jul 2019 18:42:34 +0100, Simon McVittie 
> wrote:
> >On Wed, 24 Jul 2019 at 20:14:22 +0200, Marc Haber wrote:
> >> Scripts with such dependencies will probably fail miserably on systems
> >> that are using systemd-cron instead of one of the "classic" cron
> >> packaes
> >
> >I thought so too, but they don't: systemd-cron uses run-parts for
> >cron.{hourly,etc.} too.
> 
> ... this imports many disadvantages of the old scheme into the new
> world. Philipp has explained in this thread very well.

Right.  That makes sense.

So it seems to me that there are the following options for systemd
users:

A. Continue to use run-parts.

   Disadvantages: Bundles the output together.
   Doesn't provide individual status.

   Advantages: No work needed.

B. Run each script as a single systemd timer unit.

   Disadvantages: Runs scripts in parallel (causing load spikes and
   other undesirable performance effects).  Imposes un-auditable and
   undebuggable new concurrency/ordering requirements on scripts (that
   is, scripts must cope with concurrent running and in any order).
   Ie, effectively, exposes systemd users to new bugs.

   Advantages: Unbundled output and individual status.

C. Provide a feature in systemd to do what is needed.
   Advantages: Everything works properly.
   Disadvantage: Requires writing new code to implement the feature.

D. Provide a version of run-parts which does per-script reporting.
   Advantages: Provides the benefits of (c) to sysvinit/cron users
   too.  Disadvantages: Requires design of a new reporting protocol,
   and implementation of that protocol in run-parts; to gain the
   benefits for systemd users, must be implemented in systemd too.
   (Likewise for cron users.)

I was wrong when I wrote earlier that sysvinit users would suffer any
significant trouble, regardless of which option above is picked.  The only
negative impact of (B) on sysvinit users is that maybe the cron
scripts they are running become more complex and fragile, in order to
try to satisfy the new concurrency requirement.

I would be interested in (D) if you thought it would be worthwhile.
Maybe the subunit v1 protocol or something :-).

> Maybe systemd-cron could be extended to be locally configurable
> whether to use run-parts, keeping the old semantics, or to generate
> individual timers.

There would be the question of the default.

With current code the options are:

A. Things run in series but with concatenated output and no individual
   status.

B. Things run in parallel, giving load spikes and possible concurrency
   bugs.

I can see few people who would choose (B).

People who don't care much about paying attention to broken cron
stuff, or people who wouldn't know how to fix it, are better served by
(A).  It provides a better experience.

Knowledgeable people will not have too much trouble interpreting
combined output, and maybe have external monitoring arrangements
anyway.  Conversely, heisenbugs and load spikes are still undesirable.
So they should also choose (A).

IOW, reliability and proper operation are more important than separated
logging and status reporting.

Ian.

-- 
Ian Jackson   These opinions are my own.

If I emailed you from an address @fyvzl.net or @evade.org.uk, that is
a private address which bypasses my fierce spamfilter.



Re: duplicate popularity-contest ID

2019-08-05 Thread Russ Allbery
Bill Allombert  writes:

> Each Debian popularity-contest submitter is supposed to have a different
> random 128bit popcon ID.  However, the popularity-constest server
>  receives a lot of submissions with identical
> popcon ID, which cause them to be treated as a single submission.

Are you getting lots and lots of submissions with one identical popcon ID,
or lots of cases of 10-20 systems duplicating different popcon IDs?  I
think those lead to different conclusions.

If it's the second, I agree with the suggestion of cloned VMs.  Containers
are making this a bit less common, but building out a system and then
cloning it repeatedly used to be the most common way of scaling a web
service in environments such as AWS.

-- 
Russ Allbery (r...@debian.org)   



Bug#933969: ITP: r-cran-corrplot -- Visualization of a Correlation Matrix

2019-08-05 Thread Steffen Moeller
Package: wnpp
Severity: wishlist
Owner: Steffen Moeller 

* Package name: r-cran-corrplot
* URL : https://cran.r-project.org/package=corrplot
* License : MIT
  Programming Lang: R
  Description : Visualization of a Correlation Matrix

Team maintained on https://salsa.debian.org/r-pkg-team/r-cran-corrplot



Bug#933973: general: System totally freeze all the time (Debian 10)

2019-08-05 Thread Nikolay Stoyanov
Package: general
Severity: important

Dear Maintainer,

*** Reporter, please consider answering these questions, where appropriate ***

   * What led up to the situation?
I don't know! This happens every day for different reasons.
   * What exactly did you do (or not do) that was effective (or
 ineffective)?
That is the point: totally different things. There is nothing specific;
it happens even when I do nothing.
   * What was the outcome of this action?
The system stops responding.
   * What outcome did you expect instead?

*** End of the template - remove these template lines ***



-- System Information:
Debian Release: 10.0
  APT prefers stable
  APT policy: (500, 'stable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 4.19.0-5-amd64 (SMP w/4 CPU cores)
Kernel taint flags: TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=en_CA.UTF-8, LC_CTYPE=en_CA.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_CA.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled



Bug#933990: ITP: bitwise -- Terminal based bit manipulator in ncurses

2019-08-05 Thread Ramon Fried
Package: wnpp
Severity: wishlist
Owner: Ramon Fried 
X-Debbugs-CC: debian-devel@lists.debian.org

* Package name: bitwise
  Version : 0.33
  Upstream Author : Ramon Fried 
* URL : https://github.com/mellowcandle/bitwise
* License : GPL3
  Programming Lang: C
  Description : Interactive bitwise operation in ncurses

Bitwise is a multi-base interactive calculator supporting dynamic base
conversion and bit manipulation. It's a handy tool for low-level
hackers, kernel developers and device driver developers.

Some of the features include:
 * Interactive ncurses interface.
 * Command-line calculator.
 * Individual bit manipulation.
 * Bitwise operations such as NOT, OR, AND, XOR, and shifts.



Bug#933973: marked as done (general: System totally freeze all the time (Debian 10))

2019-08-05 Thread Debian Bug Tracking System
Your message dated Mon, 5 Aug 2019 21:39:45 +0200 (CEST)
with message-id 
and subject line 
has caused the Debian Bug report #933973,
regarding general: System totally freeze all the time (Debian 10)
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact ow...@bugs.debian.org
immediately.)


-- 
933973: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=933973
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
--- Begin Message ---

On 05.08.19 at 19:24, Nikolay Stoyanov wrote:
> Package: general


   * What led up to the situation?
 I don't know! This happens every day by different reasons.
   * What exactly did you do (or not do) that was effective (or
 ineffective)?
 This is the point, a totally different things. There is not
 something specific, even when i do nothing.
   * What was the outcome of this action?
 The system stop responing.
   * What outcome did you expect instead?

-- System Information:
Debian Release: 10.0
  APT prefers stable
  APT policy: (500, 'stable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 4.19.0-5-amd64 (SMP w/4 CPU cores)
Kernel taint flags: TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=en_CA.UTF-8, LC_CTYPE=en_CA.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_CA.UTF-8 (charmap=UTF-8)

Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled


Now there's another important question that is completely unclear and
would need to be answered:


 * What do you think we can do now with this bug report and the
   information you have provided here?

There is *nothing* specific in your bug report except for the kernel 
version.


The idea of reporting a bug here is to get a bug fixed. But what's the 
bug? Is it broken hardware? Something else? What? How can Debian find 
out given the information?


Please do go to a Debian user IRC channel; people there can possibly
help you narrow the problem down. bugs.debian.org, however, is
*not* a support site for this. You would need to provide much more,
and much more specific, information here.


Thanks,
*t
--- End Message ---


Re: do packages depend on lexical order or {daily,weekly,monthly} cron jobs?

2019-08-05 Thread Philipp Kern

On 2019-08-05 17:34, Ian Jackson wrote:

> With current code the options are:
>
> A. Things run in series but with concatenated output and no individual
>    status.
>
> B. Things run in parallel, giving load spikes and possible concurrency
>    bugs; vs.
>
> I can see few people who would choose (B).
>
> People who don't care much about paying attention to broken cron
> stuff, or people who wouldn't know how to fix it, are better served by
> (A).  It provides a better experience.
>
> Knowledgeable people will not have too much trouble interpreting
> combined output, and maybe have external monitoring arrangements
> anyway.  Conversely, heisenbugs and load spikes are still undesirable.
> So they should also choose (A).
>
> IOW reliability and proper operation is more important than separated
> logging and status reporting.


If we are in agreement that concurrency must happen with proper locking
and not depend on accidental linearization, then identifying those
concurrency bugs is actually a worthwhile goal in order to achieve
reliability, is it not? I thought you would be the first to acknowledge
that bugs are worth fixing rather than sweeping them under the rug. We
already identified that parallelism between the various stages is
undesirable. With a systemd timer you can declare conflicts as well as a
linearization if so needed.
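
As a sketch of what I mean (the directives are real systemd ones, the unit
names are made up; whether Conflicts= is also wanted depends on the exact
semantics one is after):

   # cron-daily-bar.service (illustrative)
   [Unit]
   Description=daily bar job
   # if both jobs end up queued at the same time, run this one strictly
   # after the foo job instead of in parallel with it
   After=cron-daily-foo.service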


I also question the "knowledgeable people will not have too much
trouble" part. Export state as granularly as possible and there is no
guesswork required. I have no doubt that my co-workers can do this, but
I want their lives to be as easy as possible.


Similarly I wonder what the external monitoring should be apart from 
injecting fake jobs around every run-parts unit in this case. Replacing 
run-parts with something monitoring-aware? Then why not take the tool 
that already exists (systemd)?


And finally, the load spikes: upthread it was mentioned that
RandomizedDelaySec exists. Generally this should be sufficient to even
out such effects. I understand that there is a case where you run a lot
of unrelated VMs that you cannot control. In other cases, like laptops
and desktops, it is very likely much more efficient to generate the load
spike and complete the task as fast as possible in order to return to
the low-power state of (effectively) waiting for input. I suspect that
there is a conflict between the two that could be dealt with by
encouraging liberal use of DefaultTimerAccuracySec at the system level.
I understand that Debian inherently does not distinguish between the two
cases. I'd still expect a cloud/compute provider to offer default images
in any case that could be preconfigured appropriately.
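
To make that concrete, a hedged sketch of the timer side (the directives
are real systemd options, the values are made up):

   # something-daily.timer (illustrative)
   [Timer]
   OnCalendar=daily
   # spread the start over a window to avoid synchronised load spikes
   RandomizedDelaySec=1h
   # allow coalescing with other timers so the machine can stay in
   # low-power states longer
   AccuracySec=1h
   Persistent=true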


I apologize that I think of this in terms of systemd primitives. But the 
tool was written for a reason and a lot of thought went into it.


Kind regards
Philipp Kern



Bug#934004: ITP: golang-starlark -- Interpreter for the Starlark configuration language

2019-08-05 Thread Emanuel Krivoy
Package: wnpp
Severity: wishlist
Owner: Emanuel Krivoy 

* Package name: golang-starlark
  Version : 0.0~git20190717.fc7a7f4-1
  Upstream Author : The Bazel Authors
* URL : https://github.com/google/starlark-go
* License : BSD-3-clause
  Programming Lang: Go
  Description : Interpreter for the Starlark configuration language

Starlark is a dialect of Python intended for use as a configuration language.
Like Python, it is an untyped dynamic language with high-level data types,
first-class functions with lexical scope, and garbage collection. Unlike
CPython, independent Starlark threads execute in parallel, so Starlark
workloads scale well on parallel machines. Starlark is a small and simple
language with a familiar and highly readable syntax. You can use it as an
expressive notation for structured data, defining functions to eliminate
repetition, or you can use it to add scripting capabilities to an existing
application.

A Starlark interpreter is typically embedded within a larger application, and
the application may define additional domain-specific functions and data types
beyond those provided by the core language.

This library is a dependency of delve (github.com/go-delve/delve, ITP in
#932835). If possible I'd like this package to be co-maintained/sponsored by
the Go Team.



Re: duplicate popularity-contest ID

2019-08-05 Thread Marco d'Itri
On Aug 05, Bill Allombert  wrote:

> Each Debian popularity-contest submitter is supposed to have
> a different random 128bit popcon ID.
> However, the popularity-constest server 
> receives a lot of submissions with identical popcon ID, which cause them
> to be treated as a single submission.

> I am not quite sure what it is the reason for this problem.
> Maybe people use prebuild system images with a pregenerated
> /etc/popularity-contest.conf file (instead of being generated
> by popcon postinst).
Probably yes.

> I am not sure what to do about this.
Change popularity-contest to transmit the hostid after it has been
hashed with the content of /etc/machine-id.
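
Purely as a sketch (assuming the ID is still stored as MY_HOSTID in
/etc/popularity-contest.conf; the exact hash and truncation would be up
to the maintainers):

   . /etc/popularity-contest.conf   # provides MY_HOSTID
   printf '%s' "${MY_HOSTID}$(cat /etc/machine-id)" | sha256sum | cut -c1-32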

-- 
ciao,
Marco

