Re: restarting instanced systemd services on upgrade

2020-11-13 Thread Norbert Preining
Hi Simon,

thanks again!!

> I still think doing this as a user service (like pulseaudio, gpg-agent,
> dbus-daemon --session, rygel, flatpak-session-helper, tracker-store, etc.)
> would be a better approach to use by default: that sort of thing is exactly
> what `systemd --user` is designed for.

But isn't that what we are doing by shipping
/usr/lib/systemd/user/onedrive.service
?

Am I missing something here?

> Or perhaps it would make sense to have both implementations, like you said
> you've done, but swap round their enabledness: enable the user service by
> default, but provide example system units (disabled by default) that can
> be used by sysadmins who consider the advantages of running this as a
> system service to outweigh the disadvantages?

We don't enable **anything** by default since it requires interaction by
the user (log into the onedrive account, provide response URL).

That is also the reason why I don't see any use for a system service.

> I assume this package is a FUSE filesystem (or some other daemon) for access
> to Microsoft OneDrive? It might be useful to compare it with other

No, it is a user-space program that syncs a directory with the content
of the OneDrive cloud, similar to the Dropbox or Insync clients. It can
run in a one-shot sync mode, as well as in a monitor mode using inotify
events.
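
Roughly, from the command line (--monitor is what the unit files run;
--synchronize is the one-shot flag, from memory of the current upstream
version):

  $ onedrive --synchronize   # one-shot sync of the configured directory
  $ onedrive --monitor       # long-running monitor mode (inotify-based)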

> It looks as though the system service would have to be enabled manually
> on a per-user basis *anyway*, because the package can't know which users
> it is meant to be enabled for; so perhaps the best way would be something
> like this in README.Debian, and arranging the packaging so that it becomes
> true:

[...]

As I said, I think nothing should be automatically enabled from the
system side (equivalent of systemctl --user --global disable
onedrive.service), so the only information necessary would be

 Users can enable the service for themselves by running the command:
 
 $ systemctl --user enable onedrive.service

Everything else, i.e. the sysadmin enabling it for all users, will
simply not work due to the requirement of an initial interactive setup.

> > /lib/systemd/system/onedrive@.service
> > --
> ...
> > After=network-online.target
> > Wants=network-online.target
> > PartOf=onedrive.target
> > 
> > [Service]
> > ExecStart=/usr/bin/onedrive --monitor --confdir=/home/%i/.config/onedrive
> 
> If you are sure you want this to be a system service, this looks
> right. However, as you said, this assumes every user's home directory
> is /home/${LOGNAME}. It also won't respect the user's ${XDG_CONFIG_HOME}
> (it assumes the default ~/.config).
> 
> systemd documents that it sets ${HOME} for system services that
> have User= (at least in recent-ish versions, I'm not sure how

Ah, that is good to know. It wasn't like this back when I wrote the
current version of the unit files.

Upstream will probably not change that, since we (upstream) also support
older RHEL/Fedora etc., where ${HOME} might not be available, but for
the Debian packages I can use ${HOME}. Thanks.

> users set PATH, XDG_CACHE_HOME, XDG_CONFIG_HOME or LANGUAGE, nothing
> that onedrive does will respect those environment variables.

Besides --confdir, onedrive currently doesn't consult any of these
variables in any way.

> If onedrive wants to access the D-Bus session bus, for example to send

It does when started as a user service: it connects to the notification
system and sends notifications.

> /lib/systemd/system/onedrive.target. If you *don't* give the target
> an [Install] section:
> 
> # /lib/systemd/system/onedrive.target
> [Unit]
> Description=OneDrive clients
> Documentation=...
> # end of file
> 
> but instead give onedrive@.service a Wants on it:
> 
> # /lib/systemd/system/onedrive@.service
> [Unit]
> ...
> After=network-online.target
> Wants=network-online.target onedrive.target
> 
> then it will automatically become part of system boot if and only if
> at least one instance of onedrive@.service is enabled, which I think is
> probably the right thing to do.

Ahh, that is a good idea. That would keep things a bit cleaner.

> > /usr/lib/systemd/user/onedrive.service
> > --
> ...
> > After=network-online.target
> > Wants=network-online.target
> > PartOf=onedrive.target
> 
> These will not have any effect (unless you have created a

OK, so better to remove them; they're just useless.

> service managers, with entirely separate ecosystems of units: neither
> can see units that were configured for the other.

OK, so that will leave a user-unit-based onedrive process running until
the user logs out.

And then, again, this might be a problem if combined with lingering etc.
etc.

Bottom line for me: it seems there is no reliable way to restart user
services (services started via systemctl --user) other than waiting for
a reboot. And this could again make for some interesting postinst script
code...

Looking at the insync postinst [...]

Re: restarting instanced systemd services on upgrade

2020-11-13 Thread Simon McVittie
On Fri, 13 Nov 2020 at 21:32:37 +0900, Norbert Preining wrote:
[I wrote:]
> > I still think doing this as a user service
> > would be a better approach to use by default
> 
> But isn't that what we are doing by shipping
>   /usr/lib/systemd/user/onedrive.service
> ?

Yes it is, but you started this thread by asking for advice about how
to restart an instanced *system* service on upgrade, which gave me the
impression that you consider the system service to be what you recommend
to users as the most normal way to run this onedrive service.

If I'm understanding correctly, each user who wants to run this service
in monitor mode needs to either:
- enable it as a systemd user service (using the unit in
  /usr/lib/systemd/user)
or
- (ask their sysadmin to) enable it as an instanced system service with
  instance name = their username (using the unit in /lib/systemd/system),
or
- run it in some way that doesn't involve systemd
but they need to choose exactly one of those ways, and it is pointless
to have more than one for the same user. Is that right?

If I'm correct to think that, then I would say that you should
recommend the user-service as the most-preferred way, with the others
as alternatives for people with unusual needs.

> We don't enable **anything** by default since it requires interaction by
> the user (log into the onedrive account, provide response URL).

That's reasonable. We don't have a particularly good answer for what to do
with per-user services that are only conditionally useful: mpd is another
example of this (at the moment it autostarts for all users, but is only
practically useful for users who have configured it).

> That is also the reason why I don't see any use for a system service.

Now I'm confused. You started this thread asking for advice about how to
restart instanced system services, but now you say you don't see any use
for a system service.

Is what you mean here: a user-service is a viable route to use this
program; and an instanced system service that is run per user and
behaves as though it wants to be a user-service is also viable; but you
see no use for a system service that is genuinely running on behalf of
the entire multi-user system, similar to atd.service or cups.service
or rsyslog.service?

If that's what you mean, then yes, I agree; but any unit in
/lib/systemd/system is, by definition, a system service.

> > I assume this package is a FUSE filesystem (or some other daemon)
> 
> No, it is a user space program that syncs a directory with the content
> in the OneDrive cloud, more similar to the dropbox client or insync
> client. It can run in one-shot sync mode, as well as monitor mode using
> notify events.

OK, so it behaves like a daemon when run with --monitor, which is the case
we're interested in here.

> As I said, I think nothing should be automatically enabled from the
> system side (equivalent of systemctl --user --global disable
> onedrive.service), so the only information necessary would be
> 
>  Users can enable the service for themselves by running the command:
>  
>  $ systemctl --user enable onedrive.service

OK, that makes sense. You should be able to achieve this with:

dh_installsystemduser --no-enable

(in a sufficiently new compat level) if you aren't doing that already.
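
Untested, but roughly this in debian/rules (standard debhelper override
naming):

override_dh_installsystemduser:
	dh_installsystemduser --no-enable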

> Bottomline for me is that it seems there is no reliable way to restart
> user services (services started via systemctl --user) besides waiting
> for reboot.

I think you are correct. If there is a way to do this in future, it is
most likely to involve dh_installsystemduser, so making sure that your
units are registered with dh_installsystemduser would be a good step
towards this.

A few packages use killall or pkill to tell user services to restart or
reload, but that isn't exactly robust: it relies on rummaging through
/proc and assuming that anything with a matching executable name is in
fact the service you had in mind.

Using a full path match, like gvfs-backends.postinst does, is probably
the least-bad way to do this on a per-package basis - but I hesitate to
say "best", only "least bad".

As far as I know, needrestart-session is the closest we have to an
actually good solution. You might well think it is too trigger-happy
about what to restart to want to use it on your own system, and that's
fair, but the design at least seems reasonable. My understanding is that
it works by running a daemon in the user session that receives advisory
notifications from a matching system-level component. The per-user daemon
makes its own decisions about what to (offer to) restart for that user,
which means the privileged system-level part is as small as possible,
and is not trying to "look into" distrusted session processes.

Honestly, the more experience I get in distro development, the more
convinced I am that "reboot to update" is the robust thing to do for
end-user systems. For developers who are using a rolling release or
installing their own patched/updated packages [...]

Re: restarting instanced systemd services on upgrade

2020-11-13 Thread Norbert Preining
Hi Simon,

> Yes it is, but you started this thread by asking for advice about how
> to restart an instanced *system* service on upgrade, which gave me the
> impression that you consider the system service to be what you recommend
> to users as the most normal way to run this onedrive service.

Well, I don't have a strong feeling about this. I guess because *I* use
the instanced *system* service I somehow tend to prefer it. I don't need
the notifications, but I want to sync stuff even when I am not logged
in.

> If I'm understanding correctly, each user who wants to run this service
> in monitor mode needs to either:
> - enable it as a systemd user service (using the unit in
>   /usr/lib/systemd/user)
> or
> - (ask their sysadmin to) enable it as an instanced system service with
>   instance name = their username (using the unit in /lib/systemd/system),
> or
> - run it in some way that doesn't involve systemd

Yes, exactly.

> but they need to choose exactly one of those ways, and it is pointless
> to have more than one for the same user. Is that right?

Yes, absolutely. In fact, running two instances via different methods
will crash one of them ;-)

> If I'm correct to think that, then I would say that you should
> recommend the user-service as the most-preferred way, with the others
> as alternatives for people with unusual needs.

Ok, thanks. I will adjust the documentation accordingly.

> > That is also the reason why I don't see any use for a system service.
> 
> Now I'm confused. You started this thread asking for advice about how to

Sorry for the confusion: I was thinking of a non-instanced system
service (/lib/systemd/system/onedrive.service). We don't ship it, and I
don't think it makes sense to ship it.

The *instanced* system service OTOH is something I would keep (at least
for me ;-)

> Is what you mean here: a user-service is a viable route to use this
> program; and an instanced system service that is run per user and
> behaves as though it wants to be a user-service is also viable; but you
> see no use for a system service that is genuinely running on behalf of
> the entire multi-user system, similar to atd.service or cups.service
> or rsyslog.service?

Yes. That explains it correctly.

> OK, that makes sense. You should be able to achieve this with:
> 
> dh_installsystemduser --no-enable

Hmm, I have
override_dh_installinit:
	dh_installinit --no-start --no-enable

but the actual .service files are already installed by upstream's make
install.

Is there a considerable difference?

> A few packages use killall or pkill to tell user services to restart or
> reload, but that isn't exactly robust: it relies on rummaging through

Agreed (and I don't like it).

> Honestly, the more experience I get in distro development, the more
> convinced I am that "reboot to update" is the robust thing to do for

Agreed on that.

> >   su - $user -c "insync quit"
> 
> Definitely please don't do this. Having system-level code reach into

I don't do it; that is too crazy a piece of code for me to put in a
postinst. But it is what insync ships (not in Debian!).

All the best

Norbert

--
PREINING Norbert  https://www.preining.info
Accelia Inc. + IFMGA ProGuide + TU Wien + JAIST + TeX Live + Debian Dev
GPG: 0x860CDC13   fp: F7D8 A928 26E3 16A1 9FA0 ACF0 6CAC A448 860C DC13



Re: restarting instanced systemd services on upgrade

2020-11-13 Thread Simon McVittie
On Sat, 14 Nov 2020 at 00:06:26 +0900, Norbert Preining wrote:
> > OK, that makes sense. You should be able to achieve this with:
> > 
> > dh_installsystemduser --no-enable
> 
> Hmm, I have
> override_dh_installinit:
> 	dh_installinit --no-start --no-enable
> 
> but the actual .service files are already installed by upstream's make
> install.
> 
> Is there a considerable difference?

dh_installsystemduser is for systemd user units in /usr/lib/systemd/user.
It only acts on user units, and doesn't touch the system (pid 1) instance
of systemd.

In debhelper compat levels >= 11, systemd system units in
/lib/systemd/system are handled by dh_installsystemd, which conversely
only acts on *system* units, and doesn't touch the *user* instance
of systemd.

In debhelper compat levels <= 10, systemd system units are handled by
a mixture of dh_installinit, dh_systemd_enable and dh_systemd_start.
I'd recommend using compat level 12 or later if you can - it'll be
less confusing.

It's OK (preferred, even) that the .service files come from upstream's
make install: debhelper knows how to look for systemd units in debian/tmp
after installation. If you add .target units, those are probably something
to contribute upstream too: I don't see any reason why they should be
Debian-specific.

The --no-enable option to dh_installsystemduser is the equivalent of
the options of the same name in dh_installinit and dh_installsystemd.
There's no --no-start option for dh_installsystemduser, because
dh_installsystemduser doesn't start services anyway.
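
Either way, from a user's point of view enabling it stays as simple as:

$ systemctl --user list-unit-files onedrive.service   # present, disabled
$ systemctl --user enable --now onedrive.service      # enable and start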

smcv



Re: NEW queue almost empty

2020-11-13 Thread Federico Ceratto
> > > Agreed, that's a huge work done, thanks to all FTP Masters!

Thanks for all the work!

If people are interested, there's a little chart I put together:
https://people.debian.org/~federico/new_queue/

(It's manually updated because it requires ssh-ing to coccia.d.o)

Bye



Re: Split Packages files based on new section "buildlibs"

2020-11-13 Thread Tomas Pospisek
Hi Antonio (and anybody else that understands the technical problem 
involved here),


I've been reading the whole thread, and it seems to me that the reason
why Rust/Go build-time "libraries" need to be handled differently from
all the other existing stuff in the world, and why "no user ever wants
to use" the Debian-provided build-time Rust/Go libraries, has not been
spelled out in plain, comprehensible English yet.


So, since you seem to understand a bit about the technical problem
involved here, I'd very much appreciate it if you could spell it out. I
think it would benefit the project, as then everybody would be able to
understand what this new section is about.


So let me ask a question that could maybe clear things up:

On 11.11.20 14:39, Antonio Terceiro wrote:


In the Rust world there is no such thing as installing a library
globally. A crate that provides a library can only be used to build
other stuff, and is never made available globally. "cargo install" only
applies to crates that provide binaries:

https://doc.rust-lang.org/cargo/commands/cargo-install.html


[I've read the cargo-install.html document in the past but not re-read 
it now]


So let's say user joe wants to code Go software that depends on Go's 
third party github.com/tazjin/kontemplate/templater package ("package" 
in Go's taxonomy not in Debian's!).


Then he'd `export GOPATH=~/src/go` and `go get
github.com/tazjin/kontemplate`. Go would then `git clone` everything
under `~/src/go/src/github.com/tazjin/kontemplate/`.


So far so good, and I think Rust has a similar mechanism with cargo, right?

Now suppose alice wants to package joe's software. She'll do the
above plus `go get github.com/joe/joes_app`. All of it will be under
`~/src/go/src/github`.


The naive thing to do now would be to move `~/src/go/src/` to 
`/usr/lib/go` and package that as `go-tazjin-kontemplate-dev_0.1.deb` or 
similar.


Debian's automatic build process for "joes_app" would first install
`go-tazjin-kontemplate-dev_0.1.deb`, then make a symlink from
`~/src/go/src/github.com/tazjin` (or `~/.local/go` or wherever Go
expects its stuff by default) to `/usr/lib/go/src/github.com/tazjin`,
and build, and be done.


A user wanting to develop software based on tazjin's stuff would do the 
same: `apt-get install go-tazjin-kontemplate-dev`, symlink, done.
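
Spelled out as commands (package name and paths hypothetical, just
mirroring the above):

$ sudo apt-get install go-tazjin-kontemplate-dev
$ export GOPATH=~/src/go
$ mkdir -p $GOPATH/src/github.com
$ ln -s /usr/lib/go/src/github.com/tazjin $GOPATH/src/github.com/tazjin
$ go build github.com/joe/joes_app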


This solution seems so obvious that somebody must have thought of it
before, so what is it that I (and I guess many Debianers) are missing?

*t



Re: Updating dpkg-buildflags to enable reproducible=+fixfilepath by default

2020-11-13 Thread Sune Vuorela
On 2020-10-27, Vagrant Cascadian  wrote:
> Though, of course, identifying the exact reproducibility problem would
> be preferable. One of the common issues is test suites relying on the
> behavior of __FILE__ returning a full path to find fixtures or other
> test data.

Has QFINDTESTDATA been adapted to work with this, or are we just
"lucky" that most packages don't actually build and run test suites?

I don't think that we should make it harder for maintainers to enable
test suites on their packages, when we can just record the build path in
the .buildinfo file and still have things reproducible.

/Sune



Re: Split Packages files based on new section "buildlibs"

2020-11-13 Thread Wolfgang Silbermayr
On 11/13/20 7:28 PM, Tomas Pospisek wrote:
> Hi Antonio (and anybody else that understands the technical problem involved
> here),
> 
> I've been reading the whole thread, and it seems to me that the reason why
> Rust/Go build-time "libraries" need to be handled differently from all the
> other existing stuff in the world, and why "no user ever wants to use" the
> Debian-provided build-time Rust/Go libraries, has not been spelled out in
> plain, comprehensible English yet.
> 
> So, since you seem to understand a bit about the technical problem involved
> here, I'd very much appreciate it if you could spell it out. I think it would
> benefit the project, as then everybody would be able to understand what this
> new section is about.

Hi Tomas,

The simple explanation is probably similar to Go: in the general case,
the whole source code of the project at hand and all of its dependencies
is compiled into a single static binary. Exceptions are e.g. libc et
al., which get linked dynamically by default (unless you build against
musl), and libraries that are used through FFI (foreign function
interface), e.g. GTK.

> So let me ask a question that could maybe clear things up:
> 
> On 11.11.20 14:39, Antonio Terceiro wrote:
> 
>> In the Rust world there is no such thing as installing a library
>> globally. A crate that provides a library can only be used to build
>> other stuff, and is never made available globally. "cargo install" only
>> applies to crates that provide binaries:
>>
>> https://doc.rust-lang.org/cargo/commands/cargo-install.html
> 
> [I've read the cargo-install.html document in the past but not re-read it now]
> 
> So let's say user joe wants to code Go software that depends on Go's third
> party github.com/tazjin/kontemplate/templater package ("package" in Go's
> taxonomy not in Debian's!).
> 
> Then he'd `export GOPATH=~/src/go` and `go get github.com/tazjin/kontemplate`.
> Go would then `git clone` everything under 
> `~/src/go/src/github.com/tazjin/kontemplate/`.
> 
> So far so good, and I think Rust has a similar mechanism with cargo, right?

The mechanism is similar, as you say. In Rust we have the difference
that there is effectively one single platform where the ecosystem lives:
crates.io. When all I want is to install an application that was
published there, I simply use the `cargo install <crate>` command, and
it gets compiled and installed into ~/.cargo/bin. To do that, the
sources of the dependencies and of the binary crate are downloaded from
crates.io into a cache directory managed by cargo, compiled from there,
and then installed.
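
For example (using ripgrep purely as an illustration of a binary crate):

$ cargo install ripgrep   # fetches sources into ~/.cargo/registry,
                          # builds, installs the binary into ~/.cargo/bin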

> Now suppose alice wants to package joe's software. She'll do the above plus
> `go get github.com/joe/joes_app`. All will be under `~/src/go/src/github`.
> 
> The naive thing to do now would be to move `~/src/go/src/` to `/usr/lib/go`
> and package that as `go-tazjin-kontemplate-dev_0.1.deb` or similar.
> 
> Debian's automatic build process for "joes_app" would first install
> `go-tazjin-kontemplate-dev_0.1.deb`, then make a symlink from
> `~/src/go/src/github.com/tazjin` (or `~/.local/go` or wherever Go expects its
> stuff by default) to `/usr/lib/go/src/github.com/tazjin` and build and be 
> done.
> 
> A user wanting to develop software based on tazjin's stuff would do the same:
> `apt-get install go-tazjin-kontemplate-dev`, symlink, done.

When we package Rust library crates into librust-*-dev binary packages,
the source code is put into the package inside a subdirectory of
/usr/share/cargo/registry, which can then be used during the build
instead of fetching it from crates.io. In that regard the procedure is
very similar. However, this is not a standard directory used by cargo;
instead we have to inject it using the /usr/share/cargo/bin/cargo
wrapper during the build.
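
A minimal debian/rules sketch of how that is wired up (assuming
dh-cargo's cargo buildsystem; the wrapper details are as described
above):

#!/usr/bin/make -f
# dh-cargo invokes the /usr/share/cargo/bin/cargo wrapper, which points
# the build at the sources under /usr/share/cargo/registry
%:
	dh $@ --buildsystem cargo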

If you were to develop software in Rust, you'd simply add the dependency
entry to your Cargo.toml file and let cargo's machinery do the magic.
Everything else, like using a library installed by a librust-*-dev
package, would just complicate and slow down your development workflow.
So these packages are effectively useless for developers who want to
write Rust software, even if they use Debian or a derivative for that.

The additional thing we have in Rust, which doesn't seem to have a
strict equivalent in other languages (yet), is features. These can be
provided by library crates, and a crate depending on such a library can
enable them; one popular example got mentioned frequently in this
thread. A feature is roughly a combination of an entry in the Cargo.toml
file and conditional compilation in the source.

So let's assume we have a library crate supporting different database
connection types; these would be enabled by features. If a program using
that library wants to use postgres, it enables the postgres feature of
said library.
This gets mapped to Debian packages either by having the default package
provide librust-*+feature-dev, or by creating a new empty metapackage
with [...]

Re: Updating dpkg-buildflags to enable reproducible=+fixfilepath by default

2020-11-13 Thread Vagrant Cascadian
On 2020-11-13, Sune Vuorela wrote:
> On 2020-10-27, Vagrant Cascadian  wrote:
>> Though, of course, identifying the exact reproducibility problem would
>> be preferable. One of the common issues is test suites relying on the
>> behavior of __FILE__ returning a full path to find fixtures or other
>> test data.
>
> Has QFINDTESTDATA been adapted to work with this, or are we just
> "lucky" that most packages don't actually build and run test suites?

Yes, QFINDTESTDATA is one of the primary (only?) issues with test suites
found in about 20-30 packages in the archive, according to the
archive-wide rebuild that was performed. For most of those packages
patches have been submitted, and some are already either fixed or marked
as pending.

If it could be fixed at the core for QFINDTESTDATA, that would be nicer
than fixing 20-30 packages individually, though we're not there right
now.


> I don't think that we should make it harder for maintainers to enable
> test suites on their packages

Similarly, I don't think we should make it harder to make hundreds of
packages more reproducible just to avoid a trivial workaround. We're
talking about a one-line change in debian/rules in a small number of
packages, as opposed to the 3 packages that need no changes.
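
Concretely, the one-liner is the standard DEB_BUILD_MAINT_OPTIONS toggle
in debian/rules, opting the package out of the feature:

export DEB_BUILD_MAINT_OPTIONS = reproducible=-fixfilepath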


> when we can just record the filepath in
> the build info and still have things reproducible.

Sure, rebuilding packages in the same path is a relatively easy
workaround to the build path issue. It does have some drawbacks:

* More complicated infrastructure to recreate the build environment.

* The .buildinfo files need to include the build path by default, which
  I believe is enabled for most buildd machines, but not the default for
  dpkg-genbuildinfo (and there may be some privacy concerns to making it
  the default).


So, it will take some work to make this change in Debian, and because it
seems to affect Qt-related test suites, it affects the teams working
with Qt more than the rest of Debian.

Because of that, I did individual builds for all the affected packages,
and submitted patches to workaround this issue to try and minimize the
extra work needed by any affected maintainers or teams.


I still would like for Debian to move forward with this change.


live well,
  vagrant




Re: NEW queue almost empty

2020-11-13 Thread Joerg Jaspert

On 15951 March 1977, Federico Ceratto wrote:


If people are interested, there's a little chart I put together:
https://people.debian.org/~federico/new_queue/
(It's manually updated because it requires ssh-ing to coccia.d.o)


Nice. We do have a whole bunch of graphs here:
https://ftp-master.debian.org/stat.html

Reachable from https://ftp-master.debian.org/

Would be nice if you could add yours to that one. If you are interested
and want to do so, https://salsa.debian.org/ftp-team/dak/ is the repo to
send the merge request to. You'll want to check what "dak graph" is
doing and maybe extend that, or look into hourly.functions in
config/debian and possibly extend the queuereport function (or put a new
one beside it and add it to hourly.tasks to get called).


--
bye, Joerg



Bug#974696: ITP: reuse -- tool for compliance with the REUSE recommendations

2020-11-13 Thread Stephan Lachnit
Package: wnpp
Severity: wishlist
Owner: Stephan Lachnit 
X-Debbugs-Cc: debian-devel@lists.debian.org

* Package name    : reuse
* Version : 0.11.1
* Upstream Author : Free Software Foundation Europe 
* URL : https://github.com/fsfe/reuse-tool
* License : GPL-3.0-or-later
* Programming Lang: Python
* Description : tool for compliance with the REUSE recommendations

REUSE is a standard by the FSFE for recording copyright information and
SPDX license IDs on every file in a project. This information can be used
to automatically create an SPDX copyright report.

Some dependencies need to be packaged. I would maintain it in the DPT.

Cheers,
Stephan Lachnit



Re: NEW queue almost empty

2020-11-13 Thread Paul Wise
On Fri, Nov 13, 2020 at 9:06 PM Joerg Jaspert wrote:

> Nice. We do have a whole bunch of graphs here:
> https://ftp-master.debian.org/stat.html

Some more somewhat related stats:

https://ircbots.debian.net/stats/package_new.png
https://people.debian.org/~eriberto/udd/top_500_new.html
https://odd.systems/debian-new/
https://people.debian.org/~anarcat/NEW.txt
https://ftp-master.debian.org/NEW-stats.yaml

These are from the stats page:

https://wiki.debian.org/Statistics

-- 
bye,
pabs

https://wiki.debian.org/PaulWise