Bug#935151: ITP: pympress -- dual-screen PDF reader used for presentations and public talks

2019-08-20 Thread Christopher Hoskin
Package: wnpp
Severity: wishlist
Owner: Christopher Hoskin 

* Package name: pympress
  Version : 1.4.1
  Upstream Author : Cimbali
* URL : https://github.com/Cimbali/pympress/
* License : GPL-2+
  Programming Lang: Python
  Description : dual-screen PDF reader for presentations and public talks

Pympress is a little PDF reader written in Python using Poppler for PDF
rendering and GTK+ for the GUI.

It is designed to be a dual-screen reader used for presentations and public
talks, with two displays: the Content window for a projector, and the Presenter
window for your laptop. It is portable and has been tested on various Mac,
Windows and Linux systems.

It comes with many great features, including:

  * support for embedded gifs and videos
  * text annotations displayed in the presenter window
  * native support for beamer's notes on the second screen!

Whilst there are other PDF viewers already packaged for Debian, most of these
don't include native support for displaying Beamer presentations with the slides
on one screen and the notes on the other. I use Pympress for this purpose 
myself.

Dspdfviewer is packaged for Debian and also provides built-in support for
Beamer, but has a more minimal GUI, making it harder for novices to use. It
does not appear to be actively developed any more (last commit 3 years ago),
whereas the latest version of Pympress was released 6 days ago. Pympress
also supports annotations.

I plan to maintain Pympress within the Python Applications Packaging Team
(PAPT). I am a Debian Developer, so do not require a sponsor.



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-20 Thread Sam Hartman
[trimming the cc]

> "Luke" == Luke Kenneth Casson Leighton  writes:

Luke> On Mon, Aug 19, 2019 at 7:29 PM Sam Hartman  wrote:
>> Your entire argument is built on the premise that it is actually
>> desirable for these applications (compilers, linkers, etc) to
>> work in 32-bit address spaces.

Luke> that's right [and in another message in the thread it was
Luke> mentioned that builds have to be done natively.  the reasons
Luke> are to do with mistakes that cross-compiling, particularly
Luke> during autoconf hardware/feature-detection, can introduce
Luke> *into the binary*.  with 40,000 packages to build, it is just
Luke> far too much extra work to analyse even a fraction of them]

Luke> at the beginning of the thread, the very first thing that was
Luke> mentioned was: is it acceptable for all of us to abdicate
Luke> responsibility and, "by default" - by failing to take that
Luke> responsibility - end up indirectly responsible for the
Luke> destruction and consignment to landfill of otherwise perfectly
Luke> good [32-bit] hardware?

I'd ask you to reconsider your argument style.  You're using very
emotionally loaded language, appeals to authority, and moralistic
language to create the impression that your way of thinking is the only
reasonable one.  Instead, let us have a discussion that respects
divergent viewpoints and that focuses on the technical trade-offs,
without using language like "abdicate responsibility" or implying that
those who prefer mmap are somehow intellectually inferior rather than
simply weighing the trade-offs differently than you do.

I'm particularly frustrated that you spent your entire reply moralizing
and ignored the technical points I made.

As you point out there are challenges with cross building.
I even agree with you that we cannot address these challenges and get to
a point where we have confidence a large fraction of our software will
cross-build successfully.

But we don't need to address a large fraction of the source packages.
There are a relatively small fraction of the source packages that
require more than 2G of RAM to build.
Especially given that in the cases we care about we can (at least today)
arrange to natively run both host and target binaries, I think we can
approach limited cross-building in ways that meet our needs.
Examples include installing cross-compilers for arm64 targeting arm32
into the arm32 build chroots when building arm32 on native arm64
hardware.
There are limitations to that we've discussed in the thread.
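
Concretely, a rough sketch of that shape (untested; gcc-arm-linux-gnueabihf
is the real Debian cross-toolchain package, but treat the exact steps as
illustrative):

    # inside an armhf build chroot, running on an arm64 kernel
    dpkg --add-architecture arm64
    apt update
    apt install gcc-arm-linux-gnueabihf:arm64

The compiler then runs as a native 64-bit process with a 64-bit address
space, while the rest of the build environment stays 32-bit.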

More generally, though, there are approaches that are less risky than
full cross building.  As an example, tools like distcc or making
/usr/bin/gcc be a 64-bit-hosted 32-bit cross compiler may be a lot less
risky than typical cross building.  Things like distcc can even be used
to run a 64-bit compiler for arm32 even in environments where the arm64
arch cannot natively run arm32 code.
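
Roughly, and only as a sketch (the hostname is made up, and distccd on
the helper must have the cross compiler in its path):

    # on the 64-bit helper machine
    apt install distcc gcc-arm-linux-gnueabihf

    # in the 32-bit build environment
    export DISTCC_HOSTS=big-helper.example.org
    make CC="distcc arm-linux-gnueabihf-gcc"

The 32-bit side then only preprocesses and links; the memory-hungry
compilation itself happens in a 64-bit process on the helper.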

Yes, there's work to be done with all the above.
My personal belief is that the work I'm talking about is more tractable
than your proposal to significantly change how we think about cross
library linkage.

And ultimately, if no one does the work, then we will lose the 32-bit
architectures.

--Sam



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-20 Thread Luke Kenneth Casson Leighton
On Tue, Aug 20, 2019 at 1:17 PM Sam Hartman  wrote:

> I'd ask you to reconsider your argument style.

that's very reasonable, and i appreciate the way that you put it.

> I'm particularly frustrated that you spent your entire reply moralizing
> and ignored the technical points I made.

ah: i really didn't (apologies for giving that impression).  i
mentioned that, earlier in the thread, cross-building had been
raised, and (if memory serves correctly) the build team had
already said it wasn't something that should be done lightly.

> As you point out there are challenges with cross building.

yes.  openembedded, one of the longest-standing
cross-compiling-capable distros (it has been able to target sub-16MB
systems as well as modern desktops for two decades), deals with it in a
technically amazing way, including:

* the option to override autoconf with specially-prepared config.sub
/ config.guess files
* the ability to compile through a command-line-driven hosted native
compiler *inside qemu*
* many more "tricks" which i barely remember.
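
a concrete flavour of the problem those tricks work around: when
cross-compiling, configure cannot run its test programs on the build
host, so the answers have to be supplied up-front.  a hypothetical
example using standard autoconf cache variables:

    ./configure --host=arm-linux-gnueabihf \
        ac_cv_func_malloc_0_nonnull=yes \
        ac_cv_file__dev_urandom=yes

get one of those pre-seeded answers wrong and the mistake is baked
directly *into the binary* - exactly the class of bug mentioned at the
top of the thread.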

so i know it can be done... it's just that, historically, the efforts
completely overwhelmed the (small) team, as the number of systems,
options and flexibility that they had to keep track of far exceeded
their resources.

> I even agree with you that we cannot address these challenges and get to
> a point where we have confidence a large fraction of our software will
> cross-build successfully.

sigh.

> But we don't need to address a large fraction of the source packages.
> There are a relatively small fraction of the source packages that
> require more than 2G of RAM to build.

... at the moment.  with there being a lack of awareness of the
consequences of the general thinking, "i have a 64 bit system,
everyone else must have a 64 bit system, 32-bit must be on its last
legs, therefore i don't need to pay attention to it at all", unless
there is a wider (world-wide) general awareness campaign, that number
is only going to go up, isn't it?


> Especially given that in the cases we care about we can (at least today)
> arrange to natively run both host and target binaries, I think we can
> approach limited cross-building in ways that meet our needs.
> Examples include installing cross-compilers for arm64 targeting arm32
> into the arm32 build chroots when building arm32 on native arm64
> hardware.
> There are limitations to that we've discussed in the thread.

indeed.  and my (limited) torture-testing of ld showed that it really
doesn't work reliably (i.e. there are bugs in binutils that are
triggered by large binaries greater than 4GB being linked *on 64-bit
systems*).

it's a mess.

> Yes, there's work to be done with all the above.
> My personal belief is that the work I'm talking about is more tractable
> than your proposal to significantly change how we think about cross
> library linkage.

i forgot to say: i'm thinking ahead over the next 3-10 years,
projecting the current trends.


> And ultimately, if no one does the work, then we will lose the 32-bit
> architectures.

... and i have a thousand 32-bit systems that i am delivering on a
crowdfunding campaign, the majority of which would go directly into
landfill.

l.



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-20 Thread Sam Hartman
> "Luke" == Luke Kenneth Casson Leighton  writes:

>> I even agree with you that we cannot address these challenges and
>> get to a point where we have confidence a large fraction of our
>> software will cross-build successfully.

Luke> sigh.

I don't really see the need for a sigh.
I think we can address enough of the challenges that we are not
significantly harmed.

>> But we don't need to address a large fraction of the source
>> packages.  There are a relatively small fraction of the source
>> packages that require more than 2G of RAM to build.

Luke> ... at the moment.  with there being a lack of awareness of
Luke> the consequences of the general thinking, "i have a 64 bit
Luke> system, everyone else must have a 64 bit system, 32-bit must
Luke> be on its last legs, therefore i don't need to pay attention
Luke> to it at all", unless there is a wider (world-wide) general
Luke> awareness campaign, that number is only going to go up, isn't
Luke> it?

I'd rather say that over time, we'll get better at dealing with cross
building more things and 32-bit systems will become less common.
Eventually, yes, we'll get to a point where 32-bit systems are
infrequent enough and the runtime software needs have increased enough
that 32-bit general-purpose systems don't make sense.
They will still be needed for embedded usage.

There are Debian derivatives that already deal better with building
subsets of the archive for embedded uses.
Eventually, Debian itself will need to either give up on 32-bit entirely
or deal with more of that itself.

I think my concern about your approach is that you're trying to change
how the entire world thinks.  You're trying to convince everyone to be
conservative in how much (virtual) memory they use.

Except I think that a lot of people actually only do need to care about
64-bit environments with reasonable memory.  I think that will increase
over time.

I think that approaches that focus the cost of constrained environments
onto places where we need constrained environments are actually better.

There are cases where it's actually easier to write code assuming you
have lots of virtual memory.  Human time is one of our most precious
resources.  It's reasonable for people to value their own time.  Even
when people are aware of the tradeoffs, they may genuinely decide that
code that is faster to write and conceptually simpler is the
right choice for them.  And having a flat address space is often
conceptually simpler than having what amounts to multiple types/levels
of addressing.  In this sense, having an on-disk record store/database
and indexing that and having a library to access it is just a complex
addressing mechanism.

We see this trade off all over the place as memory mapped databases
compete with more complex relational databases which compete with nosql
databases which compete with sharded cloud databases that are spread
across thousands of nodes.  There are trade offs involving complexity of
code, time to write code, latency, overall throughput, consistency, etc.

How much effort we go to in supporting 32-bit architectures as our
datasets grow (and building is just another dataset) involves the same
trade-offs in miniature.  And choosing to write code quickly is often
the best answer.  It gets us code, after all.

--Sam



Bug#935178: ITP: bcachefs-tools -- bcachefs userspace tools

2019-08-20 Thread Jonathan Carter
Package: wnpp
Severity: wishlist
Owner: Jonathan Carter 

* Package name: bcachefs-tools
  Version : 0.1.0
  Upstream Author : Kent Overstreet 
* URL : https://evilpiepirate.org/git/bcachefs-tools.git
* License : GPL-2+
  Programming Lang: C
  Description : bcachefs userspace tools

Note: this is very different from bcache-tools, which adds a bcache
caching device for existing filesystems. bcachefs is a new filesystem
that aims to be included in the mainline kernel
(https://lkml.org/lkml/2019/6/10/762).

Userspace tools for bcachefs, a modern copy-on-write, checksumming,
multi-device filesystem.

Note: The current Debian kernels do not come with bcachefs support; you
will have to use your own kernel or one provided by a third party that
contains bcachefs support.
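
As a taste of the tooling (a sketch only; the device name is a
placeholder and invocations may differ between bcachefs-tools versions):

  # create a bcachefs filesystem, then mount it (needs kernel support)
  bcachefs format /dev/sdX
  mount -t bcachefs /dev/sdX /mnt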

This package will be maintained under the debian group in salsa.



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-20 Thread Luke Kenneth Casson Leighton
On Tue, Aug 20, 2019 at 2:52 PM Sam Hartman  wrote:

> I think my concern about your approach is that you're trying to change
> how the entire world thinks.

that would be... how can i put it... an "incorrect" interpretation.  i
think globally - i always have.  i didn't start the NT Domains
Reverse-Engineering "because it would be fun", i did it because,
world-wide, i could see the harm that was being caused by the
polarisation between the Windows and UNIX worlds.

>  You're trying to convince everyone to be
> conservative in how much (virtual) memory they use.

not quite: i'm inviting people to become *aware of the consequences*
of *not* being conservative in how much (virtual) memory they use...
when the consequences of their focus on the task that is "today" and
is "right now", with "my resources and my development machine" are
extended to a global scale.

whether people listen or not is up to them.

> Except I think that a lot of people actually only do need to care about
> 64-bit environments with reasonable memory.  I think that will increase
> over time.
>
> I think that approaches that focus the cost of constrained environments
> onto places where we need constrained environments are actually better.
>
> There are cases where it's actually easier to write code assuming you
> have lots of virtual memory.

yes.  a *lot* easier.  LMDB for example simply will not work with
databases larger than the virtual address space (4GB at most on
32-bit), because it uses shared-memory copy-on-write B+-Trees (just
like BTRFS).

...oops :)
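
to make that concrete, a quick sketch using the python lmdb binding
(the path and size are made up).  the whole database is memory-mapped
up front, so map_size is bounded by the process's virtual address
space - fine on 64-bit, fails at open time in a 32-bit process:

    import lmdb

    # LMDB mmaps the entire database up front; an 8 GiB map cannot fit
    # in a 32-bit address space, so this open only succeeds on 64-bit.
    env = lmdb.open("/tmp/example-db", map_size=8 * 1024**3)
    with env.begin(write=True) as txn:
        txn.put(b"key", b"value")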

> Human time is one of our most precious
> resources.  It's reasonable for people to value their own time.  Even
> when people are aware of the tradeoffs, they may genuinely decide that
> code that is faster to write and conceptually simpler is the
> right choice for them.

indeed.  i do recognise this.  one of the first tasks that i was given
at university was to write a Matrix Multiply function that could
(hypothetically) extend well beyond the size of virtual memory (let
alone physical memory).

"vast matrix multiply" is known to be such a hard problem that you
just... do... not... try it.  you use a math library, and that's
really the end of the discussion!
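
for illustration, the standard trick is blocking: a toy out-of-core
multiply in python/numpy (file names made up).  note np.memmap itself
still relies on virtual memory, so a genuinely beyond-virtual-memory
version would use explicit reads, but the tiling structure is the
point:

    import numpy as np

    def blocked_matmul(a_path, b_path, c_path, n, block=1024):
        # operate on tiles small enough to fit in RAM while the full
        # n x n matrices live on disk
        A = np.memmap(a_path, dtype=np.float64, mode="r", shape=(n, n))
        B = np.memmap(b_path, dtype=np.float64, mode="r", shape=(n, n))
        C = np.memmap(c_path, dtype=np.float64, mode="w+", shape=(n, n))
        for i in range(0, n, block):
            for j in range(0, n, block):
                acc = np.zeros((min(block, n - i), min(block, n - j)))
                for k in range(0, n, block):
                    acc += A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
                C[i:i+block, j:j+block] = acc
        C.flush()

the real libraries get the blocking, i/o scheduling and numerics right,
which is a research field in itself.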

there are several other computer science problems that fall into this
category.  one of them is, ironically (given how the discussion
started), linking.

i really wish Dr Stallman's algorithms had not been ripped out of ld.


>  And having a flat address space is often
> conceptually simpler than having what amounts to multiple types/levels
> of addressing.  In this sense, having an on-disk record store/database
> and indexing that and having a library to access it is just a complex
> addressing mechanism.
>
> We see this trade off all over the place as memory mapped databases
> compete

... such as LMDB...

> with more complex relational databases which compete with nosql
> databases which compete with sharded cloud databases that are spread
> across thousands of nodes.  There are trade offs involving complexity of
> code, time to write code, latency, overall throughput, consistency, etc.
>
> How much effort we go to in supporting 32-bit architectures as our
> datasets grow (and building is just another dataset) involves the same
> trade-offs in miniature.  And choosing to write code quickly is often
> the best answer.  It gets us code, after all.

indeed.

i do get it - i did say.  i'm aware that software libre developers
aren't paid, so it's extremely challenging to expect any change - at
all.  they're certainly not paid by the manufacturers of the hardware
that their software actually *runs* on.

i just... it's frustrating for me to think ahead, projecting where
things are going (which i do all the time), and see the train wreck
that has a high probability of occurring.

l.



lintian-brush adds redundant data

2019-08-20 Thread Andreas Tille
Hi,

I observed that lintian-brush is adding a file debian/upstream/metadata
if it finds the fields Upstream-Name and Upstream-Contact in
debian/copyright.

What is the sense in duplicating, in another file, data that we can
already find in a well-established machine-readable file?
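
For illustration, with made-up values (the field names are those
defined by DEP-5 and DEP-12): debian/copyright already carries

  Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
  Upstream-Name: foo
  Upstream-Contact: Jane Doe <jane@example.com>

and lintian-brush then creates debian/upstream/metadata containing

  Name: foo
  Contact: Jane Doe <jane@example.com>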

Kind regards

   Andreas.

-- 
http://fam-tille.de



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-20 Thread Sam Hartman
> "\Luke" == Luke Kenneth Casson Leighton  writes:
Hi.
First, thanks for working through this with me.
I'm seeing a lot more clearly where you're coming from, and it is
greatly appreciated.
Luke> indeed.

Luke> i do get it - i did say.  i'm aware that software libre
Luke> developers aren't paid, so it's extremely challenging to
Luke> expect any change - at all.  they're certainly not paid by
Luke> the manufacturers of the hardware that their software
Luke> actually *runs* on.

Luke> i just... it's frustrating for me to think ahead, projecting
Luke> where things are going (which i do all the time), and see the
Luke> train wreck that has a high probability of occurring.

I'd like to better understand the train wreck you see.
What I see as likely is that the set of software that runs on 32-bit
arches will decrease over time, and the amount of time we'll spend
getting basic tools to work will increase.
We'll get some general approaches other folks have adopted into Debian
along the way.

Eventually, Debian itself will drop 32-bit arches.  i386 and proprietary
software and Steam will probably hold that off for a couple of releases.

32-bit support will continue for a bit beyond that in the Debian
ecosystem/Debian ports but with a decreasing fraction of the archive
building.

Meanwhile along the same path, there will be fewer 32-bit general
purpose systems in use.


Where is the train wreck?



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-20 Thread Luke Kenneth Casson Leighton
On Tue, Aug 20, 2019 at 3:31 PM Sam Hartman  wrote:
>
> > "\Luke" == Luke Kenneth Casson Leighton  writes:
> Hi.
> First, thanks for working with you.
> I'm seeing a lot more depth into where you're coming from, and it is
> greatly appreciated.

likewise.

> I'd like to better understand the train wreck you see.

that 32-bit hardware is consigned to landfill.  debian has a far
higher impact (as a leader, due to the number of ports) than i think
anyone realises.  if debian says "we're dropping 32 bit hardware",
that's it, it's done.

[btw, i'm still running my web site off of a 2006 dual-core XEON,
because it was one of the extremely hard-to-get ultra-low-power ones
that, at idle, the entire system only used 4 watts at the plug].

> Eventually, Debian itself will drop 32-bit arches.

that's the nightmare trainwreck that i foresee.

that means that every person who has an original raspberry pi who
wants to run debian... can't.

every person who has a $30 to $90 32-bit SBC with a 32-bit ARM core
from AMLogic, Rockchip, Allwinner - landfill.

marvell openrd ultimate: landfill.

the highly efficient dual-core XEON that runs the email and web
service that i'm using to communicate: landfill.

ingenic's *entire product range* - based as it is on MIPS32 - landfill.

that's an entire company's product range that's going to be wiped out
because of an *assumption* that all hardware is going "64 bit"!

to give you some idea of how influential debian really is: one of
*the* most iconic processors that AMD, bless 'em, tried desperately
for about a decade to End-of-Life, was the AMD Geode LX800.   the
reason why it wouldn't die is because up until around 2013, *debian
still supported it* out-of-the-box.

and the reason why it was so well supported in the first place was:
the OLPC project [they still get over 10,000 software downloads a week
on the OLPC website, btw - 12 years beyond the expected lifetime of
the OLPC XO-1]

i installed debian back in 2007 on a First International Computers
(FIC) box with an AMD Geode LX800, for Earth University in Costa Rica.
over 10 years later they keep phoning up my friend, saying "what the
hell kind of voodoo magic did you put in this box??  we've had 10
years worth of failed computers in the library next to this thing, and
this damn tiny machine that only uses 5 watts *just won't die*"

:)

there's absolutely no chance of upgrading it, now.

the embedded world is something that people running x86_64 hardware
just are... completely unaware of.  additionally, the sheer
overwhelming package support and convenience of debian makes it the
"leader", no matter the "statistics" of other distros.  other distros
cover one, *maybe* two hardware ports: x86_64, and *maybe* arm64 if
we're lucky.

if debian gives up, that leaves people who are depending on them
_really_ in an extremely bad position.

and yes, i'm keenly aware that that's people who *aren't* paying
debian developers, nor are they paying the upstream developers.

maybe it will be a good thing, if 32-bit hardware support in debian is
dropped.  it would certainly get peoples' attention that they actually
f*g well should start paying software libre developers properly,
instead of constantly sponging off of them, in a way that shellshock
and heartbleed really didn't grab people.

at least with shellshock, heartbleed etc. there was a software "fix".
dropping 32 bit hardware support, there *is* no software fix.

l.



Re: Bypassing the 2/3/4GB virtual memory space on 32-bit ports

2019-08-20 Thread Michael Stone

On Tue, Aug 20, 2019 at 04:21:43PM +0100, Luke Kenneth Casson Leighton wrote:

that 32-bit hardware is consigned to landfill.  debian has a far
higher impact (as a leader, due to the number of ports) than i think
anyone realises.  if debian says "we're dropping 32 bit hardware",
that's it, it's done.


That's their decision, but I think the correct answer is that 32 bit 
hardware is rapidly moving to a place where it's not applicable to a 
general purpose operating system. From your long list of hardware, 
there's not much that I'd want to run firefox on, for example. 

So, if your hardware isn't right for a general purpose OS...run a 
specialized OS. netbsd still has a vax port...



to give you some idea of how influential debian really is: one of
*the* most iconic processors that AMD, bless 'em, tried desperately
for about a decade to End-of-Life, was the AMD Geode LX800.   the
reason why it wouldn't die is because up until around 2013, *debian
still supported it* out-of-the-box.


IMO, debian wasn't what kept geode going. Long term contracts involving 
embedded devices (which don't usually run debian) are more influential 
than the state of OS support. In fact, the geode EoL has been pushed 
back to 2021, regardless of the lack of debian support. Considering how 
many of these embedded boards are still running Windows XP, having an 
up-to-date general purpose OS simply doesn't seem to be a major factor.




Re: lintian-brush adds redundant data

2019-08-20 Thread Jelmer Vernooij
Hi Andreas,

On Tue, Aug 20, 2019 at 04:17:50PM +0200, Andreas Tille wrote:
> I observed that lintian-brush is adding a file debian/upstream/metadata
> if it finds the fields Upstream-Name and Upstream-Contact in
> debian/copyright.

> What is the sense in duplicating, in another file, data that we can
> already find in a well-established machine-readable file?
That's a good question. 

I've considered (but not implemented yet) having lintian-brush not create a
debian/upstream/metadata if the only upstream metadata fields it can set are
the name and contact.

At the moment, the debian/copyright [1] and debian/upstream/metadata [2]
standards both define fields with (as far as I can tell) the same purpose.
Neither of the standards provides any guidance as to whether the fields
should be set in both files or whether e.g. one is preferred over the other.
It would be great if some guidance could be added to DEP-12 about how to deal
with these fields.

Cheers,

Jelmer

[1] https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
[2] https://dep-team.pages.debian.net/deps/dep12/



tag2upload service architecture and risk assessment - draft v2

2019-08-20 Thread Ian Jackson
Thanks for all the comments on the draft service architecture I posted
in late July. [1]  I have made a v2, incorporating the various helpful
suggestions, and the information from the thread.

Some respondents raised archive integrity concerns.  It seemed best to
address those more formally, and in a more structured way, than as
mailing list subthreads.  Accordingly, v2 of my proposal has a formal
risk assessment, in a format loosely borrowed from health and safety
management.

I think I have captured in the risk assessment all the risks mentioned
in the thread, but I may well have missed something.  Please let me
know if you think there is anything which is not covered.  Also please
let me know if any of my analysis seems wrong.

Please find an introduction, and detailed documentation, here:
  https://people.debian.org/~iwj/tag2upload/2019-08-20/

Thanks,
Ian.

[1] This message and the subsequent thread:
  https://lists.debian.org/debian-devel/2019/07/msg00501.html

-- 
Ian Jackson   These opinions are my own.

If I emailed you from an address @fyvzl.net or @evade.org.uk, that is
a private address which bypasses my fierce spamfilter.



Re: lintian-brush adds redundant data

2019-08-20 Thread Andreas Tille
Hi Jelmer,

On Tue, Aug 20, 2019 at 04:46:23PM +, Jelmer Vernooij wrote:
> 
> On Tue, Aug 20, 2019 at 04:17:50PM +0200, Andreas Tille wrote:
> > I observed that lintian-brush is adding a file debian/upstream/metadata
> > if it finds the fields Upstream-Name and Upstream-Contact in
> > debian/copyright.
> 
> > What is the sense in duplicating, in another file, data that we can
> > already find in a well-established machine-readable file?
> That's a good question. 
> 
> I've considered (but not implemented yet) having lintian-brush not create a
> debian/upstream/metadata if the only upstream metadata fields it can set are
> the name and contact.

It would be really great to implement this.  Considering the current
situation I would even remove the fields Name and Contact from
debian/upstream/metadata if the corresponding fields are in
debian/copyright (or move them there if they are missing in
d/copyright).  If an empty d/u/metadata file remains, it should be
removed as well.

IMHO a good rule of thumb is:  Do not copy any data from a
well-established machine-readable file to some other place.
 
> At the moment, the debian/copyright [1] and debian/upstream/metadata [2]
> standards both define fields with (as far as I can tell) the same purpose.
> Neither of the standards provides any guidance as to whether the fields
> should be set in both files or whether e.g. one is preferred over the other.
> It would be great if some guidance could be added to DEP-12 about how to deal
> with these fields.

DEP-12 is declared as "Work in progress" (without any progress for 5
years) while DEP-5 is well established and settled.  Charles and I
invented d/u/metadata to store publication information, and it turned
out that there is other sensible information about upstream that can be
stored there as well.  I'd vote against any duplication of information
in any way.  So as long as Name and Contact are defined in DEP-5, they
should not be in DEP-12.

So far I have removed redundant fields from the Wiki page[3] (it also
had Homepage, Watch and others I may have forgotten) since it simply
adds useless maintenance burden to maintain the same information in
different places.

Neither lintian warning about those fields missing from d/u/metadata
nor some tool adding the values is sensible.  A look at the Wiki[4]
should assure you that this stuff is really in flux; it is better not
to waste time on this without discussing it first.

I'd be really happy if lintian-brush would remove those values (please
let me know if you want me to file a bug report about this).

BTW, in general I really like lintian-brush, which does a pretty nice
job, and I'll keep on running it even if I do not like the feature
above.  So please keep up the nice work.

Kind regards

  Andreas.
 
> [1] https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
> [2] https://dep-team.pages.debian.net/deps/dep12/

[3] https://wiki.debian.org/UpstreamMetadata
[4] https://wiki.debian.org/UpstreamMetadata#Deprecated_fields

-- 
http://fam-tille.de