On Thu, Dec 26, 2024 at 09:01:30AM +0100, Helmut Grohne wrote:
What other place would be suitable for including this functionality?
As I suggested: you need two tools or one new tool because what you're
looking for is the min of ncpus and (available_mem / process_size). The
result of that cal
On Thu, Dec 26, 2024 at 09:23:36PM +0900, Simon Richter wrote:
My feeling is that this is becoming less and less relevant though,
because it does not matter with SSDs.
To summarize: this thread was started with a mistaken belief that the
current behavior is only important on ext2. In reality t
Did anyone benchmark if this makes any real difference, on a set of
machines and file systems?
Say typical x86 laptop+server, arm64 SoC+server, GitLab/GitHub shared
runners, across ext4, xfs, btrfs, across modern SSD, old SSD/flash and
spinning rust.
If eatmydata still results in a performance bo
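[Editor's sketch] A full dpkg benchmark across the machines and filesystems suggested above would need real hardware, but a crude microbenchmark of the cost being discussed (the fsync() calls that eatmydata suppresses) is easy to write. This is a rough stand-in for dpkg's unpack phase, not a measurement of dpkg itself; file count and size are arbitrary:

```python
import os
import tempfile
import time

def write_files(directory: str, count: int, size: int, do_fsync: bool) -> float:
    """Write `count` files of `size` bytes each, optionally fsync()ing
    every one; return elapsed seconds."""
    payload = b"x" * size
    start = time.perf_counter()
    for i in range(count):
        path = os.path.join(directory, f"f{i}")
        with open(path, "wb") as f:
            f.write(payload)
            if do_fsync:
                f.flush()
                os.fsync(f.fileno())
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:
    synced = write_files(d, 100, 4096, do_fsync=True)
with tempfile.TemporaryDirectory() as d:
    unsynced = write_files(d, 100, 4096, do_fsync=False)
print(f"with fsync: {synced:.3f}s  without: {unsynced:.3f}s")
```

Run it on each target filesystem and medium; the ratio between the two numbers is a first approximation of what eatmydata can buy there (on tmpfs or fast NVMe it may be negligible, on spinning rust or a loaded SAN much larger).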
* Simon Richter [241226 13:24]:
> My feeling is that this is becoming less and less relevant though, because
> it does not matter with SSDs.
This might be true on SSDs backing a single system, but on
(otherwise well-dimensioned) SANs the I/O-spikes are still very much
visible. Same is true for va
Package: wnpp
Severity: wishlist
Owner: Tianyu Chen
X-Debbugs-Cc: debian-devel@lists.debian.org, billchenchina2...@gmail.com
* Package name: python-propcache
Version : 0.2.1
Upstream Contact: aiohttp team
* URL : https://github.com/aio-libs/propcache
* License
Le 2024-12-26 13:23, Simon Richter a écrit :
On SSDs, it does not matter, both because modern media lasts longer
than the rest of the computer now, and because the load balancer will
largely ignore the logical block addresses when deciding where to put
data into the physical medium anyway.
Hi,
On 12/26/24 18:33, Julien Plissonneau Duquène wrote:
This should not make any difference in the number of write operations
necessary, and only affect ordering. The data, metadata journal and
metadata update still have to be written.
I would expect that some reordering makes it possible f
Le 2024-12-26 11:59, Hakan Bayındır a écrit :
So making any assumptions like we did with spinning drives is mostly
moot at this point, and the industry is very opaque about that layer.
That's one of the reasons why I think benchmarking would help here. I
would expect fewer but larger write o
Julien Plissonneau Duquène left as an exercise for the reader:
> - io_uring that allows asynchronous file operations; implementation would
> require important changes in dpkg; potential performance gains in dpkg's use
> case are not yet evaluated AFAIK but it looks like the right solution for
> tha
On 12/26/24 12:33 PM, Julien Plissonneau Duquène wrote:
Hi,
Le 2024-12-24 15:10, Simon Richter a écrit :
This should not make any difference in the number of write operations
necessary, and only affect ordering. The data, metadata journal and
metadata update still have to be written.
I w
Hi,
Le 2024-12-24 15:10, Simon Richter a écrit :
This should not make any difference in the number of write operations
necessary, and only affect ordering. The data, metadata journal and
metadata update still have to be written.
I would expect that some reordering makes it possible for fewe
24.12.2024 17:10, Simon Richter wrote:
Hi,
On 12/24/24 18:54, Michael Tokarev wrote:
The no-unsafe-io workaround in dpkg was needed for 2005-era ext2fs
issues, where a power-cut in the middle of a filesystem metadata
operation (which dpkg does a lot of) might result in an inconsistent
filesystem st
Hi Michael and Pádraig,
On Wed, Dec 25, 2024 at 06:59:28PM +, Pádraig Brady wrote:
> On 25/12/2024 15:24, Michael Stone wrote:
> > There's zero chance I'll carry this as a debian-specific fork of nproc.
> > (Because I don't want to carry any new forks of the core utilities as
> > doing so inev