Bug#1038161: ITP: xnvme -- Cross-platform libraries and tools for efficient I/O and low-level control.

2023-06-16 Thread Simon A. F. Lund
Package: wnpp
Severity: wishlist
Owner: "Simon A. F. Lund" 
X-Debbugs-Cc: debian-devel@lists.debian.org, o...@safl.dk

* Package name: xnvme
  Version : 0.7.0
  Upstream Author : Simon A. F. Lund 
* URL : https://xnvme.io/
* License : BSD
  Programming Lang: C
  Description : Cross-platform libraries and tools for efficient I/O and low-level control.

xNVMe provides a library to program storage devices efficiently from
user space and tools to interact with them. xNVMe is strongly motivated
by the emergence of storage devices providing I/O commands beyond those
of read/write. This is the "NVMe" part of the name.

The data plane, or I/O layer, is a minimal-cost abstraction on top of
synchronous I/O with thread pools, POSIX AIO, Linux libaio, io_uring, and
io_uring_cmd, to name the most popular interfaces available via xNVMe on
Linux.

The control plane, or admin layer, provides flexibility and low-level
control via APIs for admin commands and device management. On Linux,
these are implemented using interfaces such as ioctl(), io_uring_cmd,
and vfio-pci.

This is one of the key value propositions of xNVMe: a unified I/O and
admin API for storage devices on top of the myriad of interfaces
available on today's operating systems.
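
To give a feel for the shape of that unified API, here is a minimal sketch of
a synchronous one-block read, loosely following the upstream C examples; the
device URI is illustrative and exact names may differ between releases:

/* Sketch only: synchronous one-block read via xNVMe's C API.
 * Loosely based on upstream examples; exact names/headers may
 * differ between releases. The device URI below is illustrative. */
#include <stdio.h>
#include <stdint.h>
#include <libxnvme.h>

int main(void)
{
	struct xnvme_opts opts = xnvme_opts_default();
	struct xnvme_dev *dev = xnvme_dev_open("/dev/nvme0n1", &opts);
	if (!dev) {
		fprintf(stderr, "xnvme_dev_open() failed\n");
		return 1;
	}

	const struct xnvme_geo *geo = xnvme_dev_get_geo(dev);
	uint32_t nsid = xnvme_dev_get_nsid(dev);

	/* Allocate the buffer via xNVMe so the selected backend
	 * (io_uring_cmd, vfio-pci, ...) gets suitable alignment. */
	void *buf = xnvme_buf_alloc(dev, geo->lba_nbytes);
	if (!buf) {
		xnvme_dev_close(dev);
		return 1;
	}

	/* Synchronous read of one LBA at LBA 0 (nlb is zero-based);
	 * the same call works regardless of the OS interface backing
	 * the device. */
	struct xnvme_cmd_ctx ctx = xnvme_cmd_ctx_from_dev(dev);
	int err = xnvme_nvm_read(&ctx, nsid, 0x0, 0, buf, NULL);
	if (err || xnvme_cmd_ctx_cpl_status(&ctx)) {
		fprintf(stderr, "read failed\n");
	}

	xnvme_buf_free(dev, buf);
	xnvme_dev_close(dev);
	return err ? 1 : 0;
}

Asynchronous submission via the queue API follows the same pattern, with the
backend (io_uring, libaio, thread pool, ...) selected through the options/URI.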

On top of these are command-line utilities, including "zoned" and "kvs".
These put interaction with the corresponding NVMe command sets at the
fingertips of the command line.

The command-line utilities are related to those of nvme-cli, and we are
actively working on combining efforts on the command-line interface. On
the library side, xNVMe is related to the aforementioned I/O libraries,
which it encapsulates. However, it goes beyond being an abstraction on
top of them: applications implemented using xNVMe can run on platforms
other than Linux.

As far as I know, xNVMe is the only library providing a cross-platform
storage programming interface, supporting "traditional" storage and
optimized for NVMe. This is the "x" part of the name, standing for
"cross"-platform.

I plan to maintain the package as part of the release process of xNVMe itself,
thus making it an integral part of the CI to build, test, and verify the
package in "lockstep" with the development of xNVMe.

I am seeking help/guidance, possibly from a sponsor / co-maintainer.



Bug#1038205: ITP: basis-universal -- Basis Universal GPU Texture Codec

2023-06-16 Thread Gürkan Myczko

Package: wnpp
Severity: wishlist
Owner: Gürkan Myczko 
X-Debbugs-Cc: debian-devel@lists.debian.org

* Package name: basis-universal
  Version : 1.16.4
  Upstream Authors: Binomial LLC
  URL : https://github.com/BinomialLLC/basis_universal
* License : Apache-2.0
  Description : Basis Universal GPU Texture Codec
 This is a "supercompressed" GPU texture data interchange system that supports
 two highly compressed intermediate file formats (.basis or the .KTX2 open
 standard from the Khronos Group) that can be quickly transcoded to a very wide
 variety of GPU compressed and uncompressed pixel formats: ASTC 4x4
 L/LA/RGB/RGBA, PVRTC1 4bpp RGB/RGBA, PVRTC2 RGB/RGBA, BC7 mode 6 RGB,
 BC7 mode 5 RGB/RGBA, BC1-5 RGB/RGBA/X/XY, ETC1 RGB, ETC2 RGBA, ATC RGB/RGBA,
 ETC2 EAC R11 and RG11, FXT1 RGB, and uncompressed raster image formats /565/.



Bug#1038206: ITP: jpeg-compressor-cpp -- jpeg compression library

2023-06-16 Thread Matthias Geiger
Package: wnpp
Severity: wishlist
Owner: Matthias Geiger 
X-Debbugs-Cc: debian-devel@lists.debian.org, t...@debian.org, 
matthias.geiger1...@tutanota.de


* Package name: jpeg-compressor-cpp
  Version : 104
  Upstream Contact: Rich Geldreich 
* URL : https://github.com/richgel999/jpeg-compressor
* License : Public Domain or Apache 2.0
  Programming Lang: C/C++
  Description : jpeg compression library

I intend to package jpeg-compressor. It's needed for ppsspp, where it's
embedded as a third-party library. The package consists of just four headers,
so it's fairly minimal. It will be maintained under the collaborative debian/
space on Salsa. tar has kindly agreed to sponsor the initial upload.

thanks,

werdahias




Bug#1038326: ITP: transformers -- State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow (it ships LLMs)

2023-06-16 Thread M. Zhou
Package: wnpp
Severity: wishlist
Owner: Mo Zhou 
X-Debbugs-Cc: debian-devel@lists.debian.org, debian...@lists.debian.org

* Package name: transformers
  Upstream Contact: HuggingFace
* URL : https://github.com/huggingface/transformers
* License : Apache-2.0
  Description : State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

I've been using this for a while.

This package provides a convenient way for people to download and run an LLM
locally. For instance, to run an instruction fine-tuned large language model
with 7B parameters, you will need at least 16 GB of CUDA memory for inference
in half/bfloat16 precision. I have not tried to run any LLM with more than 3B
parameters on a CPU; that can be slow. llama.cpp is a good choice for running
an LLM on a CPU, but that library supports fewer models than this one, and it
only supports inference.
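
For illustration, a minimal sketch of local inference through the usual Auto*
entry points; the checkpoint name is a placeholder, and the bfloat16 weights
are what drive the ~16 GB figure above (roughly 2 bytes per parameter for 7B,
plus activations and the KV cache):

# Sketch of local LLM inference with transformers; the model id is a
# placeholder for any causal-LM checkpoint from the Hugging Face hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-7b-instruct"  # hypothetical checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~2 bytes/parameter: ~14 GB of weights for 7B
).to("cuda")

prompt = "Explain what an ITP bug is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))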

I don't know how many dependencies are still missing, but there should not be
too many. JAX and TensorFlow are optional dependencies, so they can be missing
from our archive. In any case, I think running a large language model locally
with Debian packages will be interesting. The CUDA version of PyTorch is
already in the NEW queue.

That said, this is actually a very comprehensive library, which provides far
more functionality than just running LLMs.

Thank you for using reportbug