I had a PyTorch port (with Vulkan) in the works last year, but there were so many small things to cover that I got tired of it.  I did succeed in running some inference on my Radeon cards, but I never cared enough to clean it all up for a port, and by now I have forgotten most of the details :-(.  I am still very interested in ML and do a lot of experiments, but nothing for production use.  I too have had success with llama.cpp, but I do most of my experiments with candle (the Rust PyTorch wannabe), which sadly lacks Vulkan support; Vulkan seems to be the easiest way to get GPU support on OpenBSD.  CPU works great though.  Currently I am building POCs of small neural nets with Rust/vulkano, just to see if I can learn enough Vulkan to actually add support for it to candle.

However, all this is vaporware, and I cannot be trusted to finish any of it.  Maybe I can dig out specific bits if someone else is curious, but they will not be in very good shape.  This is all stuff I work on in the little spare time I have.

/Niklas

On 2024-07-17 17:28, José Maldonado wrote:
El mié, 17 jul 2024 a la(s) 7:04 a.m., Frank Baster
(frankbas...@protonmail.com) escribió:
Is there any interest in machine learning ports for PyTorch / JAX / TensorFlow? 
I had a quick look at mlpack but couldn't find support for recurrent neural 
networks.

PyTorch looks like the most do-able.
This is possible, but performance is a big issue here.

For example, you can build PyTorch with the Vulkan backend
(https://pytorch.org/tutorials/prototype/vulkan_workflow.html - Vulkan
support in OpenBSD is very good), but the Vulkan backend is slow
compared with dedicated AI hardware acceleration options.
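For reference, a Vulkan-enabled PyTorch build along the lines of that tutorial looks roughly like this.  This is a sketch based on the USE_VULKAN flags the tutorial describes; the exact set of flags varies by PyTorch version and platform, so check the tutorial against your checkout:

```shell
# Build PyTorch from source with the (prototype) Vulkan backend enabled.
# Flag names taken from the linked tutorial; verify against your version.
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 \
    python setup.py install
```

The tutorial then moves tensors and modules to the Vulkan device from Python (e.g. via a 'vulkan' device string on a traced model), but the build step above is the part that matters for a port.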

In my case I use llama.cpp with Vulkan support enabled on an AMD RX 580
out of curiosity, and it works fine and is very usable.
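For anyone wanting to try the same setup, a Vulkan-enabled llama.cpp build is roughly the following.  The CMake option and binary names are a sketch: the project renamed its flags and binaries around mid-2024 (older checkouts use LLAMA_VULKAN and a binary called main), and the model path and layer count below are placeholders:

```shell
# Build llama.cpp with the Vulkan backend.
# Newer trees use -DGGML_VULKAN=ON; older ones used -DLLAMA_VULKAN=ON.
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# Run inference, offloading layers to the GPU with -ngl
# (model.gguf and the layer count 33 are placeholders).
./build/bin/llama-cli -m model.gguf -ngl 33 -p "Hello"
```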

Give this a read: Kompute (https://kompute.cc/).  It is a general-purpose
GPU compute framework for AI, built on Vulkan.
