On 2025/01/30 08:15, Dave Voutila wrote:
> 
> FWIW we should be able to include Vulkan support as it's in ports. I've
> played with llama.cpp locally with it, but I don't have a GPU that's
> worth a damn to see if it's an improvement over pure CPU-based
> inferencing.

Makes sense, though I think it would be better to commit without it and
add Vulkan support later.
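
(For whoever wires that up later: upstream llama.cpp turns on its Vulkan
backend with a CMake option; afaik the current spelling is GGML_VULKAN,
but take this as an untested sketch rather than the exact recipe for the
port:

    # untested sketch: build upstream llama.cpp with the Vulkan backend
    # enabled (flag name as of early 2025; check upstream docs)
    cmake -B build -DGGML_VULKAN=ON
    cmake --build build

)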

> Also, should this be arm64 and amd64 specific? I'm not a ports person,
> so not sure :)

Do you mean for llama.cpp at all, or just the Vulkan support?
(If it's "at all", afaik the original intention was that - like
whisper.cpp - it would run without anything special.)
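
If it ever did need to be arch-specific, the usual knob would be
ONLY_FOR_ARCHS in the port Makefile; shown here purely as illustration,
since the point above is that it shouldn't be needed:

    # hypothetical, not proposed: restrict the port to selected archs
    ONLY_FOR_ARCHS = amd64 arm64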


On 2025/01/30 05:50, Chris Cappuccio wrote:
> Stuart Henderson [s...@spacehopper.org] wrote:
> > 
> > I don't understand why it's in emulators. Perhaps misc would make sense?
> > 
> 
> I guess either misc or even a new category, like ml. Torch would come next,
> and there are plenty of other pieces that really don't fit in any
> category except misc.

I'd be happy with misc. If we end up with dozens of related ports, then
maybe a new category makes sense, but misc seems to fit and is not over-full.
