On 2/26/26 22:20, Richard Purdie wrote:

On Wed, 2026-02-25 at 13:22 +0800, hongxu via lists.openembedded.org wrote:
Ping, what is the status of the approval from Yocto Project TSC?
From the TSC perspective, we were waiting on answers to some of the
questions raised in the discussion.

Regarding the suggestion to rename the layer to meta-ai or meta-llm and
to collect other LLM applications, such as llama.cpp, under it:

I am open to it, but I strongly insist on maintaining meta-ollama as a
standalone layer. Furthermore, if we support llama.cpp later, we
should add it as meta-llama-cpp.
You mean that you don't mind a meta-ai repository, but within that you
would want meta-ollama and meta-llama-cpp to be separate layers?

I am open to a meta-ai repo, with meta-ollama and meta-llama-cpp kept as separate layers within it.
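
For clarity, the structure I have in mind is analogous to how the meta-openembedded repository hosts multiple independent layers. The directory names below are only a sketch of that idea, not a committed layout:

    meta-ai/                  <- the repository
        meta-ollama/          <- layer: conf/layer.conf, recipes-*
        meta-llama-cpp/       <- layer: conf/layer.conf, recipes-*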

The other key questions I don't see answered are with regard to:

a) contributions - is this open to contributions from others? how would
version updates work? what testing needs to pass for patches to merge?
Basically, what is the plan for maintaining it?

We are open to contributions from others. We promise to maintain meta-ollama and meta-llama-cpp, which were originally created by us, to keep them as up to date as possible, and to upgrade them at least twice per year.

We will review and merge contribution patches once a normal build passes. We currently do not have the resources to support oe-selftest, but Wind River Linux has test resources for normal builds.
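
For reference, a minimal sketch of what such a normal-build check looks like, assuming a poky checkout with meta-ollama cloned beside it and an 'ollama' recipe in the layer (the recipe name is my assumption):

    # Set up a build directory and register the layer
    source poky/oe-init-build-env build
    bitbake-layers add-layer ../meta-ollama
    # Build the recipe for one of the verified QEMU machines
    MACHINE=qemux86-64 bitbake ollama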

If a co-maintainer contributes another sub-layer, we do not promise to maintain it ourselves in a timely fashion.

b) specific architecture support - what would someone need to do to add
support for a different architecture?

We prefer to stay in sync with Yocto/OE-core policy. Currently meta-ollama works on x86-64/arm64, and it has been verified on the public genericx86-64, qemuarm64 and qemux86-64 BSPs.

We do not promise to maintain support for ppc, riscv or 32-bit arm, but we are open to accepting contribution patches for them.
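
As a sketch of what such a contribution usually involves in OE terms (assuming, as is common for arch-specific recipes, that the recipe restricts builds via COMPATIBLE_HOST; I have not checked the actual recipe), a contributor would widen that restriction in the recipe or in a bbappend in their own layer, then fix whatever fails to build:

    # ollama_%.bbappend (hypothetical file in the contributor's layer):
    # extend the allowed host architectures, e.g. to add riscv64
    COMPATIBLE_HOST = "(x86_64|aarch64|riscv64).*-linux"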

For Nvidia GPUs, the layer supports CUDA on x86-64 and on the Nvidia Orin BSP, but this is only verified publicly on genericx86-64; the Nvidia Orin BSP is a commercial Wind River BSP and is not public.
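
If the CUDA support is switchable per recipe, enabling it would look something like the local.conf sketch below. The 'cuda' PACKAGECONFIG value and the 'ollama' recipe name are assumptions on my part, not checked against the layer:

    # local.conf sketch -- the 'cuda' knob is hypothetical, check the recipe
    PACKAGECONFIG:append:pn-ollama = " cuda"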

For other GPUs (such as AMD ROCm), we do not have the resources to support them, but we are open to accepting contribution patches.

c) specific feature support - if someone wants to add a feature
WindRiver aren't using, would it be accepted?

Yes, of course, we are open to accepting contributions for new features that Wind River is not using, but we do not promise to implement them ourselves, and will only maintain them to the level of a normal build.

//Hongxu

The TSC needs more information in order to be able to make any kind of
decision.

Cheers,

Richard

