Stephan Verbücheln wrote [Mon, Apr 28, 2025 at 03:46:27PM +0000]:
(...)
The “source code” for a work means the preferred form of the work
for making modifications to it.

In that definition, training data is quite obviously relevant. No one
tweaks neural network model weights manually.

Compare this to the previously mentioned example of S-boxes in
cryptography. They are small and usually created manually.
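To illustrate the contrast: an S-box really is just a small table that a human writes and edits directly, so the table itself is the preferred form for modification. A minimal sketch (the values are, if I recall correctly, the 4-bit S-box of the PRESENT block cipher; the function name is just for illustration):

```python
# A 4-bit S-box is a permutation of 0..15, written out by hand.
# Modifying the cipher's substitution layer means editing these
# sixteen entries directly -- nothing is "trained".
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def substitute(nibble: int) -> int:
    """Apply the S-box to one 4-bit value."""
    return SBOX[nibble & 0xF]
```

Contrast that with billions of floating-point weights: no one edits those entry by entry, which is why the training data and pipeline arguably matter for the "preferred form" question.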

I understand that, when you consider the trained model as the "thing" to be
modified, the preferred form of modification is the model itself: what RAG
does is take a base trained LLM (conferring the "mastery" of language) and
augment it at query time with domain-specific knowledge retrieved from an
external source, rather than retraining the base model.
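The point that the base model stays frozen can be made concrete. A minimal sketch, assuming a toy keyword-overlap retriever in place of real embeddings, and with the LLM call itself omitted (all names and document texts here are illustrative):

```python
# Sketch of retrieval-augmented generation (RAG): the domain-specific
# knowledge enters through the prompt at query time; the base model's
# weights are never touched.

def retrieve(query, documents, k=1):
    """Rank documents by naive word overlap with the query (a stand-in
    for a real embedding-based retriever)."""
    q_words = set(query.lower().split())
    return sorted(documents,
                  key=lambda d: len(q_words & set(d.lower().split())),
                  reverse=True)[:k]

def build_prompt(query, documents):
    """Prepend the retrieved context to the question; this prompt is
    what a frozen base LLM would then be asked to complete."""
    context = retrieve(query, documents)
    return "Context:\n" + "\n".join(context) + "\n\nQuestion: " + query

docs = [
    "The GPL defines source code as the preferred form for modification.",
    "S-boxes are substitution tables used in block ciphers.",
]
prompt = build_prompt("What does the GPL say about source code?", docs)
```

Under this view, the "modification" happens in the external document store and the prompt, which is a different artifact from the weights entirely.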
