Hi!

> > > OpenAI has an Open Source fund. Maybe Debian should apply[1] for a
> > > grant so that Debian contributors could get hands-on experience on
> > > how this could help their Debian activities?
> >
> > or maybe Debian should not.
>
> Maybe. Honestly, I don't know.
I still think it would be a nice perk for DDs, along with the other perks
listed at https://wiki.debian.org/MemberBenefits. It is up to each DD to
decide how to use the tools and services. Personally, I don't recommend
using them to generate code, as in the Debian context they seem to produce
rather poor results, but using LLMs for example to review code seems to
work pretty well, and it is faster and cheaper than waiting for humans to
review (humans can still review; they will just see more polished
submissions).

...

> (A) an "agent", that is the client software running on your machine that
> you talk with, and that interacts with your codebase (read files, make
> changes to files, run commands, create git commits, etc.). Ideally in
> some kind of sandbox and/or with permissions management.
> Examples include 'Claude Code' (works in CLI, proprietary, interacts
> only with Anthropic models), Cursor(.com) (VS Code fork, proprietary,
> interacts with either Claude* (Anthropic) or Gemini* (Google)), Codex
> CLI (free software, developed by OpenAI and focused on their models, but
> supposed to work with other providers). DebGPT fits here too (but is
> less advanced for coding tasks than Claude Code or Cursor).

Most of the above are closed-source solutions. I have been playing around
with the fully open https://aider.chat/ for well over a year, and I would
recommend it instead. I hope to some day write a blog post about how I run
it inside a container safely and how I have customized it to give better
results than it does out of the box.

> (B) a way to query models:
> - either subscription-based from commercial services such as OpenAI,
>   Anthropic, Gemini, ... or brokers like OpenRouter.
> - or using "open" models that you can download and run yourself,
>   typically with ollama (free software)

I am a big fan of OpenRouter.ai, but they are just a router.
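Until that blog post exists, here is roughly the shape of the setup: a
container that only gets the current repository bind-mounted, so the agent
cannot wander around the rest of the home directory. This is a minimal
sketch, not my actual configuration; the image name is the one aider
publishes on Docker Hub, and the model string is just an example of the
OpenRouter naming scheme.

```shell
# Run aider in a throwaway container, confined to the current repo.
# Only the working tree is mounted; the API key is passed through
# from the environment rather than baked into an image.
podman run --rm -it \
  -v "$PWD":/workspace:Z \
  -w /workspace \
  -e OPENROUTER_API_KEY \
  docker.io/paulgauthier/aider \
  --model openrouter/anthropic/claude-3.5-sonnet
```

(docker works the same way as podman here; I just prefer the rootless
default.)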
I would like to see the actual model trainers contribute back to open
source by issuing free API usage to Debian Developers etc. If you have
time, please reach out to the three companies you mentioned. I had hoped
that the Debian Machine Learning team had direct contacts at these
companies, but apparently not.

> (C) good matching between (A) and (B): it helps if the client side (A)
> knows how to tune the queries ("prompt engineering") for the specific
> provider/model in use (B). Typically, in my tests, trying to use Codex
> CLI with ollama fails there (or I could not find a model that produced
> reasonable results). (Also the OpenAI API has variants that are not
> supported by all models ("tools support").)

I usually look at these leaderboards to see what currently yields the best
results with specific models and tools combined:
https://aider.chat/docs/leaderboards/
https://openrouter.ai/rankings
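For anyone wondering what the "tools support" variant actually is: it
means the OpenAI-style chat completion request can carry a "tools" array
of JSON-schema function declarations, which the model may ask the agent to
call. Many models served via ollama or routed through OpenRouter simply
ignore or reject this field, which is one way the (A)/(B) mismatch shows
up. A sketch of such a request body (the model name and the read_file tool
are placeholders, not anything a real provider guarantees):

```python
import json

# An OpenAI-compatible chat completion payload that uses the "tools"
# extension. Agents like Codex CLI send bodies of this shape; backends
# without tools support fail or ignore the "tools" key.
payload = {
    "model": "openrouter/some-model",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Review this patch for style issues."}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                # Hypothetical tool the agent exposes to the model.
                "name": "read_file",
                "description": "Read a file from the working tree.",
                "parameters": {
                    "type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"],
                },
            },
        }
    ],
}

print(json.dumps(payload, indent=2))
```

Whether a given model honours the tools array is exactly the kind of thing
the leaderboards above help you find out before wasting an afternoon.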