Jean Louis <[email protected]> writes: > * Dr. Arne Babenhauserheide <[email protected]> [2026-03-30 09:33]: >> So most of the code you don’t read and most of the tests you don’t read. >> >> This thread of the discussion started with you saying: > > No, I never started the thread, maybe it branched disconnected > somewhere...
You misunderstood this part: I am referring to this logical thread of
the discussion, not to the thread as an email program displays it.

>> > True certainty can be obtained by testing functions and seeing if they
>> > are doing what is meant to be.
>>
>> So if I understand your stance correctly, by certainty you only mean
>> that its results during execution match what you expect. So all the
>> tooling around it (tests, checking instructions, generated code, …) only
>> has the goal to push the effects close enough to your expectations that
>> you don’t observe problems.
>
> You've understood my stance correctly. Yes, certainty for me means the
> code produces the results I expect when I run it. All the rest —
> tests, tooling, generated code — serves that practical goal.

For me certainty goes a bit further: checking whether I made a logical
error in edge cases, to be sure that the code also runs correctly for
cases I did not try out manually.

Maybe that’s why your description weirds me out: it sounds as if you’re
poking around in a black box and stitching in place the parts that
work, while it feels very wrong to me if I don’t understand what
happens and why it happens.

> My programming style puts food on my children's table. Yours works for
> you too, presumably. Different workflows for different people. No need
> to measure one by the other's standard.

Food on the table is important, yes. I don’t want to tell you to do
something that would stop you from being able to do that.

>> That the LLM generates test code is then just an implementation
>> detail, purely needed to keep the LLM from going too far off track.
>
> No, that's not correct. The test code serves the same purpose as any
> other test code I write: to verify that what I think the code does is
> what it actually does. Whether I write the test manually or an LLM
> helps generate it doesn't change the function of testing.

If you don’t know the content of the test, then that does change the
function of testing.
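To make the edge-case point concrete, here is a minimal sketch (in
Python, with a hypothetical helper name): a function that passes the
obvious manual checks, yet behaves differently on a boundary input that
only a deliberate edge-case check would surface.

```python
def mid(lo, hi):
    # Hypothetical helper: midpoint of the integer range [lo, hi].
    return (lo + hi) // 2

# Manual spot checks: the cases one tries by hand all look fine.
print(mid(0, 10))  # 5
print(mid(2, 4))   # 3

# Edge case not tried manually: Python's // floors toward negative
# infinity, so negative ranges round down rather than toward zero.
print(mid(-3, 0))  # -2, not the -1 that truncation toward zero gives
```

Running the code and seeing expected outputs for the inputs you tried
gives the first kind of certainty; only reasoning about (or testing)
the boundary cases gives the second.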
> Though LLM didn't help me write 5000 functions, maybe it helped me
> write 100 of them.

>> And that it generates code at all just serves to avoid having to
>> spin up the full LLM for every step.
>
> Also not correct. I use LLMs to generate code because it's faster than
> typing every character myself. The "full LLM" runs locally on my
> machine — spinning it up is trivial. There's no avoidance happening.

Why do you run the code instead of letting LLM prompts directly
produce the outputs the code would give?

> You're reading implementation choices as philosophical
> positions. I see them as optimization, not as philosophical stance.

If it doesn’t work like that, that’s good to know.

Best wishes,
Arne

--
Being apolitical means being political without noticing it.
https://www.draketo.de
--- via emacs-tangents mailing list (https://lists.gnu.org/mailman/listinfo/emacs-tangents)
