mikemccand commented on issue #15225: URL: https://github.com/apache/lucene/issues/15225#issuecomment-3527672646
`..it's like trying to drain the swamp, and the swamp keeps finding new ways to refill itself.` LOL!! Which LLM and what prompt created this, @rmuir?! Can it generate the audio in @uschindler's voice too?

I find myself amazed by what these LLMs can do now (generating complex code), and then aghast at the silly mistakes/hallucinations they make ... brain whiplash. Claude recently wrote up a big, helpful response for me with a list of items, except it numbered every item as 1, yet in the text referred to them as items 1, 2, 3. Head scratching...

I think LLMs, with targeted prompts, could be useful for our javadocs? E.g., could we prompt one to dig through our existing docs and fix any code examples that have gone stale? Or to add javadocs to complex methods that are missing their `@param` explanations? With the right prompting, and the right genai (maybe running in the "just think harder" mode/model), this could be genuinely useful here (a toy sketch of the kind of fix I mean is at the bottom of this comment).

Can we invoke genai (Copilot?) from GitHub Actions? Could it comment on our PRs about silly mistakes like failing to use `== false`, haha. And if a PR changes a javadoc and the code example in it is wrong (doesn't compile or run, etc.), it could comment?

But I agree we should tread carefully and review closely, and if reviewing short genai efforts is collectively eating tons of human time overall (sort of an AI denial-of-service attack on us humans), then that's no good.
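To make that concrete, here's a rough sketch of the kind of fix I'd hope an LLM could propose, under close review of course. The class and method below are made up purely for illustration (not real Lucene code): a javadoc that actually documents every `@param`, plus the `== false` style:

```java
import java.util.List;

/** Hypothetical example class, invented only to illustrate the kind of fix meant above. */
public class JavadocSketch {

  /**
   * Returns the first non-empty string in {@code values} that is longer than {@code minLength},
   * or {@code null} if there is none.
   *
   * @param values candidate strings, scanned in order; must not be {@code null}
   * @param minLength length a match must strictly exceed
   * @return the first qualifying string, or {@code null} if none qualifies
   */
  public static String firstLongerThan(List<String> values, int minLength) {
    for (String s : values) {
      // Lucene-style negation: "== false" instead of "!", so the negation is hard to miss
      if (s.isEmpty() == false && s.length() > minLength) {
        return s;
      }
    }
    return null;
  }
}
```

Something like this is mechanical enough that a well-prompted model might get it right most of the time, but every `@param` claim would still need a human to confirm it against what the code actually does.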
