Hi JB, Yufei,

Thanks both, this is very helpful.
I agree the best starting point is to align Polaris with the ASF guidance, which is what I've already referenced in my initial draft: https://github.com/apache/polaris/pull/3948/changes (I thought I had sent this out for reference last week, but it never actually went out; I need to fix my mailbox.)

LMK what you think!

-ej

On Mon, Mar 9, 2026 at 11:58 AM Yufei Gu <[email protected]> wrote:

> Hi EJ,
>
> Thanks a lot for bringing this up and working on it. Given the increasing
> number of AI or AI-assisted contributions, it makes sense for us to put
> some guardrails in place. I also agree with JB that we should base our
> guideline on the ASF one.
>
> Yufei
>
>
> On Sat, Mar 7, 2026 at 12:17 AM Jean-Baptiste Onofré <[email protected]>
> wrote:
>
> > Hi EJ,
> >
> > That is a great idea.
> >
> > For your information, there is already ongoing work at the foundation
> > level, and some material has been published here:
> > https://www.apache.org/legal/generative-tooling.html
> >
> > I believe we should base our guidelines on this document and reference it
> > directly, as this page will continue to evolve and applies to all Apache
> > projects.
> >
> > Regards,
> > JB
> >
> > On Tue, Mar 3, 2026 at 19:45, EJ Wang <[email protected]> wrote:
> >
> > > Hi Polaris community,
> > >
> > > I would like to start a discussion around how Polaris should approach
> > > AI-generated or AI-assisted contributions.
> > >
> > > Recently, Apache Iceberg merged a change that explicitly documents
> > > expectations around AI-assisted contributions:
> > > https://github.com/apache/iceberg/pull/15213/changes
> > >
> > > As AI tools become more widely used in software development,
> > > contributors may rely on them in different ways, from drafting small
> > > code snippets to helping structure larger changes. Rather than focusing
> > > on how these tools are categorized, it may be more important to clarify
> > > contributor responsibility.
> > >
> > > If Polaris were to define guidance in this area, I believe the core
> > > principles should emphasize accountability:
> > >
> > > 1. The human contributor submitting a PR remains fully responsible for
> > > the change, including correctness, design soundness, licensing
> > > compliance, and long-term maintainability.
> > > 2. The PR author should understand the core ideas behind the
> > > implementation end-to-end, and be able to justify the design and code
> > > during review.
> > > 3. The contributor must be able to explain trade-offs, constraints, and
> > > architectural decisions reflected in the change.
> > > 4. Transparency around AI usage may be considered, but responsibility
> > > should not shift away from the human author.
> > >
> > > In other words, regardless of how a change is produced, the
> > > accountability and authorship reside with the individual submitting it.
> > > AI systems should not be treated as autonomous contributors.
> > >
> > > Questions for discussion:
> > >
> > > - Should Polaris explicitly define guidance around AI-generated
> > > contributions?
> > > - Do we want to require or encourage disclosure?
> > > - Are there ASF-level positions we should align with?
> > > - Should any such policy live in CONTRIBUTING.md?
> > >
> > > Given Polaris is building foundational infrastructure, setting
> > > expectations early may help maintain high review standards while
> > > adapting to evolving development workflows.
> > >
> > > Looking forward to thoughts from the community.
> > >
> > > Best,
> > >
> > > -ej
