Spoke with Ariel in Slack.
https://lists.apache.org/thread/lcvxnpf39v22lc3f9t5fo07p19237d16
Not sure if I am missing more statements, but the one I can find is that the
OpenAI Terms of Use include the following words: "Use Output to develop models
that compete with OpenAI" (see here: https://openai.com/)
Those are both fair points. Once you start dealing with data loss, though,
maintaining guarantees is often impossible, so I’m not sure that torn writes or
updated timestamps are dealbreakers, but I’m fine with tabling option 2 for now
and seeing if we can figure out something better.
Regarding t
> These change very often and it's a constant moving target.
This is not hyperbole. This area is moving faster than anything I've seen
before.
> I am in this camp at the moment, AI vs Human has the same problem for the
> reviewer; we are supposed to be doing this, and blocking AI or putting new
> It's not clear from that thread precisely what they are objecting to and
> whether it has changed (another challenge!)
That thread was last updated in 2023 and the current stance is just "tell
people which one you used, and make sure the output follows the 3 main
points".
> We can make a best eff
The test build of Cassandra Analytics 0.1.0 is available.
sha1: 9c948eab9356f5d166c26bb7a155b99ee0a8f9db
Git: https://github.com/apache/cassandra-analytics/tree/0.1.0-tentative
Maven Artifacts:
https://repository.apache.org/content/repositories/orgapachecassandra-1402/org/apache/cassandra/spark/an
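
For anyone who wants to try the staged artifacts from a Gradle build, here is a
minimal sketch (assuming the Gradle Kotlin DSL; the repository root below is
inferred from the truncated Maven URL above, and the full artifact coordinates
are not shown in this mail, so no dependency line is included):

    // build.gradle.kts -- sketch only, not part of the release announcement.
    repositories {
        // Apache staging repository holding the 0.1.0 vote candidate
        // (root inferred from the truncated staging URL above).
        maven {
            url = uri("https://repository.apache.org/content/repositories/orgapachecassandra-1402/")
        }
        mavenCentral()
    }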