Hi all,
Thanks, Yuxia, for running the show!
+1 (binding)
[X] Downloaded source release from dist.apache.org
[X] Verified GPG signature (Key - Yuxia Luo)
[X] Verified SHA512 checksum
[X] Checked LICENSE (Apache 2.0), NOTICE, and DISCLAIMER files
[X] Verified ASF headers in source files
[X] Built from source successfully
[X] Verified Maven staging repository artifacts and signatures
[X] Verified Docker image (apache/fluss:0.9.0-incubating-rc1)
[X] Verified release notes and release blog (added comments)
[X] Ran "Real-Time Analytics With Flink" quickstart
-> Also tested $changelog & $binlog virtual tables with Flink 1.20
[X] Ran "Build streaming lakehouse" with Iceberg 1.10.1 (the RustFS UI showed
a 403, but it didn't impact the run)
By the way, the release email mentions commit d6fd1f1... but the tag
v0.9.0-incubating-rc1 points to commit e314c35....
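In case it helps with cross-checking, here is a small sketch (assuming a local clone of apache/fluss) for comparing what the tag resolves to against the commit listed in the vote email:

```shell
#!/bin/sh
# Compare the commit a release tag points to with the commit from the vote email.
# Run inside a clone of https://github.com/apache/fluss; the tag name and
# expected hash below are the values quoted in this thread.
TAG="v0.9.0-incubating-rc1"
EXPECTED="d6fd1f1f607a2672bff5d18d5ca811bfa920bbd7"

# "^{commit}" peels an annotated tag object down to the commit it tags,
# so this works for both lightweight and annotated tags.
ACTUAL=$(git rev-parse "${TAG}^{commit}")

if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "tag matches the vote email"
else
  echo "mismatch: tag -> $ACTUAL, email -> $EXPECTED"
fi
```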
Best Regards,
Mehul Batra
On Sat, Feb 14, 2026 at 11:37 AM Yang Guo <[email protected]> wrote:
> +1 (non-binding)
>
> Went through the new quickstart configurations (S3-compatible RustFS) and
> process with Flink, Paimon, and Iceberg.
>
> Verified compatibility on Flink 1.18, 1.19, 1.20, and 2.2.
>
> Checked the new compacted log format on both log tables and kv tables.
>
> Regards,
> Yang Guo
>
>
> On Sat, Feb 14, 2026 at 03:23 Keith Lee <[email protected]>
> wrote:
>
> > Hello Yuxia,
> >
> > Thank you for coordinating and preparing the release.
> >
> > +1 (non-binding)
> >
> > See the verifications done below:
> >
> > *1. Signature and shasum*
> > ```
> > $ curl https://downloads.apache.org/incubator/fluss/KEYS -o KEYS
> >
> >
> > $ gpg --import KEYS
> > gpg: key 85BACB5AEFAE3202: "Jark Wu (CODE SIGNING KEY) <[email protected]>" not changed
> > gpg: key 56A9F259A4C18F9C: "Yuxia Luo (CODE SIGNING KEY) <[email protected]>" not changed
> > gpg: Total number processed: 2
> > gpg:              unchanged: 2
> >
> >
> > $ for i in *.tgz; do echo $i; gpg --verify $i.asc $i; done
> > fluss-0.9.0-incubating-bin.tgz
> > gpg: Signature made Thu Feb 12 11:01:51 2026 GMT
> > gpg:                using RSA key E91E2171D6678CB70B50282356A9F259A4C18F9C
> > gpg: Good signature from "Yuxia Luo (CODE SIGNING KEY) <[email protected]>" [unknown]
> > gpg: WARNING: This key is not certified with a trusted signature!
> > gpg:          There is no indication that the signature belongs to the owner.
> > Primary key fingerprint: E91E 2171 D667 8CB7 0B50 2823 56A9 F259 A4C1 8F9C
> >
> > $ sha512sum fluss-0.9.0-incubating-bin.tgz
> > 15102955cf8abb7bf8384c0db20c5160b2aedc15bee3a10135a6d47c744dce23bd13a4403b43da4ebe199622517b83cf8eab76de1070c42b012f717b12bc2199  fluss-0.9.0-incubating-bin.tgz
> > ```
> >
> > *2. Verified APIs using Rust client compiled from fluss-rust main branch on commit [1]*
> > - Admin APIs: Create / Get / List for Database, Table, Partition; Metadata, Schema
> > - Log Table: Produce, Fetch, ListOffsets; ArrowRecordBatch
> > - Primary Key Table: Put, Lookup
> >
> > *3. Verified using Python binding compiled from fluss-rust main branch on commit [1]*
> > - Admin APIs: Create / Get / List for Database, Table, Partition; Metadata, Schema
> > - Log Table: Produce, Fetch, ListOffsets; ArrowRecordBatch
> > - Primary Key Table: Put, Lookup
> >
> > *4. Verified using C++ binding compiled from fluss-rust main branch on commit [1]*
> > - Admin APIs: Create / Get / List for Database, Table, Partition; Metadata, Schema
> > - Log Table: Produce, Fetch, ListOffsets; ArrowRecordBatch
> > - Primary Key Table: Put, Lookup
> >
> > [1] https://github.com/apache/fluss-rust/commit/bac00026f7dc06eca7deebed11172bc378938cf5
> >
> > Best regards
> > Keith Lee
> >
> >
> > On Fri, Feb 13, 2026 at 3:55 PM ForwardXu <[email protected]> wrote:
> >
> > > +1
> > >
> > >
> > > Test environment:
> > > - OS: macOS (Darwin 25.2.0, ARM64/Apple Silicon)
> > > - Java: OpenJDK 21.0.1 LTS (TencentKonaJDK)
> > > - Fluss: 0.9.0-incubating (binary distribution)
> > > - Flink: 2.2.0
> > > - Connector: `fluss-flink-2.2-0.9.0-incubating.jar` (built from 0.9 source)
> > > - Tiering JAR: `fluss-flink-tiering-0.9.0-incubating.jar`
> > > - Fluss Cluster: local-cluster mode (1 CoordinatorServer + 1 TabletServer)
> > > - Flink Cluster: Standalone (1 JobManager + 1 TaskManager, process=4096m)
> > >
> > >
> > > Tested Flink + Fluss Lakehouse Tiering (Paimon + Iceberg) with 40 test
> > > cases, all passed: *OK*
> > >
> > > - Tiering to Paimon — PK table (create/insert/tiering/$lake-query/$lake-agg/union-read/upsert/delete): *OK* (10 cases)
> > > - Tiering to Paimon — Log table (create/insert/tiering/$lake-query/$lake-agg/streaming-query): *OK* (5 cases)
> > > - Tiering to Paimon — Complex types PK table with ARRAY/MAP/ROW (create/insert/tiering/$lake-query/$lake-agg): *OK* (5 cases)
> > > - Tiering to Paimon — System table $lake$snapshots: *OK* (1 case)
> > > - Tiering to Iceberg — PK table (create/insert/tiering/$lake-query/$lake-agg/union-read/upsert): *OK* (8 cases)
> > > - Tiering to Iceberg — Log table (create/insert/tiering/$lake-query/$lake-agg/streaming-query): *OK* (5 cases)
> > > - Tiering to Iceberg — Complex types PK table with ARRAY/MAP/ROW (create/insert/tiering/$lake-query/$lake-agg): *OK* (5 cases)
> > > - Tiering to Iceberg — System table $lake$snapshots: *OK* (1 case)
> > >
> > >
> > > Detailed test cases:
> > >
> > >
> > > | # | Lake | Category | Test Case | Mode | Result |
> > > |---|------|----------|-----------|------|--------|
> > > | 1 | Paimon | PK Table | Create PK table with datalake enabled (freshness=10s) | batch | *OK* |
> > > | 2 | Paimon | PK Table | Insert 5 rows into PK table | batch | *OK* |
> > > | 3 | Paimon | PK Table | Point query (order_id=1) | batch | *OK* |
> > > | 4 | Paimon | PK Table | $lake query (read from Paimon storage) | batch | *OK* |
> > > | 5 | Paimon | PK Table | $lake aggregation (COUNT + SUM) | batch | *OK* |
> > > | 6 | Paimon | PK Table | Union Read (Fluss + Paimon) | batch | *OK* |
> > > | 7 | Paimon | PK Table | Upsert update (id=1) | batch | *OK* |
> > > | 8 | Paimon | PK Table | Verify upsert (point query id=1) | batch | *OK* |
> > > | 9 | Paimon | PK Table | Delete record (id=5) | batch | *OK* |
> > > | 10 | Paimon | PK Table | Verify delete ($lake COUNT) | batch | *OK* |
> > > | 11 | Paimon | Log Table | Create Log table with datalake enabled | batch | *OK* |
> > > | 12 | Paimon | Log Table | Insert 4 rows into Log table | batch | *OK* |
> > > | 13 | Paimon | Log Table | $lake query Log table | batch | *OK* |
> > > | 14 | Paimon | Log Table | $lake aggregation (GROUP BY event_type) | batch | *OK* |
> > > | 15 | Paimon | Log Table | Streaming query Log table (earliest) | streaming | *OK* |
> > > | 16 | Paimon | Complex Types | Create PK table with ARRAY/MAP/ROW + datalake | batch | *OK* |
> > > | 17 | Paimon | Complex Types | Insert complex type data (3 rows) | batch | *OK* |
> > > | 18 | Paimon | Complex Types | Point query complex types (id=1) | batch | *OK* |
> > > | 19 | Paimon | Complex Types | $lake query complex types | batch | *OK* |
> > > | 20 | Paimon | Complex Types | $lake aggregation complex types (COUNT) | batch | *OK* |
> > > | 21 | Paimon | System Table | $lake$snapshots (snapshot_id, commit_user, total_record_count) | batch | *OK* |
> > > | 22 | Iceberg | PK Table | Create PK table with datalake enabled (freshness=10s) | batch | *OK* |
> > > | 23 | Iceberg | PK Table | Insert 5 rows into PK table | batch | *OK* |
> > > | 24 | Iceberg | PK Table | Point query (order_id=1) | batch | *OK* |
> > > | 25 | Iceberg | PK Table | $lake query (read from Iceberg storage, 5 rows with __bucket/__offset/__timestamp) | batch | *OK* |
> > > | 26 | Iceberg | PK Table | $lake aggregation (COUNT=5, SUM=808.39) | batch | *OK* |
> > > | 27 | Iceberg | PK Table | Union Read (Fluss + Iceberg) | batch | *OK* |
> > > | 28 | Iceberg | PK Table | Upsert update (id=1, customer→Alice_Updated) | batch | *OK* |
> > > | 29 | Iceberg | PK Table | Verify upsert (point query id=1, Alice_Updated confirmed) | batch | *OK* |
> > > | 30 | Iceberg | Log Table | Create Log table with datalake enabled | batch | *OK* |
> > > | 31 | Iceberg | Log Table | Insert 4 rows into Log table | batch | *OK* |
> > > | 32 | Iceberg | Log Table | $lake query Log table (4 rows with timestamps) | batch | *OK* |
> > > | 33 | Iceberg | Log Table | $lake aggregation (GROUP BY event_type: click=2, view=1, buy=1) | batch | *OK* |
> > > | 34 | Iceberg | Log Table | Streaming query Log table (earliest, 4 rows) | streaming | *OK* |
> > > | 35 | Iceberg | Complex Types | Create PK table with ARRAY/MAP/ROW + datalake | batch | *OK* |
> > > | 36 | Iceberg | Complex Types | Insert complex type data (3 rows) | batch | *OK* |
> > > | 37 | Iceberg | Complex Types | Point query complex types (id=1, tags=[vip,active]) | batch | *OK* |
> > > | 38 | Iceberg | Complex Types | $lake query complex types (3 rows with ARRAY/MAP/ROW) | batch | *OK* |
> > > | 39 | Iceberg | Complex Types | $lake aggregation complex types (COUNT=3) | batch | *OK* |
> > > | 40 | Iceberg | System Table | $lake$snapshots (snapshot_id, operation=append/overwrite, summary) | batch | *OK* |
> > >
> > >
> > > Notes:
> > > - Paimon: 1.3.1 (`paimon-flink-2.0-1.3.1.jar` + `paimon-bundle-1.3.1.jar`)
> > > - Iceberg: 1.10.1 (`iceberg-flink-runtime-2.0-1.10.1.jar`)
> > > - Iceberg required `hadoop-client-api-3.3.6.jar` + `hadoop-client-runtime-3.3.6.jar` (Hadoop 3.x) to resolve `FileSystem.openFile()` API compatibility
> > > - Iceberg required patching `LakeFlinkCatalog.java` to use the 3-arg `createCatalog(String, Map, Configuration)` via reflection (Iceberg 1.10.1 API change)
> > > - The Tiering Service ran as a Flink streaming job; data was verified via `$lake` virtual table queries reading directly from lake storage
> > > - All `$lake` queries returned correct data with Fluss metadata columns (__bucket, __offset, __timestamp)
> > >
> > >
> > > ForwardXu
> > > [email protected]
> > >
> > >
> > > ---------- Original email ----------
> > >
> > > From: Yunhong Zheng <[email protected]>
> > > Sent: Feb 13, 2026, 22:51
> > > To: dev <[email protected]>
> > > Subject: Re: [VOTE] Release Fluss 0.9.0-incubating (RC1)
> > >
> > >
> > >
> > > +1 (binding)
> > >
> > > I have verified the following newly introduced features: aggregate merge
> > > engine, auto-increment column, add column, rebalance, and the kv snapshot
> > > lease. All of these features passed in RC1.
> > >
> > > Looking forward to the final release of Fluss 0.9-incubating!
> > >
> > > Yours,
> > > Yunhong (Swuferhong)
> > >
> > > On 2026/02/13 13:10:22 Yang Wang wrote:
> > > > +1 (non-binding)
> > > >
> > > > - verified signatures and checksums: ok
> > > > - verified the source builds correctly: ok
> > > > - checked the LICENSE and NOTICE files are correct
> > > > - tested Rebalance feature with Flink 2.2.0 (1 coordinator + 3 tablet-servers): ok
> > > >   - interface validation (add_server_tag, rebalance, list_rebalance, cancel_rebalance)
> > > >   - effect verification (node offline migration, replica distribution, leader distribution)
> > > >   - data correctness across rebalance operations
> > > >   - confirmed the list_rebalance ClassCastException from rc0 is fixed in rc1
> > > > - tested Delta Join with Flink 2.2.0: ok
> > > >   - verified the DeltaJoin optimization works correctly for CDC sources with table.delete.behavior='IGNORE'
> > > >   - built and verified with the official git tag (v0.9.0-incubating-rc1)
> > > > - tested Paimon DV Union Read with Flink 1.20.3 + Paimon 1.3.1: ok
> > > >   - confirmed PR #2326 resolved the rc0 issues where DV table Union Read returned 0 rows or stale data
> > > >   - Union Read correctly returns complete and up-to-date data across multiple update scenarios
> > > >
> > > > Best,
> > > > Yang
> > > >
> > > > On Thu, Feb 12, 2026 at 20:03, yuxia <[email protected]> wrote:
> > > >
> > > > > Hi everyone,
> > > > >
> > > > > Please review and vote on the release candidate #1 for the Apache Fluss
> > > > > version 0.9.0-incubating, as follows:
> > > > > [ ] +1, Approve the release
> > > > > [ ] -1, Do not approve the release (please provide specific comments)
> > > > >
> > > > > The complete staging area is available for your review, which includes:
> > > > >
> > > > > The official source release and binary convenience releases to be
> > > > > deployed to:
> > > > > * https://dist.apache.org/repos/dist/dev/incubator/fluss/fluss-0.9.0-incubating-rc1
> > > > >
> > > > > Helm charts are available on:
> > > > > * https://dist.apache.org/repos/dist/dev/incubator/fluss/helm-chart
> > > > > (NB: you have to build the Docker images locally with the version
> > > > > 0.9.0-incubating in order to test Helm charts)
> > > > >
> > > > > All the files are signed with the key with fingerprint 56A9F259A4C18F9C;
> > > > > you can find the KEYS file here:
> > > > > * https://dist.apache.org/repos/dist/release/incubator/fluss/KEYS
> > > > >
> > > > > All artifacts to be deployed to the Maven Central Repository:
> > > > > * https://repository.apache.org/content/repositories/orgapachefluss-1004/
> > > > >
> > > > > Git tag for the release:
> > > > > * https://github.com/apache/fluss/releases/tag/v0.9.0-incubating-rc1
> > > > >
> > > > > Git commit for the release:
> > > > > * https://github.com/apache/fluss/commit/d6fd1f1f607a2672bff5d18d5ca811bfa920bbd7
> > > > >
> > > > > Website pull request for the release announcement blog post:
> > > > > * https://github.com/apache/fluss/pull/2590
> > > > >
> > > > > Upgrade note for the new release:
> > > > > * https://github.com/apache/fluss/blob/release-0.9/website/docs/maintenance/operations/upgrade-notes-0.9.md
> > > > >
> > > > > Docker images for the release candidate:
> > > > > * fluss: apache/fluss:0.9.0-incubating-rc1
> > > > >
> > > > > Please download, verify and test. To learn more about how to verify:
> > > > > https://fluss.apache.org/community/how-to-release/verifying-a-fluss-release/
> > > > >
> > > > > The vote will be open for at least 72 hours. It is adopted by majority
> > > > > approval, with at least 3 PPMC affirmative votes.
> > > > >
> > > > > Best regards,
> > > > > Yuxia