+1 (non-binding)

- verified signatures and checksums: ok
- verified the source builds correctly: ok
- checked the LICENSE and NOTICE files are correct
- tested Rebalance feature with Flink 2.2.0 (1 coordinator + 3 tablet-servers): ok
  - interface validation (add_server_tag, rebalance, list_rebalance, cancel_rebalance)
  - effect verification (node offline migration, replica distribution, leader distribution)
  - data correctness across rebalance operations
  - confirmed the list_rebalance ClassCastException from rc0 is fixed in rc1
- tested Delta Join with Flink 2.2.0: ok
  - verified the DeltaJoin optimization works correctly for CDC sources with table.delete.behavior='IGNORE'
  - built and verified with the official git tag (v0.9.0-incubating-rc1)
- tested Paimon DV Union Read with Flink 1.20.3 + Paimon 1.3.1: ok
  - confirmed PR #2326 resolved the rc0 issue where DV table Union Read returned 0 rows or stale data
  - Union Read correctly returns complete and up-to-date data across multiple update scenarios
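For anyone reproducing the signature/checksum step above, the usual flow looks roughly like the sketch below (the `fluss-0.9.0-incubating-src.tgz` artifact name is an assumption; substitute the actual rc1 file names from the staging area). The checksum round trip is demonstrated on a dummy file:

```shell
# Assumed commands against the real staging artifacts (names are placeholders):
#   gpg --import KEYS
#   gpg --verify fluss-0.9.0-incubating-src.tgz.asc fluss-0.9.0-incubating-src.tgz
#   sha512sum -c fluss-0.9.0-incubating-src.tgz.sha512

# The sha512 round trip itself, shown on a dummy artifact:
printf 'dummy' > artifact.tgz              # stand-in for a release tarball
sha512sum artifact.tgz > artifact.tgz.sha512   # record the expected digest
sha512sum -c artifact.tgz.sha512           # re-check; prints "artifact.tgz: OK"
```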
ForwardXu <[email protected]> 于2026年2月13日周五 21:03写道:
> +1
>
>
> Test environment:
> - OS: macOS (Darwin 25.2.0, ARM64/Apple Silicon)
> - Java: OpenJDK 21.0.1 LTS (TencentKonaJDK)
> - Fluss: 0.9.0-incubating (binary distribution)
> - Flink: 2.2.0
> - Connector: `fluss-flink-2.2-0.9.0-incubating.jar` (built from 0.9 source)
> - Fluss Cluster: local-cluster mode (1 CoordinatorServer + 1 TabletServer)
> - Flink Cluster: Standalone (1 JobManager + 1 TaskManager, process=4096m)
>
>
> Tested Flink + Fluss complex types (ARRAY/MAP/ROW) with 71 test cases, all
> passed: *OK*
>
>
> - ARRAY type PK table
> (create/insert/point-query/upsert/delete/streaming-query): *OK* (10 cases)
> - MAP type PK table
> (create/insert/point-query/upsert/delete/streaming-query): *OK* (9 cases)
> - ROW type PK table
> (create/insert/point-query/upsert/delete/streaming-query): *OK* (9 cases)
> - Nested complex types — ARRAY\<ROW\>, MAP\<STRING,ARRAY\>,
> ROW\<MAP+ARRAY\> (create/insert/point-query): *OK* (11 cases)
> - Log table (no PK) with ARRAY/MAP/ROW (create/insert/streaming-query):
> *OK* (9 cases)
> - Edge cases — empty ARRAY/MAP, NULL complex types, large ARRAY(50
> elements), deep nesting ARRAY\<MAP\<STRING,ROW\>\>: *OK* (16 cases)
> - Lookup Join with complex type dimension table (ARRAY+MAP+ROW): *OK* (7
> cases)
>
>
> Detailed test cases:
>
>
> | # | Category | Test Case | Mode | Result |
> |---|----------|-----------|------|--------|
> | 1 | ARRAY PK Table | Create table with ARRAY\<INT\>/ARRAY\<STRING\>/ARRAY\<DOUBLE\>/ARRAY\<BOOLEAN\> | batch | *OK* |
> | 2 | ARRAY PK Table | Insert 3 rows with ARRAY data | batch | *OK* |
> | 3 | ARRAY PK Table | Point query (id=1) | batch | *OK* |
> | 4 | ARRAY PK Table | Point query (id=2) | batch | *OK* |
> | 5 | ARRAY PK Table | Point query (id=3) | batch | *OK* |
> | 6 | ARRAY PK Table | Upsert update ARRAY (id=1) | batch | *OK* |
> | 7 | ARRAY PK Table | Verify update (id=1 int_arr=[99,88,77]) | batch | *OK* |
> | 8 | ARRAY PK Table | Delete record (id=3) | batch | *OK* |
> | 9 | ARRAY PK Table | Verify delete (id=3 not exists) | batch | *OK* |
> | 10 | ARRAY PK Table | Streaming full scan ARRAY table | streaming | *OK* |
> | 11 | MAP PK Table | Create table with MAP\<STRING,INT\>/MAP\<STRING,STRING\>/MAP\<INT,DOUBLE\> | batch | *OK* |
> | 12 | MAP PK Table | Insert 3 rows with MAP data | batch | *OK* |
> | 13 | MAP PK Table | Point query (id=1) | batch | *OK* |
> | 14 | MAP PK Table | Point query (id=2) | batch | *OK* |
> | 15 | MAP PK Table | Upsert update MAP (id=2) | batch | *OK* |
> | 16 | MAP PK Table | Verify update (id=2 updated=999) | batch | *OK* |
> | 17 | MAP PK Table | Delete record (id=3) | batch | *OK* |
> | 18 | MAP PK Table | Verify delete (id=3 not exists) | batch | *OK* |
> | 19 | MAP PK Table | Streaming full scan MAP table | streaming | *OK* |
> | 20 | ROW PK Table | Create table with ROW\<city,zipcode,street\>/ROW\<phone,email\> | batch | *OK* |
> | 21 | ROW PK Table | Insert 3 rows with ROW data | batch | *OK* |
> | 22 | ROW PK Table | Point query (id=1) | batch | *OK* |
> | 23 | ROW PK Table | Point query (id=2) | batch | *OK* |
> | 24 | ROW PK Table | Upsert update ROW (id=2 city→Hangzhou) | batch | *OK* |
> | 25 | ROW PK Table | Verify update (id=2) | batch | *OK* |
> | 26 | ROW PK Table | Delete record (id=3) | batch | *OK* |
> | 27 | ROW PK Table | Verify delete (id=3 not exists) | batch | *OK* |
> | 28 | ROW PK Table | Streaming full scan ROW table | streaming | *OK* |
> | 29 | Nested Types | Create table ARRAY\<ROW\<name,price,qty\>\> | batch | *OK* |
> | 30 | Nested Types | Insert ARRAY\<ROW\> data | batch | *OK* |
> | 31 | Nested Types | Point query ARRAY\<ROW\> (id=1001) | batch | *OK* |
> | 32 | Nested Types | Point query ARRAY\<ROW\> (id=1002) | batch | *OK* |
> | 33 | Nested Types | Create table MAP\<STRING,ARRAY\<INT\>\> | batch | *OK* |
> | 34 | Nested Types | Insert MAP\<STRING,ARRAY\> data | batch | *OK* |
> | 35 | Nested Types | Point query MAP\<STRING,ARRAY\> (id=1) | batch | *OK* |
> | 36 | Nested Types | Create table ROW\<name,MAP,ARRAY\> | batch | *OK* |
> | 37 | Nested Types | Insert ROW nested MAP+ARRAY data | batch | *OK* |
> | 38 | Nested Types | Point query ROW nested (id=1) | batch | *OK* |
> | 39 | Nested Types | Point query ROW nested (id=2) | batch | *OK* |
> | 40 | Log Table (no PK) | Create ARRAY log table | batch | *OK* |
> | 41 | Log Table (no PK) | Insert ARRAY log data | batch | *OK* |
> | 42 | Log Table (no PK) | Streaming query ARRAY log | streaming | *OK* |
> | 43 | Log Table (no PK) | Create MAP log table | batch | *OK* |
> | 44 | Log Table (no PK) | Insert MAP log data | batch | *OK* |
> | 45 | Log Table (no PK) | Streaming query MAP log | streaming | *OK* |
> | 46 | Log Table (no PK) | Create ROW log table | batch | *OK* |
> | 47 | Log Table (no PK) | Insert ROW log data | batch | *OK* |
> | 48 | Log Table (no PK) | Streaming query ROW log | streaming | *OK* |
> | 49 | Edge Cases | Create empty ARRAY table | batch | *OK* |
> | 50 | Edge Cases | Insert empty ARRAY[] | batch | *OK* |
> | 51 | Edge Cases | Point query empty ARRAY | batch | *OK* |
> | 52 | Edge Cases | Create empty MAP table | batch | *OK* |
> | 53 | Edge Cases | Insert empty MAP[] | batch | *OK* |
> | 54 | Edge Cases | Point query empty MAP | batch | *OK* |
> | 55 | Edge Cases | Create NULL complex type table | batch | *OK* |
> | 56 | Edge Cases | Insert NULL ARRAY/MAP/ROW | batch | *OK* |
> | 57 | Edge Cases | Point query all NULL (id=1) | batch | *OK* |
> | 58 | Edge Cases | Point query non-NULL (id=2) | batch | *OK* |
> | 59 | Edge Cases | Create large ARRAY table | batch | *OK* |
> | 60 | Edge Cases | Insert ARRAY with 50 elements | batch | *OK* |
> | 61 | Edge Cases | Point query large ARRAY | batch | *OK* |
> | 62 | Edge Cases | Create deep nesting ARRAY\<MAP\<STRING,ROW\>\> | batch | *OK* |
> | 63 | Edge Cases | Insert deep nesting data | batch | *OK* |
> | 64 | Edge Cases | Point query deep nesting | batch | *OK* |
> | 65 | Lookup Join | Create dimension table (ARRAY+MAP+ROW) | batch | *OK* |
> | 66 | Lookup Join | Insert dimension data | batch | *OK* |
> | 67 | Lookup Join | Point query dimension (id=1) | batch | *OK* |
> | 68 | Lookup Join | Point query dimension (id=2) | batch | *OK* |
> | 69 | Lookup Join | Create fact stream table | batch | *OK* |
> | 70 | Lookup Join | Insert fact data | batch | *OK* |
> | 71 | Lookup Join | Lookup Join query (fact ⋈ dim with complex types) | streaming | *OK* |
>
>
> Best,
> ForwardXu
>
> ---------- Original message ----------
> From: Jark Wu <[email protected]>
> Sent: February 13, 2026, 18:41
> To: dev <[email protected]>
> Subject: Re: [VOTE] Release Fluss 0.9.0-incubating (RC1)
>
>
>
> +1 (binding)
>
> - Checked sums and signatures: *OK*
> - Checked the LICENSE and NOTICE files are correct: *OK*
> - Checked the jars in the staging repo: *OK*
> - Checked that the source distribution doesn't include binaries: *OK*
> - Built from source with Java 11 and Maven 3.8.6: *OK*
> - Checked version consistency in pom files: *OK*
> - Went through the quick start: *OK*
> - Checked release notes and release blog: *OK*
> - Ran the Flink engine quickstart with Flink 2.2 and the Fluss binary distribution: *OK*
> - Ran the "Real-Time Analytics With Flink" quickstart: *OK*
> - Tested Flink + Fluss auto-increment column [1] and agg merge engine [2]: *OK*
>   (There is a bug in the docs, for which I have pushed a hotfix [3])
>
> Btw, it seems the Download page update PR is missing (not a blocker).
>
> Best,
> Jark
>
>
> [1]
> https://fluss.apache.org/docs/0.9/table-design/table-types/pk-table/#auto-increment-column
> [2]
> https://fluss.apache.org/docs/0.9/table-design/merge-engines/aggregation/
> [3]
> https://github.com/apache/fluss/commit/074d7563f3f414e574de55fd5c1361135b337496
>
>
> On Thu, 12 Feb 2026 at 20:03, yuxia <[email protected]> wrote:
> >
> > Hi everyone,
> >
> > Please review and vote on the release candidate #1 for the Apache Fluss
> > version 0.9.0-incubating, as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > The complete staging area is available for your review, which includes:
> >
> > The official source release and binary convenience releases to be deployed to:
> > * https://dist.apache.org/repos/dist/dev/incubator/fluss/fluss-0.9.0-incubating-rc1
> >
> > Helm charts are available on:
> > * https://dist.apache.org/repos/dist/dev/incubator/fluss/helm-chart
> > (NB: you have to build the Docker images locally with the version
> > 0.9.0-incubating in order to test the Helm charts)
> >
> > All the files are signed with the key with fingerprint 56A9F259A4C18F9C;
> > you can find the KEYS file here:
> > * https://dist.apache.org/repos/dist/release/incubator/fluss/KEYS
> >
> > All artifacts to be deployed to the Maven Central Repository:
> > * https://repository.apache.org/content/repositories/orgapachefluss-1004/
> >
> > Git tag for the release:
> > * https://github.com/apache/fluss/releases/tag/v0.9.0-incubating-rc1
> >
> > Git commit for the release:
> > * https://github.com/apache/fluss/commit/d6fd1f1f607a2672bff5d18d5ca811bfa920bbd7
> >
> > Website pull request for the release announcement blog post:
> > * https://github.com/apache/fluss/pull/2590
> >
> > Upgrade note for the new release:
> > * https://github.com/apache/fluss/blob/release-0.9/website/docs/maintenance/operations/upgrade-notes-0.9.md
> >
> > Docker images for the release candidate:
> > * fluss: apache/fluss:0.9.0-incubating-rc1
> >
> > Please download, verify and test. To learn more about how to verify:
> > https://fluss.apache.org/community/how-to-release/verifying-a-fluss-release/
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PPMC affirmative votes.
> >
> > Best regards,
> > Yuxia