+1 (non-binding)

  - verified signatures and checksums: ok
  - verified the source builds correctly: ok
  - checked the LICENSE and NOTICE files are correct: ok
  - tested Rebalance feature with Flink 2.2.0 (1 coordinator + 3
  tablet-servers): ok
    - interface validation (add_server_tag, rebalance,
  list_rebalance, cancel_rebalance)
    - effect verification (node offline migration, replica
  distribution, leader distribution)
    - data correctness across rebalance operations
    - confirmed list_rebalance ClassCastException from rc0 is
  fixed in rc1
  - tested Delta Join with Flink 2.2.0: ok
    - verified that the DeltaJoin optimization works correctly for CDC
  sources with table.delete.behavior='IGNORE'
    - built and verified with official git tag
  (v0.9.0-incubating-rc1)
  - tested Paimon DV Union Read with Flink 1.20.3 + Paimon 1.3.1:
  ok
    - confirmed PR #2326 resolved rc0 issues where DV table Union
  Read returned 0 rows or stale data
    - Union Read correctly returns complete and up-to-date data
  across multiple update scenarios
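
  For reference, the signature and checksum checks above can be sketched
  roughly as below. The staging URL comes from the vote email later in this
  thread, but the source artifact name is an assumption, not the exact command
  log from my run; the last three lines are a self-contained local demo of the
  checksum step.

```shell
# Sketch of the signature/checksum verification steps, assuming the standard
# Apache dist layout. SRC is a hypothetical artifact name; adjust it to the
# actual file names in the staging directory.
RC_URL=https://dist.apache.org/repos/dist/dev/incubator/fluss/fluss-0.9.0-incubating-rc1
SRC=fluss-0.9.0-incubating-src.tgz   # hypothetical artifact name

# 1. Fetch the artifact, its .sha512 checksum, its .asc signature, and KEYS:
#      curl -LO "$RC_URL/$SRC" -LO "$RC_URL/$SRC.sha512" -LO "$RC_URL/$SRC.asc"
#      curl -LO https://dist.apache.org/repos/dist/release/incubator/fluss/KEYS
# 2. Import the release keys and verify the detached signature:
#      gpg --import KEYS
#      gpg --verify "$SRC.asc" "$SRC"
# 3. Verify the SHA-512 checksum:
#      sha512sum -c "$SRC.sha512"

# Self-contained demo of the checksum step on a locally generated file:
printf 'example release artifact\n' > artifact.tgz
sha512sum artifact.tgz > artifact.tgz.sha512
sha512sum -c artifact.tgz.sha512   # prints "artifact.tgz: OK"
```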

ForwardXu <[email protected]> wrote on Fri, Feb 13, 2026, at 21:03:

> +1
>
>
> Test environment:
> - OS: macOS (Darwin 25.2.0, ARM64/Apple Silicon)
> - Java: OpenJDK 21.0.1 LTS (TencentKonaJDK)
> - Fluss: 0.9.0-incubating (binary distribution)
> - Flink: 2.2.0
> - Connector: `fluss-flink-2.2-0.9.0-incubating.jar` (built from 0.9 source)
> - Fluss Cluster: local-cluster mode (1 CoordinatorServer + 1 TabletServer)
> - Flink Cluster: Standalone (1 JobManager + 1 TaskManager, process=4096m)
>
>
> Tested Flink + Fluss complex types (ARRAY/MAP/ROW) with 71 test cases, all
> passed: *OK*
>
>
> - ARRAY type PK table
> (create/insert/point-query/upsert/delete/streaming-query): *OK* (10 cases)
> - MAP type PK table
> (create/insert/point-query/upsert/delete/streaming-query): *OK* (9 cases)
> - ROW type PK table
> (create/insert/point-query/upsert/delete/streaming-query): *OK* (9 cases)
> - Nested complex types ARRAY<ROW>, MAP<STRING,ARRAY>,
> ROW<MAP+ARRAY> (create/insert/point-query): *OK* (11 cases)
> - Log table (no PK) with ARRAY/MAP/ROW (create/insert/streaming-query):
> *OK* (9 cases)
> - Edge cases (empty ARRAY/MAP, NULL complex types, large ARRAY of 50
> elements, deep nesting ARRAY<MAP<STRING,ROW>>): *OK* (16 cases)
> - Lookup Join with complex type dimension table (ARRAY+MAP+ROW): *OK* (7
> cases)
>
>
> Detailed test cases:
>
>
> | # | Category | Test Case | Mode | Result |
> |---|----------|-----------|------|--------|
> | 1 | ARRAY PK Table | Create table with
> ARRAY<INT>/ARRAY<STRING>/ARRAY<DOUBLE>/ARRAY<BOOLEAN> |
> batch | *OK* |
> | 2 | ARRAY PK Table | Insert 3 rows with ARRAY data | batch | *OK* |
> | 3 | ARRAY PK Table | Point query (id=1) | batch | *OK* |
> | 4 | ARRAY PK Table | Point query (id=2) | batch | *OK* |
> | 5 | ARRAY PK Table | Point query (id=3) | batch | *OK* |
> | 6 | ARRAY PK Table | Upsert update ARRAY (id=1) | batch | *OK* |
> | 7 | ARRAY PK Table | Verify update (id=1 int_arr=[99,88,77]) | batch |
> *OK* |
> | 8 | ARRAY PK Table | Delete record (id=3) | batch | *OK* |
> | 9 | ARRAY PK Table | Verify delete (id=3 not exists) | batch | *OK* |
> | 10 | ARRAY PK Table | Streaming full scan ARRAY table | streaming | *OK*
> |
> | 11 | MAP PK Table | Create table with
> MAP<STRING,INT>/MAP<STRING,STRING>/MAP<INT,DOUBLE> | batch |
> *OK* |
> | 12 | MAP PK Table | Insert 3 rows with MAP data | batch | *OK* |
> | 13 | MAP PK Table | Point query (id=1) | batch | *OK* |
> | 14 | MAP PK Table | Point query (id=2) | batch | *OK* |
> | 15 | MAP PK Table | Upsert update MAP (id=2) | batch | *OK* |
> | 16 | MAP PK Table | Verify update (id=2 updated=999) | batch | *OK* |
> | 17 | MAP PK Table | Delete record (id=3) | batch | *OK* |
> | 18 | MAP PK Table | Verify delete (id=3 not exists) | batch | *OK* |
> | 19 | MAP PK Table | Streaming full scan MAP table | streaming | *OK* |
> | 20 | ROW PK Table | Create table with
> ROW<city,zipcode,street>/ROW<phone,email> | batch | *OK* |
> | 21 | ROW PK Table | Insert 3 rows with ROW data | batch | *OK* |
> | 22 | ROW PK Table | Point query (id=1) | batch | *OK* |
> | 23 | ROW PK Table | Point query (id=2) | batch | *OK* |
> | 24 | ROW PK Table | Upsert update ROW (id=2 city→Hangzhou) | batch |
> *OK* |
> | 25 | ROW PK Table | Verify update (id=2) | batch | *OK* |
> | 26 | ROW PK Table | Delete record (id=3) | batch | *OK* |
> | 27 | ROW PK Table | Verify delete (id=3 not exists) | batch | *OK* |
> | 28 | ROW PK Table | Streaming full scan ROW table | streaming | *OK* |
> | 29 | Nested Types | Create table ARRAY<ROW<name,price,qty>> |
> batch | *OK* |
> | 30 | Nested Types | Insert ARRAY<ROW> data | batch | *OK* |
> | 31 | Nested Types | Point query ARRAY<ROW> (id=1001) | batch | *OK*
> |
> | 32 | Nested Types | Point query ARRAY<ROW> (id=1002) | batch | *OK*
> |
> | 33 | Nested Types | Create table MAP<STRING,ARRAY<INT>> |
> batch | *OK* |
> | 34 | Nested Types | Insert MAP<STRING,ARRAY> data | batch | *OK* |
> | 35 | Nested Types | Point query MAP<STRING,ARRAY> (id=1) | batch |
> *OK* |
> | 36 | Nested Types | Create table ROW<name,MAP,ARRAY> | batch | *OK*
> |
> | 37 | Nested Types | Insert ROW nested MAP+ARRAY data | batch | *OK* |
> | 38 | Nested Types | Point query ROW nested (id=1) | batch | *OK* |
> | 39 | Nested Types | Point query ROW nested (id=2) | batch | *OK* |
> | 40 | Log Table (no PK) | Create ARRAY log table | batch | *OK* |
> | 41 | Log Table (no PK) | Insert ARRAY log data | batch | *OK* |
> | 42 | Log Table (no PK) | Streaming query ARRAY log | streaming | *OK* |
> | 43 | Log Table (no PK) | Create MAP log table | batch | *OK* |
> | 44 | Log Table (no PK) | Insert MAP log data | batch | *OK* |
> | 45 | Log Table (no PK) | Streaming query MAP log | streaming | *OK* |
> | 46 | Log Table (no PK) | Create ROW log table | batch | *OK* |
> | 47 | Log Table (no PK) | Insert ROW log data | batch | *OK* |
> | 48 | Log Table (no PK) | Streaming query ROW log | streaming | *OK* |
> | 49 | Edge Cases | Create empty ARRAY table | batch | *OK* |
> | 50 | Edge Cases | Insert empty ARRAY[] | batch | *OK* |
> | 51 | Edge Cases | Point query empty ARRAY | batch | *OK* |
> | 52 | Edge Cases | Create empty MAP table | batch | *OK* |
> | 53 | Edge Cases | Insert empty MAP[] | batch | *OK* |
> | 54 | Edge Cases | Point query empty MAP | batch | *OK* |
> | 55 | Edge Cases | Create NULL complex type table | batch | *OK* |
> | 56 | Edge Cases | Insert NULL ARRAY/MAP/ROW | batch | *OK* |
> | 57 | Edge Cases | Point query all NULL (id=1) | batch | *OK* |
> | 58 | Edge Cases | Point query non-NULL (id=2) | batch | *OK* |
> | 59 | Edge Cases | Create large ARRAY table | batch | *OK* |
> | 60 | Edge Cases | Insert ARRAY with 50 elements | batch | *OK* |
> | 61 | Edge Cases | Point query large ARRAY | batch | *OK* |
> | 62 | Edge Cases | Create deep nesting ARRAY<MAP<STRING,ROW>> |
> batch | *OK* |
> | 63 | Edge Cases | Insert deep nesting data | batch | *OK* |
> | 64 | Edge Cases | Point query deep nesting | batch | *OK* |
> | 65 | Lookup Join | Create dimension table (ARRAY+MAP+ROW) | batch | *OK*
> |
> | 66 | Lookup Join | Insert dimension data | batch | *OK* |
> | 67 | Lookup Join | Point query dimension (id=1) | batch | *OK* |
> | 68 | Lookup Join | Point query dimension (id=2) | batch | *OK* |
> | 69 | Lookup Join | Create fact stream table | batch | *OK* |
> | 70 | Lookup Join | Insert fact data | batch | *OK* |
> | 71 | Lookup Join | Lookup Join query (fact ⋈ dim with complex types) |
> streaming | *OK* |
>
>
> Best,
> ForwardXu
>
> Original Email
>
> From: Jark Wu <[email protected]>
> Date: Feb 13, 2026, 18:41
> To: dev <[email protected]>
> Subject: Re: [VOTE] Release Fluss 0.9.0-incubating (RC1)
>
>
>
> +1 (binding)
>
> - Checked sums and signatures: *OK*
> - Checked the LICENSE and NOTICE files are correct: *OK*
> - Checked the jars in the staging repo: *OK*
> - Checked the source distribution doesn't include binaries: *OK*
> - Built from source with Java 11 and Maven 3.8.6: *OK*
> - Checked version consistency in pom files: *OK*
> - Went through the quick start: *OK*
> - Checked release notes and release blog: *OK*
> - Ran the Flink Engine quickstart with Flink 2.2 and the Fluss binary
> distribution: *OK*
> - Ran the "Real-Time Analytics With Flink" quickstart: *OK*
> - Tested Flink + Fluss auto-increment column and agg merge engine: *OK*
>   (There is a bug in the doc, for which I have pushed a hotfix [3])
>
>
> Btw, it seems the Download page update PR is missing. (not a blocker)
>
> Best,
> Jark
>
>
> [1] https://fluss.apache.org/docs/0.9/table-design/table-types/pk-table/#auto-increment-column
> [2] https://fluss.apache.org/docs/0.9/table-design/merge-engines/aggregation/
> [3] https://github.com/apache/fluss/commit/074d7563f3f414e574de55fd5c1361135b337496
>
>
> On Thu, 12 Feb 2026 at 20:03, yuxia <[email protected]> wrote:
> >
> > Hi everyone,
> >
> > Please review and vote on the release candidate #1 for the Apache Fluss
> > version 0.9.0-incubating, as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > The complete staging area is available for your review, which includes:
> >
> > The official source release and binary convenience releases to be deployed
> > to:
> > * https://dist.apache.org/repos/dist/dev/incubator/fluss/fluss-0.9.0-incubating-rc1
> >
> > Helm charts are available on:
> > * https://dist.apache.org/repos/dist/dev/incubator/fluss/helm-chart
> > (NB: you have to build the Docker images locally with the version
> > 0.9.0-incubating in order to test the Helm charts)
> >
> > All the files are signed with the key with fingerprint 56A9F259A4C18F9C;
> > you can find the KEYS file here:
> > * https://dist.apache.org/repos/dist/release/incubator/fluss/KEYS
> >
> > All artifacts to be deployed to the Maven Central Repository:
> > * https://repository.apache.org/content/repositories/orgapachefluss-1004/
> >
> > Git tag for the release:
> > * https://github.com/apache/fluss/releases/tag/v0.9.0-incubating-rc1
> >
> > Git commit for the release:
> > * https://github.com/apache/fluss/commit/d6fd1f1f607a2672bff5d18d5ca811bfa920bbd7
> >
> > Website pull request for the release announcement blog post:
> > * https://github.com/apache/fluss/pull/2590
> >
> > Upgrade notes for the new release:
> > * https://github.com/apache/fluss/blob/release-0.9/website/docs/maintenance/operations/upgrade-notes-0.9.md
> >
> > Docker images for the release candidate:
> > * fluss: apache/fluss:0.9.0-incubating-rc1
> >
> > Please download, verify, and test. To learn more about how to verify:
> > https://fluss.apache.org/community/how-to-release/verifying-a-fluss-release/
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PPMC affirmative votes.
> >
> > Best regards,
> > Yuxia
