Gerrrr opened a new issue, #8719:
URL: https://github.com/apache/iceberg/issues/8719

   ### Feature Request / Improvement
   
   Consider the following continuous insertion into a keyless table:
   
   ```
   SET 'execution.checkpointing.interval' = '10 s';
   SET 'sql-client.execution.result-mode' = 'tableau';
   SET 'pipeline.max-parallelism' = '5';
   
   
   CREATE CATALOG nessie_catalog WITH (
       'type'='iceberg',
       'catalog-impl'='org.apache.iceberg.nessie.NessieCatalog',
       'io-impl'='org.apache.iceberg.aws.s3.S3FileIO',
       'uri'='http://catalog:19120/api/v1',
       'authentication.type'='none',
       'client.assume-role.region'='us-east-1',
       'warehouse'='s3://warehouse',
       's3.endpoint'='http://127.0.0.1:9000'
   );
   
   USE CATALOG nessie_catalog;
   
   CREATE DATABASE IF NOT EXISTS db;
   USE db;
   
   CREATE TEMPORARY TABLE word_table (
       word STRING
   ) WITH (
       'connector' = 'datagen',
       'fields.word.length' = '1'
   );
   
   CREATE TABLE word_count (
       word STRING,
       cnt BIGINT
   ) WITH (
       'format-version'='2',
       'write.upsert.enabled'='true'
   );
   
   INSERT INTO word_count SELECT word, COUNT(*) FROM word_table GROUP BY word;
   ```
   
   The output of the equivalent standalone `SELECT` query looks like this:
   
   ```
   Flink SQL> SELECT word, COUNT(*) as cnt FROM word_table GROUP BY word LIMIT 5;
   +----+--------------------------------+----------------------+
   | op |                           word |                  cnt |
   +----+--------------------------------+----------------------+
   | +I |                              e |                    1 |
   | -D |                              e |                    1 |
   | +I |                              e |                    2 |
   | +I |                              7 |                    1 |
   | +I |                              5 |                    1 |
   | -D |                              e |                    2 |
   ...
   ```
   
   
   Now, consider the following query against the `word_count` table:
   
   ```
   Flink SQL> SELECT * FROM word_count LIMIT 10;
   ```
   
   Expected result: latest counts for 10 words.
   
   Actual result:
   ```
   [ERROR] Could not execute SQL statement. Reason:
   java.lang.IllegalStateException: Equality field columns shouldn't be empty when configuring to use UPSERT data stream.
   ```
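
   The equality field columns mentioned in the error are normally derived from the table's primary key in Flink SQL. For comparison, a keyed variant of the table (a sketch only, using the hypothetical name `word_count_keyed`; not part of the reproduction above) would populate those columns and presumably not trip this check:

   ```
   -- Sketch: declaring a primary key sets the identifier fields, which
   -- Flink uses as equality field columns for upsert writes.
   CREATE TABLE word_count_keyed (
       word STRING,
       cnt BIGINT,
       PRIMARY KEY (word) NOT ENFORCED
   ) WITH (
       'format-version'='2',
       'write.upsert.enabled'='true'
   );
   ```

   This sketch is only for contrast; the reproduction above intentionally uses a keyless table.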
   
   ### Query engine
   
   Flink

