kevinjqliu commented on code in PR #11923:
URL: https://github.com/apache/iceberg/pull/11923#discussion_r1906206762


##########
spark/v3.3/spark-runtime/src/integration/java/org/apache/iceberg/spark/SmokeTest.java:
##########
@@ -66,25 +64,25 @@ public void testGettingStarted() throws IOException {
     sql(
         "CREATE TABLE updates (id bigint, data string) USING parquet LOCATION 
'%s'",

Review Comment:
   nit: for this and the CREATE TABLE statement above, can we change to `USING iceberg` instead?
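   
   e.g. a minimal sketch of the suggested form (the `'%s'` placeholder is filled in by the test with a temp folder, as in the original):
   
   ```sql
   -- sketch of the suggested change, not the PR's final statement
   CREATE TABLE updates (id bigint, data string) USING iceberg LOCATION '%s';
   ```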



##########
docs/docs/spark-getting-started.md:
##########
@@ -160,7 +163,7 @@ This type conversion table describes how Spark types are converted to the Iceberg
 | map             | map                        |       |
 
 !!! info
-    The table is based on representing conversion during creating table. In fact, broader supports are applied on write. Here're some points on write:
+    The table is based on type conversions during table creation. Broader type conversions are applied on write:

Review Comment:
   nit: the paragraph before mentions the table is for both create and write, while this sentence says it's only based on create.
   



##########
docs/docs/spark-getting-started.md:
##########
@@ -77,21 +77,24 @@ Once your table is created, insert data using [`INSERT INTO`](spark-writes.md#insert-into)
 
 ```sql
 INSERT INTO local.db.table VALUES (1, 'a'), (2, 'b'), (3, 'c');
-INSERT INTO local.db.table SELECT id, data FROM source WHERE length(data) = 1;
 ```
 
 Iceberg also adds row-level SQL updates to Spark, [`MERGE INTO`](spark-writes.md#merge-into) and [`DELETE FROM`](spark-writes.md#delete-from):
 
 ```sql
-MERGE INTO local.db.target t USING (SELECT * FROM updates) u ON t.id = u.id
-WHEN MATCHED THEN UPDATE SET t.count = t.count + u.count
+CREATE TABLE local.db.updates (id bigint, data string) USING iceberg;
+INSERT INTO local.db.updates VALUES (1, 'x'), (2, 'y'), (4, 'z');

Review Comment:
   same as below, let's update the values so they hit all branches of the MERGE INTO statement.
   
   nit: and also add the values as a comment to track the table state
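   
   For illustration, a sketch of source rows that would exercise every MERGE branch, with state-tracking comments (the branch conditions and the `(2, 'hi')` value are hypothetical, not the PR's actual statement):
   
   ```sql
   -- target before merge: (1, 'a'), (2, 'b'), (3, 'c')
   -- updates:             (1, 'x'), (2, 'hi'), (4, 'z')           -- hypothetical values
   MERGE INTO local.db.target t
   USING (SELECT * FROM local.db.updates) u
   ON t.id = u.id
   WHEN MATCHED AND u.data = 'x' THEN DELETE                       -- id 1: matched, condition true -> deleted
   WHEN MATCHED THEN UPDATE SET t.data = u.data                    -- id 2: matched -> data becomes 'hi'
   WHEN NOT MATCHED THEN INSERT (id, data) VALUES (u.id, u.data);  -- id 4: not matched -> inserted
   -- target after merge:  (2, 'hi'), (3, 'c'), (4, 'z')
   ```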



##########
spark/v3.3/spark-runtime/src/integration/java/org/apache/iceberg/spark/SmokeTest.java:
##########
@@ -66,25 +64,25 @@ public void testGettingStarted() throws IOException {
     sql(
         "CREATE TABLE updates (id bigint, data string) USING parquet LOCATION 
'%s'",
         temp.newFolder());
-    sql("INSERT INTO updates VALUES (1, 'x'), (2, 'x'), (4, 'z')");
+    sql("INSERT INTO updates VALUES (1, 'x'), (2, 'y'), (4, 'z')");

Review Comment:
   I like the original example since it hits all branches of the MERGE INTO statement.
   Also, it'd be nice to keep track of the table state in a comment.
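   
   A sketch of the state-tracking comments suggested here, using the original values (the target rows are assumed from the earlier INSERT in the getting-started example):
   
   ```sql
   INSERT INTO updates VALUES (1, 'x'), (2, 'x'), (4, 'z');
   -- updates: (1, 'x'), (2, 'x'), (4, 'z')
   -- target:  (1, 'a'), (2, 'b'), (3, 'c')
   ```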


