jurossiar commented on issue #9960:
URL: https://github.com/apache/iceberg/issues/9960#issuecomment-2324860967

   I'm wondering if you have any update on this issue? Previous comments: https://github.com/apache/iceberg/issues/9960#issuecomment-2197375635

   I've just tried with:
   - spark-version: 3.5
   - scala-version: 2.12
   - iceberg-version: 1.6.1

   and I still get the same errors.
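
   For reference, the notebook's Spark session is created along the lines of the sketch below. This is illustrative only: the runtime coordinate and extensions class match the versions above, but the catalog definition for `julian` (a Hadoop catalog over S3) is a placeholder, not necessarily the exact configuration of this environment.
   ```
   # Illustrative PySpark session setup for Spark 3.5 / Scala 2.12 / Iceberg 1.6.1.
   # NOTE: the "julian" catalog settings below are assumptions for the sketch.
   from pyspark.sql import SparkSession

   spark = (
       SparkSession.builder
       .appName("iceberg-update-repro")
       # Iceberg runtime built for Spark 3.5 / Scala 2.12
       .config("spark.jars.packages",
               "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1")
       # The extensions add UPDATE/DELETE/MERGE planning for Iceberg tables
       .config("spark.sql.extensions",
               "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
       # An Iceberg catalog named "julian" (placeholder: Hadoop catalog over S3)
       .config("spark.sql.catalog.julian", "org.apache.iceberg.spark.SparkCatalog")
       .config("spark.sql.catalog.julian.type", "hadoop")
       .config("spark.sql.catalog.julian.warehouse", "s3a://<bucket>/tables")
       .getOrCreate()
   )
   ```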
   
   
   Steps to reproduce:
   Create the table:
   ```
   %%sparksql
   CREATE TABLE julian.tmp_julian (
       user_id STRING,
       access_type STRING,
       open BOOLEAN
   )
   USING ICEBERG
   LOCATION 's3a://<bucket>/tables/julian/tmp_julian'
   ```
   
   Add a row:
   ```
   %%sparksql
   INSERT INTO julian.tmp_julian VALUES ('a', 'B', false)
   ```
   
   Update rows:
   ```
   %%sparksql
   UPDATE julian.tmp_julian SET open = (access_type == "B")
   ```
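
   Side note: Spark SQL accepts `==` for equality and, with default settings, treats double-quoted "B" as a string literal, so the statement above is syntactically valid. The more portable spelling below would presumably fail the same way, since the error is raised at planning time rather than parse time:
   ```
   # Same UPDATE via spark.sql(), using the conventional '=' equality and a
   # single-quoted literal; presumably fails identically, since the failure
   # is about plan-time UPDATE support, not syntax.
   spark.sql("UPDATE julian.tmp_julian SET open = (access_type = 'B')")
   ```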
   
   Error:
   ```
   {
        "name": "Py4JJavaError",
        "message": "An error occurred while calling o52.sql.
   : org.apache.spark.SparkUnsupportedOperationException: UPDATE TABLE is not supported temporarily.
   \tat org.apache.spark.sql.errors.QueryExecutionErrors$.ddlUnsupportedTemporarilyError(QueryExecutionErrors.scala:1109)
   \tat org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:896)
   \tat org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$1(QueryPlanner.scala:63)
   \tat scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
   \tat scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
   \tat scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491)
   \tat org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
   \tat org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:70)
   \tat org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78)
   \tat scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196)
   \tat scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194)
   \tat scala.collection.Iterator.foreach(Iterator.scala:943)
   \tat scala.collection.Iterator.foreach$(Iterator.scala:943)
   \tat scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
   \tat scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199)
   \tat scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192)
   \tat scala.collection.AbstractIterator.foldLeft(Iterator.scala:1431)
   \tat org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75)
   \tat scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
   \tat scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
   \tat org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
   \tat org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:70)
   \tat org.apache.spark.sql.execution.QueryExecution$.createSparkPlan(QueryExecution.scala:476)
   \tat org.apache.spark.sql.execution.QueryExecution.$anonfun$sparkPlan$1(QueryExecution.scala:162)
   \tat org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
   \tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:202)
   \tat org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:526)
   \tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:202)
   \tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
   \tat org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:201)
   \tat org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:162)
   \tat org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:155)
   \tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executedPlan$1(QueryExecution.scala:175)
   \tat org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
   \tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:202)
   \tat org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:526)
   \tat org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:202)
   \tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
   \tat org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:201)
   \tat org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:175)
   \tat org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:168)
   \tat org.apache.spark.sql.execution.QueryExecution.simpleString(QueryExecution.scala:221)
   \tat org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$explainString(QueryExecution.scala:266)
   \tat org.apache.spark.sql.execution.QueryExecution.explainString(QueryExecution.scala:235)
   \tat org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:112)
   \tat org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:195)
   \tat org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:103)
   \tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
   \tat org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
   \tat org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)
   \tat org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)
   \tat org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:512)
   \tat org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:104)
   \tat org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:512)
   \tat org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)
   \tat org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
   \tat org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
   \tat org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
   \tat org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
   \tat org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:488)
   \tat org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)
   \tat org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)
   \tat org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)
   \tat org.apache.spark.sql.Dataset.<init>(Dataset.scala:219)
   \tat org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
   \tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
   \tat org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
   \tat org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:640)
   \tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
   \tat org.apache.spark.sql.SparkSession.sql(SparkSession.scala:630)
   \tat org.apache.spark.sql.SparkSession.sql(SparkSession.scala:662)
   \tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   \tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
   \tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   \tat java.base/java.lang.reflect.Method.invoke(Method.java:568)
   \tat py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
   \tat py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
   \tat py4j.Gateway.invoke(Gateway.java:282)
   \tat py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
   \tat py4j.commands.CallCommand.execute(CallCommand.java:79)
   \tat py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
   \tat py4j.ClientServerConnection.run(ClientServerConnection.java:106)
   \tat java.base/java.lang.Thread.run(Thread.java:840)
   ",
        "stack": 
"---------------------------------------------------------------------------
   Py4JJavaError                             Traceback (most recent call last)
   Cell In[9], line 1
   ----> 1 get_ipython().run_cell_magic('sparksql', '', 'update 
julian.tmp_julian set open = (access_type == \"B\")\
   ')
   
   File 
~/miniconda3/envs/dmf-library-dev/lib/python3.10/site-packages/IPython/core/interactiveshell.py:2541,
 in InteractiveShell.run_cell_magic(self, magic_name, line, cell)
      2539 with self.builtin_trap:
      2540     args = (magic_arg_s, cell)
   -> 2541     result = fn(*args, **kwargs)
      2543 # The code below prevents the output from being displayed
      2544 # when using magics with decorator @output_can_be_silenced
      2545 # when the last Python token in the expression is a ';'.
      2546 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):
   
   File 
~/miniconda3/envs/dmf-library-dev/lib/python3.10/site-packages/sparksql_magic/sparksql.py:40,
 in SparkSql.sparksql(self, line, cell, local_ns)
        37     print(\"active spark session is not found\")
        38     return
   ---> 40 df = spark.sql(bind_variables(cell, user_ns))
        41 if args.cache or args.eager:
        42     print('cache dataframe with %s load' % ('eager' if args.eager 
else 'lazy'))
   
   File 
~/miniconda3/envs/dmf-library-dev/lib/python3.10/site-packages/pyspark/sql/session.py:1440,
 in SparkSession.sql(self, sqlQuery, args, **kwargs)
      1438 try:
      1439     litArgs = {k: _to_java_column(lit(v)) for k, v in (args or 
{}).items()}
   -> 1440     return DataFrame(self._jsparkSession.sql(sqlQuery, litArgs), 
self)
      1441 finally:
      1442     if len(kwargs) > 0:
   
   File 
~/miniconda3/envs/dmf-library-dev/lib/python3.10/site-packages/py4j/java_gateway.py:1322,
 in JavaMember.__call__(self, *args)
      1316 command = proto.CALL_COMMAND_NAME +\\
      1317     self.command_header +\\
      1318     args_command +\\
      1319     proto.END_COMMAND_PART
      1321 answer = self.gateway_client.send_command(command)
   -> 1322 return_value = get_return_value(
      1323     answer, self.gateway_client, self.target_id, self.name)
      1325 for temp_arg in temp_args:
      1326     if hasattr(temp_arg, \"_detach\"):
   
   File 
~/miniconda3/envs/dmf-library-dev/lib/python3.10/site-packages/pyspark/errors/exceptions/captured.py:169,
 in capture_sql_exception.<locals>.deco(*a, **kw)
       167 def deco(*a: Any, **kw: Any) -> Any:
       168     try:
   --> 169         return f(*a, **kw)
       170     except Py4JJavaError as e:
       171         converted = convert_exception(e.java_exception)
   
   File 
~/miniconda3/envs/dmf-library-dev/lib/python3.10/site-packages/py4j/protocol.py:326,
 in get_return_value(answer, gateway_client, target_id, name)
       324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
       325 if answer[1] == REFERENCE_TYPE:
   --> 326     raise Py4JJavaError(
       327         \"An error occurred while calling {0}{1}{2}.\
   \".
       328         format(target_id, \".\", name), value)
       329 else:
       330     raise Py4JError(
       331         \"An error occurred while calling {0}{1}{2}. Trace:\
   {3}\
   \".
       332         format(target_id, \".\", name, value))
   
   Py4JJavaError: An error occurred while calling o52.sql.
   : org.apache.spark.SparkUnsupportedOperationException: UPDATE TABLE is not supported temporarily.
   [... Java stack trace identical to the "message" field above ...]
   "
   }
   ```
   
   But it works with Spark 3.4.
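
   For what it's worth, the "UPDATE TABLE is not supported temporarily" message is raised by Spark's built-in planner (SparkStrategies$BasicOperators in the trace above), which suggests the UPDATE was never rewritten by Iceberg's SQL extensions. Two quick sanity checks worth running in the failing session (a sketch; nothing here is environment-specific):
   ```
   # Confirm the Spark version the session actually runs, and that the Iceberg
   # SQL extensions are configured. On Spark 3.5 the classpath should carry
   # iceberg-spark-runtime-3.5_2.12 (not the 3.4 build).
   print(spark.version)
   print(spark.conf.get("spark.sql.extensions", "<not set>"))
   ```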


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

