dramaticlly commented on code in PR #7433:
URL: https://github.com/apache/iceberg/pull/7433#discussion_r1178385937


##########
docs/spark-configuration.md:
##########
@@ -63,15 +63,20 @@ Iceberg supplies two implementations:
 
 Both catalogs are configured using properties nested under the catalog name. Common configuration properties for Hive and Hadoop are:
 
-| Property                                           | Values                        | Description                                                          |
-| -------------------------------------------------- | ----------------------------- | -------------------------------------------------------------------- |
-| spark.sql.catalog._catalog-name_.type              | `hive`, `hadoop` or `rest`    | The underlying Iceberg catalog implementation, `HiveCatalog`, `HadoopCatalog`, `RESTCatalog` or left unset if using a custom catalog |
-| spark.sql.catalog._catalog-name_.catalog-impl      |                               | The underlying Iceberg catalog implementation.|
-| spark.sql.catalog._catalog-name_.default-namespace | default                       | The default current namespace for the catalog |
-| spark.sql.catalog._catalog-name_.uri               | thrift://host:port            | Metastore connect URI; default from `hive-site.xml` |
-| spark.sql.catalog._catalog-name_.warehouse         | hdfs://nn:8020/warehouse/path | Base path for the warehouse directory |
-| spark.sql.catalog._catalog-name_.cache-enabled     | `true` or `false`             | Whether to enable catalog cache, default value is `true` |
-| spark.sql.catalog._catalog-name_.cache.expiration-interval-ms | `30000` (30 seconds) | Duration after which cached catalog entries are expired; Only effective if `cache-enabled` is `true`. `-1` disables cache expiration and `0` disables caching entirely, irrespective of `cache-enabled`. Default is `30000` (30 seconds) |
+| Property                                                      | Values                        | Description                                                                                                                          |
+|---------------------------------------------------------------|-------------------------------|--------------------------------------------------------------------------------------------------------------------------------------|
+| spark.sql.catalog._catalog-name_.type                         | `hive`, `hadoop` or `rest`    | The underlying Iceberg catalog implementation, `HiveCatalog`, `HadoopCatalog`, `RESTCatalog` or left unset if using a custom catalog |
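
For context on what the table above documents, here is a minimal sketch of setting these catalog properties when launching `spark-sql`. The catalog name `my_catalog` and the host/path values are placeholders I've chosen for illustration, and the `org.apache.iceberg.spark.SparkCatalog` implementation class comes from Iceberg's Spark integration rather than this diff:

```shell
# Illustrative only: catalog name and endpoints are placeholders.
spark-sql \
  --conf spark.sql.catalog.my_catalog=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.my_catalog.type=hive \
  --conf spark.sql.catalog.my_catalog.uri=thrift://metastore-host:9083 \
  --conf spark.sql.catalog.my_catalog.warehouse=hdfs://nn:8020/warehouse/path \
  --conf spark.sql.catalog.my_catalog.cache-enabled=true \
  --conf spark.sql.catalog.my_catalog.cache.expiration-interval-ms=30000
```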

Review Comment:
   Thank you @nastra. In the IntelliJ IDE it's an easy fix to realign the misformatted Markdown table, which is why I applied the realignment.
   
   I do see that it can add some difficulty to the review, but I want to share that the GitHub review UI has a rich-text diff for Markdown syntax, so it can show how the rendered doc will look directly, like this:
   
   <img width="2233" alt="image" src="https://user-images.githubusercontent.com/5961173/234699256-894e8bf6-ceef-4cd3-b199-5859d0229ef3.png">
   
   Let me know if you dislike it; I can always revert to the original format.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

