bagipriyank opened a new issue, #9393:
URL: https://github.com/apache/pinot/issues/9393

   I followed 
https://docs.pinot.apache.org/users/tutorials/ingest-parquet-files-from-s3-using-spark 
to create an offline table. I invoked the delete-all-segments API (or the 
delete-segment API) from Swagger with a retention of 0d or -1d and observed that 
the segment was deleted from the controller UI and ZooKeeper, but was not deleted 
from the server disk. If I used
   ```
   pushJobSpec:
     copyToDeepStoreForMetadataPush: true
   ```
   
   in the Spark ingestion job spec, then the segments were deleted from the S3 
segment store but still were not deleted from the server disk.
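   
   For reference, the delete calls above were issued from the controller Swagger 
UI; the equivalent curl would look roughly like this (the controller address 
`localhost:9000` and the table name `myTable` are placeholders for my actual setup):
   
   ```
   # Delete all segments of the offline table, overriding retention to 0d
   # (localhost:9000 and myTable are placeholders)
   curl -X DELETE "http://localhost:9000/segments/myTable?type=OFFLINE&retention=0d"
   ```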
   
   I tried deleting the table, and even then the segment(s) were not deleted from 
the server disk. I do not have retention configured in the table config for this 
table.
   
   Even disabling the table and then invoking the delete-segment APIs did not 
remove the segments from the server disk. This is causing disk usage to grow with 
each daily refresh.
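   
   The table delete was likewise issued through Swagger; roughly the equivalent 
curl, again with a placeholder host and table name:
   
   ```
   # Delete the table itself; its segments still remained on the server disk
   # (localhost:9000 and myTable are placeholders)
   curl -X DELETE "http://localhost:9000/tables/myTable?type=offline"
   ```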

