Hi
We created the table with KEEP_DELETED_CELLS => 'false', so ideally the data
should also be deleted from HDFS; snapshots are not enabled either.
We are simply creating the table, putting some data in, and then deleting the
data from that column family.
The row count in the HBase table has gone down, but when we checked the HDFS
directory for HBase the size is unchanged. We also ran a major compaction, but
it had no impact on the HDFS size.
Below are the commands we are using:
create 'OBST:TEST',
{NAME => 'cfDocContent', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false',
NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE',
CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL =>
'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER =>
'ROW', CACHE_INDEX_ON_WRITE=> 'false', IN_MEMORY => 'false',
CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false',
IS_MOB => 'true', COMPRESSION => 'SNAPPY', BLOCKCACHE=> 'true', BLOCKSIZE
=> '65536'}
,
{NAME => 'cfMetadata', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false',
NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE',
CACHE_DATA_ON_WRITE=> 'false', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL =>
'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER =>
'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false',
CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false',
COMPRESSION => 'SNAPPY', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
put 'OBST:TEST','Test101','cfDocContent:doc','row101'
put 'OBST:TEST','Test102','cfDocContent:doc','row102'
put 'OBST:TEST','Test103','cfDocContent:doc','row103'
put 'OBST:TEST','Test106','cfMetadata:doc','row106'
put 'OBST:TEST','Test107','cfMetadata:doc','row107'
put 'OBST:TEST','Test108','cfMetadata:doc','row108'
delete 'OBST:TEST','Test101','cfDocContent:doc'
delete 'OBST:TEST','Test102','cfDocContent:doc'
delete 'OBST:TEST','Test103','cfDocContent:doc'
delete 'OBST:TEST','Test106','cfMetadata:doc'
delete 'OBST:TEST','Test107','cfMetadata:doc'
delete 'OBST:TEST','Test108','cfMetadata:doc'
flush 'OBST:TEST'
compact 'OBST:TEST'
major_compact 'OBST:TEST', 'cfDocContent', 'MOB'
major_compact 'OBST:TEST'
hbase(main):185:0> scan 'OBST:TEST', {RAW => true}
ROW COLUMN+CELL
Test101 column=cfDocContent:doc, timestamp=1608282551809, type=Delete
Test102 column=cfDocContent:doc, timestamp=1608282551824, type=Delete
Test103 column=cfDocContent:doc, timestamp=1608282551838, type=Delete
The tombstone markers are still there, and the size of the HDFS directory
didn't reduce. The data is not moving into trash either.
Let me know if you need any additional information.
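For reference, here is how one could check where the space is actually being held (a sketch only; the paths assume the default hbase.rootdir of /hbase, so adjust them for your cluster). Note that compacted-away HFiles are first moved to the archive directory and only removed later by the HFile cleaner chore, and MOB data (cfDocContent has IS_MOB => 'true') is stored separately from the region directories:

```shell
# Size of the table's regular HFiles (path assumes the default /hbase root dir).
hdfs dfs -du -s -h /hbase/data/OBST/TEST

# MOB files for cfDocContent live outside the region directories, under mobdir.
hdfs dfs -du -s -h /hbase/mobdir/data/OBST/TEST

# Compacted-away files are moved here first and deleted later by the HFile
# cleaner chore, so space may still be held in the archive for a while.
hdfs dfs -du -s -h /hbase/archive/data/OBST/TEST
```

If the archive directory is holding the space, the cleaner chore (or a snapshot/replication reference pinning the files) would be the thing to investigate.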
HBase version is 2.0.2.3.1.4.0-315.
Regards,
Satya
On Thu, Feb 11, 2021 at 6:25 PM Shashwat Shriparv <[email protected]>
wrote:
> Please check TTL configuration
>
>
> Warm Regards,
> Shashwat Shriparv
>
>
>
> On Thu, 11 Feb 2021 at 12:14, satya prakash gaurav <[email protected]>
> wrote:
>
>> Hi Team,
>> Can anyone please help on this issue?
>>
>> Regards,
>> Satya
>>
>> On Wed, Feb 3, 2021 at 7:27 AM satya prakash gaurav <[email protected]>
>> wrote:
>>
>>> Hi Team,
>>>
>>> I have raised a jira HDFS-15812
>>> We are using the hdp 3.1.4.0-315 and hbase 2.0.2.3.1.4.0-315.
>>>
>>> We are deleting the data with the normal HBase delete command and also
>>> through the API using Phoenix. The count is reducing in Phoenix and HBase,
>>> but the HDFS size of the HBase directory is not reducing even after
>>> running a major compaction.
>>>
>>> Regards,
>>> Satya
>>>
>>>
>>>
>>
>> --
>> Regards,
>> S.P.Gaurav
>>
>>
--
Regards,
S.P.Gaurav