luyuncheng commented on PR #11987:
URL: https://github.com/apache/lucene/pull/11987#issuecomment-1335302726

   > In fact there is no situation where thousands of shards makes sense on a single node. That's bad design.
   
   @rmuir I have another proposal:
   What do you think of making `ByteBufferDataInput` not fixed-size? (You mentioned that LUCENE-10627 should not introduce another `ByteBlockPool`.)
   
   Currently, every time we call decompress, we wrap the decompressed `BytesRef` in a `ByteArrayDataInput`.
   
   Instead, decompress could build a list of `ByteBuffer`s, copying only the dictionary for each block. This reduces the memory copies out of the buffer, avoids the copies from `ArrayUtil.grow`, and also solves the buffer-heap-influence issue.
   
   Simple code like the following:
   ```
         // Read blocks that intersect with the interval we need
         ArrayList<ByteBuffer> bufferList = new ArrayList<>();
         while (offsetInBlock < offset + length) {
           final int bytesToDecompress = Math.min(blockLength, offset + length - offsetInBlock);
           // One buffer per block: dictionary prefix + decompressed block
           byte[] blockBuffer = new byte[dictLength + blockLength];
           // Only copy the dictionary
           System.arraycopy(dictBuffer, 0, blockBuffer, 0, dictLength);
           LZ4.decompress(in, bytesToDecompress, blockBuffer, dictLength);
           offsetInBlock += blockLength;
           
           // Expose only the decompressed bytes, skipping the dictionary prefix
           bufferList.add(ByteBuffer.wrap(blockBuffer, dictLength, bytesToDecompress).slice());
         }
         return new ByteBuffersDataInput(bufferList);
   ```
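   
   A minimal sketch of how a caller could then consume the result (`decompressToDataInput`, `docStart`, and `docLength` are illustrative names I made up, not APIs from this PR):
   ```
         // ByteBuffersDataInput supports seek() and readBytes() across the
         // buffer list, so the caller never needs one contiguous array
         // (and therefore never pays for ArrayUtil.grow).
         ByteBuffersDataInput blocks = decompressToDataInput(offset, length); // hypothetical helper
         blocks.seek(docStart - offset); // position relative to the decompressed interval
         byte[] doc = new byte[docLength];
         blocks.readBytes(doc, 0, docLength);
   ```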
   
   

