I would leave it for a later optimization.

> On May 5, 2017, at 9:08 AM, Bruce Schuchardt <bschucha...@pivotal.io> wrote:
> 
> This is very similar to how peer-to-peer messaging is performed in Geode.  
> Messages are serialized to a stream that knows how to optimally "chunk" the 
> bytes into fixed-size packets.  On the receiving side these are fed into a 
> similar input stream for deserialization.  The message only contains 
> information about the operation it represents.
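> 
> A rough sketch of that shape, with a hypothetical class name and framing
> rather than the actual Geode stream classes:
> 
>   // Illustrative only: bytes written by the serializer are flushed
>   // downstream in fixed-size, length-prefixed packets; the receiving side
>   // feeds the packets into a matching input stream for deserialization.
>   // Assumes chunkSize fits in the two-byte length header.
>   import java.io.IOException;
>   import java.io.OutputStream;
> 
>   class ChunkedOutputStream extends OutputStream {
>     private final OutputStream downstream;
>     private final byte[] chunk;
>     private int used;
> 
>     ChunkedOutputStream(OutputStream downstream, int chunkSize) {
>       this.downstream = downstream;
>       this.chunk = new byte[chunkSize];
>     }
> 
>     @Override
>     public void write(int b) throws IOException {
>       if (used == chunk.length) {
>         flushChunk(false);
>       }
>       chunk[used++] = (byte) b;
>     }
> 
>     // Each packet carries its own length plus a "last" flag, so the sender
>     // never needs to know the total message size up front.
>     private void flushChunk(boolean last) throws IOException {
>       downstream.write(used >> 8);
>       downstream.write(used & 0xFF);
>       downstream.write(last ? 1 : 0);
>       downstream.write(chunk, 0, used);
>       used = 0;
>     }
> 
>     @Override
>     public void close() throws IOException {
>       flushChunk(true);
>       downstream.flush();
>     }
>   }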
> 
> Why don't we do something similar for the new client/server protocol?
> 
>> On 5/5/2017 at 7:28 AM, Jacob Barrett wrote:
>> On Thu, May 4, 2017 at 2:52 PM Hitesh Khamesra <hitesh...@yahoo.com.invalid>
>> wrote:
>> 
>>> Basically, a thread/layer should not hold any resources while serializing
>>> an object or chunk.  We should be able to see this flow (msg1-chunk1,
>>> msg2-chunk1, msg1-chunk2, msg3-chunk1, msg2-chunk2, and so on ...)
>>> 
>> Correct, but putting that in the message layer is not appropriate. The
>> simple solution is that multiple channels can be achieved with multiple
>> sockets. The later optimization is to add a channel-multiplexer layer
>> between the message and socket layers.
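>> 
>> A minimal sketch of that layering, with illustrative names rather than an
>> actual Geode API: the message layer only talks to a channel abstraction,
>> which is a plain socket today and could become a multiplexed stream later.
>> 
>>   import java.io.IOException;
>>   import java.nio.ByteBuffer;
>> 
>>   // The message layer writes to a channel; it never interleaves or
>>   // reassembles anything itself.
>>   interface MessageChannel {
>>     void send(ByteBuffer messageBytes) throws IOException;
>>   }
>> 
>>   // First release: one channel per socket.
>>   class SocketChannelAdapter implements MessageChannel {
>>     private final java.nio.channels.SocketChannel socket;
>> 
>>     SocketChannelAdapter(java.nio.channels.SocketChannel socket) {
>>       this.socket = socket;
>>     }
>> 
>>     @Override
>>     public void send(ByteBuffer messageBytes) throws IOException {
>>       while (messageBytes.hasRemaining()) {
>>         socket.write(messageBytes);
>>       }
>>     }
>>   }
>> 
>>   // Later optimization: many logical channels share one socket. The frame
>>   // format here (channel id + length + payload) is purely illustrative;
>>   // a real multiplexer would also need to serialize access to the socket.
>>   class MultiplexedChannel implements MessageChannel {
>>     private final int channelId;
>>     private final MessageChannel shared;
>> 
>>     MultiplexedChannel(int channelId, MessageChannel shared) {
>>       this.channelId = channelId;
>>       this.shared = shared;
>>     }
>> 
>>     @Override
>>     public void send(ByteBuffer messageBytes) throws IOException {
>>       ByteBuffer frame = ByteBuffer.allocate(8 + messageBytes.remaining());
>>       frame.putInt(channelId);
>>       frame.putInt(messageBytes.remaining());
>>       frame.put(messageBytes);
>>       frame.flip();
>>       shared.send(frame);
>>     }
>>   }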
>> 
>> If we put it in the message layer, not only does it force the message to
>> tackle something it shouldn't be concerned with, reassembling itself, but
>> it also forces all implementors to tackle this logic up front. By layering,
>> we can release without it, implementors aren't forced to understand the
>> logic, and later we can release the multiplexing layer and the client can
>> negotiate its use.
>> 
>> 
>> 
>>> On another PDX note: to deserialize PDX we need the length of the
>>> serialized bytes, so that we can read a field's offset from the serialized
>>> stream and then read the field value. Though I can imagine that, with the
>>> help of the PdxType, we could interpret the serialized stream.
>>> 
>> Yes, so today PDX serialization would be no worse: the PDX serializer would
>> have to buffer, but other serializers may not have to. The length of the
>> buffered PDX could be used as the first chunk length, completing the value
>> in a single chunk. Although I suspect the amortized overhead of splitting
>> into chunks will be negligible anyway.
>> 
>> The point is that the message encoding of values should NOT have any
>> unbounded length fields that require large or many buffers to complete
>> serialization. With chunking you never need to buffer the whole stream,
>> just small chunks (say 1 KB) at a time to get each chunk length.
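>> 
>> As a rough illustration (a hypothetical helper, not a proposed API),
>> streaming a value as length-prefixed chunks from a small fixed buffer means
>> no total length ever has to be known or encoded:
>> 
>>   import java.io.DataOutputStream;
>>   import java.io.IOException;
>>   import java.io.InputStream;
>> 
>>   final class ChunkWriter {
>>     static void writeChunks(InputStream value, DataOutputStream out)
>>         throws IOException {
>>       byte[] buffer = new byte[1024];   // only ever buffer one small chunk
>>       int read;
>>       while ((read = value.read(buffer)) != -1) {
>>         out.writeShort(read);           // chunk length, bounded by buffer size
>>         out.write(buffer, 0, read);     // chunk payload
>>       }
>>       out.writeShort(0);                // zero-length chunk terminates the value
>>     }
>>   }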
>> 
>> Buffers == Latency
>> 
>> -Jake
>> 
> 
