In either case you packetize (serialize the message protocol) into fixed-size,
pooled buffers and flush those buffers to the socket, preferably using an
async socket framework to do all the heavy lifting for you.
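
For what it's worth, here is a minimal sketch of that shape in plain Java NIO.
The class names (BufferPool, PacketWriter) are hypothetical, nothing from an
existing codebase, and a real implementation would sit behind an async
framework (Netty, for example) rather than writing the channel directly:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Fixed-size, pooled buffers: acquire one, fill it, flush it, return it.
    final class BufferPool {
        private final BlockingQueue<ByteBuffer> pool;

        BufferPool(int poolSize, int bufferSize) {
            this.pool = new ArrayBlockingQueue<>(poolSize);
            for (int i = 0; i < poolSize; i++) {
                pool.offer(ByteBuffer.allocateDirect(bufferSize));
            }
        }

        ByteBuffer acquire() throws InterruptedException {
            return pool.take();        // blocks until a buffer is free
        }

        void release(ByteBuffer buffer) {
            buffer.clear();
            pool.offer(buffer);
        }
    }

    // Packetizes serialized message bytes into fixed-size buffers and flushes
    // each buffer to the socket.  (In practice the message would serialize
    // directly into the pooled buffers rather than into one large array.)
    final class PacketWriter {
        private final BufferPool pool;
        private final SocketChannel channel;

        PacketWriter(BufferPool pool, SocketChannel channel) {
            this.pool = pool;
            this.channel = channel;
        }

        void write(byte[] serializedMessage) throws IOException, InterruptedException {
            int offset = 0;
            while (offset < serializedMessage.length) {
                ByteBuffer buffer = pool.acquire();
                int length = Math.min(buffer.remaining(), serializedMessage.length - offset);
                buffer.put(serializedMessage, offset, length);
                buffer.flip();
                while (buffer.hasRemaining()) {
                    channel.write(buffer);   // flush this packet to the socket
                }
                pool.release(buffer);
                offset += length;
            }
        }
    }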


> On May 5, 2017, at 11:07 AM, Bruce Schuchardt <bschucha...@pivotal.io> wrote:
> 
> Yes, of course it does, but we don't serialize directly to a socket output
> stream because it's slow.  I agree that this could be left out and added
> later as an optimization.
> 
>> On 5/5/2017 at 10:33 AM, Galen M O'Sullivan wrote:
>> I think TCP does exactly this for us.
>> 
>> On Fri, May 5, 2017 at 9:08 AM, Bruce Schuchardt <bschucha...@pivotal.io>
>> wrote:
>> 
>>> This is very similar to how peer-to-peer messaging is performed in Geode.
>>> Messages are serialized to a stream that knows how to optimally "chunk" the
>>> bytes into fixed-size packets.  On the receiving side these are fed into a
>>> similar input stream for deserialization.  The message only contains
>>> information about the operation it represents.
>>> 
>>> Why don't we do something similar for the new client/server protocol?
>>> 
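
As a reference point, a rough sketch of that kind of chunking stream (the
class name and the length-prefixed framing are made up here, not the actual
streams Bruce is referring to): serialized bytes are gathered into a
fixed-size chunk, each full chunk goes out as a small length-prefixed packet,
and a zero-length chunk marks the end of the message.

    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    // The serializer writes into this stream as if it were any OutputStream;
    // the stream takes care of cutting the bytes into fixed-size chunks.
    final class ChunkingOutputStream extends OutputStream {
        private final DataOutputStream out;
        private final byte[] chunk;
        private int used;

        ChunkingOutputStream(OutputStream socketOut, int chunkSize) {
            this.out = new DataOutputStream(socketOut);
            this.chunk = new byte[chunkSize];
        }

        @Override
        public void write(int b) throws IOException {
            if (used == chunk.length) {
                flushChunk();            // emit a full chunk and keep going
            }
            chunk[used++] = (byte) b;
        }

        // Each chunk is framed as [length:int][bytes]; the receiving side
        // reads chunks into a similar input stream until the zero-length
        // terminator.
        private void flushChunk() throws IOException {
            out.writeInt(used);
            out.write(chunk, 0, used);
            used = 0;
        }

        // Call once the whole message has been serialized to this stream.
        void endMessage() throws IOException {
            if (used > 0) {
                flushChunk();
            }
            out.writeInt(0);             // zero-length chunk ends the message
            out.flush();
        }
    }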
>>> 
>>>> On 5/5/2017 at 7:28 AM, Jacob Barrett wrote:
>>>> 
>>>> On Thu, May 4, 2017 at 2:52 PM Hitesh Khamesra
>>>> <hitesh...@yahoo.com.invalid>
>>>> wrote:
>>>> 
>>>>> Basically, the thread/layer should not hold any resources while
>>>>> serializing the object or chunk.  We should be able to see this flow
>>>>> (msg1-chunk1, msg2-chunk1, msg1-chunk2, msg3-chunk1, msg2-chunk2, and so
>>>>> on ...)
>>>> 
>>>> Correct, but putting that in the message layer is not appropriate. The
>>>> simple solution is that the multiple channels can be achieved with
>>>> multiple sockets. The later optimization is to add a channel multiplexer
>>>> layer between the message and socket layers.
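
To illustrate that layering (purely hypothetical framing, not a proposed wire
format): a multiplexer layer sitting between the message and socket layers
could tag each chunk with a channel id, so chunks of different in-flight
messages can be interleaved on one socket and reassembled per channel, without
the message layer knowing anything about it.

    import java.nio.ByteBuffer;

    // One multiplexed frame: a chunk of some message, tagged with the logical
    // channel it belongs to.  Framed as [channelId:int][length:int][payload].
    final class MuxFrame {
        final int channelId;
        final byte[] payload;

        MuxFrame(int channelId, byte[] payload) {
            this.channelId = channelId;
            this.payload = payload;
        }

        ByteBuffer encode() {
            ByteBuffer frame = ByteBuffer.allocate(8 + payload.length);
            frame.putInt(channelId);
            frame.putInt(payload.length);
            frame.put(payload);
            frame.flip();
            return frame;
        }

        static MuxFrame decode(ByteBuffer frame) {
            int channelId = frame.getInt();
            byte[] payload = new byte[frame.getInt()];
            frame.get(payload);
            return new MuxFrame(channelId, payload);
        }
    }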
>>>> 
>>>> If we put it in the message layer, not only does it force the message to
>>>> tackle something it shouldn't be concerned with, reassembling itself, but
>>>> it also forces all implementors to tackle this logic up front. By layering
>>>> we can release without it, implementors aren't forced to understand the
>>>> logic, and later we can release the layers and the client can negotiate
>>>> their use.
>>>> 
>>>> 
>>>> 
>>>>> On another PDX note: to deserialize the PDX we need the length of the
>>>>> serialized bytes, so that we can read field offsets from the serialized
>>>>> stream and then read field values. Though I can imagine that, with the
>>>>> help of the pdxType, we can interpret the serialized stream.
>>>> 
>>>> Yes, so today PDX serialization would be no worse: the PDX serializer
>>>> would have to buffer, but others may not have to. The length of the
>>>> buffered PDX could be used as the first chunk length and the value could
>>>> complete in a single chunk. Although I suspect the amortized overhead of
>>>> splitting the chunks will be nil anyway.
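
Concretely (a hypothetical sketch, reusing the same made-up length-prefixed
chunk framing as above): a serializer that needs the value's total length up
front, as PDX does to locate its field offsets, can simply buffer the whole
value and emit it as a single chunk, which is no more buffering than PDX
requires today.

    import java.io.ByteArrayOutputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    final class BufferedSingleChunkWriter {
        // Stand-in for a serializer that must buffer to compute offsets/length.
        interface LengthRequiringSerializer {
            void toData(DataOutputStream out) throws IOException;
        }

        static void writeValue(DataOutputStream out, LengthRequiringSerializer serializer)
                throws IOException {
            ByteArrayOutputStream buffered = new ByteArrayOutputStream();
            serializer.toData(new DataOutputStream(buffered));
            byte[] bytes = buffered.toByteArray();
            out.writeInt(bytes.length);  // total length doubles as the first chunk length
            out.write(bytes);            // the whole value fits in this one chunk
            out.writeInt(0);             // zero-length chunk ends the value
        }
    }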
>>>> 
>>>> The point is that the message encoding of values should NOT have any
>>>> unbounded length fields or require large or many buffers to complete
>>>> serialization. By chunking you accomplish this without buffering the
>>>> whole stream: only small chunks (say 1k) are buffered at a time to get
>>>> each chunk's length.
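
On the receiving side the same framing keeps buffering bounded (again a
hypothetical sketch with made-up names): each chunk carries only its own small
length, and a zero-length chunk terminates the value, so the reader never
needs the total serialized length and never buffers more than one chunk at a
time.

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    final class ChunkedValueReader {
        interface ChunkConsumer {
            void accept(byte[] chunk, int length) throws IOException;
        }

        // Feeds each bounded chunk to the deserializer as it arrives.
        static void readValue(InputStream socketIn, ChunkConsumer deserializer)
                throws IOException {
            DataInputStream in = new DataInputStream(socketIn);
            byte[] chunk = new byte[1024];           // bounded, e.g. 1k per the note above
            int length;
            while ((length = in.readInt()) > 0) {
                if (length > chunk.length) {
                    throw new IOException("chunk exceeds agreed maximum: " + length);
                }
                in.readFully(chunk, 0, length);
                deserializer.accept(chunk, length);  // only this chunk is held in memory
            }
        }
    }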
>>>> 
>>>> Buffers == Latency
>>>> 
>>>> -Jake
>>>> 
>>>> 
> 
