On Wed, May 19, 2021 at 1:37 PM Amarjeet Anand <[email protected]> wrote:
> Consider a sequence of events:
>
> 1. TCP server started on port 8080 using *net.Listen("tcp", ":8080")*.
>
> 2. TCP client established a connection using *net.Dial("tcp", ":8080")* and
> received a *conn* object.
>
> 3. TCP server is force killed.
>
> 4. Now when the TCP client performs *conn.Write()*, this operation *passes*
> with no error.
>
>
There's no way to know, really. Suppose your server is on a robot on Mars.
At step 3 in the sequence above, it takes several minutes for the signal to
arrive on Earth, and you may easily execute step 4 in between. The write is
buffered in the OS kernel and transmitted in the other direction, expecting
an ACK back, which takes up to something like 45 minutes round-trip. On
Earth, the window is in milliseconds, but computers work on the nanosecond
scale, so the same race still occurs, and does so commonly. To hide the
network latency, the write is allowed to complete into the kernel buffer
instantly, and a later write is what surfaces the network error.
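You can see this on a single machine. In the sketch below (my own demo, not
from the original thread), the server accepts a connection and closes it
immediately, standing in for the force-killed process. The first client
write still "passes" because it only reaches the kernel buffer; the error
surfaces on the next write. The exact timing is OS-dependent, hence the
generous sleeps.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeWrites dials a server that closes the connection right away,
// then writes twice, returning both write errors.
func probeWrites() (error, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	go func() {
		c, _ := ln.Accept()
		c.Close() // simulate the server going away
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	time.Sleep(200 * time.Millisecond) // let the server's close reach us

	// First write: lands in the local kernel buffer and "passes";
	// the peer answers the incoming data with an RST.
	_, err1 := conn.Write([]byte("hello"))
	time.Sleep(200 * time.Millisecond) // let the RST come back

	// Second write: the kernel now knows the peer is gone.
	_, err2 := conn.Write([]byte("world"))
	return err1, err2
}

func main() {
	err1, err2 := probeWrites()
	fmt.Println("first write error: ", err1)
	fmt.Println("second write error:", err2)
}
```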
> *Question*:
> What should be a reliable way of writing to a tcp connection so that in
> step 4, client knows that it was writing to nothing and the write was not
> successful.
>
> Since this is an expected TCP behaviour (*thanks Ian for letting me know*),
> I am sure there must be some way to guarantee TCP packet delivery (*one
> way can be an ACK, but that would need a server change, which is not very
> desirable in my situation*).
> Could you please point me to some relevant documents?
>
The key impossibility result is [The Two Generals Problem](
https://en.wikipedia.org/wiki/Two_Generals%27_Problem), which we have to
factor into the solution. It leads to the idea of best-effort delivery of
idempotent messages. Construct your protocol such that you can keep sending
the same message again and again, usually by tagging the message with a
unique identifier. The server responds with an ACK of that message
identifier, but at the application layer rather than at the TCP layer. The
server only processes messages once, and if it receives a message twice, it
responds as if it had just processed the message.
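A sketch of the server side of that scheme (the names are illustrative,
not a fixed API): keep a set of seen message IDs, run the handler only the
first time an ID arrives, and return the same application-level ACK for
retransmissions.

```go
package main

import (
	"fmt"
	"sync"
)

// Message carries an application-level unique identifier so the
// client can safely retransmit it.
type Message struct {
	ID      string
	Payload string
}

// Dedup processes each message ID at most once; duplicates are ACKed
// as if they had just been processed.
type Dedup struct {
	mu   sync.Mutex
	seen map[string]bool
}

func NewDedup() *Dedup { return &Dedup{seen: make(map[string]bool)} }

// Handle returns the ACK for msg. The process callback runs only the
// first time a given ID is seen.
func (d *Dedup) Handle(msg Message, process func(Message)) string {
	d.mu.Lock()
	defer d.mu.Unlock()
	if !d.seen[msg.ID] {
		d.seen[msg.ID] = true
		process(msg)
	}
	return "ACK " + msg.ID
}

func main() {
	d := NewDedup()
	calls := 0
	process := func(Message) { calls++ }
	m := Message{ID: "42", Payload: "hello"}
	fmt.Println(d.Handle(m, process)) // first delivery: processed
	fmt.Println(d.Handle(m, process)) // retransmission: ACKed, not reprocessed
	fmt.Println("processed", calls, "time(s)")
}
```

The client keeps resending until it sees the ACK for its ID; because the
server is idempotent per ID, over-sending is always safe.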
This scheme has another advantage: the unique tags let the client match
replies out of order, so the server can process messages in whatever order
it wants, which can improve latency.
Another important point: on the server side, it is usually good to define
a protocol which lets you send large data sets as a stream of chunks
rather than building the complete data set and then transmitting it in
one go. The reason is that it might take a while to build up a large
response, and the client might be gone in the meantime. If you stream the
data as you go, the writes will fail faster, so you can abort. And if the
client can't keep up, the TCP window will provide backpressure on your
server, avoiding a lot of up-front processing.
--
J.
--
You received this message because you are subscribed to the Google Groups
"golang-nuts" group.
To view this discussion on the web visit
https://groups.google.com/d/msgid/golang-nuts/CAGrdgiUYqsqzLrtcwHXd7bWwyy5t9h5w%3DrBYFThUrb1qs%2B2xCw%40mail.gmail.com.