On Tue, Jun 9, 2015 at 3:41 PM, Kenneth Klette Jonassen
<kenne...@ifi.uio.no> wrote:
> On Tue, Jun 9, 2015 at 7:44 AM, Yuchung Cheng <ych...@google.com> wrote:
>> On Mon, Jun 8, 2015 at 10:43 AM, Kenneth Klette Jonassen
>> <kenne...@ifi.uio.no> wrote:
> …
>
>>> +enum cdg_state {
>>> +       CDG_UNKNOWN = 0,
>>> +       CDG_FULL    = 0,
>> why duplicate states?
>
> We explicitly infer a full or non-full queue in tcp_cdg_grad().
>
> The unknown state is semantically different: it is the state before we
> have inferred anything about the network.
>
> We could change the full state to have its own value. (It does not
> matter in the current code, but it could be useful for TCP_CC_INFO.)
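Giving the full state its own value would also make it usable for
TCP_CC_INFO later. Something along these lines (values here are only
illustrative, not taken from the patch):

        enum cdg_state {
                CDG_UNKNOWN = 0,
                CDG_NONFULL = 1,
                CDG_FULL    = 2,
                CDG_BACKOFF = 3,
        };
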
>
>>> +       if (hystart_detect & 1) {
>> Define 1 and 2 like cubic does?
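For reference, tcp_cubic.c has:

        #define HYSTART_ACK_TRAIN       0x1
        #define HYSTART_DELAY           0x2

so tcp_cdg could use named constants instead of the bare 1 and 2, e.g.:

        if (hystart_detect & HYSTART_ACK_TRAIN) {
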
>>> +       prior_snd_cwnd = tp->snd_cwnd;
>>> +       tcp_reno_cong_avoid(sk, ack, acked);
>>> +
>>> +       if (ca->shadow_wnd) {
>>> +               u32 incr = tp->snd_cwnd - prior_snd_cwnd;
>>> +
>>> +               ca->shadow_wnd = max(ca->shadow_wnd, ca->shadow_wnd + incr);
>>  u32 shadow_wnd may overflow?! there might be some bugs ...
>
> CDG can potentially operate without losses for long periods, so
> shadow_wnd can reach U32_MAX even on normal links.
> The max() reverts any increment that would overflow/wrap shadow_wnd.
the situation you described could only happen if we didn't have the
application-limited check (i.e., tcp_is_cwnd_limited() in tcp_reno_cong_avoid).
I am really doubtful it can happen in practice, even though having the
max() won't hurt anything.
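To spell out the wrap guard anyway: an unsigned overflow makes the sum
smaller than the old value, so the max() keeps the old value, e.g.
(using the kernel's max() and U32_MAX):

        u32 shadow_wnd = U32_MAX - 1;
        u32 incr = 10;

        /* shadow_wnd + incr wraps to 8, so max() keeps U32_MAX - 1. */
        shadow_wnd = max(shadow_wnd, shadow_wnd + incr);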

>
> We limit shadow_wnd to min(shadow_wnd / 2, snd_cwnd) in ssthresh().
>
>>> +       if (ca->state == CDG_BACKOFF)
>>> +               return max(2U, (tp->snd_cwnd * min(1024U, backoff_beta)) >> 10);
>>> +
>>> +       if (ca->state == CDG_NONFULL && use_tolerance) {
>>> +               bool is_ecn = (tp->prior_ssthresh == 0);
>>> +
>>> +               if (!is_ecn) {
>>> +                       tp->ecn_flags &= ~TCP_ECN_DEMAND_CWR;
>>> +                       return tp->snd_cwnd;
>>> +               }
>> I might be missing something here: cdg_backoff also calls
>> tcp_enter_cwr() that clears prior_ssthresh but it's not triggered by
>> ECE marks?
>
> That is ok. We set ca->state = CDG_BACKOFF in tcp_cdg_backoff() to
> distinguish a delay-triggered backoff.
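Got it. So, as I read the patch, the ordering in tcp_cdg_backoff() is
what makes this work, roughly:

        /* Mark the delay-triggered backoff first ... */
        ca->state = CDG_BACKOFF;
        /* ... then tcp_enter_cwr() clears prior_ssthresh and ends up
         * calling ssthresh(), which sees CDG_BACKOFF and applies
         * backoff_beta instead of the loss-tolerance path.
         */
        tcp_enter_cwr(sk);
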
>
>>
>> if the intention is to use loss tolerance only when undo is enabled
>> why not just check undo_marker?
>
> The intention is to not use loss tolerance if we receive an ECN signal.
>
> Inferring ECN from (prior_ssthresh == 0) is far from perfect of
> course, but prior_ssthresh is always cleared when we receive ECN.
>
> In tcp_enter_loss(), undo_marker is set after calling ssthresh(). We
> could change that, but undo_marker is also imperfect for detecting
> ECN.
>
> If in doubt, let us remove the ECN check now. Then in a later patch
> set, we could try passing a flag to ssthresh like you suggested. We
> should also look at tcp_dctcp if we do that.
+1 to splitting ECN handling into a separate (future) patch
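For that later patch, tcp_dctcp gets CE notifications through the
cwnd_event hook (CA_EVENT_ECN_IS_CE / CA_EVENT_ECN_NO_CE), so something
similar could replace the prior_ssthresh heuristic. Rough, untested
sketch; ecn_ce_seen would be a new field in the per-socket ca state,
and the struct/function names below are just placeholders:

        static void tcp_cdg_cwnd_event(struct sock *sk, enum tcp_ca_event ev)
        {
                struct cdg *ca = inet_csk_ca(sk);

                /* Remember that the path marked CE, so that ssthresh()
                 * can skip the loss-tolerance path for this episode.
                 */
                if (ev == CA_EVENT_ECN_IS_CE)
                        ca->ecn_ce_seen = 1;
        }
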

>
>>> +       if (use_shadow)
>>> +               ca->shadow_wnd = min(ca->shadow_wnd >> 1, tp->snd_cwnd);
>>> +       else
>>> +               ca->shadow_wnd = 0;
>> It'd be good to not reset shadow_wnd, but only use it if use_shadow,
>> so we can monitor shadow_wnd in TCP_CC_INFO in the future.
>
> Ok, then we should always set/maintain a shadow_wnd.
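Right. Then ssthresh() could always do the halving/capping and only let
use_shadow decide whether the result is honoured, roughly (untested
sketch; the exact return expression is up to you):

        ca->shadow_wnd = min(ca->shadow_wnd >> 1, tp->snd_cwnd);
        if (use_shadow)
                return max(2U, ca->shadow_wnd);
        return max(2U, tp->snd_cwnd >> 1);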