Description
In the Go golang.org/x/net/icmp package, the ID and Seq fields of the Echo
struct are defined as int. However, the Marshal method casts these fields
to uint16, so any ID or Seq value outside the 16-bit range is silently
truncated.
// An Echo represents an ICMP echo request or reply message body.
type Echo struct {
	ID   int    // identifier
	Seq  int    // sequence number
	Data []byte // data
}

// Len implements the Len method of MessageBody interface.
func (p *Echo) Len(proto int) int {
	if p == nil {
		return 0
	}
	return 4 + len(p.Data)
}

// Marshal implements the Marshal method of MessageBody interface.
func (p *Echo) Marshal(proto int) ([]byte, error) {
	b := make([]byte, 4+len(p.Data))
	binary.BigEndian.PutUint16(b[:2], uint16(p.ID))
	binary.BigEndian.PutUint16(b[2:4], uint16(p.Seq))
	copy(b[4:], p.Data)
	return b, nil
}
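To make the truncation concrete, here is a small self-contained sketch (it does not import x/net/icmp; the helper name marshalSeq is mine) that mimics the uint16 cast in Marshal and shows what actually lands on the wire:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// marshalSeq mimics the uint16 cast in Echo.Marshal: it writes v into a
// 2-byte big-endian field and reads back what a peer would see.
func marshalSeq(v int) int {
	var b [2]byte
	binary.BigEndian.PutUint16(b[:], uint16(v))
	return int(binary.BigEndian.Uint16(b[:]))
}

func main() {
	// 0x12345 does not fit in 16 bits; the cast keeps only the low
	// 16 bits, so the peer sees 0x2345 instead of 0x12345.
	fmt.Printf("stored %#x, wire value %#x\n", 0x12345, marshalSeq(0x12345))
}
```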
Design Intent
I noticed that the ID and Seq fields are defined as int, which does not
strictly match the ICMP specification (RFC 792), where both the identifier
and the sequence number are 16-bit fields. Could you please clarify the
original intent behind defining these fields as int instead of uint16?
Specifically:
- What was the rationale for using int instead of uint16?
- Was there a specific use case or flexibility in mind that required int?
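If int is kept for ergonomics, one defensive option would be to validate the
range before marshaling rather than truncating silently. This is only a
sketch of that idea; checkEchoField is a hypothetical helper, not part of
x/net/icmp:

```go
package main

import "fmt"

// checkEchoField is a hypothetical guard: it rejects values that the
// uint16 cast in Marshal would otherwise silently truncate.
func checkEchoField(name string, v int) error {
	if v < 0 || v > 0xffff {
		return fmt.Errorf("icmp: %s value %d outside 16-bit range", name, v)
	}
	return nil
}

func main() {
	// An out-of-range Seq would be reported instead of truncated.
	if err := checkEchoField("Seq", 0x12345); err != nil {
		fmt.Println(err)
	}
}
```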
Summary
I believe changing the types of ID and Seq to uint16 would make the
implementation more consistent with the ICMP protocol specification.
Understanding the original design intent would also help the community
better align with Go's design philosophy.
Thank you for your attention!
--
You received this message because you are subscribed to the Google Groups
"golang-nuts" group.