On Sat, 31 May 2025 19:10:40 -0700
Howard Harte <hha...@magicandroidapps.com> wrote:
> Protocol documentation:
> https://cpm80.com/oasis-send-recv-protocol.html
Hello Howard.
Thank you for putting the documentation together. I have spent some
time studying the protocol and have had a closer look at the proposed
encoding, which is as follows:
----------------------------------------------------------------
Pseudo-header

Field           Length (bytes)  Description
version         1               Set to 1.
direction byte  1               Direction, relative to the
                                capturing utility:
                                0x00: received from the serial port.
                                0x01: transmitted to the serial port.
message         n               The OASIS protocol message (e.g.,
                                starting DLE STX or ENQ).
----------------------------------------------------------------
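To make the framing concrete, here is a minimal sketch in Python of
producing a classic .pcap file in this encoding. The link-layer type
value below is only a placeholder from the private-use range
(LINKTYPE_USER0), since no value has been allocated:

    import struct, time

    LINKTYPE_USER0 = 147          # private-use placeholder, no DLT allocated
    DIR_RX, DIR_TX = 0x00, 0x01   # direction relative to the capturing utility

    def pcap_global_header(snaplen=65535):
        # magic, version 2.4, thiszone, sigfigs, snaplen, linktype
        return struct.pack('<IHHiIII', 0xa1b2c3d4, 2, 4, 0, 0,
                           snaplen, LINKTYPE_USER0)

    def pcap_record(direction, message, ts=None):
        ts = time.time() if ts is None else ts
        sec, usec = int(ts), int((ts % 1) * 1_000_000)
        payload = bytes([1, direction]) + message  # version=1 pseudo-header
        return struct.pack('<IIII', sec, usec,
                           len(payload), len(payload)) + payload

    with open('oasis.pcap', 'wb') as f:
        f.write(pcap_global_header())
        f.write(pcap_record(DIR_TX, b'\x05'))      # a lone ENQ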
The "version" seems to correspond to various version fields in
LINKTYPE_IPNET, LINKTYPE_NFLOG, LINKTYPE_NORDIC_BLE,
LINKTYPE_USB_DARWIN, LINKTYPE_ZBOSS_NCP, LINKTYPE_NETANALYZER and
LINKTYPE_NETANALYZER_TRANSPARENT.
The "direction" corresponds to various direction fields in
LINKTYPE_SLIP, LINKTYPE_PPP_PPPD, LINKTYPE_PPP_WITH_DIR,
LINKTYPE_C_HDLC_WITH_DIR, LINKTYPE_FRELAY_WITH_DIR,
LINKTYPE_LAPB_WITH_DIR and elsewhere.
The "message" is a sequence of zero or more ASCII characters, that
is, octets that have the most significant bit set to 0. (All control
character codes are ASCII and "Message characters are always
transmitted as 7-bit ASCII characters." -- the PDF specification
Appendix A clause 7.)
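That constraint is easy to verify mechanically; a one-line check (the
function name is mine) would be:

    def is_7bit_ascii(message: bytes) -> bool:
        # every octet of a conforming "message" must have the MSB clear
        return all(octet < 0x80 for octet in message)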
It should be easy to notice that this version of the encoding captures
generic directional ASCII strings and is not limited to the OASIS
Send/Receive protocol -- it could readily encode other exchanges that
traditionally take place on a serial line: an XMODEM/etc. file transfer
session, an interactive shell session, an AT command exchange and so on.
Furthermore, if the proposed link-layer encoding, unlike the specified
protocol, does not restrict the octets to 7-bit values, it could with
some degree of approximation be used to capture the [binary] payload of
a TCP session or the reads and writes done on a character device or a
Unix socket. Conversely, in the OASIS Send/Receive protocol
specification I do not immediately see anything that would prevent the
protocol from working over a TCP connection, a character device or a
Unix socket. Also, if the OASIS host is running in a simulator, the
communication line could potentially be a [virtual] serial port at the
OASIS guest end and a TCP connection at the host end.
All this is not to say this solution should be immediately generalised
to the limit, but there are a few things to consider if you are looking
for a better match between the problem space and the solution space.
One aspect here is whether the serial line is in scope. If it is, then
the problem space would be a bit larger because a serial port can be
synchronous/asynchronous, half/full duplex, and usually includes various
control lines that may be relevant to the exchange, but are not present
in the payload (CTS, RTS, DCD, DSR, DTR and so on). Please have a look
at LINKTYPE_RTAC_SERIAL, which captures this complexity to an extent.
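Purely as an illustration, not as a proposal: carrying the control line
state could be as simple as one extra octet in the pseudo-header. The
layout and bit assignments below are invented here, not taken from
LINKTYPE_RTAC_SERIAL or any specification:

    CTS, RTS, DCD, DSR, DTR = 0x01, 0x02, 0x04, 0x08, 0x10

    def extended_pseudo_header(direction, lines):
        # a hypothetical version=2 header: version, direction, control lines
        return bytes([2, direction, lines])

    hdr = extended_pseudo_header(0x01, CTS | DTR)  # TX while CTS and DTR high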
Another aspect is timing. It would be reasonable to assume that the
first character in the "message" field above corresponds to the
timestamp of the link-layer packet (not shown). But if the message
comprises more than one character, the interval between any two adjacent
characters can be, generally speaking, as low as the serial port speed
allows it to be, or as high as tens of seconds if the lower-level
protocol paused (TCP delay/retransmission, buffering, modem retraining,
CTS going low) in the middle of the higher-level protocol session.
Timing would be important if anybody were, for example, to debug timeout
handling in an implementation. The specification says: "sends ... ENQ
... haven't received ... for awhile", but it does not seem to define a
specific timeout value, so there is room for unexpected behaviour. In
the proposed encoding the intervals between the characters would not be
visible, so it would be difficult to tell whether the remote end failed
to send something on time or the local end generated an out-of-place
ENQ.
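For what it is worth, per-record timestamps would make that kind of
analysis a short script. The sketch below assumes the 2-byte version=1
pseudo-header and classic .pcap framing, and flags any suspiciously
long silence between adjacent records:

    import struct

    def report_gaps(path, threshold=2.0):
        with open(path, 'rb') as f:
            f.read(24)                        # skip the pcap global header
            prev = None
            while len(hdr := f.read(16)) == 16:
                sec, usec, caplen, _ = struct.unpack('<IIII', hdr)
                data = f.read(caplen)         # pseudo-header + message
                ts = sec + usec / 1e6
                if prev is not None and ts - prev > threshold:
                    direction = 'TX' if data[1] else 'RX'
                    first = f'0x{data[2]:02x}' if len(data) > 2 else '(empty)'
                    print(f'{ts - prev:.3f}s gap before {direction} {first}')
                prev = ts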
Another aspect is the message length. The more detailed HTML
specification says: "Max data payload is typically 256 bytes
(XFR_BLOCK_SIZE).". What if an implementation decides to send much more
than that, overruns the declared maximum packet size of the .pcap file
and the packet gets truncated? What if an implementation sends multiple
long "STX,msg,ETX,lrcc,RUB" blocks in one go without waiting for
ACK0/ACK1? (Initial implementations often start out optimistic to prove
a concept. Later one needs an accurate record of what was on the wire
and when, to debug non-ideal scenarios.)
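At least the truncation case would be detectable after the fact: a
record whose captured length is smaller than its original length has
lost characters. A sketch of spotting those in classic .pcap framing:

    import struct

    def report_truncation(path):
        with open(path, 'rb') as f:
            f.read(24)                        # skip the pcap global header
            n = 0
            while len(hdr := f.read(16)) == 16:
                _, _, caplen, origlen = struct.unpack('<IIII', hdr)
                f.seek(caplen, 1)             # skip the packet data
                n += 1
                if caplen < origlen:
                    print(f'record {n}: {origlen - caplen} bytes lost')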
The timing and packet size factors could be reliably addressed by
allowing exactly one character per packet, which would also give better
granularity for approximating full-duplex transmission lines. In this
case it would be difficult to tell whether the overhead (16 bytes of
record header + 2 bytes of pseudo-header per 1 character of the
protocol) would be acceptable for the intended use case. On the one
hand, if one were to debug the sending of a 10MB file using such an
encoding, the packet capture would be at least 190MB in size, plus the
SI/SO/DLE overhead. On the other hand, if it is a very important file
and the only way to reproduce the bug, and if finding the bug indeed
requires knowing the timestamp of an arbitrary protocol character, then
that is what a solution to the problem would have to look like.
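For reference, the 190MB figure is just the per-character framing cost:

    chars = 10 * 10**6        # a 10MB file, one capture record per character
    per_char = 16 + 2 + 1     # record header + pseudo-header + the character
    print(chars * per_char)   # 190000000 bytes, before the SI/SO/DLE overhead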
This is about as much sense as I can make of the specifications, and
OASIS Send/Receive is not one of the several serial file transfer
protocols I have used, so some of this thinking may be irrelevant to
the use case. If you could explain the use case and the intended
purpose of this link-layer type, it would be easier to reason about it.
--
Denis Ovsienko