Hello,
The default size of UDP packets is defined in ~/ns/tcl/lib/ns-default.tcl as
1000 bytes. Now, when UDP receives a CBR packet, it does the following (look at
~/ns/apps/udp.cc):
void UdpAgent::sendmsg(int nbytes, AppData* data, const char* flags)
{
	Packet *p;
	int n;

	if (size_)
		n = nbytes / size_;
	else
		printf("Error: UDP size = 0\n");

	if (nbytes == -1) {
		printf("Error: sendmsg() for UDP should not be -1\n");
		return;
	}

	// If they are sending data, then it must fit within a single packet.
	if (data && nbytes > size_) {
		printf("Error: data greater than maximum UDP packet size\n");
		return;
	}

	while (n-- > 0) {
		p = allocpkt();
		hdr_cmn::access(p)->size() = size_;
		hdr_rtp* rh = hdr_rtp::access(p);
		rh->flags() = 0;
		rh->seqno() = ++seqno_;
		hdr_cmn::access(p)->timestamp() =
		    (u_int32_t)(SAMPLERATE*local_time);
		// add "beginning of talkspurt" labels (tcl/ex/test-rcvr.tcl)
		if (flags && (0 == strcmp(flags, "NEW_BURST")))
			rh->flags() |= RTP_M;
		p->setdata(data);
		target_->recv(p);
	}
	n = nbytes % size_;
	if (n > 0) {
		p = allocpkt();
		hdr_cmn::access(p)->size() = n;
		hdr_rtp* rh = hdr_rtp::access(p);
		rh->flags() = 0;
		rh->seqno() = ++seqno_;
		hdr_cmn::access(p)->timestamp() =
		    (u_int32_t)(SAMPLERATE*local_time);
		// add "beginning of talkspurt" labels (tcl/ex/test-rcvr.tcl)
		if (flags && (0 == strcmp(flags, "NEW_BURST")))
			rh->flags() |= RTP_M;
		p->setdata(data);
		target_->recv(p);
	}
	idle();
}
So for each CBR packet of 1500 bytes it sends two UDP packets: one of 1000
bytes and one of 500 bytes. If you want no fragmentation, set the default
UDP packet size in ns-default.tcl to 1500 bytes.
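Instead of editing ns-default.tcl itself, you can also override the bound OTcl variable (packetSize_, which maps to size_ in udp.cc) in your own simulation script:

```tcl
# Raise the UDP payload limit so a 1500-byte CBR packet
# fits in a single UDP packet (default is 1000).
Agent/UDP set packetSize_ 1500
```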
Best,
Behnaz
On Jun 30, 2013, at 3:15 AM, Dhrubojyoti Roy <[email protected]> wrote:
>
> I've designed a routing protocol that works over 802.11 in NS2. I have
> used a standard CBR source attached to a UDP agent to simulate traffic
> in the network. I find that when I change the CBR packet-size to 1500
> from 1000, the number of packets in the system doubles for the same data
> rate; so there is fragmentation happening somewhere in the lower layers.
> However, I'm unable to find the code segment that handles this
> fragmentation in any of the C++ modules. I know that mac-802_3.h defines
> an IEEE_8023_MAXFRAME, but I don't see an MTU defined in mac-802_11.h or
> mac.h. So I'm curious as to how and where the data packets actually get
> fragmented. Is it implemented in mac itself, or in the link/physical
> layers? If someone could point me in the right direction, that would be
> great.
>
> Thanks in advance,
>
> Dhrubo.
>
> --
> Dhrubojyoti Roy
> PhD Student (2nd year)
> Department of Computer Science and Engineering,
> The Ohio State University,
> 2015 Neil Avenue,
> Columbus, OH-43210.
> +1-740-417-5890
>