On Wed, Jan 18, 2006 at 09:32:49AM +0000, Simon Kelley wrote:
> That may be rather over-optimistic: the Atmel hardware doesn't even
> produce consistent results over different chip revs.

But each chip on its own is fairly consistent, which is all that 
random users care about.  "More bars mean better signal!"  :)
 
> One technique which I implemented in the Atmel driver and which improved 
> signal quality reporting greatly was to factor in the time-decaying
> average of the missed-beacon rate.

Yeah, this technique works quite well.
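
For the curious, it boils down to something like this (a made-up
userspace sketch, not the actual Atmel code; the decay factor and the
scaling are arbitrary):

    /* Decaying average of missed beacons, folded into the reported quality. */
    #include <stdio.h>

    struct beacon_stats {
    	unsigned int miss_avg;	/* scaled by 256: 0 = no misses, 256 = all missed */
    };

    /* Call once per expected beacon interval; missed = 1 if it never arrived. */
    static void beacon_update(struct beacon_stats *bs, int missed)
    {
    	/* 7/8 old value + 1/8 new sample: a simple exponential decay */
    	bs->miss_avg = (bs->miss_avg * 7 + (missed ? 256 : 0)) / 8;
    }

    /* Derate the raw hardware signal (0-100) by the recent miss rate. */
    static unsigned int quality(const struct beacon_stats *bs, unsigned int raw)
    {
    	return raw * (256 - bs->miss_avg) / 256;
    }

    int main(void)
    {
    	struct beacon_stats bs = { 0 };
    	const int missed[] = { 0, 0, 1, 1, 1, 0, 0, 0 };
    	unsigned int i;

    	for (i = 0; i < sizeof(missed) / sizeof(missed[0]); i++) {
    		beacon_update(&bs, missed[i]);
    		printf("beacon %u: miss_avg=%3u quality=%u\n",
    		       i, bs.miss_avg, quality(&bs, 80));
    	}
    	return 0;
    }

A station that quietly stops hearing beacons sees its reported quality
sag within a few intervals, even if the last frame it did hear looked
fine.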

Every wireless chipset I've seen can attach a header to received frames
that includes things like signal strength, and sometimes noise levels as
well.  It's trivial to strip this off and pass it via an out-of-band
structure (pointed to in skb->cb), along with other pertinent
information like frequency, hardware timestamps, etc.  This structure
would be consumed and stripped before the skb is passed into netif_rx().
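
Roughly what I have in mind (field names and the hw_rx_desc layout are
invented here, and obviously per-chipset; skb->cb gives you 48 bytes of
scratch space, which is plenty):

    #include <linux/skbuff.h>
    #include <linux/netdevice.h>
    #include <linux/bug.h>

    /* Hypothetical out-of-band per-frame metadata, carried in skb->cb */
    struct rx_meta {
    	u8	signal;		/* raw signal strength from the PHY */
    	u8	noise;		/* noise floor, if the hardware reports one */
    	u8	rate;		/* rx rate in 500kb/s units */
    	u16	freq;		/* channel centre frequency, MHz */
    	u64	mactime;	/* hardware timestamp of the frame */
    };

    static inline struct rx_meta *rx_meta(struct sk_buff *skb)
    {
    	BUILD_BUG_ON(sizeof(struct rx_meta) > sizeof(skb->cb));
    	return (struct rx_meta *)skb->cb;
    }

    /* Hypothetical hardware rx header, prepended to the frame by the chipset */
    struct hw_rx_desc {
    	u8	rssi;
    	u8	noise;
    	u8	rate;
    	u16	freq;
    	u64	tsf;
    } __packed;

    void wireless_stack_rx(struct sk_buff *skb);	/* invented stack entry point */

    /* Driver rx path: record what the hardware told us, then strip its header. */
    static void drv_rx_frame(struct sk_buff *skb)
    {
    	const struct hw_rx_desc *desc = (const struct hw_rx_desc *)skb->data;
    	struct rx_meta *meta = rx_meta(skb);

    	meta->signal  = desc->rssi;
    	meta->noise   = desc->noise;
    	meta->rate    = desc->rate;
    	meta->freq    = desc->freq;
    	meta->mactime = desc->tsf;

    	skb_pull(skb, sizeof(*desc));	/* drop the hardware rx header */
    	wireless_stack_rx(skb);		/* stack uses meta, then calls netif_rx() */
    }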

As well as giving the stack real-time information about signal levels,
rx data rates, and so on, this technique makes monitor mode a lot
simpler across different hardware types: you can build the "monitor
header" in one generic place in the stack, right before the frame goes
to the netif_rx() call.
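
In practice that one place would presumably emit a radiotap header;
this stand-in struct is just to show the shape of it, reusing the
rx_meta sketch from above:

    #include <linux/skbuff.h>
    #include <linux/netdevice.h>
    #include <asm/byteorder.h>

    /* Stand-in monitor header; a real stack would emit radiotap here */
    struct monitor_hdr {
    	__le16	len;		/* total header length */
    	u8	signal;
    	u8	noise;
    	u8	rate;
    	u8	pad;
    	__le16	freq;
    } __packed;

    /* One generic spot in the stack, just before handing off to netif_rx() */
    static void monitor_deliver(struct sk_buff *skb, struct net_device *mon_dev)
    {
    	const struct rx_meta *meta = rx_meta(skb);	/* from the sketch above */
    	struct monitor_hdr *hdr;

    	if (skb_headroom(skb) < sizeof(*hdr))
    		return;			/* real code would reallocate headroom */

    	hdr = (struct monitor_hdr *)skb_push(skb, sizeof(*hdr));
    	hdr->len    = cpu_to_le16(sizeof(*hdr));
    	hdr->signal = meta->signal;
    	hdr->noise  = meta->noise;
    	hdr->rate   = meta->rate;
    	hdr->pad    = 0;
    	hdr->freq   = cpu_to_le16(meta->freq);

    	skb->dev = mon_dev;
    	netif_rx(skb);
    }

The nice part is that the per-driver rx code never needs to know that a
monitor interface even exists.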

Once you have this real-time signal information, it opens the door to
many things in the stack, like opportunistic scanning or roaming
triggered by signal thresholds, and userspace polling can just pull the
current state whenever it wants.
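
A scan trigger driven off the decayed per-frame signal is about this
simple (threshold and names invented, obviously):

    #include <stdbool.h>
    #include <stdio.h>

    #define ROAM_THRESHOLD	35	/* arbitrary "signal is getting poor" level */

    struct link_monitor {
    	unsigned int sig_avg;	/* decayed average of per-frame signal, 0-100 */
    	bool scan_pending;
    };

    /* Fed from the out-of-band rx metadata on every received frame */
    static void link_rx_signal(struct link_monitor *lm, unsigned int signal)
    {
    	/* 15/16 decay: smooths out per-frame jitter but still reacts quickly */
    	lm->sig_avg = (lm->sig_avg * 15 + signal) / 16;

    	if (lm->sig_avg < ROAM_THRESHOLD && !lm->scan_pending) {
    		lm->scan_pending = true;
    		printf("average %u below %u, kick off a background scan\n",
    		       lm->sig_avg, ROAM_THRESHOLD);
    	}
    }

    int main(void)
    {
    	struct link_monitor lm = { .sig_avg = 70 };
    	unsigned int i;

    	/* strong signal for a while, then the user walks away from the AP */
    	for (i = 0; i < 40; i++)
    		link_rx_signal(&lm, i < 10 ? 70 : 15);
    	return 0;
    }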

It's also a useful input into a rate control algorithm.
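
Even a dumb lookup table keyed off the signal level gets you something
usable (made-up thresholds; a real algorithm would want per-rate
success statistics as well):

    #include <stdio.h>

    /* Highest rate whose (invented) minimum signal requirement is met */
    static const struct {
    	unsigned int rate_500k;	/* rate in 500kb/s units */
    	unsigned int min_signal;
    } rate_table[] = {
    	{ 108, 70 },	/* 54 Mb/s */
    	{  48, 50 },	/* 24 Mb/s */
    	{  22, 30 },	/* 11 Mb/s */
    	{   2,  0 },	/*  1 Mb/s fallback */
    };

    static unsigned int pick_rate(unsigned int signal)
    {
    	unsigned int i;

    	for (i = 0; i < sizeof(rate_table) / sizeof(rate_table[0]); i++)
    		if (signal >= rate_table[i].min_signal)
    			return rate_table[i].rate_500k;
    	return 2;
    }

    int main(void)
    {
    	unsigned int s;

    	for (s = 80; s >= 20; s -= 20)
    		printf("signal %u -> %u x 500kb/s\n", s, pick_rate(s));
    	return 0;
    }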

 - Solomon
-- 
Solomon Peachy                                   ICQ: 1318344
Melbourne, FL                                    
Quidquid latine dictum sit, altum viditur.
