fish <fish at bovine.artificial-stupidity.net> writes:

> On Mon, 21 Oct 2002, Ian Clarke wrote:
> 
> > Your code could be pretty clever about how it requests files - i.e. 
> > adjusting the number of simultaneous requests based on observed 
> > retrieval time etc.  Additionally, getting FEC in there somewhere 
> > wouldn't be too shabby either.
> 
> FEC is a hard problem in this case, since all of the streaming FEC
> algorithms that I know of are designed for bit loss rather than block
> loss; block FEC algorithms, on the other hand, require a non-trivial
> amount of data to be effective, meaning that we could only stream in
> 3 minute chunks or whatever.
> 
There's no real difference between FEC designed for bit loss and FEC
designed for block loss.  In fact, most FEC algorithms I know of are
designed and analyzed as if they were encoding bits, and then scaled
up to encoding blocks, which makes the minimal bookkeeping overhead
(if any) negligible.
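To illustrate the point (a minimal sketch, not anything Freenet actually uses): the same XOR-parity code works identically whether the symbols are single bits or whole blocks -- only the symbol size changes, the math doesn't.  The function names here are made up for the example.

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte (the block-level
    analogue of XOR-ing two bits)."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks):
    """Append one parity block: the XOR of all data blocks.
    The resulting group tolerates the loss of any single block."""
    parity = blocks[0]
    for blk in blocks[1:]:
        parity = xor_blocks(parity, blk)
    return blocks + [parity]

def recover(received, lost_index):
    """Reconstruct the one missing block by XOR-ing all the others
    (including the parity block)."""
    present = [blk for i, blk in enumerate(received) if i != lost_index]
    out = present[0]
    for blk in present[1:]:
        out = xor_blocks(out, blk)
    return out
```

A real code would use something stronger than single parity (e.g. Reed-Solomon) to survive multiple losses, but the scaling argument is the same.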

> I could, of course, use smaller chunks, and would need to for
> higher bitrate stuff; however, freenet seems to be less happy(tm)
> 
>       - teh fish(tm)
> 
The solution is just more chunks; that way you can average out the
randomness inherent in request delays.  For higher-bitrate files, if
you switched to smaller chunks, you'd need a *lot* of smaller chunks,
possibly enough for the per-chunk request overhead to become
significant.  I'd just pick a good chunk size and go with that, so
that for higher bitrates you just need to be able to swarm faster to
keep up.
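A back-of-envelope sketch of that tradeoff (all the numbers here are assumed for illustration, not measured from Freenet): with a fixed chunk size, a higher bitrate just means more chunks in flight per second, while the overhead fraction per chunk stays constant; shrinking the chunk instead makes the fixed per-request cost eat a growing share of every transfer.

```python
CHUNK_SIZE = 256 * 1024        # bytes; an assumed "good chunk size"
REQUEST_OVERHEAD = 2 * 1024    # assumed fixed cost per request, in bytes

def chunks_per_second(bitrate_bps: float, chunk_size: int = CHUNK_SIZE) -> float:
    """How many chunks must complete per second to sustain a bitrate."""
    return (bitrate_bps / 8) / chunk_size

def overhead_fraction(chunk_size: int) -> float:
    """Fraction of transferred bytes spent on per-request overhead."""
    return REQUEST_OVERHEAD / (REQUEST_OVERHEAD + chunk_size)
```

With these assumed numbers, 256 KiB chunks waste under 1% on request overhead, while 8 KiB chunks waste about 20% -- which is the argument for keeping the chunk size fixed and swarming faster instead.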

Thelema
-- 
E-mail: thelema314 at bigfoot.com                        Raabu and Piisu
GPG 1024D/36352AAB fpr:756D F615 B4F3 BFFC 02C7  84B7 D8D7 6ECE 3635 2AAB

_______________________________________________
devl mailing list
devl at freenetproject.org
http://hawk.freenetproject.org/cgi-bin/mailman/listinfo/devl