Agreed.  I was thinking that a node should refuse to provide all of
the shares required to reconstruct a file.  Then it's up to the client
to do the final reconstruction after communicating with multiple server
nodes. (This client/server separation is not compatible with the current
design.)
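To make the client-side reconstruction concrete, here is a minimal sketch of share splitting and reassembly. It uses simple XOR-based n-of-n splitting purely for illustration (this is not Freenet's actual scheme; a real system would more likely use a k-of-n threshold scheme such as Shamir's secret sharing or an erasure code, so that the client can tolerate unresponsive nodes):

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, n: int) -> list:
    """Split data into n XOR shares; all n are needed to reconstruct.
    No subset of fewer than n shares reveals anything about the data."""
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    last = reduce(xor_bytes, shares, data)
    return shares + [last]

def reconstruct(shares: list) -> bytes:
    """XOR all shares back together to recover the original data.
    In the proposed design, the client would gather these shares
    from several different server nodes before calling this."""
    return reduce(xor_bytes, shares)
```

The point is that no single node ever holds (or serves) enough to rebuild the file on its own; only the requesting client, after talking to several nodes, performs the final step.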

There's a subtle difference between outright refusing to provide all
the shares of a file and not caching the shares.  I think caching should
always happen, and that nodes should be penalized for asking for any piece
of data too often.  This is to prevent some forms of denial of service.
But nodes should only cache a certain percentage of shares so as not to
become "hot".  I'm not sure what the best way to do this is.
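The two ideas above (cache only a fraction of a file's shares, and penalize repeat requests) could be sketched roughly as follows. This is a toy illustration of the policy, not a proposed implementation; the fraction, request limit, and time window are arbitrary placeholder values:

```python
import time
from collections import defaultdict

class ShareCache:
    """Sketch of a node-side policy: cache at most a fixed fraction
    of any one file's shares (so the node never becomes "hot" for
    that file), and refuse requesters who ask for the same share
    too often within a time window (a crude DoS penalty)."""

    def __init__(self, cache_fraction=0.5, max_requests=3, window=60.0):
        self.cache_fraction = cache_fraction
        self.max_requests = max_requests
        self.window = window
        self.shares = {}                   # (file_id, idx) -> share bytes
        self.history = defaultdict(list)   # (requester, file_id, idx) -> timestamps

    def offer(self, file_id, idx, total_shares, share):
        """Cache the share only if we hold less than cache_fraction
        of this file's shares; otherwise drop it."""
        held = sum(1 for k in self.shares if k[0] == file_id)
        if held < total_shares * self.cache_fraction:
            self.shares[(file_id, idx)] = share

    def request(self, requester, file_id, idx, now=None):
        """Serve a cached share, or None if it is not cached or the
        requester has exceeded the per-share request limit."""
        now = time.monotonic() if now is None else now
        key = (requester, file_id, idx)
        recent = [t for t in self.history[key] if now - t < self.window]
        self.history[key] = recent
        if len(recent) >= self.max_requests:
            return None   # penalize: too many repeat requests
        self.history[key].append(now)
        return self.shares.get((file_id, idx))
```

One open question this sketch leaves unanswered is how nodes coordinate *which* fraction each caches, so that the full share set remains reachable somewhere in the network.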

In any case the current Freenet architecture doesn't easily accommodate
