On Wed, Nov 3, 2010 at 1:04 AM, Vivek Khurana <[email protected]> wrote:

>
>
> On Nov 2, 8:56 pm, Ken Wesson <[email protected]> wrote:
>
> > That seems impossible assuming you don't trust the software running on
> the
> > other node.
> >
>
>  It is not impossible. There are projects from Cornell University,
> Jif [1] and Fabric [2], which have achieved the same. Jif is a
> modification of Java and Fabric is based on Jif. Both languages run
> inside the JVM.


I'd expect those involve trusting the JVM and the hardware of the remote
node, even if not the software running on that JVM.

You need to work out your trust model more exactly. What specifically don't
you trust? Just the user at the remote node? The code running on the JVM?
The JVM itself? The hardware? The network infrastructure between?

If you just don't trust the remote user and the network between, it may
suffice to give him a closed-source program and encrypt the network traffic
-- though ultimately that would fall before a determined enough adversary
(as all attempts at copy protection by game companies and movie studios have
proven through their notorious strings of failures). Closed source just
slows down reverse engineering; it doesn't stop it. Basically this works
only if the potentially hostile node will be operated by a non-techie or by
someone who at least can't modify the running system past a certain point --
employees who don't have write access, say, but who might go poking into
confidential records on a lark.

With pretty much any other assumption about the threat model all that
changes: you could have a determined hacker sit down at the remote node and
replace anything from the CPU to the JVM or parts of the operating system
with hostile code, undermining whatever you're running on top of it.

The one thing guaranteed to minimize your exposure in this case is what I
prescribed: dole out information to remote nodes on a need-to-know basis.
Have them submit proposed edits back to your own (trusted) node where the
edits (or even the entire remote node) can be rejected if anything is amiss
-- if it tries to change any of these things it shouldn't try to change,
into the IP blacklist it goes. And of course give each node a key pair and
encrypt each transmission with the public key for the recipient node, so the
network traffic can't be sniffed by third parties (short of compromising
many nodes to root out their private keys).
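
As a rough sketch of the gatekeeper idea on the trusted node (all class and
field names here are hypothetical, not from any real project): proposed edits
come in, anything touching a protected field is rejected, and the sender goes
onto the IP blacklist.

```java
import java.util.*;

// Hypothetical sketch of the trusted node's edit gatekeeper. Remote nodes
// submit proposed edits as field -> value maps; an edit touching any
// protected field is rejected and the sender's IP is blacklisted.
public class EditGatekeeper {
    // Fields remote nodes must never change (illustrative names only).
    private final Set<String> protectedFields = Set.of("acl", "ownerId");
    private final Set<String> ipBlacklist = new HashSet<>();

    // Returns true if the edit is accepted; otherwise blacklists the sender.
    public boolean accept(String senderIp, Map<String, String> proposedEdit) {
        for (String field : proposedEdit.keySet()) {
            if (protectedFields.contains(field)) {
                ipBlacklist.add(senderIp); // into the IP blacklist it goes
                return false;
            }
        }
        return true;
    }

    public boolean isBlacklisted(String ip) {
        return ipBlacklist.contains(ip);
    }

    public static void main(String[] args) {
        EditGatekeeper gate = new EditGatekeeper();
        System.out.println(gate.accept("10.0.0.5", Map.of("title", "hello")));
        System.out.println(gate.accept("10.0.0.9", Map.of("acl", "rw-all")));
        System.out.println(gate.isBlacklisted("10.0.0.9"));
    }
}
```

The point is that the policy lives entirely on the trusted node; nothing on
the remote machine is relied upon to enforce it.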

There are ways to go even further -- submitting computations, say, to
multiple nodes and, if they disagree on the answer, resubmitting to a
disjoint set, so a computation can only be sabotaged by someone who can
predict exactly which nodes it'll be sent to and compromise them all, or
else by someone who can compromise all the nodes, period. Distributed
computing projects tend to go this far. So do file sharing systems --
they're under constant attack by
determined adversaries (even if this time the law is on those adversaries'
side!) and have evolved towards decentralization, being physically widely
distributed, and having cross-checks (such as databases of file hashes,
including of known bad files, and requiring swarm download sources to agree
on the file's hash so if any one of them tries to slip in a bad packet the
others can detect it). The Freenet project has perhaps the ultimate level of
paranoia of this sort, where the things to protect in this case are a) the
integrity of the network itself and b) the confidentiality of
who-published-what.
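
A minimal sketch of that cross-check (not any project's actual protocol):
the trusted node compares the SHA-256 hashes of the results it gets back,
and accepts an answer only when every node agrees; on disagreement the work
unit would be resubmitted to a disjoint set of nodes.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.*;

// Illustrative sketch of redundant-result agreement: accept a result only
// when all nodes returned the same answer, compared by SHA-256 hash (the
// same trick swarm downloads use to detect a bad packet from one source).
public class RedundantCheck {
    static String sha256(byte[] data) {
        try {
            byte[] h = MessageDigest.getInstance("SHA-256").digest(data);
            StringBuilder sb = new StringBuilder();
            for (byte b : h) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Returns the agreed result, or empty if any node disagrees --
    // the caller would then resubmit to a disjoint set of nodes.
    static Optional<String> agree(List<String> results) {
        Set<String> hashes = new HashSet<>();
        for (String r : results) {
            hashes.add(sha256(r.getBytes(StandardCharsets.UTF_8)));
        }
        return hashes.size() == 1 ? Optional.of(results.get(0))
                                  : Optional.empty();
    }

    public static void main(String[] args) {
        System.out.println(agree(List.of("42", "42", "42")).isPresent());
        System.out.println(agree(List.of("42", "42", "evil")).isPresent());
    }
}
```

A single saboteur among the nodes changes the hash set's size and is caught;
only collusion among all of them slips through, which is exactly the
trade-off described above.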

Ultimately, though, if you think simply sprinkling permissions flags into
your data here and there and trusting some part of the remote system (even
the JVM) to enforce it will work, even if you throw in a load of obfuscation
and encryption and make the node software closed-source, in the vast
majority of cases it simply won't hold up in the face of a determined
assault by any adversary with full control of the remote hardware. If it
were possible, the movie industry's attempts to stop Blu-rays from being
ripped and shared would have actually worked, whereas computer security
professionals are pretty much unanimous that all such attempts were doomed
to failure from the get-go; and the empirical evidence strongly supports
what those security professionals said.

-- 
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to [email protected]
Note that posts from new members are moderated - please be patient with your 
first post.
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en
