Quoting Mark Hahn <[EMAIL PROTECTED]>, on Sun 06 Apr 2008 01:58:01 PM PDT:
> > damage? Presumably you have decent file system protection so that
> > user A can't do bad things to (or even see) user B's files. All that
> > happens is bad guy User A zaps their own stuff.
> this assumes that the windows admin knows enough not to ever run
> anything untrusted, or perhaps he's set up some fileservers/trees as
> readonly, or that users are not permitted to load their own
> executables, etc. I suspect that these things, which would be natural
> to any *nix admin, are not exactly second-nature to windows admins.
> (disclaimer: I don't know any serious windows admins.)
Any serious windows admin would know all this stuff. MS has had
decent user access controls, etc., since WinNT came out. No
enterprise-scale system could work without them. Don't confuse the
"admin-less" consumer model with the "admin-full" business/enterprise
model.
The folks who get into trouble with Windows are the ones trying to do
"admin-free" for a business, with the consumer strategies, when they
shouldn't be.
(I used to make a living untangling such things..)
> > Sure.. you let your cluster issue outbound network traffic to the
> > big wide internet? This is probably harder to actually allow than
> > to prevent.
> huh? I'm guessing the natural windows cluster organization is to put
> all the compute nodes on the full corporate/campus network. it's not
> as if the windows world is really used to separating and tunneling
> GUIs over networks, at least not with the same level of naturalness
> as X and the usual SSH tunnel.
I would say that the institutional windows world is as savvy about
tunnels, remote consoles, partitioning networks as the institutional
*nix world. If you look at MS's literature for CCS, they show the
compute nodes on a private network, just as the typical *nix cluster
is arranged.
And, for that matter, institutional Windows networks, on the whole,
are probably managed a lot more tightly than institutional *nix
networks. Enterprise scale windows (e.g. with domain controllers,
SMS, etc.) gives pretty fine grained control over the user's
workstation. I'd venture to say that you'd have a lot more trouble
dropping a "rogue" windows machine (i.e. a computer you ordered on
your own) into an enterprise network than a *nix machine. You'd hit
too many little hiccups (e.g. your login wouldn't authenticate against
the domain, you wouldn't have access to shared resources, etc.).
This leaves aside things like SOHO (small office, home office) scale
installations done by the "geek squad" or equivalent from a consumer
electronics discounter. Those are usually done as wide-open,
share-everything-for-everyone consumer installations (because it means
they won't get a service call because "I can't see my shared drive on
my husband's laptop").
> > Most clusters have a "totally inside the cluster" network that's
> > only implicitly bridged to the outside world through the headnode.
> most _*nix_ clusters, yes. but the whole discussion is windows, where
> users will naturally expect their job to see the same environment as
> their desktop: same filesystems, same graphics, same network access.
I don't know why a computational job would need graphics? They run
headless nodes in Windows just like *nix. The headnode would have all
the issues you address, but that's not the one on which the
computation is being done, so the penalty from running AV, etc.,
bogging up the system isn't as big an issue. But, also, recall the
general model we were discussing.. a smallish cluster to support some
commercial application (say, a computationally intensive FEM code).
> one interesting fact is that the growth in cores of single machines
> is actually working _against_ the need for windows clusters. who
> wouldn't rather just run jobs on a 32-core server, rather than
> screwing around with mpi on a cluster?
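(A minimal sketch of that point, in Python purely for illustration:
an embarrassingly parallel job can use every core of one big server
with nothing but the standard library, no MPI or cluster plumbing.
`simulate_segment` is a hypothetical stand-in for a per-segment
computation, not any real code from this thread.)

```python
from multiprocessing import Pool

def simulate_segment(seg_id):
    # placeholder workload; a real code would do the heavy numerics here
    return seg_id * seg_id

if __name__ == "__main__":
    # Pool() defaults to one worker process per available core, so the
    # same script scales from a desktop to a 32-core server unchanged.
    with Pool() as pool:
        results = pool.map(simulate_segment, range(32))
    print(sum(results))
```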
In this scenario, the cluster is basically a "network-attached
appliance". There are lots of network-attached storage
devices out there (e.g. from Maxtor) using some form of Windows as
the OS. They tend not to have AV stuff, just because the software
on the appliance is fairly tightly configuration managed (i.e.
nobody goes out running random programs on the NAS box). It's just
not a huge threat.
> so you really think people will buy a packaged windows compute
> cluster preloaded with exactly one CFD code, and never be tempted to
> install other apps on it? I think that's absurd, at least based on
> the kinds of things I see people doing with clusters. the tool of the
> moment is constantly changing, even if the group isn't actually
> developing their own tools.
I do actually conceive of this happening. We have full time
mechanical engineers at JPL who probably use only one or two
applications all the time (e.g. NASTRAN or other FEM codes). Likewise
with some electromagnetic modeling tools.. an engineer using, say,
Ansoft's products isn't likely to change to other codes, just because
a) the existing product does what they need, and b) there's a HUGE
learning curve to changing products, in terms of building models and
interpreting results.
Sure, they might want to install another code, someday, but odds are,
they'd be buying another "cluster aware" shrinkwrapped application,
which would be windows CCS compatible, and would fit within the MS
software management scheme.
The target market for such clusters is NOT the researcher developing
the codes or tinkering with multiple codes. It's someone who needs
more computational crunch than their desktop can give them, by an
order of magnitude or two (i.e. rather than run 1,000 segments in the
model, you want to run 10,000, and it's an order-(N^2) sort of problem).
The target market is someone who does not care what's inside the box.
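(The order-(N^2) arithmetic works out like this; a toy sketch, where
`relative_cost` is an illustrative helper, not part of any real code:)

```python
def relative_cost(n_old, n_new, order=2):
    # order is the exponent of the complexity, e.g. 2 for an O(N^2) method
    return (n_new / n_old) ** order

# Going from 1,000 segments to 10,000 with an O(N^2) method costs
# (10000 / 1000)^2 = 100x the compute -- the order of magnitude or
# two of extra crunch mentioned above.
print(relative_cost(1000, 10000))  # 100.0
```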
Jim
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf