Thanks for the fast response. I will try using threads and see how the
performance is. Actually I am using this for my artificial neural
network, and the problem relates to the ANN's limitations when I use a
large number of inputs. One way to overcome this is by distributing the
work, and I now have about 3 networks in my system, with slow processing.
Maybe running them in parallel could help a little.


Depending on how long-running these processes are, you may be able
to separate them out completely into separate server processes
in true client-server mode: effectively creating a network process for
each network, then having a load balancer so that each incoming
request gets sent to one of the server processes. That way you
can scale linearly by adding more servers as required (the load
balancer starting a new process each time the number of active
requests passes a given limit). This is how many industrial-scale
databases handle high processing loads: each SQL request is validated
and, if OK, passed to a query server; the queries are distributed over
the available servers to ensure even loading, and each server can
run on a separate CPU as needed (but still controlled by the OS).
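
As a very rough sketch of the idea using the standard multiprocessing
module (the evaluate() function below is just a stand-in I've assumed
for your network's forward pass):

import multiprocessing as mp

def evaluate(inputs):
    # stand-in for the network's forward pass (assumption)
    return sum(inputs)

def network_server(requests, results):
    # each server process pulls work off the shared queue; the queue
    # itself gives a simple form of load balancing across processes
    for req_id, inputs in iter(requests.get, None):
        results.put((req_id, evaluate(inputs)))

if __name__ == '__main__':
    requests, results = mp.Queue(), mp.Queue()
    servers = [mp.Process(target=network_server, args=(requests, results))
               for _ in range(3)]              # one process per network
    for s in servers:
        s.start()
    for req_id, data in enumerate([[1, 2], [3, 4], [5, 6]]):
        requests.put((req_id, data))           # tag each request with an id
    for _ in range(3):
        print(results.get())                   # (req_id, answer) pairs, any order
    for _ in servers:
        requests.put(None)                     # sentinel: tell each server to stop
    for s in servers:
        s.join()

Tagging each request with an id is what lets you match answers back up
with their sources, since the results can come back in any order.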

The downside of this is of course the complexity of writing the
load balancer, which must track all incoming requests and which
server they are on, and also handle the responses from the servers
when complete, making sure they go back to the right source.
It's a little bit like writing a web server... or at least the listener part.
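
The listener part needn't be huge though. Very roughly, with the
standard library's socketserver (the back-end addresses and the
one-line-per-request wire format here are just assumptions for the
example):

import itertools
import socket
import socketserver

# hypothetical back-end addresses, one per network server process
BACKENDS = [('localhost', 9001), ('localhost', 9002), ('localhost', 9003)]
next_backend = itertools.cycle(BACKENDS)

class Balancer(socketserver.StreamRequestHandler):
    def handle(self):
        request = self.rfile.readline()           # one request per line (assumed)
        backend = next(next_backend)              # round-robin choice of server
        with socket.create_connection(backend) as server:
            server.sendall(request)               # forward the request
            reply = server.makefile().readline()  # wait for that server's answer
        self.wfile.write(reply.encode())          # back to the original caller

if __name__ == '__main__':
    with socketserver.ThreadingTCPServer(('localhost', 9000), Balancer) as srv:
        srv.serve_forever()

That is good enough as a sketch; a real one would need error handling
and something smarter than round-robin if the servers can be unevenly
loaded.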

HTH,

Alan G.


