For me it has always worked pretty well to do 'read-only replication'. I've no idea
if there is an established term for it, but here is what I do:

1.) We use sticky sessions in our load balancer. Each user defaults back to his
node. If the node is not available for 40 seconds, then he gets moved to a
different node.

2.) After each request we replicate the session over to our memcached instance.
But we do not evict the session from the current node! So it usually just gets
backed up but never restored.

3.) If a node fails, we have a listener which tries to restore the session from
memcached.

memcached can of course get its own network segment and so on.
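To make steps 2 and 3 concrete, here is a minimal sketch in plain Java. It is not our actual code: the class and method names are invented, and a ConcurrentHashMap stands in for the real memcached client, just to show the backup-without-eviction and restore-on-failover flow.

```java
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the read-only replication scheme described above.
// A ConcurrentHashMap stands in for the remote memcached instance.
public class SessionBackupSketch {

    // stand-in for memcached: sessionId -> backed-up session attributes
    static final Map<String, Map<String, Serializable>> memcached =
            new ConcurrentHashMap<>();

    // step 2: after each request, copy the session over to memcached,
    // but keep it on the current node (no eviction)
    static void backupAfterRequest(String sessionId,
                                   Map<String, Serializable> sessionData) {
        memcached.put(sessionId, new ConcurrentHashMap<>(sessionData));
    }

    // step 3: if a node fails, a failover listener on the new node
    // tries to restore the session from memcached
    static Map<String, Serializable> restoreOnFailover(String sessionId) {
        Map<String, Serializable> backup = memcached.get(sessionId);
        return backup == null ? new ConcurrentHashMap<>() : backup;
    }
}
```

In a real deployment the backup call would run in a filter or request listener after the response is committed, so the user never waits on it.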


This does not worsen response times for users, since the session backup happens
after the page has already been rendered. And since we mainly rely on
session affinity, the memcached session retrieval only takes place if a node
goes down.

LieGrue,
strub


----- Original Message -----
> From: Rohit Kelapure <[email protected]>
> To: MyFaces Discussion <[email protected]>
> Cc: 
> Sent: Tuesday, September 27, 2011 6:24 PM
> Subject: Re: Issues with MyFaces and Clusters
> 
> David,
> 
> It is difficult for me to tell you what the average session size for a
> JSF application is. I am by no means a JSF expert. The folks on this forum will
> have better insight into typical session sizes for a JSF application. Most
> of the time it varies according to your application.
> 
> There is some tuning you can do, like reducing the # of logical views, etc.,
> that will reduce the amount cached by MyFaces in the session.
> 
> I recommend tuning and, if that does not help, some rearchitecting to reduce
> your session footprint.
> I will also point you to
> http://wasdynacache.blogspot.com/2011/08/websphere-session-persistence-best.html
> 
> --Thanks,
> Rohit Kelapure,
> Apache Open WebBeans Committer
> 
> On Tue, Sep 27, 2011 at 11:54 AM, Boyd, David (Corporate) <
> [email protected]> wrote:
> 
>>  Rohit,
>> 
>>  Going down this road it appears that we will be using a database to
>>  store the session information. From what I have seen with JSF - at
>>  least in our implementation - we are looking at around 20 MB in the
>>  session per user.
>> 
>>  So now the question is: is what I am seeing in the session high for a
>>  JSF application? For a non-JSF application it is very high.
>> 
>>  -----Original Message-----
>>  From: Rohit Kelapure [mailto:[email protected]]
>>  Sent: Monday, September 26, 2011 9:28 PM
>>  To: MyFaces Discussion
>>  Cc: Robert E Goff
>>  Subject: Re: Issues with MyFaces and Clusters
>> 
>>  David,
>> 
>>  Please take a look at
>> 
>>  Update All Session Attributes option -  Horrible for performance, but
>>  fixes
>>  issues like this.
>>  http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=%2Frzamy%2F50%2Fadmin%2Fhelp%2Fuprs_rtuning_parameters.html
>> 
>>  --Thanks,
>>  Rohit
>> 
>>  On Mon, Sep 26, 2011 at 3:09 PM, Boyd, David (Corporate) <
>>  [email protected]> wrote:
>> 
>>  > I am having some issues with clustering an application with session
>>  > affinity enabled on Websphere Application Server.
>>  >
>>  >
>>  >
>>  > We are using:
>>  >
>>  >
>>  >
>>  > MyFace 1.1.7
>>  >
>>  > Tomahawk 1.1.5
>>  >
>>  > JDK 1.5
>>  >
>>  > WebSphere 7 - Fix Pack 13
>>  >
>>  >
>>  >
>>  > I am wondering if this is a known issue with this version of My Faces.
>>  >
>>  >
>>  >
>>  > What appears to be happening is that in the application, when a
>>  > session object is accessed and the data is changed, the change event
>>  > is not being triggered and therefore the change is not being pushed
>>  > out to all the servers in the cluster.
>>  >
>>  >
>>  >
>>  > It looks like this version of MyFaces is accessing the session object
>>  > via the getter, but it is making a change to the reference and
>>  > therefore not calling the setter method.
>>  >
>>  >
>>  >
>>  > Looking for some confirmation on this issue, or possibly a
>>  > configuration that needs to be done.
>>  >
>>  >
>>  >
>>  > Thanks
>>  >
>
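PS: the getter-vs-setter problem David describes above boils down to the pattern
below. This is only a toy model (the class and the "dirty" flag are invented to
mimic how a container decides what to replicate), but it shows why mutating an
attribute through the getter is invisible to the cluster, and why re-setting the
attribute is the usual workaround.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: mutating a session attribute through the getter does not
// mark the session dirty; only setAttribute() fires the change event
// that the container uses to push the session out to the cluster.
public class DirtySessionSketch {
    final Map<String, Object> attributes = new HashMap<>();
    boolean dirty = false;   // what the container would replicate on

    Object getAttribute(String name) {
        return attributes.get(name);   // hands out the live reference
    }

    void setAttribute(String name, Object value) {
        attributes.put(name, value);
        dirty = true;                  // change event fires here
    }
}
```

The workaround is simply to call setAttribute again with the same reference
after mutating it, so the container sees a change.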
