I was sending in plain text, using the Gmail web client.

SpamAssassin didn't seem to like 3 things (according to the error
message I got back):
- I use a free email address
- My email address (before the @) ends with a number
- The email had the word "replication" in the subject line.

No idea where rule #3 came from, but it was the easiest to fix, so
that's what I changed ;)


On Tue, May 31, 2011 at 4:21 PM, Erick Erickson <erickerick...@gmail.com> wrote:
> Constantijn:
>
> I've had better luck by sending messages as "plain text". The spam
> filter on the user list sometimes acts up if you send mail in "rich
> text" or similar formats. Gmail has a link to change this; what
> client are you using?
>
> And thanks for participating!
>
> Best
> Erick
>
> On Tue, May 31, 2011 at 3:22 AM, Constantijn Visinescu
> <baeli...@gmail.com> wrote:
>> Hi Bernd,
>>
>> I'm assuming Linux here; if you're running something else, these
>> instructions might differ slightly.
>>
>> First get a heap dump with:
>> jmap -dump:format=b,file=/path/to/generate/heapdumpfile.hprof 1234
>>
>> with 1234 being the PID (process ID) of the JVM.
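>>
>> If you don't know the PID offhand, something like this should turn it
>> up (assuming the JDK's jps tool is on your PATH; the grep pattern is
>> just an example, match whatever your Solr process is called):
>> jps -lv | grep -i solr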
>>
>> After you get a heap dump you can analyze it with Eclipse MAT (Memory
>> Analyzer Tool).
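>>
>> If a 10GB dump is too big to open comfortably in the MAT GUI, MAT
>> also ships a headless parser that pre-builds the leak suspects report
>> (script name and report id below are from memory, so double-check
>> them against your MAT install):
>> ./ParseHeapDump.sh /path/to/generate/heapdumpfile.hprof org.eclipse.mat.api:suspects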
>>
>> Just a heads up if you're doing this in production: the JVM will
>> freeze completely while generating the heap dump, which with a 10GB
>> heap will feel like one giant stop-the-world GC pause.
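>>
>> If you'd rather not freeze the JVM by hand, you can also let it write
>> a dump automatically the next time it runs out of memory, using the
>> standard HotSpot flags below (the dump path is just an example). That
>> only kicks in when the process is already dying, so it adds no pauses
>> during normal operation:
>> -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dumps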
>>
>> Good luck with finding out what's eating your memory!
>>
>> Constantijn
>>
>> P.S.
>> Sorry about altering the subject line, but the SpamAssassin used by
>> the mailing list was rejecting my post because it had "replication" in
>> the subject line. Hope it doesn't mess up the thread.
>>
>> On Tue, May 31, 2011 at 8:43 AM, Bernd Fehling
>> <bernd.fehl...@uni-bielefeld.de> wrote:
>>> Some more info,
>>> after one week the servers have the following status:
>>>
>>> Master (indexing only)
>>> + looks good and has heap size of about 6g from 10g OldGen
>>> + has loaded meanwhile 2 times the index from scratch via DIH
>>> + has added new documents into existing index via DIH
>>> + has optimized and replicated
>>> + no full GC within one week
>>>
>>> Slave A (search only) Online
>>> - looks bad and has heap size of 9.5g from 10g OldGen
>>> + was replicated
>>> - several full GC
>>>
>>> Slave B (search only) Backup
>>> + looks good and has heap size of 4g from 10g OldGen
>>> + was replicated
>>> + no full GC within one week
>>>
>>> Conclusion:
>>> + DIH, processing, indexing, replication are fine
>>> - the search is crap and "eats up" OldGen heap which can't be
>>>  cleaned up by a full GC. Maybe memory leaks or whatever...
>>>
>>> Because of this, Solr 3.1 can _NOT_ be recommended as a
>>> high-availability, high-search-load search engine, due to unclear
>>> heap problems caused by the search. The search setup is "out of the
>>> box", so there are no self-produced programming errors.
>>>
>>> Are there any tools available for Java to analyze this?
>>> (like Valgrind or Electric Fence for C++)
>>>
>>> Is it possible to analyze a heap dump produced with jvisualvm?
>>> Which tools?
>>>
>>>
>>> Bernd
>>>
>>>
>>> On 30.05.2011 15:51, Bernd Fehling wrote:
>>>>
>>>> Dear list,
>>>> after switching from FAST to Solr I'm getting the first _real_ data.
>>>> This includes search times, memory consumption, performance of Solr, ...
>>>>
>>>> What I recognized so far is that something eats up my OldGen and
>>>> I assume it might be replication.
>>>>
>>>> Current Data:
>>>> one master - indexing only
>>>> two slaves - search only
>>>> over 28 million docs
>>>> single instance
>>>> single core
>>>> index size 140g
>>>> current heap size 16g
>>>>
>>>> After startup I have about 4g heap in use and about 3.5g of OldGen.
>>>> After one week and some replications OldGen is filled close to 100
>>>> percent.
>>>> If I start an optimize under this condition I get an OOM (out of
>>>> memory) error on the heap.
>>>> So my assumption is that something is eating up my heap.
>>>>
>>>> Any idea how to trace this down?
>>>>
>>>> Maybe a memory leak somewhere?
>>>>
>>>> Best regards
>>>> Bernd
>>>>
>>>
>>
>
