The EC2 7.5GB "Large" instance ($0.68/hour) sucks. Unpredictably, you see
timings such as:

User time: 0 seconds
Kernel time: 0 seconds
Real time: 600 seconds

How can "clock time" be higher in such extent? Only if _another_ user used
600 seconds CPU: _virtualization_
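
For what it's worth, here is a minimal sketch (mine, not from this thread) of
how to observe the gap from inside the JVM using the standard ThreadMXBean
API; the loop is just a stand-in for a real workload such as an indexing
batch:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class CpuVsWallClock {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            long wallStart = System.nanoTime();
            // CPU time (ns) the OS actually granted this thread
            long cpuStart = threads.getCurrentThreadCpuTime();

            long acc = 0; // stand-in for the real workload
            for (int i = 0; i < 200000000; i++) acc += i;

            long wallMs = (System.nanoTime() - wallStart) / 1000000;
            long cpuMs = (threads.getCurrentThreadCpuTime() - cpuStart) / 1000000;
            System.out.println("wall=" + wallMs + "ms cpu=" + cpuMs
                    + "ms (acc=" + acc + ")");
            // A large wall/cpu gap means the hypervisor scheduled those
            // cycles to another tenant ("steal time") -- exactly the
            // 0s-user / 600s-real case above.
        }
    }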


My client has had constant problems. We are moving to dedicated hardware
(25 times cheaper on average: at $0.68/hour the instance alone is roughly
$500/month, and Amazon sells 1 TB of EBS for $100/month, plus additional
charges per I/O request).


> I have a large EC2 instance (7.5 GB RAM); it dies every few hours with
> out-of-memory (heap) errors. I started upping the minimum memory required;
> currently I use -Xms3072M.



"Large CPU" instance is "virtualization" and behaviour is unpredictable.
Choose "cluster" instance with explicit Intel XEON CPU (instead of
"CPU-Units") and compare behaviour; $1.60/hour. Please share results.

Thanks,

-- 
Fuad Efendi
416-993-2060
Tokenizer Inc., Canada
Data Mining, Search Engines
http://www.tokenizer.ca

On 11-08-17 5:56 PM, "Jason Toy" <jason...@gmail.com> wrote:

>I've only set minimum memory and have not set maximum memory. I'm doing
>more investigation, and I see that I have 100+ dynamic fields for my
>documents, not the 10 fields I quoted earlier. I also sort against those
>dynamic fields often, and I'm reading that this potentially uses a lot of
>memory. Could this be the cause of my problems, and if so, what options do
>I have to deal with this?
>
>On Wed, Aug 17, 2011 at 2:46 PM, Markus Jelsma
><markus.jel...@openindex.io> wrote:
>
>> Keep in mind that a commit warms up another searcher, potentially
>> doubling RAM consumption in the background due to cache-warming queries
>> being executed (the newSearcher event). Also, where is your Xmx switch?
>> I don't know how your JVM will behave if you set Xms > Xmx.
>>
>> 65M docs is quite a lot, but it should run fine with a 3GB heap
>> allocation.
>>
>> It's good practice to use a master for indexing without any caches or
>> warm-up queries; once you exceed a certain number of documents, it will
>> bite.
>>
>> > I have a large EC2 instance (7.5 GB RAM); it dies every few hours with
>> > out-of-memory (heap) errors. I started upping the minimum memory
>> > required; currently I use -Xms3072M.
>> > I insert about 50k docs an hour and currently have about 65 million
>> > docs with about 10 fields each. Is this already too much data for one
>> > box? How do I know when I've reached the limit of this server? I have
>> > no idea how to keep control of this issue. Am I just supposed to keep
>> > upping the minimum RAM used for Solr? How do I know what the accurate
>> > amount of RAM I should be using is? Must I keep adding more memory as
>> > the index size grows? I'd rather the query be a little slower if I can
>> > use constant memory and have the search read from disk.
>>
>
>
>
>-- 
>- sent from my mobile
>6176064373

