'Hot, Flat, and Crowded'
Laugh at http://www.yert.com/film.php

--- On Mon, 9/6/10, Toke Eskildsen wrote:
> From: Toke Eskildsen
> Subject: RE: Hardware Specs Question
> To: "Dennis Gearon", "solr-user@lucene.apache.org"
> Date: Monday, September 6, 2010, 12:...
From: Dennis Gearon [gear...@sbcglobal.net]:
> I wouldn't have thought that CPU was a big deal, with the speed/cores of CPUs
> continuously growing according to Moore's law and disk speed
> barely changing 50% in 15 years. Must have a lot to do with caching.

I am not sure I follow you...
On 9/3/2010 3:39 AM, Toke Eskildsen wrote:
> I'll have to extrapolate a lot here (also known as guessing).
> You don't mention what kind of harddrives you're using, so let's say
> 15,000 RPM to err on the high-end side. Compared to the 2 drives @
> 15,000 RPM in RAID 1 we've experimented with, the difference...
'Hot, Flat, and Crowded'
Laugh at http://www.yert.com/film.php

--- On Fri, 9/3/10, Toke Eskildsen wrote:
> From: Toke Eskildsen
> Subject: Re: Hardware Specs Question
> To: "solr-user@lucene.apache.org"
> Date: Friday, September 3, 2010, 3:43 AM
> On Fri, 2010-09-03 at 11:07 +0200, Dennis Gearon wrote:...
Eskildsen"
To:
Sent: Friday, September 03, 2010 6:43 PM
Subject: Re: Hardware Specs Question
On Fri, 2010-09-03 at 11:07 +0200, Dennis Gearon wrote:
If you really want to see performance, try external DRAM disks.
Whew! 800X faster than a disk.
As sexy as they are, the DRAM drives does not b
On Fri, 2010-09-03 at 11:07 +0200, Dennis Gearon wrote:
> If you really want to see performance, try external DRAM disks.
> Whew! 800X faster than a disk.
As sexy as they are, DRAM drives do not buy much extra
performance, at least not at the search stage. For searching, SSDs are
not th...
On Fri, 2010-09-03 at 03:45 +0200, Shawn Heisey wrote:
> On 9/2/2010 2:54 AM, Toke Eskildsen wrote:
> > We've done a fair amount of experimentation in this area (1997-era SSDs
> > vs. two 15,000 RPM harddisks in RAID 1 vs. two 10,000 RPM harddisks in
> > RAID 0). The harddisk setups never stood a chance for searching...
On 9/2/2010 2:54 AM, Toke Eskildsen wrote:
> We've done a fair amount of experimentation in this area (1997-era SSDs
> vs. two 15,000 RPM harddisks in RAID 1 vs. two 10,000 RPM harddisks in
> RAID 0). The harddisk setups never stood a chance for searching. With
> current SSDs being faster than harddisks...
On Thu, 2010-09-02 at 03:37 +0200, Lance Norskog wrote:
> I don't know how much SSD disks cost, but they will certainly cure the
> disk i/o problem.
We've done a fair amount of experimentation in this area (1997-era SSDs
vs. two 15,000 RPM harddisks in RAID 1 vs. two 10,000 RPM harddisks in
RAID 0). The harddisk setups never stood a chance for searching...
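
Toke's result follows from the access pattern: search touches the index with many small reads at random offsets, and random access is where spinning disks fall furthest behind SSDs. A minimal Java sketch of that pattern, assuming a large index file at a hypothetical path (none of this is Lucene code):

    // RandomReadDemo.java -- an illustrative sketch; the default path is
    // hypothetical and a file much larger than 4 KB is assumed.
    // Mimics search-time access: many small reads at random offsets.
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.Random;

    public class RandomReadDemo {
        public static void main(String[] args) throws IOException {
            String path = args.length > 0 ? args[0] : "/data/solr/index/_0.tim";
            byte[] buf = new byte[4096];      // small, index-like read size
            Random rnd = new Random(42);      // fixed seed for repeatability
            try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
                long span = raf.length() - buf.length;
                int reads = 10_000;
                long start = System.nanoTime();
                for (int i = 0; i < reads; i++) {
                    raf.seek((long) (rnd.nextDouble() * span));  // random offset
                    raf.readFully(buf);
                }
                double ms = (System.nanoTime() - start) / 1e6;
                System.out.printf("%d random 4 KB reads: %.0f ms total, %.3f ms/read%n",
                                  reads, ms, ms / reads);
            }
        }
    }

Run against the same file on an SSD and on a 15,000 RPM disk, the per-read time on the SSD is typically far lower, since there is no seek penalty.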
> ...& memory
> configuration on our project?
>
> Thanks in advance.
>
> Scott
>
> - Original Message - From: "Lance Norskog"
> To:
> Sent: Tuesday, August 31, 2010 1:01 PM
> Subject: Re: Hardware Specs Question
>
> There are synchronization points...
e Norskog"
To:
Sent: Tuesday, August 31, 2010 1:01 PM
Subject: Re: Hardware Specs Question
There are synchronization points, which become chokepoints at some
number of cores. I don't know where they cause Lucene to top out.
Lucene apps are generally disk-bound, not CPU-bound, but yours
;
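
The chokepoint effect Lance describes can be demonstrated outside Lucene. A minimal Java sketch (illustrative only, not Lucene internals): each thread does some private work and then funnels through one synchronized block, so throughput flattens as threads are added.

    // ContentionDemo.java -- an illustrative sketch, not Lucene code.
    // One synchronized section caps scaling as threads are added.
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ContentionDemo {
        private static final Object LOCK = new Object();
        private static long shared = 0;

        // Each operation does a little private (parallel) work, then
        // serializes on LOCK -- the chokepoint.
        static void oneOperation() {
            long local = 0;
            for (int i = 0; i < 1_000; i++) local += i;
            synchronized (LOCK) {
                shared += local;
            }
        }

        static double opsPerSecond(int threads) throws InterruptedException {
            final int opsPerThread = 200_000;
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            long start = System.nanoTime();
            for (int t = 0; t < threads; t++) {
                pool.execute(() -> {
                    for (int i = 0; i < opsPerThread; i++) oneOperation();
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
            double secs = (System.nanoTime() - start) / 1e9;
            return (double) threads * opsPerThread / secs;
        }

        public static void main(String[] args) throws InterruptedException {
            // Past the point where threads mostly wait on LOCK, adding
            // cores stops increasing ops/sec.
            for (int threads : new int[] {1, 2, 4, 8, 16}) {
                System.out.printf("%2d threads: %,.0f ops/sec%n",
                                  threads, opsPerSecond(threads));
            }
        }
    }

Where the curve flattens depends on how much work happens outside the lock relative to inside it, which is why the top-out point for Lucene is workload-dependent.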
> On Mon, Aug 30, 2010 at 8:28 PM, scott chu (朱炎詹) wrote:
>
>> I am also curious, as Amit is. Can you give an example of the garbage
>> collection problem you mentioned?
>>
>> - Original Message - From: "Lance Norskog"
>> To:
>> ...
I am also curious, as Amit is. Can you give an example of the garbage
collection problem you mentioned?
- Original Message -
From: "Lance Norskog"
To:
Sent: Tuesday, August 31, 2010 9:14 AM
Subject: Re: Hardware Specs Question
It generally works best to tune the Solr caches and allocate enough
RAM to run comfortably...
It generally works best to tune the Solr caches and allocate enough
RAM to run comfortably. Linux, Windows, et al. have their own cache
of disk blocks. They use very good algorithms for managing this cache.
Also, they do not make long garbage collection passes.

On Mon, Aug 30, 2010 at 5:48 PM, Amit...
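
The OS disk-block cache Lance mentions lives outside the JVM heap, so it costs nothing at garbage-collection time. Its effect is easy to observe with a minimal Java sketch (the default file path below is hypothetical, standing in for an index file): the second read of the same file is normally served from the page cache and is much faster.

    // PageCacheDemo.java -- an illustrative sketch; the default path below
    // is hypothetical. Reads the same file twice: the second pass is
    // normally served from the OS disk-block (page) cache, not the disk.
    import java.io.BufferedInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class PageCacheDemo {
        static long readMillis(File f) throws IOException {
            long start = System.nanoTime();
            byte[] buf = new byte[1 << 20];  // 1 MB read buffer
            try (InputStream in = new BufferedInputStream(new FileInputStream(f))) {
                while (in.read(buf) != -1) { /* discard the data */ }
            }
            return (System.nanoTime() - start) / 1_000_000;
        }

        public static void main(String[] args) throws IOException {
            File f = new File(args.length > 0 ? args[0]
                                              : "/data/solr/index/_0.fdt");
            System.out.println("cold read: " + readMillis(f) + " ms");  // disk
            System.out.println("warm read: " + readMillis(f) + " ms");  // OS cache
        }
    }

This is why a modest heap plus free RAM for the OS often beats giving all the RAM to the JVM: the page cache keeps the index hot without adding GC pressure.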
Lance,
Thanks for your help. What do you mean by the OS keeping the index in
memory better than Solr can? Do you mean that you should use another means to
keep the index in memory (i.e. a ramdisk)? Is there a generally accepted
heap-size-to-index-size ratio that you follow?

Thanks
Amit

On Mon, Aug 30, 2010...
The price-performance knee for small servers is 32 GB RAM, 2-6 SATA
disks in a RAID, and 8-16 cores. You can buy these servers and half-fill
them, leaving room for expansion.

I have not done benchmarks on the max number of processors that can be
kept busy during indexing or querying, and the total numbers...
Hi all,

I am curious to get some opinions on at what point having more CPU
cores shows diminishing returns in terms of QPS. Our index size is about 8GB
and we have 16GB of RAM on a quad-core 4 x 2.4 GHz AMD Opteron 2216.
Currently I have the heap set to 8GB.

We are looking to get more servers to...