Nice stuff. Please send me the test case, I'd love to see it.
Thanks,
Jacob
Nico Heid wrote:
> Hi,
> I basically followed this:
> http://wiki.apache.org/jakarta-jmeter/JMeterFAQ#head-1680863678257fbcb85bd97351860eb0049f19ae
>
>
> I basically put all my queries in a flat text file. You could either use
> two parameters or put them in one file.
> The good thing about this is that each test uses the same queries, so
> you can compare different settings more easily afterwards.
>
> If you use varying facets, you might just go with two text files. If the
> facet stays the same within one test, you can hardcode it into the test case.
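>
> A minimal sketch of how such query files could be generated (the file
> names, field names, and word list below are placeholder assumptions, not
> my actual setup):
>
> import random
>
> # Sketch: write flat text files that JMeter's CSV Data Set Config can read,
> # one query term (and optionally one facet field) per line.
> words = ["apple", "banana", "cherry", "laptop", "camera", "guitar"]
> facet_fields = ["category", "manufacturer", "price_range"]
>
> with open("queries.txt", "w") as queries, open("facets.txt", "w") as facets:
>     for _ in range(1000):
>         # one or two random words per query
>         term = " ".join(random.sample(words, random.randint(1, 2)))
>         queries.write(term + "\n")
>         facets.write(random.choice(facet_fields) + "\n")
>
> In JMeter the two files are then read by CSV Data Set Config elements (or
> one file with both columns), and the sampler URL refers to the resulting
> variables, e.g. something like
> /select?q=${QUERY}&facet=true&facet.field=${FACET}.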
>
> I polished the result a little, if you want to take a look:
> http://i31.tinypic.com/28c2blk.jpg , JMeter itself does not plot such
> nice graphs.
> (Green is the maximum number of results delivered; from about 66
> "active users" per second onwards the response time increases.
> Orange/yellow are the average and median response times.)
> (I know the scales and descriptions are missing :-) but you should get
> the picture.)
> I manually reduced the machine's capacity; otherwise Solr would serve
> more than 12000 requests per second. (The whole index fit into RAM.)
> I can send you my saved test case if that would help.
>
> Nico
>
>
> Jacob Singh wrote:
>> Hi Nico,
>>
>> Thanks for the info. Do you have your scripts available for this?
>>
>> Also, is it configurable to give variable numbers of facets and
>> facet-based searches? I have a feeling this will be the limiting factor,
>> and much slower than keyword searches, but I could be (and usually am)
>> wrong.
>>
>> Best,
>>
>> Jacob
>>
>> Nico Heid wrote:
>>
>>> Hi,
>>> I did some trivial tests with JMeter.
>>> I set up JMeter to increase the number of threads steadily.
>>> For requests I use either a random word or a combination of words from
>>> a wordlist, or some sample data from the test system. (This is
>>> described in the JMeter manual.)
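>>>
>>> As a rough illustration of that request side (the Solr URL, wordlist
>>> path, and query counts are placeholder assumptions), something like
>>> this can be used to sanity-check a handful of random queries before
>>> wiring the same idea into JMeter:
>>>
>>> import random
>>> import time
>>> import urllib.parse
>>> import urllib.request
>>>
>>> # Sketch: fire a few random word-combination queries at Solr and time them.
>>> with open("wordlist.txt") as f:
>>>     words = [line.strip() for line in f if line.strip()]
>>>
>>> for _ in range(10):
>>>     q = " ".join(random.sample(words, random.randint(1, 3)))
>>>     url = ("http://localhost:8983/solr/select?"
>>>            + urllib.parse.urlencode({"q": q}))
>>>     start = time.time()
>>>     with urllib.request.urlopen(url) as resp:
>>>         resp.read()
>>>     print("%s: %.1f ms" % (q, (time.time() - start) * 1000))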
>>>
>>> In my case the system works fine as long as I don't exceed the maximum
>>> number of requests per second it can handle. But that's not a big
>>> surprise. More interesting is the fact that, to a certain degree, after
>>> exceeding the maximum number of requests the response time seems to
>>> rise linearly for a little while and then exponentially. But that might
>>> also be a result of my test scenario.
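>>>
>>> One way to make sense of that shape (purely illustrative, with made-up
>>> numbers and a textbook single-server queue approximation, not anything
>>> measured here):
>>>
>>> # Toy model: below capacity the response time is roughly steady; once the
>>> # offered load exceeds the service rate, the backlog (and with it the
>>> # response time) keeps growing for as long as the overload lasts.
>>> service_rate = 100.0  # requests per second the server can handle (made up)
>>>
>>> for offered in (50, 90, 99, 110, 150):
>>>     if offered < service_rate:
>>>         # steady state: M/M/1 mean response time = 1 / (service - offered)
>>>         resp_ms = 1000.0 / (service_rate - offered)
>>>         print("%d req/s: ~%.0f ms" % (offered, resp_ms))
>>>     else:
>>>         print("%d req/s: backlog grows by %d req/s"
>>>               % (offered, offered - service_rate))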
>>>
>>> Nico
>>>
>>>
>>>
>>>> -----Original Message-----
>>>> From: Jacob Singh [mailto:[EMAIL PROTECTED]
>>>> Sent: Sunday, June 29, 2008 6:04 PM
>>>> To: solr-user@lucene.apache.org
>>>> Subject: Benchmarking tools?
>>>>
>>>> Hi folks,
>>>>
>>>> Does anyone have any bright ideas on how to benchmark Solr?
>>>> Unless someone has something better, here is what I am thinking:
>>>>
>>>> 1. Have a config file where one can specify info like how
>>>> many docs, how large, how many facets, and how many updates /
>>>> searches per minute
>>>>
>>>> 2. Use one of the various client APIs to generate XML files
>>>> for updates, using some kind of lorem ipsum text as a base,
>>>> and store them in a directory (a rough sketch of steps 2 and
>>>> 4 follows after this list)
>>>>
>>>> 3. Use siege to run the updates at whatever interval is
>>>> specified in the config, sending an update every x seconds
>>>> and removing each file from the directory
>>>>
>>>> 4. Generate a list of search queries based upon the facets
>>>> created, and build a urls.txt with all of these search URLs
>>>>
>>>> 5. Run the searches through siege
>>>>
>>>> 6. Monitor the output using Nagios to see where load kicks in.
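>>>>
>>>> A rough sketch of what steps 2 and 4 could look like (the field
>>>> names, facet values, document counts, and paths are assumptions
>>>> for the sake of the example, not a real schema):
>>>>
>>>> import os
>>>> import random
>>>> from xml.sax.saxutils import escape
>>>>
>>>> # Step 2 sketch: write Solr <add> XML documents built from lorem ipsum.
>>>> LOREM = ("lorem ipsum dolor sit amet consectetur adipiscing elit "
>>>>          "sed do eiusmod tempor incididunt ut labore").split()
>>>> FACETS = {"category": ["books", "music", "video"],
>>>>           "format": ["paperback", "cd", "dvd"]}
>>>>
>>>> os.makedirs("updates", exist_ok=True)
>>>> for i in range(100):
>>>>     body = escape(" ".join(random.choice(LOREM) for _ in range(50)))
>>>>     category = random.choice(FACETS["category"])
>>>>     doc = ('<add><doc>'
>>>>            '<field name="id">%d</field>'
>>>>            '<field name="category">%s</field>'
>>>>            '<field name="text">%s</field>'
>>>>            '</doc></add>' % (i, category, body))
>>>>     with open("updates/doc_%d.xml" % i, "w") as f:
>>>>         f.write(doc)
>>>>
>>>> # Step 4 sketch: facet-based search URLs for siege's urls.txt.
>>>> with open("urls.txt", "w") as f:
>>>>     for word in LOREM:
>>>>         for field, values in FACETS.items():
>>>>             f.write("http://localhost:8983/solr/select"
>>>>                     "?q=%s&facet=true&facet.field=%s&fq=%s:%s\n"
>>>>                     % (word, field, field, random.choice(values)))
>>>>
>>>> siege could then read urls.txt with its -f option, while a small
>>>> wrapper posts one file from updates/ to /update every x seconds.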
>>>>
>>>> This is not that sophisticated, and feels like it won't
>>>> really pinpoint bottlenecks, but it would approximately tell
>>>> us where a server will start to bail.
>>>>
>>>> Does anyone have any better ideas?
>>>>
>>>> Best,
>>>> Jacob Singh
>>>>
>>>>
>>>
>