Hi,
Did anyone write a Javabin to JSON converter and is willing to share it?
In our servlet we use a CommonsHttpSolrServer instance to execute a query.
The problem is that it returns Javabin format and we need to send the result
back to the browser using JSON format.
And no, the browser is
2009/10/23 Teruhiko Kurosaka :
> I'm trying to stress-test solr (nightly build of 2009-10-12) using JMeter.
> I set up JMeter to post pod_other.xml, then hd.xml, then commit.xml that only
> has a line "<commit/>", 100 times.
> Solr instance runs on a multi-core system.
>
> Solr didn't complain when the num
I'm trying to stress-test solr (nightly build of 2009-10-12) using JMeter.
I set up JMeter to post pod_other.xml, then hd.xml, then commit.xml that only
has a line "<commit/>", 100 times.
Solr instance runs on a multi-core system.
Solr didn't complain when the number of test threads is 1, 2, 3 or 4.
But
Hmm - came out worse than it looked. Here is a better attempt:
MergeFactor: 10

BUF (MB)   DOCS/S
32         37.40
80         39.91
120        40.74
512        38.25
Mark Miller wrote:
> Here is an example using the Lucene benchmark package. Indexing 64,000
> wikipedia docs (sorry for the formatting):
>
>
Here is an example using the Lucene benchmark package. Indexing 64,000
wikipedia docs (sorry for the formatting):
[java] > Report sum by Prefix (MAddDocs) and Round (4 about 32 out of 256058)
[java] Operation   round  mrg  flush  runCnt  recsPerRun  rec/s  elapse
8 GB is much larger than is well supported. It's diminishing returns over
40-100 and mostly a waste of RAM. Too high and things can break. It
should be well below 2 GB at most, but I'd still recommend 40-100.
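For reference, a minimal plain-Lucene sketch of where these knobs live (hypothetical path and values, assuming Lucene 2.9; on the Solr side the same settings are ramBufferSizeMB and mergeFactor in solrconfig.xml):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class RamBufferSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical index location; adjust for your environment.
        FSDirectory dir = FSDirectory.open(new File("/tmp/test-index"));
        IndexWriter writer = new IndexWriter(dir,
                new StandardAnalyzer(Version.LUCENE_29),
                true, IndexWriter.MaxFieldLength.UNLIMITED);
        // Keep the RAM buffer in the 32-100 MB range discussed above.
        writer.setRAMBufferSizeMB(48.0);
        // A modest merge factor keeps segment (and open file) counts down.
        writer.setMergeFactor(10);
        writer.close();
    }
}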
Fuad Efendi wrote:
> The reason for having a big RAM buffer is to lower the frequency of IndexWriter
The reason for having a big RAM buffer is to lower the frequency of IndexWriter
flushes and (subsequently) the frequency of index merge events, so that
merging a few larger files takes less time... especially
if the RAM buffer is intelligent enough (and big enough) to deal with 100
concurrent u
Hi Lici,
You may want to try the following snippet
---
SolrServer solr = new CommonsHttpSolrServer("http://localhost:8983/solr");
ModifiableSolrParams params = new ModifiableSolrParams();
params.set("wt", "json"); // can be json, standard, ...
Hi -
FYI, Lucid has just put out two white papers, one on Apache Lucene 2.9 and
one on Apache Solr 1.4:
- "What's New in Lucene 2.9" covers a range of performance improvements and
new features (per-segment indexing, TrieRange numeric analysis, and more),
along with recommendations for upgrading your
Well, I fixed my own problem in the end. For the record, this is the
schema I ended up going with:
I could have left it a trigram but went with a bigram because with this
setup, I can get queries to properly hit as long
Hi Hoss,
Thanks for the clarification.
I've written a unit test in order to simulate the date processing. At a high
level, the problem occurs only when using the JavaBin custom format
(&wt=javabin); in this case the dates come back shifted by the
environment's UTC offset.
On
I wouldn't use a RAM buffer of a gig - 32-100 is generally a good number.
Fuad Efendi wrote:
> I was partially wrong; this is what Mike McCandless (Lucene-in-Action, 2nd
> edition) explained at Manning forum:
>
> mergeFactor of 1000 means you will have up to 1000 segments at each level.
> A level
I was partially wrong; this is what Mike McCandless (Lucene-in-Action, 2nd
edition) explained at Manning forum:
mergeFactor of 1000 means you will have up to 1000 segments at each level.
A level 0 segment means it was flushed directly by IndexWriter.
After you have 1000 such segments, they are me
> <ramBufferSizeMB>1024</ramBufferSizeMB>
OK, it will lower the frequency of buffer flushes to disk (a buffer flush happens
when it reaches capacity, due to a commit, etc.); it will improve performance. It
is an internal buffer used by Lucene. It is not the total memory of Tomcat...
> <mergeFactor>100</mergeFactor>
It will deal with 100 segments, and each segment wi
Make it 10:
<mergeFactor>10</mergeFactor>
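For a rough sense of why a lower merge factor helps against "Too many open
files" (a back-of-the-envelope sketch; the ~8 files per segment figure is an
approximation and assumes non-compound segments):

public class OpenFileEstimate {
    public static void main(String[] args) {
        int filesPerSegment = 8; // rough assumption for a non-compound segment
        for (int mergeFactor : new int[] {100, 10}) {
            // Each level can hold up to mergeFactor segments before a merge runs.
            System.out.println("mergeFactor=" + mergeFactor + " -> up to ~"
                    + (mergeFactor * filesPerSegment)
                    + " open files per level (ulimit -n here is 256)");
        }
    }
}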
-Fuad
> -Original Message-
> From: Ranganathan, Sharmila [mailto:sranganat...@library.rochester.edu]
> Sent: October-23-09 1:08 PM
> To: solr-user@lucene.apache.org
> Subject: Too many open files
>
> Hi,
>
> I am getting too many open files error.
>
> Usually I test on
Hi all,
I'm wondering where a slave puts the files it pulls from the master during
replication. Do they go directly into the index/ directory, or somewhere else
until the transfer is completed and then get copied to index/?
Cheers!
Jerome.
--
Jerome Eteve.
http://www.eteve.net
jer...@eteve.net
Seems to happen when sorting on anything besides strictly score; even
"score desc, num desc" triggers it, using the latest nightly and the 10/14 patch.
Problem accessing /solr/core1/select. Reason:
4731592
java.lang.ArrayIndexOutOfBoundsException: 4731592
at
org.apache.lucene.search.FieldComparat
2009/10/23 Andrzej Bialecki :
> Jérôme Etévé wrote:
>>
>> Hi all,
>>
>> I'm using Solr trunk from 2009-10-12 and I noticed that the QTime
>> result is always a multiple of roughly 50ms, regardless of the handler used.
>>
>> For instance, for the update handler, I get :
>>
>> INFO: [idx1] webapp
Hi,
I am getting too many open files error.
Usually I test on a server that has 4GB RAM with 1GB assigned to Tomcat
(set JAVA_OPTS=-Xms256m -Xmx1024m); ulimit -n is 256 for this server, and
solrconfig.xml has the following settings:
<useCompoundFile>true</useCompoundFile>
<ramBufferSizeMB>1024</ramBufferSizeMB>
<mergeFactor>100</mergeFactor>
<maxMergeDocs>2147483647</maxMergeDocs>
1
In my
Folks:
If I issue two optimize requests with no intervening changes to the index,
will the second optimize request be smart enough to not do anything?
Thanks,
Bill
Clever, I think that would work in some cases.
On Fri, Oct 23, 2009 at 5:22 PM, Martijn v Groningen <
martijn.is.h...@gmail.com> wrote:
> No, this is actually not supported at the moment. If you really need to
> collapse on two different fields, you can concatenate the two fields
> together in another
Is there any way to make snapinstaller install the index in
snapshot20091023124543 (for example) from another disk? I am asking this
because I would prefer not to optimize the index on the master (if I do that,
it takes a long time to send it via rsync since it is so big). This way I would
just have to
That's a really good point. I didn't think about the GCs. Obviously we don't
want to have all the indexes hanging if a full GC occurs. We're running an 8+ GB
heap, so GCs are very important to us.
Thanks
Erik
wojtekpia wrote:
>
> I ran into trouble running several cores (either as Solr multi-core or a
Hi everybody,
I just started playing with Solr and think of it as a quite useful tool!
I'm using Solrj (Solr 1.3) in combination with an EmbeddedSolrServer. I
managed to get the server running and implemented a method (following the
Solrj Wiki) to create a document and add it to the server's ind
That's probably it! It is quite near the end of the field. I'll try upping
it and re-indexing.
Thanks :-)
Erick Erickson wrote:
>
> I'm really reaching here, but Lucene only indexes the first 10,000 terms by
> default (you can up the limit). Is there a chance that you're hitting that
> limit
I'm really reaching here, but Lucene only indexes the first 10,000 terms by
default (you can up the limit). Is there a chance that you're hitting that
limit? That 1cuk is past the 10,000th term
in record 2.40?
For this to be possible, I have to assume that the FieldAnalysis
tool ignores this limit.
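A minimal plain-Lucene sketch of raising that limit (hypothetical path, assuming
Lucene 2.9; the Solr-side equivalent is the maxFieldLength setting in
solrconfig.xml):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class MaxFieldLengthSketch {
    public static void main(String[] args) throws Exception {
        IndexWriter writer = new IndexWriter(
                FSDirectory.open(new File("/tmp/test-index")), // hypothetical path
                new StandardAnalyzer(Version.LUCENE_29),
                true,
                IndexWriter.MaxFieldLength.UNLIMITED); // LIMITED caps a field at 10,000 terms
        // Or raise the cap on an existing writer:
        writer.setMaxFieldLength(Integer.MAX_VALUE);
        writer.close();
    }
}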
I ran into trouble running several cores (either as Solr multi-core or as
separate web apps) in a single JVM because the Java garbage collector would
freeze all cores during a collection. This may not be an issue if you're not
dealing with large amounts of memory. My solution is to run each web ap
Hi All,
After a day of searching I'm quite confused.
I use the solrj client as follows:
CommonsHttpSolrServer solr = new
CommonsHttpSolrServer("http://127.0.0.1:8080/apache-solr-1.4-dev/test";);
solr.setRequestWriter(new BinaryRequestWriter());
ModifiableSolrParams params = new ModifiableS
Hi,
I have a field in my index called related_ids, indexed and stored, with the
following field type:
Several records in my index contain the token 1cuk in the related_ids field,
but only *some* of them are r
Jérôme Etévé wrote:
Hi all,
I'm using Solr trunk from 2009-10-12 and I noticed that the QTime
result is always a multiple of roughly 50ms, regardless of the handler used.
For instance, for the update handler, I get :
INFO: [idx1] webapp=/solr path=/update/ params={} status=0 QTime=0
INFO: [id
Probably multicore would give you better performance... I think the most
important factors to take into account are the size of the index and the
traffic you have to hold. With enough RAM you can hold 40 cores in a
single Solr instance (or even more), but depending on the traffic you have to
hol
We're not using multicore. Today, one Tomcat instance hosts a number of
indexes in the form of 10 Solr indexes (10 individual war files).
Marc Sturlese wrote:
>
> Are you using one single solr instance with multicore or multiple solr
> instances with one index each?
>
> Erik_l wrote:
>>
>> Hi,
>>
I have a requirement to be able to find hits within words in a free-form
id field. The field can have any type of alphanumeric data - it's as
likely it will be something like "123456" as it is to be "SUN-123-ABC".
I thought of using NGrams to accomplish the task, but I'm having a
problem. I set up
On Friday 23 October 2009 09:36:02 am AHMET ARSLAN wrote:
>
> --- On Fri, 10/23/09, Dan A. Dickey wrote:
>
> > From: Dan A. Dickey
> > Subject: help with how to search using spaces in the query for string
> > fields...
> > To: solr-user@lucene.apache.org
> > Date: Friday, October 23, 2009, 5:1
Is it a query related to Solr?
On Fri, Oct 23, 2009 at 6:46 PM, Radha C. wrote:
> Hi,
>
> We have a Spring-integrated CAS server running in Apache. We have an
> application in MovableType4 (PHP).
> Is it possible to configure the MT4 authentication module to redirect to the
> external CAS s
--- On Fri, 10/23/09, Dan A. Dickey wrote:
> From: Dan A. Dickey
> Subject: help with how to search using spaces in the query for string
> fields...
> To: solr-user@lucene.apache.org
> Date: Friday, October 23, 2009, 5:12 PM
> I'm having a problem with figuring
> out how to search for things
Are you using one single solr instance with multicore or multiple solr
instances with one index each?
Erik_l wrote:
>
> Hi,
>
> Currently we're running 10 Solr indexes inside a single Tomcat6 instance.
> In the near future we would like to add another 30-40 indexes to every
> Tomcat instance we
Hi,
Currently we're running 10 Solr indexes inside a single Tomcat6 instance. In
the near future we would like to add another 30-40 indexes to every Tomcat
instance we host. What are the factors we have to take into account when
planning for such deployments? Obviously we do know the sizes of the
I'm having a problem with figuring out how to search for things
that have spaces (just a single space character) in them.
For example, I have a field named "FileName" and it is of type string.
I've indexed a couple of documents, that have field FileName
equal to "File 10 10AM" and another that has
Hi all,
I'm using Solr trunk from 2009-10-12 and I noticed that the QTime
result is always a multiple of roughly 50ms, regardless of the handler used.
For instance, for the update handler, I get :
INFO: [idx1] webapp=/solr path=/update/ params={} status=0 QTime=0
INFO: [idx1] webapp=/solr path=
Hi,
We have a Spring-integrated CAS server running in Apache. We have an
application in MovableType4 (PHP).
Is it possible to configure the MT4 authentication module to redirect to the
external CAS server when the application receives a login request?
It would be helpful if there is any docume
Forget my mail about that, there was an old 1.3 webapp interfering with the
new webapp and I didn't immediately realise that
Sorry for the noise
Jörg
Why don't you directly hit Solr with wt=json? That will give you the
output as JSON
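A minimal sketch of doing that from a servlet-side Java client over plain HTTP,
bypassing SolrJ's response parsing entirely (hypothetical host, path and query):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class RawJsonQuery {
    public static void main(String[] args) throws Exception {
        String q = URLEncoder.encode("mmm", "UTF-8"); // hypothetical query term
        URL url = new URL("http://localhost:8983/solr/select?q=" + q + "&wt=json");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), "UTF-8"));
        StringBuilder json = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            json.append(line).append('\n');
        }
        in.close();
        // json now holds Solr's own JSON response writer output,
        // ready to be passed straight back to the browser.
        System.out.println(json);
    }
}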
On Fri, Oct 23, 2009 at 5:53 PM, SGE0 wrote:
>
> Hi,
>
> thx for the fast response.
>
> So, is there a way to convert the response (javabin) to JSON?
>
> Regards,
>
> S.
>
>
>
>
>
>
> Noble Paul നോബിള് नोब्ळ्-2
Hi,
thx for the fast response.
So, is there a way to convert the response (javabin) to JSON?
Regards,
S.
Noble Paul നോബിള് नोब्ळ्-2 wrote:
>
> CommonsHttpSolrServer will overwrite the wt param depending on the
> ResponseParser that is set. There are only two response parsers: javabin and
>
CommonsHttpSolrServer will overwrite the wt param depending on the
ResponseParser that is set. There are only two response parsers: javabin and
xml.
The qresponse.toString() is actually a String representation of a
NamedList object. It has nothing to do with JSON.
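In other words, the parser decides the wire format. A minimal sketch of switching
SolrJ from javabin to the XML parser, the only other built-in one (hypothetical
URL and query):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.impl.XMLResponseParser;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ParserSketch {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer solr =
                new CommonsHttpSolrServer("http://localhost:8983/solr"); // hypothetical URL
        // Default is the binary (javabin) parser; this makes SolrJ request wt=xml instead.
        solr.setParser(new XMLResponseParser());
        QueryResponse rsp = solr.query(new SolrQuery("mmm")); // hypothetical query
        System.out.println(rsp.getResults().getNumFound());
    }
}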
On Fri, Oct 23, 2009 at 2:11 PM, SGE0
On Oct 22, 2009, at 9:44 PM, Chris Hostetter wrote:
: > Why wouldn't you just query the function directly and leave out
the *:* ?
:
: *:* was just a quick example, I might have other constant score
queries, but I
: guess I probably could do a filter query plus the function query,
too.
No, this is actually not supported at the moment. If you really need to
collapse on two different fields, you can concatenate the two fields
together in another field while indexing and then collapse on that
field.
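A minimal SolrJ sketch of that indexing-time concatenation (field names and
values are hypothetical):

import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class CollapseKeySketch {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer solr =
                new CommonsHttpSolrServer("http://localhost:8983/solr"); // hypothetical URL
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        String brand = "acme";        // hypothetical field values
        String category = "widgets";
        doc.addField("brand", brand);
        doc.addField("category", category);
        // Extra field combining both values, so collapsing on it behaves
        // like collapsing on the (brand, category) pair.
        doc.addField("brand_category", brand + "_" + category);
        solr.add(doc);
        solr.commit();
    }
}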
Martijn
2009/10/23 Thijs :
> I haven't had time to actually ask this on the list myself
Hi,
I have the following problem:
Using CommonsHttpSolrServer (javabin format) I do a query with wt=json and
get the following response (by using qresponse = solr.query(params); and then
qresponse.toString()):
{responseHeader={status=0,QTime=16,params={indent=on,start=0,q=mmm,qt=dismax,wt=[javabin,
ja
Hi there,
I'm having trouble getting the latest Solr from svn (I'm using trunk from
Oct. 22nd, but it didn't work with an earlier revision either) to run in
Tomcat.
I've checked it out, built and ran the tests - all fine.
I run the example conf with jetty using the start.jar - all fine
Now I copy
You guessed it right. Solrj cannot query on multiple cores.
2009/10/23 Licinio Fernández Maurelo :
> As no answer is given, I assume it's not possible. It would be great to code
> a method like this:
>
> query(SolrServer, List)
>
>
>
> El 20 de octubre de 2009 11:21, Licinio Fernández Maurelo <
> lic
As no answer is given, I assume it's not possible. It would be great to code
a method like this:
query(SolrServer, List)
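A rough client-side sketch of what such a helper could look like (the method
shape and merging strategy are purely hypothetical; no score merging or paging
is attempted):

import java.util.List;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.common.SolrDocumentList;

public class MultiCoreQuerySketch {
    // Runs the same query against several cores (one SolrServer per core URL)
    // and concatenates the results.
    public static SolrDocumentList query(List<SolrServer> servers, SolrQuery query)
            throws SolrServerException {
        SolrDocumentList all = new SolrDocumentList();
        for (SolrServer server : servers) {
            SolrDocumentList results = server.query(query).getResults();
            all.addAll(results);
            all.setNumFound(all.getNumFound() + results.getNumFound());
        }
        return all;
    }
}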
On 20 October 2009 11:21, Licinio Fernández Maurelo <
licinio.fernan...@gmail.com> wrote:
> Hi there,
> is there any way to perform a multi-core query using solrj?
>
I haven't had time to actually ask this on the list myself, but seeing
this, I just had to reply. I was wondering this myself.
Thijs
On 23-10-2009 5:50, R. Tan wrote:
Hi,
Is it possible to collapse the results from multiple fields?
Rih