There is no point converting javabin to JSON. Javabin is an
intermediate format; it is converted to Java objects as soon as it
arrives. You just need a means to convert the Java objects to JSON.
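A minimal sketch of that last step, assuming SolrJ's SolrDocument and
naive hand-rolled escaping (a real JSON library would handle nested and
multi-valued fields properly; this flattens every value to a string):

    import org.apache.solr.common.SolrDocument;

    public class JsonUtil {

        // Naive SolrDocument -> JSON object string; every field value
        // is stringified via String.valueOf().
        public static String toJson(SolrDocument doc) {
            StringBuilder sb = new StringBuilder("{");
            boolean first = true;
            for (String field : doc.getFieldNames()) {
                if (!first) sb.append(',');
                first = false;
                sb.append('"').append(escape(field)).append("\":\"");
                sb.append(escape(String.valueOf(doc.getFieldValue(field))));
                sb.append('"');
            }
            return sb.append('}').toString();
        }

        private static String escape(String s) {
            return s.replace("\\", "\\\\").replace("\"", "\\\"");
        }
    }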
On Sat, Oct 24, 2009 at 12:10 PM, SGE0 wrote:
>
> Hi,
>
> Did anyone write a Javabin to JSON converter an
On Fri, Oct 23, 2009 at 11:46 PM, Jérôme Etévé wrote:
> Hi all,
> I'm wondering where a slave pulls the files from the master on replication.
>
> Is it directly to the index/ directory or is it somewhere else before
> it's completed and gets copied to index?
>
It is copied to a temp dir till all t
Hi,
You don't see the point. You really don't need to use SolrJ. All you
need to do is make an HTTP request with wt=json, read the output into
a buffer, and send it to your client.
--Noble
On Fri, Oct 23, 2009 at 9:40 PM, SGE0 wrote:
>
> Hi All,
>
> After a day of sear
Hi Paul,
Fair enough. Is this included in the SolrJ package? Any examples of
how to do this?
Stefan
Noble Paul നോബിള് नोब्ळ्-2 wrote:
>
> There is no point converting javabin to JSON. Javabin is an
> intermediate format; it is converted to Java objects as soon as it
> arrives. You just need
Hi Paul,
Thanks again.
Can I use this technique from within a servlet?
Do I need an instance of the HttpClient to do that?
I noticed I can instantiate the CommonsHttpSolrServer with an
HttpClient client.
I did not find any relevant examples of how to use this.
If you can help me out with this m
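Not SolrJ documentation, but a minimal sketch of that constructor,
assuming SolrJ 1.3/1.4 and Commons HttpClient 3.x; the multithreaded
connection manager is what a shared instance inside a servlet would
want:

    import java.net.URL;
    import org.apache.commons.httpclient.HttpClient;
    import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class SolrClientFactory {
        // Build one thread-safe server instance to share across
        // servlet requests.
        public static CommonsHttpSolrServer create() throws Exception {
            HttpClient client =
                new HttpClient(new MultiThreadedHttpConnectionManager());
            return new CommonsHttpSolrServer(
                new URL("http://localhost:8983/solr"), client);
        }
    }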
Hi guys,
I am indexing events in Solr, where every Event contains a startDate
and an endDate.
On the search page, I would like to have a Date Facet where users can
quickly browse through dates they are interested in.
I have a field daysForFilter in each document which stores timestamps from
t
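For what it's worth, a sketch of the kind of request Solr 1.4's date
faceting supports, assuming the daysForFilter field holds the per-day
timestamps (the start/end values are made up):

    http://localhost:8983/solr/select?q=*:*&facet=true
        &facet.date=daysForFilter
        &facet.date.start=2009-10-01T00:00:00Z
        &facet.date.end=2009-11-01T00:00:00Z
        &facet.date.gap=%2B1DAY

facet.date.gap takes a DateMath expression; the + in +1DAY has to be
URL-encoded as %2B.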
Hoping someone can help -
Problem:
Querying for non-English phrases such as Добавить does not return any
results under Tomcat but does work when using the Jetty example.
Both tomcat and jetty are being queried by the same custom (flash)
client and both reference the same solr/da
Hello,
Have you set the URIEncoding attribute to UTF-8 in Tomcat's server.xml
(on the Connector element)? Like:
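(a sketch of the stock HTTP connector with the attribute added; the
other attributes are just Tomcat's defaults)

    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443"
               URIEncoding="UTF-8"/>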
Hope this helps.
Best regards
czinkos
2009/10/24 Glock, Thomas :
>
> Hoping someone can help -
>
> Problem:
> Querying for non-English phrases such as Добавить does not return any
>
Thanks, but not working...
I did have the URIEncoding in place and just again moved the
URIEncoding attribute to be the first attribute - ensured I saved
server.xml, shut down Tomcat, deleted logs and cache, and still no
luck. It's probably something very simple and I'm just missing it.
Thank
I had an extremely specific use case: about 5000 documents per second
(small documents) update rate; some documents can be repeatedly sent to
SOLR with a different timestamp field (and the same unique document
ID). Nothing breaks, just a great performance gain which was impossible
with a 32MB buffer (- it ca
No need to use HttpClient. Use java.net.URL#openConnection() and read
the input stream into a buffer, and that is it.
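For example, a minimal sketch of that, assuming a local Solr URL and
UTF-8 output (the query string would need java.net.URLEncoder treatment
in real use):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class RawJsonFetch {
        // Fetch the raw wt=json response body as a string.
        public static String fetch(String query) throws Exception {
            URL url = new URL(
                "http://localhost:8983/solr/select?wt=json&q=" + query);
            BufferedReader in = new BufferedReader(new InputStreamReader(
                url.openConnection().getInputStream(), "UTF-8"));
            StringBuilder buf = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                buf.append(line).append('\n');
            }
            in.close();
            return buf.toString(); // hand this straight to your client
        }
    }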
On Sat, Oct 24, 2009 at 1:53 PM, SGE0 wrote:
>
> Hi Paul,
>
> thx again.
>
> Can I use this technique from within a servlet ?
>
> Do I need an instance of the HttpClient to do
Thanks for pointing to it, but it is so obvious:
1. "Buffer" is used as RAM storage for index updates.
2. "int" has 2^32 different values.
3. We can have _up_to_ 2GB of _Documents_ (stored as key->value pairs,
inverted index).
In the case of the 5 fields which I have, I need 5 arrays (up to 2GB
Mark, I don't understand this; of course it is use-case specific, but I
haven't seen any terrible behaviour with 8GB... 32MB is extremely small
for Nutch-SOLR-like applications, but it is acceptable for
Liferay-SOLR...
Please note also, I have some documents with the same IDs updated many
thousands time
This JavaDoc is incorrect, especially for SOLR, when you store a raw
(non-tokenized, non-indexed) "text" value with a document (which almost
everyone does). Try to store 1,000,000 documents with a 1000-byte
non-tokenized field: you will need 1GB just for this array.
On Sat, Oct 24, 2009 at 12:18 PM, Fuad Efendi wrote:
>
> Mark, I don't understand this; of course it is use-case specific, but
> I haven't seen any terrible behaviour with 8GB
If you had gone over 2GB of actual buffer *usage*, it would have
broken... Guaranteed.
We've now added a check in Lucene 2.9.1 that will throw an exception
if you try to go over 2048MB.
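For reference, this buffer is what solrconfig.xml exposes as
ramBufferSizeMB; a sketch of setting it explicitly (1024 is an
arbitrary example value, chosen to stay well under the limit):

    <indexDefaults>
      <!-- keep well below the 2048MB hard limit -->
      <ramBufferSizeMB>1024</ramBufferSizeMB>
    </indexDefaults>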
Try using example/exampledocs/test_utf8.sh to narrow down if the
charset problems you're hitting are due to servlet container
configuration.
-Yonik
http://www.lucidimagination.com
2009/10/24 Glock, Thomas :
>
> Thanks but not working...
>
> I did have the URIEncoding in place and just again move
On Sat, Oct 24, 2009 at 12:25 PM, Fuad Efendi wrote:
> This JavaDoc is incorrect especially for SOLR,
It looks correct to me... if you think it can be clarified, please
propose how you would change it.
> when you store raw (non
> tokenized, non indexed) "text" value with a document (which almost
Thanks - I now think it must be due to my client not sending enough (or
correct) headers in the request.
Tomcat does work when using an HTTP GET but is failing the POST from my
flash client.
For example, putting this in the URL bar of both Firefox and IE works
correctly:
http://localhost:808
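One thing worth checking, assuming the flash client builds the POST
body itself: Tomcat's URIEncoding only applies to the request URI (so
to GET parameters), while POST bodies default to ISO-8859-1 unless the
Content-Type header declares a charset. A sketch of what the client
would need to send (path and port are assumptions):

    POST /solr/select HTTP/1.1
    Host: localhost:8080
    Content-Type: application/x-www-form-urlencoded; charset=UTF-8

    q=%D0%94%D0%BE%D0%B1%D0%B0%D0%B2%D0%B8%D1%82%D1%8C

(that q value is Добавить percent-encoded as UTF-8)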
Hi Yonik,
I am still using pre-2.9 Lucene (taken from SOLR trunk two months ago).
2048 is a limit for documents, not for the array of pointers to
documents. And especially for the new "uninverted" SOLR features, plus
non-tokenized stored fields, we need 1GB to store 1MB of a simple field
only (size of fi
> > when you store raw (non
> > tokenized, non indexed) "text" value with a document (which almost
> > everyone does). Try to store 1,000,000 documents with 1000 bytes
> > non-tokenized field: you will need 1Gb just for this array.
>
> Nope. You shouldn't even need 1GB of buffer space for that.
Don't use POST. That is the wrong HTTP semantic for search results.
Use GET. That will make it possible to cache the results, will make
your HTTP logs useful, and all sorts of other good things.
wunder
On Oct 24, 2009, at 10:11 AM, Glock, Thomas wrote:
Thanks - I now think it must be due
Thanks -
I agree. However, my application requires that results be trimmed for
users based on roles. The roles are repeating values on the documents.
Users have many different role combinations, as do documents.
I recognize this is going to hamper caching - but using a GET will tend
to limit the siz
> If you had gone over 2GB of actual buffer *usage*, it would have
> broken... Guaranteed.
> We've now added a check in Lucene 2.9.1 that will throw an exception
> if you try to go over 2048MB.
> And as the javadoc says, to be on the safe side, you probably
> shouldn't go too near 2048 - perhaps 2
I am using Java 1.6.0_05.
To illustrate what is happening, I wrote this test program that has 10
threads adding a collection of documents and one thread optimizing the
index every 10 sec.
I am seeing that after the first optimize there is only one thread that
keeps adding documents. The other on
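The program itself is cut off above; a minimal sketch of that kind of
test, assuming SolrJ's CommonsHttpSolrServer and a schema with just an
id field, might look like:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class OptimizeConcurrencyTest {
        public static void main(String[] args) throws Exception {
            final CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");

            // 10 threads, each adding batches of 100 documents in a loop
            for (int t = 0; t < 10; t++) {
                final int threadId = t;
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            for (int batch = 0; ; batch++) {
                                List<SolrInputDocument> docs =
                                    new ArrayList<SolrInputDocument>();
                                for (int i = 0; i < 100; i++) {
                                    SolrInputDocument doc =
                                        new SolrInputDocument();
                                    doc.addField("id",
                                        threadId + "-" + batch + "-" + i);
                                    docs.add(doc);
                                }
                                server.add(docs);
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }).start();
            }

            // one thread optimizing the index every 10 seconds
            while (true) {
                Thread.sleep(10000);
                server.optimize();
            }
        }
    }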