On Fri, Jul 10, 2009 at 11:22 PM, danben wrote:
>
> What I have seen, however, is that the number of open FDs steadily increases
> with the number of cores opened and files indexed, until I hit whatever
> upper bound happens to be set (currently 100k). Raising machine-imposed
> limits, using t
On Sat, Jul 11, 2009 at 12:01 AM, Bradford Stephens <bradfordsteph...@gmail.com> wrote:
> Does the facet aggregation take place on the Solr search server, or
> the Solr client?
>
> It's pretty slow for me -- on a machine with 8 cores / 8 GB RAM and a 50
> million document index (about 36M unique values
On Sat, Jul 11, 2009 at 8:56 AM, J G wrote:
>
> I have a Solr JMX connection issue. I am running my JMX MBeanServer through
> Tomcat, meaning I am using Tomcat's MBeanServer rather than any other
> MBeanServer implementation.
> I am having a hard time trying to figure out the correct JMX Service U
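For reference, a minimal Java sketch of connecting to Tomcat's MBeanServer over the standard JMX RMI connector; the host, the port (9010), and the /jmxrmi JNDI path are assumptions that have to match the com.sun.management.jmxremote.* options the Tomcat JVM was started with:

import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SolrJmxProbe {
    public static void main(String[] args) throws Exception {
        // Typical RMI-based service URL; port 9010 is an assumption and must match
        // -Dcom.sun.management.jmxremote.port on the Tomcat side.
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9010/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // List everything that is registered; Solr's MBeans show up here once
            // <jmx/> is enabled in solrconfig.xml.
            Set<ObjectName> names = mbsc.queryNames(null, null);
            for (ObjectName name : names) {
                System.out.println(name);
            }
        } finally {
            connector.close();
        }
    }
}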
Hi all,
I am using Solr with Tika to index various file formats. I have used
ExtractingRequestHandler to get the data and render it in a GUI using VB.NET.
Now my requirement is to render the file as it is (with all formatting, e.g.
tables) or with almost the same look as the original file. So I need to receive
all
Michael, you're of course right, copyField would copy from the source.
The lack of built-in language awareness in Solr is unfortunate :(
I have not tried Lucid's BasisTech lemmatizer implementation, but check
with them whether they can support multiple languages in the same field.
--
Jan Høydahl
On 8. j
I am not familiar with Perl, so I cannot help you do it better in Perl.
The pseudocode should help.
You can do faster indexing if you post in multiple threads. If you
know Java, use StreamingUpdateSolrServer (in the SolrJ client).
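A minimal SolrJ sketch of the multithreaded posting idea, assuming the StreamingUpdateSolrServer client from SolrJ 1.4; the URL, queue size, thread count, and field names are only illustrative:

import java.io.IOException;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BulkIndexer {
    public static void main(String[] args) throws IOException, SolrServerException {
        // Buffers up to 1000 documents and posts them from 4 background threads.
        StreamingUpdateSolrServer server =
                new StreamingUpdateSolrServer("http://localhost:8983/solr", 1000, 4);

        for (int i = 0; i < 100000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-" + i);
            doc.addField("name", "document " + i);
            server.add(doc);  // queued; the worker threads stream it to Solr
        }
        server.commit();      // flush the queue and make the documents searchable
    }
}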
On Fri, Jul 10, 2009 at 4:28 PM, Shalin Shekhar Mangar wrote:
>
On Jul 11, 2009, at 4:23 AM, S.Selvam wrote:
Hi all,
I am using Solr with Tika to index various file formats. I have used
ExtractingRequestHandler to get the data and render it in a GUI using VB.NET.
Now my requirement is to render the file as it is (with all formatting, e.g.
tables) or almost a s
Hi guys --
Using Solr 1.4 functions at query time, can I dynamically boost
certain documents which: a) are not in the same range, i.e. have very
different document ids, b) have different boost values, and c) are part of a
long list (can be around 1,000 different document ids with 50
different boost values
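One commonly used approach (rather than function queries) is a dismax boost query that lists the ids with per-document boosts; a hedged SolrJ sketch, with the query string, ids, and boost factors made up for illustration:

import org.apache.solr.client.solrj.SolrQuery;

public class BoostedQueryExample {
    public static void main(String[] args) {
        SolrQuery query = new SolrQuery("laptop");
        query.set("defType", "dismax");
        // bq adds optional clauses that only influence scoring; each id carries its own boost.
        query.set("bq", "id:123^5.0 id:456^2.5 id:789^1.2");
        System.out.println(query);  // prints the encoded request parameters
    }
}

With roughly 1,000 ids the resulting boolean query gets large, so maxBooleanClauses in solrconfig.xml (default 1024) may need to be raised.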
I had been assuming that I could choose among the possible Tika output
formats when using the extracting request handler in extract-only mode,
as if from the CLI with the Tika jar:
-x or --xml    Output XHTML content (default)
-h or --html   Output HTML content
-t or --text   Ou
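A hedged SolrJ sketch of an extract-only request to the ExtractingRequestHandler, which returns Tika's output (XHTML by default) instead of indexing it; the handler path, the sample file name, and the single-argument addFile signature are assumptions matching the 1.4-era API:

import java.io.File;

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.apache.solr.common.util.NamedList;

public class ExtractOnlyExample {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        ContentStreamUpdateRequest req =
                new ContentStreamUpdateRequest("/update/extract");
        req.addFile(new File("sample.pdf"));  // document to run through Tika
        req.setParam("extractOnly", "true");  // return the extracted content, do not index

        NamedList<Object> response = server.request(req);
        // The extracted body comes back as a string entry keyed by the stream name.
        System.out.println(response);
    }
}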
Are we planning to implement caching (docsets, documents, results) per
segment reader, or is this something that's going to be in 1.4?
We can use a Solr range query like:
http://localhost:8983/solr/select?q=queryStr&fq=x:[10 TO 100] AND y:[20 TO 300]
or:
http://localhost:8983/solr/select?q=queryStr&fq=x:[10 TO 100]&fq=y:[20 TO 300]
My question:
How do I make this range query using SolrJ? Does anybody know?
enzhao...@gmail.com tha
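A minimal SolrJ sketch of the second URL form (two separate fq parameters); the field names x and y and the server URL are taken from the mail above:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class RangeQueryExample {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrQuery query = new SolrQuery("queryStr");
        // Each addFilterQuery call becomes its own fq parameter.
        query.addFilterQuery("x:[10 TO 100]");
        query.addFilterQuery("y:[20 TO 300]");

        QueryResponse response = server.query(query);
        System.out.println("Found " + response.getResults().getNumFound() + " documents");
    }
}

For the first URL form, a single query.addFilterQuery("x:[10 TO 100] AND y:[20 TO 300]") call would produce the combined filter instead.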