On Fri, Jul 27, 2012 at 6:32 PM, Federico Valeri wrote:
> Hi all,
Hi,
> I'm new to Solr, I have a problem with JSON format, this is my Java
> client code:
>
The Java client (SolrServer) can only operate with the XML or javabin
formats. If you need to get the JSON response from Solr from Java, you
will have to issue the HTTP request yourself and ask for wt=json.
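For illustration, here is a minimal sketch of fetching the JSON response over plain HTTP from Java (the URL, core and query are just placeholders, assuming a default single-core Solr on localhost:8983):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;

public class SolrJsonQuery {
    public static void main(String[] args) throws Exception {
        // Ask Solr explicitly for JSON with wt=json; SolrJ itself only
        // speaks XML or javabin.
        URL url = new URL("http://localhost:8983/solr/select?q=*:*&wt=json");
        URLConnection conn = url.openConnection();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // raw JSON response body
        }
        in.close();
    }
}

From there you can hand the response string to whichever JSON library you like.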
Where is the common index? On NFS?
If it is on a local hard disk (on the same computer), Solr uses the
file locking mechanism supplied by the operating system (Linux or
Windows). That may not be working correctly in your setup. See this for
more info on file locking:
http://wiki.apache.org/lucene-java/AvailableLock
Chris,
I'm not sure if Solr by itself can really do this (easily and/or well).
Have a look
at http://sematext.com/products/key-phrase-extractor/index.html which can do
exactly that, but without Solr. Some of the highlighted bits refer to trending
topics, though not using exactly that terminology.
Hi,
even though I have read a lot, none of my spellchecker configurations works
really well. I have reached a dead end. Maybe someone could help me solve
these challenges.
- How can I get case-sensitive suggestions, independent of the case given
in the query?
- How do I configure a 'did you mean' spellchecker?
Hi,
I'm trying to use two embedded Solr servers pointing to the same solr home /
index. So that's something like
System.setProperty("solr.solr.home", "SomeSolrDir");
CoreContainer.Initializer initializer = new CoreContainer.Initializer();
CoreContainer coreContainer = initializer.initialize();
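For reference, the usual continuation of that pattern looks roughly like this (a sketch, assuming SolrJ's EmbeddedSolrServer and the default unnamed core):

EmbeddedSolrServer server = new EmbeddedSolrServer(coreContainer, "");
// then server.query(...) / server.add(...) as with any other SolrServer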
You might want to look at turning down or eliminating your caches if
you're running out of RAM. Possibly some of them have a low hit rate,
which you can see on the Stats page. Caches with a low hit rate are
only consuming RAM and CPU cycles.
Also, using this JVM arg might reduce the memory footprint.
Erick,
Thank you for the courtesy of your reply.
I was able to figure out the problem and, for the benefit of the list, here is
the analysis. Judging by the caliber of those on this list, this is
likely too basic for the interests of most, but newbies (among whom I still
classify myself) might benefit.
ApacheCon Europe will be happening 5-8 November 2012 in Sinsheim, Germany
at the Rhein-Neckar-Arena. Early bird tickets go on sale this Monday, 6
August.
http://www.apachecon.eu/
The Lucene/Solr track is shaping up to be quite impressive this year, so
make your plans to attend.
defaultSearchField is deprecated in Solr 3.6. It is still supported, but the
"df" query request parameter overrides it. So, go into solrconfig.xml and
change the "df" parameter value from "text" to "Title".
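For example, the per-request override can also be set from SolrJ (a rough sketch; the server URL, query and field name are placeholders, and HttpSolrServer is the SolrJ 3.6+ name, CommonsHttpSolrServer in earlier releases):

SolrServer server = new HttpSolrServer("http://localhost:8983/solr");
SolrQuery query = new SolrQuery("laptop");
query.set("df", "Title"); // per-request override of the default search field
QueryResponse rsp = server.query(query);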
-- Jack Krupansky
-Original Message-
From: Lakshmi Bhargavi
Sent: Monday, Aug
Lakshmi - The field(s) used for querying need to be specified somewhere,
either as a default field or as a qf parameter to (e)dismax, etc.
Erik
On Aug 6, 2012, at 10:48 , Lakshmi Bhargavi wrote:
> Hi ,
>
> I have a question on the default search field defined in schema.xml or in
> th
Hi,
I have a question on the default search field defined in schema.xml or, in
the later versions, specified as part of the search handlers. Do we always
need to have this default search field defined in order to search when the
field name is not passed in the query?
Suppose there is a field named 'Title'.
Thanks a lot Jack for your prompt reply! The JIRA issue indeed talks about
what I want to accomplish. I will try out Tricia's solution.
As regards your question about whether I want "real" page numbers: yes,
ideally I want to get real page numbers (and am willing to put in the
additional parsing effort).
There is an old, open JIRA issue, SOLR-380 ("There's no way to convert search
results into page-level hits of a 'structured document'"), but no recent
activity on it. It does have a lot of interesting commentary, though. I
wouldn't get my hopes up.
See:
https://issues.apache.org/jira/browse/SOLR-380
Suppose we are provisioning search over large text documents (e.g., Word,
PPT). It would be nice to have the highlighter component return the page
numbers where the matches are found, so that they can be included in the
search result summaries. What is the most efficient way to accomplish this?
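One workaround that keeps stock Solr is to index one document per page, carrying the page number as a field, so that hits (and highlight snippets) already come back page-scoped. A rough SolrJ sketch, where the field names, fileId, pageNo, pageText and server are all hypothetical:

SolrInputDocument page = new SolrInputDocument();
page.addField("id", fileId + "_page_" + pageNo); // unique per page
page.addField("file_id", fileId);                // groups the pages of one file
page.addField("page_number", pageNo);
page.addField("content", pageText);
server.add(page);
// after server.commit(), search on "content" and read page_number from each hit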
Hi, I am trying to use a read/write Solr setup. What I mean is that I would
have a common location for the Lucene indexes and configure one instance of
Solr for reads and another instance only to write new indexes. Both instances
point to the same index location. The approach is given here:
http
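One detail that usually matters in such a shared-index setup: the read-only instance only sees newly written segments once its searcher is reopened, which can be triggered by sending it an empty commit after the writer has committed. A rough SolrJ sketch (hypothetical URL; HttpSolrServer is the SolrJ 3.6+ name):

// after the writing instance has committed new documents:
SolrServer reader = new HttpSolrServer("http://search-host:8983/solr");
reader.commit(); // empty commit: makes the read instance open a new searcher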
Is there anything you cannot do with Solr? :-)
Thanks a lot Erick! I only had to use . instead of ?, e.g.
...:8983/solr/terms?terms.fl=fieldname&terms.limit=100&terms.prefix=abcd&terms.regex.flag=case_insensitive&terms=true&terms.regex=abcd..
Adding terms.sort=index even allows me to sort as I need.
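In case a SolrJ version of that URL is useful, the same parameters can be set on a SolrQuery (a sketch; the field name is a placeholder and the /terms handler is assumed to be registered in solrconfig.xml):

SolrQuery q = new SolrQuery();
q.setRequestHandler("/terms");
q.set("terms", "true");
q.set("terms.fl", "fieldname");
q.set("terms.limit", "100");
q.set("terms.prefix", "abcd");
q.set("terms.regex", "abcd..");
q.set("terms.regex.flag", "case_insensitive");
q.set("terms.sort", "index");
QueryResponse rsp = server.query(q); // rsp.getTermsResponse() holds the terms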
Hi, we have a large Lucene index base created using Solr. It's split into 16
cores. Each core contains almost 10 GB of index. We have deployed 8
instances of Solr hosting two cores each. The logic for identifying where a
document resides, based on the document id, is built into the application.
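For what it's worth, application-side routing of that kind is often just a deterministic function of the document id, along these lines (a sketch; the URLs are hypothetical placeholders, and real deployments may use ranges or lookup tables instead of hashing):

public class CoreRouter {
    // one entry per core; hypothetical URLs
    private static final String[] CORE_URLS = {
        "http://host1:8983/solr/core0",
        "http://host1:8983/solr/core1"
        // ...
    };

    // pick the core that holds (or should hold) a given document id
    public static String coreFor(String docId) {
        int bucket = (docId.hashCode() & 0x7fffffff) % CORE_URLS.length;
        return CORE_URLS[bucket];
    }
}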