Hi Peter and all,
I merged my indexes today. Each index now stores 10M documents, so I only have
10 Solr cores.
And I used
java -Xmx1g -jar -server start.jar
to start the jetty server.
At first I deployed them all on one search server. The search speed is about 3s.
Then I noticed from the cmd output [...]
> My query string is always simple, like "design", "principle of design",
> "tom"
> E.g.:
> URL:
> http://localhost:7550/solr/select/?q=design&version=2.2&start=0&rows=10&indent=on
IMO, indeed with these types of simple searches caching (and thus RAM usage)
cannot be fully exploited, i.e., there isn't really [...]
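(For reference, these are the caches involved; a minimal solrconfig.xml sketch
with illustrative sizes, not values tuned for this index:)

    <!-- solrconfig.xml: query-time caches; sizes here are only examples -->
    <filterCache      class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="128"/>
    <queryResultCache class="solr.LRUCache"     size="512" initialSize="512" autowarmCount="128"/>
    <documentCache    class="solr.LRUCache"     size="512" initialSize="512" autowarmCount="0"/>

Roughly speaking, a bare q=design with no fq parameters only exercises the
queryResultCache and documentCache, so a big heap does little for the first hit.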
Thanks, Jonathan. I appreciate your reply.
Though I have a few ideas for implementing my requirement, I am stuck on a
few issues. It would be very helpful if you could guide me in resolving them.
As you suggested, I configured a single core with different fields.
For example, the core contains the following [...]
Hi,
I am following:
http://wiki.apache.org/solr/LoggingInDefaultJettySetup
Everything works fine except defining the logging properties file from jetty.xml.
Does this approach work for anyone else?
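(For comparison, the jetty.xml route I'd expect that wiki page to imply is a
System.setProperty call; a sketch, with an example file name:)

    <!-- jetty.xml: point JDK logging at a properties file (path is an example) -->
    <Call class="java.lang.System" name="setProperty">
      <Arg>java.util.logging.config.file</Arg>
      <Arg>etc/logging.properties</Arg>
    </Call>

One caveat: java.util.logging reads that property when LogManager first
initializes, which can happen before jetty.xml is processed, so passing
-Djava.util.logging.config.file=etc/logging.properties on the command line is
the more reliable variant.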
regards,
Lukas Kahwe Smith
m...@pooteeweet.org
Hi Geert-Jan,
Thanks for replying.
I know Solr has a query cache and it improves the search speed from the second
query onward. When I talk about search speed, I don't mean the speed of the
cache. When users search on our site, I don't want the first query to cost 10s
and all the following [...]
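(One standard way to avoid paying that first-query cost is to warm new
searchers with representative queries in solrconfig.xml; a sketch, where the
queries are just examples:)

    <!-- solrconfig.xml: run a few typical queries whenever a searcher is opened -->
    <listener event="firstSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst><str name="q">design</str><str name="start">0</str><str name="rows">10</str></lst>
        <lst><str name="q">principle of design</str></lst>
      </arr>
    </listener>

The same listener can also be registered for the newSearcher event so warming
happens again after commits.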
> 1) While doing a dismax query, I specify the query in double quotes for
> exact match. This works fine but I don't get any partial matches in the search
> results.
Rather than specifying your query in quotes for 'exact' matches, I was suggesting
configuring the analyzers differently for your fields [...]
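(As a rough illustration of that suggestion, with made-up field names: keep one
tokenized field for partial matches and one string field for exact matches, and
copy into both:)

    <!-- schema.xml sketch (hypothetical names): tokenized field for partial
         matches, untokenized copy for exact matches -->
    <field name="title"       type="text"   indexed="true" stored="true"/>
    <field name="title_exact" type="string" indexed="true" stored="false"/>
    <copyField source="title" dest="title_exact"/>

Then list both in the dismax qf, e.g. qf=title title_exact^5, so exact matches
score higher without losing partial matches.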
On 7/17/2010 3:28 AM, marship wrote:
Hi Peter and all,
I merged my indexes today. Each index now stores 10M documents, so I only have
10 Solr cores.
And I used
java -Xmx1g -jar -server start.jar
to start the jetty server.
How big are the indexes on each of those cores? You can easily get [...]
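(On a Unix-like box, assuming the usual core layout, something like this shows
the on-disk size per core:)

    # path is an example; each core normally keeps its index under <core>/data/index
    du -sh /path/to/solr/*/data/index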
Hi Shawn,
My indexes are smaller than yours. I only store "id" + "type" in the indexes, so
each "core" index is about 1-1.5GB on disk.
I don't have as many servers/VPS as you have. In my opinion, my problem is not
CPU. If possible, I would prefer to add more memory to fit the indexes in my
server. At least [...]
@Hemanth, I understand the functionality of "fl" and also of "facet"...
In my example I have mentioned that I need a faceted COUNT that
will be like merging both the fields.
If I am still not clear, then find below my Solr search result's
console view and what I am [...]
Can anybody help me with this? :(
-Original Message-
From: Marc Ghorayeb
Sent: Thursday, July 08, 2010 9:46 AM
To: solr-user@lucene.apache.org
Subject: Spellcheck help
Hello, I've been trying to get rid of a bug when using the spellcheck, but so
far with no success :( When searching for [...]
> I needed to get counts based on GRPID clubbed with GRPNAME, not different sets
Perhaps use facet.query to write your own "sub-queries" that collect
whatever you want?
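(Something along these lines, with hypothetical field names and values, and
URL-encoding omitted for readability; each facet.query comes back with its own
count in the response:)

    .../select?q=*:*&rows=0&facet=true
        &facet.query=GRPID:101 AND GRPNAME:"Design"
        &facet.query=GRPID:102 AND GRPNAME:"Principles"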
I don't know of a way to tell Solr to load all the indexes into
memory, but if you were to simply read all the files at the OS level,
that would do it. Under a unix OS, "cat * > /dev/null" would work. Under
Windows, I can't think of a way to do it off the top of my head, but if
you had Cygwin [...]
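(Roughly, with example paths; the Windows variant is untested:)

    # Unix: pull the index files into the OS page cache
    cat /path/to/core/data/index/* > /dev/null

    # Windows cmd: 'type' also reads whole files (rough, untested equivalent)
    type C:\solr\core\data\index\* > NUL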
Hi Lance,
Thanks for the reply!
I checked the settings and I don't think it has a multiValued setting. Here is
the current field configuration:
[...]
> Lance Norskog wrote:
>
> This can happen when there are multiple values in a field. Is 'first'
> a multi-valued field?
(10/07/18 4:51), Girish wrote:
Hi Lance,
Thanks for the reply!
I checked the settings and I don't think it has a multiValued setting. Here is
the current field configuration:
[...]
A tokenized field is one of the multiValued-type fields, since
multiple tokens (values) [...]
Spellchecking can also take a dictionary as its database. Is it
possible to create a dictionary of the terms you want suggested?
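(If a word list would work for you, a file-based dictionary can be wired up in
solrconfig.xml roughly like this; the file name and spellchecker name are
examples:)

    <!-- solrconfig.xml sketch: spellchecker built from a plain-text word list -->
    <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
      <lst name="spellchecker">
        <str name="name">file</str>
        <str name="classname">solr.FileBasedSpellChecker</str>
        <str name="sourceLocation">spellings.txt</str>
        <str name="characterEncoding">UTF-8</str>
        <str name="spellcheckIndexDir">./spellcheckerFile</str>
      </lst>
    </searchComponent>

Queries would then ask for it with spellcheck=true&spellcheck.dictionary=file.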
On Sat, Jul 17, 2010 at 10:40 AM, wrote:
> Can anybody help me with this? :(
>
> -Original Message- From: Marc Ghorayeb
> Sent: Thursday, July 08, 2010 9:46 AM
You cannot sort on text fields, only on string, number, and date fields. The
ArrayIndexOutOfBoundsException happens when there are more terms to
sort on than documents (I think?).
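(If that is what's happening, the usual workaround is to copy the tokenized
field into an untokenized one and sort on the copy; a sketch using 'first' from
the message below plus a hypothetical first_sort field:)

    <!-- schema.xml sketch: keep 'first' tokenized for searching, sort on a string copy -->
    <field name="first"      type="text"   indexed="true" stored="true"/>
    <field name="first_sort" type="string" indexed="true" stored="false"/>
    <copyField source="first" dest="first_sort"/>

and then use sort=first_sort asc in the request.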
On Sat, Jul 17, 2010 at 3:11 PM, Koji Sekiguchi wrote:
> (10/07/18 4:51), Girish wrote:
>>
>> Hi Lance,
>>
>> Thanks for the reply! [...]
There appears to be a problem with the recognition of the 'solr.solr.home'
property in SOLR 1.4.1 - or else I have a basic misunderstanding of how
'solr.solr.home' is intended to work.
Conduct the following experiment.
Take the standard SOLR 1.4.1 distribution.
Suppose the home directory is /U
One more piece of information. I notice that it does look for the schema in
~/solr_example1/solr/conf. A fatal error is generated if
~/solr_example1/solr/conf is removed. So, it appears to be localized to the
writing of the index files.
What is your data dir set to? It should say in the start up logging.
- Mark
http://www.lucidimagination.com (mobile)
On Jul 17, 2010, at 8:40 PM, Tracy Flynn wrote:
> One more piece of information. I notice that it does look for the schema in
> ~/solr_example1/solr/conf. A fatal error is generated [...]
That's a little telling:
INFO: Opening new SolrCore at /Users/johndoe/example1/solr/,
dataDir=./solr/data/
Since I'm running with ~/example2 as the current working directory, that
would explain it. The schema etc. is found in ~/example1/solr/conf, but the data
is being managed in ~/example2/.
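(That matches how a relative dataDir is resolved: against the JVM's current
working directory rather than solr home. The example solrconfig.xml defaults to
roughly the following, so either start Jetty from the instance directory, pass
-Dsolr.data.dir=..., or pin an absolute path; a sketch:)

    <!-- solrconfig.xml: a relative dataDir resolves against the CWD, not solr home -->
    <dataDir>${solr.data.dir:./solr/data}</dataDir>

    <!-- or pin it explicitly, e.g.: -->
    <dataDir>/Users/johndoe/example1/solr/data</dataDir>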