take a look at
http://wiki.apache.org/solr/QueryElevationComponent
On 20 July 2012 03:48, Siping Liu wrote:
> Hi,
> I have requirements to place a document to a pre-determined position for
> special filter query values, for instance when filter query is
> fq=(field1:"xyz") place document abc as
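For reference, the component above is driven by an elevate.xml file. A minimal sketch for pinning document "abc" when the query text is "xyz" might look like this; note that elevation keys on the query text, not on fq values, and the id/text here are just the placeholders from the question:

```xml
<!-- elevate.xml: untested sketch; "xyz" and "abc" are placeholders -->
<elevate>
  <query text="xyz">
    <doc id="abc" />
  </query>
</elevate>
```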
Hi Bill,
MMapDirectory uses the file system cache of your operating system, which has
the following consequences: on Linux, top & free should normally report only
*little* free memory, because the O/S uses all memory not allocated by
applications to cache disk I/O (and shows it as allocated, so having 0%
Hi,
I want to configure a Solr performance monitoring tool. I searched a lot and
found some tools like "Zabbix, SolrGaze",
but I am not able to decide which tool is better.
I want an integrated alerting option in the tool so that I can receive a message
when website performance goes down or in case of
Dear Michael,
My system is:
Ubuntu 12.04
8 GB RAM
4 cores
Concerning the connector in server.xml, I didn't modify anything, so all
values are defaults.
I have only one connector and no maxThreads is defined inside.
Must I add a line with maxThreads=?
On 20/07/2012 03:31, Michael Della B
Suneel,
there are many monitoring tools out there.
Zabbix is one of them; it is in PHP.
I think SolrGaze is as well (not sure).
I've been using HypericHQ, which is pure Java, and I have been satisfied with
it, though it leaves some room for improvement.
Other names include Nagios, also in PHP, and RRD
Try attaching &debugQuery=on to your query and look at the parsed
query. My first guess is that your default operator is AND (or q.op in
modern terms) and the ngram with "dl" in it is required.
Please paste the results here if that's not the cause.
Best
Erick
On Thu, Jul 19, 2012 at 7:29 AM, Hus
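The debugQuery suggestion can be tried with something like this (host, core, and query string are placeholders, not taken from the thread):

```
http://localhost:8983/solr/select?q=your_query&debugQuery=on
```

The parsedquery entry in the debug section of the response shows which clauses were marked as required.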
More details:
The first (around) 50 requests are very quick, and afterwards the connection
goes down (very slow) and sometimes freezes.
I'm trying to install a tool to see what happens.
On 20/07/2012 12:09, Bruno Mannina wrote:
Dear Michael,
My system is:
Ubuntu 12.04
8 GB RAM
4 cores
Concerning connector o
Default operators are ignored by edismax. See the "mm" parameter here:
http://wiki.apache.org/solr/DisMaxQParserPlugin
Best
Erick
On Thu, Jul 19, 2012 at 8:16 AM, amitesh116 wrote:
> Hi,
>
> We have used *dismax* in our SOLR config with /defaultOperator="OR"/ and
> some *mm * settings. Recently
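For reference, mm lives in the request handler defaults in solrconfig.xml; a sketch (the handler name and value are illustrative, not taken from the thread):

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <!-- mm=100% behaves like a default AND; mm=1 like OR -->
    <str name="mm">100%</str>
  </lst>
</requestHandler>
```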
You might try two queries. The first would get your authors; the second
would use the returned authors as a filter query and search your titles, grouped
by author, then combine the two lists. I don't know how big your corpus
is, but two queries may well be fast enough.
Best
Erick
On Thu, Jul 19
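The two-pass idea above can be sketched in plain Python as an in-memory stand-in for the two Solr queries (field names and data are made up for illustration):

```python
# A plain-Python sketch of the two-pass idea: an in-memory stand-in
# for the two Solr queries (field names and data are made up).

def two_pass_search(docs, author_term, title_term):
    # Pass 1: "query" the author field.
    authors = {d["author"] for d in docs if author_term in d["author"]}
    # Pass 2: search titles, restricted (like a filter query) to those
    # authors, then group the hits by author.
    grouped = {}
    for d in docs:
        if d["author"] in authors and title_term in d["title"]:
            grouped.setdefault(d["author"], []).append(d["title"])
    return grouped

docs = [
    {"author": "smith", "title": "lucene in action"},
    {"author": "smith", "title": "cooking basics"},
    {"author": "jones", "title": "lucene tuning"},
]
print(two_pass_search(docs, "smith", "lucene"))
# prints {'smith': ['lucene in action']}
```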
OK, a facet query is nice. I will try this feature and will reply to you, but I
think that's the point. Thanks a lot for the time spent :)
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-facet-multiple-constraint-tp3992974p3996186.html
Sent from the Solr - User mailing list archive at Nabble.com.
There's almost nothing to go on here. Please review:
http://wiki.apache.org/solr/UsingMailingLists
Best
Erick
On Thu, Jul 19, 2012 at 1:44 PM, Rohit wrote:
> Hi Brandan,
>
> I am not sure I get what's being suggested. Our delete worked fine, but now
> no new data is going into the system.
>
> Co
I have tried it and it works!
Thanks a lot again for this, dude!
Regards,
David.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-facet-multiple-constraint-tp3992974p3996189.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thanks, Erick. Actually it was going in as a phrase query. I set the following
filter and things are perfect.
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Friday, July 20, 2012 5:23 PM
To: solr-user@lucene.apache.org
Subject: Re: NGram Indexing Basic Que
NP, glad it's working for you!
On Fri, Jul 20, 2012 at 8:26 AM, davidbougearel
wrote:
> I have tried it and it works!
>
> Thanks a lot again for this, dude!
>
> Regards,
> David.
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Solr-facet-multiple-constraint-tp399297
Hello solr users,
I want to use the spellchecker component. All examples and tutorials I found
deal with one index. Our Solr setup has multiple cores, each one for a
single language. The spellchecker component should be based on the different
languages in the cores.
I am unsure how to handl
Hooking up Zabbix with Solr's / Java's JMX support is very powerful.
On Jul 20, 2012, at 5:58 AM, Suneel wrote:
> Hi,
>
> I want to configure a Solr performance monitoring tool. I searched a lot and
> found some tools like "Zabbix, SolrGaze",
> but I am not able to decide which tool is better.
>
>
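A sketch of that hookup (the port and the disabled auth/SSL are illustrative values for a trusted network, not a recommendation):

```shell
# 1) In solrconfig.xml, expose Solr's MBeans:  <jmx />
# 2) Start the servlet container with remote JMX enabled, then point
#    Zabbix's JMX support at the chosen port:
export JAVA_OPTS="$JAVA_OPTS \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9999 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```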
It varies. Last I used Tomcat (some years ago) it defaulted to the system
default encoding and you had to use -Dfile.encoding... to get UTF-8.
Jetty currently defaults to UTF-8.
On Jul 17, 2012, at 11:12 PM, William Bell wrote:
> -Dfile.encoding=UTF-8... Is this usually recommended for SOLR ind
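For Tomcat, the flag goes in the container's JVM options; a sketch (where to put it depends on your install, e.g. setenv.sh, or /etc/default/tomcat6 on Ubuntu):

```shell
# Force UTF-8 as the JVM's default encoding (the flag is standard Java;
# the placement shown here is an assumption about your container setup):
export JAVA_OPTS="$JAVA_OPTS -Dfile.encoding=UTF-8"
```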
Hi,
I started with SysUsage http://sysusage.darold.net/ which grabs all system
activities using Unix sar and system commands. Pretty easy and simple.
I also tried Zabbix. Very powerful, but for me too much to configure.
I now have Munin 2.0.2 installed for testing. Needs some Perl knowledge to get
> I want to configure a Solr performance monitoring tool. I
> searched a lot and
> found some tools like "Zabbix, SolrGaze"
You might be interested in http://sematext.com/spm/index.html
Hi all,
I'm trying to index float values that are not required; the input is an XML file.
I have problems avoiding the NFE (NumberFormatException).
I'm using Solr 3.6.
Index input:
- XML using DataImportHandler with XPathProcessor
Data:
Optional, Float, CDATA like: 2.0 or
Original Problem:
Empty values would cause a
I'm using the PayloadTermQuery and scoring documents using a custom
algorithm based on the payloads of the matching terms. The algorithm is
implemented in the custom PayloadFunction, and I have added an override for
explain(). However, the PayloadTermWeight explanation hides the details
of the pay
Hi Bruno,
It seems the version of Tomcat I was running was customized by
Canonical to have that parameter. You might try to add it in... I have
no idea what the default is.
Do you have any idea how much RAM you're allocating to the Tomcat
process? It could be that something is off there.
http://
Hi.
Sorry for the noise.
Managed to get it working on another PC, so it must be something very local to
the PC I am using.
From: John-Paul Drawneek
Sent: 19 July 2012 23:13
To: solr-user@lucene.apache.org
Subject: RE: solr 4.0 cloud 303 error
I did a se
: Processing chain (to avoid NFE): via XPath loaded into a field of type
: text with a trim and length filter, then via copyField directive into
: the tfloat type field
The root of the problem you are seeing is that copyField directives are
applied to the *raw* field values -- the analyzer use
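One way around this (a hypothetical sketch, not from the thread; the field, file, and function names are made up) is to drop blank values in a DIH ScriptTransformer before they ever reach the tfloat copyField target:

```xml
<!-- data-config.xml sketch: remove empty values so the raw copyField
     source never contains a blank string -->
<dataConfig>
  <script><![CDATA[
    function dropBlank(row) {
      var v = row.get('price');            // 'price' is a placeholder name
      if (v != null && v.toString().trim() == '') row.remove('price');
      return row;
    }
  ]]></script>
  <document>
    <entity name="item" processor="XPathEntityProcessor"
            transformer="script:dropBlank" forEach="/docs/doc" url="data.xml">
      <field column="price" xpath="/docs/doc/price" />
    </entity>
  </document>
</dataConfig>
```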
Thank you, Michael.
I overlooked that. Now it's working and the data got indexed.
Regards,
Lakshmi
--
View this message in context:
http://lucene.472066.n3.nabble.com/Reg-issue-with-indexing-data-from-one-of-the-sqlserver-DB-tp3996078p3996303.html
Sent from the Solr - User mailing list archive at Nabble.com.
Good to hear!
Michael Della Bitta
Appinions, Inc. -- Where Influence Isn’t a Game.
http://www.appinions.com
On Fri, Jul 20, 2012 at 2:56 PM, Lakshmi Bhargavi
wrote:
> Thank you, Michael.
>
> I overlooked that. Now it's working and got the data
Hello, Lakshmi,
The issue is that the fieldType you've assigned to the fields in your
schema does not perform any analysis on the string before indexing it,
so it will only do exact matches. If you want to match against
portions of the field value, use one of the "text" types that come in
the defa
defaultSearchField is the field that is queried if you don't explicitly
specify the fields to query on.
Please refer to the below link:
http://wiki.apache.org/solr/SchemaXml
On Sat, Jul 21, 2012 at 12:56 AM, Michael Della Bitta <
michael.della.bi...@appinions.com> wrote:
> Hello, Lakshmi,
>
> T
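In schema.xml it looks like this (the field name "text" is the stock example; adjust to your schema):

```xml
<defaultSearchField>text</defaultSearchField>
```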
Hi Michael,
I set -Xms1024m -Xmx2048m.
I will take a look at your link, thanks!!!
Actually, all my tests work slowly, even with 150 requests :'(
On 20/07/2012 18:17, Michael Della Bitta wrote:
Hi Bruno,
It seems the version of Tomcat I was running was customized by
Canonical to have that
> I would like to know whether there
> exists any add-on for semantic search in Solr?
Maybe this http://siren.sindice.com/ ?
Hum... by using
export JAVA_OPTS="-Xms1024m -Xmx2048m -XX:MaxPermSize=512m"
it seems to be very quick, but I need to add a delay between requests
because I lose answers even with HTTP 200 OK :'(
I must do more and more tests, but it's a start!
On 20/07/2012 22:40, Bruno Mannina
Hmm, are you seeing any errors in $CATALINA_HOME/logs/catalina.out
that suggest that you're running out of permgen space, or anything
else?
Michael Della Bitta
Appinions, Inc. -- Where Influence Isn’t a Game.
http://www.appinions.com
On Fri, Jul
Very strange, $CATALINA_HOME is empty?!!!
Help is welcome!
Another thing: in /usr/share/tomcat6/catalina.sh I added this line twice:
JAVA_OPTS="$JAVA_OPTS . -Xms1024m -Xmx2048m -XX:MaxPermSize=512m"
On 20/07/2012 23:02, Michael Della Bitta wrote:
Hmm, are you seeing any errors in $
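As an aside, instead of editing catalina.sh in place (which makes duplicated lines like the above easy), stock Tomcat reads an optional bin/setenv.sh. A sketch using the sizes from this thread (they are the poster's values, not recommendations; Ubuntu's package uses /etc/default/tomcat6 instead):

```shell
# $CATALINA_HOME/bin/setenv.sh -- sourced by catalina.sh if present
JAVA_OPTS="$JAVA_OPTS -Xms1024m -Xmx2048m -XX:MaxPermSize=512m"
```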
Sorry, if you're running the Ubuntu-provided Tomcat, your log should
be in /var/log/tomcat6/catalina.out.
Michael Della Bitta
Appinions, Inc. -- Where Influence Isn’t a Game.
http://www.appinions.com
On Fri, Jul 20, 2012 at 5:09 PM, Bruno Mannina
If I try to do:
cd /var/log/tomcat6
I get a permission denied??!!
The tomcat6/ directory exists and it has drwxr-x--- 2 tomcat6 adm
On 20/07/2012 23:16, Michael Della Bitta wrote:
Sorry, if you're running the Ubuntu-provided Tomcat, your log should
be in /var/log/tomcat6/catalina.out.
Mi
Bruno,
That sounds like either you need sudo permissions on your machine, or
you need help from someone who has them. Having a look at the logs in
there should be fairly revealing.
Failing that, you could always go back to Jetty. :)
Michael Della Bitta
--
Michael,
I'm the admin of my server; I have only 2 accounts.
If I use
sudo cd /var/log/tomcat6
I enter the password and I get the message:
sudo: cd: command not found
My account is an admin.
I don't understand what happens, but if I do:
sudo lsof -p pid_of_tomcat | grep log
I see several log files:
catali
On 21/07/2012 00:00, Bruno Mannina wrote:
catalinat.out <-- twice
Sorry, concerning this file, I did a
sudo cat .. | more and it's OK, I see the content.
On 21/07/2012 00:02, Bruno Mannina wrote:
On 21/07/2012 00:00, Bruno Mannina wrote:
catalinat.out <-- twice
Sorry, concerning this file, I did a
sudo cat .. | more and it's OK, I see the content.
And inside the catalina.out I have all my requests, without errors or
missing requests.
:'( it's
Hi Mark,
I am also facing the same issue when trying to index into SolrCloud using
DIH running on a non-leader server. The DIH server creates around 10k
threads and then an OOM "cannot create thread" error occurs.
Do you know when, or in which version, this issue will be solved? I think a
workaround for thi
In the catalina.out, I have only these few rows with:
.
INFO: Closing Searcher@1faa614 main
fieldValueCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
15
>
> Hi Mark,
> I am also facing the same issue when trying to index into SolrCloud using
> DIH running on a non-leader server. The DIH server creates around 10k
> threads and then an OOM "cannot create thread" error occurs.
> Do you know when, or in which version, this issue will be solved? I think a
> workaro
Why not just index one title per document, each having author and specialty
fields included? Then you could search titles with a user query and also
filter/facet on the author and specialties at the same time. The author bio
and other data could be looked up on the fly from a DB if you didn't
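A sketch of that flat, one-title-per-document model in schema.xml (field names are illustrative, borrowing the stock text_general type):

```xml
<field name="title"     type="text_general" indexed="true" stored="true" />
<field name="author"    type="string"       indexed="true" stored="true" />
<field name="specialty" type="string"       indexed="true" stored="true"
       multiValued="true" />
```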
If I try to run DIH on SolrCloud it can hit any one of the servers and
start the import process, but if we try to get the import status from any
other server, it returns that no import is running. Only the server that is
running the DIH gives back the correct import status. So if we run DIH
behind a
Can you include the entire exception? This is really necessary!
On Tue, Jul 17, 2012 at 2:58 AM, Oliver Schihin
wrote:
> Hello
>
> According to release notes from 4.0.0-ALPHA, SOLR-2396, I replaced
> ICUCollationKeyFilterFactory with ICUCollationField in our schema. But this
> throws an exception
Also, newrelic.com has a SaaS-based Solr monitor. This and Sematext are
the least work.
We use Zabbix internally in LucidWorks Cloud. We picked it for a
production site because it connects to JMX, monitors & archives,
graphs, and sends alerts. We could not find anything else that did all
of these we
> My data is in an enormous text file that is parsed in python,
You mean it is in Python s-expressions? I don't think there is a
parser in DIH for that.
On Thu, Jul 19, 2012 at 9:27 AM, Erick Erickson wrote:
> First, turn off all your soft commit stuff, that won't help in your situation.
> If yo