It's not the right place.
When you use java -Durl=http://... -jar post.jar data.xml,
the data.xml file must be a valid XML file. You should escape special chars
in this file.
I don't know how you generate this file.
If you use a Java program (or other scripts) to generate this file, you should
use xml t
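For illustration, here is a minimal sketch of that kind of escaping in plain Java. In real code you would normally let an XML library (for example javax.xml.stream.XMLStreamWriter) produce the file instead of escaping by hand:

```java
public class XmlEscape {
    // Escape the five characters that are special in XML text content.
    // This is a sketch, not a replacement for a proper XML writer.
    static String escapeXml(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '&':  sb.append("&amp;");  break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&apos;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // prints: AT&amp;T &lt;html&gt;
        System.out.println(escapeXml("AT&T <html>"));
    }
}
```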
Hello all,
We have a SOLR index filled with user queries and we want to retrieve the
ones that are more similar to a given query entered by an end-user. It is
kind of a "related queries" system.
The index is pretty big and we're using early-termination of queries (with
the index sorted so that th
I am sorry, but I don't understand what you mean.
I tried the HTMLStripCharFilter and PatternReplaceCharFilter. It doesn't
work.
Could you give me an example? Thanks!
I also tried:
--
View this message in context:
http://lucene.472066.n3.nabble.com/How-to-avoid-the-unexpect
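For anyone following along: a char filter only takes effect when it is declared inside the analyzer of the fieldType that the field actually uses. A minimal sketch of such a chain in schema.xml (the type name is a placeholder; adjust to your schema):

```xml
<!-- Strips HTML markup before tokenizing; name is a placeholder -->
<fieldType name="text_html" class="solr.TextField">
  <analyzer>
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```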
On 3/15/2012 5:53 PM, Shawn Heisey wrote:
Is there any way to get the threads within SUSS objects to
immediately exit without creating other issues? Alternatively, if
immediate isn't possible, the exit could take 1-2 seconds. I could
not find any kind of method in the API that closes down
Ludovic,
I looked through. First of all, it seems to me you don't amend the regular
"servlet" Solr server, but only the embedded one.
Anyway, the difference is that you stream DocList via a callback, but that
means you've instantiated it in memory and keep it there until it is
completely consume
Hello.
I have an SQL database with documents having an ID, TITLE and SUMMARY.
I am using the DIH to index the data.
In the DIH dataConfig, for every document, I would like to do something
like:
In other words, "A match on any document's title is worth twice as much as
a match on other fields"
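One common way to express that kind of weighting is the dismax query parser's qf parameter with per-field boosts. A hedged sketch, assuming fields named title and summary as in the message above:

```
select?defType=dismax&q=some+words&qf=title^2.0+summary^1.0
```

A boost of 2.0 on title makes a title match contribute roughly twice as much to the score as the same match on summary.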
I've got a build system that uses SolrJ. The currently running version
uses CommonsHttpSolrServer for everything. In the name of performance,
I am writing a new version that uses CommonsHttpSolrServer for queries
and StreamingUpdateSolrServer for updates.
One of the things that my program does is
Run the program under jconsole (visualgc on some machines). This
connects to your tomcat and gives a running view of memory use and
garbage collection activity.
On Thu, Mar 15, 2012 at 10:28 AM, Erick Erickson
wrote:
> See:
> http://javahowto.blogspot.com/2006/06/6-common-errors-in-setting-java-
Upgrading to jre7 made it go away!
Radek.
Radoslaw Zajkowski
Senior Developer
O°
proximity
CANADA
t: 416-972-1505 ext.7306
c: 647-281-2567
f: 416-944-7886
2011 ADCC Interactive Agency of the Year
2011 Strategy Magazine Digital Agency of the Year
http://www.proximityworld.com/
Join us on:
Fac
Hi,
I was looking for something similar.
I tried this patch :
https://issues.apache.org/jira/browse/SOLR-2112
it's working quite well (I've back-ported the code in Solr 3.5.0...).
Is it really different from what you are trying to achieve ?
Ludovic.
-
Jouve
France.
Hello Nicholas,
Looks like we are around the same point. Here is my branch
https://github.com/m-khl/solr-patches/tree/streaming there are only two
commits on top of it. And here is the test
https://github.com/m-khl/solr-patches/blob/streaming/solr/core/src/test/org/apache/solr/response/ResponseSt
Thanks for the reply Erik,
Yep, the project is working with distributed Solr applications (i.e.
shards) but not with the Solr-supplied shard implementation, rather a
custom version (not very different from it, to be honest).
I understand that Solr has scoring at its heart, which is something we are
The best advice I can give is to spend some time on the admin/analysis page.
For instance, I believe that your first index analysis chain will do nothing.
KeywordTokenizerFactory does not break up the incoming text at all. Since there
is only a single token, the ShingleFilter isn't doing anything e
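To make the point concrete, here is a sketch of an index analysis chain where the ShingleFilter does have multiple tokens to work with (the settings shown are assumptions; adjust to your needs):

```xml
<analyzer type="index">
  <!-- WhitespaceTokenizer produces multiple tokens,
       so the shingle filter has pairs to build -->
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <filter class="solr.ShingleFilterFactory"
          maxShingleSize="2" outputUnigrams="true"/>
</analyzer>
```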
Somehow you'd have to create a custom collector, probably queue off the docs
that made it to the collector and have some asynchronous thread consuming
those docs and sending them in bits...
But this is so antithetical to how Solr operates that I suspect my hand-waving
wouldn't really work out. The
See:
http://javahowto.blogspot.com/2006/06/6-common-errors-in-setting-java-heap.html
Your Xmx specification is wrong I think.
-Xmx2.0GB
-Xmx2.0G
-Xmx-2GB
-Xmx-2G
-Xmx-2.0GB
-Xmx-2.0G
-Xmx=2G
all fail immediately when I try them with raw Java from a command prompt.
Perhaps Tomca
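For reference, the forms the JVM does accept are a whole number followed by k, m, or g (upper- or lower-case), with no '=' sign, no minus sign, and no decimal point, for example:

```
-Xmx2g
-Xmx2G
-Xmx2048m
```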
Hi all,
Just testing Solr 3.5.0 and noticed a different behavior on this new
version:
select?rows=10&q=sig%3a("54ba3e8fd3d5d8371f0e01c403085a0c")&?
this query returns no results on my indexes, but works for SOLR 1.4.0
and returns "Java heap space java.lang.OutOfMemoryError: Java he
Is your balance field multi-valued by chance? I don't have much
experience with the stats component but it may be very inefficient for
larger indexes. How is memory/performance if you turn stats off?
On Thu, Mar 15, 2012 at 11:58 AM, harisundhar wrote:
> I am using apache solr 3.5.0
>
> I have a i
No, the deleted files do not get replicated. Instead, the slaves do the same
thing as the master, holding on to the deleted files after the new files are
copied over.
The optimize is obsoleting all of your index files, so maybe you should quit
doing that. Without an optimize, the deleted files will
I am using apache solr 3.5.0
I have an index size of 4.2 G on disk. I have about 30 M records of small
string and numeric fields
categ | name | age | sex | addr | balance | min balance | interest | tax |
customertype
I am running a solr server with
jdk1.6.0_21_64/bin/java -Xms512m -Xmx2048M -ja
The problem is that when replicating, the double-size index gets replicated
to slaves. I am now doing a dummy commit with always the same document and
it works fine. After the optimize and dummy commit process I just end up
with numDocs = x and maxDocs = x+1. I don't get the nice green checkmark
Say I have around 30-40 fields (SQL table columns) indexed using Solr from the
database. I concatenate those fields into one field by using the Solr copyField
directive and then make it the default search field, which I search.
If at the database level itself I perform concatenation of all those fields
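For reference, the schema.xml pattern being described looks roughly like this (field and type names here are placeholders for your own columns):

```xml
<!-- One catch-all field that receives a copy of every source column -->
<field name="alltext" type="text" indexed="true" stored="false"
       multiValued="true"/>
<copyField source="title" dest="alltext"/>
<copyField source="summary" dest="alltext"/>
<!-- one copyField line per source column -->
<defaultSearchField>alltext</defaultSearchField>
```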
Hi all,
#1. I'm on windows
When I run solr+jetty from where the start.jar lives using this as provided by
README:
java.exe -Dsolr.solr.home=multicore -jar start.jar
All works fine.
However if I try this:
c:\HP\solr\example>"C:\Program Files\Java\jre6\bin\java.exe"
-Dsolr.solr.home=C:/HP/sol
Hi all,
I'm facing problems regarding multiple Filter Queries in SOLR 1.4.1 -
I hope someone will be able to help.
Example 1 - works fine: {!tag=myfieldtag}(-(myfield:*))
Example 2 - works fine: {!tag=myfieldtag}((myfield:"Bio" | myfield:"Alexa"))
Please note that in Example 2, result sets
Here is the link to the screenshot I tried to send about my Topic/Scope issue.
http://imageupload.org/en/file/200809/solr-scope.png.html
-Original Message-
From: Valentin, AJ
Sent: Thursday, March 15, 2012 9:47 AM
To: 'solr-user@lucene.apache.org'
Subject: RE: Adding a Topics (Scope) pu
Hello,
I still have the same problem after installation.
Files are loaded:
~/appl/apache-solr-3.5.0/example $ java -Dsolr.solr.home=multicore/ -jar
start.jar 2>&1 | grep contrib
INFO: Adding
'file:/home/virus/appl/apache-solr-3.5.0/contrib/velocity/lib/velocity-tools-2.0.jar'
to classloader
INFO: Add
Erik,
Thank you, attached is the screenshot I tried to submit. I'm VERY new to using
Solr, so I'm going to try and make the best sense of your response with my
limited experience.
Thank you,
AJ
-Original Message-
From: Erik Hatcher [mailto:erik.hatc...@gmail.com]
Sent: Thursday, March
Screenshots/attachments don't make it through to the Apache lists, sorry.
As for limiting search results, if you've indexed your topics into a string
field, then you limit results by adding fq=field_name:value to your Solr requests. And
you can get a list of topics using a
q=*:*&rows=0&facet=on&facet.fie
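Spelling that suggestion out with a hypothetical field name of "topic" (facet.field is Solr's standard per-field faceting parameter), the two requests might look like:

```
q=*:*&rows=0&facet=on&facet.field=topic
q=drupal&fq=topic:"Some Topic"
```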
Hello all,
I have installed Solr search on my Drupal 7 installation. Currently, it works
as an All search tool. I'd like to limit the scope of the search with an
available pull-down to set the topic for searching.
Attached is a screenshot to better visualize what I have to get implemented
so
Or just ignore it if you have the disk space. The files will be cleaned up
eventually. I believe they'll magically disappear if you simply bounce the
server (but I work on *nix so I can't personally guarantee it). And replication
won't replicate the stale files, so that's not a problem either
Best
Hey there,
I've been working through the Solr Tutorial
(http://lucene.apache.org/solr/tutorial.html), using the example schema and
documents, just working through step by step trying everything out. Everything
worked out the way it should (just using the example queries and stuff), except
for the
Hi
I have a scenario, where I store a field which is an Id,
ID field
--
1
3
4
Description mapping
---
1 = "Options 1"
2 = "Options A"
3 = "Options 3"
4 = "Options 4a"
Is there a way in Solr so that whenever I query this field, it returns the
description instead of the ID? An
FWIW it looks like this feature has been enabled by default since JDK 6 Update
23:
http://blog.juma.me.uk/2008/10/14/32-bit-or-64-bit-jvm-how-about-a-hybrid/
François
On Mar 15, 2012, at 6:39 AM, Husain, Yavar wrote:
> Thanks a ton.
>
> From: L
Mike
Actually I'm not able to tell you what each value stands for, but what I can
tell you is where the information is coming from.
The interface requests /admin/system which is using
https://svn.apache.org/repos/asf/lucene/dev/trunk/solr/core/src/java/org/apache/solr/handler/admin/SystemInf
Thanks guys! I'll try out OpenNMS & Zabbix :)
Alex
On 03/14/2012 12:07 AM, Jan Høydahl wrote:
And here is a page on how to wire Solr's JMX info into OpenNMS monitoring tool.
Have not tried it, but as soon as a collector config is defined once I'd guess
it could be re-used, maybe shipped with
Thanks a ton.
From: Li Li [fancye...@gmail.com]
Sent: Thursday, March 15, 2012 12:11 PM
To: Husain, Yavar
Cc: solr-user@lucene.apache.org
Subject: Re: Solr out of memory exception
it seems you are using a 64-bit JVM (a 32-bit JVM can only allocate about 1.5GB). yo
Hi Alp,
if you have not changed how Solr logs in general, you should find the
log output in the regular server logfile. For Tomcat you can find this
in TOMCAT_HOME/catalina.out (or search for that name).
If there is a problem with your schema, Solr should be complaining about
it during applicatio
Hi folks,
I commented on this issue: https://issues.apache.org/jira/browse/SOLR-3238 ,
but I want to ask here if anyone has the same problem.
I use Solr 4.0 from trunk(latest) with tomcat6.
I get an error in New Admin UI:
This interface requires that you activate the admin request handlers,
add t
Would
http://www.lucidimagination.com/blog/2011/10/02/monitoring-apache-solr-and-lucidworks-with-zabbix/
work for your scenario?
Tommaso
2012/3/12 Alex Leonhardt
> Hi All,
>
> I was wondering if anyone knows of a free tool to use to monitor multiple
> Solr hosts under one roof ? I found some non
Hello.
Is it possible to switch master/slave on the fly without restarting the
server?
-
--- System
One Server, 12 GB RAM, 2 Solr Instances, 8 Cores,
1 Core with 45 Million Documents other Cores < 200.000
- Solr1 for Sear
It can reduce memory usage. For small-heap applications (less than 4GB), it
may speed things up.
But be careful: for large-heap applications, it depends.
You should do some tests for yourself.
Our application's test result is: it reduces memory usage but enlarges
response time. We use 25GB of memory.
http://lists.
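For context, "pointer compression" here is the HotSpot compressed-oops feature, which can be toggled explicitly on a 64-bit JVM (the heap sizes below are only examples):

```
java -XX:+UseCompressedOops -Xmx4g  -jar start.jar   # force it on
java -XX:-UseCompressedOops -Xmx25g -jar start.jar   # turn it off for a large heap
```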
Why should I enable pointer compression?
-- Original --
From: "Li Li";
Date: Thu, Mar 15, 2012 02:41 PM
To: "Husain, Yavar";
Cc: "solr-user@lucene.apache.org";
Subject: Re: Solr out of memory exception
it seems you are using 64bit jvm(32bit jvm can only