Yes, you are right. I was trying to indent the Solr JSON response.
Actually, the Solr JSON response is not exactly JSON; I couldn't understand the
output format. But I found a solution to solve the problem, i.e. making it
JSON and indenting the Solr result set. Here is the code segment.
public class Solr
Hi all,
After upgrading to Solr 3.4 we are having trouble with replication.
The setup is one indexing master with a few slaves that replicate the
indexes once every night.
The largest index is 20 GB and the master and slaves are on the same DMZ.
Almost every night one of the indexes (17 in to
Well, here's something that might just work. Using the Solr 3.4+ facet.prefix
parameter, as well as prefixing the values of the particular field I want to
facet based on the node neighbor ID, I get what I need.
Adding the field:
Then, for each value, I prefix it with {nodeId}-. Fo
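A minimal sketch of that prefixing trick, using only the Java standard library (the field name `neighbor_tags`, the node ID, and the helper names are my assumptions, not from the original post):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class FacetPrefixExample {
    // Value as it would be indexed: "{nodeId}-{value}"
    static String indexedValue(String nodeId, String value) {
        return nodeId + "-" + value;
    }

    // Query string that facets only on the values belonging to nodeId,
    // by constraining facet counts with facet.prefix={nodeId}-
    static String facetQuery(String nodeId) {
        return "q=*:*&facet=true&facet.field=neighbor_tags"
             + "&facet.prefix=" + URLEncoder.encode(nodeId + "-", StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(indexedValue("n42", "red"));  // n42-red
        System.out.println(facetQuery("n42"));
    }
}
```

At query time, only facet values starting with `n42-` are counted, which is what restricts the facet to that node's neighborhood.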
Thank you for your answer. I read it and I use this filter in my schema.xml in
Solr:
But this filter doesn't handle all words with their suffixes and prefixes.
This means when I search for 'rain', Solr doesn't show me any document that has
'rainy'.
--
View this message in context:
http://lucene.47
This seems to be solution of my problem.. i definitely try this.
Thanks for your reply.
Meghana.
: are you using either dismax or edismax? They don't respect
: the defaultOperator. Use the mm param to get this kind
: of behavior.
FWIW: that has not been true since Solr 3.1 ... mm's default value is now
based on q.op (which gets its default from defaultOperator in the
schema.xml)
By Eric
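As a sketch of the defaults described above (query text is arbitrary; parameter spellings are standard Solr request parameters):

```
# An explicit mm always wins:
q=apache solr&defType=edismax&mm=2
# Since Solr 3.1, with no mm given, the default follows q.op:
q=apache solr&defType=edismax&q.op=AND   (behaves like mm=100%)
q=apache solr&defType=edismax&q.op=OR    (behaves like mm=0%)
```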
: I'm assuming the processCommit method is called for each
: UpdateRequestProcessor chain class when the records are being commited to
: the Lucene index.
Not exactly.
RequestHandlers that want to modify the index do so by asking the SolrCore
for a processor chain (either by name or just get th
I'm stumped. For some reason on my local setup, Solr is not logging all that
it should. None of the searches, updates, or errors are logged at all.
I just did a fresh install of Tomcat 7, Solr 3.5 and it's all the same. No
logging. The *only* thing I change to the default configuration is the
locat
: I am indexing a table that has a field by the name of solr_keywords of type
: text in mysql. And it contains null values also. While creating index in
: solr, this field is not getting indexed.
what exactly is the problem you are seeing?
If your documents are being indexed w/o error, but some
oops... the query looks more like this
http://solr/select?&q=*id:*myid.doc&rows=0
I wanted something similar for a file crawler/uploader in C#, but don't even
want to upload the document if it exists... I'm currently querying Solr
first... Is this optimal, silly, or otherwise?
var url = "http://solr/select?&q=myid.doc&rows=0";
var txt = webclient.DownloadString(url);
if
On Mon, Nov 28, 2011 at 10:49 AM, Roberto Iannone
wrote:
> Hi Michael,
>
> thx for your help :)
You're welcome!
> 2011/11/28 Michael McCandless
>
>> Which version of Solr/Lucene were you using when you hit power loss?
>>
> I'm using Lucene 3.4.
Hmm, which OS/filesystem? Unexpected power loss
The problem has been resolved. My disk subsystem was a bottleneck for quick search.
I put my indexes in RAM and I see very nice QTimes :)
Sorry for taking your time, guys.
On Mon, Nov 28, 2011 at 4:02 PM, Artem Lokotosh wrote:
> Hi all again. Thanks to all for your replies.
>
> On this weekend I'd made som
On Mon, Nov 28, 2011 at 4:36 PM, Phil Hoy wrote:
> Added issue: https://issues.apache.org/jira/browse/SOLR-2926
> Please let me know if more information needs adding to JIRA.
>
> Phil
>
Thanks, I'll followup on the issue
--
lucidimagination.com
Added issue: https://issues.apache.org/jira/browse/SOLR-2926
Please let me know if more information needs adding to JIRA.
Phil
-Original Message-
From: Robert Muir [mailto:rcm...@gmail.com]
Sent: 28 November 2011 19:32
To: solr-user@lucene.apache.org
Subject: Re: DirectSolrSpellChecker o
: To Erick's point: Can you be more specific than 'certain circumstances'?
:
: Can anyone provide an example of when fieldValueCache would be used?
either FC or FVC is used "most of the time" -- which one is used depends
on whether the field is multivalued or not, and whether it's tokenized or not:
Technically it could? I'm just not sure if the current spellchecking
APIs allow for it? But maybe someone has a good idea on how to easily
expose this.
I think it's a good idea.
Care to open a JIRA issue?
On Mon, Nov 28, 2011 at 1:31 PM, Phil Hoy wrote:
> Hi,
>
> Can the DirectSolrSpellChecker b
Interestingly, Ahmet Arslan just answered a virtually identical
question:
"It is possible with this plugin.
https://issues.apache.org/jira/browse/SOLR-1604"
Best
Erick
On Mon, Nov 28, 2011 at 9:09 AM, vrpar...@gmail.com wrote:
> Hello all,
>
> i want to search on phrase with fuzzy, e.g. q=w
I'm not sure what you're really after here. Indent how?
The indent parameter is to make the reply readable, it really
has nothing to do with printing the query.
Could you show an example of what you want for output?
Best
Erick
On Mon, Nov 28, 2011 at 8:42 AM, halil wrote:
> I step one more. bu
Hi,
Can the DirectSolrSpellChecker be used for autosuggest but defer until request
time the name of the field used to create the dictionary? That way I don't
have to define spellcheckers specific to each field, which for me is not really
possible as the fields I wish to spell check are DynamicFie
Hi all,
I'm trying to use PatternTokenizer and not getting expected results.
Not sure where the failure lies. What I'm trying to do is split my
input on whitespace except in cases where the whitespace is preceded
by a hyphen character. So to do this I'm using a negative look behind
assertion in th
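The intent described above (split on whitespace unless a hyphen precedes it) can be sanity-checked with plain Java regex before wiring the pattern into PatternTokenizerFactory; `(?<!-)\s+` is my assumption about the intended negative lookbehind:

```java
import java.util.Arrays;

public class LookbehindSplit {
    // Split on runs of whitespace NOT preceded by a hyphen character
    static String[] split(String input) {
        return input.split("(?<!-)\\s+");
    }

    public static void main(String[] args) {
        // The space after "two-" is kept inside the token
        System.out.println(Arrays.toString(split("one two- three four")));
        // [one, two- three, four]
    }
}
```

The same regex would then go into the tokenizer's `pattern` attribute, with XML escaping as needed.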
> thanks. Is this not then a jetty setting? I'll search for
> that.
I don't use jetty but there is a logging section here :
http://wiki.apache.org/solr/SolrJetty
On 11/28/2011 3:26 AM, Jones, Graham wrote:
Hello
Brief question: How can I clean-up excess files after performing optimize
without restarting the Tomcat service?
Detail follows:
I've been running several SOLR cores for approx 12 months and have recently
noticed the disk usage of one of them
Aha! That sounds like it might be it!
On Mon, Nov 28, 2011 at 4:16 PM, Husain, Yavar wrote:
>
> Thanks Kai for sharing this. Ian encountered the same problem so marking him
> in the mail too.
>
> From: Kai Gülzau [kguel...@novomind.com]
> Sent: Monday, No
Thanks Kai for sharing this. Ian encountered the same problem so marking him in
the mail too.
From: Kai Gülzau [kguel...@novomind.com]
Sent: Monday, November 28, 2011 6:55 PM
To: solr-user@lucene.apache.org
Subject: RE: DIH Strange Problem
Do you use Java
Hah, I've just come on here to suggest you do the same thing! Thanks
for getting back to me - and interesting we both came up with the same
solution!
Now I have the problem that running a delta update updates the
'dataimport.properties' file - but then just re-fetches all the data
regardless! Weir
Hi Ahmet,
thanks. Is this not then a jetty setting? I'll search for that.
RR
Ahmet Arslan wrote:
I have not managed to figure out how to prevent verbose
output of the solr server. I assume the verbosity on the
server side slows down the response and it would be
preferable to turn it off?
If a
Hi Michael,
thx for your help :)
2011/11/28 Michael McCandless
> Which version of Solr/Lucene were you using when you hit power loss?
>
> I'm using Lucene 3.4.
> There was a known bug that could allow power loss to cause corruption,
> but this was fixed in Lucene 3.4.0.
>
> Unfortunately, th
It looks like you are using the plural stemmer, you might want to look into
using the Porter stemmer instead:
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#Stemming
François
On Nov 28, 2011, at 9:14 AM, mina wrote:
> I use solr 3.3,I want solr index words with their suffixes. whe
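A minimal sketch of the suggested change in schema.xml (the field type name and the surrounding analyzer chain are assumptions; only the Porter filter line is the actual suggestion):

```xml
<fieldType name="text_stemmed" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- Porter stemming maps 'rainy' and 'rain' to the same stem -->
    <filter class="solr.PorterStemFilterFactory"/>
  </analyzer>
</fieldType>
```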
I tried the below URL and got the same output. Any other suggestions?
http://localhost:8983/solr/select?q=rangefld:[5000%20TO%206000]&fl=lily.id,rangefld&hl=on&rows=5&wt=json&indent=on&hl.fl=rangefld&hl.highlightMultiTerm=true&hl.usePhraseHighlighter=true&hl.useFastVectorHighlighter=false
On Mon, Nov
I use Solr 3.3. I want Solr to index words with their suffixes. When I index
'book' and 'books' and search for 'book', Solr shows any document that has 'book'
or 'books', but when I index 'rain' and 'rainy' and search for 'rain', Solr shows
any document that has 'rain'. But I want Solr to show any document that
> Can I apply a fuzzy query and slop together... like
>
> q="hello world~0.5"~3
>
> I am getting an error when applying it like this. I want to make
> both fuzzy search
> and slop work.
>
> How can I do this? Can anybody help me?
It is possible with this plugin. https://issues.apache.org/jira/browse
> I am doing a fuzzy search in my Solr; it's working well for
> a single term, but
> when searching for phrases I get either a bulk of data
> or very little data. Is
> there any good way of getting a satisfactory amount of data
> with nice
> accuracy.
>
> 1) q:"kenny zemanski" : 9 records
> 2) keny~0.7 ze
> and output is
>
> {
>   "responseHeader":{
>     "status":0,
>     "QTime":4,
>     "params":{
>       "hl.highlightMultiTerm":"true",
>       "fl":"lily.id,rangefld",
>       "indent":"on",
>       "hl.useFastVectorHighlighter":"false",
>       "q":"rangefld:[5000 TO 6000]",
>       "hl.fl
Which version of Solr/Lucene were you using when you hit power loss?
There was a known bug that could allow power loss to cause corruption,
but this was fixed in Lucene 3.4.0.
Unfortunately, there is no easy way to recreate the segments_N file...
in principle it should be possible and maybe not t
Hi,
Can I apply a fuzzy query and slop together... like
q="hello world~0.5"~3
I am getting an error when applying it like this. I want to make both fuzzy
search and slop work.
How can I do this? Can anybody help me?
Thanks in advance.
Meghana
Hi all again. Thanks to all for your replies.
This weekend I made some interesting tests, and I would like to share them
with you.
First of all I made speed test of my hdd:
root@LSolr:~# hdparm -t /dev/sda9
/dev/sda9:
Timing buffered disk reads: 146 MB in 3.01 seconds = 48.54 MB/se
Hello,
I'm trying to implement automatic document classification and store
the classified attributes as an additional field in Solr document.
Then the search goes against that field like
q=classified_category:xyz. The document classification is currently
implemented as an UpdateRequestProcessor an
I've gone one step further, but still no indent. I wrote the below code segment
query.setQuery( "marka_s:atak*" )
.setFacet(true)
.setParam("indent", "on")
;
and here is the resulted query string
q=marka_s%3Aatak*&facet=true&indent=on
-halil agin.
On Mon, Nov 28, 2011 at
Do you use Java 6 update 29? There is a known issue with the latest mssql
driver:
http://blogs.msdn.com/b/jdbcteam/archive/2011/11/07/supported-java-versions-november-2011.aspx
"In addition, there are known connection failure issues with Java 6 update 29,
and the developer preview (non producti
Hi List,
I am new to the Solr and Lucene world. I have a simple question. I wrote the
below code segment and it works.
public class SolrjTest {
public static void main(String[] args) throws MalformedURLException,
SolrServerException{
ClassPathXmlApplicationContext c = new
ClassPathXmlApplic
Hi All,
I am doing a fuzzy search in my Solr; it's working well for a single term, but
when searching for phrases I get either a bulk of data or very little data. Is
there any good way of getting a satisfactory amount of data with nice
accuracy?
1) q:"kenny zemanski" : 9 records
2) keny~0.7 zemansi~0.7 A
Hi All,
I am making a fuzzy search in my Solr application like below:
q:squre~0.6
I want some prefix length that should not be matched fuzzily; say in this
example, I want my fuzzy query not to attempt to match "squ", while the rest
of the term goes through fuzzy search. I am doing it by applying
Hi all,
after a power supply interruption my Lucene index (about 28 GB) looks like
this:
18/11/2011 20:29 2.016.961.997 _3d.fdt
18/11/2011 20:29 1.816.004 _3d.fdx
18/11/2011 20:29              89 _3d.fnm
18/11/2011 20:30 197.323.436 _3d.frq
18/11/2011 20:30 1.816.
Hi,
I am using a mysql db to store all my data. I had finished configuring my
data import handler to get data into solr and then realized about taking
care of deletes.
This is what i did to handle delete
1) a mysql table 'DeletedContentMapping' with deleted id's
2) deletedPkQuery - to fe
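A hedged sketch of how pieces 1) and 2) typically fit together in DIH's data-config.xml (entity, column, and table names other than DeletedContentMapping are hypothetical; deletedPkQuery runs on delta-import to find keys of rows deleted since the last index time):

```xml
<entity name="content"
        pk="id"
        query="SELECT id, body FROM Content"
        deltaQuery="SELECT id FROM Content
                    WHERE updated &gt; '${dataimporter.last_index_time}'"
        deletedPkQuery="SELECT id FROM DeletedContentMapping
                        WHERE deleted_at &gt; '${dataimporter.last_index_time}'"/>
```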
I figured out the solution, and Microsoft, not Solr, is the problem here :):
I downloaded and built the latest Solr (3.4) from sources and finally hit the
following line of code in Solr (where I put my debug statement):
if(url != null){
LOG.info("Yavar: getting handle to driver manager:
Hi Ian
I downloaded and built the latest Solr (3.4) from sources and finally hit the
following line of code in Solr (where I put my debug statement):
if(url != null){
LOG.info("Yavar: getting handle to driver manager:");
c = DriverManager.getConnection(url, initProps);
I tried this url :
http://localhost:8983/solr/select?q=rangefld:[5000%20TO%206000]&fl=lily.id,rangefld&hl=on&rows=5&wt=json&indent=on&hl.fl=*,rangefld&hl.highlightMultiTerm=true&hl.usePhraseHighlighter=true&hl.useFastVectorHighlighter=false
and output is
{
"responseHeader":{
"status":0,
Right.
This is REALLY weird - I've now started from scratch on another
machine (this time Windows 7), and got _exactly_ the same problem !?
On Mon, Nov 28, 2011 at 7:37 AM, Husain, Yavar wrote:
> Hi Ian
>
> I am having exactly the same problem what you are having on Win 7 and 2008
> Server http
Hello
Brief question: How can I clean-up excess files after performing optimize
without restarting the Tomcat service?
Detail follows:
I've been running several SOLR cores for approx 12 months and have recently
noticed the disk usage of one of them is growing considerably faster than the
rate
On 24 November 2011 15:18, Tomasz Wegrzanowski
wrote:
> On 22 November 2011 14:28, Jan Høydahl wrote:
>> Why do you need spaces in the replacement?
>>
>> Try pattern="\+" replacement="plus" - it will cause the transformed
>> charstream to contain as many tokens as the original and avoid the
>>
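The quoted suggestion would, as a sketch, look like this in the analyzer chain (assuming a PatternReplaceCharFilterFactory is what is being configured, since the discussion is about the transformed charstream):

```xml
<!-- Replace "+" with "plus" without introducing spaces, so the
     transformed charstream yields the same number of tokens -->
<charFilter class="solr.PatternReplaceCharFilterFactory"
            pattern="\+" replacement="plus"/>
```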