iorixxx wrote:
>
> Hi Jean,
> Since you use WDF, your best bet may be to modify your query:
>
> "cross link* compiler"~50
>
> "crosslink* compiler"~50
>
Thanks, but
"crosslink* compiler"~50 returns nothing (which seems correct to me, though), and
"cross link* compiler"~50 does not return exactly what
Ben,
It's absolutely possible for MLT to find documents similar to another
indexed document; that's its primary use case. For externally supplied
data, you will need to supply one blob of text. You could derive this by
concatenating the applicable parts of your structured data before handing
it to Solr.
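A minimal sketch of that concatenation step in plain JavaScript; the record shape and field names are hypothetical, and the resulting blob would then be handed to Solr (e.g. as the MoreLikeThis handler's stream.body content):

```javascript
// Sketch: flatten a structured record into one text blob suitable for
// MLT's external-data input. Field names here are made-up examples.
function buildMltBlob(doc, fields) {
  return fields
    .map(function (f) { return doc[f]; })
    .filter(function (v) { return v != null && v !== ''; })
    .join(' ');
}

// Example record with hypothetical fields:
var record = { title: 'Solr in Action', summary: 'Search with Solr', sku: '' };
var blob = buildMltBlob(record, ['title', 'summary', 'sku']);
// blob === 'Solr in Action Search with Solr'
```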
Can someone translate this error for me? My data looks pretty clean, so I am
not sure what is going on here.
Mar 30, 2011 5:21:52 AM org.apache.solr.common.SolrException log
SEVERE: Error processing "legacy" update
command:com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character '0'
(
Hello,
I'm not sure which default Similarity Luke uses, but I doubt it's
SolrSimilarity (which I modified).
I see the field to change the similarity used, but typing
org.apache.solr.search.SolrSimilarity doesn't work (Default similarity
remains selected).
Adding &debugQuery=true to the query o
Do you want to tokenize subwords based on dictionaries? A bit like the
decompounding of German words?
If so, something like this could help: DictionaryCompoundWordTokenFilter
http://search.lucidimagination.com/search/document/CDRG_ch05_5.8.8
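A hedged sketch of how that filter might be wired into a field type; the type name, tokenizer choice, dictionary path, and parameter values below are placeholders to adjust for your setup:

```xml
<!-- Sketch only: name, dictionary path, and size limits are examples. -->
<fieldType name="text_compound" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.DictionaryCompoundWordTokenFilterFactory"
            dictionary="german-words.txt"
            minWordSize="5" minSubwordSize="2" maxSubwordSize="15"
            onlyLongestMatch="true"/>
  </analyzer>
</fieldType>
```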
Ludovic
http://lucene.apache.org/java/2_4_0/api/org
I have the following table, which holds category names in a specific
language; title_en = English, title_nl = Dutch:
[music_categories]
id         int            Unchecked
title_en   nvarchar(50)   Unchecked
title_nl   nvarchar(50)   Unchecked
In my data-config.xml I have:
People
Is there a way to upgrade an existing index from Solr 1.4 to Solr 4 (trunk)?
When I configured Solr 4 and launched it, it complained about an incorrect
Lucene file version (3 instead of the old 2).
Are there any procedures to convert the index?
Best Regards
Alexander Aristov
*start*: The offset to start at in the result set. This is useful for
pagination.
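The start/rows arithmetic for pagination can be sketched as follows (plain JavaScript, function name is hypothetical):

```javascript
// Sketch: translate a 1-based page number into Solr's start/rows
// pagination parameters.
function pageToParams(page, pageSize) {
  return { start: (page - 1) * pageSize, rows: pageSize };
}

var p = pageToParams(3, 10);
// p.start === 20, p.rows === 10
```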
On Wed, Mar 30, 2011 at 10:33 AM, stockii wrote:
> Hello.
>
> I sometimes get a lot of results, and Solr or Jetty gives me the error:
> "SEVERE: java.lang.IllegalStateException: Form too large1787345>100"
> numFound
How is a multivalued field in the DIH config file passed to a
ScriptTransformer function? To be more clear, is it an array or a string?
When I do var result = row.get('fieldname'), I am unable to apply any string
manipulation functions to result.
Thanks,
Neha
--
View this message in context:
http://luc
Hi Jean,
Since you use WDF, your best bet may be to modify your query:
"cross link* compiler"~50
"crosslink* compiler"~50
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/ComplexPhraseQueryParser-and-wildcards-tp2742244p2754034.html
> Sent from the Solr - User mailin
That did the trick! thanks!
On Wed, Mar 30, 2011 at 1:31 PM, Steven A Rowe wrote:
> Hi Marcelo,
>
> Try adding the 'method="text"' attribute to your tag, e.g.:
>
>
>
> If that doesn't work, there is another attribute "omit-xml-declaration"
> that might do the trick.
>
> See http://www.w3.org/T
Hi Marcelo,
Try adding the 'method="text"' attribute to your tag, e.g.:
If that doesn't work, there is another attribute "omit-xml-declaration" that
might do the trick.
See http://www.w3.org/TR/xslt#output for more info.
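For reference, both attributes mentioned live on the stylesheet's xsl:output element; a hedged sketch (the encoding value is just an example):

```xml
<xsl:output method="text" omit-xml-declaration="yes" encoding="UTF-8"/>
```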
Steve
> -Original Message-
> From: Marcelo Iturbe [mailto:mar
Hello,
I currently have Solr set up and working, and I am doing tests with the XSL
stylesheets.
I had no problem generating HTML files, but while trying to generate JSON
files I noticed something odd.
I am calling Solr with the following URL:
http://172.16.0.30:8983/solr/gcontacts/select?q=apache*
Wow that sounds rad!
-Original Message-
From: Robert Muir [mailto:rcm...@gmail.com]
Sent: Wednesday, March 30, 2011 9:39 AM
To: solr-user@lucene.apache.org
Subject: Re: FW: no results searching for stadium seating chairs
There are some new features in 3.1 to make it easier to tune this
s
Hi all,
I have a field set up like this:
And I have some records:
RECORD1
companion to mankind
pooch
RECORD2
companion to womankind
man's worst enemy
I would like to write a query that will match the beginning of a word within
the term. Here is the query I would use as it exists now:
ht
> Both of the clustering algorithms that ship with Solr (Lingo and STC) are
> designed to allow one document to appear in more than one cluster, which
> actually does make sense in many scenarios. There's no easy way to force
> them to produce hard clusterings because this would require a complete
Hi Ramdev,
Both of the clustering algorithms that ship with Solr (Lingo and STC) are
designed to allow one document to appear in more than one cluster, which
actually does make sense in many scenarios. There's no easy way to force
them to produce hard clusterings because this would require a compl
Hello,
It is currently possible to use the MoreLikeThis handler to find documents
similar to a given document in the index.
Is there any way to feed the handler a new document in XML or JSON (as one
would do for adding to the index) and have it find similar documents without
indexing the target d
Hello everybody,
Referring to the link http://wiki.apache.org/solr/CoreAdmin,
I've created a solr.xml file as follows:
So before using SolrCore, I instantiated a SolrServer to index and search
documents as follows:
System.setProperty("solr.solr.home",
There are some new features in 3.1 to make it easier to tune this
stuff, especially:
http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_3_1/solr/src/java/org/apache/solr/analysis/StemmerOverrideFilterFactory.java
This takes a tab-separated list of words->stems, and sets a flag on any
down
Yes, you can set engine-specific parameters. Check the comments in your
snippet.
> Hi:
> I recently included the Clustering component into Solr and updated the
> requestHandler accordingly (in solrconfig.xml). Snippet of the config for
> the clustering:
>
>name="clusteringComponent"
>
Hi:
I recently included the Clustering component into Solr and updated the
requestHandler accordingly (in solrconfig.xml).
Snippet of the config for the clustering:
default
org.carrot2.clustering.lingo.LingoClusteringAlgorithm
20
Thanks for the input! We've discussed using synonyms to help here. We
have product managers who are supposed to add keywords onto SKUs as well,
which our indexer will automatically consume. Getting them to do that
is a different matter! haha
-Original Message-
From: Jonathan Rochkind [mai
Thank you all for your responses. The field had already been set up with
positionIncrementGap=100 so I just needed to add in the slop.
On Tue, Mar 29, 2011 at 6:32 PM, Juan Pablo Mora wrote:
> >> A multiValued field
> >> is actually a single field with all data separated with
> positionIncrement
iorixxx wrote:
>
> Can you paste your field type definition?
>
Here it is:
-
-
-
-
-
-
-
-
-
Jean-Michel
--
View this message in context:
http://lucene.472066.n3.nabble.com/ComplexPhraseQueryParser-and-wildcards-tp2742244p2754034.html
You're right! It works, thank you very much.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Special-characters-index-tp2753707p2753939.html
Sent from the Solr - User mailing list archive at Nabble.com.
I wrote this snippet but get an exception
--
View this message in context:
http://lucene.472066.n3.nabble.com/Concatenate-multivalued-DIH-fields-tp2749988p2753910.html
Sent from the Solr - User mailing list ar
Not on your result page. Stored data is not affected by analysis. With the
filter, café finds both café and cafe, and vice versa.
On Wednesday 30 March 2011 16:18:29 royr wrote:
> Thanks for your quick answer.
>
> I'm not sure if the ASCIIFoldingFilterFactory is what I needed. In my
> results I ju
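To illustrate the point about stored values: a hedged sketch of a field type that folds accents at index and query time while the stored value (what you display) keeps its accents. The type name and tokenizer are placeholders:

```xml
<!-- Sketch only: folding happens in the analysis chain; stored values
     are returned verbatim, so "café" still displays with its accent. -->
<fieldType name="text_folded" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ASCIIFoldingFilterFactory"/>
  </analyzer>
</fieldType>
```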
Thanks for your quick answer.
I'm not sure if the ASCIIFoldingFilterFactory is what I need. In my
results I just want to see the special characters. If I search for cafe I
want café in my results, and if I search for café I also want café in my
results. The filter you sent me will change the valu
I am getting data from an XML file. If possible, would you be able to guide
me with a code snippet for a ScriptTransformer that does this? (I am sorry
if this is very basic; I am a newbie to Solr.)
Thanks,
Neha
--
View this message in context:
http://lucene.472066.n3.nabble.com/Concatenate-multivalued-
Suggesting an alternative way: adjust your query and use the RDBMS concat
function?
On Wed, Mar 30, 2011 at 3:22 PM, neha wrote:
> Hi, when I tried to use the TemplateTransformer, it concatenated the entire
> multivalued field with the other, not each element of the multivalued fields:
> [Lars L., He
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.ASCIIFoldingFilterFactory
On Wednesday 30 March 2011 15:44:26 royr wrote:
> Hello,
>
> I have a question about Solr and special characters. How can I search for
> cafe or café and get the following results in both situations:
>
>
Thanks, it works!
--
View this message in context:
http://lucene.472066.n3.nabble.com/Synonyms-whitespace-problem-tp2730953p2753720.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hello,
I have a question about Solr and special characters. How can I search for
cafe or café and get the following results in both situations:
cafe pub humphreys
café brokecity
café fillham
cafe langer
The same for characters like -,^,ë
--
View this message in context:
http://lucene.472066.
Hi, when I tried to use the TemplateTransformer, it concatenated the entire
multivalued field with the other, not each element of the multivalued fields:
[Lars L., Helle K., Thomas A., Jes] [Thomsen, Iversen, Brinck, Olesen],
instead of Lars L. Thomsen, Helle K. Iversen, Thomas A. Brinck, Jes Olesen.
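What the poster wants is element-wise pairing of the two lists rather than list-level concatenation; a minimal sketch in plain JavaScript (outside DIH, function name hypothetical):

```javascript
// Sketch: pair up two parallel multivalued fields element by element,
// instead of joining each list as a whole.
function zipJoin(firstNames, lastNames) {
  return firstNames.map(function (f, i) {
    return f + ' ' + lastNames[i];
  });
}

var names = zipJoin(['Lars L.', 'Helle K.'], ['Thomsen', 'Iversen']);
// names === ['Lars L. Thomsen', 'Helle K. Iversen']
```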
> I'm using ComplexPhraseQueryParser and I'm quite happy with
> it.
> However, there are some queries using wildcards that are not
> working.
>
> Example: I want to do a proximity search between the word
> compiler and the
> expression 'cross linker' or 'cross linking' or 'cross
> linked' ...
>
> ("cross-
Let's see the schema file and a sample input document, please. Possibly
there's something you're overlooking...
And what is your evidence that the document isn't overwritten? Because an
update is really a delete followed by an add. The delete just marks the
document as deleted; it doesn't physical
You can also just go up to Jenkins (the build server) and check out the nightly
build of your choice. Start at:
https://builds.apache.org/hudson/view/S-Z/view/Solr/job/Solr-3.x/
Click on the date of your choice and you should see a page with
the build artifacts on it.
Best
Erick
On Tue, Mar 29,
Are you actually sending in documents with the field specified in uniqueKey
with existing values?
On Wednesday 30 March 2011 13:59:15 Carl-Erik Herheim wrote:
> Yes, I have.
>
> Den 30.03.2011 13:41, skrev Markus Jelsma:
> > Have you defined a uniqueKey in your schema?
> >
> > http://wiki.apach
Yes, I have.
Den 30.03.2011 13:41, skrev Markus Jelsma:
Have you defined a uniqueKey in your schema?
http://wiki.apache.org/solr/SchemaXml#The_Unique_Key_Field
On Wednesday 30 March 2011 13:16:02 Carl-Erik Herheim wrote:
Hi list,
I've got a multi-core solr index that is indexed through solrj
Have you defined a uniqueKey in your schema?
http://wiki.apache.org/solr/SchemaXml#The_Unique_Key_Field
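A hedged sketch of the schema.xml declaration in question; the field name "id" and its type are only examples:

```xml
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>
```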
On Wednesday 30 March 2011 13:16:02 Carl-Erik Herheim wrote:
> Hi list,
> I've got a multi-core solr index that is indexed through solrj. The
> problem is that already existing documents don't
Hi list,
I've got a multi-core Solr index that is indexed through SolrJ. The
problem is that already existing documents don't get overwritten when
they are re-indexed. This means we have to empty the index whenever we
want to update it, which isn't really an option. From what I've been
reading
Hi Eric,
Yes, we are using the dismax parser. It was more the "all search fields"
selected use case that we were wondering about.
We specify omitNorms=true for the catch_all field, which we have found to
yield better results in our case, but we don't do that for all the other
fields, so, a
On 27.03.2011, at 01:05, Israel Ekpo wrote:
> Lukas,
>
> How do you think it should have been designed?
>
> Most libraries are not going to have all the features that you need and while
> there may be features about the library that you do not like others may
> really appreciate them being th
Okay, I thought the heap wasn't the problem...
But what should I look for in jconsole? What is it telling me?
I don't understand the point of the monitoring.
-
--- System
One Server, 12 GB RAM, 2 Solr Instances, 7 Cores,
1 Core wi
Neha, if you just need to combine them without further logic,
http://wiki.apache.org/solr/DataImportHandler#TemplateTransformer
should be enough.
The first example for ScriptTransformer should be pretty clear, no?
The function (which you define as the ScriptTransformer) will retrieve the
current row as
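A hedged sketch of such a ScriptTransformer function. The field names are placeholders, and the mock objects at the bottom only simulate the java.util.List/Map that the real DIH (running the script under Rhino) would hand in; since the values are Java objects, JS string functions don't apply to them directly and each element must be converted first:

```javascript
// Sketch of a DIH ScriptTransformer function. Inside DIH, `row` is a
// java.util.Map and multivalued entries are java.util.ArrayList
// instances; convert each element to a string before manipulating it.
// 'author' and 'author_joined' are placeholder field names.
function joinAuthors(row) {
  var values = row.get('author');
  var parts = [];
  for (var i = 0; i < values.size(); i++) {
    parts.push(String(values.get(i)));
  }
  row.put('author_joined', parts.join(', '));
  return row;
}

// Plain-JS stand-ins for the Java List/Map the real transformer receives:
var list = {
  items: ['Lars L.', 'Helle K.'],
  size: function () { return this.items.length; },
  get: function (i) { return this.items[i]; }
};
var row = {
  data: { author: list },
  get: function (k) { return this.data[k]; },
  put: function (k, v) { this.data[k] = v; }
};
joinAuthors(row);
// row.get('author_joined') === 'Lars L., Helle K.'
```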
Stockii, don't be sad, but read about JVM memory usage and garbage
collection. Increasing and decreasing memory consumption is normal; you
would only worry if the left bar reaches 100% and stays at 100%.
Start overhere:
http://download.oracle.com/javase/6/docs/technotes/guides/management/jconsole.html
B
When a delta-import is started, my heap jumps to 100% ... every minute ...
When GC runs, the heap drops to near zero.
How can I optimize this?
-
--- System
One Server, 12 GB RAM, 2 Solr Instances, 7 Cores,
1 Core with 31 Mill
jconsole: heap: 100% =(((
What can I do?
http://lucene.472066.n3.nabble.com/file/n2752697/heap.png
-
--- System
One Server, 12 GB RAM, 2 Solr Instances, 7 Cores,
1 Core with 31 Million Documents other Cores < 100.000
http://yonik.wordpress.com/2010/07/29/csv-output-for-solr/
-
--- System
One Server, 12 GB RAM, 2 Solr Instances, 7 Cores,
1 Core with 31 Million Documents other Cores < 100.000
- Solr1 for Search-Requests - commit every Mi
Hello.
I sometimes get a lot of results, and Solr or Jetty gives me the error:
"SEVERE: java.lang.IllegalStateException: Form too large1787345>100"
numFound is 94000, not really much, but I get a double value from each
doc and calculate the sum in PHP. When I put the query into the browser, a
dow
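For the "Form too large" error itself, the usual remedy (an assumption about the bundled Jetty; the exact property name differs by Jetty version, org.mortbay.* for Jetty 6 vs. org.eclipse.jetty.* for later releases, and the value below is only an example) is to raise Jetty's form-size limit at startup:

```shell
# Hedged sketch: raise Jetty's POST body limit when starting Solr.
java -Dorg.mortbay.jetty.Request.maxFormContentSize=10000000 -jar start.jar
```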