Hi Uwe,
sorting should work well as long as it is properly prepared.
A first rough check is the fieldCache; you can see it in the Solr Admin Stats page.
The "insanity_count" there should be 0 (zero).
Only sort on fields which are prepared for sorting and where sorting actually makes sense.
Likewise, only facet on fields where faceting makes sense. I've seen syst
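If you want to check the insanity count programmatically (for example from a test that exercises your sorts and facets), here is a rough, untested sketch. It uses the Lucene sanity checker which, as far as I know, is what the stats page's insanity_count is based on:

import org.apache.lucene.search.FieldCache;
import org.apache.lucene.util.FieldCacheSanityChecker;
import org.apache.lucene.util.FieldCacheSanityChecker.Insanity;

public class FieldCacheCheck {
    // Inspect the global FieldCache that sorting and faceting populate.
    // Each returned Insanity describes one suspect cache entry; an empty
    // array corresponds to insanity_count = 0 on the stats page.
    public static Insanity[] check() {
        return FieldCacheSanityChecker.checkSanity(FieldCache.DEFAULT);
    }
}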
On 6 December 2012 17:40, Spadez wrote:
> Hi,
>
> I currently have this setup:
>
> I bring data into the "description" field in the schema and then have this code:
>
> <copyField source="description" dest="truncated_description" maxChars="168"/>
>
> To then truncate the description and move it to "truncated_description".
> This works fine.
>
> I was wondering, is it p
Hello,
It looks like the DirectSolrSpellChecker component returns the correction wrapped in parentheses, as "(correction)".
Why is this, and are there other differences in the spellcheck component in Solr 4 compared to Solr 3.1?
Thanks
Roy
You mean this:
stats:
entries_count: 24
entry#0: 'NIOFSIndexInput(path="/home/connect/ConnectPORTAL/preview/solr-home/data/index/_2f3.frq")' => 'WiringDiagramSheetImpl.pageNumber', class org.apache.lucene.search.FieldCache$StringIndex, null => org.apache.lucene.search.FieldCache$StringIndex#32159051
Hi all.
I have a master/slave server configuration that has been working for a while.
Yesterday we updated synonyms.txt on the master, and the replication did not copy the file to the slave server.
When we check these files on the servers, we can see that the dates are different for several files.
Th
Hi:
Is there any way I can prevent a document from being indexed? I have a separate core used only for query suggestions; these queries are stored straight from the frontend app, so I'm trying to prevent badly intentioned queries from being stored in my core, while keeping the logic of what I cons
Either don't send the document to Solr in the first place, or implement a custom update processor that uses whatever criteria you want to ignore undesirable documents.
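A rough, untested sketch of such a processor (the "query" field name and the rejection rule below are just placeholders for whatever criteria you choose):

import java.io.IOException;

import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

public class RejectBadQueriesProcessorFactory extends UpdateRequestProcessorFactory {
    @Override
    public UpdateRequestProcessor getInstance(SolrQueryRequest req,
            SolrQueryResponse rsp, UpdateRequestProcessor next) {
        return new UpdateRequestProcessor(next) {
            @Override
            public void processAdd(AddUpdateCommand cmd) throws IOException {
                SolrInputDocument doc = cmd.getSolrInputDocument();
                Object q = doc.getFieldValue("query"); // placeholder field name
                if (q != null && q.toString().contains("<script")) {
                    // Silently drop the document: don't pass it down the chain.
                    return;
                }
                super.processAdd(cmd);
            }
        };
    }
}

You would then register the factory in an updateRequestProcessorChain in solrconfig.xml and point your update handler at that chain.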
-- Jack Krupansky
-Original Message-
From: Jorge Luis Betancourt Gonzalez
Sent: Thursday, December 06, 2012 3:39 PM
To:
Hi,
I have a Solr cluster and I want to use a UUID as the unique key. I configured solrconfig.xml and schema.xml according to the rules on the wiki page:
http://wiki.apache.org/solr/UniqueKey
In the logs I can see a UUID being generated when adding a new document:
INFO: [selekta] webapp=/solr path=/update params={}
If I remember correctly, updated files in the master only get replicated if
there is a change in the index (if the index version from the master and
the slave are the same, nothing gets replicated, not even the configuration
files). Are you currently updating the index or just the configuration
fil
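One quick way to compare the versions (standard replication handler command; adjust host, port and core to your setup) is to hit both the master and the slave with:
http://localhost:8983/solr/replication?command=indexversion
If I remember right, command=details gives more information about the replication state.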
The index (documents) was also updated. But the two servers have the same index version.
Thanks
--
"And you shall know the truth, and the truth shall set you free." (John 8:32)
andre.maldonado@gmail.com
Have you committed the changes on the master? Are you sure that the replication didn't happen before you changed the configuration files?
On Fri, Dec 7, 2012 at 11:56 AM, André Maldonado
wrote:
> The index (documents), was also updated. But the two servers have the same
> index version.
>
> Thank
Yes, I'm sure. The files were changed yesterday, and this morning we had a full reindex...
Thanks
I'm not sure what you mean. Can you paste in an example spellcheck response
and explain how it differs between the older IndexBasedSpellChecker on 3.1 and
the DirectSolrSpellChecker on 4.0 ?
James Dyer
E-Commerce Systems
Ingram Content Group
(615) 213-4311
-Original Message-
From: royS
Hello, Solr group. I want to do some searches with elevation, but I don't know how to change the search path to /elevate from the default /select. I found that the QueryRequest class's constructor doesn't take a second parameter for the path. Please help me. Thanks.
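With SolrJ the path can be set on the request object itself rather than through the constructor. A minimal, untested sketch (assuming SolrJ 4.x and an /elevate handler configured in solrconfig.xml):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ElevateSearch {
    public static void main(String[] args) throws Exception {
        SolrServer server = new HttpSolrServer("http://localhost:8983/solr"); // adjust URL
        SolrQuery query = new SolrQuery("ipod");
        QueryRequest request = new QueryRequest(query);
        request.setPath("/elevate"); // overrides the default /select path
        QueryResponse response = request.process(server);
        System.out.println(response.getResults().getNumFound());
    }
}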
You might want to open a JIRA issue for this to request that the feature be added. If you haven't used it before, you need to create an account.
https://issues.apache.org/jira/browse/SOLR
In the meantime, if you need to get the document frequency of the query terms, see http://wiki.apache.o
For what it's worth, this is the log output with DEBUG on:
Dec 07, 2012 2:00:48 PM org.apache.solr.handler.admin.CollectionsHandler
handleCreateAction
INFO: Creating Collection : action=CREATE&name=foo&numShards=4
Dec 07, 2012 2:01:03 PM org.apache.solr.core.SolrCore execute
INFO: [15671] webapp=/s
In case it is of use, I have just uploaded an updated and mavenised
version of the Luke code to the Luke discussion list, see https://groups.google.com/d/topic/luke-discuss/MNT_teDxVno/discussion
.
It seems to work with the latest (4.0.0 & 4.1-SNAPSHOT) versions of
Lucene.
N
Has anybody succeeded in getting Tika 1.2 working in Solr 4.0?
I can't get it working. Extracting other file types works, but a simple txt file does not, with the error:
SEVERE: null:java.lang.RuntimeException: java.lang.NoClassDefFoundError:
org/apache/tika/parser/txt/UniversalEncodingListener
at
org
I have the same problem: no suggestions, and a really similar configuration.
-
Complicare è facile, semplificare é difficile.
Complicated is easy, simple is hard.
quote: http://it.wikipedia.org/wiki/Bruno_Munari
Hi James,
Thanks for the response, will open a JIRA for this.
Had one follow-up question - how does the Distributed SpellCheckComponent
handle this? I tried looking at the code but it's not obvious to me how it
is able to differentiate between these 2 cases. I see that it only
considers a term to
Thanks for the info!
Do you know if it's possible to use file uploads to Tika with this client?
On 12/03/2012 03:56 PM, Bill Au wrote:
https://bugs.php.net/bug.php?id=62332
There is a fork with patches applied.
On Mon, Dec 3, 2012 at 9:38 AM, Arkadi Colson wrote:
Hi
Anyone tested the pecl
Hmm, then I'm not sure what could be happening. Do you see anything in the logs? Any exceptions? Maybe you can share a piece of the log that includes the replication.
On Fri, Dec 7, 2012 at 12:09 PM, André Maldonado
wrote:
> Yes, I'm sure. Files were changed yesterday, this morning we had a full
> reind
Hi,
I am trying to do a delta-import and I am not able to get it to work. However, a full-import does work. Could you please help me figure out what I am missing?
data-config.xml file
Output in the browser is (the dataimport status response, flattened here):
status: idle ... timestamps 2012-12-07 03:15:36 ... time taken 0:0:0.32
I have tried all sorts of URLs to invoke the data import:
http://localhost:8080/solr/dataimport?command=delta-import
http://localhost:8080/solr/dataimport?command=delta-import&c
How do I change the search path to /elevate when I do a search with Java?
Anything in any of the other logs (the other nodes)? The key is getting the
logs from the node designated as the overseer - it should hopefully have the
error.
Right now because you pass this stuff off to the overseer, you will always get
back a 200 - there is a JIRA issue that addresses this t
If I have an arbitrarily complex query that uses ORs, something like:
q=(simple_fieldtype:foo OR complex_fieldtype:foo) AND
(another_simple_fieldtype:bar OR another_complex_fieldtype:bar)
I want to know which fields actually contributed to the match for each document
returned. Something like:
d
No exceptions...
INFO: Opening Searcher@21fb3211 main
Dec 6, 2012 5:09:55 PM
org.apache.solr.update.DirectUpdateHandler2$CommitTracker
INFO: AutoCommit: disabled
Dec 6, 2012 5:09:55 PM org.apache.solr.handler.component.SearchHandler
inform
INFO: Adding
component:org.apache.solr.handler.component
The response from the shards is different from the final spellcheck response in
that it does include the term even if there are no suggestions for it. So to
get the behavior you want, we'd probably just have to make it so you could get
the "shard-to-shard-internal" version.
See
http://svn.apa
I have not used the pecl Solr client. I have been using SolrPhpClient. I
came across this patch for pecl when I was researching php client for Solr
4.0. SolrPhpClient has the same problem with 4.0 that this patch addresses.
Bill
On Fri, Dec 7, 2012 at 11:00 AM, Arkadi Colson wrote:
> Thanks
The debugQuery "explain" is simply a text display of what Lucene has already
calculated. As such, you could do a custom search component that gets the
non-text Lucene "Explanation" object for the query and then traverse it to
get your matched field list without all the text. No parsing would be
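A rough, untested sketch of the traversal part (the field/term extraction below is just a heuristic on the Explanation description strings; a real component would also want to look at the parsed query):

import java.io.IOException;
import java.util.LinkedHashSet;
import java.util.Set;

import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class MatchedFields {
    // Collect the descriptions of matching "weight(...)" clauses,
    // e.g. "weight(title:foo in 12) ..." names the field and term that matched.
    static void collect(Explanation expl, Set<String> out) {
        if (expl == null || !expl.isMatch()) {
            return;
        }
        String desc = expl.getDescription();
        if (desc != null && desc.startsWith("weight(")) {
            out.add(desc);
            return; // no need to descend into the scoring details
        }
        Explanation[] details = expl.getDetails();
        if (details != null) {
            for (Explanation d : details) {
                collect(d, out);
            }
        }
    }

    static Set<String> matchedClauses(IndexSearcher searcher, Query q, int docId)
            throws IOException {
        Set<String> out = new LinkedHashSet<String>();
        collect(searcher.explain(q, docId), out);
        return out;
    }
}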
Thanks, I did start to dig into how DebugComponent does its thing a little, and
I'm not all the way down the rabbit hole yet, but the Lucene IndexSearcher's explain() method has this comment:
"This is intended to be used in developing Similarity implementations, and, for
good performance, shoul
Any news on the Solarium project? It's the one I'm using with Solr 3.6!
- Original Message -
From: "Bill Au"
To: solr-user@lucene.apache.org, "Arkadi Colson"
Sent: Friday, December 7, 2012 13:40:20
Subject: Re: PHP client
I have not used the pecl Solr client. I have been using SolrPhp
I actually was not using a solr.xml. I am only using a single core. I am
using the default core name collection1. I know for sure I will not be
using more than a single core so I did not bother with having a solr.xml.
Is that a bad thing?
Everything worked when I had Tomcat configured to run on por
No news there. But according to their roadmap, Solr 4.0 won't be fully supported until Solarium 3.1. There is no schedule for 3.1 yet, as Solarium 3.0's first release candidate was only released on Oct 4, 2012.
Bill
On Fri, Dec 7, 2012 at 2:01 PM, Jorge Luis Betancourt Gonzalez <
jlbetanco...@uci.cu> w
Ah I see what you mean. Will probably try to change the response to look
like the internal shard one then.
Thanks for the detailed explanation!
- Nalini
On Fri, Dec 7, 2012 at 1:38 PM, Dyer, James wrote:
> The response from the shards is different from the final spellcheck
> response in that i
Yup, solr.xml is pretty much required - especially if you want to use SolrCloud.
The only reason anything works without it is for back compat.
We are working towards removing the need for it, but it's considered required these days.
- Mark
On Dec 7, 2012, at 11:04 AM, Bill Au wrote:
> I actually w
This log seems to be from when you start Solr, isn't it? And this is the master's log, right? It would be more useful to see the log from when the replication actually happens. You should see something like:
Master's generation:
Slave's generation:
Starting replication process
...
On Fri, Dec 7, 2012 at 2:16 P
Hi Erick,
Thanks for the reply!
I don't think there is a problem with my schema, because I can successfully
extract text from other file types.
For example, Tika is able to extract the content from a docx:
FINEST: Trying class name
org.apache.solr.handler.extraction.ExtractingRequestHandler
Erick:
Not seeing any page caching related issues...
Mark:
1. Would this "waiting" on 003 (the replica) cause any inconsistencies in the ZooKeeper cluster state? I was also looking at the leader (001) logs at that time and seeing errors related to "SEVERE: ClusterState says we are the leader, but lo
I have a problem with multifaceting in Solr 4.0 and would appreciate any
insight.
My base query returns the documents and facet counts I expect. After adding an
fq the result set of documents is smaller and the facet counts go down as
expected.
What I want is the smaller result set but to have
We saw this error again today during our load test - basically, whenever the session expires on the leader node, we see the error. After this happens, the leader (001) goes into 'recovery' mode and all index updates fail with a "503 - Service Unavailable" error message. After som
Also, the CSV importer can handle tab-separated records. That could be slightly nicer than comma-separated CSV.
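For example, something like this (assuming the standard CSV handler mapped at /update/csv; separator takes the tab character URL-encoded as %09):
http://localhost:8983/solr/update/csv?separator=%09&stream.file=/path/to/data.tsv&commit=true
Note that stream.file requires remote streaming to be enabled in solrconfig.xml.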
Regards,
Alex.
Personal blog: http://blog.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
- Time is the quality of nature that keeps events from happening all
I realised yesterday what is useful about /browse, and why it is wrong as it is.
The browse interface is a good way for a newcomer to explore some aspects of the query response without having to pore through lots of XML or JSON. It gives them a visual representation of their query result.
While that
Hi guys,
Sometimes we get a bot crawling our search function on our retail web site.
The ebay crawler loves to do this (Request.UserAgent: Terapeakbot). They just
do a star search and then iterate through page after page. I've noticed that
when they get to higher page numbers like page 9000
While investigating differences in query results between Solr 3.5 and a
branch_4x snapshot with a slightly different schema, I came across some
fairly radical differences in how a particular query is parsed. My
default operator on both versions is AND. I am using the lucene query
parser.
Th
Add the autoGeneratePhraseQueries=true attribute to your text field type
since it now defaults to false.
Also, change your query-time WDF to preserveOriginal="0". I think there were
some changes or bug fixes about the position of the original vs. the
generated parts and you only need the gener
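For example, on the field type declaration in schema.xml (the type name below is just illustrative):

<fieldType name="text_general" class="solr.TextField"
           positionIncrementGap="100" autoGeneratePhraseQueries="true">
  <!-- analyzers here, with preserveOriginal="0" on the query-time
       WordDelimiterFilterFactory -->
</fieldType>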
Yes, expected.
When it does a search for the first, say, 10 results, it must scan
through all docs, recording just the highest ten scoring ones.
To find documents 1000 to 1010, it must scan through all docs, recording
the best scoring 1010 documents, and then discard the first 1000. This
is much
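To put numbers on the earlier crawler example: page 9,000 at 10 results per page means start=89990&rows=10, so the searcher has to keep track of the top 90,000 scoring documents just to return 10 of them - and it does that for every such request.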
On 12/7/2012 6:38 PM, Jack Krupansky wrote:
Add the autoGeneratePhraseQueries=true attribute to your text field
type since it now defaults to false.
Also, change your query-time WDF to preserveOriginal="0". I think
there were some changes or bug fixes about the position of the
original vs. th
Hi,
For Solr monitoring and alerting, see my signature.
The JVM params should be -Xmx and -Xms.
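For example, with Tomcat you would typically put something like this in bin/setenv.sh (the sizes below are placeholders - tune them to your machine and index):
CATALINA_OPTS="-Xms512m -Xmx2048m"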
Otis
--
SOLR Performance Monitoring - http://sematext.com/spm
On Dec 6, 2012 11:33 PM, "aniljayanti" wrote:
> Hi,
>
> Im generating SOLR using SOLR 3.3, Apache Tomcat 7.0.19. Some times my
> Tomcat g
Hi Arkadi,
You may want to post this on the u...@tika.apache.org list -- it looks like you are missing the universal charset detector library as part of your Solr Cell installation.
Cheers,
Chris
On 12/6/12 12:02 AM, "Arkadi Colson" wrote:
>Anybody an idea?
>
>Dec 5, 2012 3:52:32 PM org.apache.solr.c
Good point. There is some documentation now, see:
http://wiki.apache.org/lucene-java/HowtoConfigureIntelliJ
Please feel free to modify the instructions any way you see fit, it's often
valuable to have someone who's fresh look over instructions and clarify
steps..
Best
Erick
On Thu, Dec 6, 2012
Hey, I'll try and answer this tomorrow.
There is definitely an unreported bug in there that needs to be fixed for the restarting-all-nodes case.
Also, a 404 generally happens when Jetty is starting or stopping - there are points where 404s can be returned. I'm not sure why else you'd see one.
G
Hi Robert,
You could look at pageDoc & pageScore to improve things for deep paging (
http://wiki.apache.org/solr/CommonQueryParameters#pageDoc_and_pageScore).
Regards,
Aloke
On Sat, Dec 8, 2012 at 8:08 AM, Upayavira wrote:
> Yes, expected.
>
> When it does a search for the first, say, 10 resul