Hello,
My solr.xml (Solr ver.4.0) looks like this:
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-4-0-UI-issue-tp3993286p3994075.html
Sent from the Solr - User mailing list archive at Nabble.com.
You still haven't given a clear (at least to me) statement of
what you see and what about it isn't what you expect. Please
give some concrete examples.
Best
Erick
On Mon, Jul 9, 2012 at 3:45 AM, davidbougearel
wrote:
> Thanks for answer,
>
> Actually when i put 'it's not working' it means that i
Hi
Thanks for the reply. Yes I wanted the executable to run after the commit
operation. I would like to have the doc XML though. The further processing
might be intensive and so I didn't want the document to wait for the extra
processing before committing (if I use UpdateRequestProcessor). The
I may have found a good solution. I implemented my own SolrEventListener:
public class DynamicIndexerEventListener
        implements org.apache.solr.core.SolrEventListener {
...
and then called it with a "firstSearcher" element in solrconfig.xml:
Then in the newSearcher() method I start up the t
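For reference, registering such a listener under a "firstSearcher" event in solrconfig.xml might look like the sketch below; the package name is an assumption, substitute your actual class:

```xml
<query>
  <!-- Fires the listener's newSearcher() callback when the first searcher is created -->
  <listener event="firstSearcher" class="com.example.DynamicIndexerEventListener"/>
</query>
```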
I'm writing a custom update request handler that will poll a "hot"
directory for Solr xml files and index anything it finds there. The custom
class implements Runnable, and when the run method is called the loop
starts to do the polling. How can I tell Solr to load this class on startup
to fire off
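A minimal, JDK-only sketch of the polling loop described above. The class name, directory, and interval are illustrative; the actual indexing step (posting each file to Solr's /update handler and removing it) is deliberately left as a comment:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class HotDirectoryPoller implements Runnable {

    private final Path hotDir;
    private final long intervalMillis;

    public HotDirectoryPoller(Path hotDir, long intervalMillis) {
        this.hotDir = hotDir;
        this.intervalMillis = intervalMillis;
    }

    // One pass over the directory: collect any Solr XML files found there.
    public static List<Path> scanOnce(Path dir) throws IOException {
        List<Path> found = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, "*.xml")) {
            for (Path p : stream) {
                found.add(p);
            }
        }
        return found;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                for (Path p : scanOnce(hotDir)) {
                    // Here each file would be posted to Solr and then deleted.
                    System.out.println("Would index: " + p);
                }
                Thread.sleep(intervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Paths.get(args.length > 0 ? args[0] : ".");
        System.out.println("Found " + scanOnce(dir).size() + " XML file(s) in " + dir);
    }
}
```

Starting this on a background thread at core load time is then a matter of `new Thread(new HotDirectoryPoller(dir, 5000)).start()` from whatever startup hook you settle on.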
Thanks again for reporting this Brent. I opened a JIRA issue:
https://issues.apache.org/jira/browse/SOLR-3610
On Mon, Jul 9, 2012 at 3:36 PM, Brent Mills wrote:
> We're having an issue when we add or change a field in the db-data-config.xml
> and schema.xml files in solr. Basically whenever I a
Hmm, never mind my question about replicating using symlinks. Given that
replication on a single machine improves throughput, I should be able to get
a similar improvement by simply sharding on a single machine. As also
observed at
http://carsabi.com/car-news/2012/03/23/optimizing-solr-7x-your-se
Hello,
This is because Solr's Codec implementation defers to the schema to
determine how the field should be indexed. When a core is reloaded,
the IndexWriter is not closed; the existing writer is kept around,
so you are basically indexing against the version of the schema from
before the reload.
Hi Brent,
Ordinarily when you make a change to schema.xml, that should be
accompanied by a core wipe and reindex. I think you may have been
lucking out thus far.
Michael Della Bitta
Appinions, Inc. -- Where Influence Isn’t a Game.
http://www.appin
We're having an issue when we add or change a field in the db-data-config.xml
and schema.xml files in Solr. Basically, whenever I add something new to the
index, I add it to the database, then to the data config, then add the field
to the schema, reload the core, and do a full import. This has
Does anybody have an idea? Solr seems to ignore the DTD definition and
therefore does not understand entities like ü or ä that are defined in the
DTD. Is that the problem? If yes, how can I tell Solr to consider the DTD
definition?
On Fri, 06 Jul 2012 10:58:59 +0200, Michael Belenki
wrote:
> Dear community,
Thanks Lance, attached is a trimmed down version of my schema and a
print out of the object that exhibits the issue. Again if I put
splitOnCaseChange = 0 on the text field I don't see the same issue.
sample doc
SolrInputDocument[key=1, datetime_dt=Mon Jul 09 15:07:32 EDT 2012,
type=1, subject_tx
This is also seen when there are no cores defined in solr.xml. Check
that your solr.xml is in a useful place and has cores defined.
Alternatively, issue an appropriate CoreAdmin request to create one.
--Casey Callendrello
On 7/6/12 9:57 AM, anarchos78 wrote:
> Didn't help
>
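For what it's worth, a minimal CoreAdmin CREATE request looks something like the sketch below; the host, core name, and instanceDir are placeholders:

```
http://localhost:8983/solr/admin/cores?action=CREATE&name=core0&instanceDir=core0
```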
Problem solved. The problem was on the Drupal side... the Drupal core
search interfered with the apachesolr module and added extra information.
After deleting the core search tables and setting the number to index on
cron to 0, I reindexed the site and the problem was solved.
Thanks.
Marco
El 09/07/12
I thought this had to be a joke, but no, you were absolutely right. Fixed
it right up!
Unbelievable.
Thanks so much!
-jsd-
On Mon, Jul 9, 2012 at 10:15 AM, Michael Della Bitta <
michael.della.bi...@appinions.com> wrote:
> Are you perhaps being bitten by the leap second bug? Just happened to
>
Erick, thanks. I now do see segment files in an index. directory at
the replicas. Not sure why they were not getting populated earlier.
I have a couple more questions, the second is more elaborate - let me know if I
should move it to a separate thread.
(1) The speed of adding documents in Solr
Are you perhaps being bitten by the leap second bug? Just happened to
me last week.
http://blog.wpkg.org/2012/07/01/java-leap-second-bug-30-june-1-july-2012-fix/
> I would like to know how to use postCommit in SOLR properly.
> I would like to grab the indexed document and do further
> processing with it. How do I capture the documents
> being committed to the SOLR through the arguments in the
> postCommit config? I'm not using SolrJ and have no
> intenti
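As a sketch of the postCommit hook being discussed: Solr's stock option is RunExecutableListener, registered inside the updateHandler section of solrconfig.xml. Note that it only launches a program after the commit; it does not hand the committed documents to the listener, which is exactly the limitation raised above. The exe and dir values here are placeholders:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Runs an external program after each commit; no document data is passed -->
  <listener event="postCommit" class="solr.RunExecutableListener">
    <str name="exe">bin/after-commit.sh</str>
    <str name="dir">.</str>
    <bool name="wait">false</bool>
  </listener>
</updateHandler>
```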
I have a very small Solr setup. The index is 32MB and there are only 8
fields, most of which are ints. I run a cron job every hour to use
DataImportHandler to do a full reimport of a database which has 42,600 rows.
There is minimal traffic on the server. Maybe a few dozen queries a
minute. Usu
Yes, "maxCollationTries" tests the new (collation) queries with all the same
parameters as the original query. Most notably, it uses the same "fq"
parameters, so it will take into account any filters you were using.
James Dyer
E-Commerce Systems
Ingram Content Group
(615) 213-4311
-Original
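For context, the collation parameters being discussed are typically set on the request or in the handler's defaults, along the lines of this sketch (the values are illustrative):

```xml
<str name="spellcheck.collate">true</str>
<str name="spellcheck.maxCollations">5</str>
<str name="spellcheck.maxCollationTries">10</str>
```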
I think the second way is probably the most robust, and it's surprisingly
un-complicated. You wouldn't really be using copyField in that case,
just adding them to the proper field in the document.
Anything you do outside of the update chain would suffer from having to
be kept in synch with the sto
No, you're misunderstanding the setup. Each replica has a complete
index. Updates get automatically forwarded to _both_ nodes for a
particular shard. So, when a doc comes in to be indexed, it gets
sent to the leader for, say, shard1. From there:
1> it gets indexed on the leader
2> it gets forwarded
Hello Koji,
thanks for the reply. Yes, one way I can try is to use copyField, with one
copy using PathHierarchyTokenizerFactory and the other using
KeywordTokenizerFactory, and then, depending on whether the input entered is
a directory path or an exact file path, switch between these two fields. Thanks
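A schema.xml sketch of that two-field approach might look like this; the field and type names are assumptions:

```xml
<fieldType name="path_hier" class="solr.TextField">
  <analyzer>
    <!-- Emits one token per path prefix: M:, M:/Users, M:/Users/User, ... -->
    <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/"/>
  </analyzer>
</fieldType>
<fieldType name="path_exact" class="solr.TextField">
  <analyzer>
    <!-- Emits the whole path as a single token, for exact matching -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
  </analyzer>
</fieldType>

<field name="file_path" type="path_hier" indexed="true" stored="true"/>
<field name="file_path_exact" type="path_exact" indexed="true" stored="false"/>
<copyField source="file_path" dest="file_path_exact"/>
```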
(12/07/09 19:41), Alok Bhandari wrote:
Hello,
this is how the field is declared in schema.xml
when I query for this field with the input
"M:/Users/User/AppData/Local/test/abc.txt",
it searches for documents containing any of the generated tokens (M, Users,
User, etc.), but I want to search for the exact file
Are these settings somehow involved in this?
edismax
2<-25%
Regards!
Dalius Sidlauskas
On 09/07/12 11:23, Dalius Sidlauskas wrote:
Hello, I have a query that returns different debug explanation output
for some reason. The first document has no "(MATCH) product of:" that the
others have, yet it comes up as the first result, which it should not.
Here is my debug explain output:
Doc #1
4.2649355 = (MATCH) sum of:
0.035260722 = (MATCH) m
Thanks for the answer,
Actually, when I wrote 'it's not working' I meant that the result is not
what I expected: it returns data tagged by all the facets, not only the
facet that I asked for with the constraint.
Hope you will help me.
Best regards,
David.