This is supported. You just need to adjust your ZK connection string:
":/solr,:/solr,...,:/solr"
Regards, Per Steffensen
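For illustration only (not from the original mail), a minimal SolrJ sketch of
connecting through a chrooted ZooKeeper path; the host names, ports and
collection name are assumptions, and the single trailing /solr chroot is the
usual ZooKeeper connect-string form:

import org.apache.solr.client.solrj.impl.CloudSolrServer;

public class ChrootedZkExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical ensemble; everything Solr stores in ZooKeeper then
        // lives under the /solr znode instead of the root znode.
        String zkHost = "zk1:2181,zk2:2181,zk3:2181/solr";
        CloudSolrServer solr = new CloudSolrServer(zkHost);
        solr.setDefaultCollection("collection1");
        solr.connect();
        solr.shutdown();
    }
}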
On 1/24/13 7:57 AM, J Mohamed Zahoor wrote:
Hi
I am using Solr 4.0.
I see the Solr data in zookeeper is placed on the root znode itself.
This becomes a pain if the zookeeper
I have two solr instances. One is a master and the other a slave, polling the
master every 20 seconds or so for index updates. My application mainly
queries the slave, so most of the load falls to it.
There are some areas of the application that do query the master, however.
For instance, during t
Hi,
Have you tried to add aliases to your network interface (for master and
slave)? Then you should use -Djetty.host and -Djetty.port to bind Solr to the
appropriate IPs. I think you should also use different directories for Solr
files (-Dsolr.solr.home) as there may be some conflict with index
file
Romita,
IIRC you've already asked this, and I replied that everything you need
is in the debugQuery=on output. That format is a little bit verbose, and I
suppose you may have some difficulty finding the necessary info
there. Please provide the debugQuery=on output, and I can try to highlight t
http://wiki.apache.org/solr/XsltResponseWriter
IIRC you can even output JSON via XSLT.
On Thu, Jan 24, 2013 at 5:11 AM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> Hi,
>
> Write a custom response writer?
>
> Otis
> Solr & ElasticSearch Support
> http://sematext.com/
> On Jan 23, 2013
Hi
I am using Solr 4.0.
I see the Solr data in zookeeper is placed on the root znode itself.
This becomes a pain if the zookeeper instance is used for multiple projects
like HBase and the like.
I am thinking of raising a JIRA for putting them under a /solr znode or
something like that?
./Zahoor
On 1/23/2013 3:12 PM, Walter Underwood wrote:
I can get one Solr 4.1 instance up with the config bootstrapped into Zookeeper.
In zk I see two configs, two collections, and I can run the DIH on the first
node.
I can get the other two nodes to start and sync if I give them a
-Dsolr.solr.home po
Sorry for leaving that bit out. This is Solr 4.1.0.
Thanks again,
John
On Wed, Jan 23, 2013 at 5:39 PM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> Hi,
>
> Solr4 is 4.0 or 4.1? If the former try the latter first?
>
> Otis
> Solr & ElasticSearch Support
> http://sematext.com/
> On Jan
On 1/23/2013 2:32 PM, O. Olson wrote:
Hi,
I am using the /example-DIH in the Solr 4.0 download. The example worked
out of the box using the HSQLDB. I then attempted to modify the files to
connect to a SQL Express instance running on my local machine. A
http://localhost:8983/solr/db/datai
"id" field is not serial, it generated randomly.. so range queries on this
field are almost useless.
I mentioned TrieField, because solr.LongField is internally implemented as
a string, while solr.TrieLongField is a number. It might improve
performace, even without setting a precisionStep...
On T
Yeah, I don't know what you are seeing offhand. You might try Solr 4.1 and see
if it's something that has been resolved.
- Mark
On Jan 23, 2013, at 3:14 PM, Marcin Rzewucki wrote:
> Guys, I pasted you the full log (see pastebin url). Yes, it is Solr4.0. 2
> cores are in sync, but the 3rd one i
Hi,
I want the tokenized keywords to be displayed in the Solr response. For
example, my Solr search could be "Search this document named XYZ-123", and
the tokenizer in schema.xml tokenizes the query as follows:
"search document xyz 123". I want to get these tokenized words in the
Solr response.
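Illustrative only, not from the original thread: besides a custom response
writer or XSLT, the per-core field analysis handler can return the tokens a
field's analyzer produces. A SolrJ sketch, assuming a core at /solr/collection1
and a field named "text"; the response-navigation details are from memory and
may differ slightly:

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.FieldAnalysisRequest;
import org.apache.solr.client.solrj.response.FieldAnalysisResponse;

public class ShowTokens {
    public static void main(String[] args) throws Exception {
        HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
        // Ask the /analysis/field handler how the assumed "text" field
        // tokenizes the given value.
        FieldAnalysisRequest req = new FieldAnalysisRequest();
        req.addFieldName("text");
        req.setFieldValue("Search this document named XYZ-123");
        FieldAnalysisResponse rsp = req.process(solr);
        // The response holds the tokens produced by each analysis stage.
        System.out.println(rsp.getFieldNameAnalysis("text"));
        solr.shutdown();
    }
}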
Hi,
Solr4 is 4.0 or 4.1? If the former try the latter first?
Otis
Solr & ElasticSearch Support
http://sematext.com/
On Jan 23, 2013 2:51 PM, "John Skopis (lists)" wrote:
> Hello,
>
> We have recently put solr4 into production.
>
> We have a 3 node cluster with a single shard. Each solr node is
: We met a weird problem in our project when sorting by score in Solr 4.0:
: the document with the biggest score is not at the top. The debug explanation
: from solr is like this,
that's weird ... can you post the full debugQuery output of an example
query showing the problem, using "echoParams=all" &
Hi,
I think trie type fields add value only if you do range queries on them, and
it sounds like that isn't your use case.
Otis
Solr & ElasticSearch Support
http://sematext.com/
On Jan 23, 2013 2:53 PM, "Isaac Hebsh" wrote:
> Hi,
>
> In my use case, Solr has to return only the "id" field, as
This is now JIRA issue SOLR-4343
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solrcloud-4-1-inconsistent-of-results-in-replicas-tp4035638p4035825.html
Sent from the Solr - User mailing list archive at Nabble.com.
: References: <50f8af05.8030...@elyograg.org>
:
: <50f99712.80...@elyograg.org>
:
: Message-ID: <1358538442.4125.yahoomail...@web171802.mail.ir2.yahoo.com>
: Subject: Question on Solr Velocity Example
: In-Reply-To:
https://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailin
Hi
We met a weird problem in our project when sorting by score in Solr 4.0: the
document with the biggest score is not at the top. The debug explanation from
solr is like this,
First Document
1.8412635 = (MATCH) sum of:
2675.7964 = (MATCH) sum of:
0.0 = (MATCH) sum of:
0.0 = (MATCH) max of:
On Jan 23, 2013, at 6:21 PM, Yonik Seeley wrote:
> A solr request could request a token that when resubmitted with a
> follow-up request would result in hitting the same replicas if
> possible.
Yeah, this would be good. It's also useful for not catching "eventual
consistency" effects between q
On Wed, Jan 23, 2013 at 6:15 PM, Markus Jelsma
wrote:
> We need, and I think many SolrCloud users are going to need this as well, to
> make sure replicas don't deviate too much from each other, because if they do,
> documents are certainly going to jump positions.
The synchronization that would be ne
Hi everyone
It's my first post here, so I hope I'm doing it in the right place.
I'm a software developer and I'm setting up a DEV environment in Ubuntu with
the same configuration as in PROD (apparently this IT department doesn't
know the difference between a developer and a sysadmin).
In PROD we
Hi Michael,
The evidence is how Lucene works, and that I add the same docs over and over
again in tests. If I index 500k docs to an index that already has the same 500k
docs, it means I write a delete flag to the old 500k and add the new 500k,
leading to a million docs (maxDoc). You're correct,
Are you able to see any evidence that some of the 500k docs are being added
twice? Check the maxDocs on the Solr admin page. I vaguely recall there being
some issue with docs in SolrCloud being added multiple times (which under the
covers is really add, delete, add). I think that could cause the
Hi again,
I've tried various settings for TieredMergePolicy to make sure the docFreq,
maxDoc and docCount don't deviate too much. We also did tests after
increasing reclaimDeletesWeight from 2.0 to 8.0 and slightly more frequent
merging. In these tests we reindexed the same 500k docs each ti
Thanks Hoss, Good to know!
I have that exact situation: a complex function based on multiple field values
that I always run for particular types of searches including global star
searches to aid in sorting the results appropriately.
Robi
-Original Message-
From: Chris Hostetter [
I can get one Solr 4.1 instance up with the config bootstrapped into Zookeeper.
In zk I see two configs, two collections, and I can run the DIH on the first
node.
I can get the other two nodes to start and sync if I give them a
-Dsolr.solr.home pointing to a directory with a solr.xml and subdir
If you can handle it in XML, use wt=xml&tr=foo.xsl and use a stylesheet
to format it as you want.
Upayavira
On Wed, Jan 23, 2013, at 08:53 PM, Rafał Kuć wrote:
> Hello!
>
> As far as I know you can't remove the response, numFound, start and
> docs. This is how the response is prepared by Solr an
Hi,
I am using the /example-DIH in the Solr 4.0 download. The example worked
out of the box using the HSQLDB. I then attempted to modify the files to
connect to a SQL Express instance running on my local machine. A
http://localhost:8983/solr/db/dataimport?command=full-import results in
o
Hello!
As far as I know you can't remove the response, numFound, start and
docs. This is how the response is prepared by Solr and apart from
removing the header, you can't do anything.
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch
> no I want
Why? Just skip over that in the code. --wunder
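For illustration, skipping the wrapper on the client side might look like
this (a sketch using Jackson, which is an assumption; any JSON parser works):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class DocsOnly {
    public static void main(String[] args) throws Exception {
        String body = "{\"responseHeader\":{\"status\":0,\"QTime\":1},"
                    + "\"response\":{\"numFound\":1,\"start\":0,\"docs\":[{\"id\":\"1\"}]}}";
        // Parse the full wt=json response and keep only the docs array,
        // ignoring responseHeader, numFound and start.
        JsonNode docs = new ObjectMapper().readTree(body).path("response").path("docs");
        System.out.println(docs); // [{"id":"1"}]
    }
}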
On Jan 23, 2013, at 12:50 PM, hassancrowdc wrote:
> no I wanted it in json. i want it to start from where square bracket starts [
> . I want to remove everything before that. I can get it in json by including
> wt=json. I just want to remove Response
No, I wanted it in JSON. I want it to start from where the square bracket
starts [. I want to remove everything before that. I can get it in JSON by
including wt=json. I just want to remove Response, numFound, start and docs.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Resu
Another revelation...
I can see that there is a time difference in the Solr output for adding
these documents when I watch it in real time.
Here are some rows from the 3.5 solr server:
Jan 23, 2013 11:57:23 AM org.apache.solr.core.SolrCore execute
INFO: [gxdResult] webapp=/solr path=/update/javabin
pa
Hello!
Maybe you are looking to get the results in plain text if you want to
remove all the XML tags? If so, you can try adding wt=csv to get
the response as CSV instead of XML.
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch
> Thanks. hal
You can do this with cores. You can have one core to serve the public,
and one for indexing. Then, when you've finished updating your index,
you use the core admin handler to swap the cores around. Then you do the
same thing the following night. Doesn't require any file moving nor any
restarts of s
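For illustration, the swap step might look like this with SolrJ's core admin
API; the core names "live" and "build" are assumptions, and the equivalent
HTTP call is /admin/cores?action=SWAP&core=live&other=build:

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.common.params.CoreAdminParams.CoreAdminAction;

public class SwapCores {
    public static void main(String[] args) throws Exception {
        // Point at the container root, not at an individual core.
        HttpSolrServer admin = new HttpSolrServer("http://localhost:8983/solr");
        // Swap the freshly built core in behind the public-facing one.
        CoreAdminRequest swap = new CoreAdminRequest();
        swap.setAction(CoreAdminAction.SWAP);
        swap.setCoreName("live");
        swap.setOtherCoreName("build");
        swap.process(admin);
        admin.shutdown();
    }
}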
What can I provide to get more insight into this?
I have tried lowering the commit maxDocs, but the difference between nodes is
several times the maxDocs.
If I bring up a new node it will get the correct version; however, if I kill
the leader, the wrong version becomes the master and replicates out.
-
Guys, I pasted you the full log (see pastebin url). Yes, it is Solr4.0. 2
cores are in sync, but the 3rd one is not:
INFO: PeerSync Recovery was not successful - trying replication. core=ofac
INFO: Starting Replication Recovery. core=ofac
It started replication and even says it is done successfull
Is there any way I can get rid of the response header (responseHeader, status,
QTime, response, numFound, start, docs) from the result set of the query in
Solr? I only want to see the result without this info at the top.
--
View this message in context:
http://lucene.472066.n3.nabble.com/ResultSet
Hello,
I do nightly builds for one of my sites. I build the new index in a
parallel directory. When it is finished I move the old files to a backup
directory (I only save one and delete the previous), move the new database
files to the correct place, then stop and restart Solr. It sees the new
databas
Hello,
We have recently put solr4 into production.
We have a 3 node cluster with a single shard. Each solr node is also a
zookeeper node, but zookeeper is running in cluster mode. We are using the
cloudera zookeeper package.
There are no communication problems between nodes. They are in two diffe
Hi Ron,
If you turn off autoCommit and only commit after your delete and refresh,
the user's experience will be totally uninterrupted. Commits are used to
control visibility in a Solr index.
Michael Della Bitta
Appinions
18 East 41st Street, 2nd
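A minimal SolrJ sketch of that pattern (not from the original mail), with a
single commit after the delete and the re-add; the URL, query and field names
are assumptions:

import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class ReplaceSet {
    public static void main(String[] args) throws Exception {
        HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

        // With autoCommit off, none of this is visible to searchers yet.
        solr.deleteByQuery("group_id:42");

        List<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "42-1");
        doc.addField("group_id", "42");
        docs.add(doc);
        solr.add(docs);

        // One commit at the end makes the delete and the replacement
        // documents visible at the same time.
        solr.commit();
        solr.shutdown();
    }
}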
: OK I guess I see how that makes sense. If I use function queries for
: affecting the scoring of results, does it help to include those in the
: warm up queries or does the same thing go for those also? IE is it
: useless to add {!boost%20b=... ?
boosts on *queries* probably won't affect yo
Hello!
I'm new to solr and trying to figure out how to implement it in our
environment. My question involves building the index. Our data does not lend
itself to delta updates so we have to build the entire index each time. Is
there some way to feed solr a file with all index records and tell i
I think the attachment got stripped. Here it is:
http://www.flickr.com/photos/otis/8409088080/in/photostream
Otis
--
Solr & ElasticSearch Support
http://sematext.com/
On Tue, Jan 22, 2013 at 12:36 PM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> Same here - I've seen some documen
Date and time are not being displayed properly; the value goes to the next
line after the year and month, see the following:
"createdDate":"2012-12-
21T21:34:51Z"
in my schema:
and type is:
Is there any datetime field in Solr that I can write in schema.xml so that
my date and time are shown properly in my resu
On Wed, Jan 23, 2013 at 12:23 PM, Eduard Moraru wrote:
> The only small but workable problem I have now is the same as
> https://issues.apache.org/jira/browse/SOLR-3598. When you are creating an
> alias for the field "who", you can't include the actual field in the list
> of alias like "f.who.qf=w
On 23 January 2013 23:04, hassancrowdc wrote:
> Date and time is not being displayed properly. It gives me the following
> error and it also goes to the next line after the year and month, see following:
> createdDate":"ERROR:SCHEMA-INDEX-MISMATCH,stringValue=2012-12-
> 21T21:34:51" in my schema:
> re
Hi Otis,
OK I guess I see how that makes sense. If I use function queries for affecting
the scoring of results, does it help to include those in the warm up queries or
does the same thing go for those also? IE is it useless to add {!boost%20b=... ?
Thanks,
Robi
-Original Message-
Fr
Hi Alex,
On Wed, Jan 23, 2013 at 3:44 PM, Alexandre Rafalovitch
wrote:
> On Wed, Jan 23, 2013 at 8:38 AM, Eduard Moraru
> wrote:
>
> > "title:version author:SomeGuy content:content"
> >
> > which would get automagically expanded to:
> >
> > "(title_en:version OR title_fr:version) author:SomeGuy
Thanks Hoss,
The issue mentioned describes a similar behavior to what I observed, but not
quite. Commons-fileupload creates java.io.File objects for the temp files, and
when those Files are garbage collected, the temp file is deleted. I've
verified this by letting the temp files build up an
I'm still poking around trying to find the differences. I found a couple of
things that may or may not be relevant.
First, when I start up my 3.5 solr, I get all sorts of warnings that my
solrconfig is old and will run using 2.4 emulation.
Of course I had to upgrade the solrconfig for the 4.0 instance
On Wed, Jan 23, 2013 at 9:50 AM, Craig Ching wrote:
> The problem I have is that JSON is not specified to preserve order of
> keys.
JSON is a serialization format, and readers/writers can preserve order
if they wish to.
If you send JSON to solr in a specific order, that order will
definitely be r
Joey
That looks like a mixture ... the HTML partial is from the 4.0 release, but my
guess is that you're seeing the current stylesheet ... those don't work well
together.
Perhaps it helps if you trick your browser a bit, by requesting the partial
directly using http://solr:8983/solr/tpl/dataim
Do you mean commenting out the ... tag? Because I already
commented that out. Or do I also need to remove the entire
tag? Sorry, I am not too familiar with everything in the
solrconfig file. I have a tag that essentially looks like this:
Everything inside is commented out.
-Kevin
On 1/23/13
Viacheslav,
SOLR-2155 is only compatible with Solr 3. However the technology it is
based on lives on in Lucene/Solr 4 in the
"SpatialRecursivePrefixTreeFieldType" field type. In the example schema
it's registered under the name "location_rpt". For more information on
how to use this field type
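For illustration, a distance filter against such a location_rpt field might
look like this from SolrJ; the field name, point and radius are assumptions:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class RptFilter {
    public static void main(String[] args) throws Exception {
        HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
        SolrQuery q = new SolrQuery("*:*");
        // Keep documents within 5 km of the point; "geo" is an assumed
        // multiValued field of type location_rpt.
        q.addFilterQuery("{!geofilt sfield=geo pt=45.15,-93.85 d=5}");
        QueryResponse rsp = solr.query(q);
        System.out.println(rsp.getResults().getNumFound());
        solr.shutdown();
    }
}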
Mark Miller-3 wrote
> Does the admin cloud UI show all of the nodes as green? (active)
>
> If so, something is not right.
>
> - Mark
Yes, the leader with the correct # is filled in green, and the other node is
a green circle.
-Joey
--
View this message in context:
http://lucene.472066.n3.na
It's hard to guess, but I might start by looking at what the new UpdateLog is
costing you. Take its definition out of solrconfig.xml and try your test
again. Then let's take it from there.
- Mark
On Jan 23, 2013, at 11:00 AM, Kevin Stone wrote:
> I am having some difficulty migrating our sol
Does the admin cloud UI show all of the nodes as green? (active)
If so, something is not right.
- Mark
On Jan 23, 2013, at 10:02 AM, Roupihs wrote:
> I have a one shard collection, with one replica.
> I did a dataImport from my oracle DB.
> In the master, I have 93835 docs, in the non master 9
Hi,
With Solr 3.5 I use the SOLR-2155 plugin to filter documents by distance as
described in http://wiki.apache.org/solr/SpatialSearch#Advanced_Spatial_Search
and this solution perfectly filters the multiValued data defined in schema.xml
like
the query looks like this with Solr 3.5: q=*
I am having some difficulty migrating our solr indexing scripts from using 3.5
to solr 4.0. Notably, I am trying to track down why our performance in solr 4.0
is about 5-10 times slower when indexing documents. Querying is still quite
fast.
The code adds documents in groups of 1000, and adds e
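For reference, a batched add loop of the kind described ("groups of 1000")
looks roughly like this in SolrJ; the URL and field names are assumptions, and
this sketch does not reproduce the poster's actual code:

import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BatchedIndexing {
    private static final int BATCH_SIZE = 1000;

    public static void main(String[] args) throws Exception {
        HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
        List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>(BATCH_SIZE);
        for (int i = 0; i < 10000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", Integer.toString(i));
            batch.add(doc);
            if (batch.size() == BATCH_SIZE) {
                solr.add(batch);   // one request per 1000 documents
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            solr.add(batch);
        }
        solr.commit();             // commit once at the end, not per batch
        solr.shutdown();
    }
}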
I have a one shard collection, with one replica.
I did a dataImport from my oracle DB.
In the master I have 93835 docs, in the non-master 92627.
I have tried http://{machinename}:8080/solr/{collection}/update/commit=true
on the master, but the index does not replicate.
Also, the node list differ
Hi all,
We're using the JSON update handler and we're currently doing two separate,
but related updates. The first is a deleteByQuery to delete a bunch of
documents, the second then is a new set of documents to replace the old.
The premise is that the documents are all related in some way and the
Looks like it shows 3 cores start - 2 with versions that decide they are up to
date and one that replicates. The one that replicates doesn't have much logging
showing that activity.
Is this Solr 4.0?
- Mark
On Jan 23, 2013, at 9:27 AM, Upayavira wrote:
> Mark,
>
> Take a peek in the pastebi
Mark,
Take a peek in the pastebin url Marcin mentioned earlier
(http://pastebin.com/qMC9kDvt) is there enough info there?
Upayavira
On Wed, Jan 23, 2013, at 02:04 PM, Mark Miller wrote:
> Was your full logged stripped? You are right, we need more. Yes, the peer
> sync failed, but then you cut ou
Was your full logged stripped? You are right, we need more. Yes, the peer sync
failed, but then you cut out all the important stuff about the replication
attempt that happens after.
- Mark
On Jan 23, 2013, at 5:28 AM, Marcin Rzewucki wrote:
> Hi,
> Previously, I took the lines related to coll
Thanks,
That worked.
So the documentation needs to be fixed in a few places (the solr wiki and
the default solrconfig.xml in Solr 4.0 final; I didn't check any other
versions)
I'll either open a new ticket in JIRA to request a fix or reopen the old
one...
Furthermore,
I tried using the ElevatedMar
I upgraded to solr 4.1 from 4.0 to take advantage of some solrcloud
improvements, but now it seems that the DIH UI is broken. I have a screenshot,
but the list seems to block emails with links. I will try to describe my issue:
* DIH itself works, via commands & the buttons on the UI.
* The DIH UI
On Wed, Jan 23, 2013 at 8:38 AM, Eduard Moraru wrote:
> "title:version author:SomeGuy content:content"
>
> which would get automagically expanded to:
>
> "(title_en:version OR title_fr:version) author:SomeGuy (content_en:content
> OR content_fr:content)"
>
Ignoring everything else, how is this d
On Wed, Jan 23, 2013 at 3:38 PM, Eduard Moraru wrote:
> Hello,
>
> Here is my problem:
>
> I am trying to do multilingual indexing in Solr and each document
> translation is indexed as an independent Solr/Lucene document having some
> fields suffixed with the language code. Here is an example:
>
Please see the linked screenshot. The DIH works but the UI says it's not
configured.
http://i1194.photobucket.com/albums/aa365/Rouphis/ScreenShot2013-01-23at80306AM_zps1aa10b37.png
-Joey
We need a "Make your own adventure" (TM) Solr troubleshooting guide. :-)
*) You are staring at the Solr installation full of twisty little passages
and nuances. Would you like to:
*) Build your first index?
*) Make your first query?
*) Spread your documents in the cloud?
*) Build your
No, you are looking at the wrong collection. I told you I have a couple of
collections in Solr. I guess some messages may overlap each other. The one for
which I did the test (index recovery) is called "ofac" and that one fails.
Besides, Solr sometimes adds a suffix to the index directory internally and it
is not a bug. The
You are going to have to give more information than this. If you get a bad
request, look in the logs for the Solr server and you will probably find
an exception there that tells you what was wrong with your document.
Upayavira
On Wed, Jan 23, 2013, at 08:58 AM, Thendral Thiruvengadam wrote:
> Hi,
>
Hi Gora,
I'm using SolrJ 4.1 with Spring Data Solr.
Here is my code:
PartialUpdate update = new PartialUpdate("id", "123");
update.setValueOfField("mutiValuedField", null);
solrTemplate.saveBean(update);
solrTemplate.commit();
On Fri, Jan 18, 2013 at 6:44 PM, Gora Mohanty wrote:
> On 18 January 2013
Are documents arriving, but your index is empty? Looking at that log,
everything appears to have happened fine, except the replication handler
has put the index in a directory with a suffix:
WARNING: New index directory detected: old=null
new=/solr/cores/bpr/selekta/data/index.20130121090342477
Ja
Thanks Alexandre for correcting the link and Mikhail for sharing the ideas!
Mikhail,
I will need to look closer at your customization of SpansFacetComponent in
the blog post.
Is it so that in this component you are accessing and counting the
matched spans?
Thanks,
Dmitry
On Tue, Jan 22, 2013
OK, check this link: http://pastebin.com/qMC9kDvt
On 23 January 2013 11:35, Upayavira wrote:
> Hmm, don't see it. Not sure if attachments make it to this list.
> Perhaps put it in a pastebin and include a link if too long to include
> in an email?
>
>
> Upayavira
>
> On Wed, Jan 23, 2
You could use wt=xml&tr=foo.xsl and use an XSL stylesheet to transform
the XML into the order you want. (On 3.6 and below, use
wt=xslt&tr=foo.xsl.)
However, I'm not that sure why you would want to - presumably some app
is going to consume this XML, and can't that put them into the right
order?
U
Hmm, don't see it. Not sure if attachments make it to this list.
Perhaps put it in a pastebin and include a link if too long to include
in an email?
Upayavira
On Wed, Jan 23, 2013, at 10:28 AM, Marcin Rzewucki wrote:
Hi,
Previously, I took the lines related to collection I tested. Maybe s
Hi,
Previously, I took only the lines related to the collection I tested. Maybe some
interesting part was missing. I'm sending the full log this time.
It ends up with:
INFO: Finished recovery process. core=ofac
The issue I described is related to the collection called "ofac". I hope the
log is meaningful now.
The way Zookeeper is set up, requiring 'quorum' is aimed at avoiding
'split brain' where two halves of your cluster start to operate
independently. This means that you *have* to favour one half of your
cluster over the other, in the case that they cannot communicate with
each other.
For example. i
the first stage is identifying whether it can sync with transaction
logs. It couldn't, because there's no index. So the logs you have shown
make complete sense. It then says 'trying replication', which is what I
would expect, and the bit you are saying has failed. So the interesting
bit is likely i
Hi Gora and Roman,
Thank you for your valuable comments. I am trying to follow your suggestion.
I will notify you when it's done.
If I face any problem integrating Solr, please help me in the same way in
the future.
Thank you,
Ashim
--
View this message in context:
http://lucene.472066.n3.nabbl
Interesting, that sounds like a bit of an issue really: the cloud is
"hiding" the real error. Presumably the non-ok status 500 (buried at the
bottom of your trace) was where the actual shard was returning the error
(we've had issues with positional stuff before and it normally says
something obvi
This is exactly the problem we are encountering as well: how to deal with
the ZK quorum when we have multiple DCs. Our index is spread so that each
DC has a complete copy and *should* be able to survive on its own, but how
do we arrange ZK to deal with that? The problem with quorum is we need an odd