hi all. I have a large amount of spatial data in GeoJSON format that I get
from MSSQL Server.
I want to be able to index that data and am trying to figure out how to
convert it into WKT format, since Solr only accepts WKT.
Is anyone aware of any Solr module or T-SQL code or C# code that
Sent from the Solr - User mailing list archive at Nabble.com.
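The message above is cut off by the archive, but the conversion it asks about can be sketched. Below is a hedged, illustrative Java sketch (not from the thread) of the formatting half of a GeoJSON-to-WKT conversion: it assumes the polygon coordinates have already been parsed out of the GeoJSON with whatever JSON library you use, and the class and method names are made up.

```java
// Minimal sketch: format GeoJSON-style polygon coordinates as WKT.
// Assumes the coordinates are already parsed out of the GeoJSON; all
// names here are illustrative, not an existing API.
public class GeoJsonToWkt {
    // rings[ring][point][0] = lon, [1] = lat -- GeoJSON order, which
    // WKT keeps as "x y"
    public static String polygonToWkt(double[][][] rings) {
        StringBuilder sb = new StringBuilder("POLYGON(");
        for (int r = 0; r < rings.length; r++) {
            if (r > 0) sb.append(", ");
            sb.append('(');
            for (int p = 0; p < rings[r].length; p++) {
                if (p > 0) sb.append(", ");
                sb.append(rings[r][p][0]).append(' ').append(rings[r][p][1]);
            }
            sb.append(')');
        }
        return sb.append(')').toString();
    }
}
```

Libraries like JTS can also write WKT properly (WKTWriter); the sketch only illustrates that GeoJSON's lon/lat pairs map directly to WKT's "x y" order.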
Does anyone have a blog or wiki with detailed step-by-step instructions on
setting up SolrCloud on multiple JBoss instances?
Thanks in advance,
I am using edismax when executing searches against a set of news articles. I
would also like to boost the scores of matched documents based on another
field in the documents, which I will call "source", which can be set to 3
possible strings. So if the "source" field has the value "a", then I want
to mul
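The message is truncated, but presumably ends in "multiply". A hedged sketch of two edismax options (the query and boost factors are made up; if()/termfreq() assume Solr 4.x):

```text
# additive boost for documents whose source field is "a"
q=election+news&defType=edismax&bq=source:a^3
# multiplicative boost via the edismax boost parameter
q=election+news&defType=edismax&boost=if(termfreq(source,'a'),3,1)
```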
Hello folks,
We are trying to access JMX data from a SOLR 3.6 multi-core
setup and feed it into Nagios. Once we reload the core, JMX no longer
works and we cannot get any data. Prior to moving to SOLR 3.6, I heard
that SOLR-2623 might have fixed the core reload issue. I reloaded one
of the
Hi,
We are planning to move the search of one of our listing-based portals to
the solr/lucene search server from the sphinx search server. But we are facing
a challenge in porting the customized sorting used in our portal. We only
have the last 60 days of data live. The algorithm is as follows:
1. Put
, then 2nd listing from every advertiser and so on.
Now if I go with grouping on advertiserId and use the group.offset,
then I probably also need to do additive filtering on bucket_count. To
explain it better, the pseudo-algorithm will be like:
1. query solr with group.offset 0 and bucket count
Hi,
Any suggestions?
Am I trying to do too much with Solr? Is there any other search engine
that should be used here?
I am looking into the Solr codebase and planning to modify QueryComponent. Will
this be the right approach?
Regards,
Shivam
On Fri, Apr 27, 2012 at 10:48 AM, solr user wrote
Hello,
We recently migrated our SOLR 3.6 server OS from Solaris
to CentOS, and from then on we started seeing "Invalid version
(expected 2, but 60)" errors on one of the query servers (oddly, one
other query server seems fine). If we restart the server having the issue,
everythi
Hello,
We recently migrated our production SOLR 3.6 servers' OS
from Solaris to CentOS, and from then on we started seeing "Invalid
version (expected 2, but 60)" errors on one of the query servers
(oddly, one other query server seems fine). If we restart the
problematic server
Thank you very much for responding, Mr. Miller. There are 5 different
apps deployed on the same server as SOLR, and all apps call SOLR via
SOLRJ with localhost:8080/solr/sitecore as the constructor URL for
HttpSolrServer. Out of all these 5 apps only one has this
issue... if it is really the web
g, there is a replication happening
> immediately prior to the error. I confess I'm not entirely up on the
> version thing, but is it possible you're replicating an index that
> is built with some other version of Solr?
>
> That would at least explain your statement that it ru
May 7, 2012, at 7:37 AM, Erick Erickson wrote:
>
>> Well, I'm guessing that the version of Solr (and perhaps there are
>> classpath issues in here?) are different, somehow, on the machine
>> slave that is showing the error.
>>
>> It's also possible t
I cleaned the entire index and re-indexed it with SOLRJ 3.6. Still I get
the same error every single day. How can I see if the container
returned a partial/nonconforming response, since it may be hidden by
solrj?
Thanks
Ravi Kiran Bhaskar
On Mon, May 7, 2012 at 2:16 PM, Ravi Solr wrote:
> Hello
Thanks for responding, Mr. Heisey... I don't see any parsing errors in
my log, but I see a lot of exceptions like the one listed below... once
an exception like this happens, weirdness ensues. For example, to
check sanity I queried for uniquekey:"111" from the solr admin GUI and it
gav
be causing this issue of null/empty response. If the
server holds up during the weekend then we have the culprit :-)
Thanks to all of you who helped me out. Stay tuned.
Ravi Kiran
On Fri, May 11, 2012 at 1:23 AM, Shawn Heisey wrote:
> On 5/10/2012 4:17 PM, Ravi Solr wrote:
>>
>>
O|sun-appserver2.1.1|org.apache.solr.core.SolrCore|_ThreadID=32;_ThreadName=httpSSLWorkerThread-9001-8;|[sitesearchcore]
webapp=/solr-admin path=/select
params={q=dwts&start=0&rows=10&shards=localhost:9001/solr-admin/sitesearchcore,localhost:9001/solr-admin/deathnoticescore&tracki
I have already triple cross-checked that all my clients are using the
same version as the server, which is 3.6
Thanks
Ravi Kiran
On Tue, May 15, 2012 at 2:09 PM, Ramesh K Balasubramanian
wrote:
> I have seen similar errors before when the solr version and solrj version in
> the client don
configuration. :-)
Thanks
Ravi Kiran Bhaskar
On Tue, May 15, 2012 at 2:57 PM, Ravi Solr wrote:
> I have already triple cross-checked that all my clients are using
> same version as the server which is 3.6
>
> Thanks
>
> Ravi Kiran
>
> On Tue, May 15, 2012 at 2:09 PM,
ndler and trying to understand what
changes we need to make to SolrConfig.xml. I understood what changes need to
be made to schema.xml in a different thread on this forum.
Thanks,
Solr User
Ahmet,
Thanks for the reply and it was very helpful.
The query that I used before changing to dismax was:
/solr/tradecore/spell/?q=curious&wt=json&rows=9&facet=true&facet.limit=-1&facet.mincount=1&facet.field=author&facet.field=pubyear&facet.field=format&f
facet data not returning, and what mistake did I make with the
schema?
Thanks,
Solr User
On Wed, Nov 17, 2010 at 6:42 PM, Ahmet Arslan wrote:
>
>
> Wow you facet on many fields :
>
> author,pubyear,format,series,season,imprint,category,award,age,reading,grade,price
>
> The fields y
Hi Ahmet,
Below is my previous configuration, which used to work correctly.
textSpell
default
searchFields
/solr/qa/tradedata/spellchecker
true
We used to search only in one field, "searchFields", but since
implementing dismax we are searching in different f
I think the URL that you provided, which has the plug-in, will help do that.
Is there a way from Solr to directly get the spelling suggestions as well as
the first suggestion's data at the same time?
For example:
if the search keyword is mooon (typed by mistake instead of moon)
then we need all suggestions
.
What configuration changes do I need to make so that special characters like
hyphen (-) and period (.) are ignored while indexing? Or any other suggestions?
Thanks,
Solr User
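One hedged option (a sketch, not from the thread): strip those characters with a PatternReplaceCharFilterFactory before tokenization. The field type name and pattern are illustrative:

```xml
<fieldType name="text_clean" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- delete hyphens and periods before the tokenizer sees the text -->
    <charFilter class="solr.PatternReplaceCharFilterFactory"
                pattern="[-.]" replacement=""/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Reindexing is required after the change, since this affects index-time analysis.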
Hi Eric,
I use Solr version 1.4.0 and below is my schema.xml
It creates 3 tokens, so "j r r tolkien" works fine but not "jrr tolkien".
I will read about PatternReplaceCharFilterFactory and try it. Please let me
know if I need to do anything differently.
Thanks,
Solr User
On Mon
1991"^2.0 | category:"pubyear 1991" | title:"pubyear 1991"^9.0 |
isbn10:"pubyear 1991" | season:"pubyear 1991" | imprint:"pubyear 1991" |
subtitle:"pubyear 1991"^3.0 | isbn13:"pubyear 1991") (series:2011 |
desc:2011 | bisacsub:2011 | award:2011 | format:2011 | shortdesc:2011 |
pubyear:2011 | author:2011^2.0 | category:2011 | title:2011^9.0 |
isbn10:2011 | season:2011 | imprint:2011 | subtitle:2011^3.0 |
isbn13:2011))~1) ()
DisMaxQParser
Basically we are trying to pass the query string along with a facet field
and the range. Is there any syntax issue? Please help; this is urgent as I
am stuck.
Thanks,
Solr user
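Since the query itself didn't survive the archive cut, here is a hedged sketch of the range-facet syntax (Solr 3.1+); the field and bounds are made up:

```text
q=curious&facet=true&facet.range=pubyear&facet.range.start=1990&facet.range.end=2012&facet.range.gap=1
```

facet.range needs a numeric or date field; on older Solr versions the equivalent was facet.query with explicit [a TO b] ranges.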
"not work"? What, exactly, fails to do what you
> expect?
>
> But the first question I have is "did you reindex after changing your
> schema?".
>
> And have you checked your index to verify that there are values in the fields
> you
> changed?
>
> Be
was getting only 16 results
instead of 8000 results.
How do I get all the search results using dismax? Do I need to configure
anything to make * (asterisk) work?
Thanks,
Solr User
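For the record, dismax does not parse * or *:* in q; the usual match-all is q.alt, which is parsed by the standard parser. A sketch:

```text
qt=dismax&q=&q.alt=*:*&rows=10
```

When q is absent or empty, dismax falls back to q.alt, so *:* works there.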
that query syntax be *:* ?
>
> Regards,
> -- Savvas.
>
> On 6 December 2010 16:10, Solr User wrote:
>
> > Hi,
> >
> > First off thanks to the group for guiding me to move from default search
> > handler to dismax.
> >
> > I have a question related to getti
Hi Shawn,
Yes you did.
I tried it and it did not work, so I asked the same question again.
Now I understand; I tried it directly in the Solr admin and I got all the
search results. I will implement the same on the website.
Thank you so much Shawn.
On Mon, Dec 13, 2010 at 5:16 PM, Shawn Heisey wrote
Sorry, I never did find a solution to that.
If you do happen to figure it out, please post a reply to this thread. Thanks.
Hi All,
I am trying to create indexes from a 400MB XML file using the following
command and I am running into an out-of-memory exception.
$JAVA_HOME/bin/java -Xms768m -Xmx1024m -Durl=http://$SOLR_HOST:$SOLR_PORT/solr/customercarecore/update -jar
$SOLRBASEDIR/dataconvertor/common/lib/post.jar
Hi,
I am working on a migration from Verity K2 to Solr.
At this point I have a parser for the Verity Query Language (the subset we use)
which generates a syntax tree.
I translate this into a couple of filters and one query. This fragmentation is
the reason why I cannot use my parser inside
Hi folks,
we want to migrate our search portal to Solr.
But some of our customers search our information offline with a DVD version.
So we want to estimate the complexity of a Solr DVD version.
This means trimming Solr to work on small computers, the opposite of heavy
loads. So no server
Hi Ezequiel,
In Solr the performance of sorting and faceted search is mainly a question of
main memory.
E.g. Mike McCandless wrote in s.apache.org/OWK that sorting 5m Wikipedia
documents by the title field needs 674 MB of RAM.
But again: my main interest is an example of other companies/product
Hi Folks,
has anyone improved DIH XPathRecordReader to deal with nested xpaths?
e.g.
data-config.xml with
and the XML stream contains
/html/body/h1...
will only fill the field "alltext"; the field "title" will be empty.
This is a known issue from 2009
https://issues.apache.org/jira/browse/SOLR
Hi Lance,
you are right:
XPathEntityProcessor has the attribute "xsl", so I can use XSLT to generate an
XML file "in the form of the standard Solr update schema".
I will check the performance of this.
Best regards
Karsten
btw. "flatten" is an attrib
Hi Lance,
I used XPathEntityProcessor with the attribute "xsl" and generated an XML file
"in the form of the standard Solr update schema".
I lost a lot of performance; it is a pity that XPathEntityProcessor only
uses one thread.
My tests with a collection of 350T
only need these fields for faceted search?
Your problem will be that Solr normally puts an int[searcher.maxDoc()] array in
main memory for each field with facets.
You can avoid this by using facet.method=enum, though that may not fit your case.
Because you do not have multiToken per document, your facets
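For reference, a hedged sketch of the per-field form of that setting (the field name is made up):

```text
facet=true&facet.field=category&f.category.facet.method=enum
```

facet.method=enum iterates the terms and uses filter-cache bitsets instead of the int[maxDoc] FieldCache array, so it trades filter-cache entries for heap.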
Hi,
I'm new to solr. My solr instance version is:
Solr Specification Version: 3.1.0
Solr Implementation Version: 3.1.0 1085815 - grantingersoll - 2011-03-26
18:00:07
Lucene Specification Version: 3.1.0
Lucene Implementation Version: 3.1.0 1085809 - 2011-03-26 18:06:58
Current Time: Tue Apr
Tue, Apr 26, 2011 at 1:35 PM, Robert Muir wrote:
> What do you have in solrconfig.xml for luceneMatchVersion?
>
> If you don't set this, then its going to default to "Lucene 2.9"
> emulation so that old solr 1.4 configs work the same way. I tried your
> example an
Hi,
I'm reading the solr cache documentation -
http://wiki.apache.org/solr/SolrCaching I found there "The current
Index Searcher serves requests and when a new searcher is opened...".
Could you explain when a new searcher is opened? Does it have something
to do with an index commit?
Bes
new searcher will be opened. Before being
> exposed
> to regular clients it's a good practice to warm things up.
>
> Otis
>
> Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
> Lucene ecosystem search :: http://search-lucene.com/
>
>
>
> - Or
Hi,
I can see only the fieldCache (nothing about the filter, query or document
caches) on the stats page. What am I doing wrong? We have two servers with
replication. There are two cores (prod, dev) on each server. Maybe I
have to add something to the solrconfig.xml of the cores?
Best Regards,
Solr Beginner
Solr version:
Solr Specification Version: 3.1.0
Solr Implementation Version: 3.1.0 1085815 - grantingersoll -
2011-03-26 18:00:07
Lucene Specification Version: 3.1.0
Lucene Implementation Version: 3.1.0 1085809 - 2011-03-26 18:06:58
Current Time: Wed Apr 27 14:28:34 CEST 2011
Server Start
Hello,
Pardon me if this has already been answered somewhere; I
apologize for a lengthy post. I was wondering if anybody could help me
understand replication internals a bit more. We have a single
master-slave setup (solr 1.4.1) with the configurations shown
below. Our environment is
heckerFile
2.7G search-data
Thanks,
Ravi Kiran Bhaskar
On Sat, May 7, 2011 at 11:49 PM, Bill Bell wrote:
> I did not see answers... I am not an authority, but will tell you what I
> think
>
> Did you get some answers?
>
>
> On 5/6/11 2:52 PM, "Ravi Solr" wr
Hello All,
I am planning to upgrade from Solr 1.4.1 to Solr 3.1. I
saw some deprecation warnings in the log as shown below
[#|2011-05-09T12:37:18.762-0400|WARNING|sun-appserver9.1|org.apache.solr.analysis.BaseTokenStreamFactory|_ThreadID=53;_ThreadName=httpSSLWorkerThread-9001-13
Thanks Grijesh for responding. I meant that I will use the Lucene 3.1
jars for indexing also from now on. My current index already has a
million docs indexed with the solr 1.4.1 version. I read somewhere that
once the server is upgraded to 3.1, the first commit will
change the indexes to
Hello Mr. Kanarsky,
Thank you very much for the detailed explanation,
probably the best explanation I have found regarding replication. Just to
be sure, I wanted to test solr 3.1 to see if it alleviates the
problems... I don't think it helped. The master index version and
generation are
ter
wrote:
>
> : Thanks Grijesh for responding. I meant that I will use the Lucene 3.1
> : jars for indexing also from now on. My current index already has a
> : million docs indexed with solr 1.4.1 version, I read somewhere that
> : once server is upgraded to 3.1, it is said that the first c
uments, then committing),
> You will cycle through all 10 segments pretty fast.
>
> It appears that if you do go past the 10 segments without replicating, the
> only recourse is for the replicator to do a full index replication instead
> of a delta index replication...
>
> Does th
eneration is greater than slave, try to watch for the index on
> both master and slave the same time to see what files are getting
> replicated. You probably may need to adjust your merge factor, as Bill
> mentioned.
>
> -Alexander
>
>
>
> On Tue, 2011-05-10 at 12:45 -0400,
tell me how you solved it.
Ravi Kiran Bhaskar
On Thu, May 12, 2011 at 6:42 PM, Ravi Solr wrote:
> Thank you Mr. Bell and Mr. Kanarsky, as per your advise we have moved
> from 1.4.1 to 3.1 and have made several changes to configuration. The
> configuration changes have worked nicely til
Hi All,
I am using Solr 1.4.0 and dismax as the request handler. I have the following in
my solrconfig.xml in the dismax request handler tag:
spellcheck
The above tag helps find terms if there are spelling issues. I tried
configuring the terms component with no luck.
May I know how to configure
ex folder on master and slave
> before and after the replication?
>
> -Alexander
>
>
> On Fri, 2011-05-13 at 18:34 -0400, Ravi Solr wrote:
>> Sorry guys spoke too soon I guess. The replication still remains very
>> slow even after upgrading to 3.1 and setting the compr
Hi All,
Please help me in implementing TermsComponent in my current Solr solution.
Regards,
Solr User
On Tue, May 17, 2011 at 4:12 PM, Solr User wrote:
> Hi All,
>
> I am using Solr 1.4.0 and dismax as request handler.I have the following in
> my solrconfig.xml in the dismax req
Hi rajini,
multi-word synonyms like "private schools" normally cause problems.
See e.g. Solr 1.4 Enterprise Search Server, page 56:
"For multi-word synonyms to work, the analysis must be applied at
index-time and with expansion so that both the original words and the
combined w
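An index-time synonym chain along those lines might look like this sketch (the file name is illustrative; expand="true" keeps both the original words and the combined form):

```xml
<analyzer type="index">
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
          ignoreCase="true" expand="true"/>
  <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
```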
> From: mtraynham
> To: solr-user@lucene.apache.org
> Subject: AndQueryNode to NearSpanQuery
> ...
> The SpanNearQueryNode is a class I made that implements FieldableNode
> and extends QueryNodeImpl (as I want all Fieldable children to be from
> the same field, therefore just remember
Original Message
> Date: Thu, 16 Jun 2011 12:39:32 +0200
> From: Tommaso Teofili
> To: solr-user@lucene.apache.org
> Subject: Showing facet of first N docs
> Hi all,
> Do you know if it is possible to show the facets for a particular field
> related only to th
700 (PDT)
> From: rocco2004
> To: solr-user@lucene.apache.org
> Subject: Solr Configuration with 404 error
> I installed Solr using:
>
> java -jar start.jar
>
> However I downloaded the source code and didn't compile it (Didn't pay
> attention). And the error u
I am using Solr 3.3 with the edismax query parser and I am getting
great results. To improve relevancy I want to add some semantic filters to
the query.
E.g. I want to pass the query "red shoes" as q="shoes"&fq=color:red. I have
a service that can tell me that in th
My documents have two prices, "retail_price" and "current_price". I want to
get products which have a sale of x%, where x is dynamic and can be specified
by the user. I was trying to achieve this by using fq.
If I want all Sony TVs that are at least 20% off, I want to write something
like
q="sony t
I read about frange but didn't think about using it like you mentioned :)
Thank you.
On Tue, Jul 19, 2011 at 4:12 PM, Yonik Seeley wrote:
> On Tue, Jul 19, 2011 at 6:49 PM, solr nps wrote:
> > My documents have two prices "retail_price" and "current_price".
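The frange variant being discussed might look like this sketch (l=0.2 meaning at least 20% off; the field names come from the message):

```text
q=sony+tv&fq={!frange l=0.2}div(sub(retail_price,current_price),retail_price)
```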
Hi lucene/solr-folk,
Issue:
Our documents are stable except for two fields which are used for linking
between the docs. So we would like to update these two fields in a batch once
a month (possibly once a week).
We cannot reindex all docs once a month, because we are using XeLDA in some
fields for
we have a fairly complex taxonomy in our search system. I want to store the
taxonomy revision that was used to build the Solr index. This revision
number is not specific to a document; it is specific to the entire index.
I want this revision number to be returned as part of every search.
What
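One hedged approach (not from the thread): put the revision in the request handler defaults and echo it back with every response. The handler name and revision value are illustrative:

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">all</str>
    <str name="taxonomyRevision">r1234</str>
  </lst>
</requestHandler>
```

With echoParams=all the default appears in the responseHeader of every search, without touching the documents.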
Hi abhayd,
XPathEntityProcessor only supports a subset of XPath,
like div[@id="2"] but not [id=2].
Take a look at
https://issues.apache.org/jira/browse/SOLR-1437#commentauthor_12756469_verbose
I solved this problem by using XSLT as a preprocessor (with full XPath).
The drawback is p
Karsten
Original Message
> Date: Mon, 1 Aug 2011 12:17:45 +0200
> From: Chantal Ackermann
> To: "solr-user@lucene.apache.org"
> Subject: Re: Store complete XML record (DIH & XPathEntityProcessor)
> Hi g,
>
> ok, I understand your problem,
Hi Suk-Hyun Cho,
if "myFriend" is the unit of retrieval, you should use it as the lucene document
with the fields "isCool", "gender", "bloodType", ...
if you really want to insert all "myFriends" in one field like your
myFriends = [
"isCool=true SOME_JUNK_HERE gender=female bloodType=O",
"isCoo
Solr, possible
an issue?)
Best regards
Karsten
Original Message
> If one wanted to cut off hits whose score is below some threshold (I know,
> I know, one d
dge of "possible link pattern".
For the lucene indexer this is a black box: there is a service which produces
the keys for outgoing and possibleIncoming from our source (XML) documents;
these keys must be searchable in lucene/solr.
P.P.S. in Context:
http://lucene.472066.n3.nabble.com/U
Hi Erick,
thanks a lot!
This looks like a good idea:
Our queries with the "changeable" fields fit the join idea from
https://issues.apache.org/jira/browse/SOLR-2272
because
- we do not need relevance ranking
- we can separate in a conjunction of a query with the "changeable
}
}
Original Message
> Date: Mon, 08 Aug 2011 10:15:45 +0200
> From: Bernd Fehling
> To: solr-user@lucene.apache.org
> Subject: string cut-off filter?
> Hi list,
>
> is there a string cut-off filter to limit the length
> of a KeywordTokenized string
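One hedged sketch of such a cut-off using PatternReplaceFilterFactory (the 100-character limit is made up):

```xml
<filter class="solr.PatternReplaceFilterFactory"
        pattern="^(.{100}).*$" replacement="$1"/>
```

Tokens of 100 characters or fewer don't match the pattern and pass through unchanged; longer ones keep only their first 100 characters.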
Hi Arcadius,
currently we have a migration project from the Verity K2 search server to Solr.
I do not know IDOL, but Autonomy bought Verity before IDOL was released, so
possibly it is comparable?
Verity K2 works directly on XML files; as a result the query syntax is a little
like XPath, e.g. with
Hello,
I am using Solr 3.3. I have been following instructions at
https://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_3_3/solr/contrib/uima/README.txt
My setup looks like the following.
solr lib directory contains the following jars
apache-solr-uima-3.3.0.jar
commons-digester-2.0.jar
A annotation.
> Hope this helps,
> Tommaso
>
>
> [1] :
>
> http://svn.apache.org/repos/asf/uima/addons/trunk/Tagger/src/main/java/org/apache/uima/SentenceAnnotation.java
>
> 2011/8/17 solr nps
>
> > Hello,
> >
> > I am using Solr 3.3. I have bee
> index time expansion would expand "lms" to these terms
>> > "lms"
>> > "learning management system"
>> >
>> > i.e. not like this:
>> > "lms"
>> > "learning"
>> >
g management
>> > system"
>> >
>> > index time expansion would expand "lms" to these terms
>> > "lms"
>> > "learning management system"
>> >
>> > i.e. not like this:
>> >
your own LogUpdateProcessor to log only the last UniqueKey
4.) you can change DocBuilder#execute to store the uniqueKey in
dataimport.properties
max(id)
With TermsComponent you can easily ask for the first term in a field (so you
could add a field with "1000 - id" to find the la
Hi Avenka,
you asked for a HowTo to add a field "inverseID" which allows calculating
max(id) from its first term:
If you do not use solr you have to calculate "1 - id" and store it in
an extra field "inverseID".
If you fill solr with your own code, add a
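The TermsComponent lookup for that trick might look like this (a sketch; the core name is illustrative):

```text
/solr/core0/terms?terms.fl=inverseID&terms.limit=1
```

Terms are returned in index order, so the first term of the inverted field corresponds to the maximum id.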
Original Message
> Date: Thu, 12 Jul 2012 03:18:47 -0700 (PDT)
> From: Andy
> To: "solr-user@lucene.apache.org"
> Subject: NRT and multi-value facet - what is Solr's limit?
> Hi,
>
> I understand that the cache for multi-value facet is multi-segment. S
he new TermsEnum#ord() method, the class UnInvertedField has already
lost half of its code lines. UnInvertedField would work per segment if the
"ordinal position for a term" did not change on a commit, which is the basic
idea of the taxonomy solution.
So I am quite sure that Solr will adopt th
field in my solr schema that contains indexed geographic
polygon data. I want to find all docs where that polygon intersects a given
lat/long. I was experimenting with returning distance in the result set and
with sorting by distance, and found that the following query works. However,
I don't know what
Thanks David. No worries about the delay; I am always happy and appreciative
when someone responds.
I don't understand what you mean by "All center points get cached into
memory upon first use in a score" in question 2 about the Java OOM errors I
am seeing.
The Solr instance I
Hello,
I have a very simple setup, one master and one slave, configured
as below, but replication keeps failing with the stacktrace shown
below. Note that 3.6 works fine on the same machines, so I am thinking
that I am missing something in the configuration with regards to solr
4.0... can somebody
Wow, that was quick. Thank you very much Mr. Siren. I shall remove the
compression node in the solrconfig.xml and let you know how it went.
Thanks,
Ravi Kiran Bhaskar
On Wed, Sep 5, 2012 at 2:54 AM, Sami Siren wrote:
> I opened SOLR-3789. As a workaround you can remove name="com
The replication finally worked after I removed the compression setting
from the solrconfig.xml on the slave. Thanks for providing the
workaround.
Ravi Kiran
On Wed, Sep 5, 2012 at 10:23 AM, Ravi Solr wrote:
> Wow, That was quick. Thank you very much Mr. Siren. I shall remove the
> compr
Hi Jay,
I would like to see the Zookeeper Watcher as part of DIH in solr.
Possibly you could extend org.apache.solr.handler.dataimport.DataSource.
If you want to call solr without HTTP you can use solrJ:
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer
Best regards
Karsten
where "geohash" type is setup as:
2. my data consists of 2 docs, each having a single polygon (the polygons
are somewhat large so I put them at the end of this message) that is in WKT
format
3. my query is as follows:
http://myserver:myport/solr/core0/select?q=*:*&fq={!v=$geoq%2
Hello,
I have a weird problem: whenever I read a doc from solr and
then index the same doc that already exists in the index (aka
reindexing), I get the following error. Can somebody tell me what I am
doing wrong? I use solr 3.6 and the definition of the field is given
below
Exception
Do you have a "_version_" field in your schema? I believe SOLR 4.0
Beta requires that field.
Ravi Kiran Bhaskar
On Wed, Oct 10, 2012 at 11:45 AM, Andrew Groh wrote:
> I cannot seem to get delete by query working in my simple setup in Solr 4.0
> beta.
>
> I have a single
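For reference, the field definition from the Solr 4.0 example schema looks like this (hedged; check the example schema.xml that ships with your beta):

```xml
<field name="_version_" type="long" indexed="true" stored="true"/>
```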
, Gopal Patwa wrote:
> You need to remove the field after reading the solr doc; when you add a new
> field it will be added to the list, so when you try to commit the updated
> field it will be multi-valued while in your schema it is single-valued
> On Oct 10, 2012 9:26 AM, "Ravi Solr" wrote:
>
>
I am using DirectXmlRequest to index XML. This is just a test case, as
my client would be sending me a SOLR-compliant XML, so I was trying to
simulate it by reading a doc from an existing core and reindexing it.
HttpSolrServer server = new
HttpSolrServer("http://testsolr:8080
Thank you very much Hoss, I knew I was doing something stupid. I will
change the dynamic fields to stored="false" and check it out.
Thanks
Ravi Kiran Bhaskar
On Wed, Oct 10, 2012 at 3:02 PM, Chris Hostetter
wrote:
> : I have a weird problem, Whenever I read the doc
Hi,
I noticed that the backup request
http://master_host:port/solr/replication?command=backup
works only if there are committed index data, i.e.
core.getDeletionPolicy().getLatestCommit() is not null. Otherwise, no backup
is created. It
Hi,
http://localhost:8549/solr/replication?command=enablereplication
does not seem to be working. After making the request, I run
http://localhost:8549/solr/replication?command=indexversion
and here is the response:
0
0
0
0
Notice the indexversion is 0, which is the value after you disable
You are right. Replication was disabled after the server was restarted, and
then I saw the behavior. After I added some data, the command "indexversion"
returned the right values. So it seems Solr behaved correctly.
Thanks,
2009/8/5 Noble Paul നോബിള് नोब्ळ्
> how is the repli
By the way, I was using command=indexversion to verify whether replication is on
or off. Since it seems unreliable, is there a better way to do it?
Thanks,
On Thu, Aug 6, 2009 at 8:43 AM, solr jay wrote:
> You are right. Replication was disabled after the server was restarted, and
> then I s