On 05/10/2020 16:02, Rafael Sousa wrote:
> Having things reindexed from scratch is not
> an option, so, is there a way of creating an 8.6.2 index from a pre-existing
> 6.5 index or something like that?
Sadly there is no such way. If all your fields are stored you might be
able to whip up something.
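For illustration only (host names, the core/collection names, and the file name below are placeholders, and this is a rough sketch rather than a supported procedure): with all fields stored, one could page the documents out of the old 6.5 core with cursorMark and post them into the new 8.6.2 collection:

# Pull a page of stored docs from the old core, sorted by the uniqueKey:
curl "http://old-host:8983/solr/oldcore/select?q=*:*&fl=*&sort=id+asc&rows=1000&wt=json&cursorMark=*"
# Extract the response.docs array from the JSON, then post it to the new
# collection; repeat with the nextCursorMark value from each response:
curl -X POST -H 'Content-Type: application/json' \
  "http://new-host:8983/solr/newcollection/update" -d @docs-page-1.json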
Hi all,
I have a Solr 6.5 index that I need to migrate to version 8.6.2.
Knowing that upgrades across more than one major version are now blocked in
8.6, what is the recommended way of porting an old 6.5 index to
8.6.2? Having things reindexed from scratch is not an option, so, is there a
way of creating an 8.6.2 index from a pre-existing 6.5 index or something
like that?
I changed the attributes and reloaded the collection, but the scores are not
changing; (norm(content_text)) is not changing either.
I did reindexing of the documents, but the scores are still not changing.
Steps I followed:
1. Created fields using the default similarity.
2. Created the content_text field type without a similarity section.
Hi all,
I am experimenting with different parameters of BM25 and Sweetspot
similarity.
I changed the Solr field type definition as given below.
I need clarification on whether changing the similarity in a field type
requires reindexing or not.
{"replace-field-type":{
"name":"content_text",
"class":"solr.TextField",
You’re welcome.
Solr is a huge beast, I don’t think any single individual
knows all the bits and pieces… Or, in my case, can
remember them ;)
> On Apr 27, 2020, at 9:15 AM, Bjarke Buur Mortensen
> wrote:
>
> Wow, thanks, Erick. That's actually much better :-)
> You live and you learn.
>
> Cheers,
> Bjarke
Wow, thanks, Erick. That's actually much better :-)
You live and you learn.
Cheers,
Bjarke
On Mon 27 Apr 2020 at 15.00, Erick Erickson <
erickerick...@gmail.com> wrote:
> What about the Collections API REINDEXCOLLECTION? That has the
> advantage of being something officially supported, puts the source
> collection into read-only mode, uses a much more efficient query
> process (streaming actually) etc.
What about the Collections API REINDEXCOLLECTION? That has the
advantage of being something officially supported, puts the source
collection into read-only mode, uses a much more efficient query
process (streaming actually) etc.
It has the disadvantage of producing a new collection under the
covers.
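As a sketch (the collection names and async id below are placeholder assumptions, not from the thread), a REINDEXCOLLECTION call looks roughly like:

curl "http://localhost:8983/solr/admin/collections?action=REINDEXCOLLECTION&name=mycollection&target=mycollection_v2&async=reindex-1"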
Thanks for the reply,
I'm on solr 8.2 so cursorMark is there.
Doing this from one collection to another collection, and then use a
collection alias is probably the way to go, but actually, my suggestion
was a little more bold:
I'm indexing on top of the same core, i.e from
http://localhost:8983/
Hi Bjarke,
I don’t see a problem with that approach if you have enough resources to handle
both cores at the same time, especially if you are doing that while serving
production queries. The only issue is that if you plan to do that then you have
to have all fields stored. Also note that cursorMark requires sorting on a
unique field.
Hi list,
Let's say I add a copyField to my solr schema, or change the analysis chain
of a field or some other change.
It seems to me to be an alluring choice to use a very simple
dataimporthandler to reindex all documents, by using a SolrEntityProcessor
that points to itself. I have just done this.
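A minimal self-pointing DIH config might look like this sketch (the core name and rows value are assumptions; all fields must be stored for this to work):

<dataConfig>
  <document>
    <entity name="self" processor="SolrEntityProcessor"
            url="http://localhost:8983/solr/mycore"
            query="*:*" fl="*" wt="javabin" rows="500"/>
  </document>
</dataConfig>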
Hi, I have a question -
https://stackoverflow.com/questions/54593171/can-i-use-solr-cloud-replica-for-reindexing
. Can you help me?
See: https://issues.apache.org/jira/browse/SOLR-12646
On Wed, Aug 8, 2018 at 11:24 AM, Bjarke Buur Mortensen
wrote:
> OK, thanks.
>
> As long as it's my dev box, reindexing is fine.
> I just hope that my assumption holds, that our prod solr is 7x segments
> only.
>
OK, thanks.
As long as it's my dev box, reindexing is fine.
I just hope that my assumption holds, that our prod solr is 7x segments
only.
Thanks again,
Bjarke
2018-08-08 20:03 GMT+02:00 Erick Erickson :
> Bjarke:
>
> Using SPLITSHARD on an index with 6x segments just seems to not work,
Bjarke:
Using SPLITSHARD on an index with 6x segments just seems to not work,
even outside the standalone -> cloud issue. I'll raise a JIRA.
Meanwhile I think you'll have to re-index I'm afraid.
Thanks for raising the issue.
Erick
On Wed, Aug 8, 2018 at 6:34 AM, Bjarke Buur Mortensen
wrote:
Erick,
thanks, that is of course something I left out of the original question.
Our Solr is 7.1, so that should not present a problem (crossing fingers).
However, on my dev box I'm trying out the steps, and here I have some
segments created with version 6 of Solr.
After having copied data from m
Rahul, thanks, I do indeed want to be able to shard.
For now I'll go with Markus' suggestion and try to use the SPLITSHARD
command.
2018-08-07 15:17 GMT+02:00 Rahul Singh :
> Bjarke,
>
> I am imagining that at some point you may need to shard that data if it
> grows. Or do you imagine this data to remain stagnant?
Bjarke:
One thing, what version of Solr are you moving _from_ and _to_?
Solr/Lucene only guarantee one major backward revision so you can copy
an index created with Solr 6 to another Solr 6 or Solr 7, but you
couldn't copy an index created with Solr 5 to Solr 7...
Also note that shard splitting i
Bjarke,
I am imagining that at some point you may need to shard that data if it grows.
Or do you imagine this data to remain stagnant?
Generally you want to add SolrCloud to do three things: 1. Increase availability
with replicas 2. Increase available data via shards 3. Increase fault tolerance
> Regards,
> Markus
>
> -----Original message-----
> > From: Bjarke Buur Mortensen
> > Sent: Tuesday 7th August 2018 13:47
> > To: solr-user@lucene.apache.org
> > Subject: Re: Recipe for moving to solr cloud without reindexing
> >
> > Thank you, that is of course a way to go, but I would actually like to be
> > able to shard ...
> > Could I use your approach and add shards dynamically?
> >
> > 2018-08-07 13:28 GMT+02:00 Markus Jelsma :
Hi List,
is there a cookbook recipe for moving an existing solr core to a solr cloud
collection?
We currently have a single machine with a large core (~150gb), and we would
like to move to solr cloud.
I haven't been able to find anything that reuses an existing index, so any
pointers much appreciated.
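One approach discussed later in this thread, sketched here with placeholder names (an assumption-laden outline, not an official procedure): create a one-shard collection, copy the existing index underneath it, then split:

# 1. Create a one-shard collection:
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=1&replicationFactor=1"
# 2. Stop the node, copy the old core's data/index directory over the new
#    replica's data/index directory, then start the node again.
# 3. Optionally split the shard afterwards:
curl "http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycoll&shard=shard1"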
Thanks, made the heap size considerably larger and it's fine now. Thank you
Thanks for the reply. I read the link you provided. I am currently not
specifying a heap size with solr so my understanding is that by default it
will just grow automatically. If I add more physical memory to the VM
without doing anything with heap size, won't that possibly fix the problem?
Thanks
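Note that the bin/solr script does not grow the heap automatically; it uses a fixed default (512m in most versions) regardless of physical RAM, so the heap has to be raised explicitly. For example (the 4g value is just a placeholder):

bin/solr start -m 4g
# or, persistently, in solr.in.sh:
SOLR_HEAP="4g"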
Hi,
We performed a full reindex for the first time against our largest database
and on two new VMs dedicated to solr indexing. We have two solr nodes
(solrCloud/solr7.3) with a zookeeper cluster. Several hours into the
reindexing process, both solr nodes shut down with
I'd set your soft commit interval to as long as you can stand. Every
soft commit opens a new searcher and does significant work, including
throwing away your queryResultCache and filterCache.
The time here should be as long as you can afford to not be able to
search updates. Don't go totally overboard, though.
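In solrconfig.xml terms, that advice translates to something like this sketch (the interval values are placeholders, not a recommendation):

<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>300000</maxTime>
</autoSoftCommit>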
Thanks, I changed the autoSoftCommit from -1 to 3000 and that seemed to do
the trick.
Most likely you don't have your autocommit settings set up correctly
and, at a guess, your indexing process fires a commit at the end.
If I'm right, autoCommit has "openSearcher" set to "false" and
autoSoftCommit is either commented out or set to -1.
More than you might want to know:
https://luc
Hello,
We are migrating from solr 4.7 to 7.3. It takes about an hour to perform a
complete re-index against our development database. During this upgrade (to
7.3) testing, I typically wait for the re-index to complete before doing
sample queries from our application. However, I got a bit impatient
Does anyone have a clue?
…do we have to
reindex all data for that tenant? (If not, is there a way without fully
reindexing a tenant?) This probably will also fail because the id changes,
but what if we use another field for routing and the id stays the same?
Thank You
Shawn:
bq: The bottleneck is definitely Solr.
Since you commented out the server.add(doclist), you're right to focus
there. I've seen a few things that help.
1> batch the documents, i.e. in the doclist above the list should be
on the order of 1,000 docs. Here are some numbers I worked up one time…
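Batching in HTTP terms just means many documents per update request instead of one, e.g. (collection and field names below are placeholders):

curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/mycollection/update' -d '[
  {"id": "1", "title_t": "first doc"},
  {"id": "2", "title_t": "second doc"}
]'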
On 3/24/2016 11:57 AM, tedsolr wrote:
> My post was scant on details. The numbers I gave for collection sizes are
> projections for the future. I am in the midst of an upgrade that will be
> completed within a few weeks. My concern is that I may not be able to
> produce the throughput necessary to
> …day) cannot be supported by Solr.
…and you'll have to re-index from the system
of record.
Best,
Erick
On Thu, Mar 24, 2016 at 7:18 AM, tedsolr wrote:
> With a properly tuned solr cloud infrastructure and less than 1B total docs
> spread out over 50 collections where the largest collection is 100M docs,
> what is a reasonable target goal for entirely reindexing a single
> collection?
With a properly tuned solr cloud infrastructure and less than 1B total docs
spread out over 50 collections where the largest collection is 100M docs,
what is a reasonable target goal for entirely reindexing a single
collection?
I understand there are a lot of variables, so I'm asking hypothetically.
Hi kshitij,
We are using the following configuration and it is working fine:
<entity processor="SolrEntityProcessor" url="http://11.11.11.11:8983/solr/classify"
        query="*:*" fl="id,title,content,segment" wt="javabin" />
Please give processor="SolrEntityProcessor" and also give fl
(fields which you want to be saved in your new instance).
Hi,
I am using the following tag:
I am able to connect but indexing is not working. Both my Solr instances have the same version.
On Wed, Feb 24, 2016 at 12:48 PM, Neeraj Bhatt
wrote:
> Hi
>
> Can you give your data import tag details in
> db-data-config.xml?
> Also, do your previous and new Solr have different versions?
Hi,
Can you give your data import tag details in db-data-config.xml?
Also, do your previous and new Solr have different versions?
Thanks
On Wed, Feb 24, 2016 at 12:08 PM, kshitij tyagi
wrote:
> Hi,
>
> I am following the following article
> https://wiki.apache.org/solr/HowToReindex
> to reindex the data using Solr itself as a datasource.
Hi,
I am following this article:
https://wiki.apache.org/solr/HowToReindex
to reindex the data using Solr itself as a datasource.
That means one Solr instance has all fields with stored=true and indexed=false.
When I am using this instance as a datasource and indexing it on the other
instance, the data…
…I stopped the leader that was not syncing properly and let another node become
the leader, then reindexed all docs. Once the reindexing was done I started
the node that was causing the issue and it synced properly :-)
Thanks
Ravi Kiran Bhaskar
On Mon, Sep 28, 2015 at 10:26 AM, Gili Nachum wrote:
Were all of the shard replicas in the active state (green color in the admin UI)
before starting?
Sounds like it; otherwise you wouldn't hit the replica that is out of sync.
Replicas can get out of sync, and report being in sync, after a sequence of
stop/start without a chance to complete syncing.
See if it might have happened.
Erick... There is only one type of String,
"sun.org.mozilla.javascript.internal.NativeString:", and no other variations
of that in my index, so no question of missing it. Point taken regarding
the CURSORMARK stuff; yes, you are correct, my head is so numb at this point
after working 3 days on this, I wasn't thinking straight.
bq: 3. Erick, I wasn't getting all 1.4 mill in one shot. I was initially using
100 docs per batch, which I later increased to 500 docs per batch. Also it
would not be an infinite loop if I commit for each batch, right !!??
That's not the point at all. Look at the basic logic here:
You run for a while…
Erick & Shawn, I incorporated your suggestions.
0. Shut off all other indexing processes.
1. As Shawn mentioned, set batch size to 1.
2. Loved Erick's suggestion about not using a filter at all: sort by
uniqueId and put the last known uniqueId as the next query's start, while still
using cursor marks as well.
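In query form, that pattern looks roughly like this (collection and field names are placeholders): sort on the unique key and page with cursorMark, taking nextCursorMark from each response:

curl "http://localhost:8983/solr/mycollection/select?q=*:*&sort=id+asc&fl=id,somefield&rows=500&cursorMark=*"
# pass the returned nextCursorMark value as cursorMark on the next request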
Thank you Erick & Shawn for taking significant time off your weekends to
debug and explain in great detail. I will try to address the main points
from your emails to provide more situation context for better understanding
of my situation
1. Erick, as part of our upgrade from 4.7.2 to 5.3.0 I re-indexed…
Oh, one more thing. _assuming_ you can't change the indexing process
that gets the docs from the system of record, why not just add an
update processor that does this at index time? See:
https://cwiki.apache.org/confluence/display/solr/Update+Request+Processors,
in particular the StatelessScriptUpdateProcessorFactory.
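A minimal chain sketch (the script filename is a placeholder; the script itself would do the string fix-up at index time):

<updateRequestProcessorChain name="fix-string">
  <processor class="solr.StatelessScriptUpdateProcessorFactory">
    <str name="script">fix-string.js</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>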
On 9/26/2015 10:41 AM, Shawn Heisey wrote:
> <autoCommit>
>   <maxTime>30000</maxTime>
> </autoCommit>
This needs to include openSearcher=false, as Erick mentioned. I'm sorry
I screwed that up:
<autoCommit>
  <maxTime>30000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
Thanks,
Shawn
>>>> …I'll bet you'd get through your update a lot faster that way.
>>>>
>>>> Best,
>>>> Erick
>>>>
>>>> On Fri, Sep 25, 2015 at 5:07 PM, Ravi Solr wrote:
>>>> > Thanks for responding Erick. I set the "start" to zero
On 9/25/2015 10:10 PM, Ravi Solr wrote:
> thank you for taking time to help me out. Yes I was not using cursorMark, I
> will try that next. This is what I was doing; it's a bit shabby coding, but
> what can I say my brain was fried :-) FYI this is a side process just to
> correct a messed up string.
>>> > …" returns zero docs, causing my while loop to
>>> > exit... so I was trying to see if I was doing the right thing or if
>>> > there is an alternate way to do heavy indexing.
>>> >
>>> > Thanks
>>> >
>>> > Ravi Kiran Bhaskar
>> >> …If you're absolutely sure no commits
>> >> are taking place even that should be OK.
>> >>
>> >> The "deep paging" stuff could be helpful here, see:
>> >>
>> >>
>> https://lucidworks.com/blog/coming-soon-to-solr-e
> >> > …any other
> >> > good way that I did not know of, that's all 😀
> >> >
> >> > Thanks
> >> >
> >> > Ravi Kiran Bhaskar
> >> >
> >> > On Friday, September 25, 2015, Walter Underwood <
> >> > wun...@wunderwood.org> wrote:
>> >> It might be faster to fetch all the docs from Solr and save them in
>> >> files. Then modify them. Then reload all of them. No guarantee, but it
>> >> is worth a try.
>> >>
>> >> Good luck.
Walter, not in a mood for banter right now. It's 6:00pm on a Friday and
I am stuck here trying to figure out reindexing issues :-)
I don't have the source of the docs, so I have to query Solr, modify, and put it
back, and that is seeming to be quite a task in 5.3.0. I did reindex several
times with 4.7.2 in a master/slave env without any issue. Since then we
have moved to…
Sure.
1. Delete all the docs (no commit).
2. Add all the docs (no commit).
3. Commit.
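In curl terms, assuming a core named "mycore" (a placeholder), that recipe might look like:

# 1. Delete everything, without committing:
curl 'http://localhost:8983/solr/mycore/update' -H 'Content-Type: text/xml' \
  -d '<delete><query>*:*</query></delete>'
# 2. Re-add all documents with your normal indexing process (still no commits).
# 3. Commit once at the end:
curl 'http://localhost:8983/solr/mycore/update' -H 'Content-Type: text/xml' \
  -d '<commit/>'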
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Sep 25, 2015, at 2:17 PM, Ravi Solr wrote:
>
> I have been trying to re-index the docs (about 1.5 million) as one of the
I have been trying to re-index the docs (about 1.5 million) as one of the
fields needed part of its string value removed (accidentally introduced). I was
issuing a query for 100 docs, getting 4 fields, and updating the docs (atomic
update with "set") via the CloudSolrClient in batches. However, from time to time…
Reload will get the new schema definitions. But all the indexed
content will stay as is and will probably start causing problems if
you changed analyzer definitions seriously.
You probably will have to reindex from scratch/external source.
Sorry.
Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter: http://www.solr-start.com/
Hi,
We have an over-engineered index that we would like to rework. It's already
holding 150M documents with 94GB of index size. We have a high-index/high-query
system running Solr 4.5.
My question: if we update the schema, can we run a reindex by using the "Reload"
action in the CoreAdmin UI? Will that reindex the existing content?
Thanks Shawn for the insight. Will try your recommendations.
Gopal
Thanks, I am sure that we missed this command-line property; this
gives me more information on how to use the latest Solr scripts more
effectively.
Thanks,
Rajesh.
On Mon, Apr 27, 2015 at 12:04 PM, Shawn Heisey wrote:
> On 4/27/2015 9:15 AM, Gopal Jee wrote:
> > We have a 26 node solr cloud cluster.
On 4/27/2015 9:15 AM, Gopal Jee wrote:
> We have a 26 node solr cloud cluster. During heavy re-indexing, some of
> nodes go into recovering state.
> as per current config, soft commit is set to 15 minute and hard commit to
> 30 sec. Moreover, zkClientTimeout is set to 30 sec in solr nodes.
> Please advise.
Our production Solr nodes were having a similar issue: with 4 nodes everything
is normal, but when we tried to increase the replicas (nodes) to 10, most of
them went into recovery.
Our config params:
nodes: 20 (a replica on each node)
soft commit: 6 sec
hard commit: 5 min
indexing scheduled time: ever…
We have a 26 node solr cloud cluster. During heavy re-indexing, some of
nodes go into recovering state.
as per current config, soft commit is set to 15 minute and hard commit to
30 sec. Moreover, zkClientTimeout is set to 30 sec in solr nodes.
Please advise.
Thanks
Gopal
I have the following problem: I have many (let's say hundreds of millions) of
documents in an existing distributed index that have a field with a variety of
values. Two of these values are "dog" and "puppy". I have decided that I want
to reclassify these to just all be "dog".
I do queries on this field…
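A hedged sketch of one way to do a mass reclassification like this (the field and collection names are placeholders, and atomic updates require the other fields to be stored or docValues): page through the matching ids with cursorMark, then send atomic "set" updates:

# find the ids to change:
curl "http://localhost:8983/solr/mycoll/select?q=label_s:puppy&fl=id&sort=id+asc&rows=1000&cursorMark=*"
# send atomic updates for each page of ids:
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/mycoll/update' \
  -d '[{"id": "doc1", "label_s": {"set": "dog"}}]'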
Hi,
we are currently facing a new problem while reindexing one of our SOLR
4.4 instances:
We are using SOLR 4.4 getting data via DIH out of a MySQL Server.
The data is constantly growing.
We have reindexed our data a lot of times without any trouble.
The problem can be reproduced.
There is
…not from Solr but from the system of record, such as a DB.
> …docs not getting indexed in solr 4.x
> as I mentioned in my original email. The reason I am reindexing is that
> with solr 4.x EnglishPorterFilterFactory has been removed and also I wanted
> to add another copyField of all field values into destination "allfields"
>
> As per your suggestion…
Thank you very much for responding, Mr. Høydahl. I removed the recursion,
which eliminated the stack overflow exception. However, I am still
encountering my main problem with the docs not getting indexed in solr 4.x,
as I mentioned in my original email. The reason I am reindexing is that
with solr 4.x EnglishPorterFilterFactory has been removed, and also I wanted
to add another copyField of all field values into destination "allfields".
…in one go!
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
On 24 Mar 2014 at 22:36, Ravi Solr wrote:
> Hello,
>We are trying to reindex as part of our move from 3.6.2 to 4.6.1
> and have faced various issues reindexing 1.5 Million docs. We dont use
> SolrCloud; it's still a Master/Slave config.
Hello,
We are trying to reindex as part of our move from 3.6.2 to 4.6.1
and have faced various issues reindexing 1.5 million docs. We don't use
SolrCloud; it's still a Master/Slave config. For testing this I am using a
single test server, reading from it and putting back into the same index.
We…
Abhishek,
stemming is applied before the tokens get into the index.
Changing the stemming of the indexer cannot be done without reindexing.
paul
On 2 Feb 2014 at 06:23, "abhishek jain" wrote:
> Hi Friends,
>
> Is it possible to remove stemming without having to reindex the entire data?
Hi Friends,
Is it possible to remove stemming without having to reindex the entire data?
I am using KStem.
Can we do so by query itself? I'm not sure how.
I am not using dismax.
Thanks
Abhishek
In Solr 4.5, I'm trying to create a new collection on the fly. I have a
data dir with the index that should be in there, but the CREATE command
makes the directory be:
<collectionname>_shard1_replica#
I was hoping that making a collection named something would use a directory
with that name, to let me use the data dir I already have.
What am I doing?
I am querying from one core and reindexing the data into another core.
Why?
I am querying using a regular expression; it gives me results but does not tell
me how many unique values were found, with their individual counts (facet). I
am querying and reindexing into another core on the same machine.
On Fri, May 31, 2013 at 3:57 AM, Michael Sokolov
wrote:
> On UNIX platforms, take a look at vmstat for basic I/O measurement, and
> iostat for more detailed stats. One coarse measurement is the number of
> blocked/waiting processes - usually this is due to I/O contention, and you
> will want to
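The standard invocations for the tools mentioned above look like this (the 5-second interval is arbitrary):

vmstat 5      # memory, CPU, and block-I/O summary every 5 seconds
iostat -x 5   # extended per-device I/O statistics every 5 seconds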
On Wed, May 29, 2013 at 5:37 PM, Shawn Heisey wrote:
> It's impossible for us to give you hard numbers. You'll have to
> experiment to know how fast you can reindex without killing your
> servers. A basic tenet for such experimentation, and something you
> hopefully already know: You'll want to
On 5/29/2013 6:01 AM, Dotan Cohen wrote:
> I mean 'overload' Solr in the sense that it cannot read, process, and
> write data fast enough because too much data is being handled. I
> remind you that this system is writing hundreds of documents per
> minute. Certainly there is a limit to what Solr can handle.
I presume you are running Solr on a multi-core/CPU server. If you kept a
single process hitting Solr to re-index, you'd be using just one of
those cores. It would take as long as it takes, I can't see how you
would 'overload' it that way.
I guess you could have a strategy that pulls 100 documents
I see that I do need to reindex my Solr index. The index consists of
20 million documents with a few hundred new documents added per minute
(social media data). The documents are mostly smaller than 1KiB of
data, but some may go as large as 10 KiB. All the data is text, and
all indexed fields are stored.
Re: Difference Between Indexing and Reindexing
On 4 April 2013 19:29, Furkan KAMACI wrote:
> I use Nutch 2.1 and using that:
> bin/nutch solrindex http://localhost:8983/solr -index
> bin/nutch solrindex http://localhost:8983/solr -reindex
[...]
Sorry, but are you sure that you are using 2.1? Here is
what I get…
It may be a deprecated usage (maybe not), but I certainly can run -index and
-reindex on Nutch 2.1.
2013/4/4 Gora Mohanty
> On 4 April 2013 20:16, Gora Mohanty wrote:
> > On 4 April 2013 19:29, Furkan KAMACI wrote:
> >> I use Nutch 2.1 and using that:
> >>
> >> bin/nutch solrindex http://localhos