Thanks Jason. Appreciate your response.
Thanks
Fiz N.
On Thu, Jun 25, 2020 at 5:42 AM Jason Gerlowski
wrote:
Hi Fiz,
Since you're just looking for a POC solution, I think Solr's
"bin/post" tool would probably help you achieve your first
requirement.
But I don't think "bin/post" gives you much control over the fields
that get indexed - if you need the file path to be stored, you might
be better off writing
Hello Solr experts,
I am using the standalone version of SOLR 8.5 on a Windows machine.
1) I want to index all types of files under different directories in the
file share.
2) I need to index the absolute path of each file and store it in a Solr
field. I need that info so that the end user can click and open the file.
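For the two requirements above, a rough sketch of a small crawler that walks the share and posts each file to Solr's Extracting Request Handler, storing the absolute path via a `literal.*` parameter. The core name `files`, the field name `path_s`, and the port are assumptions to adjust to your setup; bin/post does something similar but gives less control over the stored fields.

```python
import os
import urllib.parse
import urllib.request

def collect_files(root):
    """Walk a directory tree and return the absolute path of every file."""
    paths = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            paths.append(os.path.abspath(os.path.join(dirpath, name)))
    return sorted(paths)

def index_file(solr_base, core, path):
    """Send one file to the extract handler, storing its absolute path
    in a literal field so the UI can link back to it.
    `path_s` is a hypothetical stored string field in your schema."""
    params = urllib.parse.urlencode({
        "literal.id": path,
        "literal.path_s": path,
        "commit": "true",
    })
    url = f"{solr_base}/{core}/update/extract?{params}"
    with open(path, "rb") as f:
        req = urllib.request.Request(
            url, data=f.read(),
            headers={"Content-Type": "application/octet-stream"})
    return urllib.request.urlopen(req).status
```

Usage would be something like `for p in collect_files(r"\\share\docs"): index_file("http://localhost:8983/solr", "files", p)`.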
...JSON format without having to change schema and index, then I would
have no issues with JSON.
I can not use the "select" handler as it does not include parent/child
relationships.
The options I have are the following, I guess. I am not sure if they are
real possibilities though:
1. Find a way to load pre-created index files either through
SolrCloudClient or directly to ZK
2. Find a way to export the data in JSON format without having to make all
fields docValues enabled.
3. Use the Merge Index tool with an empty index and a real index. I don't
know if it is possible.
Because of this, I would prefer if there is a way to load pre-created index
files into the cluster.
I checked the solr test framework and related examples but couldn't find
any example of index files being loaded in cloud mode.
Is there a way to load index files into solr running in cloud mode?
Thanks!
Pratik
Yes unix.
It was an amazing moment.
On Mon, Jun 4, 2018, 11:28 PM Erick Erickson
wrote:
bq. To be clear I deleted the actual index files out from under the
running master
I'm assuming *nix here since Windows won't let you delete a file that
has an open file handle...
Did you then restart the master? Aside from any checks about refusing
to replicate an empty index, just de
Check the logs. I bet it says something like “refusing to fetch empty index.”
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Jun 4, 2018, at 1:41 PM, Jeff Courtade wrote:
I am thankful for that!
Could you point me at something that explains this maybe?
J
On Mon, Jun 4, 2018, 4:31 PM Shawn Heisey wrote:
On 6/4/2018 12:15 PM, Jeff Courtade wrote:
> This was strange as I would have thought the replica would have replicated
> an empty index from the master.
Solr actually has protections in place to specifically PREVENT index
replication when the master has an empty index. This is so that an
accidental index deletion on the master does not get replicated to the slaves.
that log if it is not clear.
Regards,
Aman
On Mon, Jun 4, 2018, 23:57 Jeff Courtade wrote:
To be clear I deleted the actual index files out from under the running
master.
On Mon, Jun 4, 2018, 2:25 PM Jeff Courtade wrote:
> So are you saying it should have?
>
> It really acted like it was functioning normally; this happened on 5
> different pairs in the same way.
Hi,
This I think is a very simple question.
I have a solr 4.3 master slave setup. Simple replication.
The master and slave were both running and synchronized up to date.
I went on the master and deleted the index files while solr was running.
Solr created new empty index files and continued to serve requests.
The slave did not delete its indexes and kept all of the old data in place.
>> curiosity. Perhaps making a replication backup and then restoring on the
>> new server would be better. In the middle of other things now, will try a
>> few of those, plus some other ideas.
I think the problem is that you're copying the index files into
${instanceDir}/data and not ${instanceDir}/data/index. The index
directory is what Solr is actually going to use.
Delete everything that already exists in the index directory before
putting the new index files in place.
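A sketch of that procedure, assuming Solr (or at least the core) is stopped while the files are swapped in; all paths are illustrative:

```python
import os
import shutil

def install_index(index_src, data_dir):
    """Copy pre-built Lucene index files into <data_dir>/index, the
    directory Solr actually reads, after clearing whatever is there.
    Run this only while Solr is stopped (or the core is unloaded)."""
    index_dir = os.path.join(data_dir, "index")
    if os.path.isdir(index_dir):
        shutil.rmtree(index_dir)          # delete the old segments first
    os.makedirs(index_dir)
    for name in os.listdir(index_src):
        # shutil.copy2 copies byte-for-byte, so there is no text/binary
        # translation to worry about (index files are binary).
        shutil.copy2(os.path.join(index_src, name),
                     os.path.join(index_dir, name))
    return sorted(os.listdir(index_dir))
```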
I just created a tar file, actually a tar.gz file, and scp'd it to a server.
At first I was worried that the gzip caused issues, but as I mentioned, no
errors on startup, and I thought I would see some. @Erick, how would you
recommend doing it? This is going to be less of an issue b/c I need to build
the inde
One note, be _very_ sure you copy in binary mode...
On Thu, Feb 1, 2018 at 1:33 PM, Shawn Heisey wrote:
On 2/1/2018 12:56 PM, Jeff Dyke wrote:
That's exactly what I thought as well. The only difference is that OSX is on
7.2 (and I can try to downgrade), while I grabbed 7.2.1 for the install on
Ubuntu. I didn't think a minor point release would matter.
solr@stagingsolr01:~/data/issuers/data$ ls -1
981552
index
_mg8.dii
_mg8.dim
_mg8.fdt
_mg8.fdx
_mg
On 2/1/2018 11:14 AM, Jeff Dyke wrote:
I've been developing locally on OSX and am now going through the process of
automating the installation on AWS Ubuntu. I have created a core, added my
fields and then untarred the data directory on my Ubuntu instance,
restarted solr (to hopefully reindex), but no documents are seen.
Nor are any errors.
The problem I am having is the old index files are not being deleted on the
slave. After each replication, I can see the old files still hanging around.
This causes the data directory size to increase by the index size every
replication until the disk fills up.
master:
-rw-r- 1
What currently stops you from indexing those files? Are you
getting an exception? Is it slow? Something else?
Regards,
Alex.
http://www.solr-start.com/ - Resources for Solr users, new and experienced
On 13 March 2017 at 15:52, Victor Hugo Olvera Morales wrote:
> On Mar 13, 2017, at 12:52 PM, Victor Hugo Olvera Morales wrote:
>
> How can I index files with more than 300 MB in size in solr-6.2.1?
Is that 300 MB of text or some source format, like PDF?
The King James Bible is only 4 MB of text, so 300 MB is extremely large.
wunder
How can I index files with more than 300 MB in size in solr-6.2.1?
I know Solr used to have issues with indexes on NFS; there was a
segments.gen file specifically for issues around that, though that was
removed in 5.0. But you say this happens on local disks too, so that would
rule NFS out of it.
I still think you should look at ensuring your merge policy is turned off
in solrconfig.xml.
Hi,
If you look at the ls output in my last post you will see that SolR has
deleted the segments_f file. Thus the index can no longer be loaded.
I also had other cases in which the data directory of SolR was empty after
the SolR shutdown.
And yes, it is bad.
Best regards
Andre
which I assume from the name means it never merges. :)
On 15 January 2016 at 10:58, Moll, Dr. Andreas wrote:
Hi,
we still have the problem that SolR deletes index files on closing the
application if the index was changed in the meantime from the production
application (which has an embedded SolR-Server).
The problem also occurs if we use a local file system instead of a NFS.
I have changed the loglevel
We produce the index via a separate instance of SolR.
The filesystem used by the two SolR instances is an NFS share.
The index lock type is configured as: single
When the production SolR writes changes in the index directory while the SolR
search server is running,
the search server deletes all index files.
On 12/17/2015 8:00 AM, Moll, Dr. Andreas wrote:
Hi,
we have been using SolR for some years now and are currently switching from
SolR 3.6 to 5.3.1.
SolR 5.3.1 deletes all index files when it shuts down if there were external
changes on the index files
(in our case from a second SolR-server which produces the index).
Is this behaviour intentional?
Hello Majisha,
Nutch's Solr indexing plugin has support for stripping non-utf8 character
codepoints from the input, but it does so only on the content field if I
remember correctly.
However, that stripping method was not built with the invalid middle byte
exception in mind, and I have not seen
On 3/22/2015 5:04 PM, Majisha Parambath wrote:
Hello,
As part of an assignment, we initially crawled and collected NSF and NASA
Polar Datasets using Nutch. We used the nutch dump command to dump out the
segments that were created as part of the crawl.
Now we have to index this data into Solr. I am using java -jar post.jar
filename to post to
This is like allowing many users to access the
disk your database is on. Don't do it.
If by "many users on a server", you mean
many users having shell access, well, you have
many more problems than securing the Solr
index.
If you mean you have many users accessing an app
that lives on the server,
Hi,
Is there any way to secure the solr index directory? I have many users on
a server and I want to restrict file access to only the administrator.
Does securing the index directory affect solr accessing the folder?
Thanks,
Prasi
On Tue, Sep 17, 2013 at 6:36 AM, Shawn Heisey wrote:
On 9/17/2013 12:32 AM, YouPeng Yang wrote:
Hi
Another weird problem.
When we set up the autocommit properties, we supposed that an index file
would be created on every commit, so that the size of the index files would
be large enough. We do not want to keep too many small files as in [1].
How do we control the size of the index files?
[1]
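Commit frequency is what controls how often a new segment is flushed, while the merge policy then folds small segments into larger ones. A hedged solrconfig.xml sketch (all values illustrative) that commits less often, so each flushed segment starts out larger:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Flush a new segment at most once a minute, or every 100,000 docs,
       whichever comes first; tune these to taste. -->
  <autoCommit>
    <maxDocs>100000</maxDocs>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
```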
> At first, I moved the 2.7GB index data to another new Solr Server in
> tomcat7. After I started tomcat, I found the total number of docs was just
> half of the original number.
> So I thought that maybe the remaining docs were not committed to index
> files, and the tlog needed to be replayed.
You need to turn on autoCommit in your solrconfig.xml.
...the total number of docs was just half of the original number.
So I thought that maybe the remaining docs were not committed to index
files, and the tlog needed to be replayed.
Subsequently, I moved the 2.7GB index data and 4.1GB tlog data to the new
Solr Server in tomcat7.
After I started tomcat, an exception
-- Jack Krupansky
-----Original Message-----
From: Rajesh Jain
Sent: Thursday, July 25, 2013 3:57 PM
To: solr-user@lucene.apache.org
Subject: Solr Index Files in a Directories
I have a flume sink directory where new files are being written periodically.
How can I instruct solr to index the files in the directory every time a
new file gets written?
Any ideas?
Thanks,
Rajesh
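Solr itself won't watch a directory. One option is a small polling loop outside Solr that notices new files in the sink directory and hands each one to whatever posting mechanism you use (bin/post, post.jar, or an HTTP call); this is a sketch, and all names here are illustrative:

```python
import os
import time

def new_files(directory, seen):
    """Return files in `directory` that have not been seen before,
    updating `seen` in place."""
    current = {name for name in os.listdir(directory)
               if os.path.isfile(os.path.join(directory, name))}
    fresh = sorted(current - seen)
    seen |= current
    return fresh

def watch(directory, post, interval=5.0, rounds=None):
    """Poll `directory` and call `post(path)` for every new file.
    `post` would wrap bin/post or an HTTP call to /update/extract."""
    seen = set()
    n = 0
    while rounds is None or n < rounds:
        for name in new_files(directory, seen):
            post(os.path.join(directory, name))
        time.sleep(interval)
        n += 1
```

A flume sink may also write temporary files while a file is still open; in practice you would filter those out (e.g. skip names ending in `.tmp`) before posting.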
...one shard, so Node A is leader. Then I index some docs to it. Then I
created the same collection using CoreAdmin on B to become a replica. I
found that solr will sync all index files from A to B.
Under B's data dir, I have: an index.20130318083415358 folder which has all
the synced index files, index.properties, replication.properties and a tlog
folder (empty inside).
Then I removed the collection from node B using CoreAdmin UNLOAD; I keep
all files in B's data dir.
I’m using Solr/Lucene 3.6 under Tomcat 6.
When shutting down an indexing server after much indexing activity,
occasionally, I see the following NullPointerException trace from Tomcat:
INFO: Stopping Coyote HTTP/1.1 on http-1800
Exception in thread "Lucene Merge Thread #1"
org.apache.lucene.i
Hi,
Just discovered that this seems to be dependent on *noCFSRatio [Lucene 2790]*.
If I need to make it use compound file format by default, where should I
change that? Can this be changed only at code level, or is there any
config setting which allows me to specify that it should always be compound?
Hi,
I am unable to create the compound index format in 3.6.1 in spite of setting
it as true. I do not see any .cfs file; instead all the
.fdx, .frq etc. are seen, and I see segments_8 even though the mergefactor
is at 4. Should I not see only 4 segment files at any time?
Please find attached schema an
Well, since solr is running under Tomcat, I would assume it inherits the
rights from it. Now the question is, what are the rights Tomcat runs with
under Windows?
On the Windows shell: in Win 7, for example, in order to perform Admin-level
changes you need to start it under Admin.
Dmitry
On Sat, Jun
I have been putting together an application using Quartz to run several
indexing jobs in sequence using SolrJ and Tomcat on Windows. I would like the
Quartz job to do the following:
1. Delete index directories from the cores so each indexing job starts
fresh with empty indexes to populate
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-index-files-GZIP-compression-tp3867562p3867562.html
Sent from the Solr - User mailing list archive at Nabble.com.
On Mon, Mar 19, 2012 at 5:48 PM, vybe3142 wrote:
> Thanks for the response
>
> No, the file is plain text.
>
> All I'm trying to do is index plain ASCII text files via a remote reference
> to their file paths.
The XML update handler expects a specific format of XML.
The JSON, CSV, and javabin update handlers each expect their own formats as well.
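For reference, the shape the XML update handler expects looks like this; the field names here are placeholders for whatever your schema defines:

```xml
<add>
  <doc>
    <field name="id">doc-1</field>
    <field name="title">An example document</field>
    <field name="text">Plain text pulled from the file goes here.</field>
  </doc>
</add>
```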
BTW, .. using the client I pasted, I get the same error even with the
standard supplied executable SOLR jar.
...behavior is tied to the BinaryRequestWriter(). There's got to be some
built-in functionality in SOLR that will enable me to achieve this.
On Mon, Mar 19, 2012 at 4:38 PM, vybe3142 wrote:
> Okay, I added the javabin handler snippet to the solrconfig.xml file
> (actually shared across all cores). I got further (the request made it past
> tomcat and into SOLR) but haven't quite succeeded yet.
>
> Server trace:
> Mar 19, 2012 3:31:35
ost:8080/solr/testcore1/update/javabin
at
org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:432)
request.setParam("literal.id", "testid1");
request.setParam("stream.file",
    "C:\\work\\SolrClient\\data\\justin2.txt");
request.process(server);
  }
}
I'm going to try the approach described here and see what happens
http://lucene.472066.n3.nabble.com/Fastest-way-to-use-solrj-td502659.html
...the http message (which I want to avoid).
Sure it does:
http://my.safaribooksonline.com/book/web-development/9781847195883/indexing-data/ch03lvl1sec03#X2ludGVybmFsX0ZsYXNoUmVhZGVyP3htbGlkPTk3ODE4NDcxOTU4ODMvNjg=
On Sat, Mar 17, 2012 at 2:55 AM, vybe3142 wrote:
Hi,
Is there a way for SOLR / SOLRJ to index files directly, bypassing HTTP
streaming?
Use case:
* Text files to be indexed are on file server (A) (some potentially large -
several 100 MB)
* SOLRJ client is on server (B)
* SOLR server is on server (C) running with dynamically created SOLR cores
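One hedged option for this layout is Solr's `stream.file` parameter: the client sends only a URL with parameters, and the Solr server (C) reads the file from its own or a shared filesystem, so the file body never travels over HTTP. This assumes remote streaming is enabled in solrconfig.xml and the path is visible from C; host, core, and field names below are illustrative.

```python
from urllib.parse import urlencode

def stream_file_url(solr_base, core, file_path, doc_id):
    """Build an /update/extract request that asks the Solr server to read
    `file_path` from its own (or a shared) filesystem via stream.file,
    instead of streaming the file body over HTTP from the client."""
    params = {
        "stream.file": file_path,   # path as seen by the Solr server
        "literal.id": doc_id,
        "commit": "true",
    }
    return f"{solr_base}/{core}/update/extract?{urlencode(params)}"

# The client (server B) would then just issue the request, e.g. with
# urllib.request.urlopen(url), and Solr (server C) does the actual read.
```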
Reduce - IndexerMapReduce:
: linkdb: crawl/linkdb
: 2011-10-30 20:18:06,993 INFO indexer.IndexerMapReduce - IndexerMapReduces:
: adding segment: crawl/segments/...
: 2011-10-30 20:20:38,933 INFO solr.SolrIndexer - SolrIndexer: done
:
: However, I don't see the index files in the Solr data director
I failed to mention that the segments* files were indeed created; it is the
other files that are missing.
solr.data.dir is set, but the files aren't in that location. I've checked
the logs, and I don't see any errors. Obviously something is wrong, but I
can't find any indications as to what. Anyone have suggestions?
if (solr.data.dir system property is set) {
the index files will be there.
} else {
they are at ${solr.solr.home}/data directory
}
I hope it helps.
On Thu, Nov 10, 2011 at 9:37 AM, John wrote:
> Please forgive my lack of knowledge; I'm posting for the first time!
>
:20:38,933 INFO solr.SolrIndexer - SolrIndexer: done
However, I don't see the index files in the Solr data directory. Any
suggestions for troubleshooting this? Thanks!
On Fri, Apr 15, 2011 at 5:28 PM, Trey Grainger wrote:
> Thank you, Yonik!
> I see the Jira issue you created and am guessing it's due to this issue.
> We're going to remove replicateAfter="startup" in the mean-time to see if
> that helps (assuming this is the issue the jira ticket described).
Ye
We have a script to do a rolling optimize. The script optimizes a single
core, waits 5 minutes to give the server some breathing room to catch up on
indexing in a non-I/O-intensive state, and then moves onto the next core
(repeating until done).
The problem we are facing is that under Solr 1.4, the old index files were
deleted very quickly after each optimize, but under Solr 3.1, the old index
files hang around for hours... in many cases they don't disappear until we
restart Solr completely. This is leading to us running out of disk space.
...change the locking policy to the simple locking policy - but only on the
child.
On Sat, Dec 18, 2010 at 4:44 PM, feedly team wrote:
> I have set up index replication (triggered on optimize). The problem I
> am having is the old index files are not being deleted on the slave.
> After each replication, I can see the old files still hanging around
> as well as
Indeed, wouldn't reducing the number of segments be a better idea? Speeds up
searching too! Do you happen to have a very high mergeFactor value for each
core?
On Wednesday 19 January 2011 17:53:12 Erick Erickson wrote:
> You're perhaps exactly right in your approach, but with a bit more info
> w
Let's back up a ways here and figure out why you're getting so many
files open.
1> how many files are in your index?
2> are you committing very frequently?
3> or do you simply have a LOT of cores?
4> do you optimize your indexes? If so, how many files do you have in your
cores before/after optimizing?
Dear All,
On a Linux system running a multi-core linux server, we are
experiencing a problem of too many files open which is causing tomcat
to abort. Reading the documentation, one of the things it seems we can
do is to switch to using compound indexes. We can see that in the
solrconfig.xml there
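If compound files are the route taken, the switch is a single setting in solrconfig.xml; the element placement below follows recent Solr (in the 1.4/3.x era the same flag lived under `<indexDefaults>`), and raising the process's open-file ulimit is often the simpler first step:

```xml
<indexConfig>
  <!-- Pack each segment's files into one .cfs file; far fewer open
       file descriptors at some indexing/search I/O cost. -->
  <useCompoundFile>true</useCompoundFile>
</indexConfig>
```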
On Mon, Jan 3, 2011 at 5:22 AM, Markus Jelsma wrote:
> I'm seeing this issue as well on 1.4.1 where all slaves are using simple as
> the locking mechanism. For some unknown reason slaves either don't remove
> old
> index.DATE directories or old index files
I'm seeing this issue as well on 1.4.1 where all slaves are using simple as
the locking mechanism. For some unknown reason slaves either don't remove old
index.DATE directories or old index files in the index directory. Only the
second slave has the correct index size.
master
4.8
Dear list,
some questions about the names of the index files.
With an older Solr 4.x version from trunk my index looks like:
_2t1.fdt
_2t1.fdx
_2t1.fnm
_2t1.frq
_2t1.nrm
_2t1.prx
_2t1.tii
_2t1.tis
segments_2
segments.gen
With a most recent version from trunk it looks like:
_3a9.fdt
_3a9.fdx
_3a9
You should use Locktype 'simple' instead of 'single'. I've never
heard of a .nfs000* file.
On Tue, Dec 28, 2010 at 8:42 PM, sakunthalakishan wrote:
We are using Locktype "single".
...deleted. And these .nfs files are still being used by SOLR in jboss.
This setup is giving issues only on linux. Is this a known bug on linux?