Stable Versions in Solr 4

2015-12-28 Thread abhi Abhishek
Hi All,
   I am trying to determine the most stable version of Solr 4. Is there a
blog we can refer to? I understand we can read through the release notes, but
I am interested in user reviews and the challenges seen with the various
Solr 4 versions.


Any input would be appreciated.

Thanks,
Abhishek


Determine if Merge is triggered in SOLR

2016-01-26 Thread abhi Abhishek
Hi All,
Is there a way in Solr to determine whether a merge has been triggered?
Is there an API exposed to query this?

If it's not available, is there a way to do the same using the Lucene jar
files available in the Solr libs?

Appreciate your help.

Best Regards,
Abhishek
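
As far as I know there is no dedicated "was a merge triggered?" API in
Solr 4, but merge activity can be inferred by watching the index's segment
list between commits: if the segment count drops while the commit generation
keeps increasing, a merge has completed. Another option is to enable the
IndexWriter infoStream logging in solrconfig.xml, which logs merge activity.
A minimal sketch using the Lucene jars shipped with Solr (assumes Lucene 4.x
APIs and direct read access to the index directory; run it periodically and
compare the output):

import java.io.File;
import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SegmentWatcher {
  public static void main(String[] args) throws Exception {
    // args[0] = path to the core's data/index directory
    Directory dir = FSDirectory.open(new File(args[0]));
    SegmentInfos infos = new SegmentInfos();
    infos.read(dir);                       // reads the latest commit point
    System.out.println("generation=" + infos.getGeneration()
        + " segments=" + infos.size());
    dir.close();
  }
}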


Re: Determine if Merge is triggered in SOLR

2016-01-31 Thread abhi Abhishek
Hi All,
Any suggestions or ideas?

Thanks,
Abhishek

On Tue, Jan 26, 2016 at 9:16 PM, abhi Abhishek  wrote:

> Hi All,
> Is there a way in Solr to determine whether a merge has been triggered?
> Is there an API exposed to query this?
>
> If it's not available, is there a way to do the same using the Lucene jar
> files available in the Solr libs?
>
> Appreciate your help.
>
> Best Regards,
> Abhishek
>


Solr 4 replication

2016-04-04 Thread abhi Abhishek
Hi all,
Is solr 4 replication push or pull?

Best Regards,
Abhishek


Re: Solr 4 replication

2016-04-05 Thread abhi Abhishek
Thanks Mikhail.
  Is there a way to have push replication? Are there any contributions or
other approaches that could help in this case?

Thanks,
Abhishek

On Tue, Apr 5, 2016 at 1:29 AM, Mikhail Khludnev  wrote:

> It's pull, but you can trigger pulling.
>
> On Mon, Apr 4, 2016 at 9:19 PM, abhi Abhishek  wrote:
>
> > Hi all,
> > Is solr 4 replication push or pull?
> >
> > Best Regards,
> > Abhishek
> >
>
>
>
> --
> Sincerely yours
> Mikhail Khludnev
> Principal Engineer,
> Grid Dynamics
>
> <http://www.griddynamics.com>
> 
>
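
To expand on "you can trigger pulling": the slave's ReplicationHandler
accepts a fetchindex command, so a cron job or application can make the pull
happen on demand, which is about as close to push as Solr 4 master/slave
replication gets. A minimal SolrJ sketch (4.x-era API names; the slave URL,
core name and the /replication path are assumptions based on a default setup,
equivalent to hitting
http://slave-host:8080/solr/core1/replication?command=fetchindex):

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class TriggerPull {
  public static void main(String[] args) throws Exception {
    HttpSolrServer slave = new HttpSolrServer("http://slave-host:8080/solr/core1");
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("command", "fetchindex");   // ask the slave to pull from its master now
    QueryRequest req = new QueryRequest(params);
    req.setPath("/replication");           // ReplicationHandler endpoint
    NamedList<Object> rsp = slave.request(req);
    System.out.println(rsp);
    slave.shutdown();
  }
}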


SOLR Upgrade 3.x to 4.10

2016-04-12 Thread abhi Abhishek
Hi All,
I currently have Solr 3.6 running and am planning to upgrade it to
Solr 4.10. Below are the approaches we have come up with.

1. In-place upgrade
   I would make the Solr 4.10 instance a slave of the 3.6 master, copy the
indexes over, and then optimize the index.

   Will optimizing the Lucene 3.3 index on the Solr 4 instance (with Lucene
4.10) change the index structure to Lucene 4.10? If not, what would the
version be?
   If I enable docValues on certain fields before issuing the optimize, will
the optimize incorporate them (create the .dvd and .dvm files) in the newly
created index?


2. Re-Index the data

I am seeking advice on the approach that minimizes upgrade time while giving
us most of the Solr 4.10 features.

Thanks in Advance

Best Regards,
Abhishek


Re: SOLR Upgrade 3.x to 4.10

2016-04-12 Thread abhi Abhishek
Thanks Erick and Shawn for the input. It makes more sense to move to Solr
5.x, but we would like to get there in a few iterations, gradually making
incremental changes so we have a smooth cutover.

Our index size is 3 TB (10 shards of 300 GB each), so I was looking for an
alternate route that would save me the pain of re-indexing. Any thoughts on
this would help.

Best Regards,
Abhishek


On Wed, Apr 13, 2016 at 6:18 AM, Shawn Heisey  wrote:

> On 4/12/2016 6:10 AM, abhi Abhishek wrote:
> > I have SOLR 3.6 running currently, i am planning to upgrade this to
> > SOLR 4.10. Below were the thoughts we could come up with.
> >
> > 1. in place upgrade
> >I would be making the SOLR 4.10 slave of 3.6 and copy the indexes,
> > and optimize this index.
> >
> >   will optimizing the Lucene 3.3 index on SOLR 4 instance(with Lucene
> > 4.10) change the index structure to Lucene 4.10? if not what would be the
> > version?
>
> Yes, the optimize will change the index structure, but the contents of
> the index will not change, even if changes in Solr's analysis components
> would have resulted in different info going into the index based on your
> schema.  Because the *query* analysis may also change with the upgrade,
> this might cause queries to no longer work the same, unless you reindex
> and verify that your analysis still does what you require.  A few
> changes to analysis components in later versions can be changed back to
> earlier behavior with luceneMatchVersion, but this typically only
> happens with big changes -- such as the major bugfix for
> WordDelimiterFilterFactory in version 4.8.
>
> Reindexing for all upgrades is recommended when possible.
>
> >   if i enable docvalues on certain fields before issuing optimize,
> will
> > it be able to incorporate ( create .dvd & .dvm files ) that in the newly
> > created index?
>
> No.  You must entirely reindex to add docValues.  Optimize just rewrites
> what's already present in the Lucene index.
>
> > 2. Re-Index the data
> >
> > Seeking advice for minimum time to upgrade this with most features of
> SOLR
> > 4.10
>
> This is impossible to answer.  It will depend on how long it takes to
> index your data.  That is very difficult to predict even if a lot of
> information is available.
>
> Thanks,
> Shawn
>
>
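
As a concrete follow-up to the version question: after copying the 3.x index
to the 4.10 instance and optimizing, the Lucene version each segment was
written with can be read straight from the segment metadata. A minimal sketch
(Lucene 4.x APIs; run it against a copy of the index, or while Solr is
stopped):

import java.io.File;
import org.apache.lucene.index.SegmentCommitInfo;
import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SegmentVersions {
  public static void main(String[] args) throws Exception {
    Directory dir = FSDirectory.open(new File(args[0]));   // path to data/index
    SegmentInfos infos = new SegmentInfos();
    infos.read(dir);
    for (SegmentCommitInfo sci : infos) {
      // after a successful optimize on 4.10 this should print a single
      // segment reporting a 4.10.x version
      System.out.println(sci.info.name + " -> written by Lucene " + sci.info.getVersion());
    }
    dir.close();
  }
}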


working of Sharded Query in SOLR 3.6

2015-09-09 Thread abhi Abhishek
Hi,
   I have a question about distributed querying in Solr
(https://wiki.apache.org/solr/DistributedSearch).

Let us consider the call below being made to a Solr server:

https://server1:8080/solr/core1/select?shards=server1:8080/solr/core1,server2:8070
/solr/core2,server3:8090/solr/core3&q=*:*&rows=10&start=0

Please correct me if my understanding of the query processing here is
incorrect!

Server1 acts as the coordinating (master) server for this request: it spawns
requests to server1, server2, and server3 for the given query, waits for the
responses from all of them, and then returns the combined response to the
client.

If this is the case (server1 waits for all the sharded calls to respond), how
does it join the results from all the sharded calls?

If this is not how the processing works, can you please help me understand
it?

Thanks in advance.

Thanks and Best Regards,
Abhishek Das


Re: working of Sharded Query in SOLR 3.6

2015-09-09 Thread abhi Abhishek
Hi,
   Thanks for the reply, Shawn and Mugeesh. I was just trying to understand
how distributed querying works in Solr.

Thanks,
Abhishek Das

On Wed, Sep 9, 2015 at 8:18 PM, Mugeesh Husain  wrote:

> You are correct about distributed search.
> Don't worry about the join; Solr will aggregate the results from all cores.
> Share your requirement: what do you want to do?
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/working-of-Sharded-Query-in-SOLR-3-6-tp4227952p4227979.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
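
For intuition about the join step: the coordinating node's first phase
amounts to merging each shard's ranked (id, score) list into one global
top-N; a second phase then fetches the stored fields for the winning
documents from their shards. An illustrative sketch of the merge step only
(this is not the actual Solr code, just the idea):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class ShardMergeSketch {
  static class Hit {
    final String id;
    final float score;
    Hit(String id, float score) { this.id = id; this.score = score; }
  }

  // each inner list is one shard's top-N hits, in any order
  static List<Hit> mergeTopN(List<List<Hit>> shardResults, int n) {
    // min-heap keyed by score keeps only the n best hits seen so far
    PriorityQueue<Hit> best =
        new PriorityQueue<>(Comparator.comparingDouble((Hit h) -> h.score));
    for (List<Hit> shard : shardResults) {
      for (Hit h : shard) {
        best.offer(h);
        if (best.size() > n) {
          best.poll();                     // drop the current lowest score
        }
      }
    }
    List<Hit> merged = new ArrayList<>(best);
    merged.sort((a, b) -> Float.compare(b.score, a.score));  // descending score
    return merged;
  }
}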


SOLR Backup and Restore - Solr 3.6.1

2015-03-01 Thread abhi Abhishek
Hello,
   We have Solr 3.6.1 in our environment, and we are evaluating backup and
recovery solutions for it. Is there a way to compress the backup that is
taken?

We have explored the ReplicationHandler with the backup command, but as our
index is in the hundreds of GBs, we would like a solution that provides
compression to reduce the storage overhead.

Thanks in advance.

Regards,
Abhishek


SOLR Index in shared/Network folder

2015-03-26 Thread abhi Abhishek
Greetings,
   I am trying to use a network shared location as my index directory. Are
there any known problems with using a network file system to run a Solr
instance?

Thanks in Advance.

Best Regards,
Abhishek


ZFS File System for SOLR 3.6 and SOLR 4

2015-03-26 Thread abhi Abhishek
Hello,
     I am trying to use ZFS as the filesystem in my Linux environment. Are
there any performance implications of using a filesystem other than ext3/ext4
with Solr?

Thanks in Advance

Best Regards,
Abhishek


Re: SOLR Index in shared/Network folder

2015-03-29 Thread abhi Abhishek
Hello,
     Thanks for the suggestions. My aim is to reduce disk space usage. I have
one master and two slaves configured; the slaves are used for searching,
while the master ingests new data that is replicated to the slaves. But as my
index size is in the hundreds of GBs, we see a 3x space overhead. I would
like to reduce this overhead; can you suggest something for this?

Thanks in Advance

Best Regards,
Abhishek

On Sat, Mar 28, 2015 at 12:13 AM, Erick Erickson 
wrote:

> To pile on: If you're talking about pointing two Solr instances at the
> _same_ index, it doesn't matter whether you are on NFS or not, you'll
> have all sorts of problems. And if this is a SolrCloud installation,
> it's particularly hard to get right.
>
> Please do not do this unless you have a very good reason, and please
> tell us what the reason is so we can perhaps suggest alternatives.
>
> Best,
> Erick
>
> On Fri, Mar 27, 2015 at 8:08 AM, Walter Underwood 
> wrote:
> > Several years ago, I accidentally put Solr indexes on an NFS volume and
> it was 100X slower.
> >
> > If you have enough RAM, query speed should be OK, but startup time
> (loading indexes into file buffers) could be really long. Indexing could be
> quite slow.
> >
> > wunder
> > Walter Underwood
> > wun...@wunderwood.org
> > http://observer.wunderwood.org/  (my blog)
> >
> >
> > On Mar 26, 2015, at 11:31 PM, Shawn Heisey  wrote:
> >
> >> On 3/27/2015 12:06 AM, abhi Abhishek wrote:
> >>> Greetings,
> >>>  I am trying to use a network shared location as my index
> directory.
> >>> are there any known problems in using a Network File System for
> running a
> >>> SOLR Instance?
> >>
> >> It is not recommended.  You will probably need to change the lockType,
> >> ... the default "native" probably will not work, and you might need to
> >> change it to "none" to get it working ... but that disables an important
> >> safety mechanism that prevents index corruption.
> >>
> >> http://stackoverflow.com/questions/9599529/solr-over-nfs-problems
> >>
> >> Thanks,
> >> Shawn
> >>
> >
>


Errors during Indexing in SOLR 4.6

2015-04-14 Thread abhi Abhishek
Hi All,
     We recently migrated from Solr 3.6 to Solr 4; while indexing in Solr 4
we are getting the exception below.

Apr 1, 2015 9:22:57 AM org.apache.solr.common.SolrException log

SEVERE: null:org.apache.solr.common.SolrException: Exception writing
document id 932684555 to the index; possible analysis error.

at
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:164)

at
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)

at
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)

Caused by: java.lang.IllegalArgumentException: first position increment
must be > 0 (got 0) for field 'DataEnglish'

at
org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:131)



This works perfectly fine in Solr 3.6. Can someone help me debug this? Any
fixes/solutions?


Thanks in Advance.


Best Regards,

Abhishek
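
For reference, "first position increment must be > 0" means the index-time
analysis chain for DataEnglish emitted its first token at position increment
0, which Lucene 4 rejects even though the Lucene used by Solr 3.6 tolerated
it; synonym or stopword configurations are common culprits. A minimal sketch
for reproducing it outside Solr (assumes Lucene 4.x analysis APIs; building
the same analyzer the field uses is left as a placeholder):

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

public class FirstPosIncCheck {
  // analyzer: construct or obtain the same index-time analyzer configured
  // for the DataEnglish field
  static void check(Analyzer analyzer, String failingText) throws Exception {
    TokenStream ts = analyzer.tokenStream("DataEnglish", failingText);
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute posIncr = ts.addAttribute(PositionIncrementAttribute.class);
    ts.reset();
    boolean first = true;
    while (ts.incrementToken()) {
      if (first && posIncr.getPositionIncrement() == 0) {
        System.out.println("first token '" + term + "' has position increment 0");
      }
      first = false;
    }
    ts.end();
    ts.close();
  }
}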


Re: [EXTERNAL] Re: Does anybody crawl to a database and then index from the database to Solr?

2016-05-15 Thread abhi Abhishek
Clayton,

You could also try running an optimize on the Solr index as a weekly or
bi-weekly maintenance task to keep the segment count in check and to keep the
maxDoc and numDocs counts as close as possible (in DB terms, de-fragmenting
the Solr index); see the sketch below.
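
A minimal sketch of that maintenance task using SolrJ (4.x/5.x-era API names;
newer SolrJ uses HttpSolrClient and close(); the server URL and core name are
placeholders, and note that an optimize rewrites the entire index, so it is
best scheduled during quiet hours):

import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class WeeklyOptimize {
  public static void main(String[] args) throws Exception {
    HttpSolrServer solr = new HttpSolrServer("http://solr-host:8983/solr/corename");
    // expunges deleted documents and merges the index down, bringing
    // maxDoc back in line with numDocs
    solr.optimize();
    solr.shutdown();
  }
}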

Best Regards,
Abhishek


On Sun, May 15, 2016 at 7:18 PM, Pryor, Clayton J 
wrote:

> Thank you for your feedback.  I really appreciate you taking the time to
> write it up for me (and hopefully others who might be considering the
> same).  My first thought for dealing with deleted docs was to delete the
> contents and rebuild the index from scratch but my primary customer for the
> deleted docs functionality wants to see it immediately.  I wrote a
> connector for transferring the contents of one Solr Index to another (I
> call it a Solr connector) and that takes a half hour.  As a side note, the
> reason I have multiple indexes is because we currently have physical
> servers for development and production but, as part of my effort, I am
> transitioning us to new VMs for development, quality, and production.  For
> quality control purposes I wanted to be able to reset each with the same
> set of data - thus the Solr connector.
>
> Yes, by connector I am talking about a Java program (using SolrJ) that
> reads from the database and populates the Solr Index.  For now I have had
> our enterprise DBAs create a single table to hold the current index schema
> fields plus some that I can think of that we might use outside of the
> index.  So far it is a completely flat structure so it will be easy to
> index to Solr but I can see, as requirements change, we may have to have a
> more sophisticated database (with multiple tables and greater
> normalization) in which case the connector will have to flatten the data
> for the Solr index.
>
> Thanks again, your response has been very reassuring!
>
> :)
>
> Clay
>
> -Original Message-
> From: Erick Erickson [mailto:erickerick...@gmail.com]
> Sent: Friday, May 13, 2016 5:57 PM
> To: solr-user
> Subject: [EXTERNAL] Re: Does anybody crawl to a database and then index
> from the database to Solr?
>
> Clayton:
>
> I think you've done a pretty thorough investigation, I think you're
> spot-on. The only thing I would add is that you _will_ reindex your entire
> corpus multiple times. Count on it. Sometime, somewhere, somebody will
> say "gee, wouldn't it be nice if we could ". And
> to support it you'll have to change your Solr schema... which will almost
> certainly require you to re-index.
>
> The other thing people have done for deleting documents is to create
> triggers in your DB to insert the deleted doc IDs into, say, a "deleted"
> table along with a timestamp. Whenever necessary/desirable, run a cleanup
> task that finds all the IDs since the last time you ran your deleting
> program to remove docs that have been flagged since then.. Obviously you
> also have to keep a record around of the timestamp of the last successful
> run of this program..
>
> Or, frankly, since it takes so little time to rebuild from scratch people
> have foregone any of that complexity and simply rebuild the entire index
> periodically. You can use "collection aliasing" to do this in the
> background and then switch searches atomically, it depends somewhat on how
> long you can wait until you need to see (well, _not_
> see) the deleted docs.
>
> But this is all refinements, I think you're going down the right path.
>
> And when you say "connector", are you talking DIH or an external (say
> SolrJ) program?
>
> Best,
> Erick
>
> On Fri, May 13, 2016 at 2:04 PM, John Bickerstaff <
> j...@johnbickerstaff.com> wrote:
> > I've been working on a less-complex thing along the same lines -
> > taking all the data from our corporate database and pumping it into
> > Kafka for long-term storage -- and the ability to "play back" all the
> > Kafka messages any time we need to re-index.
> >
> > That simpler scenario has worked like a charm.  I don't need to
> > massage the data much once it's at rest in Kafka, so that was a
> > straightforward solution, although I could have gone with a DB and
> > just stored the solr documents with their ID's one per row in a RDBMS...
> >
> > The rest sounds like good ideas for your situation as Solr isn't the
> > best candidate for the kind of manipulation of data you're proposing
> > and a database excels at that.  It's more work, but you get a lot more
> > flexibility and you de-couple Solr from the data crawling as you say.
> >
> > It all sounds pretty good to me, but I've only been on the list here a
> > short time - so I'll leave it to others to add their comments.
> >
> > On Fri, May 13, 2016 at 2:46 PM, Pryor, Clayton J 
> > wrote:
> >
> >> Question:
> >> Do any of you have your crawlers write to a database rather than
> >> directly to Solr and then use a connector to index to Solr from the
> >> database?  If so, have you encountered any issues with this approach?
> If not, why not?
> >>
> >> I have searche

Proximity Search using edismax parser.

2017-06-12 Thread abhi Abhishek
Hi All,
  How does a proximity query work in Solr?

For example, say I am running the query below against a field containing the
text “India registered a historical test match win against the arch rival
Pakistan here in Lords, England on Sunday”:

Query: “Test match India Pakistan” ~ 10

I am interested in understanding the intermediate steps involved here, so I
can understand the search behavior and determine how results are being
matched to the search phrase.

Thanks in Advance,

Abhishek
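
Roughly speaking, edismax analyzes the quoted text per the field's
configuration and turns it into a sloppy Lucene phrase query: a document
matches if its term positions can be lined up with the query order using at
most the slop number of position moves in total, and fewer moves score
higher (swapping two adjacent terms costs two moves). A sketch of the roughly
equivalent Lucene 4.x-style query (the field name "text" and the lowercased
terms are assumptions about the field's analysis):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.PhraseQuery;

public class ProximitySketch {
  public static void main(String[] args) {
    PhraseQuery pq = new PhraseQuery();
    pq.setSlop(10);                       // allow up to 10 position moves in total
    pq.add(new Term("text", "test"));
    pq.add(new Term("text", "match"));
    pq.add(new Term("text", "india"));
    pq.add(new Term("text", "pakistan"));
    System.out.println(pq);
  }
}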


Re: Proximity Search using edismax parser.

2017-06-12 Thread abhi Abhishek
Thanks for the suggestions, Erik and Vrindavda.

I was trying to understand how the above query works when the slop is set to
10. The debug output of the Solr query shows the terms being looked up, but
the transpositions performed during matching are not exposed.

I found the following Stack Overflow link, which describes the transpositions
performed when one searches for a phrase with slop 4. Is there a guide for
understanding this?

https://stackoverflow.com/questions/25558195/lucene-proximity-search-for-phrase-with-more-than-two-words

Thanks in advance.

Best Regards,
Abhishek


On Mon, Jun 12, 2017 at 5:41 PM, Erik Hatcher 
wrote:

> Adding &debug=true to your search requests will give you the parsing
> details, so you can see how edismax interprets the query string and
> parameters to turn it into the underlying dismax and phrase queries.
>
> Erik
>
> > On Jun 12, 2017, at 3:22 AM, abhi Abhishek  wrote:
> >
> > Hi All,
> >  How does proximity Query work in SOLR.
> >
> > Example if i am running a query like below, for the field containing the
> > text “India registered a historical test match win against the arch rival
> > Pakistan here in Lords, England on Sunday”
> >
> > Query: “Test match India Pakistan” ~ 10
> >
> >I am interested in understanding the intermediate steps
> > involved here to understand the search behavior and determine how results
> > are being matched to the search phrase.
> >
> > Thanks in Advance,
> >
> > Abhishek
>
>


Odd Boolean Query behavior in SOLR 3.6

2017-06-13 Thread abhi Abhishek
Hi Everyone,

I have hit some odd Boolean query behavior. When I run queries with the
parameters below, they do not behave as expected. Can you please help me
understand the behavior here?



q=*:*&fq=((-documentTypeId:3)+AND+companyId:29096)&version=2.2&start=0&rows=10&indent=on&debugQuery=true

=> returns 0 matches

filter_queries: ((-documentTypeId:3) AND companyId:29096)

parsed_filter_queries: +(-documentTypeId:3) +companyId:29096



q=*:*&fq=(-documentTypeId:3+AND+companyId:29096)&version=2.2&start=0&rows=10&indent=on&debugQuery=true

=> returns 1600 matches

filter_queries:(-documentTypeId:3 AND companyId:29096)

parsed_filter_queries:-documentTypeId:3 +companyId:29096



Can you please help me understand what I am missing here?


Thanks in Advance.


Thanks & Best Regards,

Abhishek
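
For what it is worth, the usual explanation for this pattern is that a
parenthesized clause containing only a negation, such as (-documentTypeId:3),
is a nested Lucene BooleanQuery with nothing to subtract the exclusion from,
so on its own it matches no documents; Solr only adds the implicit match-all
for a purely negative query at the top level. Pairing the negation with *:*
inside the parentheses normally restores the expected result. A hedged SolrJ
sketch of the rewritten filter:

import org.apache.solr.client.solrj.SolrQuery;

public class NegativeClauseFilter {
  public static void main(String[] args) {
    SolrQuery q = new SolrQuery("*:*");
    // give the nested boolean clause something to subtract from
    q.addFilterQuery("((*:* -documentTypeId:3) AND companyId:29096)");
    System.out.println(q);
  }
}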


SOLR Metric Reporting to graphite

2017-08-06 Thread abhi Abhishek
Hi All,
    I am trying to set up the Graphite reporter for Solr 6.5.0. I've started
a sample Docker instance for Graphite with StatsD
(https://github.com/hopsoft/docker-graphite-statsd).

I've also added the Graphite metrics reporter to the solr.xml config for the
collection. However, after doing this I don't see any data getting posted to
Graphite (https://cwiki.apache.org/confluence/display/solr/Metrics+Reporting).

The reporter configuration added to solr.xml, following the reference-guide
example (host localhost, port 2003, period 1):

 <metrics>
   <reporter name="graphite" class="org.apache.solr.metrics.reporters.SolrGraphiteReporter">
     <str name="host">localhost</str>
     <int name="port">2003</int>
     <int name="period">1</int>
   </reporter>
 </metrics>
Graphite mapped ports (host -> container, service):

  80   -> 80     nginx
  2003 -> 2003   carbon receiver - plaintext
  2004 -> 2004   carbon receiver - pickle
  2023 -> 2023   carbon aggregator - plaintext
  2024 -> 2024   carbon aggregator - pickle
  8125 -> 8125   statsd
  8126 -> 8126   statsd admin


Please advise if I am doing something wrong here.

Thanks,
Abhishek
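
One way to split the problem in half is to push a hand-written metric
straight to the carbon plaintext port: if it shows up in the Graphite UI, the
container port wiring is fine and the issue is on the Solr reporter side (and
vice versa). A minimal sketch (host and port match the mapping above; the
metric name is arbitrary):

import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class GraphitePortCheck {
  public static void main(String[] args) throws Exception {
    try (Socket s = new Socket("localhost", 2003);
         OutputStream out = s.getOutputStream()) {
      long now = System.currentTimeMillis() / 1000L;   // carbon expects epoch seconds
      out.write(("test.solr.handshake 1 " + now + "\n").getBytes(StandardCharsets.UTF_8));
      out.flush();
    }
  }
}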


Explanation of query processing in SOLR

2014-08-08 Thread abhi Abhishek
Hello,
    I am fairly new to Solr. Can someone please help me understand how a
query is processed in Solr? What I want to understand is, from the time a
query hits Solr, which files it refers to in order to process the query,
i.e., the order in which the .tvx, .tvd, and other files are accessed.
Basically, I would like to understand the code path of the search
functionality, as well as the significance of the various files in the index
directory such as .tvx, .tcd, .frq, etc.


Regards,
Abhishek Das


Re: Explanation of query processing in SOLR

2014-08-12 Thread abhi Abhishek
Thanks Alex and Jack for the direction. What I was actually trying to
understand was how the various files affect the search.

Thanks,
Abhishek


On Fri, Aug 8, 2014 at 6:35 PM, Alexandre Rafalovitch 
wrote:

> Abhishek,
>
> Your first part of the question is interesting, but your specific
> details are probably the wrong level for you to concentrate on. The
> issues you will be facing are not about which file does what. That's
> more performance and inner details. I feel you should worry more about
> the fields, default search fields, multiterms, whitespaces, etc.
>
> One way to do that is to enable debug and see if you actually
> understand what those different debug entries do. And don't use string
> or basic tokenizer. Pick something that has complex analyzer chain and
> see how that affects debug.
>
> Regards,
>Alex.
> Personal: http://www.outerthoughts.com/ and @arafalov
> Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>
>
> On Fri, Aug 8, 2014 at 1:59 PM, abhi Abhishek  wrote:
> > Hello,
> > I am fairly new to SOLR, can someone please help me understand how a
> > query is processed in SOLR, i.e, what i want to understand is from the
> time
> > it hits solr what files it refers to process the query, i.e, order in
> which
> > .tvx, .tvd files and others are accessed. basically i would like to
> > understand the code path of the search functionality also significance of
> > various files in the solr directory such as .tvx, .tcd, .frq, etc.
> >
> >
> > Regards,
> > Abhishek Das
>


Re: Error:Missing Required Fields for Atomic Updates

2018-11-19 Thread abhi Abhishek
The update handler expects all the required fields to be passed in, even for
an atomic update request payload.

https://github.com/apache/lucene-solr/blob/branch_7_5/solr/core/src/java/org/apache/solr/update/DocumentBuilder.java

Hope this helps!

// Now validate required fields or add default values
// fields with default values are defacto 'required'
// Note: We don't need to add default fields if this document is to be used for
// in-place updates, since this validation and population of default fields
// would've happened during the full indexing initially.
if (!forInPlaceUpdate) {
  for (SchemaField field : schema.getRequiredFields()) {
    if (out.getField(field.getName()) == null) {
      if (field.getDefaultValue() != null) {
        addField(out, field, field.getDefaultValue(), false);
      } else {
        String msg = getID(doc, schema) + "missing required field: " + field.getName();
        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, msg);
      }
    }
  }
}

Cheers!
Abhishek

On Tue, Nov 20, 2018 at 11:47 AM Rahul Goswami 
wrote:

> What is the Router name for your collection? Is it "implicit"  (You can
> know this from the "Overview" of you collection in the admin UI)  ? If yes,
> what is the router.field parameter the collection was created with?
>
> Rahul
>
>
> On Mon, Nov 19, 2018 at 11:19 PM Rajeswari Kolluri <
> rajeswari.koll...@oracle.com> wrote:
>
> >
> > Hi Rahul
> >
> > Below is part of schema ,   entityid is my unique id field.  Getting
> > exception missing required field for  "category"  during atomic updates.
> >
> >
> > entityid
> >  > required="true" multiValued="false" />
> >  > required="false" multiValued="false" />
> >  > stored="true" required="false" multiValued="false" />
> >  > stored="true" required="false" multiValued="false" />
> >  > stored="true" required="false" multiValued="false" />
> >  > stored="true" required="false" multiValued="false" />
> >  > stored="true" required="false" multiValued="false" />
> >  > required="true" docValues="true" />
> >  > required="false" multiValued="true" />
> >
> >
> >
> > Thanks
> > Rajeswari
> >
> > -Original Message-
> > From: Rahul Goswami [mailto:rahul196...@gmail.com]
> > Sent: Tuesday, November 20, 2018 9:33 AM
> > To: solr-user@lucene.apache.org
> > Subject: Re: Error:Missing Required Fields for Atomic Updates
> >
> > What’s your update query?
> >
> > You need to provide the unique id field of the document you are updating.
> >
> > Rahul
> >
> > On Mon, Nov 19, 2018 at 10:58 PM Rajeswari Kolluri <
> > rajeswari.koll...@oracle.com> wrote:
> >
> > > Hi,
> > >
> > >
> > >
> > >
> > >
> > > Using Solr 7.5.0.  While performing atomic updates on a document  on
> > > Solr Cloud using SolJ  getting exceptions "Missing Required Field".
> > >
> > >
> > >
> > > Please let me know  the solution, would not want to update the
> > > required fields during atomic updates.
> > >
> > >
> > >
> > >
> > >
> > > Thanks
> > >
> > > Rajeswari
> > >
> >
>


Re: Reg:- Create Solr Core Using Command Line

2018-02-05 Thread abhi Abhishek
Hello,
    I followed the steps outlined in your mail and was able to get a running
core up fine. The only thing I can think of in your case is whether the
config directory has all the files required for the Solr core to be
initialized. Can you check that you have all the Solr config files in the
config directory you specified on the command line (i.e., schema.xml,
solrconfig.xml, and the various supporting files referenced from schema.xml)?

Can you share the conf directory, if possible?
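
For comparison, a minimal conf directory for a hand-built core usually looks
something like this (the exact file names beyond solrconfig.xml and the
schema depend on what the config references):

books_data/
    core.properties          (created by Solr when the core is registered)
    conf/
        solrconfig.xml
        schema.xml           (or managed-schema, depending on the config)
        stopwords.txt, synonyms.txt, ...   (only if referenced by the schema)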

Cheers!
Abhishek

On Tue, Feb 6, 2018 at 9:30 AM, @Nandan@ 
wrote:

> Hi Sadiki,
> I checked Sample Techproduct Conf folder. Inside that folder, there are
> numerous files. So Again my question will be how those files came.
> I want to create core from Scratch and want to check and create each and
> every config files from my sides. Then only I can able to understand what
> and which files needs in different solr search function.
> I hope you can understand my query.
>
> Thanks
>
> On Tue, Feb 6, 2018 at 11:48 AM, Sadiki Latty  wrote:
>
> > If I'm not mistaken the command requires that the books_data folder
> > already exists with a conf folder inside and the various required files
> > (solrconfig.xml, solr.xml,etc). To get an idea of what you should have in
> > your conf folder you can check out the included configsets
> > (sample_techproducts_configs for example). These configsets have the
> > required files and you can copy and modify to accommodate your own
> needs. I
> > am not 100% sure where to find them on a windows installation but I
> believe
> > it would be C:\solr\server\configsets\  or another subfolder of the
> server
> > folder.
> >
> > -Original Message-
> > From: @Nandan@ [mailto:nandanpriyadarshi...@gmail.com]
> > Sent: Monday, February 5, 2018 9:46 PM
> > To: solr-user@lucene.apache.org
> > Subject: Reg:- Create Solr Core Using Command Line
> >
> > Hi ,
> > This question might be very basic, but need to clarify my basic
> > understanding.
> > I am using Solr Version 7.2.1
> > I have one CSV file named as books_data.csv which contains 2 records.
> > Now I want to create solr core to start my basic search using Solr UI.
> > Steps which I Follow :-
> > 1) Go to bin directory and start solr
> > C:\solr\bin>solr start -p 8983
> > 2) books_data.csv is in C:\solr location
> > 3) Now I try to create solr core.
> > C:\solr\bin>solr create_core -c books_data -d C:\solr Got Error :- No
> Conf
> > Sub folder or Solrconfig.xml file present.
> > 4) Then I created folder "books_data" in C:\solr location and Created
> conf
> > subfolder under books_data folder and put solrconfig.xml inside conf
> > subfolder.
> > 5) Again start to execute query
> > C:\solr\bin>solr create_core -c books_data -d C:\solr\books_data Got
> Error
> > :- Already core existed.
> > When I checked Solr Admin UI , showing error message as SolrCore
> > Initialization Failures
> >
> >- *books_data:*
> >org.apache.solr.common.SolrException:org.apache.solr.
> > common.SolrException:
> >Could not load conf for core books_data: Error loading solr config
> from
> >C:\solr\bin\books_data\conf\solrconfig.xml
> >
> >
> > Please tell me where am I doing wrong?
> > Thanks.
> >
>


Re: Reg:- Create Solr Core Using Command Line

2018-02-06 Thread abhi Abhishek
You can try using the Post Tool:

https://lucene.apache.org/solr/guide/6_6/post-tool.html

bin/post -c film example/books_data.csv


Cheers!
Abhishek


On Tue, Feb 6, 2018 at 1:22 PM, @Nandan@ 
wrote:

> Hi ,
> I created core name as "films". Now I am trying to insert my csv file by
> below step:-
> C:\solr>curl "http://localhost:8983/solr/films/update?commit=true";
> --data-binary @example/books_data.csv -H 'Content-type:application/csv'
> Got Below result.
> {
>   "responseHeader":{
> "status":0,
> "QTime":279}}
>
> But in Solr Admin UI, can't able to see any data.
> please tell me where am i wrong ?
> Thanks
>
>
> On Tue, Feb 6, 2018 at 1:42 PM, Shawn Heisey  wrote:
>
> > On 2/5/2018 10:39 PM, Shawn Heisey wrote:
> >
> >> In order for this solr script command to work, the argument to the -d
> >> option (which you have as C:\solr) would have to be a config directory,
> >> containing a minimum of solrconfig.xml and the schema.
> >>
> > Replying to myself because I made an error here.
> >
> > The directory provided with -d needs to contain a "conf" subdirectory,
> > which in turn must contain the files that I mentioned.
> >
> > Thanks,
> > Shawn
> >
>


SOLR 7.x stable version

2018-08-13 Thread abhi Abhishek
Hi All -
     I am using SolrCloud 6.5.0 and am looking to upgrade to Solr 7.x; any
suggestions on which is the most stable release in the Solr 7.x series?
From my initial reading, I see that up to Solr 7.2 there were issues with
CDCR updates.

Thank you for your suggestions.

Thanks,
Abhishek