Solr -The connection has timed out

2013-12-30 Thread rakesh
I have a Solr server running with Jetty. Sometimes I am getting a connection
timed out error from the home page, and no errors are shown in the logs.
Please help me resolve this problem. Attaching the log from Solr:

INFO  - 2013-12-26 02:51:37.460; org.eclipse.jetty.server.Server;
jetty-8.1.10.v20130312
INFO  - 2013-12-26 02:51:37.490;
org.eclipse.jetty.deploy.providers.ScanningAppProvider; Deployment monitor
/ctgapps/apache-solr-4.6.0/solr-4.6.0/example/contexts at interval 0
INFO  - 2013-12-26 02:51:37.498; org.eclipse.jetty.deploy.DeploymentManager;
Deployable added:
/ctgapps/apache-solr-4.6.0/solr-4.6.0/example/contexts/solr-jetty-context.xml
INFO  - 2013-12-26 02:51:37.562;
org.eclipse.jetty.webapp.WebInfConfiguration; Extract
jar:file:/ctgapps/apache-solr-4.6.0/solr-4.6.0/example/webapps/solr.war!/ to
/ctgapps/apache-solr-4.6.0/solr-4.6.0/example/solr-webapp/webapp
INFO  - 2013-12-26 02:51:39.547;
org.eclipse.jetty.webapp.StandardDescriptorProcessor; NO JSP Support for
/solr, did not find org.apache.jasper.servlet.JspServlet
INFO  - 2013-12-26 02:51:39.583; org.apache.solr.servlet.SolrDispatchFilter;
SolrDispatchFilter.init()
INFO  - 2013-12-26 02:51:39.597; org.apache.solr.core.SolrResourceLoader;
JNDI not configured for solr (NoInitialContextEx)
INFO  - 2013-12-26 02:51:39.597; org.apache.solr.core.SolrResourceLoader;
solr home defaulted to 'solr/' (could not find system property or JNDI)
INFO  - 2013-12-26 02:51:39.598; org.apache.solr.core.SolrResourceLoader;
new SolrResourceLoader for directory: 'solr/'
INFO  - 2013-12-26 02:51:39.714; org.apache.solr.core.ConfigSolr; Loading
container configuration from
/ctgapps/apache-solr-4.6.0/solr-4.6.0/example/solr/solr.xml
INFO  - 2013-12-26 02:51:40.031; org.apache.solr.core.ConfigSolrXml;
Config-defined core root directory:
INFO  - 2013-12-26 02:51:40.041; org.apache.solr.core.CoreContainer; New
CoreContainer 709424757
INFO  - 2013-12-26 02:51:40.041; org.apache.solr.core.CoreContainer; Loading
cores into CoreContainer [instanceDir=solr/]
INFO  - 2013-12-26 02:51:40.057;
org.apache.solr.handler.component.HttpShardHandlerFactory; Setting
socketTimeout to: 0
INFO  - 2013-12-26 02:51:40.057;
org.apache.solr.handler.component.HttpShardHandlerFactory; Setting urlScheme
to: http://
INFO  - 2013-12-26 02:51:40.058;
org.apache.solr.handler.component.HttpShardHandlerFactory; Setting
connTimeout to: 0
INFO  - 2013-12-26 02:51:40.060;
org.apache.solr.handler.component.HttpShardHandlerFactory; Setting
maxConnectionsPerHost to: 20
INFO  - 2013-12-26 02:51:40.061;
org.apache.solr.handler.component.HttpShardHandlerFactory; Setting
corePoolSize to: 0
INFO  - 2013-12-26 02:51:40.061;
org.apache.solr.handler.component.HttpShardHandlerFactory; Setting
maximumPoolSize to: 2147483647
INFO  - 2013-12-26 02:51:40.061;
org.apache.solr.handler.component.HttpShardHandlerFactory; Setting
maxThreadIdleTime to: 5
INFO  - 2013-12-26 02:51:40.062;
org.apache.solr.handler.component.HttpShardHandlerFactory; Setting
sizeOfQueue to: -1
INFO  - 2013-12-26 02:51:40.062;
org.apache.solr.handler.component.HttpShardHandlerFactory; Setting
fairnessPolicy to: false
INFO  - 2013-12-26 02:51:40.247; org.apache.solr.logging.LogWatcher; SLF4J
impl is org.slf4j.impl.Log4jLoggerFactory





Re: Solr -The connection has timed out

2013-12-30 Thread rakesh
Finally I was able to get the full log details.

ERROR - 2013-12-30 15:13:00.811; org.apache.solr.core.SolrCore;
[collection1] Solr index directory
'/ctgapps/apache-solr-4.6.0/solr-4.6.0/example/solr/collection1/data/index/'
is locked.  Throwing exception
INFO  - 2013-12-30 15:13:00.812; org.apache.solr.core.SolrCore;
[collection1]  CLOSING SolrCore org.apache.solr.core.SolrCore@de26e52
INFO  - 2013-12-30 15:13:00.812; org.apache.solr.update.SolrCoreState;
Closing SolrCoreState
INFO  - 2013-12-30 15:13:00.813;
org.apache.solr.update.DefaultSolrCoreState; SolrCoreState ref count has
reached 0 - closing IndexWriter
INFO  - 2013-12-30 15:13:00.813; org.apache.solr.core.SolrCore;
[collection1] Closing main searcher on request.
INFO  - 2013-12-30 15:13:00.814;
org.apache.solr.core.CachingDirectoryFactory; Closing
NRTCachingDirectoryFactory - 2 directories currently being tracked
INFO  - 2013-12-30 15:13:00.814;
org.apache.solr.core.CachingDirectoryFactory; looking to close
/ctgapps/apache-solr-4.6.0/solr-4.6.0/example/solr/collection1/data/index
[CachedDir<>]
INFO  - 2013-12-30 15:13:00.814;
org.apache.solr.core.CachingDirectoryFactory; Closing directory:
/ctgapps/apache-solr-4.6.0/solr-4.6.0/example/solr/collection1/data/index
INFO  - 2013-12-30 15:13:00.815;
org.apache.solr.core.CachingDirectoryFactory; looking to close
/ctgapps/apache-solr-4.6.0/solr-4.6.0/example/solr/collection1/data
[CachedDir<>]
INFO  - 2013-12-30 15:13:00.815;
org.apache.solr.core.CachingDirectoryFactory; Closing directory:
/ctgapps/apache-solr-4.6.0/solr-4.6.0/example/solr/collection1/data
ERROR - 2013-12-30 15:13:00.817; org.apache.solr.core.CoreContainer; Unable
to create core: collection1
org.apache.solr.common.SolrException: Index locked for write for core
collection1
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:834)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:625)
at
org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:557)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:592)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:271)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:263)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.lucene.store.LockObtainFailedException: Index locked
for write for core collection1
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:491)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:755)
... 13 more
ERROR - 2013-12-30 15:13:00.819; org.apache.solr.common.SolrException;
null:org.apache.solr.common.SolrException: Unable to create core:
collection1
at org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:977)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:601)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:271)
at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:263)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.solr.common.SolrException: Index locked for write for
core collection1
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:834)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:625)
at
org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:557)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:592)
... 10 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Index locked
for write for core collection1
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:491)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:755)
... 13 more

INFO  - 2013-12-30 15:13:00.820; org.apache.solr.servlet.SolrDispatchFilter;
user.dir=/ctgapps/apache-solr-4.6.0/solr-4.6.0/example
INFO  - 2013-12-30 15:13:00.820; org.apache.solr.servlet.SolrDispatchFilter;
SolrDispatchFilter.init() done
WARN  - 2013-12-30 15:13:00.856;
org.eclipse.jetty.util.component.AbstractLifeCycle; FAILED
SocketConnector@0.0.0.0:8983: java.net.BindException: Address already in use
java.net.BindException: Address already in use
at java.net.PlainSocketImpl.socketBi

Re: Mongo DB Users

2014-09-15 Thread Rakesh Varna
Remove

Regards,
Rakesh Varna


On Mon, Sep 15, 2014 at 9:29 AM, Ed Smiley  wrote:

> Remove
>
> On 9/15/14, 8:35 AM, "Aaron Susan"  wrote:
>
> >Hi,
> >
> >I am here to inform you that we are having a contact list of *Mongo DB
> >Users *would you be interested in it?
> >
> >Data Field's Consist Of: Name, Job Title, Verified Phone Number, Verified
> >Email Address, Company Name & Address Employee Size, Revenue size, SIC
> >Code, Industry Type etc.,
> >
> >We also provide other technology users as well depends on your
> >requirement.
> >
> >For Example:
> >
> >
> >*Red Hat *
> >
> >*Terra data *
> >
> >*Net-app *
> >
> >*NuoDB*
> >
> >*MongoHQ ** and many more*
> >
> >
> >We also provide IT Decision Makers, Sales and Marketing Decision Makers,
> >C-level Titles and other titles as per your requirement.
> >
> >Please review and let me know your interest if you are looking for above
> >mentioned users list or other contacts list for your campaigns.
> >
> >Waiting for a positive response!
> >
> >Thanks
> >
> >*Aaron Susan*
> >Data Specialist
> >
> >If you are not the right person, feel free to forward this email to the
> >right person in your organization. To opt out response Remove
>
>


Re: Getting started with indexing a database

2012-01-15 Thread Rakesh Varna
Hi Mike,
   Can you try removing '…' from the nested entities? Just keep it in the
top-level entity.

Regards,
Rakesh Varna

On Wed, Jan 11, 2012 at 7:26 AM, Gora Mohanty  wrote:

> On Tue, Jan 10, 2012 at 7:09 AM, Mike O'Leary  wrote:
> [...]
> > My data-config.xml file looks like this:
> >
> > 
> >   >  url="jdbc:mysql://localhost:3306/bioscope" user="db_user"
> password=""/>
> >  
> > >deltaQuery="SELECT doc_id FROM bioscope.docs where
> last_modified > '${dataimporter.last_index_time}'">
> >  
> >  
>
> Your SELECT above does not include the field "type"
>
> >^^ This should be: WHERE id=='${docs.doc_id}' as 'id' is
> what
>you are selecting in this entity.
>
> Same issue for the second nested entity, i.e., replace doc_id= with id=
>
> Regards,
> Gora
>
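A minimal sketch of the data-config.xml shape under discussion (a top-level
entity carrying the deltaQuery, plus a nested entity). The bioscope table
and column names come from the thread; the child entity and its columns are
assumptions:

<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/bioscope"
              user="db_user" password=""/>
  <document>
    <entity name="docs" pk="doc_id"
            query="SELECT doc_id, type FROM bioscope.docs"
            deltaQuery="SELECT doc_id FROM bioscope.docs
                        WHERE last_modified > '${dataimporter.last_index_time}'">
      <field column="doc_id" name="id"/>
      <field column="type" name="type"/>
      <!-- assumed child entity; note the deltaQuery stays on the parent -->
      <entity name="texts"
              query="SELECT id, text FROM bioscope.texts
                     WHERE doc_id='${docs.doc_id}'">
        <field column="text" name="text"/>
      </entity>
    </entity>
  </document>
</dataConfig>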


Re: xpathentityprocessor with flattern true

2012-01-15 Thread Rakesh Varna
Try using flatten="true" in the <field> rather than the <entity>. Note that
it will remove all child node names, and will only concatenate the text
values of the child nodes.
example, given an input document like:

<doc>
  <id>
    <a>abc</a>
    <b>def</b>
    ghi
  </id>
</doc>

this will concatenate abc, def, ghi to give a single text value. Note that
the xpath terminates at <id>.
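The matching DIH field declaration (a sketch; the column name and xpath are
assumptions) carries the flatten attribute on the field:

<field column="id" xpath="/doc/id" flatten="true"/>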

Regards,
Rakesh Varna
On Mon, Jan 9, 2012 at 8:32 AM, vrpar...@gmail.com wrote:

> am i making any mistake with xpathentityprocessor?
>
> i am using solr 1.4
>
> please help me to solve this problem?
>
>
>
> Thanks & Regards,
> Vishal Parekh
>
>
>


Catchall field does not seem to work. Solr 3.4

2012-02-04 Thread Rakesh Varna
Hello Solr-users,
 I am trying to search a patents dataset (in XML format) which
has fields like title, abstract, patent_number, and year of submission. Since
I would not want to specify the field name in a query, I have used a
catchall field and, using copyField, copied all the fields into it. I then
made it the default search field.
My schema looks something like:

[schema snippet stripped by the archive: it defined the title, abs,
patent_no, and year fields, copyField directives from each of them into a
catchall searchField, and <defaultSearchField>searchField</defaultSearchField>]

I have noticed that the 'title' and 'patent_no' fields are not getting
copied. If I search for a string which appears in the title, like
'quinolinecarboxylic', I do not get any results. But if I specify
'title:quinolinecarboxylic', I get the result correctly. I checked the
output and saw that even in the correct result, the searchField tag did not
contain the title and patent_no data. But I do see the 'year' and 'abs'
field data replicated in the 'searchField' tag. (I am checking the XML
output, hence the tag.)

Any idea or pointers to what I might be doing wrong? I am using Solr 3.4
with Tomcat.

Regards,
Rakesh Varna


Re: Catchall field does not seem to work. Solr 3.4

2012-02-04 Thread Rakesh Varna
Thanks! I will look into it.

Regards,
Rakesh Varna

On Sat, Feb 4, 2012 at 10:09 AM, Esteban Donato wrote:

> also, if you want to search across different fields without specifying
> field names, you can use dismax
> http://wiki.apache.org/solr/DisMaxQParserPlugin
>
>
> On Sat, Feb 4, 2012 at 11:06 AM, Ahmet Arslan  wrote:
> >>  
> >>  
> >>  
> >>  
> >>
> >>  searchField
> >>
> >>
> >> I have noticed that 'title' field  and 'patent_no'
> >> field are not getting
> >> copied.
> >
> > You need to use upper case F letter. copyField versus copyfield.
>
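For reference, a working catchall setup of the kind described in this
thread (field names from the thread; the type and attributes are
assumptions) looks like this in schema.xml, with the camel-case copyField
that Ahmet points out:

<field name="searchField" type="text" indexed="true" stored="true"
       multiValued="true"/>

<copyField source="title" dest="searchField"/>
<copyField source="abs" dest="searchField"/>
<copyField source="patent_no" dest="searchField"/>
<copyField source="year" dest="searchField"/>

<defaultSearchField>searchField</defaultSearchField>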


Question on using dynamic fields

2012-04-08 Thread Rakesh Varna
Hello Solr-users,
   I am trying to index XML files which have the following tags (I am
using Solr 3.5 on Tomcat):

<doc>
  <theta2>0.98</theta2>
  <theta5>0.767</theta5>
  ...
  <theta94>0.2873</theta94>
</doc>

The numbers after "theta" are not a continuous sequence and I do not know
how many such tags there are. I thought this was a good candidate for
dynamic fields and have the following schema declaration for those tags:
   <dynamicField name="theta*" type="float" indexed="true" stored="true"/>
Is this correct? If so, what should I use in the data-config.xml file to
index these tags?

When I try the admin feature in the browser and query *:* , I don't see the
theta fields in the response.

If not, is dynamicFields a wrong choice? Is there another way of indexing
these fields?

Thanks in advance,
Rakesh Varna


Re: Question on using dynamic fields

2012-04-09 Thread Rakesh Varna
Hi Erick,
   Thanks for the response. I am trying to index xml files in a directory.
I provide the xpath details, file location etc in data-config.xml. I will
try the 2 approaches that you have mentioned.

Regards,
Rakesh Varna

On Mon, Apr 9, 2012 at 3:38 PM, Erick Erickson wrote:

> Hmmm, not sure about the dataconfig.xml file. What
> are you trying to index? Is this DIH? Because
> if you're simply posting Solr-formatted XML docs,
> dataconfig.xml is irrelevant
>
> You say you're not seeing the output. One of two
> things is going on:
> 1> The data is not in the index. See the admin/schema browser
>  page to examine what actually went in your index.
> 2> Try doing the query with fl=*. You may simply not be asking
>  for the fields to be returned.
>
> Best
> Erick
>
> On Sun, Apr 8, 2012 at 9:09 PM, Rakesh Varna 
> wrote:
> > Hello Solr-users,
> >   I am trying to index xml files which have the following tags: (I am
> > using Solr 3.5 on Tomcat)
> >
> > 
> > 0.98
> > 0.767
> > .
> > ..
> > ..
> > ..
> > 0.2873
> > 
> >
> > The numbers after "theta" are not a continuous sequence and I do not know
> > how many such tags are there. I thought this was a good candidate for
> > dynamic fields and have the following schema for those tags:
> >>  stored="true"/>
> > Is this correct? If so, what should I use in the data-config.xml file to
> > index these tags?
> >
> > When I try the admin feature in the browser and query *:* , I don't see
> the
> > theta fields in the response.
> >
> > If not, is dynamicFields a wrong choice? Is there another way of indexing
> > these fields?
> >
> > Thanks in advance,
> > Rakesh Varna
>


Re: Question on using dynamic fields

2012-04-09 Thread Rakesh Varna
Hi Erick,
   The schema browser says that no dynamic fields were indexed. Any idea
how do I specify dynamic fields through XPath when I only know the prefix
and nothing else?

Regards,
Rakesh Varna

On Mon, Apr 9, 2012 at 4:49 PM, Rakesh Varna  wrote:

> Hi Erick,
>Thanks for the response. I am trying to index xml files in a directory.
> I provide the xpath details, file location etc in data-config.xml. I will
> try the 2 approaches that you have mentioned.
>
> Regards,
> Rakesh Varna
>
>
> On Mon, Apr 9, 2012 at 3:38 PM, Erick Erickson wrote:
>
>> Hmmm, not sure about the dataconfig.xml file. What
>> are you trying to index? Is this DIH? Because
>> if you're simply posting Solr-formatted XML docs,
>> dataconfig.xml is irrelevant
>>
>> You say you're not seeing the output. One of two
>> things is going on:
>> 1> The data is not in the index. See the admin/schema browser
>>  page to examine what actually went in your index.
>> 2> Try doing the query with fl=*. You may simply not be asking
>>  for the fields to be returned.
>>
>> Best
>> Erick
>>
>> On Sun, Apr 8, 2012 at 9:09 PM, Rakesh Varna 
>> wrote:
>> > Hello Solr-users,
>> >   I am trying to index xml files which have the following tags: (I am
>> > using Solr 3.5 on Tomcat)
>> >
>> > 
>> > 0.98
>> > 0.767
>> > .
>> > ..
>> > ..
>> > ..
>> > 0.2873
>> > 
>> >
>> > The numbers after "theta" are not a continuous sequence and I do not
>> know
>> > how many such tags are there. I thought this was a good candidate for
>> > dynamic fields and have the following schema for those tags:
>> >   > >  stored="true"/>
>> > Is this correct? If so, what should I use in the data-config.xml file to
>> > index these tags?
>> >
>> > When I try the admin feature in the browser and query *:* , I don't see
>> the
>> > theta fields in the response.
>> >
>> > If not, is dynamicFields a wrong choice? Is there another way of
>> indexing
>> > these fields?
>> >
>> > Thanks in advance,
>> > Rakesh Varna
>>
>
>


Multiple Update servers

2008-07-28 Thread Rakesh Godhani

Hi, we are currently evaluating Solr and have been browsing the archives for
one particular issue but can't seem to find the answer, so please forgive me
if I'm asking a repetitive question.  We like the idea of having multiple
slave servers serving up queries and a master performing updates.  However,
the issue for us is that there is no redundancy for the master.  So a couple
of questions:

1. Can there be multiple masters (or update servers) sharing the same index
files, performing updates at the same time (i.e. hosting the index on a SAN)?

2. Is there a recommended architecture utilizing a SAN?  (For example, 2
slaves and 2 masters sharing a SAN.)  We currently don't have that many
records - probably about a million and growing.  We are mainly concerned
about redundancy, then performance.

Thanks 
-Rakesh





Re: Multiple Update servers

2008-07-29 Thread Rakesh Godhani
Thanks for the input, much appreciated.
-Rakesh



On 7/29/08 12:18 PM, "Matthew Runo" <[EMAIL PROTECTED]> wrote:

> As far as I know only one machine can write to an index at a time.
> More than that and I got corrupted indexes.
> 
> Thanks!
> 
> Matthew Runo
> Software Developer
> Zappos.com
> 702.943.7833
> 
> On Jul 28, 2008, at 11:25 AM, Rakesh Godhani wrote:
> 
>> 
>> Hi, we are currently evaluating Solr and have been browsing the
>> archives for
>> one particular issue but can't seem to find the answer, so please
>> forgive me
>> if I'm asking a repetitive question.  We like the idea of having
>> multiple
>> slave servers serving up queries and a master performing updates.
>> However
>> the the issue for us there is no redundancy for the master.  So a
>> couple of
>> questions:
>> 
>> 1. Can there be multiple masters (or update servers) sharing the
>> same index
>> files, performing updates at the same time (ie. Hosting the index on
>> a SAN)?
>> 
>> 2. Is there a recommended architecture utilizing a SAN.   (For
>> example 2
>> slaves and 2 masters sharing a SAN).  We current don't have that many
>> records - prob about a million and growing.  We are mainly concerned
>> about
>> redundancy, then performance.
>> 
>> Thanks
>> -Rakesh
>> 
>> 
>> 
> 
> 




Re: Multiple Update servers

2008-07-29 Thread Rakesh Godhani
After Matthew's comment I was thinking about putting them both behind a load
balancer, with the LB directing all traffic to one until it fails and then
kicking over to the other one.

In your architectures, I'm guessing the masters share the same physical
index, but do the slaves share the same index as the masters, or do you use
rsync or some other mechanism to distribute copies?

Thanks
-Rakesh




On 7/29/08 5:07 PM, "Alexander Ramos Jardim"
<[EMAIL PROTECTED]> wrote:

> You could implement a script that would control which master server is
> indexing and put them behind something like a NAT.
> 
> I use that to control my master redundancy.
> 
> 2008/7/29 Rakesh Godhani <[EMAIL PROTECTED]>
> 
>> Thanks for the input, much appreciated.
>> -Rakesh
>> 
>> 
>> 
>> On 7/29/08 12:18 PM, "Matthew Runo" <[EMAIL PROTECTED]> wrote:
>> 
>>> As far as I know only one machine can write to an index at a time.
>>> More than that and I got corrupted indexes.
>>> 
>>> Thanks!
>>> 
>>> Matthew Runo
>>> Software Developer
>>> Zappos.com
>>> 702.943.7833
>>> 
>>> On Jul 28, 2008, at 11:25 AM, Rakesh Godhani wrote:
>>> 
>>>> 
>>>> Hi, we are currently evaluating Solr and have been browsing the
>>>> archives for
>>>> one particular issue but can't seem to find the answer, so please
>>>> forgive me
>>>> if I'm asking a repetitive question.  We like the idea of having
>>>> multiple
>>>> slave servers serving up queries and a master performing updates.
>>>> However
>>>> the the issue for us there is no redundancy for the master.  So a
>>>> couple of
>>>> questions:
>>>> 
>>>> 1. Can there be multiple masters (or update servers) sharing the
>>>> same index
>>>> files, performing updates at the same time (ie. Hosting the index on
>>>> a SAN)?
>>>> 
>>>> 2. Is there a recommended architecture utilizing a SAN.   (For
>>>> example 2
>>>> slaves and 2 masters sharing a SAN).  We current don't have that many
>>>> records - prob about a million and growing.  We are mainly concerned
>>>> about
>>>> redundancy, then performance.
>>>> 
>>>> Thanks
>>>> -Rakesh
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
>> 
> 




Boosting fields by default

2008-08-18 Thread Rakesh Godhani

Hi, I'm using the data import mechanism to pull data into my index.  If I
want to boost a certain field for all docs (e.g. the title over the body),
what is the best way to do that?  I was expecting to change something in
schema.xml but I don't see any info on boosting there.

Thanks in advance
-Rakesh





Re: Boosting fields by default

2008-08-18 Thread Rakesh Godhani
Sweet, cool, thanks
-Rakesh



On 8/18/08 11:31 AM, "Shalin Shekhar Mangar" <[EMAIL PROTECTED]> wrote:

> On Mon, Aug 18, 2008 at 7:12 PM, Rakesh Godhani <[EMAIL PROTECTED]> wrote:
> 
>> 
>> Hi, I'm using the data import mechanism to pull data into my index.  If I
>> want to boost a certain field for all docs, (e.g. the title over the body)
>> what is the best way to do that?  I was expecting to change something in
>> schema.xml but I don't see any info on boosting there.
>> 
> 
> You can specify the boost as an attribute on the field in data-config.xml
> 
> 
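A sketch of what Shalin describes; the entity, column names, and boost
value here are assumptions:

<entity name="doc" query="SELECT id, title, body FROM docs">
  <field column="id" name="id"/>
  <!-- assumed: boost the title at index time -->
  <field column="title" name="title" boost="2.0"/>
  <field column="body" name="body"/>
</entity>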




Solr Newbie question

2008-12-10 Thread Rakesh Sinha
Hi -
  I am a new user of the Solr tool and came across the introductory
tutorial here - http://lucene.apache.org/solr/tutorial.html .
I am planning to use Solr in one of my projects. I see that the
tutorial mentions a REST API / interface to add documents and to
query them.

I would like to create the indices locally, where the web server (or
pool of servers) will have access to the database directly, but use
the query REST API to query for the results.

 I am curious how this could be possible without going through the HTTP
REST API submission to add to the indices. (For the sake of simplicity, we
can assume it would be just one node to store the index but multiple
readers / query machines that could potentially connect to the Solr
web service and retrieve the query results. Also, the index might be
locally present on the same machine as the Solr host, or at least
accessible through NFS etc.)

Thanks for helping out with some starting pointers regarding the same.


Solr 1.3 - DataInputHandler DIH integration

2008-12-12 Thread Rakesh Sinha
[Changing subject accordingly.]

Thanks Noble.

I grabbed one of the nightlies at -
http://people.apache.org/builds/lucene/solr/nightly/ .

I could not find the DataImportHandler in it.  Maybe I am
missing something about the sources of DataImportHandler.

Can somebody suggest where to find it?

On Thu, Dec 11, 2008 at 12:49 AM, Noble Paul നോബിള്‍ नोब्ळ्
 wrote:
> On Wed, Dec 10, 2008 at 11:00 PM, Rakesh Sinha  
> wrote:
>> Hi -
>>  I am a new user of Solr tool  and came across the introductory
>> tutorial here - http://lucene.apache.org/solr/tutorial.html  .
>> I am planning to use Solr in one of my projects . I see that the
>> tutorial mentions about a REST api / interface to add documents and to
>> query the same.
>>
>> I would like to create  the indices locally , where the web server (or
>> pool of servers ) will have access to the database directly , but use
>> the query REST api to query for the results.
>
> If your data resides in DB consider using DIH.
> http://wiki.apache.org/solr/DataImportHandler
>
>>
>>  I am curious how this could be possible without taking the http rest
>> api submission to add to indices. (For the sake of simplicity - we can
>> assume it would be just one node to store the index but multiple
>> readers / query machines that could potentially connect to the solr
>> web service and retrieve the query results. Also the index might be
>> locally present in the same machine as that of the Solr host or at
>> least accessible through NFS etc. )
> I guess you are thinking of using a master/slave setup.
> see this http://wiki.apache.org/solr/CollectionDistribution
> or http://wiki.apache.org/solr/SolrReplication
>
>
>>
>> Thanks for helping out to some starting pointers regarding the same.
>>
>
>
>
> --
> --Noble Paul
>
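The SolrReplication approach Noble points to is configured per core in
solrconfig.xml. A minimal sketch, with the host name, port, and poll
interval as assumptions:

On the master:

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

On each slave:

<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>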


Re: Solr 1.3 - DataInputHandler DIH integration

2008-12-12 Thread Rakesh Sinha
Oops, sorry - never mind - they are present under the contrib directory.

/opt/programs/solr $ find contrib -name *.java | grep Handler
contrib/dataimporthandler/src/main/java/org/apache/solr/handler/dataimport/DataImportHandlerException.java
contrib/dataimporthandler/src/main/java/org/apache/solr/handler/dataimport/AbstractDataImportHandlerTest.java
contrib/dataimporthandler/src/main/java/org/apache/solr/handler/dataimport/DataImportHandler.java
contrib/extraction/src/main/java/org/apache/solr/handler/SolrContentHandler.java
contrib/extraction/src/main/java/org/apache/solr/handler/ExtractingRequestHandler.java
contrib/extraction/src/main/java/org/apache/solr/handler/SolrContentHandlerFactory.java
contrib/extraction/src/test/java/org/apache/solr/handler/ExtractingRequestHandlerTest.java



On Fri, Dec 12, 2008 at 10:40 AM, Rakesh Sinha  wrote:
> [Changing subject accordingly ] .
>
> Thanks Noble.
>
> I grabbed one of the nightlies at -
> http://people.apache.org/builds/lucene/solr/nightly/ .
>
> I could not find the DataImportHandler in the same.  May be I am
> missing something about the sources of DataImportHandler.
>
> Can somebody suggest on where to find the same.
>
> On Thu, Dec 11, 2008 at 12:49 AM, Noble Paul നോബിള്‍ नोब्ळ्
>  wrote:
>> On Wed, Dec 10, 2008 at 11:00 PM, Rakesh Sinha  
>> wrote:
>>> Hi -
>>>  I am a new user of Solr tool  and came across the introductory
>>> tutorial here - http://lucene.apache.org/solr/tutorial.html  .
>>> I am planning to use Solr in one of my projects . I see that the
>>> tutorial mentions about a REST api / interface to add documents and to
>>> query the same.
>>>
>>> I would like to create  the indices locally , where the web server (or
>>> pool of servers ) will have access to the database directly , but use
>>> the query REST api to query for the results.
>>
>> If your data resides in DB consider using DIH.
>> http://wiki.apache.org/solr/DataImportHandler
>>
>>>
>>>  I am curious how this could be possible without taking the http rest
>>> api submission to add to indices. (For the sake of simplicity - we can
>>> assume it would be just one node to store the index but multiple
>>> readers / query machines that could potentially connect to the solr
>>> web service and retrieve the query results. Also the index might be
>>> locally present in the same machine as that of the Solr host or at
>>> least accessible through NFS etc. )
>> I guess you are thinking of using a master/slave setup.
>> see this http://wiki.apache.org/solr/CollectionDistribution
>> or http://wiki.apache.org/solr/SolrReplication
>>
>>
>>>
>>> Thanks for helping out to some starting pointers regarding the same.
>>>
>>
>>
>>
>> --
>> --Noble Paul
>>
>


Solr 1.3 DataImportHandler iBatis integration ..

2008-12-12 Thread Rakesh Sinha
Hi -
  I was planning to check more details about integrating iBatis query
resultsets with the query required for <entity> tags.  Before I
start experimenting more along these lines, I am just curious whether
there has been some effort done earlier on this front (specifically, how
to better integrate DataImportHandler with iBatis queries etc.)


Re: Solr 1.3 DataImportHandler iBatis integration ..

2008-12-12 Thread Rakesh Sinha
Trivial answer - I already have quite a few iBatis queries as part
of the project (a large consumer-facing website) that I want to
reuse.
Also, the iBatis layer already has all the DB authentication tokens
/ sqlmap wired in (as part of sql-map-config.xml).

When I create the dataConfig XML I seem to be re-entering the DB
authentication details and the query once again to use the same.
Hence an orthogonal integration might be really useful.




On Fri, Dec 12, 2008 at 3:11 PM, Shalin Shekhar Mangar
 wrote:
> On Fri, Dec 12, 2008 at 11:50 PM, Rakesh Sinha wrote:
>
>> Hi -
>>  I was planning to check more details about integrating ibatis query
>> resultsets with the query required for  tags .   Before I
>> start experimenting more along the lines - I am just curious if there
>> had been some effort done earlier on this end (specifically - how to
>> better integrate DataImportHandler with iBatis queries etc. )
>>
>
> Why do you want to go through iBatis? Why not index directly from the
> database?
>
> --
> Regards,
> Shalin Shekhar Mangar.
>


DataImportHandler - The field :xyz present in DataConfig does not have a counterpart in Solr Schema

2008-12-29 Thread Rakesh Sinha
Hi -
  I am testing around with the full-import functionality of the Data
Import Handler.  My dataconfig file looks as follows.

<dataConfig>
  <dataSource driver="..." url="..." user="username" password="password"/>
  <document>
    <entity name="user" query="select id, firstname, lastname from user">
      <field column="id" name="id"/>
      <field column="firstname" name="firstname"/>
      <field column="lastname" name="lastname"/>
    </entity>
  </document>
</dataConfig>
In solrconfig.xml I am setting up access for DIH as follows.

<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
  </lst>
</requestHandler>


When I try to access the deployed web-app (even before hitting the
full-import command), I am getting the following sequence of errors.

The field :lastname present in DataConfig does not have a counterpart
in Solr Schema
The field :firstname present in DataConfig does not have a counterpart
in Solr Schema

The config file is very similar to what is given in the DIH wiki.

Curious, what gives ?


Re: DataImportHandler - The field :xyz present in DataConfig does not have a counterpart in Solr Schema

2008-12-29 Thread Rakesh Sinha
Oops. The fields were out of sync with those in schema.xml.

Looking at the dynamic field name configuration in schema.xml - my
dataconfig.xml file now looks as follows.

<dataConfig>
  <dataSource driver="..." url="..." user="username" password="password"/>
  <document>
    <entity name="user" query="select id, firstname, lastname from user">
      <field column="id" name="id"/>
      <field column="firstname" name="firstname_s"/>
      <field column="lastname" name="lastname_s"/>
    </entity>
  </document>
</dataConfig>

Naming the fields with the suffix (_s), as per the dynamic field
naming conventions, fixed the issue.



On Mon, Dec 29, 2008 at 1:36 PM, Rakesh Sinha  wrote:
> Hi -
>  I am testing around with the full - import functionality of Data
> Import Handler.  My dataconfig file looks as follows.
>
>
> 
>user="username" password="password" />
>
>query="select id, firstname, lastname from user">
>
>
>
>
>
> 
>
> In solrconfig.xml - I am setting the access for DIH as follows.
>
>   class="org.apache.solr.handler.dataimport.DataImportHandler">
>
>  data-config.xml
>
>  
>
>
> When I try to access the deployed web-app ( even before hitting
> full-import functionality using command ) - I am getting the following
> sequence of errors.
>
> The field :lastname present in DataConfig does not have a counterpart
> in Solr Schema
> The field :firstname present in DataConfig does not have a counterpart
> in Solr Schema
>
> The config file is very similar to what is given in the DIH wiki.
>
> Curious, what gives ?
>


DataImportHandler full-import: SolrException: Document [null] missing required field: id

2008-12-29 Thread Rakesh Sinha
Hi -
   My dataconfig.xml looks as follows.

<dataConfig>
  <dataSource driver="..." url="..." user="username" password="password"/>
  <document>
    <entity name="user" query="select user_id, firstname, lastname from user">
      <field column="USER_ID" name="id"/>
      <field column="firstname" name="firstname_s"/>
      <field column="lastname" name="lastname_s"/>
    </entity>
  </document>
</dataConfig>
When I do a full-import with this revised schema ( where the primary
key of the table is not id , but user_id ), I am getting the following
error.

WARNING: Error creating document : SolrInputDocument[{}]
org.apache.solr.common.SolrException: Document [null] missing required field: id
at 
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:292)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:59)
at 
org.apache.solr.handler.dataimport.SolrWriter.upload(SolrWriter.java:70)
at 
org.apache.solr.handler.dataimport.DataImportHandler$1.upload(DataImportHandler.java:275)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:328)
at 
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:183)
at 
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:134)
at 
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:323)
at 
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:385)


I am trying to understand this since I had defined a mapping for a
field above as <field column="USER_ID" name="id"/>.

Any idea what is missing here for the indexing? Also, why would the
Document be [null] when my query does return appropriate results?


Equivalent of TermVector.YES in solr - schema

2009-03-02 Thread Rakesh Sinha
I am in the process of porting some Lucene code to Solr.

I checked the wiki at - http://wiki.apache.org/solr/SchemaXml  for the
common porting instructions.
But I have a specific query with respect the following line of code,
about creating a field / fieldType in Solr

Lucene:
return new Field(String, String, Field.Store.NO,
Field.Index.TOKENIZED, TermVector.YES);

Solr:

I created a new field type as -

[fieldType and field definitions stripped by the archive]

My understanding is that the default type "string" does not seem to be
tokenized (since 2.9 the term is "analyzed").

How do I make the field TOKENIZED (since Lucene 2.9, ANALYZED) with
TermVector set to YES?  Thanks.
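For what it's worth, the schema.xml equivalent of that Lucene Field
(Store.NO, tokenized, term vectors on) is a field of an analyzed type with
termVectors enabled. A sketch, with the field name assumed:

<field name="content" type="text" indexed="true" stored="false"
       termVectors="true"/>

The stock "text" fieldType in the example schema is tokenized by its
analyzer, unlike "string", which is not analyzed at all.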


Type of object to go into NamedList as part of SearchComponent#process(ResponseBuilder rb)

2009-03-07 Thread Rakesh Sinha
I am developing this system in which I am adding custom SearchComponents
(last-components) to run after all the post-processing of the query.

server:
---
MyComponent extends SearchComponent:

public class MyComponent extends SearchComponent {

    @Override
    public void process(ResponseBuilder rb) throws IOException {
        MySerializableType obj = new MySerializableType();
        // populate obj after processing
        rb.rsp.add("myresult", obj);
    }
}


client:

QueryResponse rsp = commonsHttpSolrServer.query(mySolrQuery);
MySerializableType obj = (MySerializableType) rsp.getResponse().get("myresult");

I was curious if the above-mentioned code would work for any kind of
serializable type T, as long as it implements Serializable. Would
there be any restrictions on the same (like only lists of strings,
lists of integers, etc.)?
Thanks.


DocSet implementation for around 300K documents - clarification regarding the memory size

2009-03-08 Thread Rakesh Sinha
The document set I am talking about is much smaller - around 300K documents.

I have some pre-defined queries (totally around 10K) that I want to
implement as facets on the resultset of a given query, returning the
top N (say 100) of the same.

I was planning to pre-compute the DocSet results for each of these
queries - store them in memory .

And then - for every successive query - I can do an intersection of
the query resultset with the predefined docset and extract the top N
(after sorting, of course).

Before I go along these lines - I was considering the memory usage of
10K docsets.

http://lucene.apache.org/solr/api/org/apache/solr/search/DocSet.html ,
lists 3 possible implementations ( conventional Bitset as BitDocSet,
HashDocSet for sparse docsets and DocSlice , ??? ) .

Most of these 10K docsets that I am talking about would fall into
sparse docset category.

I am curious which docset implementation would be chosen to store the
docset result. (Does it automatically select the right one based on
the density of the docset? E.g., if the fraction of set bits in the
bitset is > 1/8 then storing it as a BitDocSet might be OK, but for
storing a docset with 10 bits set out of a possible 300K, a
HashDocSet might be better.)

Where can I look (in the Solr source code) to understand more about this?  Thanks.


Re: Type of object to go into NamedList as part of SearchComponent#process(ResponseBuilder rb)

2009-03-08 Thread Rakesh Sinha
QueryResponse rsp = commonsHttpSolrServer.query(mySolrQuery);
MySerializableType obj = (MySerializableType) rsp.getResponse().get("myresult");

In the code - the API referred to is -

http://lucene.apache.org/solr/api/org/apache/solr/common/util/NamedList.html#get(java.lang.String)
 .

That seems to return any object of type T, as per the API.

So - is there any Solr code that I can look at that imposes the
restriction that the type has to be an int, string, List (of what
allowed objects?), Map (allowed key/value types?), etc.?




On Sun, Mar 8, 2009 at 2:18 AM, Shalin Shekhar Mangar
 wrote:
> On Sun, Mar 8, 2009 at 5:34 AM, Rakesh Sinha wrote:
>
>>
>> client:
>> 
>> QueryResponse rsp = commonsHttpSolrServer.query( mySolrQuery);
>> MySerializableType obj  = rsp.getResponse().get("myresult");
>>
>> I was curious if the above mentioned code would work for any kind of
>> serializable type , T, as long as it implements Serializable . Would
>> there be any restrictions on the same (like only List of Strings -
>> List of Integers etc).
>> Thanks.
>>
>
> No, the returned object is a NamedList data structure. It can contain the
> basic java types (int/float etc.), strings, lists, maps and a couple of Solr
> specific objects such as SolrDocumentList, SolrDocument
>
> http://lucene.apache.org/solr/api/solrj/org/apache/solr/common/util/NamedList.html
> --
> Regards,
> Shalin Shekhar Mangar.
>


Multiple Core schemas with single solr.solr.home

2009-04-04 Thread Rakesh Sinha
I am planning to configure a Solr server with multiple cores, each with a
different schema, under a single solr.solr.home.  Are there any examples
of this in the wiki?  (The ones that I see have a single schema.xml for a
given solr.solr.home, under the conf directory.)

Thanks for any pointers to the same.


dual of method - CommonsHttpSolrServer(url) to close and destroy underlying httpclient connection

2009-04-17 Thread Rakesh Sinha
When we instantiate a CommonsHttpSolrServer, we use the following method.

CommonsHttpSolrServer server = new CommonsHttpSolrServer(this.endPoint);

How do we do a 'kill all' of all the underlying HttpClient connections?

server.getHttpClient() returns an HttpClient reference, but I am trying
to figure out the right method to close all currently active
HttpClient connections.


maxBooleanClauses implications of a high number ?

2009-04-20 Thread Rakesh Sinha
I am configuring Solr locally for our apps, and for some of them we
need to raise maxBooleanClauses in the Solr configuration.
Right now we have set it to 8K (as opposed to the default of 1K).
Our dataset is about 500K documents. We have about 6G of RAM
(totally), so ignoring the app server + free space required for swap
out, I would put the number around 4G for the Solr JVM instance.
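For reference, the setting in question lives in solrconfig.xml:

<!-- the default is 1024; 8K as described above -->
<maxBooleanClauses>8192</maxBooleanClauses>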

Given these implications, I am trying to figure out how far we can go
with (how high) the maxBooleanClauses number, since sometimes the boolean
queries to be composed really are that long (a huge list of terms to be
OR-ed).

* What are the space implications in terms of memory (and then
possibly disk usage)?
* What are the time implications in terms of performance?

One of the solutions I had thought of is to split the long boolean
query into sub-queries and feed in multiple queries.  Again, if we
were to take that route, what would the time / space considerations be?


using boolean operators with different fields for querying

2009-04-24 Thread Rakesh Sinha
  How do I specify boolean operators on a given field when searching with
the 'q' parameter?

For a given index I have different documents: document 1 with a
field course:"Applied Statistics",
document 2 with a field course:"Applied Probability", and
document 3 with a field course:"Discrete math".

I need to search for those documents with field course containing
statistics or probability

When I submit q=course:"statistics or probability", it searches for
documents with an exact match of that string (seen with the
debugQuery=true option).  How do I pass in a single string that is
directly taken by the query parser, without breaking it up myself? (I can
achieve the above with q=course:statistics or q=course:probability, but
then I would be doing the parsing myself; I would like to delegate the
parsing to Lucene.)

Thanks.


Re: using boolean operators with different fields for querying

2009-04-24 Thread Rakesh Sinha
On Fri, Apr 24, 2009 at 11:51 AM, Shalin Shekhar Mangar
 wrote:
> On Fri, Apr 24, 2009 at 9:08 PM, Rakesh Sinha wrote:
>
>>  How do I specify boolean operators on a given field to search with
>> the 'q' parameter.
>>
>> For a given index - I have a different documents , document 1 with a
>> field course:"Applied Statistics"
>>  document 2 with a field course: "Applied Probability"
>> document 3 with a field course:"Discrete math"
>>
>> I need to search for those documents with field course containing
>> statistics or probability
>>
>> When I submit q=course:"statistics or probability"  , it searches for
>> those documents with the exact match of the string (with
>> debugQuery=true , option ) .  How do pass in a single string that is
>> directly taken by the query parser without breaking it myself ( I can
>> achieve the above by q=course:statistics or q=course:probability - but
>> that would I do the parsing myself but would like to delegate the
>> parsing to Lucene though).
>>
>
> You can query it like q=course:(statistics probability) or by
> q=course:statistics course:probability
>
> I'm assuming that you have not changed the default operator (OR) in
> schema.xml
> --
> Regards,
> Shalin Shekhar Mangar.
>

It seems like that approach assumes that the parsing is done on the
client side.  But what I want is to pass a string like
course:(statistics +probability -discrete) to Solr, and have Solr parse
the string and build the query locally on the Solr server, as opposed to
this being done on the client.  Any idea how to accomplish that?


Delete by query in SOLR 6.3

2018-11-14 Thread RAKESH KOTE
Hi,

We are using SOLR 6.3 in cloud mode and we have created 2 collections in a
single SOLR cluster consisting of 20 shards and 3 replicas each (overall
20x3 = 60 instances). The first collection has close to 2.5 billion records
and the second collection has 350 million records. Both collections use the
same instances, which have 4 cores and 26 GB RAM (10-12 GB assigned for heap
and 14 GB for the OS). The first collection's index size is close to 50 GB
and the second collection's index size is close to 5 GB on each of the
instances. We are using the default solrconfig values, and autoCommit and
softCommit are set to 5 minutes. The SOLR cluster is supported by 3 ZK.

We are able to reach 5000/s updates and we are using solrj to index the data
to solr. We also periodically delete documents in each of the collections
using the solrj delete-by-query method (we use a non-id field in the delete
query). (We are using Java 1.8.) The updates happen without much issue, but
when we try to delete, it takes a considerable amount of time (close to 20
sec on average, but some of them take more than 4-5 mins), which slows down
the whole application. We don't do an explicit commit after deletion and let
the autoCommit take care of it every 5 mins. Since we are not doing a
commit, we are wondering why the delete takes more time compared to updates,
which are very fast and finish in less than 50 ms - 100 ms. Could you please
let us know the reason, or how deletes differ from update operations in
SOLR?

with warm regards,
RK.
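For reference, the operation in question, whether issued through solrj's
deleteByQuery or posted as an XML update message, amounts to a request like
the following (the field name and value here are assumptions):

<delete>
  <query>status:expired</query>
</delete>

Unlike a delete by id, Solr has to execute the query to find the matching
documents, and a delete-by-query has to be applied in order relative to
concurrent updates, which is a large part of why it typically costs far
more than an individual update.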

Re: Solr Cloud existing shards down after enabling SSL

2019-02-11 Thread Rakesh Enjala
Please help
*Regards,*
*Rakesh Enjala*


On Wed, Feb 6, 2019 at 2:59 PM Rakesh Enjala 
wrote:

> Hi,
>
> We have a Solr cloud with 4 nodes installed on two different servers (1
> on one server and 3 on the other) and a collection with data in 4 shards.
> We enabled SSL for SolrCloud using the guide below:
>
> https://lucene.apache.org/solr/guide/7_4/enabling-ssl.html
>
> We successfully enabled SSL and are able to access the Solr GUI, but for
> the existing collection only one shard is in the active state and the other
> 3 are down. When I click Cloud in the GUI, all 3 shards show as down, on
> port 8984 instead of their different ports.
>
> Solr Version:7.4
>
> Environment: Centos 7.2
>
> Please help me out !!
>
> Thanks in advance
> *Regards,*
> *Rakesh Enjala*
>

-- 
Disclaimer: The information contained in this electronic message and any
attachments to this message are intended for the exclusive use of the
addressee(s) and may contain proprietary, confidential or privileged
information. If you are not the intended recipient, you should not
disseminate, distribute or copy this e-mail. Please notify the sender
immediately and destroy all copies of this message and any attachments.
WARNING: Computer viruses can be transmitted via email. The recipient
should check this email and any attachments for the presence of viruses.
The company accepts no liability for any damage caused by any virus
transmitted by this email. www.solix.com



Solr Cloud Kerberos cookie rejected spnego

2019-06-23 Thread Rakesh Enjala
Hi Team,

Enabled solrcloud-7.4.0 with kerberos. While creating a collection getting
below error

org.apache.http.impl.auth.HttpAuthenticator; NEGOTIATE authentication
error: No valid credentials provided (Mechanism level: No valid credentials
provided (Mechanism level: Server not found in Kerberos database (7)))
org.apache.http.client.protocol.ResponseProcessCookies; Cookie rejected
[hadoop.auth="", version:0, domain:xxx.xxx.com, path:/, expiry: Illegal
domain attribute "". Domain of origin: "localhost"

I enabled krb5 debug (true) and found that the actual problem is that the
sname is HTTP/localh...@realm.com; it should be HTTP/@DOMAIN1.COM, not
localhost.

solr.in.sh

SOLR_AUTH_TYPE="kerberos"
SOLR_AUTHENTICATION_OPTS="-DauthenticationPlugin=org.apache.solr.security.KerberosPlugin
-Djava.security.auth.login.config=/solr/jaas.conf
-Dsun.security.krb5.debug=true -Dsolr.kerberos.cookie.domain=
-Dsolr.kerberos.name.rules=DEFAULT -Dsolr.kerberos.principal=HTTP/@
DOMAIN1.COM -Dsolr.kerberos.keytab=/solr/HTTP.keytab"

Please help me out!
*Regards,*
*Rakesh Enjala*


Re: Solr Cloud Kerberos cookie rejected spnego

2019-06-24 Thread Rakesh Enjala
Hi Team,

Enabled solrcloud-7.4.0 with kerberos. While creating a collection getting
below error

org.apache.http.impl.auth.HttpAuthenticator; NEGOTIATE authentication
error: No valid credentials provided (Mechanism level: No valid credentials
provided (Mechanism level: Server not found in Kerberos database (7)))
org.apache.http.client.protocol.ResponseProcessCookies; Cookie rejected
[hadoop.auth="", version:0, domain:xxx.xxx.com, path:/, expiry: Illegal
domain attribute "". Domain of origin: "localhost"

I enabled krb5 debug (true) and found that the actual problem is that the
sname is HTTP/localh...@realm.com; it should be HTTP/@DOMAIN1.COM, not
localhost.

solr.in.sh

SOLR_AUTH_TYPE="kerberos"
SOLR_AUTHENTICATION_OPTS="-DauthenticationPlugin=org.apache.solr.security.KerberosPlugin
-Djava.security.auth.login.config=/solr/jaas.conf
-Dsun.security.krb5.debug=true -Dsolr.kerberos.cookie.domain=
-Dsolr.kerberos.name.rules=DEFAULT -Dsolr.kerberos.principal=HTTP/@
DOMAIN1.COM -Dsolr.kerberos.keytab=/solr/HTTP.keytab"

Please help me out!
*Regards,*
*Rakesh Enjala*


On Sun, Jun 23, 2019 at 8:04 PM Rakesh Enjala 
wrote:

> Hi Team,
>
> Enabled solrcloud-7.4.0 with kerberos. While creating a collection getting
> below error
>
> org.apache.http.impl.auth.HttpAuthenticator; NEGOTIATE authentication
> error: No valid credentials provided (Mechanism level: No valid credentials
> provided (Mechanism level: Server not found in Kerberos database (7)))
> org.apache.http.client.protocol.ResponseProcessCookies; Cookie rejected
> [hadoop.auth="", version:0, domain:xxx.xxx.com, path:/, expiry:
> Illegal domain attribute "". Domain of origin: "localhost"
>
> enabled krb5 debug true and am able to find the actual problem is that
> sname is HTTP/localh...@realm.com, it should be HTTP/@DOMAIN1.COM not the
> localhost
>
> solr.in.sh
>
> SOLR_AUTH_TYPE="kerberos"
> SOLR_AUTHENTICATION_OPTS="-DauthenticationPlugin=org.apache.solr.security.KerberosPlugin
> -Djava.security.auth.login.config=/solr/jaas.conf
> -Dsun.security.krb5.debug=true -Dsolr.kerberos.cookie.domain=
> -Dsolr.kerberos.name.rules=DEFAULT -Dsolr.kerberos.principal=HTTP/@
> DOMAIN1.COM -Dsolr.kerberos.keytab=/solr/HTTP.keytab"
>
> Please help me out!
> *Regards,*
> *Rakesh Enjala*
>