Re: ZFS File System for SOLR 3.6 and SOLR 4

2015-03-29 Thread William Bell
How is performance on XFS when compared to ext4?

From Otis: noatime, nodiratime

http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201306.mbox/%3CCACtzgz1YwM5HDQO1R=2CdGxFmJnXcs4pyWRuaPJkKRc=eb8...@mail.gmail.com%3E
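(Those are mount options; a hypothetical fstab entry using them could look
like the following - the device and mount point here are just placeholders:)

/dev/sdb1  /var/solr/data  xfs  defaults,noatime,nodiratime  0  2

or, to apply them to an already-mounted filesystem:

mount -o remount,noatime,nodiratime /var/solr/data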

Large file systems seem to work well in both. I think the underlying
hardware does actually matter too. SSD vs HDD vs Y.

Several people love XFS at Amazon:
http://java.dzone.com/articles/tips-check-and-improve-your


On Sat, Mar 28, 2015 at 4:57 PM, Bill Bell  wrote:

> Is there an advantage for XFS over ext4 for Solr? Has anyone done testing?
>
> Bill Bell
> Sent from mobile
>
>
> > On Mar 27, 2015, at 8:14 AM, Shawn Heisey  wrote:
> >
> >> On 3/27/2015 12:30 AM, abhi Abhishek wrote:
> >> I am trying to use ZFS as the filesystem for my Linux environment. Are
> >> there any performance implications of using any filesystem other than
> >> ext3/ext4 with Solr?
> >
> > That should work with no problem.
> >
> > The only time Solr tends to have problems is if you try to use a network
> > filesystem.  As long as it's a local filesystem and it implements
> > everything a program can typically expect from a local filesystem, Solr
> > should work perfectly.
> >
> > Because of the compatibility problems that the license for ZFS has with
> > the GPL, ZFS on Linux is probably not as well tested as other
> > filesystems like ext4, xfs, or btrfs, but I have not heard about any big
> > problems, so it's probably safe.
> >
> > Thanks,
> > Shawn
> >
>



-- 
Bill Bell
billnb...@gmail.com
cell 720-256-8076


SolR response encapsulation

2015-03-29 Thread danutclapa
Hello,
I am new to Solr and I haven't figured out whether a Solr response can be
encapsulated in such a way that it maps directly into an object, the way an
ORM does with JPA.

What I have in Solr: some fields that are unique (single-valued) and others
that are not, like this: lastName, firstName, productsId (multiValued in
Solr), productsName (also multiValued).

The actual Solr response looks like this (let's consider the JSON response):

 {"lastName": "John", "firstName": "Doe", "productsId": ["1","2","3"],
"productsName": ["prod1","prod2","prod3"]}


What I want is for the response to contain sub-objects: the productId and the
productName together obviously form an object and should be encapsulated as
such.

The response should then look like:
 {"lastName": "John", "firstName": "Doe", "products":
[{"productId":"1","productName":"prod1"},{"productId":"2","productName":"prod2"}]}

Is it possible for the response to be structured like that?


Thank you,
Danut



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolR-response-encapsulation-tp4196178.html
Sent from the Solr - User mailing list archive at Nabble.com.


Solr 3.6, Highlight and multi words?

2015-03-29 Thread Bruno Mannina

Dear Solr User,

I'm trying to work with highlighting. It works well, but only if I have a
single keyword in my query.
If my request is plastic AND bicycle, then only plastic is highlighted.

my request is:

./select/?q=ab%3A%28plastic+and+bicycle%29&version=2.2&start=0&rows=10&indent=on&hl=true&hl.fl=tien,aben&fl=pn&f.aben.hl.snippets=5

Could you please help me understand? I've read the docs and searched Google
without success, so I'm posting here.

my result is:



 

(highlighting snippets from the response; highlighted terms are wrapped in <em> tags)

  (EP2423092A1) #CMT# #/CMT# The bicycle pedal has a pedal body (10) made
  from <em>plastic</em> material, particularly for touring bike.
  #CMT#ADVANTAGE : #/CMT# The bicycle pedal has a pedal body made from
  <em>plastic</em>

  between <em>plastic</em> tapes 3 and 3 having two heat fusion layers, and
  the two <em>plastic</em> tapes 3 and 3 are stuck

  elements. A connecting element is formed as a hinge, a flexible foil or a
  flexible <em>plastic</em> part. #CMT#USE

  A bicycle handlebar grip includes an inner fiber layer and an outer
  <em>plastic</em> layer. Thus, the fiber handlebar grip, while the
  <em>plastic</em> layer is soft and has an adjustable thickness to provide a
  comfortable sensation to a user. In addition, the <em>plastic</em> layer
  includes a holding portion coated on the outer surface layer to enhance the
  combination strength between the fiber layer and the <em>plastic</em> layer
  and to enhance

  








Re: Solr 3.6, Highlight and multi words?

2015-03-29 Thread Bruno Mannina

Additional information: in my schema.xml, my field is defined like this:

 

Maybe it's missing something, like termVectors?
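For example, a definition with term vectors enabled would look roughly like
this (a sketch - the type and other attributes are only my guess, not my
actual definition):

<field name="aben" type="text" indexed="true" stored="true"
       termVectors="true" termPositions="true" termOffsets="true"/>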



On 29/03/2015 21:15, Bruno Mannina wrote:

Dear Solr User,

I'm trying to work with highlighting. It works well, but only if I have a
single keyword in my query.

If my request is plastic AND bicycle, then only plastic is highlighted.

my request is:

./select/?q=ab%3A%28plastic+and+bicycle%29&version=2.2&start=0&rows=10&indent=on&hl=true&hl.fl=tien,aben&fl=pn&f.aben.hl.snippets=5

Could you please help me understand? I've read the docs and searched Google
without success, so I'm posting here.

my result is:



 

[...]







Structured and Unstructured data indexing in SolrCloud

2015-03-29 Thread Vijay Bhoomireddy
Hi,

 

We have a requirement where both structured and unstructured data come into
the system. We need to index both and then enable search functionality over
them. We are using SolrCloud on the Hadoop platform. For structured data, we
are planning to put the data into HBase, and for unstructured data, directly
into HDFS.

 

My question is: how do I index these sources under a single Solr core? Would
it be possible to index both structured and unstructured data under a single
core/collection in SolrCloud and then enable search functionality over that
index?

 

Thanks in advance.




Re: Structured and Unstructured data indexing in SolrCloud

2015-03-29 Thread Jack Krupansky
The first step is to work out the queries that you wish to perform - that
will determine how the data should be organized in the Solr schema.
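For example (purely a sketch of one possible layout - the field names here are
hypothetical, not a recommendation for your data), a single collection can
hold both kinds of records side by side, distinguished by a type field:

<add>
  <doc>
    <field name="id">hbase-row-123</field>
    <field name="doc_type">structured</field>
    <field name="customer_name">Acme Corp</field>
  </doc>
  <doc>
    <field name="id">hdfs-file-456</field>
    <field name="doc_type">unstructured</field>
    <field name="content">full extracted text of the file ...</field>
  </doc>
</add>

The queries you need will tell you which structured fields to keep separate
and which text to flatten into a searchable content field.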

-- Jack Krupansky

On Sun, Mar 29, 2015 at 4:04 PM, Vijay Bhoomireddy <
vijaya.bhoomire...@whishworks.com> wrote:

> Hi,
>
>
>
> We have a requirement where both structured and unstructured data comes
> into
> the system. We need to index both of them and then enable search
> functionality on it. We are using SolrCloud on Hadoop platform. For
> structured data, we are planning to put the data into HBase and for
> unstructured, directly into HDFS.
>
>
>
> My question is how to index these sources under a single Solr core? Would
> that be possible to index both structured and unstructured data under a
> single core/collection in SolrCloud and then enable search functionality
> over that index?
>
>
>
> Thanks in advance.
>
>
>


Re: New To Solr, getting error using the quick start guide

2015-03-29 Thread Will ferrer
Hi Shawn.

Thanks so much for the response. I will do some more tests on this and send
more info in the next day or two; maybe that will illuminate something. I
hope you are having a great weekend.

All the best.

Will Ferrer

On Sat, Mar 28, 2015 at 10:28 AM, Shawn Heisey  wrote:

> On 3/27/2015 8:00 PM, Will ferrer wrote:
> > I am new to solr and trying to run through the quick start guide (
> > http://lucene.apache.org/solr/quickstart.html).
> >
> > The installation seems fine but then I run:
> >
> > bin/solr start -e cloud -noprompt
>
> You are starting the cloud example with no prompts.
>
> > http://localhost:8983/solr/#/ shows data in my web browser, but the cloud
> > tab is empty under graph.
> >
> > Any advice anyone can give me to get started with the product would be
> > very much appreciated.
>
> The console log that you included with your message said nothing about
> creating the gettingstarted collection, but when I try the command you
> used on the following setups, it creates the collection every time:
>
> *) Linux, from the tags/lucene_solr_5_0_0 source.
> *) Windows 8.1, from the binary 5.0.0 download.
> *) Linux, from the branch_5x source.
>
> Here's my console log from the first item above - Solr built from the
> tags/lucene_solr_5_0_0 source:
>
> ---
>
> elyograg@sauron:~/asf/lucene_solr_5_0_0/solr$ bin/solr -e cloud -noprompt
>
> Welcome to the SolrCloud example!
>
>
> Starting up 2 Solr nodes for your example SolrCloud cluster.
> Creating Solr home directory
> /home/elyograg/asf/lucene_solr_5_0_0/solr/example/cloud/node1/solr
> Cloning Solr home directory
> /home/elyograg/asf/lucene_solr_5_0_0/solr/example/cloud/node1 into
> /home/elyograg/asf/lucene_solr_5_0_0/solr/example/cloud/node2
>
> Starting up SolrCloud node1 on port 8983 using command:
>
> solr start -cloud -s example/cloud/node1/solr -p 8983
>
>
> Waiting to see Solr listening on port 8983 [/]
> Started Solr server on port 8983 (pid=13260). Happy searching!
>
>
> Starting node2 on port 7574 using command:
>
> solr start -cloud -s example/cloud/node2/solr -p 7574 -z localhost:9983
>
>
> Waiting to see Solr listening on port 7574 [/]
> Started Solr server on port 7574 (pid=13419). Happy searching!
>
>   Connecting to ZooKeeper at localhost:9983
> Uploading
>
> /home/elyograg/asf/lucene_solr_5_0_0/solr/server/solr/configsets/data_driven_schema_configs/conf
> for config gettingstarted to ZooKeeper at localhost:9983
>
> Creating new collection 'gettingstarted' using command:
>
> http://166.70.79.221:7574/solr/admin/collections?action=CREATE&name=gettingstarted&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=gettingstarted
>
> {
>   "responseHeader":{
> "status":0,
> "QTime":6869},
>   "success":{"":{
>   "responseHeader":{
> "status":0,
> "QTime":6387},
>   "core":"gettingstarted_shard1_replica2"}}}
>
>
>
> SolrCloud example running, please visit http://localhost:8983/solr
>
>
> elyograg@sauron:~/asf/lucene_solr_5_0_0/solr$
>
> ---
>
> I don't know why it's not creating the collection for you, unless maybe
> you are running a different version built from older source code or
> something.
>
> Thanks,
> Shawn
>
>


Re: Unable to perform search query after changing uniqueKey

2015-03-29 Thread Zheng Lin Edwin Yeo
Hi Andrea,

This is the query that I'm using.
http://localhost:8983/solr/logmill/select?q=*:*&wt=xml&indent=true

This is the stacktrace that I got.





<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader">
  <int name="status">500</int>
  <int name="QTime">10</int>
  <lst name="params">
    <str name="indent">true</str>
    <str name="q">*:*</str>
    <str name="wt">xml</str>
  </lst>
</lst>
<lst name="error">
  <str name="trace">
java.lang.NullPointerException at
org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:1074)
at
org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:743)
at
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:722)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:350)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:144)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2006) at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:413)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:204)
at
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368) at
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640) at
org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) at
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Unknown Source)
  </str>
  <int name="code">500</int>
</lst>
</response>




I'm using a SolrCloud setup.


Regards,
Edwin



On 27 March 2015 at 16:42, Andrea Gazzarini  wrote:

> Hi Edwin,
> please provide some other detail about your context, (e.g. complete
> stacktrace, query you're issuing)
>
> Best,
> Andrea
>
>
> On 03/27/2015 09:38 AM, Zheng Lin Edwin Yeo wrote:
>
>> Hi everyone,
>>
>> I've changed my uniqueKey to another name, instead of using id, on the
>> schema.xml.
>>
>> However, after I have done the indexing (the indexing is successful), I'm
>> not able to perform a search query on it. I gives the error
>> java.lang.NullPointerException.
>>
>> Is there other place which I need to configure, besides changing the
>> uniqueKey field in scheam.xml?
>>
>> Regards,
>> Edwin
>>
>>
>


Re: Unable to perform search query after changing uniqueKey

2015-03-29 Thread Zheng Lin Edwin Yeo
Hi Erick,

I used the following query to delete the entire index.

http://localhost:8983/solr/update?stream.body=<delete><query>*:*</query></delete>
http://localhost:8983/solr/update?stream.body=<commit/>


Or is it better to physically delete the entire data directory?


Regards,
Edwin


On 28 March 2015 at 02:27, Erick Erickson  wrote:

> You say you re-indexed, did you _completely_ remove the data directory
> first, i.e. the parent of the "index" and, maybe, "tlog" directories?
> I've occasionally seen remnants of old definitions "pollute" the new
> one, and since the <uniqueKey> key is so fundamental I can see it
> being a problem.
>
> Best,
> Erick
>
> On Fri, Mar 27, 2015 at 1:42 AM, Andrea Gazzarini 
> wrote:
> > Hi Edwin,
> > please provide some other detail about your context, (e.g. complete
> > stacktrace, query you're issuing)
> >
> > Best,
> > Andrea
> >
> >
> > On 03/27/2015 09:38 AM, Zheng Lin Edwin Yeo wrote:
> >>
> >> Hi everyone,
> >>
> >> I've changed my uniqueKey to another name, instead of using id, on the
> >> schema.xml.
> >>
> >> However, after I have done the indexing (the indexing is successful),
> I'm
> >> not able to perform a search query on it. I gives the error
> >> java.lang.NullPointerException.
> >>
> >> Is there other place which I need to configure, besides changing the
> >> uniqueKey field in scheam.xml?
> >>
> >> Regards,
> >> Edwin
> >>
> >
>


Same schema.xml is loaded for different cores in SolrCloud

2015-03-29 Thread Zheng Lin Edwin Yeo
Hi everyone,

I've created a SolrCloud setup with multiple cores, and I have a different
schema.xml for each core. However, when I start Solr, only one version of the
schema.xml is loaded. Regardless of which core I go to, the schema.xml shown
is the first one I loaded.

What I did was this: I have 3 cores: logmill, collection1 and collection2.
Each core has 2 shards: shard1 and shard2.

I first started the Solr with shard1 using the following command:
java -Dcollection.configName=logmill -DzkRun -DnumShards=2
-Dbootstrap_confdir=./solr/logmill/conf -jar start.jar

After that I start shard2 using the following command:
java -Dcollection.configName=logmill -DzkRun -DnumShards=2
-Dbootstrap_confdir=./solr/logmill/conf -jar start.jar

All the schema.xml files loaded are from the logmill core, even for
collection1 and collection2.

Even after I change the command to start shard1 to the following, all the
schema.xml files are still from logmill:
java -Dcollection.configName=collection1 -DzkRun
-DnumShards=2 -Dbootstrap_confdir=./solr/collection1/conf -jar start.jar


How do I get Solr to read the different schema.xml for the different cores?
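Do I need to upload and link a separate config set for each collection with
zkcli, something along these lines? (This is just my guess at the commands;
the script path and ZooKeeper address are assumptions from my setup.)

scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd upconfig \
    -confdir ./solr/collection1/conf -confname collection1
scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd linkconfig \
    -collection collection1 -confname collection1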

Regards,
Edwin


Re: Unable to perform search query after changing uniqueKey

2015-03-29 Thread Erick Erickson
I meant shut down Solr and physically remove the entire data
directory. Not saying this is the cure, but it can't hurt to rule out
the index having "memory"...

Best,
Erick

On Sun, Mar 29, 2015 at 6:35 PM, Zheng Lin Edwin Yeo
 wrote:
> Hi Erick,
>
> I used the following query to delete all the index.
>
> http://localhost:8983/solr/update?stream.body=<delete><query>*:*</query></delete>
> http://localhost:8983/solr/update?stream.body=<commit/>
>
>
> Or is it better to physically delete the entire data directory?
>
>
> Regards,
> Edwin
>
>
> On 28 March 2015 at 02:27, Erick Erickson  wrote:
>
>> You say you re-indexed, did you _completely_ remove the data directory
>> first, i.e. the parent of the "index" and, maybe, "tlog" directories?
>> I've occasionally seen remnants of old definitions "pollute" the new
> >> one, and since the <uniqueKey> key is so fundamental I can see it
>> being a problem.
>>
>> Best,
>> Erick
>>
>> On Fri, Mar 27, 2015 at 1:42 AM, Andrea Gazzarini 
>> wrote:
>> > Hi Edwin,
>> > please provide some other detail about your context, (e.g. complete
>> > stacktrace, query you're issuing)
>> >
>> > Best,
>> > Andrea
>> >
>> >
>> > On 03/27/2015 09:38 AM, Zheng Lin Edwin Yeo wrote:
>> >>
>> >> Hi everyone,
>> >>
>> >> I've changed my uniqueKey to another name, instead of using id, on the
>> >> schema.xml.
>> >>
>> >> However, after I have done the indexing (the indexing is successful),
>> I'm
>> >> not able to perform a search query on it. I gives the error
>> >> java.lang.NullPointerException.
>> >>
>> >> Is there other place which I need to configure, besides changing the
>> >> uniqueKey field in scheam.xml?
>> >>
>> >> Regards,
>> >> Edwin
>> >>
>> >
>>


Re: How to boost documents at index time?

2015-03-29 Thread CKReddy Bhimavarapu
@Bill Bell
>
> Did you try debugQuery ?

Yes, I tried debugQuery. The final score is lower when I apply boost="0.002"
on the doc than when I leave it blank, but I can't see where this 0.002
actually shows up (as far as I understand, the index-time boost is
multiplicative, so 0.002 technically de-boosts the doc). That is, I am
expecting something like: some score * 0.002 = new score.
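(For context, the boost I am setting is the doc-level boost in the XML update
message, roughly like this - the field names are just an example:)

<add>
  <doc boost="0.002">
    <field name="id">12345</field>
    <field name="name">example document</field>
  </doc>
</add>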

@Ahmet Arslan

> Did you disable norms ( omitNorms="true" ) accidentally ?

No, norms are currently enabled (omitNorms="false") for the fields we need.


On Sun, Mar 29, 2015 at 9:17 AM, Ahmet Arslan 
wrote:

> Hi,
>
> Did you disable norms ( omitNorms="true" ) accidentally ?
>
> Ahmet
>
>
> On Saturday, March 28, 2015 9:49 AM, CKReddy Bhimavarapu <
> chaitu...@gmail.com> wrote:
> I want to boost docs at index time. I am doing this using the boost
> parameter on the doc field,
> but I can't see a direct impact on the doc when using debugQuery.
>
> My question is: is there any other way to boost a doc at index time where I
> can see the reflected changes, i.e. a direct impact?
>
> --
> ckreddybh. 
>



-- 
ckreddybh. 


Re: SOLR Index in shared/Network folder

2015-03-29 Thread abhi Abhishek
Hello,
     Thanks for the suggestions. My aim is to reduce disk space usage.
I have 1 master with 2 slaves configured, where the slaves are used for
searching and the master ingests new data that is replicated to the slaves.
But as my index size is in the hundreds of GB, we see a 3x space overhead. I
would like to reduce this overhead; can you suggest something for this?

Thanks in Advance

Best Regards,
Abhishek

On Sat, Mar 28, 2015 at 12:13 AM, Erick Erickson 
wrote:

> To pile on: If you're talking about pointing two Solr instances at the
> _same_ index, it doesn't matter whether you are on NFS or not, you'll
> have all sorts of problems. And if this is a SolrCloud installation,
> it's particularly hard to get right.
>
> Please do not do this unless you have a very good reason, and please
> tell us what the reason is so we can perhaps suggest alternatives.
>
> Best,
> Erick
>
> On Fri, Mar 27, 2015 at 8:08 AM, Walter Underwood 
> wrote:
> > Several years ago, I accidentally put Solr indexes on an NFS volume and
> it was 100X slower.
> >
> > If you have enough RAM, query speed should be OK, but startup time
> (loading indexes into file buffers) could be really long. Indexing could be
> quite slow.
> >
> > wunder
> > Walter Underwood
> > wun...@wunderwood.org
> > http://observer.wunderwood.org/  (my blog)
> >
> >
> > On Mar 26, 2015, at 11:31 PM, Shawn Heisey  wrote:
> >
> >> On 3/27/2015 12:06 AM, abhi Abhishek wrote:
> >>> Greetings,
> >>>  I am trying to use a network shared location as my index directory.
> >>> Are there any known problems in using a Network File System for running
> >>> a Solr instance?
> >>
> >> It is not recommended.  You will probably need to change the lockType,
> >> ... the default "native" probably will not work, and you might need to
> >> change it to "none" to get it working ... but that disables an important
> >> safety mechanism that prevents index corruption.
> >>
> >> http://stackoverflow.com/questions/9599529/solr-over-nfs-problems
> >>
> >> Thanks,
> >> Shawn
> >>
> >
>


Re: Unable to perform search query after changing uniqueKey

2015-03-29 Thread Zheng Lin Edwin Yeo
Hi Erick,

I've tried that, and removed the data directory from both the shards. But
the same problem still occurs, so we probably can rule out the "memory"
issue.

Regards,
Edwin

On 30 March 2015 at 12:39, Erick Erickson  wrote:

> I meant shut down Solr and physically remove the entire data
> directory. Not saying this is the cure, but it can't hurt to rule out
> the index having "memory"...
>
> Best,
> Erick
>
> On Sun, Mar 29, 2015 at 6:35 PM, Zheng Lin Edwin Yeo
>  wrote:
> > Hi Erick,
> >
> > I used the following query to delete all the index.
> >
> > http://localhost:8983/solr/update?stream.body=<delete><query>*:*</query></delete>
> > http://localhost:8983/solr/update?stream.body=<commit/>
> >
> >
> > Or is it better to physically delete the entire data directory?
> >
> >
> > Regards,
> > Edwin
> >
> >
> > On 28 March 2015 at 02:27, Erick Erickson 
> wrote:
> >
> >> You say you re-indexed, did you _completely_ remove the data directory
> >> first, i.e. the parent of the "index" and, maybe, "tlog" directories?
> >> I've occasionally seen remnants of old definitions "pollute" the new
> >> one, and since the <uniqueKey> key is so fundamental I can see it
> >> being a problem.
> >>
> >> Best,
> >> Erick
> >>
> >> On Fri, Mar 27, 2015 at 1:42 AM, Andrea Gazzarini <
> a.gazzar...@gmail.com>
> >> wrote:
> >> > Hi Edwin,
> >> > please provide some other detail about your context, (e.g. complete
> >> > stacktrace, query you're issuing)
> >> >
> >> > Best,
> >> > Andrea
> >> >
> >> >
> >> > On 03/27/2015 09:38 AM, Zheng Lin Edwin Yeo wrote:
> >> >>
> >> >> Hi everyone,
> >> >>
> >> >> I've changed my uniqueKey to another name, instead of using id, on
> the
> >> >> schema.xml.
> >> >>
> >> >> However, after I have done the indexing (the indexing is successful),
> >> I'm
> >> >> not able to perform a search query on it. I gives the error
> >> >> java.lang.NullPointerException.
> >> >>
> >> >> Is there other place which I need to configure, besides changing the
> >> >> uniqueKey field in scheam.xml?
> >> >>
> >> >> Regards,
> >> >> Edwin
> >> >>
> >> >
> >>
>


Re: Unable to perform search query after changing uniqueKey

2015-03-29 Thread Mostafa Gomaa
Hi Zheng,

It's possible that there's a problem with your schema.xml. Are all fields
defined, with the appropriate options enabled?
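For instance, the uniqueKey field is normally declared along these lines (a
sketch - substitute whatever field name you chose):

<field name="your_key" type="string" indexed="true" stored="true"
       required="true" multiValued="false"/>
...
<uniqueKey>your_key</uniqueKey>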

Regards,

Mostafa.

On Mon, Mar 30, 2015 at 7:49 AM, Zheng Lin Edwin Yeo 
wrote:

> Hi Erick,
>
> I've tried that, and removed the data directory from both the shards. But
> the same problem still occurs, so we probably can rule out the "memory"
> issue.
>
> Regards,
> Edwin
>
> On 30 March 2015 at 12:39, Erick Erickson  wrote:
>
> > I meant shut down Solr and physically remove the entire data
> > directory. Not saying this is the cure, but it can't hurt to rule out
> > the index having "memory"...
> >
> > Best,
> > Erick
> >
> > On Sun, Mar 29, 2015 at 6:35 PM, Zheng Lin Edwin Yeo
> >  wrote:
> > > Hi Erick,
> > >
> > > I used the following query to delete all the index.
> > >
> > > http://localhost:8983/solr/update?stream.body=<delete><query>*:*</query></delete>
> > > http://localhost:8983/solr/update?stream.body=<commit/>
> > >
> > >
> > > Or is it better to physically delete the entire data directory?
> > >
> > >
> > > Regards,
> > > Edwin
> > >
> > >
> > > On 28 March 2015 at 02:27, Erick Erickson 
> > wrote:
> > >
> > >> You say you re-indexed, did you _completely_ remove the data directory
> > >> first, i.e. the parent of the "index" and, maybe, "tlog" directories?
> > >> I've occasionally seen remnants of old definitions "pollute" the new
> > >> one, and since the <uniqueKey> key is so fundamental I can see it
> > >> being a problem.
> > >>
> > >> Best,
> > >> Erick
> > >>
> > >> On Fri, Mar 27, 2015 at 1:42 AM, Andrea Gazzarini <
> > a.gazzar...@gmail.com>
> > >> wrote:
> > >> > Hi Edwin,
> > >> > please provide some other detail about your context, (e.g. complete
> > >> > stacktrace, query you're issuing)
> > >> >
> > >> > Best,
> > >> > Andrea
> > >> >
> > >> >
> > >> > On 03/27/2015 09:38 AM, Zheng Lin Edwin Yeo wrote:
> > >> >>
> > >> >> Hi everyone,
> > >> >>
> > >> >> I've changed my uniqueKey to another name, instead of using id, on
> > the
> > >> >> schema.xml.
> > >> >>
> > >> >> However, after I have done the indexing (the indexing is
> successful),
> > >> I'm
> > >> >> not able to perform a search query on it. I gives the error
> > >> >> java.lang.NullPointerException.
> > >> >>
> > >> >> Is there other place which I need to configure, besides changing
> the
> > >> >> uniqueKey field in scheam.xml?
> > >> >>
> > >> >> Regards,
> > >> >> Edwin
> > >> >>
> > >> >
> > >>
> >
>