Re: SolrJ & Ridiculously Large Queries

2016-10-18 Thread Bram Van Dam
On 14/10/16 16:13, Shawn Heisey wrote:
<Property name="solr.jetty.request.header.size" default="8192" />

A belated thanks, Shawn! 32k should be sufficient, I hope.

 - Bram
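For anyone finding this thread later, one way to raise the limit is via the startup options — a sketch, assuming a typical service install (the property name comes from Shawn's jetty.xml snippet above; the 32k value and the solr.in.sh location are assumptions):

```shell
# In /etc/default/solr.in.sh (or bin/solr.in.sh), pass the Jetty property
# through to the JVM so jetty.xml picks it up instead of the 8192 default:
SOLR_OPTS="$SOLR_OPTS -Dsolr.jetty.request.header.size=32768"
```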




solr 5.5.2 using omitNorms=False on multivalued fields

2016-10-18 Thread elisabeth benoit
Hello,

I would like to score documents higher, or even better, to sort documents with
the same text score, based on the norm.

for instance, with query "a b d"

document with

a b d

should score higher than (or appear before) a document with

a b c d

The problem is that my field is multivalued, so omitNorms=false is not working.

Does anyone know how to achieve this with a multivalued field on solr 5.5.2?


Best regards,
Elisabeth
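One workaround people use for this (a sketch, not a confirmed fix; `num_tokens` is a hypothetical field you would have to populate yourself at index time): store the total token count of the multivalued field in a separate integer field and use it as a tie-breaker when text scores are equal.

```shell
# Token counts for the two example documents (plain word count, for illustration):
tokens() { set -- $1; echo $#; }
tokens "a b d"      # → 3
tokens "a b c d"    # → 4
# At query time, tie-break equal scores on the hypothetical num_tokens field:
# /solr/mycore/select?q=a+b+d&sort=score+desc,num_tokens+asc
```

Shorter documents then sort first among equal-scoring matches, which approximates what the norm would have given you.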


Re: Access solr in web browser

2016-10-18 Thread Mugeesh Husain
It is not about the information of the collection but about accessibility;
I want to access it via a PuTTY session.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Access-solr-in-web-browser-tp4301201p4301659.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Advice on implementing SOLR Cloud

2016-10-18 Thread Susheel Kumar
In case you need the exact commands etc., you can follow this:

http://blog.thedigitalgroup.com/susheelk/2015/08/03/solrcloud-2-nodes-solr-1-node-zk-setup/


Thanks,
Susheel

On Mon, Oct 17, 2016 at 7:17 PM, John Bickerstaff 
wrote:

> I had quite a lot of "fun" figuring out how to install Solr Cloud.  Because
> it uses Zookeeper, the Solr folks don't say much about Zookeeper and the
> Zookeeper folks don't say much about Solr.
>
> I finally put my notes together and posted them online for download.  There
> is one in the set called something like 6.1_final.txt and that will contain
> a step-by-step way to set up the Solr Cloud.  You can modify for your
> situation.
>
> Hope this helps...
>
> https://www.linkedin.com/pulse/actual-solrcloud-vms-zookeeper-nodes-john-
> bickerstaff?trk=hp-feed-article-title-publish
>
> Oh, by the way, you MUST have a minimum of 3 Zookeepers as far as I know,
> due to the need to elect a "leader" if one goes down.
>
> From the zookeeper site: Three ZooKeeper servers is the minimum recommended
> size for an ensemble, and we also recommend that they run on separate
> machines.
>
> See this: https://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html
>
> Virtual box and Linux VMs are free (except for the time to build them) and
> you may want to take that route instead of doing everything on the same
> machine, but that's up to you...
>
> On Mon, Oct 17, 2016 at 5:02 PM, Sadheera Vithanage 
> wrote:
>
> > Hello solr experts,
> >
> > I am new to solr and facing a problem while trying to implement solr
> cloud
> > with zoo keeper.
> >
> > I am having 2 zookeeper instances residing on the same machines as the solr
> > instances (not the best config, but enough to get started).
> >
> > I got my zookeeper instances and solr instances to work but I am getting
> an
> > error as below.
> >
> >
> >    - *Core_Name:*
> >      org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> >      Could not load conf for core *Core_Name*: Error loading solr config
> >      from solrconfig.xml
> >
> >
> > I had these cores running as a standalone instance before and I haven't
> > pushed the config to zookeeper.
> >
> > I am assuming that is the problem. If someone could send me the proper
> > syntax for pushing the config to zookeeper, it would be great. I tried the
> > syntax I found on the web but didn't get it right.
> >
> > Also, I am unable to create collections from the web ui, it doesn't list
> > any configurations.
> >
> >
> > OS: Ubuntu
> > Solr version: 6.2.0
> >
> > If I can get a list of setup steps, for this config It will help as
> well..
> >
> > Please let me know if you need further clarifications.
> >
> > Thank you very much.
> >
> > --
> > Regards
> >
> > Sadheera Vithanage
> >
>
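The config upload asked about above can be sketched as follows (the ZK address, config name, shard/replica counts, and paths are assumptions; paths are relative to a Solr 6.x install directory):

```shell
# Upload the local config directory to ZooKeeper under a named configset...
server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181,zk2:2181 \
  -cmd upconfig -confdir /path/to/myconf/conf -confname myconf

# ...then create the collection against that configset.
bin/solr create -c Core_Name -n myconf -shards 1 -replicationFactor 2
```

Once at least one configset exists in ZooKeeper, the admin UI's collection-creation form will also have something to list.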


Re: Query by distance

2016-10-18 Thread marotosg
This is my field type.


I was reading about this and it looks like the issue:

<fieldType name="..." class="solr.TextField" positionIncrementGap="300">
  <analyzer type="index">
    <tokenizer class="..."/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
            generateNumberParts="1" catenateWords="1" catenateNumbers="1"
            catenateAll="0" splitOnCaseChange="1" preserveOriginal="1"
            protected="protwordscompany.txt"/>
    <filter class="..." preserveOriginal="false"/>
    <filter class="solr.SynonymFilterFactory" synonyms="positionsynonyms.txt"
            ignoreCase="true" expand="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="..."/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0"
            generateNumberParts="0" catenateWords="0" catenateNumbers="0"
            catenateAll="0" splitOnCaseChange="0" preserveOriginal="1"
            protected="protwordscompany.txt"/>
    <filter class="..." preserveOriginal="false"/>
  </analyzer>
</fieldType>


I have been reading and it looks like the issue is about multi-term synonyms.
http://opensourceconnections.com/blog/2013/10/27/why-is-multi-term-synonyms-so-hard-in-solr/

I may try this plugin to check if it works.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Query-by-distance-tp4300660p4301697.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Access solr in web browser

2016-10-18 Thread Shawn Heisey
On 10/18/2016 5:11 AM, Mugeesh Husain wrote:
> It is not about the information of the collection but about accessibility;
> I want to access it via a PuTTY session.

The fact that you mentioned putty leads me to believe that you probably
ARE using a UNIX or UNIX-like OS.  Ideally the OS will include GNU
tools.  Assuming this is the case, the two most common commandline
programs to use for accessing the Solr API are curl and wget.  I
personally find curl to be a little bit cleaner.
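A couple of minimal examples of what this looks like in practice (the host, port, and core name are assumptions for a default install):

```shell
# Check core status from a PuTTY/SSH session:
curl "http://localhost:8983/solr/admin/cores?action=STATUS&wt=json"

# Run a query against a core named "mycore"; wget works the same way:
curl "http://localhost:8983/solr/mycore/select?q=*:*&rows=5&wt=json"
wget -qO- "http://localhost:8983/solr/mycore/select?q=*:*&rows=5&wt=json"
```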

For other languages, I would recommend using libraries for accessing the
API, either a Solr library or a combination of a library for HTTP and
another library, XML or JSON, for parsing the response.

I can't offer much useful advice for users on Windows, other than
switching platforms or using a programming language that has a Solr library.

Resources that may be useful if you want to write a program:

https://wiki.apache.org/solr/IntegratingSolr

Thanks,
Shawn



Re: Query by distance

2016-10-18 Thread John Bickerstaff
Just in case it helps, I had good success on multi-word synonyms using this
plugin...

https://github.com/healthonnet/hon-lucene-synonyms

IIRC, the instructions are clear and fairly easy to follow - especially for
Solr 6.x

Ping back if you run into any problems setting it up...



On Tue, Oct 18, 2016 at 7:12 AM, marotosg  wrote:

> This is my field type.
>
>
> I was reading about this and it looks like the issue:
>
> <fieldType name="..." class="solr.TextField" positionIncrementGap="300">
>   <analyzer type="index">
>     <tokenizer class="..."/>
>     <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
>             generateNumberParts="1" catenateWords="1" catenateNumbers="1"
>             catenateAll="0" splitOnCaseChange="1" preserveOriginal="1"
>             protected="protwordscompany.txt"/>
>     <filter class="..." preserveOriginal="false"/>
>     <filter class="solr.SynonymFilterFactory" synonyms="positionsynonyms.txt"
>             ignoreCase="true" expand="true"/>
>   </analyzer>
>   <analyzer type="query">
>     <tokenizer class="..."/>
>     <filter class="solr.WordDelimiterFilterFactory" generateWordParts="0"
>             generateNumberParts="0" catenateWords="0" catenateNumbers="0"
>             catenateAll="0" splitOnCaseChange="0" preserveOriginal="1"
>             protected="protwordscompany.txt"/>
>     <filter class="..." preserveOriginal="false"/>
>   </analyzer>
> </fieldType>
>
>
> I have been reading and it looks like the issue is about multi-term synonyms.
> http://opensourceconnections.com/blog/2013/10/27/why-is-multi-term-synonyms-so-hard-in-solr/
>
> I may try this plugin to check if it works.
>
>
>
>
>


CachedSqlEntityProcessor with delta-import

2016-10-18 Thread Mohan, Sowmya
Good morning,

Can CachedSqlEntityProcessor be used with delta-import? In my setup, when
running a delta-import with CachedSqlEntityProcessor, the child entity values
are not correctly updated for the parent record. I am on Solr 4.3. Has anyone
experienced this, and if so, how did you resolve it?

Thanks,
Sowmya.



Migration from Solr 4

2016-10-18 Thread sputul
We are using Solr 4.3, using Zookeeper in development to manage a Solr Cloud
having one or two nodes. Will it be easier to migrate to Solr 5 first, or should I
migrate to Solr 6 directly? I see the Core definition has changed. Anything else
worth noting?

The goal is to also use HTTPS, perhaps after everything works in my local
environment using a single Zookeeper and one or more Solr nodes. Thanks.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Migration-from-Solr-4-tp4301788.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Migration from Solr 4

2016-10-18 Thread John Bickerstaff
For what it's worth, (and it may not work for your situation) I decided not
to upgrade, but to "upgrade by replacing".  In other words, I just
installed 6.x and because I had set up my configs for "include" I didn't
have to worry about what would be different about the "new" solrconfig.xml
and the managed_schema file.  Instead, I was able to use a copy from one of
the sample projects and add my "included" configs.

Then, after creating the same collection structure in 6.x that I had in my
5.x instances, I just re-indexed everything into the new 6.x Solr.

The big deal (probably) is whether it will cost you days to re-index and
whether you have the resources to do that.

I don't know if the index remained the same because I didn't have to
trouble myself with that due to the replacement.  I'm sure others on the
list can tell us whether it's OK to just copy the data files between 4.3
and 6.x  (I'd guess not...)

By the way - unless I misunderstand the Zookeeper docs, you can't get away
with any less than 3 Zookeeper nodes. So keep that in mind.

I have my rough notes about what I did available online.  You can go here
for the link...

https://www.linkedin.com/pulse/actual-solrcloud-vms-zookeeper-nodes-john-bickerstaff






On Tue, Oct 18, 2016 at 11:28 AM, sputul  wrote:

> We are using Solr 4.3, using Zookeeper in development to manage a Solr Cloud
> having one or two nodes. Will it be easier to migrate to Solr 5 first or should I
> migrate to Solr 6 directly? I see Core definition has changed. Anything
> else
> worth noting?
>
> The goal is to also use HTTPS perhaps after everything works in my local
> environment using Single Zookeeper and a one or more Solr Nodes. Thanks.
>
>
>
>


Re: Migration from Solr 4

2016-10-18 Thread Erick Erickson
bq: ...whether it's OK to just copy the data files between 4.3 and 6.x

NOT ok. Solr (well, Lucene really) guarantees to read _one_ major version
behind. So a Solr 5x will read a solr 4x. But a Solr 6x is not guaranteed
at all to read a 4x.  And in fact removing back-compat complexification is
one of the benefits of moving to a new major version.

You can use the 5x IndexUpgraderTool to migrate from 4x->5x, then run
6x over the upgraded index.
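The IndexUpgrader invocation can be sketched like this (the jar versions and index path are assumptions; the backward-codecs jar is needed so the 5.x tool can read the 4.x segments):

```shell
# Run the 5.x Lucene IndexUpgrader over each core's index directory:
java -cp lucene-core-5.5.2.jar:lucene-backward-codecs-5.5.2.jar \
  org.apache.lucene.index.IndexUpgrader -delete-prior-commits \
  /var/solr/data/mycore/data/index
```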

All that said, I completely agree with John's advice to re-index entirely
from scratch if at all possible.

**
It's perfectly possible to run with a single Zookeeper. The quorum
formula is N/2 + 1. If you only have 1 ZK node, then 1 represents
quorum. It's just not advisable. Zk is responsible for presenting the
cluster topology to the entire set of Solr instances, you want it
to be robust so 3 is the recommended minimum. BTW, It's quite rare
to need more than that unless you're at pretty massive scale (hundreds
of Solr instances and/or collections), and even in these cases it's
usually best to use Observers
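The quorum formula spelled out (integer division, which matches the point that a lone node is its own quorum):

```shell
# Quorum for an ensemble of N ZooKeeper nodes is floor(N/2) + 1.
quorum() { echo $(( $1 / 2 + 1 )); }
quorum 1   # → 1 (a lone node is its own quorum)
quorum 3   # → 2 (one node can fail)
quorum 5   # → 3 (two nodes can fail)
```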

FWIW,
Erick

On Tue, Oct 18, 2016 at 2:28 PM, John Bickerstaff
 wrote:
> For what it's worth, (and it may not work for your situation) I decided not
> to upgrade, but to "upgrade by replacing".  In other words, I just
> installed 6.x and because I had set up my configs for "include" I didn't
> have to worry about what would be different about the "new" solrconfig.xml
> and the managed_schema file.  Instead, I was able to use a copy from one of
> the sample projects and add my "included" configs.
>
> Then, after creating the same collection structure in 6.x that I had in my
> 5.x instances, I just re-indexed everything into the new 6.x Solr.
>
> The big deal (probably) is whether it will cost you days to re-index and
> whether you have the resources to do that.
>
> I don't know if the index remained the same because I didn't have to
> trouble myself with that due to the replacement.  I'm sure others on the
> list can tell us whether it's OK to just copy the data files between 4.3
> and 6.x  (I'd guess not...)
>
> By the way - unless I misunderstand the Zookeeper docs, you can't get away
> with any less than 3 Zookeeper nodes. So keep that in mind.
>
> I have my rough notes about what I did available online.  You can go here
> for the link...
>
> https://www.linkedin.com/pulse/actual-solrcloud-vms-zookeeper-nodes-john-bickerstaff
>
>
>
>
>
>
> On Tue, Oct 18, 2016 at 11:28 AM, sputul  wrote:
>
>> We are using Solr 4.3, using Zookeeper in development to manage a Solr Cloud
>> having one or two nodes. Will it be easier to migrate to Solr 5 first or should I
>> migrate to Solr 6 directly? I see Core definition has changed. Anything
>> else
>> worth noting?
>>
>> The goal is to also use HTTPS perhaps after everything works in my local
>> environment using Single Zookeeper and a one or more Solr Nodes. Thanks.
>>
>>
>>
>>


Re: Migration from Solr 4

2016-10-18 Thread sputul
Thanks for the quick reply and all the documents, John. I plan on placing our
index into the new Solr install to see if that works, and hope that the Solr 4
index will magically work with the solrconfig changes. Excuse my ignorance, but
is there a curl command or similar to reindex the documents in a collection? We
do this in code because the index needs to sync up with other data.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Migration-from-Solr-4-tp4301788p4301820.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Migration from Solr 4

2016-10-18 Thread Shawn Heisey
On 10/18/2016 12:28 PM, John Bickerstaff wrote:
> For what it's worth, (and it may not work for your situation) I
> decided not to upgrade, but to "upgrade by replacing". In other words,
> I just installed 6.x and because I had set up my configs for "include"
> I didn't have to worry about what would be different about the "new"
> solrconfig.xml and the managed_schema file. Instead, I was able to use
> a copy from one of the sample projects and add my "included" configs. 

John's way is the way that I would recommend doing it.  If you utilize a
chroot, you can even use the same zookeeper ensemble for the new cloud,
it will just go in a different location in the database. SolrCloud
evolves very quickly, so trying to use an existing zookeeper database
may result in less than optimal operation.  Solr itself doesn't evolve
nearly as fast, but if you go to 6.x, you're going to jump two major
versions -- even for Solr, that's a LOT of change.

> By the way - unless I misunderstand the Zookeeper docs, you can't get
> away with any less than 3 Zookeeper nodes. So keep that in mind. 

This is 100% correct.  If you want zookeeper redundancy, you must have
at least 3 zookeeper servers.  This is clear in the zookeeper documentation.

> I have my rough notes about what I did available online. You can go
> here for the link...
> https://www.linkedin.com/pulse/actual-solrcloud-vms-zookeeper-nodes-john-bickerstaff


I haven't gone over these notes, but if they work, awesome.

Regarding the latest question on the thread, there is no magic "reindex"
button:

https://wiki.apache.org/solr/HowToReindex
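In practice, "reindexing" means re-sending the source documents. A minimal sketch (collection name and document path are assumptions; your own indexing code, as mentioned earlier in the thread, is the usual route):

```shell
# Optionally clear the collection first...
curl "http://localhost:8983/solr/mycoll/update?commit=true" \
  -H "Content-Type: text/xml" --data-binary "<delete><query>*:*</query></delete>"

# ...then post the original source documents again.
bin/post -c mycoll /path/to/source/docs/
```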

Thanks,
Shawn



Re: Migration from Solr 4

2016-10-18 Thread John Bickerstaff
Grazie, Erick, for the correction on Zookeeper...

On Tue, Oct 18, 2016 at 1:48 PM, Erick Erickson 
wrote:

> bq: ...whether it's OK to just copy the data files between 4.3 and 6.x
>
> NOT ok. Solr (well, Lucene really) guarantees to read _one_ major version
> behind. So a Solr 5x will read a solr 4x. But a Solr 6x is not guaranteed
> at all to read a 4x.  And in fact removing back-compat complexification is
> one of the benefits of moving to a new major version.
>
> You can use the 5x IndexUpgraderTool to migrate from 4x->5x, then run
> 6x over the upgraded index.
>
> All that said, I completely agree with John's advice to re-index entirely
> from scratch if at all possible.
>
> **
> It's perfectly possible to run with a single Zookeeper. The quorum
> formula is N/2 + 1. If you only have 1 ZK node, then 1 represents
> quorum. It's just not advisable. Zk is responsible for presenting the
> cluster topology to the entire set of Solr instances, you want it
> to be robust so 3 is the recommended minimum. BTW, It's quite rare
> to need more than that unless you're at pretty massive scale (hundreds
> of Solr instances and/or collections), and even in these cases it's
> usually best to use Observers
>
> FWIW,
> Erick
>
> On Tue, Oct 18, 2016 at 2:28 PM, John Bickerstaff
>  wrote:
> > For what it's worth, (and it may not work for your situation) I decided
> not
> > to upgrade, but to "upgrade by replacing".  In other words, I just
> > installed 6.x and because I had set up my configs for "include" I didn't
> > have to worry about what would be different about the "new"
> solrconfig.xml
> > and the managed_schema file.  Instead, I was able to use a copy from one
> of
> > the sample projects and add my "included" configs.
> >
> > Then, after creating the same collection structure in 6.x that I had in
> my
> > 5.x instances, I just re-indexed everything into the new 6.x Solr.
> >
> > The big deal (probably) is whether it will cost you days to re-index and
> > whether you have the resources to do that.
> >
> > I don't know if the index remained the same because I didn't have to
> > trouble myself with that due to the replacement.  I'm sure others on the
> > list can tell us whether it's OK to just copy the data files between 4.3
> > and 6.x  (I'd guess not...)
> >
> > By the way - unless I misunderstand the Zookeeper docs, you can't get
> away
> > with any less than 3 Zookeeper nodes. So keep that in mind.
> >
> > I have my rough notes about what I did available online.  You can go here
> > for the link...
> >
> > https://www.linkedin.com/pulse/actual-solrcloud-vms-
> zookeeper-nodes-john-bickerstaff
> >
> >
> >
> >
> >
> >
> > On Tue, Oct 18, 2016 at 11:28 AM, sputul  wrote:
> >
> >> We are using Solr 4.3, using Zookeeper in development to manage a Solr Cloud
> >> having one or two nodes. Will it be easier to migrate to Solr 5 first or
> should I
> >> migrate to Solr 6 directly? I see Core definition has changed. Anything
> >> else
> >> worth noting?
> >>
> >> The goal is to also use HTTPS perhaps after everything works in my local
> >> environment using Single Zookeeper and a one or more Solr Nodes. Thanks.
> >>
> >>
> >>
> >>
>


HttpSolrClient.Builder

2016-10-18 Thread wmcginnis
What causes this?
Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/http/impl/client/CloseableHttpClient
        at org.apache.solr.client.solrj.impl.HttpSolrClient.<init>(HttpSolrClient.java:209)
        at org.apache.solr.client.solrj.impl.HttpSolrClient$Builder.build(HttpSolrClient.java:874)

The code is:
return new HttpSolrClient.Builder(urlString).build();
After connecting to a Solr server, should I call solrServer.close();
or
should I leave the connection open until the entire index is built?


Re: Migration from Solr 4

2016-10-18 Thread John Bickerstaff
Good point on the chroot - I used one and at one point I had 3 different
versions of Solr running on 9 VMs - nary a problem...

Sputul, the chroot "instructions" are in my notes...  look for something
like "./solr6.1" in the notes and you'll see what I mean...

On Tue, Oct 18, 2016 at 2:05 PM, Shawn Heisey  wrote:

> On 10/18/2016 12:28 PM, John Bickerstaff wrote:
> > For what it's worth, (and it may not work for your situation) I
> > decided not to upgrade, but to "upgrade by replacing". In other words,
> > I just installed 6.x and because I had set up my configs for "include"
> > I didn't have to worry about what would be different about the "new"
> > solrconfig.xml and the managed_schema file. Instead, I was able to use
> > a copy from one of the sample projects and add my "included" configs.
>
> John's way is the way that I would recommend doing it.  If you utilize a
> chroot, you can even use the same zookeeper ensemble for the new cloud,
> it will just go in a different location in the database. SolrCloud
> evolves very quickly, so trying to use an existing zookeeper database
> may result in less than optimal operation.  Solr itself doesn't evolve
> nearly as fast, but if you go to 6.x, you're going to jump two major
> versions -- even for Solr, that's a LOT of change.
>
> > By the way - unless I misunderstand the Zookeeper docs, you can't get
> > away with any less than 3 Zookeeper nodes. So keep that in mind.
>
> This is 100% correct.  If you want zookeeper redundancy, you must have
> at least 3 zookeeper servers.  This is clear in the zookeeper
> documentation.
>
> > I have my rough notes about what I did available online. You can go
> > here for the link...
> > https://www.linkedin.com/pulse/actual-solrcloud-vms-
> zookeeper-nodes-john-bickerstaff
>
>
> I haven't gone over these notes, but if they work, awesome.
>
> Regarding the latest question on the thread, there is no magic "reindex"
> button:
>
> https://wiki.apache.org/solr/HowToReindex
>
> Thanks,
> Shawn
>
>


Re: HttpSolrClient.Builder

2016-10-18 Thread Shawn Heisey
On 10/18/2016 2:39 PM, wmcgin...@cox.net wrote:
> What causes this?
> Exception in thread "main" java.lang.NoClassDefFoundError:
> org/apache/http/impl/client/CloseableHttpClient
>         at org.apache.solr.client.solrj.impl.HttpSolrClient.<init>(HttpSolrClient.java:209)
>         at org.apache.solr.client.solrj.impl.HttpSolrClient$Builder.build(HttpSolrClient.java:874)

This is most likely caused by having multiple versions of jars on your
classpath.  You've probably got an older version of the HttpComponents
jars (httpclient, httpmime, httpcore) somewhere that's missing a class
that SolrJ needs from the newer version.  The correct jar versions can
be found included with Solr.
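To track down which jars do (or don't) provide the class, something like this can help (the directory scanned is an assumption; point it at wherever your application's jars actually live):

```shell
# Look for the missing class inside every jar in a directory:
CLASS='org/apache/http/impl/client/CloseableHttpClient.class'
for j in lib/*.jar; do
  [ -f "$j" ] || continue
  if unzip -l "$j" 2>/dev/null | grep -q "$CLASS"; then
    echo "contains CloseableHttpClient: $j"
  fi
done
```

If nothing prints, the httpclient jar is missing from the classpath; if two different versions print, the older one is likely being loaded first.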

> The code is:
> return new HttpSolrClient.Builder(urlString).build();
> After connecting to a Solr server, should I call solrServer.close();
> or
> should I leave the connection open until the entire index is built?

The intent is that the client object will be created once and used in
the program until it stops running.  Many users *never* close the
client.  It can also be shared by multiple threads with no problems.

Thanks,
Shawn



Public/Private data in Solr :: Metadata or ?

2016-10-18 Thread John Bickerstaff
I have a question that I suspect I'll need to answer very soon in my
current position.

How (or is it even wise) to "segregate data" in Solr so that some data can
be seen by some users and some data not be seen?

Taking the case of "public / private" as a (hopefully) simple, binary
example...

Let's imagine I have a data set that can be seen by a user.  Some of that
data can be seen ONLY by the user (this would be the private data) and some
of it can be seen by others (assume the user gave permission for this in
some way)

What is a best practice for handling this type of situation?  I can see
putting metadata in Solr of course, but the instant I do that, I create the
obligation to keep it updated (Document-level CRUD?) and I start using Solr
more like a DB than a search engine.

(Assume the user can change this public/private setting on any one piece of
"their" data at any time).

Of course, I can also see some kind of post-results massaging of data to
remove private data based on ID's which are stored in a database or similar
datastore...

How have others solved this and is there a consensus on whether to keep it
out of Solr, or how best to handle it in Solr?

Are there clever implementations of "secondary" collections in Solr for
this purpose?

Any advice / hard-won experience is greatly appreciated...
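One pattern that comes up for this (a sketch of the filter-query approach, offered alongside the replies below rather than taken from them; the field and token names are made up): index a multivalued `acl` field on each document and have the application attach a filter query built from the requesting user's identity on every search.

```shell
# A private doc carries only its owner's token; shared docs also carry "public".
# The application (never the user) appends the fq on each request:
curl "http://localhost:8983/solr/docs/select?q=report&fq=acl:(public%20OR%20user_jane)&wt=json"
```

The trade-off John describes still applies: toggling a document between public and private means updating that document's `acl` field.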


Re: Public/Private data in Solr :: Metadata or ?

2016-10-18 Thread Doug Turnbull
You might want to talk to Kevin Waters or look at some of the work being
done with the graph plugin. It's being used to model permissions with Solr.
It's a bit of normalization within Solr whereby you could localize updates
to a users shared-with document. Kevin can probably talk more intelligently
than I can about it.

-Doug
On Tue, Oct 18, 2016 at 5:00 PM John Bickerstaff 
wrote:

> I have a question that I suspect I'll need to answer very soon in my
> current position.
>
> How (or is it even wise) to "segregate data" in Solr so that some data can
> be seen by some users and some data not be seen?
>
> Taking the case of "public / private" as a (hopefully) simple, binary
> example...
>
> Let's imagine I have a data set that can be seen by a user.  Some of that
> data can be seen ONLY by the user (this would be the private data) and some
> of it can be seen by others (assume the user gave permission for this in
> some way)
>
> What is a best practice for handling this type of situation?  I can see
> putting metadata in Solr of course, but the instant I do that, I create the
> obligation to keep it updated (Document-level CRUD?) and I start using Solr
> more like a DB than a search engine.
>
> (Assume the user can change this public/private setting on any one piece of
> "their" data at any time).
>
> Of course, I can also see some kind of post-results massaging of data to
> remove private data based on ID's which are stored in a database or similar
> datastore...
>
> How have others solved this and is there a consensus on whether to keep it
> out of Solr, or how best to handle it in Solr?
>
> Are there clever implementations of "secondary" collections in Solr for
> this purpose?
>
> Any advice / hard-won experience is greatly appreciated...
>


Re: Public/Private data in Solr :: Metadata or ?

2016-10-18 Thread Jan Høydahl
https://wiki.apache.org/solr/SolrSecurity#Document_Level_Security 


--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 18. okt. 2016 kl. 23.00 skrev John Bickerstaff :
> 
> I have a question that I suspect I'll need to answer very soon in my
> current position.
> 
> How (or is it even wise) to "segregate data" in Solr so that some data can
> be seen by some users and some data not be seen?
> 
> Taking the case of "public / private" as a (hopefully) simple, binary
> example...
> 
> Let's imagine I have a data set that can be seen by a user.  Some of that
> data can be seen ONLY by the user (this would be the private data) and some
> of it can be seen by others (assume the user gave permission for this in
> some way)
> 
> What is a best practice for handling this type of situation?  I can see
> putting metadata in Solr of course, but the instant I do that, I create the
> obligation to keep it updated (Document-level CRUD?) and I start using Solr
> more like a DB than a search engine.
> 
> (Assume the user can change this public/private setting on any one piece of
> "their" data at any time).
> 
> Of course, I can also see some kind of post-results massaging of data to
> remove private data based on ID's which are stored in a database or similar
> datastore...
> 
> How have others solved this and is there a consensus on whether to keep it
> out of Solr, or how best to handle it in Solr?
> 
> Are there clever implementations of "secondary" collections in Solr for
> this purpose?
> 
> Any advice / hard-won experience is greatly appreciated...



Re: Advice on implementing SOLR Cloud

2016-10-18 Thread Sadheera Vithanage
Thank you, Susheel Kumar.

On Tue, Oct 18, 2016 at 11:13 PM, Susheel Kumar 
wrote:

> In case you need the exact commands etc., you can follow this:
>
> http://blog.thedigitalgroup.com/susheelk/2015/08/03/
> solrcloud-2-nodes-solr-1-node-zk-setup/
>
>
> Thanks,
> Susheel
>
> On Mon, Oct 17, 2016 at 7:17 PM, John Bickerstaff <
> j...@johnbickerstaff.com>
> wrote:
>
> > I had quite a lot of "fun" figuring out how to install Solr Cloud.
> Because
> > it uses Zookeeper, the Solr folks don't say much about Zookeeper and the
> > Zookeeper folks don't say much about Solr.
> >
> > I finally put my notes together and posted them online for download.
> There
> > is one in the set called something like 6.1_final.txt and that will
> contain
> > a step-by-step way to set up the Solr Cloud.  You can modify for your
> > situation.
> >
> > Hope this helps...
> >
> > https://www.linkedin.com/pulse/actual-solrcloud-vms-
> zookeeper-nodes-john-
> > bickerstaff?trk=hp-feed-article-title-publish
> >
> > Oh, by the way, you MUST have a minimum of 3 Zookeepers as far as I know,
> > due to the need to elect a "leader" if one goes down.
> >
> > From the zookeeper site: Three ZooKeeper servers is the minimum
> recommended
> > size for an ensemble, and we also recommend that they run on separate
> > machines.
> >
> > See this: https://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html
> >
> > Virtual box and Linux VMs are free (except for the time to build them)
> and
> > you may want to take that route instead of doing everything on the same
> > machine, but that's up to you...
> >
> > On Mon, Oct 17, 2016 at 5:02 PM, Sadheera Vithanage  >
> > wrote:
> >
> > > Hello solr experts,
> > >
> > > I am new to solr and facing a problem while trying to implement solr
> > cloud
> > > with zoo keeper.
> > >
> > > I am having 2 zookeeper instances residing on the same machines as solr
> > > instances(not the best config but to get started).
> > >
> > > I got my zookeeper instances and solr instances to work but I am
> getting
> > an
> > > error as below.
> > >
> > >
> > >    - *Core_Name:*
> > >      org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> > >      Could not load conf for core *Core_Name*: Error loading solr config
> > >      from solrconfig.xml
> > >
> > >
> > > I had these cores running as a standalone instance before and I haven't
> > > pushed the config to zookeeper.
> > >
> > > I am assuming that is the problem. If someone could send me the proper
> > > syntax for pushing the config to zookeeper, it would be great. I tried the
> > > syntax I found on the web but didn't get it right.
> > >
> > > Also, I am unable to create collections from the web ui, it doesn't
> list
> > > any configurations.
> > >
> > >
> > > OS: Ubuntu
> > > Solr version: 6.2.0
> > >
> > > If I can get a list of setup steps, for this config It will help as
> > well..
> > >
> > > Please let me know if you need further clarifications.
> > >
> > > Thank you very much.
> > >
> > > --
> > > Regards
> > >
> > > Sadheera Vithanage
> > >
> >
>



-- 
Regards

Sadheera Vithanage


RE: Public/Private data in Solr :: Metadata or ?

2016-10-18 Thread Markus Jelsma
In case you're not up for Doug's or Jan's answers: we have relied on HTTP proxies
(nginx) to solve the problem of restriction for over 6 years. Very easy if
visibility is your only problem. Of course, the update handlers are hidden (we
perform indexing for clients with crawlers), so we don't expose anything
update-related.

For us, it's just a simple matter of translating a client's key to a filter query
equivalent.

There are many answers depending on what you need.

M.
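The key-to-filter-query translation Markus describes can be sketched as the request the proxy would forward (the field name and key are illustrative):

```shell
# Client presents key "client123"; the proxy rewrites the request to include
# a filter query restricting results to that client's documents:
curl "http://localhost:8983/solr/mycoll/select?q=foo&fq=owner_key:client123&wt=json"
```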

 
 
-Original message-
> From:John Bickerstaff 
> Sent: Tuesday 18th October 2016 23:00
> To: solr-user@lucene.apache.org
> Subject: Public/Private data in Solr :: Metadata or ?
> 
> I have a question that I suspect I'll need to answer very soon in my
> current position.
> 
> How (or is it even wise) to "segregate data" in Solr so that some data can
> be seen by some users and some data not be seen?
> 
> Taking the case of "public / private" as a (hopefully) simple, binary
> example...
> 
> Let's imagine I have a data set that can be seen by a user.  Some of that
> data can be seen ONLY by the user (this would be the private data) and some
> of it can be seen by others (assume the user gave permission for this in
> some way)
> 
> What is a best practice for handling this type of situation?  I can see
> putting metadata in Solr of course, but the instant I do that, I create the
> obligation to keep it updated (Document-level CRUD?) and I start using Solr
> more like a DB than a search engine.
> 
> (Assume the user can change this public/private setting on any one piece of
> "their" data at any time).
> 
> Of course, I can also see some kind of post-results massaging of data to
> remove private data based on ID's which are stored in a database or similar
> datastore...
> 
> How have others solved this and is there a consensus on whether to keep it
> out of Solr, or how best to handle it in Solr?
> 
> Are there clever implementations of "secondary" collections in Solr for
> this purpose?
> 
> Any advice / hard-won experience is greatly appreciated...
> 


Re: Public/Private data in Solr :: Metadata or ?

2016-10-18 Thread John Bickerstaff
Thanks Markus,

In your case that client's key is fairly static, yes?  It doesn't change at
any time, but tends to live on the data more or less permanently?

On Tue, Oct 18, 2016 at 4:07 PM, Markus Jelsma 
wrote:

> In case you're not up for Doug or Jan's answers; we have relied on HTTP
> proxies (nginx) to solve the problem of restriction for over 6 years. Very
> easy if visibility is your only problem. Of course, the update handlers are
> hidden (we perform indexing for clients with crawlers) so we don't expose
> anything update related.
>
> For us, it's just a simple translation of a client's key to a filter query
> equivalent.
>
> There are many answers depending on what you need.
>
> M.
>
>
>
> -Original message-
> > From:John Bickerstaff 
> > Sent: Tuesday 18th October 2016 23:00
> > To: solr-user@lucene.apache.org
> > Subject: Public/Private data in Solr :: Metadata or ?
> >
> > I have a question that I suspect I'll need to answer very soon in my
> > current position.
> >
> > How (or is it even wise) to "segregate data" in Solr so that some data
> can
> > be seen by some users and some data not be seen?
> >
> > Taking the case of "public / private" as a (hopefully) simple, binary
> > example...
> >
> > Let's imagine I have a data set that can be seen by a user.  Some of that
> > data can be seen ONLY by the user (this would be the private data) and
> some
> > of it can be seen by others (assume the user gave permission for this in
> > some way)
> >
> > What is a best practice for handling this type of situation?  I can see
> > putting metadata in Solr of course, but the instant I do that, I create
> the
> > obligation to keep it updated (Document-level CRUD?) and I start using
> Solr
> > more like a DB than a search engine.
> >
> > (Assume the user can change this public/private setting on any one piece
> of
> > "their" data at any time).
> >
> > Of course, I can also see some kind of post-results massaging of data to
> > remove private data based on ID's which are stored in a database or
> similar
> > datastore...
> >
> > How have others solved this and is there a consensus on whether to keep
> it
> > out of Solr, or how best to handle it in Solr?
> >
> > Are there clever implementations of "secondary" collections in Solr for
> > this purpose?
> >
> > Any advice / hard-won experience is greatly appreciated...
> >
>


Re: Public/Private data in Solr :: Metadata or ?

2016-10-18 Thread John Bickerstaff
Thanks Jan --

I did a quick scan on the wiki and here:
http://www.slideshare.net/lucenerevolution/wright-nokia-manifoldcfeurocon-2011
and couldn't find the answer to the following question in the 5 or 10
minutes I spent looking.  Admittedly I'm being lazy and hoping you have
enough experience with the project to answer easily...

Do you know if ManifoldCF helps with a use case where the security token
needs to be changed arbitrarily and a re-index of the collection is not
practical?  Or is ManifoldCF an index-time only kind of thing?


Use Case:  User A changes "record A" from private to public so a friend
(User B) can see it.  User B logs in and expects to see what User A changed
to public a few minutes earlier.

The security token on "record A" would need to be changed immediately, and
that change would have to occur in Solr - yes?



On Tue, Oct 18, 2016 at 3:32 PM, Jan Høydahl  wrote:

> https://wiki.apache.org/solr/SolrSecurity#Document_Level_Security <
> https://wiki.apache.org/solr/SolrSecurity#Document_Level_Security>
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> > 18. okt. 2016 kl. 23.00 skrev John Bickerstaff  >:
> >
> > I have a question that I suspect I'll need to answer very soon in my
> > current position.
> >
> > How (or is it even wise) to "segregate data" in Solr so that some data
> can
> > be seen by some users and some data not be seen?
> >
> > Taking the case of "public / private" as a (hopefully) simple, binary
> > example...
> >
> > Let's imagine I have a data set that can be seen by a user.  Some of that
> > data can be seen ONLY by the user (this would be the private data) and
> some
> > of it can be seen by others (assume the user gave permission for this in
> > some way)
> >
> > What is a best practice for handling this type of situation?  I can see
> > putting metadata in Solr of course, but the instant I do that, I create
> the
> > obligation to keep it updated (Document-level CRUD?) and I start using
> Solr
> > more like a DB than a search engine.
> >
> > (Assume the user can change this public/private setting on any one piece
> of
> > "their" data at any time).
> >
> > Of course, I can also see some kind of post-results massaging of data to
> > remove private data based on ID's which are stored in a database or
> similar
> > datastore...
> >
> > How have others solved this and is there a consensus on whether to keep
> it
> > out of Solr, or how best to handle it in Solr?
> >
> > Are there clever implementations of "secondary" collections in Solr for
> > this purpose?
> >
> > Any advice / hard-won experience is greatly appreciated...
>
>


RE: Public/Private data in Solr :: Metadata or ?

2016-10-18 Thread Markus Jelsma
The key is static indeed, just a subscription key. Under the hood it translates
to a filter query, which can vary. In our simple case it is really a key that
translates to fq=host:(host1 host2 ... hostX). A simple backend sends this data
to nginx every few minutes.

Again, just simple visibility. Nothing fancy. It works well for some cases.

 
 
-Original message-
> From:John Bickerstaff 
> Sent: Wednesday 19th October 2016 0:10
> To: solr-user@lucene.apache.org
> Subject: Re: Public/Private data in Solr :: Metadata or ?
> 
> Thanks Markus,
> 
> In your case that client's key is fairly static, yes?  It doesn't change at
> any time, but tends to live on the data more or less permanently?
> 
> On Tue, Oct 18, 2016 at 4:07 PM, Markus Jelsma 
> wrote:
> 
> > In case you're not up for Doug or Jan's answers; we have relied on HTTP
> > proxies (nginx) to solve the problem of restriction for over 6 years. Very
> > easy if visibility is your only problem. Of course, the update handlers are
> > hidden (we perform indexing for clients with crawlers) so we don't expose
> > anything update related.
> >
> > For us, it's just a simple translation of a client's key to a filter query
> > equivalent.
> >
> > There are many answers depending on what you need.
> >
> > M.
> >
> >
> >
> > -Original message-
> > > From:John Bickerstaff 
> > > Sent: Tuesday 18th October 2016 23:00
> > > To: solr-user@lucene.apache.org
> > > Subject: Public/Private data in Solr :: Metadata or ?
> > >
> > > I have a question that I suspect I'll need to answer very soon in my
> > > current position.
> > >
> > > How (or is it even wise) to "segregate data" in Solr so that some data
> > can
> > > be seen by some users and some data not be seen?
> > >
> > > Taking the case of "public / private" as a (hopefully) simple, binary
> > > example...
> > >
> > > Let's imagine I have a data set that can be seen by a user.  Some of that
> > > data can be seen ONLY by the user (this would be the private data) and
> > some
> > > of it can be seen by others (assume the user gave permission for this in
> > > some way)
> > >
> > > What is a best practice for handling this type of situation?  I can see
> > > putting metadata in Solr of course, but the instant I do that, I create
> > the
> > > obligation to keep it updated (Document-level CRUD?) and I start using
> > Solr
> > > more like a DB than a search engine.
> > >
> > > (Assume the user can change this public/private setting on any one piece
> > of
> > > "their" data at any time).
> > >
> > > Of course, I can also see some kind of post-results massaging of data to
> > > remove private data based on ID's which are stored in a database or
> > similar
> > > datastore...
> > >
> > > How have others solved this and is there a consensus on whether to keep
> > it
> > > out of Solr, or how best to handle it in Solr?
> > >
> > > Are there clever implementations of "secondary" collections in Solr for
> > > this purpose?
> > >
> > > Any advice / hard-won experience is greatly appreciated...
> > >
> >
> 


Summary for term facet

2016-10-18 Thread prosens
How do I find the stats for all the term facets where count > 2?

So I am looking for a 'termssummary' type like this:
Json.facet={
  Total:{
    type:termssummary,
    field:filename,
    mincount:2,
    Facet:{
      Sum:sum(size)
}}}

Is there a way to achieve this?
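
As far as I know there is no built-in termssummary facet type; the closest
standard construct in the JSON Facet API is a terms facet with mincount and an
aggregation sub-facet, which returns a sum per qualifying bucket. A single
grand total across buckets would still need a client-side roll-up. A hedged
sketch, with field names taken from the question:

```python
import json

# Per-term sums for every bucket whose count is at least 2.
facet_request = {
    "total": {
        "type": "terms",
        "field": "filename",
        "mincount": 2,
        "limit": -1,                    # all qualifying buckets, not just the top 10
        "facet": {"sum": "sum(size)"},  # per-bucket aggregation
    }
}
params = {"q": "*:*", "rows": 0, "json.facet": json.dumps(facet_request)}

def grand_total(buckets):
    """Client-side roll-up over the per-bucket sums Solr returns."""
    return sum(b["sum"] for b in buckets)

print(params["json.facet"])
```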




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Summary-for-term-facet-tp4301846.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Public/Private data in Solr :: Metadata or ?

2016-10-18 Thread Markus Jelsma
ManifoldCF can do this really flexibly, with FileNet or SharePoint, or both; I 
don't remember that well. This means a variety of users can have changing 
privileges at any time. The backend determines visibility; ManifoldCF just asks 
how visible it should be.

This also means you need those backends and ManifoldCF. If broad document and 
user permissions are required (and you have those backends), this is a very 
viable option.

 
 
-Original message-
> From:John Bickerstaff 
> Sent: Wednesday 19th October 2016 0:14
> To: solr-user@lucene.apache.org
> Subject: Re: Public/Private data in Solr :: Metadata or ?
> 
> Thanks Jan --
> 
> I did a quick scan on the wiki and here:
> http://www.slideshare.net/lucenerevolution/wright-nokia-manifoldcfeurocon-2011
> and couldn't find the answer to the following question in the 5 or 10
> minutes I spent looking.  Admittedly I'm being lazy and hoping you have
> enough experience with the project to answer easily...
> 
> Do you know if ManifoldCF helps with a use case where the security token
> needs to be changed arbitrarily and a re-index of the collection is not
> practical?  Or is ManifoldCF an index-time only kind of thing?
> 
> 
> Use Case:  User A changes "record A" from private to public so a friend
> (User B) can see it.  User B logs in and expects to see what User A changed
> to public a few minutes earlier.
> 
> The security token on "record A" would need to be changed immediately, and
> that change would have to occur in Solr - yes?
> 
> 
> 
> On Tue, Oct 18, 2016 at 3:32 PM, Jan Høydahl  wrote:
> 
> > https://wiki.apache.org/solr/SolrSecurity#Document_Level_Security <
> > https://wiki.apache.org/solr/SolrSecurity#Document_Level_Security>
> >
> > --
> > Jan Høydahl, search solution architect
> > Cominvent AS - www.cominvent.com
> >
> > > 18. okt. 2016 kl. 23.00 skrev John Bickerstaff  > >:
> > >
> > > I have a question that I suspect I'll need to answer very soon in my
> > > current position.
> > >
> > > How (or is it even wise) to "segregate data" in Solr so that some data
> > can
> > > be seen by some users and some data not be seen?
> > >
> > > Taking the case of "public / private" as a (hopefully) simple, binary
> > > example...
> > >
> > > Let's imagine I have a data set that can be seen by a user.  Some of that
> > > data can be seen ONLY by the user (this would be the private data) and
> > some
> > > of it can be seen by others (assume the user gave permission for this in
> > > some way)
> > >
> > > What is a best practice for handling this type of situation?  I can see
> > > putting metadata in Solr of course, but the instant I do that, I create
> > the
> > > obligation to keep it updated (Document-level CRUD?) and I start using
> > Solr
> > > more like a DB than a search engine.
> > >
> > > (Assume the user can change this public/private setting on any one piece
> > of
> > > "their" data at any time).
> > >
> > > Of course, I can also see some kind of post-results massaging of data to
> > > remove private data based on ID's which are stored in a database or
> > similar
> > > datastore...
> > >
> > > How have others solved this and is there a consensus on whether to keep
> > it
> > > out of Solr, or how best to handle it in Solr?
> > >
> > > Are there clever implementations of "secondary" collections in Solr for
> > > this purpose?
> > >
> > > Any advice / hard-won experience is greatly appreciated...
> >
> >
> 


Re: Migration from Solr 4

2016-10-18 Thread sputul
Thanks, Shawn. Sad but good to know upfront that reindex is not magic. 




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Migration-from-Solr-4-tp4301788p4301858.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Migration from Solr 4

2016-10-18 Thread sputul
Thanks, Eric. I will use IndexUpgraderTool to upgrade the index per your
suggestion.
-- Putul



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Migration-from-Solr-4-tp4301788p4301859.html
Sent from the Solr - User mailing list archive at Nabble.com.


Empty facets on TextField

2016-10-18 Thread John Davis
Hi,

I have converted one of my fields from StrField to TextField and am not
getting back any facets for that field. Here's the exact configuration of
the TextField. I have tested it with 6.2.0 on a fresh instance and it
repros consistently. From reading through past archives and documentation,
it feels like this should just work. I would appreciate any input.


<fieldType name="..." class="solr.TextField" omitTermFreqAndPositions="true"
    indexed="true" stored="true" positionIncrementGap="100"
    sortMissingLast="true" multiValued="true">
  <analyzer>
    <tokenizer class="..."/>
    <filter class="..."/>
  </analyzer>
</fieldType>

Search query:
/select/?facet.field=FACET_FIELD_NAME&facet=on&indent=on&q=QUERY_STRING&wt=json

Interestingly facets are returned if I change facet.method to enum instead
of default fc.

John


Re: Empty facets on TextField

2016-10-18 Thread Yonik Seeley
This sounds like you didn't actually start fresh, but just reindexed your data.
This would mean that docValues would still exist in the index for this
field (just with no values), and that normal faceting would use those.
Forcing facet.method=enum forces the use of the index instead of
docvalues (or the fieldcache if the field is configured w/o
docvalues).

-Yonik

On Tue, Oct 18, 2016 at 9:43 PM, John Davis  wrote:
> Hi,
>
> I have converted one of my fields from StrField to TextField and am not
> getting back any facets for that field. Here's the exact configuration of
> the TextField. I have tested it with 6.2.0 on a fresh instance and it
> repros consistently. From reading through past archives and documentation,
> it feels like this should just work. I would appreciate any input.
>
>  omitTermFreqAndPositions="true" indexed="true" stored="true"
> positionIncrementGap="100" sortMissingLast="true" multiValued="true">
> 
>   
>   
> 
>   
>
>
> Search
> query: 
> /select/?facet.field=FACET_FIELD_NAME&facet=on&indent=on&q=QUERY_STRING&wt=json
>
> Interestingly facets are returned if I change facet.method to enum instead
> of default fc.
>
> John


Re: Empty facets on TextField

2016-10-18 Thread John Davis
Thanks. Is there a way, without starting fresh, to force the reindex to
remove docValues?

On Tue, Oct 18, 2016 at 6:56 PM, Yonik Seeley  wrote:

> This sounds like you didn't actually start fresh, but just reindexed your
> data.
> This would mean that docValues would still exist in the index for this
> field (just with no values), and that normal faceting would use those.
> Forcing facet.method=enum forces the use of the index instead of
> docvalues (or the fieldcache if the field is configured w/o
> docvalues).
>
> -Yonik
>
> On Tue, Oct 18, 2016 at 9:43 PM, John Davis 
> wrote:
> > Hi,
> >
> > I have converted one of my fields from StrField to TextField and am not
> > getting back any facets for that field. Here's the exact configuration of
> > the TextField. I have tested it with 6.2.0 on a fresh instance and it
> > repros consistently. From reading through past archives and
> documentation,
> > it feels like this should just work. I would appreciate any input.
> >
> >  > omitTermFreqAndPositions="true" indexed="true" stored="true"
> > positionIncrementGap="100" sortMissingLast="true" multiValued="true">
> > 
> >   
> >   
> > 
> >   
> >
> >
> > Search
> > query: /select/?facet.field=FACET_FIELD_NAME&facet=on&indent=on&
> q=QUERY_STRING&wt=json
> >
> > Interestingly facets are returned if I change facet.method to enum
> instead
> > of default fc.
> >
> > John
>


Re: Empty facets on TextField

2016-10-18 Thread Yonik Seeley
A delete-by-query of *:* may do it (because it special cases to
removing the index).
The underlying issue is when lucene merges a segment without docvalues
with a segment that has them.
-Yonik
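
For reference, a minimal sketch of such a delete-by-query request; the
collection name in the comment is a placeholder:

```python
import json

def delete_all_body():
    """JSON update body that deletes every document by query."""
    return json.dumps({"delete": {"query": "*:*"}})

body = delete_all_body()
# POST to /solr/<collection>/update?commit=true
# with Content-Type: application/json.
print(body)
```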


On Tue, Oct 18, 2016 at 10:09 PM, John Davis  wrote:
> Thanks. Is there a way around to not starting fresh and forcing the reindex
> to remove docValues?
>
> On Tue, Oct 18, 2016 at 6:56 PM, Yonik Seeley  wrote:
>>
>> This sounds like you didn't actually start fresh, but just reindexed your
>> data.
>> This would mean that docValues would still exist in the index for this
>> field (just with no values), and that normal faceting would use those.
>> Forcing facet.method=enum forces the use of the index instead of
>> docvalues (or the fieldcache if the field is configured w/o
>> docvalues).
>>
>> -Yonik
>>
>> On Tue, Oct 18, 2016 at 9:43 PM, John Davis 
>> wrote:
>> > Hi,
>> >
>> > I have converted one of my fields from StrField to TextField and am not
>> > getting back any facets for that field. Here's the exact configuration
>> > of
>> > the TextField. I have tested it with 6.2.0 on a fresh instance and it
>> > repros consistently. From reading through past archives and
>> > documentation,
>> > it feels like this should just work. I would appreciate any input.
>> >
>> > > > omitTermFreqAndPositions="true" indexed="true" stored="true"
>> > positionIncrementGap="100" sortMissingLast="true" multiValued="true">
>> > 
>> >   
>> >   
>> > 
>> >   
>> >
>> >
>> > Search
>> > query:
>> > /select/?facet.field=FACET_FIELD_NAME&facet=on&indent=on&q=QUERY_STRING&wt=json
>> >
>> > Interestingly facets are returned if I change facet.method to enum
>> > instead
>> > of default fc.
>> >
>> > John
>
>


Re: Empty facets on TextField

2016-10-18 Thread Yonik Seeley
Actually, a delete-by-query of *:* may also be hit-or-miss on replicas
in a solr cloud setup because of reorders.
If it does work, you should see something in the logs at the INFO
level like "REMOVING ALL DOCUMENTS FROM INDEX"

-Yonik

On Tue, Oct 18, 2016 at 11:02 PM, Yonik Seeley  wrote:
> A delete-by-query of *:* may do it (because it special cases to
> removing the index).
> The underlying issue is when lucene merges a segment without docvalues
> with a segment that has them.
> -Yonik
>
>
> On Tue, Oct 18, 2016 at 10:09 PM, John Davis  
> wrote:
>> Thanks. Is there a way around to not starting fresh and forcing the reindex
>> to remove docValues?
>>
>> On Tue, Oct 18, 2016 at 6:56 PM, Yonik Seeley  wrote:
>>>
>>> This sounds like you didn't actually start fresh, but just reindexed your
>>> data.
>>> This would mean that docValues would still exist in the index for this
>>> field (just with no values), and that normal faceting would use those.
>>> Forcing facet.method=enum forces the use of the index instead of
>>> docvalues (or the fieldcache if the field is configured w/o
>>> docvalues).
>>>
>>> -Yonik
>>>
>>> On Tue, Oct 18, 2016 at 9:43 PM, John Davis 
>>> wrote:
>>> > Hi,
>>> >
>>> > I have converted one of my fields from StrField to TextField and am not
>>> > getting back any facets for that field. Here's the exact configuration
>>> > of
>>> > the TextField. I have tested it with 6.2.0 on a fresh instance and it
>>> > repros consistently. From reading through past archives and
>>> > documentation,
>>> > it feels like this should just work. I would appreciate any input.
>>> >
>>> > >> > omitTermFreqAndPositions="true" indexed="true" stored="true"
>>> > positionIncrementGap="100" sortMissingLast="true" multiValued="true">
>>> > 
>>> >   
>>> >   
>>> > 
>>> >   
>>> >
>>> >
>>> > Search
>>> > query:
>>> > /select/?facet.field=FACET_FIELD_NAME&facet=on&indent=on&q=QUERY_STRING&wt=json
>>> >
>>> > Interestingly facets are returned if I change facet.method to enum
>>> > instead
>>> > of default fc.
>>> >
>>> > John
>>
>>