Re: Need help on LTR

2019-03-19 Thread Roopa ML
In the model file, replace original_score with originalScore.
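
For example, if the feature store defines a feature named originalScore, the model JSON must reference exactly that name (the store name, model name, and weight below are hypothetical):

```json
{
  "class": "org.apache.solr.ltr.model.LinearModel",
  "name": "myModel",
  "store": "exampleFeatureStore",
  "features": [
    { "name": "originalScore" }
  ],
  "params": {
    "weights": { "originalScore": 1.0 }
  }
}
```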

Roopa

Sent from my iPhone

> On Mar 19, 2019, at 2:44 PM, Amjad Khan  wrote:
> 
> Roopa,
> 
> Yes
> 
>> On Mar 19, 2019, at 11:51 AM, Roopa Rao  wrote:
>> 
>> Do your feature definitions and the feature names used in the model match?
>> 
>>> On Tue, Mar 19, 2019 at 10:17 AM Amjad Khan  wrote:
>>> 
>>> Yes, I did.
>>> 
>>> I can see the features that I created via
>>> schema/feature-store/exampleFeatureStore, and it returns the features I
>>> created. The issue is when I try to PUT the model to the model-store.
>>> 
>>>> On Mar 19, 2019, at 12:18 AM, Mohomed Rimash wrote:
>>>> 
>>>> Hi Amjad, after adding the libraries into the path, did you restart
>>>> Solr?
>>>> 
> On Tue, 19 Mar 2019 at 08:45, Amjad Khan  wrote:
> 
> I followed the Solr LTR Documentation
> 
> https://lucene.apache.org/solr/guide/7_4/learning-to-rank.html
> 
> 1. Added library into the solr-config
> 
> <lib dir="${solr.install.dir:../../../..}/contrib/ltr/lib/" regex=".*\.jar" />
> <lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-ltr-\d.*\.jar" />
> 2. Successfully added feature
> 3. Get schema to see feature is available
> 4. When I try to push the model I see the error below, even though I
> added the lib into solr-config
> 
> Response
> {
> "responseHeader":{
>  "status":400,
>  "QTime":1},
> "error":{
>  "metadata":[
>"error-class","org.apache.solr.common.SolrException",
>"root-error-class","java.lang.NullPointerException"],
>  "msg":"org.apache.solr.ltr.model.ModelException: Model type does not
> exist org.apache.solr.ltr.model.LinearModel",
>  "code":400}}
> 
> Thanks
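> 
> For reference, the model in step 4 is pushed with a PUT to the model-store
> endpoint (collection name hypothetical). A "Model type does not exist" error
> for a class that ships with the LTR contrib typically means the jar was not
> actually on the classpath when the core loaded, so re-checking the lib paths
> from step 1 and restarting Solr is the usual fix:
> 
> ```shell
> curl -XPUT 'http://localhost:8983/solr/techproducts/schema/model-store' \
>   --data-binary @myModel.json \
>   -H 'Content-type:application/json'
> ```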
>>> 
>>> 
> 


Re: LTR not picking up modified features

2018-03-07 Thread Roopa ML
Thank you, I reloaded the collection and see that the change was picked up.

I had not seen a need to do this in my local environment, which is in
non-cloud (standalone) mode.

Regards 
Roopa

Sent from my iPhone

> On Mar 7, 2018, at 7:09 PM, Shawn Heisey  wrote:
> 
>> On 3/6/2018 12:57 PM, Roopa Rao wrote:
>> There was an error in one of the feature definitions in the Solr LTR
>> features.json file, so I modified it and uploaded it to Solr. I can see that
>> the definition change is uploaded correctly using the feature store url such
>> as
>> 
>> http://servername/solr/techproducts/schema/feature-store/myFeatureStore
>> I checked the _schema_feature-store.json file and I see that the change is
>> present.
>> 
>> However, at run time it is picking up the old feature definition.
> 
> Did you reload the collection (SolrCloud mode) or core (standalone
> mode)?  Or restart all Solr instances with that index present?
> 
> Most of the time, if you don't reload or restart, then configuration
> changes will not take effect.  When using the config or schema APIs that
> change things on the fly, Solr does a reload in order to make changes
> effective.
> 
> Thanks,
> Shawn
> 
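
For reference, the reload Shawn describes can be done with the Collections API in SolrCloud mode or the CoreAdmin API in standalone mode (host and collection/core names below are hypothetical):

```shell
# SolrCloud: reload the collection on all nodes
curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=techproducts'

# Standalone: reload a single core
curl 'http://localhost:8983/solr/admin/cores?action=RELOAD&core=techproducts'
```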


Re: How to store files larger than zNode limit

2018-03-13 Thread Roopa ML
The documentation has: "If this
option is changed, the system property must be set on all servers and
clients otherwise problems will arise."

Other than the ZooKeeper Java system property, what are the other places this
should be set?

Thank you
Roopa

Sent from my iPhone

> On Mar 13, 2018, at 5:56 PM, Markus Jelsma  wrote:
> 
> Hi - For now, the only option is to allow larger blobs via jute.maxbuffer 
> (whatever jute means). Despite ZK being designed for kb sized blobs, Solr 
> demands us to abuse it. I think there was a ticket for compression support, 
> but that only stretches the limit.
> 
> We are running ZK with 16 MB for maxbuffer. It holds the large dictionaries, 
> it runs fine. 
> 
> Regards,
> Markus
> 
> -Original message-
>> From:Atita Arora 
>> Sent: Tuesday 13th March 2018 22:38
>> To: solr-user@lucene.apache.org
>> Subject: How to store files larger than zNode limit
>> 
>> Hi ,
>> 
>> I have a use case supporting multiple clients and multiple languages in a
>> single application.
>> So, in order to improve the language support, we want to use Solr
>> dictionary (userdict.txt) files as large as 10MB.
>> I understand that ZooKeeper's default zNode file size limit is 1MB.
>> I'm not sure if someone has tried increasing it before, and how that
>> fares in terms of performance.
>> Looking at - https://zookeeper.apache.org/doc/r3.2.2/zookeeperAdmin.html
>> It states -
>> Unsafe Options
>> 
>> The following options can be useful, but be careful when you use them. The
>> risk of each is explained along with the explanation of what the variable
>> does.
>> jute.maxbuffer:
>> 
>> (Java system property: jute.maxbuffer)
>> 
>> This option can only be set as a Java system property. There is no
>> zookeeper prefix on it. It specifies the maximum size of the data that can
>> be stored in a znode. The default is 0xfffff, or just under 1M. If this
>> option is changed, the system property must be set on all servers and
>> clients otherwise problems will arise. This is really a sanity check.
>> ZooKeeper is designed to store data on the order of kilobytes in size.
>> I would appreciate it if someone has suggestions on best practices
>> for handling large config/dictionary files in ZK.
>> 
>> Thanks ,
>> Atita
>> 
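
Concretely, one way to apply the 16 MB setting Markus mentions on both sides, assuming default install layouts (the paths and value here are examples, not the only option):

```shell
# ZooKeeper: conf/java.env (or zookeeper-env.sh)
JVMFLAGS="$JVMFLAGS -Djute.maxbuffer=16777216"

# Solr: bin/solr.in.sh, so the client side uses the same limit
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=16777216"
```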


Re: How to store files larger than zNode limit

2018-03-13 Thread Roopa ML
Thank you, this is clear
Regards 
Roopa

Sent from my iPhone

> On Mar 13, 2018, at 6:35 PM, Markus Jelsma  wrote:
> 
> Hi - configure it for all servers that connect to ZK and need jute.maxbuffer 
> to be high, and ZK itself of course.
> 
> So if your Solr cluster needs a large buffer, your Solr's environment 
> variables need to match that of ZK. If you simultaneously use ZK for a Hadoop 
> cluster, but don't need that buffer size, you can omit in Hadoop's settings.
> 
> Markus
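
As a sanity check, the default of 0xfffff quoted from the ZooKeeper docs earlier is indeed just under 1 MiB:

```shell
# Hex 0xfffff is one byte short of 1 MiB (1048576 bytes)
printf '%d\n' 0xfffff    # 1048575
```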