On 4/13/2018 5:07 PM, Tomás Fernández Löbbe wrote:
> Yes... Unfortunately there is no GET API :S Can you open a Jira? Patch
> should be trivial
My suggestion would be to return the list of properties for a collection
when a URL like this is used:
/solr/admin/collections?action=COLLECTIONPROP&name
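Something like this, perhaps (a sketch; the exact response shape is up for
discussion, and until the Jira lands only the ZooKeeper route below actually
works, assuming the default znode layout):

    # proposed read call (collection name is made up)
    curl "http://localhost:8983/solr/admin/collections?action=COLLECTIONPROP&name=myCollection"

    # workaround today: the properties live in ZooKeeper, so they can be
    # fetched from the collectionprops.json znode, e.g. with the zk tool
    bin/solr zk cp zk:/collections/myCollection/collectionprops.json collectionprops.json -z localhost:2181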
Yes... Unfortunately there is no GET API :S Can you open a Jira? Patch
should be trivial
On Fri, Apr 13, 2018 at 3:05 PM, Hendrik Haddorp
wrote:
> Hi,
>
> with Solr 7.3 it is possible to set arbitrary collection properties using
> https://lucene.apache.org/solr/guide/7_3/collections-api.html#collectionprop
Hi,
with Solr 7.3 it is possible to set arbitrary collection properties
using
https://lucene.apache.org/solr/guide/7_3/collections-api.html#collectionprop
But how do I read out the properties again? So far I could not find a
REST call that would return the properties. I do see my property in t
On 4/13/2018 7:49 AM, Christopher Schultz wrote:
> $ SOLR_POST_OPTS="-Djavax.net.ssl.trustStore=/etc/solr/solr-client.p12
> -Djavax.net.ssl.trustStorePassword=whatevs
> -Djavax.net.ssl.trustStoreType=PKCS12" /usr/local/solr/bin/post -c
> new_core https://localhost:8983/solr/new_core
>
> [time passe
Yes I could imagine big gains from this strategy if OpenNLP is in the
analysis chain ;-)
On Fri, Apr 13, 2018 at 5:01 PM Markus Jelsma
wrote:
> Hello David,
>
> If JSON serialization is too bulky, we could also opt for
> SimplePreAnalyzed right? At least as a FieldType it is possible, if not
> w
Hello David,
If JSON serialization is too bulky, we could also opt for SimplePreAnalyzed
right? At least as a FieldType it is possible, if not with URP, it just needs
some work.
Regarding results; we haven't done it yet, and won't for some time, but we will
when we reintroduce OpenNLP in the a
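For reference, a schema sketch of that option (field names are made up):

    <fieldType name="preanalyzed" class="solr.PreAnalyzedField"
               parserImpl="org.apache.solr.schema.SimplePreAnalyzedParser"/>
    <field name="body" type="preanalyzed" indexed="true" stored="true"/>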
On 4/13/2018 11:34 AM, Jesus Olivan wrote:
> first of all, thanks for your answer.
>
> How do you import these 6 shards simultaneously?
I'm not running in SolrCloud mode, so Solr doesn't know that each shard
is part of a larger index. What I'm doing would probably not work in
SolrCloud mode without
Hi Shawn,
first of all, thanks for your answer.
How do you import these 6 shards simultaneously?
2018-04-13 19:30 GMT+02:00 Shawn Heisey :
> On 4/13/2018 11:03 AM, Jesus Olivan wrote:
> > thanks for your answer. It happens that the last time we launched the
> > full import process it didn't finish (we waited for
On 4/13/2018 11:03 AM, Jesus Olivan wrote:
> thanks for your answer. It happens that the last time we launched the full
> import process it didn't finish (we waited for more than 60 hours, and we
> cancelled it, because this is not an acceptable time for us). There weren't
> any errors in the solr logfile simpl
Thank you Shawn & Edwin. It was a certificate error. I did not have the IP
list in the SAN in the format required for IPs. After I updated the SAN
list, I was able to run the ADDREPLICA API correctly.
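For anyone hitting the same thing: SAN entries for IP addresses need the ip:
prefix. A keytool sketch with made-up hostnames and values:

    keytool -genkeypair -alias solr-ssl -keyalg RSA -keysize 2048 \
      -keystore solr-ssl.keystore.jks -storepass secret \
      -dname "CN=solr1.example.com, O=Example" \
      -ext "SAN=dns:solr1.example.com,ip:10.0.3.73,ip:10.0.3.181"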
-Antony
On Thu, Apr 12, 2018 at 10:20 PM, Shawn Heisey wrote:
> On 4/12/2018 9:48 PM, Antony
Hi Shawn,
thanks for your answer. It happens that the last time we launched the full
import process it didn't finish (we waited for more than 60 hours, and we
cancelled it, because this is not an acceptable time for us). There weren't
any errors in the solr logfile simply because it was working fine. The probl
Jesus,
Usually a zipper join (aka external merge in the old ETL world) and explicit
partitioning are able to speed up an import.
https://lucene.apache.org/solr/guide/6_6/uploading-structured-data-store-data-with-the-data-import-handler.html#entity-processors
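A rough sketch of what that looks like in a DIH config (table and column names
are made up; both queries must be sorted by the join key for the zipper/merge
join to work):

    <document>
      <entity name="parent" processor="SqlEntityProcessor"
              query="SELECT id, title FROM parent ORDER BY id">
        <entity name="child" processor="SqlEntityProcessor" join="zipper"
                where="parent_id=parent.id"
                query="SELECT parent_id, attr FROM child ORDER BY parent_id"/>
      </entity>
    </document>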
On Fri, Apr 13, 2018 at 7:11 PM, Jesus Olivan
wrote
_how_ are you importing? DIH? SolrJ?
Here's an article about using SolrJ
https://lucidworks.com/2012/02/14/indexing-with-solrj/
But without more details it's really impossible to say much. Things
I've done in the past:
1> use SolrJ and partition the job up amongst a bunch of clients each
of which
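A bare-bones sketch of that pattern (assumptions: SolrJ 7.x, MySQL over JDBC;
all names, the slicing scheme and the batch size are made up):

    import java.sql.*;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class PartitionedIndexer {
      public static void main(String[] args) throws Exception {
        // run N copies of this, one per slice: java PartitionedIndexer <slice> <slices>
        int slice = Integer.parseInt(args[0]), slices = Integer.parseInt(args[1]);
        try (CloudSolrClient solr = new CloudSolrClient.Builder()
                 .withZkHost("zk1:2181,zk2:2181,zk3:2181").build();
             Connection db = DriverManager.getConnection(
                 "jdbc:mysql://dbhost/mydb", "user", "pass");
             Statement st = db.createStatement()) {
          solr.setDefaultCollection("mycollection");
          // each client pulls its own slice of the table instead of one huge query
          ResultSet rs = st.executeQuery(
              "SELECT id, title FROM docs WHERE MOD(id, " + slices + ") = " + slice);
          List<SolrInputDocument> batch = new ArrayList<>();
          while (rs.next()) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", rs.getString("id"));
            doc.addField("title", rs.getString("title"));
            batch.add(doc);
            if (batch.size() == 1000) { solr.add(batch); batch.clear(); }
          }
          if (!batch.isEmpty()) solr.add(batch);
          solr.commit();
        }
      }
    }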
On 4/13/2018 10:11 AM, Jesus Olivan wrote:
> we're trying to launch a full import of approx. 375 million docs from a
> MySQL database to our solrcloud cluster. Until now, this full import
> process takes around 24-27 hours to finish due to a huge import query
> (several group bys, left joins, e
Hi!
we're trying to launch a full import of approx. 375 million docs from a
MySQL database to our solrcloud cluster. Until now, this full import
process takes around 24-27 hours to finish due to a huge import query
(several group bys, left joins, etc), but after another import query
modificati
expungeDeletes won't do the trick for you; it only purges documents from
segments with > 10% deleted docs, so you'll still have deleted documents.
I'd push back on "the requirement is to show facets with 0 count as
disabled." Why? What use-case is satisfied here? Effectively this is
saying "For my query, show me
bq: http://10.0.3.181:8983/solr/collectioname is not a valid URL.
You need to add an endpoint like
http://10.0.3.181:8983/solr/collectioname/query?q=*:*
For SolrJ, you need to specify the collection; depending on the version
you might have to setDefaultCollection or use a call that includes the
c
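A minimal SolrJ sketch (zk addresses are placeholders; builder details vary a
little across 7.x versions):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    CloudSolrClient client = new CloudSolrClient.Builder()
        .withZkHost("10.0.3.73:2181,10.0.3.181:2181").build();
    client.setDefaultCollection("collectioname");
    // or name the collection per call:
    QueryResponse rsp = client.query("collectioname", new SolrQuery("*:*"));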
Having documents without fields doesn't matter much.
Solr (well, Lucene actually) is pretty efficient about this. It
handles thousands of different fields, although I have to say
that when you have thousands of fields it's usually time to revisit
the design. It looks like your total field cou
Try starting Solr with the -v option. That'll dump exactly what's
loaded from where and provide a lot more details. Also, what's in the
Solr logs? Perhaps that'll be more informative.
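For example (add -f if you want to keep it in the foreground):

    bin/solr start -v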
Best
Erick
On Fri, Apr 13, 2018 at 1:36 AM, Sambhav Kothari wrote:
> Hello everyone!
>
> I am migrating my Solr
Have you considered the StatelessScriptUpdateProcessorFactory? That
allows you to do pretty much anything you want.
I don't quite know whether TemplateUpdateProcessorFactory deals well
with empty fields or not, but it might be worth a shot.
And, of course, your ETL process could do that on the clie
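A sketch of the script route (chain and script names are made up; depending on
the version the unused handler functions may need empty stubs, as below):

solrconfig.xml:

    <updateRequestProcessorChain name="fill-empties">
      <processor class="solr.StatelessScriptUpdateProcessorFactory">
        <str name="script">update-script.js</str>
      </processor>
      <processor class="solr.LogUpdateProcessorFactory"/>
      <processor class="solr.RunUpdateProcessorFactory"/>
    </updateRequestProcessorChain>

update-script.js:

    function processAdd(cmd) {
      var doc = cmd.solrInputDocument;
      // supply a default when the field is absent or empty
      if (doc.getFieldValue("title") == null) {
        doc.setField("title", "unknown");
      }
    }
    function processDelete(cmd) { }
    function processMergeIndexes(cmd) { }
    function processCommit(cmd) { }
    function processRollback(cmd) { }
    function finish() { }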
I think that perhaps something along the lines of PointType or
IntPointField/FloatPointField would be the right answer to store
things:
https://lucene.apache.org/solr/guide/7_3/field-types-included-with-solr.html
On the other hand, I am not sure what "searching similar vectors" means
for you, I suspect there a
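For storage, a schema sketch along those lines (names are made up):

    <fieldType name="pfloat" class="solr.FloatPointField" docValues="true"/>
    <field name="vector" type="pfloat" multiValued="true" indexed="true" stored="true"/>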
All,
I've recently been encountering some frustrations with Solr 7.3 after
configuring TLS; since the command-line tools (which are a breeze to use
when you have a "toy" Solr installation) stop working when TLS is
enabled, I'm finding myself having to perform the following tasks in
order to get bi
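One small thing that helps in the meantime, reusing the trust-store options
shown earlier in this thread, is parking them in the environment so every
invocation doesn't need them spelled out (paths and password are placeholders):

    export SOLR_POST_OPTS="-Djavax.net.ssl.trustStore=/etc/solr/solr-client.p12 \
      -Djavax.net.ssl.trustStorePassword=whatevs \
      -Djavax.net.ssl.trustStoreType=PKCS12"
    /usr/local/solr/bin/post -c new_core https://localhost:8983/solr/new_core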
Hello,
I don't understand the documentation concerning DIH and preprocessor
creation specifically on Solr Cloud.
In the Solr documentation, the single-server and cloud procedures are mixed,
and I don't find it very clear.
Do you have a procedure to create a DIH and processor and implement it with Solr
On 4/13/2018 2:54 AM, rameshkjes wrote:
I am using "FileDataSource" as the datasource; how can I grab that information
from the URL?
I'm pretty sure that you can provide pretty much ANY information in the
DIH config file from a URL parameter, using the ${dih.request.} syntax.
To answer your othe
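For example, a sketch (entity and parameter names are made up), with the
parameter referenced in the DIH config and supplied on the request URL:

    <dataConfig>
      <dataSource type="FileDataSource"/>
      <document>
        <entity name="files" processor="FileListEntityProcessor"
                baseDir="${dih.request.baseDir}" fileName=".*\.xml" recursive="true">
          <!-- nested entity that parses each file would go here -->
        </entity>
      </document>
    </dataConfig>

    http://localhost:8983/solr/mycore/dataimport?command=full-import&baseDir=/path/to/data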
On 4/10/2018 9:14 AM, Erick Erickson wrote:
The very first thing I'd do is set up a simple SolrCloud setup and
give it a spin. Unless your indexing load is quite heavy, the added
work the NRT replicas have in SolrCloud isn't a problem so worrying
about that is premature optimization unless you ha
#Ignore, mis-read the comment and its context.
On 13 April 2018 at 13:08, Lee Carroll wrote:
> Hi all,
>
> I'm writing a custom response writer to output a very simple rendition of
> a solr result set to clients.
>
> In my tests I do:
>
> h.getCore().execute(h.getCore().getRequestHandler(null),r
Hi all,
I'm writing a custom response writer to output a very simple rendition of a
solr result set to clients.
In my tests I do:
h.getCore().execute(h.getCore().getRequestHandler(null),req,rsp);
which for a q=*:* request object returns a response with a BasicResultContext.
In TextResponseWr
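In case it helps, the writer itself can stay very small; a sketch (class name
and output format are made up):

    import java.io.IOException;
    import java.io.Writer;
    import org.apache.solr.common.util.NamedList;
    import org.apache.solr.request.SolrQueryRequest;
    import org.apache.solr.response.QueryResponseWriter;
    import org.apache.solr.response.ResultContext;
    import org.apache.solr.response.SolrQueryResponse;
    import org.apache.solr.search.DocIterator;

    public class SimpleTextWriter implements QueryResponseWriter {
      public void init(NamedList args) { }
      public String getContentType(SolrQueryRequest req, SolrQueryResponse rsp) {
        return "text/plain";
      }
      public void write(Writer w, SolrQueryRequest req, SolrQueryResponse rsp)
          throws IOException {
        // for a q=*:* search the response value is a (Basic)ResultContext
        ResultContext ctx = (ResultContext) rsp.getResponse();
        for (DocIterator it = ctx.getDocList().iterator(); it.hasNext(); ) {
          w.write(Integer.toString(it.nextDoc()));  // internal Lucene docid
          w.write('\n');
        }
      }
    }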
Jason,
One way is simply to use a multi-valued field. But this is not officially a
vector, and the order might not be guaranteed. I suspect you can just post a
document with the values and see them in order.
Searching for a single value would not be very useful.
Another way is to choose a textual
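E.g. a quick check with a made-up collection and field:

    curl -H 'Content-Type: application/json' \
      'http://localhost:8983/solr/mycoll/update?commit=true' \
      -d '[{"id":"1","vector":[0.12, 3.4, 5.6]}]'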
Thanks Steve. I will look into the cache.
great. thanks, erick!
--
John Blythe
On Wed, Apr 11, 2018 at 12:16 PM, Erick Erickson
wrote:
> bq: are you simply flagging the fact that we wouldn't direct the queries
> to A
> v. B v. C since SolrCloud will make the decisions itself as to which part
> of the distro gets hit for the operation
>
We have a use case where we need to populate a unique field from multiple
fields. Our solrconfig.xml looks like below.
<updateRequestProcessorChain name="populate-fullcode">
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <str name="source">Code</str>
    <str name="dest">FullCode</str>
  </processor>
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <str name="source">ExtendedCode</str>
    <str name="dest">FullCode</str>
  </processor>
  <processor class="solr.ConcatFieldUpdateProcessorFactory">
    <str name="delimiter">_</str>
    <str name="fieldName">FullCode</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
The above configuration works fine if the document has both Cod
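For completeness, a chain like that also has to be attached to the update
handler; a sketch, assuming the chain name from the config above:

    <initParams path="/update/**">
      <lst name="defaults">
        <str name="update.chain">populate-fullcode</str>
      </lst>
    </initParams>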
I am using "FileDataSource" as the datasource; how can I grab that information
from the URL?
Hello everyone!
I am migrating my Solr config to Solr 7.3 from 6.6.2.
I was having trouble with one of the filters, specifically
'solr.ICUFoldingFilterFactory'.
I have the required libs from analysis-extras in my Solr config,
but I am still getting:
Error instantiating class:
'org.apache.luce
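If it is the usual classpath issue, a sketch of the <lib> directives the ICU
filter needs in solrconfig.xml (paths depend on your install layout):

    <lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lib"
         regex="icu4j-.*\.jar"/>
    <lib dir="${solr.install.dir:../../../..}/contrib/analysis-extras/lucene-libs"
         regex="lucene-analyzers-icu-.*\.jar"/>
    <lib dir="${solr.install.dir:../../../..}/dist"
         regex="solr-analysis-extras-.*\.jar"/>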
How will data be imported using DIH if the path to the data is not provided?
What I want is to provide the link to the dataset files at runtime.
Hi Shawn,
Thanks for the long explanation.
So the 2 billion limit can be overcome by using shards.
Now coming back to collections: unless we have a logical or business reason,
we should not go for more than one collection.
Let's say I have 5 different entities and they each have 10, 20, 30, 40 and 50
attri
Hi,
I am getting the below error when I try to connect to Solr nodes using a
ZooKeeper cluster. Can you please help me debug this?
org.apache.solr.client.solrj.SolrServerException: No live SolrServers
available to handle this request:[http://10.0.3.73:8983/solr/collectioname,
http://10.0.3.181:898