Hello,
We are running Solr 7.2.1 and planning a deployment that will grow to
4 billion documents over time. We have 16 nodes at our disposal. I am deciding
between 3 configurations:
1 cluster - 16 nodes
vs
2 clusters - 8 nodes each
vs
4 clusters - 4 nodes each
Irrespective of the configuration, ea
That implies that you’re using two different queryParsers, one for the “q”
portion and one for the “fq” portion. My guess is that you have solrconfig
/select or /query configured to use (e)dismax but your fq clause is being
parsed by the LuceneQueryParser.
You can specify the parser via local
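For illustration, local params select the parser per clause; a hedged sketch with hypothetical field names:

```text
q={!edismax qf='title body'}ipod charger
fq={!edismax qf='title body'}ipod charger
```

Without the {!edismax} prefix, the fq falls back to the default lucene parser, which would explain the difference in behavior.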
Maybe in this scenario a Secure Enclave could make sense (e.g. Intel SGX)?
The scenario that you describe looks like MIT CryptDB, e.g.
https://css.csail.mit.edu/cryptdb/
> On 25.06.2019 at 21:05, Tim Casey wrote:
>
> My two cents worth of comment,
>
> For our local lucene indexes we use AES e
I removed the replicate-after-startup setting from our solrconfig.xml file. However,
that didn't solve the issue. When I rebuilt the primary, the associated
replicas all went to 0 documents.
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Hello,
First-time poster here, so sorry for any formatting problems, and sorry if
this has been asked before, but I've tried several versions of SolrJ
(6.5.1-8.1.1) with the same result.
I am running the following example code and am seeing odd output.
String id = "foo123";
int popularity=1;
Solr
My two cents worth of comment,
For our local lucene indexes we use AES encryption. We encrypt the blocks
on the way out, decrypt on the way in.
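That round trip can be sketched with the JDK's javax.crypto. This is a minimal illustration only, assuming AES-CTR (which keeps ciphertext the same length as the plaintext block); the hard-coded key/IV are placeholders, not how keys should be managed:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class BlockCrypt {
    // AES-CTR is length-preserving, which suits fixed-size index blocks.
    // Key and IV handling here is purely illustrative.
    static byte[] crypt(int mode, byte[] key, byte[] iv, byte[] block) throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return c.doFinal(block);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = new byte[16]; // demo key (all zeros) -- never do this in production
        byte[] iv  = new byte[16]; // per-block IV, e.g. derived from the block number
        byte[] block = "some index block bytes".getBytes("UTF-8");

        byte[] enc = crypt(Cipher.ENCRYPT_MODE, key, iv, block); // on the way out
        byte[] dec = crypt(Cipher.DECRYPT_MODE, key, iv, enc);   // on the way in

        System.out.println(Arrays.equals(block, dec));
    }
}
```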
We are using a C version of Lucene, not the Java version. But I suspect
the same methodology could be applied. This assumes the data at rest is
the at
Thanks Erick for the clarification. How does ps work for fq? I
configured ps=4 for q, but it doesn't apply to fq. For phrase queries in
fq it seems ps=0 is used. Is there a way to configure it for fq as well?
Best,
Wei
On Tue, Jun 25, 2019 at 9:51 AM Erick Erickson
wrote:
> q and fq do _exactly
Using the Solr 7.7.2 Docker image.
Solr 7.7.2 on a new znode in ZK. Created the chroot using solr zk mkroot.
Created a policy:
{'set-policy': {'banana': [{'replica': '#ALL',
'sysprop.HELM_CHART': 'notbanana'}]}}
No errors on creation of the policy.
I have no nodes that have
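To see what the policy engine actually computes, the autoscaling read API can be queried; a hedged sketch assuming a local node on the default port:

```shell
# dump the current autoscaling configuration (policies, preferences)
curl 'http://localhost:8983/api/cluster/autoscaling'
# diagnostics: per-node state, including any violations of the policy
curl 'http://localhost:8983/api/cluster/autoscaling/diagnostics'
```

The diagnostics output is a useful place to confirm whether each node's sysprop values are what the policy expects.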
I am actually looking for the best option, so I am currently doing research on it.
For Windows FS encryption I didn't find a way to use a different
username/password; by default it uses the Windows username/password to encrypt
and decrypt.
I tried BitLocker too, for creating an encrypted virtual directory.
q and fq do _exactly_ the same thing in terms of query parsing, subject to all
the same conditions.
There are two things that apply to fq clauses that have nothing to do with the
query _parsing_.
1> there is no scoring, so it’s cheaper from that perspective
2> the results are cached in a bitmap
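Since fq accepts local params just like q, (e)dismax settings such as ps can be attached per filter; a hedged sketch with hypothetical field names:

```text
q={!edismax qf='title body' ps=4}apache solr cloud
fq={!edismax qf='title body' ps=4 v=$fqq}&fqq=apache solr cloud
```

The v=$fqq indirection keeps the filter text in a separate request parameter, which avoids escaping issues inside the local-params braces.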
Why does FS encryption not serve your use case?
Can't you also apply it to backups etc.?
> On 25.06.2019 at 17:32, Ahuja, Sakshi wrote:
>
> Hi,
>
> I am using solr 6.6 and want to encrypt index for security reasons. I have
> tried Windows FS encryption option that works but want to know
This is a recurring issue. The Hitachi solution will encrypt individual
_tokens_ in the index, even with different keys for different users. However,
the price is functionality.
Take wildcards. The Hitachi solution doesn't solve this; the problem is
basically intractable. Consider the words run
No index encryption out of the box. I am aware of a commercial solution, but have
no details on how good it is or what the price is:
https://www.hitachi-solutions.com/securesearch/
Regards,
Alex
On Tue, Jun 25, 2019, 11:32 AM Ahuja, Sakshi wrote:
> Hi,
>
> I am using solr 6.6 and want to encrypt index for
Hi,
I am using Solr 6.6 and want to encrypt the index for security reasons. I have
tried the Windows FS encryption option, which works, but I want to know if Solr
has a built-in feature to encrypt the index, or any other good way to encrypt it.
Thanks,
Sakshi
OK, probably dropping startup will help. Another idea:
set replication.enable.master=false and enable it when the master index is
built after restart.
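A hedged sketch of that toggle, assuming solrconfig.xml reads the property as ${replication.enable.master:false}:

```shell
# start the master with replication disabled while rebuilding
bin/solr start -Dreplication.enable.master=false
# ...rebuild the index and commit...
# restart with replication enabled so replicas pull the finished index
bin/solr stop -all
bin/solr start -Dreplication.enable.master=true
```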
On Tue, Jun 25, 2019 at 6:18 PM Patrick Bordelon <
patrick.borde...@coxautoinc.com> wrote:
> We are currently using the replicate after commit and sta
We are currently using the replicate after commit and startup:
<str name="enable">${replication.enable.master:false}</str>
<str name="replicateAfter">commit</str>
<str name="replicateAfter">startup</str>
<str name="confFiles">schema.xml,stopwords.txt</str>
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
You have a couple of options to delete:
1) Explicit delete request
2) Expiration management:
https://lucene.apache.org/solr/8_1_0//solr-core/org/apache/solr/update/processor/DocExpirationUpdateProcessorFactory.html
3) If you are indexing in clear batches (e.g. monthly, keeping the last 3
months), you cou
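Option 2 is wired up in solrconfig.xml; a hedged sketch (field names and period are illustrative, see the linked Javadoc for the defaults):

```xml
<updateRequestProcessorChain name="expire" default="true">
  <processor class="solr.processor.DocExpirationUpdateProcessorFactory">
    <!-- documents carrying a _ttl_ value get a computed expiration timestamp -->
    <str name="ttlFieldName">_ttl_</str>
    <str name="expirationFieldName">_expire_at_</str>
    <!-- background thread deletes expired docs once a day -->
    <int name="autoDeletePeriodSeconds">86400</int>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```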
Note: it seems the current Solr logic relies on persistent master disks.
https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java#L615
On Tue, Jun 25, 2019 at 3:16 PM Mikhail Khludnev wrote:
> Hello, Patrick.
> Can commit he
Hello, Patrick.
Can commit help you?
On Tue, Jun 25, 2019 at 12:55 AM Patrick Bordelon <
patrick.borde...@coxautoinc.com> wrote:
> Hi,
>
> We recently upgraded to SOLR 7.5 in AWS; we had previously been running
> SOLR
> 6.5. In our current configuration we have our applications broken into a
> si
Then I'll need to delete records more than 30 days old. I was wondering if I
could add something to the cron script itself.
Regards,
Anuj
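A delete-by-query from the cron script is one way to do that; a hedged sketch (core name and date field are hypothetical):

```shell
# cron entry, e.g. daily at 02:00:
#   0 2 * * * /usr/local/bin/solr-purge.sh
# solr-purge.sh: remove documents older than 30 days
curl 'http://localhost:8983/solr/mycore/update?commit=true' \
  -H 'Content-Type: text/xml' \
  --data-binary '<delete><query>posting_date:[* TO NOW-30DAYS]</query></delete>'
```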
On Tue, 25 Jun 2019 at 16:11, Vadim Ivanov <
vadim.iva...@spb.ntk-intourist.ru> wrote:
>
> ... and &clean=false if you want to index just new records and keep
... and &clean=false if you want to index just new records and keep old ones.
--
Vadim
> -----Original Message-----
> From: Jan Høydahl [mailto:jan@cominvent.com]
> Sent: Tuesday, June 25, 2019 10:48 AM
> To: solr-user
> Subject: Re: Solr 8.0.0 Customized Indexing
>
> Adjust your SQL (lo
Adjust your SQL (located in data-config.xml) to extract just what you need (add
a WHERE clause)
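A hedged sketch of what that looks like in data-config.xml (table and column names are hypothetical):

```xml
<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/mydb" user="reader" password="..."/>
  <document>
    <!-- only pull rows from the last 30 days instead of the whole table -->
    <entity name="posting"
            query="SELECT id, title, posting_date FROM postings
                   WHERE posting_date &gt;= NOW() - INTERVAL 30 DAY">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
    </entity>
  </document>
</dataConfig>
```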
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
> On 25 Jun 2019 at 07:23, Anuj Bhargava wrote:
>
> Customized Indexing date specific
>
> We have a huge database of more t
Hi,
I am using a streaming expression and getting the following error.
Failed to open JDBC connection
{
"result-set":{
"docs":[{
"EXCEPTION":"Failed to open JDBC connection to
'jdbc:mysql://localhost/users?user=root&password=solr'",
"EOF":true,
"RESPONSE_TIME":99}]}}