Thanks for your explanation. @Alexandre Rafalovitch @Walter Underwood
My case is using Solr as an index service for some NoSQL systems; it is
a common requirement to
guarantee the consistency of the index and the source data.
There may be two ways to write the source data and index:
1. write index t
Hi Koji,
Thanks for your reply and for providing the information.
Just to check, is this supported in Solr 7.4.0?
Regards,
Edwin
On Wed, 19 Sep 2018 at 11:02, Koji Sekiguchi
wrote:
> Hi,
>
> > https://github.com/airalcorn2/Solr-LTR#RankNet
> >
> > Has anyone tried on this before? And what is the
Hi all,
sorry to bother you all, but these days I'm struggling to understand what's
going on with my production servers...
Looking at Solr Admin Panel I've found the CACHE -> fieldValueCache tab
where all the values are 0.
class:org.apache.solr.search.FastLRUCache
description:Concurrent LRU Cache(m
You are doing the right thing. Always write to the repository first, then
write to Solr. The repository is the single source of truth.
We write to the repository, then have a process that copies new items
to Solr.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my
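For illustration, a minimal sketch of that "repository first, then Solr" ordering using SolrJ. The ArticleStore repository interface, the collection URL, and the field names are hypothetical; only the write order and the recovery path matter:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class RepositoryFirstWriter {

    // Hypothetical source-of-truth repository; stands in for whatever store you use.
    interface ArticleStore {
        void put(String id, String body) throws Exception;
        void markForReindex(String id);
    }

    private final ArticleStore store;
    private final SolrClient solr =
        new HttpSolrClient.Builder("http://localhost:8983/solr/articles").build();

    RepositoryFirstWriter(ArticleStore store) {
        this.store = store;
    }

    void save(String id, String body) throws Exception {
        // 1. Write the source of truth first; if this fails, fail the whole request.
        store.put(id, body);
        try {
            // 2. Then index into Solr; a failure here is recoverable.
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", id);
            doc.addField("body_txt", body);
            solr.add(doc);
        } catch (Exception e) {
            // 3. Do not roll back the repository; queue the id so a copy process
            //    can re-index it later.
            store.markForReindex(id);
        }
    }
}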
On 9/18/2018 8:11 PM, zhenyuan wei wrote:
Hi all,
adding Solr documents with overwrite=false will keep multiple versions of
documents.
My questions are:
1. How do I search for the newest documents? With what options?
2. How do I delete documents whose version is older than the newest version?
When Solr is compilin
A little update.
For the client machine where the Solr admin page behaves differently, it turns out
that the requests on the page like */admin/* were never served. I think it is
related to a server setting that might prevent these URLs with "/admin/" from
being sent.
In essence, it is not a sol
Thanks for bringing closure to this, Whew!
On Wed, Sep 19, 2018 at 8:04 AM Gu, Steve (CDC/OD/OADS) (CTR)
wrote:
>
> A little update.
>
> For the client machine where solr admin page behaves differently, it turns
> out that the requests on the page like */admin/* were never served. I think
> it
Has this ever worked? IOW, is this something that's changed or has
just never worked?
The obvious first step is to start Solr with more than 1G of memory.
Solr _likes_ memory and a 1G heap is quite small. But you say:
"Increasing the heap size further doesnt start SOLR instance itself.".
How much
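For example (assuming the stock start scripts), something along the lines of

    bin/solr start -m 4g

or setting SOLR_HEAP=4g in solr.in.sh would give the JVM more room; the 4g value is only illustrative.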
I've got a Solr instance which crawls roughly 3,500 seed pages, depth of 1, at
240 institutions, all but 1 of which I don't control. I recrawl once a month or
so. Naturally if one of the sites I crawl changes, then I need to know to
update my seed URLs. I've been checking this by hand, which was
Have you looked at Apache Nutch? Seems like the direct match for your
- growing - requirements and it does integrate with Solr. Or one of
the other solutions, like http://stormcrawler.net/
http://www.norconex.com/collectors/
Otherwise, this does not really feel like a Solr question.
Regards,
A
I do use Nutch as my crawler, but just as my crawler, so I hadn't thought to
look for an answer there. I will do so. Thank you.
Chip
From: Alexandre Rafalovitch
Sent: Wednesday, September 19, 2018 2:05:41 PM
To: solr-user
Subject: Re: Seeking a simple way to te
Hi dear SOLR community.
On this page of the documentation:
https://lucene.apache.org/solr/guide/6_6/core-specific-tools.html what are
the fields "current" and "gen" referring to? I have not been able to find
that anywhere :(
Thanks,
JMS
I would say this is the relevant page for the "current" and
"generation" fields: https://lucene.apache.org/solr/guide/6_6/index-replication.html
And I think the generation refers to the actual Lucene index, so it is
explained further here:
https://lucene.apache.org/core/6_6_0/core/org/apache/lucene/codecs/lucene
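For what it's worth, the same numbers the admin UI shows can also be pulled from the ReplicationHandler, which may make it easier to see what "gen" corresponds to on your own core. A sketch with SolrJ; the core name "mycore" is just a placeholder:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class IndexGeneration {
    public static void main(String[] args) throws Exception {
        SolrClient client =
            new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("command", "indexversion");
        // Ask the ReplicationHandler for the current Lucene index version and generation.
        NamedList<Object> rsp = client.request(
            new GenericSolrRequest(SolrRequest.METHOD.GET, "/replication", params));
        System.out.println("indexversion=" + rsp.get("indexversion")
            + " generation=" + rsp.get("generation"));
        client.close();
    }
}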
I am helping implement solr for a "downloadable library" of sorts. The
objective is that communities without internet access will be able to access
a library's worth of information on a small, portable device. As such, I am
working within strict space constraints. What are some non-essential
compon
Chip:
Another thing that might work for you is the streaming/export
capability. It can efficiently return some data
(docValues only) for very large result sets. You'd have to have some
automated way to feed it what to look for.
But that's a fallback, I'd first look at Nutch as
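As a rough sketch of the export route, using SolrJ's streaming classes against the /export handler. The collection and field names below are made up; the exported fields must have docValues, and /export requires an explicit sort:

import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.SolrStream;
import org.apache.solr.common.params.ModifiableSolrParams;

public class ExportAllUrls {
    public static void main(String[] args) throws Exception {
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("q", "*:*");
        params.set("fl", "id,url");     // both fields assumed to have docValues
        params.set("sort", "id asc");   // /export requires an explicit sort
        params.set("qt", "/export");

        SolrStream stream = new SolrStream("http://localhost:8983/solr/crawl", params);
        try {
            stream.open();
            Tuple tuple = stream.read();
            while (!tuple.EOF) {
                System.out.println(tuple.getString("url"));
                tuple = stream.read();
            }
        } finally {
            stream.close();
        }
    }
}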
Hi Erick,
Thank you for the follow-up. I have resolved the issue by increasing the
heap size: I am able to start the Solr JVM with a 3G heap,
and the subset of 1 million records was fetched successfully. However, it still
fails with the entire 3 million records. So something is off with th
On 9/19/2018 1:48 PM, oddtyme wrote:
I am helping implement solr for a "downloadable library" of sorts. The
objective is that communities without internet access will be able to access
a library's worth of information on a small, portable device. As such, I am
working within strict space constrai
Hi Alex,
Thanks for replying.
I also found this:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201404.mbox/%3c53483062.2000...@elyograg.org%3E
where it says "s basically means that Lucene has detected an index state
where something has made changes to the index, but those changes are
I am still wondering whether anyone has ever seen any examples of this actually
working (has anyone ever seen an example of SPLITSHARD on a two-node SolrCloud
placing replicas of each shard on different hosts than the other replicas of
the same shard)?
Anyone?
-Original Message-
Fro
Thanks, Shawn.
We made a change to add q.op=AND as a separate param and found a few issues.
For example, we have a query that filters out guest users in our product.
It boils down to:
select?q=myname*&q.op=AND&fq=(-(site_role:"Guest"))
debugQuery shows this is parsed as the following, which do
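One thing that may or may not be the parse difference described above: a purely negative clause wrapped in parentheses often needs an explicit *:* to match against, e.g.

    select?q=myname*&q.op=AND&fq=(*:* -site_role:"Guest")&debugQuery=true

which keeps the filter's meaning the same whether q.op is OR or AND.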
Hi Edwin,
> Just to check, is this supported in Solr 7.4.0?
Yes, it is.
https://github.com/LTR4L/ltr4l/blob/master/ltr4l-solr/ivy-jars.properties#L17
Koji
On 2018/09/19 19:40, Zheng Lin Edwin Yeo wrote:
Hi Koji,
Thanks for your reply and provide the information.
Just to check, is this suppo
Ok, thank you.
Regards,
Edwin
On Thu, 20 Sep 2018 at 08:39, Koji Sekiguchi
wrote:
> Hi Edwin,
>
> > Just to check, is this supported in Solr 7.4.0?
>
> Yes, it is.
>
>
> https://github.com/LTR4L/ltr4l/blob/master/ltr4l-solr/ivy-jars.properties#L17
>
> Koji
>
> On 2018/09/19 19:40, Zheng Lin Ed
Yeah, writing to the true data store first, then writing to Solr. I found it is
simple to guarantee eventual consistency,
handling only the two main exceptions below:
1. If the write to the true data store fails, then the client simply retries its
request.
2. If the write to the true data store succeeds, but it fails to write
Tanya:
Good to hear. You probably want to configure hard commits as
well, and in your case perhaps with openSearcher=true (see the snippet below).
Indexing is only half the problem. It's quite possible that what's
happening is your index is just growing and that's pushing the
boundaries of Java heap. What I'm thinking is that D
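For reference, the hard-commit setting being referred to lives in solrconfig.xml; a sketch, where the 60-second interval is only an example value:

    <autoCommit>
      <maxTime>60000</maxTime>            <!-- hard commit at most every 60 seconds -->
      <openSearcher>true</openSearcher>   <!-- also open a new searcher on hard commit -->
    </autoCommit>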
Hi Guys
I have a 3 zookeeper ensemble and 3 solr nodes running version 7.4.0.
Recently I had to restart one node and after I did that it started throwing
this exception.
{
  "error":{
    "metadata":[
      "error-class","org.apache.solr.core.SolrCoreInitializationException",
      "root-erro
On 9/19/2018 8:22 PM, Schaum Mallik wrote:
I have a 3 zookeeper ensemble and 3 solr nodes running version 7.4.0.
Recently I had to restart one node and after I did that it started throwing
this exception.
Caused by: org.apache.solr.common.SolrException: No coreNodeName
for
CoreDescriptor[name=
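One thing that might be worth checking is whether that replica's core.properties still contains a coreNodeName entry. For a SolrCloud replica the file normally looks roughly like this (the values here are only illustrative, echoing the core name mentioned below):

    name=articles_shard1_replica_n1
    collection=articles
    shard=shard1
    replicaType=NRT
    coreNodeName=core_node3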
The data and index get stored under
/opt/solr/server/solr/articles_shard1_replica_n1.
When the collection was created, the path to the
config was given as '/opt/solr/server/solr/configsets/articles'. I
didn't use the service installer script. The other two solr nodes
I also want to add one other thing. I moved from a single-core Solr
instance on Solr 6.6 to SolrCloud a few months back. I ran the
IndexUpgrader tool on the indexes before I moved them to SolrCloud.
On Wed, Sep 19, 2018 at 7:29 PM Shawn Heisey wrote:
> On 9/19/2018 8:22 PM, Scha