Waiting for a reply. Actually, heap utilization increases when we sort on
dynamic fields.
On Tue, Mar 21, 2017 at 10:37 AM, Midas A wrote:
> Hi ,
>
> How can I improve the performance of sorting on dynamic fields?
>
> Index size is: 20 GB
>
> Regards,
> Midas
>
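One thing worth checking: sorting on a field that has no docValues forces Solr
to uninvert it onto the Java heap (FieldCache), once per distinct field name,
so sorting on many dynamic fields multiplies the heap cost. A minimal SolrJ
sketch (untested; the "*_sort" pattern and core URL are placeholders) that
declares the dynamic pattern with docValues via the Schema API, so sorting
reads column-oriented, largely off-heap structures instead:

import java.util.HashMap;
import java.util.Map;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.schema.SchemaRequest;

public class AddSortableDynamicField {
    public static void main(String[] args) throws Exception {
        try (SolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
            Map<String, Object> attrs = new HashMap<>();
            attrs.put("name", "*_sort");      // hypothetical dynamic pattern
            attrs.put("type", "string");
            attrs.put("docValues", true);     // the key change for cheap sorting
            attrs.put("indexed", false);      // sort/facet only
            attrs.put("stored", false);
            new SchemaRequest.AddDynamicField(attrs).process(client);
        }
    }
}

Existing documents would need reindexing before the new pattern takes effect.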
Chris:
OK, the whole DBQ (delete-by-query) thing baffles the heck out of me, so this
may be totally off base. But would committing help here? Or at least be worth
a test?
On Tue, Mar 21, 2017 at 4:28 PM, Chris Hostetter
wrote:
>
> : Thanks for replying. We are using Solr 6.1. I did see that it is bounded
> : by a 1K count, but after looking at the heap dump I was amazed that it can
> : keep more than 1K entries. And yes, I see around 7M entries in the heap
> : dump, with around 17 GB of memory occupied by BytesRef there.
Joe,
To do this correctly and soundly, you will need to sample the data and mark
each sample as threatening or neutral. You can probably expand on this quite a
bit, but that would be a good start. Then draw another set of samples and see
how you did: you use one set to train and the other to validate.
Wha
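A minimal sketch of that train/validate split (plain Java; the LabeledDoc type
and the 80/20-style ratio are made up for illustration):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class TrainValidateSplit {
    // Hypothetical labeled sample: text plus its threatening/neutral mark.
    static class LabeledDoc {
        final String text;
        final boolean threatening;
        LabeledDoc(String text, boolean threatening) {
            this.text = text;
            this.threatening = threatening;
        }
    }

    // Shuffle so neither split inherits ordering bias, then hold out the
    // tail of the list purely for validation.
    static List<List<LabeledDoc>> split(List<LabeledDoc> labeled, double trainFraction) {
        List<LabeledDoc> shuffled = new ArrayList<>(labeled);
        Collections.shuffle(shuffled);
        int cut = (int) (shuffled.size() * trainFraction);
        return Arrays.asList(shuffled.subList(0, cut),
                             shuffled.subList(cut, shuffled.size()));
    }
}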
Hello Hank,
The online version of the reference guide is always for the latest Solr
release. I think your configuration would work in the latest release. Prior
to Solr 6, the Spatial4J library had a different Java package location:
replace "org.locationtech.spatial4j" with "com.spatial4j.core".
Hi,
I am trying to integrate OpenNLP with Solr.
The fieldtype is:
I am using the lemmatizer dictionary from the link below; the
en-lemmatizer.txt file is close to 5 MB in size:
https://raw.githubusercontent.com/richardwilly98/elasticsearch-openn
: Thanks for replying. We are using Solr 6.1. I did see that it is bounded by
: a 1K count, but after looking at the heap dump I was amazed that it can keep
: more than 1K entries. And yes, I see around 7M entries in the heap dump,
: with around 17 GB of memory occupied by BytesRef there.
what
Hi Chris,
Thanks for replying. We are using Solr 6.1. I did see that it is bounded by a
1K count, but after looking at the heap dump I was amazed that it can keep
more than 1K entries. And yes, I see around 7M entries in the heap dump, with
around 17 GB of memory occupied by BytesRef there.
It
: facing. We are storing messages in Solr as documents. We run a pruning job
: every night to delete old message documents, calling multiple delete-by-id
: requests to Solr. The document count can be in the millions; we delete them
: using the SolrJ client. We are
:
Hi All,
I am looking for some help to solve an out-of-memory issue which we are
facing. We are storing messages in Solr as documents. We run a pruning job
every night to delete old message documents, calling multiple delete-by-id
requests to Solr. The document count can be in the millions; we delete them
using the SolrJ client. We are
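Along the lines of the committing suggestion earlier in the thread, a minimal
SolrJ sketch (untested; batch size and commit cadence are guesses to tune)
that deletes in modest batches and commits periodically, so deletes are
flushed as the job runs instead of accumulating:

import java.util.List;
import org.apache.solr.client.solrj.SolrClient;

public class PruneOldMessages {
    static void deleteInBatches(SolrClient client, List<String> oldIds) throws Exception {
        final int batchSize = 1000;
        for (int i = 0; i < oldIds.size(); i += batchSize) {
            // Send up to 1000 ids per delete request rather than one huge request.
            client.deleteById(oldIds.subList(i, Math.min(i + batchSize, oldIds.size())));
            if ((i / batchSize) % 50 == 49) {
                client.commit();  // flush roughly every 50k deletes
            }
        }
        client.commit();          // final flush
    }
}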
Hello,
I'm having problems with a polygon search on location data. I've tried to
enable JTS and polygons following
https://cwiki.apache.org/confluence/display/solr/Spatial+Search, but I get the
following error when I load Solr:
java.util.concurrent.ExecutionException:
org.apache.solr.co
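For reference, once the JTS jar is actually on Solr's classpath (typically
under server/solr-webapp/webapp/WEB-INF/lib in Solr 6), a polygon filter looks
something like this sketch (the "location" field name and WKT coordinates are
placeholders):

import org.apache.solr.client.solrj.SolrQuery;

public class PolygonFilter {
    static SolrQuery buildQuery() {
        SolrQuery q = new SolrQuery("*:*");
        // WKT polygon; note the ring must close (first point == last point).
        q.addFilterQuery("{!field f=location}Intersects(POLYGON((" +
                "-10 30, -40 40, -10 -20, 40 20, 0 0, -10 30)))");
        return q;
    }
}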
Well, I always chmod 777 when developing on my laptop ;)
And I agree it's much easier to get it all working with a sledgehammer
_then_ play with getting permissions correct. At least then I know
what changed last.
On Tue, Mar 21, 2017 at 11:51 AM, David Hastings
wrote:
> This is true; I was speaking more from a development standpoint, as if I'm
> the only one with access to the machine.
This is true; I was speaking more from a development standpoint, as if I'm
the only one with access to the machine.
On Tue, Mar 21, 2017 at 2:47 PM, Erick Erickson
wrote:
> "Should you be aware of anything". Yes. Your security guys will go
> nuts if you recursively gave 777 permissions. By changin
"Should you be aware of anything". Yes. Your security guys will go
nuts if you recursively gave 777 permissions. By changing this you've
opened up your directories to anyone with any access to add any
program to any directory and execute it. From a security standpoint
this is very bad practice.
To
On 3/21/2017 10:34 AM, Shawn Heisey wrote:
> Restating the original problem: I cannot paginate through the groups
> in a grouped query. The first page works, subsequent pages do not. I
> have a distributed index. Co-locating documents in the same group
> onto the same shard is going to require
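A minimal sketch of that co-location idea, assuming the collection was created
with the compositeId router: prefix the uniqueKey with the grouping value so
every document in a group hashes to the same shard (the field names here are
made up):

import org.apache.solr.common.SolrInputDocument;

public class GroupRoutedDoc {
    static SolrInputDocument make(String groupValue, String docId, String body) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", groupValue + "!" + docId); // same prefix => same shard
        doc.addField("group_s", groupValue);
        doc.addField("body_t", body);
        return doc;
    }
}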
When I recursively gave permissions to all folders/files under /opt/bitnami
it worked! So now I can actually see my core as well as use it.
What about when I go into production? Should I be aware of anything?
Here:
/opt/bitnami/apache-solr/server/solr/da/data/index/write.lock
so just set the permissions on /opt/bitnami/ to 777; it will save you
headaches later. Without that, you can use the app but not make any indexes.
Also, delete that file if it exists.
On Tue, Mar 21, 2017 at 1:16 PM, HrDahl wrote:
> The question is: what permissions? Is it read/write permissions on the
> write.lock file?
The question is: what permissions? Is it read/write permissions on the
write.lock file?
And Solr is running; I can access the admin panel via the URL and port given
by Google Cloud. I just can't add a core.
Hello,
My team often uses the /dataimport & /dih handlers to move items from one
Solr collection to another. However, every time we have done that, the number
of shards in the new collection was the same as or higher than in the old
one.
Can /dataimport work if I have fewer shards in the new collection?
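If DIH gives trouble, one alternative is a SolrJ copy with cursorMark paging,
which does not care about the shard count on either side. A rough, untested
sketch; it assumes all fields are stored and a uniqueKey named "id":

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.common.params.CursorMarkParams;

public class CopyCollection {
    static void copy(CloudSolrClient client, String from, String to) throws Exception {
        SolrQuery q = new SolrQuery("*:*");
        q.setRows(500);
        q.setSort("id", SolrQuery.ORDER.asc);      // cursorMark requires a uniqueKey sort
        String cursor = CursorMarkParams.CURSOR_MARK_START;
        while (true) {
            q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
            QueryResponse rsp = client.query(from, q);
            for (SolrDocument d : rsp.getResults()) {
                SolrInputDocument in = new SolrInputDocument();
                for (String f : d.getFieldNames()) {
                    if (!"_version_".equals(f)) {  // never copy internal version stamps
                        in.addField(f, d.getFieldValue(f));
                    }
                }
                client.add(to, in);
            }
            String next = rsp.getNextCursorMark();
            if (next.equals(cursor)) break;        // unchanged mark means no more pages
            cursor = next;
        }
        client.commit(to);
    }
}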
Hello,
I am trying to index fields with the ObjectId datatype from my MongoDB into
Solr 5.4.0 using the DIH (DataImportHandler); however, it does not allow me
to index them.
I have to change them to strings in MongoDB first and then index them.
Is there a way to index ObjectId datatype fields in the Solr schema?
Best
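Solr has no ObjectId field type, so the usual route is converting to the hex
string on the way in. Outside DIH that is a one-liner with the MongoDB Java
driver (a sketch; the "title_s" field is a placeholder):

import org.apache.solr.common.SolrInputDocument;
import org.bson.types.ObjectId;

public class ObjectIdToSolr {
    static SolrInputDocument convert(ObjectId mongoId, String title) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", mongoId.toHexString()); // 24-char hex form of the ObjectId
        doc.addField("title_s", title);
        return doc;
    }
}

Inside DIH, a ScriptTransformer should be able to do the same toString
conversion per row.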
On 3/17/2017 9:26 AM, Shawn Heisey wrote:
> On 3/17/2017 9:07 AM, Erick Erickson wrote:
>> "group.ngroups and group.facet require that all documents in each
>> group must be co-located on the same shard in order for accurate
>> counts to be returned."
> That is not how things work right now. The index has 1
Permissions would be my first guess.
Second guess: you killed Solr somehow (crash, kill -9, etc.) and it left the
write.lock file lying around. Shut down Solr and manually remove it.
Best,
Erick
On Tue, Mar 21, 2017 at 9:12 AM, David Hastings
wrote:
> AccessDeniedException
>
> I'm going to guess permissions; open them up when you're developing. Well,
> I do, to 777.
AccessDeniedException
I'm going to guess permissions; open them up when you're developing. Well, I
do, to 777.
On Tue, Mar 21, 2017 at 11:37 AM, HrDahl wrote:
> I am trying to create a Solr core on a Google Cloud Linux server using the
> Bitnami Launchpad. But when I try to create a new core it gives me the
> error message below.
I am trying to create a Solr core on a Google Cloud Linux server using the
Bitnami Launchpad. But when I try to create a new core it gives me the error
message below.
Can anyone see what is wrong?
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.Solr
You haven't really told us _how_ you are indexing, so I'm going to
make some comments that may be irrelevant...
At 600M documents, you'll almost certainly have to shard your index.
It sounds like you're doing the sharding yourself in Lucene by having
different Lucene indexes based on date. As you
Dear Sir/Madam, I am Li Wei, from China, and I'm writing to ask for your
help. Here is the problem I encountered:
Our project has a scheduled task that runs at night and uses Lucene to build
an index of data from an Oracle database. It was working fine at the
beginning; however, as the index
Hello Jan,
If I get you right, you need to find every word in either the parent or the
child level, hence:
q=+({!edismax qf=$pflds v=$w1} {!parent ..}{!edismax qf=$cflds v=$w1})
+({!edismax qf=$pflds v=$w2} {!parent ..}{!edismax qf=$cflds
v=$w2})...&w1=foo&w2=bar
Note that the spaces and the + signs matter a lot. This y
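Building that query from SolrJ might look like this sketch (the field lists
and the {!parent} filter "doc_type:parent" are hypothetical; the original
message elides the which= clause):

import org.apache.solr.client.solrj.SolrQuery;

public class ParentChildWordQuery {
    static SolrQuery build() {
        SolrQuery q = new SolrQuery();
        // Each word must match somewhere: parent fields OR (via block join) child fields.
        q.setQuery("+({!edismax qf=$pflds v=$w1} {!parent which=$pfilter}{!edismax qf=$cflds v=$w1}) "
                 + "+({!edismax qf=$pflds v=$w2} {!parent which=$pfilter}{!edismax qf=$cflds v=$w2})");
        q.set("pflds", "parent_title parent_body"); // assumed parent field list
        q.set("cflds", "child_title child_body");   // assumed child field list
        q.set("pfilter", "doc_type:parent");        // assumed parent filter
        q.set("w1", "foo");
        q.set("w2", "bar");
        return q;
    }
}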