I'd suggest raising a JIRA and linking to
https://issues.apache.org/jira/browse/SOLR-7759, but before that, see if
updating the statsCache settings in solrconfig.xml as described in
https://issues.apache.org/jira/browse/SOLR-1632 works.
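For reference, this is a one-line change in solrconfig.xml; a minimal sketch (whether to use ExactStatsCache, ExactSharedStatsCache, or LRUStatsCache depends on your setup):

  <statsCache class="org.apache.solr.search.stats.ExactStatsCache"/>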
Thanks,
Susheel
On Tue, Jun 13, 2017 at 5:16 PM, Zisis T. wrote:
Hi,
Can anyone confirm whether the "service --version" command works? When
installing Solr on a SUSE distribution, "service --version" always fails and
aborts the Solr installation with the error "Script requires the
'service' command".
To make it work, I had to change "service --versio
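As a quick sanity check before patching the installer, something like this shows whether 'service' is reachable at all (a sketch; exact paths vary by SUSE release):

  command -v service || echo "service not on PATH"
  ls -l /sbin/service /usr/sbin/service 2>/dev/null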
Hi,
I am trying to configure NLP search with Solr, using OpenNLP. I am able to
index the documents and extract named entities and POS tags using the
OpenNLP-UIMA support and also by using a UIMA update request processor
chain, but I am not able to write a query parser for the same. Is there
I have set up Solr 6.6.0 locally (local ZK and Solr) and then on servers (3
ZK nodes and 2 machines, 2 shards), and in both environments I see an
intermittent "column not found" error. The same query works sometimes and
fails at other times.
Is that a bug, or am I missing something...
Console
===
-> solr-6.6
Hi,
I'm using Solr 6.5.1.
Is it possible to have multiple hashJoins or innerJoins in a query?
An example would be something like this for innerJoin:
innerJoin(innerJoin(
search(people, q=*:*, fl="personId,name", sort="personId asc"),
search(pets, q=type:cat, fl="personId,petName", sort="pers
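To complete the picture, a fully nested form would look roughly like this (a sketch; the third collection and its fields are invented for illustration, and every stream has to stay sorted on the join key):

  innerJoin(
    innerJoin(
      search(people, q=*:*, fl="personId,name", sort="personId asc"),
      search(pets, q=type:cat, fl="personId,petName", sort="personId asc"),
      on="personId"
    ),
    search(addresses, q=*:*, fl="personId,city", sort="personId asc"),
    on="personId"
  )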
Are you able to reproduce the error, or is it just appearing in the logs?
Do you know the state of the index when it's occurring?
Joel Bernstein
http://joelsolr.blogspot.com/
On Wed, Jun 14, 2017 at 11:09 AM, Susheel Kumar
wrote:
> I have setup Solr-6.6-0 on local (local ZK and Solr) and then on s
Hi,
I am getting OutOfMemoryErrors after a while on Solr 6.3.0.
The -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/oom_solr.sh
just kills the JVM right after.
Using JConsole, I see the nice triangle pattern where the heap fills up and
is then reclaimed.
The heap size is set at
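For reference, on Solr 6.x the heap is typically set in bin/solr.in.sh; a sketch (the 4g below is only a placeholder, not the poster's actual value):

  SOLR_HEAP="4g"
  # or equivalently: SOLR_JAVA_MEM="-Xms4g -Xmx4g"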
Yes, Joel. Roughly every other command runs into this issue. I just
executed the queries below, and 3 of them failed while 1 succeeded. I
have only 6 documents ingested and no further indexing going on. Let me know
what else to look at regarding the state of the index.
➜ solr-6.6.0 curl --data-urlencode 'stmt
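For reference, the /sql handler is typically exercised like this (a sketch; the statement and collection name are placeholders, since the original command is cut off):

  curl --data-urlencode 'stmt=SELECT id, some_field FROM mycollection LIMIT 10' \
    "http://localhost:8983/solr/mycollection/sql"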
Also, I tried with both docValues and non-docValues fields/columns.
On Wed, Jun 14, 2017 at 11:42 AM, Susheel Kumar
wrote:
> Yes, Joel. Kind of every other command runs into this issue. I just
> executed below queries and 3 of them failed while 1 succeeded. I just
> have 6 documents ingested and
You may have GC logs saved from when the OOM happened. Can you plot them in
GCViewer or a similar tool and share?
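If Solr was started with the stock 6.x scripts, the GC log is usually already being written next to the other logs; a sketch (the install path is copied from your message, the rest assumes the default layout):

  ls /sanfs/mnt/vol01/solr/solr-6.3.0/server/logs/solr_gc.log*
  # GC logging flags live under GC_LOG_OPTS in bin/solr.in.sh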
Thnx
On Wed, Jun 14, 2017 at 11:26 AM, Satya Marivada
wrote:
> Hi,
>
> I am getting Out of Memory Errors after a while on solr-6.3.0.
> The -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/oom
I have seen this with very few indexed documents and multiple shards.
In such a case, some shards may not have any documents, and when the query
happens to hit such a shard, it does not find the fields it's looking for
and turns this into "column not found". If you resubmit the query and hit
a diff
Thanks, Yury. Indeed, that is the issue.
Joel, is this expected behavior, or should I create a JIRA?
Thanks,
Susheel
On Wed, Jun 14, 2017 at 12:16 PM, Yury Kats
wrote:
> I have seen this with very few indexed documents and multiple shards.
> In such a case, some shards may not have a
Susheel, please see attached. The heap towards the end of the graph has
spiked.
On Wed, Jun 14, 2017 at 11:46 AM Susheel Kumar
wrote:
> You may have gc logs saved when OOM happened. Can you draw it in GC Viewer
> or so and share.
>
> Thnx
>
> On Wed, Jun 14, 2017 at 11:26 AM, Satya Marivada <
>
The attachment will not come through. Can you upload it via Dropbox or
another file-sharing site?
On Wed, Jun 14, 2017 at 12:41 PM, Satya Marivada
wrote:
> Susheel, Please see attached. There heap towards the end of graph has
> spiked
>
>
>
> On Wed, Jun 14, 2017 at 11:46 AM Susheel Kumar
> wrote:
>
Let's create a jira for this.
Joel Bernstein
http://joelsolr.blogspot.com/
On Wed, Jun 14, 2017 at 12:26 PM, Susheel Kumar
wrote:
> Thanks, Yury. Indeed that is the issue.
>
> Joel, is that something expected behavior or should i create a JIRA?
>
> Thanks,
> Susheel
>
> On Wed, Jun 14, 2017 a
We are replacing a drive mounted at /old with one mounted at /new. Our
index currently lives on /old, and our plan was to:
1. Create a new index on /new
2. Reindex from our database so that the new index on /new is properly
populated.
3. Stop solr.
4. Symlink /old to /new (Solr now looks for the i
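Steps 3-4 would come down to something like this (a sketch, assuming Solr is run as a service and the old drive is unmounted from /old first):

  sudo service solr stop
  # unmount the old drive from /old, then:
  sudo ln -s /new /old
  sudo service solr start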
Hello,
I am using Apache Solr version 6.6.0 but can't upload a PDF file to a core.
The instructions and example were taken from
https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Solr+Cell+using+Apache+Tika
I added to solrconfig.xml the additional paths to the /dist/ and /contrib/extraction jar
fil
I don't have an answer to why the folder got cleared, but I am wondering
why you aren't using basic replication to do this exact same thing, since
Solr will natively take care of all this for you with no interruption to
the user and no stop/start routines, etc.
On Wed, Jun 14, 2017 at 2:26 PM, Mi
Are you physically swapping the disks to introduce the new index? Or having
both disks mounted at the same time?
If the disks are simultaneously available, can you just swap the cores and
then delete the core on the old disk?
https://cwiki.apache.org/confluence/display/solr/CoreAdmin+API#CoreAdmin
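If both disks (and cores) are visible to the same Solr instance, the swap itself is a single CoreAdmin call; a sketch (core names are placeholders):

  curl "http://localhost:8983/solr/admin/cores?action=SWAP&core=core_on_new&other=core_on_old"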
Created JIRA https://issues.apache.org/jira/browse/SOLR-10890
Thank you.
On Wed, Jun 14, 2017 at 1:59 PM, Joel Bernstein wrote:
> Let's create a jira for this.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Wed, Jun 14, 2017 at 12:26 PM, Susheel Kumar
> wrote:
>
> > Thanks, Yury.
Try using the curl command directly in a terminal/console and it will work. I
just tried it on 6.6 on a Mac. The upload through the UI will not work for PDFs
unless more parameters are provided, though the UI upload does work
directly for XML/JSON files, etc.
curl '
http://localhost:8983/solr/techproduct
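A typical invocation looks roughly like this (the core name follows the techproducts example; the file path and literal.id are placeholders):

  curl "http://localhost:8983/solr/techproducts/update/extract?literal.id=pdf1&commit=true" \
    -F "myfile=@/path/to/your-file.pdf"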
Hi,
I think I've found a bug with the highlighter. I search for the word
"something" and I get an empty highlighting response for all the documents that
are returned, as shown below. The fields I am searching over are text_en, and the
highlighter works for a lot of other queries. I have no stopwords.tx
I figured Solr would have a native system built in, but since we don't use
it already, I didn't want to learn all of its ins and outs just for this
disk situation.
The same, essentially, applies to the swapping strategy. We don't have a Solr
expert, just me, a generalist, and sorting out these kinds
If the default operator is OR, then you're just matching on the "like"
word and it's being properly highlighted. If you're saying that doc
286 (or whatever) has both "something" and "like" in the content and
you expect to find them both, try increasing the number of snippets
returned.
Otherwise we
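For example, bumping the snippet count looks like this (the field name is a placeholder):

  &hl=true&hl.fl=content&hl.snippets=5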
Why not just use the replication API and fetchindex? See:
https://cwiki.apache.org/confluence/display/solr/Index+Replication#IndexReplication-HTTPAPICommandsfortheReplicationHandler.
It's not entirely obvious from the writeup, but you can specify
masterUrl as part of the command &masterUrl=some_o
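Roughly like this (hosts and core names are placeholders):

  curl "http://new-host:8983/solr/mycore/replication?command=fetchindex&masterUrl=http://old-host:8983/solr/mycore"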
Hi,
I am using Apache Solr to do advanced searching on my big data.
When I create a Solr core, the text field by default gets the TextField
data type and class.
Can you please tell me how to change TextField to StrField? My table
contains records in English as well as Chin
Hi,
To respond to your first question, "How do I get SortedSetDocValues from the index by
field name?": DocValues.getSortedSet(LeafReader reader, String field) (which is
what you want to use to assert the existence and type of the DV) will give you
the DV instance for a single leaf reader. In general,
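In code that looks roughly like the following (a sketch; the field name is invented, the classes come from org.apache.lucene.index, and the exact behaviour varies slightly across Lucene versions):

  // assumes an open IndexReader named indexReader and a SORTED_SET doc-values field "myDvField"
  for (LeafReaderContext ctx : indexReader.leaves()) {
    SortedSetDocValues dv = DocValues.getSortedSet(ctx.reader(), "myDvField");
    // dv is never null; an empty instance is returned if this leaf has no values for the field
  }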
I have found that this is possible, but currently I have problems if the
field names to join on are different across the 3 collections.
For example, in the "people" collection it is called personId, and in the
"pets" collection it is called petsId. But in "collectionId" it is called
collectionName, but
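If it is only the field names that differ, the on parameter accepts a mapping; a sketch reusing the earlier collections (petsId is taken from your description, the rest is illustrative):

  innerJoin(
    search(people, q=*:*, fl="personId,name", sort="personId asc"),
    search(pets, q=type:cat, fl="petsId,petName", sort="petsId asc"),
    on="personId=petsId"
  )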
If I try
/getsolr?
fl=id,title,datasource,score&hl=true&hl.maxAnalyzedChars=9000&hl.method=unified&q=Wainui-1&q.op=AND&wt=csv
The response I get is:
id,title,datasource,score
W:\PR_Reports\OCR\PR869.pdf,,Petroleum Reports,8.233313
W:\PR_Reports\OCR\PR3440.pdf,,Petroleum Reports,8.217836
W:\PR_
Just had a similar issue - works for some, not others. The first thing to look at is
hl.maxAnalyzedChars in the query. The default is quite small.
Since many of my documents are large PDF files, I opted to use
storeOffsetsWithPositions="true" termVectors="true" on the field I was
searching on.
This ce
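Concretely, that amounts to something like this in the schema (the field name and type here are placeholders), together with a larger hl.maxAnalyzedChars on the request, e.g. &hl.maxAnalyzedChars=500000 (value illustrative):

  <field name="content" type="text_en" indexed="true" stored="true"
         storeOffsetsWithPositions="true" termVectors="true"/>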
Back up a bit and tell us why you want to use StrField, because what
you're trying to do is somewhat confused.
First of all, StrFields are totally unanalyzed, so defining an analyzer
as part of a StrField type definition is totally
unsupported. I'm a bit surprised that Solr even starts up.
Second, you ca
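For completeness, a plain string field is declared along these lines in the schema (names are placeholders):

  <fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
  <field name="title_str" type="string" indexed="true" stored="true"/>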
Thanks, Erick, for the great explanation.
The issue with my data is as follows:
I have a few rows in my books table.
cqlsh:nandan> select * from books;
 id | author | date | isbn | solr_query | title
----+--------+------+------+------------+-------
> Beware of NOT plus OR in a search. That will certainly produce no
> highlights. (e.g. "test -results" when the default op is OR)
Seems like a bug to me; the default operator shouldn't matter in that case,
I think, since there is only one clause that has no BooleanQuery.Occur
operator and thus the OR/AND sho
Hi All,
I am using Solr 4.10.4 and our shards are frequently going down. Whenever
one goes down, we delete the data folder and restart the server, and it
recovers the data from a replica. Any idea why the shards are going down? Any
suggestions? Is there a known defect in the 4.10.4 version?
-
Thanks,
Ramesh.
-