Hi Steve,
Thanks for the support. I tried the cases below, but I'm still not able to get
the expected results.
Case 1 :
Input : bizNameAr:شرطة + ازكي
Output : {
"responseHeader": {
"status": 0,
"QTime": 1,
"params": {
"indent": "true",
"q": " bizNameAr:شرطة + ازكي",
Hi Bernd,
Can you please double check?
I downloaded the 6.4.2 tarball and see that they have 6.4.2:
[ishan@ishanvps solr-6.4.2]$ grep -rn "luceneMatchVersion" * | grep solrconfig.xml
CHANGES.txt:1474: or
your luceneMatchVersion in the solrconfig.xml is less than 6.0
docs/changes/Changes.html:1694
Hi,
Just to check, is an index that was built with Solr 6.4.1 affected by the
bug? Do we have to re-index those records when we move to Solr 6.4.2?
Regards,
Edwin
On 9 March 2017 at 12:49, Shawn Heisey wrote:
> On 3/8/2017 2:36 AM, Bernd Fehling wrote:
> > Shouldn't in server/solr/configset
On 3/8/2017 2:36 AM, Bernd Fehling wrote:
> Shouldn't in server/solr/configsets/.../solrconfig.xml
> 6.4.1
> really read
> 6.4.2
>
> May be something for package builder for future releases?
That does look like it got overlooked, and is generally something that
SHOULD be changed with each new vers
I am just trying to clarify whether there is a bug here in Solr. It seems
that when Solr translates SQL into the underlying Solr query, it puts
parentheses around "NOT" clause expressions. But that does not seem to be
working correctly and is not returning the expected results. If parentheses
around th
From the first clause, it seems similar to
https://wiki.apache.org/solr/NegativeQueryProblems
Regards,
Alex.
http://www.solr-start.com/ - Resources for Solr users, new and experienced
On 8 March 2017 at 22:34, Sundeep T wrote:
> Hi,
>
> I am using solr 6.3 version.
>
> We are seeing iss
What _exactly_ are you testing? It's unclear whether you're asking
about general Lucene/Solr syntax or some of the recent streaming SQL
work.
On Wed, Mar 8, 2017 at 7:34 PM, Sundeep T wrote:
> Hi,
>
> I am using solr 6.3 version.
>
> We are seeing issues involving NOT clauses when they are paired
Hi,
I am using solr 6.3 version.
We are seeing issues involving NOT clauses when they are paired in boolean
expressions. The issues specifically occur when the “NOT” clause is surrounded
by parentheses.
For example, the following solr query does not return any results -
(timestamp:[* TO "2017-
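A common workaround for this class of problem (a sketch with illustrative field names, not the actual query from this thread) is to put an explicit match-all `*:*` inside the parenthesized negative clause, because a purely negative clause matches nothing on its own:

```
field_a:foo AND (NOT field_b:bar)       <- negative-only clause; selects nothing by itself
field_a:foo AND (*:* NOT field_b:bar)   <- *:* gives the NOT something to subtract from
```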
Hi pub
You need to Google CORS (cross-origin resource sharing). But there is no need
to worry about CORS if all of the JavaScript for the Solr site is on the Solr
server. But as others have said, it is best to have some PHP or Python UI in
front of Solr.
Cheers
Rick
On March 8, 2017 2:11:36 PM EST, pubdiv
On 3/8/2017 12:11 PM, pubdiverses wrote:
> I have a site https://site.com under Apache.
> On the same physical server, i've installed solr.
>
> Inside https://site.com, I've a search form which calls Solr with
> http://xxx.xxx.xxx.xxx/solr.
>
> But the browser says: "mixed content" and blocks the cal
On 3/6/2017 9:06 AM, Chris Ulicny wrote:
> We've recently had some issues with a 5.1.0 core copying the whole index
> when it was set to replicate from a master core.
>
> I've read that if there are documents that have been added to the slave
> core by mistake, it will do a full copy. Though we are
Getting streaming expression and SQL working in non-SolrCloud mode is my
top priority right now.
I'm testing the first parts of
https://issues.apache.org/jira/browse/SOLR-10200 today and will be
committing soon. The first functionality delivered will be the
significantTerms Streaming Expression. H
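For reference, a significantTerms call looks roughly like this (a sketch; collection, query, and field names are illustrative, and the parameter set may differ in the committed version):

```
significantTerms(collection1, q="body:solr", field="author_s", limit=10)
```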
How are you updating? All the stored stuff assumes "Atomic Updates".
On Wed, Mar 8, 2017 at 11:15 AM, Alexandre Rafalovitch
wrote:
> Uhm, actually, If you have copyField from multiple sources into that
> _text_ field, you may be accumulating/duplicating content on update.
>
> Check what ha
What we are suggesting is that your browser does NOT access Solr directly at
all. In fact, configure the firewall so that Solr is unreachable from outside
the server. Instead, you write a proxy in your site application which calls
Solr. I.e. a server-to-server call instead of browser-to-server. This
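A minimal sketch of that pattern in Python (the host, core name, and function names here are assumptions for illustration, not from this thread): the web application exposes its own search endpoint and makes the Solr call server-to-server, so the browser never talks to Solr.

```python
# Hypothetical server-side proxy helper: the browser calls the web app,
# and only the web app talks to Solr.
from urllib.parse import urlencode
from urllib.request import urlopen

SOLR_BASE = "http://localhost:8983/solr/mycore/select"  # illustrative host/core

def build_solr_url(q, rows=10):
    """Build the Solr select URL the proxy will request server-to-server."""
    return SOLR_BASE + "?" + urlencode({"q": q, "rows": rows, "wt": "json"})

def search(q):
    """Called from the web app's own search endpoint; returns raw Solr JSON."""
    with urlopen(build_solr_url(q)) as resp:  # server-to-server call
        return resp.read()
```

With Solr firewalled off from the outside, the browser only ever sees the site's own HTTPS origin, which also sidesteps the mixed-content problem.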
Hello,
Yes, I was trying to use it with a non-cloud setup.
Basically, our application probably won't be requiring cloud features;
however, it would be extremely helpful to use JDBC with Solr.
Of course, we don't mind using SolrCloud if that's what is needed for JDBC.
Are there any drawbacks to
I don't have an answer to the original question, but I would like to point
out that work is being done to make streaming available outside of
SolrCloud under ticket https://issues.apache.org/jira/browse/SOLR-10200.
- Dennis
On Wed, Mar 8, 2017 at 2:13 PM, Alexandre Rafalovitch
wrote:
> I believ
Uhm, actually, if you have copyField from multiple sources into that
_text_ field, you may be accumulating/duplicating content on update.
Check what happens to the content of that _text_ field when you do
full-text and then do an attribute update.
If I am right, you may want to have a separate "o
I believe JDBC requires streams, which requires SolrCloud, which
requires Collections (even if it is a single-core collection).
Are you trying to use it with non-cloud setup?
Regards,
Alex.
http://www.solr-start.com/ - Resources for Solr users, new and experienced
On 8 March 2017 at 14:
Hello,
I give you some more explanation.
I have a site https://site.com under Apache.
On the same physical server, I've installed Solr.
Inside https://site.com, I've a search form which calls Solr with
http://xxx.xxx.xxx.xxx/solr.
But the browser says: "mixed content" and blocks the call.
So,
Hello,
From the examples I am seeing online and in the reference guide (
https://cwiki.apache.org/confluence/display/solr/Solr+JDBC+-+SQuirreL+SQL),
I can only see Solr JDBC being used against a collection. Is it possible
however to use it with a core? What should the JDBC URL be like in that
c
Guys
A BIG thank you, it works perfectly!!!
After so much research I finally got my solution working.
That was the trick, _text_ is stored and it’s working as expected.
Have a very nice day and thanks a lot for your contribution.
Really appreciated
Nico
> On 8 Mar 2017, at 18:26, Nicolas Boui
Hi,
in TermsComponent.java's createShardQuery, the motive for setting
terms.limit to -1 is clearly explained in a Java comment,
but we have a use case where we have thousands of terms and we want each core
to return only the number specified by terms.limit.
Can we have two flavours of TermsComponent
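For reference, a plain single-core terms request does honour terms.limit (host, core, and field names here are illustrative):

```
http://localhost:8983/solr/mycore/terms?terms=true&terms.fl=name_s&terms.limit=10
```

The question above is about the distributed case, where the coordinating code sets terms.limit=-1 on the per-shard requests.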
Hi Erick, Shawn,
Thanks a lot for your swift reaction, it’s fantastic.
Let me answer both your answers:
1) the df entry in solrconfig.xml has not been changed:
_text_
2) when I do a query for full-text search I don’t specify a field, I just enter
the string I’m looking for in the q paramete
Hi Shawn,
These are the facts:
With Solr 6.4.1, we started the optimisation of a 200 GB index with 67 segments.
This did not trigger replication. It took a few days. We confirmed that the
bottleneck was the CPU (optimisation is not parallelised).
We manually triggered replication of the optimis
bq: I wonder if it won’t be simpler for me to write a custom handler
Probably not, that would be Java too ;)...
OK, back up a bit. You can change your schema such that the full-text
field _is_ stored, I don't quite know what the default field is from
memory, but you must be searching against it ;
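A sketch of that schema change, assuming the default `_text_` catch-all field mentioned elsewhere in the thread (check your own schema for the exact field and type names):

```xml
<!-- default schemas typically ship _text_ with stored="false";
     flipping it to stored="true" makes the indexed full text retrievable -->
<field name="_text_" type="text_general" indexed="true" stored="true" multiValued="true"/>
```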
During the replication, check the disk, network, and CPU utilization. One of
them is the bottleneck.
If the disk is at 100%, you are OK. If the network is at 100%, you are OK. If
neither of them is at 100% and there is lots of CPU used (up to 100% of one
core), then Solr is the bottleneck and i
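On Linux, the usual tools for that check are along these lines (the 5-second interval is arbitrary):

```
iostat -x 5    # disk utilization per device (watch the %util column)
sar -n DEV 5   # network throughput per interface
top            # CPU; watch for a single core pinned at 100%
```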
On 3/8/2017 9:22 AM, Nicolas Bouillon wrote:
> - I checked in the schema, all the fields that matter (Tika default
> extracted fields) and my customer fields are stored=true. - I suppose
> that the full-text index is not stored in a field? And
When you do a full text query, what field/fields are
On 3/8/2017 5:30 AM, Caruana, Matthew wrote:
> After upgrading to 6.4.2 from 6.4.1, we’ve seen replication time for a
> 200 GB index decrease from 45 hours to 1.5 hours.
Just to check how long it takes to move a large amount of data over a
network, I started a copy of a 32GB directory over a 100Mb
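As a back-of-envelope check on numbers like these (assuming an ideal 100 Mbit/s link with no protocol overhead, which real transfers won't quite reach):

```python
# Rough transfer time: total bytes divided by link throughput in bytes/sec.
def transfer_seconds(size_gb: float, link_mbit: float) -> float:
    bytes_total = size_gb * 1024**3            # GiB -> bytes
    bytes_per_sec = link_mbit * 1_000_000 / 8  # Mbit/s -> bytes/s
    return bytes_total / bytes_per_sec

# 32 GB over 100 Mbit/s works out to roughly 45 minutes.
minutes = transfer_seconds(32, 100) / 60
```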
Hi Erick
Thanks a lot for the elaborated answer. Let me give some precisions:
1. I upload the docs using an AJAX post multiform to my server.
2. The PHP target of the post, takes the file and stores it on disk
3. If the file is moved successfully from TEMP files to final destination, I
then call
Nico:
This is the place for such questions! I'm not quite sure of the source
of the docs. When you say you "extract", does that mean you're using
the ExtractingRequestHandler, i.e. uploading PDF or Word etc. to Solr
and letting Tika parse it out? IOW, where is the fulltext coming from?
For adding t
Dear SOLR friends,
I developed a small ERP. I produce PDF documents linked to objects in my ERP:
invoices, timesheets, contracts, etc...
I have also the possibility to attach documents to a particular object and when
I view an invoice for instance, I can see the attached documents.
Until now, I
Caruana:
Thanks for that info.
Do you know offhand how that 1.5 hours compares to earlier versions?
I'm wondering if there is further work to be done here or are we back
to previous speeds.
Thanks
Erick
On Wed, Mar 8, 2017 at 4:30 AM, Caruana, Matthew wrote:
> After upgrading to 6.4.2 from 6.4
Are you perhaps indexing at the same time from a source other than
DIH? Because the commit is global and all the changes from all the
sources will become visible.
Check the access logs perhaps to see the requests to /update handler or similar.
Regards,
Alex.
http://www.solr-start.com/
Hey Vincent,
The feature store and model store are both Solr Managed Resources. To
propagate managed resources in distributed mode, including managed
stopwords and synonyms, you have to issue a collection reload command. The
Solr reference guide of Managed Resources has a bit more on it in the
A
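The reload in question is the standard Collections API call (collection name illustrative):

```
http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection
```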
Good Morning List!
I have an issue where my DIH full-import is committed after a minute of indexing.
My counts will fall from around 400K to 85K until the import is finished,
usually about four (4) minutes later.
This is problematic for us as there are 315K missing items in our searches.
Version
Hello, Frank.
It's not clear what the field is. I guess that per-shard {!child}
results might clash by id during merge. Can you make sure that child
ids are unique across all shards?
On Mon, Mar 6, 2017 at 10:47 PM, Kelly, Frank wrote:
> Hi Mikhail,
> Sorry I didn’t reply sooner
>
> Here are
Hi,
Thank you so much.
On Wed, Mar 8, 2017 at 1:58 PM, Mikhail Khludnev wrote:
> Hello, Chitra.
>
> Check this http://yonik.com/multi-select-faceting/ and
> https://wiki.apache.org/solr/SimpleFacetParameters#Multi-
> Select_Faceting_and_LocalParams
>
>
> On Wed, Mar 8, 2017 at 7:09 AM, Chitr
After upgrading to 6.4.2 from 6.4.1, we’ve seen replication time for a 200 GB
index decrease from 45 hours to 1.5 hours.
> On 7 Mar 2017, at 20:32, Ishan Chattopadhyaya wrote:
>
> 7 March 2017, Apache Solr 6.4.2 available
>
> Solr is the popular, blazing fast, open source NoSQL search platform
Shouldn't in server/solr/configsets/.../solrconfig.xml
6.4.1
really read
6.4.2
May be something for package builder for future releases?
Regards
Bernd
Am 07.03.2017 um 20:32 schrieb Ishan Chattopadhyaya:
> 7 March 2017, Apache Solr 6.4.2 available
>
> Solr is the popular, blazing fast, open sou
Hi all,
It seems that the curl commands from the LTR wiki
(https://cwiki.apache.org/confluence/display/solr/Learning+To+Rank) to
post and/or delete features from and to the feature store only affect
one shard instead of the entire collection. For example, when I run:
|curl -XDELETE
'http://
Hello, Chitra.
Check this http://yonik.com/multi-select-faceting/ and
https://wiki.apache.org/solr/SimpleFacetParameters#Multi-Select_Faceting_and_LocalParams
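In short, the pattern from those links is to tag the filter and exclude that tag when faceting, so the facet counts ignore the filter (field and tag names illustrative):

```
q=*:*&fq={!tag=COLOR}color_s:red&facet=true&facet.field={!ex=COLOR}color_s
```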
On Wed, Mar 8, 2017 at 7:09 AM, Chitra wrote:
> Hi,
> I am new to Solr. Recently we have been digging into drill-sideways search
> (fo