5 May 2016, Apache Solr™ 5.5.1 available
The Lucene PMC is pleased to announce the release of Apache Solr 5.5.1
Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting, faceted sear
Right, this is a known issue. There is currently an active JIRA that you
may like to watch: https://issues.apache.org/jira/browse/SOLR-5379
Another possible workaround is explained here:
https://lucidworks.com/blog/2014/07/12/solution-for-multi-term-synonyms-in-lucenesolr-using-the-auto-phrasing
Hi All -
Can you please help me out with multi-word synonyms in Solr 4.3.1?
I am using synonyms as below:
test1,test2 => movie1 cinema,movie2 cinema,movie3 cinema
I am able to succeed with the above syntax, i.e. if I search for
words like test1 or test2 then the right-hand side mu
Hello Bastien,
As far as I remember, fq is executed after the usual query!
-- --
From: "Bastien Latard - MDPI AG"
Date: 6 May 2016 (Fri) 1:54
To: "solr-user"
Subject: fq behavior...
Hi guys,
Just a quick question, that I did not find an ea
Thank you Susmit, so the answer is:
fq queries are by default run before the main query.
kr,
Bast
On 06/05/2016 07:57, Susmit Shukla wrote:
Please take a look at this blog, specifically "Leapfrog Anyone?" section-
http://yonik.com/advanced-filter-caching-in-solr/
Thanks,
Susmit
On Thu, May 5,
thank you ,Jay Potharaju
I made a discovery: in the same Solr core I put two kinds of docs, which
means that they do not have the same fields. Does this mean that different
kinds of docs cannot be put into the same Solr core?
thanks!
max mi
--
Please take a look at this blog, specifically "Leapfrog Anyone?" section-
http://yonik.com/advanced-filter-caching-in-solr/
Thanks,
Susmit
On Thu, May 5, 2016 at 10:54 PM, Bastien Latard - MDPI AG <
lat...@mdpi.com.invalid> wrote:
> Hi guys,
>
> Just a quick question, that I did not find an easy
Hi guys,
Just a quick question, that I did not find an easy answer.
1.
Is the fq "executed" before or after the usual query (q)?
e.g.: select?q=title:"something really specific"&fq=bPublic:true&rows=10
Would it first:
* get all the "specific" results, and then apply the filter
Hi,
Can you please help? If there is a solution then it will be easy, else I have
to create a script in python that can process the results from
TermVectorComponent and group the result by words in different documents to
find the word count. The Python script will accept the exported Solr resul
Thank you Shawn!
So if I run the two following requests, it will only store 7.5 MB once,
right?
- select?q=*:*&fq=bPublic:true&rows=10
- select?q=field:my_search&fq=bPublic:true&rows=10
kr,
Bast
On 04/05/2016 16:22, Shawn Heisey wrote:
On 5/3/2016 11:58 PM, Bastien Latard - MDPI AG wrote:
T
Hi all,
I ran a query via the Solr admin UI, but the response is not what I expected!
My steps follow.
First step: get all data.
http://127.0.0.1:8080/solr/example/select?q=*%3A*&wt=json&indent=true
response follows:
"response": { "numFound": 5, "start": 0, "docs": [ {
Hello All,
I am using Solr 6.0.0 in cloud mode and have a requirement to support all
numbers as BigDecimal.
Does anyone know which Solr field type should be used for BigDecimal?
I tried using TrieDoubleField but it does not meet the requirement and rounds
up very big numbers after approximately 16 digits.
Solr 6 or Solr 5.5, right?
docValues now return the values, even if stored=false. That's probably
what you are hitting. Check release notes (under 5.5 I believe) for
more details.
Regards,
Alex.
Newsletter and resources for Solr beginners and intermediates:
http://www.solr-start.com/
On
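The behavior Alex describes comes from docValues fields: since Solr 5.5, the useDocValuesAsStored flag (default true) makes docValues-backed fields appear in results even when stored="false". A schema.xml sketch; the field name is an assumption:

```xml
<!-- schema.xml: with useDocValuesAsStored (default true since Solr 5.5),
     this field is returned in results despite stored="false" -->
<field name="my_id" type="string" indexed="true" stored="false"
       docValues="true" useDocValuesAsStored="true"/>
```

Setting useDocValuesAsStored="false" restores the pre-5.5 behavior for that field.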
Thanks Joel ,
I have created JIRA issue. please let me know if any feedback.
https://issues.apache.org/jira/browse/SOLR-9077
On Thu, May 5, 2016 at 2:38 PM, Joel Bernstein wrote:
> Yes, this needs to be supported. If you open up a ticket, I'll be happy to
> review.
>
> Joel Bernstein
> http://
Please show us:
1> a sample doc that you expect to be returned
2> the results of adding '&debug=query' to the URL
3> the schema definition for the field you're querying against.
It is likely that your query isn't quite what you think it is, it is going
against a different field than you think, or your
Or perhaps the TermsQueryParser? See:
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-TermsQueryParser
Best,
Erick
On Thu, May 5, 2016 at 11:13 AM, Ahmet Arslan wrote:
> Hi,
>
> Wow thats a lot of IDs. Where are they coming from?
> May be you can consider using join o
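The TermsQueryParser Erick points at takes a comma-separated list of terms and avoids the per-clause overhead of a huge boolean OR. A minimal sketch of building such a request in Python; the base URL and field name are assumptions:

```python
from urllib.parse import urlencode

def terms_query_url(base_url, field, ids, rows=10):
    """Build a Solr select URL using the {!terms} query parser, which
    accepts a comma-separated term list instead of a large boolean OR."""
    q = "{!terms f=%s}%s" % (field, ",".join(ids))
    return base_url + "/select?" + urlencode({"q": q, "rows": rows})

url = terms_query_url("http://localhost:8983/solr/example", "id",
                      ["101", "102", "103"])
print(url)
```

Because the terms are a single local-params value, this also sidesteps the maxBooleanClauses limit that a giant `id:(a OR b OR ...)` query would hit.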
Well, first of all I would use separate fq clauses. IOW:
fq=filter(fromfield:[* TO NOW/DAY+1DAY]&& tofield:[NOW/DAY-7DAY TO *] &&
type:"abc")
would probably be better written as:
fq=fromfield:[* TO NOW/DAY+1DAY]&fq=tofield:[NOW/DAY-7DAY TO *]&fq=type:"abc"
You can always spoof the * at the _en
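The difference between the two forms Erick shows is cache granularity. A sketch of both as URL parameters, assuming hypothetical field names from the thread:

```python
from urllib.parse import urlencode

base = "http://localhost:8983/solr/example/select"

# One combined clause: cached as a single filter-cache entry, so a query
# that shares only some of the sub-clauses gets no cache reuse at all.
combined = urlencode([
    ("q", "*:*"),
    ("fq", "fromfield:[* TO NOW/DAY+1DAY] AND tofield:[NOW/DAY-7DAY TO *] AND type:abc"),
])

# Separate clauses: each fq is cached independently, so fq=type:abc can
# be reused by any other query filtering on the same type.
separate = urlencode([
    ("q", "*:*"),
    ("fq", "fromfield:[* TO NOW/DAY+1DAY]"),
    ("fq", "tofield:[NOW/DAY-7DAY TO *]"),
    ("fq", "type:abc"),
])

print(base + "?" + separate)
```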
OK, this is strange on the face of it. Is there any chance you could
create a test case that fails? Even if it only fails a small percentage
of the time...
Best,
Erick
On Wed, May 4, 2016 at 3:02 AM, Modassar Ather wrote:
> The "val1" is same for both the test with limit 100 and 200 so the
> fol
Hi Nick,
Thanks for the reply. Actually:
q="software engineering" -> doc1
q="software engineer" -> no results
q="Software engineer" -> doc2
I hope the above test cases explain my requirements further. So far I'm
thinking of changing the qf according to whether the query has enclosing
double quotations.
My reading is that this whole thing is a content farm, automatically
generating websites based on user queries against some sort of
internal database of documents (PDFs, ebooks, etc. perhaps).
The goal seems to be SEO rather than user experience.
Regards,
Alex.
Newsletter and resources
Lasitha,
I think I understand what you are asking and if you have something like
Doc1 = software engineering
Doc2 = Software engineer
And if you query
q=software+engineer -> Doc1 & Doc2
but
q="software+engineer" -> Doc1
Correct?
If this is correct then to my knowledge no, Solr out of the box
Joel Bernstein wrote
>> Can you post your classpath?
classpath as follows:
solr-solrj-6.0.0
commons-io-2.4
httpclient-4.4.1
httpcore-4.4.1
httpmime-4.4.1
zookeeper-3.4.6
stax2-api-3.1.4
woodstox-core-asl-4.4.1
noggit-0.6
jcl-over-slf4j-1.7.7
slf4j-api-1.7.7
-
Clever, but it doesn't work... Work!
Hi nd,
Here's the issue. Let's say I search "Software Engineer". For example,
let's say this query will return 10 results when searched against the ngram
field. Now I search "Software Engineer" with double quotations. This should
not return the same result set as the previous query.
I thought the que
I am grouping documents on a field and would like to retrieve documents
where the number of items in a group matches a specific value or a range.
I haven't been able to experiment with all new functionality, but I wanted
to see if this is possible without having to calculate the count and add it
a
We implemented something similar, it sounds like, to what you are asking, but I
don't see why you would need to match the original field. Technically,
a field that has *software engineer* indexed is matched by a query like
"software eng" through "software engineer" with the ngrams, which makes the
exac
Hi,
It depends on your re-use patterns. Query supplied to the filter query (fq)
will be the key of the cache map. Subsequent filter query with an existing key
will be served from cache.
For example, let's say that you always use these two clauses together.
Then it makes sense to use fq=+fromfield
Yes, this needs to be supported. If you open up a ticket, I'll be happy to
review.
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, May 5, 2016 at 2:24 PM, sudsport s wrote:
> I tried to run solr streaming expression using collection alias , I get
> null pointer expression. after looking at
I have almost 50 million docs and growing ...that being said in a high
query volume case does it make sense to use
fq=filter(fromfield:[* TO NOW/DAY+1DAY]&& tofield:[NOW/DAY-7DAY TO *] &&
type:"abc")
OR
fq=filter(fromfield:[* TO NOW/DAY+1DAY]&& tofield:[NOW/DAY-7DAY TO *] )
fq=filter(type:abc)
Are you suggesting rewriting it like this ?
fq=filter(fromfield:[* TO NOW/DAY+1DAY]&& tofield:[NOW/DAY-7DAY TO *] )
fq=filter(type:abc)
Is this a better use of the cache as opposed to fq=fromfield:[* TO
NOW/DAY+1DAY]&& tofield:[NOW/DAY-7DAY TO *] && type:"abc"?
Thanks
On Thu, May 5, 2016 at 12:5
Hi,
1. I was doing some exploration and wanted to know if
the id field is always stored even when I set stored
= false.
2. Also, even though I removed dynamic fields, anything tagged *_id is
getting stored despite marking that field stored = false.
Where string is defined as:
On Thu, May 5, 2016 at 3:21 PM, Duane Rackley
wrote:
> My team is switching from a custom statistics patch to using Stats Component
> in Solr 5.4.1. One of the features that we haven't been able to replicate in
> Stats Component is an upper and lower fence.
With the JSON Facet API, you can do a
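The message is cut off, but the technique it starts to describe can be sketched: with the JSON Facet API, a "query"-type facet narrows the stats domain to docs inside the fences without filtering the main result set. The field name and bounds below are made-up illustrations:

```python
import json

# A "query" facet computes sub-facet stats only over docs matching its q,
# acting as upper/lower fences around the data fed to the stats.
json_facet = {
    "fenced_stats": {
        "type": "query",
        "q": "price:[10 TO 100]",      # the lower/upper fence
        "facet": {
            "avg_price": "avg(price)",
            "max_price": "max(price)",
        },
    }
}
params = {"q": "*:*", "rows": 0, "json.facet": json.dumps(json_facet)}
print(params["json.facet"])
```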
Yes, Nick, I am using the chroot to share the ZK for different instances.
On Thu, May 5, 2016 at 3:08 PM, Nick Vasilyev
wrote:
> Just out of curiosity, are you sharing the zookeepers between the
> different versions of Solr? If so, are you specifying a zookeeper chroot?
> On May 5, 2016 2:
Hi,
The cache's enemy is not * but NOW. Since you round it to DAY, the cache will
work within the day.
I would use separate filter queries, especially fq=type:abc for the structured
query, so it will be cached independently.
Also consider disabling caching (using cost) in expensive queries:
http://yonik.com/ad
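As a sketch of the two points above: rounding NOW to /DAY keeps the fq text identical all day so the filter cache can actually be hit, and an expensive clause can opt out of caching via local params. The frange clause below is illustrative; frange implements PostFilter, so with cache=false and cost >= 100 it runs last, only over docs that matched everything else:

```python
from urllib.parse import urlencode

params = urlencode([
    ("q", "*:*"),
    ("fq", "type:abc"),                     # cheap, cached independently
    ("fq", "fromfield:[* TO NOW/DAY+1DAY]"),  # NOW rounded -> cacheable
    # expensive function query as a non-cached post-filter
    ("fq", "{!frange l=0 u=5 cache=false cost=150}div(clicks,imps)"),
])
print(params)
```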
On Thu, May 5, 2016 at 3:37 PM, Siddharth Modala
wrote:
> Thanks Yonik,
>
> That fixed the issue. Will this experimental flag be removed from future
> versions?
I don't think so... it's needed functionality, I just don't
particularly like where I had to put it (in the "facet" block instead
of in
Thanks Yonik,
That fixed the issue. Will this experimental flag be removed from future
versions?
Is there any other webpage apart from your blog (which is btw really
awesome) where I can find more info on the new facet module (like info
regarding the processEmpty flag etc.)?
On May 5, 2016 2:37 PM, "Yon
Hello,
My team is switching from a custom statistics patch to using Stats Component in
Solr 5.4.1. One of the features that we haven't been able to replicate in Stats
Component is an upper and lower fence. The fences limit the data that is sent
to the Stats Component but not the data that is re
Just out of curiosity, are you sharing the zookeepers between the
different versions of Solr? If so, are you specifying a zookeeper chroot?
On May 5, 2016 2:05 PM, "Susheel Kumar" wrote:
> Nick, Hoss - Things are back to normal with ZK 3.4.8 and ZK-6.0.0. I
> switched to Solr 5.5.0 with Z
On Thu, May 5, 2016 at 2:27 PM, Siddharth Modala
wrote:
> Hi All,
>
> We are facing the following issue where Json Facet with excludeTag doesn't
> return any results when numFound=0, even though excluding the filter will
> result in matching few docs. (Note: excludeTag works when numFound is > 0
Hi All,
We are facing the following issue where Json Facet with excludeTag doesn't
return any results when numFound=0, even though excluding the filter will
result in matching few docs. (Note: excludeTag works when numFound is > 0)
We are using Solr 5.4.1
For eg.
If we have the following data
I tried to run a Solr streaming expression using a collection alias, and I get
a null pointer exception. After looking at the log I see that getSlices returns
null.
Can someone suggest whether it is a good idea to add support for collection
aliases in Solr streaming expressions?
If yes, I would like to submit a fix and ad
Hi,
Wow, that's a lot of IDs. Where are they coming from?
Maybe you can consider using the join options of Lucene/Solr if these IDs are
the result of another query.
Also, the terms query parser would be a better choice in the case of lots of IDs.
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherP
Nick, Hoss - Things are back to normal with ZK 3.4.8 and Solr 6.0.0. I
switched to Solr 5.5.0 with ZK 3.4.8, which worked fine, and then installed
6.0.0. I suspect (not 100% sure) I left ZK dataDir / Solr collection
directory data from the previous ZK/Solr version, which probably was putting
Solr 6 in uns
Hi,
I have a filter query that gets documents based on date ranges from last n
days to anytime in future.
The objective is to get documents between a date range, but the start date
and end date values are stored in different fields and that is why I wrote
the filter query as below
fq=fromfield:[
Can you check if the field you are searching on is case sensitive? You can
quickly test it by copying the exact contents of the brand field into your
query and comparing it against the query you have posted above.
On Thu, May 5, 2016 at 8:57 AM, mixiangliu <852262...@qq.com> wrote:
>
> i found a
Thanks Shawn!
On Thu, May 5, 2016 at 12:14 PM, Shawn Heisey wrote:
> On 5/5/2016 9:52 AM, Garfinkel, David wrote:
> > I'm new to administering Solr, but it is part of my DAM and I'd like to
> > have a better understanding. If I understand correctly I have a field in
> my
> > schema with uuid 194
On 5/5/2016 9:52 AM, Garfinkel, David wrote:
> I'm new to administering Solr, but it is part of my DAM and I'd like to
> have a better understanding. If I understand correctly I have a field in my
> schema with uuid 1948 that is causing an issue right?
The data being indexed contains a field *name
I found a strange thing with a Solr query: when I set the value of the query
field like "brand:amd", the size of the query result is zero, but the real
data is not zero. Can somebody tell me why? Thank you very much!
My English is not very good; I hope somebody understands my words!
I'm new to administering Solr, but it is part of my DAM and I'd like to
have a better understanding. If I understand correctly I have a field in my
schema with uuid 1948 that is causing an issue right?
--
David Garfinkel
Digital Asset Management/Helpdesk/Systems Support
The Museum of Modern Art
2
On 5/5/2016 1:48 AM, t...@sina.com wrote:
> Can Solr run on JDK 7 32 bit? Or must be 64 bit?
You can use a 32-bit JVM ... but it will be limited to 2GB of heap.
This is a Java limitation, not a Solr limitation. Depending on how
large your index is, 2GB may not be enough.
Thanks,
Shawn
An ID lookup is a very simple and fast query, for one ID. Or’ing a lookup for
80k ids though is basically 80k searches as far as Solr is concerned, so it’s
not altogether surprising that it takes a while. Your complaint seems to be
that the query planner doesn’t know in advance that should be
On 5/4/2016 10:45 PM, Prasanna S. Dhakephalkar wrote:
> We had increased the maxBooleanClauses to a large number, but it did not
> work
It looks like you have 1161 values here, so maxBooleanClauses does need
to be increased beyond the default, but the error message would be
different if that limit
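For reference, that limit lives in solrconfig.xml; a sketch, where the value 4096 is just an example:

```xml
<!-- solrconfig.xml: raise the boolean clause limit (default 1024) -->
<query>
  <maxBooleanClauses>4096</maxBooleanClauses>
</query>
```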
This statement has two possible meanings in my mind...
"I want everything as automated manner with minimal manual work."
Do you mean minimal work for your users? Or do you mean minimal work to
get your idea up and running and generating income for you or your company?
The first meaning is lauda
I'll just briefly add some thoughts...
#1 This can be done several ways - including keeping a totally separate
document that contains ONLY the data you're willing to expose for free --
but what you want to accomplish is not clear enough to me for me to start
making recommendations. I'll just say
Also since the same query is working from curl, it's a pretty strong
indication that the error is occurring on the client. The logs show that
the includeMetadata parameter is being sent properly. This is done
automatically by the JDBC driver. So the /sql handler should be sending the
metadata Tupl
In looking at the logs things look good on the server side. The sql query
is sent to the /sql handler. It's translated to a solr query and sent to
the select handler. Results are returned and no errors.
So, I'm going to venture a guess that problem is on the client side. I'm
wondering if you're tr
Hi,
TermVectorComponent works. I am able to find the repeating words within the
same document, which facet was not able to do. The problem I see is that
TermVectorComponent produces results per document, and I have to combine
the counts, i.e. the count of the word "my" is 6 in the list of documents. Can you pl
Hmm not good. Definitely a new bug. Please open an issue.
Please look up the core node name in core.properties for that particular
core and remove the other one from state.json manually. Probably best to do
a cluster restart to avoid surprises. This is certainly uncharted territory.
On Wed, May 4
Here is what is on my mind:
I have data in TB, mainly educational assignments and projects, which will
contain text, images and maybe code as well if it is from computer
science. I will index all the documents into Solr and I will also have
the original copies of those documents. Now, I want to create a l
Hi,
I am retrieving ids from collection1 based on some query and passing those ids
as a query to collection2, so the query to collection2, which contains the ids,
takes much more time compared to a normal query.
Que. 1 - While passing ids to the query, why does it take more time compared to a normal
query h
Hi,
Please ignore my previous email.
CEB India Private Limited. Registration No: U741040HR2004PTC035324. Registered
office: 6th Floor, Tower B, DLF Building No.10 DLF Cyber City, Gurgaon,
Haryana-122002, India.
This e-mail and/or its attachments are intended only for the use of the
addresse
Hi,
The TermVector component is also not considering the query parameter. The below
query shows results for all question ids instead of question id 3426:
http://localhost:8182/solr/dev/terms?terms.fl=comments&terms=true&terms.limit=1000&q=questionid=3426
Thanks
Rajesh
Also found
// JDBC requires metadata like field names from the SQLHandler. Force
this property to be true.
props.setProperty("includeMetadata", "true");
in org.apache.solr.client.solrj.io.sql.DriverImpl
are there any other ways to get a response in SolrJ without metadata, to avoid
the er
could it be something with includeMetaData=true param? I have tried to set it
to false but then the logs look like:
webapp=/solr path=/sql
params={includeMetadata=true&includeMetadata=false&numWorkers=1&wt=json&version=2.2&stmt=select+id,+text+from+test+where+tits%3D1+limit+5&aggregationMode=map_r
Hi there,
I want to use the Solr suggester component for city names. I have the
following settings:
schema.xml
Field definition
The field i want to apply the suggester on
The copy field
The field
solrconfig.xml
true
10
mySuggester
sugg
You'll need to call this.server.connect() - the state reader is instantiated
lazily.
Alan Woodward
www.flax.co.uk
On 5 May 2016, at 01:10, Boman wrote:
> I am attempting to check for existence of a collection prior to creating a
> new one with that name, using Solrj:
>
>System.out.pri
Hi,
Can Solr run on JDK 7 32 bit? Or must be 64 bit?
Thanks
Hi All,
I'm trying to implement a search functionality using Solr. Currently I'm
using the edismax parser with ngram fields to search against. So far it
works well.
The question I have is: when the user inputs double quotations in the search,
as the requirement, this should match against the ori
> The logs you shared don't seem to be the full logs. There will be a
> related
> exception on the Solr server side. The exception on the Solr server side
> will explain the cause of the problem.
The logs are the full logs which I got on the console when I ran the code,
and there is no exception.