Thanks Emir. Looks indeed like what I need.
On Mon, Jan 15, 2018 at 11:33 AM, Emir Arnautović <
emir.arnauto...@sematext.com> wrote:
> Hi Max,
> It seems to me that you are looking for grouping
> https://lucene.apache.org/solr/guide/6_6/result-grouping.html
The following form would return the list of worst products for
products: 7453632,645454,534664.
/select?q=product_id:(7453632 OR 645454 OR 534664)&fq=rating:[1 TO 3]&sort=negative_feedback desc
Is there a way to do this in Solr without custom component?
Thanks.
Max
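Following the Result Grouping guide linked above, a sketch of what such a request might look like (field names product_id, rating, and negative_feedback are taken from the question; group.limit controls how many of the worst documents come back per product):

```text
/select?q=product_id:(7453632 OR 645454 OR 534664)
    &fq=rating:[1 TO 3]
    &group=true
    &group.field=product_id
    &group.limit=3
    &sort=negative_feedback desc
```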
<bool name="tv">true</bool>
<str>tvComponent</str>
I am seeing continuously increasing MLT response times and I am wondering
if I am doing something wrong.
Thanks.
Max.
three above approaches?
Thanks,
Max.
er.
> See below for e.g.
>
>
>
> http://localhost:8983/solr/techproducts/select?debugQuery=on&indent=on&q=manu:%22Bridge%20the%20gat~1%20between%20your%20skills%20and%20your%20goals%22&defType=complexphrase
>
> On Thu, Jun 15, 2017 at 5:59 AM, Max Br
Hi,
I am trying to do phrase exact match. For this, I use
KeywordTokenizerFactory. This basically does what I want to do. My field
type is defined as follows:
In addition to this, I want to tolerate typos of two or three lett
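For reference, a field type along the lines described above might look like this in schema.xml (the type name and the LowerCaseFilterFactory are illustrative assumptions; KeywordTokenizerFactory emits the whole field value as a single token, which is what makes exact phrase matching work):

```xml
<fieldType name="text_exact" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```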
sponse);
DocList docList = ((ResultContext) response.getValues().get("response")).docs;
//Do some crazy stuff with the result
}
My concerns:
1) What is a clean way to read the /basic handler's default parameters
from solrconfig.xml and use them in LocalSolrQueryRequest().
2) Is there a better way to accomplish this task overall?
Thanks,
Max.
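Independent of the Solr API, the defaults semantics in question are simple: handler defaults from solrconfig.xml fill in only the parameters the request did not set explicitly. A minimal stdlib sketch of that merge rule (names are invented for illustration; LocalSolrQueryRequest itself is not modeled here):

```java
import java.util.HashMap;
import java.util.Map;

public class ParamDefaults {
    // Merge handler defaults with explicit request params; request values win.
    static Map<String, String> withDefaults(Map<String, String> defaults,
                                            Map<String, String> request) {
        Map<String, String> merged = new HashMap<>(defaults);
        merged.putAll(request);
        return merged;
    }

    public static void main(String[] args) {
        Map<String, String> defaults = new HashMap<>();
        defaults.put("rows", "10");
        defaults.put("df", "text");

        Map<String, String> request = new HashMap<>();
        request.put("q", "*:*");
        request.put("rows", "5"); // overrides the handler default

        Map<String, String> merged = withDefaults(defaults, request);
        System.out.println(merged.get("rows") + " " + merged.get("df"));
        // prints "5 text"
    }
}
```

Solr's actual resolution also supports appends and invariants sections; this sketch covers only the defaults precedence.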
Perfect. Thanks a lot.
On Wed, Feb 1, 2017 at 2:01 PM, Alan Woodward wrote:
> Hi, extractTerms() is now on Weight rather than on Query.
>
> Alan
>
> > On 1 Feb 2017, at 17:43, Max Bridgewater
> wrote:
> >
> > Hi,
> >
> > It seems Query.ex
recommendation on what I should use in place of that method? I am migrating
some legacy code from Solr 4 to Solr 6.
Thanks,
Max.
I have one Solr core on my solr 6 instance and I can query it with:
http://localhost:8983/solr/mycore/search?q=*:*
Is there a way to configure solr 6 so that I can simply query it with this
simple URL?
http://localhost:8983/search?q=*:*
Thanks.
Max,
Your assumption is true.
> It's arguable
> how much practical difference it makes though.
>
> Best,
> Erick
>
> On Mon, Nov 28, 2016 at 2:14 AM, Florian Gleixner wrote:
> > Am 28.11.2016 um 00:00 schrieb Shawn Heisey:
> >>
> >> On 11/27/2016 12:51 PM, Florian Gleix
comparatively set in Solr 6. The only thing I found that would make
sense is the connector's maximum number of threads, which we have set to 800 for
Tomcat. In jetty.xml, however, maxThreads is set to 5. I'm not sure whether
these two maxThreads settings have the same effect.
I thought about Yonik's suggestion a little bit; that's where I am digging.
What are some things I could tune to improve the numbers for Solr 6? Have
you guys experienced such discrepancies?
Thanks,
Max.
q.op that was set in the default or not. But your explanation
makes sense now.
Thanks,
Max.
On Sat, Nov 12, 2016 at 4:54 PM, Greg Pendlebury
wrote:
> This has come up a lot on the lists lately. Keep in mind that edismax
> parses your query using additional parameters such as 'mm
signs.
Is there a way to prevent this altering of queries?
Thanks,
Max.
t allows us to identify our custom
queries. This special clause should not impact search results. => Pretty
ugly.
Other potential clean, low risk, and less invasive solution?
Max.
I can't seem to find a method
that would do this.
Thanks,
Max.
Thank you Mike, that was it.
Max.
On Sat, Apr 2, 2016 at 2:40 AM, Mikhail Khludnev wrote:
> Hello Max,
>
> Since it reports the first space occurrence pos=32, I advise to nuke all
> spaces between braces in sum().
>
> On Fri, Apr 1, 2016 at 7:40 PM, Max Bridgewater &
This works great in Solr 4.10. However, in solr 5.4.1 and solr 5.5.0, I get
the below error. How do I write this kind of query with Solr 5?
Thanks,
Max.
ERROR org.apache.solr.handler.RequestHandlerBase [ x:productsearch] –
org.apache.solr.common.SolrException: Can't determ
Hi Folks,
Thanks for all the great suggestions. I will try and see which one works
best.
@Hoss: The WEB-INF folder is just in my dev environment. I have a local
Solr instance and I point it to the target/WEB-INF. It's a simple, convenient
setup for development purposes.
Much appreciated.
Max.
On Wed
FromB looks like this:
Thread.currentThread().getContextClassLoader().getResources("personal-words.txt")
Unfortunately, this returns an empty list. Any recommendation?
Thanks,
Max.
I'm happy to report that we are seeing significant speed-ups in our queries
with Json facets on 5.4 vs regular facets on 5.1. Our queries contain mostly
terms facets, many of them with exclusion tags and prefix filtering.
Nice work!
le faceting
on only 1 field won't help.
And it's not implemented for all facet types IIRC.
Best,
Erick
On Fri, Dec 11, 2015 at 1:07 PM, Aigner, Max wrote:
> Answering one question myself after doing some testing on 5.3.1:
>
> Yes, facet.threads is still relevant with Json fac
the test VM has 4 cores.
-Original Message-
From: Aigner, Max [mailto:max.aig...@nordstrom.com]
Sent: Thursday, December 10, 2015 12:33 PM
To: solr-user@lucene.apache.org
Subject: RE: JSON facets and excluded queries
Another question popped up around this:
Is the facet.threads parameter
ering?
Thanks again,
Max
-Original Message-----
From: Aigner, Max [mailto:max.aig...@nordstrom.com]
Sent: Wednesday, November 25, 2015 11:54 AM
To: solr-user@lucene.apache.org
Subject: RE: JSON facets and excluded queries
Yes, just tried that and it works fine.
That just removed a showstopper
com]
Sent: Wednesday, November 25, 2015 11:38 AM
To: solr-user@lucene.apache.org
Subject: Re: JSON facets and excluded queries
On Wed, Nov 25, 2015 at 2:29 PM, Yonik Seeley wrote:
> On Wed, Nov 25, 2015 at 2:15 PM, Aigner, Max wrote:
>> Thanks, this is great :=))
>>
>> I hadn
Thanks, this is great :=))
I hadn't seen the domain:{excludeTags:...} syntax yet and it doesn't seem to be
working on 5.3.1 so I'm assuming this is work slated for 5.4 or 6. Did I get
that right?
Thanks,
Max
-Original Message-
From: Yonik Seeley [mailto:ysee...@g
I'm currently evaluating Solr 5.3.1 for performance improvements with faceting.
However, I'm unable to get the 'exclude-tagged-filters' feature to work. A lot
of the queries I'm doing are in the format
...?q=category:123&fq={!tag=fqCol}color:green&facet=true&facet.field={!key=price_all ex=fqCol}p
Is there a way to append a set of words to the out-of-the-box Solr index when
using the spellcheck / suggestions feature?
should be thinking of this?
Max
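One option worth verifying against your Solr version is pairing the index-based spellchecker with a file-based one; Solr ships a FileBasedSpellChecker that reads a plain word list (paths and names below are illustrative):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">file</str>
    <str name="classname">solr.FileBasedSpellChecker</str>
    <str name="sourceLocation">spellings.txt</str>
    <str name="spellcheckIndexDir">./spellcheckerFile</str>
  </lst>
</searchComponent>
```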
Thanks a lot, so I will make an XSLT.
Great community here!
--
View this message in context:
http://lucene.472066.n3.nabble.com/Indexed-data-not-searchable-tp4054473p4055258.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thank you.
I changed it and now it works.
But is there any possibility to make the given timestamp acceptable to
Solr?
Just for information: the problem occurs when I try to add
the fields created, last_modified, and issued (all three have the type date)
and the field rightsholder.
Maybe this is helpful!
Thanks for this!
Now I have another problem. I tried to give the XML file the right format, so
I made this:
455HHS-2232
T0072-00031-DOWNLOAD - Blatt 12v
application/pdf
2012-11-07T11:15:19.887+01:00
2012-11-07T11:15:19.887+01:00
2012-11-07T11:15:19.8
The XML files are formatted like this. I think there is the problem.
T0084-00371-DOWNLOAD - Blatt 184r
T0084-00371-DOWNLOAD
application/pdf
2012-11-08T00:09:57.531+01:00
2012-11-08
Thanks for your help:
The URL I'm posting to is: http://localhost:8983/solr/update?commit=true
The XML files I've added contain fields like "author", so I thought they
would have to be searchable since "author" is declared as "indexed" in the example schema.
ns as the post operation I use the stuff from the example
folder.
Maybe I have to configure something in the schema.xml or solrconfig.xml?
Hope you can help me!
Kind regards,
Max
Thanks for your reply. I thought about using the debug mode, too, but
the information is not easy to parse and doesn't contain everything I
want. Furthermore, I don't want to enable debug mode in production.
Is there anything else I could try?
On Tue, Dec 27, 2011 at 12:48 PM, Ahmet Arslan wrote:
>
I have a more complex query condition like this:
(city:15 AND country:60)^4 OR city:15^2 OR country:60^2
What I want to achieve with this query is basically: if a document has
city = 15 AND country = 60, it is more important than another document
which only has city = 15 OR country = 60.
Furthermore
Robert, thank you for creating the issue in JIRA.
However, I need ngrams on that field – is there an alternative to the
EdgeNGramFilterFactory?
Thanks!
On Mon, Dec 12, 2011 at 1:25 PM, Robert Muir wrote:
> On Mon, Dec 12, 2011 at 5:18 AM, Max wrote:
>
>> It seems like there i
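For anyone following along, the output EdgeNGramFilterFactory produces for a front-anchored configuration can be sketched in plain Java (this only illustrates the emitted grams; it is not a drop-in filter replacement):

```java
import java.util.ArrayList;
import java.util.List;

public class EdgeNGrams {
    // Emit the front-anchored n-grams of a token, like EdgeNGramFilter
    // configured with side="front".
    static List<String> edgeNGrams(String token, int minGram, int maxGram) {
        List<String> grams = new ArrayList<>();
        int max = Math.min(maxGram, token.length());
        for (int len = minGram; len <= max; len++) {
            grams.add(token.substring(0, len));
        }
        return grams;
    }

    public static void main(String[] args) {
        System.out.println(edgeNGrams("pizza", 2, 4)); // prints [pi, piz, pizz]
    }
}
```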
Hi there,
when highlighting a field with this definition:
I guess I missed the init() method. I was looking at the factory and
thought I saw config loading stuff (like getInt) which I assumed meant it
needed to have schema.xml available.
Thanks!
-Max
On Tue, Oct 5, 2010 at 2:36 PM, Mathias Walter wrote:
> Hi Max,
>
> why don&
I have made progress on this by writing my own Analyzer. I basically added
the TokenFilters that are under each of the solr factory classes. I had to
copy and paste the WordDelimiterFilter because, of course, it was package
protected.
On Mon, Oct 4, 2010 at 3:05 PM, Max Lynch wrote:
>
Hi,
I asked this question a month ago on lucene-user and was referred here.
I have content being analyzed in Solr using these tokenizers and filters:
Basically I want to be able to search against
Is there a tokenizer that will allow me to search for parts of a URL? For
example, the search "google" would match on the data
"http://mail.google.com/dlkjadf".
This tokenizer factory doesn't seem to be sufficient:
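One possibility worth verifying for your version is a PatternTokenizerFactory configured to split on URL delimiters; the splitting rule it would need is sketched here in plain Java (the delimiter set is an assumption for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class UrlTokens {
    // Split a URL on common delimiters so each component ("mail",
    // "google", "com", ...) becomes its own searchable token.
    static List<String> tokenize(String url) {
        return Arrays.stream(url.split("[:/.?&=#]+"))
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(tokenize("http://mail.google.com/dlkjadf"));
        // prints [http, mail, google, com, dlkjadf]
    }
}
```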
Thanks Lance.
I have decided to just put all of my processing on a bigger server along
with solr. It's too bad, but I can manage.
-Max
On Sun, Aug 29, 2010 at 9:59 PM, Lance Norskog wrote:
> No. Document creation is all-or-nothing, fields are not updateable.
>
> I think you
Hi,
I have a master solr server and two slaves. On each of the slaves I have
programs running that read the slave index, do some processing on each
document, add a few new fields, and commit the changes back to the master.
The problem I'm running into right now is one slave will update one docume
It seems like this is a way to accomplish what I was looking for:
CoreContainer coreContainer = new CoreContainer();
File home = new File("/home/max/packages/test/apache-solr-1.4.1/example/solr");
File f = new File(home, "solr.xml");
coreContainer.load
means "select everything AND only these documents without a value
> in the field".
>
> On Wed, Aug 25, 2010 at 7:55 PM, Max Lynch wrote:
> > I was trying to filter out all documents that HAVE that field. I was
> trying
> > to delete any documents where that field
Right now I am doing some processing on my Solr index using Lucene Java.
Basically, I loop through the index in Java and do some extra processing of
each document (processing that is too intensive to do during indexing).
However, when I try to update the document in solr with new fields (using
Sol
ve
it worked because all of my documents have values for that field.
Oh well.
-max
On Wed, Aug 25, 2010 at 9:45 PM, scott chu (朱炎詹) wrote:
> Excuse me, what's the hyphen before the field name 'date_added_solr'? Is
> this some kind of new query format that I didn't k
Hi,
I am trying to delete all documents that have null values for a certain
field. To that effect I can see all of the documents I want to delete by
doing this query:
-date_added_solr:[* TO *]
This returns about 32,000 documents.
However, when I try to put that into a curl call, no documents get
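For reference, the usual shape of such a delete-by-query call is below; prefixing the negative clause with *:* makes it select "everything minus documents with a value in the field", which is the common fix when a bare negative query deletes nothing (URL and field name as in the message above):

```text
curl 'http://localhost:8983/solr/update?commit=true' \
  -H 'Content-Type: text/xml' \
  --data-binary '<delete><query>*:* -date_added_solr:[* TO *]</query></delete>'
```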
What I'm doing now is just adding the documents to the other core each night
and deleting old documents from the other core when I'm finished. Is there
a better way?
On Tue, Aug 3, 2010 at 4:38 PM, Max Lynch wrote:
> Is it possible to duplicate a core? I want to have one core
Is it possible to duplicate a core? I want to have one core contain only
documents within a certain date range (ex: 3 days old), and one core with
all documents that have ever been in the first core. The small core is then
replicated to other servers which do "real-time" processing on it, but the
Piggybacking on the highlighter is an OK approach.
>
> If you need it on more docs than that, then probably you should
> customize how your queries are scored to also tally up which docs had
> which terms.
>
> Mike
>
> On Wed, Jul 28, 2010 at 6:53 PM, Max Lynch wrote:
> > I
I would like to search against my index, and then *know* which of a set
of given terms were found in each document.
For example, let's say I want to show articles with the word "pizza" or
"cake" in them, but would like to be able to say which of those two was
found. I might use this to handle
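The tallying approach mentioned in the quoted reply above can be approximated outside Lucene with plain string checks; a simplified sketch (real matching would use the analyzed tokens, not raw substrings):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class TermTally {
    // Return which of the given terms occur in the document text,
    // using simple lowercase substring matching.
    static List<String> termsFound(String docText, List<String> terms) {
        List<String> found = new ArrayList<>();
        String lower = docText.toLowerCase(Locale.ROOT);
        for (String term : terms) {
            if (lower.contains(term.toLowerCase(Locale.ROOT))) {
                found.add(term);
            }
        }
        return found;
    }

    public static void main(String[] args) {
        System.out.println(termsFound("Best pizza in town",
                List.of("pizza", "cake"))); // prints [pizza]
    }
}
```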
never
opens an IndexWriter.
Thanks!
On Tue, Jul 13, 2010 at 10:52 AM, Max Lynch wrote:
> Great, thanks!
>
>
> On Tue, Jul 13, 2010 at 2:55 AM, Fornoville, Tom > wrote:
>
>> If you're only adding documents you can also have a go with
>> StreamingUpdateSo
angs
>
> You could try a master slave setup using replication perhaps, then the
> slave serves searches and indexing commits on the master won't hang up
> searches at least...
>
> Here is the description: http://wiki.apache.org/solr/SolrReplication
>
>
> -Origin
Lucene index in
the background)?
Thanks.
On Mon, Jul 12, 2010 at 12:06 PM, Robert Petersen wrote:
> Maybe solr is busy doing a commit or optimize?
>
> -Original Message-
> From: Max Lynch [mailto:ihas...@gmail.com]
> Sent: Monday, July 12, 2010 9:59 AM
> To: solr-user
Hey guys,
I'm using Solr 1.4.1 and I've been having some problems lately with code
that adds documents through a CommonsHttpSolrServer. It seems that randomly
the call to server.add() will hang. I am currently running my code in a
single thread, but I noticed this would happen in multi threade
Here is my dataimport part of my solrconfig.xml:
/home/max/packages/apache-solr-4.0-2010-06-16_08-05-33/e/solr/conf/data-config.xml
and my data-config.xml:
I did try to rebuild the solr nightly, but I still receive the same error.
I have all of the required
this behavior by not using the term frequency. But
after this we got very strange products among the first results.
Does anybody have an idea how to avoid the clustering of products (documents)
that come from the same shop?
Greetings
Max
because the field definition contains a
LowerCaseFilter.
Is it possible that the prefix processing ignores the filters?
Max