ht" way to do it. Every approach you take will involve
tradeoffs. Read up on this already well-discussed topic and decide what answer
is best for you in your case.
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
On Aug 6, 2013, at 9:55 AM, Steven Bower wrote:
> Is there an easy way in code / command line to lint a solr config (or even
> just a solr schema)?
No, there's not. I would love there to be one, especially for the DIH.
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
rsGroup page - this is a one-time step.
Please add my username, AndyLester, to the approved editors list. Thanks.
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
ocument I need each term counted however many times it is entered
(content of "I think what I think" would report 'think' as used twice).
Does anyone have any insight as to whether I'm headed in the right
direction and then what my query would be?
Thanks,
Andy Pickler
oach doesn't work, because if a word
occurs more than one time in a document it needs to be counted that many
times. That seemed to rule out faceting like you mentioned as well as the
TermsComponent (which as I understand also only counts "documents").
Thanks,
Andy Pickler
On
ilds up "that day's top terms" in a
table or something.
Thanks,
Andy Pickler
On Tue, Apr 2, 2013 at 7:16 AM, Tomás Fernández Löbbe wrote:
> Oh, I see, essentially you want to get the sum of the term frequencies for
> every term in a subset of documents (instead of the docum
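Not Solr internals, but the counting semantics described above (every occurrence counted, not just documents) can be sketched in plain Java; the whitespace tokenization here is a simplifying assumption:

```java
import java.util.HashMap;
import java.util.Map;

public class TermCounts {
    // Count every occurrence of each term, not just the documents it appears in
    static Map<String, Integer> termFrequencies(String content) {
        Map<String, Integer> counts = new HashMap<>();
        for (String term : content.toLowerCase().split("\\s+")) {
            counts.merge(term, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        // "I think what I think" should report 'think' as used twice
        System.out.println(termFrequencies("I think what I think").get("think")); // 2
    }
}
```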
othing of timezones. Solr expects everything to be in UTC. If you
want time zone support, you'll have to convert local time to UTC before
importing, and then convert back to local time from UTC when you read from Solr.
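A minimal sketch of that local-to-UTC conversion in Java; the zone and timestamp are made-up examples:

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class ToUtc {
    // Convert a local timestamp to the UTC instant form Solr expects
    static String localToSolr(String localStamp, String zone) {
        ZonedDateTime local = LocalDateTime.parse(localStamp).atZone(ZoneId.of(zone));
        return local.withZoneSameInstant(ZoneOffset.UTC)
                    .format(DateTimeFormatter.ISO_INSTANT);
    }

    public static void main(String[] args) {
        System.out.println(localToSolr("2013-05-02T09:30:00", "America/Chicago"));
        // 2013-05-02T14:30:00Z (CDT is UTC-5)
    }
}
```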
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
On May 2, 2013, at 3:36 AM, "Jack Krupansky" wrote:
> RC4 of 4.3 is available now. The final release of 4.3 is likely to be within
> days.
How can I see the Changelog of what will be in it?
Thanks,
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
e what is currently pending to go in 4.3?
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
ewhere that describes the process set
up?
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
ays "This is
how the dev process works."
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
true
true
true
true
breakIterator
2
name name_par description description_par
content content_par
162
simple
default
Cheers!
- Andy
to not be "significant". I
realize that I may not understand how the MLT Handler is doing things under
the covers...I've only been guessing until now based on the (otherwise
excellent) results I've been seeing.
Thanks,
Andy Pickler
P.S. For some additional information, th
ave setup another request handler that
only searches the whole word fields and it returns in 850 ms with
highlighting.
Any ideas?
- Andy
-Original Message-
From: Bryan Loofbourrow [mailto:bloofbour...@knowledgemosaic.com]
Sent: Monday, May 20, 2013 1:39 PM
To: solr-user@lucene.apa
'm getting results...and they indeed are relevant.
Thanks,
Andy Pickler
On Wed, May 22, 2013 at 12:20 PM, Andy Pickler wrote:
> I'm developing a recommendation feature in our app using the
> MoreLikeThisHandler <http://wiki.apache.org/solr/MoreLikeThisHandler>,
>
ionTries and how many FQs are at the end.
Am I doing something wrong? Do the collation internals not handle
FQs correctly? The lookup/hit counts on filterCache seem to be
increasing just fine. It will do N lookups, N hits, so I'm not
thinking that caching is the problem.
We'd really
le FQ and it becomes 62038ms.
> But I think you're just setting maxCollationTries too high. You're asking it
> to do too much work in trying teens of combinations.
The results I get back with 100 tries are about twice as many as I get with 10
tries. That's a big difference to the user when it's trying to figure out
misspelled phrases.
Andy
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
pelling?
Yes, definitely.
Thanks for the ticket. I am looking at the effects of turning on
spellcheck.onlyMorePopular to true, which reduces the number of collations it
seems to do, but doesn't affect the underlying question of "is the spellchecker
doing FQs properly?"
Thank
Could anyone help me see why the Solritas page fails?
I can go to http://localhost:8080/solr without problem, but fail to go to
http://localhost:8080/solr/browse.
Below is the status report! Any help is appreciated.
Thanks!
Andy
type: Status report
set to "true".
It all seems redundant just to allow for partial word
matching/highlighting but I didn't know of a better way. Does anything
stand out to you that could be the culprit? Let me know if you need any
more clarification.
Thanks!
- Andy
-Original Message
ted you to do that a decade and a half ago
> But either way, that's a pretty ridiculous solution.
> I don't know of any other server product that disregards security so
> willingly.
Why are you wasting your time with such an inferior project? Perhaps
ElasticSearch is
472066.n3.nabble.com/QueryParser-default-operator-AND-tp507845p507856.html>
ever implemented?
Is recompiling from source necessary, as
<http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201107.mbox/%3ccafwgddm0vxu8-pf3g5hd9evsbxczyqfhsodtzqtfm8qmwxf...@mail.gmail.com%3E>
implies?
--
ut what's
> happening here?
Have you issued a commit?
--
Andy Lindeman
http://www.andylindeman.com/
its possible to nest entities that use a URLDataSource
inside entities that use a JDBCDataSource ?
Andy
yourself. There are not really any sensible
defaults for stopwords, so Solr doesn't provide them.
Just add them to the stopwords.txt and reindex your core.
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
Thanks for responding Shawn.
Annette is away until Monday so I am looking into this in the meantime.
Looking at the times of the Full GC entries at the end of the log, I think
they are collections we started manually through jconsole to try and reduce
the size of the old generation. This only seem
er VM (build 23.5-b02, mixed mode)
We are currently running more tests but it takes a while before the issues
become apparent.
Andy Kershaw
On 29 November 2012 18:31, Walter Underwood wrote:
> Several suggestions.
>
> 1. Adjust the traffic load for about 75% CPU. When you hit 100%, you
aggregate about queries over time? Or for giving
statistics about individual queries, like time breakouts for benchmarking?
For the latter, you want "debugQuery=true" and you get a raft of stats down in
.
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
lr
*
http://lucene.472066.n3.nabble.com/Filtered-search-for-subset-of-ids-td502245.html
*
http://lucene.472066.n3.nabble.com/Search-within-a-subset-of-documents-td1680475.html
Thanks,
Andy
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
o help on?) but I hope that you'll get some ideas.
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
people expect that their next search-within-a-list
will have those new results.
Andy
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
RCHAR would be sad.
What you'll need to do is use a date formatting function in your SELECT out of
the MySQL database to get the date into the format that Solr likes.
See
https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-format
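A hedged sketch of such a SELECT; the table and column names are hypothetical:

```sql
-- Hypothetical table/column names; formats a DATETIME as e.g. 2013-05-02T14:30:00Z
SELECT id,
       DATE_FORMAT(updated_at, '%Y-%m-%dT%H:%i:%sZ') AS updated_at
FROM documents;
```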
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
it yields no results... even though there
are a ton of results that match the criteria :/
Can anyone suggest what I'm doing wrong? Everything else is working
perfectly... would be a shame if I couldn't get this feature working
--
Andy Newby
a...@ultranerds.com
n this value :(
TIA
--
Andy Newby
a...@ultranerds.com
Hi,
Ah cool - missed that bit! Will give that a go (as it will be handy for
passing along other parameters too)
Cheers
Andy
On Thu, Mar 10, 2011 at 9:13 PM, Chris Hostetter
wrote:
>
> : I know its possible to do via adding sort= , but the Perl module
> : (WebService::Solr) doe
th that keyword in, BUT there are a ton with
"tv" , "television" etc in)
However, the query:
computer => laptop
..works fine ( search for "computer", and I also get results matching
"laptop")
TIA
--
Andy Newby
a...@ultranerds.com
er of views)
I've had a look on google regarding this, but can't seem to find anything
like the above.
Any ideas/advice are much appreciated :)
TIA
--
Andy Newby
a...@ultranerds.com
way around)
Can anyone try and explain what's going on with this?
BTW, the queries are matched based on a normal "white space" index,
nothing special.
The actual query being used, is as follows:
(keywords:"st" AND keywords:"patricks") OR (description:"st"
(much
simpler, and not using multicore), but using the exact same fieldType setup.
Can anyone shed any light?
TIA!
--
Andy Newby
a...@ultranerds.com
le:
car => mascot
mascot => car
..and in Solr I actually have a result with the "description" of "bird hawk mascot 002 (u65bsag)"
...surely my setup should be showing that result?
I've been doing my head in over this - so any advice would be much
appreciated :)
TIA!
--
Andy Newby
ing? I'm currently scouring Google.
Thanks,
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
arly, I'm entirely new to the whole JVM ecosystem. I'm coming from the world
of Perl.
Thanks,
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
ndexing status.
However, it will still be necessary if I need to wait for indexing to complete
in, for example, a Makefile or a script.
xoxo
Andy
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
k, I search for "classifications:3". If you want spiralbound
large print, you'd search for "classifications:1 classifications:2".
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
do you see that?
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
g in my index and 343 rows in my table. What is going on? -- H
I don't see that you have anything in the DIH that tells what columns from the
query go into which fields in the index. You need something like
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
"? Is that the same thing as saying that Solr can be processing n
requests simultaneously?
Thanks for any insight or even links to relevant pages. We've been Googling
all over and haven't found answers to the above.
Thanks,
Andy
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
"? Is that the same thing as saying that Solr can be processing n
requests simultaneously?
Thanks for any insight or even links to relevant pages. We've been Googling
all over and haven't found answers to the above.
Thanks,
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
tchid}
when I kick off the DIH like this:
$url/dih?command=full-import&entity=titles&commit=true&batchid=47
At least that's how it works for me in 3.6 and 4.0.
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
interface that I
should be using instead?
Thanks,
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
.cacti.net/viewtopic.php?f=12&t=19744&start=15 It looks promising
although it doesn't monitor Solr itself.
Suggestions?
Thanks,
Andy
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
t we must have it in-house on our own servers, for
monitoring internal dev systems, and we'd like it to be open source.
We already have Cacti up and running, but it's possible we could use something
else.
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
rField and one
tokenized field using class TextField. Very easy to feed into both using
copyField.
Your query would need to use the different field name instead of an exact
operator like with some commercial search engines.
hope that helps,
Andy
Jörg Kiegeland wrote:
>
> Normally I do s
efully received.
Thanks
Andy.
--
View this message in context:
http://www.nabble.com/Slow-response-times-using-*%3A*-tp15206563p15206563.html
Sent from the Solr - User mailing list archive at Nabble.com.
valued or tokenized fields? In that case, Solr uses
> field queries which consume a lot of memory if the number of unique terms
> are large.
>
> On Jan 31, 2008 9:13 PM, Andy Blower <[EMAIL PROTECTED]> wrote:
>
>>
>> I'm evaluating SOLR/Lucene for our needs and
Yonik Seeley wrote:
>
> *:* maps to MatchAllDocsQuery, which for each document needs to check
> if it's deleted (that's a synchronized call, and can be a bottleneck).
>
Why does this need to check if documents are deleted if normal queries
don't? Is there any way of disabling this since I can
give good advice on possible solution concepts.
I'm thinking that rather than using file-system tricks, perhaps there
might be a simple means to switch the file system location that the
index readers use within SOLR.
thanks
Andy
Bernd,
I recently asked a similar question about Solr 7.3 and Zookeeper 3.4.11.
This is the response I found most helpful:
https://www.mail-archive.com/solr-user@lucene.apache.org/msg138910.html
- Andy -
On Fri, Dec 14, 2018 at 7:41 AM Bernd Fehling <
bernd.fehl...@uni-bielefeld.de>
Dave,
You don't mention what query parser you are using, but with the default
query parser you can field qualify all the terms entered in a text box by
surrounding them with parenthesis. So if you want to search against the
'title' field and they entered:
train OR dragon
You could generate the S
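A sketch of that wrapping with the default query parser's field-qualification syntax; the method name is made up, and real user input would also need escaping:

```java
public class FieldQualify {
    // Wrap raw user input so every term in it is qualified against one field
    static String qualify(String field, String userInput) {
        return field + ":(" + userInput + ")";
    }

    public static void main(String[] args) {
        System.out.println(qualify("title", "train OR dragon")); // title:(train OR dragon)
    }
}
```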
Hi,
I use Solr 5.5. I recently noticed a process, ./fs-manager, running
under user solr that takes quite high CPU usage. I don't think I've seen such a
process before.
Is that a legitimate process from Solr?
Thanks.
We are receiving an UnsupportedOperationException after making certain
requests. The requests themselves do not seem to be causing the issue as
when we run the job that makes these requests locally against the same
SolrCloud cluster where the errors are being thrown, there are no errors.
These er
Erick Erickson wrote
> Maybe your remote job server is using a different set of jars than
> your local one? How does the remote job server work?
The remote job server and our local environment are running the same code, and
both are making queries against the same
We were able to locate the exact issue after some more digging. We added a
query to another collection that runs alongside the job we were executing
and we were missing the collection reference in the URL. If the below query
is run by itself in at least Solr 7, the error will be reproduced.
http:
4)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 2 more
What is wrong with it? Is this urlString correct?
Any help is appreciated!
Andy Tang
l nodes to reflect
the new IP address of the VMs. But will that be sufficient?
Appreciate any guidance.
Thanks
- Andy -
.6.2/dist/solrj-lib in
> your classpath.
>
> Best,
> Erick
>
> On Fri, Mar 16, 2018 at 12:14 PM, Andy Tang
> wrote:
> > I have the code to add document to Solr. I tested it in Both Solr 6.6.2
> and
> > Solr 7.2.1 and failed.
> >
> >
grams it's not _necessary_
>
> Best,
> Erick
>
>
> On Fri, Mar 16, 2018 at 2:02 PM, Andy Tang wrote:
> > Erik,
> >
> > Thank you for reminding.
> > javac -cp
> > .:/opt/solr/solr-6.6.2/dist/*:/opt/solr/solr-6.6.2/dist/solrj-lib/*
> >
leases?
Would appreciate any guidance.
Thanks,
- Andy -
invert the dataDir and
dataLogDir directories.
It does present something of a PR issue for us, if we tell our customers to
use a ZK version that has been pulled from the mirrors. Any plans to move
to ZK 3.4.12 in future releases?
Thanks,
- Andy -
On Wed, May 9, 2018 at 4:09 PM, Erick Erickson
Thanks Shawn. That makes sense.
On Wed, May 9, 2018 at 5:10 PM, Shawn Heisey wrote:
> On 5/9/2018 2:38 PM, Andy C wrote:
> > Was not quite sure from reading the JIRA why the Zookeeper team felt the
> > issue was so critical that they felt the need to pull the release from
&g
Shawn,
Why are range searches more efficient than wildcard searches? I guess I
would have expected that they just provide different mechanism for defining
the range of unique terms that are of interest, and that the merge
processing would be identical.
Would a search such as:
field:c*
be more e
Optional<String> chrootOption;
if (_zkChroot != null) {
    chrootOption = Optional.of(_zkChroot);
} else {
    chrootOption = Optional.empty();
}
CloudSolrClient client = new CloudSolrClient.Builder(_zkHostList,
chrootOption).build();
Adapted from code I found somewhere (unit test?). Intent is to support the
option of configuring a chroot or not (stored in "_zkChroot")
- Andy -
On Mon, Jun 18, 201
=
Optional.empty(); (as you have done).
Intent of my code was to support both using a chroot and not using a
chroot. The value of _zkChroot is read from a config file in code not shown.
- Andy -
o cause problems with the class loader,
as I start getting a NoClassDefFoundError :
org/apache/lucene/analysis/util/ResourceLoaderAware exception.
Any suggestions?
Thanks,
- Andy -
hi Jörn - something's decoding a UTF-8 sequence using the legacy ISO-8859-1
character set:
Jörn is J%C3%B6rn in UTF-8
J%C3%B6rn misinterpreted as ISO-8859-1 is JÃ¶rn
JÃ¶rn is J%C3%83%C2%B6rn in UTF-8
I hope this helps track down the problem!
Andy
On Fri, 7 Aug 2020 at 12:08, Jörn Franke
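That double decode is easy to reproduce in Java; this is just a demonstration of the mismatch, not a claim about where it happens in your stack:

```java
import java.nio.charset.StandardCharsets;

public class Mojibake {
    public static void main(String[] args) {
        String name = "Jörn";
        // Encode correctly as UTF-8, then decode wrongly as ISO-8859-1
        byte[] utf8 = name.getBytes(StandardCharsets.UTF_8);
        String garbled = new String(utf8, StandardCharsets.ISO_8859_1);
        System.out.println(garbled); // JÃ¶rn
    }
}
```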
dragons!
Andy
> On 20 Aug 2020, at 07:25, Prashant Jyoti wrote:
>
> Hi Joe,
> These are the errors I am running into:
>
> org.apache.solr.common.SolrException: Error CREATEing SolrCore
> 'newcollsolr2_shard1_replica_n1': Unable to create core
> [newcoll
; I haven't been able to find
anything similar reported online so I'm thinking it's a config issue and
would be grateful for any pointers & solutions. Many thanks in advance,|
--
Andy Dopleach
/Director/
*BlueFusion <https://www.bluefusion.co.nz/>*
p: 03 328
Thanks David, I'll set up the techproducts schema and see what happens.
Kind regards,
Andy
On 4/09/20 4:09 pm, David Smiley wrote:
Hi,
I looked at the code at those line numbers and it seems simply impossible
that an ArrayIndexOutOfBoundsException could be thrown there because it'
Don't know if this is an option for you but the SolrJ Java Client library
has support for uploading a config set. If the config set already exists it
will overwrite it, and automatically RELOAD the dependent collection.
See
https://lucene.apache.org/solr/8_5_0/solr-solrj/org/apache/solr/common/clo
I added the maxQueryLength option to DirectSolrSpellchecker in
https://issues.apache.org/jira/browse/SOLR-14131 - that landed in 8.5.0 so
should be available to you.
Andy
On Wed, 7 Oct 2020 at 23:53, gnandre wrote:
> Is there a way to truncate spellcheck.q param value from Solr side?
>
uot;: "solr.PatternReplaceFilterFactory",
"pattern": "(.)\\1+",
"replacement": "$1"
}
]
}
}
]
}
This finds a match...
http://localhost:8983/solr/
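The regex semantics can be sanity-checked outside the analyzer chain with plain Java; the sample string is made up:

```java
public class CollapseRepeats {
    public static void main(String[] args) {
        // (.)\1+ matches any run of a repeated character; $1 keeps a single copy
        String collapsed = "looooove".replaceAll("(.)\\1+", "$1");
        System.out.println(collapsed); // love
    }
}
```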
I am adding a new float field to my index that I want to perform range
searches and sorting on. It will only contain a single value.
I have an existing dynamic field definition in my schema.xml that I wanted
to use to avoid having to updating the schema:
I went ahead and implemented th
some attribute(s) to get higher scores, can you filter out items
that don't have those attributes? Also, maybe setting mm to require more
terms to match could cut out unwanted results (that's not useful for the
"dog" query of course).
Andy
On Fri, 4 Dec 2020 at 06:43,
but
I'm not sure if that will cut it. I gather merely rsyncing the data
files won't do...
Can anyone give me a pointer to that "easy-to-find" document I have so
far failed to find? Or failing that, maybe some sound advice on how to
proceed?
Regards,
-Andy
--
Andy D
ge against the down-time which would be required to
regenerate the indexes from scratch?
Regards,
-Andy
--
Andy D'Arcy Jewell
SysMicro Limited
Linux Support
E: andy.jew...@sysmicro.co.uk
W: www.sysmicro.co.uk
it updates (sort of copy-on-write style)? So we
are relying on the principle that as long as you have at least one
remaining reference to the data, it's not deleted...
Thanks once again!
-Andy
--
Andy D'Arcy Jewell
SysMicro Limited
Linux Support
E: andy.jew...@sysmicro.co.uk
W: www.sysmicro.co.uk
t, the web app will have to gracefully handle unavailability
of SolR, probably by displaying a "down for maintenance" message, but
this should preferably be only a very short amount of time.
Can anyone comment on my proposed solutions above, or provide any
additional ones?
Thanks for a
ed to a web-app, which accepts uploads and will be available
24/7, with a global audience, so "pausing" it may be rather difficult
(tho I may put this to the developer - it may for instance be possible
if he has a small number of choke points for input into SolR).
Thanks.
--
And
ill leaves open the question of how to
*pause* SolR or prevent commits during the backup (otherwise we have a
potential race condition).
-Andy
--
Andy D'Arcy Jewell
SysMicro Limited
Linux Support
E: andy.jew...@sysmicro.co.uk
W: www.sysmicro.co.uk
maybe just solr)
2. Remove all files under /var/lib/solr/data/index/
3. Move/copy files from /tmp/snapshot.20121220155853703/ to
/var/lib/solr/data/index/
4. Restart Tomcat (or just solr)
Thanks everyone who's pitched in on this! Once I've got this working,
I'll document it.
-An
ously be a must sooner or later.
--
Andy D'Arcy Jewell
SysMicro Limited
Linux Support
T: 0844 9918804
M: 07961605631
E: andy.jew...@sysmicro.co.uk
W: www.sysmicro.co.uk
I can get the xhtml result, maybe I can store it
somewhere else outside of Solr.
Any advice on this will be highly appreciated.
Many Thanks & Kind Regards
Andy