the schema
> thinks the definition is and one for what is actually in the Lucene index.
>
> 2> add &debug=query to your queries, and run them from the admin UI.
> That’ll give you a _lot_ quicker turn-around as well as some good info
> about how the query was actually executed.
ameters.html#CommonQueryParameters-Thefq_FilterQuery_Parameter
> [2] https://lucene.apache.org/solr/guide/6_6/working-with-dates.html
>
> On 07/06/2019 14:02, Mark Fenbers - NOAA Federal wrote:
> > Hello!
> >
> > I have a search setup and it works fine. I sea
Hello!
I have a search setup and it works fine. I search a text field called
"logtext" in a database table. My Java code is like this:
SolrQuery query = new SolrQuery();
query.setQuery(searchWord);
query.setParam("df", "logtext");
Then I execute the search... and it works just great. But now
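For reference, a minimal, self-contained SolrJ sketch of the pattern described above; the client setup, core name (EventLog) and example search word are assumptions added here, not part of the original snippet:

    import java.io.IOException;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class LogtextSearch {
        public static void main(String[] args) throws SolrServerException, IOException {
            // Assumed base URL and core name; adjust to your installation.
            // Builder is SolrJ 6+; SolrJ 5.x has new HttpSolrClient(url) instead.
            HttpSolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/EventLog").build();

            String searchWord = "Sunday";        // example search term
            SolrQuery query = new SolrQuery();
            query.setQuery(searchWord);
            query.setParam("df", "logtext");     // search the logtext field by default

            QueryResponse response = solr.query(query);
            System.out.println("Found " + response.getResults().getNumFound() + " documents");
            solr.close();
        }
    }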
ring. If this is the
case then try changing your field type to text_en or text_general depending
on your requirements.
On Wed, 16 Dec 2015, 19:51 Mark Fenbers wrote:
Greetings,
I had my Solr searching capabilities working for a while. But today I
inadvertently "unload"d my core from the Admin Interface. After adding
it back in, it is not working right. Because Solr was down for a while
in recent weeks, I have also done a full import with the clean option.
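For reference, a full import with the clean option can be triggered with a request of this form (core name EventLog as elsewhere in this archive):

    http://localhost:8983/solr/EventLog/dataimport?command=full-import&clean=true

Note that clean=true removes the existing documents from the index before the new data is imported.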
Greetings!
I want my spell-checker to be based on a file
(/usr/share/dict/linux.words should suffice). Word-breaks features
would also be a benefit. I have previously indexed my docs for
searching with minimal alterations to the baseline Solr configuration.
My "docs" are user-typed text, t
th Sciences
Syngenta UK
Email: geraint.d...@syngenta.com
-Original Message-
From: Mark Fenbers [mailto:mark.fenb...@noaa.gov]
Sent: 16 October 2015 19:43
To: solr-user@lucene.apache.org
Subject: Re: File-based Spelling
On 10/13/2015 9:30 AM, Dyer, James wrote:
Mark,
The older spellcheck implementatio
Yes, I'm aware that building an index is expensive and I will remove
"buildOnStartup" once I have it working. The field I added was an
attempt to get it working...
I have attached my latest version of solrconfig.xml and schema.xml (both
are in the same attachment), except that I have removed
On 10/13/2015 9:30 AM, Dyer, James wrote:
Mark,
The older spellcheck implementations create an n-gram sidecar index, which is
why you're seeing your name split into 2-grams like this. See the IR Book by
Manning et al, section 3.3.4 for more information. Based on the results you're
getting,
Greetings!
I'm attempting to use a file-based spell checker. My sourceLocation is
/usr/share/dict/linux.words, and my spellcheckIndexDir is set to
./data/spFile. BuildOnStartup is set to true, and I see nothing to
suggest any sort of problem/error in solr.log. However, in my
./data/spFile/
On 10/12/2015 5:38 AM, Duck Geraint (ext) GBJH wrote:
"When I use the Admin UI (v5.3.0), and check the spellcheck.build box"
Out of interest, where is this option within the Admin UI? I can't find
anything like it in mine...
This is in the expanded options that open up once I put a checkmark in
Greetings!
I'm new to Solr Spellchecking... I have yet to get it to work.
Attached is a snippet from my solrconfig.xml pertaining to my spellcheck
efforts.
When I use the Admin UI (v5.3.0), and check the spellcheck.build box, I
get a NullPointerException stacktrace. The actual stacktrace i
Greetings!
Attached is a snippet from solrconfig.xml pertaining to my spellcheck
efforts. When I use the Admin UI (v5.3.0), and check the
spellcheck.build box, I get a NullPointerException stacktrace. The
actual stacktrace is at the bottom of the attachment. The
FileBasedSpellChecker.build
Thanks for the suggestion, but I've looked at aspell and hunspell and
neither provide a native Java API. Further, I already use Solr for a
search engine, too, so why not stick with this infrastructure for
spelling, too? I think it will work well for me once I figure out the
right configuratio
modules you can configure. I would recommend trying
them first.
And, frankly, I still don't know what your business case is.
Regards,
Alex.
Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/
On 1 October 2015 at 12:38, Mark Fenbers wrote:
Yes, a
stion, it would be better to ask what _business_
level functionality you are trying to achieve and see if Solr can help
with that. Starting from Lucene code is less useful :-)
Regards,
Alex.
Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/
On 1 October 2015 at 07:48, Mark Fenbers wrote:
Greetings!
Being a newbie, I'm still mostly in the dark regarding where the line is
between Solr and Lucene. The following code snippet is -- I think --
all Lucene and no Solr. It is a significantly modified version of some
example code I found on the net.
dir =
FSDirectory.open(FileSyste
work?
Upayavira
On Mon, Sep 28, 2015, at 09:55 PM, Mark Fenbers wrote:
Greetings!
I have highlighting turned on in my Solr searches, but what I get back
is tags surrounding the found term. Since I use a SWT StyledText
widget to display my search results, what I really want is the offset
and leng
Greetings!
I have highlighting turned on in my Solr searches, but what I get back
is tags surrounding the found term. Since I use a SWT StyledText
widget to display my search results, what I really want is the offset
and length of each found term, so that I can highlight it in my own way
wi
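One possible approach (not from this thread) is to keep highlighting enabled, ask for a single whole-field fragment (hl.fragsize=0), and derive the offsets by stripping the tags from the returned snippet; a sketch assuming the default <em>/</em> markers:

    import java.util.ArrayList;
    import java.util.List;

    public class HighlightOffsets {
        // Returns {offset, length} pairs measured against the de-tagged text.
        // Assumes the default <em>/</em> highlight tags and hl.fragsize=0,
        // so the fragment is the whole stored field.
        static List<int[]> offsets(String fragment) {
            List<int[]> spans = new ArrayList<>();
            StringBuilder plain = new StringBuilder();
            int i = 0;
            while (i < fragment.length()) {
                int start = fragment.indexOf("<em>", i);
                if (start < 0) { plain.append(fragment.substring(i)); break; }
                plain.append(fragment, i, start);
                int end = fragment.indexOf("</em>", start + 4);
                String term = fragment.substring(start + 4, end);
                spans.add(new int[]{plain.length(), term.length()});
                plain.append(term);
                i = end + 5;
            }
            return spans;
        }

        public static void main(String[] args) {
            for (int[] s : offsets("Rain expected <em>Sunday</em> and <em>Sunday</em> night"))
                System.out.println("offset=" + s[0] + " length=" + s[1]);
        }
    }

The resulting offsets can be fed straight to a StyleRange on the SWT StyledText widget.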
For the brief period that I had spell-checking working, I noticed that
the results record had the start/end position within the text of the
misspelled word. Is there anyway to get the same start/end position
when doing a search? I want to be able to highlight the search term in
the text. Def
On 9/27/2015 12:49 PM, Alexandre Rafalovitch wrote:
Mark,
Thank you for your valuable feedback. The newbie's views are always appreciated.
The Admin UI command is designed for creating a collection based on
the configuration you already have. Obviously, it makes that point
somewhat less than
intenance
we can, but as always the documentation lags the code.
Yeah, things are a bit ragged. The admin UI/core UI is really a legacy
bit of code that has _always_ been confusing, I'm hoping we can pretty
much remove it at some point since it's as trappy as it is.
Best,
Erick
On Sat, Sep 26,
Greetings,
Being a Solr newbie, I've run the examples in the "Solr Quick Start"
document and got a feel for Solr's capabilities. Now I want to move on
and work with my own data and my own Solr server without using the
example setup (i.e., "solr -e" options). This is where the
documentation
se box. As you can tell I'm grasping at straws. I'm still
puzzled why you don't have a "data" directory here, but that
shouldn't really matter. How did you create this index? I don't mean
data import handler more how did you create the core that you're
indexing
On 9/23/2015 12:30 PM, Erick Erickson wrote:
Then my next guess is you're not pointing at the index you think you are
when you 'rm -rf data'
Just ignore the Elall field for now, I should think, although get rid of it
if you don't think you need it.
DIH should be irrelevant here.
So let's back u
On 9/23/2015 11:28 AM, Erick Erickson wrote:
This is totally weird.
Don't only re-index your old docs, find the data directory and
rm -rf data (with Solr stopped) and re-index.
I pretty much do that. The thing is: I don't have a data directory
anywhere! Most of my stuff is in /localapps/dev/E
On 9/23/2015 10:21 AM, Alessandro Benedetti wrote:
m so those 2 are the queries at the minute :
1) logtext:deeper
2) logtext:*deeper*
According to your schema, the log text field is of type "text_en".
This should be completely fine.
Have you ever changed your schema at run time, without re-inde
Mugeesh, I believe you are on the right path and I was eager to try out
your suggestion. So my schema.xml now contains this snippet (changes
indicated by ~):
required="true" />
~ stored="true" required="true" />
required="true" />
required="true" />
~ stored="true" multiValue
When I submit this:
http://localhost:8983/solr/EventLog/select?q=deeper&wt=json&indent=true
then I get these (empty) results:
{
  "responseHeader":{
    "status":0,
    "QTime":1,
    "params":{
      "q":"deeper",
      "indent":"true",
      "wt":"json"}},
  "response":{"numFound":0,"start":
this path doesn't lead to main index dir.
On Mon, Sep 21, 2015 at 5:13 PM, Mark Fenbers wrote:
A snippet of my solrconfig.xml is attached. The snippet only contains the
Spell checking sections (for brevity) which should be sufficient for you to
see all the pertinent info you seek.
Thank
that this path doesn't lead to main index dir.
On Mon, Sep 21, 2015 at 5:13 PM, Mark Fenbers wrote:
You were right about finding only the Wednesday occurrences at the
beginning of the line. But attached (if it works) is a screen capture
of my admin UI. But unlike your suspicion, the index text is being
parsed properly, it appears. So I'm uncertain where this leads me.
Also attached is the
On Sat, Sep 19, 2015 at 12:34 AM, Mark Fenbers
wrote:
Greetings,
Whenever I try to build my spellcheck index
(params.set("spellcheck.build", true); or put a check in the
spellcheck.build box in the web interface) I get the following stacktrace.
Removing the write.lock file does no good. Th
Ok, Erick, you provided useful info to help with my understanding.
However, I still get zero results when I search on literal text (e.g.,
"Wednesday"), even with making changes that you suggest. However, I
discovered that if I search on "Wednesday*" (trailing asterisk), then I
get all the resul
On 9/18/2015 8:33 PM, Shawn Heisey wrote:
The "field:*" syntax is something you should not get in the habit of
using. It is a wildcard search. What this does under the covers is
looks up all the possible terms in that field across the entire index,
and constructs a Lucene query that actually i
Greetings!
Using the browser interface to run a query on my indexed data,
specifying "q=logtext:*" gives me all 9800+ documents indexed -- as
expected. But if I specify something like "q=logtext:Sunday", then I
get zero results even though ~1000 documents contain the word Sunday.
So I'm puz
OK, I understand now! To view the results before going much farther, I
simply did a "System.err.println(queryresponse);" which printed the
results in a JSON-like format. Instead, I need to use the methods of
the queryresponse object to view my output. Apparently, the
queryresponse.toString()
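Continuing the SolrJ sketch from earlier in this archive (solr and query as defined there), the results can be read through the typed accessors rather than toString(); the field name logtext is taken from earlier messages:

    import org.apache.solr.common.SolrDocument;
    import org.apache.solr.common.SolrDocumentList;

    QueryResponse rsp = solr.query(query);
    SolrDocumentList docs = rsp.getResults();
    System.out.println("numFound=" + docs.getNumFound());
    for (SolrDocument doc : docs) {
        System.out.println(doc.getFieldValue("logtext"));
    }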
Greetings,
Whenever I try to build my spellcheck index
(params.set("spellcheck.build", true); or put a check in the
spellcheck.build box in the web interface) I get the following
stacktrace. Removing the write.lock file does no good. The message
comes right back anyway. I read in a post th
Greetings!
I cannot seem to configure the spell-checker to return results in XML
instead of JSON. I tried programmatically, as in ...
params.set("wt", "xml");
solr.query(params);
... and I tried through the solrconfig.xml. My problem here is that it
is not exactly clear (because I've seen
Greetings,
Using an Index-based spell-checker, I get some results, but not what I'm
looking for. Using a File-based checker, I never get any results, but
no errors either. I've trimmed down my configuration to only use one
spell-checker and named it "default", but still empty results on my t
Greetings!
Mikhail Khludnev, in his post to the thread "Google didn't help on this
one!", has pointed out one bug in Solr-5.3.0, and I was able to uncover
another one (which I wrote about in the same thread). Therefore, and
thankfully, I've been able to get past my configuration issues.
So n
Indeed! should be changed to in the "Spell Checking"
document
(https://cwiki.apache.org/confluence/display/solr/Spell+Checking) and in
all the baseline solrconfig.xml files provided in the distribution. In
addition, ' internal' should be
removed/changed in the same document and same solrco
, so starting there can be a way to get
something going easily.
Upayavira
On Wed, Sep 16, 2015, at 12:22 PM, Mark Fenbers wrote:
On 9/15/2015 6:49 PM, Shawn Heisey wrote:
From the information we have, we cannot tell if this is a problem
request or not. Do you have a core/collection na
On 9/16/2015 5:24 AM, Alessandro Benedetti wrote:
As a reference I always suggest :
https://cwiki.apache.org/confluence/display/solr/Spell+Checking
I read this doc and have found it moderately helpful to my current
problem. But I have at least one question about it, especially given
that my
On 9/15/2015 6:49 PM, Shawn Heisey wrote:
From the information we have, we cannot tell if this is a problem
request or not. Do you have a core/collection named "EventLog" on your
Solr server? It will be case sensitive. If you do, does that config
have a handler named "spellCheckCompRH" in it
Greetings,
In my app, I've successfully implemented full-text searching
capabilities on a database using Solr. Now I'm ready to move on to
using Solr's spell check/suggest capability. Having succeeded in
searching, I figured spell-checking would be an easier step. Well, not
for me!
I'm ra
nge anything! So, I guess I get to move on from
this and see what other hurdles I run into!
Thanks for the help!
Mark
On 9/15/2015 11:13 AM, Yonik Seeley wrote:
On Tue, Sep 15, 2015 at 11:08 AM, Mark Fenbers wrote:
I'm working with the spellcheck component of Solr for the first time. I
I'm working with the spellcheck component of Solr for the first time.
I'm using SolrJ, and when I submit my query, I get a Solr Exception:
"Expected mime type octet/stream but got text/html."
What in the world is this telling me?? The query object I submitted is
an entire sentence, not a si
Greetings!
My Java app, using SolrJ, now successfully does searches. I've used the
web interface to do a full-text indexing and for each new entry added
through my app, I have it add to this index.
But now I want to use SolrJ to also do spell checking. I have read
several documents on this
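A minimal SolrJ sketch of issuing a spellcheck request and reading back the suggestions; the handler name /spell is an assumption and must match the handler defined in solrconfig.xml, and solr is the client from the earlier sketch:

    import org.apache.solr.client.solrj.response.SpellCheckResponse;

    SolrQuery q = new SolrQuery();
    q.setRequestHandler("/spell");              // assumed handler name
    q.set("spellcheck", true);
    q.set("spellcheck.q", "Wensday morining");  // text to check
    QueryResponse rsp = solr.query(q);
    SpellCheckResponse sc = rsp.getSpellCheckResponse();
    if (sc != null) {
        for (SpellCheckResponse.Suggestion s : sc.getSuggestions()) {
            System.out.println(s.getToken() + " -> " + s.getAlternatives());
        }
    }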
Additional experimenting lead me to the discovery that /dataimport does
*not* index words with a preceding %20 (a URL-encoded space), or in fact
*any* preceding %xx encoding. I can probably replace each %20 with a
'+' in each record of my database -- the dataimporter/indexer doesn't
sneeze at
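One way to avoid editing every database record (a sketch, not from the original message) is to decode the %xx escapes before the text is indexed, for example with java.net.URLDecoder; note that URLDecoder also turns '+' into a space:

    import java.net.URLDecoder;
    import java.nio.charset.StandardCharsets;

    public class DecodeBeforeIndexing {
        public static void main(String[] args) {
            String raw = "heavy%20rain%20expected";   // example of a stored value
            // decode(String, Charset) is Java 10+; older JDKs use decode(String, "UTF-8")
            String decoded = URLDecoder.decode(raw, StandardCharsets.UTF_8);
            System.out.println(decoded);              // -> heavy rain expected
        }
    }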
Greetings!
So, I've created my first index and am able to search programmatically
(through SolrJ) and through the Web interface. (Yay!) I get non-empty
results for my searches!
My index was built from database records using
/dataimport?command=full-import. I have 9936 records in the table
On 9/7/2015 4:52 PM, Shawn Heisey wrote:
The only files that should be in server/lib is jetty and servlet jars.
The only files that should be in server/lib/ext is logging jars (slf4j,
log4j, etc).
In the server/lib directory on Solr 5.3.0:
ext/
javax.servlet-api-3.1.0.jar
jetty-continuation-9.
On 9/6/2015 4:25 PM, Shawn Heisey wrote:
If we assume that it cannot be a problem with multiple jar versions,
which sounds pretty reasonable, then I think SOLR-6188 is probably to blame.
https://issues.apache.org/jira/browse/SOLR-6188
I think you should try this as a troubleshooting step: Ren
On 9/6/2015 12:00 PM, Shawn Heisey wrote:
It looks like you have jars in the solrhome/lib directory, which is
good. You probably don't need the dataimporthandler jar for -extras if
you just want to load from a database.
It does appear that you also have <lib> directives in your
solrconfig.xml, which
On 9/5/2015 10:40 PM, Shawn Heisey wrote:
Your solr home is /localapps/dev/EventLog ... Solr automatically loads
any jar found in the lib directory in the solr home, so it is attempting
to use /localapps/dev/EventLog/lib for the classloader.
For the other things you noticed, I believe I know why
The log data is from solr.log. There are a couple of puzzling items.
1. On line 2015-09-05 19:19:56.678, it shows a "lib" subdir
(/localapps/dev/EventLog/lib) which doesn't exist and isn't
specified anywhere that I can find (lots of "find | grep"
commands). I did, at one point, specify
Greetings,
I'm moving on from the tutorials and trying to setup an index for my own
data (from a database). All I did was add the following to the
solrconfig.xml (taken verbatim from the example in Solr documentation,
except for the name="config" pathname) and I get an error in the
web-based
Chris,
The document "Uploading Structured Data Store Data with the Data Import
Handler" has a number of references to solrconfig.xml, starting on Page
2 and continuing on page 3 in the section "Configuring solrconfig.xml".
It also is mentioned on Page 5 in the "Property Writer" and the "Data
Hi, I've been fiddling with Solr for two whole days since
downloading/unzipping it. I've learned a lot by reading 4 documents and
the web site. However, there are a dozen or so instances of
solrconfig.xml in various $HOME/solr-5.3.0 subdirectories. The
documents/tutorials say to edit the so