I have the data ready for indexing now; it is a JSON file:
{"122": "20180320-08:08:35.038", "49": "VIPER", "382": "0", "151": "1.0",
"9": "653", "10071": "20180320-08:08:35.088", "15": "JPY", "56": "XSVC",
"54": "1", "10202": "APMKTMAKING", "10537": "XOSE", "10217": "Y", "48":
"179492540", "201": "1"
On 4/2/2018 9:00 PM, Raymond Xie wrote:
Thanks Rick and Adhyan
I see there is a "/browse" handler in solrconfig.xml (with echoParams set to explicit),
and under name="defaults" there is one item, "df", which is set to _text_.
My understanding is that I can list whatever fields I want to enable indexing and
searching on here, in parallel with _text_.
Raymond,
You can specify the default behavior in solrconfig.xml under each handler.
For instance, for /browse you can specify that it should look into name, and for
/query you can default it to a different field.
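As a rough illustration (the handler names and the "name" field here are only assumptions, not taken from your config), the defaults of each handler in solrconfig.xml can carry their own df entry:

  <requestHandler name="/browse" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="df">name</str>
    </lst>
  </requestHandler>

  <requestHandler name="/query" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="df">_text_</str>
    </lst>
  </requestHandler>

With something like this, /browse would search the name field by default while /query would fall back to _text_.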
On Mon, Apr 2, 2018 at 9:04 PM, Rick Leir wrote:
Raymond
There is a default field normally called df. You would normally use copyField
to copy all searchable fields into the default field.
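For illustration only (the field and type names are assumptions, not taken from Raymond's schema), the schema.xml side of that might look like:

  <field name="_text_" type="text_general" indexed="true" stored="false" multiValued="true"/>
  <copyField source="name" dest="_text_"/>
  <copyField source="genre" dest="_text_"/>

or simply <copyField source="*" dest="_text_"/> to copy everything. Then df (in solrconfig.xml or as a request parameter) points at _text_, and an unfielded q=batman searches all of the copied fields.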
Cheers -- Rick
On April 1, 2018 11:34:07 PM EDT, Raymond Xie wrote:
Hi Rick,
I sorted it out halfway:
I should have specified the field in the search query, so instead of
http://localhost:8983/solr/films/browse?q=batman, I should use:
http://localhost:8983/solr/films/browse?q=name:batman
Sorry for this newbie mistake.
But what about if I (or the user) don't know or don't want to specify which field to search?
Raymond
The output is not visible to me because the mailing list strips images. Please
try a different way to show the output.
Cheers -- Rick
On March 29, 2018 10:17:13 PM EDT, Raymond Xie wrote:
I am new to Solr, following Steve Rowe's example on
https://github.com/apache/lucene-solr/tree/master/solr/example/films:
It would be greatly appreciated if anyone can enlighten me where to start
troubleshooting, thank you very much in advance.
The steps I followed are:
Here ya go << END_OF
... (e.g. a special user ID used for this purpose).
Anyway, something to look at.
-Original Message-
From: Jack Krupansky [mailto:jack.krupan...@gmail.com]
Sent: Wednesday, April 08, 2015 10:39 PM
To: solr-user@lucene.apache.org; Brian Usrey
Subject: Re: SOLR searching
Are there at least a small number of categories of users with discrete
prices, or can each user have their own price? The former is doable; the
latter is not, unless the number of users is relatively small, in which case
they are equivalent to categories.
You could have a set of dynamic fields, such as price_*, one per category.
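A rough sketch of what that could look like (the price_* naming and the category names are my own assumptions, not from this thread):

  <dynamicField name="price_*" type="float" indexed="true" stored="true"/>

Each product document would then carry fields such as price_retail, price_wholesale, price_partner, and the client picks the one matching the user's category, e.g. fl=name,price_partner&sort=price_partner asc.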
I am extremely new to SOLR and am wondering if it is possible to do something
like the following. Basically I have been tasked with researching SOLR to see
if we can replace our current searching algorithm.
We have a website with product data. Product data includes standard things
like Name, SKU, ...
I have the same issue with my search results, and I have used solr.TextField for this.
Can you please share the schema.xml used for this Solr instance?
What's your field type definition where your X-Ray string is stored?
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
Current project: http://www.solr-start.com/ - Accelerating your Solr proficiency
On Wed, Apr 2, 2014 at 3:19 PM, Priti Solanki wrote:
Hello friends,
I have got one issue.
I am trying to search for "X-Ray Machine".
Now Solr is returning multiple rows even if I am doing an exact search [on the
Solr server directly].
Secondly, I am using a PHP client to talk to Solr, but for some reason I
can't search for "X-Ray Machine". The Solr response ...
On Wed, 2014-02-05 at 08:17 +0100, Sathya wrote:
> I am running a single-instance Solr and the JVM heap space is minimum 6.3GB
> and maximum 24.31GB. Nothing else is running to consume the 24GB except the Tomcat
> server. I have only 2 copyField entries.
Your Xmx is the same size as your RAM. It should be lower, so that the operating
system has memory left over for the disk cache.
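For example (the numbers are purely illustrative, not a recommendation for this particular setup), on a 24GB box running Tomcat you might cap the heap in Tomcat's bin/setenv.sh and leave the rest to the OS disk cache:

  JAVA_OPTS="$JAVA_OPTS -Xms4g -Xmx8g"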
Shawn
On 2/4/2014 9:49 PM, Sathya wrote:
> Yes, all the instances are reading the same 8GB data at a time. The Java
> search programs (>15 instances) are running on different machines, in different
> JVMs, and they access the Solr server machine (Ubuntu 64-bit). And the Solr
> index is not sharded. The query r...
(more than 5 seconds per search in a single instance).
Maybe you need a larger Java heap.
-- Jack Krupansky
-Original Message-
From: Sathya
Sent: Tuesday, February 4, 2014 6:11 AM
To: solr-user@lucene.apache.org
Subject: Solr Searching Issue
Hi Friends,
I have been working with Solr 4.6.0 for the last 2 months. I have indexed the data in
Solr ...
Hi Furkan,
I have indexed subjects containing only 1 to 10 words per subject.
And the query time is a minimum of 7 seconds for one search. And I have a single
Solr instance only.
... It's getting too slow. It's taking more than 8 hours to search the 7 lakh
(700,000) documents. I am using an Ubuntu machine with 24GB RAM and a 1TB HDD.
Kindly tell me the solution to solve this issue.
Hi-
Unit tests to the rescue! The current unit test system in the 4.x branch
catches code sequence problems.
[junit4]> Throwable #1: java.lang.IllegalStateException:
TokenStream contract violation: reset()/close() call missing, reset()
called multiple times, or subclass does not call super.reset().
Hi,
I am working on OpenNLP integration with Solr. I have successfully applied
the patch (LUCENE-2899-x.patch) to the latest Solr source code (branch_4x).
I have designed an OpenNLP analyzer and indexed data with it. The analyzer declaration
in schema.xml is as ...
Solr to use and precisely how to use them.
The word delimiter filter and edge n-gram filter are possible tools to use
in such cases.
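As a sketch only (the type name and attribute values are assumptions, not something Jack posted), a field type along those lines could be:

  <fieldType name="text_split" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.WordDelimiterFilterFactory"
              generateWordParts="1" generateNumberParts="1"
              splitOnNumerics="1" preserveOriginal="1"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

With this, "one1" would be indexed as "one1", "one" and "1", so a query for "one" would match it as well.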
-- Jack Krupansky
-Original Message-
From: Mysurf Mail
Sent: Monday, September 23, 2013 3:34 AM
To: solr-user@lucene.apache.org
Subject: solr
My field is defined as type text_en.
(text_en is defined as in the original schema.xml that comes with Solr.)
Now, my field has the following values:
- "one"
- "one1"
Searching for "one" returns only "one". What causes this? How can I
change it?
> how it is possible also explain me and which tokenizer
> class can support for
> finding the special characters .
Probably WhitespaceTokenizer will do the job for you. Plus you need to escape
special characters (if you are using the defType=lucene query parser).
Anyhow, you need to provide us more details.
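For example (the field names are made up), with the lucene query parser you put a backslash in front of the special character:

  q=title:C\+\+
  q=model:AB\-123

and in a URL the backslash itself may need to be encoded as %5C. The characters that need escaping are + - && || ! ( ) { } [ ] ^ " ~ * ? : \ .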
Thanks for the reply.
How is it possible? Also, please explain which tokenizer class can support
finding the special characters.
Yes.
> -Original Message-
> From: vighnesh [mailto:svighnesh...@gmail.com]
> Sent: Monday, October 03, 2011 2:22 AM
> To: solr-user@lucene.apache.org
> Subject: solr searching for special characters?
>
> Hi all,
>
> I need to search special characters in Solr ...
Hi,
You need to share relevant parts of your schema for us to be able to see what's
going on.
Try using fieldType="text". Basically, you need a fieldType which has the
LowerCaseFilter included.
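For illustration (field and type names are just examples), such a type in schema.xml might be:

  <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>
  <field name="title" type="text" indexed="true" stored="true"/>

Because the same analyzer runs at index and query time, "eCommons" and "ecommons" both end up as "ecommons". You would need to reindex after changing the field type.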
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
Sounds like a WordDelimiterFilter config issue; please refer to
http://lucene.472066.n3.nabble.com/Issue-with-solr-searching-words-with-not-able-to-search-tp4128549p4133845.html
Also it will help if you could provide:
1) Tokenizers/Filters config in schema file
2) analysis.jsp output in admin page.
2010/10/26 wu liu
Hi all,
I just noticed a weird thing happened to my Solr search results.
If I do a search for "ecommons", it cannot get the results for "eCommons"; instead,
if I do a search for "eCommons", I can only get all the matches for "eCommons",
but not "ecommons".
I cannot figure out why.
Please help me.
Yes, reindexing is necessary for protwords/synonym updates.
-
Grijesh
Yes to restart, no to re-index. Was hoping that wouldn't be necessary.
I'll do that now.
On 08/09/10 11:48, Grijesh.singh wrote:
Have you restarted Solr after adding words to protwords, and reindexed the data?
-
Grijesh
I have "harry" as a protected word in protword.txt
Here is the xml definition for my text column
Hey,
I've got a Solr server up and running over about 10 million rows. I have
a text column that I use for my main search and a couple of int/str
fields used for faceting.
Whenever I search for the term "harri" as q=title:harri it will match
the term "harry" as well.
How much disk space is used by the index?
If you run the Lucene CheckIndex program, how many terms etc. does it report?
When you do the first facet query, how much does the memory in use grow?
Are you storing the text fields, or only indexing them? Do you fetch the
facets only, or do you also fetch the documents?
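In case it helps, CheckIndex is run against the index directory roughly like this (the jar name and path are assumptions, adjust for your version):

  java -cp lucene-core-2.9.3.jar org.apache.lucene.index.CheckIndex /path/to/solr/data/index

It prints the number of segments, documents and terms, and flags any corruption.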
This is a very small number of documents (7000), so I am surprised Solr is
having such a hard time with it!!
I do facet on 3 terms.
Subsequent "hello" searches are faster, but still well over a second. This is
a very fast Mac Pro, with 6GB of RAM.
Thanks,
Peter
On Aug 25, 2010, at 9:52 AM,
On Wed, Aug 25, 2010 at 11:29 AM, Peter Spam wrote:
> So, I went through all the effort to break my documents into max 1 MB chunks,
> and searching for hello still takes over 40 seconds (searching across 7433
> documents):
>
> 8 results (41980 ms)
>
> What is going on??? (scroll down for
... For that matter, I think it should be made possible to
return multiple rows in an ArrayList.
-Original message-
From: Peter Spam
Sent: Tue 17-08-2010 00:47
To: solr-user@lucene.apache.org;
Subject: Re: Solr searching performance issues, using large documents
Still stuck on this - any hints on how to write the JavaScript to split a
document? Thanks!
-Pete
On Aug 5, 2010, at 8:10 PM, Lance Norskog wrote:
You may have to write your own javascript to read in the giant field
and split it up.
On Thu, Aug 5, 2010 at 5:27 PM, Peter Spam wrote:
I've read through the DataImportHandler page a few times, and still can't
figure out how to separate a large document into smaller documents. Any hints?
:-) Thanks!
-Peter
On Aug 2, 2010, at 9:01 PM, Lance Norskog wrote:
Spanning won't work- you would have to make overlapping mini-documents
if you want to support this.
I don't know how big the chunks should be- you'll have to experiment.
Lance
On Mon, Aug 2, 2010 at 10:01 AM, Peter Spam wrote:
What would happen if the search query phrase spanned separate document chunks?
Also, what would the optimal size of chunks be?
Thanks!
-Peter
On Aug 1, 2010, at 7:21 PM, Lance Norskog wrote:
Not that I know of.
The DataImportHandler has the ability to create multiple documents
from one input stream. It is possible to create a DIH file that reads
large log files and splits each one into N documents, with the file
name as a common field. The DIH wiki page tells you in general how to
make this work.
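As a very rough sketch of that idea (paths, field names and the per-line granularity are my own assumptions, not Lance's config), a data-config.xml could walk the log directory and emit one document per line, carrying the file name along:

  <dataConfig>
    <dataSource type="FileDataSource" encoding="UTF-8"/>
    <document>
      <entity name="files" processor="FileListEntityProcessor" dataSource="null"
              baseDir="/var/logs" fileName=".*\.log" rootEntity="false">
        <field column="fileAbsolutePath" name="filename"/>
        <entity name="lines" processor="LineEntityProcessor"
                url="${files.fileAbsolutePath}">
          <field column="rawLine" name="body"/>
        </entity>
      </entity>
    </document>
  </dataConfig>

You would still need to generate a uniqueKey for each chunk (for instance with a transformer that combines the file name with a counter), and grouping several lines into one larger chunk would take a custom transformer.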
Thanks for the pointer, Lance! Is there an example of this somewhere?
-Peter
On Jul 31, 2010, at 3:13 PM, Lance Norskog wrote:
Ah! You're not just highlighting, you're snippetizing. This makes it easier.
Highlighting does not stream- it pulls the entire stored contents into
one string and then pulls out the snippet. If you want this to be
fast, you have to split up the text into small pieces and only
snippetize from those pieces.
However, I do need to search the entire document, or else the highlighting will
sometimes be blank :-(
Thanks!
- Peter
ps. sorry for the many responses - I'm rushing around trying to get this
working.
On Jul 31, 2010, at 1:11 PM, Peter Spam wrote:
> Correction - it went from 17 seconds to 10
Correction - it went from 17 seconds to 10 seconds - I was changing the
hl.regex.maxAnalyzedChars the first time.
Thanks!
-Peter
On Jul 31, 2010, at 1:06 PM, Peter Spam wrote:
On Jul 30, 2010, at 1:16 PM, Peter Karich wrote:
> did you already try other values for hl.maxAnalyzedChars=2147483647
Yes, I tried dropping it down to 21, but it didn't have much of an impact (one
search I just tried went from 17 seconds to 15.8 seconds, and this is an 8-core
Mac Pro with 6GB of RAM).
On Jul 30, 2010, at 7:04 PM, Lance Norskog wrote:
> Wait- how much text are you highlighting? You say these logfiles are X
> big- how big are the actual documents you are storing?
I want it to be like Google - I put the entire (sometimes 60MB) doc in a field,
and then just highlight 2-4 lines of it.
Wait- how much text are you highlighting? You say these logfiles are X
big- how big are the actual documents you are storing?
On Fri, Jul 30, 2010 at 1:16 PM, Peter Karich wrote:
Hi Peter :-),
did you already try other values for
hl.maxAnalyzedChars=2147483647
? Also regular expression highlighting is more expensive, I think.
What does the 'fuzzy' variable mean? If you use this to query via
"~someTerm" instead of "someTerm",
then you should try the trunk of Solr, which is a lot faster for fuzzy queries.
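For what it's worth, the highlighting knobs in play here are ordinary request parameters; a hedged example (the field name body is an assumption):

  http://localhost:8983/solr/select?q=body:hello&hl=true&hl.fl=body&hl.snippets=3&hl.fragsize=200&hl.maxAnalyzedChars=51200

Keeping hl.maxAnalyzedChars small bounds how much of each huge document the highlighter has to re-analyze.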
I do store term vectors.
-Pete
On Jul 30, 2010, at 7:30 AM, Li Li wrote:
Highlighting's time is mainly spent on getting the field which you want
to highlight and tokenizing this field (if you don't store term vectors).
You can check what's wrong.
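For reference (the field name is just an example), the schema attributes being discussed look like this:

  <field name="body" type="text" indexed="true" stored="true"
         termVectors="true" termPositions="true" termOffsets="true"/>

With all three enabled, and depending on your Solr version, you may also be able to set hl.useFastVectorHighlighter=true, which reads the stored term vectors instead of re-analyzing the whole stored text.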
2010/7/30 Peter Spam :
If I don't do highlighting, it's really fast. Optimize has no effect.
-Peter
On Jul 29, 2010, at 11:54 AM, dc tech wrote:
Are you storing the entire log file text in Solr? That's almost 3GB of
text that you are storing in Solr. Try to:
1) Check whether this is first-time performance or on repeat queries with the same fields
2) Optimize the index and test performance again
3) Index without storing the text and see what the performance is like
Any ideas? I've got 5000 documents with an average size of 850k each, and it
sometimes takes 2 minutes for a query to come back when highlighting is turned
on! Help!
-Pete
On Jul 21, 2010, at 2:41 PM, Peter Spam wrote:
From the mailing list archive, Koji wrote:
> 1. Provide another field for highlighting and use copyField to copy plainText
> to the highlighting field.
and Lance wrote:
http://www.mail-archive.com/solr-user@lucene.apache.org/msg35548.html
> If you want to highlight field X, doing the
> termO
Data set: About 4,000 log files (will eventually grow to millions). Average
log file is 850k. Largest log file (so far) is about 70MB.
Problem: When I search for common terms, the query time goes from under 2-3
seconds to about 60 seconds. TermVectors etc. are enabled. When I disable
highlighting, queries are fast again.
On Thu, 30 Oct 2008 15:50:58 -0300
"Jorge Solari" <[EMAIL PROTECTED]> wrote:
>
>
> in the schema file.
or use the dismax query handler.
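For instance (the field name is taken from the question below, everything else is an assumption), something along these lines restricts the query to one field:

  http://localhost:8983/solr/select?defType=dismax&qf=AnimalName&q=German+Shepard

(depending on the Solr version this is qt=dismax or defType=dismax); with qf limited to AnimalName, PersonName is never consulted.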
b
_
{Beto|Norberto|Numard} Meijome
Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming or what?"
... name, plant generic name etc., which are mutually exclusive>
UniqueId:long
For each document in the data set, there will be only one value of the above
three.
In my Solr query from the client I am using AnimalName:German Shepard.
The returned result contains PersonName with 'Shepard' in it, even though I am
querying on the AnimalName field.
Can anyone point out what's happening and how to prevent scanning other
columns/fields?
I appreciate your help.
Thanks
Ravi
thanks ! I think I fixed the issue and it's doing good :)
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: RE: Solr searching issue..
> Date: Mon, 14 Jul 2008 20:12:00 +
>
> Copy field dest="text" ...
Again, whatever I have pasted didn't work! I have attached the schema.xml file
instead... sorry for spamming you all.
thanks
ak
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: RE: Solr searching issue..
>
For some strange reason my copy and paste didn't work! Sorry to trouble
you all... hope you can see them now.
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: RE: Solr searching issue..
> Date: Mon, 14 Jul 20
Thanks, I will give it a try and get back to you.
I will give it a try and get back to you
> Date: Fri, 11 Jul 2008 20:14:11 +0530
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: Solr searching issue..
What was the type of the field that you were using? I guess you could achieve it
by a simple swap of text and string.
You can use EdgeNGramTokenizer available with Solr 1.3 to achieve this. But
I'd think again about introducing this kind of search as n-grams can bloat
your index size.
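If you do go that route, a sketch of an index-time edge n-gram type (names and gram sizes are only examples, adjust for your Solr version):

  <fieldType name="text_prefix" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="2" maxGramSize="15"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

"john" is then indexed as "jo", "joh", "john", so a plain query for Joh matches without a wildcard - at the cost of a larger index, as noted above.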
On Fri, Jul 11, 2008 at 3:58 PM, dudes dudes <[EMAIL PROTECTED]> wrote:
Hi solr-users,
version type: nightly build solr-2008-07-07
If I search for the name John, it finds it without any issues. On the other
hand, if I search for Joh*, it also finds all the possible matches. However, if
I search for "Joh" it doesn't find any possible match; in other words, it ...