Hi Bernd,
But is this really (causing) a problem? What -Xmx are you using?
Otis
Search Analytics - http://sematext.com/search-analytics/index.html
Performance Monitoring - http://sematext.com/spm/index.html
On Tue, Sep 18, 2012 at 2:50 AM, Bernd Fehling
wrote:
> Hi list,
>
> while monitoring
Hi list,
while monitoring my systems I see a jump in JVM memory consumption
of about 5GB after 2 to 5 days of running.
After starting the system (search node only, no replication during search)
Solr uses between 6.5GB and 10.3GB of JVM heap when idle.
If the search node is online and serves requests
Hi,
I am unable to create compound index format in 3.6.1 in spite of setting
as true. I do not see any .cfs file; instead all the
.fdx, .frq etc. files are seen, and I see segments_8 even though the mergeFactor
is at 4. Should I not see only 4 segment files at any time?
Please find attached schema an
I have the same error. Can you guide me on how to solve it? My id:
bhavesh.jogi...@gmail.com
--
View this message in context:
http://lucene.472066.n3.nabble.com/Logging-from-data-config-xml-tp3956009p4008540.html
Sent from the Solr - User mailing list archive at Nabble.com.
I am using the following defines and query, and want to highlight the
"title" and "body" elements of HTML documents.
FieldTypes defines:
=
=
Field defines:
=
There is another option: pairing multi-valued roles and fields. Multi-valued
fields support in-order return: the values are returned in the same order you
added them. This means that you can have two fields with matched pairs of
values.
Secure data often has a many-to-many relationship where any u
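To make the pairing idea concrete, here is a small client-side sketch (plain Python, not Solr code; the field names "roles" and "values" are made up for illustration). It relies on the in-order return described above: entry i in one multi-valued field corresponds to entry i in the other.

```python
# Hypothetical document with two parallel multi-valued fields.
# Solr returns multi-valued entries in the order they were added,
# so the lists stay aligned by index.
doc = {
    "roles": ["admin", "staff", "public"],
    "values": ["salary", "phone", "name"],
}

def visible_values(doc, user_roles):
    """Return the values whose paired role is held by the user."""
    return [v for r, v in zip(doc["roles"], doc["values"]) if r in user_roles]

print(visible_values(doc, {"staff", "public"}))  # ['phone', 'name']
```

The same walk could of course be done in whatever language the client is written in; the point is only that the pairing survives the round trip through Solr.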
Thanks very much for your quick guidance, which is very helpful!
Lisheng
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Monday, September 17, 2012 6:30 PM
To: solr-user@lucene.apache.org
Subject: Re: In multi-core, special dataDir is not used?
: I can't reproduce the problem you are seeing -- can you please provide
: more details..
Correction: I can reproduce this.
This was in fact some odd behavior in the 1.x and 3.x lines that has been
changed for 4.x in SOLR-1897.
If you had no in your solrconfig.xml, or if you had a *blank*
: But when I update data like
"http://localhost:8080/solr/whatever3/update?commit=true", the data
: did not go to the newly specified dataDir (I can see core "whatever3" is
apparently used from log)?
:
: Only way to make it work is NOT to define dataDir in solrconfig.xml, is this
by design or
Hi,
I am using solr 3.6.1, I created a new core "whatever3" dynamically, and I see
solr.xml updated
as:
...
But when I update data like
"http://localhost:8080/solr/whatever3/update?commit=true", the data
did not go to the newly specified dataDir (I can see core "whatver
Hello All,
I have a requirement, or a pre-requirement, for our search application.
Basically the engine will be on a website with plenty of users and more than
20 different fields, including location.
So basically, the question is this:
Is it possible to let users define their position in search
Hi Robert,
Anyone can edit wiki, you just need to create user.
Regarding URLs
http://svn.apache.org/repos/asf/lucene/dev/trunk/solr/core/src/test-files/solr/collection1/conf/stemdict.txt
http://svn.apache.org/repos/asf/lucene/dev/trunk/solr/example/solr/collection1/conf/protwords.txt
--- On Tu
Hi group,
On this wiki page these two links below are broken as they are also on
lucidworks' version, can someone point me at the correct locations please? I
googled around and came up with possible good links.
Thanks
Robi
http://wiki.apache.org/solr/LanguageAnalysis#Other_Tips
http://lucidwo
Ok, I'll try running as tomcat.
The wiki has a problem with the tomcat startup script. It looks like it's
supposed to be a link which allows us to download a shell script, but when I
click it, I get the error message "You are not allowed to do AttachFile on
this page. Login and try again.".
You're getting the hang of it. No particular location for CopyField, just
not within "fields" or "types". Putting them after your fields makes sense.
See the Solr example schema.
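A minimal sketch of that layout (field and type names here are placeholders, not from the original schema):

```xml
<!-- copyField declarations live outside <types> and <fields>;
     putting them right after the <fields> section is conventional. -->
<schema name="example" version="1.5">
  <types> ... </types>
  <fields>
    <field name="title" type="text_general" indexed="true" stored="true"/>
    <field name="all_text" type="text_general" indexed="true" stored="false" multiValued="true"/>
  </fields>
  <copyField source="title" dest="all_text"/>
</schema>
```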
-- Jack Krupansky
-Original Message-
From: Spadez
Sent: Monday, September 17, 2012 4:47 PM
To: solr-user@
Ah, ok this is news to me and makes a lot more sense. If I can just run this
back past you to make sure I understand. If I move my full_text to
If I move my fulltext document from my SQL database to "keyword_document" it
will contain the original fulltext in the source, but the index will have
th
On Mon, Sep 17, 2012 at 3:44 PM, Mike Schultz wrote:
> So I'm figuring 3MB per entry. With CacheSize=512 I expect something like
> 1.5GB of RAM, but with the server in steady state after 1/2 hour, it is 7GB
> larger than without the cache.
Heap size and memory use aren't quite the same thing.
Tr
You can use an XSL response writer to transform your values to have a different
precision.
http://wiki.apache.org/solr/XsltResponseWriter
It would most likely be better for your client to just do it on his end, though. He
is probably parsing the response anyway.
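As a sketch of that client-side option (plain Python, an assumed approach rather than anything Solr-specific), the parsed stats value can simply be reformatted to a fixed number of decimal places:

```python
# Format a numeric string from the stats response to two decimal
# places on the client side, after parsing the response.
def format_price(value, places=2):
    return f"{float(value):.{places}f}"

print(format_price("1.0"))   # 1.00
print(format_price("7.0"))   # 7.00
```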
-Original Message-
From: Gu
Hi,
I've got a set up as follows:
- 13 cores
- 2 servers
- running Solr 4.0 Beta with numShards=1 and an embedded zookeeper.
I'm trying to figure out why some complex queries are running so slowly in
this setup versus quickly in a standalone mode.
Given a query like: /select?q=(some complex qu
I've looked through documentation and postings and expect that a single
filter cache entry should be approx MaxDoc/8 bytes.
Our frequently updated index (replication every 3 minutes) has maxdoc ~= 23
Million.
So I'm figuring 3MB per entry. With CacheSize=512 I expect something like
1.5GB of RAM,
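The arithmetic behind those figures can be checked directly (a back-of-the-envelope sketch, using the maxDoc/8-bytes-per-entry estimate from the post):

```python
# One filterCache entry is roughly maxDoc / 8 bytes (one bit per doc).
max_doc = 23_000_000
cache_size = 512

bytes_per_entry = max_doc / 8           # ~2.9 MB, i.e. "about 3MB"
total_bytes = bytes_per_entry * cache_size

print(f"{bytes_per_entry / 1024**2:.1f} MB per entry")   # 2.7 MB
print(f"{total_bytes / 1024**3:.2f} GB for the cache")   # 1.37 GB
```

So a fully populated cache of 512 entries should indeed sit in the neighborhood of 1.5GB, which is why the extra 7GB of heap points at something other than the cache entries themselves.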
Hi Nalini,
We had similar requirements and this is how we did it (using your example):
Record A:
Field1_All:
Field1_Private:
Field2_All: ''
Field2_Private:
Field3_All: ''
Field3_Private:
Fields_All:
Fields_Private:
Hi,
Solr doesn't have any built-in mechanism for document/field level security
- basically it's delegated to the container to provide security, but this
of course won't apply to specific documents and/or fields.
There are a lot of ways to skin this cat, some bits of which have been
covered by
Sorry for the late response. To be strict, here is what I want:
* I get documents all the time. Let's assume those are news (it's a
rather similar thing).
* Every time I get a new batch of "news" I should add them to the Solr index
and get cluster information for that document. Store this information
in t
Well, my client is asking if it is possible; I'm just providing the search
engine to him, not working directly with the application. I don't know exactly
what language he is programming in.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Stats-field-with-decimal-values-tp400829
Yes, absolutely. Since 4.0 hasn't been released, anything with a fix version of
4.0 basically implies trunk as well. Also notice my comment "Committed to
trunk & 4x", which is explicit.
~ David
On Sep 17, 2012, at 12:02 PM, Eric Khoury [via Lucene] wrote:
Hi David, I see that you committed the
You said "it has been copied from the keyword_document [field]", but the
reality is that Solr is not copying from the indexed value of the field, but
from the source value for the field. The idea is that multiple fields can be
based on the same source value even if they analyze and index the val
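A schema sketch of that behavior (field names and the maxChars value are placeholders, not from the original poster's schema): both destination fields receive the same raw source text, and any truncation via maxChars happens before each field's own analysis chain runs, which is why stopwords are still present in the copied text.

```xml
<field name="keyword_document" type="text_keywords" indexed="true" stored="true"/>
<field name="truncated_document" type="text_general" indexed="true" stored="true"/>
<!-- Copies the raw source value, truncated to 600 chars, before analysis. -->
<copyField source="keyword_document" dest="truncated_document" maxChars="600"/>
```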
> Then if I do copy command to move it into truncate_document
> then even though
> I can reduce it down to say 100 words, it is lacking words
> like "and" "it"
> and "this" because it has been copied from the
> keyword_document.
That's not true. The copy operation is performed before analysis (stopwor
I'm really confused here. I have a document which is say 4000 words long. I
want to get this put into two fields in Solr without having to save the
original document in its entirety within Solr.
When I import my fulltext (4000 word) document to Solr I was going to put it
straight into keyword_docu
The only catch here is that copyField might truncate in the middle of a
word, yielding an improper term.
-- Jack Krupansky
-Original Message-
From: Ahmet Arslan
Sent: Monday, September 17, 2012 11:54 AM
To: solr-user@lucene.apache.org
Subject: Re: Taking a full text, then truncate and
--- On Mon, 9/17/12, Spadez wrote:
> From: Spadez
> Subject: Re: Taking a full text, then truncate and duplicate with stopwords
> To: solr-user@lucene.apache.org
> Date: Monday, September 17, 2012, 7:10 PM
> Maybe I dont understand, but if you
> are copying the keyword description field
> and
> Ok. I can still define GramSize too?
>
> * minGramSize="3"
> maxGramSize="30" />*
Yes you can.
http://lucene.apache.org/solr/api-3_6_1/org/apache/solr/analysis/EdgeNGramFilterFactory.html
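For reference, a field type using that filter might look like the following (a sketch in Solr 3.6 syntax; the type name and tokenizer choice are assumptions, only the EdgeNGram filter and its gram sizes come from the thread):

```xml
<fieldType name="text_prefix" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- Emit prefix grams of each token, 3 to 30 chars long. -->
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="3" maxGramSize="30"/>
  </analyzer>
  <analyzer type="query">
    <!-- No gramming at query time: the query term matches an indexed prefix gram. -->
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```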
Maybe I don't understand, but if you are copying the keyword description field
and then truncating it, then the truncated form will only have keywords too.
That isn't what I want. I want the truncated form to have words like "a",
"the", "it", etc. that would have been removed when added to
keyword_descrip
Hi David, I see that you committed the work for SOLR-3304 to the 4.x tree,
which is great news, thanks. I'm not fully familiar with the process; does that
mean it's currently available in the nightly builds? Eric.
> Date: Wed, 29 Aug 2012 08:44:14 -0700
> From: dsmi...@mitre.org
> To: solr-user@luc
> The trouble is, I want the truncated description to still
> have the keywords.
copyField copies raw text; it has nothing to do with analysis.
Ok. I can still define GramSize too?
**
--
View this message in context:
http://lucene.472066.n3.nabble.com/Only-exact-match-searches-working-tp4008160p4008361.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thank you for the reply.
The trouble is, I want the truncated description to still have the keywords.
If I pass it to keyword_description and remove words like "and", "i",
"then", "if", etc., then copy it across to truncated_description, my truncated
description will not be a sentence, it will onl
I probably wouldn't suggest running Tomcat as root because of the
principle of least privilege, but aside from that, it's sort of
immaterial what you call the account, particularly if you already have
a 'tomcat' daemon account set up.
Michael Della Bitta
--
Hi
I am planning to use Apache Solr for Oracle DB based search for our project
(in future we may use some other DB). It's going to be a customer-facing
product and we are using the Spring MVC framework. Could anybody help me with
how I can integrate Apache Solr with my project, or could anybody suggest me
Can I have some clarification about installing Tomcat as the user solr? See
http://wiki.apache.org/solr/SolrTomcat#Installing_Tomcat_6 second paragraph,
which states "Create the solr user. As solr, extract the Tomcat 6.0 download
into /opt/tomcat6".
Does this user need a home-dir? (I'm guessi
--- On Mon, 9/17/12, Spadez wrote:
> From: Spadez
> Subject: Re: Taking a full text, then truncate and duplicate with stopwords
> To: solr-user@lucene.apache.org
> Date: Monday, September 17, 2012, 5:32 PM
> In an attempt to answer my own
> question, is this a good solution.
>
> Before I was
Thanks Jack.
We are using Solr 3.4.
On Mon, Sep 17, 2012 at 8:18 PM, Jack Krupansky wrote:
> That doc is out of date for 4.0. See the 4.0 Javadoc on FuzzyQuery for
> updated info. The tilde right operand is now an integer edit distance
> (number of times to insert char, delete char, change cha
That doc is out of date for 4.0. See the 4.0 Javadoc on FuzzyQuery for
updated info. The tilde right operand is now an integer edit distance
(number of times to insert a char, delete a char, change a char, or transpose
two adjacent chars to map an index term to a query term) that is limited to 2.
Be a
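The four operations listed above are the Damerau-Levenshtein operations; a small optimal-string-alignment distance (an illustration in plain Python, not Solr's actual implementation) shows how such an edit distance between an index term and a query term is computed:

```python
def edit_distance(a, b):
    """Optimal string alignment distance: inserts, deletes, changes,
    and transpositions of adjacent characters each cost 1."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i-1] == b[j-1] else 1
            d[i][j] = min(d[i-1][j] + 1,        # delete
                          d[i][j-1] + 1,        # insert
                          d[i-1][j-1] + cost)   # change
            if i > 1 and j > 1 and a[i-1] == b[j-2] and a[i-2] == b[j-1]:
                d[i][j] = min(d[i][j], d[i-2][j-2] + 1)  # transpose
    return d[len(a)][len(b)]

print(edit_distance("roam", "foam"))    # 1 (one change)
print(edit_distance("fuzzy", "fuzyz"))  # 1 (one transposition)
```

With the limit of 2, only terms within distance 2 of the query term can match.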
That will match internal substrings in addition to prefix strings. EdgeNGram
does only prefix substrings, which is generally what people want. So,
NGramFilter would match "England" when the query is "land" or "gland",
"gla", etc.
Use the Solr Admin Analysis UI to enter text to see how the filt
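The difference can be sketched in a few lines of plain Python (an illustration of the filters' basic gram output, not the actual Lucene code):

```python
def ngrams(term, lo, hi):
    """All substrings of term with length in [lo, hi] (NGram behavior)."""
    return {term[i:i+n]
            for n in range(lo, hi + 1)
            for i in range(len(term) - n + 1)}

def edge_ngrams(term, lo, hi):
    """Only leading prefixes of term (EdgeNGram behavior)."""
    return {term[:n] for n in range(lo, min(hi, len(term)) + 1)}

print("land" in ngrams("england", 3, 30))       # True
print("land" in edge_ngrams("england", 3, 30))  # False
```

So with plain NGram, internal fragments like "land" or "gla" match "england"; with EdgeNGram, only prefixes such as "eng" or "engl" do.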
In an attempt to answer my own question, is this a good solution?
Before, I was thinking of importing my fulltext description once, then
sorting it into two separate fields in Solr, one truncated, one keyword.
How about instead actually importing my fulltext description twice? Then I
can import it
Add the &fmap.content=your-stored-field to the URL.
Or if your schema doesn't already have a "content" field, add one that is
"stored" and it will automatically be used.
-- Jack Krupansky
-Original Message-
From: Alexander Troost
Sent: Monday, September 17, 2012 1:12 AM
To: solr-use
Could you clue us in as to why this is important to you? I mean, any modern
programming language should be capable of dealing with parsing "1.0" if it
can deal with parsing "1.00".
-- Jack Krupansky
-Original Message-
From: Gustav
Sent: Monday, September 17, 2012 9:19 AM
To: solr-use
Purely for searching.
The truncated form is just to show to the user as a preview, and the keyword
form is for the keyword searching.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Taking-a-full-text-then-truncate-and-duplicate-with-stopwords-tp4008269p4008295.html
Sent
Hello everyone,
When I'm using the &stats=true&stats.field=product_price parameters, it returns
the following structure:
<min>1.0</min>
<max>1.0</max>
<count>7</count>
<missing>0</missing>
<sum>7.0</sum>
<sumOfSquares>7.0</sumOfSquares>
<mean>1.0</mean>
<stddev>0.0</stddev>
What I'm looking for are these 2:
<min>1.0</min>
<max>1.0</max>
Is it possible for them to be returned as decimal values?
Like this:
<min>1.00</min>
<max>1.00</max>
Thanks!
--
View this message in co
Got it.
Thanks Rafał !
On Mon, Sep 17, 2012 at 6:37 PM, Rafał Kuć wrote:
> Hello!
>
> There is no need to include any changes or additional component to
> have fuzzy search working in Solr.
>
> --
> Regards,
> Rafał Kuć
> Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearc
Hello!
There is no need to include any changes or additional component to
have fuzzy search working in Solr.
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch
> Thanks.
> Is any extra configuration from the Solr side to make this work ?
> Any addi
Hi Jack,
Thanks.
Even though I have set compound index to true in the indexConfig
section for the 3.6 version, it still seems to create normal index
files.
Attached is the solrconfig.xml
Please let me know if anything wrong
Regards
Sujatha
On Sat, Sep 15, 2012 at 9:43 PM, Jack Kr
Thanks.
Is any extra configuration needed on the Solr side to make this work?
Any additional text files like synonyms.txt, any additional fields, or any
changes in schema.xml or solrconfig.xml?
On Mon, Sep 17, 2012 at 4:45 PM, Rafał Kuć wrote:
> Hello!
>
> Is this what you are looking for
>
> https:
> I dont want to store this as it is in Solr, I want to
> instead have two
> versions of it. One as a truncated form, and one as a
> keyword form.
> *Truncated Form:*
If truncated form means first N characters then copyField can be used
http://wiki.apache.org/solr/SchemaXml#Copy_Fields
> *Keyw
I've hit a bit of a wall and would appreciate some guidance. I want to index
a large block of text, like so:
I don't want to store this as it is in Solr; I want to instead have two
versions of it. One as a truncated form, and one as a keyword form.
*Truncated Form:*
*Keyword Form (using stop
Thank you for the reply. I have done a bit of reading and it says I can also
use this one:
This is what I will use I think, as it weeds out words like "at" "I" as a
bonus.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Only-exact-match-searches-working-tp4008160p400826
Hello!
Is this what you are looking for
https://lucene.apache.org/core/old_versioned_docs/versions/3_0_0/queryparsersyntax.html#Fuzzy%20Searches
?
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch
> Hi,
> I need to know how we can implement fuzzy
Great. Thanks.
That solves my problem.
Greetings
Jochen
André Widhani wrote:
The first thing I would check is the virtual memory limit (ulimit -v, check
this for the operating system user that runs Tomcat /Solr).
It should be set to "unlimited", but this is, as far as I remember, not the
de
The first thing I would check is the virtual memory limit (ulimit -v, check
this for the operating system user that runs Tomcat /Solr).
It should be set to "unlimited", but this is, as far as I remember, not the
default setting on SLES 11.
Since 3.1, Solr maps the index files to virtual memory.
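A sketch of the fix (paths and user name are assumptions for a typical SLES/Tomcat setup; place it in the Tomcat init/startup script so it applies to the user that runs Solr):

```sh
# Raise the virtual memory limit before starting Tomcat, so that
# MMapDirectory can map large index files without "Map failed".
ulimit -v unlimited

# Verify the limit as seen by the tomcat user:
su - tomcat -c 'ulimit -v'
```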
Hello,
I have a problem with solr and multicores on SLES 11 SP 2.
I have 3 cores, each with more than 20 segments.
When I try to start tomcat6, it cannot start the CoreContainer.
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
I r