Tirthankar,
are you indexing (1) smaller docs or (2) books?
If (1): your caches are too big for your memory, as Erick already said.
Try to allocate 10GB for the JVM, leave 14GB for the OS disk cache, and make your
caches smaller.
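For illustration, smaller cache entries in solrconfig.xml could look like
this (the sizes here are just an example, not a recommendation for your data):

  <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="128"/>
  <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="128"/>
  <documentCache class="solr.LRUCache" size="512" initialSize="512"/>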
If (2): read the blog posts on hathitrust.org:
http://www.hathitrust.org/blogs/la
Hi Fred,
analyze the queries that take longer.
We observe our queries and see QTime problems with queries which
are complex: phrase queries, or queries that contain numbers or
special characters.
If you don't know it yet:
http://www.hathitrust.org/blogs/large-scale-search/tuning-search
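A quick way to see where the time goes is to append &debugQuery=on to a
slow query (host and params here are placeholders):

  http://localhost:8983/solr/select?q=your+slow+query&debugQuery=on

The timing section in the debug output shows how long each search
component (query, facet, highlight, ...) took.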
shard communication?
>
> Fred.
>
>
> On Wednesday, 28 September 2011 at 13:18, Vadim Kisselmann wrote:
>
> > Hi Fred,
> > analyze the queries which take longer.
> > We observe our queries and see the problems with q-time with queries which
> > are com
Why should the optimization reduce the number of files?
That only happens when you index docs with the same unique key.
Do you see differences between numDocs and maxDocs after the optimize?
If yes:
what does your optimize command look like?
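For reference, a plain optimize can be triggered like this (illustrative URL):

  http://localhost:8983/solr/update?optimize=true

or by posting <optimize/> to the update handler.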
Regards
Vadim
2011/9/28 Manish Bafna
> Try to do optimize twice.
> The 2nd o
> before optimization there are many files but after optimization I always
> end
> up with just 3 files in my index folder. Just want to find out if this was
> ok.
>
> Thanks
>
> On Wed, Sep 28, 2011 at 1:23 PM, Vadim Kisselmann <
> v.kisselm...@googlemail.com> wrote:
ava:298)
> at
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
> at
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
> at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
> at java.lang.Thread.run
call optimize again, it will delete the older index files.
>
> no.
During an optimize you only delete docs which are flagged as deleted, no
matter how old they are.
If your numDocs and maxDocs show the same number of docs, you only rebuild
and merge your index, but you delete nothing.
Regards
> open for reading)
> 2nd time optimize, other than the new index file, all else gets deleted.
>
> This is happening specifically on Windows.
>
> On Wed, Sep 28, 2011 at 8:23 PM, Vadim Kisselmann
> wrote:
> > 2011/9/28 Manish Bafna
> >
> >> >>
Hello folks,
my wildcard search shows strange behavior.
Sometimes I get results, sometimes not.
I use the latest nightly build (Solr 4.0, build #1643).
I use these tokenizers and filters for indexing:
WhitespaceTokenizer
WordDelimiterFilter
LowerCaseFilter
RemoveDuplicateTokenFilter
ReversedWildcardFilter
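For reference, such a chain is typically declared in schema.xml roughly
like this (a sketch; the field type name is made up and the filters carry
no options here):

  <fieldType name="text_rev" class="solr.TextField">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.WordDelimiterFilterFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
      <filter class="solr.ReversedWildcardFilterFactory"/>
    </analyzer>
  </fieldType>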
rd queries, but I confess I
> don't know what the status of it is.
>
> You'll have to do whatever normalization you need to do at
> the app level before you pass the query on to Solr or write a
> custom component to deal with this case I think.
>
> Best
> Erick
Hello folks,
I have a question about MLT.
For example my query:
localhost:8983/solr/mlt/?q=gefechtseinsatz+AND+dna&mlt=true&mlt.fl=text&mlt.count=0&mlt.boost=true&mlt.mindf=5&mlt.mintf=5&mlt.minwl=4
*I have 1 query result and 13 MLT docs. The MLT result corresponds to
the half of my inde
Hi,
a number of relevant questions have already been asked.
I have another one:
Which type of docs do you have? Do you add new docs every day, or is it
a stable number of docs (500 million)?
What about replication?
Regards Vadim
2011/10/17 Otis Gospodnetic
> Hi Jesús,
>
> Others have already asked a number
Hello folks,
I have big problems with InvalidTokenOffsetsExceptions when highlighting.
Looks like a bug in HTMLStripCharFilter.
H. Wang added a patch in LUCENE-2208, but nobody has had time to look at it.
Could one of the committers please take a look at this patch and commit
it, or is this probl
Internal Server Error
Error: org.apache.lucene.search.highlight.InvalidTokenOffsetsException:
Token the exceeds length of provided text sized 41
Best Regards
Vadim
2011/10/20 Vadim Kisselmann
> Hello folks,
>
> I have big problems with InvalidTokenOffsetsExceptions when highlighting.
>
Hi folks,
I'm stuck on the configuration of a multicore setup.
I use the latest Solr version (4.0) from trunk and the example setup (with Jetty).
A single core runs without problems.
Assume I have this structure:
/solr-trunk/solr/example/multicore/
It works.
It was one misplaced backslash in my config ;)
Sharing the config/schema files is not a problem.
Regards Vadim
2011/10/31 Vadim Kisselmann
> Hi folks,
>
> I'm stuck on the configuration of a multicore setup.
> I use the latest Solr version (4.0) from t
Hello folks,
I have a problem with shard indexing.
With a single core I use this update command:
http://localhost:8983/solr/update .
Now I have 2 shards; call them core0 and core1:
http://localhost:8983/solr/core0/update .
Can I adjust anything to index in the same way like wit
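For now the docs have to be posted to each core separately, e.g. (doc and
curl usage here are just an illustration):

  curl 'http://localhost:8983/solr/core0/update?commit=true' -H 'Content-Type: text/xml' --data-binary '<add><doc><field name="id">1</field></doc></add>'
  curl 'http://localhost:8983/solr/core1/update?commit=true' -H 'Content-Type: text/xml' --data-binary '<add><doc><field name="id">2</field></doc></add>'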
e looking for?
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> Solr Training - www.solrtraining.com
>
> On 2. nov. 2011, at 10:00, Vadim Kisselmann wrote:
>
> > Hello folks,
> > I have a problem with shard indexing.
> >
>
not specify it in the URL?
> >>
> >> You could name ONE of your cores as ".", meaning it would be the
> "default"
> >> core living at /solr/update, perhaps that is what you're looking for?
> >>
> >> --
> >> Jan Høydahl, search s
architect
> Cominvent AS - www.cominvent.com
> Solr Training - www.solrtraining.com
>
> On 2. nov. 2011, at 11:16, Vadim Kisselmann wrote:
>
> > Hello Jan,
> >
> > thanks for your quick response.
> >
> > It's quite difficult to explain:
> >
Hello folks,
I have questions about MLT and deduplication, and which would be the best
choice in my case.
Case:
I index 1000 docs; 5 of them are 95% the same (for example: copy-pasted
blog articles from different sources, with slight changes (author name,
etc.)).
But they have differences.
*Now I
Hi Edwin, Chris
It's an old bug. I also have big problems with OffsetExceptions when I use
highlighting or Carrot.
It looks like a problem with HTMLStripCharFilter.
The patch doesn't work:
https://issues.apache.org/jira/browse/LUCENE-2208
Regards
Vadim
2011/11/11 Edwin Steiner
> I just entered
Hi,
yes, see http://wiki.apache.org/solr/DistributedSearch
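For example, a distributed query over two cores (host and core names as in
the earlier mails) looks like this:

  http://localhost:8983/solr/core0/select?q=*:*&shards=localhost:8983/solr/core0,localhost:8983/solr/core1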
Regards
Vadim
2011/11/2 Val Minyaylo
> Have you tried to query multiple cores at same time?
>
>
> On 10/31/2011 8:30 AM, Vadim Kisselmann wrote:
>
>> it works.
>> it was one wrong placed backslash in my con
Hi folks,
I've installed the clustering component in Solr 1.4.1 and it works, but not
really :)
You can see that the doc ids are corrupt:
Euro-Krise
½Íџ
¾ͽ
¿)ై
my fields:
and my config-snippets:
title
id
text
I changed my config snippets (carrot.url=id, url, title..) but the
> the output. I've just tried a similar configuration on Solr 3.5 and the
> integer identifiers looked fine. Can you try the same configuration on Solr
> 3.5?
>
> Thanks,
>
> Staszek
>
> On Tue, Nov 29, 2011 at 12:03, Vadim Kisselmann <
> v.kisselm...@googlemail
Hi,
the quick and dirty way sounds good :)
It would be great if you could send me a patch for 1.4.1.
By the way, I tested Solr 3.5 with my 1.4.1 test index.
I can search and optimize, but clustering doesn't work (java.lang.Integer
cannot be cast to java.lang.String).
My uniqueKey for my docs is the "
Hi Stanislaw,
did you already have time to create a patch?
If not, can you please tell me which lines in which class in the source code
are relevant?
Thanks and regards
Vadim Kisselmann
2011/11/29 Vadim Kisselmann
> Hi,
> the quick and dirty way sounds good :)
> It would be great if you ca
Hi,
comment out the lines with the collapse component in your solrconfig.xml if
you don't need it.
Otherwise, you're missing the right jars for this component, or the paths to
those jars in your solrconfig.xml are wrong.
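Roughly, the relevant pieces in solrconfig.xml look like this (the lib dir
and class name below are placeholders; check them against the patch/jars
you actually installed):

  <lib dir="/path/to/collapse/jars" />
  <searchComponent name="collapse" class="org.apache.solr.handler.component.CollapseComponent"/>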
regards
vadim
2011/12/1 Pawan Darira
> Hi
>
> I am migrating from Solr 1.4 to Solr
docs) {
> docList.add(doc.getField("solrId").toString());
> }
>
> Let me know if this did the trick.
>
> Cheers,
>
> S.
>
> On Thu, Dec 1, 2011 at 10:43, Vadim Kisselmann
> wrote:
>
> > Hi Stanislaw,
> > did you already have tim
Hello folks,
is it possible to find out the size (in KB) of specific fields of
one document, perhaps with Luke or Lucid Gaze?
My case:
Docs in my old index (Solr 1.4) have sizes of 3-4KB each.
In my new index (Solr 4.0 trunk) it's about 15KB per doc.
I changed only 2 things in my schema.x
Hi,
it depends on your hardware.
Read this:
http://www.derivante.com/2009/05/05/solr-performance-benchmarks-single-vs-multi-core-index-shards/
Think about your cache config (few updates, big caches) and a good
hardware infrastructure.
In my case I can handle a 250GB index with 100 million docs on an i7
mach
>>>> fast searches and above 100GB for slow ones. We also route majority of
>>>> user
>>>> queries to fast indices. Yes, caching may help, but not necessarily we
>>>> can
>>>> afford adding more RAM for bigger indices. BTW, our documents are very
Hello Folks,
I want to decrease the max number of terms for my fields to 500.
I thought the maxFieldLength parameter in solrconfig.xml was
intended for this.
In my case it doesn't work.
Half of my text fields contain longer text (about 1 words).
With 100 docs in my index I had a segm
P.S.:
i use Solr 4.0 from trunk.
Is maxFieldLength deprecated in Solr 4.0?
If so, is there an alternative to decrease the number of terms during indexing?
Regards
Vadim
2012/1/26 Vadim Kisselmann :
> Hello Folks,
> I want to decrease the max number of terms for my fields to 500.
>
Sean, Ahmet,
thanks for response:)
I use Solr 4.0 from trunk.
In my solrconfig.xml there is only one maxFieldLength param.
I think it is deprecated in Solr versions 3.5+...
But LimitTokenCountFilterFactory works in my case :)
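For reference, it goes into the index analyzer chain in schema.xml, e.g.:

  <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="500"/>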
Thanks!
Regards
Vadim
2012/1/26 Ahmet Arslan :
>> I want to decrease the
Hi Christopher,
if all needed jars are included, then only wrong paths in
your solrconfig.xml can be the problem.
Regards
Vadim
2012/1/26 Stanislaw Osinski :
> Hi,
>
> Can you paste the logs from the second run?
>
> Thanks,
>
> Staszek
>
> On Wed, Jan 25, 2012 at 00:12, Christopher J. Bottaro > wrote:
>
>
Hi Shaveta,
simple: index a doc and search for it ;)
A soft commit enables NearRealTimeSearch. It can take a couple
of seconds until you see the doc,
but it should be there.
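If you want soft commits to happen automatically, you can configure them
in solrconfig.xml; a sketch (1000 ms is just an example value):

  <autoSoftCommit>
    <maxTime>1000</maxTime>
  </autoSoftCommit>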
Best regards
Vadim
2012/11/26 Shaveta_Chawla :
> I have migrated solr 3.6 to solr 4.0. I have implemented solr4.0's auto
Hi,
I have problems with edismax, filter queries, and highlighting.
First of all: can edismax deal with filter queries?
My case:
Edismax is my default requestHandler.
My query in the Solr admin GUI: (roomba OR irobot) AND language:de
You can see that my q is "roomba OR irobot" and my fq is
"language:
Hi Ahmet,
thanks for quick response :)
I've also noticed this failure.
I'm surprised that the query itself works.
For example: query = language:de
I get results which only have language:de.
The fq works as well, and I get only the "de" results in my field "language".
I can't understand this behavior.
2012/1/31 Erick Erickson :
> Seeing the results with &debugQuery=on would help.
>
> No, fq does NOT get translated into q params, it's a
> completely separate mechanism so I'm not quite sure
> what you're seeing.
>
> Best
> Erick
>
> On Tue, Jan 31,
Hi Erick,
> I didn't read your first post carefully enough, I was keying
> on the words "filter query". Your query does not have
> any filter queries! I thought you were talking
> about &fq=language:de type clauses, which is what
> I was responding to.
No problem, I understand :)
> Solr/Lucene ha
Hmm, I don't know, but I can test it tomorrow at work.
I'm not sure about the right syntax for hl.q (?)
But I'll report back :)
2012/1/31 Ahmet Arslan :
>> > Try the &fq option maybe?
>>
>> I thought so, unfortunately.
>> &fq will be the only option. I should rebuild my
>> application :)
>
> Could hl
. Sounds like a plan? :)
Best Regards
Vadim
2012/2/1 Koji Sekiguchi :
> (12/02/01 4:28), Vadim Kisselmann wrote:
>>
>> Hmm, I don't know, but I can test it tomorrow at work.
>> I'm not sure about the right syntax for hl.q (?)
>> But I'll report back :)
>
>
> hl.q ca
Hello folks,
I want to reindex about 10 million docs from one Solr (1.4.1) to another
Solr (1.4.1).
I changed my schema.xml (field types sint to slong), so standard
replication would fail.
What is the fastest and smartest way to manage this?
This sounds great (EntityProcessor):
http://www.searchworkin
Hi Ahmet,
thanks for quick response:)
I've already thought the same...
And it will be a pain to export and import this huge doc set as CSV.
Is there another solution?
Regards
Vadim
2012/2/8 Ahmet Arslan :
>> I want to reindex about 10 million docs from one Solr (1.4.1) to
>> another
>> Solr(1.4.1
Another problem appeared ;)
How can I export my docs in CSV format?
In Solr 3.1+ I can use the query param &wt=csv, but what about Solr 1.4.1?
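For comparison, in 3.1+ the export would simply be something like this
(the params are illustrative):

  http://localhost:8983/solr/select?q=*:*&wt=csv&rows=10000000&fl=id,title,text

In 1.4.1 there is no CSV response writer, so XSLT output (see Otis below)
or a small client program would be needed.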
Best Regards
Vadim
2012/2/8 Vadim Kisselmann :
> Hi Ahmet,
> thanks for quick response:)
> I've already thought the same...
> And it will be
uld using xslt output help?
>
> Otis
>
> Performance Monitoring SaaS for Solr -
> http://sematext.com/spm/solr-performance-monitoring/index.html
>
>
>
>>________
>> From: Vadim Kisselmann
>>To: solr-user@lucene.apache.org
Hello folks,
I built a simple custom component for the "hl.q" query param.
My use case was to inject hl.q params on the fly, with filter params like
fields which were in my
standard query. These were highlighted, because Solr/Lucene has no way of
interpreting an extended "q" clause and saying "this part is
Set maxBooleanClauses higher in your solrconfig.xml; the default is 1024.
Your query blasts through this limit.
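In solrconfig.xml:

  <maxBooleanClauses>2048</maxBooleanClauses>

(2048 is just an example; raise it until your largest query fits.)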
Regards
Vadim
2012/2/22 Darren Govoni
> Hi,
> I am suddenly getting a maxClauseCount exception for no reason. I am
> using Solr 3.5. I have only 206 documents in my index.
>
> Any ideas? This is
Hi folks,
where and when is the next Eurocon scheduled?
I read something about Denmark and autumn 2012 (I don't know where *g*).
Best regards and thanks
Vadim
Hi Chris,
thanks for your response. OK, we will wait :)
Best Regards
Vadim
2012/3/8 Chris Hostetter
>
> : where and when is the next Eurocon scheduled?
> : I read something about Denmark and autumn 2012 (I don't know where *g*).
>
> I do not know where, but sometime in the fall is probably th
Hi folks,
I commented on this issue: https://issues.apache.org/jira/browse/SOLR-3238 ,
but I want to ask here if anyone has the same problem.
I use Solr 4.0 from trunk (latest) with Tomcat 6.
I get an error in New Admin UI:
This interface requires that you activate the admin request handlers,
add t
Hello folks,
I read the SolrCloud wiki and Bruno Dumon's blog entry with his "First
Exploration of SolrCloud".
The examples and a first setup with embedded Jetty and ZK WORK without problems.
I tried to set up my own configuration with Tomcat and an external
ZooKeeper (my master ZK), but it doesn't wo
You have to re-index your data.
Best regards
Vadim
2012/3/21 syed kather :
> Team
>
> I have indexed my data with solr 3.3 version , As I need to use
> hierarchical facets features from solr 4.0 .
> Can I use the existing data with Solr 4.0 version or should need to
> re-index the data with new
Hello folks,
I work with Solr 4.0 r1292064 from trunk.
My index grows fast; with 10 million docs I get an index size of 150GB
(25% stored, 75% indexed).
I want to find out which fields (content) are too large, so I can take measures.
How can I locate/discover the largest fields in my index?
Luke(late
"SolrCloud new"
> You can also view it at nabble using this link:
> http://lucene.472066.n3.nabble.com/SolrCloud-new-td1528872.html
>
> Best,
> Jerry M.
>
>
>
>
> On Wed, Mar 21, 2012 at 5:51 AM, Vadim Kisselmann
> wrote:
>>
>> Hello folk
tim copy of the data.
>
> The relative sizes of the various files above should give
> you a hint as to what's using the most space, but it'll be a bit
> of a hunt for you to pinpoint what's actually up. TermVectors
> and norms are often sources of using up space.
>
e this info
> and testing shows problems
>
> Best
> Erick
>
> On Thu, Mar 29, 2012 at 9:32 AM, Vadim Kisselmann
> wrote:
>> Hi Erick,
>> thanks:)
>> The admin UI give me the counts, so i can identify fields with big
>> bulks of unique terms
Hi folks,
I use Solr 4.0 from trunk, and edismax as my standard query handler.
In my schema I defined this:
I have this simple problem:
nascar +author:serg* (3500 matches)
+nascar +author:serg* (1 match)
nascar author:serg* (5200 matches)
nascar AND author:serg* (1 match)
I think I under
hi,
If only the slaves are used for search, why not: more RAM for the OS.
I keep the default settings on my master, because when my slaves are
busy with client queries,
I can still test a few things on the master.
best regards
vadim
2012/4/27 Jamel ESSOUSSI :
> Hi,
>
> I use two Solr slaves and one So
is happening.
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> Solr Training - www.solrtraining.com
>
> On 27. apr. 2012, at 11:21, Vadim Kisselmann wrote:
>
>> Hi folks,
>>
>> i use solr 4.0 from trunk, and edismax as standard query h
first doc in "nascar +author:serg*" all
query params match, but in the second doc only
"ConstantScore(author:serg*)" does.
But with "mm=100%" all query params should match.
http://www.lucidimagination.com/blog/2010/05/23/whats-a-dismax/
http://lucene.apache.org/solr/api/or
Hi Otis,
Done :) So far we use Graphite, Ganglia, and Zabbix; for JVM
monitoring, JStatsD.
Best regards
Vadim
2012/5/31 Otis Gospodnetic :
> Hi,
>
> Super quick poll: What do you use for Solr performance monitoring?
> Vote here:
> http://blog.sematext.com/2012/05/30/poll-what-do-you-use-for
Hi folks,
I have to look after an old live system with Solr 1.4.
When I optimize a bigger index of roughly 200GB (after optimize
and cut, 100GB) and my slaves
replicate the newest version after(!) the optimize, they all hang
at 100% in replication, and at once they have index sizes of circa 300GB.
Forgot to mention:
After a Tomcat restart, the slaves still have a 300GB index.
After a manual replication command in the UI, it's 100GB like the master in a
couple of seconds and all is OK.
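The manual command corresponds to the replication handler's fetchindex
call on a slave, e.g. (the host is a placeholder):

  http://slave-host:8983/solr/replication?command=fetchindex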
2012/6/19 Vadim Kisselmann :
> Hi folks,
>
> I have to look after an old live system with Solr 1.4.
Hi everyone,
I have Solr running on one master and two slaves (load balanced) via
Solr 1.4.1 native replication.
If the load is low, both slaves replicate with around 100MB/s from master.
But when I use Solrmeter (100-400 queries/min) for load tests (over
the load balancer), the replication slow
Hello Shawn,
Primary assumption: You have a 64-bit OS and a 64-bit JVM.
> Yep, it's running 64-bit Linux with a 64-bit JVM.
It sounds to me like you're I/O bound, because your machine cannot
keep enough of your index in RAM. Relative to your 100GB index, you
only have a maximum of 14G
On Mar 17, 2011, at 3:19 PM, Shawn Heisey wrote:
On 3/17/2011 3:43 AM, Vadim Kisselmann wrote:
Unfortunately, this doesn't seem to be the problem. The queries
themselves are running fine. The problem is that the replication is
crawling when there are many queries going on and tha
Hi Bill,
> You could always rsync the index dir and reload (old scripts).
I used them previously but was getting problems with them. The
application querying the Solr doesn't cause enough load on it to
trigger the issue. Yet.
> But this is still something we should investigate.
Indeed :-)
> Se
Hello folks,
I use Solr 1.4.1, and every 2 to 6 hours I get indexing errors in my log
files.
On the client side:
2011-08-04 12:01:18,966 ERROR [Worker-242] IndexServiceImpl - Indexing
failed with SolrServerException.
Details: org.apache.commons.httpclient.ProtocolException: Unbuffered entity
encl
Hi folks,
I'm writing here again (besides Jira: SOLR-2565); perhaps someone can help
here:
I tested nightly build #1595 with the new patch (SOLR-2565), but NRT doesn't
work in my case.
I index 10 docs/sec; it takes 1-30 seconds to see the results.
Same behavior when I update an existing document.
Hi Markus,
thanks for your answer.
I'm using Solr 4.0 and Jetty now and will observe the behavior and my error logs
next week.
Tomcat can be a reason, we will see; I'll report back.
I'm indexing WITHOUT batches, one doc after another. But I will try out
batch indexing as well as
retry indexing faulty
In your schema.xml you can set the default query parser operator, in
your case <solrQueryParser defaultOperator="AND"/>, but it's
deprecated.
When you use edismax, read this: http://drupal.org/node/1559394 .
The mm param is the answer here.
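A sketch of the mm setting in the edismax handler defaults in
solrconfig.xml (the handler name and value are illustrative; mm=100% means
all terms must match):

  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="defType">edismax</str>
      <str name="mm">100%</str>
    </lst>
  </requestHandler>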
Best regards
Vadim
2012/7/2 Steve Fatula :
> Let's say a user types in:
>
> DualHead2Go
>
>
>
same problem here:
https://mail.google.com/mail/u/0/?ui=2&view=btop&ver=18zqbez0n5t35&q=tomcat%20v.kisselmann&qs=true&search=query&th=13615cfb9a5064bd&qt=kisselmann.1.tomcat.1.tomcat's.1.v.1&cvid=3
https://issues.apache.org/jira/browse/SOLR-3238?page=com.atlassian.jira.plugin.system.issuetabpane
ly fix) that.
>
> Regards
> Stefan
>
>
> On Tuesday, July 3, 2012 at 4:00 PM, Vadim Kisselmann wrote:
>
>> same problem here:
>>
>> https://mail.google.com/mail/u/0/?ui=2&view=btop&ver=18zqbez0n5t35&q=tomcat%20v.kisselmann&qs=true&search=que
Hi Stefan,
OK, I will test the latest version from trunk with Tomcat in the next
days and open a new issue :)
regards
Vadim
2012/7/3 Stefan Matheis :
> On Tuesday, July 3, 2012 at 8:10 PM, Vadim Kisselmann wrote:
>> sorry, i overlooked your latest comment with the new issue in
Hi folks,
my test server with Solr 4.0 from trunk (revision 1292064 from late
February) throws this exception...
auto commit error...:java.lang.IllegalStateException: this writer hit
an OutOfMemoryError; cannot commit
at
org.apache.lucene.index.IndexWriter.prepareCommit(IndexWriter.java:26
Hi Robert,
> Can you run Lucene's checkIndex tool on your index?
No, unfortunately not. This Solr should run without stoppage; a
Tomcat restart is OK, but not more :)
I tested newer trunk versions a couple of months ago, but they all
fail with Tomcat.
I will test 4.0-alpha in the next days with tomc
ou raise the JVM memory and see if you still hit the spike and go
> OOM? this is very unlikely a IndexWriter problem. I'd rather look at
> your warmup queries ie. fieldcache, FieldValueCache usage. Are you
> sorting / facet on anything?
>
> simon
>
> On Tue, Jul 10, 2012 at
-files?
Best regards
Vadim
2012/7/5 Stefan Matheis :
> Great, thanks Vadim
>
>
>
> On Thursday, July 5, 2012 at 9:34 AM, Vadim Kisselmann wrote:
>
>> Hi Stefan,
>> OK, I will test the latest version from trunk with Tomcat in the next
>> days and open a new issue :)
same problem.
But here Tomcat 6 should have the rights to read/write your index.
regards
vadim
2012/7/14 Bruno Mannina :
> I found the problem I think, It was a permission problem on the schema.xml
>
> schema.xml was only readable by the solr user.
>
> Now I have the same problem with the solr inde
Hi folks,
I have this case:
I want to update my Solr 4.0 from trunk to Solr 4.0 Alpha. The index
structure has changed, so I can't replicate.
10 cores are in use, each with 30 million docs. Assume that all fields
are stored and indexed.
What is the best way to export the docs from all cores on one mach
A guess:
Do you use your "old" solrconfig.xml files from older installations?
If yes, compare the default config with yours.
2012/8/23 Claudio Ranieri :
> I made this instalation on a new tomcat.
> With Solr 3.4.*, 3.5.*, 3.6.* works with jars into
> $TOMCAT_HOME/webapps/solr/WEB-INF/lib,
Your docs are marked as deleted.
You should optimize after the commit; then they will really be deleted.
It's easier and faster to stop your Jetty/Tomcat, drop your index
directory, and start your servlet container again...
If that's not possible, then optimize.
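For example (stream.body is just one way to send the commands; host and
core are placeholders):

  curl 'http://localhost:8983/solr/update?stream.body=<delete><query>*:*</query></delete>'
  curl 'http://localhost:8983/solr/update?stream.body=<commit/>'
  curl 'http://localhost:8983/solr/update?stream.body=<optimize/>'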
regards
Vadim
2012/8/27 Jamel ESSOUSSI :
> Hi
m to start solr-4.0.0-BETA with tomcat-6.0.20
>
> Hi Vadim,
> No, I used the entire apache-solr-4.0.0-BETA\example\solr (schema.xml,
> solrconfig.xml ...)
>
>
> -Mensagem original-
> De: Vadim Kisselmann [mailto:v.kisselm...@gmail.com] Enviada em: sexta-feira,
Hi guys,
assume I have a simple query like this with a wildcard and tilde:
"japa* fukushima"~10
instead of "japan fukushima"~10 OR "japanese fukushima"~10, etc.
Do we have a solution in Solr 4.0 for this kind of query?
Does the AutomatonQuery/Filter cover this case?
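For the record: SOLR-1604 (ComplexPhraseQueryParser) targets exactly this
kind of query; if it ever gets committed, the syntax should look roughly
like this (my assumption based on the patch, not something that works in
4.0 out of the box):

  {!complexphrase inOrder=true}"japa* fukushima"~10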
Best regards
V
Hi Ahmet,
thanks for your reply:)
I see that it does not come with the 4.0 release because the given
patches do not work with this version.
Right?
Best regards
Vadim
2012/9/26 Ahmet Arslan :
>
>> we assume i have a simple query like this with wildcard and
>> tilde:
>>
>> "japa* fukushima"~10
>>
Hi Roy,
Yep, it works with Tomcat 6 and an external ZooKeeper.
I will publish a blog post about it tomorrow on sentric.ch.
My blog post is ready, but I had no time to publish it in the last
couple of days :)
Best regards
Vadim
2012/9/27 Markus Jelsma :
> Hi - on Debian systems there's a /etc/defaul
ut it should work with
> "pol* tel*"~5 types of queries.
>
> Ahmet
>
> --- On Thu, 9/27/12, Vadim Kisselmann wrote:
>
>> From: Vadim Kisselmann
>> Subject: Re: Proximity(tilde) combined with wildcard, AutomatonQuery ?
>> To: solr-user@lucene.apache.org
Hi Rogerio,
I can imagine what it is. Tomcat extracts the war files into
/var/lib/tomcatXX/webapps.
If you already ran an older Solr version on your server, the old
extracted Solr war could still be there (keyword: Tomcat cache).
Delete the /var/lib/tomcatXX/webapps/solr folder and restart Tomcat,
w
Hi,
these are JVM parameters; you can find and set them in the
startManagedWebLogic script.
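A sketch, assuming the standard startManagedWebLogic.sh (the variable name
may differ in your WebLogic version):

  JAVA_OPTIONS="${JAVA_OPTIONS} -DzkRun -Dbootstrap_conf=true -DzkHost=localhost:9080 -DnumShards=2"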
Best regards
Vadim
2012/10/16 rayvicky :
> who can help me ?
> where to settings -DzkRun-Dbootstrap_conf=true
> -DzkHost=localhost:9080 -DnumShards=2
> in weblogic
>
>
>
> --
> View this m
Hi,
what does your update/add command look like?
Regards
Vadim
2012/10/18 rayvicky :
> i make it work on weblogic.
> but when i add or update index ,it error
>
>
> <2012-10-17 ?Χ03?47·?3? CST> unexpected error occurred while retrieving the session for Web application:
> weblogic.servlet.internal
Hi Guru,
here is my blog post about this:
http://www.sentric.ch/blog/setting-up-solr-4-0-beta-with-tomcat-and-zookeeper
It's pretty simple; just follow the mentioned steps.
Best regards
Vadim
2012/9/5 bsargurunathan :
> Hi Markus,
>
> Can you please tell me the exact file name in the tomcat folder?
Hi,
your JVM needs more RAM. My setup works well with 10 cores and 300 million
docs: Xmx 8GB, Xms 8GB, 16GB for the OS.
But as Bernd mentioned, the memory consumption depends on the
number of fields and the fieldCache.
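For example, as JVM start parameters (the values from my setup; adjust to
yours):

  java -Xms8g -Xmx8g ...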
Best Regards
Vadim
2012/11/16 Bernd Fehling :
> I guess you should give JVM more m