rce
FINE
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator
<init>
11
Executing SQL: select * from doc_properties where
DOCID='0u3xouyscdhye61o'
Therefore I assume the output shown in the DataImportHandler UI is
incorrect. I could double-check with the database logs.
cheerio,
patrick
On 2
..
I was wondering if there was something wrong with my configuration - thank
you for clarifying,
patrick
ation would be helpful in case a specific shard has to be
re-indexed (no indexing downtime, isolated recovery). I assume the
HTTP response only contains the IP address of the proxy.
thank you for any hints!
cheerio,
patrick
e/select?q=*:*&shards=rdta01%3A9983%2Fsolr%2Fmsg-core%2Crdta01%3A28983%2Fsolr%2Fmsg-core&indent=on&start=0&rows=0
numFound=12 (correct)
cheerio,
patrick
I resolved my confusion and discovered that the documents of the second
shard contained the same 'unique' id.
rows=0 displayed the 'correct' numFound since (as I understand it) there
was no merge of the results.
cheerio,
patrick
On 25.07.2012 17:07, patrick wrote:
hi,
What are the specifications of your hosts and how much memory are you
allocating?
Cheers,
-patrick
On 16/03/2016, 14:52, "YouPeng Yang" wrote:
>Hi
> It happened again, and the worse thing is that my system crashed. We
>cannot even connect to it with SSH.
>
Yeah, I didn't pay attention to the cached memory at all, my bad!
I remember running into a similar situation a couple of years ago; one of the
things we did to investigate our memory profile was to produce a full heap dump
and manually analyse it using a tool like MAT.
Cheers,
-patrick
On 17/03
, but does not return the ones that
were created matching my dynamic definitions, such as *_s, *_i, *_txt, etc.
I know Solr is aware of these fields, because I can query against them.
What is the secret sauce to query their names and data types?
Thanks,
Patrick Hoeffel
Senior Software Eng
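One possible way to get at those names (hedged - this may not be what was suggested later in the thread) is the Luke request handler, which reports the fields that actually exist in the index, including those created by dynamic field rules, together with their types. Assuming a core named "mycore":

  http://localhost:8983/solr/mycore/admin/luke?numTerms=0&wt=json

lists every concrete field with its type and flags, whereas the Schema API's /schema/fields only returns the explicitly declared fields.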
just started making suggestions on the documentation and was
wondering if there is a reason why the TOC and section headings are
missing? (that isn't apparent from the document)
Thanks!
Hope everyone is near a great weekend!
Patrick
topics fell in that section or not.
Thanks!
Hope you are having a great day!
Patrick
On 03/06/2015 12:28 PM, Shawn Heisey wrote:
On 3/6/2015 10:20 AM, Patrick Durusau wrote:
I was looking at the PDF version of the Apache Solr Reference Guide
5.0 and noticed that it has no TOC nor any section
I use Solr to index different kinds of database tables. I have a Solr index
containing a field named category. I make sure that the category field in Solr
gets occupied with the right value depending on the table. This I can use to
build facet queries which works fine.
The problem I have is wit
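For reference, a typical facet request over such a category field looks like this (host and core name are placeholders):

  http://localhost:8983/solr/mycore/select?q=*:*&rows=0&facet=true&facet.field=category

which returns the per-category document counts without fetching any documents.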
Hi,
I am having problems getting the delta import working. Full import works fine.
I am using the current version of Solr (6.1). I have been looking at this pretty
much all day and can't find what I am not doing correctly... I did try using
the query attribute for both full and delta import and tha
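For comparison, a minimal delta setup in a DIH data-config needs the three queries below on the entity (table and column names are placeholders only):

  <entity name="item" pk="id"
          query="SELECT id, name FROM item"
          deltaQuery="SELECT id FROM item WHERE last_modified &gt; '${dataimporter.last_index_time}'"
          deltaImportQuery="SELECT id, name FROM item WHERE id = '${dataimporter.delta.id}'">
  </entity>

The delta run is then triggered with command=delta-import; a classic gotcha is the pk value and the column name returned by deltaQuery not matching exactly (including case), in which case deltaImportQuery never receives any ids.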
n. I've been creating the collection first
from defaults and then applying the CDCR-aware solrconfig changes afterward. It
sounds like maybe I need to create the configset in ZK first, then create the
collections, first on the Target and then on the Source, and I should be good?
Thanks,
Patrick Hoe
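For reference, the sequence described above written out as steps (configset and collection names are placeholders; this assumes a Solr 6.x install where bin/solr zk upconfig is available):

  1. Upload the CDCR-aware configset to ZooKeeper on both clusters:
       bin/solr zk upconfig -z zkhost:2181 -n cdcr-conf -d /path/to/conf
  2. Create the collection on the Target cluster:
       /admin/collections?action=CREATE&name=mycoll&numShards=2&collection.configName=cdcr-conf
  3. Create the collection on the Source cluster with the matching configset, then start CDCR.

That way the collections are created with the CDCR request handlers already in place instead of being reconfigured and reloaded afterward.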
in the SOURCE collection started flowing into
the TARGET collection, and it has remained congruent ever since.
Thanks,
Patrick Hoeffel
Senior Software Engineer
(Direct) 719-452-7371
(Mobile) 719-210-3706
patrick.hoef...@polarisalpha.com
PolarisAlpha.com
-Original Message-
From: Am
I would appreciate it if anyone could help raise an issue for the JSON facet
sum error that my staff member Edwin reported earlier,
as we have not gotten any response from the Solr community and developers.
Our production operation urgently needs this accuracy to proceed, as it
impacts audit issues.
Best regards,
Dr.Pa
Kind regards,
Patrick Fallert
Rainer-Haungs-Straße 7
F values to be 6 and TF-IDF to be adjusted on that value.
I can see in the debug logs that the cache was active.
I have found a pending bug (since Solr 5.5:
https://issues.apache.org/jira/browse/SOLR-8893) that explains that this
ExactStatsCache is used to compute the correct TF-IDF for the query but not for
the TermVectors component.
Is there any way to get the correctly merged DF values (and TF-IDF) from
multiple shards?
Is there a way to get from which shard a document comes from so I could compute
my own correct DF?
Thank you,
Patrick
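On the last question - finding out which shard a document came from - one option in SolrCloud is the [shard] document transformer, requested through the fl parameter (field and query values here are only examples):

  /select?q=text:foo&fl=id,score,[shard]

Each returned document then carries a pseudo-field with the URL of the shard that served it, which could be used to fetch per-shard docFreq values and compute a merged DF by hand.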
I'm using Solr 4.4.0 running on Tomcat 7.0.29. The solrconfig.xml is
as-delivered (except for the Solr home directory, of course). I could pass
on the schema.xml, though I doubt this would help much, as the following
will show.
If I select all documents containing "russia" in the text, which is t
Thank you for your very quick reply - and for your solution, that works
perfectly well.
Still, I wonder why this simple and straightforward syntax "web OR
NOT(russia)" needs some translation to be processed correctly...
From the many related posts I read before asking my question, I know that
I'm
address the stalling issues.
Is there something I am overlooking? Perhaps the system is becoming
oversubscribed in terms of resources? Thanks for any help that is offered.
--
Patrick O'Lone
Director of Software Development
TownNews.com
E-mail ... pol...@townnews.com
Phone 309-743-0809
Fax .. 309-743-0830
s more than that, but that just seems
> bizarre unless you're doing something like faceting and/or sorting on every
> field.
>
> -Michael
>
> -Original Message-
> From: Patrick O'Lone [mailto:pol...@townnews.com]
> Sent: Tuesday, November 26, 2013 11
nyway.
--
Patrick O'Lone
Director of Software Development
TownNews.com
E-mail ... pol...@townnews.com
Phone 309-743-0809
Fax .. 309-743-0830
mly, after replication, I have several threads that will hang on
reading data from field cache and I'm trying to think of things I can do
to mitigate that. Thanks for the info.
> Hello Patrick,
>
> Replication flushes UnInvertedField cache that impacts fc, but doesn't
> harm Luce
size hitting the sky.
>
> Don't get bored reading the 1st (and small) introduction page of the
> article, page 2 and 3 will make lot of sense:
> http://www.drdobbs.com/jvm/g1-javas-garbage-first-garbage-collector/219401061
>
>
> HTH,
>
> Guido.
>
> On 26/11/13
ming more and
> more reliable, standard and stable.
>
> Regards,
>
> Guido.
>
> On 09/12/13 15:07, Patrick O'Lone wrote:
>> I have a new question about this issue - I create a filter queries of
>> the form:
>>
>> fq=start_time:[* TO NOW/5MINUTE]
>>
ou add the Garbage collection JVM options I suggested you?
>
> -XX:+UseG1GC -XX:MaxGCPauseMillis=50
>
> Guido.
>
> On 09/12/13 16:33, Patrick O'Lone wrote:
>> Unfortunately, in a test environment, this happens in version 4.4.0 of
>> Solr as well.
>>
>>> I
> is not the good one.
> * will be replaced by the first date in your field
>
> Try :
> fq=start_time:[NOW TO NOW+5MINUTE]
>
> Franck Brisbart
>
>
> On Monday, 9 December 2013 at 09:07 -0600, Patrick O'Lone wrote:
>> I have a new question about this issue
cause it actually caches the negative
query or if it discards it entirely.
> Patrick,
>
> Are you getting these stalls following a commit? If so then the issue is
> most likely fieldCache warming pauses. To stop your users from seeing
> this pause you'll need to add stati
If I was to use the LFU cache instead of FastLRU on the filter cache, if
I enable auto-warming on that cache type - does it warm the most
frequently used fq on the filter cache? Thanks for any info!
--
Patrick O'Lone
Director of Software Development
TownNews.com
E-mail ... pol...@townnew
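For context, the cache being discussed is declared in solrconfig.xml along these lines (class and sizes are example values only):

  <filterCache class="solr.LFUCache"
               size="512"
               initialSize="512"
               autowarmCount="128"/>

Whether the autowarmed entries for solr.LFUCache are the most frequently used ones rather than the most recently used ones is exactly the question above.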
Well, I haven't tested it - if it's not ready yet I will probably avoid it
for now.
> On 12/19/2013 1:46 PM, Patrick O'Lone wrote:
>> If I was to use the LFU cache instead of FastLRU on the filter cache, if
>> I enable auto-warming on that cache type - does it warm the
ient$7.execute(SolrZkClient.java:249)
at
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:65)
at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:249)
at org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:889)
... 16 more
On Tue, Jan 7, 2014 at 9:57
instances started up much more quickly.
Cheers,
Pat.
On Tue, Jan 7, 2014 at 10:20 AM, patrick conant wrote:
> After a full bounce of Tomcat, I'm now getting a new exception (below). I
> can browse the Zookeeper config in the Solr admin UI, and can confirm that
> there's a nod
a custom Solr Query Parser etc.?
Regards,
Patrick
and the maintainability.
My question for the community is what your thoughts are on this, and do
you have any suggestions and/or recommendations on planning for future
growth?
Look forward to your responses,
Patrick
Good eye, that should have been gigabytes. When adding to the new shard,
is the shard already part of the collection? What mechanism have you
found useful in accomplishing this (i.e. routing)?
On Nov 14, 2014 7:07 AM, "Toke Eskildsen" wrote:
> Patrick Henry [patricktheawesomeg
alias at it.
>
> Michael
>
> On 11/14/14 07:06, Toke Eskildsen wrote:
>
>> Patrick Henry [patricktheawesomeg...@gmail.com] wrote:
>>
>> I am working with a Solr collection that is several terabytes in size
>>> over
>>> several hundred millions of
rvers
available to handle this request
Seems to me that CloudSolrServer will not trigger the core to be loaded.
Is it possible to get the core loaded using CloudSolrServer?
Regards,
Patrick
still
running. I now use a standalone ZooKeeper instance and that works well.
Thanks Erick for pointing me in the right direction, much appreciated!
Regards,
Patrick
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Wednesday, 20 March 2013 2:57 a.m.
To: solr
Compiled it on trunk without problem.
Is this patch supposed to work for 4.X?
Regards,
Patrick
Thanks Steve, that worked for branch_4x
-Original Message-
From: Steve Rowe [mailto:sar...@gmail.com]
Sent: Friday, 24 May 2013 3:19 a.m.
To: solr-user@lucene.apache.org
Subject: Re: OPENNLP current patch compiling problem for 4.x branch
Hi Patrick,
I think you should check out and
only nouns and verbs
Same problem when I updated the data using CSV upload.
Is that a bug or something I did wrong?
Thanks in advance!
Regards,
Patrick
Hi Lance,
I updated the src from 4.x and applied the latest patch LUCENE-2899-x.patch
uploaded on 6th June but still had the same problem.
Regards,
Patrick
-Original Message-
From: Lance Norskog [mailto:goks...@gmail.com]
Sent: Thursday, 6 June 2013 5:16 p.m.
To: solr-user
d the tokenizers interact only with the indexed
field and not the stored one.
Am I wrong?
Would it be possible for you to write such a filter?
Patrick.
Use real-time indexing (Solr 4), or decrease the replication poll interval and
the auto-commit time.
2011/9/10 Jamie Johnson
> Is it appropriate to query the master servers when replicating? I ask
> because there could be a case where we index say 50 documents to the
> master, they have not yet been replicated and a
I can't create one field per language, that is the problem, but I'll dig into
it following your indications.
I'll let you know what I come up with.
Patrick.
2011/9/11 Jan Høydahl
> Hi,
>
> You'll not be able to detect language and change stemmer on the same field
1006011609080.29...@radix.cryptio.net%3E
Patrick.
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Monday, September 12, 2011 3:04 PM
To: solr-user@lucene.apache.org
Subject: Re: Weird behaviors with not operators.
: I'm crashing into a weird behavio
I mean it's a known bug.
Hostetter AND (-chris *:*)
should do the trick, depending on your request:
NAME:(-chris *:*)
-Original Message-
From: Patrick Sauts [mailto:patrick.via...@gmail.com]
Sent: Monday, September 12, 2011 3:57 PM
To: solr-user@lucene.apache.org
Subject: RE:
Is the facet.method=fc parameter still needed?
Thank you.
Patrick.
Solr failing consistently
represents something that may cause trouble elsewhere when least
expected. (And hard to isolate as the problem.)
Thanks!
Hope everyone is having a great weekend!
Patrick
PS: From the hadoop log (when it fails) if that's helpful:
2011-12-11 15:21:51,436 INFO solr.SolrWriter - Adding 11 documents
2011-12-11 15:21
Have a look here first; you will probably be using EmbeddedSolrServer.
http://wiki.apache.org/solr/Solrj
Patrick
On 13 Dec 2011 at 20:38, Joey wrote the following:
> Anybody could help?
>
> --
> View this message in context:
> http://lucene.472066.n3.n
that.
- Patrick
Sent from my iPhone
On 13 Dec 2011 at 20:53, Joey wrote the following:
> Thanks Patrick for the reply.
>
> What I did was un-jar solr.war and created my own web application. Now I
> want to write my own servlet to index all files inside a folder.
>
sophisticated access control, you need it to be included
in an extra layer between Solr and the devices accessing your Solr
instance.
- Patrick
2011/12/21 RT
> Hi,
>
> I would like to control what applications get access to the solr database.
> I am using jetty as the appcontainer.
Hi Marotosg,
you can index the phonenumber field with the ngram field type, which allows
for partial (wildcard) searches on this field. Have a look here:
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.CommonGramsFilterFactory
Cheers,
Patrick
2012/1/19 marotosg
> Hi.
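A minimal sketch of such an n-gram field type in schema.xml (type name, field name and gram sizes are made up for illustration):

  <fieldType name="phone_ngram" class="solr.TextField">
    <analyzer type="index">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="15"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
    </analyzer>
  </fieldType>
  <field name="phonenumber" type="phone_ngram" indexed="true" stored="true"/>

With the number indexed as n-grams, a query on a partial number matches without leading wildcards.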
I partially agree. If just the facts are given, and not a complete sales pitch,
it'll be fine. Don't overdo it like this, though.
Cheers,
Patrick
2012/1/19 Darren Govoni
> I think the occassional "Hey, we made something cool you might be
> interested in!" noti
I went through jar hell yesterday. I finally got Solrj working.
http://jarfinder.com was a big help.
Rock on, PLA
Patrick L Archibald
http://patrickarchibald.com
On Fri, Aug 13, 2010 at 7:25 PM, Chris Hostetter
wrote:
>
> : I get the following runtime error:
> :
> : Excepti
I can't find the answer, but is this problem solved in Solr 1.4.1?
Thanks for your answers.
Maybe the SOLR-80 Jira issue?
As written in the Solr 1.4 book, "a pure negative query doesn't work correctly";
you have to add 'AND *:*'.
Thanks.
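To illustrate with a made-up field name, a pure negative query such as

  q=-status:inactive

returns nothing on its own in those versions, whereas

  q=*:* AND -status:inactive

first selects all documents and then subtracts the matches, which is the workaround the book describes.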
From: Patrick Sauts [mailto:patrick.via...@gmail.com]
Sent: Tuesday, 28 September 2010 11:53
To: 'solr-user@lucene.ap
than under instanceDir of the SolrCore.
Am I doing something wrong in configuration?
Regards,
Patrick
on the
document ...
-Original Message-
From: Patrick Mi [mailto:patrick...@touchpointgroup.com]
Sent: Tuesday, 26 February 2013 5:49 p.m.
To: solr-user@lucene.apache.org
Subject: DataDirectory: relative path doesn't work
I am running Solr 4.0/Tomcat 7 on CentOS 6.
According to this page http://
't restart master A. Only after I
shut down B and C can I start master A.
Is this a feature, a bug, or something I haven't configured properly?
Thanks in advance for your help.
Regards,
Patrick
A start might be to use a RAM disk for that. Mount it as a normal disk and
have the index files stored there. Have a read here:
http://en.wikipedia.org/wiki/RAM_disk
Cheers,
Patrick
2012/2/8 Ted Dunning
> This is true with Lucene as it stands. It would be much faster if there
>
Every data source has a name, and so does
every entity.
Now, in the result list, I would need to know which table (i.e. which
entity) a given result came from.
Thanks,
Patrick
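One way to get that information into the results (a sketch with hypothetical entity, table and field names) is a constant field per entity via the DIH TemplateTransformer:

  <entity name="item" transformer="TemplateTransformer"
          query="SELECT id, name FROM item">
    <field column="source_table" template="item"/>
  </entity>

with a matching stored field (e.g. a string field named source_table) in schema.xml. Every document produced by that entity then carries source_table=item, which can also be faceted on.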
to give this clustering
extension a try from within Solr using the 1.4.1 version that I have already
running on my server.
Thanks for a brief feedback.
Best regards,
Patrick
[X] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream project)
rserPlugin.java:
[...]
public class DisMaxQParserPlugin extends QParserPlugin {
[...]
Thanks,
Patrick
Hi Markus,
Why do you think it's not deleting anything?
Thanks,
Patrick
On 22 Oct 2012 08:36, "Markus.Mirsberger" wrote:
> Hi,
>
> I am trying to delete a some documents in my index by query.
> When I just select them with this negated query, I get al
Did you make sure to commit after the delete?
Patrick
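For reference, with the XML update format the delete only becomes visible after a commit, e.g. (the query value is just an example):

  <delete><query>category:obsolete</query></delete>
  <commit/>

sent to /update; until a commit (or an autoCommit that opens a new searcher) the document counts will not change.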
On 22 Oct 2012 08:43, "Markus.Mirsberger" wrote:
> Hi, Patrick,
>
> Because I have the same number of documents in my index as before I
> performed the query.
> And when I use the negated query just
e submit to the Solr server that replaces each sequence of minus
characters by a single one.
Regards, Patrick
Bernadette Houghton wrote:
> Sorry, the last line was truncated -
>
> HTTP Status 400 - org.apache.lucene.queryParser.ParseException: Cannot parse
> '(Asia
tandard" and
also with "dismax", the tokens "foo" and "bar" were not replaced. The
parsedQueryString was something similar to "field:foo field:bar". At
index time, it works like expected.
Has anybody experienced this and/or knows a workaround, a solution for it?
Thanks, Patrick
z", but it will not
match the indexed documents that only contains "foo_bar". And this is,
what we need here.
The cause of my problem should be the query parsing, but I don't know,
if there is any solution for it. I need a possibility that works like
the analysis/query parsi
and not at "real" query time.
Patrick
Chantal Ackermann wrote:
> Hi Patrick,
>
> have you added that SynonymFilter to the index chain and the query
> chain? You have to add it to both if you want to have it replaced at
> index and query time. It might also be enough
Hi list,
is there any possibility to get highlighting also for the query string?
Example:
Query: fooo bar
Tokens after query analysis: foo[0,4], bar[5,8]
Token "foo" matches a token of one of the queried fields.
-> Query higlighting: "fooo"
Thanks, Patrick
d "foo bar" or "bar" and "foo bar".
The best way to handle this, seems to be by using synonyms that allows
the precise configuration of this and that could be managed by an
editorial staff.
Besides, foo bar=>foo_bar works at anything (index time, analysis.jsp)
but qu
FieldQParserPlugin, it's
possible to search for things like "foo bar baz", where "foo bar" has to
be changed to "foo_bar" and where at the end the tokens "foo_bar" und
"baz" will be created, so that both could match independently.
Patrick
Chris Hoste
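A sketch of the multi-word synonym setup discussed above (values are purely illustrative):

  # synonyms.txt
  foo bar => foo_bar

  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="false"/>
  </analyzer>

The catch, as described earlier in the thread, is that the query parser may split "foo bar" on whitespace before the query-time analyzer ever sees the two words together, so the mapping applies cleanly only at index time.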
Hi Stefan,
you don't need to convert the Java objects built from the result
returned as Javabin. Instead, you could simply use the JSON
return format by setting "wt=json". See [0] for more information
about this.
Patrick
[0] http://wiki.apache.org/solr/SolJSON
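For example (host and query are placeholders):

  http://localhost:8983/solr/select?q=foo&wt=json&indent=true

returns the same response as the Javabin or XML formats, just serialized as JSON, so it can be consumed directly without SolrJ.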
y to go without
complex query parsing et cetera. You would also have to write your own modified
QParser that fits your special needs. Some higher-level features, like
those offered by other QParsers, could be integrated as well. It's all up to
you and your needs.
Patrick
brad anderson wrote:
> Th
.
-Patrick
Peter A. Kirk wrote:
> Hi
>
>
>
> It appears that Solr reads a synonym list at startup from a text file.
>
> Is it possible to alter this behaviour so that Solr obtains the synonym list
> from a database instead?
>
>
>
> Thanks,
>
> Peter
>
>
Try solr.FastLRUCache instead of solr.LRUCache; it's the new cache
implementation for Solr 1.4.
You could also lower the mergeFactor in the main index section.
See http://wiki.apache.org/lucene-java/ImproveSearchingSpeed
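A sketch of the corresponding solrconfig.xml entries (sizes are arbitrary examples):

  <filterCache      class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="128"/>
  <queryResultCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="128"/>

  <mergeFactor>5</mergeFactor>

A lower mergeFactor keeps the index in fewer segments, which tends to make searches faster at the cost of slower indexing.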
Tomasz Kępski wrote:
Hi,
I'm using SOLR(1.4) to search among about 3,500,000 d
having suggestions?
Best,
Patrick
thoughts?
Best,
Patrick
details of the interestingTerms.
Thanks in advance
Patrick
-Original Message-
From: Aleksander M. Stensby [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 26 November 2008 13:03
To: solr-user@lucene.apache.org
Subject: Re: Keyword extraction
I do not agree with you at all. The concept of MoreL
Hi Aleksander,
This was a typo on my end; the original query included a semicolon instead of
an equal sign. But I think it has to do with my field not being stored and not
being marked with termVectors="true". I'm recreating the index now and will see
if this fixes the problem.
uired?
The querystring we're currently executing is:
http://suempnr3:8080/solr/select/?q=amsterdam&mlt.fl=text&mlt.displayTerms=list&mlt=true
Best,
Patrick
-Original Message-
From: Aleksander M. Stensby [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 26 November 20
Or have a look at the Wiki, probably a better way to start:
http://wiki.apache.org/solr/SolPHP
Best,
Patrick
--
Just trying to help
http://www.ipros.nl/
--
-Original
urce :null available for entity :item Processing Document #
Best,
Patrick
Hi,
You can find the SVN repository here:
http://www.apache.org/dev/version-control.html#anon-svn
I'm not sure if this represents the 1.4 version, but being the trunk
it's the latest version.
Best,
Patrick
-Original Message-
From: roberto [mailto:miles.c...@gmail.
Sorry all,
Wrong url in the post, right url should be:
http://svn.apache.org/repos/asf/lucene/solr/
Best,
Patrick
-Original Message-
From: Plaatje, Patrick [mailto:patrick.plaa...@getronics.com]
Sent: Tuesday, 16 December 2008 22:19
To: solr-user@lucene.apache.org
Subject: RE
Glad that's sorted. On the other issue (directly accessing Solr from any
client) I think I saw a discussion on the list earlier, but I don't know
what the result was; browse through the archives and look for something
about security (I think).
Best,
patrick
-Original Message
Hi Julian,
I'm a bit confused. The indexing is indeed being done through XML, but
in searching it is possible to get JSON results by using the wt=json
parameter, have a look here:
http://wiki.apache.org/solr/SolJSON
Best,
Patrick
-Original Message-
From: Julian Davchev [mai
Hi,
I'm wondering if you could not implement a custom filter which reads the
file in real time (you might even keep the created synonym map in memory for
a predefined time). That way it doesn't need a restart of the container.
Best,
Patrick
-Original Message-
From: Shalin Shek
Hi All,
I developed my own custom search component, in which I need to get the
requestor's IP address. But I can't seem to find a request object from
which I can get this string - ideas, anyone?
Best,
Patrick
Hi,
At the moment Solr does not have such functionality. I have written a plugin
for Solr though which uses a second Solr core to store/index the searches. If
you're interested, send me an email and I'll get you the source for the plugin.
Regards,
Patrick
-Original Message
Hi Shalin,
Let me investigate. I think the challenge will be in storing/managing these
statistics. I'll get back to the list when I have thought of something.
Rgrds,
Patrick
-Original Message-
From: Shalin Shekhar Mangar [mailto:shalinman...@gmail.com]
Sent: Wednesday, 20 May 2009
if required, it shouldn't
be so hard to modify the script and code Database support into it.
You can find the source here:
http://www.ipros.nl/uploads/Stats-component.zip
It includes a README, and a schema.xml that should be used.
Please let me know your thoughts.
Best
Hi,
In our specific implementation this is not really an issue, but I can imagine
it could impact performance. I guess a new thread could be spawned to take
care of any performance issues; thanks for pointing it out. I'll post a message
when I have coded the change.
Regards,
Pa
re the add can start but I have
no knowledge of the true sequencing of events in either Solr or Lucene.
If this is happening, how can I know when the delete has been processed
before initiating the add process?
Thanks,
Patrick Johnstone
r you) I'm running Solr on Tomcat
5.0.28 and sometimes - not at a time of rsync, big traffic, or a commit -
it doesn't respond anymore and uptime is very high.
Thank you for your help.
Patrick.
I'm using Solr 1.4 on Tomcat 5.0.28, with the client
StreamingUpdateSolrServer with 10 threads and XML communication via the POST
method.
Is there a way to avoid this error (data loss)?
And is StreamingUpdateSolrServer reliable?
GRAVE: org.apache.solr.common.SolrException: Invalid CRLF
at org.a