the GUID appears as the attribute name and not as
"id":"baf8434a-99a4-4046-8a4d-2f7ec09eafc8":
Trying to create an object that holds this GUID will create an attribute
with the name baf8434a-99a4-4046-8a4d-2f7ec09eafc8
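One way around it is to deserialize the highlighting section into a nested map keyed by the GUID, rather than into a class with fixed attribute names. A minimal sketch using plain java.util types (the JSON-to-map step itself would be handled by whatever JSON library you already use; the names here are illustrative):

```java
import java.util.*;

public class HighlightLookup {
    // The highlighting section is shaped as { docId -> { fieldName -> [snippets] } },
    // so the GUID becomes a map key instead of an attribute name.
    static List<String> snippetsFor(Map<String, Map<String, List<String>>> highlighting,
                                    String docId, String field) {
        return highlighting
                .getOrDefault(docId, Collections.emptyMap())
                .getOrDefault(field, Collections.emptyList());
    }

    public static void main(String[] args) {
        Map<String, Map<String, List<String>>> hl = new HashMap<>();
        hl.put("baf8434a-99a4-4046-8a4d-2f7ec09eafc8",
               Collections.singletonMap("PackageName",
                       Collections.singletonList("- Testing channel twenty.")));
        System.out.println(snippetsFor(hl,
                "baf8434a-99a4-4046-8a4d-2f7ec09eafc8", "PackageName"));
        // prints [- Testing channel twenty.]
    }
}
```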
On Mon, Jul 22, 2013 at 6:30 PM, Jack Krupansky wrote:
> Exactly why is it difficu
To clarify: I did delete the data in the index and reloaded it (+ commit).
(As I said, I have seen it loaded in the sb profiler.)
Thanks for your comment.
On Mon, Jul 22, 2013 at 9:25 PM, Lance Norskog wrote:
> Solr/Lucene does not automatically backfill a new field when asked, the way
> DBMS systems do. Instead,
That means that for that document "china" occurs in the title, vs. "snowden",
which is found in the document but not in the title.
-- Jack Krupansky
-Original Message-
From: Joe Zhang
Sent: Tuesday, July 23, 2013 12:52 AM
To: solr-user@lucene.apache.org
Subject: Re: Question about field boost
Is my reading correct that the boost is only applied on "china" but not
"snowden"? How can that be?
My query is: q=china+snowden&qf=title^10 content
On Mon, Jul 22, 2013 at 9:43 PM, Joe Zhang wrote:
> Thanks for your hint, Jack. Here are the debug results, which I'm having a
> hard time deciphering
Thanks for your hint, Jack. Here are the debug results, which I'm having a
hard time deciphering (the two terms are "china" and "snowden")...
0.26839527 = (MATCH) sum of:
  0.26839527 = (MATCH) sum of:
    0.26757246 = (MATCH) max of:
      7.9147343E-4 = (MATCH) weight(content:china in 249), product of
After restarting Solr and doing a couple of queries to warm the caches, are
queries already slow/failing, or does it take some time and a number of
queries before failures start occurring?
One possibility is that you just need a lot more memory for caches for this
amount of data. So, maybe the
Maybe you're not doing anything wrong - other than having an artificial
expectation of what the true relevance of your data actually is. Many
factors go into relevance scoring. You need to look at all aspects of your
data.
Maybe your terms don't occur in your titles the way you think they do.
It was running fine initially when we just had around 100 fields
indexed. In this case as well it runs fine, but after some time a broken pipe
exception starts occurring, which results in the shard going down.
Regards,
Suryansh
On Tuesday, July 23, 2013, Jack Krupansky wrote:
> Was all of this running f
Dear Solr experts:
Here is my query:
defType=dismax&q=term1+term2&qf=title^100 content
Apparently (at least I thought) my intention is to boost the title field.
While I'm getting some non-trivial results, I'm surprised that the
documents with both term1 and term2 in title (I know such docs do ex
Hi,
I use solr 4.3.1.
I tried to index about 70 documents using softCommit as below:
SolrInputDocument doc = new SolrInputDocument();
result = fillMetaData(request, doc); // custom one
int softCommit = 1;
solrServer.add(doc, softCommit);
Process ran ver
Hey,
Is there a way to do spellcheck and search (using suggestions returned from
spellcheck) in a single Solr request?
I am seeing that if my query is spelled correctly, I get results, but if
misspelled, I just get suggestions.
Any pointers will be very helpful.
Thanks,
-Manasi
--
View this
I have 2 collections, let's say coll1 and coll2.
I configured solr.DirectSolrSpellChecker in coll1 solrconfig.xml and works
fine.
Now, I want to configure coll2's solrconfig.xml to use the SAME spellcheck
dictionary index created above. (I do not want coll2 to prepare its own
dictionary index but just do
I added it to the schema.xml and now it's working.
Thank you very much Jack.
--
View this message in context:
http://lucene.472066.n3.nabble.com/update-extract-error-in-Solr-4-3-1-tp4079555p4079564.html
Sent from the Solr - User mailing list archive at Nabble.com.
You need a dynamic field pattern for "ignored_*" to ignore unmapped
metadata.
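Concretely, that pattern might look like the following in schema.xml (a sketch based on the stock example schema; adjust the type definition if your schema already has an "ignored" type):

```xml
<!-- swallow any extracted metadata field that has no explicit mapping -->
<fieldType name="ignored" class="solr.StrField" indexed="false" stored="false" multiValued="true"/>
<dynamicField name="ignored_*" type="ignored"/>
```

Combined with uprefix=ignored_ on the /update/extract handler, unmapped metadata fields are then silently dropped instead of causing errors.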
-- Jack Krupansky
-Original Message-
From: franagan
Sent: Monday, July 22, 2013 5:14 PM
To: solr-user@lucene.apache.org
Subject: /update/extract error
Hi all,
I'm testing SolrCloud (version 4.3.1) with 2 sh
Hi all,
I'm testing SolrCloud (version 4.3.1) with 2 shards and 1 external ZooKeeper.
It's all running OK: documents are indexing in 2 different shards and select
*:* gives me all documents.
Now I'm trying to add/index a new document via SolrJ using CloudSolrServer.
The code:
CloudSolrServer server
Was all of this running fine previously and only started running slow
recently, or is this your first measurement?
Are very simple queries (single keyword, no filters or facets or sorting or
anything else, and returning only a few fields) working reasonably well?
-- Jack Krupansky
-Origi
I just have a little python script which I run with cron (luckily that's
the granularity we have in Graphite). It reads the same JSON the admin UI
displays and dumps numeric values into Graphite.
I can open source it if you like. I just need to make sure I remove any
hacks/shortcuts that I've take
We are using grouping in a distributed environment, and we have noticed a
discrepancy:
On a single core with a group.limit > 1 and group.main=true, setting rows=10
will return 10 documents. A distributed setup with the same parameters will
return 10 groups.
We plan to open a jira ticket a
Hi,
We have a two shard solrcloud cluster with each shard allocated 3 separate
machines. We do complex queries involving a number of filter queries
coupled with group queries and faceting. All of our machines are 64 bit
with 32 GB of RAM. Our index size is around 10 GB with around 800,000
documents.
Hello Mikhail,
PS: sending to solr-user as well; I've realized I was writing just to
you, sorry...
On Mon, Jul 22, 2013 at 3:07 AM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> Hello Roman,
>
> Please don't get me wrong. I have no idea what happened with that dependency.
> There are rece
I'm seeing random crashes in solr 4.0 but I don't have anything to go on
other than "IllegalStateException". Other than checking for corrupt
index and out of memory, what other things should I check?
org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet def
Solr/Lucene does not automatically backfill a new field when asked, the way
DBMS systems do. Instead, all data for a field is added at the same time. To get the
new field populated, you have to reload all of your data.
This is also true for deleting fields. If you remove a field, that data
does not go away until you re-
Are you feeding Graphite from Solr? If so, how?
On 07/19/2013 01:02 AM, Neil Prosser wrote:
That was overnight so I was unable to track exactly what happened (I'm
going off our Graphite graphs here).
I am trying to read Solr config files from outside of a running Solr
instance. It's one of the approaches for SolrLint (
https://github.com/arafalov/SolrLint ). I kind of expected to just need
core Solr classes for that, but I needed SolrJ and Lucene analyzer jar and
a bunch of other jars.
The
Again, you haven't indicated what the problem is. I mean, have you actually
confirmed that a problem exists? Add debugQuery=true to your query and
examine the "explain" section if you believe that Solr has improperly
computed any document scores.
If you simply want to boost a term in a query,
: Elodie can you please open a bug in jira for this with your specific
...
: ...the issue you linked to before (SOLR-3087) included a specific test to
: ensure that fieldTypes could be included like this, and that test works --
: so perhaps in your testing you have some other subtle bug?
There is but I couldn't get it to work in my environment on Jetty, see:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201306.mbox/%3CCAJt9Wnib+p_woYODtrSPhF==v8Vx==mDBd_qH=x_knbw-bn...@mail.gmail.com%3E
Let me know if you have any better luck. I had to resort to something
hacky but wa
Hi Alex,
I'm not sure I follow - are you trying to create a ConfigSolr object from data
read in from elsewhere, or trying to export the ConfigSolr object to another
process? If you're dealing with solr core java objects, you'll need the solr
jar and all its dependencies (including solrj).
Ala
: to use "Document Entity" in schema.xml, I get this exception :
: java.lang.RuntimeException: schema fieldtype
: string(org.apache.solr.schema.StrField) invalid
: arguments:{xml:base=solrres:/commonschema_types.xml}
Elodie can you please open a bug in jira for this with your specific
example? p
When I request a JSON result I get the following structure in the
highlighting:
{"highlighting":{
"394c65f1-dfb1-4b76-9b6c-2f14c9682cc9":{
"PackageName":["- Testing channel twenty."]},
"baf8434a-99a4-4046-8a4d-2f7ec09eafc8":{
"PackageName":["- Testing channel twenty."]},
"0a69
There is a reason of course, or else it wouldn't be like that.
We addressed it recently.
https://issues.apache.org/jira/browse/SOLR-3633
https://issues.apache.org/jira/browse/SOLR-3677
https://issues.apache.org/jira/browse/SOLR-4943
- Mark
On Jul 22, 2013, at 10:57 AM, Michael Della Bitta
wro
I'm not sure why it went down exactly -- I restarted the process and lost the
logs. (d'oh!)
An OOM seems likely, however. Is there a setting for killing the process
when Solr encounters an OOM?
Thanks!
Jim
--
View this message in context:
http://lucene.472066.n3.nabble.com/Node-down-but-n
On 22 July 2013 20:01, Mysurf Mail wrote:
>
> I have added a date field to my index.
> I don't want the query to search on this field, but I want it to be
> returned
> with each row.
> So I have defined it in the schema.xml as follows:
>stored="true" required="true"/>
>
>
>
> I added it to t
Sure, let's say the user types in test pdf;
we need the results with all the query words to be near the top of the
result set.
the query will look like this: /select?q=text%3Atest+pdf&wt=xml
How do I ensure that the top resultset contains all of the query words?
How can I boost the first (or secon
: By the way, if the issue is OK, how can I post my code?
Take a look at this wiki page for information on submitting patches...
https://wiki.apache.org/solr/HowToContribute
https://wiki.apache.org/solr/HowToContribute#Generating_a_patch
...you can attach your patch directly to the Jira issu
Why was it down? e.g. did it OOM? If so, the recommended approach is to
kill the process on OOM vs. leaving it in the cluster in a zombie
state. I had similar issues when my nodes OOM'd, which is why I ask. That
said, you can get the /clusterstate.json which contains Zk's status of
a node using a request lik
Exactly why is it difficult to deserialize? Seems simple enough.
-- Jack Krupansky
-Original Message-
From: Mysurf Mail
Sent: Monday, July 22, 2013 11:14 AM
To: solr-user@lucene.apache.org
Subject: deserializing highlighting json result
When I request a json result I get the follo
Hi,
Upgrading to Solr 4.2.1 works for my plugin but 4.3.1 does not work. I believe
the ClassCastException which I am getting in 4.3.1 is due to this bug in 4.3.1:
https://issues.apache.org/jira/browse/SOLR-4791
Thanks,
Niran
-Original Message-
From: Abeygunawardena, Niran [mailto:niran.
A couple of things I've learned along the way ...
I had a similar architecture where we used fairly low numbers for
auto-commits with openSearcher=false. This keeps the tlog to a
reasonable size. You'll need something on the client side to send in
the hard commit request to open a new searcher eve
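The setup described above can be sketched in solrconfig.xml like this (the interval is illustrative, not a recommendation):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit frequently to keep the transaction log small... -->
  <autoCommit>
    <maxTime>15000</maxTime>
    <!-- ...but don't open a new searcher on each hard commit -->
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
```

A periodic explicit commit from the client (or an autoSoftCommit section) then controls when changes actually become visible to searches.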
Sweet!
On Mon, Jul 22, 2013 at 10:54 AM, Yonik Seeley wrote:
> function queries to the rescue!
>
> q={!func}def(query($a),query($b),query($c))
> a=field1:value1
> b=field2:value2
> c=field3:value3
>
> "def" or default function returns the value of the first argument that
> matches. It's named d
Deepak,
I think your goal is to gain something in speed, but most likely the
function query will be slower than the query without score computation (the
filter query) - this stems from how the query is executed, but I
may, of course, be wrong. Would you mind sharing the measurements you make?
I know it because I actually want to replace GSA with Solr, which is much better
in the enterprise situation :)
Thanks for the reply anyway!
Best,
Scatman.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Regex-in-Stopword-xml-tp4079412p4079491.html
Sent from the Solr - Use
I've run into a problem recently that's difficult to debug and search for:
I have three nodes in a cluster and this weekend one of the nodes went
partially down. It no longer responds to distributed updates and it is
marked as GONE in the Cloud view of the admin screen. That's not ideal, but
there
Like Hoss said, you're going to have to solve this using
http://wiki.apache.org/solr/SpatialForTimeDurations
Using PointType is *not* going to work because your durations are
multi-valued per document.
It would be useful to create a custom field type that wraps the capability
outlined on the wiki
I saw something similar and used an absolute path to my JAR file in
solrconfig.xml vs. a relative path and it resolved the issue for me.
Not elegant but worth trying, at least to rule that out.
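In solrconfig.xml terms that workaround looks like the following (the path is a placeholder for wherever your plugin jar actually lives):

```xml
<!-- absolute path to the plugin jar instead of a relative one -->
<lib path="/opt/solr/lib/my-value-source-plugin.jar"/>
```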
Tim
On Mon, Jul 22, 2013 at 7:51 AM, Abeygunawardena, Niran
wrote:
> Hi,
>
> I'm trying to migrate to
Thanks Tim.
I copied my jar containing the plugin to the solr's lib directory as it wasn't
finding my jar due to a bug in 4.3:
https://issues.apache.org/jira/browse/SOLR-4791
but the ClassCastException remains. I'll try solr 4.2 and see if the plugin
works in that.
Cheers,
Niran
-Origin
Does it mean that I can easily load Solr configuration as parsed by Solr
from an external program?
Because the last time I tried (4.3.1), the list of required jars was
quite long, including the SolrJ jar due to some exception.
Regards,
Alex
Personal website: http://www.outerthoughts.com/
Linke
That would be great.
One step toward this goal is to stop treating the situation where there are
no collections or cores as an error condition. It took me a while to get
out of the mindset when bringing up a Solr install that I had to avoid that
scenario at all costs, because red text == bad.
The
I have added a date field to my index.
I don't want the query to search on this field, but I want it to be returned
with each row.
So I have defined it in the schema.xml as follows:
I added it to the select in data-config.xml and I see it selected in the
profiler.
Now, when I query all file
function queries to the rescue!
q={!func}def(query($a),query($b),query($c))
a=field1:value1
b=field2:value2
c=field3:value3
"def" or default function returns the value of the first argument that
matches. It's named default because it's more commonly used like
def(popularity,50) (return the valu
Could you please be more specific about the relevancy problem you are trying
to solve?
-- Jack Krupansky
-Original Message-
From: eShard
Sent: Monday, July 22, 2013 9:57 AM
To: solr-user@lucene.apache.org
Subject: how to improve (keyword) relevance?
Good morning,
I'm currently runnin
Hello,
The queryResultCache should not be affected by the order of the fq list.
Below are two queries with the same meaning, but case 2 can't
use the queryResultCache after case 1 is executed.
case1: q=*:*&fq=field1:value1&fq=field2:value2
case2: q=*:*&fq=field2:value2&fq=field1:value1
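Until that is addressed in Solr itself, one client-side workaround (a sketch, not a built-in Solr feature) is to sort the fq parameters before building the request, so logically identical queries always produce the same cache key:

```java
import java.util.*;

public class FqNormalizer {
    // Sort filter queries so that equivalent requests produce an
    // identical fq list, and therefore an identical queryResultCache key.
    static List<String> normalize(List<String> fqs) {
        List<String> sorted = new ArrayList<>(fqs);
        Collections.sort(sorted);
        return sorted;
    }

    public static void main(String[] args) {
        // Both orderings normalize to the same list.
        System.out.println(normalize(Arrays.asList("field1:value1", "field2:value2")));
        System.out.println(normalize(Arrays.asList("field2:value2", "field1:value1")));
        // prints [field1:value1, field2:value2] twice
    }
}
```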
How did you get the impression that GSA supports regex stop words? GSA seems
to follow the same rules as Solr.
See the doc:
http://www.google.com/support/enterprise/static/gsa/docs/admin/70/gsa_doc_set/admin_searchexp/ce_improving_search.html#1050255
As with GSA, the stop words are a simple .TX
On 7/22/2013 6:45 AM, Markus Jelsma wrote:
> You should increase your ZK timeout; this may be the issue in your case. You
> may also want to try the G1GC collector to keep STW pauses under the ZK timeout.
When I tried G1, the occasional stop-the-world GC actually got worse. I
tried G1 after trying CMS wi
Good morning,
I'm currently running Solr 4.0 final (multi core) with manifoldcf v1.3 dev
on tomcat 7.
Early on, I used copyField to put the metadata into the text field to
simplify Solr queries (i.e. I only have to query one field now.)
However, a lot of people are concerned about improving relevance
Hi,
I'm trying to migrate to Solr 4.3.1 from Solr 4.0.0. I have a Solr Plugin which
extends ValueSourceParser and it works under Solr 4.0.0 but it does not work
under Solr 4.3.1. I compiled the plugin using the solr-4.3.1*.jars and
lucene-4.3.1*.jars but I get the following stacktrace error whe
Great, thank you!
On Jul 22, 2013 1:35 PM, "Alan Woodward" wrote:
>
> Hi Robert,
>
> The upcoming 4.4 release should make this a bit easier (you can check out
the release branch now if you like, or wait a few days for the official
version). CoreContainer now takes a SolrResourceLoader and a Conf
Hi,
I'm trying to migrate to Solr 4.3.1 from Solr 4.0.0. I have a Solr Plugin which
extends ValueSourceParser and it works under Solr 4.0.0 but does not work under
Solr 4.3.1. I compiled the plugin using the latest solr-4.3.1*.jars and
lucene-4.3.1*.jars but I get the following stacktrace error
You should increase your ZK timeout; this may be the issue in your case. You
may also want to try the G1GC collector to keep STW pauses under the ZK timeout.
-Original message-
> From:Neil Prosser
> Sent: Monday 22nd July 2013 14:38
> To: solr-user@lucene.apache.org
> Subject: Re: Solr 4.3.1 - S
It is possible: https://issues.apache.org/jira/browse/SOLR-4260
I rarely see it and I cannot reliably reproduce it, but it just sometimes
happens. Nodes will not bring each other back in sync.
-Original message-
> From:Neil Prosser
> Sent: Monday 22nd July 2013 14:41
> To: solr-user@l
Sorry, I should also mention that these leader nodes which are marked as
down can actually still be queried locally with distrib=false with no
problems. Is it possible that they've somehow got themselves out-of-sync?
On 22 July 2013 13:37, Neil Prosser wrote:
> No need to apologise. It's always
No need to apologise. It's always good to have things like that reiterated
in case I've misunderstood along the way.
I have a feeling that it's related to garbage collection. I assume that if
the JVM heads into a stop-the-world GC Solr can't let ZooKeeper know it's
still alive and so gets marked a
Wow, you really shouldn't be having nodes go up and down so
frequently, that's a big red flag. That said, SolrCloud should be
pretty robust so this is something to pursue...
But even a 5 minute hard commit can lead to a hefty transaction
log under load, you may want to reduce it substantially depe
Thanks for the reply, but it's not the solution I'm looking for, and I should
have explained myself better, because I've got like a hundred regexes to put in the
config. In order to manage Solr most easily, I think the better way is to put the
regexes in a file... I know that GSA from Google does it, so I'd just hoped th
Hi Robert,
The upcoming 4.4 release should make this a bit easier (you can check out the
release branch now if you like, or wait a few days for the official version).
CoreContainer now takes a SolrResourceLoader and a ConfigSolr object as
constructor parameters, and you can create a ConfigSolr
Use the pattern replace filter factory
This will do exactly what you asked for
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.PatternReplaceFilterFactory
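For reference, the filter is declared inside a field type's analyzer chain in schema.xml; the pattern and replacement below are placeholders, not a recommendation:

```xml
<fieldType name="text_replaced" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- rewrite every match of the regex in each token; replace="all"
         applies the substitution to all occurrences, not just the first -->
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="([0-9]+)" replacement="" replace="all"/>
  </analyzer>
</fieldType>
```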
On Mon, Jul 22, 2013 at 12:22 PM, Scatman wrote:
> Hi,
>
> I was looking for an issue, in order to put some regular
Hi,
I use solr embedded in a desktop app and I want to change it to no
longer require the configuration for the container and core to be in
the filesystem but rather be distributed as part of a jar file.
Could someone kindly point me to the right docs?
So far my impression is, I need to instanti
Just found it.
Use {!ex=c key=ckey} ...
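Spelled out against the kind of request quoted below, a distinct key per exclusion lets the same field appear several times in the facet response (the key names are arbitrary):

```
&facet.field={!ex=b key=brand_wo_b}brand
&facet.field={!ex=c key=brand_wo_c}brand
&facet.field={!ex=b,c key=brand_wo_bc}brand
```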
On 07/22/2013 11:35 AM, Ralf Heyde wrote:
Hello,
I need different (multiple) facet exclusions for the same field. This
approach works:
http://server/core/select/?q=*:*
&fq={!tag=b}brand:adidas
&fq={!tag=c}color:red
&facet.field={!ex=b}brand
&facet.
Hello,
I need different (multiple) facet exclusions for the same field. This
approach works:
http://server/core/select/?q=*:*
&fq={!tag=b}brand:adidas
&fq={!tag=c}color:red
&facet.field={!ex=b}brand
&facet.field={!ex=c}brand
&facet.field={!ex=b,c}brand
&facet.field=brand
&facet=true&fac
Hi,
I was looking for a way to put some regular expressions in
StopWord.xml, but it seems that we can only have words in the file.
I'm just wondering if there is a feature planned along these lines, or
if someone has a tip it will help me a lot :)
Best,
Scatman.
--
View
Short answer: no, it makes no sense.
But after some thinking, it could make some sense, potentially.
DisjunctionSumScorer holds child scorers semi-ordered in a binary heap.
Hypothetically, inequality could be enforced in that heap, but the heap might not
work anymore for such an alignment. Hence, instead of
Shalin Shekhar Mangar wrote
> Your database's JDBC driver is interpreting the tinyint(1) as a boolean.
>
> Solr 4.4 fixes the problem affected date fields with convertType=true. It
> should be released by the end of this week.
>
>
> On Mon, Jul 22, 2013 at 12:18 PM, deniz <
> denizdurmus87@
>
Hi,
I'm using Solr 4.3.0, and the following is the response to a hit highlighting
request:
Request: http://localhost:8080/solr/collection2/select?q=content:ps4&hl=true
Response:
This post is regarding ps4 accuracy and quality
which is smooth and fantastic
This post is regarding ps4 accuracy a
Very true. I was impatient (I think less than three minutes impatient so
hopefully 4.4 will save me from myself) but I didn't realise it was doing
something rather than just hanging. Next time I have to restart a node I'll
just leave and go get a cup of coffee or something.
My configuration is set