I solved the problem by increasing the RAM to 2GB.
Thanks a lot.
On Tue, May 6, 2014 at 11:32 AM, Sohan Kalsariya
wrote:
> Thanks a lot Shawn for the help!
> We have given a dedicated server to Solr, and the RAM size is 650 MB.
> This didn't happen when I was doing it locally.
> I have seen t
David,
I made a note about your mention of the deprecation below so we can take it
into account in our software, but when I tried to find out more
about this I ran into some confusion, since the Solr documentation
regarding spatial searches is currently quite badly scattered and partly
obsolete [
Thank you very much Ahmet for your help.
It finally worked!
For anyone interested, all your hints were more than useful. I basically
had two problems:
- Didn't have my language detection chain in the update/json requestHandler
- Didn't create the field where the detected language should be stored
When I went through the debug results I found this. Can someone explain to me
what the + and | signs mean?
+(
+DisjunctionMaxQuery(
(
Exact_Field1:"samplestring1"^0.6 |
Exact_Field2:samplestring1^0.5 |
Field1:samplestring1^0.9 |
Field2:samplestring1
)
)
+Disju
When I went through the debug results I found this. Can you explain to me
what the + and | signs mean?
+(
+DisjunctionMaxQuery(
(
Exact_Field1:"samplestring1"^0.6 |
Exact_Field2:samplestring1^0.5 |
Field1:samplestring1^0.9 |
Field2:samplestring1
)
)
+DisjunctionMa
Thanks a lot
and thanks for pointing me to the video; I had missed it.
Matteo
On 05 May 2014, at 20:44, Chris Hostetter wrote:
> : Hi everybody
> : can anyone give me a suitable interpretation for cat_rank in
> : http://people.apache.org/~hossman/ac2012eu/ slide 15
>
>
Hi Ahmet, Thanks a lot for the help. I upgraded my code to 4.8.0.
Sanjeev
From: Ahmet Arslan
Sent: Fri, 02 May 2014 20:54:14
To: "solr-user@lucene.apache.org", "sanje...@rediff.co.in"
Subject: Re: Displaying ExternalFileField
Hi everybody,
I'm having trouble with the function query
"query(subquery, default)"
http://wiki.apache.org/solr/FunctionQuery#query
running this
http://localhost:8983/solr/select?q=query($qq,1)&qq={!dismax qf=text}hard drive
on collection1 gives me no results
but
Hi,
'query' is a function returning a number.
You can't use it as a query.
Add 'debugQuery=true' to your request and you'll see how your query is
parsed (cf parsedquery)
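For what it's worth, a sketch of one way to make Solr treat the function's value as the score (assuming the same collection and $qq parameter as in the original request; the func parser is standard Solr, but this exact URL is illustrative, not from the thread):

```
http://localhost:8983/solr/select?q={!func}query($qq,1)&qq={!dismax qf=text}hard drive
```

Here the explicit {!func} tells Solr to parse q as a function query instead of handing it to the default parser.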
Franck Brisbart
On Tuesday, 6 May 2014 at 11:08 +0200, Matteo Grolla wrote:
> Hi everybody,
> I'm having troubles wit
Thanks Erick for the explanation.
I'll set my autocommit max time to 30 seconds then.
But I can leave the soft commit max time at a quarter of an hour, since it's an
ads platform which needs to be updated regularly.
2014-05-05 21:14 GMT+01:00 Erick Erickson :
> Take a look through the article I linked, 5 minutes
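For reference, the plan above (30-second hard commits that don't open a searcher, 15-minute soft commits for visibility) would look roughly like this in solrconfig.xml; a sketch of the standard settings, not the exact config from this thread:

```xml
<!-- hard commit: flush to disk every 30 s, without opening a new searcher -->
<autoCommit>
  <maxTime>30000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- soft commit: make newly indexed documents visible every 15 minutes -->
<autoSoftCommit>
  <maxTime>900000</maxTime>
</autoSoftCommit>
```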
my pleasure!
2014-05-06 16:43 GMT+08:00 Victor Pascual [via Lucene] <
ml-node+s472066n413488...@n3.nabble.com>:
> Thank you very much Ahmet for your help.
> It finally worked!
>
> For anyone interested, all your hints were more than useful. I basically
> had two problems:
> - Didn't have my lan
Hi All,
I have set up cloud-4.6.2 with the default configuration on a single machine with
2 shards and 2 replicas through
https://cwiki.apache.org/confluence/display/solr/Getting+Started+with+SolrCloud
The cloud was up and running, and I indexed the example data XML into it; that
went fine.
Now when I am quer
Also, be aware that there are a lot of PDF files whose text is the
result of a low-accuracy OCR scan of the page images in the PDF file.
A high-accuracy OCR scan is rather expensive. You can usually tell if you have
a "scanned" PDF by zooming way in - a PDF file generated directly from a
Hello,
I have a dynamic field '*_title_s' where '*' is replaced by a language
code when indexing.
Hence I get en_title_s, cn_title_s, etc.
I can find the complete list of generated fields in the admin UI with
the Schema Browser and a URL like :
http://localhost/solr/#/core/schema-browser?
The "+" symbol marks a clause of a boolean query that "must" be present, as
opposed to "should" (optionally) be present. This is equivalent to the "AND"
operator.
The "|" means "OR" for a disjunction maximum query, indicating the
alternatives; at least one of the alternatives must match.
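To make the scoring concrete, here is a small standalone sketch (not Solr source code; the clause scores are made-up numbers) of how a disjunction-max query combines per-field scores: the best clause score wins, and the other matching clauses contribute only a "tie" fraction of their scores.

```java
import java.util.Arrays;

public class DisMaxSketch {
    // Disjunction-max scoring: take the maximum clause score, then add a
    // tiebreak fraction of the remaining clause scores.
    static double disMaxScore(double[] clauseScores, double tie) {
        double max = Arrays.stream(clauseScores).max().orElse(0.0);
        double sum = Arrays.stream(clauseScores).sum();
        return max + tie * (sum - max);
    }

    public static void main(String[] args) {
        // Hypothetical scores: Field1 matched at 0.9, Field2 at 0.4
        double[] scores = {0.9, 0.4};
        System.out.println(disMaxScore(scores, 0.0)); // with tie = 0, only the max counts
        System.out.println(disMaxScore(scores, 0.1)); // max plus a tenth of the rest, ~0.94
    }
}
```

With the default tie of 0.0, a document's score for such a query is simply its single best field score, which is why only the winning "|" alternative matters for ranking.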
-
Hi Raphaël,
Yes it is possible, https://wiki.apache.org/solr/LukeRequestHandler
make sure to use numTerms=0 for performance reasons.
Ahmet
On Tuesday, May 6, 2014 2:19 PM, Raphaël Tournoy
wrote:
Hello,
I have a dynamic field '*_title_s' where '*' is replaced by a language
code when indexi
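Concretely, the Luke request for listing the actual indexed fields looks like this (core name and host are placeholders; numTerms=0 skips the per-field top-terms computation, which is the expensive part):

```
http://localhost:8983/solr/core/admin/luke?numTerms=0&wt=json
```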
Thank you, this is what I was looking for all this time.
I wanted to understand how the query that I passed is being evaluated by Solr.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Help-to-Understand-a-Solr-Query-tp4134686p4134904.html
Sent from the Solr - User mailing list
Your previous mail was not sent to the mailing list, so I am forwarding it.
-- Forwarded message --
From: Vineet Mishra
Date: 2014-05-06 14:33 GMT+03:00
Subject: Re: Indexing Big Data With or Without Solr
To: Furkan KAMACI
Hi Furkan,
No not the metadata but I am planning to store sensor da
Hi guys,
I've used the MapReduceIndexerTool [1] to import data into Solr
and seem to have stumbled upon something. I followed the tutorial [2] and
managed to import data into a SolrCloud cluster using the map reduce job.
I ran the job a second time in order to update some of the existing
do
Think of debugQuery as your "Solr BFF"!
-- Jack Krupansky
-Original Message-
From: nativecoder
Sent: Tuesday, May 6, 2014 7:36 AM
To: solr-user@lucene.apache.org
Subject: Re: Help to Understand a Solr Query
Thank you this is what I was looking for all this time
I wanted to understand
Hi Ahmet,
thank you very much; LukeRequestHandler is really powerful and it's
exactly what I need.
Raphaël
On 06/05/2014 13:32, Ahmet Arslan wrote:
Hi Raphaël,
Yes it is possible, https://wiki.apache.org/solr/LukeRequestHandler
make sure to use numTerms=0 for performance reasons.
Ahmet
Hi Erick,
thanks for your help. After some checks, it appears that the sort fields (the
alphaSortOnly field) aren't fed on 'some' servers. On the leader, the sort
order is good and the sorted terms seem OK. But, on the server with the issue,
/terms returns empty nodes (no data stored, I guess). After
Yes, this is a known issue. Repeatedly running the MapReduceIndexerTool on the
same set of input files can result in duplicate entries in the Solr collection.
This occurs because currently the tool can only insert documents and cannot
update or delete existing Solr documents.
Wolfgang.
On May
Hi,
I have a setup of two shard with embedded zookeeper and one collection on
two tomcat instances. I cannot use uniqueKey i.e the compositeId routing
for document routing as per my understanding it will change the uniqueKey.
There is another way mentioned on Solr wiki is by using "router.field".
Thanks, Wolfgang! Appreciate your support.
Is there any plan to make it possible to update/delete existing SOLR docs
using the MapReduceIndexerTool? Is such a thing even possible given the way
it works behind the curtains?
Costi
On Tue, May 6, 2014 at 3:58 PM, Wolfgang Hoschek wrote:
> Yes, th
Thank you very much for your responses.
Jack, even if I were to tweak the boost factor it might not work in all
cases. So I was looking at a more generic way via Function Queries to
achieve my goal.
Ahmet, I did see Jan Høydahl's response on all terms boosting as follows-
q=a
fox&defType=dismax&
I have this query / URL
http://example.com:8983/solr/collection1/clustering?q=%28title:%22+Atlantis%22+~100+OR+content:%22+Atlantis%22+~100%29&rows=3001&carrot.snippet=content&carrot.title=title&wt=xml&indent=true&sort=date+DESC&;
With that, I get the results and also the clustering of those resu
I experimented locally with modifying the SolrCore code to not overwrite the
highlight component in the components map (essentially, leaving the components
as configured in the solrconfig.xml). This seemed to work - my search request
handler used the PostingsHighlighter to generate snippets wit
copyField should be working fine on all servers. What it sounds like
to me is that somehow your schema.xml file was different on one
machine. Now, this shouldn't be happening if you follow the practice
of altering your schema, pushing to ZooKeeper, _and_ restarting or
reloading your Solr nodes.
So
Hi,
if you can create a function query that will assign a constant score of, let's
say, 100, then you can sort on multiple criteria: sort=score desc, recency_date desc
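A minimal sketch of that idea (the field name recency_date is taken from the suggestion above; the rest is illustrative, not from the thread): the func parser gives every matching document the same constant score, so the secondary sort key decides the order among them.

```
q={!func}100&sort=score desc,recency_date desc
```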
On Tuesday, May 6, 2014 5:51 PM, Ravi Solr wrote:
Thank you very much for your responses.
Jack, even if I were to tweak the boos
Hello
I am migrating an application to SolrCloud and I have to deal with a big
dictionary, about 10 MB.
It seems that I can't upload it to ZooKeeper; is there a way of specifying
an external file for the synonyms parameter?
Can I compress the file, or split it into many small files?
I have the same p
Set rows to zero?
Exploit the facets as "clusters" ?
paul
On 6 May 2014 at 16:42, Sebastián Ramírez wrote:
> I have this query / URL
>
> http://example.com:8983/solr/collection1/clustering?q=%28title:%22+Atlantis%22+~100+OR+content:%22+Atlantis%22+~100%29&rows=3001&carrot.snippet=content&c
On Mon, May 5, 2014 at 6:18 PM, Romain wrote:
> Hi,
>
> I am trying to plot a non-date field by time in order to draw a histogram
> showing its evolution during the week.
>
> For example, if I have a tweet index:
>
> Tweet:
> date
> retweetCount
>
> 3 tweets indexed:
> Tweet | Date | Retweet
Hi Giovanni,
I had the same issue just last week! I worked around it temporarily by
segmenting the file into < 1 MB files, and then using a comma-delimited list of
files in the filter specification in the schema.
There is a known issue around this:
https://issues.apache.org/jira/browse/SOLR-4
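The comma-delimited workaround looks roughly like this in the field type's analyzer chain (the file names are placeholders; each file stays under ZooKeeper's default 1 MB znode limit):

```xml
<filter class="solr.SynonymFilterFactory"
        synonyms="synonyms-part1.txt,synonyms-part2.txt,synonyms-part3.txt"
        ignoreCase="true" expand="true"/>
```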
Thanks for your help. I don't know why but it's now working. It's probably
related to a schema update without core reload like you said. I will double
check next time we change the schema.
Thank you
From: Erick Erickson [erickerick...@gmail.com]
Sent
: 'query' is a function returning a number.
: You can't use it as a query.
Well ... you can, you just have to use the correct query parser.
Since there is nothing to make it clear to Solr that you want it to parse
the "q" parameter as a function, it's using the default parser, and
probably ser
Hi Era,
I appreciate that the scattered documentation is confusing for users. The use
of spatial for time durations is definitely not an official way to do it;
it’s clearly a hack/trick — one that works pretty well if you know the
issues to watch out for. So I don’t see it getting documented on the
r
On Tue, May 6, 2014 at 5:08 AM, Matteo Grolla wrote:
> Hi everybody,
> I'm having troubles with the function query
>
> "query(subquery, default)"
> http://wiki.apache.org/solr/FunctionQuery#query
>
> running this
>
> http://localhost:8983/solr/select?q=query($qq,1)&qq
All,
We saw one issue with the .fnm file.
It looks like the .fnm file size will not be reduced after optimize.
For example: we have 1000 documents, and they have fields 1 to 1000,
and the .fnm file size is 10K.
After deleting 999 documents and keeping just one document, which has only 2
fields, and after running optimize, the .fnm file still has
I checked the implementation.
In SegmentMerger.mergeFieldInfos:
public void mergeFieldInfos() throws IOException {
  for (AtomicReader reader : mergeState.readers) {
    FieldInfos readerFieldInfos = reader.getFieldInfos();
    for (FieldInfo fi : readerFieldInfos) {
      fieldInfosBuilder.ad
Hi Sebastián,
Looking quickly through the code of the clustering component, there's
currently no way to output only clusters. Let me see if this can be easily
implemented.
Stanislaw
--
Stanislaw Osinski, stanislaw.osin...@carrotsearch.com
http://carrotsearch.com
On Tue, May 6, 2014 at 6:48 PM,
This looks nice!
The only missing piece for more interactivity would be to be able to map
multiple field values into the same bucket.
e.g.
http://localhost:8983/solr/query?
q=*:*
&facet=true
&facet.field=*round(date, '15MINUTES')*
&facet.stat=sum(retweetCount)
This is a bit similar
On Tue, May 6, 2014 at 5:30 PM, Romain Rigaux wrote:
> This looks nice!
>
> The only missing piece for more interactivity would be to be able to map
> multiple field values into the same bucket.
>
> e.g.
>
> http://localhost:8983/solr/query?
>q=*:*
>&facet=true
>&facet.field=*round(dat
How big is the fnm file? While you may be technically correct, I'm not
sure it would be worth the effort; I rather expect this file to be
quite small.
Are you seeing a performance issue or is this more in the theoretical realm?
Best,
Erick
On Tue, May 6, 2014 at 1:23 PM, googoo wrote:
> I check
I'm new to Solr, so forgive me if this is a silly question. Although I can
find some related information (in this list and elsewhere), I can't seem to
find a clear answer to my specific question:
If I have a DTD or XSD that describes the structure of a set of XML
documents that I have, is there s
Hello, I'm struggling to retrieve some data from my localhost Solr from an
Android application, but I'm still getting the same error.
/05-06 18:22:09.036: E/AndroidRuntime(1628): java.lang.NoSuchMethodError:
org.apache.http.conn.scheme.Scheme.
05-06 18:22:09.036: E/AndroidRuntime(1628): at
org.
I'm pretty sure there's nothing to automate that task, but there are
some tools to help with indexing XML. Lux (http://luxdb.org) is one; it
can index all the element text and attribute values, effectively
creating an index for each tag name -- these are not specifically
Solr/Lucene fields, bu
I don't know what the design was, but your use case seems valid to me: I
think you should submit a ticket and a patch. If you write a test, I
suppose it might be more likely to get accepted.
-Mike
On 5/6/2014 10:59 AM, Cario, Elaine wrote:
I experimented locally with modifying the SolrCore c
On 5/6/2014 4:32 PM, blach wrote:
> Hello, I'm struggling to retrieve some data from my localhost Solr from an
> Android application, but I'm still getting the same error.
>
> /05-06 18:22:09.036: E/AndroidRuntime(1628): java.lang.NoSuchMethodError:
> org.apache.http.conn.scheme.Scheme.
> 05-06 18:2
For our setup, the file size is 123 MB. Internally it has 2.6M fields.
The problem is the facet operation; faceting takes a while.
We are stuck in the call stack below for 11 seconds.
java.util.HashMap.transfer(Unknown Source)
java.util.HashMap.resize(Unknown Source)
java.util.HashMap.addEntry(Unknown S
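As an aside on the HashMap.resize/transfer frames in that stack: this is not Solr's code, just a generic illustration that a map grown from the default capacity to millions of entries is rehashed repeatedly along the way, whereas pre-sizing it avoids the transfer step entirely.

```java
import java.util.HashMap;
import java.util.Map;

public class PresizeSketch {
    // Build a map of n entries, either pre-sized (no rehashing while filling)
    // or grown from the default capacity (rehashed each time the threshold is hit).
    static Map<Integer, String> build(int n, boolean presize) {
        Map<Integer, String> m = presize
                ? new HashMap<>((int) (n / 0.75f) + 1) // room for n entries at the default load factor
                : new HashMap<>();
        for (int i = 0; i < n; i++) {
            m.put(i, "field" + i);
        }
        return m;
    }

    public static void main(String[] args) {
        // Both variants end up with the same contents; only the time spent
        // in resize/transfer differs.
        System.out.println(build(100_000, true).size());
    }
}
```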
Hello, and thanks Shawn,
How can I make sure that my jar is in the classpath at runtime?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solrj-problem-tp4135030p4135038.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hello everybody,
Solr 4.3.1 (and 4.7.1): Num Docs + Deleted Docs exceeds
2147483647 (Integer.MAX_VALUE):
Caused by: java.lang.IllegalArgumentException: Too many documents,
composite IndexReaders cannot exceed 2147483647
It seems to be an issue similar to the one in this unresolved e-mail:
http://mail-archives.apa
This is super nice, I tried (even without subfacets) and it works! Thanks a
lot!
Romain
facet=true&facet.range=price&facet.range.start=0&facet.range.end=1000&facet.range.gap=100&facet.stat=avg(popularity)
"facets": { "price": { "buckets": [ { "val": "0.0", "avg(popularity)":
3.5714285714285716 }