Hi Raymond,
I keep trying to encode the '&', but when I look at the Solr log it shows
me '%26'.
I'm using urlencode (with SolrPHPClient) and it didn't work. What should I do?
Please advise.
Thank you very much,
Rachun
--
View this message in context:
http://lucene.472066.n3.nabble.com/Query-by-range-of-price
Facts:
OS: Windows Server 2008
4 CPUs
8 GB RAM
Tomcat service version 7.0 (64-bit)
Only running Solr
Optional JVM parameters set: Xmx = 3072, Xms = 1024
Solr version 4.5.0
One core instance (both for querying and indexing)
*Schema config:*
minGramSize="2" maxGramSize="20"
most of the fields are
That's exactly what I would expect from url-encoding '&'. So, the thing
that you're doing works as it should, but you're probably doing something
that you should not do (in this case, urlencode).
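As a quick illustration (a minimal Python sketch; PHP's urlencode produces the same result for this character):

```python
from urllib.parse import quote, unquote

# '&' URL-encodes to '%26'; seeing '%26' in the Solr log therefore means
# the encoding step itself worked.
print(quote('&'))      # prints %26
print(unquote('%26'))  # prints &
```

The usual fix is to stop encoding manually and let the client library encode the parameters once.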
I have not used SolrPHPClient myself, but from the example at
http://code.google.com/p/solr-php-client
Followup: I *think* something like this should work:
$results = $solr->search($query, $start, $rows,
    array('sort' => 'price_min asc,update_date desc',
          'facet.query' => 'price_min:[* TO 1300]'));
On Mon, Jan 20, 2014 at 11:05 AM, Raymond Wiker wrote:
> That's exactly what I would expect from ur
Hi folks, have any of you successfully implemented LSH (MinHash) in
Solr? If so, could you share some details of how you went about it?
I know LSH is available in Mahout, but I was hoping someone has a
Solr or Lucene implementation.
Thanks
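I haven't seen a stock Solr/Lucene MinHash component for that version either, but the core signature computation is compact. Here is a minimal, standalone Python sketch (the function names and parameters are made up for illustration; this is not Solr code):

```python
import random
import zlib

def minhash_signature(tokens, num_hashes=64, seed=7):
    """Return a MinHash signature (list of ints) for a set of string tokens."""
    rng = random.Random(seed)
    p = (1 << 61) - 1  # large prime for the universal hash family (a*x + b) mod p
    coeffs = [(rng.randrange(1, p), rng.randrange(0, p))
              for _ in range(num_hashes)]
    # zlib.crc32 gives a stable token hash across runs (unlike built-in hash()).
    hashed = [zlib.crc32(t.encode()) for t in set(tokens)]
    return [min((a * x + b) % p for x in hashed) for a, b in coeffs]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching positions estimates the Jaccard similarity."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

# Two sets sharing 2 of 4 distinct tokens: true Jaccard = 0.5.
sig_a = minhash_signature(['red', 'green', 'blue'])
sig_b = minhash_signature(['red', 'green', 'yellow'])
print(estimated_jaccard(sig_a, sig_b))  # close to 0.5, up to sampling error
```

The signature values could then be indexed as a multivalued Solr field, with candidate pairs retrieved by matching signature positions (the usual LSH banding trick).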
The high memory consumption you see is likely a consequence of the fact that
some heap memory is only released after a full GC. With the VisualVM tool you
can try to force a full GC and see whether the memory is released.
/yago
—
/Yago Riveiro
On Mon, Jan 20, 2014 at 10:03 AM, onetwothre
Another thing: Solr makes heavy use of the OS cache to cache the index and gain
performance. This can be another reason why the Solr process shows a high
allocated-memory value.
/yago
—
/Yago Riveiro
On Mon, Jan 20, 2014 at 10:03 AM, onetwothree
wrote:
> Facts:
> OS Windows server 2008
> 4 Cpu
> 8 GB Ram
Hi,
I have a query on Multi-Lingual Analyser.
Which one of the approaches below is the best?
1. Develop a translator that translates any language to English and then
use the standard English analyzer, applying the translator both at index
time and at search time?
2.
Hi guys, following this thread I have some questions:
1) Regarding LUCENE-5350, what is the quoted context? Is the context a
filter query?
2) regarding https://issues.apache.org/jira/browse/SOLR-5378, do we have
the final documentation available ?
Cheers
2014/1/16 Hamish Campbell
> Than
Thank you very much, Mr. Raymond.
You just saved my world ;)
The *sort by conditions* part worked,
but facet.query=price_min:[* TO 1300] is not working yet. I will try to
google for the right solution.
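One thing worth checking (an assumption about the cause, not confirmed in the thread): facet.query has no effect unless faceting is enabled with facet=true, and the brackets and spaces in the range must be URL-encoded when sent as a raw query string. A minimal Python sketch of correctly encoded parameters:

```python
from urllib.parse import urlencode

# facet.query is ignored unless faceting is enabled with facet=true.
params = {
    'q': '*:*',
    'facet': 'true',
    'facet.query': 'price_min:[* TO 1300]',
}
print(urlencode(params))  # ':', '[', '*', ']' and spaces get percent-encoded
```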
Million thanks _/|\_
Rachun.
--
View this message in context:
http://lucene.472066.n3.nabble.
Hi,
I had the same problem.
In my case the error was a copy/paste typo in my solr.xml:
"${genericCoreNodeNames:true}"
!^! Ouch!
With the type 'bool' instead of 'str' it definitely works better. ;-)
Uwe
On 28.11.2013 08:53, lansing wrote:
Thank you for your replies,
I a
On Mon, 2014-01-20 at 11:02 +0100, onetwothree wrote:
> Optional JVM parameters set xmx = 3072, xms = 1024
> directoryFactory: MMapDirectory
[...]
> So it seems that filesystem buffers are consuming all the leftover memory??,
> and don't release memory, even after a quite amount of time?
As long
Well it is hard to get a specific anchor because there is usually more than
one. The content of the anchors field should be correct. What would you expect
if there are multiple anchors?
-Original message-
> From:Teague James
> Sent: Friday 17th January 2014 18:13
> To: solr-user@lucen
Quoting Mikhail Khludnev:
On Sat, Jan 18, 2014 at 11:25 PM, wrote:
So, my question now: can I change my existing index by just adding an
is_parent and a _root_ field and saving the journal id there like I did
with j-id, or do I have to reindex all my documents?
Absolutely, to use block-j
It Depends (tm). Approach (2) will give you better, more specific
search results. (1) is simpler to implement and might be "good
enough"...
On Mon, Jan 20, 2014 at 5:21 AM, David Philip
wrote:
> Hi,
>
>
>
> I have a query on Multi-Lingual Analyser.
>
>
> Which one of the below is the best a
Hi Solr Users,
Drew Farris, Tom Morton and I are currently working on the 2nd Edition of
Taming Text (http://www.manning.com/ingersoll for first ed.) and are soliciting
interested parties who would be willing to contribute to a chapter on practical
use cases (i.e. you have something in producti
Hi!
I need a little help from you.
We have complex documents stored in a database, and we render them on the page
from the database. We index them in Solr but do not store them, so we can't use
the Solr Highlighter. But we would still like to highlight the search words
found in the document. What approach would yo
On Mon, Jan 20, 2014 at 6:11 PM, wrote:
>
> Quoting Mikhail Khludnev:
>
> On Sat, Jan 18, 2014 at 11:25 PM, wrote:
>>
>> So, my question now: can I change my existing index in just adding a
>>> is_parent and a _root_ field and saving the journal id there like I did
>>> with j-id or do I hav
On 1/20/2014 3:02 AM, onetwothree wrote:
OS Windows server 2008
4 Cpu
8 GB Ram
We're using a .Net Service (based on Solr.Net) for updating and inserting
documents on a single Solr core instance. The size of the documents sent to Solr
varies from 1 KB up to 8 MB; we're sending the documents in batc
Hello!
I've installed a classical two shards Solr 4.5 topology without SolrCloud
balancing with an HA proxy. I've got a *copyField* like this:
[copyField definition stripped by the archive]
Copied from this one:
[field definition stripped by the archive]
When faceting on the *tagValues* field I get a total count of 3:
We are testing our shiny new Solr Cloud architecture but we are
experiencing some issues when doing bulk indexing.
We have 5 solr cloud machines running and 3 indexing machines (separate
from the cloud servers). The indexing machines pull off ids from a queue
then they index and ship over a docume
Hi Luis,
Do you have deletions? What happens when you expunge Deletes?
http://wiki.apache.org/solr/UpdateXmlMessages#Optional_attributes_for_.22commit.22
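For reference, the expungeDeletes option can be sent as part of an update XML commit message (per the wiki page above):

```xml
<commit expungeDeletes="true"/>
```

This merges away deleted documents without requiring a full optimize.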
Ahmet
On Monday, January 20, 2014 10:08 PM, Luis Cappa Banda
wrote:
Hello!
I've installed a classical two shards Solr 4.5 topology witho
Questions: How often do you commit your updates? What is your
indexing rate in docs/second?
In a SolrCloud setup, you should be using a CloudSolrServer. If the
server is having trouble keeping up with updates, switching to CUSS
probably wouldn't help.
So I suspect there's something not optimal ab
We have a soft commit every 5 seconds and a hard commit every 30. As
far as docs/second, I would guess around 200/sec, which doesn't seem that
high.
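Those intervals would typically be configured in solrconfig.xml along these lines (a sketch assuming stock updateHandler settings, not the poster's actual config):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit every 30 s; openSearcher=false keeps it cheap. -->
  <autoCommit>
    <maxTime>30000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commit every 5 s makes updates visible to searchers. -->
  <autoSoftCommit>
    <maxTime>5000</maxTime>
  </autoSoftCommit>
</updateHandler>
```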
On Mon, Jan 20, 2014 at 2:26 PM, Erick Erickson wrote:
> Questions: How often do you commit your updates? What is your
> indexing rate in docs/
We also noticed that disk IO shoots up to 100% on 1 of the nodes. Do all
updates get sent to one machine or something?
On Mon, Jan 20, 2014 at 2:42 PM, Software Dev wrote:
> We commit have a soft commit every 5 seconds and hard commit every 30. As
> far as docs/second it would guess around 200/s
What version are you running?
- Mark
On Jan 20, 2014, at 5:43 PM, Software Dev wrote:
> We also noticed that disk IO shoots up to 100% on 1 of the nodes. Do all
> updates get sent to one machine or something?
>
>
> On Mon, Jan 20, 2014 at 2:42 PM, Software Dev
> wrote:
>
>> We commit have a
4.6.0
On Mon, Jan 20, 2014 at 2:47 PM, Mark Miller wrote:
> What version are you running?
>
> - Mark
>
> On Jan 20, 2014, at 5:43 PM, Software Dev
> wrote:
>
> > We also noticed that disk IO shoots up to 100% on 1 of the nodes. Do all
> > updates get sent to one machine or something?
> >
> >
>
Machine translation (MT) is not nearly good enough to allow approach 1 to work.
On Mon, Jan 20, 2014 at 9:25 AM, Erick Erickson wrote:
> It Depends (tm). Approach (2) will give you better, more specific
> search results. (1) is simpler to implement and might be "good
> enough"...
>
>
>
> On Mon, Jan 20, 2014 at 5:21 A
All,
I know that normally the index should be optimized on the master and then
replicated to the slaves, but we have an issue with network bandwidth.
We optimize indexes weekly (total size is around 1.5 TB). We have a few slaves
set up on the local network, so replicating the whole index is not a big
issue.
Ho
Thanks for the reply, dropbox image added.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Memory-Usage-on-Windows-Os-while-indexing-tp4112262p4112403.html
Sent from the Solr - User mailing list archive at Nabble.com.