Just a quick announcement and request for guidance:
I've developed an open source JavaScript client for Apache Solr. It's very
easy to implement and can be configured to provide faceted search over an
existing Solr index in just a few minutes. The source is available online
here:
https://bitbucke
Hi Shawn,
Here it is: https://issues.apache.org/jira/browse/SOLR-5851
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
On Tue, Mar 11, 2014 at 11:22 PM, Shawn Heisey wrote:
> On 3/11/2014 8:51 PM, Shawn Heisey wrote:
> > On
On 3/11/2014 8:51 PM, Shawn Heisey wrote:
> On 3/11/2014 8:07 PM, Otis Gospodnetic wrote:
>> Is there a way to disable cache *lookups* into caches that are disabled?
>>
>> Check this for example: https://apps.sematext.com/spm-reports/s/Z04bfIvGyH
>>
>> This is a Document cache that was enabled, and
On 3/11/2014 8:07 PM, Otis Gospodnetic wrote:
> Is there a way to disable cache *lookups* into caches that are disabled?
>
> Check this for example: https://apps.sematext.com/spm-reports/s/Z04bfIvGyH
>
> This is a Document cache that was enabled, and then got disabled. But the
> lookups are stil
Hi Ravi,
How about RemoveBlankFieldUpdateProcessorFactory ?
https://lucene.apache.org/solr/4_0_0/solr-core/org/apache/solr/update/processor/
Ahmet
On Tuesday, March 11, 2014 6:11 PM, "EXTERNAL Taminidi Ravi (ETI,
Automotive-Service-Solutions)" wrote:
Hi, Is there anyway Index/store value f
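Ahmet's suggestion could be wired into solrconfig.xml along these lines (the chain name is illustrative); the factory removes empty-string field values before the tfloat parser ever sees them:

```xml
<updateRequestProcessorChain name="remove-blanks">
  <!-- drop fields whose value is the empty string -->
  <processor class="solr.RemoveBlankFieldUpdateProcessorFactory"/>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

The chain then needs to be selected for the update handler, e.g. with an update.chain=remove-blanks request parameter or a matching default in the handler config.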
Hi,
Is there a way to disable cache *lookups* into caches that are disabled?
Check this for example: https://apps.sematext.com/spm-reports/s/Z04bfIvGyH
This is a Document cache that was enabled, and then got disabled. But the
lookups are still happening, which is pointless if the cache is disab
I would like to use $r and $org for access control. It has to allow the fq's
from my facets to work as well. I'm not sure if I'm doing it right or if I
should add it to a qf or to the q itself. debugQuery returns a parsed fq
string, and in it $r and $org are printed instead of their values. How
> Thank you Ahmet, Staszek and Tomnaso ;)
> so the only way to obtain offline Clustering is to move to a customisation
> !
> I will take a look to the interface of the API ( If you can give me a link
> to the class, it will be appreciated, If not I will find it by myself .
>
The API stub is
the or
Moving 4 versions ahead may require much additional testing on my side to
ensure our cluster performance is good and within our SLA. Moving to 4.4
(just 1 month after 4.3.1 was released) gives me the most important bug
fix for reloading collections (which does not work now and requires a
rolling re
Hi,
I seem to have the same issue here.
I'm running a very simple Solr server in standalone mode, using a DIH with
the following datasource in *dataconfig.xml* :
It works fine.
Then I stop the standalone server, empty the data directory, and start a
cluster with an embedded ZooKeeper, just by
First I have to ask why you're going to 4.4 rather than 4.7. I
understand vetting requirements, but I thought I'd ask. No use
going through this twice if you can avoid it.
On Tue, Mar 11, 2014 at 12:49 PM, Chris W wrote:
> I am running solrcloud version 4.3.0 with a 10 m1.xlarge nodes and us
Hi Sohan,
Given you have 15 days and this looks like a class project, I would suggest
going with John Berryman's approach - he also provides code which you can
just apply to your data. Even if you don't get the exact expansions you
desire, I think you will get results that will pleasantly surprise
Should the exact same query using fq={!collapse field=fld} return the same
results as group=true&group.field=fld?
I am getting different results for my facets on those queries when I have a
second fq=
This happens in both
4.6.0 1543363 - simon - 2013-11-19 11:16:33
and
4.8-2014-02-23_07-
Thank you Ahmet, Staszek and Tomnaso ;)
So the only way to obtain offline clustering is to move to a customisation!
I will take a look at the interface of the API (if you can give me a link
to the class it will be appreciated; if not, I will find it by myself).
Cheers
2014-03-10 18:48 GMT+00:0
We resolved this problem by changing the "Content-Type" we were providing.
Changing it to "application/x-www-form-urlencoded" resolved the issue.
Thanks for the help!
Lee
--
View this message in context:
http://lucene.472066.n3.nabble.com/Updated-to-v4-7-Getting-Search-requests-cannot-acce
On 3/11/2014 11:05 AM, abhishek jain wrote:
hi Shawn,
Thanks for the reply,
Is there a way to optimize RAM or does Solr does automatically. I have
multiple shards and i know i will be querying only 30% of shards most of
time! and i have 6 slaves. so dedicating more slave with 30% most used
shar
hi Shawn,
Thanks for the reply,
Is there a way to optimize RAM usage, or does Solr do it automatically? I have
multiple shards and I know I will be querying only 30% of the shards most of
the time, and I have 6 slaves, so dedicating more slaves to the 30% most-used
shards .
Another question:
Is it advised to serve
I am running SolrCloud version 4.3.0 with 10 m1.xlarge nodes, using
ZooKeeper to manage the state/data for collections and configs. I want to
upgrade to version 4.4.0.
When i deploy a 4.4 version of solrcloud in my test environment, none of
the collections/configs (created using the 4.3 version of
Hello All,
I am using Solr 3.6 and I want to add multiple facet.prefix values in a single query.
I searched the forums but could not find the appropriate way.
What I want to do is something like this:
facet.prefix=(A OR B)
Please let me know how I can achieve this.
Thanks,
Nikhil.
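A possible workaround, sketched with an illustrative field name: facet.query returns one count per prefix (this works on 3.x, but gives totals rather than term lists), while faceting the same field twice under different keys with per-field local params would give the term lists, though that override syntax was only added in later 4.x releases and likely won't work on 3.6:

```
q=*:*&facet=true
  &facet.query=myfield:A*
  &facet.query=myfield:B*

q=*:*&facet=true
  &facet.field={!key=a facet.prefix=A}myfield
  &facet.field={!key=b facet.prefix=B}myfield
```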
Hi, is there any way to index/store a value for a null field in the Price column?
My Price field is tfloat and the XML data file has an empty value for the field;
Solr takes this as a string and throws an error. Is there any quick fix for this?
--Ravi
On 3/11/2014 2:40 AM, rachun wrote:
> $q='macbook';
> $client = new SolrClient($config);
> $query = new SolrQuery();
> $query->setQuery($q);
> $query->addParam("shards.qt","/spell");
> $query->addParam("fl","product_name_th");
>
> $query_response = $client->query($query);
> $result = $query_respon
Solr 4.7
On 11 Mar 2014, at 16:43, Erick Erickson wrote:
> What version of Solr? There's been quite a bit of work
> between various 4x versions.
>
> Erick
>
> On Tue, Mar 11, 2014 at 11:25 AM, Oliver Schrenk
> wrote:
>> Hi,
>>
>> After an unsuccessful indexing on a Solr Cloud cluster wit
What version of Solr? There's been quite a bit of work
between various 4x versions.
Erick
On Tue, Mar 11, 2014 at 11:25 AM, Oliver Schrenk
wrote:
> Hi,
>
> After an unsuccessful indexing on a Solr Cloud cluster with four machines,
> where we experienced a lot of errors we are still trying to
On 3/11/2014 6:14 AM, abhishek.netj...@gmail.com wrote:
> Hi all,
> What should be the ideal RAM-to-index-size ratio?
>
> Please reply. I expect the index to be about 60 GB in size and I don't store contents.
Ideally, your total system RAM will be equal to the size of all your
program's heap requirements, p
Hi,
After an unsuccessful indexing run on a SolrCloud cluster with four machines,
where we experienced a lot of errors we are still trying to investigate, we
found the cluster to be in a weird state.
{"collection_v1":{
"shards":{
"shard1":{
"range":"8000-bf
It's used for failover and if you've got ZooKeeper running on a separate
machine(s) you need a way to tell Solr where to look.
Thanks,
Greg
On Mar 11, 2014, at 10:11 AM, Oliver Schrenk wrote:
> Hi,
>
> I was wondering why there is the need to full specify all zookeeper hosts
> when starting
Hi,
I was wondering why there is the need to full specify all zookeeper hosts when
starting up Solr. For example using
java -Djetty.port=7574
-DzkHost=localhost:2181,zkhost1:2181,zkhost2:2181,zkhost3:2181 -jar start.jar
Isn’t it enough to point to localhost:2181 and let the Zookeeper e
Hi,
expungeDeletes (default: false) is not done automatically through SolrJ.
Please see: https://issues.apache.org/jira/browse/SOLR-1487
During segment merges, deleted terms are purged. That's why the problem solved itself.
Ahmet
On Tuesday, March 11, 2014 4:07 PM, epnRui wrote:
Hi Ahmet,
I th
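For the archives: expungeDeletes can be requested explicitly as part of a commit, e.g. as an XML update message POSTed to the /update handler (a sketch; it merges away segments containing deletions rather than performing a full optimize):

```xml
<commit expungeDeletes="true"/>
```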
Hi;
I suggest you look at the source code. NGramTokenizer.java has some
explanations in its comments and it may help you.
Thanks;
Furkan KAMACI
2014-03-11 16:06 GMT+02:00 epnRui :
> Hi Ahmet,
>
> I think the expungesDelete is done automatically through SolrJ. So I don't
> think it was that.
> T
Thank you, Robert. You are right, I was confused between the two. I also
didn't know the "storeOffsetsWithPositions" existed. My code works as I
expected now.
On Mon, Mar 10, 2014 at 11:11 PM, Robert Muir wrote:
> Hello, I think you are confused between two different index
> structures, probabl
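For the archives, the two index structures in question can both be declared per field in schema.xml; a sketch with illustrative field and type names (storeOffsetsWithPositions puts offsets into the postings lists, while termVectors/termOffsets build the separate term-vector structure):

```xml
<!-- offsets stored alongside positions in the postings (Lucene/Solr 4.1+) -->
<field name="content" type="text_general" indexed="true" stored="true"
       storeOffsetsWithPositions="true"/>

<!-- the other structure: per-document term vectors with their own offsets -->
<field name="content_tv" type="text_general" indexed="true" stored="true"
       termVectors="true" termPositions="true" termOffsets="true"/>
```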
Hi Ahmet,
I think expungeDeletes is done automatically through SolrJ, so I don't
think it was that.
The problem solved itself, apparently. I wonder if it has to do with an
automatic optimization of Solr indexes?
Otherwise it was something similar to an XY problem :P
Thanks for the help!
--
Hi;
I suggest you start reading from here:
http://solr.pl/en/2011/04/04/indexing-files-like-doc-pdf-solr-and-tika-integration/
Thanks;
Furkan KAMACI
2014-03-11 14:44 GMT+02:00 vignesh :
> Dear Team,
>
>
>
>Am Vignesh , at present developing keyword search using
> Apache -So
It controls accuracy of non-point shapes. The more accurate you want it, the
more work Lucene must do to achieve it. For query shapes, the impact is not
much the last time I checked. For indexed shapes (again, non-point shapes
we’re talking about), however, it has an exponential curve trade-o
Hi Iorixxx!
I have not optimized the index but the day after this post I saw I didn't
have this problem anymore.
I will follow your advice next time!
Now I'm avoiding so much manipulation at indexing time and I'm doing more
work in the Java code on the client side.
If I had time I would imple
Great, that worked!
What does distErrPct actually control, besides the error percentage? Or,
better put, how does it impact performance?
steve
On Mon, Mar 10, 2014 at 11:17 PM, David Smiley (@MITRE.org) <
dsmi...@mitre.org> wrote:
> Correct, Steve. Alternatively you can also put this
Hi Erik,
you were right...
I had the "signatureField" bound to the "uid" in the solrconfig.xml, so the uid
was always the same.
Now I defined a new field for the "signatureField" and it works!
Before:
...
false
uid <-
The usual use of an ngram filter is at index time and not at query time.
What exactly are you trying to achieve by using ngram filtering at query
time as well as index time?
Generally, it is inappropriate to combine the word delimiter filter with the
standard tokenizer - the latter removes the
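The usual arrangement keeps the ngram filter on the index side only, so a whole query term matches the grams produced at index time; a sketch, with illustrative type name and gram sizes:

```xml
<fieldType name="text_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- grams only at index time -->
    <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="15"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```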
Add a copyField to your schema to copy the file name string field to a
tokenized text field. You can then query both the string field and the text
field.
-- Jack Krupansky
From: vignesh
Sent: Tuesday, March 11, 2014 8:44 AM
To: solr-user@lucene.apache.org
Subject: Apache Solr.
Dear Team,
I am Vignesh; at present I am developing a keyword search using
Apache Solr 3.6. I have indexed XML, I have around 1000 keywords, and using
Boolean operators (AND, OR, NOT) I have passed a query and got the required
results for the keyword searched.
Now I am trying to carry o
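Jack's copyField suggestion above might look like this in schema.xml (field names illustrative):

```xml
<!-- exact-match string field plus a tokenized copy for free-text queries -->
<field name="file_name" type="string" indexed="true" stored="true"/>
<field name="file_name_text" type="text_general" indexed="true" stored="false"/>
<copyField source="file_name" dest="file_name_text"/>
```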
Shouldn't the numbers be in the output below (parsed_filter_queries) and not
$r and $org?
This works great, but I would like to use local params "r" and "org" instead
of hard-coded values:
(*:* -organisations:[* TO *] -roles:[* TO
*]) (+organisations:(150 42) +roles:(174 72))
I wo
Dear Team,
I am Vignesh; at present I am developing a keyword search using
Apache Solr 3.6. I have indexed PDFs, I have around 1000 keywords, and using
Boolean operators (AND, OR, NOT) I have passed a query and got the required
results for the keyword searched.
Now I am trying to extract
Hi all,
What should be the ideal RAM-to-index-size ratio?
Please reply. I expect the index to be about 60 GB in size and I don't store contents.
Thanks
Abhishek
Original Message
From: abhishek.netj...@gmail.com
Sent: Monday, 10 March 2014 09:25
To: solr-user@lucene.apache.org
Cc: Erick Erickson
Subject
Hmmm, that looks OK to me. I'd log out
the id you assign for each document,
it's _possible_ that somehow you're
getting the same ID for all the files
except this line should be preventing that:
doc.addField("id", document);
Tail the Solr log while you're doing this and
see the update messages to
You can also google for Solr PostFilters,
which were originally written for ACL control.
Best,
Erick
On Tue, Mar 11, 2014 at 5:28 AM, Ahmet Arslan wrote:
> Hi,
>
> In the link has two custom classes : AccessControlQParserPlugin and
> AccessControlQuery. They can be used as an example to write
Here's a good explanation of replicationFactor:
http://wiki.apache.org/solr/SolrCloud
You don't want to define this statically, it's about
the number of nodes not _how_ they replicate.
This will explain the indexing process:
https://cwiki.apache.org/confluence/display/solr/Shards+and+Indexing+Data
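To the earlier replicationFactor question: it is supplied per collection at creation time through the Collections API rather than statically in solrconfig.xml. A sketch (host, collection, and config names illustrative):

```
http://localhost:8983/solr/admin/collections?action=CREATE
  &name=mycollection
  &numShards=5
  &replicationFactor=2
  &collection.configName=myconf
```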
In SolrCloud there are a couple of round trips
that _may_ be what you're seeing.
First, though, the QTime is the time spent
querying; it does NOT include assembling
the documents from disk for return etc., so
bear that in mind.
But here's the sequence as I understand it
from the receiving node
This works great, but I would like to use local params "r" and "org" instead of
hard-coded values:
(*:* -organisations:[* TO *] -roles:[* TO *])
(+organisations:(150 42) +roles:(174 72))
I would like
(*:* -organisations:[* TO *] -roles:[* TO *])
(+organisations:($org) +roles:($r))
Sho
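One way to get the substitution to happen (a sketch: Solr resolves $param references as local-param values, e.g. via {!query v=$...}, not inline inside a plain query string, which is why the literal $r and $org show up in the parsed fq) is to push the whole fq body into a referenced parameter:

```
fq={!query v=$acl}
&acl=(*:* -organisations:[* TO *] -roles:[* TO *]) (+organisations:(150 42) +roles:(174 72))
```

The organisation and role lists would then be passed per request inside the acl parameter rather than as separate r and org parameters.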
Hi to all,
I'm pretty new to Solr and Tika and I have a problem.
I have the following workflow in my (web)application:
* download a pdf file from an archive
* index the file
* delete the file
My problem is that after indexing the file, it remains locked and the
delete-part throw
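The lock is usually the input stream that was opened for indexing and never closed. A minimal sketch in plain Java (no Solr/Tika calls; the indexStream method stands in for the real extraction step) showing try-with-resources releasing the file handle before the delete:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class IndexThenDelete {
    // Stand-in for the real indexing call: just consumes the stream.
    static void indexStream(InputStream in) throws Exception {
        while (in.read() != -1) { /* consume bytes */ }
    }

    public static void main(String[] args) throws Exception {
        Path pdf = Files.createTempFile("doc", ".pdf");
        Files.write(pdf, new byte[] {1, 2, 3});

        // try-with-resources closes the stream even if indexing throws;
        // an unclosed stream is what keeps the file locked on Windows
        try (InputStream in = Files.newInputStream(pdf)) {
            indexStream(in);
        }

        Files.delete(pdf); // succeeds now that no handle is open
        System.out.println(Files.exists(pdf)); // prints "false"
    }
}
```

On Windows an open FileInputStream keeps the file locked; closing it before the delete (which try-with-resources guarantees) lets Files.delete succeed.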
I followed the example here
(http://searchhub.org/2012/02/14/indexing-with-solrj/) for indexing all the
pdfs in a directory. The process seems to work well, but at the end, when I go
in the Solr-UI and click on "Execute query"(with q=*:*), I get only one entry.
Do I miss something in my code?
It's a long video and I will definitely go through it, but it seems this is
not possible with Solr as it is?
I just thought it would be quite a common issue; I mean, generally for
search engines it's more important to show the first-page results, rather
than using timeAllowed, which might not even retu
Hi,
The linked page has two custom classes: AccessControlQParserPlugin and
AccessControlQuery. They can be used as an example to write
OnlineUsersQParserPlugin and OnlineUsersQuery. This Query implementation can
_only_ be used as an fq. They can be loaded as described here :
https://wiki.apache.o
I got it right the first time, and here is my request handler. The field
"plain_text" is searched correctly and has the same fieldtype as "title" ->
"text_de"
standard
Dear all gurus,
I'm having a problem trying to use the spell checker for my suggestions, and
I'm using the PHP Solr client. So I tried code like this:
===PHP===
$config = array
(
'hostname' => 'localhost',
'port' => '8983',
Hello,
I'm testing the new Solr 4.7 with SolrCloud and Solr replication.
I can't find any documentation on the replicationFactor parameter.
It seems it can be passed only via the API on creation of a new collection.
How does this parameter work?
Is there a way to specify it statically in solrconfig.xml?
Have you tried using Analysis section in the admin web interface?
You can just pick the type from drop down and feed your string to it.
It will show you (with debug enabled) exactly what happens at every
stage and which particular step in the chain might be causing
problems.
Regards,
Alex.
Per
Sorry, I looked at the wrong fieldtype.
-Original-Nachricht-
> Von: "Andreas Owen"
> An: solr-user@lucene.apache.org
> Datum: 11/03/2014 08:45
> Betreff: searches for single char tokens instead of from 3 upwards
>
> i have a field with the following type:
>
>
>
>
>
I have a field with the following type:
Shouldn't this make tokens from 3 to 15 in length and not from 1? Here is a
query report of 2 results:
Hi,
I've just set up a SolrCloud with Tomcat: 5 shards with one replica each
and 10 million docs in total (evenly distributed).
I've noticed the query response time is faster than with a single node,
but still not as fast as I expected.
After turning debugQuery on, I noticed the query time is d
Hello,
In our project we need to execute some big queries against Solr once a
day, with maybe more than 1000 results, in order to trigger a batch
process with the results. In the fl parameter we only put the ID
field, because we don't need the large text fields.
This is our scenario:
- O