I solved it by adding a loop for years and one for quarters in which I count
the month facets
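In case it helps anyone, a minimal SolrJ sketch of that year/quarter loop. The
field name (date_field) and year range are invented for illustration:

import java.util.Locale;
import org.apache.solr.client.solrj.SolrQuery;

public class QuarterFacets {
    public static SolrQuery build() {
        SolrQuery q = new SolrQuery("*:*");
        q.setRows(0);
        q.setFacet(true);
        for (int year = 2010; year <= 2013; year++) {
            for (int quarter = 0; quarter < 4; quarter++) {
                String start = String.format(Locale.ROOT,
                        "%d-%02d-01T00:00:00Z", year, quarter * 3 + 1);
                // One count per quarter; the [..} mixed bounds need Solr 4.x
                q.addFacetQuery("date_field:[" + start + " TO " + start + "+3MONTHS}");
            }
        }
        return q;
    }
}

Each facet.query counts one quarter; a facet.range with a +1MONTH gap over the
same window gives the per-month counts inside it.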
-Original Message-
From: Andreas Owen [mailto:a...@conx.ch]
Sent: Monday, 11 November 2013 17:52
To: solr-user@lucene.apache.org
Subject: RE: date range tree
Has someone at least got an idea how
Mhhh, I run a DIH full reload every night, and the source field is a
SQL Server smallint column...
By the way, I'll try cleaning the data dir of the index and reindexing
On 12/11/13 17:13, Shawn Heisey wrote:
On 11/12/2013 2:37 AM, giovanni.bricc...@banzai.it wrote:
I'm getting some errors
On 11/12/13 5:20 PM, Shawn Heisey wrote:
> Ensure that all handler names start with a slash character, so they are
> things like "/query", "/select", and so on. Make sure that handleSelect
> is set to false on your requestDispatcher config. This is how Solr 4.x
> examples are set up already.
>
>
Hi:
First of all I have to say that I had never heard about *\* as the query to
get all the documents in an index, but *:* (maybe I'm wrong). Re-reading
"Apache Solr 4 Cookbook", "Solr 1.4 Enterprise Search Server" and "Apache
Solr 3 Enterprise Search Server", there is no trace of the query *\* a
Do you do your commits from the two indexing clients, or do you have
autocommit set to maxDocs = 1000?
-
Thanks,
Michael
Here is my case: I have a field in my schema named *elmo_field*. I want
*elmo_field* to have multiple values and multiple payloads, e.g.:
dorothy|0.46
sesame|0.37
big bird|0.19
bird|0.22
When a user searches for a keyword, e.g. *dorothy*, I want to add 0.46 to the
score. If user searches for *
PS: I use Solr 4.5.1
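There's no turnkey way in Solr 4.5 to make payloads contribute to the score;
at the Lucene level the usual building block is PayloadTermQuery. A minimal
sketch, assuming elmo_field is indexed with DelimitedPayloadTokenFilterFactory
(float encoder) and a custom Similarity whose scorePayload decodes the float:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.payloads.AveragePayloadFunction;
import org.apache.lucene.search.payloads.PayloadTermQuery;

public class ElmoPayloadQuery {
    public static PayloadTermQuery forKeyword(String keyword) {
        // Scores each matching term by the payload stored with it,
        // e.g. 0.46 for "dorothy" given an indexed value of dorothy|0.46
        return new PayloadTermQuery(new Term("elmo_field", keyword),
                new AveragePayloadFunction());
    }
}

Wiring this into Solr still needs a small custom QParserPlugin.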
2013/11/13 Furkan KAMACI
> Here is my case: I have a field in my schema named *elmo_field*. I want
> *elmo_field* to have multiple values and multiple payloads, e.g.:
>
> dorothy|0.46
> sesame|0.37
> big bird|0.19
> bird|0.22
>
> When a user searches for a ke
So, it sounds like either Solr is treated as a webapp, in which case it is
installed with the other webapps under Tomcat (for legacy/operational
reasons). In that case the Solr docs just need to explain how to deploy under
Tomcat, and the rest of the documentation/tooling comes from the Tomcat
community.
Or, if Solr is t
Dear Lucene,
In order to test Solr search performance, I disabled all Solr caches,
inserted 10 million documents, and found the first search very slow (700 ms)
and the second search very quick (20 ms); I am sure no Solr cache is
involved. This problem is bothering
Hi,
Reading that people have considered deploying the "example" folder is slightly
strange to me. No wonder they are confused and confuse their ops. We just
took vanilla Jetty (Jetty 9), installed solr.war on it, and configured it, with
no example folders at all. Since then it works nicely.
The main reason
One thing you can try, and this is more diagnostic than a cure, is to return
just the id field (and ensure that lazy field loading is true). That'll tell
you whether the issue is actually fetching the document off disk and
decompressing it, although frankly that's unlikely since you can get your
5,000 ro
Solr uses MMapDirectory by default.
What you see is surely a filesystem cache.
Once a file is accessed, it's memory mapped.
Restarting Solr won't reset it.
On Unix, you may reset this cache with:
echo 3 > /proc/sys/vm/drop_caches
Franck Brisbart
On Wednesday, 13 November 2013 at 11:58 +0
Explicit commits after writing 1000 docs in a batch from both indexing clients.
No auto commit.
Thanks.
>
> -Original Message
>
> Do you do your commits from the two indexing clients, or do you have
> autocommit set to maxDocs = 1000?
>
>
>
> -
> Thanks,
> Michael
I have to ask a different question: Why would you disable
the caches? You're trying to test worst-case times perhaps?
Because the caches are an integral part of Solr performance.
Disabling them artificially reduces your performance
numbers. So disabling them is useful for answering the question
"h
Just in case anybody is curious what *\* would really mean, the backslash
means to escape the following character, which in this case means don't
treat the second asterisk as a wildcard, but since the initial asterisk was
not escaped (the full rule is that if there is any unescaped wildcard in a
I did something like that also, and I was getting some nasty problems when
one of my clients would try to commit before a commit issued by another one
had finished. Might be the same problem for you too.
Try not doing explicit commits from the indexing clients and instead set the
autocommit to
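For what it's worth, commits can also be left to Solr from the client side
with commitWithin instead of explicit commit() calls. A minimal SolrJ sketch,
with the URL and document invented:

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class CommitWithinExample {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        UpdateRequest req = new UpdateRequest();
        req.add(doc);
        req.setCommitWithin(10000); // let Solr commit within 10s; no explicit commit()
        req.process(server);
        server.shutdown();
    }
}

Concurrent clients then never race each other's commits.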
I am able to perform the XML atomic update properly using curl commands.
However, the moment I try to achieve the same using the SolrJ APIs I am
facing problems.
What should be the equivalent SolrJ API code to perform a similar action
using the below curl command?
curl "http://search1.es.dupont.com
How can I post the whole XML string to Solr using its SolrJ API?
On Wed, Nov 13, 2013 at 6:50 PM, Anupam Bhattacharya wrote:
> I am able to perform the XML atomic update properly using curl commands.
> However, the moment I try to achieve the same using the SolrJ APIs I am
> facing problems.
>
>
Hi,
I've been researching how to update a specific field of an entry in Solr,
and it seems like the only way to do this is a delete followed by an add. Is
there a better way? If I want to change one field, do I have to store the
whole entry locally, delete it from the Solr index, and then add
Okay, so I've found in the Solr tutorial that if you do a POST command and
post a new entry with the same uniqueKey (in my case, id_) as an entry
already in the index, Solr will automatically replace it for you. That
seems to be what I need, right?
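Right. A minimal SolrJ sketch of that overwrite (only id_ is from your mail,
the rest is invented); note it's a full replacement, so any stored field you
don't resend is lost:

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class OverwriteExample {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id_", "42");          // same uniqueKey as the existing entry
        doc.addField("title", "new title"); // resend every field you want to keep
        server.add(doc);                    // replaces the old version of the document
        server.commit();
        server.shutdown();
    }
}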
(13/11/13 22:25), Anupam Bhattacharya wrote:
How can I post the whole XML string to Solr using its SolrJ API?
The source code of SimplePostTool would be of some help:
http://lucene.apache.org/solr/4_5_1/solr-core/org/apache/solr/util/SimplePostTool.html
koji
--
http://soleami.com/blog/auto
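Another option is SolrJ's DirectXmlRequest, which posts a raw XML body just
like curl does. A minimal sketch, with the URL and XML invented:

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.DirectXmlRequest;

public class PostXmlExample {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
        String xml = "<add><doc><field name=\"id\">doc1</field>"
                   + "<field name=\"some_field\" update=\"set\">new value</field></doc></add>";
        // Sends the XML body to /update exactly as curl would
        server.request(new DirectXmlRequest("/update", xml));
        server.commit();
        server.shutdown();
    }
}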
Yes, that's correct. You can also update a document "per field", but all
fields need to be stored=true, because Solr (version >= 4.0) first gets
your document from the index, creates a new document with the modified
field, and adds it again to the index...
Primoz
From: gohome190
To: solr-user@
James, can you elaborate how to process driver="${dataimporter.request.driver}"
url="${dataimporter.request.url}", and where to mention these?
My purpose is to configure my DB details (url, username, password) in a
properties file.
-Original Message-
From: Dyer, James [mailto:james.d...@ingramcontent.
You should read here: http://wiki.apache.org/solr/Atomic_Updates
2013/11/13
> Yes, that's correct. You can also update a document "per field", but all
> fields need to be stored=true, because Solr (version >= 4.0) first gets
> your document from the index, creates a new document with the modified field,
In solrcore.properties, put:

datasource.url=jdbc:xxx:yyy
datasource.driver=com.some.driver

In solrconfig.xml, put (inside the DataImportHandler's defaults):

...
<str name="driver">${datasource.driver}</str>
<str name="url">${datasource.url}</str>
...

In data-config.xml, put:
H
They need to be put outside of Solr, like a customized Mysolr_core.properties.
How do I access it?
-Original Message-
From: Dyer, James [mailto:james.d...@ingramcontent.com]
Sent: Wednesday, November 13, 2013 8:50 PM
To: solr-user@lucene.apache.org
Subject: RE: Data Import Handler
In solrcore.properti
I'm seeing a rare behavior of the gap fragmenter on Solr 3.6. Right now this is
my configuration for the gap fragmenter:

<fragmenter name="gap" default="true" class="solr.highlight.GapFragmenter">
  <lst name="defaults">
    <int name="hl.fragsize">150</int>
  </lst>
</fragmenter>

This is the basic configuration, just with the fragsize parameter tweaked to
get shorter fragments. The thing is that for 1 particula
Bumping this one again, any suggestions?
On Tue, Nov 12, 2013 at 3:58 PM, Utkarsh Sengar wrote:
> Hello,
>
> I load data from csv to solr via UpdateCSV. There are about 50M documents
> with 10 columns in each document. The index size is about 15GB and I am
> using a 3 node distributed solr clust
Utkarsh,
Your screenshot didn't come through. I don't think this list allows
attachments. Maybe put it up on imgur or something?
I'm a little unclear on whether you're using Solr in Cloud mode, or with a
single master.
Michael Della Bitta
Applications Developer
o: +1 646 532 3062 | c: +1 917
Hi Michael,
I am using SolrCloud 4.5, and UpdateCSV loads data to one of these nodes.
Attachment: http://i.imgur.com/1xmoNtt.png
Thanks,
-Utkarsh
On Wed, Nov 13, 2013 at 8:33 AM, Michael Della Bitta <
michael.della.bi...@appinions.com> wrote:
> Utkarsh,
>
> Your screenshot didn't come throu
Don't load 50M documents in one shot. Break it up into reasonable chunks
(100K?) with commits at each point.
You will have a bottleneck somewhere, usually disk or CPU. Yours appears to be
disk. If you get faster disks, it might become the CPU.
wunder
On Nov 13, 2013, at 8:22 AM, Utkarsh Sengar
On 11/13/2013 5:29 AM, Dmitry Kan wrote:
Reading that people have considered deploying "example" folder is slightly
strange to me. No wonder they are confused and confuse their ops.
I do use the stripped jetty included in the example, but my setup is not
a straight copy of the example director
RE: the example folder
It’s something I’ve been pushing towards moving away from for a long time - see
https://issues.apache.org/jira/browse/SOLR-3619 Rename 'example' dir to
'server' and pull examples into an 'examples’ directory
Part of a push I’ve been on to own the Container level (people a
Thank you. This will help me a lot.
Sent from my iPhone
On Nov 13, 2013, at 10:08 AM, "Shawn Heisey" wrote:
> In the hopes that it will help someone get Solr running in a very clean way,
> here's an informational email.
>
> For my Solr install on CentOS 6, I use /opt/solr4 as my installation
I use Solr 4.5.1. I indexed some documents and decided to add a new field to
my schema some time later. I want to use Atomic Updates for that newly added
field. I use SolrJ for indexing. However, because there is no field named as
the one I've newly added, Solr does not make an atomic update for existi
In the hopes that it will help someone get Solr running in a very clean
way, here's an informational email.
For my Solr install on CentOS 6, I use /opt/solr4 as my installation
path, and /index/solr4 as my solr home. The /index directory is a
dedicated filesystem, /opt is part of the root fil
Hi All,
I'm building a utility (Java jar) to create SolrInputDocuments and send
them to a HttpSolrServer using the SolrJ API. The intention is to find an
efficient way to create documents from a large directory of files (where
multiple files make one Solr document) and send them to a remote Solr
in
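For that kind of feeding, ConcurrentUpdateSolrServer in SolrJ 4.x is worth a
look: it queues documents and streams them to Solr from background threads. A
minimal sketch, with the URL, queue size and thread count invented and left to
tune:

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BulkFeeder {
    public static void main(String[] args) throws Exception {
        // Buffers up to 10000 docs and drains the queue with 4 threads
        SolrServer server = new ConcurrentUpdateSolrServer(
                "http://localhost:8983/solr/collection1", 10000, 4);
        for (int i = 0; i < 100000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-" + i);
            server.add(doc); // returns quickly; sending happens in the background
        }
        server.commit();   // also flushes the queue
        server.shutdown();
    }
}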
Try Solr 4.5.1.
https://issues.apache.org/jira/browse/SOLR-5306 Extra collection creation
parameters like collection.configName are not being respected.
- Mark
On Nov 13, 2013, at 2:24 PM, Christopher Gross wrote:
> Running Apache Solr 4.5 on Tomcat 7.0.29, Java 1.6_30. 3 SolrCloud nodes
>
Running Apache Solr 4.5 on Tomcat 7.0.29, Java 1.6_30. 3 SolrCloud nodes
running. 5 ZK nodes (v 3.4.5), one on each SolrCloud server, and on 2
other servers.
I want to create a collection on all 3 nodes. I only need 1 shard. The
config is in Zookeeper (another collection is using it)
http://s
Hello,
I'm hitting a performance issue when using field collapsing in a
distributed Solr setup, and I'm wondering if others have seen it and if
anyone has an idea to work around it.
I'm using field collapsing to deduplicate documents that have the same near
duplicate hash value, and deduplicating
I am trying to escape special characters from the Solr response header (to
prevent cross-site scripting).
I couldn't find any method in SolrQueryResponse to get just the Solr
response header.
Can someone let me know if there is a way to modify the Solr response
header?
I'm not quite sure what you're trying to do here, can you please elaborate with
an example?
But, you can get the response header from a SolrQueryResponse using the
getResponseHeader() method.
Erik
On Nov 13, 2013, at 3:21 PM, Developer wrote:
> I am trying to escape special character
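For reference, a minimal sketch of reading that header from inside a custom
SearchComponent (the wiring into solrconfig.xml is omitted):

import org.apache.solr.common.util.NamedList;
import org.apache.solr.response.SolrQueryResponse;

public class HeaderAccess {
    // Returns the mutable response header; entries like "status" and "QTime"
    // (plus echoed params) live here and can be modified in place.
    public static NamedList<Object> header(SolrQueryResponse rsp) {
        return rsp.getResponseHeader();
    }
}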
It's surprising such a query takes a long time; I would assume that after
trying q=*:* consistently you should be getting cache hits and times should
be faster. Try to see in the admin UI how your query/doc caches perform.
Moreover, the query in itself is just asking for the first 5000 docs that were
ind
Thanks guys!
I will start splitting the file into chunks of 5M (10 chunks) to start with,
and reduce the chunk size if needed.
Thanks,
-Utkarsh
On Wed, Nov 13, 2013 at 9:08 AM, Walter Underwood wrote:
> Don't load 50M documents in one shot. Break it up into reasonable chunks
> (100K?) with commits at each p
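A minimal SolrJ sketch of such a chunked CSV load, posting each chunk to
/update/csv with a commit; the file names, chunk size and URL are invented:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.PrintWriter;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class ChunkedCsvLoader {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
        BufferedReader in = new BufferedReader(new FileReader("big.csv"));
        String header = in.readLine(); // repeat the CSV header in every chunk
        String line;
        int n = 0, chunkNo = 0;
        PrintWriter out = null;
        while ((line = in.readLine()) != null) {
            if (out == null) {
                out = new PrintWriter(new FileWriter("chunk-" + chunkNo + ".csv"));
                out.println(header);
            }
            out.println(line);
            if (++n % 100000 == 0) { // one post + commit per 100k rows
                out.close();
                post(server, new File("chunk-" + chunkNo + ".csv"));
                out = null;
                chunkNo++;
            }
        }
        if (out != null) {
            out.close();
            post(server, new File("chunk-" + chunkNo + ".csv"));
        }
        in.close();
        server.shutdown();
    }

    private static void post(HttpSolrServer server, File chunk) throws Exception {
        ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/csv");
        req.addFile(chunk, "text/csv");
        req.setParam("commit", "true");
        server.request(req);
    }
}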
Which example? There are so many.
On Wed, Nov 13, 2013 at 1:00 PM, Mark Miller wrote:
> RE: the example folder
>
> It’s something I’ve been pushing towards moving away from for a long time -
> see https://issues.apache.org/jira/browse/SOLR-3619 Rename 'example' dir to
> 'server' and pull exampl
Can anybody provide any insight about using the tz param? It isn't affecting
date math and /day rounding. What format does the tz value need to be in? I'm
not finding any documentation on this.
Sample query we're using:
path=/select
params={tz=America/Chicago&sort=id+desc&s
I believe it is the TZ column from this table:
http://en.wikipedia.org/wiki/List_of_tz_database_time_zones
Yeah, it's on my TODO list for my book.
I suspect that "tz" will not affect "NOW", which is probably UTC. I suspect
that "tz" only affects literal dates in date math.
-- Jack Krupansky
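One hedged guess worth checking: Solr's parameter name is uppercase TZ
(CommonParams.TZ), so a lowercase tz=America/Chicago may simply be ignored.
Through SolrJ it would look like this (the field name is a placeholder):

import org.apache.solr.client.solrj.SolrQuery;

public class TzExample {
    public static SolrQuery build() {
        SolrQuery q = new SolrQuery("*:*");
        q.set("TZ", "America/Chicago"); // affects date math rounding like /DAY
        q.set("facet", "true");
        q.set("facet.range", "timestamp");
        q.set("facet.range.start", "NOW/DAY-30DAYS");
        q.set("facet.range.end", "NOW/DAY+1DAY");
        q.set("facet.range.gap", "+1DAY");
        return q;
    }
}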
Dear Lucene,
I've found an issue with Lucene search performance: the first search is very
slow and a second search is very quick.
I use MMapDirectoryFactory in solrconfig.xml (I have already disabled all Solr
caches for testing Lucene search performance).
Calling mmap() just sets up logical addresses in the kernel