If you use the default directory then it will use the solr.home directory. I have
tested the Solr Cloud example on a local machine with 5-6 nodes, and the data
directory was created under the core name, like
"example2/solr/collection1/data". You can see the example startup script in the
source code under solr/cloud-dev/solrcloud
I changed my system memory to 12GB. Solr now gets -Xms2048m -Xmx8192m as
parameters. I also added -XX:+UseG1GC to the java process. But now the
whole machine crashes! Any idea why?
Mar 22 20:30:01 solr01-gs kernel: [716098.077809] java invoked
oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
Hello All,
I am trying to index data from a SQL Server view into Solr using the DIH
with the full-import command. The view has 750K rows and 427 columns. During the
first execution I indexed only the first 50 rows of the view; the data got
indexed in 10 min. But when I executed the same scenario to i
In the context of the above scenario, when I try to index a set of 500 rows, it
fetches and indexes around 400-odd rows and then shows no progress and
keeps on executing. What can be the possible cause of this issue? If
possible, please do share if you guys have gone through such a scenario with
the res
On 3/25/13 4:18 AM, Steve Rowe wrote:
The wiki at http://wiki.apache.org/solr/ has come under attack by spammers more
frequently of late, so the PMC has decided to lock it down in an attempt to
reduce the work involved in tracking and removing spam.
From now on, only people who appear on
htt
Hi,
I am using Solr 4.0. I want to store key-value pairs of attributes in a
multivalued field.
For example, I have some documents (products) which have attributes as one field,
and I indexed the
attributes as separate documents to power auto-suggest. Now in some auto-suggest
I have to show the facet count of prod
Floyd,
I think you need to provide a stack trace or profiler sampling.
On Fri, Mar 22, 2013 at 6:23 AM, Floyd Wu wrote:
> Anybody can point me a direction?
> Many thanks.
>
>
>
> 2013/3/20 Floyd Wu
>
> > Hi everyone,
> >
> > I have a problem and have no luck to figure out.
> >
> > When I issue a quer
Please add adderllyer to this group. Thank you!
For the ideal, never give up, fighting!
On Mon, Mar 25, 2013 at 5:11 PM, Andrzej Bialecki wrote:
> On 3/25/13 4:18 AM, Steve Rowe wrote:
>
>> The wiki at http://wiki.apache.org/solr/ has come under attack by
>> spammers more frequently of late, s
Hi,
I recently added a new field (toptipp) to an existing Solr schema.xml and
it worked just fine. Subsequently I added two more fields (active_cruises
and non_grata) to the schema and now I get this error (status 400):
undefined field: "active_cruise"
My Solr db is populated via a program that c
Further to the prev msg: Here's an extract from my current schema.xml:
The original schema.xml had the last 3 fields in the order toptipp,
active_cruise and non_grata. Active_cruise and non_grata were also defined
as type="int". I changed the order and field types in my attem
Is somebody using the UseG1GC garbage collector with Solr and Tomcat 7?
Any extra options needed?
Thanks...
On 03/25/2013 08:34 AM, Arkadi Colson wrote:
I changed my system memory to 12GB. Solr now gets -Xms2048m -Xmx8192m
as parameters. I also added -XX:+UseG1GC to the java process. But now
t
The use of UseG1GC, yes,
but with Solr 4.x, Jetty 8.1.8 and Java HotSpot(TM) 64-Bit Server VM (1.7.0_07).
os.arch: amd64
os.name: Linux
os.version: 2.6.32.13-0.5-xen
Only args are "-XX:+UseG1GC -Xms16g -Xmx16g".
Monitoring shows that 16g is a bit high, I might reduce it to 10g or 12g for
the slaves
My understanding is that logs stick around for a while just in case they
can be used to catch up a shard that rejoins the cluster.
On Mar 24, 2013 12:03 PM, "Niran Fajemisin" wrote:
> Hi all,
>
> We import about 1.5 million documents on a nightly basis using DIH. During
> this time, we need to e
Hi Team,
I want to overcome a sort issue here. The sort feature works fine.
I have indexed a few documents in Solr, which each have a unique document ID.
Now when I retrieve results from Solr, the results come back automatically sorted.
However, I would like to fetch results based on the sequence I mention in my
On Mar 25, 2013, at 3:30 AM, Dawid Weiss wrote:
> Can you add me too? We have a few pages which we maintain (search results
> clustering related). My wiki user is DawidWeiss
Added to AdminGroup.
On Mar 25, 2013, at 5:11 AM, Andrzej Bialecki wrote:
> Please add AndrzejBialecki to this group. Tha
A timeout like this _probably_ means your docs were indexed just fine. I'm
curious why adding the docs takes so long, how many docs are you sending at
a time?
Best
Erick
On Thu, Mar 21, 2013 at 1:31 PM, Benjamin, Roy wrote:
> I'm calling: m_server.add(docs, 12);
>
> Wondering if the ti
Solr doesn't do anything with links natively, it just echoes back what you
put in. So you're sending file-based http links to Solr...
Best
Erick
On Thu, Mar 21, 2013 at 1:40 PM, zeroeffect wrote:
> While I am still in the beginning phase of solr I have been able to index a
> directory of HTML
Thanks for the info!
I just upgraded java from 6 to 7...
How exactly do you monitor the memory usage and the effect of the
garbage collector?
On 03/25/2013 01:18 PM, Bernd Fehling wrote:
The use of UseG1GC, yes,
but with Solr 4.x, Jetty 8.1.8 and Java HotSpot(TM) 64-Bit Server VM (1.7.0_07).
os.a
Furkan:
Stop, back up, you're making it too complicated. Follow Erik's
instructions. The "ant example" just compiles all of Solr, just like the
distribution. Then you can go into the example directory and change it to
look just like whatever you want, change the schema, change the solrconfig,
add
This has been a long-standing issue with updates; several attempts
have been started to change the behavior, but they haven't gotten
off the ground.
Your options are to send one record at a time, or to have error-handling
logic that, say, transmits the docs one at a time whenever a packet fails.
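That fallback can be sketched in plain Java; the `DocSender` interface here is a hypothetical stand-in for your SolrJ client's add() call, not an actual SolrJ type:

```java
import java.util.List;

public class BatchFallback {

    /** Hypothetical stand-in for the SolrJ client's add() call. */
    interface DocSender {
        void add(List<String> docs) throws Exception;
    }

    /**
     * Try the whole batch first; if the packet fails, fall back to sending
     * the docs one at a time so only the bad records are skipped.
     * Returns the number of docs that could not be indexed.
     */
    static int addWithFallback(DocSender sender, List<String> docs) {
        try {
            sender.add(docs);              // happy path: one round trip
            return 0;
        } catch (Exception batchFailed) {
            int failed = 0;
            for (String doc : docs) {
                try {
                    sender.add(List.of(doc));  // isolate the offending doc
                } catch (Exception e) {
                    failed++;                  // log and skip in real code
                }
            }
            return failed;
        }
    }
}
```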
Best
I apologize for the slow reply. Today has been killer. I will reply to
everyone as soon as I get the time.
I am having difficulties understanding how docValues work.
Should I only add docValues to the fields that I actually use for sorting
and faceting or on all fields?
Will the docValues magic
With MS SQL Server, try adding "selectMethod=cursor" to your connection string
and set your batch size to a reasonable amount (or possibly just omit it; DIH
has a default value it will use).
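In data-config.xml that might look like the following sketch; the driver class, URL, credentials, batch size, and query are placeholders, not values from the original message:

```xml
<dataConfig>
  <!-- selectMethod=cursor keeps SQL Server from buffering the whole result set -->
  <dataSource type="JdbcDataSource"
              driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
              url="jdbc:sqlserver://dbhost;databaseName=mydb;selectMethod=cursor"
              user="solr" password="secret"
              batchSize="500"/>
  <document>
    <entity name="myview" query="SELECT * FROM my_view"/>
  </document>
</dataConfig>
```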
James Dyer
Ingram Content Group
(615) 213-4311
-Original Message-
From: kobe.free.wo...@gmail.co
Hello,
Can I be added to the contributors group? Username sswoboda.
Thank you.
Swati
On Mar 25, 2013, at 10:32 AM, Swati Swoboda wrote:
> Can I be added to the contributors group? Username sswoboda.
Added to solr ContributorsGroup.
Erick,
Thanks for the info. That's also what I had in mind and that's what I did
since I can't find anything on the web regarding this issue.
Randolf
--
View this message in context:
http://lucene.472066.n3.nabble.com/Continue-to-the-next-record-tp4049920p4051113.html
Sent from the Solr - Use
We use munin with the jmx plugin for monitoring all servers and Solr installations.
(http://munin-monitoring.org/)
For short-term monitoring only, we also use jvisualvm, delivered with the Java SE JDK.
Regards
Bernd
On 25.03.2013 14:45, Arkadi Colson wrote:
> Thanks for the info!
> I just upgraded java f
Hi,
Please let me know how to get the DB changes reflected into my Solr
index. I am using Solr 4 with DIH and a delta query, with the scheduler configured
in the dataimport scheduler properties. Ultimately I want my DB to be in sync with Solr.
Everything is all set and working, except every time I modify the data in the
DB
How can I see if GC is actually working? Is it written in the tomcat
logs as well or will I only see it in the memory graphs?
BR,
Arkadi
On 03/25/2013 03:50 PM, Bernd Fehling wrote:
We use munin with jmx plugin for monitoring all server and Solr installations.
(http://munin-monitoring.org/)
On
take a look here:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
looking at memory consumption can be a bit tricky to interpret with
MMapDirectory.
But you say "I see the CPU working very hard" which implies that your issue
is just scoring 90M documents. A way to test: tr
For your first problem I'd be looking at the solr logs and verifying that
1> the update was sent
2> no stack traces are thrown
3> You probably already know all about commits, but just in case the commit
interval is passed.
For your second problem, I'm not quite sure where you're setting these
time
That's essentially what replication does, only backs up parts of the index
that have changed. However, when segments merge, that might mean the entire
index needs to be replicated.
Best
Erick
On Sun, Mar 24, 2013 at 12:08 AM, Sandeep Kumar Anumalla <
sanuma...@etisalat.ae> wrote:
> Hi,
>
> Is t
Certainly that will be true for the bare q=*:*, I meant with the boosting
clause added.
Best
Erick
On Sun, Mar 24, 2013 at 7:01 PM, adityab wrote:
> thanks Eric. in this query "q=*:*" the Lucene score is always 1
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/T
You can also use "-verbose:gc -XX:+PrintGCDateStamps -XX:+PrintGCDetails
-Xloggc:gc.log"
as additional options to get a "gc.log" file and see what GC is doing.
Regards
Bernd
On 25.03.2013 16:01, Arkadi Colson wrote:
> How can I see if GC is actually working? Is it written in the tomcat logs as
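Taken together, the flags discussed in this thread might go into Tomcat's setenv.sh roughly like this; a sketch only — the heap sizes come from the earlier messages, and the file location is the usual Tomcat convention, not something stated here:

```shell
# $CATALINA_HOME/bin/setenv.sh (sketch)
JAVA_OPTS="$JAVA_OPTS -Xms2048m -Xmx8192m -XX:+UseG1GC"
JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDateStamps -XX:+PrintGCDetails -Xloggc:gc.log"
export JAVA_OPTS
```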
unless you're manually typing things and did a typo, your problem is that
your csv file defines:
active_cruises
and your schema has
active_cruise
Note the lack of an 's'...
Best
Erick
On Mon, Mar 25, 2013 at 6:30 AM, Mid Night wrote:
> Further to the prev msg: Here's an extract from my curr
The tlogs will stay there to provide "peer synch" on the last 100 docs. Say
a node somehow gets out of synch. There are two options
1> replay from the log
2> replicate the entire index.
To avoid <2> if possible, the tlog is kept around. In your case, all your
data is put in the tlog file, so the "
There's no good way that I know of to have Solr do that for you.
But you have the original query so it seems like your app layer could sort
the results accordingly.
Best
Erick
On Mon, Mar 25, 2013 at 8:44 AM, atuldj.jadhav wrote:
> Hi Team,
>
> I want to overcome a sort issue here.. sort featu
Hello,
We re-indexed our entire core of 115 docs with some of the
fields having termVectors="true" termPositions="true" termOffsets="true";
prior to the reindex we only had termVectors="true". After the reindex,
the query component has become very slow. I thought that adding the
term
Generally, you will need to delete the index and completely reindex your
data if you change the type of a field.
I don't think that would account for active_cruise being an undefined field
though.
I did try your scenario with the Solr 4.2 example, and a field named
active_cruise, and it work
While you're in that mode, could you please add 'Upayavira'.
Thanks!
Upayavira
On Mon, Mar 25, 2013, at 02:41 PM, Steve Rowe wrote:
>
> On Mar 25, 2013, at 10:32 AM, Swati Swoboda
> wrote:
> > Can I be added to the contributors group? Username sswoboda.
>
> Added to solr ContributorsGroup.
On Mar 25, 2013, at 11:59 AM, Upayavira wrote:
> While you're in that mode, could you please add 'Upayavira'.
Added to solr ContributorsGroup.
Hi,
I noticed that apache solr 4.2 uses the lucene codec 4.1. How can I
switch to 4.2?
Thanks in advance
Mario
Did index size increase after turning on termPositions and termOffsets?
Thanks.
Alex.
-Original Message-
From: Ravi Solr
To: solr-user
Sent: Mon, Mar 25, 2013 8:27 am
Subject: Query slow with termVectors termPositions termOffsets
Hello,
We re-indexed our entire core o
Hi,
I'm having an issue when I try to create a collection:
curl
http://192.168.1.142:8983/solr/admin/cores?action=CREATE&name=RT-4A46DF1563_12&collection=RT-4A46DF1563_12&shard=00&collection.configName=reportssBucket-regular
The curl call has an error because the collection.configName doesn
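(Note: run from a shell, an unquoted URL like the one above is split at each '&', so everything from the first '&' on — including collection.configName — never reaches Solr. Quoting the URL avoids this. A minimal illustration with a hypothetical URL:)

```shell
# Unquoted, the shell treats each '&' as a background operator and the
# query string stops at the first one; quoting keeps it intact.
url='http://localhost:8983/solr/admin/cores?action=CREATE&name=mycore&collection.configName=myconf'
printf '%s\n' "$url"
```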
I fixed it by setting JVM properties in glassfish.
-Djavax.net.ssl.keyStorePassword=changeit
--
View this message in context:
http://lucene.472066.n3.nabble.com/Strange-error-in-Solr-4-2-tp4047386p4051159.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thanks Erick and Michael for the prompt responses.
Cheers,
Niran
>
> From: Erick Erickson
>To: solr-user@lucene.apache.org
>Sent: Monday, March 25, 2013 10:21 AM
>Subject: Re: Tlog File not removed after hard commit
>
>The tlogs will stay there to provide "pee
That example does not work if you have > 1 collection (core) per node; all
of them end up sharing the same index and overwriting one another.
On Mon, Mar 25, 2013 at 6:27 PM, Gopal Patwa wrote:
> if you use default directory then it will use solr.home directory, I have
> tested solr cloud example on loca
Hi Chris,
Thanks for your detailed explanations. The default value is a difficult
limitation, especially for financial figures. I may try some
workaround like the lowest possible number for TrieLongField, but it would be
better to avoid such things :)
Regards.
On 22 March 2013 20:39, Chris Hostetter
I have a custom ValueSourceParser that sets up a Zookeeper Watcher on some
frequently changing metadata that a custom ValueSource depends on.
Basic flow of events is - VSP watches for metadata changes, which triggers
a refresh of some expensive data that my custom ValueSource uses at query
time. T
Solved.
I was able to solve this by removing any reference to dataDir from the
solrconfig.xml. So in solr.xml for each node I have:
and in solrconfig.xml in each core I have removed the reference to dataDir
completely.
On Tue, Mar 2
I don't know the ValueSourceParser from a hole in my head, but it looks like it
has access to the solrcore with fp.req.getCore?
If so, it's easy to get the zk stuff
core.getCoreDescriptor.getCoreContainer.getZkController(.getZkClient).
From memory, so perhaps with some minor misnaming.
- Mark
O
: I noticed that apache solr 4.2 uses the lucene codec 4.1. How can I
: switch to 4.2?
Unless you've configured something oddly, Solr is already using the 4.2
codec.
What you are probably seeing is that the fileformat for several types of
files hasn't changed from the 4.1 (or even 4.0) versi
Brilliant! Thank you - I was focusing on the init method and totally
ignored the FunctionQParser passed to the parse method.
Cheers,
Tim
On Mon, Mar 25, 2013 at 4:16 PM, Mark Miller wrote:
> I don't know the ValueSourceParser from a hole in my head, but it looks
> like it has access to the solr
My application is update intensive. The documents are pretty small, less than
1K bytes.
Just now I'm batching 4K documents with each SolrJ addDocs() call.
Wondering what I should expect with increasing this batch size? Say 8K docs
per update?
Thanks
Roy
Solr 3.6
Hi Jack, I tried putting the schema.xml file (further below) in the
path you specified below, but when I tried to start (java -jar
start.jar) I got the message below.
I can try a fresh install like you suggested, but I'm not sure what
would be different. I was using documentation at
http://lucene.ap
Did you ever resolve the issue with your full-import only importing 1
document?
I'm monitoring the source DB and it's only issuing one query; it never
attempts to query for the other documents at the top of the nest.
I'm running into the exact same issue with NO help out there.
Thanks in advance
I have two issues and I'm unsure if they are related:
Problem: After setting up a multiple-collection SolrCloud 4.1 instance on
seven servers, when I index the documents they aren't distributed across
the index slices. It feels as though I don't actually have a "cloud"
implementation, yet every
Hi,
You'll have to test because there is no general rule that works in all
environments, but from testing this a while back, you will reach the point
of diminishing returns at some point. You don't mention using
StreamingUpdateSolrServer, so you may want to try that instead:
http://lucene.apache.
Arkadi,
jstat -gcutil -h20 2000 100 also gives useful info about GC and I use
it a lot for quick insight into what is going on with GC. SPM (see
http://sematext.com/spm/index.html ) may also be worth using.
Otis
--
Solr & ElasticSearch Support
http://sematext.com/
On Mon, Mar 25, 2013 at
Your schema has only "fields", but no field "types". Check the Solr example
schema for reference, and include all of the types defined there unless you
know that you do not need them. "string" is clearly one that is needed.
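For example, the "string" type and basic numeric types Jack refers to are declared in the 4.x example schema roughly as follows; this is reproduced from the stock example schema as a sketch, so check your own copy for the exact attributes:

```xml
<types>
  <!-- a plain, unanalyzed string type; many fields depend on it -->
  <fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
  <!-- basic numeric types from the example schema -->
  <fieldType name="int"  class="solr.TrieIntField"  precisionStep="0" positionIncrementGap="0"/>
  <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
</types>
```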
-- Jack Krupansky
-Original Message-
From: Patrice Seyed
Sen
Hi,
This question is too open-ended for anyone to give you a good answer.
Maybe you want to ask more specific questions? As for embedding vs. war,
start with a simpler war and think about the alternatives if that doesn't
work for you.
Otis
--
Solr & ElasticSearch Support
http://sematext.com/
I'm guessing you didn't specify numShards. Things changed in 4.1 - if you don't
specify numShards it goes into a mode where it's up to you to distribute
updates.
- Mark
On Mar 25, 2013, at 10:29 PM, Chris R wrote:
> I have two issues and I'm unsure if they are related:
>
> Problem: After se
Nope, this doesn't find it:
http://search-lucene.com/?q=facet+stats&fc_project=Solr&fc_type=issue
Maybe Anirudha wants to do that?
Otis
--
Solr & ElasticSearch Support
http://sematext.com/
On Thu, Mar 21, 2013 at 5:16 AM, Upayavira wrote:
> Have you made a JIRA ticket for this? This is use
Hi,
Try something like this: http://host/solr/replication?command=backup
See: http://wiki.apache.org/solr/SolrReplication
Otis
--
Solr & ElasticSearch Support
http://sematext.com/
On Thu, Mar 21, 2013 at 3:23 AM, Sandeep Kumar Anumalla
wrote:
>
> Hi,
>
> We are loading daily 1TB (Apprx) of
Hi,
What does your query look like? Does it look like q=name:dark knight?
If so, note that only "dark" is going against the "name" field. Try
q=name:dark name:knight or q=name:"dark knight".
Otis
--
Solr & ElasticSearch Support
http://sematext.com/
On Mon, Mar 18, 2013 at 6:21 PM, Catala,
Or, q=name:(dark knight) .
-- Jack Krupansky
-Original Message-
From: Otis Gospodnetic
Sent: Monday, March 25, 2013 11:51 PM
To: solr-user@lucene.apache.org
Subject: Re: Shingles Filter Query time behaviour
Hi,
What does your query look like? Does it look like q=name:dark knight?
If
Yes, the index size increased after turning on termPositions and termOffsets.
Ravi Kiran Bhaskar
On Mon, Mar 25, 2013 at 1:13 PM, wrote:
> Did index size increase after turning on termPositions and termOffsets?
>
> Thanks.
> Alex.
> -Original Message-
> From: Ravi Solr
> To
Interesting, I saw some comments about numShards, but it wasn't ever
specific enough to catch my attention. I will give it a try tomorrow.
Thanks.
On Mar 25, 2013 11:35 PM, "Mark Miller" wrote:
> I'm guessing you didn't specify numShards. Things changed in 4.1 - if you
> don't specify numShards i
Hi Frank,
If your servlet container had a crazy low setting for the max number
of threads, I think you would see the CPU underutilized. But I think
you would also see errors on the client about connections being
requested. Sounds like possibly a VM issue that's not
Solr-specific...
Otis
--
So
"book" by itself returns in 4s (non-optimized disk IO); running it a second
time returned 0s, so I think I can presume that the query was not cached the
first time. This system has been up for a week, so it's warm.
I'm going to give your article a good long read, thanks for that.
I guess good fa