The list you are looking for is "solr-user@lucene.apache.org"
solr-user-info is an automated bot for giving you info about the list
solr-user-owner is for contacting the human moderators of the mailing list with help
: Date: Sat, 21 May 2016 09:07:00 -0400
: From: Carl Roberts
: To: sol
Let's try this one (solr-user-digest-subscr...@lucene.apache.org) -
maybe a real person will answer there.
On 5/21/16 9:09 AM, Carl Roberts wrote:
And, these responses are just weird. Do they mean this user list is
obsolete? Is Solr no longer supported via a user list where we can
Subject: Re: How to properly query indexed Data
To: solr-user-i...@lucene.apache.org, solr-user-ow...@lucene.apache.org
References: <573d0f03.7070...@gmail.com> <57405d74.6010...@gmail.com>
How do I unsubscribe?
Hi,
What is the best way to update an index with new data or records? Via
this command:
curl "http://127.0.0.1:8983/solr/nvd-rss/dataimport?command=full-import&clean=false&synchronous=true&entity=cve-2002"
or this command:
curl "http://127.0.0.1:8983/solr/nvd-rss/dataimport?command=delta
good hack
On Wed, Jan 28, 2015 at 3:47 AM, Carl Roberts
wrote:
Hi,
I am attempting to run all these curl commands from a script so that I can
put them in a crontab job; however, it seems that only the first one
executes and the other ones return with an error (below):
curl "http://127.0.0.1
Hi,
I am attempting to run all these curl commands from a script so that I
can put them in a crontab job; however, it seems that only the first one
executes and the other ones return with an error (below):
curl "http://127.0.0.1:8983/solr/nvd-rss/dataimport?command=full-import&clean=false&en
Yep - it works with string. Thanks a lot!
On 1/27/15, 7:08 PM, Alexandre Rafalovitch wrote:
Make that id field a string and reindex. text_general is not the right
type for a unique key.
Regards,
Alex.
"response":{"numFound":6717,"start":0,"docs":[]
}}
Now here is the next full-import command with clean=false:
"http://127.0.0.1:8983/solr/nvd-rss/dataimport?command=full-import&entity=cve-2002&clean=false"
And here is the new count:
curl "http://
freebsd:freebsd:2.0.5",
"cpe:/o:freebsd:freebsd:2.2.6",
"cpe:/o:freebsd:freebsd:2.1.6.1",
"cpe:/o:freebsd:freebsd:2.0.1",
"cpe:/o:freebsd:freebsd:2.2",
"cpe:/o:freebsd:freebsd:2.0",
"false"
regex=":" replaceWith=" "/>
Don't ask me why the other one didn't work, as I think it should have
worked also.
On 1/27/15, 3:42 PM, Carl Roberts wrote:
Hi,
I have tried to reindex to add a new field named product-info and no
matter what I do, I c
I too am running into what appears to be the same thing.
Everything works and data is imported but I cannot see the new field in
the result.
Hi,
I have tried to reindex to add a new field named product-info and no
matter what I do, I cannot get the new field to appear in the index
after import via DIH.
Here is the rss-data-config.xml configuration (field product-info is the
new field I added):
readTimeout="3"/>
lace, it should.
If you are getting duplicate records, maybe your uniqueKey is not set correctly?
clean=false looks to me like the right approach for incremental updates.
Regards,
Alex.
Sign up for my Solr resources newsletter at http://www.solr-start.com/
On 27 January 2015 at 11:4
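A quick way to double-check what the uniqueKey actually is, assuming the Schema REST API from the stock 4.x example configs is enabled (if not, looking at schema.xml directly works too):

# Should return something like {"uniqueKey":"id"}
curl "http://127.0.0.1:8983/solr/nvd-rss/schema/uniquekey?wt=json"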
/log4j.xml -Dsolr.solr.home=../
-classpath "./:lib/*:./log4j.xml" -jar start.jar
Regards,
Joe
On 1/22/15, 11:46 AM, Shawn Heisey wrote:
On 1/22/2015 9:18 AM, Carl Roberts wrote:
Is there a way to pass in proxy settings to Solr?
The reason that I am asking this question is that I am trying
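For anyone hitting the same proxy issue: the usual approach is to hand the standard JVM proxy properties to the start command, roughly like the sketch below (the proxy host and port are placeholders). DIH's URLDataSource goes through java.net, so these properties should generally be picked up:

# Replace proxy.example.com:8080 with the real proxy; add -Dhttp.nonProxyHosts=... if some hosts must bypass it
java -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 \
     -Dhttps.proxyHost=proxy.example.com -Dhttps.proxyPort=8080 \
     -Dsolr.solr.home=../ -jar start.jar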
Also, if I try full-import and clean=false with the same XML file, I end
up with more records each time the import runs. How can I make SOLR
just add the records that are new by id, and update the ones that have
an id that matches the one in the existing index?
On 1/27/15, 11:32 AM, Carl
Hi,
What is the recommended way to import and update index records?
I've read the documentation and I've experimented with full-import and
delta-import and I am not seeing the desired results.
Basically, I have 15 RSS feeds that I am importing through
rss-data-config.xml.
The first RSS fee
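One workable pattern while experimenting (a sketch using the entity name from earlier messages, not a recommendation from the list): run the import per entity with clean=false, then poll the handler status to see what was fetched and added:

# Import one feed/entity without wiping documents from the other entities
curl "http://127.0.0.1:8983/solr/nvd-rss/dataimport?command=full-import&entity=cve-2002&clean=false"

# The status response reports counters such as Total Rows Fetched and Total Documents Processed
curl "http://127.0.0.1:8983/solr/nvd-rss/dataimport?command=status"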
Krupansky
On Sat, Jan 24, 2015 at 3:49 PM, Carl Roberts
wrote:
Via this rss-data-config.xml file and a class that I wrote (attached) to
download an XML file from a ZIP URL:
https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2002.xml.zip"
proc
xml:lang="en">5707
CONFIRM
href="http://www.openbsd.org/errata23.html#tcpfix";
xml:lang="en">http://www.openbsd.org/errata23.html#tcpfix
ip_input.c in BSD-derived TCP/IP implementations
allows remote attackers to cause a denial
ftware-list/product" commonField="false" />
xpath="/nvd/entry/published-datetime" commonField="false" />
xpath="/nvd/entry/last-modified-datetime" commonField="false" />
commonField="false" /
want to be able to find documents
when somebody searches for X, Y, or Z
3. What would be the best analyzer chain to be able to do so?
Regards,
Alex.
Sign up for my Solr resources newsletter at http://www.solr-start.com/
On 24 January 2015 at 15:04, Carl Roberts
wrote:
Hi,
How can I pa
s for X, Y, or Z
3. What would be the best analyzer chain to be able to do so?
Regards,
Alex.
Sign up for my Solr resources newsletter at http://www.solr-start.com/
On 24 January 2015 at 15:04, Carl Roberts wrote:
Hi,
How can I parse the data in a field that is returned from a
Hi,
How can I parse the data in a field that is returned from a query?
Basically,
I have a multi-valued field that contains values such as these that are
returned from a query:
"cpe:/o:freebsd:freebsd:1.1.5.1",
"cpe:/o:freebsd:freebsd:2.2.3",
"cpe:/o:freebsd:fre
e of misconfiguration error and all the messages that
Solr gave indicated the import was successful. This lack of appropriate
error reporting is a pain, especially for someone learning Solr.
Switching pk="link" to pk="id" solved the problem and I was then able to
import the data.
by Solr to
indicate this type of misconfiguration error and all the messages that
Solr gave indicated the import was successful. This lack of appropriate
error reporting is a pain, especially for someone learning Solr.
Switching pk="link" to pk="id" solved the problem and I was
Hi,
I have set log4j logging to level DEBUG and I have also modified the
code to see what is being imported. I can see the nextRow() records and
the import reports success; however, I have no data. Can someone
please help me figure this out?
Here is the logging output:
ow: r1={{id=CVE-20
Hi,
I created a custom ZIPURLDataSource class to unzip the content from an
http URL for an XML ZIP file and it seems to be working (at least I have
no errors), but no data is imported.
Here is my configuration in rss-data-config.xml:
https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2002.xml.zi
deletion at all
I guess.
On Fri, Jan 23, 2015 at 7:31 PM, Carl Roberts
wrote:
OK - Thanks for the doc.
Is it possible to just provide an empty value to preImportDeleteQuery to
disable the delete prior to import?
Will the data still be deleted for each entity during a delta-import
instead of full
you can play there once you define it.
You do have to use Curl, there is no built-in scheduler.
Regards,
Alex.
Sign up for my Solr resources newsletter at http://www.solr-start.com/
On 23 January 2015 at 13:29, Carl Roberts wrote:
Hi Alex,
If I am understanding this correctly, I can
Hi,
I am using the DIH RSS example and I am running into a sporadic socket
timeout error during every 3rd or 4th request. Below is the stack trace.
What is the default socket timeout for reads and how can I increase it?
15046 [Thread-17] ERROR org.apache.solr.handler.dataimport.URLDataSource
up for my Solr resources newsletter at http://www.solr-start.com/
On 23 January 2015 at 11:15, Carl Roberts wrote:
Hi,
I have the RSS DIH example working with my own RSS feed - here is the
configuration for it.
https://nvd.nist.gov/download/nvd-rss.xml"
p
Hi,
I have the RSS DIH example working with my own RSS feed - here is the
configuration for it.
https://nvd.nist.gov/download/nvd-rss.xml"
processor="XPathEntityProcessor"
forEach="/RDF/item"
transformer="DateFormatTransformer
I got the RSS DIH example to work with my own RSS feed and it works
great - thanks for the help.
On 1/22/15, 11:20 AM, Carl Roberts wrote:
Thanks. I am looking at the RSS DIH example right now.
On 1/21/15, 3:15 PM, Alexandre Rafalovitch wrote:
Solr is just fine for this.
It even ships with
query down and know what to
expect, it's probably easier to enter "escaping hell" with curl and the
like.
And what is your schema definition for the field in question? The
admin/analysis page can help a lot here.
Best,
Erick
On Thu, Jan 22, 2015 at 3:51 PM, Carl Roberts
Shawn Heisey wrote:
On 1/22/2015 4:31 PM, Carl Roberts wrote:
Hi Walter,
If I try this from my Mac shell:
curl
http://localhost:8983/solr/nvd-rss/select?wt=json&indent=true&q=summary:"Oracle
Fusion"
I don't get a response.
Quotes are a special character to the shell
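For the record, two ways around the combined shell-quoting and URL-encoding problem, sketched with the same query (the phrase is percent-encoded in the first form, and curl does the encoding in the second):

# Single quotes keep the shell away; %22 is a double quote and + is a space
curl 'http://localhost:8983/solr/nvd-rss/select?q=summary:%22Oracle+Fusion+Middleware%22&wt=json&indent=true'

# Or let curl encode the parameter itself (this sends the query as a POST, which Solr accepts)
curl "http://localhost:8983/solr/nvd-rss/select" --data-urlencode 'q=summary:"Oracle Fusion Middleware"' -d wt=json -d indent=true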
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/
On Jan 22, 2015, at 2:47 PM, Carl Roberts wrote:
Hi,
How do you query a sentence composed of multiple words in a description field?
I want to search for sentence "Oracle Fusion Middleware" but when I try the
following search query i
Hi,
How do you query a sentence composed of multiple words in a description
field?
I want to search for sentence "Oracle Fusion Middleware" but when I try
the following search query in curl, I get nothing:
curl "http://localhost:8983/solr/nvd-rss/select?q=summary:Oracle Fusion
Middleware&w
Thanks for the input. I think one benefit of using Solr is also that I
can provide a REST API to search the indexed records.
Regards,
Joe
On 1/21/15, 3:17 PM, Shawn Heisey wrote:
On 1/21/2015 12:53 PM, Carl Roberts wrote:
Is Solr a good candidate to index 100s of nodes in one XML file?
I
You don't need to worry about Stax or anything, unless
your file format is very weird or has overlapping namespaces (DIH XML
parser does not care about namespaces).
Regards,
Alex.
Sign up for my Solr resources newsletter at http://www.solr-start.com/
On 21 January 2015 at 14:53, Carl Ro
Hi,
Is there a way to pass in proxy settings to Solr?
The reason that I am asking this question is that I am trying to run the
DIH RSS example, and it is not working when I try to import the RSS feed
URL because the code in Solr comes back with an unknown host exception
due to the proxy that
very well yet...:)
Many thanks,
Joe
On 1/21/15, 8:47 PM, Carl Roberts wrote:
Hi Shawn,
Many thanks for all your help. Moving the lucene JARs from
solr.solr.home/lib to the same classpath directory as the solr JARs
plus adding a bunch more dependency JAR files and most of the files
fro
g.apache.solr.update.LoggingInfoStream - [IW][main]:
rollback: infos=
[main] INFO org.apache.solr.update.LoggingInfoStream - [IFD][main]: now
checkpoint "" [0 segments ; isCommit = false]
[main] INFO org.apache.solr.update.LoggingInfoStream - [IFD][main]: 0
msec to checkpoint
[main] INFO org.apache.solr.core.SolrCore - [db]
der.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 21 more
On 1/21/15, 7:32 PM, Shawn Heisey wrote:
On 1/21/2015 5:16 PM, Carl Roberts wr
Hi,
Is Solr a good candidate to index 100s of nodes in one XML file?
I have an RSS feed XML file that has 100s of nodes with several elements
in each node that I have to index, so I was planning to parse the XML
with Stax and extract the data from each node and add it to Solr. There
will alw
s=[]
Exception in thread "main" org.apache.solr.common.SolrException: No such
core: db
at
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:112)
at
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:1
ache.solr.schema.IndexSchema -
Reading Solr Schema from
/Users/carlroberts/dev/solr-4.10.3/db/conf/schema.xml
[coreLoadExecutor-5-thread-1] INFO org.apache.solr.schema.IndexSchema -
[db] Schema name=example
false
{}
[]
/Users/carlroberts/dev/solr-4.10.3/
Exception in thread "main" org.apache.solr.common.SolrException
r.Test.main(Test.java:40)
On 1/21/15, 11:50 AM, Alan Woodward wrote:
That certainly looks like it ought to work. Is there log output that you could
show us as well?
Alan Woodward
www.flax.co.uk
On 21 Jan 2015, at 16:09, Carl Roberts wrote:
Hi,
I have downloaded the code and documentation
Hi,
Could there be a bug in the EmbeddedSolrServer that is causing this?
Is it still supported in version 4.10.3?
If it is, can someone please provide me assistance with this?
Regards,
Joe
On 1/21/15, 12:18 PM, Carl Roberts wrote:
I had to hardcode the path in solrconfig.xml from this
uk
On 21 Jan 2015, at 16:09, Carl Roberts wrote:
Hi,
I have downloaded the code and documentation for Solr version 4.10.3.
I am trying to follow SolrJ Wiki guide and I am running into errors. The
latest error is this one:
Exception in thread "main" org.apache.solr.common.SolrExc
Hi,
I have downloaded the code and documentation for Solr version 4.10.3.
I am trying to follow SolrJ Wiki guide and I am running into errors.
The latest error is this one:
Exception in thread "main" org.apache.solr.common.SolrException: No such
core: db
at
org.apache.solr.client.solrj