according to the collection configuration. Regardless of whether this
configuration makes sense, why should it be a problem to have the
replica on the same node?
Can anyone help me figure out how to fix this? I'm really desperate.
Kind regards
---
Chr
eling that this could also be a problem when ZooKeeper
crashes/restarts if some TXs haven't been flushed properly.
I'm using ZooKeeper 3.5.7 and SolrJ 8.4.1. Thanks in advance!
Kind regards
---
Christian Beikov
Software-Architect
rNode=1
curl "$SOLR_LOCAL_URL/solr/admin/collections?action=CREATE&name=${activeCollection}&numShards=${shards}&replicationFactor=${replicationFactor}&collection.configName=products&wt=json&maxShardsPerNode=${maxShardsPerNode}"
# &autoAddReplicas=true
# needed to avoid getting into an add-replica loop:
https://lucene.472066.n3.nabble.com/Autoscaling-using-triggers-to-create-new-replicas-td4415260.html
curl -X POST -H 'Content-Type: application/json' \
"${SOLR_LOCAL_URL}/api/cluster/autoscaling" --data-binary '{
"set-cluster-policy": [
{"replica": "<2", "shard": "#ANY", "node": "#ANY"},
{"replica": "#EQUAL", "shard": "#ANY", "node": "#ANY"}
]
}'
Best regards,
Christian
the
#drupal-support channel.
There is also a dedicated #search channel but in the general one the audience
is larger, and this may be a "Views" question rather than a purely
search-related one.
HTH
Christian
> On 02.12.2019 at 17:18, Erick Erickson wrote:
>
> This is a S
Hmmm... I tried that as well, but it doesn't pick up the security.json
settings.
I run this instance on a computer that is on the internet, so just changing
the port is asking for trouble.
Looks like nobody knows how to import IMAP data.
Best regards,
Christian
On Tue, 26 Nov 2019
Hi Jan,
I'm afraid I don't run in cloud mode, and I get
Failed to parse command-line arguments due to: Missing argument for option:
solrIncludeFile
usage: org.apache.solr.util.SolrCLI
Best regards,
Christian
On Tue, 26 Nov 2019 at 21:30, Jan Høydahl wrote:
> Try
>
> b
dent=on&wt=json&command=status&_=1574803444168} status=0 QTime=0
Best regards,
Christian
On Tue, 26 Nov 2019 at 21:17, Christian Dannemann
wrote:
> Hi Everyone,
>
> I've managed to successfully install solr on my server, and it's running
> and I have created
.sh, rather than in solrconfig.xml...)
The performance is great, and I love the stuff that works, but I find it
very hard to get Solr going, configure it, and get my IMAP dataConfig going.
Any help is greatly appreciated!
Best regards,
Christian
tion by an end user) then you obviously won't need Solarium.
As Paras Lehana said:
"Keep your front-end query simple - just describe your query. All the other
parameters
can be added on the web server side."
... that could then be implemented in your Perl code.
Christian
> Am
program your stuff on a higher level,
against an API.
Christian
> On 25.11.2019 at 07:09, Paras Lehana wrote:
>
> Hey rhys,
>
> What David suggested is what we do for querying Solr. You can figure out
> our frontend implementation of Auto-Suggest by seeing the AJAX request
tra.xml, a file included from our
solrconfig.xml.
HTH,
Christian
> On 16.11.2019 at 03:43, Eric Pugh wrote:
>
> What is the process for adding new Streaming Expressions?
>
> It appears that the org.apache.solr.client.solrj.io.Lang method statically
> loads all the stre
Hello Everyone,
I'm using suggesters with Solr 6.4 to get suggestions for a field with a
decent number of different values across a large number of documents that
is configured like this:
vendorSuggester
BlendedInfixLookupFactory
600
false
DocumentDictionaryFactory
attrib
I came across this posting with exactly the same symptoms in my solr cloud.
Here is what finally repaired my system:
I had an id with fieldType class="solr.TextField"
I changed this to class="solr.StrField"
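For reference, the change amounts to swapping the field type class used by the id field in schema.xml; a rough sketch with hypothetical type/field names (unique keys should use an untokenized string type):

```xml
<!-- before: a tokenized type was used for the uniqueKey field -->
<!-- <fieldType name="id_type" class="solr.TextField" ... /> -->
<!-- after: an untokenized string type -->
<fieldType name="id_type" class="solr.StrField" sortMissingLast="true"/>
<field name="id" type="id_type" indexed="true" stored="true" required="true"/>
```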
Best regards
Christian
--
Sent from: http://lucene.472066.n3.
as not all types we work with support docValues).
Can the two things be related, e.g. can lots of exceptions have a side effect
that leads to solr not responding any more?
Do you have any hints as to what to check for?
Best regards
Christian Spitzlay
--
Christian Spitzlay
Senior Software-
Context filtering, at least using the suggest.cfq parameter, was not
introduced until Solr 6 to my knowledge. Like Edwin, I highly recommend
updating.
On Mon, Oct 8, 2018 at 2:20 PM Manu Nair wrote:
> Hi,
>
> I am using Solr 5.1 for my application.
> I am trying to use the autoSuggest feature of
On Sat, Oct 6, 2018 at 1:04 AM Chris Hostetter
wrote:
> Which is to say: there are no explicit convenience methods for it, but you
> can absolutely use the JSON DSL and JSON facets via SolrJ and the
> QueryRequest -- just add the param key=value that you want, where the
> value is the JSON syntax
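The param-key approach described above (sending the JSON syntax as the plain value of a parameter) can be sketched language-agnostically; a minimal Python illustration using only the standard library, where the facet and field names are hypothetical:

```python
import json
from urllib.parse import urlencode

# Facet spec passed as the plain value of the json.facet parameter;
# the facet name and field here are hypothetical.
facet_spec = {"top_vendors": {"type": "terms", "field": "vendor", "limit": 10}}

params = {
    "q": "*:*",
    "rows": 0,
    "json.facet": json.dumps(facet_spec),
}
# In SolrJ, the same key/value pairs would be added to the QueryRequest's params.
print(urlencode(params))
```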
Hi,
> On 15.06.2018 at 14:54, Christian Spitzlay wrote:
>
>
>> On 15.06.2018 at 01:23, Joel Bernstein wrote:
>>
>> We have to check the behavior of the innerJoin. I suspect that it's closing
>> the second stream when the first stream is finished. Th
nces
but here is the Solarium project's description in their own words:
https://solarium.readthedocs.io/en/latest/
Solarium is under active development. They recently added support
for Solr cloud streaming expressions and for the JSON Facet API
(the latter is in the beta version).
Best r
Hi Christine,
suggesters work differently than regular search as they complete an input
query, usually based on a state machine built from a dictionary. If you
want the similarity of input and suggestion, you can create a search
component to compute it yourself and set the value in the payload fie
Here it is: https://issues.apache.org/jira/browse/SOLR-12499
--
Christian Spitzlay
Diplom-Physiker,
Senior Software-Entwickler
Tel: +49 69 / 348739116
E-Mail: christian.spitz...@biologis.com
bio.logis Genetic Information Management GmbH
Altenhöferallee 3
60438 Frankfurt am Main
Ok. I'm about to create the issue and I have a draft version of what I had in
mind
in a branch on GitHub.
Christian Spitzlay
> On 19.06.2018 at 15:27, Joel Bernstein wrote:
>
> Let's move the discussion to the jira ticket.
>
> Joel Bernstein
> http://joelso
ion that does this. If you want to
> create a ticket we can work through exactly how the operation would work.
>
I'll create an issue tonight at the latest.
Should we take further discussions off the user list
or is it acceptable here?
Christian Spitzlay
--
Christian Spit
that are expected."
From:
https://wiki.apache.org/solr/CommonQueryParameters#rows
Christian Spitzlay
"2",
"k2": "b"
}
]
},
{
"EOF": true,
"RESPONSE_TIME": 0
}
]
}
}
It adds a field "group" that contains an array of the unchanged input documents
with the sa
Shouldn't the receiving end wait for it to appear?
Due to an incredibly low timeout used internally?
Christian Spitzlay
> On 14.06.2018 at 19:18, Susmit wrote:
>
> Hi,
> This may be expected if one of the streams is closed early - does not reach
> to EOF tuple
>
> Sent
"k2": ["e", "f"]
}
into
{
  "k1": "1",
  "k2": ["a", "b"]
},
{
  "k1": "2",
  "k2": ["c", "d", "e", "f"]
}
And an inverse of cartesianProduct() that transforms
{
  "k1": "1",
  "k2": "a"
},
{
  "k1": "2",
  "k2": "b"
},
{
  "k1": "2",
  "k2": "c"
}
into
{
  "k1": "1",
  "k2": ["a"]
},
{
  "k1": "2",
  "k2": ["b", "c"]
}
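The inverse operation described above (collapsing tuples that share a key back into one tuple with a multivalued field) is easy to sketch client-side; a minimal Python illustration, with the key/field names taken from the example:

```python
from collections import OrderedDict

def group_tuples(tuples, key="k1", field="k2"):
    """Inverse of cartesianProduct(): collect the single-valued `field`
    of all tuples sharing the same `key` value into one multivalued list."""
    grouped = OrderedDict()
    for t in tuples:
        grouped.setdefault(t[key], []).append(t[field])
    return [{key: k, field: vals} for k, vals in grouped.items()]

docs = [
    {"k1": "1", "k2": "a"},
    {"k1": "2", "k2": "b"},
    {"k1": "2", "k2": "c"},
]
print(group_tuples(docs))
# [{'k1': '1', 'k2': ['a']}, {'k1': '2', 'k2': ['b', 'c']}]
```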
Christian
results
the exception doesn’t occur either.
BTW: Do you know of any tool for formatting and/or syntax highlighting
these expressions?
Christian Spitzlay
> On 13.06.2018 at 23:02, Joel Bernstein wrote:
>
> Can your provide some example expressions that are causing these except
exception that
suggests something went wrong.
I don’t understand why this happens when the query seems to have succeeded.
Best regards,
Christian
e for this.
>
>
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Fri, Jun 8, 2018 at 3:41 AM, Christian Spitzlay <
> christian.spitz...@biologis.com> wrote:
>
>> Hi,
>>
>>
>>> On 08.06.2018 at 03:42, Joel Bernstein wrote:
the fields are being transposed with intersect
> function's "on" fields. The same issue was happening with joins and may
> have been resolved. I'll do a little more research into this.
Thanks for your work on this!
Best regards
Christian Spitzlay
> Joel Bernst
> On 07.06.2018 at 11:34, Christian Spitzlay wrote:
>
> intersect(
> cartesianProduct(tuple(fieldA=array(a,b,c,c)), fieldA, productSort="fieldA
> asc"),
> cartesianProduct(tuple(fieldB=array(a,c)), fieldB, productSort="fieldB asc"),
> on="
tand the docs of the intersect decorator or have I come across a
bug?
Best regards,
Christian Spitzlay
> On 06.06.2018 at 10:18, Christian Spitzlay wrote:
>
> Hi,
>
> I don’t seem to get the behaviour of the intersect() stream decorator.
> I only ever get one doc from
e in a situation where one of the entities in the original
system is linked to itself. I haven't finished analysing the problem yet
but I wondered whether there was an easy way to rule out that
cycle detection is causing it.
Best regards,
Christian Spitzlay
> christian.spitz...@bio
the consequences would it be possible?
Best regards
Christian Spitzlay
},
{
"EOF": true,
"RESPONSE_TIME": 0
}
]
}
}
I would have expected all the docs from the left stream with fieldA values a,
c, d
and only the docs with fieldA == b missing. Do I have a fundamental
misunderstanding?
Best regards
Christian Spitzlay
inherit the problem.
Is this only about double quotes or are there other meta characters that will
work
with backslash-escaping in non-streaming queries but will not parse as part of
streaming expressions?
Christian Spitzlay
> On 24.05.2018 at 18:55, Joel Bernstein wrote:
>
&g
which is to be expected as there is no document with a quote in the name at
the moment).
Is there a correct way to escape the double quote in a streaming expression?
Best regards
Christian
I need to enable the NER plugin in Solr 6.x in order to extract locations from
the text when committing documents to Solr.
How can I achieve this in the simplest way possible? Please help.
Christian Fotache
Tel: 0728.297.207
Please help.
Thank you,
Christian Fotache
Tel: 0728.297.207
Hello
I am trying to update large numbers of documents (mostly ADD/DELETE) through
various threads.
After a certain amount of time (a few hours) all my threads get stuck at
taskExecutor-46" prio=5 tid=0x268 nid=0x10c BLOCKED owned by
taskExecutor-9 Id=230 - stats: cpu=2788 blk
Hello everyone,
I'm working on a query correction feature based on collations generated by
the spellchecker. This works like a charm, except when numeric tokens are
present in the query. In that case, I don't get any corrections for the
number, although corrections for textual tokens are still mad
he newly attached SSD. Please help.
Kind regards,
Christian
Hi,
there has been a JIRA issue[0] for a long time that contains some patches
for multiple releases of Solr that implement this functionality. Whether those
patches still work in recent versions is a different question, and the
issue has been resolved as Won't Fix.
Personally, I think starting multi
Hi Matthew,
your problem sounds like you want to run something alongside Solr, that
probably uses Solr. Since current versions of Solr basically require you to
go over HTTP, you could deploy the thing you would like to run in the root
context in a separate application container that accesses Solr
Hi,
the admin console is backed by a JSON API. You can run the same requests it
uses programmatically. Find them easily by checking your browser debug
tools' networking tab.
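As a concrete illustration of the point above, each admin UI view maps to a plain HTTP request that any client can issue; a Python sketch that only builds the URL for the core status call (host and port are assumptions, and no request is actually sent):

```python
from urllib.parse import urlencode

# Assumed local host/port; adjust for your deployment.
base = "http://localhost:8983/solr/admin/cores"
params = {"action": "STATUS", "wt": "json"}
url = base + "?" + urlencode(params)
print(url)
# http://localhost:8983/solr/admin/cores?action=STATUS&wt=json
```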
Regards,
Chris
On Fri, Sep 30, 2016 at 10:29 AM, subinalex wrote:
> Hi Guys,
>
> We are running back to back solr indexin
Hello everyone,
We're in the process of upgrading a service from Solr 4.4 to Solr 6. While
comparing result quality between the two versions, I found that a result's
phrase query score now contains the highest scoring field. In Solr 4.4, the
sum of all matching fields' scores was added to the tota
Hi Ahmet, Hi Upayavira,
OK, it seems that I have to dive a bit deeper in the Solr filters and
tokenizers. I've just realized that my command of them is too limited.
Thanks a lot for your help so far, guys. Cheers and have a nice day,
christian
-Original Message-
From: Ahmet Arslan [mailto
field:
...
...
What is wrong with this schema? Or rather, what should I change to be able
to do wildcard searches correctly?
Many thanks for your time. Cheers,
christian
--
Christian Ribeaud
Software Engineer (External)
NIBR / WSJ-310.5.17
Novartis Campus
CH-4056
ng output:
"debug": {
"rawquerystring": "roch?",
"querystring": "roch?",
"parsedquery": "text:roch?",
"parsedquery_toString": "text:roch?",
"explain": {},
"QParser": "LuceneQParser",
...
Any idea? Thanks and cheers,
christian
Hi,
Can I somehow feed Solr with a result set or a list of primary keys and get
the shortest query that leads to this result? In other words, can I reverse
engineer a query for a given result set?
Some background why I ask this question:
We are currently migrating a search application from Oracle
er (e.g.
200), the response time
gets terribly slow.
As we need only the number of documents per group and never the score, or some
other data of the
documents, we are wondering if there is a faster method to get this information.
Thanks
Christian
-midwest/1920577/%22}&rows=10&fl=dataEntityId,title,creator,score&wt=json
thanks again,
Christian
Walter wrote:
> Right.
>
> I chose the twenty most frequent terms from our documents and use those for
> cache warming.
> The list of most frequent terms is pretty stabl
field loading as a potential
issue, but with no
success.
Cheers,
Christian
On 26/08/15 18:05, Erick Erickson wrote:
> bq: my dog
> has fleas
> I wouldn't want some variant of "og ha" to match,
>
> Here's where the mysterious "positionIncrementGap" comes in. If you
> make this field "multiValued", and index this like this:
>
> my dog
> has fleas
>
>
> then the positi
the additional \R tokenizer in the index chain
because the document can be multiple lines (but the search text is
always a single line) and if the document was
my dog
has fleas
I wouldn't want some variant of "og ha" to match, but I didn't realize
it didn't give me any
urned.
So somehow I'd need to express the query "content:.10 content:100
content:00. content:0.2 content:.22" with *the tokens exactly in this
order and nothing in between*. Is this somehow possible, maybe by using
the termvectors/termpositions stuff? Or am I trying to do something
that's fundamentally impossible? Other good ideas how to achieve this
kind of behaviour?
Thanks
Christian
anybody have an idea?
Regards
Christian
Hi,
please add me to the contributors group.
Username: ChristianMarquardt <https://wiki.apache.org/solr/ChristianMarquardt>
Best Regards
Christian
Best regards
Christian Marquardt
Tannenweg 43
86391 Stadtbergen
+49-179-9735764
christianmarqua...@gmx.net
An alternative might be to lower its relevancy for certain words.
Would that be possible?
On 30.05.2014 11:55, Christian Loock wrote:
Hi,
well we have a product search which often will return products one
might not expect because they contain some sort of reference to other
products
:52, Jack Krupansky wrote:
Explain your use case a little more, but you can define terms as stop
words with a stop filter, which means they won't appear in the index.
-- Jack Krupansky
-Original Message- From: Christian Loock
Sent: Friday, May 30, 2014 5:38 AM
To: solr
Hi,
is there a way to block a document from being found when you search in a
certain way?
Cheers,
Christian
--
Christian Loock
Web Developer
Renzel Agentur
www.renzel-agentur.de
ed*, or what
> kind of FieldType/Analyzer was used when it was indexed.
>
> However: if you have the schema.xml from the Solr server that created the
> index, (or a copy of the code that created the index if it was made w/o
> SOlr using low level Lucene method calls) then you should have everything
> you need.
>
> -Hoss
> http://www.lucidworks.com/
>
--
Christian Bongiorno
I will give this a try. Thanks.
On Fri, Dec 13, 2013 at 1:15 PM, Greg Walters wrote:
> Christian,
>
> I literally did this 10 minutes ago for an internal example. You need to
> issue a RELOAD for your index to open a new searcher using the updated
> files. Here's an example
across as such. I have been
searching around for this scenario and haven't seen any discussion on it
--
Christian Bongiorno
> examples are set up already.
>
> With that config, the "qt" parameter will not function and will be
> ignored -- you must use the request handler path as part of the URL --
> /solr/corename/handler.
Great, thanks! I already had it this way but I wasn't aware of these fine
details, very helpful.
Christian
Hi Gora,
thanx for pointing me in the right direction. The problem was indeed
that some ids were not unique.
Regards
Chris
On 12.11.2013 17:05, Gora Mohanty wrote:
> On 12 November 2013 21:29, Köhler Christian wrote:
>> Hi!
>>
>> I experience a mismatch between
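Solr's overwrite semantics explain the mismatch described in this thread: documents sharing a uniqueKey replace each other, so N submitted rows with duplicate ids end up as fewer index documents. A Python sketch of the effect (the field values are made up):

```python
# Rows as they come from the data source; the second "1" is a duplicate id.
rows = [
    {"id": "1", "title": "first"},
    {"id": "2", "title": "second"},
    {"id": "1", "title": "first, resubmitted"},
]
# Solr keeps only the last document seen for each uniqueKey.
index = {}
for doc in rows:
    index[doc["id"]] = doc
print(len(rows), "rows submitted,", len(index), "documents indexed")
# 3 rows submitted, 2 documents indexed
```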
Hi!
I experience a mismatch between the number of indexed documents and the
number of documents actually in the solr index. I can not find any
reason for this in the log files. How do I find out why some documents
are deleted from the index?
Setup:
Solr 4.4 using DIH fetching 1000 rows from a My
//reverse-proxy/mapping-to-solr/searchui_client*
All other paths are blocked at the reverse proxy level.
So I'm worried about something that uses these URL paths, say
https://reverse-proxy/mapping-to-solr/searchui_client?qt=update&commit=true&stream.body=*:*
to stay with your example.
an be reasonably safe?
Thanks
Christian
facets on individual lat/lon points.
What is the use-case you are trying to support here?
Best,
Erick
On Tue, Sep 10, 2013 at 8:43 AM, Christian Köhler - ZFMK
wrote:
> Hi,
>
> I use the new SpatialRecursivePrefixTreeFiel**dType field to store geo
> coordinates (e.g. 14.021666,51.543
ty:"t4m70cmvej9";
java.lang.IllegalArgumentException: missing parens: t4m70cmvej9
How do I get the results of a single location using faceting?
Any thoughts?
Regards
Chris
--
Christian Köhler
Zoologisches Forschungsmuseum Alexander Koenig
Leibniz-Institut für Biodiversität der Tiere
Adenauerallee
estart the Solr instance
if you're paranoid. And you have to re-index
for changes in the index part of the analysis
chain to take effect.
Best
Erick
On Tue, Sep 3, 2013 at 6:33 AM, Christian Loock wrote:
On 03.09.2013 12:11, pravesh wrote:
SOLR has a nice analysis page. You can use it to
On 03.09.2013 12:11, pravesh wrote:
SOLR has a nice analysis page. You can use it to get insight what is
happening after each filter is applied at index/search time
Regards
Pravesh
--
View this message in context:
http://lucene.472066.n3.nabble.com/Problem-with-Synonyms-tp4087905p4087915.
Hello,
this is my first time writing to this mailing list, so hello everyone.
I am having issues with synonyms.
I added the synonym to one of my field types:
I also added some synonyms
" or "eu+as"), but that has an impact on queries.
But the script also handles multivalued fields (one value at a time),
and nested multivalued fields are not supported.
Thoughts?
-- Jack Krupansky
-----Original Message- From: Christian Köhler - ZFMK
Sent: Tuesday, August
e...
Erick
On Tue, Aug 6, 2013 at 12:38 PM, Walter Underwood wrote:
Would synonyms help? If you generate the query terms for the continents,
you could do something like this:
usa => continent-na
canada => continent-na
germany => continent-europe
und so weiter.
wunder
On Aug 6
Hi,
On 06.08.2013 12:56, Raymond Wiker wrote:
Another option might be to use a pre-existing web service... it should be
relatively easy to add that to your dataimporthandler configuration (if
you're using DIH, that is :-)
A quick Google search gave me http://www.geonames.org; see
http://www.g
On 05.08.2013 15:52, Jack Krupansky wrote:
You can write a brute force JavaScript script using the StatelessScript
update processor that hard-codes the mapping.
I'll probably do something like this. Unfortunately I have no influence
on the original db itself, so I have to fix this in Solr.
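The "brute force" mapping suggested above is just a hard-coded lookup applied to each document before indexing; the logic can be sketched as follows (in Python for illustration; the actual StatelessScript update processor script would be JavaScript, and the country names here are a hypothetical subset):

```python
# Hard-coded country -> continent lookup (hypothetical subset of the real list).
COUNTRY_TO_CONTINENT = {
    "usa": "North America",
    "canada": "North America",
    "germany": "Europe",
}

def add_continent(doc):
    """Derive a 'continent' field from the 'country' field at index time."""
    continent = COUNTRY_TO_CONTINENT.get(doc.get("country", "").lower())
    if continent is not None:
        doc["continent"] = continent
    return doc

print(add_continent({"id": "1", "country": "Germany"}))
# {'id': '1', 'country': 'Germany', 'continent': 'Europe'}
```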
Chee
Hi,
to have a database table holding the relationships between countries and
continents, and using a join to get the continent.
I forgot to mention: I only have reading access to the database.
Regards
Chris
--
Christian Köhler
Tel.: 0228 9122-433
Zoologisches Forschungsmuseum Alexander
Hi,
please excuse the multiple emails to the list. There is a mailserver
issue - our admin has fixed it (he said ...).
@ list-admin: you may delete my previous duplicate mails (9:59, 10:01 and
10:34) from the list.
Sorry for the noise!
Chris
--
Christian Köhler
Tel.: 0228 9122-433
Hi,
I am indexing data from a mysql data source. Each record contains the
field "country". I am looking for a suitable way to create a field
"continent" at indexing time. A list with the information country ->
continent is given.
With my limited knowledge of solr writing a script and calling it
Hi,
I am indexing data from a mysql data source. Each record contains the
field "country". I am looking for a suitable way to create a field
"continent" at indexing time. A list with the information country ->
continent is given.
Writing a script and calling it as a transformer in the sql query
Hi,
i'm totally confused ... DIH == DataImportHandler ... it's just an
acronym, you say you aren't using DIH, but you are having a problem
loading DIH, so DIH is used in your configs.
sorry for the confusion. I was just trying to say:
I use the example code from
solr-4.3.0/example/solr
and no
ivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
at
java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:789)
at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
at java.lang.Class.forNa
Hi,
thanx for pointing this out to me.
1152 [coreLoadExecutor-3-thread-1] INFO org.apache.solr.core.SolrConfig
– Adding specified lib dirs to ClassLoader
org.apache.solr.core.SolrResourceLoader – Adding
'file:/home/christian/zfmk/solr/solr-4.3.0/example/lib/mysql-connector-java-5
Hi,
in my attempt to migrate from 3.6.x to 4.3.0 I stumbled upon an issue
loading the MySQL driver from the [instance]/lib dir:
Caused by: java.lang.ClassNotFoundException:
org.apache.solr.handler.dataimport.DataImportHandler
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at j
g all processes. In that case your caches might be too big, and you
should experiment with decreasing their size. You should be able to profile the
JVM to monitor garbage collection.
Med venlig hilsen / Best Regards
Christian von Wendt-Jensen
IT Team Lead, Customer Solutions
Infopaq Internatio
Actually no, I didn't. But I can see that I should have. Thanks!
Med venlig hilsen / Best Regards
Christian von Wendt-Jensen
IT Team Lead, Customer Solutions
Infopaq International A/S
Kgs. Nytorv 22
DK-1050 København K
Phone +45 36 99 00 00
Mobile +45 31 17
Christian von Wendt-Jensen
IT Team Lead, Customer Solutions
Infopaq International A/S
Kgs. Nytorv 22
DK-1050 København K
Phone +45 36 99 00 00
Mobile +45 31 17 10 07
Email
christian.sonne.jen...@infopaq.com
disc? Local SSD? Softcommit?
Med venlig hilsen / Best Regards
Christian von Wendt-Jensen
IT Team Lead, Customer Solutions
Infopaq International A/S
Kgs. Nytorv 22
DK-1050 København K
Phone +45 36 99 00 00
Mobile +45 31 17 10 07
Email
christian.sonn
.
Issues:
#1: Storage: To use SAN or not.
#2: Cores per instance: what is ideal?
#3: Size of cores: is 14 days optimal?
#4: Performance when searching across shards.
#5: Would SolrCloud be the solution for us?
Med venlig hilsen / Best Regards
Christian von Wendt-Jensen
IT Team Lead, Customer
Tomcat/6.0.35
MySQL 5.1.50
Kind Regards,
Christian
I would
bet this has already been done. I would like to avoid multiple trips back
and forth from either the DB or SOLR if possible.
Thanks!
Christian
--
*Christian Jensen*
724 Ioco Rd
Port Moody, BC V3H 2W8
+1 (778) 996-4283
christ...@jensenbox.com
100k configs is a problem of its own.
You may look for another solution, e.g. split the user base into a small number of
cores by use case and try to cover their needs. (By the way, solrconfig.xml and
schema.xml are ultra flexible and will most likely cover 95% of your requirements.)
Regards,
Christian Bordis
-U
nks for reading! ^^
Kind Regards,
Christian Bordis
Regards
Christian von Wendt-Jensen
IT Team Lead, Customer Solutions
Infopaq International A/S
Kgs. Nytorv 22
DK-1050 København K
Phone +45 36 99 00 00
Mobile +45 31 17 10 07
Email
christian.sonne.jen...@infopaq.com
d then be up-to-date without copying any files.
Would this setup be possible?
Med venlig hilsen / Best Regards
Christian von Wendt-Jensen
IT Team Lead, Customer Solutions
Infopaq International A/S
Kgs. Nytorv 22
DK-1050 København K
Phone +45 36 99 00 00
Mobile +