>> ... processor in-between?
>>
>> What are your settings for hard/soft commit?
>>
>> For the shard going to recovery - do you have a log entry or something?
>>
>> What is the Solr version?
>>
>> How do you set up ZK?
>>
>> > On 10.08.2020 at 16:24, Anshuman Singh wrote:
Good question, what do the logs say? You’ve provided very little information
to help diagnose the issue.
As to your observation that atomic updates are expensive, that's true. Under
the covers, Solr has to go out and fetch the document, overlay your changes,
and then re-index the full document.
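That fetch/overlay/re-index cycle can be sketched in a few lines of Python. This is an illustrative model of the semantics only, not Solr internals; the document and field names are invented:

```python
def apply_atomic_update(stored_doc, update):
    """Overlay atomic-update operations onto the fetched stored document."""
    doc = dict(stored_doc)  # step 1: fetch (here: copy) the existing doc
    for field, ops in update.items():
        for op, value in ops.items():
            if op == "set":
                doc[field] = value
            elif op == "inc":
                doc[field] = doc.get(field, 0) + value
            elif op == "add":
                doc[field] = doc.get(field, []) + [value]
    return doc  # step 2: the *full* doc is then re-indexed, not just the changed fields

doc = {"id": "1", "views": 10, "tags": ["a"]}
updated = apply_atomic_update(doc, {"views": {"inc": 5}, "tags": {"add": "b"}})
```

The point of the sketch: even a one-field "inc" costs a full document fetch plus a full re-index, which is why heavy atomic-update workloads are so much more expensive than they look.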
Hi,
We have a SolrCloud cluster with 10 nodes. We have 6B records ingested in
the Collection. Our use case requires atomic updates ("inc") on 5 fields.
Now almost 90% documents are atomic updates and as soon as we start our
ingestion pipelines, multiple shards start going into recovery,
-of-documents.html>
) and found these two contradicting statements:
1. /The first is atomic updates. This approach allows changing only one or
more fields of a document without having to reindex the entire document./
Here is how I would rewrite that paragraph to make it correct. The
asterisks rep
Shawn Heisey-2 wrote
> Atomic updates are nearly identical to simple indexing, except that the
> existing document is read from the index to populate a new document
> along with whatever updates were requested, then the new document is
> indexed and the old one is deleted.
Sure, np.
We did the same workaround for a long period, but eventually it did hurt our
application performance, and partial atomic updates to the parent doc improved
this significantly (20-30x faster than sending whole docs).
Regards,
Adi
-Original Message-
From: Ludger Steens
Sent: Wednesday, June
-Original Message-
From: Kaminski, Adi
Sent: Sunday, June 7, 2020 08:45
To: solr-user@lucene.apache.org
Subject: RE: Atomic updates with nested documents
Hi Ludger,
We had the same issue with Solr 7.6, and after discussing with the
Hi
I'm trying to do atomic updates with an 'add-distinct' modifier in a Solr 7
cloud. It seems to behave like an 'add' and I end up with double values in
my multiValued field. This only happens with multiple values for the field
in an update (cat:{"add-distinct
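For reference, the documented difference between "add" and "add-distinct" can be sketched in plain Python (illustrative only, not Solr code; the report above is that "add-distinct" behaves like "add" when several values are sent at once):

```python
def apply_multivalue_op(current, op, values):
    """Apply an 'add' or 'add-distinct' atomic-update op to a multiValued field."""
    values = values if isinstance(values, list) else [values]
    if op == "add":
        return current + values  # may introduce duplicates
    if op == "add-distinct":
        # only append values that are not already present
        return current + [v for v in values if v not in current]
    raise ValueError("unsupported op: " + op)

cat = ["a", "b"]
added = apply_multivalue_op(cat, "add", ["b", "c"])
distinct = apply_multivalue_op(cat, "add-distinct", ["b", "c"])
```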
the above-mentioned changes.
Regards,
Adi
-Original Message-
From: Ludger Steens
Sent: Friday, June 5, 2020 3:24 PM
To: solr-user@lucene.apache.org
Subject: Atomic updates with nested documents
Dear Community,
I am using Solr 7.7 and I am wondering how it is possible to do a partial
update on nested documents / child documents.
Suppose I have committed the following documents to the index:
[
  {
    "id": "1",
    "testString": "1",
    "testInt": "1",
    "_childDocuments_": [
>>>> supporting get-by-id need to be.
>>>>
>>>> Anyway, best of luck
>>>> Erick
>>>>
>>>>> On Dec 9, 2019, at 1:05 AM, Paras Lehana
>>>> wrote:
>>>>>
>>>>> Hi Erick,
ginal values and yes, I did see
> improvement. I
> >>> will collect more stats. *Thank you for helping. :)*
> >>>
> >>> Also, here is the reference article that I had referred for changing
> >>> values:
> >>>
> >>
> https://www.
ghts-re-indexing-7-million-books-part-1
>
>> Hey Erick,
>>
>> We have just upgraded to 8.3 before starting the indexing. We were on 6.6
>&
-indexing-7-million-books-part-1
The article was perhaps for normal indexing and thus, suggested increasing
mergeFactor and then finally optimizing. In my case, a large number of
segments could have impacted get-by-id of atomic updates? Just being
curious.
On Fri, 6 Dec 2019 at 19:02, Paras Lehana
2> you're running in too little memory and eventually GC is killing
> you.
> >> Really, analyze your GC logs. OR
> >> 3> you are running on underpowered hardware which just can’t take the
> load
> >> OR
> >> 4> something else in your environment
> >>
>> On Dec 5, 2019, at 12:57 AM, Paras Lehana
>> wrote:
>>>
>>> Hey Erick,
>>>
>>> This is a huge red flag to me: "(but I could only test for the first few
>>>> thousand documents”.
>>>
>>>
Yup, that's probably where the culprit lies. I could only test for the
starting batch because I had to wait for a day to actually compare. I
tweaked the merge values and kept whatever gave a speed boost. My first
batch of 5 million docs took only 40 minutes (atomic updates included) and
the last batch of 5 million took more than 18 hours. If this is an issue of
mergePolicy, I think I should have also done optimize between batches, no?
I remember, when I indexed a sin
This is a huge red flag to me: "(but I could only test for the first few
thousand documents”
You’re probably right that that would speed things up, but pretty soon when
you’re indexing
your entire corpus there are lots of other considerations.
The indexing rate you’re seeing is abysmal unless t
I read somewhere that this could speed up indexing.
But you're probably right - I'm doing atomic updates and this could impact
performance of get by id. That's what you are saying, right?
Why did you increase RamBufferSizeMB?
2G seems to be the hard limit for int overflow. I read this
lookup of the ID has anything to do with
your slowdown? If I'm reading this right, you do atomic updates on 50M docs
_then_ things get slow. If it was a lookup issue I should think it'd
be a problem for the first 50M docs.
So here’s what I’d do:
1> go back to the defaults for TieredMergePolicy
Hi Community,
We occasionally reindex the whole data set of our Auto-Suggest corpus. Total
documents to be indexed are around 250 million while, due to atomic
updates, the total number of unique documents after full indexing converges
to 60 million.
We have to atomically index documents to store different names for
Hello Community,
I've discovered a data loss bug and couldn't find any mention of it. Please
confirm this bug hasn't been reported yet.

Description:

If you try to update non pre-analyzed fields in a document using atomic
updates, data in pre-analyzed fields (if there is any) will be lost. The
bug was discovered in Solr 8.2 and 7.7.2.

Steps to reproduce:

1. Index this document
... not for sorting, faceting or highlighting -
should I use docValues=true or stored=true to enable atomic updates? Or even
both? I understand that either docValues or stored fields are needed for
atomic updates but which of the two would perform better / consume less
resources in this scenario?
Thank you.
Best regards,
Andreas
Hi,
Solr supports atomic updates as described at
https://lucene.apache.org/solr/guide/7_6/updating-parts-of-documents.html#atomic-updates
But I wonder how to create a streaming expression that does atomic updates. We
want to search for documents matching given criteria and update a
Hi Shawn,
When I query the document, category field is shown in search.
Thanks
Rajeswari
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Wednesday, November 21, 2018 11:38 AM
To: solr-user@lucene.apache.org
Subject: Re: Error:Missing Required Fields for Atomic
On 11/20/2018 9:07 PM, Rajeswari Kolluri wrote:
Schema version is 1.6 and Solr version is 7.5.
While creating a document, I have provided the required information for field
"category".
Now, I would like to update the document for another set of fields using Atomic
Update, but not the category field.
For atomic updates to work either Stored or docValues is to be true. Current
configuration satisfies this condition.
" The core functionality of atom
bq. While creating a document , I have provide the required
information to field "category".
But you did not store it, you have stored="false" for that field.
Atomic updates require that all source fields are stored. What happens
under the covers is that the stored data
mic update, exception caught with "Missing required field
on category".
Thanks
Rajeswari
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Tuesday, November 20, 2018 8:38 PM
To: solr-user@lucene.apache.org
Subject: Re: Error:Missing Required Fields for Atom
On 11/19/2018 9:19 PM, Rajeswari Kolluri wrote:
Below is part of schema , entityid is my unique id field. Getting exception missing
required field for "category" during atomic updates.
Your category field is not stored. And it's required. It does have
docVal
red Fields for Atomic Updates
What is the Router name for your collection? Is it "implicit" (You can know
this from the "Overview" of you collection in the admin UI) ? If yes, what is
the router.field parameter the collection was created with?
Rahul
On Mon, Nov 19, 2018
Hi Rahul

Below is part of schema, entityid is my unique id field. Getting an exception
"missing required field" for "category" during atomic updates.

entityid
required="true" multiValued="false" />

Thanks
Rajeswari
-Original Message-
From: Rahul Goswami [mailt
What’s your update query?
You need to provide the unique id field of the document you are updating.
Rahul
On Mon, Nov 19, 2018 at 10:58 PM Rajeswari Kolluri <
rajeswari.koll...@oracle.com> wrote:
> Hi,
>
>
>
>
>
> Using Solr 7.5.0. While performing atomic upd
Hi,
Using Solr 7.5.0. While performing atomic updates on a document on Solr Cloud
using SolrJ, getting exceptions "Missing Required Field".
Please let me know the solution; I would not want to update the required fields
during atomic updates.
Thanks
Rajeswari
On 11/14/2018 10:35 AM, Jon Kjær Amundsen wrote:
It is not that I want it.
I just can't reproduce it even though I read it as an expected behaviour.
So I wondered if something has been changed since the warning was written,
or if I had misunderstood something.
To my knowledge, nothing has chan
essor, perhaps ScriptUpdateProcessor, to do the right thing on a per-field basis.

Best,
Erick
On Wed, Nov 14, 2018 at 7:56 AM Jon Kjær Amundsen wrote:
Reading up on atomic updates, the Solr reference guide states the following:
The core functionality of atomically updating a document requires that all
fields in your schema must be configured as stored (stored="true") or
docValues (docValues="true") except for fields whi
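The reason for that requirement can be illustrated with a small Python sketch (not Solr code; the field names and the RETRIEVABLE set are invented): an atomic update rebuilds the document from the fields Solr can read back, so a field that is indexed but neither stored nor docValues is silently lost on re-index.

```python
RETRIEVABLE = {"id", "title"}   # fields with stored="true" or docValues="true"
INDEX_ONLY = {"body"}           # indexed="true" stored="false", no docValues

def atomic_update(indexed_doc, changes):
    """Rebuild the doc from retrievable fields, then apply the update."""
    rebuilt = {k: v for k, v in indexed_doc.items() if k in RETRIEVABLE}
    rebuilt.update(changes)
    return rebuilt  # "body" is gone from the re-indexed document

doc = {"id": "1", "title": "old", "body": "searchable text"}
updated = atomic_update(doc, {"title": "new"})
```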
Thanks, Shawn. That helps with the meaning of the "solr" format.
Our needs are pretty basic. We have some upstream processes that crawl
the data and generate a JSON feed that works with the default post
command. So far this works well and keeps things simple.
Thanks!
...scott
On 9/1/18 9:26
On 8/31/2018 7:18 PM, Scott Prentice wrote:
Yup. That does the trick! Here's my command line ..
$ ./bin/post -c core01 -format solr /home/xtech/solrtest/test1b.json
I saw that "-format solr" option, but it wasn't clear what it did.
It's still not clear to me how that changes the endpoint t
points there. /update by default vs
/update/json
So i think the post gets treated as generic json parsing.
Can you try the same end point?
Regards,
Alex
On Fri, Aug 31, 2018, 7:05 PM Scott Prentice wrote:
Just bumping this post from a few days ago.
Is anyone using atomic updates? If so, how are you passing the updates
to Solr? I'm seeing a significant difference between the REST API and
the post command .. is this to be expected? What's the recommended
method for doing the update
Hi...
I'm trying to get atomic updates working and am seeing some strangeness.
Here's my JSON with the data to update ..
[{"id":"/unique/path/id",
"field1":{"set":"newvalue1"},
"field2":{"set":"newvalue2"}}]
Sami
Why not do the simple case first, with complete document updates. When you have
that working, you can decide if you want atomic updates too.
Cheers -- Rick
On March 6, 2018 2:26:50 AM EST, Sami al Subhi wrote:
>Thank you for replying,
>
>Yes that is the one. Unfortunately th
Thank you for replying,
Yes that is the one. Unfortunately there is no documentation for this
library.
I tried to implement other libraries but I couldn't get them running. This
is the easiest library to implement but lacks support and documentation.
Thank you and best regards,
Sami
e.php" file, more specifically the function "_documentToXmlFragment",
and it does not seem it supports "atomic updates". Am I correct? Is the only
way to edit "_documentToXmlFragment" to support updates?
The php clients for Solr are third-party software. None
bq: However if i dont have majority of other column data while doing update
operations, is it better to go with atomic update?
I don't understand what you're asking. To use Atomic Updates, _every_
original field (i.e. any field that is _not_ the destination of a
copyField directiv
3> change the values in the doc based on the atomic update
4> re-index the doc just as though it had received it from a client.

Whereas if you just send the doc from an external client Solr has to
1> de-serialize the doc
2> index it (identical to step 4 above)

The sweet spot for Atomic Updates is when you can't easily get the
original document from the system-of-record.

Best,
Erick
Can you please let me know what will be the performance impact of trying to
update 120Million records in a collection containing 1 billion records.
The collection contains around 30 columns and only one column out of it is
updated as part of atomic update.
It's not a batch update, the 120 million up
On 1/8/2018 10:17 PM, kshitij tyagi wrote:
1. Does in place updates opens a new searcher by itself or not?
2. As the entire segment is rewriten, it means that frequent in place
updates are expensive as each in place update will rewrite the entire
segment again? Correct me here if my understanding
>> ...internal information.
>
> Atomic updates are nearly identical to simple indexing, except that the
> existing document is read from the index to populate a new document along
> with whatever updates were requested, then the new document is indexed and
> the old one is deleted.
On 1/8/2018 4:05 AM, kshitij tyagi wrote:
Hi,
What are the major differences between atomic and in-place updates, I have
gone through the documentation but it does not give detail internal
information.
1. Does doing in-place update prevents solr cache burst or not, what are
the benefits of using in-place updates?
I want to update one of
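The key difference asked about above can be sketched in plain Python (an illustrative model, not Solr internals; the two dicts stand in for the inverted index and the columnar docValues storage). An atomic update rewrites the whole document, while an in-place update only changes a numeric docValues entry and never re-indexes the document:

```python
indexed_docs = {"1": {"id": "1", "title": "hello"}}  # inverted-index side
doc_values = {"popularity": {"1": 10}}               # columnar docValues side

def in_place_update(doc_id, field, value):
    """Only the docValues column is touched; the indexed document is untouched."""
    doc_values[field][doc_id] = value

in_place_update("1", "popularity", 42)
```

This is why in-place updates are restricted to non-indexed, non-stored, single-valued docValues fields: only then can Solr skip the fetch-and-re-index cycle entirely.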
Hello,
as Amrit mentioned, I attached the schema.xml of such an index. Perhaps there
is something to find in it.
The responses of update and commit look quite normal:
{responseHeader={status=0,QTime=2}}
{responseHeader={status=0,QTime=14}}
After committing the fields
fld_BA1F56CD9C87419CB9A271D
Hi Martin,
I tested the same application SolrJ code on my system, it worked just fine
on Solr 6.6.x. My Solrclient is "CloudSolrJClient", which I think doesn't
make any difference. Can you show the response and field declarations if
you are continuously facing the issue.
Amrit Sarkar
Search Engin
Hello,
I’m trying to Update a field in a document via SolrJ. Unfortunately, while the
field itself is updated correctly, values of some other fields are removed.
The code looks like this:
SolrInputDocument updateDoc = new SolrInputDocument();
updateDoc.addField("id", "1234");
Map updateValue =
ue("classification")
> + ":" + originalDoc.getFieldValue("id")));
> doc.addField("systemModified",
> Collections.singletonMap("set", originalDoc.getFieldValue("lastModified")));
>
"set", originalDoc.getFieldValue("lastModified")));
doc.addField("_version_",
originalDoc.getFieldValue("_version_"));
return doc;
}
On 21.07.2017 22:33, Amrit Sarkar wrote:
Hendrik,
Ran a little test on 6.3, wi
Hendrik,
Ran a little test on 6.3, with infinite atomic updates with optimistic
concurrency,
cannot *reproduce*:
List docs = new ArrayList<>();
> SolrInputDocument document = new SolrInputDocument();
> document.addField("id", String.valueOf(1));
> document.addField(
Hi,
I can't find anything about this in the Solr logs. On the caller side I
have this:
Error from server at http://x_shard1_replica2: version conflict for
x expected=1573538179623944192 actual=1573546159565176832
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error
Hendrik,
Can you list down the error snippet so that we can refer the code where
exactly that is happening.
Amrit Sarkar
Search Engineer
Lucidworks, Inc.
415-589-9269
www.lucidworks.com
Twitter http://twitter.com/lucidworks
LinkedIn: https://www.linkedin.com/in/sarkaramrit2
On Fri, Jul 21, 2017
Hi,
when I try to use an atomic update in conjunction with optimistic
concurrency, Solr sometimes complains that the version I passed in does
not match. The version in my request however matches what is stored, and
the version the exception states as the actual version does not exist in the
collectio
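A common way to cope with legitimate version conflicts is a read-retry loop around the update. Below is a minimal Python sketch of the pattern (the in-memory store stands in for Solr; with SolrJ you would re-fetch the document and resend with the latest _version_ after a conflict error):

```python
class VersionConflict(Exception):
    pass

# in-memory stand-in for a Solr collection keyed by id
store = {"x": {"id": "x", "count": 0, "_version_": 1}}

def update_if_version(doc_id, expected_version, changes):
    """Apply changes only if the stored _version_ matches (optimistic concurrency)."""
    doc = store[doc_id]
    if doc["_version_"] != expected_version:
        raise VersionConflict(doc["_version_"])
    doc.update(changes)
    doc["_version_"] += 1

def update_with_retry(doc_id, changes, attempts=3):
    """Re-read the latest version and retry the update on conflict."""
    for _ in range(attempts):
        current = store[doc_id]["_version_"]
        try:
            update_if_version(doc_id, current, changes)
            return True
        except VersionConflict:
            continue  # another writer won the race; re-read and retry
    return False

ok = update_with_retry("x", {"count": 5})
```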
sense when it comes to in-place updates of docValues
- it has to have some value, so only thing that you can do is introduce
some int value as null.
HTH,
Emir
On 04.05.2017 15:40, Dan . wrote:
Hi,
I have a field like this:
so I can do a fast in-place atomic updates
However if I do e.g.
cu
On 5/4/2017 7:40 AM, Dan . wrote:
> I have a field like this:
>
>
> docValues="true" multiValued="false"/>
>
> so I can do a fast in-place atomic updates
>
> However if I do e.g.
>
> curl -H 'Content-Type: application/json'
Hi,
I have a field like this:

so I can do fast in-place atomic updates.
However if I do e.g.

curl -H 'Content-Type: application/json'
'http://localhost:8983/solr/collection/update?commit=true'
--data-binary '
[{
"id":"my_id",
"popularity":{"set":null}
}]'

then I'd expect the pop
>> I'm sending comm
, but I wasn't attempting to retrieve it so I never realized it
was being stored since I ended up looking at the wrong schema.
I switched the config sets and everything works as expected. Any atomic
updates clear out the indexed values for the non-stored field.
Thanks for bearing with me.
Chris
d the atomic updates on different textfields, and I
still can't get the indexed but not-stored othertext_field to disappear. So
far set, add, and remove updates do not change it regardless of what the
fields are in the atomic update.
It would be extraordinarily useful if this update behavior is no
That's probably it then. None of the atomic updates that I've tried have
been on TextFields. I'll give the TextField atomic update to verify that it
will clear the other field.
Has this functionality been consistent since atomic updates were
introduced, or is this a side effec
That's the thing I'm curious about though. As I mentioned in the first
post, I've already tried a few tests, and the value seems to still be
present after an atomic update.

I haven't exhausted all possible atomic updates, but 'set' and 'add' seem
to preserve the non-stored text field.