IMO it's wrong to change an aggregate's meaning from "aggregate across
GROUPs or the entire SELECT" to "aggregate within a column". Aggregation
is long established in SQL, and redefining it will just confuse
experienced database users.
PostgreSQL maintains the meaning of max:
CREATE TABLE tab (
x int[]
);
INS…
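A sketch of how the truncated example above likely continues (the inserted values are illustrative, not from the original mail): in PostgreSQL, max over an array column still aggregates across rows, comparing whole arrays, rather than reducing within each array.

```sql
-- Illustrative completion of the truncated example above.
CREATE TABLE tab (
    x int[]
);
INSERT INTO tab (x) VALUES (ARRAY[1, 2]), (ARRAY[3, 0]);

-- max() still aggregates across rows: it returns the largest array
-- under array comparison ({3,0} here), not a per-row maximum element.
SELECT max(x) FROM tab;
```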
Agree with views, or alternatively, column permissions together with
computed columns:
CREATE TABLE foo (
    id int PRIMARY KEY,
    unmasked_name text,
    name text GENERATED ALWAYS AS (some_mask_function(unmasked_name, 'xxx', 7)) STORED
);
(syntax from PostgreSQL, which uses column-level grants)
GRANT SELECT (name) ON foo TO general_use;
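The view alternative mentioned above could look like the following (a sketch only; `some_mask_function` is the placeholder from the example, and the view and role names are made up):

```sql
-- Sketch of the view-based alternative: keep the raw column private
-- and expose only a masked projection to the restricted role.
CREATE VIEW foo_masked AS
SELECT id, some_mask_function(unmasked_name, 'xxx', 7) AS name
FROM foo;

REVOKE ALL ON foo FROM general_use;
GRANT SELECT ON foo_masked TO general_use;
```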
Patrick McFadin wrote:
The replies got trashed pretty badly in the responses.
When you say: "Agree it's better to reuse existing syntax than invent
new syntax."
Which syntax are you referring to?
Patrick
On Mon, Aug 22, 2022 at 1:36 AM Avi Kivity via dev
wrote:
Agree it's better to reuse existing syntax than invent new syntax.
On 8/21/22 16:52, Konstantin Osipov wrote:
* Avi Kivity via dev [22/08/14 15:59]:
MySQL supports SELECT ... INTO ... FROM ... WHERE ...
PostgreSQL supports pretty much the same syntax.
Maybe instead of LET use the ANSI/…
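For reference, the MySQL form alluded to above binds selected columns to session variables (the table and column names here are made up for illustration):

```sql
-- MySQL: bind selected columns to user variables
SELECT name, price
INTO @name, @price
FROM products
WHERE id = 42;

-- PL/pgSQL inside a PostgreSQL function body supports a similar form:
--   SELECT name, price INTO v_name, v_price FROM products WHERE id = 42;
```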
On 18/08/2022 18.46, Mick Semb Wever wrote:
Until IDEs auto cross-reference JIRA,
I'm going to lightly touch the lid of Pandora's Box here and walk
away slowly. It drives me *nuts* when I'm git blaming a file to
understand the context of why a change was made (to make sure I…).
LET (a, b, (c, d)) = SELECT a, b, someTuple FROM..
IF a > 1 AND d > 1…
I think this can be safely deferred; most people would separate it into
individual LETs anyway.
I'd add (to the specification) that LETs cannot override a previously
defined variable, just to reduce ambiguity.
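Under that restriction, the tuple example above would be split into separate bindings. A sketch of the proposed CQL syntax (this is a feature under discussion, not existing CQL; table and column names are illustrative):

```sql
-- Proposed CQL, sketch only: destructuring deferred, one LET per binding.
LET a, b = SELECT a, b FROM tbl WHERE k = 0;
LET c, d = SELECT someTuple.c, someTuple.d FROM tbl WHERE k = 0;
IF a > 1 AND d > 1 THEN
    ...
END IF
```

Each selected column becomes a variable of the same name, matching the mental model described below of LET as a SELECT whose columns become variables.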
On 14/08/2022 01.29, Benedict Elliott Smith wrote:
I’ll do my best to express my thinking, as well as how I would
explain the feature to a user.
My mental model for LET statements is that they are simply SELECT
statements where the columns that are selected become variables
accessible…
"Bullied"? Neither me nor anyone else made any demands or threats. I
proposed cooperation, and acknowledged up front, in my first email, that
cooperation might not be wanted by Cassandra.
On 2018-04-28 20:50, Jeff Jirsa wrote:
You're a committer Mick, if you think it belongs in the databas…
On 2018-04-24 04:18, Nate McCall wrote:
Folks,
Before this goes much further, let's take a step back for a second.
I am hearing the following: Folks are fine with CASSANDRA-14311 and
CASSANDRA-2848 *BUT* they don't make much sense from the project's
perspective without a reference implementation…
> Regards,
> Ariel
>
>> On Apr 22, 2018, at 8:26 AM, Avi Kivity <a...@scylladb.com> wrote:
>>
>>
>>
>>> On 2018-04-19 21:15, Ben Bromhead wrote:
>>> Re #3:
>>>
…256 tokens, each shard on a different port would just advertise ownership of
256/# of cores (e.g. 4 tokens if you had 64 cores).
Regards,
Ariel
On Apr 22, 2018, at 8:26 AM, Avi Kivity wrote:
On 2018-04-19 21:15, Ben Bromhead wrote:
Re #3:
Yup I was thinking each shard/port would appear as a
tes unneeded range
movements when a node is added.
I have seen rack awareness used/abused to solve this.
Regards,
Ariel
long-term solution.
(it also creates a lot more tokens, something nobody needs)
Regards,
Ariel
On Apr 22, 2018, at 8:26 AM, Avi Kivity wrote:
On 2018-04-19 21:15, Ben Bromhead wrote:
Re #3:
Yup I was thinking each shard/port would appear as a discrete server to the
client.
This doesn't…
vs the driver, as the server expects
all shards to share some client-visible state like system tables and
certain identifiers.
Ariel
On Thu, Apr 19, 2018, at 12:59 PM, Avi Kivity wrote:
Port-per-shard is likely the easiest option but it's too ugly to
contemplate. We run on machines with 160 shards
You're right in principle, but in practice we haven't seen problems with
the term.
On 2018-04-19 20:31, Michael Shuler wrote:
This is purely my own opinion, but I find the use of the term 'shard'
quite unfortunate in the context of a distributed database. The
historical usage of the term has b
On 2018-04-20 12:03, Sylvain Lebresne wrote:
Those were just given as examples. Each would be discussed on its own,
assuming we are able to find a way to cooperate.
These are relatively simple and it wouldn't be hard for us to patch
Cassandra. But I want to find a way to make more complicat…
…the server expects all shards to
share some client-visible state like system tables and certain identifiers.
This has its own problems, I'll address them in the other sub-thread (or
using our term, other continuation).
Ariel
On Thu, Apr 19, 2018, at 12:59 PM, Avi Kivity wrote:
Port-per-shard is li
…2 is that if driver maintainers add
support for the change (on their own or by merging changes authored by
Scylla developers), then Cassandra developers get driver support with
less effort.
Ariel
On Thu, Apr 19, 2018, at 12:53 PM, Avi Kivity wrote:
On 2018-04-19 19:10, Ariel Weisberg wrote:
On 2018-04-19 10:19, kurt greaves wrote:
1. The protocol change is developed using the Cassandra process in a JIRA
ticket, culminating in a patch to doc/native_protocol*.spec when consensus
is achieved.
I don't think forking would be desirable (for anyone) so this seems the
most reasonable to
On 2018-04-19 19:10, Ariel Weisberg wrote:
Hi,
I think that updating the protocol spec in Cassandra puts the onus on the party
changing the protocol specification to have an implementation of the spec in
Cassandra as well as the Java and Python drivers (those are both used in the
Cassandra r…
Port-per-shard is likely the easiest option but it's too ugly to
contemplate. We run on machines with 160 shards (IBM POWER 2s20c160t
IIRC), it will be just horrible to have 160 open ports.
It also doesn't fit well with the NIC's ability to automatically
distribute packets among cores using mu…
Hello Cassandra developers,
We're starting to see client protocol limitations impact performance,
and so we'd like to evolve the protocol to remove the limitations. In
order to avoid fragmenting the driver ecosystem and reduce work
duplication for driver authors, we'd like to avoid forking th