Hi:
I have a view that I want to require user specification for a specific
column before the query starts (if that makes sense).
Example
I want the users to be required to provide a value for ssn in the following
query...
"select * from huge_view *where ssn = '106-91-9930'* "
I never want them t
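One common workaround (a sketch only, not what this thread settled on; the names come
from the example query above) is to wrap the view in a set-returning function so
callers have to supply the ssn value:

    CREATE FUNCTION huge_view_by_ssn(p_ssn text)
    RETURNS SETOF huge_view
    LANGUAGE sql STABLE
    AS $$
        SELECT * FROM huge_view WHERE ssn = p_ssn;
    $$;

    -- usage:
    SELECT * FROM huge_view_by_ssn('106-91-9930');

Revoking SELECT on huge_view itself then forces users through the function.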
OK, didn't think so, just checking. Thanks for verifying !
On Mon, Nov 20, 2023 at 4:45 PM Christophe Pettus wrote:
>
>
> > On Nov 20, 2023, at 13:41, David Gauthier wrote:
> > I want the users to be required to provide a value for ssn in the
> following query...
&g
Hi:
I have a plpgsql function that has this...
drop table if exists tmp_diff_blkviews;
Even with the "if exists", I still get...
NOTICE: table "tmp_diff_blkviews" does not exist, skipping
CONTEXT: SQL statement "drop table if exists tmp_diff_blkviews"
PL/pgSQL function dvm.blkview_diffs(c
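For what it's worth, that NOTICE is expected behaviour for IF EXISTS; if it is just
noise, it can be silenced (a sketch, assuming the function name is unique so the
argument list can be omitted):

    SET client_min_messages = warning;                                    -- session-wide
    ALTER FUNCTION dvm.blkview_diffs SET client_min_messages = warning;   -- or per function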
timated
cost of that scan is. If you see the index being used, then that'll at
least confirm the index was not picked due to costs.
If the index gets used, then I'd check the size of the
message_payload_not_null_pidx index. You could also consider using
pgstatindex() [1] to check the state of the index and if it's bloated,
reindex it.
David
[1] https://www.postgresql.org/docs/current/pgstattuple.html
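A sketch of that check, using the index name from the thread (REINDEX ... CONCURRENTLY
needs PG12 or later):

    CREATE EXTENSION IF NOT EXISTS pgstattuple;
    SELECT * FROM pgstatindex('message_payload_not_null_pidx');
    -- if avg_leaf_density is low / leaf_fragmentation is high, rebuild it:
    REINDEX INDEX CONCURRENTLY message_payload_not_null_pidx;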
hang and occupies
> workers doing nothing.
If you find all the autovacuum workers busy most of the time, then
you'll likely want to reduce autovacuum_vacuum_cost_delay to increase
the speed of the auto-vacuum workers.
David
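A minimal sketch of that change (the value is an example only; the default is 2ms on
recent versions):

    ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '1ms';
    SELECT pg_reload_conf();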
Hi:
I'm trying to run a PG client-side "\copy" command from a Perl script. I
tried using $dbh->do("\\copy ...") but it barfed when it saw the '\'...
ERROR: syntax error at or near "\"
I can do this with a command-line approach, attaching to the DB and then running it
using...
-c "\copy ...". But I was
atletx7-reg017:/home/dgauthie[ 120 ] --> dvdbdev
Pager usage is off.
psql (11.5, server 11.3)
Type "help" for help.
dvdb=# \df opid.bef_ins_axi_reqs_set_trig;
List of functions
 Schema | Name | Result data type | Argument data types | Type
nks
David Barron
Zencos Consulting LLC
919-995-2356 (Mobile)
From: Adrian Klaver
Sent: Wednesday, January 10, 2024 1:56 PM
To: David Barron ; pgsql-general@lists.postgresql.org
Subject: Re: pgAdmin 8.1 install
On 1/10/24 10:47 AM, David Barron wrote:
Is there a problem with the
On 1/10/24 10:58 AM, David Barron wrote:
On 1/10/24 10:47 AM, David Barron wrote:
Is there a problem with the installation of 8.1? I’ve installed it 3 times but
when I bring it up it still says I’m at 7.8. The Help > About menu shows 7.8
as well.
What OS are you using?
Where did you
> Is there a problem with the installation of 8.1? I’ve installed it 3 times
> but when I bring it up it still says I’m at 7.8. The Help > About menu shows
> 7.8 as well.
> It’s odd because I have gone from 7.6 to 7.7 and 7.7 to 7.8 following the
> same steps.
When I look at the directory o
On 1/10/24 12:20, David Barron wrote:
> Note that v6 and v7 are in subdirectories but v8 is not. If you have
> an existing shortcut that you are using, then perhaps it is pointed to
> the
> v7 directory and needs to be updated to point to the directory that v8
> is installed
Hi, I'm trying to use PostgreSQL from a Qt application, but libpq and ODBC
binaries are currently unavailable for Windows arm64, so I can only use x64
applications at present!
It would be good to have proper support for Windows arm64 architecture. At
the very least ODBC connectivity, better still to have libpq ava
(flawed) mental model built up from command line experience
with other tools, this *should* work. I should be able to send the output
to stdout and then redirect it to a file. It surprises me that I cannot.
Anyway, thanks!
Best,
David
pe it back into the database (or if you like, some
other database) with psql. When I set out to do this I didn't think bash
pipes and redirection would be the hard part, and yet here I am. Maybe
there's some other way, because I'm fresh out of ideas.
Best,
David
On Fri, Jan 12
output from pg_recvlogical and pipe it back into the
database with psql?"
Best,
David
On Sat, Jan 13, 2024 at 10:29 AM Adrian Klaver
wrote:
> On 1/12/24 21:23, David Ventimiglia wrote:
> > Let me just lay my cards on the table. What I'm really trying to do is
> > capture
blem. Tell us the "business problem" you
> are trying to solve, not the problem you're having with your solution to
> the "business problem".
>
> (If you've already mentioned it, please restate it.)
>
> On Sat, Jan 13, 2024 at 11:49 AM David Ventimiglia
It satisfies business constraints.
On Sat, Jan 13, 2024, 5:01 PM Karsten Hilbert
wrote:
> Am Sat, Jan 13, 2024 at 05:53:14PM -0500 schrieb Ron Johnson:
>
> > *No,* that's a technology problem. What is the purpose of storing them
> > back in the database using psql?
>
> Or even the end goal to b
sing psql?
>
> On Sat, Jan 13, 2024 at 4:34 PM David Ventimiglia <
> davidaventimig...@hasura.io> wrote:
>
>> The business problem I'm trying to solve is:
>>
>> "How do I capture logical decoding events with the wal2json output
>> encoder, filter them with jq, and pipe them to psql, using pg_recvlogical?"
estions/75784345/how-to-pipe-pg-recvlogical-to-psql-for-logical-replication>
.
Cheers,
David
On Tue, Jan 16, 2024 at 12:57 PM Jim Nasby wrote:
> On 1/13/24 3:34 PM, David Ventimiglia wrote:
> > The business problem I'm trying to solve is:
> >
> > "How do I
Whoops! Wrong SO link. Here's the correct SO link:
https://stackoverflow.com/questions/77808615/how-to-use-logical-decoding-with-pg-recvlogical-to-pass-changes-through-a-non-tr
On Tue, Jan 16, 2024 at 1:15 PM David Ventimiglia <
davidaventimig...@hasura.io> wrote:
> Thanks for
han 86.72%.
I think you've done "1 - sum( <14.10 numbers> ) / sum( <9.6.24 numbers>)"
whereas I think you should have done "sum( <9.6.24 numbers>) / sum( <14.10
numbers> ) - 1"
Nonetheless, thanks for testing this out. I assume this is just a report
giving good feedback about progress in this area...?
David
Is there a document which makes recommendations on sizing the data buffer
cache, tuning options which evict old/unused data in memory, and cache
fragmentation avoidance for a v15.3 DB?
Thanks in Advance.
I have a table with 4 columns, none of them unique. I want to delete all
but one of the duplicate records. I think there's a way to do this with
ctid. Examples that pop up in Google searches always rely on something
like a unique (id) field, such as a primary key (no good in my case)
create tabl
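A sketch of the ctid approach for a table with no unique key (the table and column
names are placeholders; the PARTITION BY list should cover all four columns):

    DELETE FROM foo
    WHERE ctid IN (
        SELECT ctid
        FROM  (SELECT ctid,
                      row_number() OVER (PARTITION BY col1, col2, col3, col4) AS rn
               FROM foo) dups
        WHERE dups.rn > 1
    );

This keeps one arbitrary row from each duplicate group and deletes the rest.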
op with the
values scan. The current planner data structures are not really
geared up for optional base quals right now. Something would need to
be done to make that work and off the top of my head, I don't know
what that would be.
David
s valid. I think it
would be very tricky to make it work well, however. Imagine how
expensive ATTACH PARTITION would be! There are probably other race
conditions I've not thought about too. Likely, we'd get more
complaints about this being a terrible feature than we do due to the
fact that it's unsupported.
David
On Mon, 19 Feb 2024 at 22:07, Darryl Green wrote:
>
> On Mon, 19 Feb 2024 at 14:23, David Rowley wrote:
> >
> > On Mon, 19 Feb 2024 at 16:32, Darryl Green wrote:
> > > 2) It would be nice to be able to specify the id as pk on the table being
> > > partition
at the top of the plan
then suppress it from EXPLAIN. In that case, the EXPLAIN looks the
same as the serial plan. If the planner chooses a parallel plan of
its own accord, then it'd look nothing like the serial plan.
David
Hi:
I'm a PG user in a big corp with an IT dept that administers a PG
server/instance that I use. It's an old install, v11.5, and we need to
upgrade to v15.3. They want to bring the upgraded DB up on a new linux vm
which has OS upgrades of its own. So it's a move AND an upgrade. There are
2 conc
Here's the situation
- The DB contains data for several projects.
- The tables of the DB contain data for all projects (data is not
partitioned on project name or anything like that)
- The "project" identifier (table column) exists in a few "parent" tables
with many child... grandchild,... tab
On Fri, Mar 8, 2024 at 11:28 AM Yogesh Sharma <
yogesh.sha...@catprosystems.com> wrote:
> Greetings,
>
> On 3/6/24 19:19, David Gauthier wrote:
> > Hi:
> > I'm a PG user in a big corp with an IT dept that administers a PG
> > server/instance that I use. It
Thanks.
On Fri, Mar 8, 2024 at 12:12 PM Adrian Klaver
wrote:
> On 3/8/24 09:09, Adrian Klaver wrote:
> > On 3/8/24 08:57, David Gauthier wrote:
> >> Thanks for the reply.
> >>
> >> When you say "dump/restore" do you mean pg_dump then running
On Fri, 15 Mar 2024 at 07:13, David G. Johnston
wrote:
> On Thu, Mar 14, 2024, 11:08 Thiemo Kellner
> wrote:
>>
>> Thanks for the enlightenment. A pity. I suppose, there is no working
>> around this?
>
> Write a script to do the query in a loop on all databas
dirtied=9379
> Execution Time: 5049.131 ms
You could try "SET track_io_timing = 1;" to see how much of that time
is spent in reads.
David
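A minimal sketch (the query is a placeholder):

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM my_table;
    -- plan nodes then include an "I/O Timings:" line showing time spent in reads/writes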
Hi:
I have a DB named "thedb", which has a schema called "theschem" which has a
table called "thetab". In a perl/DBI script, I am able to work with the
table by referencing it as... "thedb.theschem.thetab" as in... "delete from
thedb.theschem.thetab where col1=1"
No problem (so far...)
New DB n
ll the code has the prefix and I don't want to uproot that (because
it's working).
Thanks David !
On Fri, Mar 29, 2024 at 5:22 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Fri, Mar 29, 2024 at 2:20 PM David G. Johnston <
> david.g.johns...@gmail.co
This is what we are running...
Red Hat Enterprise Linux Server release 7.9 (Maipo)
In our pandora distrib, I see PG v15.3. I was wondering if there is
something even better. Can't seem to pin this down using
https://www.postgresql.org/download/linux/redhat/.
(ANALYZE, BUFFERS)
David
JIT off to try without with:
SET jit=0;
You might want to consider editing postgresql.conf and raising the
jit_above_cost, jit_inline_above_cost and jit_optimize_above_cost
values to some higher value or disable JIT completely.
SELECT pg_reload_conf(); -- to reload the config file afterwards.
David
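A sketch of both options (the threshold values are examples only, not recommendations):

    SET jit = 0;                                          -- quick per-session test
    -- or raise the thresholds server-wide:
    ALTER SYSTEM SET jit_above_cost = 500000;             -- default 100000
    ALTER SYSTEM SET jit_inline_above_cost = 2500000;     -- default 500000
    ALTER SYSTEM SET jit_optimize_above_cost = 2500000;   -- default 500000
    SELECT pg_reload_conf();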
ng" for the problematic record. You can get the number of
blocks in the table with:
select pg_relation_size('your_table'::regclass) /
current_setting('block_size')::int;
replace the 4294967294 with the value from that and bisect the table
until you home in on the block with the issue.
David
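A sketch of the bisection (table name and block number are placeholders; say the size
query above returned 10000 blocks):

    -- does reading the first half trigger the error?
    SELECT * FROM your_table WHERE ctid < '(5000,0)';
    -- run from psql with output discarded (\o /dev/null) if the table is wide;
    -- if the error reproduces, repeat on blocks 0..2500, otherwise on 5000..10000, and so on.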
lter" SQL state: 42601
According to our documentation for setval():
"This function requires UPDATE privilege on the sequence."
https://www.postgresql.org/docs/current/functions-sequence.html
David
'par_log_file'::regclass;
The correct way to create the PK constraint is with:
ALTER TABLE par_log_file ADD CONSTRAINT plf_pk PRIMARY KEY (par_file_id);
You might want to create supporting unique indexes on each partition
CONCURRENTLY before doing that so that the ALTER TABLE becomes a
meta-data-only operation.
David
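A sketch with made-up partition names, per the note above about pre-building the
per-partition indexes CONCURRENTLY:

    CREATE UNIQUE INDEX CONCURRENTLY par_log_file_2023_uidx
        ON par_log_file_2023 (par_file_id);
    CREATE UNIQUE INDEX CONCURRENTLY par_log_file_2024_uidx
        ON par_log_file_2024 (par_file_id);
    -- with matching unique indexes already in place on every partition,
    -- the constraint addition can attach them instead of rebuilding:
    ALTER TABLE par_log_file ADD CONSTRAINT plf_pk PRIMARY KEY (par_file_id);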
dexes on the new heap. That's quite an effort.
To satisfy your curiosity, you could always run some EXPLAIN ANALYZE
SELECT queries to measure how much time was spent sorting the entire
table. You'd have to set work_mem to the value of
maintenance_work_mem.
David
nderstand incorrect statistics can cause this but if there are other
reasons, it would be good to know what they are.
David
psql (15.3, server 14.5) on linux
Someone else's DB which I've been asked to look at.
\dt gives many tables, here are just 3...
public | some_idIds | table | cron_user
public | WarningIds | table | cron_user
public |
able."
> In any case, even after the planner decides to execute the terrible plan
> with the parallel seqscans, why doesn't it finish right when it finds 10
> distinct values?
It will. It's just that Sorting requires fetching everything from its subnode.
David
[1]
https://www.postgresql.org/docs/16/routine-vacuuming.html#VACUUM-FOR-STATISTICS
On Sat, 11 May 2024 at 13:33, Tom Lane wrote:
> I do kind of wonder why it's producing both a hashagg and a Unique
> step --- seems like it should do one or the other.
It still needs to make the duplicate groups from parallel workers unique.
David
On Tue, 14 May 2024 at 00:28, Dimitrios Apostolou wrote:
>
> On Sat, 11 May 2024, David Rowley wrote:
>
> > On Sat, 11 May 2024 at 13:33, Tom Lane wrote:
> >> I do kind of wonder why it's producing both a hashagg and a Unique
> >> step --- seems like it
On Tue, 14 May 2024 at 00:41, Dimitrios Apostolou wrote:
>
> On Sat, 11 May 2024, David Rowley wrote:
> > It will. It's just that Sorting requires fetching everything from its
> > subnode.
>
> Isn't it plain wrong to have a sort step in the plan then? The differ
On Tue, 14 May 2024 at 00:46, Dimitrios Apostolou wrote:
>
> On Mon, 13 May 2024, Dimitrios Apostolou wrote:
>
> > On Sat, 11 May 2024, David Rowley wrote:
> >> If you look at [1], it says "Tuples changed in partitions and
> >> inheritance children do n
On Tue, 14 May 2024 at 01:52, Dimitrios Apostolou wrote:
>
> On Tue, 14 May 2024, David Rowley wrote:
> > The query does contain an ORDER BY, so if the index is not chosen to
> > provide pre-sorted input, then something has to put the results in the
> > correct order bef
On Tue, 14 May 2024 at 02:07, Dimitrios Apostolou wrote:
>
> On Tue, 14 May 2024, David Rowley wrote:
> > Parallel Append can also run in a way that the Append child nodes will
> > only get 1 worker each.
>
> How can I tell which case it is, from the EXPLAIN output (for ex
way to tell if the planner's estimates were
accurate or not. Also with EXPLAIN only, we don't know where the time
was spent in the query.
Running the EXPLAIN with "SET track_io_timing = 1;" would be even more useful.
David
race would be useful. See:
https://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD
David
between pgn and pgc looks like a good choice. The rest,
not so much.
I don't think (ndistinct) extended statistics on pg_class
relnamespace, relkind will help since "relnamespace = pgn.oid" is a
join condition.
David
On Tue, 21 May 2024 at 23:14, Laurenz Albe wrote:
> We still don't know the query.
hmm, it was posted on this thread:
https://postgr.es/m/CAGB0_6600w5C=hvhgfmwcqo9bcwcg+3s0pxxuoqv48nlqtp...@mail.gmail.com
David
anyone know how to unsubscribe from this mailing list?
On Thu, May 23, 2024 at 5:16 AM sud wrote:
> Hello ,
> It's RDS postgres version 15.4. We suddenly saw the
> "MaximumUsedTransactionIDs" reach to ~1.5billion and got alerted by team
> members who mentioned the database is going to be in shutdow
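A standard check (not from this thread) for how close each database is to wraparound:

    SELECT datname, age(datfrozenxid) AS xid_age
    FROM   pg_database
    ORDER  BY xid_age DESC;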
k we need to find a way to give some additional preference to
using indexes with more keys matching to quals.
David
ave to:
DELETE FROM pg_statistic WHERE starelid = 'abc'::regclass and
staattnum = (SELECT attnum FROM pg_attribute WHERE attrelid =
'abc'::regclass AND attname = 'bg_org_partner');
to get rid of it.
David
abc table will need to be in your search_path.
You'll need to be connected to the correct database too.
David
ot;to" LINE 2: ..._2023_dab for values
with(modulus 8, remainder 1) to (modulu...
Any assistance would be appreciated.
--
David A. Barbour
dbarb...@istation.com
(214) 292-4096
Istation
8150 North Central Expressway, Suite 2000
Dallas, TX 75206
www.Istation.com <http://www.istatio
into the partitioned table.
Dropped the original table and renamed the partitioned table.
Easy, no problems. Also no 'attach'.
Thanks.
On Fri, Jun 7, 2024 at 3:31 AM Laurenz Albe
wrote:
> On Thu, 2024-06-06 at 11:28 -0500, David Barbour wrote:
> > Been an Oracle DBA for
.oid
   DELETE FROM idev.import_file WHERE import_job_oid = new.oid;
   RETURN OLD;
END;
$BODY$;

delete from idev.import_job where oid = 44949;

NOTICE: Value
ERROR: Attempt to suppress referential action with before trigger.
CONTEXT: SQL statement "DELETE FROM ONLY &qu
r me to check the results match with each query from my
phone. A quick scan of the first 10 or so records looked good.
If the updated query is still too slow on cold cache then faster disks
might be needed.
David
>
row is too big: size 12024, maximum size 8160
There is a paragraph at the bottom of [1] with some warnings about
things relating to this.
The tuple length would be fixed for fixed-length types defined as NOT
NULL. So, if you have that, there should be no such surprises.
David
[1] https://www.postgresql.org/docs/current/limits.html
On Fri, 5 Jul 2024 at 19:53, Lok P wrote:
> As David suggested it breaks if a row exceeds the 8k limit, i.e. a single page
> size. Will that still hold true if we have a column with JSON in it?
You wouldn't be at risk of the same tuple length problem if you
reduced the column count
size" in [1].
Please also read [2].
David
[1] https://www.postgresql.org/docs/current/limits.html
[2] https://www.postgresql.org/docs/current/storage-toast.html
#x27;) to (true,'2024-02-01');
CREATE TABLE event_closed_y2024_m02 PARTITION OF event FOR VALUES FROM
(true,'2024-02-01') to (true,'2024-03-01');
explain select * from event
where
not cleared
OR
(cleared and date_raised > '2024-01-01' AND date_raised < '2024-01-02');
David
. You
might want to reconsider your enable_partitionwise_aggregate setting
and/or how many partitions you have.
David
[1] https://www.postgresql.org/docs/current/runtime-config-resource.html
[2]
https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE-BEST-PRACTICES
uld be appreciated. :-)
I think mentioning something about this in enable_partitionwise_join
and enable_partitionwise_aggregate is probably wise. I'll propose a
patch on pgsql-hackers.
David
esql/
. If we want to disallow blogs written by LLMs then we'd need to be
careful about how we define that, as doing something like using an
LLM-based spell checker does not seem like it should be disallowed.
But to what degree exactly should that be allowed?
David
Warning, additional statistics targets can slow down planning a little)
or if that does not help and there's some correlation between those
columns and/or the values in question, then maybe the following might
help get a more accurate estimate:
create statistics on companycode, examheaderfk from cl_student_semester_subject;
analyze cl_student_semester_subject;
David
ql.org/download/snapshots/ does link to
https://www.enterprisedb.com/products-services-training/pgdevdownload
. I see Windows installers for v17b2.
> So for the moment, I don’t know how to proceed
Meson is now the build system for Windows:
https://www.postgresql.org/docs/17/install-meson.html
David
negative numbers for that
setting. You may be able to choose a value that scales correctly with
the row estimate that doesn't get outdated as you add more rows to the
partitions. You'll need to determine that based on the data you're
storing.
David
mand similar to "FROM ONLY" in SELECT queries.
You could probably do some sampling of the pg_stat_progress_analyze
view to figure out what's taking the most time. If you find that the
majority of the time is spent analysing the partitions and not the
partitioned table then maybe we
connections pile up
in a similar way to when the VACUUM FULL command is used.
David
ble already exists in the parent
> table or not?
EXPLAIN ANALYZE will list the time it took to execute the foreign key
trigger in the "Trigger for constraint" section.
David
auto_explain.log_nested_statements =
on. That should give you the plan for the cascade DELETE query that's
executed by the trigger when you perform the DELETE on the referenced
table. (Also see the note about auto_explain.log_timing)
David
[1] https://www.postgresql.org/docs/15/auto-explain.html
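A session-level sketch of those settings (loading the module this way needs superuser,
or put it in session_preload_libraries):

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;         -- log every statement's plan
    SET auto_explain.log_analyze = on;
    SET auto_explain.log_timing = off;             -- per-node timing is the costly part
    SET auto_explain.log_nested_statements = on;   -- include the FK cascade's DELETE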
=1)
Index Cond: (relname = 't_c56ng1_repository'::name)
Filter: ((relkind = ANY ('{r,p}'::"char"[])) AND
pg_has_role(relowner, 'USAGE'::text))
So looks like it was the "Make Vars be outer-join-aware." commit that
changed this.
David
there's no middle ground between pass and fail.
David
NNER JOIN rather than RIGHT
JOIN means that qual_is_pushdown_safe() gets a Var rather than a
PlaceHolderVar.
David
complex expected plan.
I managed to get a reproducer for this down to something quite simple.
Probably that or something similar would be a better test to make sure
this bug stays gone.
David
> "TEST_DATA"
> "F(T61)(EXPORT)" "F(T61)(EXPORT)" "EXPORT" "6" "7" "(" "EXPORT" ")" "("
> ")"
> "TEST_DATA"
> "F(T61)(EXPORT)" "F(T61)(EXPORT)"
On Fri, 30 Aug 2024 at 23:45, Rumpi Gravenstein wrote:
> Here's the output of the script you requested.
>
These results look correct to me. Not the same as the ones you originally
reported.
David
day,
but for today, think of it as an unimplemented optimisation rather
than a bug.
> If I add in a second repetitive WHERE clause, it goes back to being happy,
> but that feels a bit like a hack:
That's likely your best bet on how to make the planner do what you
want, provided you're able to given the query is inside a view.
David
Yes, PostgreSQL.
Why not just do this via an SQL join?
On Tue, 3 Sept 2024, 17:34 arun chirappurath, wrote:
> Hi All,
>
> Do we have any open-source utility to compare two table data not structure
> and then provide some sort of reports?
>
> Row by row comparison
>
> Thanks,
> Arun
>
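For a plain row-by-row diff, something like this is often enough (a sketch; it assumes
the two tables have identical column lists, and the names are placeholders):

    SELECT 'only_in_a' AS side, *
    FROM  (TABLE table_a EXCEPT ALL TABLE table_b) d
    UNION ALL
    SELECT 'only_in_b' AS side, *
    FROM  (TABLE table_b EXCEPT ALL TABLE table_a) d;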
statements, then I think it's
safe. If not, you might get wrong results. For deferred foreign key
constraints, the window where the foreign key could be violated lasts
until the end of the transaction, so even top-level queries could see
wrong results if you use left join instead of inner.
David
On Wed, 11 Sept 2024 at 10:57, Laurenz Albe wrote:
> You should specify the upper bound as ('2023-12-31 23:59:59.99',
> MAXVALUE).
Or maybe do multi-level partitioning.
(it seems strange to always have MINVALUE and MAXVALUE as the range.
I'm guessing that was just an example.)
David
I would usually try to avoid such long IN clauses. Why not try this:
create a temp table with one column, bulk insert all your IDs into that table,
then change your query to join to the temp table. This also has the
advantage of working for thousands of values.
On Fri, 13 Sept 2024, 16:35 Wong, Kam
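A sketch of that approach (names and values are placeholders):

    CREATE TEMP TABLE wanted_ids (id bigint PRIMARY KEY);
    INSERT INTO wanted_ids VALUES (101), (102), (103);   -- or bulk-load with COPY / \copy
    ANALYZE wanted_ids;

    SELECT t.*
    FROM   big_table t
    JOIN   wanted_ids w ON w.id = t.id;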
This is probably an easy one for someone with experience using CASE, but
intuitively I can't get it.
First... 11.5 on linux.
Example...
dvdb=# create table foo (col1 varchar, col2 varchar);
CREATE TABLE
dvdb=# insert into foo (col1,col2) values ('a','x'), (null,'y');
INSERT 0 2
as varchar)||'-'||sr.nightly_cl_display_suffix
END as changelist
On Wed, Mar 10, 2021 at 2:46 PM Tom Lane wrote:
> David Gauthier writes:
> > dvdb=# select
> > CASE col1
> > WHEN null THEN 'z'
> > ELSE col1
> >E
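The gotcha here is that the simple CASE form compares with =, and col1 = NULL is never
true, so the WHEN null branch can't match. Testing for NULL explicitly (or using
coalesce) works; a sketch against the foo table above:

    SELECT col1,
           CASE WHEN col1 IS NULL THEN 'z' ELSE col1 END AS case_form,
           coalesce(col1, 'z')                           AS coalesce_form
    FROM   foo;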
OpenSource license type for each component ? Or even, if
possible, with the license file for each component ?
Best regards,
Nicolas DAVID
WORKNC DENTAL Project Manager
Manufacturing Intelligence division
Hexagon
E: nicolas.da...@hexagon.com<mailto:nicolas.da...@hexagon.com>
Hexagon
440 Rou
-> json_data
without coalesce or other NOT NULL checks in the code. Maybe it would be useful
to have this operator in core PostgreSQL? The code could be written better than
mine, and in C. Has anybody else run into the same issue with concatenating
json and NULL?
David T.
--
-----
Ing. David TUROŇ
LinuxBox.cz, s.r.
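The || operator is strict, so a NULL operand makes the whole result NULL; coalescing
the nullable side is the usual workaround. A sketch (using jsonb, which is where ||
is defined):

    SELECT '{"a": 1}'::jsonb || NULL::jsonb;                          -- NULL
    SELECT '{"a": 1}'::jsonb || coalesce(NULL::jsonb, '{}'::jsonb);   -- {"a": 1}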
st ?
Best regards,
Nicolas DAVID
,
Nicolas DAVID
-Original Message-
From: Laurenz Albe
Sent: 06 April 2021 16:13
To: DAVID Nicolas ; Adrian Klaver
; pgsql-gene...@postgresql.org
Subject: Re: Open source licenses
g anything.
However, technically, in the yet-to-be-released PG14, if the query had
a TID qual then you could disable seqscan to encourage a TID Range
scan. That's a bit of a stretch of the imagination though.
David
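On PG14 or later that might look like this (a sketch; the table name and block numbers
are placeholders):

    SET enable_seqscan = off;
    EXPLAIN SELECT * FROM my_table
    WHERE  ctid >= '(1000,0)' AND ctid < '(2000,0)';
    -- the plan should show a "Tid Range Scan" node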
of explain.
> Can anyone help me understand where the 45GB are in that explain?
If you do EXPLAIN (ANALYZE, BUFFERS) you might find that the recursive
CTE is writing the rest of them out. Something like "temp written =
", likely around 5 million or so, I imagine.
David
"risk" by building from source ?
Best regards,
Nicolas DAVID
WORKNC DENTAL Project Manager
Manufacturing Intelligence division
Hexagon
E: nicolas.da...@hexagon.com
HexagonMI.com
w then the planner might tend to prefer nested loops
with index scans over hash and merge joins.
Check the documents for more details on those settings.
David
wever, if the
first of the two is fast enough then it might be better to not add too
many extra indexes.
David