n we expect it in PostgreSQL 15.4? Thanks.
I pushed the fixes to all branches just now. Thanks for the report!
Greetings,
Andres Freund
ve TidRange scans, to
> > have pg_dump split larger tables into chunks so that they can be
> > restored in parallel.
>
> Uh, the release notes say:
>
>
>
>
>
> Allow more efficient addition of heap and index pages (Andres Freund)
_logical_replication_slot('hello3', 'wal2json');
> pg_create_logical_replication_slot
>
> (hello3,4B/DB04C730)
>
> and I've attached the replication data output to confirm my issue.
I'm somewhat confused - your ou
Hi,
On 2023-05-16 14:20:46 +, Evgeny Morozov wrote:
> On 9/05/2023 3:32 am, Andres Freund wrote:
> > Attached is a rough prototype of that idea (only using datconnlimit ==
> > -2 for now).
> > I guess we need to move this to -hackers. Perhaps I'll post subsequent
Hi,
On 2023-05-08 17:46:37 -0700, Andres Freund wrote:
> My current gut feeling is that we should use datconnlimit == -2 to prevent
> connections after reaching DropDatabaseBuffers() in dropdb(), and use a new
> column in 16, for both createdb() and dropdb().
Attached is a rough pro
Hi,
On 2023-05-08 14:04:00 -0700, Andres Freund wrote:
> But perhaps a similar approach could be the solution? My gut says that the
> rough direction might allow us to keep dropdb() a single transaction.
I started to hack on the basic approach of committing after the catalog
changes. But
Hi,
On 2023-05-08 20:27:14 +, Evgeny Morozov wrote:
> On 8/05/2023 9:47 pm, Andres Freund wrote:
> > Did you have any occasions where CREATE or DROP DATABASE was interrupted?
> > Either due the connection being terminated or a crash?
>
> I've uploaded an edited ver
removed, the
on-disk files with all-zeroes would remain.
Greetings,
Andres Freund
Hi,
On 2022-12-08 09:52:52 -0500, Kirk Wolak wrote:
> On Wed, Dec 7, 2022 at 2:18 PM Andres Freund wrote:
>
> > Hi,
> >
> > On 2022-11-28 17:22:19 -0500, Kirk Wolak wrote:
> > > Thank you for this. We have the build working with meson (w/o
> > readline)
pkg. (other projects have a .pc file this one
> is missing?)
The problem is likely indicated by this:
> env[PKG_CONFIG_PATH]: c:/vcpkg/installed/x64-windows/tools/pkgconf/pkgconf.exe
This needs to point to the directory containing the .pc files, not the
executable.
Greetings,
Andres Freund
_path and PATH so it doesn't include debug/ (this is because of
needing to link to the same CRT).
Greetings,
Andres Freund
Hi,
On 2022-11-23 20:28:29 -0500, Tom Lane wrote:
> Andres Freund writes:
> > On 2022-11-23 18:11:22 -0500, Tom Lane wrote:
> >> Huh ... do you recall the details? Large as tab-complete is, it's
> >> far smaller than gram.y:
>
> > So It might just b
Hi,
On 2022-11-23 18:11:22 -0500, Tom Lane wrote:
> Andres Freund writes:
> > On 2022-11-21 10:58:06 -0500, Tom Lane wrote:
> >> It'd certainly be nice if we could use Readline on Windows.
>
> > 2) The last time I checked, msvc couldn't preprocess tab-comple
couldn't preprocess tab-complete.c with
USE_READLINE defined, due to running into some preprocessor limits. We can
probably fix that on our end somehow.
Greetings,
Andres Freund
Hi,
On 2022-11-16 09:16:56 -0800, Andres Freund wrote:
> On 2022-11-15 13:23:56 +0100, klaus.mailingli...@pernau.at wrote:
> > Filesystem is ext4. VM technology is mixed: VMware, KVM and XEN PV. Kernel
> > is 5.15.0-52-generic.
> >
> > We have not seen this with Ubutn
his start after upgrading to 22.04? Or after a certain kernel upgrade?
Do you use cgroups or such to limit memory usage of postgres?
It'd be helpful to see /proc/meminfo from one of the affected instances.
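On Linux, the fields most relevant to questions like this (overall memory, overcommit accounting, swap) can be pulled out in one go; the exact field list here is a suggestion, not something from the thread:

```shell
# Show memory and overcommit accounting on the affected host (Linux only).
grep -E '^(MemTotal|MemFree|MemAvailable|CommitLimit|Committed_AS|SwapTotal|SwapFree)' /proc/meminfo
```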
Greetings,
Andres Freund
now I'm not sure how to do
> it. Reached out to Christoph about it.
It'd require rebuilding.
It's possible that the problem is one of the patches applied by ubuntu to llvm
and that's why others can't reproduce so far. There's quite a few...
Greetings,
Andres Freund
Hi,
On 2020-07-28 14:07:48 -0700, Andres Freund wrote:
> (I'm rebasing my tree that tries to reduce the overhead / allow caching
> / increase efficiency to current PG, but it's a fair bit of work)
FWIW, I created a demo workload for this, and repro'ed the issue with
that. Th
tition
accesses with the corresponding accesses, even if they're all just one
row?
(I'm rebasing my tree that tries to reduce the overhead / allow caching
/ increase efficiency to current PG, but it's a fair bit of work)
Greetings,
Andres Freund
my testing that can quite drastically cut down on
optimization time. But obviously solves the problem only to some degree,
since that's not free.
> Are you saying that if you crank jit_inline_above_cost beyond this
> query's total cost, the problem goes away?
FWIW, you can set the cost to -1 and it'll never inline.
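As a sketch of that knob (the positive threshold below is illustrative, not from the thread):

```sql
-- -1 means "never inline", per the text above; JIT itself stays enabled.
SET jit_inline_above_cost = -1;

-- Alternatively, raise the threshold past the query's total cost so that
-- this particular query skips inlining.
SET jit_inline_above_cost = 5000000;
```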
Greetings,
Andres Freund
That unfortunately has its own downsides, in that there's significant
overhead of emitting code multiple times. I suspect that taking the
cost of all the JIT emissions together into account is the more
promising approach.
Greetings,
Andres Freund
Hi,
On Fri, Jul 24, 2020, at 15:32, Scott Ribe wrote:
> > On Jul 24, 2020, at 4:26 PM, David Rowley wrote:
> >
> > It does not really take into account the cost of jitting.
>
> That is what I was missing.
>
> I read about JIT when 12 was pre-release; in re-reading after my post I
> see that i
rototype implementation of changing the costing to be
#expressions * some_cost, and I think that's a lot more accurate.
Greetings,
Andres Freund
ially related to
https://postgr.es/m/20181003223725.elcu3t44fpd4lm56%40alap3.anarazel.de
Greetings,
Andres Freund
anks for jumping in on the thread, it was nice validation to know
> > that I wasn't the only one seeing the issue!
>
> Yes. We have been able to confirm that 2aa6e33 is a direct cause of
> your problem.
I assume you're still trying to track the actual cause of the problem
further?
Greetings,
Andres Freund
vacuums not happening? This
is older than the cutoff age?
xid 4284570172 having the age of 202 million xids suggests that
ReadNewTransactionId() is approx 192376585. Which comports with the log
saying: oldest xmin: 189591147.
Or are you saying that you conclude that the relcache entry is somehow
out of date? It sure is interesting that all of the tables that hit the
"skipping redundant vacuum" condition are shared tables.
Greetings,
Andres Freund
> > While looking at this issue I found a few problems, btw. That seems more
> > like a -hackers discussion, so I started:
> > https://postgr.es/m/20200323235036.6pje6usrjjx22zv3%40alap3.anarazel.de
>
> Yes, let's discuss there.
Cool. Would also be good if you could expand on the thread introducing
the "redundant" logic.
Greetings,
Andres Freund
rs discussion, so I started:
https://postgr.es/m/20200323235036.6pje6usrjjx22zv3%40alap3.anarazel.de
I think I might just have figured out another one...
Greetings,
Andres Freund
else would be useful to
> capture? You'd asked about a GDB -- do you want that of the main
> process or the autovac worker?
Unless you can give me gdb access directly, I don't yet have enough data
to suggest what exactly we would want to analyze with gdb in your case.
It'd be helpful if you could change log_min_messages to DEBUG1 and
reload the configuration (not restart!).
Greetings,
Andres Freund
0 remain, 0 skipped due to pins, 0 skipped
> frozen
> 266623- tuples: 0 removed, 0 remain, 0 are dead but not yet removable,
> oldest xmin: 209635953
> [...]
Do you have any non-toast ones?
> > I wonder if what might be happening is that we're somehow missed/failed
> > to update relfrozenxid and/or datfrozenxid. If you manually vacuum some
> > table in the oldest database, but that is *NOT* the oldest table itself,
> > does the problem "resolve" itself?
>
> I'll also add pg_class snapshot for next time we run the bench. I'm not sure
> if we'll be able to catch the 1h45 interval when the system stays alive
> after the issue though.
Could you just script something to stop the benchmark once the disk is
90% full or so?
Did you see any errors / fatals around the time autovacuum stopped
working?
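A minimal sketch of such a disk-full guard, assuming a POSIX shell and `df -P`; the data directory and the stop action are placeholders:

```shell
#!/bin/sh
# Print the fill percentage (digits only) of the filesystem holding $1.
disk_pct_used() {
    df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
}

# DATADIR and the stop command are placeholders for the benchmark setup;
# "." is used as the default so the sketch runs anywhere.
DATADIR=${DATADIR:-.}
if [ "$(disk_pct_used "$DATADIR")" -ge 90 ]; then
    echo "disk at least 90% full, stopping benchmark"
    # pkill -f pgbench    # or however the benchmark is driven
fi
```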
Greetings,
Andres Freund
Hi,
On 2020-03-23 16:22:47 +0100, Julien Rouhaud wrote:
> On Fri, Mar 20, 2020 at 12:03:17PM -0700, Andres Freund wrote:
> > Hi,
> >
> > On 2020-03-20 12:42:31 -0500, Justin King wrote:
> > > When we get into this state again, is there some other informatio
ck on a single table, and that table is not large,
it would be useful to get a backtrace with gdb.
Greetings,
Andres Freund
Hi,
On 2020-03-19 18:07:14 -0500, Justin King wrote:
> On Thu, Mar 19, 2020 at 5:35 PM Andres Freund wrote:
> >
> > Hi,
> >
> > On 2020-03-19 10:23:48 -0500, Justin King wrote:
> > > > From a single stats snapshot we can't actually understand the act
never seems to complete and prevents other autovacs from running.
One big difference between a manual VACUUM and autovacuum is that with
the default settings VACUUM is not throttled, but autovacuum is.
What are your vacuum_cost_delay, autovacuum_vacuum_cost_delay,
vacuum_cost_limit, autovacuum_vacuum_cost_limit, vacuum_cost_page_hit,
vacuum_cost_page_miss set to?
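One way to collect all of those settings in a single reply is a plain `pg_settings` query:

```sql
-- Current values of the vacuum/autovacuum throttling parameters asked about.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('vacuum_cost_delay', 'autovacuum_vacuum_cost_delay',
               'vacuum_cost_limit', 'autovacuum_vacuum_cost_limit',
               'vacuum_cost_page_hit', 'vacuum_cost_page_miss')
ORDER BY name;
```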
Greetings,
Andres Freund
Hi,
On March 16, 2020 1:22:18 PM PDT, Tom Lane wrote:
>Andres Freund writes:
>> On 2020-03-16 12:44:53 -0700, Andres Freund wrote:
>>> On 2020-03-15 20:11:18 -0400, Tom Lane wrote:
>>>> I wonder if we should change it to allow that when
>>>> allow_
Hi,
On 2020-03-16 12:44:53 -0700, Andres Freund wrote:
> On 2020-03-15 20:11:18 -0400, Tom Lane wrote:
> > Unfortunately, it seems like you can't do that either, short of
> > hacking up the backend or writing some custom C code, because the
> > executor won't let y
e don't do the right thing for toast
relations in plenty of places right now, because we just check for
RELKIND_RELATION - which will break junkvars etc.
Greetings,
Andres Freund
nal_error OR data_corrupted OR index_corrupted THEN
-- add pkey or something else
RAISE NOTICE 'failed to return data';
END;
END LOOP;
END
$$
should work. You can call it like
SELECT (salvaged_rec.rec).* FROM (SELECT salvaged_text::salvage_me FROM
salvage('salvage_me') AS salvaged_text) AS salvaged_rec(rec)
Greetings,
Andres Freund
Hi,
On 2020-03-09 17:47:23 +0200, Kouber Saparev wrote:
> На пт, 6.03.2020 г. в 21:00 Andres Freund написа:
> > On 2020-02-27 10:52:36 -0500, Tom Lane wrote:
> > > FWIW, I can't immediately think of a reason this would cause a problem,
> > > at least not on 9
BLE on the underlying table that prevents concurrent catalog
modifications.
Greetings,
Andres Freund
Hi,
On 2020-02-13 12:25:40 +0900, Michael Paquier wrote:
> On Wed, Feb 12, 2020 at 12:05:11PM -0800, Andres Freund wrote:
> > Yes, LLVM would work on windows. I'd not even be surprised if one could
> > make it work on windows already when using a mingw based build, instead
Hi,
On 2020-02-12 14:54:56 -0500, Tom Lane wrote:
> Andres Freund writes:
> > On 2020-02-12 09:39:19 +0100, Josef Šimánek wrote:
> >> I'm not aware of any PG 12 Windows installer with JIT/LLVM enabled.
>
> > It's not currently supported by the windows bui
003). So somebody would have to step in to make that work,
first.
Greetings,
Andres Freund
Do we really have to create a new subscriber for this test? The creation
of one isn't free. Nor is the amount of test code duplication
negligible.
Greetings,
Andres Freund
pq_."
startup option stuff to opt in, we could make it an optional parameter
to D messages (it'd be mildly hacky because unfortunately
describe_target is not a counted text), we could use an additional
describe_type etc...
Greetings,
Andres Freund
Hi,
On 2019-10-24 16:31:39 -0700, Maciek Sakrejda wrote:
> On Thu, Oct 24, 2019 at 2:25 PM Andres Freund wrote:
> > Note that the buffer access stats do *not* count the number of distinct
> > buffers accessed, but that they purely count the number of buffer
> > accesses.
>
's ~15kb) if that helps.
Or just relevant top-level excerpts.
> Also, a tangential question: why is the top-level structure of a JSON plan
> an array? I've only ever seen one root node with a Plan key there.
IIRC one can get multiple plans when there's a DO ALSO rule. There might
be other ways to get there too.
Greetings,
Andres Freund
On 2019-10-10 15:32:38 -0400, Tom Lane wrote:
> Andres Freund writes:
> > On 2019-10-09 10:16:37 -0400, Tom Lane wrote:
> >> Well, it shows that the failure is occurring while trying to evaluate
> >> a variable in a trigger's WHEN clause during
> >> &quo
On 2019-10-10 09:05:06 +0200, Andreas Joseph Krogh wrote:
> (Tom: This mail is only viewable as text/html, so if you're reading the
> text/plain version it will seem "hashed")
I'm totally not ok with that btw.
looks like this could "just" be another report of #16036, which was
already fixed in:
commit d986d4e87f61c68f52c68ebc274960dc664b7b4e
Author: Andres Freund
Date: 2019-10-04 11:59:34 -0700
Fix crash caused by EPQ happening with a before update trigger present.
> This doesn
Hi,
On 2019-10-07 17:00:24 +0900, Tatsuo Ishii wrote:
> According to the manual of RowDescription message
> https://www.postgresql.org/docs/12/protocol-message-formats.html
>
> Specifies the number of fields in a row (can be zero).
>
> Could 0 fields actually happen?
Yes, e.g.:
SEL
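The example above is cut off; one statement that does yield a zero-field RowDescription (PostgreSQL accepts an empty target list) is:

```sql
-- Returns one row with zero columns, so RowDescription reports 0 fields.
SELECT;
```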
On 2019-10-04 10:52:38 -0400, Tom Lane wrote:
> Thomas Kellerer writes:
> > It also works on Windows when I specify "correct" locale names - the above
> > seems to be an edge case.
> > Is it worth the effort to report that through the bug reporting form?
>
> No, this thread is a sufficient repor
Hi,
On 2019-07-27 19:10:22 -0400, Tom Lane wrote:
> Andres Freund writes:
> > Additionally we perhaps ought to just not use the send buffer when
> > internal_putbytes() is called with more data than can fit in the
> > buffer. We should fill it with as much data as fits in i
Hi,
On 2019-07-27 18:34:50 -0400, Tom Lane wrote:
> Andres Freund writes:
> > It might be better to just use larger send sizes however. I think most
> > kernels are going to be better than us knowing how to chop up the send
> > size.
> Yeah. The existing commentary
) until all
the data has been received.
I suspect we could still do with a bigger buffer, just to reduce the
number of syscalls in bulk loading cases, however.
Greetings,
Andres Freund
ed, the kernel will coalesce packets regardless of the
send() size.
> Does it make sense to make this parameter configurable?
I'd much rather not. It's going to be too hard to tune, and I don't see
any tradeoffs actually requiring that.
Greetings,
Andres Freund
ime
configurable, that'd add overhead to some pretty central code.
Increasing the default would need a lot of benchmarks, to prove it
doesn't cause regressions in other common scenarios.
Greetings,
Andres Freund
Hi,
On 2019-07-17 13:27:23 -0400, Tom Lane wrote:
> My concern here is that if we have blinders on to the extent of only
> processing that one table or DB, we're unnecessarily allowing bloat to
> occur in other tables, and causing that missed vacuuming work to pile
> up so that there's more of it
huge_pages. Which one definitely should - but that's an additional
configuration step that requires superuser access on most operating
systems.
Greetings,
Andres Freund
Hi,
On 2019-06-18 12:33:30 -0400, Alvaro Herrera wrote:
> On 2019-Jun-18, Andres Freund wrote:
>
> > On 2019-06-17 19:45:41 -0400, Jeff Janes wrote:
> > > If not, I would set the value small (say, 8GB) and let the OS do the
> > > heavy lifting of deciding what to ke
Hi,
On 2019-06-17 19:45:41 -0400, Jeff Janes wrote:
> If not, I would set the value small (say, 8GB) and let the OS do the
> heavy lifting of deciding what to keep in cache.
FWIW, in my opinion this is not a good idea in most cases. E.g. linux's
pagecache doesn't scale particularly gracefully to
On 2019-06-16 12:25:58 -0400, Jeff Janes wrote:
> Right, but true only because they were "checkpoint starting: immediate".
> Otherwise the reported write time includes intentional sleeps added to
> honor the checkpoint_completion_target. A bit confusing to report it that
> way, I think.
+1
It's
ent community.
Greetings,
Andres Freund
ing n_tup_ins?
> Moreover, I don't have any stat for the parent table root, neither for
> y2018.
> Some explanation/pointer is appreciated.
That clearly seems wrong. Could you try to build a small reproducer?
Greetings,
Andres Freund
I hope you have a recent backup to restore from.
Should there not be a backup, couldn't we weaken the error checks during
replay a bit (locally), to allow replay to progress? The indexes will be
toast, but it ought to allow recovering the table data completely.
Greetings,
Andres Freund
way the
relation looks.
S2: ALTER TABLE foo ADD COLUMN bar INT;
S2: BEGIN; LOCK pg_attribute;
S1: SELECT * FROM foo;
Here S1 is blocked, because it needs to look at pg_attribute to figure
out the "shape" of the table, but it's currently locked.
Greetings,
Andres Freund
le way to connect and consume notifications but cannot
> find any example or documentation on how to do that.
If you really wanted to go down that road, you'd have to read the
protocol specs. It'd not make sense to document how-to steps for
something as low-level as this.
Greetings,
Andres Freund
x AMs).
Well, it can be beneficial to create a new index concurrently, and then
drop the old one concurrently. Before v12 that's the only way to
recreate an index during production, if it e.g. bloated.
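Sketched with placeholder index and table names:

```sql
-- Build a replacement index without blocking writes, then drop the bloated
-- one; all names here are placeholders.
CREATE INDEX CONCURRENTLY myindex_new ON mytable (mycol);
DROP INDEX CONCURRENTLY myindex;
ALTER INDEX myindex_new RENAME TO myindex;
```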
Greetings,
Andres Freund
t, of course (i.e. the double buffering
youmention).
Greetings,
Andres Freund
> to get the patch from the link.
Hm, odd. There's a link on the page "Latest attachment" - but for
unknown reasons that's broken. I've attached it for now, but will also
inquire with the webadmin team about what's up.
Greetings,
Andres Freund
From ba93ae02e
ts
in between, thereby reducing latency), and that is actually supported by
postgres. Some drivers make fairly extensive use of it (e.g. the pgjdbc
driver).
As for libpq: No, that's currently not supported. There is a patch that
I hope to get into v13 however: https://commitfest.postgresql.org/20/1317
Greetings,
Andres Freund
ws/xp-cmdshell-server-configuration-option?view=sql-server-2017
and it allows to execute shell code (as a specified user) even when not
a sysadmin:
https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/xp-cmdshell-transact-sql?view=sql-server-2017#xp_cmdshell-proxy-account
Greetings,
Andres Freund
nd Windows 2019 are the same thing... And the
latter has been out for longer than yesterday... I don't know if
anybody has done rigorous testing on it however.
I'd be somewhat surprised if it didn't just work however.
Greetings,
Andres Freund
On 2019-04-03 10:44:06 +0530, preejackie wrote:
> Hi Andres,
>
> Thanks for the reply! Please see my comments inline.
>
> On 03/04/19 3:20 AM, Andres Freund wrote:
> > Hi,
> >
> > On 2019-04-02 00:51:51 +0530, preejackie wrote:
> > > As LLVM ORC supp
,
connect to it, vacuum it, and change datallowconn back.
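As a sketch (the database name is a placeholder, and this needs superuser):

```sql
-- Temporarily allow connections again, then undo it afterwards.
UPDATE pg_database SET datallowconn = true WHERE datname = 'mydb';
-- \c mydb   and then:  VACUUM;
UPDATE pg_database SET datallowconn = false WHERE datname = 'mydb';
```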
Greetings,
Andres Freund
the generated code is
so bad that it's much more likely to get big benefits by improving the
generated IR, compared to giving more hints to the optimizer.
Greetings,
Andres Freund
h PROGRAM is executed directly by
the server, not by the client application, must be executable by the
PostgreSQL user. COPY naming a file or command is only allowed to
database superusers or users who are granted one of the default roles
pg_read_server_files, pg_write_server_files, or
pg_execute_server_program, since it allows reading or writing any file
or running a program that the server has privileges to access."
Those seem reasonable to me?
Greetings,
Andres Freund
oes so. It's recommended to limit the size of the
result set a bit using the nchanges parameter so you can call it in
smaller increments.
Greetings,
Andres Freund
de any files? Remove any? If so which?
Greetings,
Andres Freund
ault wal_sync_method to fdatasync. With recent Linux versions,
* xlogdefs.h's normal rules will prefer open_datasync, which (a) doesn't
* perform better and (b) causes outright failures on ext4 data=journal
* filesystems, because those don't support O_DIRECT.
*/
#define PLATFORM_DEFAULT_SYNC_METHOD SYNC_METHOD_FDATASYNC
Greetings,
Andres Freund
On 2019-02-18 10:33:50 -0500, Ravi Krishna wrote:
> Are there any plans to support PG on WSL ? Just curious.
I think if somebody wanted to start investing efforts to improve testing
of that setup, and then fix the resulting issues, nobody would seriously
object. But also most people working on PG
ent it? (Android?)
I'm not sure I see much need for leaving the warning in out of
principle. Feels like we ought to call sync_file_range once at postmaster
startup and then just force-disable the flush GUCs if it's not available?
Greetings,
Andres Freund
On February 15, 2019 9:44:50 AM PST, Tom Lane wrote:
>Andres Freund writes:
>> On February 15, 2019 9:13:10 AM PST, Tom Lane
>wrote:
>>> I'm of the opinion that we shouldn't be panicking for
>sync_file_range
>>> failure, period.
>
>> With s
On February 15, 2019 9:13:10 AM PST, Tom Lane wrote:
>Andres Freund writes:
>> I suspect that's because WSL has an empty implementation of
>> sync_file_range(), i.e. it unconditionally returns ENOSYS. But as
>> configure detects it, we still emit calls for it. I gu
ly can work around it, mostly, by setting
checkpoint_flush_after = 0 and bgwriter_flush_after = 0.
Greetings,
Andres Freund
egments, you
> need to keep your replication slot in a position that will hold that WAL in
> place. And that is what will prevent old rows from being vacuumed
> away.
Note replication slots only prevent old *catalog* rows from being
removed, not old row versions in user created tables.
Greetings,
Andres Freund
Hi,
On 2019-01-22 11:10:27 -0600, Jeremy Finzel wrote:
> P.S. do heed the advice of the others and get more familiar with the docs
> around WAL archiving.
Logical replication doesn't normally interact with WAL archiving in any
way, so that seems orthogonal.
Greetings,
Andres Freund
ion won't start in the past,
and the slot will prevent necessary WAL from being removed. Are you
manually removing WAL from pg_wal/?
Greetings,
Andres Freund
resting happen since your last successful autovacuum on that table?
> Database crashes, WAL-related parameter changes, that sort of thing?
I think it's entirely conceivable that the damage happened with earlier
versions,
and just became visible now as the global horizon increased.
Greetings,
Andres Freund
On 2018-12-12 17:37:47 -0500, Ravi Krishna wrote:
> I am running EXPLAIN ANALYZE on a query which reads from two large
> tables (122mil and 37 mil). The query is an UPDATE where we use a
> derived table in the FROM clause and then join it back to the table
> being updated.
> The explain analy
On 2018-12-08 15:23:19 -0800, Rob Sargent wrote:
>
>
> > On Dec 8, 2018, at 3:12 PM, Andres Freund wrote:
> >
> > On 2018-12-08 12:06:23 -0800, Jeremy Schneider wrote:
> >> On RDS PostgreSQL, the default is 25% of your server memory. This seems
> >&
On 2018-12-08 12:06:23 -0800, Jeremy Schneider wrote:
> On RDS PostgreSQL, the default is 25% of your server memory. This seems
> to be pretty widely accepted as a good starting point on PostgreSQL.
FWIW, I think it's widely cited, but also bad advice. 25% for an OLTP
workload on a 1TB machine wit
Hi,
On 2018-11-06 17:11:40 -0500, Tom Lane wrote:
> Andres Freund writes:
> > On 2018-11-06 16:47:20 -0500, Tom Lane wrote:
> >> Looks like somebody forgot to list
> >> RELOPT_KIND_GIST in RELOPT_KIND_INDEX, but is that the
> >> fault of commit c203d6cf8 or wa
Hi,
On 2018-11-06 23:11:29 +0100, Tomas Vondra wrote:
> On 11/6/18 10:54 PM, Andres Freund wrote:
> > Looks new:
> > + RELOPT_KIND_INDEX =
> > RELOPT_KIND_BTREE|RELOPT_KIND_HASH|RELOPT_KIND_GIN|RELOPT_KIND_SPGIST,
> >
> > there aren't any other "
_KIND_INDEX =
RELOPT_KIND_BTREE|RELOPT_KIND_HASH|RELOPT_KIND_GIN|RELOPT_KIND_SPGIST,
there aren't any other "for all indexes" type options, so the whole
category didn't exist before.
It also strikes me as a really bad idea, even if RELOPT_KIND_GIST
wouldn't have been omitted: It breaks index am extensibility.
Greetings,
Andres Freund
"problem".
>
> Please check and comment.
The developer of the FDW seems to have made the requisite fixes. I'm not
sure what you'd like us to do?
Greetings,
Andres Freund
by telephone at the above number. Thank you.
GNGNGNG.
Greetings,
Andres Freund
better approach wouldn't be to add an errcontext for
vacuum, where we continually update the block number etc. There's plenty
of different sources of corruption that'd potentially cause debug messages
or errors, and that should get most of them.
Greetings,
Andres Freund
On 2018-10-17 11:02:40 -0700, Adrian Klaver wrote:
> On 10/17/18 10:57 AM, Ravi Krishna wrote:
> >
> > >
> > > Please note that odbc_fdw is not maintained by the postgresql developers,
> > > but a separate project.
> >
> >
> > Translation: You are on your own. We are hoping this will make our
ue to odbcfdw's authors/github tracker.
> > Or pay a company for support.
> >
>
> On a related note is fdw for Oracle and SQLServer supported by the community ?
They're not postgresql.org projects if that's what you're asking. IOW,
they probably have their own communities; it's just not this one.
Greetings,
Andres Freund