Trigger overhead/performance and alternatives?
Hello,

Apologies if this is not the correct forum for the non-urgent question that follows. I was reading the PGCon 2018 slides by Jonathan Katz (from slide 110 onwards), http://www.pgcon.org/2018/schedule/attachments/480_realtime-application.pdf, which mention trigger overhead and offer an alternative solution (monitoring a logical replication slot).

My two-part question is:

1) Does anybody have any benchmarks on trigger overhead/performance, or experience that could give some sort of indication?

2) Is anybody aware of any other clever alternatives to triggers - Postgres extensions, GitHub projects, etc.?

Thanks in advance,
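For question 1, a minimal micro-benchmark sketch such as the following (my own illustration, not from the slides) can give a rough idea of per-row trigger overhead: it times N single-row inserts into a plain table, then repeats the run with a simple plpgsql audit trigger attached. The table names, trigger function, and DSN are all hypothetical; it assumes Python with psycopg2 against a scratch database.

    # Rough sketch of a trigger-overhead micro-benchmark (illustrative only).
    # Assumes psycopg2 and a scratch database reachable via the DSN below.
    import time
    import psycopg2

    DSN = "dbname=test"   # hypothetical connection string, adjust as needed
    N = 10_000            # number of single-row inserts per run

    SETUP = """
    DROP TABLE IF EXISTS bench_target, bench_audit;
    CREATE TABLE bench_target (id serial PRIMARY KEY, payload text);
    CREATE TABLE bench_audit  (id bigserial PRIMARY KEY, target_id int,
                               logged_at timestamptz);
    CREATE OR REPLACE FUNCTION bench_audit_fn() RETURNS trigger AS $$
    BEGIN
        INSERT INTO bench_audit (target_id, logged_at) VALUES (NEW.id, now());
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;
    """

    TRIGGER = """
    CREATE TRIGGER bench_audit_trg AFTER INSERT ON bench_target
    FOR EACH ROW EXECUTE PROCEDURE bench_audit_fn();
    """

    def run_inserts(cur, n):
        # Time n individual inserts; returns elapsed seconds.
        start = time.perf_counter()
        for i in range(n):
            cur.execute("INSERT INTO bench_target (payload) VALUES (%s)",
                        (f"row {i}",))
        return time.perf_counter() - start

    conn = psycopg2.connect(DSN)
    conn.autocommit = True
    cur = conn.cursor()
    cur.execute(SETUP)
    baseline = run_inserts(cur, N)      # no trigger attached
    cur.execute(TRIGGER)
    with_trigger = run_inserts(cur, N)  # same workload, trigger attached
    cur.close()
    conn.close()

    print(f"no trigger:   {baseline:.2f}s for {N} inserts")
    print(f"with trigger: {with_trigger:.2f}s for {N} inserts "
          f"({with_trigger / baseline:.2f}x)")

Obviously the ratio depends heavily on what the trigger body does, how wide the rows are, and whether inserts are batched, so treat any number it prints as indicative only.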
Re: FPGA optimization ...
From what I have read and the benchmarks I have seen, FPGA shines for writes (and, from memory, up to 3x real-world speedup for queries, as opposed to the 10x claim), while GPU shines and outperforms FPGA for reads. There is a very recent and interesting academic paper [1] on a high-performance GPU B-tree (vs. LSM) and the incredible performance it gets, but I 'think' it requires NVIDIA hardware (so no easy/super EPYC + GPU + HBM on-chip combo solution then ;) ).

Won't both FPGA and GPU require changing the executor from pull to push to get real benefits from them? Isn't that something Andres is working on (pull to push)?

What is really exciting is UPMEM (little 500 MHz processors on the memory itself); the cost will be little more than the memory itself, and it shows up to 20x performance improvement on things like index search (from memory). It is a C library and, from memory, they claim it only needs a few hundred lines of code to integrate, but it is not clear to me what use cases it can be used for beyond the ones they show benchmarks for.

[1] https://escholarship.org/content/qt1ph2x5td/qt1ph2x5td.pdf?t=pkvkdm
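For anyone not familiar with the pull-vs-push terminology, here is a toy sketch (my own illustration, nothing to do with the actual PostgreSQL executor source) of the difference for a trivial scan -> filter -> count pipeline; the push shape, where the producer drives rows into downstream consumers, is the one that batch-oriented accelerators such as GPUs and FPGAs tend to prefer.

    # Toy illustration of pull (Volcano/iterator) vs. push execution styles.
    rows = [{"id": i, "val": i % 7} for i in range(100)]

    # Pull style: each operator asks its child for the next row on demand.
    def pull_scan():
        for row in rows:
            yield row

    def pull_filter(child):
        for row in child:
            if row["val"] == 0:
                yield row

    def pull_count(child):
        return sum(1 for _ in child)

    # Push style: the scan drives execution and pushes each row into its
    # consumer chain, so work can be batched and offloaded more naturally.
    def push_pipeline():
        state = {"count": 0}

        def count_consume(row):
            state["count"] += 1

        def filter_consume(row):
            if row["val"] == 0:
                count_consume(row)

        for row in rows:          # the producer, not the root, drives the loop
            filter_consume(row)
        return state["count"]

    assert pull_count(pull_filter(pull_scan())) == push_pipeline()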
