Fwd: Disable autocommit inside dbeaver

2023-12-06 Thread arun chirappurath
Hi All,

Is there a way to disable the autocommit option from within the query editor
itself, rather than by choosing auto-commit from the drop-down menu?

Thanks,
Arun


Syntax

2023-12-07 Thread arun chirappurath
Hi All,

What is the difference between, or the use case for, the syntaxes below?

do $$
declare
  d int;
begin
  RAISE INFO 'Script started at %', CURRENT_TIMESTAMP;
  update employees set first_name = 'g' where employee_id = 1;
  get diagnostics d = row_count;
  raise info 'updated: % rows', d;
  RAISE INFO 'Script finished at %', CURRENT_TIMESTAMP;
end;
$$;

Or just:

BEGIN;
-- update statements
COMMIT;
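For context (my reading, not stated in the thread): a DO block is a single statement that runs inside whatever transaction surrounds it and gives you procedural features such as variables, RAISE, and GET DIAGNOSTICS, while BEGIN/COMMIT simply group plain SQL statements into one explicit transaction. A minimal sketch of the latter, reusing the thread's employees table:

```sql
-- Explicit transaction: both updates commit or roll back together,
-- but there is no procedural logic (no variables, RAISE, etc.).
BEGIN;
UPDATE employees SET first_name = 'g' WHERE employee_id = 1;
UPDATE employees SET first_name = 'h' WHERE employee_id = 2;
COMMIT;
```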


write a sql block which will commit if both updates are successful else it will have to be rolled back

2023-12-07 Thread arun chirappurath
Hi All,

Can someone guide me to "write a SQL block which will commit if both
updates are successful, else it will have to be rolled back"?
I would like to explicitly specify both commit and rollback in the code.

I would like to turn off autocommit and then execute the query.

Below is just a starter; it doesn't have a COMMIT clause.

DO $$
DECLARE
  emp_id1 INT := 1; -- Assuming employee ID for the first update
  new_salary1 NUMERIC := 1; -- New salary for the first update

  emp_id2 INT := 2; -- Assuming employee ID for the second update
  new_salary2 NUMERIC := 3; -- New salary for the second update
BEGIN
  -- Update Statement 1
  UPDATE employees
  SET salary = new_salary1
  WHERE employee_id = emp_id1;

  -- Update Statement 2
  UPDATE employees
  SET salary = new_salary2
  WHERE employee_id = emp_id2;

  EXCEPTION
    WHEN OTHERS THEN
      -- An error occurred during the updates; log it. Note that entering
      -- this handler already rolls back the block's changes, so an
      -- explicit ROLLBACK is neither needed nor allowed here.
      RAISE NOTICE 'Error during updates: %', SQLERRM;
END $$;

select * from public.employees;
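One wrinkle worth noting: in a PL/pgSQL block, entering an EXCEPTION handler automatically rolls back the block's changes, and explicit COMMIT/ROLLBACK are not allowed there. So the all-or-nothing behaviour asked for above can be sketched like this (salary values taken from the starter script; this is a sketch, not a tested production script):

```sql
DO $$
BEGIN
    UPDATE employees SET salary = 1 WHERE employee_id = 1;
    UPDATE employees SET salary = 3 WHERE employee_id = 2;
    -- If control reaches here, both updates succeeded; the transaction
    -- around the DO statement commits them (with autocommit on).
EXCEPTION
    WHEN OTHERS THEN
        -- Any error undoes both updates automatically on entry here.
        RAISE NOTICE 'Error during updates, changes rolled back: %', SQLERRM;
END $$;
```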

Thanks,
Arun


Disable script execution in server level when updating via grids

2023-12-07 Thread arun chirappurath
Hello All,

Is there a way to disable grid-based updates from clients on the server
side?

Suppose someone accidentally commits an edit in DBeaver; the server should
decline that incoming request. However, requests from the query tool should
still run.

I have seen some options from the client side. Do we have some options in
server side?

[image: image.png]

Thanks,
Arun


Import csv to temp table

2024-01-02 Thread arun chirappurath
Dear All,

Do we have any scripts that create a temp table with column names taken
from the first row of a CSV file?

Or any function to which we can pass the file name as a parameter, and
which loads the data based on the CSV contents?

Thanks,
ACDBA
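One way to sketch this outside the database: derive the CREATE TEMP TABLE statement from the CSV header with standard shell tools, then load with psql's \copy. The file name, table name, and all-text columns are assumptions for illustration:

```shell
# Build a CREATE TEMP TABLE statement from a CSV header row, then load the
# file with \copy. Every column is typed text here; refine types afterwards.
csv=$(mktemp)
printf 'id,make,year\n1,Toyota,2020\n' > "$csv"   # stand-in for the real file
cols=$(head -n1 "$csv" | tr ',' '\n' | sed 's/$/ text/' | paste -sd, -)
ddl="CREATE TEMP TABLE staging ($cols);"
echo "$ddl"
# Temp tables are per-session, so the DDL and the \copy must run in the
# same psql session:
# psql -c "$ddl" -c "\\copy staging FROM '$csv' CSV HEADER"
```

This prints `CREATE TEMP TABLE staging (id text,make text,year text);` for the sample header.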


Sample data generator for performance testing

2024-01-02 Thread arun chirappurath
Hi All,

Do we have any open-source tools which can be used to create sample data at
scale from our Postgres databases, taking data distribution and randomness
into account?

Regards,
Arun


Re: Sample data generator for performance testing

2024-01-03 Thread arun chirappurath
Hi Adrian,

Thanks for your mail.

> Is this for all tables in the database or a subset?
Yes.

> Does it need to deal with foreign key relationships?
No.

> What are the sizes of the existing data and what size sample data do you
> want to produce?
1 GB, and 1 GB of test data.

On Wed, 3 Jan, 2024, 22:40 Adrian Klaver,  wrote:

> On 1/2/24 23:23, arun chirappurath wrote:
> > Hi All,
> >
> > Do we have any open source tools which can be used to create sample data
> > at scale from our postgres databases?
> > Which considers data distribution and randomness
>
>
>
> >
> > Regards,
> > Arun
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Sample data generator for performance testing

2024-01-03 Thread arun chirappurath
On Wed, 3 Jan, 2024, 23:03 Adrian Klaver,  wrote:

> On 1/3/24 09:24, arun chirappurath wrote:
> > Hi Adrian,
> >
> > Thanks for your mail.
> >
> > Is this for all tables in the database or a subset? Yes
>
> Yes all tables or yes just some tables?
> All tables, except some which have user details.


> >
> > Does it need to deal with foreign key relationships? No
> >
> > What are the sizes of the existing data and what size sample data do you
> > want to produce?1Gb and 1Gb test data.
>
> If the source data is 1GB and the test data is 1GB then there is no
> sampling, you are using the data population in its entirety.
>
> Yes. I would like to double the load and test.


Also, do we have any standard methods for sampling and generating test data?

>
>
>
> > On Wed, 3 Jan, 2024, 22:40 Adrian Klaver,  > <mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > On 1/2/24 23:23, arun chirappurath wrote:
> >  > Hi All,
> >  >
> >  > Do we have any open source tools which can be used to create
> > sample data
> >  > at scale from our postgres databases?
> >  > Which considers data distribution and randomness
> >
> >
> >
> >  >
> >  > Regards,
> >  > Arun
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com <mailto:adrian.kla...@aklaver.com>
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Sample data generator for performance testing

2024-01-03 Thread arun chirappurath
Thanks for the insights..

Thanks,
Arun

On Wed, 3 Jan, 2024, 23:26 Jeremy Schneider, 
wrote:

> On 1/2/24 11:23 PM, arun chirappurath wrote:
> > Do we have any open source tools which can be used to create sample data
> > at scale from our postgres databases?
> > Which considers data distribution and randomness
>
> I would suggest to use the most common tools whenever possible, because
> then if you want to discuss results with other people (for example on
> these mailing lists) then you're working with data sets that are widely
> and well understood.
>
> The most common tool for PostgreSQL is pgbench, which does a TPCB-like
> schema that you can scale to any size, always the same [small] number of
> tables/columns and same uniform data distribution, and there are
> relationships between tables so you can create FKs if needed.
>
> My second favorite tool is sysbench. Any number of tables, easily scale
> to any size, standardized schema with a small number of columns and no
> relationships/FKs.  Data distribution is uniformly random however on the
> query side it supports a bunch of different distribution models, not
> just uniform random, as well as queries processing ranges of rows.
>
> The other tool that I'm intrigued by these days is benchbase from CMU.
> It can do TPCC and a bunch of other schemas/workloads, you can scale the
> data sizes. If you're just looking at data generation and you're going
> to make your own workloads, well benchbase has a lot of different
> schemas available out of the box.
>
> You can always hand-roll your schema and data with scripts & SQL, but
> the more complex and bespoke your performance test schema is, the more
> work & explaining it takes to get lots of people to engage in a
> discussion since they need to take time to understand how the test is
> engineered. For very narrowly targeted reproductions this is usually the
> right approach with a very simple schema and workload, but not commonly
> for general performance testing.
>
> -Jeremy
>
>
> --
> http://about.me/jeremy_schneider
>
>
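Jeremy's pgbench suggestion in practice looks something like the following (the database name "testdb" is an example; scale factor 50 is very roughly 750 MB of generated data):

```shell
# Initialize the TPC-B-like pgbench schema at scale factor 50, then run an
# 8-client, 4-thread benchmark for 60 seconds.
pgbench -i -s 50 testdb
pgbench -c 8 -j 4 -T 60 testdb
```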


Unable to find column

2024-01-15 Thread arun chirappurath
Dear all,

I have a table automobile which has a column id.

The table consists of id, make, and year of manufacturing.

I use DBeaver for querying.

Select * from automobile gives me results.

However, select id from automobile yields "column does not exist".

I tried double quotes on id as well, but got the same error.

But if I drag and drop id, or use the id that is auto-prompted by
DBeaver, it works 💪.

select "id" from automobile works when the id is dragged and dropped in
DBeaver.

But if I manually type "id" it won't.

Any clue on this?

Visually both statements are alike.


Table is in public schema.

Thanks,
Arun
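A common cause of this symptom is a column name that contains an invisible or non-ASCII character (for example a zero-width space), so the typed `id` and the real name only look identical. A sketch to inspect the actual bytes of the column names (table name taken from the thread):

```sql
-- Show each column name, its length, and its raw bytes; a "3-character"
-- id or unexpected bytes would reveal a hidden character.
SELECT attname,
       length(attname)               AS name_length,
       convert_to(attname, 'UTF8')   AS raw_bytes
FROM pg_attribute
WHERE attrelid = 'public.automobile'::regclass
  AND attnum > 0
  AND NOT attisdropped;
```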


Re: Unable to find column

2024-01-15 Thread arun chirappurath
Hi Adrian,

\d shows the table, and the id column, which is backed by a sequence.

Regards,
Arun

On Mon, 15 Jan 2024 at 22:03, Adrian Klaver 
wrote:

> On 1/15/24 08:16, arun chirappurath wrote:
> > Dear all,
> >
> > I have a table automobile which has a column id.
> >
> > Table consists of id,make,year of manufacturing
> >
> > I use dbeaver for querying..
> >
> > Select * from automobile provides me results
> >
> > However select id from automobile yields column doesn't exists.
> >
> > I tried double quotes on id As well but same error.
> >
> > But if I drag and drop id or use the id which is auto prompted from
> > dbeaver,it works 💪.
> >
> > select "id" from automobile using drag and drop of id in dbeaver works.
> >
> > But if I manually type "id" it won't.
> >
> > Any clue on this.
>
> Do you have psql (https://www.postgresql.org/docs/current/app-psql.html)
> available?
>
> If so in psql what does:
>
> \d automobile
>
> return?
>
>
> >
> > Visually both statements are alike
> >
> >
> > Table is in public schema.
> >
> > Thanks,
> > Arun
> >
> >
> >
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


postgres sql assistance

2024-01-16 Thread arun chirappurath
Dear all,

I am an accidental Postgres DBA and learning things every day. Apologies
if my questions are not properly drafted.

I am trying to load data from a temp table into the main table and catch
the exceptions in another table.

The temp table columns are cast to the main table's data types while
loading the data.

The temp table is below:

 category_name                         | description                        | is_active
---------------------------------------+------------------------------------+-----------
 Tech123212312312323233213123123123123 | Furniture and home decor           | true
 Tech123212312312323233213123123123123 | Electronic devices and accessories | true
 Elec                                  | Books of various genres            | 15
 TV                                    | Books                              | 12
 cla                                   | Apparel and fashion accessories    | true

category_name is varchar(25) and is_active is boolean in the main table. So
I should get exceptions on category_name for rows 1 and 2, and on the
boolean for rows 4 and 5.

The exception table results are below. Instead of showing the exception for
the value 12 in is_active, it shows the old exception for 15 again. The
script is attached; the SQLERRM value is not getting updated for the row
with 12. What could be the reason for this?

value too long for type character varying(25) | category_name | 1 | 2024-01-16 16:17:01.279 +0530
value too long for type character varying(25) | description   | 2 | 2024-01-16 16:17:01.279 +0530
invalid input syntax for type boolean: "15"   | is_active     | 3 | 2024-01-16 16:17:01.279 +0530
invalid input syntax for type boolean: "15"   |               | 4 | 2024-01-16 16:17:01.279 +0530
invalid input syntax for type boolean: "15"   |               | 5 | 2024-01-16 16:17:01.279 +0530
CREATE OR REPLACE FUNCTION insert_temp_data_to_main_table()
RETURNS VOID AS $$
DECLARE
    v_main_table_name  TEXT := 'main_categories';
    v_temp_table_name  TEXT := 'tmp_categories';
    v_error_table_name TEXT := 'error_log_table';
    v_sql_statement    TEXT;
BEGIN
    -- Clear the error log table
    EXECUTE 'TRUNCATE TABLE ' || v_error_table_name;

    -- Build the complete SQL statement with aggregated columns and select clauses
    v_sql_statement := format('
        INSERT INTO %I (%s)
        SELECT %s
        FROM %I',
        v_main_table_name,
        (SELECT string_agg(column_name, ', ')
         FROM information_schema.columns
         WHERE table_name = v_main_table_name),
        (SELECT string_agg('CAST(' || v_temp_table_name || '.' || column_name || ' AS ' || data_type || ')', ', ')
         FROM information_schema.columns
         WHERE table_name = v_temp_table_name),
        v_temp_table_name);

    -- Print the SQL statement
    RAISE NOTICE 'Generated SQL statement: %', v_sql_statement;

    -- Insert data into the main table from the temp table
    EXECUTE v_sql_statement;

EXCEPTION
    WHEN others THEN
        DECLARE
            v_error_msg          TEXT;
            v_failed_column_name TEXT;
            v_row_counter        INT := 1;
        BEGIN
            -- Get the specific error message
            v_error_msg := SQLERRM;

            -- Get the failed column name
            SELECT column_name INTO v_failed_column_name
            FROM information_schema.columns
            WHERE table_name = v_temp_table_name
            ORDER BY ordinal_position
            LIMIT 1 OFFSET v_row_counter - 1;

            -- Log the error into the error log table
            EXECUTE format('
                INSERT INTO %I (error_message, failed_column_name, failed_row_number)
                VALUES ($1, $2, $3)', v_error_table_name)
            USING v_error_msg, v_failed_column_name, v_row_counter;
        END;
END;
$$ LANGUAGE plpgsql;


Re: postgres sql assistance

2024-01-16 Thread arun chirappurath
Hi Jim,

Thank you so much for the kind review.


The architect is pressing for a native procedure for the data load.

I shall Google and try to find a more suitable one rather than writing one
myself.


Thanks again,
Arun

On Wed, 17 Jan, 2024, 01:58 Jim Nasby,  wrote:

> On 1/16/24 6:34 AM, arun chirappurath wrote:
> > I am trying to load data from the temp table to the main table and catch
> > the exceptions inside another table.
>
> I don't have a specific answer, but do have a few comments:
>
> - There are much easier ways to do this kind of data load. Search for
> "postgres data loader" on google.
>
> - When you're building your dynamic SQL you almost certainly should have
> some kind of ORDER BY on the queries pulling data from
> information_schema. SQL never mandates data ordering except when you
> specifically use ORDER BY, so the fact that your fields are lining up
> right now is pure luck.
>
> - EXCEPTION WHEN others is kinda dangerous, because it traps *all*
> errors. It's much safer to find the exact error code. An easy way to do
> that in psql is \errverbose [1]. In this particular case that might not
> work well since there's a bunch of different errors you could get that
> are directly related to a bad row of data. BUT, there's also a bunch of
> errors you could get that have nothing whatsoever to do with the data
> you're trying to load (like if there's a bug in your code that's
> building the INSERT statement).
>
> - You should look at the other details you can get via GET STACKED
> DIAGNOSTICS [2]. As far as I can tell, your script as-written will
> always return the first column in the target table. Instead you should
> use COLUMN_NAME. Note that not every error will set that though.
>
> 1:
>
> https://www.postgresql.org/docs/current/app-psql.html#APP-PSQL-META-COMMAND-ERRVERBOSE
> 2:
>
> https://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-EXCEPTION-DIAGNOSTICS
> --
> Jim Nasby, Data Architect, Austin TX
>
>
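Jim's point about GET STACKED DIAGNOSTICS can be sketched like this (table and variable names are illustrative; note that COLUMN_NAME is only populated for certain error types, such as NOT NULL violations):

```sql
DO $$
DECLARE
    v_state  TEXT;
    v_msg    TEXT;
    v_col    TEXT;
    v_constr TEXT;
BEGIN
    INSERT INTO main_categories (is_active) VALUES ('not-a-boolean');
EXCEPTION
    WHEN others THEN
        -- Pull the real error details instead of guessing the column.
        GET STACKED DIAGNOSTICS
            v_state  = RETURNED_SQLSTATE,
            v_msg    = MESSAGE_TEXT,
            v_col    = COLUMN_NAME,      -- empty for many error types
            v_constr = CONSTRAINT_NAME;  -- likewise
        RAISE NOTICE 'sqlstate=% msg=% column=% constraint=%',
                     v_state, v_msg, v_col, v_constr;
END $$;
```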


Unused indexes

2024-02-05 Thread arun chirappurath
Hi All,

Do we have a script to find indexes that have been unused for 30 days, and
once identified, do we have an option to disable them and re-enable them
when required?

In SQL Server we have the option to disable an index and then rebuild it to
re-enable it.

Thanks,
Arun
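As far as I know, Postgres has no "disable index" equivalent to SQL Server's; the usual approach is to list candidates from the statistics views, save their definitions (pg_get_indexdef), and drop them, recreating later if needed. A sketch of the candidate list (note the caveat: idx_scan counts since the statistics were last reset, not a rolling 30-day window):

```sql
-- Indexes never scanned since the last stats reset, largest first.
-- Unique indexes are excluded because they enforce constraints.
SELECT s.schemaname,
       s.relname        AS table_name,
       s.indexrelname   AS index_name,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size,
       s.idx_scan
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique
ORDER BY pg_relation_size(s.indexrelid) DESC;
```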


Postgres pg_cron extension

2024-02-14 Thread arun chirappurath
Dear all,

I am trying to enable the pg_cron extension in RDS Postgres, and I got to
know it can be enabled only in a custom parameter group; it can't be
enabled in the default one.

1. Suppose we create a custom group for existing Postgres 14 databases;
will all the existing parameters in the default group get copied over to
the custom group?

2. Also, will there be any impact if we change this parameter group?

3. Also, if we upgrade 14 to 15 in the future, do we need to change the
parameter group to a 15-compatible one?

Apologies if this is the wrong forum.

Thanks,
Arun


pg_locks-exclusivelock for select queries

2024-03-22 Thread arun chirappurath
Dear all,

I am running the query below on a database. Why is it creating an
ExclusiveLock on a virtualxid? I am running some SELECT queries and they
create an ExclusiveLock on a virtualxid. Is this normal?

SELECT datname, pid, state, query, age(clock_timestamp(), query_start) AS
age

FROM pg_stat_activity

WHERE state <> 'idle'

--AND query NOT LIKE '% FROM pg_stat_activity %'

ORDER BY age;

|locktype  |database|relation|virtualxid|virtualtransaction|pid   |mode           |granted|fastpath|
|----------|--------|--------|----------|------------------|------|---------------|-------|--------|
|relation  |58,007  |12,073  |          |5/165             |21,912|AccessShareLock|true   |true    |
|virtualxid|        |        |5/165     |5/165             |21,912|ExclusiveLock  |true   |true    |

Thanks,
ACDBA


Seq scan vs index scan

2024-03-22 Thread arun chirappurath
Hi All,

I have a table named users with an index on username.

CREATE TABLE users (
user_id SERIAL PRIMARY KEY,
username VARCHAR(50) NOT NULL,
email VARCHAR(100) UNIQUE NOT NULL,
age INT
);

CREATE INDEX idx_username ON users (username);

When I run the select query below, it uses a seq scan and the query returns
in 5 ms.

SELECT * FROM users WHERE username = 'example_username';

I am trying to force the query to use the index via planner settings:

Set enable_indexscan to on,
and the same for bitmap and index-only scan,

and ran the query.

However it still uses a seq scan instead of an index scan.

1. Is there a way to force the query to use an index, without changing the
default settings of Postgres RDS?

2. Is modifying random_page_cost the desired way, or a hint extension? In
which cases do we use this? Will it affect index selection for all queries?

3. I have done ANALYZE on the table and tried recreating the index; why is
it still using a seq scan?

In SQL Server we can force the index just by providing it directly in the
query.

USE AdventureWorks
GO
SELECT c.ContactID
FROM Person.Contact c
WITH (INDEX(AK_Contact_rowguid))
INNER JOIN Person.Contact pc
WITH (INDEX(PK_Contact_ContactID))
ON c.ContactID = pc.ContactID
GO

https://blog.sqlauthority.com/2009/02/07/sql-server-introduction-to-force-index-query-hints-index-hint/

Thanks,
Arun
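For what it's worth, core Postgres has no in-query index hints. To test whether the planner *can* use the index at all (as opposed to preferring the seq scan on a small table), the usual diagnostic is to discourage seq scans for the current session only:

```sql
-- Session-local planner setting, for diagnosis rather than production use.
SET enable_seqscan = off;

EXPLAIN ANALYZE
SELECT * FROM users WHERE username = 'example_username';

RESET enable_seqscan;
```

If the plan switches to an index scan here, the index works and the planner simply judged the seq scan cheaper, which is common on small tables.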


Re: Seq scan vs index scan

2024-03-22 Thread arun chirappurath
Thanks Tom, David, and Chris for the detailed opinions.

Regards,
Arun

On Sat, 23 Mar 2024 at 09:25, arun chirappurath 
wrote:

> Hi All,
>
> I have a table named  users with index on user name.
>
> CREATE TABLE users (
> user_id SERIAL PRIMARY KEY,
> username VARCHAR(50) NOT NULL,
> email VARCHAR(100) UNIQUE NOT NULL,
> age INT
> );
>
> CREATE INDEX idx_username ON users (username);
>
> When I try to do below select query it's taking seq scan and query returns
> in 5ms.
>
> SELECT * FROM users WHERE username = 'example_username';
>
> I am trying to force query to use indexes  using query hints.
>
> Set enable indexscan to ON,
> Same for bitmap and index only scan
>
> and ran the query.
>
> However it still uses seq scan instead of index scan.
>
> 1. Is there a way to force query to use an index? With out changing
> default settings of postgres rds
>
>  2. Modifying random page cost is desired the way or hint extension? In
> which case do we use this?will it affect selecting index for all queries
>
> 3.i have done analyze on the table and tried recreating index..why is it
> still taking seq scan?
>
> In Sql server we can force query just by proving it directly in query.
>
> USE AdventureWorks
> GO
> SELECT c.ContactID
> FROM Person.Contact c
> WITH (INDEX(AK_Contact_rowguid))
> INNER JOIN Person.Contact pc
> WITH (INDEX(PK_Contact_ContactID))
> ON c.ContactID = pc.ContactID
> GO
>
>
> https://blog.sqlauthority.com/2009/02/07/sql-server-introduction-to-force-index-query-hints-index-hint/
>
> Thanks,
> Arun
>


Statistics information.

2024-03-22 Thread arun chirappurath
Dear All,

Apologies for the way I am asking this question, as I am more of a SQL
Server person and new to Postgres.

I have used the Query Store in SQL Server. It provides the option to load
statistics data into a temp table and get the important information below:

1. Last run duration
2. Average execution time
3. Filter statistics for a specific function (stored procedure)
4. Filter for specific texts
5. Top queries
6. Query plans

I have used the query below to get this; it lets me see the different plans
for each procedure and even lets me force one.

Do we have similar options in Postgres? Is the pg_stat_statements extension
the answer? Will it get cleared on restart? I think it is disabled by
default in RDS.

DROP table if exists #results;
GO
select
object_name(object_id) as "object name"
, pl.[query_id]
, pl.[plan_id]
, qt.[query_text_id]
, execution_type_desc
, rts.execution_type
, avg_rowcount
, CONVERT(smalldatetime, SWITCHOFFSET(rtsi.[start_time],
DATEPART(tz, SYSDATETIMEOFFSET()))) as "interval_start_time"
, CONVERT(smalldatetime, SWITCHOFFSET(rtsi.[end_time],
DATEPART(tz, SYSDATETIMEOFFSET()))) as "interval_end_time"
, rts.[last_duration]/1000 as "last_duration_ms"
, rts.[min_duration]/1000 as "min_duration_ms"
, rts.[max_duration]/1000 as "max_duration_ms"
, ROUND(rts.[avg_duration]/1000,2) as "avg_duration_ms"
, rts.[count_executions]
, ((rts.[avg_logical_io_reads] + rts.[avg_physical_io_reads])/128)/1024 as
"avg_reads_GB"
, ((rts.[avg_logical_io_writes])/1024) as "avg_writes_GB"
, qt.[query_sql_text]
into #results
from
sys.query_store_runtime_stats rts
join
sys.query_store_runtime_stats_interval rtsi
on (rts.[runtime_stats_interval_id] = rtsi.[runtime_stats_interval_id])
join
sys.query_store_plan pl
on (rts.plan_id = pl.plan_id)
join
sys.query_store_query q
on (pl.query_id = q.[query_id])
join
sys.query_store_query_text qt
on (q.query_text_id = qt.query_text_id)
-- uncomment the lines below if you want to limit the content of the
temporary table, in most cases you can leave these commented
--where
--execution_type <> 0 --and rtsi.[start_time] >= '22 May 2019 18:00'
and
--object_id = object_id('schema.procedurename') and rtsi.[start_time] >=
'22 May 2019 18:00'
--order by
-- rtsi.[start_time] desc;
GO


*** Run the above to fetch the data ***

/*
Execute the appropriate queries below to search your temporary table
*/

--TOP 10 DISK READS
select top 10 * from #results order by avg_reads_GB desc





--SEARCH FOR SPECIFIC STRING
select * from #results where query_sql_text like '%text to search%' order by interval_start_time DESC

--SEARCH FOR SPECIFIC PROCEDURE OR FUNCTION
select * from #results where [object name] = 'Look' order by interval_start_time desc
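pg_stat_statements is the closest Postgres analogue. Assuming the extension is enabled (on RDS it requires a custom parameter group and `shared_preload_libraries`), a rough equivalent of the "top queries" report looks like this (column names are the PostgreSQL 13+ ones; counters survive restarts when `pg_stat_statements.save` is on, which is the default):

```sql
-- Top 10 statements by cumulative execution time.
SELECT calls,
       round(mean_exec_time::numeric, 2)  AS avg_ms,
       round(total_exec_time::numeric, 2) AS total_ms,
       rows,
       shared_blks_read,
       left(query, 80)                    AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

Unlike Query Store, pg_stat_statements keeps only cumulative counters per statement, with no per-execution history or plan forcing.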


Table level restore in postgres

2024-03-28 Thread arun chirappurath
Dear all,

I am a newbie in the Postgres world.

Suppose I have accidentally deleted a table or a few rows. Is it safe to
drop the table and restore just that table from a custom-format backup into
the same database?

Or should I create a new database, restore it there, and then migrate the
data?

What is the general methodology used?


I tried it on a smaller database and it worked in the same database;
however, DBeaver threw a warning saying the database may get corrupted.

Thanks,
Arun
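With a custom-format dump, pg_restore can pull out a single table; a sketch (file, table, and database names are examples):

```shell
# Restore only one table's data from a custom-format dump into the same
# database. Drop or truncate the damaged table first if it still exists.
pg_restore --dbname=mydb --table=automobile --data-only backup.dump

# To restore the table definition as well, omit --data-only (dropping the
# table beforehand, or adding --clean).
```

Caveats: this does not restore indexes, constraints, or dependent foreign keys by itself, so restoring into a scratch database and copying the rows over is often the safer methodology.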


Access issue for system queries

2024-03-29 Thread arun chirappurath
Dear all,

I have granted pg_read_all_stats and pg_read_all_settings to a user; still
they are not able to receive results from this query, it's empty. We can
run SELECT * FROM pg_stat_statements alone, but not the statement below.
What could be the reason?

WITH statements AS (
SELECT * FROM pg_stat_statements pss
 JOIN pg_roles pr ON (userid=oid)
WHERE rolname = current_user
)
SELECT calls,
   min_exec_time,
   max_exec_time,
   mean_exec_time,
   stddev_exec_time,
   (stddev_exec_time/mean_exec_time) AS coeff_of_variance,
   query
FROM statements
WHERE calls > 500
AND shared_blks_hit > 0
ORDER BY mean_exec_time DESC
LIMIT 10

Regards,
Arun


Re: Access issue for system queries

2024-03-29 Thread arun chirappurath
Ok, I'll check it out. Thank you.

On Sat, 30 Mar, 2024, 10:36 Julien Rouhaud,  wrote:

> On Sat, Mar 30, 2024 at 12:47 PM arun chirappurath 
> wrote:
> >
> > I have granted access to pg_read_all_stats and pg_read_allsettings to
> user..still they are not able to receive results from this query.its
> empty..we can run SELECT * FROM pg_stat_statements alone..but not below
> statement..what could be the reason?
> >
> > WITH statements AS (
> > SELECT * FROM pg_stat_statements pss
> >  JOIN pg_roles pr ON (userid=oid)
> > WHERE rolname = current_user
> > )
> > SELECT calls,
> >min_exec_time,
> >max_exec_time,
> >mean_exec_time,
> >stddev_exec_time,
> >(stddev_exec_time/mean_exec_time) AS coeff_of_variance,
> >query
> > FROM statements
> > WHERE calls > 500
> > AND shared_blks_hit > 0
> > ORDER BY mean_exec_time DESC
> > LIMIT 10
>
> Probably because your current user didn't run any query more than 500
> times?  Or maybe because you have some other tools that calls
> pg_stat_statements_reset() frequently enough.
>


Sql scripts execution

2024-04-24 Thread arun chirappurath
Hi All,

What is the generally used open-source solution for deploying DML and DDL
scripts for a monthly release on Postgres RDS?

Can we use GitHub Actions to perform this?

Thanks,
Arun
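A minimal sketch of such a release step, runnable locally or inside a CI job (for example a GitHub Actions workflow). The `migrations/` directory layout and the `DATABASE_URL` environment variable are assumptions:

```shell
# Apply versioned DDL/DML scripts in filename order; ON_ERROR_STOP makes
# psql abort the release on the first failing statement.
for f in migrations/*.sql; do
  echo "applying $f"
  psql "$DATABASE_URL" --set ON_ERROR_STOP=1 -f "$f"
done
```

Dedicated migration tools add tracking of which scripts have already been applied, which this bare loop does not.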


Execution history of a single query

2024-05-17 Thread arun chirappurath
Hi All,

From pg_stat_statements we can get the overall execution details of queries.

Can we get the execution details of a single queryid?

For example, today it took 6 seconds, yesterday 5, and so on, just for one
query.

Thanks,
Arun
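pg_stat_statements keeps only cumulative counters, so per-day history for one queryid has to be collected by snapshotting. A minimal sketch (the snapshot table name is an example; mean_exec_time/total_exec_time are the PostgreSQL 13+ column names):

```sql
-- One-time setup: an empty history table with the columns we care about.
CREATE TABLE IF NOT EXISTS pgss_snapshot AS
SELECT now() AS captured_at, queryid, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
WHERE false;

-- Run this on a schedule (e.g. via pg_cron), then diff consecutive
-- snapshots per queryid to see how one query's timing changes over days.
INSERT INTO pgss_snapshot
SELECT now(), queryid, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements;
```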


Execute permission to function

2024-06-24 Thread arun chirappurath
Hi all

I am using RDS Postgres 14. I have created a few users and added them to
the pg_read_all_data and pg_write_all_data roles.


They are able to read all data and do updates in tables.

However, they can't execute functions and are not able to script out
objects from pgAdmin.

Does any other role need to be added?

Thanks,
Arun
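For the function part: pg_read_all_data / pg_write_all_data do not grant EXECUTE. By default functions are executable by PUBLIC unless that was revoked, in which case an explicit grant is needed; a sketch (role and schema names are examples):

```sql
-- Existing functions in the schema:
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA public TO app_user;

-- Functions created in the future by the current role:
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT EXECUTE ON FUNCTIONS TO app_user;
```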


Table and data comparison

2024-09-03 Thread arun chirappurath
Hi All,

Do we have any open-source utility to compare the data of two tables (not
the structure) and then produce some sort of report?

A row-by-row comparison.

Thanks,
Arun
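For two tables with identical column lists, a quick hand-rolled row-by-row diff can also be done in plain SQL (table names are examples; EXCEPT ALL keeps duplicate rows distinct):

```sql
-- Rows present in one table but not the other, labelled by side.
SELECT 'only_in_a' AS side, *
FROM (TABLE table_a EXCEPT ALL TABLE table_b) d
UNION ALL
SELECT 'only_in_b' AS side, *
FROM (TABLE table_b EXCEPT ALL TABLE table_a) d;
```

An empty result means the two tables hold identical data.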