Oracle Associate Array conversion to PostgreSQL

2020-09-09 Thread JAGMOHAN KAINTURA
Hi All,

Since PostgreSQL does not support associative arrays natively, Oracle
associative arrays have to be converted with a different approach in
PostgreSQL.

The following sample code demonstrates one such approach.

--Convert the Oracle RECORD type to a PostgreSQL composite TYPE.
---
create type SLOC_TB as (
  IN_EFFECT VARCHAR(1),
  SUB_LOC_C VARCHAR(6),
  START_DT timestamp(0),
  END_DT timestamp(0),
  SLOC_PHRASE_N VARCHAR(5)
);
---

--Below is a simple anonymous block where the TYPE is used as an array ([]).
The composite-type array stores multiple rows/records addressed by index
(similar to INDEX BY BINARY_INTEGER tables in Oracle).

do
$$
declare
  lv_list SLOC_TB[];
  idx record;
begin
  -- sysdate() is not built into PostgreSQL; it assumes the orafce extension
  -- (or a user-defined wrapper around clock_timestamp()).
  lv_list[1] := ROW('X','XYZ',sysdate(),sysdate()+1, 'abc');
  RAISE INFO '%', lv_list[1].IN_EFFECT;
  RAISE INFO '%', lv_list[1].SUB_LOC_C;
  RAISE INFO '%', lv_list[1].START_DT;
  RAISE INFO '%', lv_list[1].END_DT;
  RAISE INFO '%', lv_list[1].SLOC_PHRASE_N;
  lv_list[2] := ROW('D','Tecore',sysdate(),sysdate()+1, 'MIG');
  RAISE INFO '%', lv_list[2].IN_EFFECT;
  RAISE INFO '%', lv_list[2].SUB_LOC_C;
  RAISE INFO '%', lv_list[2].START_DT;
  RAISE INFO '%', lv_list[2].END_DT;
  RAISE INFO '%', lv_list[2].SLOC_PHRASE_N;
end$$;

==
--Tested the anonymous block; it returns the values correctly.

zdcqpoc=> do
zdcqpoc-> $$
zdcqpoc$> declare
zdcqpoc$>   lv_list SLOC_TB[];
zdcqpoc$>   idx record;
zdcqpoc$> begin
zdcqpoc$>  lv_list[1] = ROW('X','XYZ',sysdate(),sysdate()+1, 'abc');
zdcqpoc$>  RAISE INFO '%', lv_list[1].IN_EFFECT;
zdcqpoc$>  RAISE INFO '%', lv_list[1].SUB_LOC_C;
zdcqpoc$>  RAISE INFO '%', lv_list[1].START_DT;
zdcqpoc$>  RAISE INFO '%', lv_list[1].END_DT;
zdcqpoc$>  RAISE INFO '%', lv_list[1].SLOC_PHRASE_N;
zdcqpoc$>  lv_list[2] = ROW('D','Tecore',sysdate(),sysdate()+1, 'MIG');
zdcqpoc$>  RAISE INFO '%', lv_list[2].IN_EFFECT;
zdcqpoc$>  RAISE INFO '%', lv_list[2].SUB_LOC_C;
zdcqpoc$>  RAISE INFO '%', lv_list[2].START_DT;
zdcqpoc$>  RAISE INFO '%', lv_list[2].END_DT;
zdcqpoc$>  RAISE INFO '%', lv_list[2].SLOC_PHRASE_N;
zdcqpoc$> end$$;
INFO:  X
INFO:  XYZ
INFO:  2020-09-08 03:29:52
INFO:  2020-09-09 03:29:52
INFO:  abc
INFO:  D
INFO:  Tecore
INFO:  2020-09-08 03:29:52
INFO:  2020-09-09 03:29:52
INFO:  MIG
DO

But a problem arises when we want to assign a value to a specific field of an
array element.
In Oracle we would simply write:
   lv_list(2).START_DT := sysdate + 1;

But the equivalent does not work in PostgreSQL: it fails with a syntax error
at ".".

What would be the method for this kind of single-field assignment in an
array created from a composite type?
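
One workaround is to copy the element into a scalar variable of the composite
type, modify the field there, and assign the element back. This is only a
sketch reusing the SLOC_TB type from above; now() stands in for sysdate() so
the block runs on stock PostgreSQL 12:

```sql
do
$$
declare
  lv_list SLOC_TB[];
  lv_elem SLOC_TB;   -- temporary variable of the composite type
begin
  lv_list[1] := ROW('X','XYZ',now()::timestamp(0),
                    (now() + interval '1 day')::timestamp(0),'abc');
  lv_elem := lv_list[1];                                   -- copy the element out
  lv_elem.START_DT := (now() + interval '1 day')::timestamp(0);  -- change one field
  lv_list[1] := lv_elem;                                   -- write it back
  RAISE INFO '%', lv_list[1].START_DT;
end$$;
```

If upgrading is an option, my understanding is that PostgreSQL 14's reworked
PL/pgSQL assignment parsing accepts the direct form lv_list[1].START_DT := ...,
but on v12 the copy-out/copy-back pattern above is needed.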


*Best Regards,*
Jagmohan


Failed Login Attempts in PostgreSQL

2020-11-13 Thread Jagmohan Kaintura
Hi Team,
I was looking for a workaround to configure the equivalent of Oracle's
failed-login-attempts feature in PostgreSQL.
The only requirement is that an end user shouldn't be allowed to log in after
"n" unsuccessful attempts.

Users have the ability to perform all operations on the underlying tables,
so we want to lock them out after "n" unsuccessful attempts.

I couldn't find any source for this anywhere.

*Best Regards,*
Jagmohan


ERROR : invalid transaction termination : PostgreSQL v12

2020-11-23 Thread Jagmohan Kaintura
Hi Team,

We have many batch jobs in Oracle which commit after processing a few
records. These batch jobs process data in subsets and call the transaction
control statements COMMIT on success and ROLLBACK on failure.

While converting to PostgreSQL we kept the same format, with COMMIT and
ROLLBACK. But execution fails with the error message below.
ERROR:  invalid transaction termination
CONTEXT:  PL/pgSQL function inline_code_block line 29 at COMMIT

The PL/pgSQL transaction management documentation
"https://www.postgresql.org/docs/12/plpgsql-transactions.html" describes a
restriction that matches our situation:

Transaction control is only possible in CALL or DO invocations from the top
level or nested CALL or DO invocations without any other intervening
command. For example, if the call stack is CALL proc1() → CALL proc2() → CALL
proc3(), then the second and third procedures can perform transaction
control actions. But if the call stack is CALL proc1() → SELECT func2() → CALL
proc3(), then the last procedure cannot do transaction control, because of
the SELECT in between.
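
A minimal sketch of that restriction (procedure and function names here are
illustrative, not from our codebase):

```sql
-- COMMIT is allowed in a top-level CALL, but not when a SELECT sits in the
-- call stack between the top level and the procedure.
CREATE PROCEDURE p_commit() LANGUAGE plpgsql AS $$
BEGIN
  COMMIT;
END$$;

CREATE FUNCTION f_wrap() RETURNS void LANGUAGE plpgsql AS $$
BEGIN
  CALL p_commit();
END$$;

CALL p_commit();   -- works: top-level CALL
SELECT f_wrap();   -- ERROR: invalid transaction termination
```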

My call stack is: CALL batch job => SELECT function used in SQL statements
==> CALL procedure. We have transaction control in the "CALL batch job"
level only.

The pseudo code is below; the tms$vvd.is_vessel_departed(...) expression is a
function call, and the procedure fails when executed because we use functions
inside it.
*Can anyone help with how I can implement batch jobs that commit at intervals
of a few (here, 2) records?*
We can't remove this function from the statement, as its value depends on a
column value.

CREATE OR REPLACE PROCEDURE TEST_TRANSACTION()
LANGUAGE 'plpgsql'
SECURITY DEFINER
AS $BODY$
DECLARE
-- VARCHAR2 and trunc() on a timestamp assume the orafce extension;
-- on stock PostgreSQL use varchar and date_trunc('day', ...).
G_LAST_UPDATE_USER_SYSTEM VARCHAR2(6) := 'SYSTEM';
G_LAST_UPDATE_MODULE_BATCH VARCHAR2(5) := 'BATCH';

G_CNTR_LOADING_EXPIRED TMS_CONTAINER_LOADING.STATUS_CODE%TYPE := '90';
G_DG_MANIFEST_DELETED TMS_DG_MANIFEST.STATUS_CODE%TYPE := '80';
G_DG_MANIFEST_COMPLETED TMS_DG_MANIFEST.STATUS_CODE%TYPE := '95';

v_num_day numeric;
v_batch_count numeric;
v_log_count numeric := 0;
v_local_batch_count numeric;
BEGIN
  v_batch_count := 0;
  LOOP
    update tms_container_loading
       set status_code = G_CNTR_LOADING_EXPIRED
         , last_update_tm = clock_timestamp()::timestamp(0)
         , last_update_user_an = G_LAST_UPDATE_USER_SYSTEM
         , last_update_module_code = G_LAST_UPDATE_MODULE_BATCH
     where tms$vvd.is_vessel_departed(vessel_visit_c, trunc(clock_timestamp()::timestamp(0))) = 1
       and coalesce(status_code,'~') <> G_CNTR_LOADING_EXPIRED
       and ctid in (select a.ctid
                      from tms_container_loading a
                     where tms$vvd.is_vessel_departed(a.vessel_visit_c, trunc(clock_timestamp()::timestamp(0))) = 1
                       and coalesce(a.status_code,'~') <> G_CNTR_LOADING_EXPIRED
                     LIMIT 2);
    EXIT WHEN NOT FOUND; /* applies to the UPDATE above */
    GET DIAGNOSTICS v_local_batch_count = ROW_COUNT;
    v_batch_count := v_batch_count + v_local_batch_count;
    COMMIT;
  END LOOP;
  v_log_count := v_log_count + 1;
  CALL Log(v_batch_count,'TMS_CONTAINER_LOADING',NULL, 'TMS$BATCH_JOB', v_log_count);
  COMMIT;
END;
$BODY$;







-- 
*Best Regards,*
Jagmohan
Senior Consultant, TecoreLabs.


Re: ERROR : invalid transaction termination : PostgreSQL v12

2020-11-23 Thread Jagmohan Kaintura
Hi,
The block fails immediately at the first COMMIT; it is not allowing COMMIT at
all. I have removed some portion of the code before the second COMMIT.


On Mon, Nov 23, 2020 at 9:19 PM Adrian Klaver 
wrote:

> On 11/23/20 12:36 AM, Jagmohan Kaintura wrote:
> > [original message and TEST_TRANSACTION procedure quoted in full]
>
> I'm still trying to figure out transaction management in procedures, so
> bear with me. Not sure what the purpose of the second COMMIT is? Also
> wonder if it is no the cause of the issue?
>
>
> > --
> > *Best Regards,*
> > Jagmohan
> > Senior Consultant, TecoreLabs.
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


-- 
*Best Regards,*
Jagmohan
Senior Consultant, TecoreLabs.


Re: ERROR : invalid transaction termination : PostgreSQL v12

2020-11-23 Thread Jagmohan Kaintura
It doesn't work even when putting that block inside an additional BEGIN/END:

CREATE OR REPLACE PROCEDURE TEST_TRANSACTION()
LANGUAGE 'plpgsql'
SECURITY DEFINER
AS $BODY$
DECLARE
G_LAST_UPDATE_USER_SYSTEM VARCHAR2(6) := 'SYSTEM';
G_LAST_UPDATE_MODULE_BATCH VARCHAR2(5) := 'BATCH';

G_CNTR_LOADING_EXPIRED TMS_CONTAINER_LOADING.STATUS_CODE%TYPE := '90';
G_DG_MANIFEST_DELETED TMS_DG_MANIFEST.STATUS_CODE%TYPE := '80';
G_DG_MANIFEST_COMPLETED TMS_DG_MANIFEST.STATUS_CODE%TYPE := '95';

v_num_day numeric;
v_batch_count numeric;
v_log_count numeric := 0;
v_local_batch_count numeric;
BEGIN
  v_batch_count := 0;
  LOOP
    BEGIN
      update tms_container_loading
         set status_code = G_CNTR_LOADING_EXPIRED
           , last_update_tm = clock_timestamp()::timestamp(0)
           , last_update_user_an = G_LAST_UPDATE_USER_SYSTEM
           , last_update_module_code = G_LAST_UPDATE_MODULE_BATCH
       where tms$vvd.is_vessel_departed(vessel_visit_c, trunc(clock_timestamp()::timestamp(0))) = 1
         and coalesce(status_code,'~') <> G_CNTR_LOADING_EXPIRED
         and ctid in (select ctid
                        from tms_container_loading
                       where tms$vvd.is_vessel_departed(vessel_visit_c, trunc(clock_timestamp()::timestamp(0))) = 1
                         and coalesce(status_code,'~') <> G_CNTR_LOADING_EXPIRED
                       LIMIT 20);

      EXIT WHEN NOT FOUND; /* applies to the UPDATE above */
      GET DIAGNOSTICS v_local_batch_count = ROW_COUNT;
      v_batch_count := v_batch_count + v_local_batch_count;
      raise info ' I came here %', v_batch_count;
    END;
    COMMIT;
  END LOOP;
  raise info ' I came here %', v_batch_count;
  v_log_count := v_log_count + 1;
  CALL Log(v_batch_count,'TMS_CONTAINER_LOADING',NULL, 'TMS$BATCH_JOB', v_log_count);
END;
$BODY$;


While calling it:

INFO:   I came here 20
ERROR:  invalid transaction termination
CONTEXT:  PL/pgSQL function test_transaction() line 48 at COMMIT







On Tue, Nov 24, 2020 at 12:17 AM Michael Lewis  wrote:

> On Mon, Nov 23, 2020 at 10:03 AM Jagmohan Kaintura <
> jagmo...@tecorelabs.com> wrote:
>
>> Hi ,
>> The Block is only failing immediately at First COMMIT only. It's not
>> supporting COMMIT.  I have removed some portion of code before the second
>> COMMIT.
>>
>
> Please don't top-post on the Postgres lists by the way (reply with all
> previous conversation copied below).
>
> The only way this would happen that I am aware of is if you called begin
> before your batch function.
>
>>
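
A sketch of the scenario Michael describes, assuming the TEST_TRANSACTION
procedure from earlier in the thread:

```sql
-- If the client opens an explicit transaction first, the procedure runs
-- inside a transaction block and its COMMIT is rejected.
BEGIN;
CALL test_transaction();   -- ERROR: invalid transaction termination
ROLLBACK;

CALL test_transaction();   -- top-level autocommit CALL: COMMIT inside works
```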

-- 
*Best Regards,*
Jagmohan
Senior Consultant, TecoreLabs.


updating(column) Porting from Oracle Trigger to PostgreSQL trigger

2021-02-04 Thread Jagmohan Kaintura
Hi Team,

When we port a trigger from Oracle to PostgreSQL, we convert a clause like:
 updating(column_name) ==>   (TG_OP='UPDATE' and OLD.column_name IS
DISTINCT FROM NEW.column_name)
But this condition is not sufficient when we update a column to the same
value it already had.
For example: the OLD value of column_name is 2 and the NEW value is also 2.

In PG the condition becomes (TG_OP='UPDATE' and *2 IS DISTINCT FROM 2*),
which is FALSE overall, so the conversion from Oracle to PostgreSQL does not
give an exact implementation of the UPDATING clause.

I bring this up because I ran into one implementation that drove me crazy.

Scenario :
Column c3 is deliberately updated to its own current value, as a signal for
some bypass logic:

update t
   set c1 = 'abc',
       c2 = 'xyz',
       *c3 = c3;*


Inside the trigger we have an implementation like:

*Oracle:*
--This block is placed at the start of the trigger to bypass all other
trigger blocks whenever we are updating column c3 along with other columns.
IF UPDATING('C3') THEN
   RETURN;
END IF;

If column c3 is not included in the UPDATE's SET clause, the trigger goes
through and executes the other statements.

*PostgreSQL:*
We use the same method as:
IF TG_OP = 'UPDATE' and OLD.c3 IS DISTINCT FROM NEW.c3 THEN
   RETURN NEW;
END IF;

But here it never enters the IF clause, so it never returns out at the start
of the trigger. Technically we are not able to map the same Oracle
fundamentals.

Is there any other method in PostgreSQL with which I can check which columns
are being updated? I would replace at least this clause with some other
block.
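
One possible workaround, sketched under the assumption that the "c3 = c3"
update always comes from code you control: signal the bypass intent through a
session setting instead of inferring it from column values, since a row
trigger cannot see which columns appeared in the SET list. The setting name
app.bypass_c3 and the function name are illustrative:

```sql
CREATE OR REPLACE FUNCTION trg_body() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
  -- current_setting(..., true) returns NULL (treated as false here) when
  -- the setting was never defined in this session.
  IF current_setting('app.bypass_c3', true) = 'on' THEN
    RETURN NEW;              -- skip the rest of the trigger logic
  END IF;
  -- ... remaining trigger statements ...
  RETURN NEW;
END$$;
```

The caller that wants the bypass runs, inside its transaction:
SELECT set_config('app.bypass_c3', 'on', true);  -- is_local: reset at commit
UPDATE t SET c1 = 'abc', c2 = 'xyz', c3 = c3;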

Help would be really appreciated.

*Best Regards,*
Jagmohan


Encryption of Data Specific to a Tenant in PostgreSQL database | General Idea

2021-02-10 Thread Jagmohan Kaintura
HI All,

For a PostgreSQL database storing data for multiple tenants, the approach
decided on was:
Shared database (holding data for all tenants)
  => Data segregated on the basis of an additional column (tenantid;
different tenants have different tenantId values)
   => Data accessed through views on the basis of the tenantId value.

This is the basic approach of most customers who implement multiple tenants
in PostgreSQL rather than choosing a separate database for each tenant.

Now we need to encrypt the data related to a tenantId, so that no one knows
which tenant the data belongs to, even in the operations group.
Is there a method in PostgreSQL for encrypting data with different keys
according to different values in a single column? pgcrypto, moreover,
imposes a single key on the column.

Please share your thoughts on how I can start analysing this area for
encryption of data specific to a tenant.
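
For what it's worth, pgcrypto can encrypt per row with a per-tenant key looked
up from a key table; this is only a sketch with assumed table names, and it
deliberately ignores the hard part, which is keeping the key table itself away
from the operations group:

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE tenant_keys (tenant_id int PRIMARY KEY, key_material bytea);
CREATE TABLE docs (tenant_id int, payload bytea);

-- Encrypt a value with the key of tenant 42 (tenant id is hypothetical):
INSERT INTO docs
SELECT 42, pgp_sym_encrypt('secret data', encode(k.key_material, 'hex'))
  FROM tenant_keys k
 WHERE k.tenant_id = 42;

-- Decrypt with the same per-tenant key:
SELECT pgp_sym_decrypt(d.payload, encode(k.key_material, 'hex'))
  FROM docs d
  JOIN tenant_keys k USING (tenant_id)
 WHERE d.tenant_id = 42;
```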

Thanks in advance.
*Best Regards,*
Jagmohan
Senior Consultant, TecoreLabs.


Re: Encryption of Data Specific to a Tenant in PostgreSQL database | General Idea

2021-02-10 Thread Jagmohan Kaintura
Hi Guyren,

Yes, right now the data is being accessed in exactly this manner:
the application connects as a tenant user with a specific tenantId in the
session and can see only its own data. It doesn't know about anyone else's
data and cannot fetch it.

So isolation is 100% guaranteed right now.

But isolation is not enough from an operations perspective, so I also need
encryption in one way or another, whatever PostgreSQL supports, and the
encryption key should differ per tenant.


On Thu, Feb 11, 2021 at 8:56 AM Guyren Howe  wrote:

> An interesting option is to make your tenants work through views, and have
> the views work through a variable that contains the tenant’s id. There
> would be a bit of coding, but it would be the same for every table, so you
> could automate it easy enough.
>
> When you’re done, client software just sets the id at the beginning of the
> session and the database looks like it only contains the tenant’s data.
>
> This is a particularly nice security guarantee: if you make the id a UUID
> (ie unguessable) then someone can entirely compromise the client
> application, and can still only get at the data for one tenant, and then
> only given their ID.
> On Feb 10, 2021, 18:44 -0800, Rob Sargent , wrote:
>
>
>
> On Feb 10, 2021, at 6:45 PM, Tim Cross  wrote:
>
>
> Jagmohan Kaintura  writes:
>
> [original message quoted in full]
>
> The decision to have all tenants in a single database seems rather
> unusual to me. Isolating one tenant from adversely impacting another
> would seem complicated and I'm not sure how you would implement a clear
> security model. Your model has effectively bypassed all the provided PG
> facilities for isolation of data. Disaster recovery and business
> continuity planning under this model must be a nightmare!
>
> I doubt you can adopt a solution which is solely within the database.
> How would the database know which key to use for which rows of data? How
> would you select the data for your tenant views if all that data is
> encrypted with different keys? How would you manage these keys in a
> secure manner?
>
> With the model you have adopted, I would be looking at performing
> encryption/decryption at the client level. However, depending on your
> data types, this could be challenging. this is really a requirement
> which should have been factored into the initial architecture design.
> Anything you try to bolt on now is likely to be complex and have
> significant performance impact and that is assuming you can re-interpret
> the requirement to make the objective feasible.
>
> Yeah, I lost that same arguement in ~2007, where the forces against my
> push for separation was shouted down with rants on scheme maintenance
> (divergence) and multiple rollouts per update.  I hadn’t had any coffee
> before the 9:00am meeting so the hotshot from Amazon got his way.  Then we
> tried “veils” (a concoction of view and rule re-writing) and we all
> know how that went.  The company folded before our “next gen” software saw
> the light of day.
>
> I get the feeling multi-tenancy is, if not the rule these days, at least
> quite common (on the last of “big iron”?) but it still doesn’t sit well
> with me.
>
>
>

-- 
*Best Regards,*
Jagmohan
Senior Consultant, TecoreLabs.


Re: Encryption of Data Specific to a Tenant in PostgreSQL database | General Idea

2021-02-10 Thread Jagmohan Kaintura
Hi Tim,

Yes, the data type is not even supported right now: tenantId is numeric, and
encryption/decryption goes through text/bytea, so it is even harder to
encrypt that column's data.


On Thu, Feb 11, 2021 at 7:38 AM Tim Cross  wrote:

>
> Jagmohan Kaintura  writes:
>
> > [original message quoted in full]
>
> The decision to have all tenants in a single database seems rather
> unusual to me. Isolating one tenant from adversely impacting another
> would seem complicated and I'm not sure how you would implement a clear
> security model. Your model has effectively bypassed all the provided PG
> facilities for isolation of data. Disaster recovery and business
> continuity planning under this model must be a nightmare!
>
> I doubt you can adopt a solution which is solely within the database.
> How would the database know which key to use for which rows of data? How
> would you select the data for your tenant views if all that data is
> encrypted with different keys? How would you manage these keys in a
> secure manner?
>
> With the model you have adopted, I would be looking at performing
> encryption/decryption at the client level. However, depending on your
> data types, this could be challenging. this is really a requirement
> which should have been factored into the initial architecture design.
> Anything you try to bolt on now is likely to be complex and have
> significant performance impact and that is assuming you can re-interpret
> the requirement to make the objective feasible.
>
> --
> Tim Cross
>
>
>

-- 
*Best Regards,*
Jagmohan
Senior Consultant, TecoreLabs.


Re: Encryption of Data Specific to a Tenant in PostgreSQL database | General Idea

2021-02-11 Thread Jagmohan Kaintura
Hi Stephen,

Note that views aren't actually guaranteed to provide the isolation
you're looking for unless you mark them as being a security barrier,
see: https://www.postgresql.org/docs/current/rules-privileges.html

Using a security barrier had a huge impact on performance: it was not
considering the proper indexes and was doing some filtration with respect to
that user on top of the other filtration. So we chose not to add a security
barrier to each view we created.

We had a similar issue with enabling row-level security: tables always went
for a sequential scan when policies were imposed on rows.

From an implementation perspective, I think I asked questions from bottom to
top.

From a development perspective, we are moving our already multi-tenant
system (achieved at the database level with views) to a SaaS implementation.
In SaaS we have tried to achieve isolation to some extent, but now we want
to have encryption for multiple tenants.

So from all these discussions, the best option would be to achieve
encryption at the application level only.

But I am not sure how we can limit DBAs' access across tenants. We don't
want one customer's DBAs accessing or viewing another customer's data, and
overall, DBAs shouldn't be able to access sensitive data in the database.

We are mostly looking at insider threats, application server compromise,
and DB server compromise.



On Fri, Feb 12, 2021 at 1:29 AM Stephen Frost  wrote:

> Greetings,
>
> * Jagmohan Kaintura (jagmo...@tecorelabs.com) wrote:
> > Yup right now data is being accessed in this manner only.
> > application access using tenant user only who have specific tenantId in
> > that session and can see its own data only. It doesn't know about anyone
> > else's data and neither can get/fetch.
> >
> > So isolation is 100% guaranteed right now.
>
> Note that views aren't actually guaranteed to provide the isolation
> you're looking for unless you mark them as being a security barrier,
> see: https://www.postgresql.org/docs/current/rules-privileges.html
>
> Alternatively, you could use RLS and CREATE POLICY:
>
> https://www.postgresql.org/docs/current/ddl-rowsecurity.html
>
> > But isolation is not enough from an operations perspective, so I need
> > encryption too in some way or another way, whatever postgreSQL supports
> > and  encryption key should differ for  a tenant .
>
> You can have PG do encryption by using the pgcrypto extension, perhaps
> with some custom GUC and views (which should really also be security
> barrier..) to have it be transparent.  As mentioned elsewhere, you're
> really better off doing it in the application though, so that the DB
> server doesn't ever see the plaintext data.  You should really be
> considering what the attack vector you're concerned about is though-
> SQL injection?  Insider threat?  Improper media disposal?  Application
> server compromise?  DB server compromise?  etc.
>
> Thanks,
>
> Stephen
>


-- 
*Best Regards,*
Jagmohan
Senior Consultant, TecoreLabs.


Fwd: PostgreSQL Statement Dependency Resolving | Schema Upgrade

2021-02-28 Thread Jagmohan Kaintura
-- Forwarded message -
From: Jagmohan Kaintura 
Date: Mon, Feb 22, 2021 at 7:44 AM
Subject: PostgreSQL Statement Dependency Resolving | Schema Upgrade
To: 


Hi Team,

While performing a schema upgrade that needs any alteration to the
definition of a schema object, PostgreSQL always asks us to execute the
statement with CASCADE, as the object has many dependents.

Most of the time we have to drop a lot of entities just to recompile a
simple view.

In our case:
views are referenced by many other schema objects: in procedures, functions
and views.


Do we have any PostgreSQL extension or any standardized method that can help
with capturing all the cascaded objects before dropping them, and then
reloading them once the new definition is applied?
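
As a starting point (not a full solution), the system catalogs can list the
views that CASCADE would drop; 'my_view' below is a placeholder name. Note
this only finds dependent views: PL/pgSQL function bodies are not tracked in
pg_depend, so references from procedures and functions must be found some
other way (e.g. scanning pg_proc.prosrc):

```sql
-- Views (and materialized views) whose rewrite rules reference my_view.
SELECT DISTINCT dep_ns.nspname, dep_view.relname
  FROM pg_depend d
  JOIN pg_rewrite r       ON r.oid = d.objid
  JOIN pg_class dep_view  ON dep_view.oid = r.ev_class
  JOIN pg_namespace dep_ns ON dep_ns.oid = dep_view.relnamespace
 WHERE d.refobjid = 'my_view'::regclass
   AND dep_view.oid <> 'my_view'::regclass;
```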

-- 
*Best Regards,*
Jagmohan
Senior Consultant, TecoreLabs.


-- 
*Best Regards,*
Jagmohan
Senior Consultant, TecoreLabs.


INSERT ALL with DML ERROR Logging replacement in PostgreSQL

2022-06-21 Thread Jagmohan Kaintura
Hi Team,
We are working on a project where we are moving from Oracle to PostgreSQL,
with a migration tool that mostly has statements for inserting the records
that are correct and logging the errors to an error table, using Oracle's
built-in INSERT ALL with DML error logging.

As part of PostgreSQL best practices, what approach should we take to move
these types of statements to PostgreSQL, given that there is no equivalent
mechanism to load the correct data into the main table and the error records
into an error table with the error reason?

The statements mostly used are:
INSERT ALL INTO
target_table
(COLUMN LIST)
VALUES()
LOG ERRORS INTO ...
SELECT statement considering the source tables;

Can anyone please help me with what the best approach would be to convert
this in the tool?
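
One common emulation, sketched here with assumed table names: insert row by
row inside an EXCEPTION block (each block is a subtransaction), logging
failures to an error table. This trades the set-based speed of INSERT ..
SELECT for per-row error capture:

```sql
DO $$
DECLARE
  r record;
BEGIN
  FOR r IN SELECT * FROM staging_table LOOP
    BEGIN
      INSERT INTO target_table VALUES (r.*);
    EXCEPTION WHEN OTHERS THEN
      -- error_table is assumed to have target_table's columns plus a text
      -- column for the error reason.
      INSERT INTO error_table VALUES (r.*, SQLERRM);
    END;
  END LOOP;
END$$;
```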

-- 
*Best Regards,*
Jagmohan


Re: INSERT ALL with DML ERROR Logging replacement in PostgreSQL

2022-06-21 Thread Jagmohan Kaintura
Hi Gilles,

I was going through this earlier today but didn't compile it, as I read that
it may not be able to capture the errors if we have the type of statement
below, and most of our statements are INSERT .. SELECT statements only:

The form INSERT INTO  SELECT ... will not have the same behavior
as in Oracle. It will not store the successful inserts and log the rows in
error. This is not supported because it is a single transaction for
PostgreSQL and everything is rolled back in case of error.

All our statements are of that form; will it still be useful?

On Tue, Jun 21, 2022 at 1:07 PM Gilles Darold  wrote:

> On 21/06/2022 at 09:08, Jagmohan Kaintura wrote:
>
> [original message quoted in full]
>
>
> Hi,
>
>
> Maybe what you are looking for is here
> https://github.com/MigOpsRepos/pg_dbms_errlog , this is a PostgreSQL
> extension that emulates the DBMS_ERRLOG Oracle package.
>
>
> Best regards,
>
> --
> Gilles Daroldhttp://www.darold.net/
>
>

-- 
*Best Regards,*
Jagmohan


Re: INSERT ALL with DML ERROR Logging replacement in PostgreSQL

2022-06-22 Thread Jagmohan Kaintura
Hi All,

Any further thoughts on this thread?

On Tue, Jun 21, 2022 at 2:24 PM Gilles Darold  wrote:

> On 21/06/2022 at 10:28, Jagmohan Kaintura wrote:
>
> [previous message quoted in full]
>
>
> Right, this was not obvious in your post, but yes: if you are using INSERT
> + SELECT, this is not possible with the current version of this extension.
> Maybe it could be made possible by internally rewriting the query to loop
> over the result of the SELECT and generate an insert per returned row, but
> with performance loss, of course.
>
>
> Best regards,
>
> --
> Gilles Daroldhttp://www.darold.net/
>
>

-- 
*Best Regards,*
Jagmohan


murmur3 hash binary data migration from Oracle to PostgreSQL

2023-04-24 Thread Jagmohan Kaintura
Hi Team,

We are doing a migration from Oracle to PostgreSQL. In the source database
we have binary data stored using the murmur3 hashing function. In Oracle
this data is generated by Java code and inserted into the Oracle database.

As part of the migration process, the reference data from which this murmur3
hash is generated is also being changed while migrating to PostgreSQL.

Do we have any mechanism in PostgreSQL for computing this murmur3 hash for
any UUID?

Please let me know whatever solution is available for the implementation.



-- 
*Best Regards,*
Jagmohan


PostgreSQL Views Where Clause Character Columns getting Typecasted to Text and Indexes not getting Used

2021-03-25 Thread Jagmohan Kaintura
Hi All,

PostgreSQL version : 12.5

When we compile a view definition into the database, the WHERE clause on
the character columns is internally typecast to text.
Example : Source Code being Applied :

" FROM tms_vessel_visit_aggregate a, tms_vessel_visit v
LEFT OUTER JOIN tms_vsl_svc s ON (v.svc_c = s.svc_c)
WHERE v.vessel_visit_c = a.vessel_visit_c and v.vessel_call_nr =
a.vessel_call_nr
"

Now, when it's stored internally in the database, the columns are typecast
to *text*, but those table columns are not text; they are
varchar(10).

Example :
   FROM tms_vessel_visit_aggregate a,
tms_vessel_visit v
 LEFT JOIN tms_vsl_svc s ON *v.svc_c::text* = *s.svc_c::text*
  WHERE *v.vessel_visit_c::text* = *a.vessel_visit_c::text* AND
v.vessel_call_nr = a.vessel_call_nr

Even when we placed a cast to varchar(10) in the original source, it is
internally typecast to *::text*.

These columns are primary key columns, and because of this typecasting on
those columns an index scan is not happening and we always get a
sequential scan.
Example :

 ->  Subquery Scan on "*SELECT* 1"  (cost=36.88..115.01
rows=995 width=13) (actual time=0.763..3.144 rows=995 loops=1)
   Buffers: shared hit=65
   ->  Hash Join  (cost=36.88..105.06 rows=995
width=6361) (actual time=0.763..2.964 rows=995 loops=1)

                         Hash Cond: (((v.vessel_visit_c)::text =
(a.vessel_visit_c)::text) AND (v.vessel_call_nr = a.vessel_call_nr))
   Buffers: shared hit=65

                         ->  Seq Scan on tms_vessel_visit v  (cost=0.00..62.95 rows=995 width=18) (actual time=0.006..0.292 rows=995 loops=1)
                               Buffers: shared hit=53
 ->  Hash  (cost=21.95..21.95 rows=995
width=13) (actual time=0.735..0.736 rows=995 loops=1)
   Buckets: 1024  Batches: 1  Memory
Usage: 52kB
   Buffers: shared hit=12

                               ->  Seq Scan on tms_vessel_visit_aggregate a  (cost=0.00..21.95 rows=995 width=13) (actual time=0.009..0.238 rows=995 loops=1)
                                     Buffers: shared hit=12
 ->  Subquery Scan on "*SELECT* 2"  (cost=1.35..2.86
rows=1 width=88) (actual time=0.079..0.111 rows=14 loops=1)
   Buffers: shared hit=2
   ->  Hash Join  (cost=1.35..2.85 rows=1
width=6422) (actual time=0.078..0.107 rows=14 loops=1)

                         Hash Cond: (((a_1.vessel_visit_c)::text =
(v_1.vessel_visit_c)::text) AND (a_1.vessel_call_nr = v_1.vessel_call_nr))
                         Buffers: shared hit=2

                         ->  Seq Scan on tms_vessel_visit_aggregate_bak a_1  (cost=0.00..1.33 rows=33 width=60) (actual time=0.007..0.011 rows=33 loops=1)
                               Buffers: shared hit=1
 ->  Hash  (cost=1.14..1.14 rows=14
width=98) (actual time=0.043..0.043 rows=14 loops=1)
   Buckets: 1024  Batches: 1  Memory
Usage: 9kB
   Buffers: shared hit=1

                               ->  Seq Scan on tms_vessel_visit_bak v_1  (cost=0.00..1.14 rows=14 width=98) (actual time=0.007..0.034 rows=14 loops=1)
                                     Buffers: shared hit=1


Executing the same statement in Oracle, which does not do this implicit
typecasting, gives us perfect index scans as expected.

|   8 | UNION ALL PUSHED PREDICATE   |                                |   |    |         |          |
|   9 |  NESTED LOOPS                |                                | 1 | 30 | 3   (0) | 00:00:01 |
|  10 |   TABLE ACCESS BY INDEX ROWID| TMS_VESSEL_VISIT_AGGREGATE     | 1 | 12 | 2   (0) | 00:00:01 |
|* 11 |    INDEX UNIQUE SCAN         | TMS_VESSEL_VISIT_AGGREGATE_PK  | 1 |    | 1   (0) | 00:00:01 |
|  12 |   TABLE ACCESS BY INDEX ROWID| TMS_VESSEL_VISIT               | 1 | 18 | 1   (0) | 00:00:01 |
|* 13 |    INDEX UNIQUE SCAN         | TMS_VESSEL_VISIT_PK            | 1 |    | 0   (0) | 00:00:01 |
|  14 |  NESTED LOOPS                |                                | 1 | 29 | 2   (0) | 00:00:01 |
|  15 |   TABLE ACCESS BY INDEX ROWID| TMS_VESSEL_VISIT_AGGREGATE_BAK | 1 | 12 | 1   (0) | 00:00:01 |
|* 16 |    INDEX UNIQUE SCAN         | TMS_VESSEL_VISIT_AGG_BAK_PK    | 1 |    | 0   (0) | 00:00:01 |
|  17 |   TABLE ACCESS BY INDEX ROWID| TMS_VESSEL_VISIT_BAK           | 1 | 17 | 1   (0) | 00:00:01 |
|* 18 |    INDEX UNIQUE SCAN         | TMS_VESSEL_VISIT_BAK_PK        | 1 |    | 0   (0) | 00:00:01 |
---------------------------------------------------------------------------------------------------



Re: PostgreSQL Views Where Clause Character Columns getting Typecasted to Text and Indexes not getting Used

2021-03-29 Thread Jagmohan Kaintura
                                   Index Cond:
(vessel_visit_c = v_1.vessel_visit_c)
 Filter: (v_1.vessel_call_nr = vessel_call_nr)
 ->  Bitmap Heap Scan on tms_tdr_region_map m  (cost=3.92..6.06
rows=2 width=38) (never executed)
   Recheck Cond: ((vsl_svc_c)::text = ("*SELECT*
1".svc_c)::text)
   ->  Bitmap Index Scan on tms_tdr_region_map_pk
 (cost=0.00..3.92 rows=2 width=0) (never executed)
 Index Cond: ((vsl_svc_c)::text = ("*SELECT*
1".svc_c)::text)


Another one: where we have varchar columns in the database, the plan joins
with seq scans, and the application being tested takes around 7-8 seconds.

 Aggregate  (cost=145.19..145.20 rows=1 width=32) (actual time=3.819..3.823
rows=1 loops=1)
   Buffers: shared hit=71
   ->  Nested Loop  (cost=39.54..145.16 rows=11 width=0) (actual
time=3.817..3.821 rows=0 loops=1)
 Buffers: shared hit=71
 ->  Hash Join  (cost=39.39..128.04 rows=5 width=5) (actual
time=1.154..3.806 rows=1 loops=1)
   Hash Cond: (("*SELECT* 1".vessel_visit_c)::text =
(t.trgt_vvd_n)::text)
   Buffers: shared hit=70
   ->  Append  (cost=36.88..122.85 rows=996 width=13) (actual
time=0.806..3.569 rows=1009 loops=1)
 Buffers: shared hit=67
 ->  Subquery Scan on "*SELECT* 1"  (cost=36.88..115.01
rows=995 width=13) (actual time=0.806..3.318 rows=995 loops=1)
   Buffers: shared hit=65
   ->  Hash Join  (cost=36.88..105.06 rows=995
width=6361) (actual time=0.805..3.124 rows=995 loops=1)
 Hash Cond: (((v.vessel_visit_c)::text =
(a.vessel_visit_c)::text) AND (v.vessel_call_nr = a.vessel_call_nr))
 Buffers: shared hit=65

                               ->  Seq Scan on tms_vessel_visit v  (cost=0.00..62.95 rows=995 width=18) (actual time=0.007..0.278 rows=995 loops=1)
                                     Buffers: shared hit=53
 ->  Hash  (cost=21.95..21.95 rows=995
width=13) (actual time=0.774..0.775 rows=995 loops=1)
   Buckets: 1024  Batches: 1  Memory
Usage: 52kB
   Buffers: shared hit=12

                                     ->  Seq Scan on tms_vessel_visit_aggregate a  (cost=0.00..21.95 rows=995 width=13) (actual time=0.008..0.275 rows=995 loops=1)
 Buffers: shared hit=12
 ->  Subquery Scan on "*SELECT* 2"  (cost=1.35..2.86
rows=1 width=88) (actual time=0.064..0.099 rows=14 loops=1)
   Buffers: shared hit=2
   ->  Hash Join  (cost=1.35..2.85 rows=1
width=6422) (actual time=0.064..0.095 rows=14 loops=1)
 Hash Cond: (((a_1.vessel_visit_c)::text =
(v_1.vessel_visit_c)::text) AND (a_1.vessel_call_nr = v_1.vessel_call_nr))
 Buffers: shared hit=2

                               ->  Seq Scan on tms_vessel_visit_aggregate_bak a_1  (cost=0.00..1.33 rows=33 width=60) (actual time=0.011..0.015 rows=33 loops=1)
 Buffers: shared hit=1
 ->  Hash  (cost=1.14..1.14 rows=14
width=98) (actual time=0.027..0.027 rows=14 loops=1)
   Buckets: 1024  Batches: 1  Memory
Usage: 9kB
   Buffers: shared hit=1

                                     ->  Seq Scan on tms_vessel_visit_bak v_1  (cost=0.00..1.14 rows=14 width=98) (actual time=0.006..0.017 rows=14 loops=1)
 Buffers: shared hit=1
   ->  Hash  (cost=2.50..2.50 rows=1 width=7) (actual
time=0.019..0.019 rows=1 loops=1)
 Buckets: 1024  Batches: 1  Memory Usage: 9kB
 Buffers: shared hit=3
 ->  Index Scan using tms_tdr_pk on tms_tdr t
 (cost=0.29..2.50 rows=1 width=7) (actual time=0.015..0.016 rows=1 loops=1)
   Index Cond: (tdr_id = '13581258'::numeric)
   Buffers: shared hit=3
         ->  Index Only Scan using tms_tdr_region_map_pk on tms_tdr_region_map m  (cost=0.15..3.40 rows=2 width=38) (actual time=0.004..0.004 rows=0 loops=1)
   Index Cond: (vsl_svc_c = ("*SELECT* 1".svc_c)::text)
   Heap Fetches: 0
   Buffers: shared hit=1

Can you please help with any method by which the view definitions can be
cast to the actual data type rather than text for character columns?
Or are there any additional index classes we can create to overcome
this situation?

Our main focus is whether we can stop the typecasting to *text* and test how
the application behaves.


On Fri, Mar 26, 2021 at 10:38 AM Tom Lane  wrote:

> Jagmohan Kaintura  writes:
> > When we are compiling View definition into database , the whe

could not receive data from client: Connection reset by peer

2021-04-07 Thread Jagmohan Kaintura
Hi Team,

We are receiving a lot of the below error messages in the PostgreSQL logs.
These connections are all created and released by the applications
attached to the database.

Please help with how these failures can be identified and what steps need to
be taken for further analysis.
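
One thing worth checking first: in the log excerpt below, every disconnected
session lasted almost exactly one hour, which often points at an idle or
connection timeout somewhere between client and server (application pool,
firewall, or load balancer) rather than a database-side problem. A small
sketch for pulling the session durations out of such log lines (the two
embedded sample lines are abbreviated copies of the log format below):

```python
import re

# Two sample lines in the same format as the server log (abbreviated).
log = """\
2021-04-08 10:18:24 HKT [8252]: [5-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  disconnection: session time: 1:00:00.550 user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51690
2021-04-08 10:18:24 HKT [8236]: [4-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  disconnection: session time: 1:00:00.824 user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51673
"""

# Extract h:m:s from each "session time:" entry and convert to seconds.
pat = re.compile(r"session time: (\d+):(\d+):(\d+)\.\d+")
durations = [int(h) * 3600 + int(m) * 60 + int(s)
             for h, m, s in pat.findall(log)]
print(durations)  # every session lasted ~3600 seconds, i.e. one hour
```

If the real logs show the same tight clustering around one duration, the next
step would be to look for a timeout configured to that value on the client
side or in the network path.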

2021-04-08 10:18:24 HKT [8252]: [3-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  could not receive data from client: Connection reset by peer

2021-04-08 10:18:24 HKT [8252]: [4-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  unexpected EOF on client connection with an open transaction
2021-04-08 10:18:24 HKT [8252]: [5-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  disconnection: session time: 1:00:00.550 user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51690
2021-04-08 10:18:24 HKT [8233]: [3-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  could not receive data from client: Connection reset by peer
2021-04-08 10:18:24 HKT [8235]: [3-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  could not receive data from client: Connection reset by peer
2021-04-08 10:18:24 HKT [8236]: [3-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  could not receive data from client: Connection reset by peer
2021-04-08 10:18:24 HKT [8236]: [4-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  disconnection: session time: 1:00:00.824 user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51673
2021-04-08 10:18:24 HKT [8233]: [4-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  disconnection: session time: 1:00:00.857 user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51668
2021-04-08 10:18:24 HKT [8235]: [4-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  disconnection: session time: 1:00:00.835 user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51672
2021-04-08 10:18:24 HKT [8237]: [3-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  could not receive data from client: Connection reset by peer
2021-04-08 10:18:24 HKT [8237]: [4-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  disconnection: session time: 1:00:00.817 user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51674
2021-04-08 10:18:24 HKT [8234]: [3-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  could not receive data from client: Connection reset by peer
2021-04-08 10:18:24 HKT [8234]: [4-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  disconnection: session time: 1:00:00.845 user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51670
2021-04-08 10:18:24 HKT [8239]: [3-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  could not receive data from client: Connection reset by peer
2021-04-08 10:18:24 HKT [8239]: [4-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  disconnection: session time: 1:00:00.801 user=cz1qa$bis_app database=zdcq12c2 host=172.25.254.11 port=51676
2021-04-08 10:18:24 HKT [8232]: [3-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  could not receive data from client: Connection reset by peer
2021-04-08 10:18:24 HKT [8232]: [4-1] user=cz1qa$bis_app,db=zdcq12c2,app=PostgreSQL JDBC Driver,client=172.25.254.11 LOG:  disconnection:

-- 
*Best Regards,*
Jagmohan


Table/Parition Level Compression in Postgres

2022-02-16 Thread Jagmohan Kaintura
Hi Team,

We have a requirement where we want to keep the partitions holding the
old data in compressed format and the partition holding the current data
in non-compressed format.

While looking over the compression aspects in PG 14, we see that column-level
compression was introduced, with a choice of compression methods.

Is there any other way by which we can compress a partition's data, ideally
such that it can also be moved to a different tablespace?

We have an implementation where we want to keep the oldest 80% of the data in
compressed format, as we have millions of processed CDRs.
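
For what PG 14 itself offers, a sketch along these lines could be tried
(table, column, and tablespace names here are hypothetical):

```sql
-- PG 14: per-column compression method. Note this applies only to values
-- large enough to be compressed/TOASTed, and lz4 requires a server built
-- with lz4 support.
ALTER TABLE cdr_2020 ALTER COLUMN payload SET COMPRESSION lz4;

-- The new method affects only newly written values; a table rewrite
-- (e.g. VACUUM FULL) recompresses the existing rows.
VACUUM FULL cdr_2020;

-- Independently, an old partition can be moved to a tablespace that sits
-- on a compressing file system (e.g. ZFS or Btrfs):
ALTER TABLE cdr_2020 SET TABLESPACE cold_storage;
```

This is not table-level compression in the Oracle sense; for whole-partition
compression the file-system route is the usual workaround.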



-- 
*Best Regards,*
Jagmohan


Re: Table/Parition Level Compression in Postgres

2022-02-16 Thread Jagmohan Kaintura
Hi Laurenz

That is what I am thinking right now as well: move the old
partition to another tablespace and use file-system-level compression.

Apart from this, do we have any other solution or strategy that I can try
to implement?


On Wed, Feb 16, 2022 at 6:13 PM Laurenz Albe 
wrote:

> On Wed, 2022-02-16 at 18:10 +0530, Jagmohan Kaintura wrote:
> > We have a requirement where we wanted to keep the partitions holding the
> old data into compressed
> > format and the partition having the current data in the non-compressed
> format.
> > While looking over the compression aspects in Pg 14 we have some
> compression introduced for the column level with some compression methods.
> > Is there any other way using which we can compress the partitions data
> which can be even  moved to different tablespace.
> > We have a implementation where we wanted to keep the 80% old data in the
> compressed format as we have millions of processed CDRs.
>
> How about a second tablespace on a file system that compresses files?
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>

-- 
*Best Regards,*
Jagmohan