Re: how to securely delete the storage freed when a table is dropped?
There are free utilities that do government-level wipes. The process would be: drop the table, shrink the old tablespace, then (if Linux based) dd-fill the drive and use wipe with 5x or 8x passes to make sure the drive does not have readable imprints on the platters.

Now, what Jonathan mentions sounds like he wants to do the same to the physical table. Never having dabbled in PostgreSQL's storage and optimization algorithms, my first thought would be a script that does a row-by-row UPDATE of field1…fieldx with different data patterns, at both the existing field value length and the field max length. Run the script at least 5 to 8 times, then drop the table. The problem is whether PostgreSQL writes each update to a new page; if so, you are just playing with yourself. Let alone, how does PostgreSQL handle indexes: new pages, or overwrite of the existing page? And is any NPI (Non-Public Info) data in the index itself?

So, any PostgreSQL core-engine guys reading?

O.

> On Apr 13, 2018, at 3:03 PM, Ron wrote:
>
> On 04/13/2018 12:48 PM, Jonathan Morgan wrote:
>> For a system with information stored in a PostgreSQL 9.5 database, in which
>> data stored in a table that is deleted must be securely deleted (like shred
>> does to files), and where the system is persistent even though any
>> particular table likely won't be (so can't just shred the disks at
>> "completion"), I'm trying to figure out my options for securely deleting the
>> underlying data files when a table is dropped.
>>
>> As background, I'm not a DBA, but I am an experienced implementor in many
>> languages, contexts, and databases. I've looked online and haven't been able
>> to find a way to ask PostgreSQL to do the equivalent of shredding its
>> underlying files before releasing them to the OS when a table is DROPped. Is
>> there a built-in way to ask PostgreSQL to do this? (I might just not have
>> searched for the right thing - my apologies if I missed something)
>>
>> A partial answer we're looking at is shredding the underlying data files for
>> a given relation and its indexes manually before dropping the tables, but
>> this isn't so elegant, and I'm not sure it is getting all the information
>> from the tables that we need to delete.
>>
>> We also are looking at strategies for shredding free space on our data disk
>> - either running a utility to do that, or periodically replicating the data
>> volume, swapping in the results of the copy, then shredding the entire
>> volume that was the source so its "free" space is securely overwritten in
>> the process.
>>
>> Are we missing something? Are there other options we haven't found? If we
>> have to clean up manually, are there other places we need to go to shred
>> data than the relation files for a given table, and all its related indexes,
>> in the database's folder? Any help or advice will be greatly appreciated.
>
> I'd write a program that fills all free space on disk with a specific
> pattern. You're probably using a logging filesystem, so that'll be far from
> perfect, though.
>
> --
> Angular momentum makes the world go 'round.
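Purely to illustrate the row-overwrite idea (the database, table, and column names below are made up, and - as noted above - WAL, TOAST, indexes, and new-page allocation mean this is in no way a guaranteed secure erase):

    #!/bin/sh
    # hypothetical sketch: overwrite a table's contents several times, then drop it
    DB=mydb
    for pass in 1 2 3 4 5; do
        psql -d "$DB" -c "UPDATE secret_stuff SET payload = repeat(md5(random()::text), 8);"
    done
    psql -d "$DB" -c "DROP TABLE secret_stuff;"
    # overwriting the freed space at the OS level (a dd fill of the filesystem, or
    # shred/wipe on the raw device at decommission time) is the separate step above

Whether any of the old pages are actually overwritten in place is exactly the open question raised above.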
Re: Postgresql database encryption
Well, actually since 2003 this has been a standard requirement from the credit card industry. And it does make sense, in that the data "while at rest" still cannot be accessed.

Requirement 1. No NPI data should be displayed without controls - e.g. reports, PDF, etc.
Requirement 2. The same data must be secured during transmission - fetching to a client screen, etc.
Requirement 3. NPI data should not be logged nor stored on a physical device in non-encrypted form.

There are more steps to this, but to chalk it off as another half-assed requirement is typical. Hashing is a useful one-way technique; however, intercepting the hash makes using a hash useless! When I worked for the credit bureaus we ran encrypted drive arrays, DB/2 encryption, SSL/TLS encryption over P2P VPN connections, and masked output fields when the data would go to reports or screens on PCs outside our control.

Anyone with Linux can use LUKS encryption on an LVM partition to achieve security where the database may not, or where logs or other files may exist in which NPI might be seen. Oh yeah, NPI (Non-Public Information): your social, your bank account, your paycheck information, etc. - things that should not exist outside of controls...

PS. You cannot simply take a drive from one machine to another when doing proper RAID and LUKS encryption.

Ozz
15 years' experience with federal data security requirements.

On Fri, Apr 20, 2018 at 7:55 PM Tim Cross wrote:

>
> Vikas Sharma writes:
>
> > Hello Guys,
> >
> > Could someone throw light on the postgresql instance wide or database wide
> > encryption please? Is this possible in postgresql and has it been in use in
> > production?
> >
> > This is a requirement in our production implementation.
> >
>
> This sounds like a lazy management requirement specified for 'security'
> purposes by people with little understanding of either technology or
> security. I suspect it comes from a conversation that went along the
> lines of
>
> "There has been lots in the news about cyber threats"
>
> "Yes, we need our system to be secure"
>
> "I know, let's make one of the requirements that everything must be
> encrypted, that will stop them"
>
> "Great idea, I'll add it as requirement 14".
>
> This is a very poor requirement because it is not adequately specified,
> but more critically, because it is specifying a 'solution' rather than
> articulating the requirement in a way which would allow those with the
> necessary expertise to derive an appropriate solution - one which may or
> may not involve encryption or hashing of data and which may or may not
> be at the database level.
>
> What you really need to do is go back to your stakeholders and ask them
> a lot of questions to extract what the real requirement is. Try to find
> out what risk they are trying to mitigate with encryption. Once this is
> understood, then look at what the technology can do and work out the
> design/implementation from there.
>
> It is extremely unlikely you just want all the data in the database
> encrypted. When you think about it, such an approach really doesn't make
> sense. In basic terms, if the data is encrypted, the database engine
> will need to be able to decrypt it in order to operate (consider how a
> where clause needs to be able to interpret actions etc). If the db can
> read the data, the keys must be in the database. If the keys are in the
> database and your database is compromised, then your keys are
> compromised. So provided you protect your database from compromise, you
> achieve the same level of security as you do with full data encryption
> EXCEPT for access to the underlying data files outside of the database
> system. For this, you will tend to use some sort of file system
> encryption, which is typically managed at the operating system
> level. Again, for the operating system to be able to read the file
> system, the OS must have access to the decryption keys, so if your OS is
> compromised, then that level of protection is lost as well (well, that
> is over simplified, but you get the idea). What this level of protection
> does give you is data at rest protection - if someone is able to access
> your disks through some other means, they cannot read the data. This is
> the same principle most people should be using with their
> laptops. Protect the OS with a password and have the data on disk
> encrypted. Provided nobody can log in to your laptop, they cannot read
> your data. Without this encryption, you can just take the disk out of
> the laptop, mount it on another system and you have full access. With
> disk encryption, you cannot do that. Same basic principle with the
> server.
>
> At the database level, a more typical approach is to use one way hashing
> for some sensitive data (i.e. passwords) and possibly column level
> encryption on a specific column (much rarer) or just well structured
> security policies and user roles that restrict who has access to various
> tables/columns.
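For reference, a minimal sketch of the LUKS-on-LVM setup mentioned above (the volume group, logical volume, and mount point names are all made up; adapt them to your own layout and test on a scratch machine first):

    # assumes an existing LVM volume group named "vg0" (hypothetical)
    lvcreate -L 100G -n pgdata vg0                       # carve out a logical volume
    cryptsetup luksFormat /dev/vg0/pgdata                # initialize LUKS, prompts for a passphrase
    cryptsetup open /dev/vg0/pgdata pgdata_crypt         # unlock as /dev/mapper/pgdata_crypt
    mkfs.ext4 /dev/mapper/pgdata_crypt                   # filesystem on the encrypted volume
    mount /dev/mapper/pgdata_crypt /var/lib/postgresql   # put the data directory on the encrypted mount

The trade-off is the one described above: the passphrase or key must be supplied when the volume is opened, so a running, unlocked system can still read everything; what this buys you is protection for the data at rest.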
Re: Postgresql database encryption
PS. The following database servers do offer internal encryption on page/block oriented reads/writes (for encrypted-data-at-rest security requirements):

PremierSQL TDE
MariaDB 10.1.3+
MySQL 5.7.11+
Microsoft SQL Server uses TDE
Oracle AdvSec uses TDE
DB2 v7.2 UDB
MongoDB uses AES-256
PostgreSQL does - but the key is public (dumb)
https://www.postgresql.org/message-id/ca%2bcsw_tb3bk5i7if6inzfc3yyf%2b9hevnty51qfboeuk7ue_v%...@mail.gmail.com

Just because you do not see the reason for it does not make the reason a bad idea.

On Fri, Apr 20, 2018 at 8:19 PM Ozz Nixon wrote:

> Well, actually since 2003 this has been a standard requirement from the
> credit card industry. And it does make sense, in that the data "while at
> rest" still cannot be accessed.
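On the PostgreSQL side, short of full-instance encryption, one partial measure some people use is column-level encryption with the pgcrypto contrib extension, where the key is supplied by the application rather than stored in the database. A rough sketch only (the database, table, column names, and the key are all hypothetical):

    # pgcrypto ships as a contrib extension; names and the key below are made up
    psql -d mydb -c "CREATE EXTENSION IF NOT EXISTS pgcrypto;"
    # encrypt a sensitive text column into a bytea column, key supplied by the caller
    psql -d mydb -c "UPDATE customers SET ssn_enc = pgp_sym_encrypt(ssn_plain, 'app-held-key');"
    # decrypt only where needed, passing the same key
    psql -d mydb -c "SELECT pgp_sym_decrypt(ssn_enc, 'app-held-key') FROM customers LIMIT 1;"

This does not satisfy a blanket "encrypt everything" requirement, but it keeps the key out of the database files, which is the weakness discussed in the quoted message.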
Re: Postgresql database encryption
Thanks Ron, I was trying to find that -- memory had it down as "Persona" and I could not find it, haha.

On Fri, Apr 20, 2018 at 8:39 PM Ron wrote:

> Also, Percona (a MySQL fork) 5.7.
>
> On 04/20/2018 07:31 PM, Ozz Nixon wrote:
>
> PS. The following database servers do offer internal encryption on
> page/block oriented reads/writes (for encrypted-data-at-rest security
> requirements)
RE: Code of Conduct plan
Sorry...

> 1) CoC might result in developers leaving projects

I know this ongoing regurgitation is going to cause my team to leave the project; we are right around 100 posts on this off-topic topic. It was bad enough when the original idea came up (2 years ago, I think). It used to be exciting to sit back and review the day's or week's posts... not much anymore.

Regards,
Ozz
Re:
Ok.

On Wed, Jun 13, 2018 at 12:00 PM Caglar Aksu wrote:

> Don't mail me please, unsubscribe
Re: pg_dump backup utility is taking more time around 24hrs to take the backup of 28GB
There are many possible problems; could you share your command line?

For example: dumping to the SAME drive as the DB files can produce disk contention. Dumping across the network can suffer packet collisions or network latency.

I dump a 20GB test server in a matter of a couple of minutes here. But I run multiple disk controllers, with the data drive on one iSCSI target and archives on another - both on SSD. So obviously, in this design I am simply moving data from one 1-gigabit channel to another; both are private circuits with no network contention, and I average no less than 1 Gbps A-to-B stack performance.

On Thu, Oct 18, 2018 at 8:25 AM Raghavendra Rao J S V <
raghavendra...@gmail.com> wrote:

> Hi All,
>
> We are using the *pg_dump* backup utility in order to take the backup of the
> database. Unfortunately, it is taking around 24hrs to take the backup of a
> 28GB database. Please guide me on how to reduce the time, and is there any
> parameter that needs to be modified which will help us to reduce the backup
> time. We are using Postgres version 9.2.
>
> *Note:-* Kindly suggest me options using pg_dump only.
>
> --
> Regards,
> Raghavendra Rao
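A few things worth trying, as a hedged sketch rather than a recipe (the paths and database name are made up; parallel dumps with -j require the directory format and a pg_dump from 9.3 or newer, so check the client version before relying on that):

    # custom format with moderate compression, verbose progress, written to a different disk
    pg_dump -Fc -Z 5 -v -f /mnt/backup/mydb.dump mydb

    # directory format allows parallel worker processes (pg_dump 9.3+)
    pg_dump -Fd -j 4 -f /mnt/backup/mydb_dir mydb

Watching iostat or vmstat while the dump runs should show fairly quickly whether the bottleneck is the source disk, the destination disk, the network, or CPU spent on compression.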
Re: GIN Index for low cardinality
Jeff,

Great info! Regarding your example on Mr., Mrs., Miss, etc.: is there a good rule of thumb that if the data is under "x" KB an index is overhead rather than help? I am not worried about space; I am more interested in performance.
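Not an answer to the rule-of-thumb question, but one way to measure it for a specific case (the database, table, column, and index names below are made up):

    # size of the index itself
    psql -d mydb -c "SELECT pg_size_pretty(pg_relation_size('idx_people_title'));"
    # does the planner actually use it, and what does it cost?
    psql -d mydb -c "EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM people WHERE title = 'Mrs';"

If the plan falls back to a sequential scan for a small table, the index is pure overhead for reads, and it still costs something on every write either way.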