Re: Postgresql database encryption

2018-04-22 Thread Vikas Sharma
Thanks a lot for the valuable information, and apologies that I didn't
specify that the requirement is to encrypt data at rest and in transit.
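
In case it helps others searching the archives: the in-transit half is
usually handled by Postgres's built-in TLS support rather than anything
database-specific. A minimal sketch (file paths and the CIDR range are
placeholders; scram-sha-256 needs v10+, use md5 on older releases):

```ini
# postgresql.conf
ssl = on
ssl_cert_file = 'server.crt'   # placeholder paths, relative to the data directory
ssl_key_file  = 'server.key'

# pg_hba.conf -- require TLS for remote connections
# TYPE     DATABASE  USER  ADDRESS      METHOD
hostssl    all       all   0.0.0.0/0    scram-sha-256
```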

Regards
Vikas

On Fri, Apr 20, 2018, 21:56 Vick Khera  wrote:

> On Fri, Apr 20, 2018 at 11:24 AM, Vikas Sharma  wrote:
>
>> Hello Guys,
>>
>> Could someone shed some light on instance-wide or database-wide
>> encryption in PostgreSQL, please? Is this possible, and has it been used
>> in production?
>>
>
> For anyone to offer a proper solution, you need to say what purpose your
> encryption will serve. Does the data need to be encrypted at rest? Does it
> need to be encrypted in memory? Does it need to be encrypted at the
> database level or at the application level? Do you need to be able to query
> the data? There are all sorts of scenarios and use cases, and you need to
> be more specific.
>
> For me, whole-disk encryption solved my need, which was to ensure that
> the data on disk cannot be read once removed from the server. For
> certain fields in one table, I use application-level encryption so only
> the application itself can see the original data. Anyone else querying
> that table sees only the encrypted blob, and it is not searchable.
>
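
For the column-level case Vick describes, the pgcrypto extension is one
common way to do it inside the database. A sketch, assuming a symmetric
passphrase held by the application; the table, column, and the `app_key`
psql variable are made up for illustration (the encrypted column must be
bytea):

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- encrypt on write
INSERT INTO customers (name, ssn)
VALUES ('Alice', pgp_sym_encrypt('123-45-6789', :'app_key'));

-- decrypt on read; anyone without the key sees only the bytea blob
SELECT name, pgp_sym_decrypt(ssn, :'app_key') AS ssn
FROM customers;
```

As Vick notes, rows are not searchable on the encrypted column, so keep
any columns you need to query in the clear.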


Postgres and fsync

2018-04-22 Thread Tim Cross
Hi all,

The recent article in LWN regarding fsync error reporting in the Linux
kernel, and the potential for lost data, has prompted me to ask two
questions.

1. Is this issue low-level enough that it affects all potentially
supported sync methods on Linux? For example, if you were concerned about
this issue and had a filesystem which supports open_sync or
open_datasync, is switching to one of those options worth considering, or
are all sync methods impacted?

2. If running under xfs as the filesystem, is there a preferred sync
method, or is this something which really needs to be benchmarked to
decide?

For background, one of our databases is large: approximately 7 TB, with
some tables seeing a very high insert rate, i.e. approximately
1,600,000,000 new records added per day and a similar number deleted (no
updates), maintaining a table size of about 3 TB. We expect to increase
the number of retained records, which will see the table grow to about
6 TB. This represents a fair amount of I/O, and we want the fastest I/O
we can achieve with the highest data reliability we can get. The columns
in the table are small: 7 double precision, 2 integer, 1 date and 2
timestamp.
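
To put those numbers in perspective, a back-of-envelope estimate of the
heap write volume (the ~24-byte per-tuple overhead is an approximation;
WAL, index writes, and the delete traffic come on top of this):

```python
# Rough per-row size for the table described above.
doubles = 7 * 8        # double precision: 8 bytes each
ints    = 2 * 4        # integer: 4 bytes each
date    = 1 * 4        # date: 4 bytes
stamps  = 2 * 8        # timestamp: 8 bytes each
header  = 24           # approximate per-tuple overhead (assumption)

row_bytes = doubles + ints + date + stamps + header   # ~108 bytes

rows_per_day = 1_600_000_000
heap_gb_per_day = rows_per_day * row_bytes / 1e9
avg_mb_per_sec  = rows_per_day * row_bytes / 86_400 / 1e6

print(round(heap_gb_per_day, 1))   # heap inserts alone, GB/day
print(round(avg_mb_per_sec, 1))    # sustained average, MB/s
```

So the insert stream alone averages on the order of 2 MB/s of heap data,
with real peaks presumably much higher.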

Platform is RHEL, Postgres 9.6.8, filesystem xfs backed by an HP SAN.
Current wal_sync_method is fsync.
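
For reference, the candidate wal_sync_method values on Linux, and the
stock tool for benchmarking them on the actual SAN volume (pg_test_fsync
ships with Postgres; the test-file path is a placeholder):

```ini
# postgresql.conf -- candidate settings; benchmark before changing
# (run "pg_test_fsync -f /path/on/the/xfs/volume/testfile" to compare)
wal_sync_method = fsync          # current setting
#wal_sync_method = fdatasync     # the Linux default
#wal_sync_method = open_datasync
#wal_sync_method = open_sync
```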

Tim

-- 
Tim Cross



Re: Postgres and fsync

2018-04-22 Thread Andres Freund
Hi,

On 2018-04-23 08:30:25 +1000, Tim Cross wrote:
> The recent article in LWN regarding fsync error reporting in the Linux
> kernel, and the potential for lost data, has prompted me to ask two
> questions.

Note that you need to have *storage* failures for this to happen,
i.e. your disk needs to die and there's no RAID or similar to mask the
issue.


> 1. Is this issue low level enough that it affects all potentially
> supported sync methods on Linux? For example, if you were concerned
> about this issue and you had a filesystem which supports open_sync or
> open_datasync etc, is switching to one of these options something which
> should be considered or is this issue low level enough that all sync
> methods are impacted?

No, the issue is largely about datafiles whereas the setting you refer
to is about the WAL.

Greetings,

Andres Freund