Re: encoding option when database backup

2024-01-03 Thread Taek Oh
Thank you Rob for your response.

Unfortunately, the Encoding drop-down listbox does not exist in pgAdmin v8.
Therefore, I would like to find a way to activate the listbox, or otherwise
select the encoding method, during the backup procedure.

Regards,

Taek

On Wed, Jan 3, 2024 at 4:19 PM rob stone  wrote:

>
>
> On Wed, 2024-01-03 at 15:59 +0900, Taek Oh wrote:
> > Hi there,
> >
> > I would like to make an inquiry regarding the encoding option for the
> > database backup.
> > When I was using the previous version of PGADMIN 4, I had a
> > dropbar for the encoding format for the backup option.
> > But I recently started using the latest version of PGADMIN 4(V8.1),
> > and I realized that the dropbar for encoding format has disappeared
> > as we can observe from the attached image.
> > Are there any solutions where I can activate the encoding dropbar?
> >
> > Thank you in advance,
> >
> > Taek
> >
> >
>
>
> See https://www.pgadmin.org/docs/pgadmin4/latest/backup_dialog.html
> where it mentions:-
>
> Use the Encoding drop-down listbox to select the character encoding
> method that should be used for the archive.
>
>
>


Re: encoding option when database backup

2024-01-03 Thread Adrian Klaver

On 1/3/24 05:14, Taek Oh wrote:

Thank you Rob for your response.

Unfortunately, the Encoding drop-down listbox does not exist in pgAdmin v8.
Therefore, I would like to find a way to activate the listbox, or otherwise
select the encoding method, during the backup procedure.


1) Per

https://www.pgadmin.org/docs/pgadmin4/8.1/backup_server_dialog.html

"Use the Encoding drop-down listbox to select the character encoding 
method that should be used for the archive. Note: This option is visible 
only for database server greater than or equal to 11."


So are you trying to back up a Postgres version older than 11?


2) pgAdmin is a separate project from the Postgres server. If you are 
trying to back up from a Postgres 11+ server and don't see the encoding 
dropdown, then you should probably bring that up on:


https://www.postgresql.org/list/pgadmin-support/
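
(In the meantime, since pgAdmin's Backup dialog drives pg_dump under the 
hood, the encoding can also be set directly from the command line. A minimal 
sketch, assuming a hypothetical database named mydb and UTF8 as the desired 
dump encoding:

  # -E / --encoding sets the character encoding of the dump, independent of
  # what the pgAdmin dialog exposes; -F c writes a custom-format archive that
  # pg_restore (and pgAdmin's Restore dialog) can read
  pg_dump -h localhost -p 5432 -U postgres -E UTF8 -F c -f mydb.backup mydb

Adjust the connection options to match your server.)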



Regards,

Taek

On Wed, Jan 3, 2024 at 4:19 PM rob stone wrote:




On Wed, 2024-01-03 at 15:59 +0900, Taek Oh wrote:
 > Hi there,
 >
 > I would like to make an inquiry regarding the encoding option for the
 > database backup.
 > When I was using the previous version of PGADMIN 4, I had a
 > dropbar for the encoding format for the backup option.
 > But I recently started using the latest version of PGADMIN 4(V8.1),
 > and I realized that the dropbar for encoding format has disappeared
 > as we can observe from the attached image.
 > Are there any solutions where I can activate the encoding dropbar?
 >
 > Thank you in advance,
 >
 > Taek
 >
 >


See https://www.pgadmin.org/docs/pgadmin4/latest/backup_dialog.html

where it mentions:-

Use the Encoding drop-down listbox to select the character encoding
method that should be used for the archive.




--
Adrian Klaver
adrian.kla...@aklaver.com





unable to register witness node in repmgr

2024-01-03 Thread Vijaykumar Patil
Happy new year to everyone !!


Can anyone help me with the issue below? We are not able to register a witness 
node in repmgr and are getting the following error.

postgres@adsoazdbaodb04[TEST]-/home/postgres: repmgr -f 
/u01/app/admin/Data/repmgr.conf witness register -h scrbtrheldbaas001
INFO: connecting to witness node "adsoazdbaodb04" (ID: 4)
DEBUG: connecting to: "user=repmgr connect_timeout=2 dbname=repmgr 
host=adsoazdbaodb04 fallback_application_name=repmgr options=-csearch_path="
INFO: connecting to primary node
DEBUG: connecting to: "user=repmgr connect_timeout=2 dbname=repmgr 
host=scrbtrheldbaas001 fallback_application_name=repmgr options=-csearch_path="
ERROR: witness node cannot be in the same cluster as the primary node
DETAIL: database system identifiers on primary node and provided witness node 
match (7317528477531093832)
HINT: the witness node must be created on a separate read/write node


postgres@adsoazdbaodb04[TEST]-/home/postgres: cat 
/u01/app/admin/Data/repmgr.conf
#cluster='pg_cluster'
node_id=4
node_name=adsoazdbaodb04
conninfo='host=adsoazdbaodb04 user=repmgr dbname=repmgr connect_timeout=2'
data_directory='/u01/app/admin/Data/pg_da'
failover=automatic
promote_command='repmgr standby promote -f /u01/app/admin/Data/repmgr.conf 
--log-to-file'
follow_command='repmgr standby follow -f /u01/app/admin/Data/repmgr.conf 
--log-to-file --upstream-node-id=%n'
pg_bindir='/u01/app/admin/Postgresql/15.3/bin'
user='repmgr'
log_level = 'DEBUG'
log_file = '/u01/app/admin/Data/repmgr.log'
monitoring_history = 'true'
primary_visibility_consensus = 'true'
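
(The HINT is the key part: the witness and the primary report the same 
database system identifier, which usually means the witness data directory 
was cloned from the primary, or that both conninfo strings end up at the same 
instance. A witness has to be its own independently initialised instance. A 
rough, hedged sketch of what that implies, reusing the hypothetical paths and 
hostnames from the config above:

  # initialise a brand-new instance for the witness instead of copying the
  # primary, so it gets its own database system identifier
  initdb -D /u01/app/admin/Data/pg_witness
  pg_ctl -D /u01/app/admin/Data/pg_witness -l /u01/app/admin/Data/witness.log start

  # create the repmgr role and database on the new witness instance
  # (use a different port if another instance is already running on this host)
  createuser -s repmgr
  createdb -O repmgr repmgr

  # point data_directory (and conninfo, if the port changed) in repmgr.conf at
  # the new instance, then register, with -h naming the primary
  repmgr -f /u01/app/admin/Data/repmgr.conf witness register -h scrbtrheldbaas001
)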

Thanks & Regards
Vijaykumar
Database Operations
Maersk Global Service Centre, Pune.






Re: Sample data generator for performance testing

2024-01-03 Thread Adrian Klaver

On 1/2/24 23:23, arun chirappurath wrote:

Hi All,

Do we have any open source tools which can be used to create sample data 
at scale from our postgres databases?

Which considers data distribution and randomness


Is this for all tables in the database or a subset?

Does it need to deal with foreign key relationships?

What are the sizes of the existing data and what size sample data do you 
want to produce?




Regards,
Arun


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Sample data generator for performance testing

2024-01-03 Thread arun chirappurath
Hi Adrian,

Thanks for your mail.

Is this for all tables in the database or a subset? Yes

Does it need to deal with foreign key relationships? No

What are the sizes of the existing data and what size sample data do you
want to produce? 1 GB, and 1 GB of test data.

On Wed, 3 Jan, 2024, 22:40 Adrian Klaver,  wrote:

> On 1/2/24 23:23, arun chirappurath wrote:
> > Hi All,
> >
> > Do we have any open source tools which can be used to create sample data
> > at scale from our postgres databases?
> > Which considers data distribution and randomness
>
>
>
> >
> > Regards,
> > Arun
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Sample data generator for performance testing

2024-01-03 Thread Adrian Klaver

On 1/3/24 09:24, arun chirappurath wrote:

Hi Adrian,

Thanks for your mail.

Is this for all tables in the database or a subset? Yes


Yes all tables or yes just some tables?



Does it need to deal with foreign key relationships? No

What are the sizes of the existing data and what size sample data do you
want to produce?1Gb and 1Gb test data.


If the source data is 1GB and the test data is 1GB then there is no 
sampling; you are using the data population in its entirety.




On Wed, 3 Jan, 2024, 22:40 Adrian Klaver wrote:


On 1/2/24 23:23, arun chirappurath wrote:
 > Hi All,
 >
 > Do we have any open source tools which can be used to create
sample data
 > at scale from our postgres databases?
 > Which considers data distribution and randomness



 >
 > Regards,
 > Arun

-- 
Adrian Klaver

adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Sample data generator for performance testing

2024-01-03 Thread arun chirappurath
On Wed, 3 Jan, 2024, 23:03 Adrian Klaver,  wrote:

> On 1/3/24 09:24, arun chirappurath wrote:
> > Hi Adrian,
> >
> > Thanks for your mail.
> >
> > Is this for all tables in the database or a subset? Yes
>
> Yes all tables or yes just some tables?
> All tables, except some which have user details.


> >
> > Does it need to deal with foreign key relationships? No
> >
> > What are the sizes of the existing data and what size sample data do you
> > want to produce?1Gb and 1Gb test data.
>
> If the source data is 1GB and the test data is 1GB then there is no
> sampling, you are using the data population in its entirety.
>
> Yes. I would like to double the load and test.


Also, do we have any standard methods for sampling and generating test data?

>
>
>
> > On Wed, 3 Jan, 2024, 22:40 Adrian Klaver,  > > wrote:
> >
> > On 1/2/24 23:23, arun chirappurath wrote:
> >  > Hi All,
> >  >
> >  > Do we have any open source tools which can be used to create
> > sample data
> >  > at scale from our postgres databases?
> >  > Which considers data distribution and randomness
> >
> >
> >
> >  >
> >  > Regards,
> >  > Arun
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com 
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Sample data generator for performance testing

2024-01-03 Thread Jeremy Schneider
On 1/2/24 11:23 PM, arun chirappurath wrote:
> Do we have any open source tools which can be used to create sample data
> at scale from our postgres databases?
> Which considers data distribution and randomness

I would suggest using the most common tools whenever possible, because
then if you want to discuss results with other people (for example on
these mailing lists) you're working with data sets that are widely
and well understood.

The most common tool for PostgreSQL is pgbench, which uses a TPC-B-like
schema that you can scale to any size, always with the same [small] number of
tables/columns and the same uniform data distribution, and there are
relationships between tables so you can create FKs if needed.
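
A minimal sketch of that, assuming a hypothetical target database named testdb:

  # initialise the TPC-B-like schema; each scale-factor unit is 100,000 rows
  # in pgbench_accounts, so -s 50 is roughly 750 MB of data
  pgbench -i -s 50 testdb

  # run the default workload with 10 client connections for 60 seconds
  pgbench -c 10 -T 60 testdb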

My second favorite tool is sysbench. Any number of tables, easily scaled
to any size, standardized schema with a small number of columns and no
relationships/FKs.  Data distribution is uniformly random; however, on the
query side it supports a bunch of different distribution models, not
just uniform random, as well as queries processing ranges of rows.
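
For example (a sketch, assuming sysbench 1.0+ with its PostgreSQL driver and 
hypothetical connection settings):

  # create 16 tables of 1,000,000 rows each in database sbtest
  sysbench oltp_read_write --db-driver=pgsql --pgsql-host=localhost \
    --pgsql-user=sbtest --pgsql-password=secret --pgsql-db=sbtest \
    --tables=16 --table-size=1000000 prepare

  # run the mixed read/write workload with 8 threads for 60 seconds
  sysbench oltp_read_write --db-driver=pgsql --pgsql-host=localhost \
    --pgsql-user=sbtest --pgsql-password=secret --pgsql-db=sbtest \
    --tables=16 --table-size=1000000 --threads=8 --time=60 run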

The other tool that I'm intrigued by these days is BenchBase from CMU.
It can do TPC-C and a bunch of other schemas/workloads, and you can scale the
data sizes. If you're just looking at data generation and you're going
to make your own workloads, BenchBase has a lot of different
schemas available out of the box.

You can always hand-roll your schema and data with scripts & SQL, but
the more complex and bespoke your performance test schema is, the more
work & explaining it takes to get lots of people to engage in a
discussion since they need to take time to understand how the test is
engineered. For very narrowly targeted reproductions this is usually the
right approach with a very simple schema and workload, but not commonly
for general performance testing.

-Jeremy


-- 
http://about.me/jeremy_schneider





Re: Sample data generator for performance testing

2024-01-03 Thread arun chirappurath
Thanks for the insights..

Thanks,
Arun

On Wed, 3 Jan, 2024, 23:26 Jeremy Schneider, 
wrote:

> On 1/2/24 11:23 PM, arun chirappurath wrote:
> > Do we have any open source tools which can be used to create sample data
> > at scale from our postgres databases?
> > Which considers data distribution and randomness
>
> I would suggest to use the most common tools whenever possible, because
> then if you want to discuss results with other people (for example on
> these mailing lists) then you're working with data sets that are widely
> and well understood.
>
> The most common tool for PostgreSQL is pgbench, which does a TPCB-like
> schema that you can scale to any size, always the same [small] number of
> tables/columns and same uniform data distribution, and there are
> relationships between tables so you can create FKs if needed.
>
> My second favorite tool is sysbench. Any number of tables, easily scale
> to any size, standardized schema with small number of colums and no
> relationships/FKs.  Data distribution is uniformly random however on the
> query side it supports a bunch of different distribution models, not
> just uniform random, as well as queries processing ranges of rows.
>
> The other tool that I'm intrigued by these days is benchbase from CMU.
> It can do TPCC and a bunch of other schemas/workloads, you can scale the
> data sizes. If you're just looking at data generation and you're going
> to make your own workloads, well benchbase has a lot of different
> schemas available out of the box.
>
> You can always hand-roll your schema and data with scripts & SQL, but
> the more complex and bespoke your performance test schema is, the more
> work & explaining it takes to get lots of people to engage in a
> discussion since they need to take time to understand how the test is
> engineered. For very narrowly targeted reproductions this is usually the
> right approach with a very simple schema and workload, but not commonly
> for general performance testing.
>
> -Jeremy
>
>
> --
> http://about.me/jeremy_schneider
>
>


Re: Sample data generator for performance testing

2024-01-03 Thread Adrian Klaver


On 1/3/24 9:50 AM, arun chirappurath wrote:



On Wed, 3 Jan, 2024, 23:03 Adrian Klaver,  
wrote:


On 1/3/24 09:24, arun chirappurath wrote:
> Hi Adrian,
>
> Thanks for your mail.
>
> Is this for all tables in the database or a subset? Yes

Yes all tables or yes just some tables?
All tables.except some which has user details. 



>
> Does it need to deal with foreign key relationships? No
>
> What are the sizes of the existing data and what size sample
data do you
> want to produce?1Gb and 1Gb test data.

If the source data is 1GB and the test data is 1GB then there is no
sampling, you are using the data population in its entirety.

Yes.would like to double the load and test.



Does that mean you want to take the 1GB of your existing data and double 
it to 2GB while maintaining the data distribution from the original data?




Also do we have any standard methods for sampling and generating test data



Something like?:


https://www.postgresql.org/docs/current/sql-select.html


"|TABLESAMPLE /|sampling_method|/ ( /|argument|/ [, ...] ) [ REPEATABLE 
( /|seed|/ ) ]|


   A |TABLESAMPLE| clause after a /|table_name|/ indicates that the
   specified /|sampling_method|/ should be used to retrieve a subset of
   the rows in that table. This sampling precedes the application of
   any other filters such as |WHERE| clauses. The standard PostgreSQL
   distribution includes two sampling methods, |BERNOULLI| and
   |SYSTEM|, and other sampling methods can be installed in the
   database via extensions

...
   "

Read the rest of the documentation for TABLESAMPLE to get the details.
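
As a hedged sketch of how that could be used to build a sample table 
(the table name here is hypothetical):

  -- copy roughly 10% of the rows, sampled row-by-row (BERNOULLI), into a
  -- separate test table; REPEATABLE makes the sample reproducible
  CREATE TABLE orders_sample AS
  SELECT *
  FROM orders TABLESAMPLE BERNOULLI (10) REPEATABLE (42);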






>
> On Wed, 3 Jan, 2024, 22:40 Adrian Klaver,
 > wrote:
>
>     On 1/2/24 23:23, arun chirappurath wrote:
>      > Hi All,
>      >
>      > Do we have any open source tools which can be used to create
>     sample data
>      > at scale from our postgres databases?
>      > Which considers data distribution and randomness
>
>
>
>      >
>      > Regards,
>      > Arun
>
>     --
>     Adrian Klaver
> adrian.kla...@aklaver.com 
>

-- 
Adrian Klaver

adrian.kla...@aklaver.com