Re: Get the table creation DDL

2022-07-11 Thread Walter Dörwald

On 10 Jul 2022, at 17:40, Igor Korot wrote:


Hi,
Is there a query I can execute that will give me the CREATE TABLE()
command used to create a table?

Thank you.


I am using the following query for that:

```sql
select
    a.attname,
    a.attnum,
    a.attnotnull,
    pg_catalog.format_type(a.atttypid, a.atttypmod) as column_type,
    case
        when d.adrelid is not null then pg_catalog.pg_get_expr(d.adbin, d.adrelid)
        else null
    end as default_value
from
    pg_catalog.pg_class c
join
    pg_catalog.pg_attribute a on c.oid = a.attrelid and a.attnum > 0 and not a.attisdropped
join
    pg_catalog.pg_namespace n on c.relnamespace = n.oid
left outer join
    pg_catalog.pg_attrdef d on d.adrelid = a.attrelid and d.adnum = a.attnum and a.atthasdef
where
    c.relname = %s and
    n.nspname = %s
order by
    attnum;
```

and the following Python code to format the `create table` statement:

```python
sql = f"create table {schema}.{name}\n"
sql += f"(\n"
for (last, column) in islast(cursor):
column_term = "" if last else ","
notnull = " not null" if column.attnotnull else ""
	default = f" default {column.default_value}" if column.default_value is 
not None else ""
	sql += f"\t{column.attname} 
{column.column_type}{default}{notnull}{column_term}\n"

sql += f");"
return sql
```
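
For a hypothetical two-column table the generated statement comes out roughly like this (illustrative only; the table and sequence names are made up):

```sql
create table public.t_example
(
	id integer default nextval('t_example_id_seq'::regclass) not null,
	name text
);
```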

Servus,
   Walter


PostgreSQL 14.4 ERROR: out of memory issues

2022-07-11 Thread Aleš Zelený
Hello,

we are experiencing out-of-memory issues after a PostgreSQL upgrade from 14.3
to 14.4 on CentOS 7.

OS: CentOS 7

No OOM killer messages in the syslog. SWAP is disabled.

-bash-4.2$ sysctl -q vm.overcommit_memory
vm.overcommit_memory = 2

[root@sts-uat-pgsql100 ~]# swapon -vs
[root@sts-uat-pgsql100 ~]#

[root@sts-uat-pgsql100 ~]# sysctl -q vm.overcommit_ratio
vm.overcommit_ratio = 95


-bash-4.2$ uname -r
3.10.0-1160.42.2.el7.x86_64

-bash-4.2$ free -h
              total        used        free      shared  buff/cache   available
Mem:            39G        7,8G        220M         10G         31G         20G
Swap:            0B          0B          0B



postgres=# show shared_buffers ;
 shared_buffers

 10GB
(1 row)

postgres=# show  work_mem ;
 work_mem
--
 4MB
(1 row)

postgres=# show maintenance_work_mem ;
 maintenance_work_mem
--
 512MB
(1 row)

postgres=# show max_parallel_workers;
 max_parallel_workers
--
 8
(1 row)

postgres=# show max_parallel_maintenance_workers ;
 max_parallel_maintenance_workers
--
 2
(1 row)

postgres=# select count(*) from pg_stat_activity ;
 count
---
93
(1 row)

postgres=# show max_connections ;
 max_connections
-
 1000
(1 row)

No changes were made on the application side before the out-of-memory errors
happened.



So far, it has happened three times (during a single week) since the 14.3 ->
14.4 upgrade; before 14.4 we never suffered from such an issue.

Questions:
1) Can we safely downgrade from 14.4 to 14.3 by shutting down the instance
and reinstalling the 14.3 PG packages (to prove that the issue disappears)?
2) What is the best way to diagnose the root cause?

What we have started to collect:
- vmstat -nwt (per second)
- vmstat -nawt (per second)

~once a minute:
df for /dev/shm
/proc/meminfo
/proc/<pid>/stat for all PG processes
/proc/<pid>/cmdline for all PG processes
/proc/<pid>/status for all PG processes
/proc/<pid>/smaps for all PG processes
list from pg_stat_activity
pmap -x for all PG processes
ps aux listing
smem -u --abbreviate
smem -m --abbreviate
smem -w -k

Except for "pmap", all of the metrics were already collected before the
last out-of-memory issue.


Error messages:
--
postgresql-Fri.log:2022-07-01 06:49:53 CEST [4476]: [20-1]
user=,db=,host=,app= LOG:  could not fork worker process: Cannot allocate
memory
postgresql-Fri.log:2022-07-01 06:49:53 CEST [4476]: [21-1]
user=,db=,host=,app= LOG:  could not fork worker process: Cannot allocate
memory
postgresql-Fri.log:2022-07-01 06:50:02 CEST [4476]: [22-1]
user=,db=,host=,app= LOG:  could not fork new process for connection:
Cannot allocate memory
postgresql-Fri.log:2022-07-01 06:50:02 CEST [4476]: [23-1]
user=,db=,host=,app= LOG:  could not fork new process for connection:
Cannot allocate memory
postgresql-Fri.log:2022-07-01 06:50:02 CEST [4476]: [24-1]
user=,db=,host=,app= LOG:  could not fork new process for connection:
Cannot allocate memory
...
2022-07-02 14:48:07 CEST [3930]: [2-1] user=,db=,host=,app= LOG:  automatic
vacuum of table "prematch.monitoring.tab_replication_clock": index scans: 0
pages: 0 removed, 36 remain, 0 skipped due to pins, 34 skipped
frozen
tuples: 0 removed, 203 remain, 201 are dead but not yet removable,
oldest xmin: 269822444
index scan not needed: 0 pages from table (0.00% of total) had 0
dead item identifiers removed
I/O timings: read: 0.000 ms, write: 0.000 ms
avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s
buffer usage: 67 hits, 0 misses, 0 dirtied
WAL usage: 1 records, 0 full page images, 245 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
TopMemoryContext: 279360 total in 10 blocks; 15384 free (9 chunks); 263976
used
  TopTransactionContext: 8192 total in 1 blocks; 7280 free (12 chunks); 912
used
  TOAST to main relid map: 65536 total in 4 blocks; 35168 free (15 chunks);
30368 used
  AV worker: 16384 total in 2 blocks; 10080 free (10 chunks); 6304 used
Autovacuum Portal: 8192 total in 1 blocks; 7720 free (0 chunks); 472
used
  Vacuum: 8192 total in 1 blocks; 7768 free (0 chunks); 424 used
  Operator class cache: 8192 total in 1 blocks; 552 free (0 chunks); 7640
used
  smgr relation table: 16384 total in 2 blocks; 4592 free (2 chunks); 11792
used
  pgstat TabStatusArray lookup hash table: 8192 total in 1 blocks; 1448
free (0 chunks); 6744 used
  TransactionAbortContext: 32768 total in 1 blocks; 32504 free (0 chunks);
264 used
  Portal hash: 8192 total in 1 blocks; 552 free (0 chunks); 7640 used
  TopPortalContext: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used
  Relcache by OID: 16384 total in 2 blocks; 3504 free (2 chunks); 12880 used
  CacheMemoryContext: 524288 total in 7 blocks; 123552 free (4 chunks);
400736 used
index info: 1024 total in 1 blocks; 0 free (0 chunks); 1024 used:
tab_queue_tmp_pkey
index info: 2048 total in 2 blocks; 952 free 

Re: PostgreSQL 14.4 ERROR: out of memory issues

2022-07-11 Thread Rob Sargent



> On Jul 11, 2022, at 2:50 AM, Aleš Zelený  wrote:
> 
> 
> Hello,
> 
> we are experiencing out-of-memory issues after a PostgreSQL upgrade from 14.3 to 
> 14.4 on CentOS 7.
> 
> OS: CentOS 7
> 
> No OOM killer messages in the syslog. SWAP is disabled.
> 
> -bash-4.2$ sysctl -q vm.overcommit_memory
> vm.overcommit_memory = 2
> 
> [root@sts-uat-pgsql100 ~]# swapon -vs
> [root@sts-uat-pgsql100 ~]# 
> 
> [root@sts-uat-pgsql100 ~]# sysctl -q vm.overcommit_ratio
> vm.overcommit_ratio = 95
> 
> 
> -bash-4.2$ uname -r
> 3.10.0-1160.42.2.el7.x86_64
> 
> -bash-4.2$ free -h
>               total        used        free      shared  buff/cache   available
> Mem:            39G        7,8G        220M         10G         31G         20G
> Swap:            0B          0B          0B
> 
> 
> 
> postgres=# show shared_buffers ;
>  shared_buffers
> 
>  10GB
> (1 row)
> 
> postgres=# show  work_mem ;
>  work_mem
> --
>  4MB
> (1 row)
> 
> postgres=# show maintenance_work_mem ;
>  maintenance_work_mem
> --
>  512MB
> (1 row)
> 
> postgres=# show max_parallel_workers;
>  max_parallel_workers
> --
>  8
> (1 row)
> 
> postgres=# show max_parallel_maintenance_workers ;
>  max_parallel_maintenance_workers
> --
>  2
> (1 row)
> 
> postgres=# select count(*) from pg_stat_activity ;
>  count
> ---
> 93
> (1 row)
> 
> postgres=# show max_connections ;
>  max_connections
> -
>  1000
> (1 row)
> 
> No changes on the application side were made before the out-of-memory 
> happened.
> 
> 
> 
> So far, it has happened three times (during a single week) from the 14.3 -> 
> 14.4 upgrade, before 14.4 we haven't suffered from such an issue.
> 
> Questions:
> 1)  Can we safely downgrade from 14.4 to 14.3 by shutting down the instance 
> and reinstalling the 14.3 PG packages (to prove that the issue disappears)?

> 2) What is the best way to diagnose what is the root cause?
You can turn on SQL logging to see what query is causing the crash.
Has any new code added anything with
   a.id != b.id
on any large table?
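
One way to do that (a sketch, assuming superuser access and that the extra logging volume is acceptable on this instance):

```sql
-- 0 logs every statement; a value in milliseconds logs only slower ones
ALTER SYSTEM SET log_min_duration_statement = 0;
SELECT pg_reload_conf();
```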




> What we have started to collect:

When you think you see it happening, you can query the
system tables (pg_stat_*) for currently running queries.
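
For example, a minimal check against pg_stat_activity:

```sql
-- non-idle backends, longest running first
SELECT pid, usename, state, now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;
```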


> - vmstat -nwt (per second)
> - vmstat -nawt (per





Re: Get the table creation DDL

2022-07-11 Thread Mladen Gogala

On 7/10/22 22:52, Rob Sargent wrote:
Are you alone or on a team? 


What are your pronouns?

--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com


Re: PostgreSQL 14.4 ERROR: out of memory issues

2022-07-11 Thread Pavel Stehule
Hi

It looks like a memory leak

>   ErrorContext: 8192 total in 1 blocks; 7928 free (5 chunks); 264 used
> Grand total: 1456224 bytes in 195 blocks; 378824 free (165 chunks);
> 1077400 used
> 2022-07-02 14:48:07 CEST [3930]: [3-1] user=,db=,host=,app= ERROR:  out of
> memory
> 2022-07-02 14:48:07 CEST [3930]: [4-1] user=,db=,host=,app= DETAIL:
>  Failed on request of size 152094068 in memory context
> "TopTransactionContext".
> 2022-07-02 14:48:07 CEST [3930]: [5-1] user=,db=,host=,app= CONTEXT:
>  automatic vacuum of table "prematch.replication.tab_queue_tmp"
>
> 2022-07-02 14:48:47 CEST [4476]: [42-1] user=,db=,host=,app= LOG:  could
> not fork worker process: Cannot allocate memory
> 2022-07-02 14:48:47 CEST [4476]: [43-1] user=,db=,host=,app= LOG:  could
> not fork worker process: Cannot allocate memory
> terminate called after throwing an instance of 'std::bad_alloc'
>   what():  std::bad_alloc
>
> 2022-07-02 14:48:47 CEST [4476]: [44-1] user=,db=,host=,app= LOG:
>  background worker "parallel worker" (PID 4303) was terminated by signal 6:
> Aborted
>
>
But what you see is probably just a side effect. Do you use some extension
written in C++? This is a C++ message. This process kills Postgres.

It can be related to some operation over prematch.replication.tab_queue_tmp,
because in both cases an autovacuum was started.
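
A quick way to see what is installed and preloaded (just a sketch; this only covers extensions and preload libraries, not every loadable module):

```sql
SELECT extname, extversion FROM pg_extension ORDER BY extname;
SHOW shared_preload_libraries;
```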


>
>
> 132  hba parser context: 25600 total in 6 blocks; 4464 free (8 chunks);
> 21136 used
> 133  ErrorContext: 8192 total in 1 blocks; 7928 free (5 chunks); 264 used
> 134  Grand total: 1456224 bytes in 195 blocks; 378824 free (165 chunks);
> 1077400 used
> 135  2022-07-02 14:48:07 CEST [3930]: [3-1] user=,db=,host=,app= ERROR:  out
> of memory
> 136  2022-07-02 14:48:07 CEST [3930]: [4-1] user=,db=,host=,app= DETAIL:
>  Failed on request of size 152094068 in memory context
> "TopTransactionContext".
> 137  2022-07-02 14:48:07 CEST [3930]: [5-1] user=,db=,host=,app= CONTEXT:
>  automatic vacuum of table "prematch.replication.tab_queue_tmp"
>

I am not sure, but it looks like a corrupted table:
prematch.replication.tab_queue_tmp
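
If corruption is the suspicion, one way to check the table is the amcheck extension (a sketch only; verify_heapam is available in amcheck 1.3 on PostgreSQL 14, the schema-qualified name is assumed from the log, and the scan can be expensive on large tables):

```sql
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT * FROM verify_heapam('replication.tab_queue_tmp');
```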


> And a few days later again:
> 2022-07-08 14:54:22 CEST [4791]: [81186-1]
> user=app_evaluation,db=prematch,host=172.25.0.80,app=Evaluation_STS_UAT
> DETAIL: parameters: $1 = '1', $2 = '1748010445', $3 = '0', $4 = '1000' 
> terminate
> called after throwing an instance of 'std::bad_alloc' terminate called
> after throwing an instance of 'std::bad_alloc' what(): what():
> std::bad_allocstd::bad_alloc 2022-07-08 14:54:23 CEST [4476]: [49-1]
> user=,db=,host=,app= LOG: background worker "parallel worker" (PID 25251)
> was terminated by signal 6: Aborted
> 2022-07-08 14:54:23 CEST [4476]: [51-1] user=,db=,host=,app= LOG:
>  terminating any other active server processes
>
>
> Thanks Ales Zeleny
>


Re: Get the table creation DDL

2022-07-11 Thread Rob Sargent

On 7/11/22 06:31, Mladen Gogala wrote:

On 7/10/22 22:52, Rob Sargent wrote:
Are you alone or on a team? 


What are your pronouns?

This did make me chuckle, but no, I am just asking whether or not the OP 
is currently part of a team.






Re: PostgreSQL 14.4 ERROR: out of memory issues

2022-07-11 Thread Aleš Zelený
Hello,

On Mon, 11 Jul 2022 at 14:47, Pavel Stehule
wrote:

> Hi
>
> It looks like a memory leak
>

As we are collecting some data on the Postgres processes, I've found processes
with large anonymous memory (RssAnon) allocations:

Some examples:
2022-07-08T14:50:24+0200 Name: postmaster Pid: 31522 RssAnon:  742600 kB
2022-07-08T14:51:13+0200 Name: postmaster Pid: 31522 RssAnon:  742600 kB
2022-07-08T14:52:05+0200 Name: postmaster Pid: 31522 RssAnon:  742852 kB
2022-07-08T14:52:58+0200 Name: postmaster Pid: 31522 RssAnon:  743112 kB
2022-07-08T14:53:50+0200 Name: postmaster Pid: 31522 RssAnon:  743112 kB
2022-07-08T14:50:24+0200 Name: postmaster Pid: 31647 RssAnon:  684632 kB
2022-07-08T14:51:13+0200 Name: postmaster Pid: 31647 RssAnon:  684632 kB
2022-07-08T14:52:05+0200 Name: postmaster Pid: 31647 RssAnon:  684676 kB
2022-07-08T14:52:58+0200 Name: postmaster Pid: 31647 RssAnon:  684700 kB
2022-07-08T14:53:50+0200 Name: postmaster Pid: 31647 RssAnon:  684824 kB
2022-07-08T14:50:27+0200 Name: postmaster Pid: 7866 RssAnon: 1180960 kB
2022-07-08T14:51:16+0200 Name: postmaster Pid: 7866 RssAnon: 1180960 kB
2022-07-08T14:52:08+0200 Name: postmaster Pid: 7866 RssAnon: 1180960 kB
2022-07-08T14:53:01+0200 Name: postmaster Pid: 7866 RssAnon: 1180960 kB
2022-07-08T14:53:53+0200 Name: postmaster Pid: 7866 RssAnon: 1180960 kB

Other processes use less memory:
2022-07-08T14:52:08+0200 Name: postmaster Pid: 3869 RssAnon:3256 kB
2022-07-08T14:53:02+0200 Name: postmaster Pid: 3869 RssAnon:3256 kB
2022-07-08T14:53:54+0200 Name: postmaster Pid: 3869 RssAnon:3256 kB
2022-07-08T14:50:27+0200 Name: postmaster Pid: 4217 RssAnon:2532 kB
2022-07-08T14:51:16+0200 Name: postmaster Pid: 4217 RssAnon:2532 kB



And Shared memory:
2022-07-08T14:59:12+0200 Name: postmaster Pid: 5719 RssShmem:  908264 kB
2022-07-08T14:59:20+0200 Name: postmaster Pid: 5719 RssShmem:  908264 kB
2022-07-08T14:59:28+0200 Name: postmaster Pid: 5719 RssShmem:  908264 kB
2022-07-08T14:59:37+0200 Name: postmaster Pid: 5719 RssShmem:  908264 kB
2022-07-08T14:59:45+0200 Name: postmaster Pid: 5719 RssShmem:  908264 kB
2022-07-08T14:59:53+0200 Name: postmaster Pid: 5719 RssShmem:  908264 kB
2022-07-08T14:50:27+0200 Name: postmaster Pid: 5721 RssShmem: 1531656 kB
2022-07-08T14:51:16+0200 Name: postmaster Pid: 5721 RssShmem: 1531656 kB
2022-07-08T14:52:08+0200 Name: postmaster Pid: 5721 RssShmem: 1531656 kB
2022-07-08T14:53:01+0200 Name: postmaster Pid: 5721 RssShmem: 1531656 kB
2022-07-08T14:53:53+0200 Name: postmaster Pid: 5721 RssShmem: 1531656 kB

while other processes use fairly little shared memory:

2022-07-08T14:55:25+0200 Name: postmaster Pid: 8521 RssShmem:1988 kB
2022-07-08T14:55:33+0200 Name: postmaster Pid: 8521 RssShmem:1988 kB
2022-07-08T14:55:40+0200 Name: postmaster Pid: 8521 RssShmem:2104 kB
2022-07-08T14:55:49+0200 Name: postmaster Pid: 8521 RssShmem:2104 kB
2022-07-08T14:55:57+0200 Name: postmaster Pid: 8521 RssShmem:2104 kB
2022-07-08T14:56:06+0200 Name: postmaster Pid: 8521 RssShmem:2104 kB
2022-07-08T14:56:15+0200 Name: postmaster Pid: 8521 RssShmem:2104 kB

Which processes are better candidates for investigation: the ones using more
anonymous memory, or the ones using more shared memory?
I can do the (tedious) work to get a list of statements that a given PID
executed (at least the statements that appear in the minute snapshots).
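
To map one of the suspicious PIDs back to what it is executing right now (a sketch; this only works while the backend is still alive, and the PID below is just one of the examples above):

```sql
SELECT pid, backend_type, usename, application_name, state, query
FROM pg_stat_activity
WHERE pid = 31522;
```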


>   ErrorContext: 8192 total in 1 blocks; 7928 free (5 chunks); 264 used
>> Grand total: 1456224 bytes in 195 blocks; 378824 free (165 chunks);
>> 1077400 used
>> 2022-07-02 14:48:07 CEST [3930]: [3-1] user=,db=,host=,app= ERROR:  out
>> of memory
>> 2022-07-02 14:48:07 CEST [3930]: [4-1] user=,db=,host=,app= DETAIL:
>>  Failed on request of size 152094068 in memory context
>> "TopTransactionContext".
>> 2022-07-02 14:48:07 CEST [3930]: [5-1] user=,db=,host=,app= CONTEXT:
>>  automatic vacuum of table "prematch.replication.tab_queue_tmp"
>>
>> 2022-07-02 14:48:47 CEST [4476]: [42-1] user=,db=,host=,app= LOG:  could
>> not fork worker process: Cannot allocate memory
>> 2022-07-02 14:48:47 CEST [4476]: [43-1] user=,db=,host=,app= LOG:  could
>> not fork worker process: Cannot allocate memory
>> terminate called after throwing an instance of 'std::bad_alloc'
>>   what():  std::bad_alloc
>>
> 2022-07-02 14:48:47 CEST [4476]: [44-1] user=,db=,host=,app= LOG:
>>  background worker "parallel worker" (PID 4303) was terminated by signal 6:
>> Aborted
>>
>>
> But what you see is probably just side effect. Do you use some extension
> in C++? This is C++ message. This process kills Postgres.
>

These extensions are installed (and no change from 14.3):

                      List of installed extensions
   Name   | Version | Schema |                Description
----------+---------+--------+--------------------------------------------
 amcheck  | 1.3     | public | functions for verifying relation integrity
 dblin

Is there a official benchmark comparing PG versions?

2022-07-11 Thread Jean Baro
Hi there,

Out of curiosity.

Is there (as part of the PG development cycle) any public website that
compares the performance of PG versions for common use cases, so that we
(users) can follow the progress of PG over the years, especially on modern
hardware and cloud (e.g., DirectIO)?

Something like https://arewefastyet.com/ for JS engines? Maybe using TPC as
the underlying test.

P.S.: I understand this is not simple, but PG is not simple either, and I see
how great the community is at improving PG to become the de facto relational
DB for the cloud era.

Thanks!


Problem upgrading from 10 to 14 with pg_upgrade: unexpected error upgrading "template1" database for some clusters

2022-07-11 Thread Ken Yeung
In the course of upgrading a bunch of database clusters from 10 to 14 using
the pg_upgrade tool, some databases produce the following error, recorded in
pg_upgrade_dump_1.log, such that the cluster upgrade fails:
FATAL:  password authentication failed for user "postgres"

This was done on Microsoft Windows 11.
The PostgreSQL 10 and 14 builds used were made with Visual Studio 2015.
Only a small number of database clusters have this problem, around 2 of 200.
It appears these databases came from an earlier cluster upgrade from 9.4 to 10
as well.

It would be very helpful if there were any possible workaround to overcome this
problem.

I attached a sample pg_upgrade_dump_1.log reporting this error.

pg_upgrade_dump_1.log
Description: pg_upgrade_dump_1.log


Re: - operator overloading not giving expected result

2022-07-11 Thread Rajesh S

Hi All,

Thank you for your valuable comments and suggestions.  As was pointed 
out, we are using EDB Postgres and the language was specified as 
'edbspl'.  We made it work by setting the search_path variable as follows:


set search_path="$user", public, pg_catalog

We also changed the operator overloading function as below. This gave 
us the expected results.  Thanks once again for all your comments.  
Consider this thread closed.


CREATE OR REPLACE FUNCTION public.dt_minus_dt(dt1 DATE, dt2 DATE)
RETURNS INTEGER AS
$$
    SELECT (to_char(dt1::date, 'J'::varchar) - to_char(dt2::date, 'J'::varchar))::integer
$$ LANGUAGE SQL IMMUTABLE STRICT PARALLEL SAFE COST 1;

CREATE OPERATOR public.- (
  PROCEDURE = dt_minus_dt,
  LEFTARG = DATE,
  RIGHTARG = DATE
);
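
A quick sanity check on the same EDB setup (hypothetical dates; the OPERATOR() syntax forces the schema-qualified operator regardless of search_path):

```sql
SELECT DATE '2022-07-11' OPERATOR(public.-) DATE '2022-07-01' AS day_diff;  -- expect 10
```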


Thanks,

Rajesh S

On 08-07-2022 08:05 pm, Ravi Krishna wrote:

LANGUAGE 'edbspl'

This is the root cause of your issue.


You are not using PGSQL but the EDB version of it, which is compatible with Oracle 
PL/SQL.

Re: Problem upgrading from 10 to 14 with pg_upgrade: unexpected error upgrading "template1" database for some clusters

2022-07-11 Thread Thomas Boussekey
Hi,

On Mon, 11 Jul 2022 at 21:22, Ken Yeung
wrote:

> In the course of upgrading a bunch of database clusters from 10 to 14
> using pg_upgrade tool, some databases result in the following error
> recorded in pg_upgrade_dump_1.log, such that cluster upgrade fails.
> FATAL:  password authentication failed for user "postgres"
>
This error happens on the restore side (PG 14):

pg_restore: error: connection to server at "localhost" (::1), port 50432
failed: FATAL:  password authentication failed for user "postgres"

means that the connection attempt uses IPv6 (is that possible, according to your target
postgresql.conf file?).
Does your PG 14 target pg_hba.conf file allow this kind of connection?

Last year, I had to migrate 200+ PostgreSQL instances from 9.5 to 12, and I
discovered some configuration differences while performing this kind of
pre-upgrade test.



>
> This was done on Microsoft Windows 11.
> PostgreSQL 10 and 14 used are built using Visual Studio 2015.
> Only a small number of database clusters has this problem, around 2 of 200.
> It appears these database are used from earlier cluster upgrade from 9.4
> to 10 as well.
>
> It would be very helpful if there is any possible workaround to overcome
> this problem.
>
> I attached a sample pg_upgrade_dump_1.log reported this error.


Hope it helps,
Thomas


Re: Get the table creation DDL

2022-07-11 Thread Igor Korot
Hi,

On Mon, Jul 11, 2022 at 7:56 AM Rob Sargent  wrote:
>
> On 7/11/22 06:31, Mladen Gogala wrote:
> > On 7/10/22 22:52, Rob Sargent wrote:
> >> Are you alone or on a team?

No, this is my personal project.

Thank you.

> >
> > What are your pronouns?
> >
> This did make me chuckle, but no I am just asking whether or not the OP
> is currently part of a team.
>




Re: Get the table creation DDL

2022-07-11 Thread Rob Sargent



> On Jul 11, 2022, at 6:46 PM, Igor Korot  wrote:
> 
> Hi,
> 
>> On Mon, Jul 11, 2022 at 7:56 AM Rob Sargent  wrote:
>> 
>>> On 7/11/22 06:31, Mladen Gogala wrote:
>>> On 7/10/22 22:52, Rob Sargent wrote:
 Are you alone or on a team?
> 
> No, this is my personal project.
> 
> Thank you.
> 

Do whatever works for you.

>>> 
>>> What are your pronouns?
>>> 
>> This did make me chuckle, but no I am just asking whether or not the OP
>> is currently part of a team.
>>