How can I retrieve attribute category with attribute Oid in C function?

2018-05-14 Thread a
Hi, I am writing something that needs to identify the attribute category 
inside a C function. I need a fast way to access it, but the 
information is not stored in the FormData_pg_attribute structure.


Can anyone help? Thanks a lot!!!


Shore

Re: How can I retrieve attribute category with attribute Oid in C function?

2018-05-14 Thread Laurenz Albe
a wrote:
> Hi I am writing something that would need to identify the attribute category 
> inside a C function,
> now I may need a way of fast access it, but the information is not stored in 
> FormData_pg_attribute structure.

Are you talking about pg_type.typcategory?

That is accessible from FormData_pg_type.
You'd have to look up atttypid in the TYPEOID cache.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com
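
The TYPEOID syscache lookup described above looks roughly like this (a sketch against PostgreSQL's internal C API; it only compiles as part of a server extension, and the wrapper function name is made up):

```c
#include "postgres.h"
#include "access/htup_details.h"
#include "catalog/pg_type.h"
#include "utils/syscache.h"

/* Return pg_type.typcategory for a type OID (e.g. an attribute's atttypid). */
static char
get_type_category(Oid atttypid)
{
	HeapTuple	tup;
	char		category;

	tup = SearchSysCache1(TYPEOID, ObjectIdGetDatum(atttypid));
	if (!HeapTupleIsValid(tup))
		elog(ERROR, "cache lookup failed for type %u", atttypid);

	category = ((Form_pg_type) GETSTRUCT(tup))->typcategory;

	/* Syscache entries are reference-counted: release when done. */
	ReleaseSysCache(tup);

	return category;
}
```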



Re: How can I retrieve attribute category with attribute Oid in C function?

2018-05-14 Thread a
hey thank you!!


could you provide some more detailed information?


What function should I use to get the FormData_pg_type structure?


thanks a lot





Re: How can I retrieve attribute category with attribute Oid in C function?

2018-05-14 Thread a
BTW, I found another function called "lookup_type_cache", which returns 
a pointer to a "TypeCacheEntry" structure.


The structure contains "TypeCacheEntry.typtype", which may also help to identify 
the type.


But I'm concerned about possible memory leaks and allocation. Is it safe 
to just declare a pointer and call this function?





Re: How can I retrieve attribute category with attribute Oid in C function?

2018-05-14 Thread Laurenz Albe
a wrote:

> could you provide some more detail information??
> 
> What function should I use to get FormData_pg_type structure??
> 
> BTW, I searched another function called "lookup_type_cache", which will 
> return a pointer of structure of "TypeCacheEntry".
> 
> The structure contains "TypeCacheEntry.typtype" that may also help to
> identify the type.
> 
> But I'm concerning about possible memory leak and allocation stuff. Is it 
> safe to just declare a pointer and call this function??

In "logicalrep_write_typ" in src/backend/replication/logical/proto.c
you can find an example of how to get the pg_type values from the OID.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com
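
Regarding the memory question upthread: lookup_type_cache() returns a pointer into a backend-lifetime cache, so simply declaring a pointer and calling it is fine, and nothing needs to be freed. A sketch (internal API from utils/typcache.h; the wrapper name is made up):

```c
#include "postgres.h"
#include "utils/typcache.h"

static char
get_typtype(Oid typid)
{
	/* No flags needed when only the basic pg_type fields are wanted. */
	TypeCacheEntry *tce = lookup_type_cache(typid, 0);

	/* Do NOT pfree tce: it points into cache memory owned by the backend. */
	return tce->typtype;	/* 'b' base, 'c' composite, 'd' domain, 'e' enum, ... */
}
```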



Recommended way to copy database files on Windows OS (to perform file system level backup)

2018-05-14 Thread Yashwanth Govinda Setty
Hi all,

We are facing this problem while performing file-system-level backup of 
database files:
Each database forms a directory inside the base directory, consisting of 
files representing the tables. When some tables are dropped during the backup, 
we get an error while copying, since the files no longer exist.

So we would like to know how you recommend copying PostgreSQL database files on 
Windows to perform file-system-level backups.
(For example, the recommended way on Linux is to use the tar format.)

Thanks,
Yashwanth

***Legal Disclaimer***
"This communication may contain confidential and privileged material for the
sole use of the intended recipient. Any unauthorized review, use or distribution
by others is strictly prohibited. If you have received the message by mistake,
please advise the sender by reply email and delete the message. Thank you."
**


Re: Recommended way to copy database files on Windows OS (to perform file system level backup)

2018-05-14 Thread James Keener
A bit of pedanticism:

> So we would like to know how you recommend copying PostgreSQL database
> files in Windows OS to perform file system level backups.
>
> (For Example – The recommended way in Linux is to use tar format.)


That is not what a file-system-level backup is, and not what tar does at
all. Tar basically creates a file that contains other files, and would be
subject to the same issues a copy would be.

A file-system-level backup is just that, a backup done by the file system
itself, usually via an atomic snapshot. LVM, ZFS, and Btrfs are all capable
of doing this on Linux and *BSD. The filesystem stores the current
state of the filesystem somewhere in an atomic manner, i.e. the state is
captured at a single instant.

I think NTFS can do volume shadow copies, but I haven't used Windows in
perhaps a decade and have never done so as an admin or professionally.

Jim



Re: Recommended way to copy database files on Windows OS (to perform file system level backup)

2018-05-14 Thread Allan Kamau

An easy way to perform backup of your data is to use PostgreSQL's very own
pg_dump utility.


Re: Recommended way to copy database files on Windows OS (to perform file system level backup)

2018-05-14 Thread Christoph Moench-Tegeder
## Yashwanth Govinda Setty (ygovindase...@commvault.com):

> We are facing this problem while performing file system level backup of 
> database files:
> As each database will form a directory inside Base directory which consists 
> of files representing the tables, when some tables are dropped during backup, 
> We get error while copying since the files do not exist anymore.

This looks like you read only the first sentence of the relevant
documentation:
https://www.postgresql.org/docs/current/static/backup-file.html

The "recommended way" is not "use tar", but to heed both restrictions
mentioned there. Especially, if you want to do a naive copy of the files,
the database has to be shut down.
If shutting down the database is not an option (most cases), your
option is a "base backup" - there's pg_basebackup to help you with
that, and you can do that "the hard way" with the low level API.
Both approaches are described here:
https://www.postgresql.org/docs/current/static/continuous-archiving.html#BACKUP-BASE-BACKUP

This is mostly independent from the OS - but you have to make sure to
follow the process (shutting down the database beforehand, or do the
full base backup) - else the database will be corrupted after restore
and may even fail to start.
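
For example, a self-contained base backup can be taken with pg_basebackup (a sketch; host, user, and target directory are placeholders, and the same invocation works on Windows):

```
pg_basebackup -h dbhost -U replication_user -D C:\backup\pg_base -F tar -z -X stream -P
```

Here -F tar with -z produces compressed tar output, -X stream includes the WAL needed for a consistent restore, and -P shows progress.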

Regards,
Christoph

-- 
Spare Space.



Rapid disk usage spikes when updating large tables with GIN indexes

2018-05-14 Thread Jonathan Marks
Hello!

We have a mid-sized database on RDS running 10.1 (32 cores, 240 GB RAM, 5TB 
total disk space, 20k PIOPS) with several large (100GB+, tens of millions of 
rows) tables that use GIN indexes for full-text search. We at times need to 
index very large (hundreds of pages) documents and as a result our tables have 
a mix of small (tens of tokens) to very large (hundreds of thousands of tokens 
near to the tsvector 1MB limit). All our GIN indexes have fastupdate turned off 
— we found that turning fastupdate on led to significant blocking and that we 
get better average performance with it turned off. We’ve put a lot of effort 
into tuning our database over the last several years to the point where we have 
acceptable read and write performance for these tables. 

One recurring, and predictable, issue that we have experienced regularly for 
multiple years is that inserting or updating rows in any table with GIN indexes 
results in extremely large drops in free disk space — i.e. inserting 10k rows 
with a total size of 10GB can result in the temporary loss of several hundred 
gigabytes of free disk space over 2-3 hours (and it could be more — we try to 
keep a 10-15% buffer of free disk space so that often represents almost all 
available disk space). Once we stop the operation, free disk space rapidly 
recovers, which makes us believe that this occurs due to logs, or some kind of 
temporary table. Our work_mem and maintenance_work_mem settings are pretty 
large (12GB and 62GB, respectively). The database’s size on disk scarcely 
budges during this process.

Unfortunately, we’re on RDS, so we’re unable to ssh directly into the instance 
to see what files are so large, and none of the logs we can see (nor the wal 
logs) are large enough to explain this process. Any suggestions about where to 
look to see the cause of this problem (or about any settings we can tune or 
changes we could make to stop it) would be greatly appreciated.

Thank you!


Re: Recommended way to copy database files on Windows OS (to perform file system level backup)

2018-05-14 Thread Andreas Kretschmer

>An easy way to perform backup of your data is to use PostgreSQL's very
>own
>pg_dump utility.

That's not a file-level backup, but a logical backup. Maybe he wants to build a 
standby for streaming replication; for that you need a different kind of backup. 
See the answer from Christoph.

Andreas

-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Rapid disk usage spikes when updating large tables with GIN indexes

2018-05-14 Thread Tom Lane
Jonathan Marks  writes:
> One recurring, and predictable, issue that we have experienced regularly for 
> multiple years is that inserting or updating rows in any table with GIN 
> indexes results in extremely large drops in free disk space — i.e. inserting 
> 10k rows with a total size of 10GB can result in the temporary loss of 
> several hundred gigabytes of free disk space over 2-3 hours (and it could be 
> more — we try to keep a 10-15% buffer of free disk space so that often 
> represents almost all available disk space). Once we stop the operation, free 
> disk space rapidly recovers, which makes us believe that this occurs due to 
> logs, or some kind of temporary table. Our work_mem and maintenance_work_mem 
> settings are pretty large (12GB and 62GB, respectively). The database’s size 
> on disk scarcely budges during this process.

I'm not following exactly what you mean by "the database’s size on
disk scarcely budges" --- how does that square with the free disk space
dropping?  (IOW, what are you measuring?)

If you're not including WAL space in the "database size", then perhaps
a plausible theory is that the space consumption comes from a burst of
WAL output, and that the space is freed after the WAL has been dumped
off to secondary servers or archived or whatever you do with it.
If you do none of those things, it'd reduce to being an issue of how
long till the next checkpoint.

Assuming this theory is accurate, probably your use of fastupdate = off
is a contributing factor, as that causes a lot more "churn" in the
internals of the GIN indexes during updates, and correspondingly more
WAL output to log the changes.  But you probably don't want to revisit
that decision if you're happy with performance otherwise.

If you are archiving or streaming WAL, then probably what you want to
look at is getting rid of bottlenecks in that, so that it can keep up
with these WAL-generation spikes better.

If you're not, the only suggestion I can think of is to try twiddling
your checkpoint parameters to alleviate the space spikes.  Reducing
the checkpoint interval would do that, but you have to be very wary
of going too far; a short checkpoint interval results in more full-page
images being emitted to WAL and thus can actually increase your WAL
space consumption.  Depending on what parameters you're using now,
maybe even an increase would be better.
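
For illustration, the knobs in question live in postgresql.conf (values here are placeholders to experiment with, not recommendations):

```
checkpoint_timeout = 5min          # shorter interval = less WAL retained between
                                   # checkpoints, but more full-page images written
max_wal_size = 4GB                 # a checkpoint is also triggered when this much
                                   # WAL accumulates
checkpoint_completion_target = 0.9
```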

regards, tom lane



Re: Rapid disk usage spikes when updating large tables with GIN indexes

2018-05-14 Thread Tom Lane
[ please keep the list cc'd ]

Jonathan Marks  writes:
> Thanks for your quick reply. Here’s a bit more information:
> 1) to measure the “size of the database” we run something like `select 
> datname, pg_size_pretty(pg_database_size(datname)) from pg_database;` I’m not 
> sure if this includes WAL size.
> 2) I’ve tried measuring WAL size with `select sum(size) from pg_ls_waldir();` 
> — this also doesn’t budge.
> 3) Our current checkpoint_timeout is 600s with a checkpoint_completion_target 
> of 0.9 — what does that suggest?

Hmph.  Your WAL-size query seems on point, and that pretty much destroys
my idea about a WAL emission spike.

pg_database_size() should include all regular and temporary tables/indexes
in the named DB.  It doesn't include WAL (but we've eliminated that), nor
cluster-wide tables such as pg_database (but those seem pretty unlikely
to be at issue), nor non-relation temporary files such as sort/hash temp
space.  At this point I think we have to focus our attention on what might
be creating large temp files.  I do not see anything in the GIN index code
that could do that, especially not if you have fastupdate off.  I wonder
whether there is something about the particular bulk-insertion queries
you're using that could result in large temp files --- which'd make the
apparent correlation with GIN index use a mirage, but we're running out
of other ideas.  You could try enabling log_temp_files to see if there's
anything to that.
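
A minimal way to enable that, assuming sufficient privileges (on RDS the same parameter is normally set through a parameter group instead):

```
-- 0 means log every temporary file; a positive value is a size threshold in kB
ALTER SYSTEM SET log_temp_files = 0;
SELECT pg_reload_conf();
```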

In the grasping-at-straws department: are you quite sure that the extra
disk space consumption is PG's to begin with, rather than something
outside the database entirely?

regards, tom lane



Re: Rapid disk usage spikes when updating large tables with GIN indexes

2018-05-14 Thread Jonathan Marks
We’ll turn on log_temp_files and get back to you to see if that’s the cause. 
Re: the exact queries — these are just normal INSERTs and UPDATEs. This occurs 
as part of normal database operations — i.e., we are processing 10% of a table 
and marking changes to a particular row, or happen to be inserting 5-10% of the 
table volume with new rows. Whenever we bulk load we have to drop the indexes 
because the disk space loss just isn’t tenable.

Re: extra disk space consumption not within PG — the AWS folks can’t tell me 
what the problem is because it’s all internal to the PG part of the instance 
they can’t access. Doesn’t mean your last suggestion can’t be the case but 
makes it slightly less likely.

Any chance that GIN indexes are double-logging? I.e. with fastupdate off they 
are still trying to keep track of the changes in the pending list or something?

Our thought has been temp files for a while, but we’re not sure what we should 
do if that turns out to be the case.





10.4 upgrade, function markings, and template0

2018-05-14 Thread Dominic Jones
Good afternoon,

The PostgreSQL 10.4 upgrade involves changes to some function markings (see 
release notes, E.1.2, second and third bullet points for specifics). One way to 
make these changes is to use `ALTER FUNCTION` to change the functions in your 
existing databases. While this was mainly straightforward, I'm unclear on 
whether the `template0` database must be changed in this manner or if it is 
automatically updated when the upgrade is applied. The documentation indicates 
that in general you shouldn't manually change the `template0` database.


-- 
Dominic Jones 



Re: 10.4 upgrade, function markings, and template0

2018-05-14 Thread David G. Johnston
On Mon, May 14, 2018 at 1:42 PM, Dominic Jones  wrote:

> Good afternoon,
>
> The PostgreSQL 10.4 upgrade involves changes to some function markings
> (see release notes, E.1.2, second and third bullet points for specifics).
> One way to make these changes is to use `ALTER FUNCTION` to change the
> functions in your existing databases. While this was mainly
> straightforward, I'm unclear on whether the `template0` database must be
> changed in this manner or if it is automatically updated when the upgrade
> is applied. The documentation indicates that in general you shouldn't
> manually change the `template0` database.
>

If you ever run CREATE DATABASE ... TEMPLATE template0 you will not get the
correct markings unless you've updated template0 (without TEMPLATE template0
you pull from template1, so you probably want to update that as well).

Related question - the post-installation instructions that are part of
the news releases seem like something that should be part of the release
notes...but they are not.

David J.


Re: 10.4 upgrade, function markings, and template0

2018-05-14 Thread Tom Lane
Dominic Jones  writes:
> The PostgreSQL 10.4 upgrade involves changes to some function markings (see 
> release notes, E.1.2, second and third bullet points for specifics). One way 
> to make these changes is to use `ALTER FUNCTION` to change the functions in 
> your existing databases. While this was mainly straightforward, I'm unclear 
> on whether the `template0` database must be changed in this manner or if it 
> is automatically updated when the upgrade is applied. The documentation 
> indicates that in general you shouldn't manually change the `template0` 
> database.

Yes, you'd need to fix it in template0 as well, or you risk
subsequently-created databases not having the fix.  See previous
minor releases where we've given more painstaking detail about
applying catalog corrections, e.g. 9.6.4:

https://www.postgresql.org/docs/current/static/release-9-6-4.html

I didn't bother with spelling it all out in full detail this time,
which maybe was a mistake, but I felt that probably most users
wouldn't need to bother with these changes at all (unlike the case
where a catalog correction is security-related).

regards, tom lane
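
For reference, the procedure in the linked 9.6.4 notes boils down to the following sketch (the ALTER FUNCTION statements to run in each database are the ones given in the 10.4 release notes, not shown here):

```
-- template0 normally rejects connections; open it temporarily as superuser
UPDATE pg_database SET datallowconn = true WHERE datname = 'template0';
-- \c template0 ... apply the ALTER FUNCTION corrections there ... then re-lock it:
UPDATE pg_database SET datallowconn = false WHERE datname = 'template0';
```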



Re: 10.4 upgrade, function markings, and template0

2018-05-14 Thread Adrian Klaver

On 05/14/2018 02:02 PM, Tom Lane wrote:
> I didn't bother with spelling it all out in full detail this time,
> which maybe was a mistake, but I felt that probably most users
> wouldn't need to bother with these changes at all (unlike the case
> where a catalog correction is security-related).

Well, what is nice about the news release is that you can cut and paste the 
entire list of commands and do the updates en masse.




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: 10.4 upgrade, function markings, and template0

2018-05-14 Thread Tom Lane
Adrian Klaver  writes:
> On 05/14/2018 02:02 PM, Tom Lane wrote:
>> I didn't bother with spelling it all out in full detail this time,
>> which maybe was a mistake, but I felt that probably most users
>> wouldn't need to bother with these changes at all (unlike the case
>> where a catalog correction is security-related).

> Well what is nice about the news release is you can cut and past the 
> entire list of commands and do the updates en masse.

It'd be nice to have some more-automated way of doing this type of
correction.  Ordinary scripting doesn't look very promising, because
I don't see an easy way to deal with the need to connect to every
database in the cluster; that seems to depend on a lot of local
characteristics about usernames and authentication.

Maybe it'd be worth building some sort of infrastructure that would
allow this to be done at a lower level.  It's not hard to imagine
an autovacuum-like or bgworker-based thingy that could run around
and apply a given SQL script in every database, bypassing the usual
worries about authentication and connections-disabled databases.
That seems like a lot of work for a need that only comes up once in
awhile, but perhaps it'd have more applications than just catalog
corrections.

regards, tom lane



Re: 10.4 upgrade, function markings, and template0

2018-05-14 Thread Adrian Klaver

On 05/14/2018 02:22 PM, Tom Lane wrote:
> Maybe it'd be worth building some sort of infrastructure that would
> allow this to be done at a lower level.  It's not hard to imagine
> an autovacuum-like or bgworker-based thingy that could run around
> and apply a given SQL script in every database, bypassing the usual
> worries about authentication and connections-disabled databases.
> That seems like a lot of work for a need that only comes up once in
> awhile, but perhaps it'd have more applications than just catalog
> corrections.

That would be helpful, given that a major version has a 5-year supported 
life span. I can see folks deciding not to do the manual work at the 
minor release because at that time it does not apply and the work does 
not seem worth it. Then at some point in the future conditions change 
and they wonder why things are not working the way they should. I know I 
would be grateful.




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: 10.4 upgrade, function markings, and template0

2018-05-14 Thread Dominic Jones
On Mon, 14 May 2018 17:02:25 -0400
Tom Lane  wrote:

> Yes, you'd need to fix it in template0 as well, or you risk
> subsequently-created databases not having the fix.  See previous
> minor releases where we've given more painstaking detail about
> applying catalog corrections, e.g. 9.6.4:
> 
> https://www.postgresql.org/docs/current/static/release-9-6-4.html

Yes, the link does address the issue and answer the question. It looks like I 
didn't see the previous upgrade's discussion because the change to which it was 
tied didn't appear to be relevant to the database deployment involved.


-- 
Dominic Jones 



Re: Strange error in Windows 10 Pro

2018-05-14 Thread Dale Seaburg

Just to put a finishing note to this thread:

I am placing the blame on the mfg of the PC - it was supposed to be NEW, 
but had a Refurbished sticker on the bottom of the case.  Now, whether 
it was the PC mfg's or Microsoft's fault, I really couldn't care less - the 
PC became useless for installing PostgreSQL - versions 8.4.5, 9.6.7 or 
10.3.  The install ALWAYS failed at the very last step - the building of 
the internal folders, etc. of the *data* folder.


So, in frustration, I have chosen to install 8.4.5 on an older PC 
running XP.  It just works!  I don't like it, but I am constrained by 
the owner's wishes.


Thanks, for offering each of your individual helps.

Regards,
Dale Seaburg

On 4/21/2018 3:08 PM, Dale Seaburg wrote:


Thanks Adrian for the suggestion of running the installer with Admin 
rights.  Unfortunately, I get the same results.  It appears that all 
of the folders within the C:\Program Files\PostgreSQL\9.6 path are 
created and populated, BUT when the items in the *data* folder are 
to be created or copied into it, it leaves an error message as noted 
previously. The *data* folder is empty.


It's almost as if the PC is missing a critical .dll needed in the 
*data* folder filling function (my guess).


Again, I am at a loss as to what to do.

Dale


On 4/20/2018 11:13 PM, Dale Seaburg wrote:
Oops, my mistake.  I'll let this serve the list with what I've tried 
so far.  Thanks, Adrian for the reminder.


I hope tomorrow to visit the customer and try the Admin user method 
of installing.


Dale


On 4/20/2018 11:03 PM, Adrian Klaver wrote:

On 04/20/2018 07:52 PM, Dale Seaburg wrote:

Please also reply to list.
CCing list to put it in front of more eyes.


Thanks, Adrian, for suggestion(s).


On 4/20/2018 9:35 PM, Adrian Klaver wrote:

On 04/20/2018 07:16 PM, Dale Seaburg wrote:
I am attempting to install a fresh copy of 
postgresql-9.6.8-2-windows-x86 on a new DELL PC with Windows 10 
Pro.  It 


This was downloaded from where?
downloaded from https://www.postgresql.org/download/windows/, 
selecting to use the installer pointed to near the beginning of 
that page.  The actual website that contained the installer file 
was: 
https://www.enterprisedb.com/thank-you-downloading-postgresql?anid=209611



gets near the end of the install when the message says it is 
attempting to start the database server.   There's a long 
pause, followed by an error message: "Failed to load SQL modules 
into the database cluster". Using File Explorer, I notice the 
9.6\base\ folder is empty?  Has anyone else seen this before?  I 
have no clue where to look for the issue.


You are running as Admin user?
I can't say that I was.  Will check this next time (maybe tomorrow) 
when I am at the customer's site.  I didn't even think about 
right-clicking on the installer and selecting run-as-admin.  I know 
I did not deliberately use Admin user on the test PC mentioned 
below, and it installed with no problems.





As a double-check on a different PC with Windows 10 Pro, I get no 
error message, and the database is installed correctly.


Dale Seaburg



Query ID Values

2018-05-14 Thread tango ward
Good Day,

I need to run an SQL query to get the program_id and department_id of a
specific course for each student. I am thinking of running an IF condition
to check if the course name is in program and get its ID, but I don't know
yet where to use the IF condition in the query.

sample code:

for row in cur_t:
 course = row['course']


 cur_p.execute("""SELECT id from program where name='$[course]']
   WHERE department_id=?? """)


Problem is I have 3 department IDs (Senior High, Vocational, Undergraduate)
and each ID has multiple programs/courses. Each program/course is
connected to the department table via department_id.

May I ask for advice on how to approach this?


Thanks,
J


Re: Query ID Values

2018-05-14 Thread David G. Johnston
On Monday, May 14, 2018, tango ward  wrote:
>
> May I ask an advice on how to approach this?
>

I can't make heads or tails of your description, but there is no IF in
SQL.  You may get some mileage out of simple joins.

David J.
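A join, as suggested above, can replace the IF: one query returns both IDs for a course name. A minimal sketch, using the stdlib sqlite3 module purely so it runs self-contained; the stand-in schema and names are illustrative, and with psycopg2 the placeholders would be %s instead of ?:

```python
import sqlite3

# Illustrative stand-in schema for the program/department tables in the thread.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE program (id INTEGER PRIMARY KEY, name TEXT,
                          department_id INTEGER REFERENCES department(id));
    INSERT INTO department VALUES (1, 'Senior High'), (2, 'Vocational');
    INSERT INTO program VALUES (10, 'General Academic Strand', 1),
                               (20, 'Welding', 2);
""")

# Join program to department and filter on the course name: no IF needed,
# both program_id and department_id come back in one row.
course = 'General Academic Strand'
cur.execute("""
    SELECT p.id, p.department_id
    FROM program p
    JOIN department d ON d.id = p.department_id
    WHERE p.name = ?
""", (course,))
print(cur.fetchone())  # (10, 1)
```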


Re: 10.4 upgrade, function markings, and template0

2018-05-14 Thread Michael Paquier
On Mon, May 14, 2018 at 05:22:39PM -0400, Tom Lane wrote:
> Maybe it'd be worth building some sort of infrastructure that would
> allow this to be done at a lower level.  It's not hard to imagine
> an autovacuum-like or bgworker-based thingy that could run around
> and apply a given SQL script in every database, bypassing the usual
> worries about authentication and connections-disabled databases.

A portion of the infrastructure is already available for background
workers which can use BGWORKER_BYPASS_ALLOWCONN since Postgres 11 to
enforce connections to databases even if an administrator disables
connections to it.
--
Michael


signature.asc
Description: PGP signature


Re: Query ID Values

2018-05-14 Thread tango ward
Hi,

Yes sorry, here's the tables:

[cur_t DB] [student_profile table]

    Column     |          Type          | Collation | Nullable |        Default
---------------+------------------------+-----------+----------+------------------------
 studentnumber | character varying(45)  |           | not null | ''::character varying
 firstname     | character varying(60)  |           |          |
 middlename    | character varying(60)  |           |          |
 lastname      | character varying(60)  |           |          |
 course        | character varying(150) |           | not null | ''::character varying


[cur_p DB] [profile table]

    Column     |          Type          | Collation | Nullable |        Default
---------------+------------------------+-----------+----------+------------------------
 studentnumber | character varying(45)  |           | not null | ''::character varying
 firstname     | character varying(60)  |           |          |
 middlename    | character varying(60)  |           |          |
 lastname      | character varying(60)  |           |          |
 program_id    | integer                |           | not null |
 department_id | integer                |           | not null |
 campus_id     | integer                |           | not null |



So I am migrating the data from one database to another. Here, I am moving
student data from the student_profile table to the profile table.

I have already migrated the course data to another table. What I would like
to do is get the values of program_id and department_id for the profile
table. I want to check if the course exists in the profile_program table, then
get its ID. I think I can use the same logic for getting and setting the value
for the department_id column of the profile table. I am using psycopg2 to
access and move the data.


for row in cur_t:
  course = row['course']
  # Here I would like to get the value of program_id and
department_id and insert it to the said columns but I don't know how to do
it yet
  # I put ?? in department_id because I don't know how to access the 3
department IDs in this query.
  cur_p.execute(""" SELECT id from st_profile where
name='$[course]' and department_id=?? """)
  x = cur_p.fetchall()
  # This will print an error since I added department_id without
value yet but if I remove it, I will get "None"
  print x



Sorry for asking so many questions; we don't have a DBA at the moment.


Thanks,
J


On Tue, May 15, 2018 at 9:57 AM, melvin6925  wrote:

> Perhaps if you care to provide us with the structure of all tables
> involved, we could suggest a reasonable query.
>
>
>
> Sent via the Samsung Galaxy S® 6, an AT&T 4G LTE smartphone
>
>  Original message 
> From: tango ward 
> Date: 5/14/18 21:08 (GMT-05:00)
> To: "pgsql-generallists.postgresql.org"  postgresql.org>
> Subject: Query ID Values


Re: Query ID Values

2018-05-14 Thread tango ward
for row in cur_t:
course = row['course']
cur_p.execute("""
  SELECT id
  FROM education_program
  WHERE name=%s
  AND department_id
  IN (SELECT id FROM profile_department WHERE
school_id=1)
  """, (course,))
x = cur_p.fetchall()
print x

So far I can see the program IDs, but I am still getting an empty list. Also
the program_id seems to be in a nested list. Why is that?



Re: RPM packages 10.4 for rhel7 x86_86 are build as f25.x86_64.rpm

2018-05-14 Thread cwlists
Hi,

I see that the f25 packages for 10.4 have now been rebuilt as rhel7 ones.
Thanks.

On Mon, May 14, 2018 at 6:22 AM cwlists  wrote:

> Hi,
>
> Maybe this is not an issue, but first time I see it.
>
> Only for 10.4 and rhel7 x86_64 I can see this issue (*). They are
> consistent for earlier releases, as well as for 10.4 and rhel 6 x86_64, and
> also for the release 9.6.9 all rpms are consistent.
>
> (*) https://yum.postgresql.org/10/redhat/rhel-7-x86_64/
>
>
> BR,
> Christian
>


Re: Query ID Values

2018-05-14 Thread Adrian Klaver

On 05/14/2018 08:30 PM, tango ward wrote:

for row in cur_t:
    course = row['course']
    cur_p.execute("""
        SELECT id
        FROM education_program
        WHERE name=%s
        AND department_id
        IN (SELECT id FROM profile_department WHERE school_id=1)
        """, (course,))
    x = cur_p.fetchall()
    print x

So far I can see the program IDs but I am still getting empty list. Also


That would seem to indicate that the value of course is not matching any 
value in the field name for the given school_id. Maybe do:


print(course)

to see if they are valid values.



the program_id seems to be in a nested list. Why is that?


Because you are doing fetchall(). That is going to fetch a list of row 
tuples. Either iterate over that list or iterate over the cursor:


for row in cur_p:
print(row)

For more info see:
http://initd.org/psycopg/docs/cursor.html
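The fetchall() behavior described above can be seen in a few lines. A minimal sketch using the stdlib sqlite3 module so it runs without a database server; psycopg2 cursors behave the same way for fetchall() and iteration, and the table name here is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(1,), (2,)])

cur.execute("SELECT id FROM t ORDER BY id")
rows = cur.fetchall()
print(rows)  # [(1,), (2,)] -- a list of row tuples, hence the "nested" look

# Unwrap the single column, or iterate over the cursor instead of fetchall():
ids = [r[0] for r in rows]
print(ids)   # [1, 2]
```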




Re: Query ID Values

2018-05-14 Thread tango ward
Noted, Sir Adrian. The course names for the ones that are blank do not match
the ones in the profile_program table. I am writing a CASE statement
right now to verify the data, but I can't make it work:

for row in cur_t:
course = row['course']
cur_p.execute("""
  SELECT id
  FROM education_program
  WHERE name=%s,
  CASE
WHEN name='SENIOR HIGH SCHOOL GAS'
THEN name='General Academic Strand'
WHEN name='SENIOR HIGH SCHOOL HUMSS'
THEN name='Humanities and Social Sciences'
WHEN name='SENIOR HIGH SCHOOL STEM'
THEN name='Science, Technology, Engineering and
Mathematics'
  END
  AND department_id
  IN (SELECT id
  FROM profile_department
  WHERE school_id=1)
  """, [course])
x = cur_p.fetchone()
print row['firstname'], row['lastname'], course, x





Re: Query ID Values

2018-05-14 Thread Ian Zimmerman
On 2018-05-14 21:12, Adrian Klaver wrote:

> Because you are doing fetchall(). That is going to fetch a list of row
> tuples.  Either iterate over that list or iterate over the cursor:
> 
> for row in cur_p:
>   print(row)
> 
> For more info see:
> http://initd.org/psycopg/docs/cursor.html

Where does that webpage say that I can use the cursor itself for
iteration?  I can't find it there.  (OTOH it is clearly documented for
the sqlite3 library).

Until now, with psycopg2 I have done it like this:

cur.execute(stmt)
results = iter(cur.fetchone, None)
for r in results:
...

-- 
Please don't Cc: me privately on mailing lists and Usenet,
if you also post the followup to the list or newsgroup.
To reply privately _only_ on Usenet and on broken lists
which rewrite From, fetch the TXT record for no-use.mooo.com.



Re: Query ID Values

2018-05-14 Thread tango ward
I think it's this:

"

Note

cursor  objects are
iterable, so, instead of calling explicitly fetchone()
 in a loop, the
object itself can be used:

>>> cur.execute("SELECT * FROM test;")>>> for record in cur:... print 
>>> record...(1, 100, "abc'def")(2, None, 'dada')(3, 42, 'bar')

Changed in version 2.4: iterating over a named cursor
 fetches
itersize 
records at time from the backend. Previously only one record was fetched
per roundtrip, resulting in a large overhead.
"



Re: Query ID Values

2018-05-14 Thread tango ward
Fixed the CASE statement:

cur_p.execute("""
    SELECT id
    FROM education_program
    WHERE name = CASE %s
                   WHEN 'SENIOR HIGH SCHOOL GAS'
                   THEN 'General Academic Strand'
                   WHEN 'SENIOR HIGH SCHOOL HUMSS'
                   THEN 'Humanities and Social Sciences'
                   WHEN 'SENIOR HIGH SCHOOL STEM'
                   THEN 'Science, Technology, Engineering and Mathematics'
                   ELSE %s
                 END
    AND department_id
    IN (SELECT id
        FROM profile_department
        WHERE school_id=1)
    """, [course, course])



Re: Query ID Values

2018-05-14 Thread David G. Johnston
I'd bottom-post, as is the convention for these lists, but it seems
pointless now...

CASE *expression*
WHEN *value* THEN *result*
[WHEN ...]
[ELSE *result*]
END

Try that where expression is the %s.  The values and results are simple
literals.  And you compare the result of the expression to "name".

Or just do the rename in python, not sql.

David J.
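The "rename in python" option suggested above can be a plain dict lookup with a fallback, keeping the SQL a simple `WHERE name = %s`. A sketch; the mapping reuses the labels from the thread, and the function name is illustrative:

```python
# Map legacy course labels to the names stored in education_program;
# unknown labels fall through unchanged.
COURSE_RENAMES = {
    'SENIOR HIGH SCHOOL GAS': 'General Academic Strand',
    'SENIOR HIGH SCHOOL HUMSS': 'Humanities and Social Sciences',
    'SENIOR HIGH SCHOOL STEM': 'Science, Technology, Engineering and Mathematics',
}

def canonical_course(name):
    return COURSE_RENAMES.get(name, name)

print(canonical_course('SENIOR HIGH SCHOOL GAS'))  # General Academic Strand
print(canonical_course('BS Nursing'))              # BS Nursing
```

The canonical name is then passed as the single query parameter, with no CASE expression in the SQL at all.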
