Debugging features needed

2021-11-05 Thread Boboc Cristi
Hello!
    I need a feature that would allow me to debug "live" the work of an 
application together with a PostgreSQL database.

I think that if I were able to define a session variable that is visible in 
pg_stat_activity (or in some other system view, table, or function), I would be 
more than happy.
Well, I have described what I see as a partial solution to my problem. The 
problem itself is: how to track a specific set of actions at the database level 
for a given application user in the context of session pooling (so the user's 
activity can jump from one session of the pool to another).
If I were able to set a value on a custom variable, then the application would 
simply need to set that value each time it uses a session, but I definitely 
understand that other mechanisms might help solve the problem (I just do not 
know of any).
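For illustration, roughly the kind of per-session marker I have in mind (a 
custom dotted variable; the names here are only examples), with the limitation 
that such a value is readable only from inside the same session, not from 
pg_stat_activity:

    SET myapp.request_id = 'req-12345';           -- set by the app each time it takes a session from the pool
    SELECT current_setting('myapp.request_id');   -- readable, but only from within the same session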
I think that I could implement a logging mechanism at the database level 
instead (and I have some ideas about how to do that), but searching logs is not 
quite live debugging.
Any help would be highly appreciated, thank you kindly.
Best regards, Cristi Boboc

Re: Debugging features needed

2021-11-05 Thread Boboc Cristi
    Thank you, Sir!
It is, indeed, a good idea!
Best regards, Cristi Boboc 

On Friday, November 5, 2021, 01:41:52 PM GMT+2, Vijaykumar Jain wrote:

On Fri, Nov 5, 2021, 4:58 PM Boboc Cristi wrote:

Hello!
    I need a feature that would allow me to debug "live" the work of an 
application together with a PostgreSQL database.

I think that if I were able to define a session variable that is visible in 
pg_stat_activity (or in some other system view, table, or function), I would be 
more than happy.
Well, I have described what I see as a partial solution to my problem. The 
problem itself is: how to track a specific set of actions at the database level 
for a given application user in the context of session pooling (so the user's 
activity can jump from one session of the pool to another).
If I were able to set a value on a custom variable, then the application would 
simply need to set that value each time it uses a session, but I definitely 
understand that other mechanisms might help solve the problem (I just do not 
know of any).

I had been trying something similar while playing with tracing. I was wondering 
whether the trace id or request id could make its way into the query logs to 
help with debugging. The only thing I could find was to play around with the 
application_name parameter, which can be set for each session to the trace id 
of my downstream, and to enable logging of the application name via 
log_line_prefix.
I'll be interested to know as well if there are any other parameters.
https://www.enterprisedb.com/blog/getting-most-out-applicationname  
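A rough sketch of that setup, just to illustrate (application_name and 
log_line_prefix are the real parameter names; the values are only examples):

    -- postgresql.conf: include the application name in every log line
    -- log_line_prefix = '%m [%p] app=%a '

    -- per request, set by the application when it picks up a pooled connection
    SET application_name = 'checkout-trace-4711';

    -- live view from another session
    SELECT pid, application_name, state, query
      FROM pg_stat_activity
     WHERE application_name = 'checkout-trace-4711';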

Re: Debugging features needed

2021-11-05 Thread Boboc Cristi
    Hi.
    If I go the logging route, then I would use the set_config function. It is 
rather thinly documented, but we can add many pairs of (name, value), like 
this: set_config('myvar.varname', 'my value here', false), 
set_config('abc.anyname', 'any value', false).
The important part is the dot inside the first argument of the function.
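For example, a full round trip might look like this (the variable names are 
arbitrary; the third argument false keeps the value for the whole session 
rather than only the current transaction):

    SELECT set_config('myvar.varname', 'my value here', false);
    SELECT current_setting('myvar.varname');        -- returns 'my value here'
    SELECT current_setting('myvar.varname', true);  -- returns NULL instead of an error when the variable is not set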
But I am looking for a feature that can be used live, while inspecting the 
running queries. Thank you for the tip; indeed, application_name seems to offer 
what I was searching for. Thank you also to all the people who told me about it.


Setting context variables in conjunction with row-level security helped me 
12 years ago (yes, as you can imagine, it was an Oracle database) to "force" 
users to perform the urgent tasks assigned to them (all they could see were the 
urgent tasks, and only after solving those could they see the other tasks) in 
the ERP software we were using at that time.
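In PostgreSQL terms, a rough sketch of the same idea might combine such a 
session variable with a row-level security policy (the table and variable names 
below are invented just for illustration):

    -- hypothetical tasks table; while the flag is set, only urgent tasks are visible
    ALTER TABLE tasks ENABLE ROW LEVEL SECURITY;

    CREATE POLICY urgent_only ON tasks
        USING (is_urgent
               OR current_setting('myapp.urgent_only', true) IS DISTINCT FROM 'on');

    -- the application sets the flag per session
    SELECT set_config('myapp.urgent_only', 'on', false);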

Relying on SQL comments is also a great solution, but I find it a little hard 
to manage precisely, which is why I have not considered it until now. More than 
that, I am not a fan of SQL statements embedded in the application, because the 
application does not offer a kind of SELECT statement that would let us inspect 
its metadata. (Yes, it may have reflection, but reflection is much less 
powerful than SELECT is at the database level.) There are other reasons as 
well, but this one is the most important for me and for my current job.
Best regards, Cristi Boboc 

On Friday, November 5, 2021, 04:16:23 PM GMT+2, Michael Lewis wrote:

For my purposes, application_name isn't long enough, so we put a comment at the 
start of every SQL statement containing a unique ID generated in the 
application. This ensures that we can tell one connection apart from another 
even when both come from the same feature/action (application_name is set to 
this), even with connection pooling. Before, different requests reused the same 
pid, of course, and if they happened to have the same application_name, we 
didn't know it was a new process on the app side.
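A rough sketch of the pattern (the comment format, table, and ID are just 
examples); because the comment travels with the statement, it shows up both in 
pg_stat_activity.query and in the server log:

    /* request_id=0f9c3a7e feature=checkout */
    SELECT * FROM orders WHERE id = 42;

    -- from a monitoring session
    SELECT pid, application_name, query
      FROM pg_stat_activity
     WHERE query LIKE '%request_id=0f9c3a7e%';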
We also log that same request ID to New Relic, and when an app process needs to 
use multiple databases, it ties those together when researching events after 
the fact as well.
I would be interested to hear what others are doing.  

Re: Debugging features needed

2021-11-05 Thread Boboc Cristi
    You are right, it is too much work to do. For the moment, application_name 
will suffice!

Best regards, Cristi Boboc 

On Friday, November 5, 2021, 06:15:53 PM GMT+2, Vijaykumar Jain wrote:

On Fri, Nov 5, 2021, 7:46 PM Michael Lewis wrote:

For my purposes, application_name isn't long enough, so we put a comment at the 
start of every SQL statement containing a unique ID generated in the 
application. This ensures that we can tell one connection apart from another 
even when both come from the same feature/action (application_name is set to 
this), even with connection pooling. Before, different requests reused the same 
pid, of course, and if they happened to have the same application_name, we 
didn't know it was a new process on the app side.


While researching options, I did see crunchy-proxy using annotations to split 
reads and writes, but I was not sure whether any of these options would fit 
well with multiple tools, drivers, etc.
https://github.com/CrunchyData/crunchy-proxy/blob/master/docs/crunchy-proxy-user-guide.asciidoc#annotations

The other option was https://github.com/wgliang/pgproxy

or writing a custom plug-in for the envoy proxy filter for PostgreSQL:
https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/other_protocols/postgres
So basically it meant adding one more layer on top of the connection-pooling 
layer we already have via pgbouncer. The biggest concern is that when we use 
SSL, these session parameters cannot be read unless we terminate SSL at the 
proxy and then decode, so I gave up :)