Re: postgresql-10.3 on ubuntu-17.10 - how??
2018-03-21 14:02 GMT+01:00 Adrian Klaver:
> On 03/20/2018 10:52 PM, Stuart McGraw wrote:
> Looks like these posts are coming through a news group to me.
> I am Ccing the list to get the response back there.
>
>>>> Is Pgdg 10.3 even available for Ubuntu 17.10? How the heck does
>>>> one upgrade to it?
>>>
>>> 18.04 LTS (Bionic Beaver)?:
>>>
>>> http://apt.postgresql.org/pub/repos/apt/dists/bionic-pgdg/
>>
>> I tried doing a full reinstall of Ubuntu 17.10 followed by a postgresql
>> install from the bionic repo (instead of installing 10.1 per above and
>> trying to upgrade):
>>
>>   # apt-get -qy install postgresql postgresql-client postgresql-contrib \
>>       postgresql-doc pgadmin3 postgresql-server-dev-10 libpq-dev
>>   Reading package lists...
>>   Building dependency tree...
>>   Reading state information...
>>   Some packages could not be installed. This may mean that you have
>>   requested an impossible situation or if you are using the unstable
>>   distribution that some required packages have not yet been created
>>   or been moved out of Incoming.
>>   The following information may help to resolve the situation:
>>
>>   The following packages have unmet dependencies:
>>    libpq-dev : Depends: libpq5 (= 10.3-1.pgdg18.04+1) but it is not
>>                going to be installed
>>    pgadmin3 : Depends: libgcrypt20 (>= 1.8.0) but 1.7.8-2ubuntu1 is to
>>               be installed
>>               Depends: libpq5 (>= 8.4~) but it is not going to be
>>               installed
>>               Recommends: pgagent but it is not going to be installed
>>    postgresql : Depends: postgresql-10 but it is not going to be
>>                 installed
>>    postgresql-client : Depends: postgresql-client-10
>>    postgresql-contrib : Depends: postgresql-contrib-10
>>   E: Unable to correct problems, you have held broken packages.
>
> If it were me, I would simplify the above for the moment to:
>
>   apt-get install postgresql-10
>
>> Is there any reason now not to conclude that the 10.3 bionic version is
>> simply incompatible with Ubuntu 17.10 (at least without a lot more
>> package wrangling chops than I have)?
>>
>> One can install postgresql-10.1 but one cannot upgrade it to get
>> security fixes or to be able to load data dumped from another 10.3
>> database.
>>
>> Given that Ubuntu 18.04 will be out soon I guess this is pretty much
>> moot, except for a few unfortunates like me who absolutely need 10.3
>> but have no option to upgrade. I guess the lesson is that running the
>> Pgdg versions of PostgreSQL on any but the LTS versions of Ubuntu is
>> pretty risky. Live and learn. Maybe this will help someone else.

I have followed, more than once, the exact directions from the download page:

    https://www.postgresql.org/download/linux/ubuntu/

Just pretend you are running 17.10 instead of 17.04. It simply works.

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
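For reference, the setup from that download page boils down to the shell steps below. This is a sketch only: the "artful-pgdg" suite name for 17.10 is an assumption based on the repository's naming convention and would have to actually exist on apt.postgresql.org for this to work.

    # Add the PGDG repository for the running release's codename
    # ("artful" for 17.10 is assumed here, not verified).
    sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ artful-pgdg main" \
        > /etc/apt/sources.list.d/pgdg.list'

    # Import the repository signing key, refresh, and install.
    wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
    sudo apt-get update
    sudo apt-get install postgresql-10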
Re: PostgreSQL suitable?
I think that table partitioning isn't suitable yet for a large number of partitions. The same goes for a large number of partial indexes: that will be a challenge too.

On 19 Dec 2017 at 15:07, "Kellner Thiemo" wrote:
> Hi
>
> We are developing a data warehouse of which the integration layer will
> start with over 100 TB of data. There are not many entities, though we
> probably can partition, and foremost we should use inheritance for the
> lab results. I was just wondering if PostgreSQL is able to cope with
> that. In case it depends on the kind of modelling, we have not yet
> decided between classic ERD, anchor modelling and data vault.
>
> Does someone have experience with such a setup?
>
> Kind regards
>
> Thiemo
Re: PostgreSQL suitable?
I've not tested PG10. But it's not released for production yet!

On 19 Dec 2017 at 15:48, "Andreas Kretschmer" wrote:
>
> On 19.12.2017 at 15:07, Kellner Thiemo wrote:
>> Hi
>>
>> We are developing a data warehouse of which the integration layer will
>> start with over 100 TB of data. There are not many entities, though we
>> probably can partition, and foremost we should use inheritance for the
>> lab results. I was just wondering if PostgreSQL is able to cope with
>> that. In case it depends on the kind of modelling, we have not yet
>> decided between classic ERD, anchor modelling and data vault.
>>
>> Does someone have experience with such a setup?
>>
>> Kind regards
>>
>> Thiemo
>
> It depends at least on the data and the workload. PG10 contains better
> solutions for table partitioning; up to 1000 (maybe more) child tables
> aren't that big a problem.
> We have customers in that range.
>
> Regards, Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company.
> www.2ndQuadrant.com
Re: PostgreSQL suitable?
Sorry, my bad: I confused v10 with v11. But according to a discussion with Bruce Momjian, table partitioning in v10 is little more than syntactic sugar around old-fashioned table partitioning: sub-table (partition) selection is linear in complexity.

On 19 Dec 2017 at 15:55, "Alvaro Herrera" wrote:
> Vincenzo Romano wrote:
>> I've not tested PG10. But it's not released for production yet!
>
> It definitely is, for a couple of months now. 10.1 (the first bugfix
> release) has been out for over a month.
>
> --
> Álvaro Herrera                https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Re: PostgreSQL suitable?
2017-12-19 16:48 GMT+01:00 Stephen Frost:
> * Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
>> Sorry, my bad: I confused v10 with v11.
>> But according to a discussion with Bruce Momjian, table partitioning
>> in v10 is little more than syntactic sugar around old-fashioned table
>> partitioning.
>
> Well, it's a bit more than that since there's tuple-routing, but you're
> right that the partition elimination is the same as it was in earlier
> versions and based on constraint exclusion. That said, as noted in the
> email you replied to, reasonable numbers of partitions aren't too bad
> even with the planning cost; it's when you have many thousands of
> partitions that you get into cases where planning time for queries is
> really bad.

When you have to handle a 100 TB table, the number of partitions shouldn't be the 10 or 12 seen in most examples and tests (such as the sketch below). This is the same type of issue you hit with partial indexes, which is why I mentioned them earlier. The sub-table (and partial index) selection algorithm should be logarithmic or sub-linear; as long as it stays linear, you'll keep hearing about a "reasonable number of partitions". One thousand partitions for a 100 TB table would make for "manageable" 100 GB sub-tables. I could easily be wrong, but this is an area where PG needs improvements. One could maybe think about multi-level partitioning, though.

> Also as noted on this thread, PG could handle this data volume, but to
> be efficient there would be work to be done in normalization,
> aggregation, and analyzing the system to ensure you're storing and
> querying on the data efficiently.

Normalization will grow the number of tables (and later joins) and you'll still very likely end up with at least one table with a "gazillion" rows. I fear normalization, provided it's really needed, would be of little help. With tables that big I usually do "software partitioning": I make the software aware of the partition schema so it can direct the queries to the needed (sub-)tables. That way I have the sub-table selection algorithm in my language of choice, with effective supporting data structures. Of course, this is far from being the right solution.

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
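For readers who haven't seen it, here is a minimal sketch of the v10 declarative syntax being discussed (table and column names are made up for illustration):

    -- A hypothetical measurements table, range-partitioned by month.
    CREATE TABLE measurements (
        taken_at  timestamptz NOT NULL,
        device_id bigint      NOT NULL,
        value     double precision
    ) PARTITION BY RANGE (taken_at);

    CREATE TABLE measurements_2017_12 PARTITION OF measurements
        FOR VALUES FROM ('2017-12-01') TO ('2018-01-01');
    CREATE TABLE measurements_2018_01 PARTITION OF measurements
        FOR VALUES FROM ('2018-01-01') TO ('2018-02-01');

    -- Tuple routing sends this row to measurements_2018_01 automatically;
    -- at query time, v10 still eliminates partitions via constraint
    -- exclusion, which is linear in the number of partitions.
    INSERT INTO measurements VALUES ('2018-01-09 11:00+01', 42, 21.5);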
Re: PostgreSQL suitable?
Hi.

2017-12-19 17:32 GMT+01:00 Stephen Frost:
> Greetings,
>
> * Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
>> 2017-12-19 16:48 GMT+01:00 Stephen Frost:
>>> * Vincenzo Romano (vincenzo.rom...@notorand.it) wrote:
>>>> Sorry, my bad: I confused v10 with v11.
>>>> But according to a discussion with Bruce Momjian, table partitioning
>>>> in v10 is little more than syntactic sugar around old-fashioned
>>>> table partitioning.
>>>
>>> Well, it's a bit more than that since there's tuple-routing, but
>>> you're right that the partition elimination is the same as it was in
>>> earlier versions and based on constraint exclusion. That said, as
>>> noted in the email you replied to, reasonable numbers of partitions
>>> aren't too bad even with the planning cost; it's when you have many
>>> thousands of partitions that you get into cases where planning time
>>> for queries is really bad.
>>
>> When you have to handle a 100 TB table, the number of partitions
>> shouldn't be 10 or 12 as seen in most examples and tests.
>
> Probably not, but it would depend on the data and what you're doing with
> it. While there are some general rules-of-thumb, there's no one right
> answer when it comes to the size of individual partitions.

I think partitioning is done in order to make smaller tables, or to balance table size against table count. You currently have to keep a limit on the number of sub-tables in mind, and this makes partitioning less effective.

>> This is the same type of issue you hit with partial indexes (this is
>> why I mentioned them earlier).
>> The sub-table (and partial index) selection algorithm should be
>> logarithmic or sub-linear.
>
> Sure, and work is being done to improve PG in that exact area, but that
> doesn't mean it can't handle workloads like this today; you just have
> to be aware of the costs associated with today's partitions.
>
>> As long as it'll be linear, you'll hear about a "reasonable number of
>> partitions".
>> One thousand partitions for a 100 TB table would make "manageable"
>> 100 GB sub-tables.
>
> Yup, and 100 GB tables are certainly large and a bit awkward, but they
> can be managed.

Of course. But I would bet they are still considered "very large tables".

>> I could easily be wrong, but this is an area where PG needs
>> improvements.
>
> Of course, and work is being done to improve it. What's important is
> knowing that there's a cost to having more partitions when querying
> through the parent when you get to a point where you have thousands of
> partitions. That cost may be perfectly fine in some use-cases and in
> others it might not be, and instead you'd likely have to build logic
> into the application layer to address it. That's not ideal, which is
> why there's work being done to improve PG, but it's not necessarily a
> big deal either.

Dynamic SQL is another approach.

>> One could maybe think about multi-level partitioning, though.
>
> Not sure exactly what you're referring to here, but a simple
> multi-level partitioning setup with PG wouldn't actually change the
> cost for partition elimination today.

It'd be a tree of tables, pointing to logarithmic selection: this is why we love tree data structures. (A sketch of what I mean follows below.)

>>> Also as noted on this thread, PG could handle this data volume, but
>>> to be efficient there would be work to be done in normalization,
>>> aggregation, and analyzing the system to ensure you're storing and
>>> querying on the data efficiently.
>>
>> Normalization will grow the number of tables (and later joins) and
>> you'll very likely end up with at least one table with a "gazillion"
>> rows.
>
> Naturally, but that "gazillion"-row table would be much smaller for
> having the data normalized: if you don't normalize it, then the
> gazillion-row table is a huge amount of duplicated data, making the
> entire system much larger than necessary.

Partitioning is done for other reasons than de-duplicating data. The number of rows to be analyzed would still be the same, and the indexes over those columns would still be rather large.

>> I fear normalization, provided it's really needed, would provide
>> little help.
>
> I seriously doubt that's the case. Normalization might reduce that
> 100 TB down to tens of TB instead, or perhaps even smaller.

A 10x duplication factor seems very bad to me. If that's the case, then normalization would indeed provide for some improvement.
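To make the multi-level idea concrete, a sketch in v10 syntax (names are hypothetical; and note that, as Stephen says, this tree shape does not by itself change v10's linear partition-elimination cost):

    -- In v10 a partition can itself be partitioned, yielding a tree of
    -- tables: first level by year, second level by region.
    CREATE TABLE events (
        occurred_at timestamptz NOT NULL,
        region      text        NOT NULL,
        payload     jsonb
    ) PARTITION BY RANGE (occurred_at);

    CREATE TABLE events_2017 PARTITION OF events
        FOR VALUES FROM ('2017-01-01') TO ('2018-01-01')
        PARTITION BY LIST (region);

    CREATE TABLE events_2017_eu PARTITION OF events_2017 FOR VALUES IN ('eu');
    CREATE TABLE events_2017_us PARTITION OF events_2017 FOR VALUES IN ('us');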
[v10] CREATE TEMP FUNCTION/CREATE FUNCTION PG_TEMP.X
Hi all.

It seems I cannot use a temporary function. I know there's no "CREATE TEMP FUNCTION". But while I can do:

tmp2=# CREATE FUNCTION pg_temp.x( OUT b BOOL )
       LANGUAGE plpgsql AS $L0$
       BEGIN
         b := TRUE;
       END;
       $L0$;
tmp2=# SET search_path TO pg_temp,"$user",public;

the following fails:

tmp2=# SELECT * FROM x();
ERROR:  function x() does not exist
LINE 1: SELECT * FROM x();
                      ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.

tmp2=# \df+ x
List of functions
 Schema | Name | Result data type | Argument data types | Type | Volatility | Parallel | Owner | Security | Access privileges | Language | Source code | Description
--------+------+------------------+---------------------+------+------------+----------+-------+----------+-------------------+----------+-------------+-------------
(0 rows)

tmp2=# \df+ pg_temp.x
List of functions
 Schema | Name | Result data type | Argument data types | Type | Volatility | Parallel | Owner | Security | Access privileges | Language | Source code | Description
--------+------+------------------+---------------------+------+------------+----------+-------+----------+-------------------+----------+-------------+-------------
(0 rows)

but this succeeds:

tmp2=# SELECT * FROM pg_temp.x();
 b
---
 t
(1 row)

I think I am doing/thinking something wrong. But what?

TALIA!

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
Re: [v10] CREATE TEMP FUNCTION/CREATE FUNCTION PG_TEMP.X
2017-12-21 17:52 GMT+01:00 Tom Lane:
> Vincenzo Romano writes:
>> It seems I cannot use a temporary function.
>
> You have to schema-qualify the temp function name when calling it, too.
>
>             regards, tom lane

Hi.

So search_path is not used with functions?

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
Re: [v10] CREATE TEMP FUNCTION/CREATE FUNCTION PG_TEMP.X
2017-12-21 17:56 GMT+01:00 Tom Lane:
> Vincenzo Romano writes:
>> 2017-12-21 17:52 GMT+01:00 Tom Lane:
>>> You have to schema-qualify the temp function name when calling it, too.
>>
>> So search_path is not used with functions?
>
> pg_temp is explicitly ignored when searching for functions/operators.
> Otherwise, installing a trojan horse is just too easy.
>
>             regards, tom lane

I'm not sure whether this decision actually makes PG more secure. But, anyway, thanks for the insight: I've just found the documentation for this.

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
Re: [v10] CREATE TEMP FUNCTION/CREATE FUNCTION PG_TEMP.X
2017-12-22 0:50 GMT+01:00 Melvin Davidson:
>
> On Thu, Dec 21, 2017 at 12:07 PM, Vincenzo Romano wrote:
>>
>> 2017-12-21 17:56 GMT+01:00 Tom Lane:
>> > Vincenzo Romano writes:
>> >> 2017-12-21 17:52 GMT+01:00 Tom Lane:
>> >>> You have to schema-qualify the temp function name when calling it,
>> >>> too.
>> >>
>> >> So search_path is not used with functions?
>> >
>> > pg_temp is explicitly ignored when searching for functions/operators.
>> > Otherwise, installing a trojan horse is just too easy.
>> >
>> > regards, tom lane
>>
>> I'm not sure whether this decision actually makes PG more secure.
>> But, anyway, thanks for the insight: I've just found the
>> documentation for this.
>
> Aside from the simple explanations you have received, I question your
> justification for even having a temporary function.
> Functions are only entries in the system catalogs and, as such, take up
> just a tiny amount of physical space. In addition, if you ever need it
> again, you will have to expend time recreating it. Why not just create
> it once and keep it?

Hi. Thanks for your comment.

The reason for having temporary objects, in my current design, is to have something shadowing something else on a per-session basis, thanks to the search_path variable. It's not simply a matter of storage room or access speed; not at all, to me. If you use, for example:

    SET search_path TO pg_temp,"$user",public;

you can put general stuff in public, per-user data in "$user" and per-session data in pg_temp. The "name resolution" will then follow the above priority during lookup (a sketch follows below). And, as I put more and more logic in the DB, having temporary functions gives me a simple, clean and yet powerful design.

As soon as my applications connect, they run SELECT * FROM f_application_init( 'MYAPPNAME' ). That function (which is not temporary) sets up the DB-level, the user-level and the session-level stuff. Currently it eats about 500 msec, and it's run only once per session.

So, the answer to your question is: "why not, if it can be useful?"

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
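A minimal sketch of the shadowing effect described above (object names are made up for illustration; remember that function lookup skips pg_temp, so the shadowing applies to tables and views):

    -- Resolution order: session-local first, then per-user, then shared.
    SET search_path TO pg_temp, "$user", public;

    -- A shared default lives in public...
    CREATE TABLE public.settings (k text PRIMARY KEY, v text);
    INSERT INTO public.settings VALUES ('theme', 'default');

    -- ...and is shadowed, for this session only, by a temp table with
    -- the same name.
    CREATE TEMP TABLE settings (k text PRIMARY KEY, v text);
    INSERT INTO settings VALUES ('theme', 'session-dark');

    -- The unqualified name now resolves to the temporary copy.
    SELECT v FROM settings WHERE k = 'theme';   -- 'session-dark'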
Re: 5 USD for PostgreSQL books at PacktPub
2018-01-02 16:13 GMT+01:00 Alexander Farber:
> Hello fellow PostgreSQL users,
>
> there is currently a sale for books at
> https://www.packtpub.com/tech/PostgreSQL
>
> I am not affiliated in any way with them, it is just a "heads up".
>
> For myself I have pre-ordered
> https://www.packtpub.com/big-data-and-business-intelligence/mastering-postgresql-10
>
> Regards
> Alex

Those are e-books, though, not dead-tree books.

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
Re: Production Database requirement
2018-01-04 18:47 GMT+01:00 Azimuddin Mohammed:
> Hello All,
> Can anyone please let me know:
>
> 1. What is the hardware and software requirement for PostgreSQL to be
> installed in production? We are planning to use Postgres as a Hadoop
> backend, to store user access info, metadata etc. So it will not be a
> high-transaction DB, but we might see lots of reads to this DB.

Your question makes little sense without any details about the data and the application architecture. You'd better elaborate on these points in order to get some meaningful advice.

> 2. Can we use VMs to run the DB? Is it recommended?

There's nothing against virtualization, apart from the computing and I/O limits it can impose. In general, real hardware can provide better performance.

> Thanks in advance!
>
> --
> Regards,
> Azim

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
Re: help with generation_series in pg10
2018-01-09 11:01 GMT+01:00 Alvaro Herrera:
> Márcio A. Sepp wrote:
>
>> how can i have the same in pg10 as i have had in pg 9.x?
>
> Move the function call to the FROM clause:
>
>   select g, (g - 1) % 5 + 1 from generate_series(1, 10) g;
>
>> I need it to work with the date type too... if possible.
>
> There is a generate_series() variant that can return dates (more
> precisely, timestamp with time zone). But what exactly would you like
> returned?
>
> --
> Álvaro Herrera                https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

You can also add a generated series, as seconds, minutes, hours and so on, to a base timestamp. It's not the easiest-going expression, but I'd do it like this (see the sketch below).

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
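Both approaches, sketched (the base timestamps and steps are arbitrary examples):

    -- The timestamp variant of generate_series(): one row per day.
    SELECT g::date AS day
    FROM generate_series(timestamp '2018-01-01',
                         timestamp '2018-01-10',
                         interval '1 day') AS g;

    -- Adding an integer series, scaled to an interval, to a base timestamp.
    SELECT timestamp '2018-01-01' + (g - 1) * interval '1 hour' AS ts
    FROM generate_series(1, 24) AS g;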
Re: How Many Partitions are Good Performing
2018-01-09 18:15 GMT+01:00 Andrew Staller:
> This is the blog post that Rakesh referenced:
> https://blog.timescale.com/time-series-data-postgresql-10-vs-timescaledb-816ee808bac5
>
> Please note, this analysis is done in the context of working with
> time-series data, where thousands of chunks are not uncommon because of
> the append-mostly nature of the workload.
>
> On Mon, Jan 8, 2018 at 6:54 PM, Rakesh Kumar wrote:
>>
>> You should have read carefully what I wrote. 1000 is not an upper
>> limit; 1000 partitions is the number after which performance starts
>> dropping.
>>
>> There is a blog post on www.timescale.com which also highlights the
>> same.
>>
>> Sent: Monday, January 08, 2018 at 6:20 PM
>> From: "Kumar, Virendra"
>> To: "pgsql-gene...@postgresql.org"
>> Subject: How Many Partitions are Good Performing
>>
>> Can somebody tell us how many partitions are a good number without
>> impacting performance? We are hearing around a thousand; is that a
>> limit? Is there a plan to increase the number of partitions for a
>> table? We would appreciate it if somebody could help us with this.
>>
>> Regards,
>> Virendra
>
> --
> TimescaleDB | Growth & Developer Evangelism
> c: 908.581.9509
> 335 Madison Ave.
> New York, NY 10017
> http://www.timescale.com/
> https://github.com/timescale/timescaledb

Data about the query performance would have shed more light on the situation. Unluckily there's none. Weird!

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
Re: Notify client when a table was full
2018-01-21 19:31 GMT+01:00 Francisco Olarte:
> On Sun, Jan 21, 2018 at 1:27 PM, Michael Paquier wrote:
>> On Fri, Jan 19, 2018 at 03:40:01PM +, Raymond O'Donnell wrote:
> ...
>>> How do you define "full"?

The only possible and meaningful case, IMHO, as stated by David earlier, is "file system full", which Postgres communicates through the "Class 53 — Insufficient Resources" error codes. Please refer to the official documentation:

    https://www.postgresql.org/docs/10/static/errcodes-appendix.html

For specific programming languages, more details need to be checked.
Re: Best non-networked front end for postgresql
2018-01-21 19:57 GMT+01:00 Sherman Willden:
> Name: Sherman
>
> Single laptop: Compaq 6710b
>
> Operating System: Ubuntu 17.10
>
> PostgreSQL: 9.6
>
> Used for: Just me and my home database
>
> Seeking advice: Best non-networked front-end
>
> Considerations: I am retired and want to create my own database and
> database captures. I have experience with Perl.
>
> Basic question 1: Which non-networked front-end would work best for me?
>
> Basic question 2: I am seriously considering HTML fields to capture and
> process the information. So, to connect with PostgreSQL, what do I need
> to know? Do I need to know JavaScript, Python, or other languages? How
> is Perl for something like this?
>
> I am entering the values below by hand into a functional database. I
> thought that I would create some type of front-end to enter the values
> and then have the front-end enter the values into the PostgreSQL
> database.
>
> 01). visit_date
> 02). start_time
> 03). end_time
> 04). venue (this is the casino name)
> 05). city
> 06). state
> 07). limit (4/8, 20/40, etc.)
> 08). game (7-card-stud, etc.)
> 09). variant (fixed-limit, no-limit, etc.)
> 10). high-low (mixed-high-low, high-only, etc.)
> 11). buy_in
> 12). cash_out
>
> Thank you;
>
> Sherman

pgAdmin is among the best tools to manage Postgres:

    https://www.pgadmin.org/

As for a front-end program, Perl can be used, as well as a number of other languages ranging from C and C++ to Java and PHP. Almost all languages have a "module" to interact with Postgres databases; the best one is, IMHO, the one you know best. A possible table layout for your fields is sketched below.

P.S. The differences between a local Unix socket and a TCP one are rather subtle from your point of view.

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
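For the record, the twelve fields above map naturally onto a single table; a sketch (the table name and column types are guesses, and note that "limit" is a reserved word in SQL, so it must be quoted or renamed):

    CREATE TABLE poker_sessions (
        id         bigserial PRIMARY KEY,
        visit_date date NOT NULL,
        start_time time,
        end_time   time,
        venue      text,            -- the casino name
        city       text,
        state      text,
        "limit"    text,            -- e.g. '4/8', '20/40'; quoted: LIMIT is reserved
        game       text,            -- e.g. '7-card-stud'
        variant    text,            -- e.g. 'fixed-limit', 'no-limit'
        high_low   text,            -- e.g. 'mixed-high-low', 'high-only'
        buy_in     numeric(10,2),
        cash_out   numeric(10,2)
    );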
Re: Notify client when a table was full
2018-01-22 14:58 GMT+01:00 John McKown:
> On Mon, Jan 22, 2018 at 2:07 AM, Steve Atkins wrote:
>>
>> > On Jan 21, 2018, at 6:44 PM, Vincenzo Romano wrote:
>> >
>> > 2018-01-21 19:31 GMT+01:00 Francisco Olarte:
>> >> On Sun, Jan 21, 2018 at 1:27 PM, Michael Paquier wrote:
>> >>> On Fri, Jan 19, 2018 at 03:40:01PM +, Raymond O'Donnell wrote:
>> >> ...
>> >>>> How do you define "full"?
>> >
>> > The only possible and meaningful case, IMHO, as stated by David
>> > earlier, is "file system full".
>>
>> If your filesystem is full you're pretty much off the air. It's
>> something that should never happen on a production system. So any
>> automation around "the filesystem is full" is going to be much the
>> same as "the server is dead". You're unlikely to be able to do
>> anything useful from the app at that point, let alone from a trigger
>> function.
>
> Well, I'll agree that PostgreSQL is likely "off the air". On my system
> (Linux/Intel), I use quotas to restrict a user's use of disk space --
> yes, even for the PostgreSQL user. I also, at times, use a separate
> filesystem for a database and use a TABLESPACE for the tables within
> it. This is easy to do with Linux because I can create a filesystem in
> a regular disk file. It doesn't perform as well as "native", but my
> system is not a highly used, performance-oriented system. Use of a
> separate filesystem and tablespaces is, I think, a decent way to
> control disk usage for "something" so that if "something" goes
> "insane", it can't really impact "others" very much. Of course, others
> may reasonably disagree with me on this. Each cat, his own rat.
>
>> If the answer involves handling the case where the file system is
>> full, we're not answering a useful question, and the original poster
>> probably needs to clarify.
>>
>> > Which is communicated by Postgres with the "Class 53 — Insufficient
>> > Resources" error codes.
>> > Please refer to official documentation like:
>> >
>> > https://www.postgresql.org/docs/10/static/errcodes-appendix.html
>> >
>> > For specific programming languages more details need to be checked.
>>
>> Cheers,
>> Steve
>
> --
> I have a theory that it's impossible to prove anything, but I can't
> prove it.
>
> Maranatha! <><
> John McKown

Going back to the original question: how can a client know that "a table is full"? I think it should see commands like INSERTs and UPDATEs failing with some error code, maybe the ones I pointed to (a sketch follows below). All the other facts seem to fall beyond the question's scope, while being really interesting and insightful.

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
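For completeness, a sketch of what catching those error codes can look like server-side in PL/pgSQL (the table name is hypothetical; a real client would instead inspect the SQLSTATE returned by the failing INSERT/UPDATE, and when the disk is truly full even this handler may be unable to do much):

    DO $$
    BEGIN
        INSERT INTO important_log (msg) VALUES ('hello');
    EXCEPTION
        WHEN disk_full THEN                  -- SQLSTATE 53100
            RAISE NOTICE 'file system full: the "table is full" case';
        WHEN insufficient_resources THEN     -- SQLSTATE 53000
            RAISE NOTICE 'some other resource is exhausted';
    END;
    $$;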
Re: PGSQL 10, many Random named DB
2018-01-24 12:18 GMT+01:00 Condor:
> On 24-01-2018 12:52, Durumdara wrote:
>>
>> Hello!
>>
>> Somewhere the system administrator (who doesn't really know PG)
>> installed a PGSQL server (10.x) with a database.
>> He couldn't manage the server well.
>>
>> Yesterday my colleague saw 21 databases on this server with random
>> names.
>>
>> He checked it with the built-in PGAdmin IV.
>> Today we checked it again, and we saw 33 databases.
>>
>> The first name is "ahucli" for example - like an Aztec king... :-)
>>
>> The server OS is Windows, the PGSQL is 10.x.
>>
>> What can cause this strange thing?
>>
>> 1.) PGAdmin IV bug?
>> 2.) Their server is hacked/cracked from outside?
>> 3.) A wrongly configured tool, or an automation?
>> 4.) "Alien invasion", etc.
>>
>> Did you see the same thing anywhere?
>>
>> Thank you for any advice on this theme!
>>
>> Best regards
>>    dd
>
> I bet on 2.
> They probably did not set up the pg_hba.conf file.
>
> Regards,
> HC

The logs are your friends. Maybe you can track down those "CREATE DATABASE" commands with timestamps and IP addresses. There could also be a (local, rogue) piece of software that does this trick; again, check the logs.

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
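If the default logging wasn't verbose enough to record those commands, a few stock postgresql.conf parameters will capture the next occurrence (a sketch; the log_line_prefix format is just an example):

    # Log every connection/disconnection and all DDL, with the remote
    # host (%r), user (%u) and database (%d) in each line's prefix.
    log_connections = on
    log_disconnections = on
    log_statement = 'ddl'
    log_line_prefix = '%m [%p] %r %u@%d '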
Re: PostgreSQL 9.6: view based on sequence
2018-01-24 12:52 GMT+01:00 Enrico Pirozzi:
> Hi, I was doing a test on a 9.6 and I've seen a strange thing.
>
> If I run
>
>   create sequence test_sequence;
>   create view v_test_sequence as select * from test_sequence;
>
> PostgreSQL creates a view based on the sequence, and the strange thing
> is that, among the various fields that are created in the view, there
> is a field named sequence_name with datatype name.
>
> I've seen in the docs
> https://www.postgresql.org/docs/9.6/static/datatype-character.html
> that the datatype name should be reserved only for PG-internal
> structures.
>
> Can anyone help me?
>
> Enrico
>
> --
> Enrico Pirozzi
> e.piro...@nbsgroup.it

Here (on a v10 deployment) I get:

tmp1=# create sequence x;
CREATE SEQUENCE
Time: 69,857 ms
tmp1=# select * from x;
 last_value | log_cnt | is_called
------------+---------+-----------
          1 |       0 | f
(1 row)

Maybe what you want to do is:

    create view v_test_sequence as select nextval( 'test_sequence' );

Ciao.

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
Re: PostgreSQL 9.6: view based on sequence
2018-01-24 13:02 GMT+01:00 Enrico Pirozzi:
>> tmp1=# create sequence x;
>> CREATE SEQUENCE
>> Time: 69,857 ms
>> tmp1=# select * from x;
>>  last_value | log_cnt | is_called
>> ------------+---------+-----------
>>           1 |       0 | f
>> (1 row)
>>
>> Maybe what you want to do is:
>>
>>   create view v_test_sequence as select nextval( 'test_sequence' );
>>
>> Ciao.
>
> Yes, but this happens on a 10.x version. :)
>
> Ciao
> --
> Enrico Pirozzi
> e.piro...@nbsgroup.it

I don't have any v9 in my deployments any more, sorry.

What is the objective of that view? If it's to hide the nextval() function, then the solution can be similar to mine. If it's to access the sequence's metainfo, then maybe it'd be better to explore the pg_catalog schema.

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
Re: PostgreSQL 9.6: view based on sequence
2018-01-24 13:10 GMT+01:00 Enrico Pirozzi:
> It's just to know why PostgreSQL does not return a warning.
>
> On 24/01/2018 at 13:06, Vincenzo Romano wrote:
>> I don't have any v9 in my deployments any more, sorry.
>>
>> What is the objective of that view? If it's to hide the nextval()
>> function, then the solution can be similar to mine. If it's to access
>> the sequence's metainfo, then maybe it'd be better to explore the
>> pg_catalog schema.
>
> --
> Enrico Pirozzi
> e.piro...@nbsgroup.it

Those queries are all legal: no warning is due.

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
Re: PostgreSQL 9.6: view based on sequence
2018-01-24 14:52 GMT+01:00 Enrico Pirozzi:
> Yes, the query is legal, but if you use pg_upgrade to go from 9.6 to
> 10.x, pg_upgrade will fail.
>
> On 24/01/2018 at 13:11, Vincenzo Romano wrote:
>> Those queries are all legal: no warning is due.
>
> --
> Enrico Pirozzi
> e.piro...@nbsgroup.it

It is all documented here:

    https://www.postgresql.org/docs/10/static/release-10.html#idm46428658049888

"... selecting from a sequence relation now returns only the three fields named above."

So there's no way to keep such a view portable from 9.x to 10.x; the closest alternative is sketched below.

--
Vincenzo Romano - NotOrAnd.IT
Information Technologies
--
NON QVIETIS MARIBVS NAVTA PERITVS
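For code that must read sequence metadata on v10 without selecting from the sequence relation itself, the system view pg_sequences is one option (a sketch; on 9.6, the standard information_schema.sequences view plays a similar role):

    -- PostgreSQL 10+: sequence metadata from the catalogs.
    SELECT schemaname, sequencename, start_value, increment_by, last_value
    FROM pg_sequences
    WHERE sequencename = 'test_sequence';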