Re: Oracle to postgres migration via ora2pg (blob data)
you the data migration speed, so you can adjust all these parameters to see whether you get some performance gain.

If you want to know exactly at which speed Oracle is able to send the data, add --oracle_speed to the ora2pg command. Ora2Pg will only extract data from Oracle; there will be no bytea transformation and no data writing, just the raw Oracle speed. You can run some tests with different values of the -J option to see which value is best. On the other side, you can use the --ora2pg_speed option to see at which speed Ora2Pg is able to convert the data; nothing will be written either. Use it to know whether you gain anything with the value of the -j option. Don't forget to run some additional tests with the BLOB_LIMIT value to see if there is some more improvement.

If someone can prove to me that they get better performance on the Oracle data-extraction side, I will be pleased to look at their code.

I hope this will help.

Regards,
--
Gilles Darold
http://www.darold.net/
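A minimal benchmarking sketch of the two options described above, assuming an already-configured ora2pg.conf; the -J and -j values are only starting points to vary between runs:

```
# Measure the raw Oracle extraction speed only: no bytea transform,
# no data written. Vary -J (parallel Oracle data connections).
ora2pg -c ora2pg.conf -t COPY --oracle_speed -J 4

# Measure the speed at which Ora2Pg converts the data; again nothing
# is written. Vary -j (parallel conversion processes).
ora2pg -c ora2pg.conf -t COPY --ora2pg_speed -j 4
```

Once the best -J/-j pair is found, repeat real exports while changing the BLOB_LIMIT directive in ora2pg.conf (the number of rows fetched per pass on tables with LOB columns) to see whether smaller or larger chunks help.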
Re: aws sct/dms versus ora2pg
Le 22/02/2020 à 02:07, Ayub M a écrit :
> I would like to get suggestions and feedback on aws sct/dms vs ora2pg
> for an Oracle (on-prem) to PostgreSQL (aws rds) migration project.
>
> One big difference between them I see is the fact that dms supports
> incremental loads from oracle to postgres (don't think ora2pg supports
> that, but I could be wrong).
>
> Could someone who researched or used these tools help list the pros and
> cons of each approach, and which is highly recommended?

Hi,

I don't know a lot about aws migration, but it seems to only be able to migrate DDL and data; ora2pg can do much more. About incremental data migration, Ora2Pg has an experimental mode that adds this feature — see DATADIFF in the documentation and in ora2pg.conf. Unfortunately I have not really used this feature yet, so help to test and improve it is welcome. Although, as Ora2Pg is a spare-time project, I guess that with the financial means of Amazon their product is far better. If you or someone else gives both solutions a try, feel free to send feedback.

Regards,
--
Gilles Darold
http://www.darold.net/
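For reference, the experimental incremental mode mentioned above is switched on through a directive in ora2pg.conf (the directive name comes from the documentation; the comment is an informal summary, so check the DATADIFF chapter for the exact behavior and companion settings):

```
# Enable the experimental DATADIFF mode: instead of a full reload,
# Ora2Pg computes the differences since the previous run and replays
# only those changes on the PostgreSQL side.
DATADIFF        1
```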
Re: ora2pg error : DBD::Oracle::db prepare failed: ORA-28110
Le 12/06/2020 à 00:40, Adrian Klaver a écrit :
> On 6/11/20 1:30 PM, George Dimopoulos wrote:
> Please reply to list also. Ccing list.
>
>> Hi Adrian,
>>
>> Thank you very much for your help!!! I ran the query, and I found out that
>>
>>     SELECT DISTINCT c.SHAPE.SDO_GTYPE FROM NRM_CAVES.LOCATIONS c WHERE ROWNUM < 5
>>     *
>>     ERROR at line 1:
>>     ORA-28110: policy function or package NRM_CAVES.UTIL_SECURITY has error
>
> So to be clear, the above error is being seen on the Oracle database.
>
>> This policy has been applied to a few tables, so I did exclude them (with
>> directive TYPE TABLE). I was hoping that I would be able to migrate the
>> other tables that do not depend on this policy. That means the developers
>> have to correct this on the Oracle side before I can do the migration and
>> succeed. Is this correct?
>
> Not sure.

Hi,

"ORA-28110: Policy function or package has error" occurs when a policy function has some error, such as a compilation problem. The solution is to recompile the package or the function after fixing the underlying problem.

If you set COMPILE_SCHEMA to 1 in your ora2pg.conf, Ora2Pg will first recompile all the functions, but if the policy function is not in the schema you are exporting, it will not help. Actually this is an Oracle issue; you may want to contact an Oracle DBA to see what can be done.

Best regards
--
Gilles Darold
http://www.darold.net/
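The COMPILE_SCHEMA workaround mentioned above is a one-line change in ora2pg.conf (the directive is named in the message; the comment summarizes its documented effect):

```
# Before exporting, ask Oracle to recompile invalid objects in the
# exported schema (via DBMS_UTILITY), so functions/packages that merely
# need a recompile stop raising errors during extraction.
COMPILE_SCHEMA  1
```

As noted above, this only helps when the broken policy function lives in the schema being exported.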
Re: Loading Oracle Spatial Data to Postgresql
Le 20/08/2020 à 16:30, Ko, Christina a écrit :

Hi All,

I am running into an issue using DMS to load Oracle spatial data (Oracle 11.2.0.3) into PostgreSQL.

Oracle table:

    CREATE TABLE Spatial_Tbl (
      ID       NUMBER(38,9),
      P_ID     NUMBER(38,9),
      GEOMETRY MDSYS.SDO_GEOMETRY
    );

AWS DMS calls the SDO2GEOJSON custom function when trying to load the data, but it failed and the geometry columns in PostgreSQL were empty. I also tried using the SDO_UTIL.TO_GEOJSON utility and it didn't work either. I am looking for suggestions on how to load the geometry data from Oracle 11.2.0.3 to PostgreSQL.

Thanks in advance for your help!

Christina

Hi,

You can either use oracle_fdw or Ora2Pg. The first is an FDW extension that allows you to use a foreign table to upload the data into your destination table; the second is a tool that exports data from the Oracle database to plain text files or directly into your PostgreSQL table.

Best regards,
--
Gilles Darold
http://www.darold.net/
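The oracle_fdw route suggested above looks roughly like the sketch below. The connection string, credentials, and table names are assumptions for illustration; oracle_fdw can map SDO_GEOMETRY to a PostGIS geometry column, so PostGIS must be installed first:

```sql
CREATE EXTENSION postgis;
CREATE EXTENSION oracle_fdw;

-- Hypothetical Oracle connection; adjust host/service to your setup.
CREATE SERVER oradb FOREIGN DATA WRAPPER oracle_fdw
  OPTIONS (dbserver '//orahost:1521/ORCL');

CREATE USER MAPPING FOR CURRENT_USER SERVER oradb
  OPTIONS (user 'scott', password 'tiger');

-- Foreign table over the Oracle source; the SDO_GEOMETRY column is
-- declared as a PostGIS geometry on the PostgreSQL side.
CREATE FOREIGN TABLE spatial_tbl_ora (
  id       numeric(38,9),
  p_id     numeric(38,9),
  geometry geometry
) SERVER oradb OPTIONS (schema 'NRM_CAVES', table 'SPATIAL_TBL');

-- One-shot load into the local destination table.
INSERT INTO spatial_tbl SELECT * FROM spatial_tbl_ora;
```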
Re: EXTERNAL: Re: Loading Oracle Spatial Data to Postgresql
Le 20/08/2020 à 17:57, Ko, Christina a écrit :

*From:* Gilles Darold
*Sent:* Thursday, August 20, 2020 10:54 AM
*To:* Ko, Christina (US); pgsql-general@lists.postgresql.org
*Cc:* Ho, Chuong
*Subject:* EXTERNAL: Re: Loading Oracle Spatial Data to Postgresql

> You can either use oracle_fdw or Ora2Pg. The first is an FDW extension
> that allows you to use a foreign table to upload the data into your
> destination table; the second is a tool that exports data from the
> Oracle database to plain text files or directly into your PostgreSQL
> table. [...]

I have just installed ora2pg and will see if it works. I believe I have to specify my settings in the config; do you have any suggestions on what I have to set in the config file to load spatial data from Oracle to PostgreSQL?

Thank you.
Christina,

Well, if you start with Ora2Pg your bible is http://www.ora2pg.com/documentation.html, and especially this chapter, which will save you time: http://www.ora2pg.com/documentation.html#Generate-a-migration-template

After reading that, as a shortcut, once your ora2pg.conf is configured to communicate with Oracle and if you just want to migrate this single table:

    ora2pg -c config/ora2pg.conf -t COPY -b data/ -o data.sql -a 'SPATIAL_TB1'

I also recommend you read http://www.darold.net/confs/ora2pg_the_hard_way.pdf; it is a bit old but full of useful information on Ora2Pg usage.

Best regards,
--
Gilles Darold
http://www.darold.net/
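On the configuration question, a few ora2pg.conf directives are specific to spatial data. The directive names below are recalled from the ora2pg.conf template and the values are illustrative — verify them against the documentation chapter linked above:

```
# Spatial-related directives (illustrative values):
AUTODETECT_SPATIAL_TYPE  1         # probe geometry columns to pick precise geometry(...) types
GEOMETRY_EXTRACT_TYPE    INTERNAL  # extract geometries with Ora2Pg's internal parser (or WKT/WKB)
CONVERT_SRID             1         # translate Oracle SRIDs to their EPSG equivalents
DEFAULT_SRID             4326      # SRID to assume when Oracle reports none
```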
Re: Transaction and SQL errors
Le 04/04/2022 à 18:20, Sebastien Flaesch a écrit :

David,

> Personally, I can see where it has significant value for psql in
> interactive mode because people make typos. Application code doesn't.
> That removes a whole class of problems where the feature provides
> benefit.

Sure, application code must not have typos, but I prefer to let the DB engine check for SQL constraints. Imagine the following case:

    BEGIN WORK
    ... (some other SQL) ...
    DELETE FROM items WHERE item_id = 12823   -- can raise a foreign key error
    if sql-error then ...

To me it's better than:

    BEGIN WORK
    ...
    SELECT ... FROM orders WHERE item_id = 12823
    if not-found then   -- make sure we get no SQL error that cancels the TX!
        DELETE FROM items WHERE item_id = 12823
    endif
    ...

... and I am not even sure that is valid atomic code, depending on the isolation level...

A good argument for PostgreSQL's behavior would be that it's better to cancel the whole transaction and restart all SQL commands. However, legacy code is often spaghetti code where one function starts the TX, then calls other functions doing SQL... (yes, a good candidate for savepoint usage!)

Anyway, thanks for the info: nothing planned short term, and that's what I was asking for.

Cheers!
Seb

FYI, there was a discussion [1] and a patch to allow this feature through an extension, because having it implemented in core will probably never happen. Everything was ready, but the final status is Rejected, so I don't think you should expect any work on this feature in any time frame. But who knows, things can evolve.

[1] https://www.postgresql.org/message-id/983d80b3-d187-127a-2de5-38c92ccb38ab%40darold.net

--
Gilles Darold
http://www.darold.net/
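The savepoint pattern alluded to above — attempt the DELETE, and on a constraint error roll back only that statement while the surrounding transaction stays alive — can be sketched as follows. This uses Python's sqlite3 purely as a self-contained stand-in (savepoints work the same way via psycopg against PostgreSQL); the items/orders tables and the id 12823 come from the example in the message:

```python
import sqlite3

# In-memory database in autocommit mode so we control transactions manually.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE items (item_id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, "
             "item_id INTEGER REFERENCES items(item_id))")
conn.execute("INSERT INTO items VALUES (12823)")
conn.execute("INSERT INTO orders VALUES (1, 12823)")

conn.execute("BEGIN")
conn.execute("SAVEPOINT before_delete")
try:
    # Fails: an orders row still references this item.
    conn.execute("DELETE FROM items WHERE item_id = 12823")
    conn.execute("RELEASE SAVEPOINT before_delete")
except sqlite3.IntegrityError:
    # Undo only the failed statement; the transaction remains usable.
    conn.execute("ROLLBACK TO SAVEPOINT before_delete")

# Later statements in the same transaction still succeed and commit.
conn.execute("INSERT INTO items VALUES (99)")
conn.execute("COMMIT")

remaining = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(remaining)
```

With PostgreSQL's default behavior, the failed DELETE without the savepoint would have aborted the whole transaction, which is exactly the difference discussed in the thread.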
Re: INSERT ALL with DML ERROR Logging replacement in PostgreSQL
Le 21/06/2022 à 09:08, Jagmohan Kaintura a écrit :

Hi Team,

We are working on a project where we are moving from Oracle to PostgreSQL, with a migration tool whose statements mostly insert the correct records and log the errors into an error table using Oracle's built-in INSERT ALL with DML error logging. As part of PostgreSQL best practices, what approach should we take to move these types of statements to PostgreSQL, as we don't have an equivalent mechanism to load correct data into the main table and error records into an error table with the error reason?

The statements mostly used are of the form:

    INSERT ALL
    INTO target_table (COLUMN LIST) VALUES ()
    LOG ERRORS INTO ...
    SELECT statement considering the source tables;

Can anyone please help me with what could be the best approach to convert this in the tool?

--
*Best Regards,*
Jagmohan

Hi,

Maybe what you are looking for is https://github.com/MigOpsRepos/pg_dbms_errlog — a PostgreSQL extension that emulates the DBMS_ERRLOG Oracle package.

Best regards,
--
Gilles Darold
http://www.darold.net/
Re: INSERT ALL with DML ERROR Logging replacement in PostgreSQL
Le 21/06/2022 à 10:28, Jagmohan Kaintura a écrit :

Hi Gilles,

I was going through this earlier today but didn't compile it, as I read it may not be able to capture the errors if we have the below type of statement, and most of our statements are INSERT ... SELECT statements only.

> The form INSERT INTO ... SELECT ... will not have the same behavior as
> in Oracle. It will not store the successful inserts and log the rows in
> error. This is not supported because it is a single transaction for
> PostgreSQL and everything is rolled back in case of error.

All our statements are of that form; will it still be useful?

Right, this was not obvious in your post, but yes, if you are using INSERT + SELECT this is not possible with the current version of this extension. Maybe it could be made possible by internally rewriting the query to loop over the result of the SELECT and generate one INSERT per row returned, but with performance loss of course.

Best regards,
--
Gilles Darold
http://www.darold.net/
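The per-row rewrite suggested above can be sketched in application code: loop over the SELECT result and issue one INSERT per row, diverting failures to an error table instead of aborting. Python's sqlite3 serves as a self-contained stand-in here (against PostgreSQL you would do the same with per-row savepoints), and the table and column names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE source (id INTEGER, val TEXT)")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE TABLE target_errlog (id INTEGER, val TEXT, err TEXT)")
conn.executemany("INSERT INTO source VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (2, "dup"), (3, "c")])

# Instead of one INSERT ... SELECT (all-or-nothing in PostgreSQL),
# insert row by row and log each failure with its error reason.
for row in conn.execute("SELECT id, val FROM source").fetchall():
    try:
        conn.execute("INSERT INTO target VALUES (?, ?)", row)
    except sqlite3.Error as exc:
        # Rough equivalent of Oracle's LOG ERRORS INTO: keep the bad row.
        conn.execute("INSERT INTO target_errlog VALUES (?, ?, ?)",
                     (*row, str(exc)))

loaded = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
failed = conn.execute("SELECT COUNT(*) FROM target_errlog").fetchone()[0]
print(loaded, failed)
```

The duplicate id=2 row violates the primary key, lands in target_errlog with its error message, and the three good rows still load — at the cost, as noted above, of one round trip per row.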
Re: password rules
Le 24/06/2025 à 07:18, raphi a écrit :

> Am 23.06.2025 um 22:39 schrieb Christoph Berg:
>> Re: raphi
>>> Sorry for this rather long (first) email on this list, but I feel like
>>> I had to explain our use case and why LDAP is not always as simple as
>>> adding a line to hba.conf.
>> Did you give the "pam" method a try?
>
> Not really, because it's a local solution. How do you change passwords
> or keep history on your standby nodes? Besides, the documentation says
> that postgres can't handle /etc/shadow because it runs unprivileged;
> only pam_ldap would work. Or am I missing something?
>
> have fun,
> raphi

I think the credcheck extension has been created to handle the features you are requesting.

> - enforce some password complexity and prevent reuse

This is already implemented.

> - expire a password immediately after creating and prompt the user to
> change it upon first login try. They can connect with the initial
> password but cannot login until they've set a new password.

I started working on this some weeks ago; it just needs more time to finish and polish the job.

> the password history is not being replicated to the standby so we can
> not use it.

It has been on my TODO list for a year, as you noted, and I will try to implement it this summer.

--
Gilles Darold
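To give an idea of the "complexity and reuse" part already implemented in credcheck, a configuration might look like the sketch below. The parameter names are recalled from the credcheck README and the values are illustrative only — verify both against the version you install:

```
# postgresql.conf — illustrative credcheck settings
shared_preload_libraries = 'credcheck'

credcheck.password_min_length    = 12   # minimum password length
credcheck.password_min_special   = 1    # require at least one special character
credcheck.password_reuse_history = 5    # reject any of the last 5 passwords
```

Note that, as discussed above, the reuse history is kept locally and is not yet replicated to standby nodes.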