This is an automated email from the ASF dual-hosted git repository.

davsclaus pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/camel.git
The following commit(s) were added to refs/heads/master by this push:
     new 82ce7b3  Regen
82ce7b3 is described below

commit 82ce7b38328f3e166a7d9c9e934b226c4f85bf60
Author: Claus Ibsen <claus.ib...@gmail.com>
AuthorDate: Thu Jun 4 06:39:45 2020 +0200

    Regen
---
 .../ROOT/pages/debezium-mongodb-component.adoc       | 16 ++++++++--------
 .../modules/ROOT/pages/debezium-mysql-component.adoc | 20 ++++++++++----------
 .../ROOT/pages/debezium-postgres-component.adoc      | 16 ++++++++--------
 .../ROOT/pages/debezium-sqlserver-component.adoc     | 16 ++++++++--------
 4 files changed, 34 insertions(+), 34 deletions(-)

diff --git a/docs/components/modules/ROOT/pages/debezium-mongodb-component.adoc b/docs/components/modules/ROOT/pages/debezium-mongodb-component.adoc
index 02f497f..58536ba 100644
--- a/docs/components/modules/ROOT/pages/debezium-mongodb-component.adoc
+++ b/docs/components/modules/ROOT/pages/debezium-mongodb-component.adoc
@@ -72,14 +72,14 @@ The Debezium MongoDB Connector component supports 43 options, which are listed b
 | *collectionBlacklist* (mongodb) | Description is not available here, please check Debezium website for corresponding key 'collection.blacklist' description. | | String
 | *collectionWhitelist* (mongodb) | The collections for which changes are to be captured | | String
 | *connectBackoffInitialDelayMs* (mongodb) | The initial delay when trying to reconnect to a primary after a connection cannot be made or when no primary is available. Defaults to 1 second (1000 ms). | 1s | long
-| *connectBackoffMaxDelayMs* (mongodb) | The maximum delay when trying to reconnect to a primary after a connection cannot be made or when no primary is available. Defaults to 120 second (120,000 ms). | 120s | long
+| *connectBackoffMaxDelayMs* (mongodb) | The maximum delay when trying to reconnect to a primary after a connection cannot be made or when no primary is available. Defaults to 120 second (120,000 ms). | 2m | long
 | *connectMaxAttempts* (mongodb) | Maximum number of failed connection attempts to a replica set primary before an exception occurs and task is aborted. Defaults to 16, which with the defaults for 'connect.backoff.initial.delay.ms' and 'connect.backoff.max.delay.ms' results in just over 20 minutes of attempts before failing. | 16 | int
 | *databaseBlacklist* (mongodb) | The databases for which changes are to be excluded | | String
 | *databaseHistoryFileFilename* (mongodb) | The path to the file that will be used to record the database history | | String
 | *databaseWhitelist* (mongodb) | The databases for which changes are to be captured | | String
 | *fieldBlacklist* (mongodb) | Description is not available here, please check Debezium website for corresponding key 'field.blacklist' description. | | String
 | *fieldRenames* (mongodb) | Description is not available here, please check Debezium website for corresponding key 'field.renames' description. | | String
-| *heartbeatIntervalMs* (mongodb) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0s | int
+| *heartbeatIntervalMs* (mongodb) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
 | *heartbeatTopicsPrefix* (mongodb) | The prefix that is used to name heartbeat topics.Defaults to __debezium-heartbeat. | __debezium-heartbeat | String
 | *initialSyncMaxThreads* (mongodb) | Maximum number of threads used to perform an initial sync of the collections in a replica set. Defaults to 1. | 1 | int
 | *maxBatchSize* (mongodb) | Maximum size of each batch of source records. Defaults to 2048. | 2048 | int
@@ -91,9 +91,9 @@ The Debezium MongoDB Connector component supports 43 options, which are listed b
 | *mongodbSslEnabled* (mongodb) | Should connector use SSL to connect to MongoDB instances | false | boolean
 | *mongodbSslInvalidHostname Allowed* (mongodb) | Whether invalid host names are allowed when using SSL. If true the connection will not prevent man-in-the-middle attacks | false | boolean
 | *mongodbUser* (mongodb) | Database user for connecting to MongoDB, if necessary. | | String
-| *pollIntervalMs* (mongodb) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 0.5s | long
+| *pollIntervalMs* (mongodb) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 500ms | long
 | *skippedOperations* (mongodb) | The comma-separated list of operations to skip during streaming, defined as: 'i' for inserts; 'u' for updates; 'd' for deletes. By default, no operations will be skipped. | | String
-| *snapshotDelayMs* (mongodb) | The number of milliseconds to delay before a snapshot will begin. | 0s | long
+| *snapshotDelayMs* (mongodb) | The number of milliseconds to delay before a snapshot will begin. | 0ms | long
 | *snapshotFetchSize* (mongodb) | The maximum number of records that should be loaded into memory while performing a snapshot | | int
 | *snapshotMode* (mongodb) | The criteria for running a snapshot upon startup of the connector. Options include: 'initial' (the default) to specify the connector should always perform an initial sync when required; 'never' to specify the connector should never perform an initial sync | initial | String
 | *sourceStructVersion* (mongodb) | A version of the format of the publicly visible source part in the message | v2 | String
@@ -146,14 +146,14 @@ with the following path and query parameters:
 | *collectionBlacklist* (mongodb) | Description is not available here, please check Debezium website for corresponding key 'collection.blacklist' description. | | String
 | *collectionWhitelist* (mongodb) | The collections for which changes are to be captured | | String
 | *connectBackoffInitialDelayMs* (mongodb) | The initial delay when trying to reconnect to a primary after a connection cannot be made or when no primary is available. Defaults to 1 second (1000 ms). | 1s | long
-| *connectBackoffMaxDelayMs* (mongodb) | The maximum delay when trying to reconnect to a primary after a connection cannot be made or when no primary is available. Defaults to 120 second (120,000 ms). | 120s | long
+| *connectBackoffMaxDelayMs* (mongodb) | The maximum delay when trying to reconnect to a primary after a connection cannot be made or when no primary is available. Defaults to 120 second (120,000 ms). | 2m | long
 | *connectMaxAttempts* (mongodb) | Maximum number of failed connection attempts to a replica set primary before an exception occurs and task is aborted. Defaults to 16, which with the defaults for 'connect.backoff.initial.delay.ms' and 'connect.backoff.max.delay.ms' results in just over 20 minutes of attempts before failing. | 16 | int
 | *databaseBlacklist* (mongodb) | The databases for which changes are to be excluded | | String
 | *databaseHistoryFileFilename* (mongodb) | The path to the file that will be used to record the database history | | String
 | *databaseWhitelist* (mongodb) | The databases for which changes are to be captured | | String
 | *fieldBlacklist* (mongodb) | Description is not available here, please check Debezium website for corresponding key 'field.blacklist' description. | | String
 | *fieldRenames* (mongodb) | Description is not available here, please check Debezium website for corresponding key 'field.renames' description. | | String
-| *heartbeatIntervalMs* (mongodb) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0s | int
+| *heartbeatIntervalMs* (mongodb) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
 | *heartbeatTopicsPrefix* (mongodb) | The prefix that is used to name heartbeat topics.Defaults to __debezium-heartbeat. | __debezium-heartbeat | String
 | *initialSyncMaxThreads* (mongodb) | Maximum number of threads used to perform an initial sync of the collections in a replica set. Defaults to 1. | 1 | int
 | *maxBatchSize* (mongodb) | Maximum size of each batch of source records. Defaults to 2048. | 2048 | int
@@ -165,9 +165,9 @@ with the following path and query parameters:
 | *mongodbSslEnabled* (mongodb) | Should connector use SSL to connect to MongoDB instances | false | boolean
 | *mongodbSslInvalidHostname Allowed* (mongodb) | Whether invalid host names are allowed when using SSL. If true the connection will not prevent man-in-the-middle attacks | false | boolean
 | *mongodbUser* (mongodb) | Database user for connecting to MongoDB, if necessary. | | String
-| *pollIntervalMs* (mongodb) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 0.5s | long
+| *pollIntervalMs* (mongodb) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 500ms | long
 | *skippedOperations* (mongodb) | The comma-separated list of operations to skip during streaming, defined as: 'i' for inserts; 'u' for updates; 'd' for deletes. By default, no operations will be skipped. | | String
-| *snapshotDelayMs* (mongodb) | The number of milliseconds to delay before a snapshot will begin. | 0s | long
+| *snapshotDelayMs* (mongodb) | The number of milliseconds to delay before a snapshot will begin. | 0ms | long
 | *snapshotFetchSize* (mongodb) | The maximum number of records that should be loaded into memory while performing a snapshot | | int
 | *snapshotMode* (mongodb) | The criteria for running a snapshot upon startup of the connector. Options include: 'initial' (the default) to specify the connector should always perform an initial sync when required; 'never' to specify the connector should never perform an initial sync | initial | String
 | *sourceStructVersion* (mongodb) | A version of the format of the publicly visible source part in the message | v2 | String
diff --git a/docs/components/modules/ROOT/pages/debezium-mysql-component.adoc b/docs/components/modules/ROOT/pages/debezium-mysql-component.adoc
index 5b337e5..9b161ec 100644
--- a/docs/components/modules/ROOT/pages/debezium-mysql-component.adoc
+++ b/docs/components/modules/ROOT/pages/debezium-mysql-component.adoc
@@ -80,14 +80,14 @@ The Debezium MySQL Connector component supports 73 options, which are listed bel
 | *binlogBufferSize* (mysql) | The size of a look-ahead buffer used by the binlog reader to decide whether the transaction in progress is going to be committed or rolled back. Use 0 to disable look-ahead buffering. Defaults to 0 (i.e. buffering is disabled). | 0 | int
 | *columnBlacklist* (mysql) | Description is not available here, please check Debezium website for corresponding key 'column.blacklist' description. | | String
 | *connectKeepAlive* (mysql) | Whether a separate thread should be used to ensure the connection is kept alive. | true | boolean
-| *connectKeepAliveIntervalMs* (mysql) | Interval in milliseconds to wait for connection checking if keep alive thread is used. | 60s | long
+| *connectKeepAliveIntervalMs* (mysql) | Interval in milliseconds to wait for connection checking if keep alive thread is used. | 1m | long
 | *connectTimeoutMs* (mysql) | Maximum time in milliseconds to wait after trying to connect to the database before timing out. | 30s | int
 | *databaseBlacklist* (mysql) | Description is not available here, please check Debezium website for corresponding key 'database.blacklist' description. | | String
 | *databaseHistory* (mysql) | The name of the DatabaseHistory class that should be used to store and recover database schema changes. The configuration properties for the history are prefixed with the 'database.history.' string. | io.debezium.relational.history.FileDatabaseHistory | String
 | *databaseHistoryFileFilename* (mysql) | The path to the file that will be used to record the database history | | String
 | *databaseHistoryKafkaBootstrap Servers* (mysql) | A list of host/port pairs that the connector will use for establishing the initial connection to the Kafka cluster for retrieving database schema history previously stored by the connector. This should point to the same Kafka cluster used by the Kafka Connect process. | | String
 | *databaseHistoryKafkaRecovery Attempts* (mysql) | The number of attempts in a row that no data are returned from Kafka before recover completes. The maximum amount of time to wait after receiving no data is (recovery.attempts) x (recovery.poll.interval.ms). | 100 | int
-| *databaseHistoryKafkaRecovery PollIntervalMs* (mysql) | The number of milliseconds to wait while polling for persisted data during recovery. | 0.1s | int
+| *databaseHistoryKafkaRecovery PollIntervalMs* (mysql) | The number of milliseconds to wait while polling for persisted data during recovery. | 100ms | int
 | *databaseHistoryKafkaTopic* (mysql) | The name of the topic for the database schema history | | String
 | *databaseHistorySkipUnparseable Ddl* (mysql) | Controls the action Debezium will take when it meets a DDL statement in binlog, that it cannot parse.By default the connector will stop operating but by changing the setting it can ignore the statements which it cannot parse. If skipping is enabled then Debezium can miss metadata changes. | false | boolean
 | *databaseHistoryStoreOnly MonitoredTablesDdl* (mysql) | Controls what DDL will Debezium store in database history.By default (false) Debezium will store all incoming DDL statements. If set to truethen only DDL that manipulates a monitored table will be stored. | false | boolean
@@ -114,7 +114,7 @@ The Debezium MySQL Connector component supports 73 options, which are listed bel
 | *gtidSourceExcludes* (mysql) | The source UUIDs used to exclude GTID ranges when determine the starting position in the MySQL server's binlog. | | String
 | *gtidSourceFilterDmlEvents* (mysql) | If set to true, we will only produce DML events into Kafka for transactions that were written on mysql servers with UUIDs matching the filters defined by the gtid.source.includes or gtid.source.excludes configuration options, if they are specified. | true | boolean
 | *gtidSourceIncludes* (mysql) | The source UUIDs used to include GTID ranges when determine the starting position in the MySQL server's binlog. | | String
-| *heartbeatIntervalMs* (mysql) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0s | int
+| *heartbeatIntervalMs* (mysql) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
 | *heartbeatTopicsPrefix* (mysql) | The prefix that is used to name heartbeat topics.Defaults to __debezium-heartbeat. | __debezium-heartbeat | String
 | *includeQuery* (mysql) | Whether the connector should include the original SQL query that generated the change event. Note: This option requires MySQL be configured with the binlog_rows_query_log_events option set to ON. Query will not be present for events generated from snapshot. WARNING: Enabling this option may expose tables or fields explicitly blacklisted or masked by including the original SQL statement in the change event. For this reason the default value is 'false'. | false | [...]
 | *includeSchemaChanges* (mysql) | Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes the DDL statement(s).The default is 'true'. This is independent of how the connector internally records database history. | true | boolean
@@ -122,8 +122,8 @@ The Debezium MySQL Connector component supports 73 options, which are listed bel
 | *maxBatchSize* (mysql) | Maximum size of each batch of source records. Defaults to 2048. | 2048 | int
 | *maxQueueSize* (mysql) | Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size. | 8192 | int
 | *messageKeyColumns* (mysql) | A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Each expression must match the pattern ':',where the table names could be defined as (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector,and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration the table's primary key column(s) [...]
-| *pollIntervalMs* (mysql) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 0.5s | long
-| *snapshotDelayMs* (mysql) | The number of milliseconds to delay before a snapshot will begin. | 0s | long
+| *pollIntervalMs* (mysql) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 500ms | long
+| *snapshotDelayMs* (mysql) | The number of milliseconds to delay before a snapshot will begin. | 0ms | long
 | *snapshotFetchSize* (mysql) | The maximum number of records that should be loaded into memory while performing a snapshot | | int
 | *snapshotLockingMode* (mysql) | Controls how long the connector holds onto the global read lock while it is performing a snapshot. The default is 'minimal', which means the connector holds the global read lock (and thus prevents any updates) for just the initial portion of the snapshot while the database schemas and other metadata are being read. The remaining work in a snapshot involves selecting all rows from each table, and this can be done using the snapshot process' REPEATABLE REA [...]
 | *snapshotMode* (mysql) | The criteria for running a snapshot upon startup of the connector. Options include: 'when_needed' to specify that the connector run a snapshot upon startup whenever it deems it necessary; 'initial' (the default) to specify the connector can run a snapshot only when no offsets are available for the logical server name; 'initial_only' same as 'initial' except the connector should stop after completing the snapshot and before it would normally read the binlog; and [...]
@@ -184,14 +184,14 @@ with the following path and query parameters:
 | *binlogBufferSize* (mysql) | The size of a look-ahead buffer used by the binlog reader to decide whether the transaction in progress is going to be committed or rolled back. Use 0 to disable look-ahead buffering. Defaults to 0 (i.e. buffering is disabled). | 0 | int
 | *columnBlacklist* (mysql) | Description is not available here, please check Debezium website for corresponding key 'column.blacklist' description. | | String
 | *connectKeepAlive* (mysql) | Whether a separate thread should be used to ensure the connection is kept alive. | true | boolean
-| *connectKeepAliveIntervalMs* (mysql) | Interval in milliseconds to wait for connection checking if keep alive thread is used. | 60s | long
+| *connectKeepAliveIntervalMs* (mysql) | Interval in milliseconds to wait for connection checking if keep alive thread is used. | 1m | long
 | *connectTimeoutMs* (mysql) | Maximum time in milliseconds to wait after trying to connect to the database before timing out. | 30s | int
 | *databaseBlacklist* (mysql) | Description is not available here, please check Debezium website for corresponding key 'database.blacklist' description. | | String
 | *databaseHistory* (mysql) | The name of the DatabaseHistory class that should be used to store and recover database schema changes. The configuration properties for the history are prefixed with the 'database.history.' string. | io.debezium.relational.history.FileDatabaseHistory | String
 | *databaseHistoryFileFilename* (mysql) | The path to the file that will be used to record the database history | | String
 | *databaseHistoryKafkaBootstrap Servers* (mysql) | A list of host/port pairs that the connector will use for establishing the initial connection to the Kafka cluster for retrieving database schema history previously stored by the connector. This should point to the same Kafka cluster used by the Kafka Connect process. | | String
 | *databaseHistoryKafkaRecovery Attempts* (mysql) | The number of attempts in a row that no data are returned from Kafka before recover completes. The maximum amount of time to wait after receiving no data is (recovery.attempts) x (recovery.poll.interval.ms). | 100 | int
-| *databaseHistoryKafkaRecovery PollIntervalMs* (mysql) | The number of milliseconds to wait while polling for persisted data during recovery. | 0.1s | int
+| *databaseHistoryKafkaRecovery PollIntervalMs* (mysql) | The number of milliseconds to wait while polling for persisted data during recovery. | 100ms | int
 | *databaseHistoryKafkaTopic* (mysql) | The name of the topic for the database schema history | | String
 | *databaseHistorySkipUnparseable Ddl* (mysql) | Controls the action Debezium will take when it meets a DDL statement in binlog, that it cannot parse.By default the connector will stop operating but by changing the setting it can ignore the statements which it cannot parse. If skipping is enabled then Debezium can miss metadata changes. | false | boolean
 | *databaseHistoryStoreOnly MonitoredTablesDdl* (mysql) | Controls what DDL will Debezium store in database history.By default (false) Debezium will store all incoming DDL statements. If set to truethen only DDL that manipulates a monitored table will be stored. | false | boolean
@@ -218,7 +218,7 @@ with the following path and query parameters:
 | *gtidSourceExcludes* (mysql) | The source UUIDs used to exclude GTID ranges when determine the starting position in the MySQL server's binlog. | | String
 | *gtidSourceFilterDmlEvents* (mysql) | If set to true, we will only produce DML events into Kafka for transactions that were written on mysql servers with UUIDs matching the filters defined by the gtid.source.includes or gtid.source.excludes configuration options, if they are specified. | true | boolean
 | *gtidSourceIncludes* (mysql) | The source UUIDs used to include GTID ranges when determine the starting position in the MySQL server's binlog. | | String
-| *heartbeatIntervalMs* (mysql) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0s | int
+| *heartbeatIntervalMs* (mysql) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
 | *heartbeatTopicsPrefix* (mysql) | The prefix that is used to name heartbeat topics.Defaults to __debezium-heartbeat. | __debezium-heartbeat | String
 | *includeQuery* (mysql) | Whether the connector should include the original SQL query that generated the change event. Note: This option requires MySQL be configured with the binlog_rows_query_log_events option set to ON. Query will not be present for events generated from snapshot. WARNING: Enabling this option may expose tables or fields explicitly blacklisted or masked by including the original SQL statement in the change event. For this reason the default value is 'false'. | false | [...]
 | *includeSchemaChanges* (mysql) | Whether the connector should publish changes in the database schema to a Kafka topic with the same name as the database server ID. Each schema change will be recorded using a key that contains the database name and whose value includes the DDL statement(s).The default is 'true'. This is independent of how the connector internally records database history. | true | boolean
@@ -226,8 +226,8 @@ with the following path and query parameters:
 | *maxBatchSize* (mysql) | Maximum size of each batch of source records. Defaults to 2048. | 2048 | int
 | *maxQueueSize* (mysql) | Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size. | 8192 | int
 | *messageKeyColumns* (mysql) | A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Each expression must match the pattern ':',where the table names could be defined as (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector,and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration the table's primary key column(s) [...]
-| *pollIntervalMs* (mysql) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 0.5s | long
-| *snapshotDelayMs* (mysql) | The number of milliseconds to delay before a snapshot will begin. | 0s | long
+| *pollIntervalMs* (mysql) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 500ms | long
+| *snapshotDelayMs* (mysql) | The number of milliseconds to delay before a snapshot will begin. | 0ms | long
 | *snapshotFetchSize* (mysql) | The maximum number of records that should be loaded into memory while performing a snapshot | | int
 | *snapshotLockingMode* (mysql) | Controls how long the connector holds onto the global read lock while it is performing a snapshot. The default is 'minimal', which means the connector holds the global read lock (and thus prevents any updates) for just the initial portion of the snapshot while the database schemas and other metadata are being read. The remaining work in a snapshot involves selecting all rows from each table, and this can be done using the snapshot process' REPEATABLE REA [...]
 | *snapshotMode* (mysql) | The criteria for running a snapshot upon startup of the connector. Options include: 'when_needed' to specify that the connector run a snapshot upon startup whenever it deems it necessary; 'initial' (the default) to specify the connector can run a snapshot only when no offsets are available for the logical server name; 'initial_only' same as 'initial' except the connector should stop after completing the snapshot and before it would normally read the binlog; and [...]
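[Editor's note] The renamed default strings above (500ms, 1m, 2m, and so on) only change how the durations are displayed in the tables; on an endpoint URI the options are still plain millisecond values. A minimal, illustrative route for the MySQL connector is sketched below. The connector name, host, credentials, server id/name and file paths are placeholders, and the option names not visible in the hunks above (offsetStorageFileName, databaseServerId, databaseServerName) are assumed from the component's full option list, not taken from this commit.

[source,java]
----
import org.apache.camel.builder.RouteBuilder;

// Sketch only: consume MySQL change events with a few of the options listed above.
// Every concrete value here is a placeholder, not a value from the documentation.
public class DebeziumMySqlRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("debezium-mysql:my-connector"
                + "?databaseHostname=localhost"
                + "&databaseUser=debezium"
                + "&databasePassword=changeit"
                + "&databaseServerId=1234"                          // placeholder server id (assumed option)
                + "&databaseServerName=my-app"                      // placeholder logical name (assumed option)
                + "&databaseHistoryFileFilename=/tmp/dbhistory.dat" // see databaseHistoryFileFilename above
                + "&offsetStorageFileName=/tmp/offsets.dat"         // assumed option
                + "&pollIntervalMs=500"                             // documented default (500ms)
                + "&heartbeatIntervalMs=0"                          // heartbeats disabled (default)
                + "&snapshotDelayMs=0")                             // no snapshot delay (default)
            .log("Received change event: ${body}");
    }
}
----

Since the three timing options simply restate the documented defaults, leaving them off the URI behaves the same.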
diff --git a/docs/components/modules/ROOT/pages/debezium-postgres-component.adoc b/docs/components/modules/ROOT/pages/debezium-postgres-component.adoc
index 91df8ba..dc00be7 100644
--- a/docs/components/modules/ROOT/pages/debezium-postgres-component.adoc
+++ b/docs/components/modules/ROOT/pages/debezium-postgres-component.adoc
@@ -86,7 +86,7 @@ The Debezium PostgresSQL Connector component supports 67 options, which are list
 | *decimalHandlingMode* (postgres) | Specify how DECIMAL and NUMERIC columns should be represented in change events, including:'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the precision but will be far easier to use in [...]
 | *eventProcessingFailureHandling Mode* (postgres) | Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including:'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped;'ignore' the problematic event will be skipped. | fail | String
 | *heartbeatActionQuery* (postgres) | The query executed with every heartbeat. Defaults to an empty string. | | String
-| *heartbeatIntervalMs* (postgres) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0s | int
+| *heartbeatIntervalMs* (postgres) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
 | *heartbeatTopicsPrefix* (postgres) | The prefix that is used to name heartbeat topics.Defaults to __debezium-heartbeat. | __debezium-heartbeat | String
 | *hstoreHandlingMode* (postgres) | Specify how HSTORE columns should be represented in change events, including:'json' represents values as string-ified JSON (default)'map' represents values as a key/value map | json | String
 | *includeUnknownDatatypes* (postgres) | Specify whether the fields of data type not supported by Debezium should be processed:'false' (the default) omits the fields; 'true' converts the field into an implementation dependent binary representation. | false | boolean
@@ -95,7 +95,7 @@ The Debezium PostgresSQL Connector component supports 67 options, which are list
 | *maxQueueSize* (postgres) | Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size. | 8192 | int
 | *messageKeyColumns* (postgres) | A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Each expression must match the pattern ':',where the table names could be defined as (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector,and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration the table's primary key column [...]
 | *pluginName* (postgres) | The name of the Postgres logical decoding plugin installed on the server. Supported values are 'decoderbufs' and 'wal2json'. Defaults to 'decoderbufs'. | decoderbufs | String
-| *pollIntervalMs* (postgres) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 0.5s | long
+| *pollIntervalMs* (postgres) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 500ms | long
 | *provideTransactionMetadata* (postgres) | Enables transaction metadata extraction together with event counting | false | boolean
 | *publicationName* (postgres) | The name of the Postgres 10 publication used for streaming changes from a plugin.Defaults to 'dbz_publication' | dbz_publication | String
 | *schemaBlacklist* (postgres) | The schemas for which events must not be captured | | String
@@ -107,7 +107,7 @@ The Debezium PostgresSQL Connector component supports 67 options, which are list
 | *slotRetryDelayMs* (postgres) | The number of milli-seconds to wait between retry attempts when the connector fails to connect to a replication slot. | 10s | long
 | *slotStreamParams* (postgres) | Any optional parameters used by logical decoding plugin. Semi-colon separated. E.g. 'add-tables=public.table,public.table2;include-lsn=true' | | String
 | *snapshotCustomClass* (postgres) | When 'snapshot.mode' is set as custom, this setting must be set to specify a fully qualified class name to load (via the default class loader).This class must implement the 'Snapshotter' interface and is called on each app boot to determine whether to do a snapshot and how to build queries. | | String
-| *snapshotDelayMs* (postgres) | The number of milliseconds to delay before a snapshot will begin. | 0s | long
+| *snapshotDelayMs* (postgres) | The number of milliseconds to delay before a snapshot will begin. | 0ms | long
 | *snapshotFetchSize* (postgres) | The maximum number of records that should be loaded into memory while performing a snapshot | | int
 | *snapshotLockTimeoutMs* (postgres) | The maximum number of millis to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds | 10s | long
 | *snapshotMode* (postgres) | The criteria for running a snapshot upon startup of the connector. Options include: 'always' to specify that the connector run a snapshot each time it starts up; 'initial' (the default) to specify the connector can run a snapshot only when no offsets are available for the logical server name; 'initial_only' same as 'initial' except the connector should stop after completing the snapshot and before it would normally start emitting changes;'never' to specify t [...]
@@ -119,7 +119,7 @@ The Debezium PostgresSQL Connector component supports 67 options, which are list
 | *timePrecisionMode* (postgres) | Time, date, and timestamps can be represented with different kinds of precisions, including:'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column's precision; 'adaptive_time_microseconds' like 'adaptive' mode, but TIME fields always use microseconds precision;'connect' always represents time, date, and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which us [...]
 | *toastedValuePlaceholder* (postgres) | Specify the constant that will be provided by Debezium to indicate that the original value is a toasted value not provided by the database.If starts with 'hex:' prefix it is expected that the rest of the string repesents hexadecimally encoded octets. | __debezium_unavailable_value | String
 | *tombstonesOnDelete* (postgres) | Whether delete operations should be represented by a delete event and a subsquenttombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record got deleted. | false | boolean
-| *xminFetchIntervalMs* (postgres) | Specify how often (in ms) the xmin will be fetched from the replication slot. This xmin value is exposed by the slot which gives a lower bound of where a new replication slot could start from. The lower the value, the more likely this value is to be the current 'true' value, but the bigger the performance cost. The bigger the value, the less likely this value is to be the current 'true' value, but the lower the performance penalty. The default is set [...]
+| *xminFetchIntervalMs* (postgres) | Specify how often (in ms) the xmin will be fetched from the replication slot. This xmin value is exposed by the slot which gives a lower bound of where a new replication slot could start from. The lower the value, the more likely this value is to be the current 'true' value, but the bigger the performance cost. The bigger the value, the less likely this value is to be the current 'true' value, but the lower the performance penalty. The default is set [...]
 |===
 // component options: END
@@ -184,7 +184,7 @@ with the following path and query parameters:
 | *decimalHandlingMode* (postgres) | Specify how DECIMAL and NUMERIC columns should be represented in change events, including:'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the precision but will be far easier to use in [...]
 | *eventProcessingFailureHandling Mode* (postgres) | Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including:'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped;'ignore' the problematic event will be skipped. | fail | String
 | *heartbeatActionQuery* (postgres) | The query executed with every heartbeat. Defaults to an empty string. | | String
-| *heartbeatIntervalMs* (postgres) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0s | int
+| *heartbeatIntervalMs* (postgres) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
 | *heartbeatTopicsPrefix* (postgres) | The prefix that is used to name heartbeat topics.Defaults to __debezium-heartbeat. | __debezium-heartbeat | String
 | *hstoreHandlingMode* (postgres) | Specify how HSTORE columns should be represented in change events, including:'json' represents values as string-ified JSON (default)'map' represents values as a key/value map | json | String
 | *includeUnknownDatatypes* (postgres) | Specify whether the fields of data type not supported by Debezium should be processed:'false' (the default) omits the fields; 'true' converts the field into an implementation dependent binary representation. | false | boolean
@@ -193,7 +193,7 @@ with the following path and query parameters:
 | *maxQueueSize* (postgres) | Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size. | 8192 | int
 | *messageKeyColumns* (postgres) | A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Each expression must match the pattern ':',where the table names could be defined as (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector,and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration the table's primary key column [...]
 | *pluginName* (postgres) | The name of the Postgres logical decoding plugin installed on the server. Supported values are 'decoderbufs' and 'wal2json'. Defaults to 'decoderbufs'. | decoderbufs | String
-| *pollIntervalMs* (postgres) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 0.5s | long
+| *pollIntervalMs* (postgres) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 500ms | long
 | *provideTransactionMetadata* (postgres) | Enables transaction metadata extraction together with event counting | false | boolean
 | *publicationName* (postgres) | The name of the Postgres 10 publication used for streaming changes from a plugin.Defaults to 'dbz_publication' | dbz_publication | String
 | *schemaBlacklist* (postgres) | The schemas for which events must not be captured | | String
@@ -205,7 +205,7 @@ with the following path and query parameters:
 | *slotRetryDelayMs* (postgres) | The number of milli-seconds to wait between retry attempts when the connector fails to connect to a replication slot. | 10s | long
 | *slotStreamParams* (postgres) | Any optional parameters used by logical decoding plugin. Semi-colon separated. E.g. 'add-tables=public.table,public.table2;include-lsn=true' | | String
 | *snapshotCustomClass* (postgres) | When 'snapshot.mode' is set as custom, this setting must be set to specify a fully qualified class name to load (via the default class loader).This class must implement the 'Snapshotter' interface and is called on each app boot to determine whether to do a snapshot and how to build queries. | | String
-| *snapshotDelayMs* (postgres) | The number of milliseconds to delay before a snapshot will begin. | 0s | long
+| *snapshotDelayMs* (postgres) | The number of milliseconds to delay before a snapshot will begin. | 0ms | long
 | *snapshotFetchSize* (postgres) | The maximum number of records that should be loaded into memory while performing a snapshot | | int
 | *snapshotLockTimeoutMs* (postgres) | The maximum number of millis to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds | 10s | long
 | *snapshotMode* (postgres) | The criteria for running a snapshot upon startup of the connector. Options include: 'always' to specify that the connector run a snapshot each time it starts up; 'initial' (the default) to specify the connector can run a snapshot only when no offsets are available for the logical server name; 'initial_only' same as 'initial' except the connector should stop after completing the snapshot and before it would normally start emitting changes;'never' to specify t [...]
@@ -217,7 +217,7 @@ with the following path and query parameters:
 | *timePrecisionMode* (postgres) | Time, date, and timestamps can be represented with different kinds of precisions, including:'adaptive' (the default) bases the precision of time, date, and timestamp values on the database column's precision; 'adaptive_time_microseconds' like 'adaptive' mode, but TIME fields always use microseconds precision;'connect' always represents time, date, and timestamp values using Kafka Connect's built-in representations for Time, Date, and Timestamp, which us [...]
 | *toastedValuePlaceholder* (postgres) | Specify the constant that will be provided by Debezium to indicate that the original value is a toasted value not provided by the database.If starts with 'hex:' prefix it is expected that the rest of the string repesents hexadecimally encoded octets. | __debezium_unavailable_value | String
 | *tombstonesOnDelete* (postgres) | Whether delete operations should be represented by a delete event and a subsquenttombstone event (true) or only by a delete event (false). Emitting the tombstone event (the default behavior) allows Kafka to completely delete all events pertaining to the given key once the source record got deleted. | false | boolean
-| *xminFetchIntervalMs* (postgres) | Specify how often (in ms) the xmin will be fetched from the replication slot. This xmin value is exposed by the slot which gives a lower bound of where a new replication slot could start from. The lower the value, the more likely this value is to be the current 'true' value, but the bigger the performance cost. The bigger the value, the less likely this value is to be the current 'true' value, but the lower the performance penalty. The default is set [...]
+| *xminFetchIntervalMs* (postgres) | Specify how often (in ms) the xmin will be fetched from the replication slot. This xmin value is exposed by the slot which gives a lower bound of where a new replication slot could start from. The lower the value, the more likely this value is to be the current 'true' value, but the bigger the performance cost. The bigger the value, the less likely this value is to be the current 'true' value, but the lower the performance penalty. The default is set [...]
 |===
 // endpoint options: END
diff --git a/docs/components/modules/ROOT/pages/debezium-sqlserver-component.adoc b/docs/components/modules/ROOT/pages/debezium-sqlserver-component.adoc
index 4ac6250..5c0fc2b 100644
--- a/docs/components/modules/ROOT/pages/debezium-sqlserver-component.adoc
+++ b/docs/components/modules/ROOT/pages/debezium-sqlserver-component.adoc
@@ -72,7 +72,7 @@ The Debezium SQL Server Connector component supports 48 options, which are liste
 | *databaseHistoryFileFilename* (sqlserver) | The path to the file that will be used to record the database history | | String
 | *databaseHistoryKafkaBootstrap Servers* (sqlserver) | A list of host/port pairs that the connector will use for establishing the initial connection to the Kafka cluster for retrieving database schema history previously stored by the connector. This should point to the same Kafka cluster used by the Kafka Connect process. | | String
 | *databaseHistoryKafkaRecovery Attempts* (sqlserver) | The number of attempts in a row that no data are returned from Kafka before recover completes. The maximum amount of time to wait after receiving no data is (recovery.attempts) x (recovery.poll.interval.ms). | 100 | int
-| *databaseHistoryKafkaRecovery PollIntervalMs* (sqlserver) | The number of milliseconds to wait while polling for persisted data during recovery. | 0.1s | int
+| *databaseHistoryKafkaRecovery PollIntervalMs* (sqlserver) | The number of milliseconds to wait while polling for persisted data during recovery. | 100ms | int
 | *databaseHistoryKafkaTopic* (sqlserver) | The name of the topic for the database schema history | | String
 | *databaseHostname* (sqlserver) | Resolvable hostname or IP address of the SQL Server database server. | | String
 | *databasePassword* (sqlserver) | *Required* Password of the SQL Server database user to be used when connecting to the database. | | String
@@ -82,14 +82,14 @@ The Debezium SQL Server Connector component supports 48 options, which are liste
 | *databaseUser* (sqlserver) | Name of the SQL Server database user to be used when connecting to the database. | | String
 | *decimalHandlingMode* (sqlserver) | Specify how DECIMAL and NUMERIC columns should be represented in change events, including:'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the precision but will be far easier to use in [...]
 | *eventProcessingFailureHandling Mode* (sqlserver) | Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including:'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped;'ignore' the problematic event will be skipped. | fail | String
-| *heartbeatIntervalMs* (sqlserver) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0s | int
+| *heartbeatIntervalMs* (sqlserver) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
 | *heartbeatTopicsPrefix* (sqlserver) | The prefix that is used to name heartbeat topics.Defaults to __debezium-heartbeat. | __debezium-heartbeat | String
 | *maxBatchSize* (sqlserver) | Maximum size of each batch of source records. Defaults to 2048. | 2048 | int
 | *maxQueueSize* (sqlserver) | Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size. | 8192 | int
 | *messageKeyColumns* (sqlserver) | A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Each expression must match the pattern ':',where the table names could be defined as (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector,and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration the table's primary key colum [...]
-| *pollIntervalMs* (sqlserver) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 0.5s | long
+| *pollIntervalMs* (sqlserver) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 500ms | long
 | *provideTransactionMetadata* (sqlserver) | Enables transaction metadata extraction together with event counting | false | boolean
-| *snapshotDelayMs* (sqlserver) | The number of milliseconds to delay before a snapshot will begin. | 0s | long
+| *snapshotDelayMs* (sqlserver) | The number of milliseconds to delay before a snapshot will begin. | 0ms | long
 | *snapshotFetchSize* (sqlserver) | The maximum number of records that should be loaded into memory while performing a snapshot | | int
 | *snapshotLockTimeoutMs* (sqlserver) | The maximum number of millis to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds | 10s | long
 | *snapshotMode* (sqlserver) | The criteria for running a snapshot upon startup of the connector. Options include: 'initial' (the default) to specify the connector should run a snapshot only when no offsets are available for the logical server name; 'schema_only' to specify the connector should run a snapshot of the schema when no offsets are available for the logical server name. | initial | String
@@ -151,7 +151,7 @@ with the following path and query parameters:
 | *databaseHistoryFileFilename* (sqlserver) | The path to the file that will be used to record the database history | | String
 | *databaseHistoryKafkaBootstrap Servers* (sqlserver) | A list of host/port pairs that the connector will use for establishing the initial connection to the Kafka cluster for retrieving database schema history previously stored by the connector. This should point to the same Kafka cluster used by the Kafka Connect process. | | String
 | *databaseHistoryKafkaRecovery Attempts* (sqlserver) | The number of attempts in a row that no data are returned from Kafka before recover completes. The maximum amount of time to wait after receiving no data is (recovery.attempts) x (recovery.poll.interval.ms). | 100 | int
-| *databaseHistoryKafkaRecovery PollIntervalMs* (sqlserver) | The number of milliseconds to wait while polling for persisted data during recovery. | 0.1s | int
+| *databaseHistoryKafkaRecovery PollIntervalMs* (sqlserver) | The number of milliseconds to wait while polling for persisted data during recovery. | 100ms | int
 | *databaseHistoryKafkaTopic* (sqlserver) | The name of the topic for the database schema history | | String
 | *databaseHostname* (sqlserver) | Resolvable hostname or IP address of the SQL Server database server. | | String
 | *databasePassword* (sqlserver) | *Required* Password of the SQL Server database user to be used when connecting to the database. | | String
@@ -161,14 +161,14 @@ with the following path and query parameters:
 | *databaseUser* (sqlserver) | Name of the SQL Server database user to be used when connecting to the database. | | String
 | *decimalHandlingMode* (sqlserver) | Specify how DECIMAL and NUMERIC columns should be represented in change events, including:'precise' (the default) uses java.math.BigDecimal to represent values, which are encoded in the change events using a binary representation and Kafka Connect's 'org.apache.kafka.connect.data.Decimal' type; 'string' uses string to represent values; 'double' represents values using Java's 'double', which may not offer the precision but will be far easier to use in [...]
 | *eventProcessingFailureHandling Mode* (sqlserver) | Specify how failures during processing of events (i.e. when encountering a corrupted event) should be handled, including:'fail' (the default) an exception indicating the problematic event and its position is raised, causing the connector to be stopped; 'warn' the problematic event and its position will be logged and the event will be skipped;'ignore' the problematic event will be skipped. | fail | String
-| *heartbeatIntervalMs* (sqlserver) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0s | int
+| *heartbeatIntervalMs* (sqlserver) | Length of an interval in milli-seconds in in which the connector periodically sends heartbeat messages to a heartbeat topic. Use 0 to disable heartbeat messages. Disabled by default. | 0ms | int
 | *heartbeatTopicsPrefix* (sqlserver) | The prefix that is used to name heartbeat topics.Defaults to __debezium-heartbeat. | __debezium-heartbeat | String
 | *maxBatchSize* (sqlserver) | Maximum size of each batch of source records. Defaults to 2048. | 2048 | int
 | *maxQueueSize* (sqlserver) | Maximum size of the queue for change events read from the database log but not yet recorded or forwarded. Defaults to 8192, and should always be larger than the maximum batch size. | 8192 | int
 | *messageKeyColumns* (sqlserver) | A semicolon-separated list of expressions that match fully-qualified tables and column(s) to be used as message key. Each expression must match the pattern ':',where the table names could be defined as (DB_NAME.TABLE_NAME) or (SCHEMA_NAME.TABLE_NAME), depending on the specific connector,and the key columns are a comma-separated list of columns representing the custom key. For any table without an explicit key configuration the table's primary key colum [...]
-| *pollIntervalMs* (sqlserver) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 0.5s | long
+| *pollIntervalMs* (sqlserver) | Frequency in milliseconds to wait for new change events to appear after receiving no events. Defaults to 500ms. | 500ms | long
 | *provideTransactionMetadata* (sqlserver) | Enables transaction metadata extraction together with event counting | false | boolean
-| *snapshotDelayMs* (sqlserver) | The number of milliseconds to delay before a snapshot will begin. | 0s | long
+| *snapshotDelayMs* (sqlserver) | The number of milliseconds to delay before a snapshot will begin. | 0ms | long
 | *snapshotFetchSize* (sqlserver) | The maximum number of records that should be loaded into memory while performing a snapshot | | int
 | *snapshotLockTimeoutMs* (sqlserver) | The maximum number of millis to wait for table locks at the beginning of a snapshot. If locks cannot be acquired in this time frame, the snapshot will be aborted. Defaults to 10 seconds | 10s | long
 | *snapshotMode* (sqlserver) | The criteria for running a snapshot upon startup of the connector. Options include: 'initial' (the default) to specify the connector should run a snapshot only when no offsets are available for the logical server name; 'schema_only' to specify the connector should run a snapshot of the schema when no offsets are available for the logical server name. | initial | String
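[Editor's note] The SQL Server tables follow the same pattern as the other connectors. Purely as an illustration, an endpoint using a handful of the options documented above could look roughly like the sketch below; the connection details, database name and the offset/history file options not shown in the hunks are assumptions, not values from this commit.

[source,java]
----
import org.apache.camel.builder.RouteBuilder;

// Sketch only: SQL Server variant built from the options documented above.
// databasePassword is the only option marked *Required* in the visible hunks;
// all concrete values are placeholders.
public class DebeziumSqlServerRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("debezium-sqlserver:my-connector"
                + "?databaseHostname=localhost"
                + "&databaseUser=sa"
                + "&databasePassword=changeit"
                + "&databaseDbname=inventory"                       // assumed option, placeholder value
                + "&databaseServerName=my-app"                      // assumed option, placeholder value
                + "&databaseHistoryFileFilename=/tmp/dbhistory.dat" // see databaseHistoryFileFilename above
                + "&offsetStorageFileName=/tmp/offsets.dat"         // assumed option
                + "&snapshotMode=initial"                           // documented default
                + "&pollIntervalMs=500"                             // documented default (500ms)
                + "&maxBatchSize=2048")                             // documented default
            .log("Received change event: ${body}");
    }
}
----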