[ https://issues.apache.org/jira/browse/GEODE-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17228801#comment-17228801 ]
ASF GitHub Bot commented on GEODE-8626:
---------------------------------------

jchen21 commented on a change in pull request #5637:
URL: https://github.com/apache/geode/pull/5637#discussion_r520038736


##########
File path: geode-connectors/src/test/java/org/apache/geode/connectors/jdbc/internal/JdbcConnectorServiceTest.java
##########
@@ -216,4 +249,213 @@ public void validateMappingSucceedsWithCompositeKeys() {
     when(mapping.getIds()).thenReturn(KEY_COLUMN_NAME + "," + COMPOSITE_KEY_COLUMN_NAME);
     service.validateMapping(mapping);
   }
+
+  @Test
+  public void getTableMetaDataViewSucceeds() {
+    TableMetaDataView result = service.getTableMetaDataView(mapping);
+    assertThat(result).isEqualTo(view);
+    verify(manager).getTableMetaDataView(connection, mapping);
+  }
+
+  @Test
+  public void getTableMetaDataViewThrowsExceptionWhenDataSourceDoesNotExist() {
+    doReturn(null).when(service).getDataSource(DATA_SOURCE_NAME);
+    Throwable throwable = catchThrowable(() -> service.getTableMetaDataView(mapping));
+    assertThat(throwable).isInstanceOf(JdbcConnectorException.class).hasMessageContaining(
+        String.format("No datasource \"%s\" found when getting table meta data \"%s\"",
+            mapping.getDataSourceName(), mapping.getRegionName()));
+  }
+
+  @Test
+  public void getTableMetaDataViewThrowsExceptionWhenGetConnectionHasSqlException()
+      throws SQLException {
+    when(dataSource.getConnection()).thenThrow(SQLException.class);

Review comment:
Minor nitpick. I would recommend giving the `SQLException` an error message, e.g.
```
String reason = "connection failed";
when(dataSource.getConnection()).thenThrow(new SQLException(reason));
```
Then the error message asserted in the next few lines would be `Exception thrown while connecting to datasource \"dataSource\": connection failed` instead of `Exception thrown while connecting to datasource \"dataSource\": null`.
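For reference, a minimal sketch of the full test with that suggestion applied, reusing the fixtures visible in the hunk above (`dataSource`, `service`, `mapping`); the expected message is taken from the exception text quoted in the comment, and the actual assertion in the PR may differ slightly:
```
@Test
public void getTableMetaDataViewThrowsExceptionWhenGetConnectionHasSqlException()
    throws SQLException {
  // Give the mocked failure a concrete message so the assertion below is meaningful.
  String reason = "connection failed";
  when(dataSource.getConnection()).thenThrow(new SQLException(reason));

  Throwable throwable = catchThrowable(() -> service.getTableMetaDataView(mapping));

  // The wrapped JdbcConnectorException should now report "connection failed" rather than "null".
  assertThat(throwable).isInstanceOf(JdbcConnectorException.class).hasMessageContaining(
      "Exception thrown while connecting to datasource \"" + mapping.getDataSourceName()
          + "\": " + reason);
}
```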
##########
File path: geode-connectors/src/acceptanceTest/java/org/apache/geode/connectors/jdbc/GfshJdbcMappingIntegrationTest.java
##########
@@ -0,0 +1,208 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.connectors.jdbc;
+
+import java.lang.reflect.Constructor;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.cache.configuration.JndiBindingsType;
+import org.apache.geode.cache.configuration.RegionConfig;
+import org.apache.geode.cache.execute.Function;
+import org.apache.geode.cache.execute.ResultCollector;
+import org.apache.geode.connectors.jdbc.internal.cli.CreateMappingFunction;
+import org.apache.geode.connectors.jdbc.internal.cli.CreateMappingPreconditionCheckFunction;
+import org.apache.geode.connectors.jdbc.internal.configuration.FieldMapping;
+import org.apache.geode.connectors.jdbc.internal.configuration.RegionMapping;
+import org.apache.geode.distributed.DistributedMember;
+import org.apache.geode.internal.cache.InternalCache;
+import org.apache.geode.management.configuration.RegionType;
+import org.apache.geode.management.internal.cli.commands.CreateJndiBindingCommand;
+import org.apache.geode.management.internal.cli.functions.CreateJndiBindingFunction;
+import org.apache.geode.management.internal.cli.functions.CreateRegionFunctionArgs;
+import org.apache.geode.management.internal.cli.functions.RegionCreateFunction;
+import org.apache.geode.management.internal.configuration.converters.RegionConverter;
+import org.apache.geode.management.internal.functions.CliFunctionResult;
+import org.apache.geode.management.internal.util.ManagementUtils;
+
+public class GfshJdbcMappingIntegrationTest extends JdbcMappingIntegrationTest {
+
+  @Override
+  protected InternalCache createCacheAndCreateJdbcMapping(String cacheXmlTestName)
+      throws Exception {
+    InternalCache cache =
+        (InternalCache) new CacheFactory().set("locators", "").set("mcast-port", "0").create();
+    Set<DistributedMember> targetMembers = findMembers(cache, null, null);
+
+    CliFunctionResult createRegionFuncResult = executeCreateRegionFunction(targetMembers);

Review comment:
This is an integration test, which tests the behavior of gfsh commands from a user's point of view. I don't think this test should use the internal functions of a specific gfsh command's implementation; users should not have to worry about how gfsh is implemented. I also don't recommend using a lot of `System.out.println` in the tests — is that for debugging purposes? If you have to use internal functions, I would recommend testing them in a unit test, or in another integration test or dunit test that specifically targets that gfsh command. You said you use the internal functions because some things cannot be verified with `GfshRule` when errors occur. Can you give a specific example?
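To illustrate the kind of user-level test being suggested, here is a rough sketch that drives the same setup purely through gfsh commands. The class name is hypothetical, the connection URL and locator port are placeholders, and the `GfshRule`/`GfshScript` usage is an assumption based on Geode's acceptance-test utilities rather than the PR's actual code:
```
import org.junit.Rule;
import org.junit.Test;

import org.apache.geode.test.junit.rules.gfsh.GfshRule;
import org.apache.geode.test.junit.rules.gfsh.GfshScript;

public class GfshJdbcMappingAcceptanceSketch {

  @Rule
  public GfshRule gfshRule = new GfshRule();

  @Test
  public void createsJdbcMappingThroughGfshOnly() {
    // Run the same commands a user would type, instead of invoking internal
    // CreateRegionFunction/CreateJndiBindingFunction/CreateMappingFunction directly.
    GfshScript.of(
        "start locator --name=locator --port=10334", // placeholder port
        "start server --name=server --locators=localhost[10334]",
        "create data-source --name=TestDataSource --url=jdbc:mysql://localhost/testdb", // placeholder URL
        "create region --name=Region1 --type=REPLICATE",
        "create jdbc-mapping --region=Region1 --data-source=TestDataSource"
            + " --table=employees --pdx-name=org.apache.geode.connectors.jdbc.Employee --id=id",
        "describe jdbc-mapping --region=Region1")
        .execute(gfshRule);
  }
}
```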
##########
File path: geode-connectors/src/test/java/org/apache/geode/connectors/jdbc/internal/cli/CreateMappingPreconditionCheckFunctionTest.java
##########
@@ -172,16 +168,6 @@ public void executeFunctionThrowsIfDataSourceDoesNotExist() {
         + DATA_SOURCE_NAME + "'.");
   }
 
-  @Test

Review comment:
👍

##########
File path: geode-connectors/src/acceptanceTest/java/org/apache/geode/connectors/jdbc/CacheXmlJdbcMappingIntegrationTest.java
##########
@@ -0,0 +1,89 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License
+ * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
+ * or implied. See the License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.geode.connectors.jdbc;
+
+import static org.apache.geode.test.util.ResourceUtils.createTempFileFromResource;
+
+import org.junit.Rule;
+import org.junit.contrib.java.lang.system.RestoreSystemProperties;
+
+import org.apache.geode.cache.CacheFactory;
+import org.apache.geode.internal.cache.InternalCache;
+
+public class CacheXmlJdbcMappingIntegrationTest extends JdbcMappingIntegrationTest {
+
+  @Rule
+  public RestoreSystemProperties restoreSystemProperties = new RestoreSystemProperties();
+
+  @Override
+  protected InternalCache createCacheAndCreateJdbcMapping(String cacheXmlTestName)
+      throws Exception {
+    String url = dbRule.getConnectionUrl().replaceAll("&", "&amp;");

Review comment:
👍

##########
File path: geode-connectors/src/main/java/org/apache/geode/connectors/jdbc/internal/JdbcConnectorServiceImpl.java
##########
@@ -210,4 +224,152 @@ private TableMetaDataView getTableMetaDataView(RegionMapping regionMapping,
           + regionMapping.getDataSourceName() + "\": ", ex);
     }
   }
+
+  @Override
+  public TableMetaDataView getTableMetaDataView(RegionMapping regionMapping) {
+    DataSource dataSource = getDataSource(regionMapping.getDataSourceName());
+    if (dataSource == null) {
+      throw new JdbcConnectorException("No datasource \"" + regionMapping.getDataSourceName()
+          + "\" found when getting table meta data \"" + regionMapping.getRegionName() + "\"");

Review comment:
I was trying to say:
```
if (dataSource == null) {
  throw new JdbcConnectorException("No datasource \"" + regionMapping.getDataSourceName() + "\"");
}
```
If you would like to provide more information:
```
if (dataSource == null) {
  throw new JdbcConnectorException("No datasource \"" + regionMapping.getDataSourceName()
      + "\" found when getting table meta data \"" + regionMapping.getTableName() + "\"");
}
```
Note that it is `regionMapping.getTableName()`, not `regionMapping.getRegionName()`.

##########
File path: geode-connectors/src/test/java/org/apache/geode/connectors/jdbc/internal/cli/CreateMappingPreconditionCheckFunctionTest.java
##########
@@ -306,45 +292,10 @@ public void executeFunctionThrowsGivenPdxSerializableWithNoZeroArgConstructor()
         "Could not generate a PdxType for the class org.apache.geode.connectors.jdbc.internal.cli.CreateMappingPreconditionCheckFunctionTest$PdxClassDummyNoZeroArg because it did not have a public zero arg constructor. Details: java.lang.NoSuchMethodException: org.apache.geode.connectors.jdbc.internal.cli.CreateMappingPreconditionCheckFunctionTest$PdxClassDummyNoZeroArg.<init>()");
   }
 
-  @Test

Review comment:
I understand that. However, none of the `JdbcConnectorServiceTest.createDefaultFieldMapping*` tests uses `ReflectionBasedAutoSerializer`, although those tests do verify the field mapping. I expect there to be some test that uses `ReflectionBasedAutoSerializer` and also verifies the field mapping, like the test deleted below.
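As a rough illustration of the coverage being asked for, a sketch of a test that registers `ReflectionBasedAutoSerializer` on the cache and then checks that a default field mapping was generated. The class name and the `createJdbcMappingWithoutFieldMappings` helper are hypothetical stand-ins for whatever fixture the PR actually provides, and the accessor used in the assertion is an assumption:
```
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.Test;

import org.apache.geode.cache.CacheFactory;
import org.apache.geode.connectors.jdbc.internal.configuration.RegionMapping;
import org.apache.geode.internal.cache.InternalCache;
import org.apache.geode.pdx.ReflectionBasedAutoSerializer;

public class ReflectionAutoSerializerMappingSketch {

  @Test
  public void createsDefaultFieldMappingWhenCacheUsesReflectionBasedAutoSerializer()
      throws Exception {
    InternalCache cache = (InternalCache) new CacheFactory()
        .set("locators", "")
        .set("mcast-port", "0")
        // Register the auto serializer instead of an explicit PdxSerializable domain class.
        .setPdxSerializer(
            new ReflectionBasedAutoSerializer("org.apache.geode.connectors.jdbc.Employee"))
        .create();
    try {
      // Hypothetical helper standing in for however the PR creates a mapping with no
      // <jdbc:field-mapping> entries (via cache.xml or gfsh).
      RegionMapping mapping = createJdbcMappingWithoutFieldMappings(cache);

      // The defaults should still be derived from the PDX type and the table metadata.
      assertThat(mapping.getFieldMappings()).isNotEmpty();
    } finally {
      cache.close();
    }
  }

  private RegionMapping createJdbcMappingWithoutFieldMappings(InternalCache cache) {
    throw new UnsupportedOperationException("illustrative placeholder");
  }
}
```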
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Omitting field-mapping tag of cache.xml when using Simple JDBC Connector
> ------------------------------------------------------------------------
>
>          Key: GEODE-8626
>          URL: https://issues.apache.org/jira/browse/GEODE-8626
>      Project: Geode
>   Issue Type: Improvement
>   Components: jdbc
>     Reporter: Masaki Yamakawa
>     Priority: Minor
>       Labels: pull-request-available
>
> When configuring the Simple JDBC Connector with gfsh, I don't need to create a field-mapping; the default field-mapping is created from the PDX and table metadata.
> On the other hand, when using cache.xml (cluster.xml), the PDX and table metadata cannot be used, and the field-mapping must be described in cache.xml.
> I would like the field-mapping defaults to be created from the PDX and table metadata when using cache.xml as well.
> If a field-mapping is specified in cache.xml, the XML setting has priority; the defaults are generated only if there are no field-mapping tags.
> cache.xml will be as follows:
> {code:java}
> <region name="Region1" refid="REPLICATE">
>   <jdbc:mapping
>       data-source="TestDataSource"
>       table="employees"
>       pdx-name="org.apache.geode.connectors.jdbc.Employee"
>       ids="id">
>     <!-- no need for jdbc:field-mapping tags
>     <jdbc:field-mapping pdx-name="id" pdx-type="STRING" jdbc-name="id" jdbc-type="VARCHAR" jdbc-nullable="false"/>
>     <jdbc:field-mapping pdx-name="name" pdx-type="STRING" jdbc-name="name" jdbc-type="VARCHAR" jdbc-nullable="true"/>
>     <jdbc:field-mapping pdx-name="age" pdx-type="INT" jdbc-name="age" jdbc-type="INTEGER" jdbc-nullable="true"/>
>     -->
>   </jdbc:mapping>
> </region>
> {code}


--
This message was sent by Atlassian Jira
(v8.3.4#803005)