Copilot commented on code in PR #298:
URL: https://github.com/apache/bigtop-manager/pull/298#discussion_r3005915652


##########
bigtop-manager-ai/src/main/java/org/apache/bigtop/manager/ai/platform/OpenAIAssistant.java:
##########
@@ -58,16 +64,44 @@ public static Builder builder() {
 
     public static class Builder extends AbstractAIAssistant.Builder {
 
+        @Override
+        protected String resolveModelsBaseUrl() {
+        Map<String, String> credentials = config == null ? null : config.getCredentials();
+            if (credentials != null) {
+                String baseUrl = credentials.get("baseUrl");
+                if (baseUrl != null && !baseUrl.isBlank()) {
+                    return baseUrl;
+                }
+            }
+            return resolveDefaultBaseUrl();
+        }
+
+        private String resolveDefaultBaseUrl() {
+            String envBaseUrl = System.getenv(BASE_URL_ENV_KEY);
+            if (envBaseUrl != null && !envBaseUrl.isBlank()) {
+                return envBaseUrl;
+            }
+            return BASE_URL;
+        }
+
         @Override
         public ChatModel getChatModel() {
             String model = config.getModel();
             Assert.notNull(model, "model must not be null");
             String apiKey = config.getCredentials().get("apiKey");
             Assert.notNull(apiKey, "apiKey must not be null");
 
-            OpenAiApi openAiApi =
-                    OpenAiApi.builder().baseUrl(BASE_URL).apiKey(apiKey).build();
-            OpenAiChatOptions options = OpenAiChatOptions.builder().model(model).build();
+            OpenAiApi openAiApi = OpenAiApi.builder()
+                    .baseUrl(resolveDefaultBaseUrl())
+                    .apiKey(apiKey)
+                    .build();
+            OpenAiChatOptions.Builder optionsBuilder =
+                    OpenAiChatOptions.builder().model(model);
+            List<io.modelcontextprotocol.client.McpAsyncClient> mcpClients = getMcpAsyncClients();
+            if (!mcpClients.isEmpty()) {
+                optionsBuilder.toolCallbacks(buildObservedToolCallbacks(mcpClients));
+            }
+            OpenAiChatOptions options = optionsBuilder.build();

Review Comment:
   `OpenAIAssistant` supports a credential-provided `baseUrl` for model 
discovery, but chat requests still always use `resolveDefaultBaseUrl()` 
(ignoring that credential). This can make model listing succeed against a 
custom endpoint while chat requests go to a different host. Consider using the 
same base-url resolution for both chat and model listing to keep behavior 
consistent.
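
   The precedence the comment implies (credential `baseUrl`, then environment variable, then built-in default) can be factored into one resolver shared by chat and model listing. A minimal sketch, not project code: the constant values and the env-key name are illustrative, and the environment lookup is injected instead of calling `System.getenv` directly so the precedence is testable.
   ```java
   import java.util.Map;
   import java.util.function.Function;

   // Single base-url resolution shared by chat and model listing.
   // BASE_URL and BASE_URL_ENV_KEY mirror the constants referenced in the
   // diff; the values below are illustrative placeholders.
   class BaseUrlResolver {

       static final String BASE_URL = "https://api.openai.com";
       static final String BASE_URL_ENV_KEY = "OPENAI_BASE_URL";

       private final Function<String, String> env; // e.g. System::getenv in production

       BaseUrlResolver(Function<String, String> env) {
           this.env = env;
       }

       // Precedence: credential "baseUrl" > environment variable > built-in default.
       String resolve(Map<String, String> credentials) {
           if (credentials != null) {
               String baseUrl = credentials.get("baseUrl");
               if (baseUrl != null && !baseUrl.isBlank()) {
                   return baseUrl;
               }
           }
           String envBaseUrl = env.apply(BASE_URL_ENV_KEY);
           if (envBaseUrl != null && !envBaseUrl.isBlank()) {
               return envBaseUrl;
           }
           return BASE_URL;
       }
   }
   ```
   With both `getChatModel()` and model discovery funneled through the same `resolve(...)`, a credential-supplied endpoint cannot diverge between the two paths.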



##########
bigtop-manager-server/src/main/java/org/apache/bigtop/manager/server/service/impl/ChatbotServiceImpl.java:
##########
@@ -129,19 +131,49 @@ public List<ChatThreadVO> getAllChatThreads() {
 
     @Override
     public SseEmitter talk(Long threadId, ChatbotCommand command, String message) {
-        AIAssistant aiAssistant = prepareTalk(threadId, command);
+        SseEmitter emitter = new SseEmitter(CHAT_SSE_TIMEOUT_MILLIS);
+        AtomicBoolean emitterClosed = new AtomicBoolean(false);
+        AtomicReference<Disposable> subscriptionRef = new AtomicReference<>();
+        AIAssistant aiAssistant = prepareTalk(threadId, command, emitter, emitterClosed);
 
         Flux<String> stringFlux =
                (command == null) ? aiAssistant.streamAsk(message) : Flux.just(aiAssistant.ask(message));

Review Comment:
   `Flux.just(aiAssistant.ask(message))` evaluates `ask()` eagerly (before 
subscription). With tool-execution events now emitted during `ask()`, this can 
send SSE events before the emitter lifecycle handlers are registered / before 
the client is fully connected. Use a deferred publisher (e.g., 
`Flux.defer`/`Mono.fromCallable`) for the non-streaming path so execution 
starts on subscribe and can be cancelled/cleaned up consistently.
   ```suggestion
                   (command == null)
                           ? aiAssistant.streamAsk(message)
                          : Flux.defer(() -> Flux.just(aiAssistant.ask(message)));
   ```



##########
bigtop-manager-ui/src/features/ai-assistant/chat-message.vue:
##########
@@ -34,7 +47,41 @@
     </div>
     <article class="chat-item-message">
       <div class="msg-wrp">
-        <markdown-view :mark-raw="$props.record.message" />
+        <template v-if="isTool">
+          <a-collapse v-model:active-key="activeKey" :bordered="false" :ghost="true" class="tool-collapse">
+            <a-collapse-panel :key="props.record.executionId || props.record.toolName || 'tool'">
+              <template #header>
+                <div class="tool-meta">
+                  <span class="tool-name">{{ props.record.toolName }}</span>
+                  <a-tag
+                    :color="
+                      props.record.toolStatus === 'failed'
+                        ? 'error'
+                        : props.record.toolStatus === 'completed'
+                          ? 'success'
+                          : 'processing'
+                    "
+                  >
+                    {{ props.record.toolStatus }}
+                  </a-tag>
+                </div>
+              </template>
+              <div class="tool-section">
+                <span class="tool-section-title">Input</span>
+                <markdown-view :mark-raw="toolInput" />
+              </div>
+              <div v-if="props.record.toolStatus === 'completed'" class="tool-section">
+                <span class="tool-section-title">Output</span>
+                <markdown-view :mark-raw="toolOutput" />
+              </div>
+              <div v-if="props.record.toolStatus === 'failed'" class="tool-section">
+                <span class="tool-section-title">Error</span>
+                <markdown-view :mark-raw="toolError" />

Review Comment:
   This component introduces new user-visible labels ("Input", "Output", 
"Error") and displays raw status strings (started/completed/failed) without 
i18n. Other ai-assistant components use `useI18n()`, so these should be 
translated and status values mapped to localized labels.



##########
pom.xml:
##########
@@ -161,6 +161,17 @@
         </dependency>
     </dependencies>
 
+    <repositories>
+        <repository>
+            <snapshots>
+                <enabled>false</enabled>
+            </snapshots>
+            <id>spring-milestones</id>
+            <name>Spring Milestones</name>
+            <url>https://repo.spring.io/milestone</url>
+        </repository>
+    </repositories>

Review Comment:
   The added Spring milestone repository appears unnecessary for Spring AI 
1.1.3 / spring-ai-starter-mcp-client-webflux (both are published to Maven 
Central). Keeping an extra repository can make builds less reproducible and 
increases supply-chain surface; consider removing it (or gating it behind a 
profile only when needed).
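
   If the milestone repository does become necessary later (e.g. for a pre-release Spring AI artifact), it can be gated behind a profile so that default builds resolve from Maven Central only. A sketch (the profile id is illustrative):
   ```xml
   <profiles>
       <profile>
           <id>spring-milestones</id>
           <repositories>
               <repository>
                   <id>spring-milestones</id>
                   <name>Spring Milestones</name>
                   <url>https://repo.spring.io/milestone</url>
                   <snapshots>
                       <enabled>false</enabled>
                   </snapshots>
               </repository>
           </repositories>
       </profile>
   </profiles>
   ```
   Builds that need it would then opt in with `mvn -Pspring-milestones ...`.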



##########
bigtop-manager-server/src/main/resources/application.yml:
##########
@@ -25,6 +25,24 @@ spring:
         type: ASYNC
         sse-endpoint: /mcp/sse
         sse-message-endpoint: /mcp/messages
+      client:
+        enabled: true
+        request-timeout-seconds: 120
+        init-timeout-seconds: 20
+        # Optional multi-connection definition.
+        # Format:
+        #   name=<id>,type=<sse|streamable-http|local>,baseUrl=<url>,endpoint=<path>,command=<cmd>,args=<a|b>,env=<K:V|K2:V2>,requestTimeoutSeconds=<n>,initTimeoutSeconds=<n>;
+        # Notes:
+        #   - sse / streamable-http use baseUrl (+ optional endpoint)
+        #   - local uses command/args/env
+        #   - requestTimeoutSeconds/initTimeoutSeconds are optional per-connection overrides
+        # Example:
+        # connections: >
+        #   name=embedded,type=sse,baseUrl=http://localhost:8080,endpoint=/mcp/sse;
+        #   name=remote,type=streamable-http,baseUrl=http://127.0.0.1:9000,endpoint=/mcp,requestTimeoutSeconds=180;
+        #   name=memory,type=local,command=npx,args=-y|@modelcontextprotocol/server-memory,env=NODE_ENV:production
+        connections: >
+            name=bm,type=sse,baseUrl=http://localhost:8080,endpoint=/mcp/sse

Review Comment:
   MCP client is enabled by default and ships with a default connection to 
`http://localhost:8080`. This can lead to unexpected outbound 
connections/timeouts in real deployments and makes the feature non-optional by 
default. Consider defaulting `spring.ai.mcp.client.enabled` to false and 
leaving `connections` empty (keep examples commented) so operators opt in 
explicitly.
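
   An opt-in shape along those lines, sketched from the property names in the snippet above (values illustrative):
   ```yaml
         client:
           enabled: false
           request-timeout-seconds: 120
           init-timeout-seconds: 20
           # Operators enable the client and define connections explicitly, e.g.:
           # connections: >
           #   name=bm,type=sse,baseUrl=http://localhost:8080,endpoint=/mcp/sse
   ```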



##########
bigtop-manager-server/src/main/java/org/apache/bigtop/manager/server/model/converter/AuthPlatformConverter.java:
##########
@@ -54,6 +54,7 @@ default Map<String, String> mapAuthCredentials(List<AuthCredentialReq> authCrede
             return null;
         }
         return authCredentials.stream()
+                .filter(item -> item != null && item.getKey() != null)
                .collect(Collectors.toMap(AuthCredentialReq::getKey, AuthCredentialReq::getValue));

Review Comment:
   `Collectors.toMap(AuthCredentialReq::getKey, AuthCredentialReq::getValue)` 
will throw a NullPointerException if any credential has a null value (e.g., 
frontend sends `{key: "apiKey"}` when `value` is undefined). Filter out null 
values (and ideally handle duplicate keys via a merge function) to avoid 500s 
on the new models endpoint and existing auth-platform endpoints.
   ```suggestion
                .filter(item -> item != null && item.getKey() != null && item.getValue() != null)
                .collect(Collectors.toMap(AuthCredentialReq::getKey, AuthCredentialReq::getValue, (existing, replacement) -> replacement));
   ```
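
   The failure mode is easy to reproduce outside the project: `Collectors.toMap` without a merge function rejects null values (its accumulator goes through `Map.merge`). A standalone sketch using a stub entry type rather than the real `AuthCredentialReq`:
   ```java
   import java.util.AbstractMap.SimpleEntry;
   import java.util.List;
   import java.util.Map;
   import java.util.stream.Collectors;

   class ToMapNullDemo {

       // Filtered + merging variant, mirroring the suggestion above:
       // entries with null keys or values are skipped, later duplicates win.
       static Map<String, String> safeToMap(List<SimpleEntry<String, String>> entries) {
           return entries.stream()
                   .filter(e -> e != null && e.getKey() != null && e.getValue() != null)
                   .collect(Collectors.toMap(SimpleEntry::getKey, SimpleEntry::getValue,
                           (existing, replacement) -> replacement));
       }

       // Plain toMap throws NullPointerException on the first null value.
       static boolean plainToMapThrows(List<SimpleEntry<String, String>> entries) {
           try {
               entries.stream().collect(Collectors.toMap(SimpleEntry::getKey, SimpleEntry::getValue));
               return false;
           } catch (NullPointerException e) {
               return true;
           }
       }
   }
   ```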



##########
bigtop-manager-server/src/main/java/org/apache/bigtop/manager/server/mcp/tool/StackMcpTool.java:
##########
@@ -51,7 +54,7 @@ public List<StackVO> listStacks() {
             stackVO.setServices(ServiceConverter.INSTANCE.fromDTO2VO(serviceDTOList));
             stackVOList.add(stackVO);
         }
-
+        log.info("ListStacks tool called, total stacks: {}", stackVOList.size());

Review Comment:
   Logging every tool invocation at INFO (`ListStacks tool called...`) may be 
noisy if tools are invoked frequently during chats. Consider lowering this to 
DEBUG (or sampling) to avoid log volume growth in production.
   ```suggestion
        log.debug("ListStacks tool called, total stacks: {}", stackVOList.size());
   ```



##########
bigtop-manager-server/src/main/java/org/apache/bigtop/manager/server/service/impl/LLMConfigServiceImpl.java:
##########
@@ -78,15 +73,61 @@ public class LLMConfigServiceImpl implements LLMConfigService {
     @Resource
     private AIAssistantFactory aiAssistantFactory;
 
-    private static final String TEST_FLAG = "ZmxhZw==";
-    private static final String TEST_KEY = "bm";
-
     @Override
     public List<PlatformVO> platforms() {
         List<PlatformPO> platformPOs = platformDao.findAll();
         return PlatformConverter.INSTANCE.fromPO2VO(platformPOs);
     }
 
+    private List<String> getDynamicModels(PlatformPO platformPO, Map<String, String> explicitCreds) {
+        PlatformType platformType =
+                PlatformType.getPlatformType(platformPO.getName().toLowerCase());
+        if (platformType == null) {
+            return getDefaultModels(platformPO);
+        }
+
+        Map<String, String> creds = new HashMap<>();
+
+        if (explicitCreds != null && !explicitCreds.isEmpty()) {
+            creds.putAll(explicitCreds);
+        } else {
+            List<AuthPlatformPO> authPlatformPOs = authPlatformDao.findAll();
+            for (AuthPlatformPO auth : authPlatformPOs) {
+                if (auth.getPlatformId().equals(platformPO.getId()) && !auth.getIsDeleted()) {
+                    creds = AuthPlatformConverter.INSTANCE.fromPO2DTO(auth).getAuthCredentials();
+                    break;
+                }
+            }
+        }
+
+        GeneralAssistantConfig config = GeneralAssistantConfig.builder()
+                .setPlatformType(platformType)
+                .setModel("dummy")
+                .addCredentials(creds)
+                .build();
+
+        List<String> models = aiAssistantFactory.getModels(config);
+        if (models != null && !models.isEmpty()) {
+            return models;
+        }
+
+        return getDefaultModels(platformPO);
+    }
+
+    private List<String> getDefaultModels(PlatformPO platformPO) {
+        String supportModels = platformPO.getSupportModels();
+        if (supportModels == null || supportModels.isBlank()) {
+            return Collections.emptyList();
+        }
+        return List.of(supportModels.split(","));

Review Comment:
   `getDefaultModels` returns `List.of(supportModels.split(","))` without 
trimming/filtering blanks. If `supportModels` contains spaces or trailing 
commas, callers may see invalid model IDs (e.g., " gpt-4" or ""). Consider 
trimming each entry and filtering empty strings before returning.
   ```suggestion
           String[] rawModels = supportModels.split(",");
           List<String> models = new ArrayList<>();
           for (String rawModel : rawModels) {
               if (rawModel == null) {
                   continue;
               }
               String trimmed = rawModel.trim();
               if (!trimmed.isEmpty()) {
                   models.add(trimmed);
               }
           }
           if (models.isEmpty()) {
               return Collections.emptyList();
           }
           return models;
   ```



##########
bigtop-manager-server/src/main/java/org/apache/bigtop/manager/server/controller/LLMConfigController.java:
##########
@@ -69,6 +70,15 @@ public ResponseEntity<List<PlatformAuthCredentialVO>> platformsAuthCredential(
         return ResponseEntity.success(llmConfigService.platformsAuthCredentials(platformId));
     }
 
+    @Operation(summary = "list platform models", description = "List models from /v1/models")
+    @PostMapping("/platforms/{platformId}/models")
+    public ResponseEntity<List<String>> platformModels(
+            @PathVariable(name = "platformId") Long platformId, @RequestBody AuthPlatformReq authPlatformReq) {
+        AuthPlatformDTO authPlatformDTO = AuthPlatformConverter.INSTANCE.fromReq2DTO(authPlatformReq);
+        Map<String, String> authCredentials = authPlatformDTO.getAuthCredentials();
+        return ResponseEntity.success(llmConfigService.platformModels(platformId, authCredentials));
+    }

Review Comment:
   The new `/platforms/{platformId}/models` endpoint accepts `AuthPlatformReq`, 
but `platformId` is already provided in the path and the request type contains 
many unrelated fields. Consider introducing a dedicated request DTO (e.g., `{ 
authCredentials: [...] }`) to keep the API contract clear and avoid implying 
that other fields are required/used.
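
   A dedicated body could be as small as the sketch below. `PlatformModelsReq` is a hypothetical name, and `AuthCredentialReq` is stubbed with only the fields used here so the example is self-contained; the real project class would be reused instead.
   ```java
   import java.util.List;

   // Stub of the existing credential pair; the project class would be reused.
   class AuthCredentialReq {
       private final String key;
       private final String value;

       AuthCredentialReq(String key, String value) {
           this.key = key;
           this.value = value;
       }

       String getKey() { return key; }
       String getValue() { return value; }
   }

   // Hypothetical request body for POST /platforms/{platformId}/models:
   // only the credentials, since platformId already comes from the path.
   class PlatformModelsReq {
       private final List<AuthCredentialReq> authCredentials;

       PlatformModelsReq(List<AuthCredentialReq> authCredentials) {
           this.authCredentials = authCredentials;
       }

       List<AuthCredentialReq> getAuthCredentials() { return authCredentials; }
   }
   ```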



##########
bigtop-manager-ui/src/store/ai-assistant/index.ts:
##########
@@ -180,21 +192,79 @@ export const useAiChatStore = defineStore(
       }
     }
 
+    const updateToolExecution = (item: ReceivedMessageItem) => {
+      const { executionId, toolName, toolStatus, toolPayload } = item
+      if (!executionId || !toolName || !toolStatus) {
+        return
+      }
+
+      const prevRecord = toolExecutions.value.find((execution) => execution.executionId === executionId)
+      const record = {
+        sender: 'AI' as const,
+        message: '',
+        messageType: 'tool' as const,
+        executionId,
+        toolName,
+        toolStatus,
+        toolPayload,
+        toolInput: toolStatus === 'started' ? toolPayload || '' : prevRecord?.toolInput || '',
+        toolOutput: toolStatus === 'completed' ? toolPayload || '' : prevRecord?.toolOutput || '',
+        toolError: toolStatus === 'failed' ? toolPayload || '' : prevRecord?.toolError || '',
+        createTime: dayjs().format()
+      }

Review Comment:
   `updateToolExecution` overwrites `createTime` on every tool status update. 
This can reorder tool records / show misleading timestamps (e.g., completed 
time instead of started time). Preserve the original `createTime` from 
`prevRecord` when present and only set it when creating a new execution record.



##########
bigtop-manager-ui/src/pages/system-manage/llm-config/components/add-llm-item.vue:
##########
@@ -137,6 +138,20 @@
     }
   }
 
+  const handleRefreshModels = async () => {
+    loadingModels.value = true
+    const models = await llmConfigStore.refreshPlatformModels()
+    if (models.length > 0) {
+      if (!models.includes(currPlatform.value.model as string)) {
+        currPlatform.value.model = undefined
+      }
+      message.success(t('llmConfig.models_loaded'))
+    } else {
+      message.error(t('llmConfig.models_load_failed'))
+    }
+    loadingModels.value = false
+  }

Review Comment:
   `handleRefreshModels` sets `loadingModels` to true before awaiting the 
refresh, but doesn’t use a `try/finally` to guarantee it is reset if an 
exception is thrown (e.g., network layer throws unexpectedly). Wrap the body in 
`try/finally` so the button can’t get stuck in a loading state.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to