Make non-Anthropic providers work end-to-end (logging, tool format, message bag)#8

Open
dkd-dobberkau wants to merge 3 commits into b13:main from dkd-dobberkau:feat/streaming-request-log

Conversation


@dkd-dobberkau dkd-dobberkau commented May 6, 2026

Summary

This PR fixes three closely related gaps that together prevent non-Anthropic providers (Mistral, OpenAI, Gemini, Ollama) from working end-to-end through SymfonyAiPlatformAdapter, and also fills the request-log gap for streaming/tool-calling regardless of provider.

The three commits are independent fixes, but they share the same call path (processToolCallingRequest and processConversationRequest) and surface together as soon as you switch the existing demo from Anthropic to e.g. Mistral.

1. Persist request log entries for streaming and tool-calling

Closes #7

Ai::conversationStream() and direct callers of processToolCallingRequest() (e.g. extension-side agent dispatchers using getCapability(...)) bypass the synchronous middleware pipeline. RequestLoggingMiddleware never sees those requests, so tx_aim_request_log stays empty and the dashboard widgets show "No requests logged".

This wires inline logging in SymfonyAiPlatformAdapter:

  • processConversationRequest(stream=true): passes an onComplete callback to StreamChunkIterator (the parameter was already declared but never used). Logs after the stream finishes with the accumulated content and final usage statistics.
  • processToolCallingRequest: logs after each invocation, both on success and on error. Agent loops that run multiple rounds produce one row per round.

RequestLogRepository is resolved via GeneralUtility::makeInstance(...) with an explicit ConnectionPool constructor argument, because the repository is registered as private in Configuration/Services.yaml and therefore cannot be fetched from the container at runtime. Logging failures are swallowed so they never break the response path.
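
A rough sketch of this wiring. StreamChunkIterator, its onComplete parameter, RequestLogRepository, GeneralUtility::makeInstance and ConnectionPool are all named in this PR; the callback signature and the repository method shown here are illustrative assumptions, not the actual patch:

```php
<?php
// Inside SymfonyAiPlatformAdapter::processConversationRequest() for stream=true.
// Namespaces and exact signatures are assumed; see the diff for the real code.

use TYPO3\CMS\Core\Database\ConnectionPool;
use TYPO3\CMS\Core\Utility\GeneralUtility;

$iterator = new StreamChunkIterator(
    $stream,
    // The onComplete parameter was already declared but never used; it now
    // fires once the stream is exhausted, with accumulated content and usage.
    onComplete: function (string $content, array $usage) use ($request): void {
        try {
            // Registered as private in Configuration/Services.yaml, so the
            // repository cannot be fetched from the container here; build it
            // with an explicit ConnectionPool constructor argument instead.
            $repository = GeneralUtility::makeInstance(
                RequestLogRepository::class,
                GeneralUtility::makeInstance(ConnectionPool::class),
            );
            $repository->log($request, $content, $usage);
        } catch (\Throwable) {
            // Swallowed on purpose: logging must never break the response path.
        }
    },
);
```

The same try/catch-and-swallow pattern applies to the logging call in processToolCallingRequest.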

2. Pick tool definition format based on providerIdentifier

processToolCallingRequest currently calls $tool->toArray(), which produces OpenAI's {type: function, function: {name, description, parameters}}. Anthropic, however, expects {name, description, input_schema}. With the current code, agent loops with Anthropic silently ignore the tool definitions because the schema is wrong; with non-Anthropic providers this would work, but in combination with #3 the loop still breaks.

Switch on $request->configuration->providerIdentifier:

  • 'anthropic' → emit Anthropic's tool schema
  • everything else → emit the OpenAI function-calling schema
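
The switch boils down to a small mapping. $tool->toArray() and providerIdentifier come from this PR; the helper below is an illustrative stand-in, assuming the tool array carries name, description and parameters (a JSON Schema object):

```php
<?php
// Pick the wire format for a tool definition based on the provider family.
// Input shape assumed: ['name' => ..., 'description' => ..., 'parameters' => [...]].

function toolDefinitionFor(string $providerIdentifier, array $tool): array
{
    if ($providerIdentifier === 'anthropic') {
        // Anthropic: flat object, schema under input_schema.
        return [
            'name' => $tool['name'],
            'description' => $tool['description'],
            'input_schema' => $tool['parameters'],
        ];
    }

    // Everything else: OpenAI function-calling shape, schema nested
    // under function.parameters.
    return [
        'type' => 'function',
        'function' => $tool,
    ];
}
```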

3. Forward assistant tool_calls and tool messages into MessageBag

buildMessageBag dropped two pieces of information when converting AiM messages into Symfony AI's MessageBag:

  • AssistantMessage->toolCalls were thrown away. With OpenAI-style providers (Mistral, OpenAI), the next round's history then contained an assistant message with neither content nor tool_calls. Mistral rejects this with HTTP 400 "Assistant message must have either content or tool_calls".
  • ToolMessage (role=tool) silently fell through to Message::ofUser, so tool results were sent back as user messages without a tool_call_id and providers couldn't match them to the original call.

Detect both subclasses up front and emit Message::ofAssistant($content, $toolCalls) / Message::ofToolCall(new ToolCall(...), $content) respectively. This relies on Symfony AI's Message::ofAssistant(?string, ?array $toolCalls) and Message::ofToolCall(ToolCall, string) factories.
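
The resulting branch in buildMessageBag looks roughly like this. AssistantMessage, ToolMessage, Message::ofAssistant(), Message::ofToolCall() and ToolCall are named in this PR; the property names ($toolCallId, $toolName) and the exact ToolCall constructor arguments are illustrative assumptions (namespaces omitted):

```php
<?php
// Sketch of the conversion loop; only the two new branches are the point here.
foreach ($aimMessages as $message) {
    if ($message instanceof AssistantMessage) {
        // Keep tool_calls so the next round's history has a valid assistant
        // message (content, tool_calls, or both).
        $bag->add(Message::ofAssistant($message->content, $message->toolCalls));
        continue;
    }
    if ($message instanceof ToolMessage) {
        // role=tool with the original tool_call_id, so the provider can match
        // the result back to the call that produced it.
        $bag->add(Message::ofToolCall(
            new ToolCall($message->toolCallId, $message->toolName),
            $message->content,
        ));
        continue;
    }
    // Everything else keeps the previous Message::ofUser behaviour.
    $bag->add(Message::ofUser($message->content));
}
```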

Test plan

  • Configure a streaming-capable provider (Anthropic) and a non-Anthropic provider (Mistral via symfony/ai-mistral-platform).
  • Trigger a streaming chat call → confirm a ConversationRequest row appears in tx_aim_request_log with non-zero token counts and duration.
  • Trigger a tool-calling agent loop with Anthropic → confirm one ToolCallingRequest row per round, and the tools payload uses Anthropic's input_schema shape.
  • Trigger the same agent loop with Mistral as default → confirm the loop completes through 2+ rounds (no HTTP 400), tool definitions use OpenAI's {type: function, function: ...} shape, and the assistant message in round 2 carries tool_calls in the wire format.
  • Confirm the backend "AI Request Log" module shows the new entries with the correct provider_identifier.

Verified locally with b13/aim 0.1.0, TYPO3 13.4.28, PHP 8.3, against claude-sonnet-4-5-20250929 (Anthropic) and mistral-medium-latest (Mistral).

🤖 Generated with Claude Code

dkd-dobberkau and others added 3 commits May 6, 2026 16:50
…equests

Both Ai::conversationStream() and AgentDispatcher-style direct
processToolCallingRequest() callers bypass the synchronous middleware
pipeline, so RequestLoggingMiddleware never sees their requests and
tx_aim_request_log stays empty.

This adds inline logging in SymfonyAiPlatformAdapter:

  * processConversationRequest(stream=true): wires StreamChunkIterator's
    onComplete callback to log after the stream finishes.
  * processToolCallingRequest: logs after each invocation, both on
    success and on error (so each agent loop round produces one row).

Logging routes through RequestLogRepository directly. Failures are
swallowed to never break the response path.

Closes b13#7

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
processToolCallingRequest currently sends OpenAI's $tool->toArray()
format to every provider. This is wrong for Anthropic, which expects
{name, description, input_schema} instead of OpenAI's {type: function,
function: {...}}.

Switch on $request->configuration->providerIdentifier and emit the
correct shape for each provider family.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
buildMessageBag dropped two pieces of information when converting AiM
messages into Symfony AI MessageBag:

  * AssistantMessage->toolCalls were thrown away. With OpenAI-style
    providers (Mistral, OpenAI), this turned the next round's history
    into an assistant message with neither content nor tool_calls,
    which Mistral rejects with HTTP 400 "Assistant message must have
    either content or tool_calls".

  * ToolMessage (role=tool) silently fell through to Message::ofUser,
    so tool results were sent back as user messages without a
    tool_call_id and providers couldn't match them to the original
    call.

Detect both message subclasses up front and emit Message::ofAssistant
with tool calls / Message::ofToolCall with tool_call_id respectively.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@dkd-dobberkau dkd-dobberkau changed the title Persist request log entries for streaming and tool-calling requests Make non-Anthropic providers work end-to-end (logging, tool format, message bag) May 6, 2026

Development

Successfully merging this pull request may close these issues.

Streaming responses bypass RequestLoggingMiddleware (no entries in tx_aim_request_log for streamed chats)
