Bug Description
When using an Anthropic model (e.g., Claude 4 Opus/Sonnet) within the n8n AI Agent node, if the "Enable Thinking" option is activated in the chat model's configuration, any attempt by the agent to use a tool/function fails. The error is a 400 Bad Request from the Anthropic API, specifically indicating an issue with message formatting related to their "extended thinking" feature.
The core of the problem, as stated by the Anthropic API error, is:
"messages.X.content.0.type: Expected thinking or redacted_thinking, but found tool_use. When thinking is enabled, a final assistant message must start with a thinking block (preceeding the lastmost set of tool_use and tool_result blocks)..."
This implies that the n8n AI Agent (or the underlying LangChain layer) is not prepending the required thinking content block (type: "thinking") to the assistant's message content when a tool call is initiated with "Enable Thinking" active.
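To make the failure mode concrete, this is presumably the shape of the assistant turn being replayed to the Messages API (an illustrative reconstruction based on the error, not a captured n8n payload; the tool name and IDs are placeholders):

// Illustrative reconstruction of the rejected assistant turn: content[0] is a
// tool_use block with no preceding thinking block, which the API rejects
// whenever thinking is enabled.
const assistantTurnAsSent = {
  role: "assistant",
  content: [
    { type: "tool_use", id: "toolu_01ABC", name: "example_tool", input: { parameter: "value" } },
    // Missing: a { type: "thinking", ... } block ahead of this tool_use block
    // (see the expected structure under "Expected behavior" below).
  ],
};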
Disabling "Enable Thinking" allows tool calls to function correctly, but this circumvents the desired "extended thinking" capability.
Full Error Message from Anthropic API (from user's n8n instance):
{
  "errorMessage": "Bad request - please check your parameters",
  "errorDescription": "messages.11.content.0.type: Expected `thinking` or `redacted_thinking`, but found `tool_use`. When `thinking` is enabled, a final `assistant` message must start with a thinking block (preceeding the lastmost set of `tool_use` and `tool_result` blocks). We recommend you include thinking blocks from previous turns. To avoid this requirement, disable `thinking`. Please consult our documentation at https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking",
  "errorDetails": {},
  "n8nDetails": {
    "time": "5/27/2025, 12:57:55 AM",
    "n8nVersion": "1.95.0 (Self Hosted)",
    "binaryDataMode": "default",
    "cause": {
      "status": 400,
      "headers": {
        "anthropic-organization-id": "838176c5-da4a-4355-b9b9-e072be0a8fb3",
        "cf-cache-status": "DYNAMIC",
        "cf-ray": "9462ef340b718b39-STI",
        "connection": "keep-alive",
        "content-length": "538",
        "content-type": "application/json",
        "date": "Tue, 27 May 2025 04:57:57 GMT",
        "request-id": "req_011CPXSdRY1Hnr1TFZfoThY1",
        "server": "cloudflare",
        "strict-transport-security": "max-age=31536000; includeSubDomains; preload",
        "via": "1.1 google",
        "x-robots-tag": "none",
        "x-should-retry": "false"
      },
      "request_id": "req_011CPXSdRY1Hnr1TFZfoThY1",
      "error": {
        "type": "error",
        "error": {
          "type": "invalid_request_error",
          "message": "messages.11.content.0.type: Expected `thinking` or `redacted_thinking`, but found `tool_use`. When `thinking` is enabled, a final `assistant` message must start with a thinking block (preceeding the lastmost set of `tool_use` and `tool_result` blocks). We recommend you include thinking blocks from previous turns. To avoid this requirement, disable `thinking`. Please consult our documentation at https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking"
        }
      },
      "lc_error_code": "INVALID_TOOL_RESULTS",
      "attemptNumber": 1,
      "retriesLeft": 6
    }
  }
}
Hypothesis on the Cause:
This issue likely stems from how the tool-calling mechanism is implemented for Anthropic models with "thinking" enabled, potentially within the LangChain layer (@langchain/anthropic or LangChain's agent logic) that n8n utilizes.
LangChain Abstraction Layer: The LangChain adapter for Anthropic might not fully account for, or correctly implement, the strict formatting requirement of "thinking" + "tool_use" when "Enable Thinking" is active. It might be constructing the tool call message in a more generic way that is incompatible with this specific Anthropic feature.
AI Agent Logic: The n8n AI Agent's internal logic for managing conversation history and constructing LLM calls might not format the request so that a thinking block precedes the tool_use block when this Anthropic-specific feature is enabled.
Scope of Previous Fixes: While PR #15381 addressed an issue with Anthropic models and thinking mode, its focus appeared to be on output parsing (StringOutputParser vs. JsonOutputParser) for the LLM Chain node. This current issue relates to input formatting for tool calls specifically within the AI Agent node.
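For reference, below is a minimal sketch of the history that whichever layer rebuilds the conversation (the LangChain Anthropic adapter or the agent loop) would need to send on the follow-up request. The block shapes follow the public Anthropic Messages API documentation; the tool name, IDs, signature, and text values are placeholders, and this is not n8n's or LangChain's actual code:

// Sketch of the follow-up request history that the adapter/agent layer would
// need to reconstruct. Shapes follow the Anthropic Messages API; values are
// placeholders for illustration only.
const followUpMessages = [
  { role: "user", content: "What is the value of X?" },
  {
    role: "assistant",
    content: [
      // The thinking block returned by the model must be replayed verbatim,
      // including its signature, ahead of the tool_use block it preceded.
      { type: "thinking", thinking: "I need example_tool to look up X.", signature: "EuYB..." },
      { type: "tool_use", id: "toolu_01ABC", name: "example_tool", input: { parameter: "value" } },
    ],
  },
  {
    role: "user",
    content: [
      // The tool's output goes back as a tool_result block referencing the tool_use id.
      { type: "tool_result", tool_use_id: "toolu_01ABC", content: "X is 42." },
    ],
  },
];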
Workaround:
Currently, the only workaround is to disable the "Enable Thinking" option in the Anthropic Chat Model node configuration. This allows tool calls to function correctly but sacrifices the "extended thinking" feature.
References:
Anthropic "Extended Thinking" Documentation: https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking
Potentially related (but different focus) n8n PR: https://github.com/n8n-io/n8n/pull/15381
To Reproduce
1. Configure the AI Agent to use an Anthropic Chat Model (e.g., Claude 4 Opus or Claude 3.7 / 4 Sonnet).
2. In the Anthropic Chat Model node (sub-node of the AI Agent), activate the "Enable Thinking" option under "Options".
3. Define at least one tool/function that the AI Agent is permitted to use.
4. Provide a prompt to the AI Agent that explicitly requires it to use the defined tool to answer.
5. Execute the workflow.
6. Observe the AI Agent node failing with a 400 error from the Anthropic API, with the error message detailed in the description. (A standalone reproduction sketch outside n8n follows these steps.)
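For comparison outside n8n, the same 400 can presumably be reproduced directly against the Messages API by enabling thinking and replaying an assistant turn whose content starts with tool_use. This sketch uses the official @anthropic-ai/sdk for TypeScript; the model ID, tool definition, and placeholder values are assumptions for illustration, not taken from the failing workflow:

import Anthropic from "@anthropic-ai/sdk";

// Sketch only: model ID and tool definition are assumptions for illustration.
const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function reproduce() {
  try {
    await anthropic.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 2048,
      thinking: { type: "enabled", budget_tokens: 1024 },
      tools: [
        {
          name: "example_tool",
          description: "Returns the value of X.",
          input_schema: {
            type: "object",
            properties: { parameter: { type: "string" } },
            required: ["parameter"],
          },
        },
      ],
      messages: [
        { role: "user", content: "Use example_tool to find X." },
        {
          // Assistant turn replayed WITHOUT its thinking block, mirroring what the
          // AI Agent appears to send: content[0] is tool_use.
          role: "assistant",
          content: [
            { type: "tool_use", id: "toolu_01ABC", name: "example_tool", input: { parameter: "X" } },
          ],
        },
        {
          role: "user",
          content: [{ type: "tool_result", tool_use_id: "toolu_01ABC", content: "X is 42." }],
        },
      ],
    });
  } catch (err) {
    // Expected: 400 invalid_request_error — "Expected `thinking` or `redacted_thinking`, but found `tool_use`."
    console.error(err);
  }
}

reproduce();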
Expected behavior
When "Enable Thinking" is active for an Anthropic model, and the AI Agent decides the model should use a tool, the request sent to the Anthropic API should correctly format the assistant's message. Specifically, the tool_use block must be preceded by a thinking block within the same assistant message content array, as per Anthropic's "extended thinking" documentation.
Example of the expected structure for the assistant's message content initiating a tool call:
[
  {
    "type": "thinking",
    "thinking": "Okay, I need to use the 'example_tool' to find out X."
  },
  {
    "type": "tool_use",
    "id": "toolu_...",
    "name": "example_tool",
    "input": {"parameter": "value"}
  }
]
The workflow should proceed with the tool call, and subsequent interactions should also adhere to this formatting if further thinking/tool use occurs.
Operating System
Windows 11 (hosting Docker Desktop)
n8n Version
1.95.0
Node.js Version
20.19.2
Database
SQLite (default)
Execution mode
main (default)