
fix(anthropic): enhance the chat adapter's output logic and add reasoning-result handling #39

Merged
me2seeks merged 1 commit into task-based-core from me2 on Jun 25, 2025

Conversation

@me2seeks
Collaborator

  • Added handling for ThinkingDelta in the Chat method so that output produced during reasoning is sent correctly.
  • Refined the error-handling logic to send output chunks only when they contain actual content, improving robustness and readability.
Contributor

@gemini-code-assist bot left a comment


Summary of Changes

Hello @me2seeks, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the Anthropic chat adapter's output streaming capabilities. My changes focus on providing richer real-time feedback by streaming the model's intermediate reasoning steps and optimizing error reporting to ensure only meaningful data is transmitted, improving both the user experience and the robustness of the integration.

Highlights

  • Anthropic Integration: I've added explicit handling for Anthropic's ThinkingDelta within the Chat method. This ensures that intermediate reasoning steps from the LLM are captured and streamed as Reasoning output chunks, providing more insight into the model's thought process during inference.
  • Error Handling Optimization: I've refined the error handling logic to be more robust. Now, error output chunks are only sent to the outputChan if they contain actual content (ContentParts) or reasoning (Reasoning). This prevents the emission of empty or redundant error chunks, improving the clarity and efficiency of the output stream. A rough sketch of both behaviors follows this list.
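
To make the described streaming behavior concrete, below is a minimal, self-contained sketch of how ThinkingDelta events might be mapped to Reasoning chunks and then filtered before being sent on outputChan. The Option, OutputChunk, ThinkingDelta, and TextDelta definitions are simplified stand-ins for illustration only, not the adapter's actual types.

```go
package main

import "fmt"

// Minimal stand-ins for the adapter's types; the real project defines its own
// OutputChunk, option type, and Anthropic delta/event types.
type Option[T any] struct {
	val   T
	valid bool
}

func Some[T any](v T) Option[T]  { return Option[T]{val: v, valid: true} }
func (o Option[T]) IsSome() bool { return o.valid }

type OutputChunk struct {
	ContentParts []string
	Reasoning    Option[string]
}

type ThinkingDelta struct{ Thinking string }
type TextDelta struct{ Text string }

func main() {
	// Simulated stream of Anthropic deltas: a reasoning step followed by text.
	events := []any{
		ThinkingDelta{Thinking: "Weighing how to answer the question..."},
		TextDelta{Text: "Here is the answer."},
	}

	outputChan := make(chan OutputChunk, len(events))

	for _, ev := range events {
		chunk := OutputChunk{}
		switch d := ev.(type) {
		case ThinkingDelta:
			// Surface intermediate reasoning as a Reasoning chunk.
			chunk.Reasoning = Some(d.Thinking)
		case TextDelta:
			chunk.ContentParts = append(chunk.ContentParts, d.Text)
		}
		// Only forward chunks that actually carry content or reasoning.
		if len(chunk.ContentParts) > 0 || chunk.Reasoning.IsSome() {
			outputChan <- chunk
		}
	}
	close(outputChan)

	for c := range outputChan {
		fmt.Printf("reasoning=%+v content=%v\n", c.Reasoning, c.ContentParts)
	}
}
```

Running this prints one reasoning-only chunk followed by one content chunk; a chunk with neither field set would be skipped by the final check, which is also the behavior the review below flags for error chunks.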
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. commenting on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
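
As a loose illustration of that layout (the file names below are assumptions based on the description above, not taken from the Gemini documentation), a repository might end up with:

```text
.gemini/
├── config.yaml     # bot configuration options
└── styleguide.md   # custom code review style guide, e.g. Go naming and error-handling rules
```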

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@me2seeks merged commit 04d27bc into task-based-core on Jun 25, 2025
1 check passed
Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request adds handling for ThinkingDelta events and refactors error handling. The ThinkingDelta implementation is good, but the error handling change introduces a critical issue where stream errors are no longer propagated. The review includes a comment detailing this bug and a suggestion to fix it.

Comment on lines +198 to +201
// Only send the chunk when it has actual content
if len(chunk.ContentParts) > 0 || chunk.Reasoning.IsSome() {
outputChan <- chunk
}
Contributor


critical

The conditional check if len(chunk.ContentParts) > 0 || chunk.Reasoning.IsSome() prevents stream errors from being propagated. Since chunk.ContentParts is empty and chunk.Reasoning is None when an error occurs, the error chunk is never sent, silently ignoring the error. Revert to unconditionally sending the error chunk to ensure proper error propagation.

Suggested change
- // Only send the chunk when it has actual content
- if len(chunk.ContentParts) > 0 || chunk.Reasoning.IsSome() {
-     outputChan <- chunk
- }
+ outputChan <- chunk
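
To make the failure mode concrete, here is a small runnable sketch contrasting the reviewed conditional with the suggested unconditional send. The OutputChunk shape, its Err field, and the simplified string Reasoning are assumptions for illustration; the repository's actual types may differ.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical chunk shape; the real OutputChunk and its error field may differ.
type OutputChunk struct {
	ContentParts []string
	Reasoning    string // simplified stand-in for the option type used in the adapter
	Err          error  // assumed field carrying stream errors
}

// buggyFilter mirrors the reviewed conditional: chunks without content or
// reasoning are dropped, which silently discards error-only chunks.
func buggyFilter(chunk OutputChunk, out chan<- OutputChunk) {
	if len(chunk.ContentParts) > 0 || chunk.Reasoning != "" {
		out <- chunk
	}
}

// fixedSend follows the review suggestion: always forward the chunk so a
// stream error reaches the consumer.
func fixedSend(chunk OutputChunk, out chan<- OutputChunk) {
	out <- chunk
}

func main() {
	errChunk := OutputChunk{Err: errors.New("stream aborted")}

	out := make(chan OutputChunk, 2)
	buggyFilter(errChunk, out) // sends nothing: the error disappears
	fixedSend(errChunk, out)   // the error chunk is delivered

	close(out)
	for c := range out {
		fmt.Printf("received chunk, err=%v\n", c.Err)
	}
}
```

With the conditional filter the error chunk is silently dropped; the unconditional send delivers it, which is what the suggestion above restores.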

