#113 Replace StreamingChatLanguageModelReply.Error with throwing exception (#178)
- Removed duplicated dependency from `pom.xml`.
- Simplified lambda expressions in `ChatMemoryServiceExtensions`.
- Consolidated assertion imports and improved flow testing in `ServiceWithFlowTest`.
- Enhanced formatting and readability across sample classes and tests.
- Removed unused and redundant code from `StreamingChatModelExtensions` and other files.
Codecov Report

```
@@            Coverage Diff             @@
##             main     #178      +/-   ##
==========================================
- Coverage   68.86%   68.83%   -0.04%
==========================================
  Files          29       29
  Lines         742      738       -4
  Branches      132      132
==========================================
- Hits          511      508       -3
+ Misses        164      163       -1
  Partials       67       67
```
📝 Walkthrough (Summary by CodeRabbit)

This update primarily simplifies error handling in the streaming chat model extensions: errors are now propagated as thrown exceptions instead of being emitted as `StreamingChatModelReply.Error` values.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant StreamingChatModel
    participant Flow
    Client->>StreamingChatModel: Start chatFlow()
    StreamingChatModel->>Flow: Emit PartialResponse(s)
    Note over Flow: On error, log and close flow (no error reply emitted)
    Flow-->>Client: PartialResponse(s)
    Flow--x Client: Exception thrown on error
```
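The behaviour in the diagram can be sketched with a minimal, self-contained flow. This is an illustrative stand-in, not the library's actual `chatFlow()` implementation: partial responses are emitted as values, and the failure terminates the flow exceptionally rather than being delivered as an error reply.

```kotlin
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// Hypothetical stand-in for the streaming model's flow: emits partial
// responses, then fails; the failure closes the flow exceptionally
// instead of being delivered as an error reply value.
fun chatFlowSketch(): Flow<String> = flow {
    emit("Hello")
    emit(", world")
    error("upstream failure") // thrown from the flow, closing it
}

fun main() = runBlocking {
    val tokens = mutableListOf<String>()
    try {
        chatFlowSketch().collect { tokens += it }
    } catch (e: IllegalStateException) {
        // The client observes the error as a thrown exception,
        // after having received the partial responses.
        println("caught: ${e.message}, partial tokens: $tokens")
    }
}
```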
Actionable comments posted: 0
🧹 Nitpick comments (4)
langchain4j-kotlin/src/main/kotlin/me/kpavlov/langchain4j/kotlin/service/ReflectionVariableResolver.kt (1)
113-116: Signature tightened – consider updating KDoc/nullable contract

`args` is now non-nullable in `findMemoryIdParameter`. The caller already guards against `null`, so the change is safe and removes a redundant null-check inside this helper. Just make sure any internal documentation (or IntelliJ hints) still claiming the parameter is nullable is updated accordingly; otherwise readers might get confused.

langchain4j-kotlin/src/main/kotlin/me/kpavlov/langchain4j/kotlin/service/memory/ChatMemoryServiceExtensions.kt (1)
21-25: Consistent style – consider dropping explicit `public`

Kotlin defaults to `public`; you can omit the modifier for brevity here (as done in other files). Purely stylistic – feel free to ignore.

samples/src/main/kotlin/me/kpavlov/langchain4j/kotlin/samples/Environment.kt (1)
9-14: Consider lazy-initialising the `model` constant

Creating the `ChatModel` at file-load time will execute builder logic (and potentially hit network/env access) on every import, even when the model is unused (e.g. in unit tests). A small tweak keeps sample code equally simple while deferring the cost until the first use:

```diff
-val model: ChatModel =
-    OpenAiChatModel
-        .builder()
-        .modelName("gpt-4o-nano")
-        .apiKey(testEnv["OPENAI_API_KEY"])
-        .build()
+val model: ChatModel by lazy {
+    OpenAiChatModel
+        .builder()
+        .modelName("gpt-4o-nano")
+        .apiKey(testEnv["OPENAI_API_KEY"])
+        .build()
+}
```

No functional change, just safer module initialisation and slightly faster test start-up.
samples/src/main/kotlin/me/kpavlov/langchain4j/kotlin/samples/AsyncAiServiceExample.kt (1)
19-25: Optional: extract builder chain for reuse / testability

The fluent chain building `assistant` is perfectly valid, but if multiple samples (or production code) create identical assistants, consider wrapping the chain in a helper to improve reuse and make mocking easier:

```kotlin
private fun buildAsyncAssistant(model: ChatModel): AsyncAssistant =
    createAiService(
        serviceClass = AsyncAssistant::class.java,
        factory = AsyncAiServicesFactory(),
    )
        .chatModel(model)
        .systemMessageProvider { "You are a helpful software engineer" }
        .build()
```

Then:

```kotlin
val assistant = buildAsyncAssistant(model)
```

Purely optional – feel free to skip if this is the only call site.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- langchain4j-kotlin/pom.xml (0 hunks)
- langchain4j-kotlin/src/main/kotlin/me/kpavlov/langchain4j/kotlin/model/chat/StreamingChatModelExtensions.kt (0 hunks)
- langchain4j-kotlin/src/main/kotlin/me/kpavlov/langchain4j/kotlin/service/ReflectionVariableResolver.kt (1 hunks)
- langchain4j-kotlin/src/main/kotlin/me/kpavlov/langchain4j/kotlin/service/memory/ChatMemoryServiceExtensions.kt (1 hunks)
- langchain4j-kotlin/src/test/kotlin/me/kpavlov/langchain4j/kotlin/adapters/ServiceWithFlowTest.kt (2 hunks)
- langchain4j-kotlin/src/test/kotlin/me/kpavlov/langchain4j/kotlin/model/chat/StreamingChatModelIT.kt (1 hunks)
- langchain4j-kotlin/src/test/kotlin/me/kpavlov/langchain4j/kotlin/service/ServiceWithPromptTemplatesTest.kt (1 hunks)
- samples/src/main/kotlin/me/kpavlov/langchain4j/kotlin/samples/AsyncAiServiceExample.kt (1 hunks)
- samples/src/main/kotlin/me/kpavlov/langchain4j/kotlin/samples/Environment.kt (1 hunks)
- samples/src/main/kotlin/me/kpavlov/langchain4j/kotlin/samples/OpenAiChatModelExample.kt (1 hunks)
💤 Files with no reviewable changes (2)
- langchain4j-kotlin/pom.xml
- langchain4j-kotlin/src/main/kotlin/me/kpavlov/langchain4j/kotlin/model/chat/StreamingChatModelExtensions.kt
🧰 Additional context used
🧬 Code Graph Analysis (1)
samples/src/main/kotlin/me/kpavlov/langchain4j/kotlin/samples/AsyncAiServiceExample.kt (1)
langchain4j-kotlin/src/main/kotlin/me/kpavlov/langchain4j/kotlin/service/AiServicesExtensions.kt (1)
- `createAiService` (lines 10-13)
🔇 Additional comments (9)
langchain4j-kotlin/src/main/kotlin/me/kpavlov/langchain4j/kotlin/service/memory/ChatMemoryServiceExtensions.kt (1)
16-20: Nice expression-body rewrite

Converting `withContext` blocks to a single-line expression improves readability without altering behaviour. LGTM.

langchain4j-kotlin/src/test/kotlin/me/kpavlov/langchain4j/kotlin/service/ServiceWithPromptTemplatesTest.kt (1)
21-60: LGTM! Code formatting improvements enhance readability.

The reformatting improves code readability without changing any functional logic. The test continues to properly verify that the AiServices builder correctly processes system and user prompt templates.
langchain4j-kotlin/src/test/kotlin/me/kpavlov/langchain4j/kotlin/model/chat/StreamingChatModelIT.kt (1)
66-75: LGTM! Error handling change aligns with PR objectives.

The removal of `StreamingChatModelReply.Error` case handling is correct. With the new approach, errors will be thrown as exceptions rather than emitted as error replies, making the error handling more straightforward and preventing duplicate error emissions.

langchain4j-kotlin/src/test/kotlin/me/kpavlov/langchain4j/kotlin/adapters/ServiceWithFlowTest.kt (2)
13-20: LGTM! New imports support the updated error handling approach.

The added imports (`shouldThrow`, `shouldHaveSize`, `shouldBe`, `shouldBeInstanceOf`, `onEach`) are appropriate for the new test logic that expects exceptions instead of error replies.
153-173: LGTM! Test correctly validates the new exception-based error handling.

The test has been properly updated to align with the new error handling approach:

- Correct exception expectation: uses `shouldThrow<Exception>` to expect the flow to throw when an error occurs
- Proper token collection: collects partial responses using `onEach` before the error is thrown
- Comprehensive assertions: verifies both the exception message and the collected tokens

This change successfully validates that errors are now propagated as exceptions rather than being emitted as `StreamingChatModelReply.Error` objects, which aligns perfectly with the PR objectives.

samples/src/main/kotlin/me/kpavlov/langchain4j/kotlin/samples/OpenAiChatModelExample.kt (1)
11-11: Trailing comma 👍

Using a trailing comma after the constructor parameter improves future diffs and is supported from Kotlin 1.4+. Looks good.
samples/src/main/kotlin/me/kpavlov/langchain4j/kotlin/samples/AsyncAiServiceExample.kt (3)
10-12: Readable multi-line signature

The reformatted `suspend fun askQuestion(..., )` with a trailing comma is clear and consistent with Kotlin style guidelines.

16-16: Constructor trailing comma

Same remark as above – improves diff hygiene, no issues.

28-28: Trailing comma in call

The trailing comma after the argument is fine and keeps calls consistent with the rest of the file.
Removed the `Error` type from `StreamingChatLanguageModelReply` and its usage. Previously the error was being emitted twice: once as an exception and once as data. We should avoid emitting an event twice, since it only occurs once. A user can still manually transform the exception into a value using `catch { emit(...) }`.
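The `catch { emit(...) }` approach mentioned above can be sketched as follows. The `Reply` wrapper type here is hypothetical, introduced only for illustration; the point is that a consumer who still wants error-as-data semantics can recover them locally.

```kotlin
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// Hypothetical reply wrapper for illustration only; the library's
// StreamingChatModelReply no longer carries an Error variant.
sealed interface Reply {
    data class Partial(val token: String) : Reply
    data class Failure(val cause: Throwable) : Reply
}

fun main() = runBlocking {
    // Stand-in for a streaming flow that throws once on failure.
    val upstream: Flow<String> = flow {
        emit("Hi")
        error("boom") // thrown once, not emitted as data
    }

    // catch { emit(...) } converts the single thrown exception back
    // into a data value, for consumers that prefer error-as-data.
    val replies: List<Reply> = upstream
        .map<String, Reply> { Reply.Partial(it) }
        .catch { emit(Reply.Failure(it)) }
        .toList()

    replies.forEach(::println)
}
```

This keeps the library's contract simple (one error, thrown once) while leaving the error-as-value style available to callers who want it.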
Closes #113
Integrated changes from #117 by @nomisRev: