[Bug]: Incorrect Markdown breaks the chat window #913
Important: If your issue does NOT contain a valid

Thanks @f-leeske. I'll get to this as soon as I can. In the meantime, please ensure:
To summarize:
Is that correct? No error messages or anything in the logs? (It might be worth turning the log level to trace for everything, but beware of leaving your API keys in.) Can you open the debug window in the chat buffer and paste the message table after you've tried to reply to the LLM? Also, what happens if the markdown isn't incorrect? Say, with a newer LLM?

I'm perplexed because this shouldn't be causing an issue. We only parse ranges in the chat buffer. After the LLM request completes, the chat buffer class is updated to know where in the buffer the response ended. This becomes the start of the range, so when you submit a user prompt, we run a TS query from there to the end of the buffer and look for H2 headings that match the user role name you defined in the config. The chat buffer doesn't actually care about what the LLM wrote, so it shouldn't break anything. It might mess up the formatting, but it shouldn't break anything.

Just to confirm: you've been able to recreate it with the minimal file you supplied? And you've not edited the LLM's response to remove any lines?
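For anyone following along, here is a minimal sketch of that range-plus-heading lookup using Neovim's tree-sitter API. It is not the plugin's actual code; the function name and the exact match on the heading text are illustrative assumptions.

```lua
-- Rough sketch, not CodeCompanion's actual implementation: find H2 headings
-- whose text matches the configured user role, starting from a known row.
local function find_user_headings(bufnr, start_row, user_role)
  -- Parse the chat buffer as markdown.
  local parser = vim.treesitter.get_parser(bufnr, "markdown")
  local root = parser:parse()[1]:root()

  -- Capture the text of every H2 heading (atx_heading with an atx_h2_marker).
  local query = vim.treesitter.query.parse(
    "markdown",
    [[(atx_heading (atx_h2_marker) (inline) @content)]]
  )

  -- Restrict the scan to start_row .. end of buffer, i.e. everything after
  -- the point where the last LLM response ended.
  local rows = {}
  for _, node in query:iter_captures(root, bufnr, start_row, -1) do
    if vim.trim(vim.treesitter.get_node_text(node, bufnr)) == user_role then
      local row = node:range() -- start row of the matching heading
      table.insert(rows, row)
    end
  end
  return rows
end
```

Called with the row where the last response ended and the configured user role, it returns the start rows of the matching H2 headings; the point is only to make the "range plus heading query" idea concrete.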
Can confirm I can recreate this with the following prompt:
Especially if it leads to:
I'm being dumb. Of course this is going to be an issue, because of the markdown LanguageTree.

Unfortunately, I think the solution to this will be via prompt engineering. Perhaps ask the LLM to ensure it closes any markdown code block and avoids triple-backtick errors.
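A minimal sketch of that prompt-engineering idea, assuming the system prompt can be extended via the opts.system_prompt setup option (check the docs for the exact key); the wording of the added instructions is an untested suggestion:

```lua
-- Sketch only: ask the model to keep its markdown fences balanced.
-- `opts.system_prompt` is assumed to be the right place to hook in;
-- the instructions below are a suggestion, not a confirmed fix.
require("codecompanion").setup({
  opts = {
    system_prompt = function()
      return table.concat({
        "You are an AI programming assistant.",
        "Always close every markdown code block that you open.",
        "If a code block must contain triple backticks, use a longer (four-backtick) outer fence instead of nesting triple backticks.",
      }, "\n")
    end,
  },
})
```

Whether a given model reliably follows such instructions is another matter, which is presumably why this is framed as a mitigation rather than a fix.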
Your minimal.lua config

Error messages

None

Log output

I had a look at /tmp/.../1.json afterwards; it looks like this (from another attempt, so the input/output is slightly different than below):
Describe the bug
When the LLM response contains incorrect markdown that prevents the level-2 heading between the LLM and user messages from being parsed, any further messages from the user don't seem to make it to the LLM. I stumbled upon this when I wanted the LLM to generate some queries for the Obsidian plugin Dataview. I recreated it using the given minimal setup, with the openai adapter and gpt-4o-mini:
After the LLM's response, which includes nested triple-backtick blocks in the markdown, any request I send does get a response from the model, but the output always responds to the initial request. Other adapters I tested with did not include the outer triple-backtick markdown block for this specific query, but there will inevitably be model outputs that are parsed as incorrect markdown (see the sketch below).
Deleting the problematic part in the buffer does not change anything.
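To make the failure mode concrete, here is a hedged reproduction sketch; the headings, role name ("Me") and response text are invented rather than taken from the real 1.json. The fence arrangement is the one described above: the inner opener with the dataview info string cannot close the outer fence (closing fences may not carry an info string), so the first bare triple-backtick line closes the outer block, and the last one opens a new, unterminated block that swallows the following H2 heading.

```lua
-- Hedged reproduction sketch: the headings and response text are invented.
local lines = {
  "## CodeCompanion (gpt-4o-mini)",
  "",
  "```markdown",                    -- outer fence opened by the LLM
  "Here is a dataview query:",
  "```dataview",                    -- cannot close the outer fence (has an info string)
  'TABLE file.mtime FROM "notes"',
  "```",                            -- closes the OUTER fence
  "```",                            -- opens a new fence that is never closed
  "",
  "## Me",                          -- swallowed by the unterminated fence
  "",
  "follow-up question",
}

local buf = vim.api.nvim_create_buf(false, true)
vim.api.nvim_buf_set_lines(buf, 0, -1, false, lines)

local parser = vim.treesitter.get_parser(buf, "markdown")
local root = parser:parse()[1]:root()
local query = vim.treesitter.query.parse("markdown", "(atx_heading (inline) @h)")

for _, node in query:iter_captures(root, buf, 0, -1) do
  -- Expected: only the first heading is printed; "Me" never shows up.
  print(vim.treesitter.get_node_text(node, buf))
end
```

Running this should print only the first heading, which would explain why follow-up prompts below the broken response are never picked up.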
Reproduce the bug
Steps to reproduce:
Adapter and model
openai and gpt-4o-mini
Final checks

I have tested with the minimal.lua file from above and have shared this.