I've come to think that it may be best to divide the LLMcoder completion pipeline into two parts using two models:
1. The fine-tuned autocomplete model (Fine-Tuning #6) provides the first code suggestion as a consistently formatted and reliable baseline.
2. The "feedback" model (maybe the regular gpt-3.5-turbo) continues the conversation with
   - analysis results or a "reflect" prompt (detection)
   - a "fix this issue" prompt (fixing)
We should evaluate how consistent the model is in feedback mode, and whether it suffers from the same problems regarding output format and modifications of previous code. If so, we may need to fine-tune another variant specifically for the feedback loop.
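The two-stage detect/fix loop described above could be sketched as follows. This is only a hypothetical outline of the control flow, not LLMcoder's actual implementation: the model calls (`autocomplete`, `analyze`, `feedback_fix`) are stubbed with placeholder logic, and all function names are made up for illustration.

```python
# Hypothetical sketch of the proposed two-model pipeline.
# Stage 1: fine-tuned autocomplete model produces the baseline suggestion.
# Stage 2: a feedback model (e.g. gpt-3.5-turbo) iterates on analysis results.

def autocomplete(prefix: str) -> str:
    """Stage 1: fine-tuned autocomplete model (stubbed here)."""
    # In the real pipeline this would call the fine-tuned model.
    return prefix + "return a + b"

def analyze(code: str) -> list[str]:
    """Detection step: run analyzers / a 'reflect' prompt (stubbed)."""
    return [] if "return" in code else ["missing return statement"]

def feedback_fix(code: str, issues: list[str]) -> str:
    """Fixing step: the feedback model revises the code (stubbed)."""
    return code + "\n# fixed: " + "; ".join(issues)

def complete(prefix: str, max_rounds: int = 3) -> str:
    """Run the autocomplete stage, then iterate the feedback loop."""
    completion = autocomplete(prefix)
    for _ in range(max_rounds):
        issues = analyze(completion)  # detection
        if not issues:
            break  # nothing left to fix
        completion = feedback_fix(completion, issues)  # fixing
    return completion

print(complete("def add(a, b):\n    "))
```

The key design question the issue raises lives in the loop: whether the same model can be trusted for both the `analyze`-driven reflection and the fix step without drifting in output format.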