feat: add agentic workflows to the plugin #35
Conversation
@nuvic could you give this branch a go? You should see a new option appear in the action palette. Obviously the quality will depend on the LLM you're using, but if you could test this feature from a user workflow perspective, that would be greatly appreciated.
:workflow({
  {
    role = "system",
    content = "You are an expert coder and helpful assistant who can help outline, draft, consider and revise code for the "
There are some prompting techniques that could be helpful for getting better answers. This is something I've gotten from Jeremy Howard at fast.ai:
You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question.
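For illustration, here is a sketch of how that suggestion could slot into the workflow's system message. The table shape and field names are only assumed from the diff above; the wording is the quoted prompt:

```lua
-- Sketch only: swapping the suggested prompt into the workflow's system
-- message. The table shape and field names are assumed from the diff above.
{
  role = "system",
  content = "You carefully provide accurate, factual, thoughtful, nuanced answers, "
    .. "and are brilliant at reasoning. If you think there might not be a correct answer, you say so. "
    .. "Always spend a few sentences explaining background context, assumptions, "
    .. "and step-by-step thinking BEFORE you try to answer a question.",
},
```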
These are great suggestions. Would you PR these?
{
  role = "user",
  content = "Thanks. Now let's revise the code based on the feedback.",
  auto_submit = true,
},
{
  role = "user",
  content = "For clarity, can you show the final code without any explanations?",
  auto_submit = true,
},
Since this is where we generate the final output, what do you think about just using one prompt? For example: "Thanks. Now let's revise the code based on the feedback, without additional explanations."
As per Andrew Ng's beautifully explained tweet.
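To make that concrete, the two auto-submitted prompts above could collapse into a single entry along these lines (a sketch only; the field names follow the diff in this PR):

```lua
-- Sketch: merging the two auto-submitted user prompts into one final step,
-- as suggested above. Field names follow the diff in this PR.
{
  role = "user",
  content = "Thanks. Now let's revise the code based on the feedback, without additional explanations.",
  auto_submit = true,
},
```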
The initial plan is to support reflection in an automated fashion...that is, prompt the LLM to self-reflect on its own work (without the user having to interact with the chat buffer).
In the future we'll figure out a way to add support for tooling and I will endeavour to make sure all of this can be done in the chat buffer.
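To illustrate the reflection idea end to end, here is a minimal sketch assembled from the snippets in this diff. Only the `role`, `content` and `auto_submit` fields are taken from the PR; the variable name and the exact prompt wording are illustrative:

```lua
-- Minimal sketch of an automated reflection workflow. Only the role/content/
-- auto_submit fields come from this PR; everything else is illustrative.
local reflection_workflow = {
  { role = "system", content = "You are an expert coder and helpful assistant..." },
  -- The first prompt is the one the user actually writes and submits.
  { role = "user", content = "Please draft the code for the task above.", auto_submit = false },
  -- The remaining prompts are sent automatically, so the LLM critiques and
  -- revises its own work without the user touching the chat buffer.
  { role = "user", content = "Now critique your answer: point out bugs, edge cases and improvements.", auto_submit = true },
  { role = "user", content = "Thanks. Now let's revise the code based on the feedback, without additional explanations.", auto_submit = true },
}
```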