feat: add agentic workflows to the plugin #35

Merged: 11 commits, Apr 1, 2024
37 changes: 31 additions & 6 deletions README.md
@@ -16,7 +16,7 @@ Currently supports: Anthropic, Ollama and OpenAI adapters
</p>

> [!IMPORTANT]
> This plugin is provided as-is and is primarily developed for my own workflows. As such, I offer no guarantees of regular updates or support and I expect the plugin's API to change regularly. Bug fixes and feature enhancements will be implemented at my discretion, and only if they align with my personal use-case. Feel free to fork the project and customize it to your needs, but please understand my involvement in further development will be intermittent. To be notified of breaking changes in the plugin, please subscribe to [this issue](https://github.com/olimorris/codecompanion.nvim/issues/9).
> This plugin is provided as-is and is primarily developed for my own workflows. As such, I offer no guarantees of regular updates or support and I expect the plugin's API to change regularly. Bug fixes and feature enhancements will be implemented at my discretion, and only if they align with my personal use-cases. Feel free to fork the project and customize it to your needs, but please understand my involvement in further development will be intermittent. To be notified of breaking changes in the plugin, please subscribe to [this issue](https://github.com/olimorris/codecompanion.nvim/issues/9).

<p align="center">
<img src="https://github.com/olimorris/codecompanion.nvim/assets/9512444/5e5a5e54-c1d9-4fe2-8ae0-1cfbfdd6cea5" alt="Header" />
@@ -28,6 +28,7 @@ Currently supports: Anthropic, Ollama and OpenAI adapters

- :speech_balloon: A Copilot Chat experience from within Neovim
- :electric_plug: Adapter support for many generative AI services
- :robot: Agentic workflows to improve LLM output
- :rocket: Inline code creation and modification
- :sparkles: Built in actions for specific language prompts, LSP error fixes and code advice
- :building_construction: Create your own custom actions for Neovim
@@ -245,13 +246,16 @@ The plugin has a number of commands:
- `:CodeCompanionToggle` - Toggle a chat buffer
- `:CodeCompanionActions` - To open up the action palette window

For an optimum workflow, the plugin author recommends the following keymaps:
For an optimum workflow, the plugin author recommends the following:

```lua
vim.api.nvim_set_keymap("n", "<C-a>", "<cmd>CodeCompanionActions<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("v", "<C-a>", "<cmd>CodeCompanionActions<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("n", "<LocalLeader>a", "<cmd>CodeCompanionToggle<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("v", "<LocalLeader>a", "<cmd>CodeCompanionToggle<cr>", { noremap = true, silent = true })

-- Expand `cc` into CodeCompanion in the command line
vim.cmd([[cab cc CodeCompanion]])
```

> [!NOTE]
@@ -329,17 +333,34 @@ The plugin comes with a number of [in-built actions](https://github.com/olimorri

#### Chat and Chat as

Both of these actions utilise the `chat` strategy. The `Chat` action opens up a fresh chat buffer. The `Chat as` action allows for persona based context to be set in the chat buffer allowing for better and more detailed responses from the generative AI service.

> [!TIP]
> Both of these actions allow for visually selected code to be sent to the chat buffer as code blocks.

Both of these actions utilise the `chat` strategy. The `Chat` action opens up a fresh chat buffer. The `Chat as` action allows for persona based context to be set in the chat buffer allowing for better and more detailed responses from the generative AI service.

#### Open chats

This action enables users to easily navigate between their open chat buffers. A chat buffer can be deleted (and removed from memory) by pressing `<C-c>`.

#### Agentic Workflows

> [!WARNING]
> Agentic workflows may result in the significant consumption of tokens if you're using an external generative AI service.

As outlined in Andrew Ng's [tweet](https://twitter.com/AndrewYNg/status/1773393357022298617), agentic workflows have the ability to dramatically improve the output of an LLM and can be as simple as prompting an LLM multiple times. The plugin supports this via the use of workflows. At various stages of the workflow, the plugin will automatically prompt the LLM for feedback and self-reflection without any input from the user.

Currently, the plugin only supports _"reflection"_ (multiple prompts within the same application) and comes with the following workflows:

- Adding a new feature
- Refactoring code

Of course you can add new workflows by following the [RECIPES](RECIPES.md) guide.
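
As an illustration only, a custom workflow entry could look something like the sketch below. It is modelled on the entries this PR adds to `lua/codecompanion/actions.lua`; the action name, prompts and exact table keys are placeholders, so treat the RECIPES guide as the authoritative reference.

```lua
-- Hypothetical custom action, modelled on the "Agentic Workflows..." entries this PR adds.
-- The table keys mirror those used in lua/codecompanion/actions.lua.
local custom_workflow = {
  name = "Review the selected code",
  strategy = "chat",
  description = "A custom reflection workflow (illustrative only)",
  callback = function(context)
    return require("codecompanion.agent")
      .new({ context = context, strategy = "chat" })
      :workflow({
        -- Prompts flagged with `start = true` open the chat buffer.
        { role = "system", content = "You are a meticulous " .. context.filetype .. " reviewer.", start = true },
        { role = "user", content = "Review the selected code for correctness, style and efficiency.", start = true },
        -- Remaining prompts are auto-submitted back to the LLM at later stages.
        { role = "user", content = "Now revise the code based on your own feedback.", auto_submit = true },
      })
  end,
}
```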

#### Inline code

> [!NOTE]
> The options available to the user in the Action Palette will depend on the Vim mode.

These actions utilize the `inline` strategy. They can be useful for writing inline code in a buffer or even refactoring a visual selection, all based on a user's prompt. The actions are designed to write code for the filetype of the buffer they are initiated in, or, if run from a terminal prompt, to write commands.

The strategy comes with a number of helpers which the user can type in the prompt, similar to [GitHub Copilot Chat](https://github.blog/changelog/2024-01-30-code-faster-and-better-with-github-copilots-new-features-in-visual-studio/):
@@ -348,15 +369,19 @@ The strategy comes with a number of helpers which the user can type in the promp
- `/optimize` to analyze and improve the running time of the selected code
- `/tests` to create unit tests for the selected code

> [!NOTE]
> The options available to the user in the Action Palette will depend on the Vim mode.

#### Code advisor

> [!NOTE]
> This option is only available in visual mode

As the name suggests, this action provides advice on a visual selection of code and utilises the `chat` strategy. The response from the API is streamed into a chat buffer which follows the `display.chat` settings in your configuration.

#### LSP assistant

> [!NOTE]
> This option is only available in visual mode

Taken from the fantastic [Wtf.nvim](https://github.com/piersolenski/wtf.nvim) plugin, this action provides advice on how to correct any LSP diagnostics which are present on the visually selected lines. Again, the `send_code = false` value can be set in your config to prevent the code itself being sent to the generative AI service.
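
For reference, a minimal sketch of turning that option off, assuming the plugin's standard `setup()` call and the `send_code` option described above:

```lua
-- Assumed configuration shape; check the plugin's setup documentation for the exact
-- name and location of this option.
require("codecompanion").setup({
  send_code = false, -- keep buffer contents out of requests to the generative AI service
})
```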

## :rainbow: Helpers
47 changes: 39 additions & 8 deletions doc/codecompanion.txt
@@ -1,4 +1,4 @@
*codecompanion.txt* For NVIM v0.9.2 Last change: 2024 March 24
*codecompanion.txt* For NVIM v0.9.2 Last change: 2024 April 01

==============================================================================
Table of Contents *codecompanion-table-of-contents*
@@ -14,6 +14,7 @@ FEATURES *codecompanion-features*

- A Copilot Chat experience from within Neovim
- Adapter support for many generative AI services
- Agentic workflows to improve LLM output
- Inline code creation and modification
- Built in actions for specific language prompts, LSP error fixes and code advice
- Create your own custom actions for Neovim
@@ -234,13 +235,16 @@ The plugin has a number of commands:
- `:CodeCompanionToggle` - Toggle a chat buffer
- `:CodeCompanionActions` - To open up the action palette window

For an optimum workflow, the plugin author recommends the following keymaps:
For an optimum workflow, the plugin author recommends the following:

>lua
vim.api.nvim_set_keymap("n", "<C-a>", "<cmd>CodeCompanionActions<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("v", "<C-a>", "<cmd>CodeCompanionActions<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("n", "<LocalLeader>a", "<cmd>CodeCompanionToggle<cr>", { noremap = true, silent = true })
vim.api.nvim_set_keymap("v", "<LocalLeader>a", "<cmd>CodeCompanionToggle<cr>", { noremap = true, silent = true })

-- Expand `cc` into CodeCompanion in the command line
vim.cmd([[cab cc CodeCompanion]])
<


@@ -353,23 +357,49 @@ Neovim buffer.

CHAT AND CHAT AS


[!TIP] Both of these actions allow for visually selected code to be sent to the
chat buffer as code blocks.
Both of these actions utilise the `chat` strategy. The `Chat` action opens up a
fresh chat buffer. The `Chat as` action allows for persona based context to be
set in the chat buffer allowing for better and more detailed responses from the
generative AI service.


[!TIP] Both of these actions allow for visually selected code to be sent to the
chat buffer as code blocks.

OPEN CHATS

This action enables users to easily navigate between their open chat buffers. A
chat buffer can be deleted (and removed from memory) by pressing `<C-c>`.


AGENTIC WORKFLOWS


[!WARNING] Agentic workflows may result in the significant consumption of
tokens if you’re using an external generative AI service.
As outlined in Andrew Ng’s tweet
<https://twitter.com/AndrewYNg/status/1773393357022298617>, agentic workflows
have the ability to dramatically improve the output of an LLM and can be as
simple as prompting an LLM multiple times. The plugin supports this via the use
of workflows. At various stages of the workflow, the plugin will automatically
prompt the LLM for feedback and self-reflection without any input from the
user.

Currently, the plugin only supports _“reflection”_ (multiple prompts within
the same application) and comes with the following workflows:

- Adding a new feature
- Refactoring code

Of course you can add new workflows by following the RECIPES <RECIPES.md>
guide.


INLINE CODE


[!NOTE] The options available to the user in the Action Palette will depend on
the Vim mode.
These actions utilize the `inline` strategy. They can be useful for writing
inline code in a buffer or even refactoring a visual selection; all based on a
user’s prompt. The actions are designed to write code for the buffer filetype
@@ -384,18 +414,19 @@ prompt, similar to GitHub Copilot Chat
- `/tests` to create unit tests for the selected code


[!NOTE] The options available to the user in the Action Palette will depend on
the Vim mode.

CODE ADVISOR


[!NOTE] This option is only available in visual mode
As the name suggests, this action provides advice on a visual selection of code
and utilises the `chat` strategy. The response from the API is streamed into a
chat buffer which follows the `display.chat` settings in your configuration.


LSP ASSISTANT


[!NOTE] This option is only available in visual mode
Taken from the fantastic Wtf.nvim <https://github.com/piersolenski/wtf.nvim>
plugin, this action provides advice on how to correct any LSP diagnostics which
are present on the visually selected lines. Again, the `send_code = false`
118 changes: 118 additions & 0 deletions lua/codecompanion/actions.lua
@@ -289,6 +289,124 @@ M.static.actions = {
},
},
},
{
name = "Agentic Workflows...",
strategy = "chat",
description = "Workflows to improve the performance of your LLM",
picker = {
prompt = "Select a workflow",
items = {
{
name = "Code a feature - Outline, draft, consider and then revise",
callback = function(context)
local agent = require("codecompanion.agent")
return agent
.new({
context = context,
strategy = "chat",
})
:workflow({
{
role = "system",
content = "You are an expert coder and helpful assistant who can help outline, draft, consider and revise code for the "
Contributor: There are some prompting techniques that could be helpful for getting better answers. This is something I've gotten from Jeremy Howard at fast.ai:

"You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so. Always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question."

Owner (author): These are great suggestions. Would you PR these?

.. context.filetype
.. " language.",
start = true,
},
{
condition = function()
return context.is_visual
end,
contains_code = true,
role = "user",
content = "Here is some relevant context: " .. send_code(context),
start = true,
},
{
role = "user",
content = "I want you to help me code a feature. Before we write any code let's outline how we'll architect and implement the feature with the context you already have. The feature I'd like to add is ",
start = true,
},
{
role = "user",
content = "Thanks. Now let's draft the code for the feature.",
auto_submit = true,
},
{
role = "user",
content = "Great. Now let's consider the code. I'd like you to check it carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it.",
auto_submit = true,
},
{
role = "user",
content = "Thanks. Now let's revise the code based on the feedback.",
auto_submit = true,
},
{
role = "user",
content = "For clarity, can you show the final code without any explanations?",
auto_submit = true,
},
Contributor (on lines +340 to +349): Since this is where we generate the final output, what do you think about just using one prompt? "Thanks. Now let's revise the code based on the feedback, without additional explanations."

})
end,
},
{
name = "Refactor some code - Outline, draft, consider and then revise",
callback = function(context)
local agent = require("codecompanion.agent")
return agent
.new({
context = context,
strategy = "chat",
})
:workflow({
{
role = "system",
content = "You are an expert coder and helpful assistant who can help outline, draft, consider and revise code for the "
.. context.filetype
.. " language.",
start = true,
},
{
condition = function()
return context.is_visual
end,
contains_code = true,
role = "user",
content = "Here is some relevant context: " .. send_code(context),
start = true,
},
{
role = "user",
content = "I want you to help me with a refactor. Before we write any code let's outline how we'll architect and implement the code with the context you already have. What I'm looking to achieve is ",
start = true,
},
{
role = "user",
content = "Thanks. Now let's draft the code for the refactor.",
auto_submit = true,
},
{
role = "user",
content = "Great. Now let's consider the code. I'd like you to check it carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it.",
auto_submit = true,
},
{
role = "user",
content = "Thanks. Now let's revise the code based on the feedback.",
auto_submit = true,
},
{
role = "user",
content = "For clarity, can you show the final code without any explanations?",
auto_submit = true,
},
})
end,
},
},
},
},
{
name = "Inline code ...",
strategy = "inline",
52 changes: 52 additions & 0 deletions lua/codecompanion/agent.lua
@@ -0,0 +1,52 @@
local config = require("codecompanion.config")

---@class CodeCompanion.Agent
local Agent = {}

---@class CodeCompanion.AgentArgs
---@field context table
---@field strategy string

---@param args table
---@return CodeCompanion.Agent
function Agent.new(args)
return setmetatable(args, { __index = Agent })
end

---@param prompts table
function Agent:workflow(prompts)
local starting_prompts = {}
local workflow_prompts = {}

for _, prompt in ipairs(prompts) do
if prompt.start then
if
(type(prompt.condition) == "function" and not prompt.condition())
or (prompt.contains_code and not config.options.send_code)
then
goto continue
end

table.insert(starting_prompts, {
role = prompt.role,
content = prompt.content,
})
else
table.insert(workflow_prompts, {
role = prompt.role,
content = prompt.content,
auto_submit = prompt.auto_submit,
})
end
::continue::
end

return require("codecompanion.strategies.chat").new({
type = "chat",
messages = starting_prompts,
workflow = workflow_prompts,
show_buffer = true,
})
end

return Agent
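
For context, a small sketch of how `Agent:workflow` handles its input, mirroring the calls added in `actions.lua` above. Here `context` and `send_code` are assumptions: `context` stands for the table the action palette passes to callbacks, and `send_code` stands in for the helper `actions.lua` uses to extract the selection.

```lua
-- Illustrative only: shows which prompts become opening messages and which are
-- queued for auto-submission, per the gating in Agent:workflow above.
local function run_draft_workflow(context, send_code)
  local agent = require("codecompanion.agent").new({ context = context, strategy = "chat" })

  return agent:workflow({
    -- Sent as the chat buffer's opening messages (start = true).
    { role = "system", content = "You are an expert " .. context.filetype .. " coder.", start = true },
    -- Also a starting prompt, but skipped when the action was not run from a visual
    -- selection or when the user has set `send_code = false` in their config.
    {
      role = "user",
      content = "Here is some relevant context: " .. send_code(context),
      start = true,
      contains_code = true,
      condition = function()
        return context.is_visual
      end,
    },
    -- Queued as a workflow prompt and auto-submitted back to the LLM at the next stage.
    { role = "user", content = "Now draft the code.", auto_submit = true },
  })
end
```
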
6 changes: 4 additions & 2 deletions lua/codecompanion/init.lua
@@ -97,7 +97,9 @@ M.actions = function(args)
format = function(item)
local formatted_item = {}
for _, column in ipairs(opts.columns) do
table.insert(formatted_item, item[column] or "")
if item[column] ~= nil then
table.insert(formatted_item, item[column] or "")
end
end
return formatted_item
end,
@@ -119,7 +121,7 @@
}
picker(actions.validate(item.picker.items(), context), picker_opts, selection)
elseif item and type(item.callback) == "function" then
return item.callback(selection)
return item.callback(context)
else
local Strategy = require("codecompanion.strategy")
return Strategy.new({