Releases · JusticeRage/Gepetto
v1.5.1
This release makes Gepetto discoverable by hcli, IDA's plugin manager.
What's Changed
- ida-plugin.json: fix syntax by @williballenthin in #116
- Decoupling tool call handling from tool code by @JusticeRage in #118
- fix(tool): sanitize names and wrap results/errors by @quippy-dev in #121
- feat(ui): add reasoning stream and log categories by @quippy-dev in #117
- fix: address compatibility issues and runtime errors by @quippy-dev in #125
- Thread-safe IDA tooling refresh by @quippy-dev in #126
- feat(ui): introduce resizable splitter for conversation and log views by @quippy-dev in #127
- feat(models): async model discovery & menu refresh by @quippy-dev in #128
- fix(models): set Gemini function_response call id by @quippy-dev in #129
- feat(ida): add new IDA tools and thread-safe helpers by @quippy-dev in #131
- Localize untranslated Gepetto strings across locales by @JusticeRage in #132
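Several of the items above (#121, #126, #131) deal with thread safety: IDA's database APIs must be called from the UI thread, so work done on background threads (such as handling an LLM tool call) has to be marshaled through ida_kernwin.execute_sync. Below is a minimal sketch of that pattern, using a hypothetical helper name rather than Gepetto's actual code:

```python
import ida_kernwin

def run_on_main_thread(func, write=False):
    """Run func() on IDA's UI thread and return its result.

    Background threads (e.g. an LLM tool-call handler) must not touch the
    IDB directly; execute_sync marshals the call onto the main thread.
    """
    result = {}

    def wrapper():
        result["value"] = func()
        return 1  # execute_sync expects an integer return code

    flags = ida_kernwin.MFF_WRITE if write else ida_kernwin.MFF_READ
    ida_kernwin.execute_sync(wrapper, flags)
    return result.get("value")
```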
v1.5.0
New features
- This release starts the shift towards agentic reverse-engineering by exposing many tools to the LLM, available via the CLI (see the sketch after this list).
- A new window dedicated to Gepetto was added (by @quippy-dev, who also vastly improved Gemini support)
- Many minor improvements and bugfixes
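To make "exposing tools to the LLM" concrete, here is a hedged sketch of how one of the actions listed below (function renaming, #95) could be described to a model using the OpenAI function-calling schema. The schema and handler shown are illustrative assumptions, not Gepetto's actual definitions:

```python
import idc

# Illustrative tool description in OpenAI's function-calling format.
RENAME_FUNCTION_TOOL = {
    "type": "function",
    "function": {
        "name": "rename_function",
        "description": "Rename the function located at the given address.",
        "parameters": {
            "type": "object",
            "properties": {
                "address": {"type": "string", "description": "Hex address, e.g. 0x401000"},
                "new_name": {"type": "string", "description": "New function name"},
            },
            "required": ["address", "new_name"],
        },
    },
}

def handle_rename_function(arguments: dict) -> str:
    """Execute the model's tool call and return a short result string."""
    ea = int(arguments["address"], 16)
    ok = idc.set_name(ea, arguments["new_name"], idc.SN_NOWARN)
    return "renamed" if ok else "rename failed"
```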
What's Changed
- Added support for GPT-4o-mini by @nonetype in #48
- feat: update novita models and readme by @jasonhp in #53
- Add support to LM Studio by @D3fau4 in #58
- Added support for DeepSeek-Chat by @marsharinco in #52
- Fix crash when Ollama or LMStudio isn't configured by @devnoname120 in #61
- Add support for openrouter by @felipejfc in #62
- Add support for Azure OpenAI models by @0xa13d in #66
- fix: the keyword of this parameter should be ‘proxy’ not ‘proxies’ by @jindaxia in #68
- Add three providers by @jindaxia in #69
- Add some features and fixes by @jindaxia in #71
- Add kluster.ai Support by @themacexpert in #72
- Better error handling when loading available models 🚀✨ by @mahmoudimus in #75
- Improve translation installation by @mahmoudimus in #76
- Add deepseek-r1 support by @marsharinco in #80
- Support for Gemini models by @Xplo8E in #83
- UI for renaming variables by @Albeoris in #85
- Add AI-assisted function commenting in Hex-Rays pseudocode by @Albeoris in #86
- Add localization for AI-generated comments by @Albeoris in #87
- Ensure gettext strings present in all locales by @JusticeRage in #89
- Enable CLI tool by @JusticeRage in #91
- Fix OpenAI-compatible plugin configuration by @JusticeRage in #92
- Add auto-rename action and function renaming by @JusticeRage in #93
- Add tool for renaming IDA local variables by @JusticeRage in #94
- Add function rename tool for IDA by @JusticeRage in #95
- Add get_ea tool for resolving symbol addresses by @JusticeRage in #97
- Add list_symbols tool for enumerating IDA symbols by @JusticeRage in #98
- Add decimal to hex conversion tool by @JusticeRage in #99
- Add search tool for text and hex pattern lookups by @JusticeRage in #100
- Add disassembly retrieval tool by @JusticeRage in #101
- Fix plugin loading on IDA < 9 (interactive) & update OpenAI API keys link by @Epieos in #102
- feat(ui): add dockable status panel with streaming output by @quippy-dev in #107
- add metadata for plugin repository by @williballenthin in #105
- feat(models): migrate Gemini to google-genai with tools/stream by @quippy-dev in #112
- fix(ida): stabilize status panel actions by @quippy-dev in #111
- fix(ui): Qt5/Qt6 groundwork; wire request timers by @quippy-dev in #113
- refactor(models): rework Gemini streaming & tools by @quippy-dev in #114
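Several of the tools listed above resolve and enumerate symbols. As an illustration of what the address-resolution tool from #97 might wrap, here is a minimal sketch built on IDA's standard name-lookup API; it is an assumption about the approach, not Gepetto's exact implementation:

```python
import idc

def get_ea(symbol_name: str):
    """Resolve a symbol name to its effective address, or None if unknown."""
    ea = idc.get_name_ea_simple(symbol_name)
    return None if ea == idc.BADADDR else ea

# Usage: get_ea("main") returns the function's address, or None if the
# name does not exist in the current database.
```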
Full Changelog: v1.4.1...v1.5.0
v1.4.1
v1.4
- Local models exposed via Ollama are now supported!
- Ollama will only appear in the menu if it is running locally (see the detection sketch below).
- Added a CLI interface in IDA Pro so that users can converse with the selected model directly.
- Important refactoring under the hood so that menus can be generated dynamically based on all available models.
- An added benefit is that third parties trying to add support for more models should have an easier time.
⚠ A new dependency was added (ollama); make sure you run pip install -r requirements.txt again after updating! ⚠
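Showing Ollama only when it is running implies a quick liveness check against its local HTTP endpoint before the menu is built. A minimal sketch of such a check, assuming Ollama's default address http://localhost:11434 (the helper name and wiring are illustrative, not Gepetto's actual code):

```python
import urllib.error
import urllib.request

def ollama_is_running(base_url: str = "http://localhost:11434") -> bool:
    """Return True if a local Ollama server answers on its default port."""
    try:
        with urllib.request.urlopen(base_url, timeout=1) as response:
            return response.status == 200
    except (urllib.error.URLError, OSError):
        return False

# The menu builder can then skip Ollama models when the server is down:
# if ollama_is_running():
#     add_ollama_models_to_menu()  # hypothetical hook
```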
v1.3
Added support for a variety of non-OpenAI models
v1.2
