Bump llama-index from 0.12.22 to 0.12.23 #486

Merged
merged 2 commits into main from dependabot/pip/llama-index-0.12.23 on Mar 10, 2025

Conversation

dependabot[bot] (Contributor) commented on behalf of GitHub on Mar 10, 2025

Bumps llama-index from 0.12.22 to 0.12.23.

Changelog

Sourced from llama-index's changelog.

llama-index-core [0.12.23]

  • added merging_separator argument to allow for specifying chunk merge separator in semantic splitter (#18027)
  • Add support for running single-agent workflows within the BaseWorkflowAgent class (#18038)
  • Fix the error raised when ReactAgent is created without an explicit system message (#18041)
  • add a field keep_whitespaces to TokenTextSplitter (#17998); see the sketch after this list
  • do not convert raw tool output to string in AgentWorkflow (#18006)
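
To make the splitter changes above concrete, here is a minimal sketch of the new keep_whitespaces field on TokenTextSplitter; the exact behavior (preserving chunk whitespace rather than stripping it) is an assumption based on the changelog wording, and merging_separator is the analogous new argument on the semantic splitter.

```python
# Minimal sketch, assuming the 0.12.23 behavior matches the changelog wording.
from llama_index.core import Document
from llama_index.core.node_parser import TokenTextSplitter

# keep_whitespaces is the new field from #17998; assumed to keep each chunk's
# original leading/trailing whitespace instead of stripping it.
splitter = TokenTextSplitter(
    chunk_size=64,
    chunk_overlap=8,
    keep_whitespaces=True,
)

nodes = splitter.get_nodes_from_documents(
    [Document(text="    indented line\n\nfollowed by a normal paragraph")]
)
print([node.text for node in nodes])
```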

llama-index-embeddings-ollama [0.6.0]

  • feat: add client_kwargs Parameter to OllamaEmbedding Class (#18012); see the sketch below
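
A hedged sketch of the new client_kwargs parameter: based on the entry above, the assumption is that these keyword arguments are forwarded to the underlying Ollama client, and the model name and URL are placeholders.

```python
# Hedged sketch: client_kwargs is assumed to be forwarded to the underlying
# Ollama client (per the 0.6.0 entry above); model name and URL are placeholders.
from llama_index.embeddings.ollama import OllamaEmbedding

embed_model = OllamaEmbedding(
    model_name="nomic-embed-text",
    base_url="http://localhost:11434",
    client_kwargs={"timeout": 60},  # assumed to reach the HTTP client config
)

vector = embed_model.get_text_embedding("hello world")
print(len(vector))
```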

llama-index-llms-anthropic [0.6.10]

  • anthropic caching and thinking updates (#18039)
  • allow caching of tool results (#18028)
  • support caching of anthropic system prompt (#18008)
  • Ensure resuming a workflow actually works (#18023)
  • [MarkdownNodeParser] Adding customizable header path separator char (#17964)
  • feat: return event instance from run() when stop event is custom (#18001)

llama-index-llms-azure-openai [0.3.2]

  • AzureOpenAI: api_base and azure_endpoint are mutually exclusive (#18037); see the sketch after this list
  • Add base_url to AzureOpenAI (#17996)
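
To illustrate the mutual-exclusivity note, a hedged sketch that configures only azure_endpoint (not api_base); the deployment name, model, API version, and key handling are placeholders.

```python
# Hedged sketch: set azure_endpoint OR api_base, not both (per #18037).
# Deployment name, model, and API version below are placeholders.
import os

from llama_index.llms.azure_openai import AzureOpenAI

llm = AzureOpenAI(
    engine="my-gpt-4o-deployment",  # Azure deployment name (placeholder)
    model="gpt-4o",
    azure_endpoint="https://my-resource.openai.azure.com/",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-08-01-preview",
)

print(llm.complete("ping").text)
```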

llama-index-llms-bedrock-converse [0.4.8]

  • message text is required in boto3 model (#17989)

llama-index-llms-ollama [0.5.3]

  • Make request_timeout in Ollama LLM optional (#18007); see the sketch below
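
A small sketch of what the optional request_timeout looks like in practice; per the entry above, both forms are assumed to work, and the model name is a placeholder.

```python
# Hedged sketch: per #18007, request_timeout can now be omitted.
from llama_index.llms.ollama import Ollama

llm_default = Ollama(model="llama3.1")  # rely on the library default timeout
llm_patient = Ollama(model="llama3.1", request_timeout=300.0)  # or set one explicitly

print(llm_patient.complete("Say hi in one word.").text)
```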

llama-index-llms-mistralai [0.4.0]

  • MistralAI support for multimodal content blocks (#17997); see the sketch below
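
A hedged sketch of multimodal content blocks with the MistralAI LLM; the content-block classes come from llama-index core, and the multimodal model name and image URL are placeholders.

```python
# Hedged sketch: assumes MistralAI 0.4.0 accepts core content blocks (#17997).
# The model name and image URL are placeholders.
from llama_index.core.llms import ChatMessage, ImageBlock, TextBlock
from llama_index.llms.mistralai import MistralAI

llm = MistralAI(model="pixtral-large-latest")

message = ChatMessage(
    role="user",
    blocks=[
        ImageBlock(url="https://example.com/diagram.png"),
        TextBlock(text="Describe this diagram in one sentence."),
    ],
)

print(llm.chat([message]).message.content)
```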

llama-index-readers-file [0.4.6]

  • Bugfix: Use torch.no_grad() in inference in ImageVisionLLMReader when PyTorch is installed (#17970)

llama-index-storage-chat-store-mongo [0.1.0]

  • Feat/mongo chat store (#17979)

... (truncated)

Commits
  • 4c8d1d6 v0.12.23 (#18050)
  • 54e0a2c Change AgentWorkflow to FunctionAgent in documentation. (#18042)
  • 13de07b chore: bump jinja version in docs dependencies (#18047)
  • d3a861f added merging_separator argument to allow for specifying chunk merge (#18027)
  • d569009 Fix the error raised when ReactAgent is created without an explicit system me...
  • 3dee964 Add support for running single-agent workflows within the BaseWorkflowAgent c...
  • 8b3e456 chore: skip deeplake tests when running on CI (#18015)
  • 37243c8 build(deps-dev): bump jinja2 from 3.1.5 to 3.1.6 (#18024)
  • 02a4c93 build(deps-dev): bump jinja2 from 3.1.5 to 3.1.6 in /llama-index-core (#18025)
  • fa51d5a AzureOpenAI: api_base and azure_endpoint are mutually exclusive (#18037)
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.12.22 to 0.12.23.
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](run-llama/llama_index@v0.12.22...v0.12.23)

---
updated-dependencies:
- dependency-name: llama-index
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot added the dependencies Pull requests that update a dependency file label Mar 10, 2025
@niklassiemer niklassiemer merged commit d5c7d66 into main Mar 10, 2025
5 of 6 checks passed
@niklassiemer niklassiemer deleted the dependabot/pip/llama-index-0.12.23 branch March 10, 2025 10:09