dependabot[bot] opened a new pull request, #10650: URL: https://github.com/apache/gravitino/pull/10650
Bumps [llama-index](https://github.com/run-llama/llama_index) from 0.13.0 to 0.14.19. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/run-llama/llama_index/releases">llama-index's releases</a>.</em></p> <blockquote> <h2>v0.14.19</h2> <h1>Release Notes</h1> <h2>[2026-03-25]</h2> <h3>llama-index-agent-agentmesh [0.2.0]</h3> <ul> <li>chore(deps): bump the uv group across 49 directories with 1 update (<a href="https://redirect.github.com/run-llama/llama_index/pull/21083">#21083</a>)</li> </ul> <h3>llama-index-callbacks-argilla [0.5.0]</h3> <ul> <li>chore(deps): bump the uv group across 3 directories with 1 update (<a href="https://redirect.github.com/run-llama/llama_index/pull/21069">#21069</a>)</li> </ul> <h3>llama-index-core [0.14.19]</h3> <ul> <li>fix: pass <code>delete_from_docstore</code> parameter in <code>BaseIndex.delete_ref_doc</code> (<a href="https://redirect.github.com/run-llama/llama_index/pull/20990">#20990</a>)</li> <li>fix(core): preserve CTE names during schema prefixing in SQLDatabase.run_sql (<a href="https://redirect.github.com/run-llama/llama_index/pull/21028">#21028</a>)</li> <li>fix(core): align sync retrieval dedup key with async (hash + ref_doc_id) (<a href="https://redirect.github.com/run-llama/llama_index/pull/21034">#21034</a>)</li> <li>fix(core): raise ValueError instead of returning string from structured_predict (<a href="https://redirect.github.com/run-llama/llama_index/pull/21036">#21036</a>)</li> <li>fix(core): remove incorrect per-node delete calls in index helpers (<a href="https://redirect.github.com/run-llama/llama_index/pull/21050">#21050</a>)</li> <li>chore(deps): bump the uv group across 49 directories with 1 update (<a href="https://redirect.github.com/run-llama/llama_index/pull/21083">#21083</a>)</li> <li>chore(deps): bump the uv group across 44 directories with 1 update (<a href="https://redirect.github.com/run-llama/llama_index/pull/21097">#21097</a>)</li> <li>enable llama-cloud>1.0 
install (<a href="https://redirect.github.com/run-llama/llama_index/pull/21140">#21140</a>)</li> </ul> <h3>llama-index-embeddings-fireworks [0.5.2]</h3> <ul> <li>test(embeddings-fireworks): add test suite and fix docs (<a href="https://redirect.github.com/run-llama/llama_index/pull/20977">#20977</a>)</li> </ul> <h3>llama-index-embeddings-upstage [0.6.1]</h3> <ul> <li>chore(deps): bump the uv group across 49 directories with 1 update (<a href="https://redirect.github.com/run-llama/llama_index/pull/21083">#21083</a>)</li> </ul> <h3>llama-index-indices-managed-llama-cloud [0.11.1]</h3> <ul> <li>fix: llama-cloud managed index and remove llamaparse reader (<a href="https://redirect.github.com/run-llama/llama_index/pull/21043">#21043</a>)</li> <li>enable llama-cloud>1.0 install (<a href="https://redirect.github.com/run-llama/llama_index/pull/21140">#21140</a>)</li> </ul> <h3>llama-index-llms-azure-openai [0.5.3]</h3> <ul> <li>azure openai responses support (<a href="https://redirect.github.com/run-llama/llama_index/pull/21088">#21088</a>)</li> <li>fix azure openai responses (<a href="https://redirect.github.com/run-llama/llama_index/pull/21099">#21099</a>)</li> </ul> <h3>llama-index-llms-bedrock-converse [0.14.3]</h3> <ul> <li>use proper tool choice format in bedrock converse (<a href="https://redirect.github.com/run-llama/llama_index/pull/21098">#21098</a>)</li> </ul> <h3>llama-index-llms-cohere [0.8.0]</h3> <ul> <li>docs(cohere): update first basic usage example to chat API (<a href="https://redirect.github.com/run-llama/llama_index/pull/21108">#21108</a>)</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md">llama-index's changelog</a>.</em></p> <blockquote> <h3>llama-index-core [0.14.19]</h3> <ul> <li>fix: pass <code>delete_from_docstore</code> parameter in <code>BaseIndex.delete_ref_doc</code> (<a href="https://redirect.github.com/run-llama/llama_index/pull/20990">#20990</a>)</li> <li>fix(core): preserve CTE names during schema prefixing in SQLDatabase.run_sql (<a href="https://redirect.github.com/run-llama/llama_index/pull/21028">#21028</a>)</li> <li>fix(core): align sync retrieval dedup key with async (hash + ref_doc_id) (<a href="https://redirect.github.com/run-llama/llama_index/pull/21034">#21034</a>)</li> <li>fix(core): raise ValueError instead of returning string from structured_predict (<a href="https://redirect.github.com/run-llama/llama_index/pull/21036">#21036</a>)</li> <li>fix(core): remove incorrect per-node delete calls in index helpers (<a href="https://redirect.github.com/run-llama/llama_index/pull/21050">#21050</a>)</li> <li>chore(deps): bump the uv group across 49 directories with 1 update (<a href="https://redirect.github.com/run-llama/llama_index/pull/21083">#21083</a>)</li> <li>chore(deps): bump the uv group across 44 directories with 1 update (<a href="https://redirect.github.com/run-llama/llama_index/pull/21097">#21097</a>)</li> <li>enable llama-cloud>1.0 install (<a href="https://redirect.github.com/run-llama/llama_index/pull/21140">#21140</a>)</li> </ul> <h3>llama-index-embeddings-fireworks [0.5.2]</h3> <ul> <li>test(embeddings-fireworks): add test suite and fix docs (<a href="https://redirect.github.com/run-llama/llama_index/pull/20977">#20977</a>)</li> </ul> <h3>llama-index-embeddings-upstage [0.6.1]</h3> <ul> <li>chore(deps): bump the uv group across 49 directories with 1 update (<a href="https://redirect.github.com/run-llama/llama_index/pull/21083">#21083</a>)</li> </ul> 
<h3>llama-index-indices-managed-llama-cloud [0.11.1]</h3> <ul> <li>fix: llama-cloud managed index and remove llamaparse reader (<a href="https://redirect.github.com/run-llama/llama_index/pull/21043">#21043</a>)</li> <li>enable llama-cloud>1.0 install (<a href="https://redirect.github.com/run-llama/llama_index/pull/21140">#21140</a>)</li> </ul> <h3>llama-index-llms-azure-openai [0.5.3]</h3> <ul> <li>azure openai responses support (<a href="https://redirect.github.com/run-llama/llama_index/pull/21088">#21088</a>)</li> <li>fix azure openai responses (<a href="https://redirect.github.com/run-llama/llama_index/pull/21099">#21099</a>)</li> </ul> <h3>llama-index-llms-bedrock-converse [0.14.3]</h3> <ul> <li>use proper tool choice format in bedrock converse (<a href="https://redirect.github.com/run-llama/llama_index/pull/21098">#21098</a>)</li> </ul> <h3>llama-index-llms-cohere [0.8.0]</h3> <ul> <li>docs(cohere): update first basic usage example to chat API (<a href="https://redirect.github.com/run-llama/llama_index/pull/21108">#21108</a>)</li> </ul> <h3>llama-index-llms-google-genai [0.9.1]</h3> <ul> <li>feat: gemini 3 default and temperature (<a href="https://redirect.github.com/run-llama/llama_index/pull/21060">#21060</a>)</li> <li>fix(google-genai): avoid mutating messages list in prepare_chat_params (<a href="https://redirect.github.com/run-llama/llama_index/pull/21141">#21141</a>)</li> </ul> <h3>llama-index-llms-litellm [0.7.1]</h3> <ul> <li>Add support for custom LLM provider in model kwargs (<a href="https://redirect.github.com/run-llama/llama_index/pull/21095">#21095</a>)</li> </ul> <h3>llama-index-llms-minimax [0.1.0]</h3> <ul> <li>feat: add MiniMax LLM provider integration with M2.7 default (<a href="https://redirect.github.com/run-llama/llama_index/pull/20955">#20955</a>)</li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/run-llama/llama_index/commit/6a3269261d0df1ea8cc5adab8e16ffda6b166d58"><code>6a32692</code></a> Release 0.14.19 (<a href="https://redirect.github.com/run-llama/llama_index/issues/21147">#21147</a>)</li> <li><a href="https://github.com/run-llama/llama_index/commit/1b21484406c09e50a6bc2727d2f0d45373af6fed"><code>1b21484</code></a> enable llama-cloud>1.0 install (<a href="https://redirect.github.com/run-llama/llama_index/issues/21140">#21140</a>)</li> <li><a href="https://github.com/run-llama/llama_index/commit/465959b10fdbf776a2a9482f7d2cb1652eab7c77"><code>465959b</code></a> fix(google-genai): avoid mutating messages list in prepare_chat_params (<a href="https://redirect.github.com/run-llama/llama_index/issues/21141">#21141</a>)</li> <li><a href="https://github.com/run-llama/llama_index/commit/c4e586301723d456c3999762c4a02e6a78f130b8"><code>c4e5863</code></a> restrict new packages (<a href="https://redirect.github.com/run-llama/llama_index/issues/21139">#21139</a>)</li> <li><a href="https://github.com/run-llama/llama_index/commit/ea76d2caee7433f0b598234c2154f12f463a5d6e"><code>ea76d2c</code></a> docs(cohere): update first basic usage example to chat API (<a href="https://redirect.github.com/run-llama/llama_index/issues/21108">#21108</a>)</li> <li><a href="https://github.com/run-llama/llama_index/commit/58ee450dc074a663b69b9be6f37a972af65b9d15"><code>58ee450</code></a> fix bedrock tests (<a href="https://redirect.github.com/run-llama/llama_index/issues/21129">#21129</a>)</li> <li><a href="https://github.com/run-llama/llama_index/commit/c346327e51eaf26c84a495f8bee1f9ea81542bc7"><code>c346327</code></a> fix azure openai responses (<a href="https://redirect.github.com/run-llama/llama_index/issues/21099">#21099</a>)</li> <li><a href="https://github.com/run-llama/llama_index/commit/2b74a92798d543ded57e7d392451ad0d64a74f8c"><code>2b74a92</code></a> fix(ollama): pass custom 
headers to auto-created clients (<a href="https://redirect.github.com/run-llama/llama_index/issues/21091">#21091</a>)</li> <li><a href="https://github.com/run-llama/llama_index/commit/edd23cc730feb78002c08ba8aade1628238c5428"><code>edd23cc</code></a> chore(deps): bump tornado from 6.5.4 to 6.5.5 in /docs/api_reference in the p...</li> <li><a href="https://github.com/run-llama/llama_index/commit/2cc2e465637c0900e8fd5cdea4bc70d0d965922c"><code>2cc2e46</code></a> feat(llms/openai): Add support for Mini and Nano variants of GPT 5.4 (<a href="https://redirect.github.com/run-llama/llama_index/issues/21065">#21065</a>)</li> <li>Additional commits viewable in <a href="https://github.com/run-llama/llama_index/compare/v0.13.0...v0.14.19">compare view</a></li> </ul> </details> <br /> [Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
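Note for reviewers: this bump crosses a minor version boundary (0.13 → 0.14) but not a major one, so under semantic-versioning conventions it may still include behavior changes (see the `llama-index-core` fixes listed above). A minimal stdlib-only sketch of how such a bump could be classified in a CI gate; the version strings are from this PR, but the gating policy itself is an illustrative assumption, not part of this repository:

```python
def parse(version: str) -> tuple[int, ...]:
    # "0.14.19" -> (0, 14, 19); assumes plain dotted-integer versions
    return tuple(int(part) for part in version.split("."))

old, new = parse("0.13.0"), parse("0.14.19")

# Classify the bump by the first component that changed.
is_major_bump = new[0] > old[0]
is_minor_bump = not is_major_bump and new[1] > old[1]

print(is_major_bump, is_minor_bump)  # False True
```

A real gate would likely use `packaging.version.Version` instead, which also handles pre-releases and epoch markers that this sketch ignores.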
--- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) </details> -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: [email protected] For queries about this service, please contact Infrastructure at: [email protected]
