
Releases: run-llama/llama_index

v0.12.1

21 Nov 03:17
3d00f90

v0.12.0 (2024-11-17)

18 Nov 17:44
49416d2

NOTE: Updating to v0.12.0 will require bumping every other llama-index-* package! Every package has had a version bump. Only notable changes are below.
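Since every llama-index-* package received a version bump, upgrading piecemeal can leave incompatible versions installed. A minimal upgrade sketch (the integration packages listed are examples drawn from this release's notes; substitute whichever integrations your environment actually uses):

```shell
# Upgrade the core package together with installed integrations so their
# version constraints stay mutually compatible after the v0.12.0 bump.
pip install -U llama-index-core \
  llama-index-llms-bedrock-converse \
  llama-index-vector-stores-weaviate
```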

llama-index-core [0.12.0]

  • Dropped Python 3.8 support and unpinned numpy (#16973)
  • Dynamic property graph triplet retrieval limit (#16928)

llama-index-indices-managed-llama-cloud [0.6.1]

  • Add ID support for LlamaCloudIndex, update from_documents logic, and modernize APIs (#16927)
  • Allow skipping the wait for ingestion when uploading files (#16934)
  • Add support for files endpoints (#16933)

llama-index-indices-managed-vectara [0.3.0]

  • Add custom prompt parameter (#16976)

llama-index-llms-bedrock [0.3.0]

  • Minor fix for messages/completion-to-prompt conversion (#15729)

llama-index-llms-bedrock-converse [0.4.0]

  • Fix async streaming with bedrock converse (#16942)

llama-index-multi-modal-llms-nvidia [0.2.0]

  • Version bump only
llama-index-readers-confluence [0.3.0]

  • Permit passing parameters to the Confluence client (#16961)

llama-index-readers-github [0.5.0]

  • Add base URL extraction method to GithubRepositoryReader (#16926)

llama-index-vector-stores-weaviate [1.2.0]

  • Allow passing in Weaviate vector store kwargs (#16954)

v0.11.23

12 Nov 05:05
e4ff8c8

v0.11.22

05 Nov 21:48
56358e5

v0.11.21

01 Nov 14:49
35234d2

v0.11.20

25 Oct 05:03
5902cf8

v0.11.19

19 Oct 21:59
00fe1d1

v0.11.18

15 Oct 14:50
560fa4a

v0.11.17

09 Oct 01:29
65946eb

v0.11.16

04 Oct 05:23
977d60a