ValueError: Creator not registered for key: LLMType.OLLAMA #1553

Open
vanhocpham opened this issue Oct 30, 2024 · 6 comments

@vanhocpham

Bug description

I am using MetaGPT 0.8.1, but when I use RAG with the SimpleEngine.from_docs method, I get the error ValueError: Creator not registered for key: LLMType.OLLAMA.
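For reference, a minimal script along these lines reproduces it (the input file path and query are hypothetical placeholders; it assumes the config2.yaml shown below):

# Minimal reproduction sketch; input file and query are hypothetical placeholders.
import asyncio

from metagpt.rag.engines import SimpleEngine

async def main():
    # from_docs builds the index; with api_type "ollama", the embedding factory
    # lookup for LLMType.OLLAMA is what raises on 0.8.1
    engine = SimpleEngine.from_docs(input_files=["data/sample.txt"])
    print(await engine.aquery("What does the document say?"))

asyncio.run(main())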

Environment information

  • LLM type and model name: ollama and model: hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF
  • System version:
  • Python version: 3.10
  • MetaGPT version or branch: 0.8.1
  • packages version:
  • installation method:

Screenshots or logs

config2.yaml

embedding:
  api_type: "ollama"
  model: "hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF"
  base_url: "http://127.0.0.1:11434/api"

llm:
  api_type: "ollama"
  model: "hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF"
  base_url: "http://127.0.0.1:11434/api"

Error Response
/usr/local/lib/python3.10/dist-packages/metagpt/rag/factories/base.py in get_instance(self, key, **kwargs)
27 return creator(**kwargs)
28
---> 29 raise ValueError(f"Creator not registered for key: {key}")
30
31

ValueError: Creator not registered for key: LLMType.OLLAMA

@better629
Collaborator

@vanhocpham try the main branch; it supports ollama, see https://github.com/geekan/MetaGPT/blob/main/metagpt/rag/factories/embedding.py#L26
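For context, here is a sketch paraphrased from the metagpt/rag/factories/base.py frames visible in the traceback above (illustrative only, not the real file): the factory resolves a creator callable per key and raises when none is registered, and 0.8.1 simply ships no entry for LLMType.OLLAMA.

# Sketch paraphrased from the base.py traceback frames shown above; illustrative only.
class GenericFactory:
    def __init__(self, creators=None):
        self._creators = creators or {}

    def get_instance(self, key, **kwargs):
        creator = self._creators.get(key)
        if creator:
            return creator(**kwargs)
        # the failing line: no creator was mapped to LLMType.OLLAMA in 0.8.1
        raise ValueError(f"Creator not registered for key: {key}")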

@vanhocpham
Author

@better629 If I install it with pip, will that work?

@better629
Collaborator

@vanhocpham refs to https://docs.deepwisdom.ai/main/en/guide/get_started/installation.html#install-in-development-mode

cd MetaGPT
pip3 install -e .
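A quick, purely illustrative check that Python now imports the editable install rather than the old copy:

# Illustrative check: this path should point into the cloned MetaGPT repo,
# not /usr/local/lib/python3.10/dist-packages
import metagpt
print(metagpt.__file__)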

@vanhocpham
Author

@better629 Thanks, I will try it and get back to you.

@vanhocpham
Author

@better629 I tried what you suggested, but another error occurred:

/usr/local/lib/python3.10/dist-packages/metagpt/rag/engines/simple.py in from_docs(cls, input_dir, input_files, transformations, embed_model, llm, retriever_configs, ranker_configs)
    116         nodes = run_transformations(documents, transformations=transformations)
    117 
--> 118         return cls._from_nodes(
    119             nodes=nodes,
    120             transformations=transformations,

/usr/local/lib/python3.10/dist-packages/metagpt/rag/engines/simple.py in _from_nodes(cls, nodes, transformations, embed_model, llm, retriever_configs, ranker_configs)
    230         llm = llm or get_rag_llm()
    231 
--> 232         retriever = get_retriever(configs=retriever_configs, nodes=nodes, embed_model=embed_model)
    233         rankers = get_rankers(configs=ranker_configs, llm=llm)  # Default []
    234 

/usr/local/lib/python3.10/dist-packages/metagpt/rag/factories/retriever.py in get_retriever(self, configs, **kwargs)
     72             return self._create_default(**kwargs)
     73 
---> 74         retrievers = super().get_instances(configs, **kwargs)
     75 
     76         return SimpleHybridRetriever(*retrievers) if len(retrievers) > 1 else retrievers[0]

/usr/local/lib/python3.10/dist-packages/metagpt/rag/factories/base.py in get_instances(self, keys, **kwargs)
     16     def get_instances(self, keys: list[Any], **kwargs) -> list[Any]:
     17         """Get instances by keys."""
---> 18         return [self.get_instance(key, **kwargs) for key in keys]
     19 
     20     def get_instance(self, key: Any, **kwargs) -> Any:

/usr/local/lib/python3.10/dist-packages/metagpt/rag/factories/base.py in <listcomp>(.0)
     16     def get_instances(self, keys: list[Any], **kwargs) -> list[Any]:
     17         """Get instances by keys."""
---> 18         return [self.get_instance(key, **kwargs) for key in keys]
     19 
     20     def get_instance(self, key: Any, **kwargs) -> Any:

/usr/local/lib/python3.10/dist-packages/metagpt/rag/factories/base.py in get_instance(self, key, **kwargs)
     44         creator = self._creators.get(type(key))
     45         if creator:
---> 46             return creator(key, **kwargs)
     47 
     48         self._raise_for_key(key)

/usr/local/lib/python3.10/dist-packages/metagpt/rag/factories/retriever.py in _create_faiss_retriever(self, config, **kwargs)
     87 
     88     def _create_faiss_retriever(self, config: FAISSRetrieverConfig, **kwargs) -> FAISSRetriever:
---> 89         config.index = self._build_faiss_index(config, **kwargs)
     90 
     91         return FAISSRetriever(**config.model_dump())

/usr/local/lib/python3.10/dist-packages/metagpt/rag/factories/retriever.py in wrapper(self, config, **kwargs)
     45         if index is not None:
     46             return index
---> 47         return build_index_func(self, config, **kwargs)
     48 
     49     return wrapper

/usr/local/lib/python3.10/dist-packages/metagpt/rag/factories/retriever.py in _build_faiss_index(self, config, **kwargs)
    128         vector_store = FaissVectorStore(faiss_index=faiss.IndexFlatL2(config.dimensions))
    129 
--> 130         return self._build_index_from_vector_store(config, vector_store, **kwargs)
    131 
    132     @get_or_build_index

/usr/local/lib/python3.10/dist-packages/metagpt/rag/factories/retriever.py in _build_index_from_vector_store(self, config, vector_store, **kwargs)
    156     ) -> VectorStoreIndex:
    157         storage_context = StorageContext.from_defaults(vector_store=vector_store)
--> 158         index = VectorStoreIndex(
    159             nodes=self._extract_nodes(config, **kwargs),
    160             storage_context=storage_context,

/usr/local/lib/python3.10/dist-packages/llama_index/core/indices/vector_store/base.py in __init__(self, nodes, use_async, store_nodes_override, embed_model, insert_batch_size, objects, index_struct, storage_context, callback_manager, transformations, show_progress, **kwargs)
     74 
     75         self._insert_batch_size = insert_batch_size
---> 76         super().__init__(
     77             nodes=nodes,
     78             index_struct=index_struct,

/usr/local/lib/python3.10/dist-packages/llama_index/core/indices/base.py in __init__(self, nodes, objects, index_struct, storage_context, callback_manager, transformations, show_progress, **kwargs)
     75             if index_struct is None:
     76                 nodes = nodes or []
---> 77                 index_struct = self.build_index_from_nodes(
     78                     nodes + objects,  # type: ignore
     79                     **kwargs,  # type: ignore

/usr/local/lib/python3.10/dist-packages/llama_index/core/indices/vector_store/base.py in build_index_from_nodes(self, nodes, **insert_kwargs)
    308             print("Some nodes are missing content, skipping them...")
    309 
--> 310         return self._build_index_from_nodes(content_nodes, **insert_kwargs)
    311 
    312     def _insert(self, nodes: Sequence[BaseNode], **insert_kwargs: Any) -> None:

/usr/local/lib/python3.10/dist-packages/llama_index/core/indices/vector_store/base.py in _build_index_from_nodes(self, nodes, **insert_kwargs)
    277             run_async_tasks(tasks)
    278         else:
--> 279             self._add_nodes_to_index(
    280                 index_struct,
    281                 nodes,

/usr/local/lib/python3.10/dist-packages/llama_index/core/indices/vector_store/base.py in _add_nodes_to_index(self, index_struct, nodes, show_progress, **insert_kwargs)
    230 
    231         for nodes_batch in iter_batch(nodes, self._insert_batch_size):
--> 232             nodes_batch = self._get_node_with_embedding(nodes_batch, show_progress)
    233             new_ids = self._vector_store.add(nodes_batch, **insert_kwargs)
    234 

/usr/local/lib/python3.10/dist-packages/llama_index/core/indices/vector_store/base.py in _get_node_with_embedding(self, nodes, show_progress)
    137 
    138         """
--> 139         id_to_embed_map = embed_nodes(
    140             nodes, self._embed_model, show_progress=show_progress
    141         )

/usr/local/lib/python3.10/dist-packages/llama_index/core/indices/utils.py in embed_nodes(nodes, embed_model, show_progress)
    136             id_to_embed_map[node.node_id] = node.embedding
    137 
--> 138     new_embeddings = embed_model.get_text_embedding_batch(
    139         texts_to_embed, show_progress=show_progress
    140     )

/usr/local/lib/python3.10/dist-packages/llama_index/core/instrumentation/dispatcher.py in wrapper(func, instance, args, kwargs)
    309 
    310             try:
--> 311                 result = func(*args, **kwargs)
    312                 if isinstance(result, asyncio.Future):
    313                     # If the result is a Future, wrap it

/usr/local/lib/python3.10/dist-packages/llama_index/core/base/embeddings/base.py in get_text_embedding_batch(self, texts, show_progress, **kwargs)
    333                     payload={EventPayload.SERIALIZED: self.to_dict()},
    334                 ) as event:
--> 335                     embeddings = self._get_text_embeddings(cur_batch)
    336                     result_embeddings.extend(embeddings)
    337                     event.on_end(

/usr/local/lib/python3.10/dist-packages/llama_index/embeddings/ollama/base.py in _get_text_embeddings(self, texts)
     73         embeddings_list: List[List[float]] = []
     74         for text in texts:
---> 75             embeddings = self.get_general_text_embedding(text)
     76             embeddings_list.append(embeddings)
     77 

/usr/local/lib/python3.10/dist-packages/llama_index/embeddings/ollama/base.py in get_general_text_embedding(self, texts)
     86     def get_general_text_embedding(self, texts: str) -> List[float]:
     87         """Get Ollama embedding."""
---> 88         result = self._client.embeddings(
     89             model=self.model_name, prompt=texts, options=self.ollama_additional_kwargs
     90         )

/usr/local/lib/python3.10/dist-packages/ollama/_client.py in embeddings(self, model, prompt, options, keep_alive)
    279     keep_alive: Optional[Union[float, str]] = None,
    280   ) -> Mapping[str, Sequence[float]]:
--> 281     return self._request(
    282       'POST',
    283       '/api/embeddings',

/usr/local/lib/python3.10/dist-packages/ollama/_client.py in _request(self, method, url, **kwargs)
     73       response.raise_for_status()
     74     except httpx.HTTPStatusError as e:
---> 75       raise ResponseError(e.response.text, e.response.status_code) from None
     76 
     77     return response

ResponseError: 404 page not found

@seehi
Contributor

seehi commented Oct 31, 2024

Try removing "/api" from the base_url of the embedding. The ollama client appends "/api/embeddings" to the base URL itself (see the ollama/_client.py frame above), so a base_url ending in "/api" makes it POST to /api/api/embeddings, which returns the 404. For example:

embedding:
  api_type: "ollama"
  model: "hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF"
  base_url: "http://127.0.0.1:11434"
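To sanity-check the endpoint directly with the ollama Python client (the prompt string is arbitrary):

# host must not end in /api; the client appends /api/embeddings itself
from ollama import Client

client = Client(host="http://127.0.0.1:11434")
result = client.embeddings(
    model="hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF",
    prompt="hello",
)
print(len(result["embedding"]))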
