feat: add configurable LLM settings for mem0_memory tool #221
Description
This PR adds support for configurable LLM and embedder settings in the `mem0_memory` tool through environment variables, allowing users to customize the language model and embedding model used for memory processing.

Changes made:
- Add `MEM0_LLM_PROVIDER`, `MEM0_LLM_MODEL`, `MEM0_LLM_TEMPERATURE`, and `MEM0_LLM_MAX_TOKENS` environment variables
- Add `MEM0_EMBEDDER_PROVIDER` and `MEM0_EMBEDDER_MODEL` environment variables
- Update `DEFAULT_CONFIG` to properly use `os.environ.get()` with appropriate defaults and type conversion

This allows users to easily switch between different LLM providers (OpenAI, Azure OpenAI, AWS Bedrock, etc.) and models without code changes, addressing the limitation where the LLM configuration was previously hardcoded.
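The changes above could look roughly like the following sketch. The exact config keys follow the mem0 configuration schema, and the default values shown here (provider and model names) are illustrative assumptions, not necessarily the defaults used in this PR:

```python
import os

# Hypothetical sketch of DEFAULT_CONFIG reading the new environment
# variables. Environment variables are always strings, so numeric
# settings need explicit type conversion.
DEFAULT_CONFIG = {
    "llm": {
        "provider": os.environ.get("MEM0_LLM_PROVIDER", "openai"),
        "config": {
            "model": os.environ.get("MEM0_LLM_MODEL", "gpt-4o-mini"),
            "temperature": float(os.environ.get("MEM0_LLM_TEMPERATURE", "0.1")),
            "max_tokens": int(os.environ.get("MEM0_LLM_MAX_TOKENS", "2000")),
        },
    },
    "embedder": {
        "provider": os.environ.get("MEM0_EMBEDDER_PROVIDER", "openai"),
        "config": {
            "model": os.environ.get("MEM0_EMBEDDER_MODEL", "text-embedding-3-small"),
        },
    },
}
```

With this pattern, setting e.g. `MEM0_LLM_PROVIDER=aws_bedrock` in the environment switches providers without touching code, while unset variables fall back to the defaults.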
Related Issues
Fixes #220
Documentation PR
Documentation updates are included in this PR (README.md and module docstrings).
Type of Change
New Tool
Testing
How have you tested the change?
Test commands run: