Fix delta role error when using custom LLM (#3223)
* Fix delta role error when using custom LLM

  This addresses a delta chunk issue that happens when using a custom baseURL with the OpenAI chat model. Some models, such as Llama 2 on OpenRouter, may return an empty delta and cause an undefined error.

  Example error:

  ```
  Cannot read properties of undefined (reading 'role')", "error.stack": "TypeError: Cannot read properties of undefined (reading 'role')
      at _convertDeltaToMessageChunk (/home/ubuntu/node_modules/langchain/dist/chat_models/openai.cjs:72:24)
      at ChatOpenAI._streamResponseChunks (/home/ubuntu/node_modules/langchain/dist/chat_models/openai.cjs:409:27)
      at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
      at async ChatOpenAI._streamIterator (/home/ubuntu/node_modules/langchain/dist/chat_models/base.cjs:77:34)
      at async RunnableSequence._streamIterator (/home/ubuntu/node_modules/langchain/dist/schema/runnable/base.cjs:780:30)
      at async Object.pull (/home/ubuntu/node_modules/langchain/dist/util/stream.cjs:73:41)
  ```

  In `_convertDeltaToMessageChunk`, plain OpenAI works fine, but models served through OpenRouter or other baseURL LLMs can send empty deltas; the code doesn't account for that and assumes a value is always present. I was able to test with both a custom baseURL and standard OpenAI that streaming works with no errors after this tweak.

* add default role

* add ? on chunk text for streaming delta

* Simpler fix

* Revert

---------

Co-authored-by: jacoblee93 <[email protected]>
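For context, the failure mode is a streamed chunk whose `delta` is empty or missing. Below is a minimal sketch of the kind of defensive conversion the fix describes; the `Delta` type, the `convertDeltaToMessageChunk` helper name, and the `"assistant"` default role are illustrative assumptions, not the actual LangChain source.

```typescript
// Illustrative sketch only: shows guarding against an undefined/empty delta
// when converting a streamed chat completion chunk into a message chunk.

interface Delta {
  role?: string;
  content?: string;
}

interface MessageChunk {
  role: string;
  content: string;
}

function convertDeltaToMessageChunk(delta?: Delta): MessageChunk {
  // Some OpenAI-compatible providers behind a custom baseURL (e.g. certain
  // OpenRouter models) may stream chunks with an empty or missing delta,
  // so every property access is guarded and a default role is assumed.
  const role = delta?.role ?? "assistant";
  const content = delta?.content ?? "";
  return { role, content };
}

// An undefined delta no longer throws
// "Cannot read properties of undefined (reading 'role')".
console.log(convertDeltaToMessageChunk(undefined));        // { role: "assistant", content: "" }
console.log(convertDeltaToMessageChunk({ content: "Hi" })); // { role: "assistant", content: "Hi" }
```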