Quantized model updates, switch to recommending TheBloke #208

Merged · 10 commits into main · May 30, 2023

Conversation

@pseudotensor (Collaborator) commented on May 30, 2023:

  • Switch to recommending TheBloke for llama.cpp
  • Add prompt types to cover other TheBloke WizardLM models.
  • Auto-generate kwargs for llama and gpt4all models; this should fix the model not respecting expert settings.
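The kwargs auto-generation described above can be sketched roughly as follows (a minimal sketch with hypothetical names, not h2ogpt's actual code): introspect the model constructor's signature and pass through only the expert settings it actually accepts, so UI settings stop being silently dropped.

```python
import inspect

def filter_kwargs(func, **kwargs):
    """Keep only the kwargs that func's signature actually accepts."""
    accepted = set(inspect.signature(func).parameters)
    return {k: v for k, v in kwargs.items() if k in accepted}

# Hypothetical constructor standing in for a llama.cpp / gpt4all binding.
def fake_llama(model_path, n_ctx=512, temperature=0.8):
    return {"model_path": model_path, "n_ctx": n_ctx, "temperature": temperature}

# top_k is not in fake_llama's signature, so it is filtered out
# instead of raising TypeError.
kwargs = filter_kwargs(fake_llama, n_ctx=1024, temperature=0.2, top_k=40)
model = fake_llama("model.bin", **kwargs)
```

This way each backend receives exactly the subset of expert settings it supports, without hand-maintaining per-model allowlists.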

@pseudotensor changed the title from Nonhfupdate to Quantized model updates, switch to recommending TheBloke on May 30, 2023
@pseudotensor marked this pull request as ready for review on May 30, 2023 at 09:20
@pseudotensor (Collaborator, Author) commented:
============================================================================================================ short test summary info ============================================================================================================
FAILED tests/test_cli.py::test_cli_langchain_llamacpp - assert 'The cat is sitting on a window sill.' in '\nIt is sitting on a window and looking out at the view of the city outside.\n\nSources [Score | Link]:<p><ul><li>0.39 | <a href="file/user_path_test/./pexels-evg-kowalievska-1170986_sma...
FAILED tests/test_client_calls.py::test_client_chat_nostream_llama7b - AssertionError: assert 'What do you do?' in 'I’m a software engineer with a passion for building scalable and reliable systems. I’ve been working in the tech industry for over 10 years and have experience in a variety of technologies in...
FAILED tests/test_eval.py::test_eval1[True-32-h2oai/h2ogpt-oig-oasst1-512-6_9b] - concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
FAILED tests/test_manual_test.py::test_chat_context - NotImplementedError: MANUAL TEST FOR NOW
FAILED tests/test_manual_test.py::test_upload_one_file - NotImplementedError: MANUAL TEST FOR NOW -- do and ask query of file
FAILED tests/test_manual_test.py::test_upload_multiple_file - NotImplementedError: MANUAL TEST FOR NOW -- do and ask query of files
FAILED tests/test_manual_test.py::test_upload_url - NotImplementedError: MANUAL TEST FOR NOW -- put in URL box https://github.com/h2oai/h2ogpt/ (and ask what is h2ogpt?). Ensure can go to source links
FAILED tests/test_manual_test.py::test_upload_arxiv - NotImplementedError: MANUAL TEST FOR NOW -- paste in arxiv:1706.03762 and ask who wrote attention paper. Ensure can go to source links
FAILED tests/test_manual_test.py::test_upload_pasted_text - NotImplementedError: MANUAL TEST FOR NOW -- do and see test code for what to try
FAILED tests/test_manual_test.py::test_no_db_dirs - NotImplementedError: MANUAL TEST FOR NOW -- Remove db_dirs, ensure can still start up and use in MyData Mode.
FAILED tests/test_manual_test.py::test_upload_unsupported_file - NotImplementedError: MANUAL TEST FOR NOW -- e.g. json, ensure error correct and reasonable, no cascades
FAILED tests/test_manual_test.py::test_upload_to_UserData_and_MyData - NotImplementedError: MANUAL TEST FOR NOW Upload to each when enabled, ensure no failures
FAILED tests/test_manual_test.py::test_chat_control - NotImplementedError: MANUAL TEST FOR NOW save chat, select chats, clear chat, export, import, etc.
FAILED tests/test_manual_test.py::test_subset_only - NotImplementedError: MANUAL TEST FOR NOW UserData, Select Only for subset, then put in whisper.  Ensure get back only chunks of data with url links to data sources.
=========================================================================== 14 failed, 58 passed, 25 skipped, 1 xfailed, 1 xpassed, 2 warnings in 1164.68s (0:19:24) ============================================================================
(h2ollm) jon@pseudotensor:~/h2o-llm$ 

Fixed the tests above. The broken process pool was just an OOM while doing 32-bit CPU inference; after freeing some objects held in memory, all tests pass.
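The OOM fix described above amounts to dropping references to models that are no longer needed and forcing a garbage-collection pass before the memory-heavy 32-bit CPU run. A minimal sketch (names are illustrative, not h2ogpt's actual code):

```python
import gc

def release_model(models, name):
    """Drop a cached model reference and force a GC pass so its RAM
    is reclaimed before starting 32-bit CPU inference."""
    models.pop(name, None)
    gc.collect()

# Illustrative cache of loaded models; real entries would be model objects.
models = {"llama7b": object(), "gpt4all": object()}
release_model(models, "llama7b")  # free RAM before the next CPU-only test
```

Without an explicit `gc.collect()`, CPython may keep large buffers alive long enough for the next test's allocation to push the worker process over the memory limit, which surfaces as the `BrokenProcessPool` seen above.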

@pseudotensor merged commit 9b21084 into main on May 30, 2023