This service provides a chat interface for the GPT4All-13b-snoozy AI assistant. It was trained on a massive curated corpus of assistant interactions, including word problems, multi-turn dialogue, code, poems, songs, and stories.

Basic Usage

In the "Text generation" tab you can ask questions or give instructions, and see the responses generated by the GPT4All large language model (LLM). Like most LLMs, GPT4All can not access any information not already included in the 8GB parameter file included in the service, which means its answers may be wrong or out-of-date. Always verify the output of a LLM before using it for anything important (such a as a court filing). It can take quite a while for the assistant to respond to large inputs, so be patient.

Additional Models

Depending on how you would like to use the service, you may get better results with a model other than GPT4All-13b-snoozy. You can download other LLMs from Hugging Face and upload them to the /gpt4all/models directory using the File Browser service, but not all LLM checkpoints are currently supported by the service. You will get the best results with checkpoints that (see the filename check sketched after this list):

  1. Contain "ggmlv3" somewhere in the name.
  2. Contain "q2", "q4", or "q5" somewhere in the name.
  3. End in a ".bin" file extension.
  4. Have 13B parameters or fewer.
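If you want to screen downloads before uploading them, the first three naming rules can be checked directly against the filename. Here is a minimal sketch in Python; the "downloads" folder and the `looks_supported` helper are hypothetical, not part of the service.

```python
import os

def looks_supported(filename: str) -> bool:
    """Heuristic check that a checkpoint filename matches the naming
    conventions listed above; the service remains the final authority
    on what it can actually load."""
    name = filename.lower()
    return (
        name.endswith(".bin")                            # rule 3
        and "ggmlv3" in name                             # rule 1
        and any(q in name for q in ("q2", "q4", "q5"))   # rule 2
    )

# Example: list candidate files in a local "downloads" folder (hypothetical path).
# Rule 4 (13B parameters or fewer) usually has to be read from the model card or
# the size tag in the name (e.g. "7b", "13b") rather than checked mechanically.
for f in os.listdir("downloads"):
    if looks_supported(f):
        print("candidate:", f)
```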

Some known good models are:

If you have uploaded a valid checkpoint, you will be able to access it from the "Models" tab of the user interface. Make sure that the "llama.cpp" runtime is selected (it should be chosen automatically), and that the "Chat settings" are configured correctly for the model of your choice.