This is an experimental tree-based writing interface for GPT-3. The code is actively being developed and thus unstable and poorly documented.
Features:

- Read mode
  - Linear story view
  - Tree nav bar
  - Edit mode
- Tree view
  - Explore tree visually with mouse
  - Expand and collapse nodes
  - Change tree topology
  - Edit nodes in place
- Navigation
  - Hotkeys
  - Bookmarks
  - Chapters
  - 'Visited' state
- Generation
  - Generate N children with GPT-3
  - Modify generation settings
  - Change hidden memory on a node-by-node basis
- File I/O
  - Open/save trees as JSON files
  - Work with trees in multiple tabs
  - Combine trees
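Trees are saved as plain JSON files. As a rough illustration only, here is a guess at the shape of a minimal tree file, assuming a simple nested-node schema (the real schema includes more fields; open a saved tree to see the actual format):

```json
{
  "root": {
    "id": "root",
    "text": "Once upon a time",
    "children": [
      {"id": "a1", "text": " there was a tree.", "children": []},
      {"id": "a2", "text": " the multiverse branched.", "children": []}
    ]
  }
}
```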
Read this for a conceptual explanation of the block multiverse interface and a demo video. To use it:
- Click the `Wavefunction` button on the bottom bar. This opens the block multiverse interface in the right sidebar (drag to resize).
- Write the initial prompt in the main textbox.
- [Optional] Write a ground truth continuation in the gray entry box at the bottom of the block multiverse interface. Blocks in the ground truth trajectory will be colored black.
- Set the model and params in the top bar.
- Click `Propagate` to plot the block multiverse.
- Click on any of the blocks to zoom ("renormalize") to that block.
- Click `Propagate` again to plot the future block multiverse starting from the renormalized frame.
- Click `Reset zoom` to reset the zoom level to the initial position.
- Click `Clear` to clear the block multiverse plot. Do this before generating a new block multiverse.
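Conceptually, the plot shows branching token probabilities: each block's width is the model's probability for that token, and "renormalizing" to a block conditions on it, rescaling its children to fill the frame. Below is a rough sketch of the propagation idea, assuming the legacy `openai<1.0` SDK and a completions model that returns `logprobs`; it is illustrative, not loom's actual implementation:

```python
# Illustrative sketch of block multiverse propagation (not loom's actual code).
# Assumes the legacy openai<1.0 SDK with OPENAI_API_KEY set in the environment.
import math
import openai

def propagate(prompt, depth=2, top_n=3):
    """Recursively expand the top-n next tokens; width = token probability."""
    if depth == 0:
        return []
    resp = openai.Completion.create(
        model="davinci-002",  # any completions model that returns logprobs
        prompt=prompt,
        max_tokens=1,
        logprobs=top_n,
        temperature=0,
    )
    top = resp["choices"][0]["logprobs"]["top_logprobs"][0]
    return [
        {
            "token": tok,
            "width": math.exp(lp),  # probability mass drawn as block width
            # Zooming ("renormalizing") into a block is just conditioning on
            # its token and re-plotting: propagate(prompt + tok, ...).
            "children": propagate(prompt + tok, depth - 1, top_n),
        }
        for tok, lp in top.items()
    ]
```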
Alt hotkeys correspond to Command on Mac.

- Open: `o`, `Control-o`
- Import JSON as subtree: `Control-Shift-O`
- Save: `s`, `Control-s`
- Change chapter: `Control-y`
- Preferences: `Control-p`
- Generation Settings: `Control-Shift-P`
- Visualization Settings: `Control-u`
- Multimedia dialog: `u`
- Tree Info: `Control-i`
- Node Metadata: `Control-Shift-N`
- Run Code: `Control-Shift-B`
- Toggle edit / save edits: `e`, `Control-e`
- Toggle story textbox editable: `Control-Shift-e`
- Toggle visualize: `j`, `Control-j`
- Toggle bottom pane: `Tab`
- Toggle side pane: `Alt-p`
- Toggle show children: `Alt-c`
- Hoist: `Alt-h`
- Unhoist: `Alt-Shift-h`
- Click to go to node: `Control-Shift-click`
- Next: `period`, `Return`, `Control-period`
- Prev: `comma`, `Control-comma`
- Go to child: `Right`, `Control-Right`
- Go to next sibling: `Down`, `Control-Down`
- Go to parent: `Left`, `Control-Left`
- Go to previous sibling: `Up`, `Control-Up`
- Return to root: `r`, `Control-r`
- Walk: `w`, `Control-w`
- Go to checkpoint: `t`
- Save checkpoint: `Control-t`
- Go to next bookmark: `d`, `Control-d`
- Go to prev bookmark: `a`, `Control-a`
- Search ancestry: `Control-f`
- Search tree: `Control-Shift-f`
- Click to split node: `Control-Alt-click`
- Go to node by id: `Control-Shift-g`
- Toggle bookmark: `b`, `Control-b`
- Toggle archive node: `!`
- Generate: `g`, `Control-g`
- Inline generate: `Alt-i`
- Add memory: `Control-m`
- View current AI memory: `Control-Shift-m`
- View node memory: `Alt-m`
- Delete: `BackSpace`, `Control-BackSpace`
- Merge with parent: `Shift-Left`
- Merge with children: `Shift-Right`
- Move node up: `Shift-Up`
- Move node down: `Shift-Down`
- Change parent: `Shift-P`
- New root child: `Control-Shift-h`
- New child: `h`, `Control-h`, `Alt-Right`
- New parent: `Alt-Left`
- New sibling: `Alt-Down`
- Save edits as new sibling: `Alt-e`
- Click to edit history: `Control-click`
- Click to select token: `Alt-click`
- Next counterfactual token: `Alt-period`
- Previous counterfactual token: `Alt-comma`
- Apply counterfactual changes: `Alt-return`
- Enter text: `Control-bar`
- Escape textbox: `Escape`
- Prepend newline: `n`, `Control-n`
- Prepend space: `Control-Space`
- Collapse all except subtree: `Control-colon`
- Collapse node: `Control-question`
- Collapse subtree: `Control-minus`
- Expand children: `Control-quotedbl`
- Expand subtree: `Control-plus`
- Center view: `l`, `Control-l`
- Reset zoom: `Control-0`
- Make sure you have tkinter installed:
  ```
  sudo apt-get install python3-tk
  ```
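  If you're unsure whether tkinter is already available, running `python3 -m tkinter` opens a small test window.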
- Set up your Python env (should be >= 3.9.13):
  ```
  python3 -m venv env
  source env/bin/activate
  ```
- Install requirements:
  ```
  pip install -r requirements.txt
  ```
- [Optional] Set environment variables for `OPENAI_API_KEY`, `GOOSEAI_API_KEY`, `AI21_API_KEY` (you can also use the settings options), e.g.:
  ```
  export OPENAI_API_KEY={your api key}
  ```
- Run `main.py`.
- Load a JSON tree.
- Read :)
On Mac, use conda instead:

```
conda create -n pyloom python=3.10
conda activate pyloom
pip install -r requirements-mac.txt
```

- Set the `OPENAI_API_KEY` env variable.
- Run:
  ```
  python main.py
  ```
Alternatively, you can build and run with the provided make targets (only tested on Linux):

- [Optional] Edit the Makefile with your API keys (you can also use the settings options).
- Run the make targets:
  ```
  make build
  make run
  ```
- Load a JSON tree.
- Read :)
llama.cpp lets you run models locally, and is especially useful for running models on Mac. [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) provides easy installation and a convenient API.

- Create and activate a local environment:
  ```
  conda create -n llama-cpp-local python=3.10
  conda activate llama-cpp-local
  ```
- Set your preferred backend before installing `llama-cpp-python`, as per these instructions. For instance, to infer on MPS:
  ```
  CMAKE_ARGS="-DLLAMA_METAL=on" pip install 'llama-cpp-python[server]'
  pip install huggingface-hub
  ```
- Now you can run the server with whatever `.gguf` model you desire from Hugging Face, e.g.:
  ```
  python3 -m llama_cpp.server --hf_model_repo_id NousResearch/Meta-Llama-3-8B-GGUF --model 'Meta-Llama-3-8B-Q4_5_M.gguf' --port 8009
  ```
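  To check that the server came up, you can query its OpenAI-compatible models endpoint (port as configured above):
  ```
  curl http://localhost:8009/v1/models
  ```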
- To use it with loom: with the `llama-cpp-local` environment active, start your llama-cpp-python server as above. In a new terminal window, activate your `pyloom` environment and run `main.py`.
- Enter configurations for your local model in Settings > Model config > Add model. By default, the `llama-cpp-port-8009` model uses the following settings:
```
{
    'model': 'Meta-Llama-3-8B-Q4_5_M',
    'type': 'llama-cpp',
    'api_base': 'http://localhost:8009/v1',
},
```
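The `api_base` simply points loom at the server's OpenAI-compatible API. As a quick sanity check outside loom, here is a sketch assuming the legacy `openai<1.0` SDK (the model name must match what the server loaded):

```python
# Query the local llama-cpp-python server through its OpenAI-compatible API.
import openai

openai.api_base = "http://localhost:8009/v1"  # same value as 'api_base' above
openai.api_key = "not-needed-for-local"       # the local server ignores the key

resp = openai.Completion.create(
    model="Meta-Llama-3-8B-Q4_5_M",  # must match the model the server loaded
    prompt="Once upon a time",
    max_tokens=16,
)
print(resp["choices"][0]["text"])
```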