Revamped Settings, Editable Commands, Custom Prompt #29

Open · wants to merge 10 commits into `main`
52 changes: 51 additions & 1 deletion README.md
@@ -1,7 +1,10 @@
# 🦙 Obsidian Ollama

This is a plugin for [Obsidian](https://obsidian.md) that allows you to use [Ollama](https://ollama.ai) within your notes.
There are different pre-configured prompts:

You can create commands, which are reusable prompts to Ollama models.

By default, this plugin includes these commands:

- Summarize selection
- Explain selection
@@ -15,3 +18,50 @@ There are different pre-configured prompts:
But you can also configure your own prompts and specify their model and temperature. The plugin always passes the prompt, plus either the selected text or the full note, to Ollama and inserts the result into your note at the cursor position.

This requires a local installation of [Ollama](https://ollama.ai), which can currently be installed as a [macOS app](https://github.com/jmorganca/ollama#download). By default, the plugin connects to `http://localhost:11434`, the port used by the macOS app.
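
To confirm your local instance is reachable before configuring the plugin, you can query Ollama's REST API directly. Here is a minimal sketch (not part of the plugin), assuming the default port and Ollama's standard `/api/tags` endpoint, which lists locally available models; run it in any environment with top-level `await`:
```ts
// Quick connectivity check against a local Ollama instance (default port assumed).
const res = await fetch("http://localhost:11434/api/tags");
if (!res.ok) throw new Error(`Ollama responded with status ${res.status}`);
// /api/tags returns the locally available models, e.g. { models: [{ name: "llama3" }, ...] }
const { models } = (await res.json()) as { models: { name: string }[] };
console.log("Available models:", models.map((m) => m.name).join(", "));
```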

## Custom One-Off Prompts

When run, the plugin command `Custom prompt` (bound to `Mod+I` by default) opens a modal form where you can type a one-off prompt to run on the selected text, using the default model specified in the settings.

You can save the last custom prompt you ran as its own command using the `Save custom prompt` command.

## Tips and Tricks

### Prompts

To make it obvious in your prompt what context you are referring to, I recommend referring to your selected text as "the text", similar to how the default prompt template does.

### Prompt Template

This acts as wrapper text applied to all commands. For example, see the default prompt template:
```
Act as a writer. {prompt} Output only the text and nothing else, do not chat, no preamble, get to the point.
```
When you run a command, the `{prompt}` token in the template is replaced with the command's prompt. If the template contains no `{prompt}` token, the template is appended after the command's prompt instead.
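
For instance, with the default template above, a command whose prompt is `Summarize the text.` (a hypothetical example) would send roughly the following to the model:
```
Act as a writer. Summarize the text. Output only the text and nothing else, do not chat, no preamble, get to the point.
```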

You may want certain commands to bypass the prompt template (for example, if you want the model to act as a different kind of character); there is a per-command setting for this.

### Model Template

The model template is a standard parameter of LLMs that helps with formatting responses, specifying context, and other fine-tuning.

Optionally, this plugin allows you to set a custom model template for the default model. If a command doesn't use the default model, the model template will be ignored.

The structure and syntax of a model template depend on your model, utilising tokens that are specially recognised by that model. For example, a [llama3](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/) model template would look something like this:
```
{{ if .System }}
<|start_header_id|>system<|end_header_id|>{{ .System }}<|eot_id|>
{{ end }}

<|start_header_id|>text<|end_header_id|>{text}<|eot_id|>

{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>{{ .Prompt }}<|eot_id|>{{ end }}

<|start_header_id|>assistant<|end_header_id|>{{ .Response }}<|eot_id|>
```

Where the prompt is inserted is handled by the model; in this example, it would be inserted at the llama3 token `{{ .Prompt }}`.

The selected text passed to a command is inserted at the `{text}` token. This substitution is handled by the plugin and does not depend on the model you use. You don't have to include this token in your model template; if it is absent, the text is appended to the prompt instead.
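
As a sketch of that fallback: with no `{text}` token in the model template (or no model template at all), running the hypothetical `Summarize the text.` command on the selected text `The quick brown fox jumps over the lazy dog.` produces roughly this final prompt:
```
Act as a writer. Summarize the text. Output only the text and nothing else, do not chat, no preamble, get to the point.

"The quick brown fox jumps over the lazy dog."
```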

Refer to the model's documentation for specific formatting information.
4 changes: 2 additions & 2 deletions manifest.json
@@ -1,6 +1,6 @@
{
"id": "ollama",
"name": "Ollama",
"id": "ollama-jpw03-fork",
"name": "Ollama JPW03 Fork",
"version": "0.0.1",
"minAppVersion": "0.15.0",
"description": "This is a plugin for Obsidian that enables the usage of Ollama within your notes.",
184 changes: 140 additions & 44 deletions src/Ollama.ts
@@ -1,14 +1,19 @@
import { kebabCase } from "service/kebabCase";
import { Editor, Notice, Plugin, requestUrl } from "obsidian";
import { OllamaSettingTab } from "OllamaSettingTab";
import { OllamaSettingTab } from "gui/OllamaSettingTab";
import { DEFAULT_SETTINGS } from "data/defaultSettings";
import { OllamaSettings } from "model/OllamaSettings";
import { CustomPromptModal } from "gui/CustomPromptModal";
import { OllamaCommand } from "model/OllamaCommand";
import { SaveCustomPromptModal } from "gui/SaveCustomPromptModal";

export class Ollama extends Plugin {
settings: OllamaSettings;
previousCustomPrompt: OllamaCommand | null = null;

async onload() {
await this.loadSettings();
this.addCustomPromptCommand();
this.addPromptCommands();
this.addSettingTab(new OllamaSettingTab(this.app, this));
}
@@ -19,54 +24,145 @@ export class Ollama extends Plugin {
id: kebabCase(command.name),
name: command.name,
editorCallback: (editor: Editor) => {
const selection = editor.getSelection();
const text = selection ? selection : editor.getValue();

const cursorPosition = editor.getCursor();

editor.replaceRange("✍️", cursorPosition);

requestUrl({
method: "POST",
url: `${this.settings.ollamaUrl}/api/generate`,
body: JSON.stringify({
prompt: command.prompt + "\n\n" + text,
model: command.model || this.settings.defaultModel,
options: {
temperature: command.temperature || 0.2,
},
}),
})
.then((response) => {
const steps = response.text
.split("\n")
.filter((step) => step && step.length > 0)
.map((step) => JSON.parse(step));

editor.replaceRange(
steps
.map((step) => step.response)
.join("")
.trim(),
cursorPosition,
{
ch: cursorPosition.ch + 1,
line: cursorPosition.line,
}
);
})
.catch((error) => {
new Notice(`Error while generating text: ${error.message}`);
editor.replaceRange("", cursorPosition, {
ch: cursorPosition.ch + 1,
line: cursorPosition.line,
});
});
this.promptOllama(editor, command);
},
});
});
}

private addCustomPromptCommand() {
this.addCommand({
id: "custom-prompt",
name: "Custom prompt",
hotkeys: [{ modifiers: ["Mod"], key: "i" }],
editorCallback: (editor: Editor) => {
this.customPromptCommandCallback(editor);
},
});

// Add custom prompt command to ribbon
this.addRibbonIcon("bot", "Ollama Custom Prompt", () => {
const editor = this.app.workspace.activeEditor?.editor;
if (editor) {
this.customPromptCommandCallback(editor);
}
else {
new Notice("Please open a file and select some text.");
}
});

// Allow users to save their previous custom prompt to a command
this.addCommand({
id: "save-custom-prompt",
name: "Save custom prompt",
callback: () => {
if (!this.previousCustomPrompt) {
new Notice("No custom prompt to save.");
}
else {
// Open modal to ask user for command name.
new SaveCustomPromptModal(this.app, this.previousCustomPrompt, (previousCustomPrompt) => {
this.settings.commands.push(previousCustomPrompt);
new Notice("Custom prompt saved.");
}).open();
}
},
});
}

private customPromptCommandCallback(editor: Editor) {
new CustomPromptModal(this.app, (prompt) => {
const customCommand: OllamaCommand = {
name: "Custom prompt",
prompt: prompt,
// Model and temperature are omitted, so the defaults from settings are used.
};

// Record this custom prompt in case the user wants to save it
this.previousCustomPrompt = customCommand;

this.promptOllama(editor, customCommand);
}).open();
}

private promptOllama(editor: Editor, command: OllamaCommand) {
const selection = editor.getSelection();
const text = selection ? selection : editor.getValue();

// Insert the prompt into the prompt template if necessary
const promptInTemplate = this.settings.promptTemplate.contains("{prompt}");
let prompt = command.prompt;
if (!command.ignorePromptTemplate) {
// If the prompt template doesn't specify where the prompt should be inserted, prepend the prompt.
prompt = promptInTemplate ?
this.settings.promptTemplate.replace("{prompt}", command.prompt) :
command.prompt + "\n\n" + this.settings.promptTemplate;
}

// If the command uses the default model, the model template will be used. If not, ignore the model template.
const useModelTemplate = command.model == undefined || command.model == this.settings.defaultModel;
let template = "";
let textInTemplate = false;

if (useModelTemplate) {
textInTemplate = this.settings.modelTemplate.contains("{text}");
template = textInTemplate ?
this.settings.modelTemplate.replace("{text}", text) :
this.settings.modelTemplate;
}

// If the model template doesn't specify where the text should be inserted, append it to the prompt in quotes.
if (!textInTemplate) {
prompt = prompt + "\n\n\"" + text + "\"";
}

const cursorPosition = editor.getCursor();
editor.replaceRange("✍️", cursorPosition);

requestUrl({
method: "POST",
url: `${this.settings.ollamaUrl}/api/generate`,
body: JSON.stringify({
prompt,
model: command.model || this.settings.defaultModel,
options: {
temperature: command.temperature || 0.2,
},
template,
}),
})
.then((response) => {
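// Ollama streams its reply as newline-delimited JSON; each non-empty line is one chunk.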
const steps = response.text
.split("\n")
.filter((step) => step && step.length > 0)
.map((step) => JSON.parse(step));

editor.replaceRange(
steps
.map((step) => step.response)
.join("")
.trim(),
cursorPosition,
{
ch: cursorPosition.ch + 1, // accounts for the added writing emoji
line: cursorPosition.line,
}
);
})
.catch((error) => {
new Notice(`Error while generating text: ${error.message}`);
editor.replaceRange("", cursorPosition, {
ch: cursorPosition.ch + 1,
line: cursorPosition.line,
});
});
}

onunload() {}

async loadSettings() {