The goal of rollama is to wrap the Ollama API, which allows you to run different LLMs locally and create an experience similar to ChatGPT/OpenAI’s API. Ollama is very easy to deploy and handles a huge number of models. Check out the project here: https://github.com/ollama/ollama.
You can install this package from CRAN:
install.packages("rollama")
Or you can install the development version of rollama
from
GitHub. This version is updated
more frequently and may contain bug fixes (or new bugs):
# install.packages("remotes")
remotes::install_github("JBGruber/rollama")
However, rollama is just the client package. The models are run in Ollama, which you need to install on your system, on a remote system, or through Docker. The easiest way is to simply download and install the Ollama application from their website. Once Ollama is running, you can see if you can access it with:
rollama::ping_ollama()
#> ▶ Ollama (v0.5.1) is running at <http://localhost:11434>!
For beginners, we recommend downloading Ollama from their website. However, if you are familiar with Docker, you can also run Ollama through Docker. The advantage of running things through Docker is that the application is isolated from the rest of your system, behaves the same on different systems, and is easy to download and update. You can also get a nice web interface. After making sure Docker is installed, you can simply use the Docker Compose file from this gist.
If you don’t know how to use Docker Compose, you can follow this video to use the compose file and start Ollama and Open WebUI.
The first thing you should do after installation is to pull one of the
models from https://ollama.com/library. By calling pull_model()
without arguments, you are pulling the (current) default model —
“llama3.1 8b”:
library(rollama)
pull_model()
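To use another model, pass its name (including the tag) to pull_model(). As a minimal sketch, the tag below is just an example, and list_models() is assumed here to list the models already installed locally (check the package documentation):
# pull a specific model by its tag (this tag is just an example)
pull_model("llama3.2:3b-instruct-q4_1")
# list the models that are already available locally
list_models()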
There are two ways to communicate with the Ollama API. You can make single requests, which does not store any history and treats each query as the beginning of a new chat:
# ask a single question
query("Why is the sky blue? Answer with one sentence.")
#>
#> ── Answer from llama3.1 ────────────────────────────────────────────────────────
#> The sky appears blue because of a phenomenon called Rayleigh scattering, in
#> which shorter wavelengths of light (like blue and violet) are scattered by tiny
#> molecules of gases in the atmosphere more than longer wavelengths (like red and
#> orange), giving the sky its blue color.
With the output argument, we can specify the format of the response. Available options include “text”, “list”, “data.frame”, “response”, “httr2_response”, and “httr2_request”:
# ask a single question and specify the output format
query("Why is the sky blue? Answer with one sentence." , output = "text")
#>
#> ── Answer from llama3.1 ────────────────────────────────────────────────────────
#> The sky appears blue because of a phenomenon called Rayleigh scattering, in
#> which sunlight interacts with tiny molecules of gases like nitrogen and oxygen
#> in the Earth's atmosphere, scattering shorter (blue) wavelengths more than
#> longer (red) ones.
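For programmatic use, it is often handier to keep the answer in an object instead of printing it. A minimal sketch using the output options listed above (the exact columns of the data frame may differ by version):
# keep the plain text answer in an object for later use
answer <- query("Why is the sky blue? Answer with one sentence.", output = "text")
# or get a data frame that also contains metadata about the request
answer_df <- query("Why is the sky blue? Answer with one sentence.", output = "data.frame")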
Or you can use the chat() function, which treats all messages sent during an R session as part of the same conversation:
# hold a conversation
chat("Why is the sky blue? Give a short answer.")
#>
#> ── Answer from llama3.1 ────────────────────────────────────────────────────────
#> The sky appears blue because of a phenomenon called Rayleigh scattering, where
#> sunlight scatters shorter (blue) wavelengths more than longer (red) wavelengths
#> as it passes through the Earth's atmosphere. This scattering effect makes the
#> blue light visible to our eyes, giving the sky its characteristic blue color.
chat("And how do you know that? Give a short answer.")
#>
#> ── Answer from llama3.1 ────────────────────────────────────────────────────────
#> I was trained on vast amounts of text data from various sources, including
#> scientific literature, educational resources, and online articles, which
#> provided explanations for phenomena like Rayleigh scattering and the color of
#> the sky. This information is widely accepted and supported by experts in the
#> fields of physics and astronomy.
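Because chat() keeps the running conversation, you can also inspect what has been sent so far. A small sketch, assuming the chat_history() helper is available in your version of the package:
# show the messages exchanged in the current conversation
chat_history()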
If you are done with a conversation and want to start a new one, you can do that like so:
new_chat()
You can set a number of model parameters, either by creating a new model with a modelfile, or by including the parameters in the prompt:
query("Why is the sky blue? Answer with one sentence.", output = "text",
model_params = list(
seed = 42,
num_gpu = 0)
)
#>
#> ── Answer from llama3.1 ────────────────────────────────────────────────────────
#> The sky appears blue because of a phenomenon called Rayleigh scattering, in
#> which shorter (blue) wavelengths of light are scattered more than longer (red)
#> wavelengths by the tiny molecules of gases in the atmosphere.
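Other parameters documented for Ollama modelfiles, such as temperature or num_ctx, can be passed the same way; a minimal sketch:
query("Why is the sky blue? Answer with one sentence.", output = "text",
      model_params = list(
        temperature = 0,  # lower temperature makes answers more deterministic
        num_ctx = 2048    # size of the context window in tokens
      ))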
You can configure the server address, the system prompt, and the model used for a query or chat. If not configured otherwise, rollama assumes you are using the default port (11434) of a local instance (“localhost”). Let’s make this explicit by setting the option:
options(rollama_server = "http://localhost:11434")
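If Ollama runs on a remote machine or in a container on another host, point the option at that address instead. A sketch with a placeholder address:
# hypothetical address of an Ollama instance running on another machine
options(rollama_server = "http://192.168.0.42:11434")
# check that the remote instance is reachable
ping_ollama()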
You can change how a model answers by setting a configuration or system message in plain English (or another language supported by the model):
options(rollama_config = "You make short answers understandable to a 5 year old")
query("Why is the sky blue?")
#>
#> ── Answer from llama3.1 ────────────────────────────────────────────────────────
#> The sky looks blue because of tiny particles in the air that bend light towards
#> blue. It's like when you see a big blue swimming pool and it looks super cool,
#> but with the whole sky being blue! Isn't that awesome?
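To return to the model’s default behaviour, unset the option again:
options(rollama_config = NULL)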
By default, the package uses the “llama3.1 8B” model. Supported models can be found at https://ollama.com/library. To download a specific model, make use of the additional information available under “Tags”, e.g., https://ollama.com/library/llama3.2/tags. You can change the default model via the rollama_model option:
options(rollama_model = "llama3.2:3b-instruct-q4_1")
# if you don't have the model yet: pull_model("llama3.2:3b-instruct-q4_1")
query("Why is the sky blue? Answer with one sentence.")
#>
#> ── Answer from llama3.2:3b-instruct-q4_1 ───────────────────────────────────────
#> The sky looks blue because when sunlight comes into our world, it bounces off
#> tiny things in the air and makes our eyes see that color!
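Instead of changing the option, you can also override the model for a single request via the model argument; a small sketch, assuming the tag is one you have already pulled:
# use a specific model for just this one request
query("Why is the sky blue? Answer with one sentence.",
      model = "llama3.2:3b-instruct-q4_1")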
The make_query function simplifies the creation of structured queries, which can, for example, be used in annotation tasks.
Main components (check the documentation for more options):
- text: The text(s) to classify.
- prompt: Could be a (classification) question.
- system: Optional system prompt providing context or instructions for the task.
- examples: Optional prior examples for one-shot or few-shot learning (user messages and assistant responses).
Zero-shot Example
In this example, the function is used without examples:
# Create a query using make_query
q_zs <- make_query(
  text = "the pizza tastes terrible",
  prompt = "Is this text: 'positive', 'neutral', or 'negative'?",
  system = "You assign texts into categories. Answer with just the correct category."
)
# Print the query
print(q_zs)
#> [[1]]
#> # A tibble: 2 × 2
#> role content
#> <chr> <glue>
#> 1 system You assign texts into categories. Answer with just the correct categor…
#> 2 user the pizza tastes terrible
#> Is this text: 'positive', 'neutral', or 'neg…
# Run the query
query(q_zs, output = "text")
#>
#> ── Answer from llama3.2:3b-instruct-q4_1 ───────────────────────────────────────
#> negative
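The examples argument enables few-shot prompting in the same way. A sketch, assuming examples is supplied as a tibble with text and answer columns (check ?make_query for the exact format expected by your version):
q_fs <- make_query(
  text = "the service was very friendly",
  prompt = "Is this text: 'positive', 'neutral', or 'negative'?",
  system = "You assign texts into categories. Answer with just the correct category.",
  examples = tibble::tribble(
    ~text,                       ~answer,
    "the pizza tastes terrible", "negative",
    "the waiter was okay",       "neutral"
  )
)
query(q_fs, output = "text")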
Further topics:
- Use rollama for annotation tasks
- Annotate images
- Get text embedding
- Use more models (GGUF format) from Hugging Face
Please cite the package using the preprint DOI: https://doi.org/10.48550/arXiv.2404.07654