The Spezi LLM Swift Package includes modules that help you integrate LLM-related functionality into your application. The package provides all the necessary tools for local LLM execution as well as for the use of remote, OpenAI-based LLMs.
Screenshots: OpenAI LLM Chat View | Language Model Download | Local LLM Chat View
You need to add the SpeziLLM Swift package to your app in Xcode or to your own Swift package.
Important
If your application is not yet configured to use Spezi, follow the Spezi setup article to set up the core Spezi infrastructure.
As Spezi LLM contains a variety of targets for specific LLM functionalities, please follow the additional setup guides in the respective target sections of this README.
Spezi LLM provides a number of targets to help developers integrate LLMs in their Spezi-based applications:
- SpeziLLM: Base infrastructure of LLM execution in the Spezi ecosystem.
- SpeziLLMLocal: Local LLM execution capabilities directly on-device. Enables running open-source LLMs like Meta's Llama2 models.
- SpeziLLMLocalDownload: Download and storage manager of local Language Models, including onboarding views.
- SpeziLLMOpenAI: Integration with OpenAI's GPT models via OpenAI's API service.
The section below highlights the setup and basic use of the SpeziLLMLocal and SpeziLLMOpenAI targets in order to integrate Language Models in a Spezi-based application.
Note
To learn more about the usage of the individual targets, please refer to the [DocC documentation of the package](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation).
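Since SpeziLLM provides the shared execution layer and SpeziLLMLocal as well as SpeziLLMOpenAI contribute the concrete platforms, both can in principle be combined in a single app. The following is only a minimal sketch, assuming that both target-specific platforms can be registered side by side within one `LLMRunner`; the `CombinedLLMAppDelegate` name is purely illustrative, and the target sections below show each setup in detail:

import Spezi
import SpeziLLM
import SpeziLLMLocal
import SpeziLLMOpenAI

class CombinedLLMAppDelegate: SpeziAppDelegate {
    override var configuration: Configuration {
        Configuration {
            // Register both LLM platforms with the shared SpeziLLM runner (sketch, not a required setup).
            LLMRunner {
                LLMLocalPlatform()      // on-device execution via llama.cpp
                LLMOpenAIPlatform()     // remote execution via the OpenAI API
            }
        }
    }
}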
The target enables developers to easily execute medium-size Language Models (LLMs) locally on-device via the llama.cpp framework. The module allows you to interact with the locally run LLM via purely Swift-based APIs; no interaction with low-level C or C++ code is necessary, as the module builds on top of the infrastructure of the SpeziLLM target.
Important
In order to use the LLM local target, you need to set build parameters in the consuming Xcode project or the consuming SPM package to enable the Swift / C++ interop, introduced in Xcode 15 and Swift 5.9. Keep in mind that this also applies to nested dependencies: the configuration has to be set recursively for the entire dependency tree down to the llama.cpp SPM package.
For Xcode projects:
- Open your build settings in Xcode by selecting PROJECT_NAME > TARGET_NAME > Build Settings.
- Within the Build Settings, search for the `C++ and Objective-C Interoperability` setting and set it to `C++ / Objective-C++`. This enables the project to use the C++ headers from llama.cpp.
For SPM packages:
- Open the `Package.swift` file of your SPM package.
- Within the package `target` that consumes the llama.cpp package, add the `interoperabilityMode(_:)` Swift build setting as follows:
/// Adds the dependency to the Spezi LLM SPM package
dependencies: [
    .package(url: "https://github.com/StanfordSpezi/SpeziLLM", .upToNextMinor(from: "0.6.0"))
],
targets: [
    .target(
        name: "ExampleConsumingTarget",
        /// States the dependency of the target on SpeziLLMLocal
        dependencies: [
            .product(name: "SpeziLLMLocal", package: "SpeziLLM")
        ],
        /// Important: Configure the `.interoperabilityMode(_:)` within the `swiftSettings`
        swiftSettings: [
            .interoperabilityMode(.Cxx)
        ]
    )
]
You can configure the Spezi Local LLM execution within the typical `SpeziAppDelegate`.
In the example below, the `LLMRunner` from the SpeziLLM target, which is responsible for providing LLM functionality within the Spezi ecosystem, is configured with the `LLMLocalPlatform` from the SpeziLLMLocal target. This prepares the `LLMRunner` to locally execute Language Models.
class TestAppDelegate: SpeziAppDelegate {
    override var configuration: Configuration {
        Configuration {
            LLMRunner {
                LLMLocalPlatform()
            }
        }
    }
}
The code example below showcases the interaction with local LLMs through the SpeziLLM `LLMRunner`, which is injected into the SwiftUI `Environment` via the `Configuration` shown above.
The `LLMLocalSchema` defines the type and configuration of the to-be-executed `LLMLocalSession`. This transformation is done via the `LLMRunner` that uses the `LLMLocalPlatform`. The inference via `LLMLocalSession/generate()` returns an `AsyncThrowingStream` that yields all generated `String` pieces.
struct LLMLocalDemoView: View {
    @Environment(LLMRunner.self) var runner
    @State var responseText = ""

    var body: some View {
        Text(responseText)
            .task {
                do {
                    // Instantiate the `LLMLocalSchema` to an `LLMLocalSession` via the `LLMRunner`.
                    let llmSession: LLMLocalSession = runner(
                        with: LLMLocalSchema(
                            modelPath: URL(string: "URL to the local model file")!
                        )
                    )

                    for try await token in try await llmSession.generate() {
                        responseText.append(token)
                    }
                } catch {
                    // Handle inference errors, e.g., surface them to the user.
                    responseText = "LLM inference failed: \(error)"
                }
            }
    }
}
Note
To learn more about the usage of SpeziLLMLocal, please refer to the [DocC documentation](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillmlocal).
A module that allows you to interact with GPT-based Large Language Models (LLMs) from OpenAI within your Spezi application.
`SpeziLLMOpenAI` provides a pure Swift-based API for interacting with the OpenAI GPT API, building on top of the infrastructure of the SpeziLLM target.
In addition, `SpeziLLMOpenAI` provides developers with a declarative Domain Specific Language to utilize the OpenAI function calling mechanism. This enables structured, bidirectional, and reliable communication between the OpenAI LLMs and external tools, such as the Spezi ecosystem.
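As a rough illustration of this DSL, the sketch below declares a single LLM-callable function and registers it with an `LLMOpenAISchema`. It assumes the `LLMFunction` protocol and its `@Parameter` property wrapper as described in the SpeziLLMOpenAI documentation; the function name, description, parameter, and weather logic are purely illustrative:

import SpeziLLMOpenAI

/// A function the OpenAI LLM can call during inference (illustrative example).
struct WeatherFunction: LLMFunction {
    static let name: String = "get_weather"
    static let description: String = "Get the current weather for a given location."

    @Parameter(description: "The city to get the weather for, e.g., San Francisco.")
    var location: String

    func execute() async throws -> String? {
        // A real app would query a weather service here; the fixed value is only a placeholder.
        "The weather in \(location) is sunny at 25 °C."
    }
}

/// Registers the function with the schema so the model can invoke it during inference.
let functionCallingSchema = LLMOpenAISchema(
    parameters: .init(modelType: .gpt3_5Turbo)
) {
    WeatherFunction()
}

The value returned from `execute()` is handed back to the model as the function call result, which is what enables the structured, bidirectional communication described above.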
In order to use OpenAI LLMs within the Spezi ecosystem, the SpeziLLM `LLMRunner` needs to be initialized in the Spezi `Configuration` with the `LLMOpenAIPlatform`. Only then can the `LLMRunner` be used for inference with OpenAI LLMs. See the SpeziLLM documentation for more details.
class LLMOpenAIAppDelegate: SpeziAppDelegate {
    override var configuration: Configuration {
        Configuration {
            LLMRunner {
                LLMOpenAIPlatform()
            }
        }
    }
}
The code example below showcases the interaction with an OpenAI LLM through the SpeziLLM `LLMRunner`, which is injected into the SwiftUI `Environment` via the `Configuration` shown above.
The `LLMOpenAISchema` defines the type and configuration of the to-be-executed `LLMOpenAISession`. This transformation is done via the `LLMRunner` that uses the `LLMOpenAIPlatform`. The inference via `LLMOpenAISession/generate()` returns an `AsyncThrowingStream` that yields all generated `String` pieces.
struct LLMOpenAIDemoView: View {
    @Environment(LLMRunner.self) var runner
    @State var responseText = ""

    var body: some View {
        Text(responseText)
            .task {
                do {
                    // Instantiate the `LLMOpenAISchema` to an `LLMOpenAISession` via the `LLMRunner`.
                    let llmSession: LLMOpenAISession = runner(
                        with: LLMOpenAISchema(
                            parameters: .init(
                                modelType: .gpt3_5Turbo,
                                systemPrompt: "You're a helpful assistant that answers questions from users.",
                                overwritingToken: "abc123"
                            )
                        )
                    )

                    for try await token in try await llmSession.generate() {
                        responseText.append(token)
                    }
                } catch {
                    // Handle inference errors, e.g., surface them to the user.
                    responseText = "LLM inference failed: \(error)"
                }
            }
    }
}
Note
To learn more about the usage of SpeziLLMOpenAI, please refer to the [DocC documentation](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillmopenai).
Contributions to this project are welcome. Please make sure to read the contribution guidelines and the contributor covenant code of conduct first.
This project is licensed under the MIT License. See Licenses for more information.