⚠️ This is a community-maintained library and is not officially supported by Groq.
A Swift SDK for the Groq API, providing a convenient way to interact with Groq's language models in Swift applications. The SDK is designed to work on both Apple platforms and Linux.
A demo chat application built with GroqSwift is included in the repository (see the GroqChatDemo directory). Key features of the SDK:
- ✨ Modern async/await API design
- 🔄 Support for both regular and streaming completions
- 🛡️ Type-safe request and response models
- 🐧 Linux compatibility
- ⚡️ Proper error handling with detailed messages
- 📱 Support for all Apple platforms
Add the following to your `Package.swift` file:

```swift
dependencies: [
    .package(url: "https://github.com/engali94/groq-swift.git", from: "0.1.0")
]
```
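The dependency also needs to be wired into the target that imports the library. A minimal manifest might look like the sketch below; the target name `MyApp` is hypothetical, and the product name `GroqSwift` is assumed from the import statement used in the examples that follow:

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyApp",
    dependencies: [
        // Package URL from the installation step above
        .package(url: "https://github.com/engali94/groq-swift.git", from: "0.1.0")
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            // Product name "GroqSwift" is assumed from `import GroqSwift` below
            dependencies: [.product(name: "GroqSwift", package: "groq-swift")]
        )
    ]
)
```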
Before running the code below, make sure the `GROQ_API_KEY` environment variable is set. Never store your API key directly in code.
```swift
import GroqSwift

let apiKey = ProcessInfo.processInfo.environment["GROQ_API_KEY"] ?? ""
let client = GroqClient(apiKey: apiKey)

let request = ChatCompletionRequest(
    model: .mixtral8x7bChat,
    messages: [Message(role: .user, content: "What is the capital of France?")],
    temperature: 0.7
)

do {
    let response = try await client.createChatCompletion(request)
    print(response.choices.first?.message.content ?? "")
} catch {
    print("Error: \(error)")
}
```
```swift
// Streaming completion
for try await response in await client.createStreamingChatCompletion(request) {
    if let content = response.choices.first?.delta.content {
        print(content, terminator: "")
    }
}
```
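If you need the complete reply as well as the live output, you can accumulate the deltas while streaming. A minimal sketch built on the same streaming API, assuming the request has streaming enabled:

```swift
// Accumulate streamed deltas into the full reply while printing them live.
var fullReply = ""
for try await response in await client.createStreamingChatCompletion(request) {
    if let content = response.choices.first?.delta.content {
        fullReply += content
        print(content, terminator: "")
    }
}
print()  // newline after the streamed output
// fullReply now holds the complete assistant message
```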
The SDK supports all message roles:
```swift
// System message to set behavior
let systemMessage = Message(role: .system, content: "You are a helpful assistant")

// User message
let userMessage = Message(role: .user, content: "Hello!")

// Assistant message
let assistantMessage = Message(role: .assistant, content: "Hi there!")
```
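Putting the roles together, a multi-turn conversation is just an ordered array of messages passed to the request. A sketch, assuming `messages` accepts the full history in order as in the quick-start example:

```swift
// A multi-turn conversation: each message is appended in order,
// so the model sees the entire history on every request.
let conversation = [
    Message(role: .system, content: "You are a helpful assistant"),
    Message(role: .user, content: "Hello!"),
    Message(role: .assistant, content: "Hi there!"),
    Message(role: .user, content: "What can you help me with?")
]

let request = ChatCompletionRequest(
    model: .mixtral8x7bChat,
    messages: conversation
)
```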
Use dot syntax to specify models:
```swift
// LLaMA models
let llamaRequest = ChatCompletionRequest(model: .llama70bChat)
let llamaVersatileRequest = ChatCompletionRequest(model: .llama70bVersatile)

// Mixtral models
let mixtralRequest = ChatCompletionRequest(model: .mixtral8x7bChat)
let mixtralVersatileRequest = ChatCompletionRequest(model: .mixtral8x7bVersatile)

// Gemma models
let gemmaRequest = ChatCompletionRequest(model: .gemma7bChat)
let gemmaVersatileRequest = ChatCompletionRequest(model: .gemma7bVersatile)

// DeepSeek models
let deepseekLlamaRequest = ChatCompletionRequest(model: .deepseekR1DistillLlama70b)
let deepseekQwenRequest = ChatCompletionRequest(model: .deepseekR1DistillQwen32b)
```
Customize your requests with various parameters:
```swift
let request = ChatCompletionRequest(
    model: .mixtral8x7bChat,
    messages: messages,
    stream: true,              // Enable streaming
    maxCompletionTokens: 100,  // Limit response length
    temperature: 0.7,          // Control randomness
    topP: 0.9,                 // Nucleus sampling
    presencePenalty: 0.5,      // Penalize token presence
    frequencyPenalty: 0.5,     // Penalize token frequency
    stop: ["END"],             // Stop sequences
    user: "user-123"           // User identifier
)
```
The SDK provides detailed error information:
```swift
do {
    let response = try await client.createChatCompletion(request)
} catch let error as GroqError {
    switch error {
    case .invalidRequest(let message):
        print("Invalid request: \(message)")
    case .authenticationError(let message):
        print("Auth error: \(message)")
    case .apiError(let statusCode, let message):
        print("API error \(statusCode): \(message)")
    case .invalidResponse(let message):
        print("Invalid response: \(message)")
    case .invalidURL:
        print("Invalid URL")
    }
} catch {
    print("Unexpected error: \(error)")
}
```
Check out the GroqChatDemo directory for a complete SwiftUI chat application that demonstrates the SDK's capabilities.
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
Requirements:

- macOS 13.0+
- iOS 16.0+
- watchOS 9.0+
- tvOS 16.0+
- visionOS 1.0+
- Swift 5.9+