If you like the project, please ★ star this repository to show your support! 🤩
15 Jan 2024 - As I reflect on the journey of Spago, I am filled with gratitude for the enriching experience it has provided. Mastering Go and revisiting the fundamentals of Deep Learning through Spago has been immensely rewarding. Its distinctive features, especially the asynchronous computation graph and the focus on clean code, made it an extraordinary project to work on. The goal was to create a minimalist ML framework in Go, eliminating the dependency on Python in production by enabling the creation of standalone executables; this approach successfully powered several of my projects in challenging production environments.
However, the endeavor to elevate Spago to a level where it can compete effectively in the evolving 'AI space', which now extensively involves computation on GPUs, requires substantial commitment. At the same time, the vision that Spago aspired to achieve is now being impressively realized by the Candle project in Rust. With my limited capacity to dedicate the necessary attention to Spago, and in the absence of a supporting maintenance team, I have made the pragmatic decision to pause the project for now.
I am deeply grateful for the journey Spago has taken me on and for the community that has supported it. As we continue to explore the ever-evolving field of machine learning, I look forward to the exciting developments that lie ahead.
Warm regards,
Matteo Grella
Spago is a Machine Learning library written in pure Go designed to support relevant neural architectures in Natural Language Processing.
Spago is self-contained, in that it uses its own lightweight computational graph for both training and inference, making it easy to understand from start to finish.
It provides:
- Automatic differentiation via dynamic define-by-run execution
- Feed-forward layers (Linear, Highway, Convolution...)
- Recurrent layers (LSTM, GRU, BiLSTM...)
- Attention layers (Self-Attention, Multi-Head Attention...)
- Gradient descent optimizers (Adam, RAdam, RMS-Prop, AdaGrad, SGD)
- Gob-compatible neural models for serialization (see the sketch after this list)
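Because models are Gob-compatible, persisting one needs nothing beyond the standard library. Here is a minimal sketch; SaveModel, LoadModel, and the stand-in Model type are illustrative placeholders, not spago APIs:

```go
package main

import (
	"encoding/gob"
	"log"
	"os"
)

// SaveModel gob-encodes any Gob-compatible model to a file.
// In practice the value would be a spago neural model.
func SaveModel(path string, model any) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	return gob.NewEncoder(f).Encode(model)
}

// LoadModel decodes a previously saved model into dst, which must be a pointer.
func LoadModel(path string, dst any) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	return gob.NewDecoder(f).Decode(dst)
}

func main() {
	// a stand-in for a real model, just to keep the sketch runnable
	type Model struct{ Weights []float32 }

	if err := SaveModel("model.gob", Model{Weights: []float32{0.4, -0.2}}); err != nil {
		log.Fatal(err)
	}
	var m Model
	if err := LoadModel("model.gob", &m); err != nil {
		log.Fatal(err)
	}
	log.Printf("restored: %+v", m)
}
```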
If you're interested in NLP-related functionalities, be sure to explore the Cybertron package!
Requirements:
Clone this repo or get the library:
```
go get -u github.com/nlpodyssey/spago
```
A good place to start is by looking at the implementation of built-in neural models, such as the LSTM.
Here is an example of how to calculate the sum of two variables:
```go
package main

import (
	"fmt"
	"log"

	"github.com/nlpodyssey/spago/ag"
	"github.com/nlpodyssey/spago/mat"
)

func main() {
	// define the type of the elements in the tensors
	type T = float32

	// create a new node of type variable with a scalar
	a := mat.Scalar(T(2.0), mat.WithGrad(true))
	// create another node of type variable with a scalar
	b := mat.Scalar(T(5.0), mat.WithGrad(true))
	// create an addition operator (the calculation is actually performed here)
	c := ag.Add(a, b)

	// print the result
	fmt.Printf("c = %v (float%d)\n", c.Value(), c.Value().Item().BitSize())

	// accumulate the gradient on c and run backpropagation
	c.AccGrad(mat.Scalar(T(0.5)))
	if err := ag.Backward(c); err != nil {
		log.Fatalf("error during Backward(): %v", err)
	}
	fmt.Printf("ga = %v\n", a.Grad())
	fmt.Printf("gb = %v\n", b.Grad())
}
```
Output:

```
c = [7] (float32)
ga = [0.5]
gb = [0.5]
```
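Since c = a + b, the partial derivative of c with respect to each input is 1, so the seed gradient of 0.5 reaches both a and b unchanged. The same machinery extends naturally to learning. The sketch below hand-rolls a single SGD update on two scalar parameters instead of using the built-in optimizers; it assumes variables expose Value/Grad/Item as in the example above, and that float.Float has an F64 accessor alongside the BitSize call already shown:

```go
package main

import (
	"fmt"
	"log"

	"github.com/nlpodyssey/spago/ag"
	"github.com/nlpodyssey/spago/mat"
)

func main() {
	type T = float32

	// gradient-enabled parameters and a fixed input
	w := mat.Scalar(T(0.4), mat.WithGrad(true))
	b := mat.Scalar(T(-0.2), mat.WithGrad(true))
	x := mat.Scalar(T(-0.8))

	// forward pass: y = sigmoid(w*x + b)
	y := ag.Sigmoid(ag.Add(ag.Mul(w, x), b))

	// seed the output gradient and backpropagate, as in the example above
	y.AccGrad(mat.Scalar(T(1.0)))
	if err := ag.Backward(y); err != nil {
		log.Fatalf("error during Backward(): %v", err)
	}

	// one hand-rolled SGD step on the raw scalar values: p' = p - lr*grad
	// (real training would delegate this to one of spago's optimizers)
	const lr = 0.1
	newW := w.Value().Item().F64() - lr*w.Grad().Item().F64()
	newB := b.Value().Item().F64() - lr*b.Grad().Item().F64()
	fmt.Printf("updated w = %.4f, b = %.4f\n", newW, newB)
}
```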
Here is a simple implementation of the perceptron formula:
```go
package main

import (
	"fmt"

	. "github.com/nlpodyssey/spago/ag"
	"github.com/nlpodyssey/spago/mat"
)

func main() {
	// input, weight, and bias as scalar variables
	x := mat.Scalar(-0.8)
	w := mat.Scalar(0.4)
	b := mat.Scalar(-0.2)

	// y = sigmoid(w*x + b)
	y := Sigmoid(Add(Mul(w, x), b))
	fmt.Printf("y = %0.3f\n", y.Value().Item())
}
```
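Here the result is fully determined: w·x + b = 0.4·(−0.8) − 0.2 = −0.52, and sigmoid(−0.52) ≈ 0.373, so the program should print y = 0.373.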
If you think something is missing or could be improved, please open issues and pull requests.
To start contributing, check the Contributing Guidelines.
We encourage you to open an issue, as public discussion benefits the whole community. However, if you prefer to communicate privately, feel free to email Matteo Grella with any questions or comments you may have.