Getting started
The framework stores and processes data as Float32 to keep memory consumption low.
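In practice this means input arrays should be converted to Float32 before being handed to the framework. A minimal sketch (the reshape in the full example below does the same thing):

# Convert a default Float64 array to the Float32 layout the framework expects.
x64 = rand(28, 28, 1, 10)    # e.g. ten 28x28 grayscale images
x32 = Array{Float32}(x64)    # halves the memory footprint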
Because BLAS multithreading performs poorly for these workloads, BLAS is automatically restricted to a single thread when the package is imported, which gives better performance.
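This is equivalent to the following standard-library call, shown here only for illustration; the package does it for you on import:

using LinearAlgebra
BLAS.set_num_threads(1)    # restrict BLAS to a single thread
BLAS.get_num_threads()     # verify; returns 1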
It is recommended to run the whole program with Julia's own multithreading via julia -t N, where N is the number of CPU threads on your machine.
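For example, on an 8-thread machine (the thread count here is an assumption; substitute your own):

# Start Julia with 8 threads:
#   julia -t 8 train.jl
# Inside Julia, confirm how many threads are available:
using Base.Threads
println(nthreads())    # should print 8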
The code below shows, in essence, how everything works.
using MLDatasets, DianoiaML

# Load MNIST: train_x is 28x28x60000, labels are the digits 0-9.
train_x, train_y = MNIST.traindata()
test_x, test_y = MNIST.testdata()

# Map each digit label to a one-hot class index (digit 0 goes to index 10).
dict = Dict{Int64, Int64}(1=>1, 2=>2, 3=>3, 4=>4, 5=>5, 6=>6, 7=>7, 8=>8, 9=>9, 0=>10)

# Build a small convolutional network for 28x28x1 images.
model = Sequential()
model.add_layer(model, Conv2D; filter=32, input_shape=(28,28,1), kernel_size=(3,3), activation_function=ReLU)
model.add_layer(model, MaxPooling2D; pool_size=(2,2))
model.add_layer(model, Conv2D; filter=64, kernel_size=(3,3), activation_function=ReLU)
model.add_layer(model, MaxPooling2D; pool_size=(2,2))
model.add_layer(model, Flatten;)
model.add_layer(model, Dense; layer_size=128, activation_function=ReLU)
model.add_layer(model, Dense; layer_size=10, activation_function=Softmax_CEL)

# Train with the Adam optimizer: inputs are reshaped to 28x28x1x60000 and
# converted to Float32, labels are one-hot encoded through dict.
Adam.fit(model=model, input_data=Array{Float32}(reshape(train_x, 28,28,1,60000)), output_data=oneHot(train_y, 10, dict),
         loss_function=Categorical_Cross_Entropy_Loss, monitor=Classification, epochs=10, batch=128)
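To make the label encoding concrete, here is a plain-Julia sketch of what one-hot encoding through dict would produce. This is an illustration only, not DianoiaML's oneHot implementation, whose internals and output layout may differ:

# Hypothetical helper: one-hot encode labels through the dict above.
function onehot_sketch(labels, nclasses, mapping)
    out = zeros(Float32, nclasses, length(labels))
    for (j, y) in enumerate(labels)
        out[mapping[y], j] = 1f0    # digit 0 lights up row 10, etc.
    end
    return out
end

onehot_sketch([5, 0, 3], 10, dict)    # 10x3 matrix with ones at rows 5, 10, 3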
Star the repo if you like it! :-)