
Problem running the best model (Feed-Forward Neural Network) after hyperparameter optimization #501

mcvta opened this issue Feb 2, 2022 · 2 comments


mcvta commented Feb 2, 2022

Hi everyone,

I'm running the Feed-Forward Neural Network (FNN) from https://github.com/MoritzFeigl/wateRtemp with R (4.1.2) and TensorFlow (2.7.0).

I'm using the test dataset available from the same repository. After the optimization process, when the package tries to run the best model, I get the following error:

** Starting FNN computation for catchment test_catchment ***
Mean and standard deviation used for feature scaling are saved under test_catchment/FNN/standard_FNN/scaling_values.csv
Using existing scores as initial grid for the Bayesian Optimization
Bayesian Hyperparameter Optimization:
40 iterations were already computed
Run the best performing model as ensemble:
2022-02-02 13:06:51.059425: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2022-02-02 13:06:51.060834: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Loaded Tensorflow version 2.7.0
Error in py_call_impl(callable, dots$args, dots$keywords) :
TypeError: Exception encountered when calling layer "alpha_dropout" (type AlphaDropout).

'>' not supported between instances of 'dict' and 'float'

Call arguments received:
• inputs=tf.Tensor(shape=(None, 42), dtype=float32)
• training=None
In addition: Warning message:
In if (dropout_layers) { :
the condition has length > 1 and only the first element will be used
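If I read the traceback correctly, the dropout rate seems to reach the AlphaDropout layer as a dict rather than a float (possibly a named R vector converted by reticulate). A minimal Python sketch (my own assumption, not wateRtemp code) reproduces the same TypeError:

```python
# Hypothetical sketch of the failing comparison: if the dropout rate
# arrives as a dict instead of a float, an AlphaDropout-style '>' check
# on the rate fails exactly as in the traceback above.
rate = {"dropout": 2.22044604925031e-16}  # non-scalar rate (assumption)

try:
    if rate > 0.0:  # comparing a dict with a float raises TypeError
        pass
except TypeError as exc:
    print(exc)  # '>' not supported between instances of 'dict' and 'float'
```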

This is the code I'm using to run the model:

library("wateRtemp")
library(tensorflow)
data(test_catchment)

wt_preprocess(test_catchment)
train_data <- feather::read_feather("test_catchment/train_data.feather")
test_data <- feather::read_feather("test_catchment/test_data.feather")

wt_fnn(
  train_data,
  test_data = NULL,
  catchment = NULL,
  model_name = NULL,
  seed = NULL,
  n_iter = 40,
  n_random_initial_points = 20,
  epochs = 100,
  early_stopping_patience = 5,
  ensemble_runs = 5,
  bounds_layers = c(1, 5),
  bounds_units = c(5, 200),
  bounds_dropout = c(0, 0.2),
  bounds_batch_size = c(5, 150),
  initial_grid_from_model_scores = TRUE
)

wt_fnn(train_data, test_data, catchment = "test_catchment", seed = 42, model_name = "standard_FNN")

These are the parameters of the best model:
layers = 3
units = 200
max_epoc = 100
early_stopping_patience = 5
batch_size = 60
dropout = 2.22044604925031E-16
ensemble = 1

Can this problem be related to the very small dropout value, dropout = 2.22044604925031E-16?
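As a side note on that value: 2.22044604925031E-16 is (up to printing precision) the double-precision machine epsilon (`.Machine$double.eps` in R), so the optimizer effectively selected a dropout of zero. A quick check in Python (my own sketch, not wateRtemp code):

```python
import sys

# Reported "best" dropout from the optimization output above.
reported = 2.22044604925031e-16

# Double-precision machine epsilon (equals R's .Machine$double.eps).
eps = sys.float_info.epsilon

# The reported value is epsilon truncated in printing, i.e. effectively 0.
print(abs(reported - eps) < 1e-29)  # True
```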

Thank you

@mcvta mcvta changed the title Problem running the best model after hyperparameter optimization Problem running the best model (Feed-Forward Neural Network) after hyperparameter optimization Feb 2, 2022
@t-kalinowski (Member) commented:

This seems to be a bug in the wateRtemp package. Can you file an issue there?

I also just pushed a commit to keras simplifying the layer_alpha_dropout() wrapper, but it won't fix this bug.


mcvta commented Feb 17, 2022

Hi, I have already done that:

MoritzFeigl/wateRtemp#1

Can you run the FNN (wt_fnn) with the test dataset that is provided here:
https://github.com/MoritzFeigl/wateRtemp

Just to check whether it is a bug in the original code.

Thank you,
