tests failing for 1.7 #290

Closed · anandijain opened this issue Oct 9, 2021 · 2 comments

@anandijain (Contributor)
Tested on three different configs: the two using 1.7.0-rc1 fail, the one on 1.6 passes.

I understand that floating-point behavior can shift the error-based tests a bit, but the values are too far off for that to be the cause, I think.

Pointers on how to dig into this would be helpful. Thanks.
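
For reproducing, a minimal sketch of isolating the failing test set (not from the original report; it assumes the package is dev'd at ~/.julia/dev/DataDrivenDiffEq, as in the paths below):

using Pkg
# Activate the dev'd checkout so its Project.toml / Manifest.toml are used.
Pkg.activate(joinpath(homedir(), ".julia", "dev", "DataDrivenDiffEq"))
Pkg.test()  # run the full suite for the active project

# To iterate on just the failing file, it can also be included directly after
# loading the packages runtests.jl normally sets up (Test, DataDrivenDiffEq,
# ModelingToolkit, and whatever else the test file expects):
# include(joinpath(homedir(), ".julia", "dev", "DataDrivenDiffEq",
#                  "test", "sindy", "michaelis_menten.jl"))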

Julia Version 1.7.0-rc1
Commit 9eade6195e (2021-09-12 06:45 UTC)
Platform Info:
  OS: macOS (arm64-apple-darwin20.5.0)
  CPU: Apple M1
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, cyclone)
Environment:
  JULIA_NUM_THREADS = 4
  JULIA_EDITOR = code

     Project DataDrivenDiffEq v0.6.6
      Status `~/.julia/dev/DataDrivenDiffEq/Project.toml`
  [34da2185] Compat v3.39.0
  [82cc6244] DataInterpolations v3.6.1
  [2b5f629d] DiffEqBase v6.73.2
  [31c24e10] Distributions v0.25.18
  [ffbed154] DocStringExtensions v0.8.5
  [961ee093] ModelingToolkit v6.5.2
  [92933f4c] ProgressMeter v1.7.1
  [1fd47b50] QuadGK v2.4.2
  [189a3867] Reexport v1.2.2
  [ae029012] Requires v1.1.3
  [2913bbd2] StatsBase v0.33.10
  [0c5d862f] Symbolics v3.4.1
  [37e2e46d] LinearAlgebra
  [10745b16] Statistics
  [8dfed614] Test

gave

Ideal data: Test Failed at /Users/anand/.julia/dev/DataDrivenDiffEq/test/sindy/michaelis_menten.jl:44
  Expression: m.Sparsity == 4
   Evaluated: 5.0 == 4

Ideal data: Test Failed at /Users/anand/.julia/dev/DataDrivenDiffEq/test/sindy/michaelis_menten.jl:44
  Expression: m.Sparsity == 4
   Evaluated: 5.0 == 4

Noisy data: Test Failed at /Users/anand/.julia/dev/DataDrivenDiffEq/test/sindy/michaelis_menten.jl:60
  Expression: m.Sparsity == 4
   Evaluated: 3.0 == 4

Noisy data: Test Failed at /Users/anand/.julia/dev/DataDrivenDiffEq/test/sindy/michaelis_menten.jl:58
  Expression: m.Error < 0.3
   Evaluated: 0.37293275755801814 < 0.3

Noisy data: Test Failed at /Users/anand/.julia/dev/DataDrivenDiffEq/test/sindy/michaelis_menten.jl:67
  Expression: m.Error < 0.5
   Evaluated: 8.205814781673297 < 0.5

Noisy data: Test Failed at /Users/anand/.julia/dev/DataDrivenDiffEq/test/sindy/michaelis_menten.jl:68
  Expression: m.AICC < 12.0
   Evaluated: 12.679219458576277 < 12.0

Noisy data: Test Failed at /Users/anand/.julia/dev/DataDrivenDiffEq/test/sindy/michaelis_menten.jl:69
  Expression: m.Sparsity == 4
   Evaluated: 9.0 == 4

Test Summary:         | Pass  Fail  Total
Sparse Identification |   35     7     42
  Pendulum            |   18           18
  Michaelis Menten    |   14     7     21
    Ideal data        |   10     2     12
    Noisy data        |    4     5      9
  Cartpole            |    3            3

I tried on a different machine

Julia Version 1.7.0-rc1
Commit 9eade6195e (2021-09-12 06:45 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-12.0.1 (ORCJIT, skylake-avx512)
Environment:
  JULIA_NUM_THREADS = 8
  JULIA_EDITOR = "/home/anandijain/.vscode-server/bin/ee8c7def80afc00dd6e593ef12f37756d8f504ea/node"

     Project DataDrivenDiffEq v0.6.6
      Status `~/.julia/dev/DataDrivenDiffEq/Project.toml`
  [34da2185] Compat v3.39.0
  [82cc6244] DataInterpolations v3.6.1
  [2b5f629d] DiffEqBase v6.73.2
  [31c24e10] Distributions v0.25.18
  [ffbed154] DocStringExtensions v0.8.5
  [961ee093] ModelingToolkit v6.5.2
  [92933f4c] ProgressMeter v1.7.1
  [1fd47b50] QuadGK v2.4.2
  [189a3867] Reexport v1.2.2
  [ae029012] Requires v1.1.3
  [2913bbd2] StatsBase v0.33.10
  [0c5d862f] Symbolics v3.4.1
  [37e2e46d] LinearAlgebra
  [10745b16] Statistics
  [8dfed614] Test

gave

Noisy data: Test Failed at /home/anandijain/.julia/dev/DataDrivenDiffEq/test/sindy/michaelis_menten.jl:60
  Expression: m.Sparsity == 4
   Evaluated: 3.0 == 4

Noisy data: Test Failed at /home/anandijain/.julia/dev/DataDrivenDiffEq/test/sindy/michaelis_menten.jl:58
  Expression: m.Error < 0.3
   Evaluated: 0.37293275755801686 < 0.3

Noisy data: Test Failed at /home/anandijain/.julia/dev/DataDrivenDiffEq/test/sindy/michaelis_menten.jl:67
  Expression: m.Error < 0.5
   Evaluated: 8.205814781673297 < 0.5

Noisy data: Test Failed at /home/anandijain/.julia/dev/DataDrivenDiffEq/test/sindy/michaelis_menten.jl:68
  Expression: m.AICC < 12.0
   Evaluated: 12.679219458576277 < 12.0

Noisy data: Test Failed at /home/anandijain/.julia/dev/DataDrivenDiffEq/test/sindy/michaelis_menten.jl:69
  Expression: m.Sparsity == 4
   Evaluated: 9.0 == 4

Test Summary:         | Pass  Fail  Total
Sparse Identification |   37     5     42
  Pendulum            |   18           18
  Michaelis Menten    |   16     5     21
    Ideal data        |   12           12
    Noisy data        |    4     5      9
  Cartpole            |    3            3
ERROR: LoadError: Some tests did not pass: 37 passed, 5 failed, 0 errored, 0 broken.

On the following config, everything passed:

Julia Version 1.6.1
Commit 6aaedecc44 (2021-04-23 05:59 UTC)
Platform Info:
  OS: Linux (x86_64-pc-linux-gnu)
  CPU: Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-11.0.1 (ORCJIT, skylake-avx512)
Environment:
  JULIA_NUM_THREADS = 8

     Project DataDrivenDiffEq v0.6.6
      Status `~/.julia/dev/DataDrivenDiffEq/Project.toml`
  [34da2185] Compat v3.39.0
  [82cc6244] DataInterpolations v3.6.1
  [2b5f629d] DiffEqBase v6.73.2
  [31c24e10] Distributions v0.25.18
  [ffbed154] DocStringExtensions v0.8.5
  [961ee093] ModelingToolkit v6.5.2
  [92933f4c] ProgressMeter v1.7.1
  [1fd47b50] QuadGK v2.4.2
  [189a3867] Reexport v1.2.2
  [ae029012] Requires v1.1.3
  [2913bbd2] StatsBase v0.33.10
  [0c5d862f] Symbolics v3.4.1
  [37e2e46d] LinearAlgebra
  [10745b16] Statistics
  [8dfed614] Test

@AlCap23 (Collaborator) commented Oct 10, 2021

I've experienced a similar, albeit not as drastic, failure on the M1 on v1.6.3, where the Michaelis-Menten test set fails. Since there are still some issues with that architecture, I would take it with a grain of salt and lean back a bit as long as Ubuntu is doing fine.

Since the Cartpole example works right now, I suspect ADM is the culprit here, probably related to the null-space computation, but I have not looked into it more deeply.

TBH, we could simply get rid of ADM, since the ImplicitOptimizer strategy is more effective and robust anyway.

Update: running the tests without the ADM optimizer works, so this is definitely related.

Julia Version 1.6.3
Commit ae8452a9e0 (2021-09-23 17:34 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin19.5.0)
  CPU: Apple M1
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-11.0.1 (ORCJIT, westmere)
Environment:
  JULIA_EDITOR = code
  JULIA_NUM_THREADS = 
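
A rough sketch (not from the comment) of the optimizer comparison described above, assuming DataDrivenDiffEq v0.6's solve(prob, basis, opt) entry point, default constructors for ADM and ImplicitOptimizer, and a metrics accessor exposing the same Error / AICC / Sparsity fields seen in the test output; prob and basis stand in for the Michaelis-Menten problem and candidate basis built in the test file:

using DataDrivenDiffEq

# `prob` and `basis` are placeholders for the implicit Michaelis-Menten
# problem and candidate basis set up in test/sindy/michaelis_menten.jl.
for opt in (ADM(), ImplicitOptimizer())   # constructor defaults are assumed
    res = solve(prob, basis, opt)         # v0.6-style solve call, assumed
    m = metrics(res)                      # assumed to expose Error, AICC, Sparsity
    println(nameof(typeof(opt)), ": Error=", m.Error,
            "  AICC=", m.AICC, "  Sparsity=", m.Sparsity)
end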

@AlCap23 (Collaborator) commented Nov 22, 2021

#310
