This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

MethodError during training when running mx.fit in regression-example.jl (Julia) #17108

Open
truedichotomy opened this issue Dec 18, 2019 · 0 comments
truedichotomy commented Dec 18, 2019

Description

MethodError during training when running mx.fit in regression-example.jl in Julia 1.3 on macOS 10.15.2.

Error Message

julia> mx.fit(model, optimizer, trainprovider,
              initializer = mx.NormalInitializer(0.0, 0.1),
              eval_metric = mx.MSE(),
              eval_data = evalprovider,
              n_epoch = 20,
              callbacks = [mx.speedometer()])
[ Info: Start training on Context[CPU0]
[ Info: Initializing parameters...
[ Info: Creating KVStore...
[ Info: TempSpace: Total 0 MB allocated on CPU0
[ Info: Start training...
ERROR: MethodError: no method matching (::MXNet.mx.var"#5784#5785")(::Float64, ::NDArray{Float32,1})
Closest candidates are:
  #5784(::Any) at /Users/c2po/.julia/packages/MXNet/XoVCW/src/metric.jl:263
Stacktrace:
 [1] (::Base.var"#3#4"{MXNet.mx.var"#5784#5785"})(::Tuple{Float64,NDArray{Float32,1}}) at ./generator.jl:36
 [2] iterate at ./generator.jl:47 [inlined]
 [3] mapfoldl_impl(::Function, ::Function, ::NamedTuple{(),Tuple{}}, ::Base.Generator{Base.Iterators.Zip{Tuple{Float64,Array{NDArray{Float32,1},1}}},Base.var"#3#4"{MXNet.mx.var"#5784#5785"}}) at ./reduce.jl:55
 [4] #mapfoldl#186 at ./reduce.jl:72 [inlined]
 [5] mapfoldl at ./reduce.jl:72 [inlined]
 [6] #mapreduce#194 at ./reduce.jl:200 [inlined]
 [7] mapreduce at ./reduce.jl:200 [inlined]
 [8] #reduce#196 at ./reduce.jl:357 [inlined]
 [9] reduce(::Function, ::Base.Generator{Base.Iterators.Zip{Tuple{Float64,Array{NDArray{Float32,1},1}}},Base.var"#3#4"{MXNet.mx.var"#5784#5785"}}) at ./reduce.jl:357
 [10] #mapreduce#195(::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::typeof(mapreduce), ::Function, ::Function, ::Float64, ::Vararg{Any,N} where N) at ./reduce.jl:201
 [11] mapreduce(::Function, ::Function, ::Float64, ::Array{NDArray{Float32,1},1}) at ./reduce.jl:201
 [12] get(::MSE{1}) at /Users/c2po/.julia/packages/MXNet/XoVCW/src/metric.jl:263
 [13] #fit#5876(::Base.Iterators.Pairs{Symbol,Any,NTuple{5,Symbol},NamedTuple{(:initializer, :eval_metric, :eval_data, :n_epoch, :callbacks),Tuple{NormalInitializer,MSE{1},ArrayDataProvider{Float32,2},Int64,Array{MXNet.mx.BatchCallback,1}}}}, ::typeof(MXNet.mx.fit), ::FeedForward, ::ADAM, ::ArrayDataProvider{Float32,2}) at /Users/c2po/.julia/packages/MXNet/XoVCW/src/model.jl:545
 [14] (::MXNet.mx.var"#kw##fit")(::NamedTuple{(:initializer, :eval_metric, :eval_data, :n_epoch, :callbacks),Tuple{NormalInitializer,MSE{1},ArrayDataProvider{Float32,2},Int64,Array{MXNet.mx.BatchCallback,1}}}, ::typeof(MXNet.mx.fit), ::FeedForward, ::ADAM, ::ArrayDataProvider{Float32,2}) at ./none:0
 [15] top-level scope at REPL[62]:1
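One detail in the trace worth noting (an observation, not a confirmed root cause): the failing reduction iterates a `Zip{Tuple{Float64,Array{NDArray{Float32,1},1}}}`, i.e. a `Float64` scalar zipped with a vector, and the splatting wrapper `Base.var"#3#4"` then calls the one-argument closure from metric.jl:263 with two arguments. This is the shape of failure produced by the pre-1.0 `mapreduce(f, op, v0, itr)` spelling, where `v0` was an initial value; in Julia 1.x that call treats `v0` as a second iterator. A minimal sketch (the closure `f` here is hypothetical, standing in for the metric's mapper):

```julia
# Pre-1.0 code often wrote mapreduce(f, op, v0, itr) with v0 as the initial
# value. In Julia 1.x the same call zips v0 with itr instead: a Float64
# iterates as a single element, so f is invoked as f(v0, first(itr)).
f = x -> 2x                        # one-argument mapper
try
    mapreduce(f, +, 0.0, [1.0, 2.0])
catch err
    # Throws MethodError: f is called with two arguments, matching the
    # error shape in the trace above.
    println(err isa MethodError)
end

# The Julia 1.x spelling passes the initial value as a keyword instead:
mapreduce(f, +, [1.0, 2.0]; init = 0.0)  # 6.0
```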

To Reproduce

I was simply following regression-example.jl as provided here: https://github.com/apache/incubator-mxnet/blob/master/julia/examples/regression-example.jl

The error occurred during the initial training pass with a small batch size (lines 81-86 of the example).
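For context, a hedged sketch of the step being run (mirroring the example's structure; the variable names `trainx`/`trainy`/`validx`/`validy` and `batchsize` are assumptions, and the `mx.fit` call is copied from the error message above):

```julia
using MXNet

# Sketch of the initial-training step from regression-example.jl;
# trainx/trainy/validx/validy and batchsize are assumed to be defined
# earlier in the script.
trainprovider = mx.ArrayDataProvider(:data => trainx, :label => trainy,
                                     batch_size = batchsize, shuffle = true)
evalprovider  = mx.ArrayDataProvider(:data => validx, :label => validy,
                                     batch_size = batchsize, shuffle = true)

mx.fit(model, optimizer, trainprovider,
       initializer = mx.NormalInitializer(0.0, 0.1),
       eval_metric = mx.MSE(),
       eval_data = evalprovider,
       n_epoch = 20,
       callbacks = [mx.speedometer()])
```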

Environment

We recommend using our script for collecting the diagnostic information. Run the following command and paste the outputs below:

curl --retry 10 -s https://raw.githubusercontent.com/dmlc/gluon-nlp/master/tools/diagnose.py | python

----------Python Info----------
Version      : 3.7.4
Compiler     : Clang 4.0.1 (tags/RELEASE_401/final)
Build        : ('default', 'Aug 13 2019 15:17:50')
Arch         : ('64bit', '')
------------Pip Info-----------
Version      : 19.3.1
Directory    : /Users/c2po/anaconda3/lib/python3.7/site-packages/pip
----------MXNet Info-----------
No MXNet installed.
----------System Info----------
Platform     : Darwin-19.2.0-x86_64-i386-64bit
system       : Darwin
node         : xxx.local
release      : 19.2.0
version      : Darwin Kernel Version 19.2.0: Sat Nov  9 03:47:04 PST 2019; root:xnu-6153.61.1~20/RELEASE_X86_64
----------Hardware Info----------
machine      : x86_64
processor    : i386
b'machdep.cpu.brand_string: Intel(R) Core(TM) i7-8850H CPU @ 2.60GHz'
b'machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ DTES64 MON DSCPL VMX SMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C'
b'machdep.cpu.leaf7_features: RDWRFSGS TSC_THREAD_OFFSET SGX BMI1 HLE AVX2 SMEP BMI2 ERMS INVPCID RTM FPU_CSDS MPX RDSEED ADX SMAP CLFSOPT IPT SGXLC MDCLEAR TSXFA IBRS STIBP L1DF SSBD'
b'machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT PREFETCHW RDTSCP TSCI'
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.1604 sec, LOAD: 0.7992 sec.
Timing for GluonNLP GitHub: https://github.com/dmlc/gluon-nlp, DNS: 0.0006 sec, LOAD: 0.6267 sec.
Timing for GluonNLP: http://gluon-nlp.mxnet.io, DNS: 0.2079 sec, LOAD: 0.5721 sec.
Timing for D2L: http://d2l.ai, DNS: 0.2246 sec, LOAD: 0.2906 sec.
Timing for D2L (zh-cn): http://zh.d2l.ai, DNS: 0.1295 sec, LOAD: 0.3294 sec.
Timing for FashionMNIST: https://repo.mxnet.io/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.2297 sec, LOAD: 2.0067 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.1510 sec, LOAD: 1.4693 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.2071 sec, LOAD: 0.4814 sec.