From 8b3f6a6693a0c789f7b8eaf29279872b539b433f Mon Sep 17 00:00:00 2001 From: Christian Convey Date: Mon, 28 Jan 2019 17:46:25 -0500 Subject: [PATCH] Merge in upstream `master` through 4fe546 (#543) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Support full convention in quantized pooling (#13260) * fix quantized pooling and enable it in INT8 SqueezeNet * add test * fix test * address review comments * refine the test for quantized pooling * Add utility slave (#13383) * A few operators on graphs stored as CSR (#13290) * edge_id op csr forward on CPU (#34) * add node subgraph generator. (#35) * create DGLSubgraph. * fix. * return old eids in node_subgraph. * accelerate subgraph construction. * Add neighborhood op (#37) * add csr_neighborhood op * update neighborhood sample * Update csr_neighborhood_sample-inl.h * Update csr_neighborhood_sample-inl.h * Update csr_neighborhood_sample.cc * add graph compact operator. * fix a bug in dgl_subgraph. * fix a bug in dgl_graph_compact. * Update csr sample op (#39) * add csr_neighborhood op * update neighborhood sample * Update csr_neighborhood_sample-inl.h * Update csr_neighborhood_sample-inl.h * Update csr_neighborhood_sample.cc * Update csr_neighborhood_sample-inl.h * Update csr_neighborhood_sample.cc * Update csr_neighborhood_sample-inl.h * remove space. * move to dgl_graph to contrib. * move code. * move edge id. * fix compilation error. * add test for subgraph. * cleanup. * fix. * fix. * fix compile error. * fix compile error. * fix compile error. * fix. * add operator doc. * remove graph_compact * update doc. * address comments. * retrigger. * address comments. * retrigger * fix a bug in test. * retrigger * add check_format * Fixes #13386 - Refer Warnings (#13387) * Updated the paths for images for java tutorial (#13361) * Updated the paths for images * Empty commit * Empty commit * Nudge to CI * Fix/env disable mkldnn cache map (#13324) * add flag to disable mkldnn cache * update docs * fix typos * update var name * fix ordering * set cache size * fix log message * update docs * fix lint * fix lint * fix comparison * update method name * fix missing * fix logging * remove random item when cache exceeded * update helper name * update hash namespace * make ophash template * udpate function params * fix return * fix return * update return for helper * chagne class to typename * add typename * fix lint * update doc * pass ptr to cache * retrigger * retrigger * retrigger * change env var name to MXNET_MKLDNN_CACHE_NUM * fix log env name * retrigger * Initial website documentation for Java API (#13289) * Initial website documentation for Java API * Changing paths to be relative * Refactoring Java API website landing page * Update Java web docs based on feedback * Minor formatting fixes * Update maven repo to nightly build so that java will be available prior to 1.4.0 release * Adding java tutorial index to test_sanity_tutorials whitelist * Fix link to javadocs * Fix javadoc for infer package and minor install doc fix * Minor path fix * Replace mxnetci dockcross with public dockcross due to missing image (#13402) * Replace mxnetci dockcross with public dockcross due to missing image * Remove source lists change * Disable Jetson * Move to mxnetcipinned * Correct shapes of images in cifar10 and cifar100 (#13348) * Correct shapes of images in cifar10 and cifar100 cifar10 and cifar100 have 3 channels * Retrigger build * Updated recommenders example (#13041) * initial modification recommender * Recommender 
updates * fix notebooks * Update README.md * trigger build * Update README.md * Retrigger build * Improving multi-processing reliability for gluon DataLoader (#13318) * improving multi-processing reliability for gluon dataloader I found some multi-processing-related issues in the Gluon DataLoader. 1) Each time a _MultiWorkerIter shuts down, it could leave some dangling processes. The shutdown mechanism could not guarantee that all worker processes can be terminated. As a result, after running for several epochs, more and more dangling processes will stay there. This problem barely happens during training. In this case, there is a decent time interval between the last-batch data prefetching and the _MultiWorkerIter's shutting down). But the problem frequently happens 1) when I stop the iter before the end of an epoch, and 2) when I use the DataLoader for a data loading service and load the data as fast as possible. In both cases, the time interval between the most recent data prefetching and the iter shutdown are short. I guess that the _MultiWorkerIter iter is unable to shut down properly during active data prefetching. To fix this, I explicitly terminate the worker processes inside the shutdown function. 2) When loading data fast (still mostly during testing and data serving), there seems to be a risk of data racing. The data iter uses a _MultiWorkerIter to cache prefetched data, but the dict does not seem to be thread-safe for concurrent inserting and deleting elements. So occasionally, the data can be missing from the dict. To prevent this, I use a scope lock to guard the dict access. * do not wait for the workers to join, and kill any alive wokers as soon as possible * Onnx multi output (#13390) * Fix ONNX export to support multi-output graphs * Add ONNX unit-test * Added multi-output shape inference. - Removed unnecessary forward_pass() call - Modified infer_output_shape to return multiple shapes for multiple outputs as well as output names. * Fixed pylint * Change docker login (#13408) * Fixing doc links and minor edits for Java API (#13405) Update the main website links * Fix repeated typo in mxnet_op.h (#13406) * Use dynamic omp schedule for sparse dot with large matrix (#13398) * dynamic omp for dot update heuristic * add doc * Update mxnet_op.h * Update dot-inl.h * Added proper default value in cpp-package for optional (#13415) * Fix infoGan Gluon tutorial errors. (#13416) - Update notebook to avoid divide by 0 causing a warning. - Add MXBoard dependency. 
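The Gluon DataLoader reliability notes above (#13318) come down to two patterns: terminate worker processes explicitly at shutdown instead of only waiting for them to join, and guard the shared prefetch cache against concurrent inserts/deletes. A minimal sketch of those two patterns, assuming hypothetical names (_SimpleWorkerIter, _cache, _workers) that do not mirror Gluon's real internals:

    # Illustrative sketch only; not Gluon's actual implementation.
    import multiprocessing as mp
    import threading

    def _worker_loop(worker_id):
        pass  # a real worker would prefetch batches here

    class _SimpleWorkerIter:
        def __init__(self, num_workers=2):
            self._lock = threading.Lock()   # guards the prefetch cache
            self._cache = {}                # batch index -> prefetched data
            self._workers = [mp.Process(target=_worker_loop, args=(i,), daemon=True)
                             for i in range(num_workers)]
            for w in self._workers:
                w.start()

        def _store(self, idx, batch):
            with self._lock:                # avoid racing inserts/deletes
                self._cache[idx] = batch

        def shutdown(self):
            # do not rely on join() alone: kill any worker still alive
            for w in self._workers:
                if w.is_alive():
                    w.terminate()
                w.join()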
* :memo: Fixes #13388 Adds Clojure to MXNet installation docs (#13393) * Minor fixes to documentation (#13412) * Minor fixes to documentation * Updated the Maven Repository URL to point to staging repo * [Example] fix cpp example inception-bn and training acc issue (#13284) * fix inception-bn and training acc issue * add parameter initialization, fix lint * fix comparison * change optimizer to sgd * update sgd and update model name * add inception_bn in jenkins build * make max epoch an argument * remove inception_bn test * trigger ci * remove ci test * trigger ci * [Example]Fix mlp_csv example (#13273) * add instruction to get the data and fix typo * fix typo * update file name * trigger CI * add unit_test for unit_test_mlp_csv * add mlp_csv to jenkinsfile * revert jenkinsfile to another PR * trigger CI * trigger CI * Java doc (#13368) * Fix scaladoc and javadoc errors * Stop on errors starting on scala 1.3.x build * Adding Java to ubuntu setup install page and minor fixes to docs (#13420) * Adding Java to ubuntu setup install page and minor fixes to other java api docs * Improving javadoc for java-api predictor class Mostly documentation changes * [MXNET-1029] Feature request: randint operator (#12749) * randint operator add along with add optional tag to params * register param * lint space issue * randn issue fix * uniform_int_distribution doesn't support int8, uint8 fix * dtype ftype * ftype to dtype - invalid template arg * fix template arg issue * test with int dtype for windows * removed int8,uint8 from test * gpu implementation * gpu engine state diff * removed gpu support * empty commit * temporary fix : batchnorm flaky test skip * removed randn symbol specific code since other PR is on it * revert ndarray/randn for compatibility * added unit test for checking extremes and uniform distribution for sufficient samples * increased the high val * int32 to int64 support, indentation fix, check for optype correctly based on type of random function * gpu support, revert finfertype using template specialization, remove defaults, prints, test other low high val * fix for invalid template arg by checking for int32,int64 * gpu randint in random_generator * sample_uniform issue and param, removed old flaky test skip line * replaced discrete_uniform function by rand_int64 for consistency * formula update and removed itype * change ctx to include gpu, randint samepl_op.cu typo * trigger ci * doc fix, check fix, whitespace remove * added the without dtype testcase * Java demo file-path fix (#13358) * fix on ubuntu * add readme instruction * fix intellij Tutorials * fix intelliJ tutorial * fix the document * update demo * revert the change on intelliJ tutorial * fix make process * fix documentation * Updated README and NEWS with 1.3.1 release information (#13423) * Be more explicit about the exit status of the container (#13425) * [MKLDNN]Add quantized concat (#13297) * Add quantized concat * Fix non-mkldnn build * Add size check for MKLDNNQuantizedConcatForward * use all capital for constant * Rename constant with Google C++ style. * Address apeforest comments * Address apeforest comments * fix lint * Add frontend interface. 
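For reference, the randint operator added by #12749 above is exposed through the ndarray random module; a small usage sketch, where the keyword defaults are assumptions based on the notes above (which mention int32/int64 support):

    import mxnet as mx

    # draw a 2x3 array of integers in [0, 10); int32 and int64 are the
    # dtypes described in the commit notes above
    samples = mx.nd.random.randint(low=0, high=10, shape=(2, 3), dtype='int32')
    print(samples)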
* Retrigger CI * Add ARMv7 builds to dev_menu.py (#13432) * Add ARMv7 builds to dev_menu.py * Add Python3 CPU Intel MKLDNN unittests to dev_menu * [MXNET-1110] find the path to include header files (#13359) * add find_include_path API * address reviewer comment * change return type from list to string * add unit test * address reviewer comment * address reviewer comment * address reviewer comment * address reviewer comment * add subgraph adjacency operator. (#13396) * add adjacency. * fix lint. * add GPU impl. * retrigger * address comments. * Update dgl_graph.cc * Java added to install page (#13404) * added java install option * update maven blocks * update maven button url to snapshot search for java * add version; remove formatting on dependency * merge clojure updates * merge clojure updates - give code some breathing room * merge clojure updates - give code even more breathing room * 1.3.1 website updates (#13444) * 1.3.1 website updates * Java added to install page (#13404) * added java install option * update maven blocks * update maven button url to snapshot search for java * add version; remove formatting on dependency * merge clojure updates * merge clojure updates - give code some breathing room * merge clojure updates - give code even more breathing room * remove redundant link (#13428) * remove redundant link * retrigger * retrigger * [MXNET-886] ONNX export: HardSigmoid, Less, Greater, Equal (#12812) * ONNX export: Comparison operators * ONNX export: Hard sigmoid * Correct Inception Reference for Pertained Model (#13360) I noticed that the symbols and parameters in the model zoo are infact from https://github.com/apache/incubator-mxnet/blob/master/example/image-classification/symbols/inception-bn.py, which is not inception v3. It is inception + batch normalization. In this commit, I update the documentation and link to the correct research basis. 
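The find_include_path API referenced in #13359 above returns the directory holding MXNet's bundled C/C++ headers; assuming it is exposed from mxnet.libinfo as in recent releases, usage looks roughly like this:

    from mxnet import libinfo

    # prints the header directory shipped with the local install
    include_dir = libinfo.find_include_path()
    print(include_dir)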
* exclude the error folder from sphinx toc (#13354) * exclude the error folder from sphinx toc * bumping commit for CI * Update MKL-DNN to fix LSTM perf regression (#13417) * update MKL-DNN CI id * fix the reorder perf issue * bumped version to v0.17.1 * bumped to MKL-DNN v0.17.1 * pin to v0.17.1 * Mitigate #13341 (#13343) - KL never succeeds so it always goes exponential - Too many weight matrices were rejected because of zero weights, simplify generation to not include 0 weight edges * parallelize NDArray::Copy when data size is large (#12926) * parallelize NDArray::Copy by OpenMP when data size > MXNET_CPU_PARALLEL_COPY_SIZE * code specification according to reviewer's suggestions * align with std::memcpy api * add descriptive error message * update MXNET_CPU_PARALLEL_COPY_SIZE doc * update MXNET_CPU_PARALLEL_COPY_SIZE doc again * fix property not updating bug (#13085) * [MXNET-1222] Scala Inference enable different shapes input (#13330) * init commit with Predictor Improvement * add predictor Example * change into dArr * add img config * add new line and fix code style important bug fixes * Fix deconvolution / PR 13421 (#13433) * add test case * revert refactor * use with seed decorator * retrigger * remove seed * remove iteration * remove old test * update deconvolution test to have filter length that triggers mkldnn reorder * Add DGL subgraph sampling op (#13392) * add csr sample op * fix compile error in some platform * update * update openmp * speedup sampling * update csr * update csr * update time seed * update * fix compiler error * update doc * fix ci error * fix quantize_graph pass error when there're multiple outputs from a single node (#13000) * fix quantize_graph pass error when there're multiple outputs from a single node that need to insert 'contrib_quantize', 'min' and 'max' nodes for these outputs. 
* fix lint * Make the single output align with multiple outputs when inserting contrib_quantize * Change op comparing from its name to itself * skip unsupported quantize_concat * retrigger ci * Get the correct include path in pip package (#13452) * add find_include_path API * address reviewer comment * change return type from list to string * add unit test * address reviewer comment * address reviewer comment * address reviewer comment * address reviewer comment * fix include path problem in pip package * add comment * fix lint error * address reviewer comment * address reviewer comment * Use ~/.ccache as default ccache directory so is not cache is not erased on reboot (#13431) * Skip flaky test https://github.com/apache/incubator-mxnet/issues/13446 (#13480) * Rewrite dataloader with process pool, improves responsiveness and reliability (#13447) * fix recordio.py * rewrite dataloader with pool * fix batch as tuple * fix prefetching * fix pylint * picklable function * use pickle * add missing commit * Fix errors in docstrings for subgraph op; use code directive (#13463) * [MXNET-1158] JVM Memory Management Documentation (#13105) * update train_mnist * Add documentation for JVM Memory Management * update doc * address nit picks * address nit picks * Grammar and clarity edits for memory management doc * Edits for scala memory management * Update memory-management.md * Update memory-management.md * Update memory-management.md * capitalization fix * Update row_sparse tutorial (#13414) Update row_sparse tutorial * Add resiliency to onnx export code (#13426) * Added resiliency to onnx export code - With previous infer-shape implementation, if input shape was list instead of tuple or if extra non-existent parameters were provided, the code would still work. The fixes in this commit make sure that behavior is restored to prevent any compatibility issues with existing export code. * Fixed name of net in unittest * Fix pylint * [MXNET-1185] Support large array in several operators (part 1) (#13418) * fix a few operators with large arrays (# of elements) * fix bug in broadcast_div and add tests * address reviewer comment * add unit test * add empty line * retrigger CI * [MXNET-1210 ] Gluon Audio - Example (#13325) * Initialized the example * Addressed PR comments, about existing synset.txt file - no overwrite * RST - docstring issues fixed * added README * Addressed PR comments * Addressed PR comments, checking Divide by 0 * Raising error if format is not supported. * changed a line for ndarray of labels * Trigger CI * Trigger CI * PR comments addressed around skip_header argument * Addressed PR comments around librosa import * PR Comments * Passing lazy=lazy from argument * Added PR comments, labels to README.MD * Trigger CI * Addressing PR Comments in README * Modified README.md * Added example under audio folder * Retrigger CI * Retrigger CI * ONNX export: Instance normalization, Shape (#12920) * ONNX import/export: Make backend_rep common * ONNX export: Instance Normalization * ONNX export: Shape operator * Clarify dependency on OpenCV in CNN Visualization tutorial. (#13495) * clarify ops faq regarding docs strings (#13492) * Add graph_compact operator. (#13436) * add graph_compact. * fix. * add doc. * add tests for graph_compact. * address comments. * update docs. 
* trigger CI * Deprecate Jenkinsfile (#13474) * update github location for sampled_block.py (#13508) Updated to https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/model/sampled_block.py * #13453 [Clojure] - Add Spec Validations to the Optimizer namespace (#13499) * ONNX export: Logical operators (#12852) * Fix cmake options parsing in dev_menu (#13458) Add GPU+MKLDNN unittests to dev_menu * Revert "Manually track num_max_thread (#12380)" (#13501) This reverts commit 75410210e07a5fab5e044348aee276d578d5857e. * Feature/mkldnn static 2 (#13503) * build mkldnn as static lib * update makefile to statically build mkldnn * build static mkldnn * fix static name * fix static name * update static for mac * rename mkldnn dep in ci * remove moving mkldnn dynamic lib * remove commented code * remove mkldnn dnaymic for unitest * force static for mkldnn lib * remove dynamic mkldnn bind * only link windows * add mkldnn.mk * try force linking * remove mkldnn dynanmic check * remove test mkldnn install * fix spacing * fix index * add artifacts * add comment about windows * remove static * update makefile * fix toctree Sphinx errors (#13489) * fix toctree errors * nudging file for CI * Disabled flaky test test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker (#13527) * [MXNET-1234] Fix shape inference problems in Activation backward (#13409) * Provide a failing test for ReLU activation shape inference bug * Fix Activation backward shape inference fixes: #13333 * Add softsign Activation to test_gluon.py * Use activation in GPU if we are using CUDNN and not MKLDNN as it's happening right now * Don't disable MKLDNN * Docs & website sphinx errors squished 🌦 (#13488) * fix scala ndarray docs; remove interpreter style * fix docs error in kvstore * remove interpreter format in examples * remove python indicator for these non-functioning python code blocks; clears a sphinx error * remove old table that was not being used and was triggering a sphinx error * get rid of curly braces that was causing a pygments error * fix ambiguous reference causing sphinx error * nudging file for CI * [MXNET-1235] Add a test for AdaMax optimizer (#13467) * Add a test for AdaMax optimizer * Modify nested for loop with itertools.product and left tolerance to default * Trigger * Adadelta optimizer test (#13443) * adadelta test * comments * Update java setup docs for 1.4.0 (#13536) * Update java setup docs for 1.4.0 * Update Java-demo to 1.4.0 * Revert "Feature/mkldnn static 2 (#13503)" (#13540) This reverts commit 65edc9500b10a3404945d6d79acbae54a2833890. * doc fix (#13465) * [MXAPPS-1020] Clean up some Sphinx warnings. (#13539) * [MXNET-1110] Add header files required by horovod (#13062) * Add header files required by horovod * Add symbolic link and cherry picked required header * add python API to return include path * update link * fix windows CI * fix windows build * fix dlpack link * merge with master * exclude 3rd party header files from license check * exclude license check * exclude include directory * remove commented lines * Bumped minor version from 1.4.0 to 1.5.0 on master, updated License file (#13478) * updated to v1.5.0 * Bumped minor version from 1.4.0 to 1.5.0 on master * added Anirudh as maintainer for R package ... 
adding something useful and re-trigger PR check * Updated license file for clojure, onnx-tensorrt, gtest, R-package * Get the correct include path in pip package (#13452) * add find_include_path API * address reviewer comment * change return type from list to string * add unit test * address reviewer comment * address reviewer comment * address reviewer comment * address reviewer comment * fix include path problem in pip package * add comment * fix lint error * address reviewer comment * address reviewer comment * Use ~/.ccache as default ccache directory so is not cache is not erased on reboot (#13431) * Skip flaky test https://github.com/apache/incubator-mxnet/issues/13446 (#13480) * Rewrite dataloader with process pool, improves responsiveness and reliability (#13447) * fix recordio.py * rewrite dataloader with pool * fix batch as tuple * fix prefetching * fix pylint * picklable function * use pickle * add missing commit * Fix errors in docstrings for subgraph op; use code directive (#13463) * [MXNET-1158] JVM Memory Management Documentation (#13105) * update train_mnist * Add documentation for JVM Memory Management * update doc * address nit picks * address nit picks * Grammar and clarity edits for memory management doc * Edits for scala memory management * Update memory-management.md * Update memory-management.md * Update memory-management.md * capitalization fix * Update row_sparse tutorial (#13414) Update row_sparse tutorial * Add resiliency to onnx export code (#13426) * Added resiliency to onnx export code - With previous infer-shape implementation, if input shape was list instead of tuple or if extra non-existent parameters were provided, the code would still work. The fixes in this commit make sure that behavior is restored to prevent any compatibility issues with existing export code. * Fixed name of net in unittest * Fix pylint * [MXNET-1185] Support large array in several operators (part 1) (#13418) * fix a few operators with large arrays (# of elements) * fix bug in broadcast_div and add tests * address reviewer comment * add unit test * add empty line * retrigger CI * [MXNET-1210 ] Gluon Audio - Example (#13325) * Initialized the example * Addressed PR comments, about existing synset.txt file - no overwrite * RST - docstring issues fixed * added README * Addressed PR comments * Addressed PR comments, checking Divide by 0 * Raising error if format is not supported. * changed a line for ndarray of labels * Trigger CI * Trigger CI * PR comments addressed around skip_header argument * Addressed PR comments around librosa import * PR Comments * Passing lazy=lazy from argument * Added PR comments, labels to README.MD * Trigger CI * Addressing PR Comments in README * Modified README.md * Added example under audio folder * Retrigger CI * Retrigger CI * ONNX export: Instance normalization, Shape (#12920) * ONNX import/export: Make backend_rep common * ONNX export: Instance Normalization * ONNX export: Shape operator * Clarify dependency on OpenCV in CNN Visualization tutorial. (#13495) * clarify ops faq regarding docs strings (#13492) * Add graph_compact operator. (#13436) * add graph_compact. * fix. * add doc. * add tests for graph_compact. * address comments. * update docs. 
* trigger CI * Deprecate Jenkinsfile (#13474) * update github location for sampled_block.py (#13508) Updated to https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/model/sampled_block.py * #13453 [Clojure] - Add Spec Validations to the Optimizer namespace (#13499) * ONNX export: Logical operators (#12852) * Fix cmake options parsing in dev_menu (#13458) Add GPU+MKLDNN unittests to dev_menu * Revert "Manually track num_max_thread (#12380)" (#13501) This reverts commit 75410210e07a5fab5e044348aee276d578d5857e. * Feature/mkldnn static 2 (#13503) * build mkldnn as static lib * update makefile to statically build mkldnn * build static mkldnn * fix static name * fix static name * update static for mac * rename mkldnn dep in ci * remove moving mkldnn dynamic lib * remove commented code * remove mkldnn dnaymic for unitest * force static for mkldnn lib * remove dynamic mkldnn bind * only link windows * add mkldnn.mk * try force linking * remove mkldnn dynanmic check * remove test mkldnn install * fix spacing * fix index * add artifacts * add comment about windows * remove static * update makefile * fix toctree Sphinx errors (#13489) * fix toctree errors * nudging file for CI * Disabled flaky test test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker (#13527) * [MXNET-1234] Fix shape inference problems in Activation backward (#13409) * Provide a failing test for ReLU activation shape inference bug * Fix Activation backward shape inference fixes: #13333 * Add softsign Activation to test_gluon.py * Use activation in GPU if we are using CUDNN and not MKLDNN as it's happening right now * Don't disable MKLDNN * Fixing a 404 in the ubuntu setup doc (#13542) * [MXNET-1249] Fix Object Detector Performance with GPU (#13522) * Reduce post processing time * fix ssd * fix the CI * add comments * [MXNET-769] Use MXNET_HOME in a tempdir in windows to prevent access denied due t… (#13531) * Use MXNET_HOME in cwd in windows to prevent access denied due to concurrent data downloads Fixes #13484 * Revert "Disabled flaky test test_gluon_data.test_recordimage_dataset_with_data_loader_multiworker (#13527)" This reverts commit 3d499cb3584919b767142c5596211a7f7fb18d50. 
* Add a retry to qemu_provision (#13551) Fixes #13504 * Fix #13521 (#13537) * fix pool release * fix * Simplifications and some fun stuff for the MNIST Gluon tutorial (#13094) * Simplify mnist Gluon tutorial and add mislabelled sample plotting * Add mnist Gluon tutorial images * Gluon MNIST tutorial: Use modern Gluon constructs, fix some wordings * [Gluon] Move to data loaders and improve wording in MNIST tutorial * Fix broken links * Fix spelling of mislabeled * Final rewordings and code simplifications * Fix things according to review - Apply hybrid blocks - Move outputs outside of code blocks and mark for notebooks to ignore - Remove images, use external link - Fix a few formulations * Change activations to sigmoid in MNIST tutorial * Remove superfluous last layer activations in MNIST tutorial * Updated docs for randint operator (#13541) * updated docs for randint * added randint in __all__ and reordered acc to categorical then alphabetical * Trigger CI * minus mxnet.symbol and alphabetical for ndarray,symbol.md * alphabetical order * Chi_square_check for discrete distribution fix (#13543) * check for bucket instead of index * enumerate instead of range(len()) * count instead of sum to solve attribute error * revert to sum * seperate discrete and continuous * Trigger CI * Revert "Bumped minor version from 1.4.0 to 1.5.0 on master, updated License file" (#13558) * Revert "Chi_square_check for discrete distribution fix (#13543)" This reverts commit cf6e8cbd035bf315b3e8280416468a629c780d03. * Revert "Updated docs for randint operator (#13541)" This reverts commit e0ff3c36ee171386fef01fb86c54c343e4b04c14. * Revert "Simplifications and some fun stuff for the MNIST Gluon tutorial (#13094)" This reverts commit 8bbac827742c21607a863137792f03bd09847419. * Revert "Fix #13521 (#13537)" This reverts commit f6b4665995f8f8ff32862a029b2074475d8467eb. * Revert "Add a retry to qemu_provision (#13551)" This reverts commit f6f840110d74111f98c20eab5b08d64a46ebf0cd. * Revert "[MXNET-769] Use MXNET_HOME in a tempdir in windows to prevent access denied due t… (#13531)" This reverts commit bd8e0f8356676749ecae16ec38a366b4cc00bf15. * Revert "[MXNET-1249] Fix Object Detector Performance with GPU (#13522)" This reverts commit 1c8972c3c8f832519364916865541f48597581c7. * Revert "Fixing a 404 in the ubuntu setup doc (#13542)" This reverts commit cb0db290adcfd0fce956d02c234f81d453e41013. * Revert "Bumped minor version from 1.4.0 to 1.5.0 on master, updated License file (#13478)" This reverts commit 40db61908000ee86d21aac847ff2225807d6c168. * #13441 [Clojure] Add Spec Validations for the Random namespace (#13523) * Adding test for softmaxoutput (#13116) * Add workspace cleaning after job finished (#13490) * Add workspace cleaning after job finished * Update Jenkinsfile_utils.groovy * Update Jenkinsfile_utils.groovy * Fix flaky test test_random:test_randint_generator (#13498) * updated seed, alpha value, comments * typo in comment fix * added nrepeat * removed unusued variable, added link for scipy alpha, rephrased the sentence for discrete distribution buckets * removed fixed seed, alpha * Update version to v1.5.0 including clojure package (#13566) * Update DESCRIPTION * update version to v1.5.0 except for clojure * update version from 1.4.0 to 1.5.0 - add utility script to help bump versions in future - fix README to correct to current maven versions * License update (#13565) * Update LICENSE * update license for Clojure, R, ONNX-TRT and location of 3rd party dependencies. 
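The chi-square check referred to in #13543 above tests whether sampled integers are spread uniformly over their buckets; a generic sketch of such a check (using scipy rather than MXNet's internal test helper):

    import numpy as np
    from scipy import stats

    samples = np.random.randint(0, 10, size=100000)
    observed = np.bincount(samples, minlength=10)        # counts per bucket
    expected = np.full(10, samples.size / 10.0)          # uniform expectation
    chi2, p_value = stats.chisquare(observed, expected)
    assert p_value > 0.01, "samples deviate from a uniform distribution"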
* fixed typo * Fix use-before-assignment in convert_dot (#13511) * fix the situation where idx didn't align with rec (#13550) minor fix the image.py add last_batch_handle for imagedeiter remove the label type refactor the imageiter unit test fix the trailing whitespace fix coding style add new line move helper function to the top of the file * Update MXNetTutorialTemplate.ipynb (#13568) Fix typos * ONNX import/export: Size (#13112) * fix link for gluon model zoo (#13583) * Fix exception handling api doc (#13519) * Fix exception handling api doc * Update waitall api doc Co-Authored-By: anirudh2290 * add cpp example inception to nightly test (#13534) * add inception test * fix max iter for mlp * rename and add comment * rename epoch num * Add notes about debug with libstdc++ symbols (#13533) * Add imresize and copyMakeBorder to mx.image (#13357) * Add imresize API to docs * address comments * copyMakeBorder * [MXNET-1253] fix control_flow_op (#13555) * fix control_flow_op * change type for M * add test for sparse where op * Add Intel MKL blas to Jenkins (#13607) * add mkl blas to Jenkins * add mkl install script * fix bug in mkl script * remove python2 ut and add cpu-mkl node * #13385 [Clojure] - Turn examples into integration tests (#13554) * fix the Float not showing correctly problem (#13617) Merge this PR for 1.4.x * [MXNET-1155] Add scala packageTest utility (#13046) * [MXNET-1155] Add scala packageTest utility * Clean up utility * Safe change directory in Makefile for scala * mvn install file instructions with details * [MXNET-1224]: improve scala maven jni build and packing. (#13493) Major JNI feature changes. Please find more info here: https://cwiki.apache.org/confluence/display/MXNET/Scala+maven+build+improvement * [MXNET-1225] Always use config.mk in make install instructions (#13364) * Always use config.mk in make install instructions * Specify Cuda 0 for ubuntu with mkldnn * Scala install doc avoid build_from_source Minor doc fixes * Fix build_from_source CMake usage * CPP Install Instruction with CMake * Use cmake out of source build * Fix warning in waitall doc (#13618) * Optimize C++ API (#13496) * Optimize C++ API Pass parameter with reference instead of value. Add const as well as it is not changed. * fix docs/architecture/overview.md Fix BinaryShapeFunction typedef Add a right brace for SmoothL1Shape_ * fix quantize pass error when the quantization supported Op are excluded in the model (#13596) * Scripts for building dependency libraries of MXNet (#13282) * openblas script * ps-lite dependencies * USE_S3 dependencies * image libraries * license * add batch norm test (#13625) * add batch norm test * fix formatting * use out_arr as input * fix typo * remove const * use ptr * eval ptr * Set install path for libmxnet.so dynamic lib on Mac OS (#13629) * Fix the bug of BidirectionalCell (#13575) * Fix the bug of BidirectionalCell I did hybridize( ) and pass "valid_length" to the unroll( ) function of BidirectionalCell, then returned AssertionError in line 79. Because symbol.split( ) return a symbol but not a symbol list. Result in the length of inputs dont equal parameter "length" when call unroll( ) to compute r_outputs and r_states. * add a test for BidirectionalCell * Fix the bug of BidirectionalCell I did hybridize( ) and pass "valid_length" to the unroll( ) function of BidirectionalCell, then returned AssertionError in line 79. Because symbol.split( ) return a symbol but not a symbol list. 
Result in the length of inputs dont equal parameter "length" when call unroll( ) to compute r_outputs and r_states. * fix test_bidirectional_unroll_valid_length( ) Fix the error of parameter. * Fix the bug of BidirectionalCell I did hybridize( ) and pass "valid_length" to the unroll( ) function of BidirectionalCell, then returned AssertionError in line 79. Because symbol.split( ) return a symbol but not a symbol list. Result in the length of inputs dont equal parameter "length" when call unroll( ) to compute r_outputs and r_states. * fix test_bidirectional_unroll_valid_length( ) * Feature/mkldnn static (#13628) * Revert "Revert "Feature/mkldnn static 2 (#13503)" (#13540)" This reverts commit a3eca5f5c96eed0bc29bd4e58e470997091a1fb3. * include headers on mkldnn lib * retrigger * retrigger * build config for maven and pip (#13556) * config for pip * symbol whitelist * maven build config * Fix for import mxnet taking long time if multiple process launched (#13602) * https://github.com/apache/incubator-mxnet/issues/12255 doing import mxnet in multiple processes take very long time. Details : #12255 One of the reason we have OMP tuning code which iterates to find OMP tune overhead. We are reducing this iteration count to reduce the overehead of tuning code. Also, We added an environment variable where users can set the number of cores that should be used to determine tuning. * cpplint fix * Adding new environment variable: MXNET_USE_NUM_CORES_OPERATOR_TUNING to doc * fixing formatting in doc * Add reshape op supported by MKL-DNN (#12980) * Add reshape op supported by MKL-DNN * fix build issue * fix lint * fix lint * fix lint * fix lint * fix lint * fix lint * fix white space * add unit test * merge if blocks * Improve dev_menu usability, local build and virtualenv (#13529) * Improve dev_menu, add build command and virtualenv creation with local builds for easy testing * Update dev_menu.py Co-Authored-By: larroy * Cuda off by default, use ccache * address CR * [Clojure] Correct the versions in the README so they correspond to the latest maven.org release (#13507) * Correct the versions so they correspond to the latest maven.org release * trigger build * feedback from @kohr-h * Optimization of metric evaluation (#13471) * Change argsort to argpartition * Global statistics in metrics * Fix lint * Fixes from review * Trigger * Fixes from review, fix to F1, MCC and perplexity metrics, added test for global stats * Fix lint * Fix compatibility with Python 2 * Revert "Feature/mkldnn static (#13628)" (#13638) This reverts commit 5bcf2bd6e8b48fa27bfcfdafd06401ec2d28978b. 
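Since the OMP tuning described above for #13602 runs when mxnet is first imported, the new MXNET_USE_NUM_CORES_OPERATOR_TUNING variable has to be set before the import; a minimal sketch (the value 4 is only an example):

    import os

    # cap the cores probed by the operator-tuning pass so that launching
    # many Python processes does not pay the full tuning cost each time
    os.environ["MXNET_USE_NUM_CORES_OPERATOR_TUNING"] = "4"

    import mxnet as mx  # tuning now samples at most 4 cores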
* support mkl log when dtype is fp32 or fp64 (#13150) * support mkl log when dtype is fp32 or fp64 * remove macro * ensure data size less than or equal MKL_INT_MAX * code specification * fix indent * for retrigger * [MXNET-1209] Tutorial transpose reshape (#13208) * transpose tutorial * Adding Anirudhs comments * Update tutorial with some more examples * Adding links * Fixing the links, adding more examples * Update reshape_transpose.md * Fixing spelling mistakes * Updating image resolution * Adding Simon's comments * Small fixes * Update reshape_transpose.md * Update reshape_transpose.md * empty commit * empty commit * updated reference to Apache MXNet (#13645) * Complimentary gluon DataLoader improvements (#13606) * init * add tests * doc * lint * fix openmp * Improve CCache handling (#13456) * Remove gitignore entries * Modify Makefile * Modify user permissions * Add new ccache wrapper function * Change PATH rewrite to a different one to resolve CUDA issues * Add ccache to gpu cmake * Enable ccache for every build * Set permissions for arm dockerfiles * Disable ccache for ASAN * Remove g++-8 ccache redirect * Update Android Dockerfiles for user permissions * Fix ASAN compiler typo * Remove sanity for speed * Move build dir creation in android armv8 * Revert "Remove sanity for speed" This reverts commit e8386a774dafe96337930b9cac36cb24fc36585e. * Add ccache for NVCC in Makefile * [MXNET-918] Random module (#13039) * introduce random API * revert useless changes * shorter types in APIDoc gen code * fix after merge from master * Trigger CI * temp code / diag on CI * cleanup type-class code * cleanup type-class code * fix scalastyle * Fix incorrect delete in MXExecutorReshape exception handling (#13376) * Fix bad delete. Delete the pointed-to handle on cleanup, not the location of the handle itself. Also don't delete it if we didn't set it in the first place. * Remove unusued 'exec' var from MXExecutorBindEX. * [MXNET-1251] Basic configuration to do static-linking (#13621) * Basic configuration to do static-linking * update build script and place it in the install part * clean up the code further * revert maven into build-from-source * add curl to deps * [MXNET-1195] Cleanup Scala README file (#13582) * Updated the Scala-Readme with upto-date information * Updated the header * Removed redundant build status * Minor formatting changes * Addressed the PR feedback * Added section on Scala training APIs * Removed mention of deprecated Model API * scripts for building libmxnet binary and wheel (#13648) * add script for making all dependencies * tools for building pip package * build scripts for lib and wheel * [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API (#13294) * [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API * [MXNET-1083] Add the example to demonstrate the inference workflow using C++ API * Updated the code to address the review comments. * Added the README file for the folder. * Addressed the review comments * Addressed the review comments to use argmax and default mean values. 
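The transpose/reshape tutorial added above ([MXNET-1209], #13208) hinges on the distinction that reshape only reinterprets how the buffer is read while transpose actually permutes axes; a short example makes the difference visible:

    import mxnet as mx

    x = mx.nd.arange(6).reshape((2, 3))  # [[0 1 2], [3 4 5]]
    y = x.transpose()                    # [[0 3], [1 4], [2 5]] - axes swapped
    z = x.reshape((3, 2))                # [[0 1], [2 3], [4 5]] - memory order kept
    print(y, z)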
* Update MKLDNN_README.md (#13653) * Support Quantized Fully Connected by INT8 GEMM (#12922) * add quantized fully connect support * disable qfc cpu case since s8u8s32 is only supported by MKL BLAS library * retrigger to ci testing * move implementation to cc file and add STORAGE_TYPE_ASSIGN_CHECK * fix typo bug * retrigger the ci test * fix typo bug * retrigger ci * retrigger the ci test * retrigger the ci * retrigger the ci test * retrigger ci test * fix indent issue * retrigger the ci * retrigger the ci test * add verbose message * update log message * using range for loop * using for auto range * enable MKL BLAS ci test * fix typo issue * use TYPE_ASSIGN_CHECK * retrigger the ci * add build fix for Scala/Java build (#13655) * Fix Jetson compilation (#13532) * remove omp which can cause ssd accuracy variance (#13622) * Revert "[MXNET-43] Fix Jetson compilation" (#13665) * Revert "remove omp which can cause ssd accuracy variance (#13622)" This reverts commit 655f1c6f7a0706dd622f73db9af2e6df895ca213. * Revert "Fix Jetson compilation (#13532)" This reverts commit 48e25c4cae355753dd96ea7afe004bf78e0719e4. * Fix Jetson compilation (#13666) * turn on Sphinx warnings as errors (#13544) * turn on warnings as errors * move warnings as error logic to build_all_version * fix typo in comment * add warning as error option for docs pipeline * bump ci to test again; use this chance to add notes on this feature * fix bugs in image.py docs * Update CODEOWNERS, add Pedro Larroy. (#13579) * Revert "Revert "[MXNET-43] Fix Jetson compilation" (#13665)" (#13672) This reverts commit 3433776dac7be75928082bbc1d552fca248fb8e8. * Accelerate DGL csr neighbor sampling (#13588) * Speedup and fix bug in dgl_csr_sampling op * Update dgl_graph.cc * simplify functions. * avoid adding nodes in the last level in the queue. * remove a hashtable lookup in neigh_pos. * reduce a hashtable lookup in sub_ver_mp. * merge copying vids and layers. * reduce hashtable lookup when writing to output csr. * fix a bug. * limit the number of sampled vertices. * fix lint. * fix a compile error. * fix compile error. * fix compile. * remove one hashtable lookup per vertex and hashtable iteration. * remove queue. * use vector for neigh_pos. * fix lint * avoid init output arrays. * fix tests. * fix tests. * update docs. * retrigger * retrigger * [MXNET-1252][1 of 2] Decouple NNVM to ONNX from NNVM to TenosrRT conversion (#13659) * fix unpicklable transform_first on windows (#13686) * Move the debug output message into MXNET_MKLDNN_DEBUG (#13662) * NEWS.md backport from v1.4.x to master (#13693) * merge NEWS.md from 1.4.x to master * NEWS.md backport from v1.4.x to master * Fallback to dense version for grad(reshape), grad(expand_dims) (#13599) * fallback to dense version for grad(reshape), grad(expand_dims) * add _backward_reshape gpu version * reshape test case comments * fix gpu test * remove mkldnn support for _backward_reshape * ONNX export: Add Flatten before Gemm (#13356) * Add Flatten before Gemm * ONNX export test: Allow multiple inputs in forward pass * ONNX export: Test for fully connected * [MXNET-1164] Generate the document for cpp-package using Doxygen (#12977) * Adding cpp-package directory to the Doxyfile. Updating the index.md file in c++ api directory. * Updating the link to classes in C++ API to point to correct html file. * Updated the links to use relative paths. * Removed the extra slash character in the url * Excluded the 3rdparty folder as per the review comment. 
* Update git clone location to apache github (#13706) * Add timeout/retry logic to docker cache download (#13573) * Added timeout/retry (linear backoff) to docker cache download * Units changed, as time.sleep takes seconds as argument * Improved error handling * Using retry decorator * Added retry decorator to _login_dockerhub method * Fixed wrong import * Fix NDArray ToDLPack Bug (#13698) * Added javadocs and improved example instructions (#13711) * Rearrange tests written only for update_on_kvstore = True (#13514) * Update test_gluon_trainer.py * Update test_gluon_trainer.py * test * Update mshadow to support batch_dot with fp16. (#13716) * fp16 dot * update mshadow * update mshadow * update mshadow * Fix the quantization script to support Python2 (#13700) * fix the quantization script to support python2 * Fix comments, fix similiar issue in imagenet_inference.py * ONNX test code cleanup (#13553) * ONNX test code cleanup * Make tests use the common test case list * Remove import test_cases * Make Gluon backend rep common * Partially enable broadcast tests * Common function to populate tests * Make backend common * test models * Test nodes * ONNX export: Test for fully connected * Edit CI scripts mxnet export test cleanup * Further cleanup backend tests * README * Some corrections * test case format for test_models * update social media section (#13705) * script for installing gpu libraries and build tools (#13646) * Port of scala infer package to clojure (#13595) * Port of scala infer package to clojure * Add inference examples * Fix project.clj * Update code for integration tests * Address comments and add unit tests * Add specs and simplify interface * Minor nit * Update README * update code owner (#13737) * AdamW operator (Fixing Weight Decay Regularization in Adam) (#13728) * tests * remove optimizer and move op to contrib * rename parameter * ONNX import/export: Add missing tests, ONNX export: LogSoftMax (#13654) * Logsoftmax, missing tests * Support multiple outputs in Gluon backendrep * Remove repeated unsqueeze test * Allow multiple output support * ONNX test code cleanup - part 2 (#13738) * Common test caller * Remove incorrect comment * Make corrections to CI * fix ci script * Update basic_layers.py (#13732) * ONNX import: Hardmax (#13717) * ONNX import: Hardmax * Fix lint errors * add github link for issue with reshape * gluon docfix (#13631) * Fixes for trainer with update_on_kvstore=False (#13721) * add clarification for param_dict * more tests for dist kvstore * more unittests * fix a bug * more dist exception test * revert optimizer list * fix bug and comment * fix doc rendering and lint * add invalid sched test * fix website * trigger * update doc * Reorder module import orders for dist-kvstore (#13742) * Reorder module import orders for dist-kvstore * more code comments * CMake: Enable installation of cpp-package headers (#13339) * Allow CMake based installation of cpp-package * Add installation of missing nnvm headers * Add documentation as to where public headers will be installed * disable error checking when building old versions (#13725) * Integrate MKLDNN Conv1d and support 3d layout (#13530) * add 3d layout support for MKLDNN Conv and Activation * fix lint * code refactor * add testcase for group1 conv and skip quantization for conv1d * fix lint * avoid conv1d quantization * code refactor and add activation ut * del todo * Making MKL-DNN default on MXNet master (#13681) * mkldnn is default makefile and explicitly turn off for buidls * add endif * retrigger * 
retrigger * build mkldnn as static lib * update makefile to statically build mkldnn * build static mkldnn * fix static name * fix static name * update static for mac * rename mkldnn dep in ci * remove moving mkldnn dynamic lib * retrigger * remove commented code * retrigger * remove mkldnn dnaymic for unitest * retrigger * retrigger * force static for mkldnn lib * turn of mkldnn on arm builds * remove dynamic mkldnn bind * update jenkins to use only mkldnn * remove last flag * turn mkldnn by default on mac * move mkldnn files for GPU MKLDNN build * copy lib mxnet in gpu build * only link windows * add mkldnn.mk * try force linking * retrigger * retrigger * remove mkldnn dynanmic check * use ifndef * remove test mkldnn install * fix spacing * fix index * remove cp of mkldnn since statically linked * add libmkldnn.a to list of files to pack * include mkl_ml * add mkldnn to pack * add libiomp to ci pack * move static libs * fix typo * pack mkldnn * retrigger * add linux artifacts * move libmkldnn in gpu cmake build * move libmkldnn and libiomp5 on gpu workspace * move linked files * fix typo * fix typo * add artifacts for tensorrt * move mkldnn lib in scala build * move mkldnn lib on cpu scala * create dir for binding * rename libmkldnn in scala * move mklml dep in scala builds * move mkl to another linked folder * move libmkl to another dir * add libmklml * move mkldnn * move mkldnn on centos * specify new dynamic path * retrigger * remove mkldnn dynamic lib * remove moving mkldnn artifact * add ld path * retrigger * Revert "remove moving mkldnn artifact" This reverts commit 16cca196e9e1ad92db74f4e8a01b3b052076d268. * Revert "remove mkldnn dynamic lib" This reverts commit d51043622d4ef7fcb95aff6a3e84d91ab71b48c9. * update makefile * Revert RPATH change and trigger CI * correcting use-mkldnn flags for two tests * mkldnn default on linux for starters * reverting naming rules of pack_lib * adding mkldnn=0 flags to centos non mkldnn builds * adding mkldnn=0 flags to ubuntu gpu non mkldnn builds * removing mkldnn binary operation for ubuntu gpu cmake non mkldnn build * removing mkldnn binary operation for centos non-mkldnn unittest * adding explicit USE_MKLDNN=0 flags for clang builds * adding explicit USE_MKLDNN=0 flags for cpu ubuntu builds * removing mkldnn binaries from non mkldnn builds scala gpu * adding explicit flag mkldnn=0 for tensorrt gpu build * adding explicit flag mkldnn=0 for ubuntu cmake asan * adding centos cpu mkldnn tests to CI * adding CentOS GPU MKLDNN build and unittest * not keeping mkldnn default for mac os * setting mkldnn default for x86_64 only * running docs with mkldnn=0 flag * removing CentOS CPU Scala MKLDNN test * setting mkldnn default for x86_64 only * not making mkldn default on windows * removing Centos MKLDNN tests from CI * retrigger * retrigger * retrigger * use relative links; update links (#13741) * [MXNET-1231] Allow not using Some in the Scala operators (#13619) * add initial commit * update image classifier as well * create Util class make Some conversion * add test changes * adress Comments * fix the spacing problem * fix generator base * change name to Option * fix bug in profiler tutorial when using cpu (#13695) try except approach only goes to ctx=mx.gpu() because test_utils.list_gpus() at least returns empty array and do not producing error * local docs build feature (#13682) * make ROIAlign support position-sensitive pooling (#13088) * make ROIAlign support position-sensitive pooling * add unittest for RoIAlign op * fix ccplint error * fix python3 
compability for unittest * change OMP for better performance * delete blank line to trigger CI * add shape check when position_sensitive is true * fix the typo * typo: shuold -> should * remove private() clause in omp statement * add examples and fix the dependency problem (#13620) * add examples and fix the dependency problem * add Nightly run and optimized script * add explanation for the line * Update Adam optimizer documentation (#13754) * Less cudaGet/SetDevice calls in Gluon execution (#13764) * Remove unnecessary cudaGetDevice/cudaSetDevice calls * Fixes for the DeviceGuard * Retrigger CI * Fix for possible invalid device ordinal when using DeviceStore while driver is unloading * Fix for RTC when the driver API call is the first call * Added DeviceStore to pooled engine * Scope requests so it's not needed for dev_menu (#13771) * Fix USE_MKLDNN check in Makefile (#13775) * fix makefile * change make/config.mk * add comments * retrigger ci * fix c complier to clang (#13778) * Fixed mailing list addresses (#13766) * [MXNET-1255] update hybridize documentation (#13597) * update hybridize documentation * address review comments * improve doc * address comments * address comments * [MXNET-244] Work around likely compiler bug on nested inlines and temporary acces… (#13535) * Work around likely compiler bug on nested inlines and temporary access to stream * Don't compile khatri_rao tests if we don't have LAPACK * Address CR comment * Use curl to download sample data instead of wget. (#13761) * fix bipartite match memory corruption (#13727) * remove attributs clear on TRT nodes for GetOptimizedSymbol (#13703) * Add CPU test coverage and refine cmake builds (#13338) * add license (#13793) * [MXNET-862] Basic maven jenkins pipeline (#13450) * Jenkins Publish Nightly Maven Progress * Seperate Build, Test, and Deploy Stages with parallel * Re-organize Scala maven build (#13626) * Re-organize scala maven build 1. Automatically detect which platform to build for scala. 2. Remove platform dependend submodules 3. Fix cyclic module dependencies 4. Fix scalatype style check 5. Now mvn can be executed in submodule 6. Maven build can be executed from any directory not only in root project 7. Checkin javah header file, and use verify task to detect native API changes 8. Improve incremental build performance 9. Remove unittest and integration-test profile, use proper task instead 10. Delete generated scala file during maven clean. * Redo maven deploy related tasks. 1. Removed maven release plugin. 2. Make maven build friendly to CI, allow cli override version. 3. Moved gpg signing to deploy stage. 4. Created a separeated deploy module. 5. Updated Makefile to new maven build change. 6. Remove unused nexus-staging-plugin 7. Added nightly and staging profile for CI. * Support mkldnn for Scala. 
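The AdamW operator merged earlier in this list (#13728, "Fixing Weight Decay Regularization in Adam") decouples weight decay from the adaptive gradient step; a small NumPy sketch of that update rule, given only for reference and not as the operator's actual implementation:

    import numpy as np

    def adamw_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                   eps=1e-8, wd=0.01):
        # standard Adam moment updates
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        # decoupled weight decay: applied directly to w, not folded into the gradient
        w = w - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * w)
        return w, m, v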
* Add extra header file to export for error checking (#13795) * add extra header file to include * fix sanity check * fix sanity * move c_api_common.h to include folder * fix build error * keep c_api_common.h internal * strip out error handling API into a separate header * consolidate comment into one paragraph per review * remove unnecessary include * fix redirection issues; set default version to master (#13796) * [MXNET-898] ONNX import/export: Sample_multinomial, ONNX export: GlobalLpPool, LpPool (#13500) * ONNX import/export: Sample_multinomial * ONNX export: GlobalLpPool, LpPool * Handle default p_value * Add tests for multinomial, lppool, globallppool * add a comment about shape test * whitelist symbols for using MXNet error handling externally (#13812) * fix for params with no dims in onnx (#13413) * fix for params with no dims * fix * fix * retrigger build * test added * retrigger CI * retrigger ci * Remove semicolon in libmxnet.sym file (#13822) * Remove semicolon in libmxnet.sym file * empty commit to trigger CI * Clojure example for fixed label-width captcha recognition (#13769) * Clojure example for fixed label-width captcha recognition * Update evaluation * Better training and inference (w/ cleanup) * Captcha generation for testing * Make simple test work * Add test and update README * Add missing consts file * Follow comments * Update LICENSE File with subcomponents (#13808) * Update LICENSE File with subcomponents * Fix JavaScript licenses * Dockerfiles for Publish Testing (#13707) * Add new Maven build for Scala package (#13819) * clean up build * fix minor issue and add mkldnn * fix mx_dist problem * fix clojure build * fix skip test * ONNX ops: norm exported and lpnormalization imported (#13806) * ReduceL1, l2 export, lpnormalization import added * fix * fix * fix * fix * remove useless code (#13777) * Fixing a symlink issue with R install (#13708) * fix minor indentation (#13827) * [MXNET-880] ONNX export: Random uniform, Random normal, MaxRoiPool (#13676) * ONNX export: Random uniform, Random normal * ONNX export: MaxRoiPool * tests for maxroipool, randomnormal, randomuniform * onnx export ops (#13821) * onnx export ops * retrigger ci * retrigger ci * fix * [MXNET-1260] Float64 DType computation support in Scala/Java (#13678) * Added Float64 as a supported datatype in NDArray * Added unit tests for Float64 in NDArray * Fix for failing Clojure unit tests * Added Float and Double as MX_PRIMITIVES for computation in Scala * Trying out second approach --> Private Impl methods with generic signature, and public methods calling the Impls * Fixed errors in *= method * Added Float64 in IO.scala and DataIter.scala * Added another testcase for IO.DataDesc creation * Fixed failing CI * Added Float64 in Predictor class * Added Float64 in Classifier class * Added Double as a possible return type to : classifyWithNDArray * Added unit tests for Classifier and Predictor.scala classes for Float64/Double * Approach 3 --> Using a trait to mirror Float and Double in Scala * Added comments on MX_PRIMITIVES.scala * Added Float64/Double support for inference in ImageClassifier APIs * Added unary- and compareTo in MX_NUMBER_LIKE * Renamed MX_NUMBER_LIKE to MX_PRIMITIVE_TYPE * Fixed linting issue * Now specifying dType from the available data in copyTo and MXDataIter.scala for creating a new DataIterator * Add primitives support handling to the generator for proper conversion * Reduced code duplication in classify method in Classifier.scala * Fix infer package for new signatures and address 
some bugs * Removed code duplication in getPixelsArray * remove debugging * Changed classifyWithNDArray method in Classifier.scala * Removed code duplication in predictImpl * Satisfying lint god _/\_ * Fixed failing PredictorSuite test * Renamed MX_FLOAT to Camel case * Revert "Renamed MX_FLOAT to Camel case" This reverts commit 9d7c3ce6f9c4d6ed2c46041a02e23c0f1df8dfe5. * Added an implicit conversion from int--> float to support int operations in NDArrays. (These ops were already supported in the previous versions) * Added Float64 as a training option to ImClassification Suite. Also added integration tests for it * Satisfy Lint God _/\_ * Added Float64 support in Java NDArray * Added Float64 support in Java's Predictor API * Added yours truly to the Contributors list * Added method comments on Predictor.predict with Array[Double] as a possible input * Added method comments explaining what MX_PRIMITIVE_TYPE is * Fixed errors cause by rebasing with master * Added licences to the files * [MXNET-1263] Unit Tests for Java Predictor and Object Detector APIs (#13794) * Added unit tests for Predictor API in Java * Added unit tests for ObjectDetectorOutput * Added unit tests for ObjectDetector API in Java * Addressed PR comments * Added Maven SureFire plugin to run the Java UTs * Pom file clean up -- moved surefire plugin to parent pom.xml * Renamed skipTests to SkipJavaTests * Fix scala doc build break for v1.3.1 (#13820) * Fix doc build break for v1.3.1 * ignore errors on v1.3.x during scala docs gen * Remove MXNET_STORAGE_FALLBACK_LOG_VERBOSE from test_autograd.py (#13830) * Add Local test stage and option to jump directly to menu item from commandline (#13809) * Removes unneeded nvidia driver ppa installation (#13814) * Improve license_header tool by only traversing files under revision c… (#13803) * Improve license_header tool by only traversing files under revision control * use HEAD instead of master for CI * Disabled flaky test (#13758) * change to compile time (#13835) * fix Makefile for rpkg (#13590) * fix Makefile for rpkg * update R and roxygen2 requirements * add roxygen requirement * add roxygen requirement * [CI] Prevent timeouts when rebuilding containers with docker. (#13818) * Prevent timeouts when rebuilding containers with docker. Increase timeout from 120 to 180 for pipelines * Increase docker cache timeout * Increase timeout also for docs * limit parallel builds to 10 * Code modification for testcases of various network models in directory example (#12498) * example testcase modified * rcnn file add * license add * license init * CI test trigger * rcnn modify give up * trigger * modify for better user experience * change the default parameter to xpu=None * Update bdk_demo.py * Update fcn_xs.py * Update test.py * Update train.py * Update bdk_demo.py * Update bdk_demo.py * modify review comments * refine * modify Readmes according to the changed code. 
* finetune READMEs * re-trigger ci * re-trigger ci twice * Add copyrights for third party licenses to license file (#13851) * Fix Tree Reduction on new instance type p3dn.24xlarge (#13852) * add fallback for gpu topology detection using CUDA 9.2 * add fallback for gpu topology detection using CUDA 9.2 * add log * update 3rdparty to master * add fallback for gpu topology detection using CUDA 9.2 * add log * update 3rdparty to master * bring 3rdparty packages to upstream/master * rebase to master * Update gpu_topology.h * [Clojure] package infer tweaks (#13864) * change object detection prediction to be a map * change predictions to a map for image-classifiers * change return types of the classifiers to be a map - add tests for base classifier and with-ndarray as well * tweak return types and inputs for predict - add test for plain predict * updated infer-classify examples * adjust the infer/object detections tests * tweak predictor test * Feedback from @kedarbellare review * put scaling back in * put back predict so it can handle multiple inputs * restore original functions signatures (remove first) * Modifying clojure CNN text classification example (#13865) * Modifying clojure CNN text classification example * Small fixes * Another minor fix * adding tolerance to flaky test (#13850) * adding tolerance * retrigger ci * retrigger ci * Julia v0.7/1.0 support and drop v0.6 support (#12845) * Fix cpp examples build on Mac. (#13826) This is a regression from adding the @rpath name to libmxnet.so on Mac; example executables are not able to find libmxnet.so anymore. Add an @rpath search path to fix this issue. * Fix launch bounds in spatial transformer (#13188) * Fix launch bounds in spatial transformer * Adding explanation in comment. * Update example scripts classpath. (#13849) * [MXNET-1177] Adding Scala Demo to be run as a part of Nightly CI (#13823) * Adding Scala Demo to be run as a part of Nightly CI * Addressed PR feedback: making a profile to fetch nightly jars only on CI * Changed name from scalacidemo to scala_ci_demo * Synchronized the scala-demo and java-demo for nightly CI runs * Pruned the maven command to simply maven install * changed running from ./.sh to bash .sh to be consistent * Add CODEOWNERS for Julia package (#13872) * fix ssd quantization script error (#13843) * fix ssd quantization script error * update readme for ssd * move quantized SSD instructions from quantization/README.md to ssd/README.md * update ssd readme and accuracy * update readme for SSD-vGG16 (see the quantization usage sketch after this commit list) * Rename to avoid merge conflict with upstream. * Update submodule versions. - update mkldnn and mshadow to version used by upstream master - update ngraph-mxnet-bridge to current master Renames nGraph README to follow MXNet conventions. * Fix merge error for nGraph support in CMakeLists.txt * Fixes CMake file error.
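
The SSD quantization fixes above (#13843) touch the example script that converts an FP32 SSD checkpoint into an INT8 one. As a minimal, hypothetical sketch of that kind of workflow (not the actual code from example/ssd/quantization.py), the snippet below drives the public mxnet.contrib.quantization.quantize_model API in Python; the checkpoint prefix, excluded layer name, and calibration mode are illustrative placeholders, not values taken from this patch.

    # Hypothetical INT8 quantization sketch; names below are placeholders, not from this patch.
    import mxnet as mx
    from mxnet.contrib.quantization import quantize_model

    # Load a pre-trained FP32 checkpoint (prefix and epoch are placeholders).
    sym, arg_params, aux_params = mx.model.load_checkpoint('ssd_vgg16_reduced_300', 0)

    # calib_mode='none' skips calibration so no calibration dataset is needed;
    # 'naive' or 'entropy' would additionally require calib_data and num_calib_examples.
    qsym, qarg_params, qaux_params = quantize_model(
        sym, arg_params, aux_params,
        ctx=mx.cpu(),
        excluded_sym_names=['multibox_loc_pred'],  # hypothetical layer kept in FP32
        calib_mode='none',
        quantized_dtype='int8')

    # Save the quantized symbol and parameters as a new checkpoint.
    mx.model.save_checkpoint('ssd_vgg16_reduced_300-quantized', 0, qsym, qarg_params, qaux_params)

The real script additionally runs calibration on a validation set and reports accuracy, per the instructions that were moved into ssd/README.md.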
--- .gitignore | 6 +- .travis.yml | 5 +- 3rdparty/mkldnn | 2 +- 3rdparty/mshadow | 2 +- CMakeLists.txt | 43 +- CODEOWNERS | 33 +- CONTRIBUTORS.md | 20 + LICENSE | 552 ++-- MKLDNN_README.md | 14 +- MXNET_README.md | 1 + Makefile | 124 +- NEWS.md | 714 +++++- README.md => NGRAPH_README.md | 0 R-package/.Rbuildignore | 1 + R-package/DESCRIPTION | 14 +- R-package/LICENSE | 2 +- R-package/R/zzz.R | 4 +- R-package/dummy.NAMESPACE | 16 + ci/Jenkinsfile_docker_cache | 4 +- ci/Jenkinsfile_utils.groovy | 133 +- ci/README.md | 91 +- ci/build.py | 169 +- ci/build_windows.py | 3 - ci/docker/Dockerfile.build.android_armv7 | 7 +- ci/docker/Dockerfile.build.android_armv8 | 10 +- ci/docker/Dockerfile.build.armv6 | 7 +- ci/docker/Dockerfile.build.armv7 | 7 +- ci/docker/Dockerfile.build.armv8 | 11 +- ci/docker/Dockerfile.build.centos7_cpu | 2 + ci/docker/Dockerfile.build.jetson | 12 +- ci/docker/Dockerfile.build.test.arm_qemu | 6 +- ci/docker/Dockerfile.build.ubuntu_cpu | 3 + ci/docker/Dockerfile.publish.test.centos7_cpu | 38 + ci/docker/Dockerfile.publish.test.centos7_gpu | 38 + .../Dockerfile.publish.test.ubuntu1404_cpu | 39 + .../Dockerfile.publish.test.ubuntu1404_gpu | 40 + .../Dockerfile.publish.test.ubuntu1604_cpu | 39 + .../Dockerfile.publish.test.ubuntu1604_gpu | 39 + .../Dockerfile.publish.test.ubuntu1804_cpu | 44 +- .../Dockerfile.publish.test.ubuntu1804_gpu | 41 + ci/docker/Dockerfile.publish.ubuntu1404_cpu | 36 + ci/docker/Dockerfile.publish.ubuntu1404_gpu | 36 + ci/docker/install/centos7_adduser.sh | 5 + ci/docker/install/centos7_base.sh | 33 + ci/docker/install/centos7_scala.sh | 39 + ci/docker/install/ubuntu_adduser.sh | 5 + ci/docker/install/ubuntu_arm_qemu.sh | 5 +- ci/docker/install/ubuntu_base.sh | 36 + ci/docker/install/ubuntu_caffe.sh | 1 + ci/docker/install/ubuntu_clang.sh | 2 + ci/docker/install/ubuntu_core.sh | 11 +- ci/docker/install/ubuntu_docs.sh | 1 + ci/docker/install/ubuntu_emscripten.sh | 1 + ci/docker/install/ubuntu_gcc8.sh | 2 +- ci/docker/install/ubuntu_julia.sh | 28 +- ci/docker/install/ubuntu_llvm.sh | 3 +- ci/docker/install/ubuntu_mkl.sh | 31 + ci/docker/install/ubuntu_mklml.sh | 2 +- ci/docker/install/ubuntu_nightly_tests.sh | 2 +- ci/docker/install/ubuntu_npm_blc.sh | 2 +- ci/docker/install/ubuntu_nvidia.sh | 4 - ci/docker/install/ubuntu_onnx.sh | 1 + ci/docker/install/ubuntu_perl.sh | 1 + ci/docker/install/ubuntu_publish.sh | 65 + ci/docker/install/ubuntu_python.sh | 1 + ci/docker/install/ubuntu_r.sh | 2 +- ci/docker/install/ubuntu_rat.sh | 2 +- ci/docker/install/ubuntu_scala.sh | 38 +- ci/docker/install/ubuntu_tutorials.sh | 5 +- ci/docker/qemu/runtime_functions.py | 35 +- ci/docker/qemu/vmcontrol.py | 105 +- ci/docker/runtime_functions.sh | 405 ++- ci/docker_cache.py | 59 +- .../jenkins/Jenkins_steps.groovy | 1136 +++++---- ci/jenkins/Jenkinsfile_centos_cpu | 53 + ci/jenkins/Jenkinsfile_centos_gpu | 51 + ci/jenkins/Jenkinsfile_clang | 51 + ci/jenkins/Jenkinsfile_edge | 56 + ci/jenkins/Jenkinsfile_miscellaneous | 54 + ci/jenkins/Jenkinsfile_sanity | 48 + ci/jenkins/Jenkinsfile_unix_cpu | 74 + ci/jenkins/Jenkinsfile_unix_gpu | 73 + ci/jenkins/Jenkinsfile_website | 52 + ci/jenkins/Jenkinsfile_windows_cpu | 52 + ci/jenkins/Jenkinsfile_windows_gpu | 54 + ci/publish/Jenkinsfile | 107 + ci/publish/scala/build.sh | 29 + ci/publish/scala/buildkey.py | 165 ++ ci/publish/scala/deploy.sh | 41 + ci/publish/scala/fullDeploy.sh | 23 + ci/publish/scala/test.sh | 28 + ci/requirements.txt | 1 + ci/test_docker_cache.py | 4 +- ci/util.py | 2 +- ci/windows/test_py2_cpu.ps1 | 7 +- 
ci/windows/test_py2_gpu.ps1 | 11 +- ci/windows/test_py3_cpu.ps1 | 7 +- ci/windows/test_py3_gpu.ps1 | 11 +- cmake/AutoDetectF16C.cmake | 53 + cmake/DownloadMKLML.cmake | 74 + cmake/FirstClassLangCuda.cmake | 16 + cmake/MklDnn.cmake | 44 - cmake/cmake_options.yml | 53 + contrib/clojure-package/LICENSE | 2 +- contrib/clojure-package/README.md | 247 +- .../examples/captcha/.gitignore | 3 + .../examples/captcha/README.md | 61 + .../examples/captcha/captcha_example.png | Bin 0 -> 9762 bytes .../examples/captcha/gen_captcha.py | 40 + .../examples/captcha/get_data.sh | 32 + .../examples/captcha/project.clj | 28 + .../examples/captcha/src/captcha/consts.clj | 27 + .../captcha/src/captcha/infer_ocr.clj | 56 + .../captcha/src/captcha/train_ocr.clj | 156 ++ .../captcha/test/captcha/train_ocr_test.clj | 119 + .../cnn-text-classification/README.md | 38 +- .../cnn-text-classification/project.clj | 2 +- .../cnn_text_classification/classifier.clj | 53 +- .../cnn_text_classification/data_helper.clj | 195 +- .../classifier_test.clj | 48 + .../clojure-package/examples/gan/project.clj | 5 +- .../examples/gan/src/gan/gan_mnist.clj | 6 +- .../examples/gan/src/gan/viz.clj | 4 +- .../examples/gan/test/gan/gan_test.clj | 25 + .../examples/imclassification/project.clj | 2 +- .../src/imclassification/train_mnist.clj | 20 +- .../imclassification/train_mnist_test.clj | 39 + .../test/test-symbol.json.ref | 105 + .../examples/infer/imageclassifier/.gitignore | 12 + .../examples/infer/imageclassifier/README.md | 24 + .../infer/imageclassifier/project.clj | 25 + .../scripts/get_resnet_18_data.sh | 45 + .../scripts/get_resnet_data.sh | 44 + .../src/infer/imageclassifier_example.clj | 111 + .../infer/imageclassifier_example_test.clj | 58 + .../examples/infer/objectdetector/.gitignore | 12 + .../examples/infer/objectdetector/README.md | 24 + .../examples/infer/objectdetector/project.clj | 25 + .../objectdetector/scripts/get_ssd_data.sh | 49 + .../src/infer/objectdetector_example.clj | 120 + .../infer/objectdetector_example_test.clj | 65 + .../examples/infer/predictor/.gitignore | 12 + .../examples/infer/predictor/README.md | 24 + .../examples/infer/predictor/project.clj | 25 + .../predictor/scripts/get_resnet_18_data.sh | 44 + .../predictor/scripts/get_resnet_data.sh | 44 + .../predictor/src/infer/predictor_example.clj | 101 + .../test/infer/predictor_example_test.clj | 51 + .../examples/module/project.clj | 2 +- .../examples/module/test/mnist_mlp_test.clj | 29 + .../examples/multi-label/project.clj | 2 +- .../multi-label/test/multi_label_test.clj | 26 + .../examples/neural-style/project.clj | 2 +- .../neural-style/src/neural_style/core.clj | 24 +- .../test/neural_style/vgg_19_test.clj | 53 + .../examples/pre-trained-models/project.clj | 2 +- .../examples/profiler/project.clj | 2 +- .../examples/profiler/src/profiler/core.clj | 6 +- .../examples/profiler/test/core_test.clj | 31 + .../test/profile-matmul-20iter.json.ref | 271 ++ .../clojure-package/examples/rnn/project.clj | 2 +- .../examples/rnn/src/rnn/test_char_rnn.clj | 4 + .../examples/rnn/src/rnn/train_char_rnn.clj | 4 + .../examples/rnn/test/rnn/core_test.clj | 26 + .../examples/tutorial/.gitignore | 1 + .../examples/tutorial/project.clj | 7 +- .../tutorial/src/tutorial/kvstore.clj | 60 +- .../examples/tutorial/src/tutorial/module.clj | 169 +- .../tutorial/src/tutorial/ndarray.clj | 75 +- .../examples/tutorial/src/tutorial/symbol.clj | 138 +- .../tutorial/test/tutorial/core_test.clj | 27 + .../examples/visualization/project.clj | 2 +- 
.../test/visualization/core_test.clj | 28 + contrib/clojure-package/integration-tests.sh | 28 + contrib/clojure-package/project.clj | 7 +- .../scripts/infer/get_resnet_18_data.sh | 38 + .../scripts/infer/get_ssd_data.sh | 39 + .../scripts/update_versions.sh | 18 +- contrib/clojure-package/src/dev/generator.clj | 13 +- .../src/org/apache/clojure_mxnet/image.clj | 139 ++ .../src/org/apache/clojure_mxnet/infer.clj | 372 +++ .../src/org/apache/clojure_mxnet/ndarray.clj | 3 +- .../org/apache/clojure_mxnet/optimizer.clj | 52 +- .../org/apache/clojure_mxnet/primitives.clj | 46 + .../src/org/apache/clojure_mxnet/random.clj | 30 +- .../src/org/apache/clojure_mxnet/symbol.clj | 2 +- .../src/org/apache/clojure_mxnet/util.clj | 16 +- .../test/good-test-ndarray.clj | 10 +- .../clojure-package/test/good-test-symbol.clj | 10 +- .../org/apache/clojure_mxnet/image_test.clj | 79 + .../infer/imageclassifier_test.clj | 128 + .../infer/objectdetector_test.clj | 82 + .../clojure_mxnet/infer/predictor_test.clj | 77 + .../org/apache/clojure_mxnet/ndarray_test.clj | 2 +- .../apache/clojure_mxnet/operator_test.clj | 2 +- .../apache/clojure_mxnet/optimizer_test.clj | 10 + .../apache/clojure_mxnet/primitives_test.clj | 45 + .../org/apache/clojure_mxnet/random_test.clj | 17 +- .../org/apache/clojure_mxnet/util_test.clj | 20 + .../test/test-images/Pug-Cookie.jpg | Bin 0 -> 104323 bytes .../test/test-images/kitten.jpg | Bin 0 -> 110969 bytes cpp-package/CMakeLists.txt | 2 + cpp-package/example/Makefile | 17 +- cpp-package/example/README.md | 39 +- cpp-package/example/alexnet.cpp | 12 +- cpp-package/example/charRNN.cpp | 2 +- cpp-package/example/example.mk | 6 +- cpp-package/example/feature_extract/Makefile | 4 +- cpp-package/example/googlenet.cpp | 2 +- cpp-package/example/inception_bn.cpp | 26 +- cpp-package/example/inference/Makefile | 40 + cpp-package/example/inference/README.md | 41 + .../example/inference/inception_inference.cpp | 446 ++++ .../unit_test_inception_inference.sh | 43 + cpp-package/example/lenet_with_mxdataiter.cpp | 2 +- cpp-package/example/mlp.cpp | 10 +- cpp-package/example/mlp_csv.cpp | 2 +- cpp-package/example/resnet.cpp | 19 +- cpp-package/example/test_optimizer.cpp | 4 + cpp-package/example/test_score.cpp | 5 + cpp-package/example/utils.h | 2 +- cpp-package/include/mxnet-cpp/operator.h | 4 +- cpp-package/include/mxnet-cpp/operator.hpp | 4 +- cpp-package/include/mxnet-cpp/symbol.hpp | 4 +- cpp-package/scripts/OpWrapperGenerator.py | 6 +- cpp-package/tests/ci_test.sh | 5 + dev_menu.py | 239 ++ docs/Doxyfile | 4 +- docs/Jenkinsfile | 6 +- docs/Jenkinsfile-dev | 4 +- docs/README.md | 6 +- docs/_static/js/auto_module_index.js | 36 +- docs/_static/js/clipboard.js | 778 ++++++ docs/_static/js/copycode.js | 21 +- docs/_static/js/docversion.js | 22 +- docs/_static/js/navbar.js | 24 +- docs/_static/js/options.js | 22 +- docs/_static/js/page.js | 19 + docs/_static/js/search.js | 24 +- docs/_static/js/sidebar.js | 23 +- docs/_static/mxnet-theme/index.html | 6 +- docs/_static/mxnet-theme/navbar.html | 2 + docs/_static/selectlang.js | 21 +- docs/api/c++/index.md | 55 +- docs/api/clojure/ndarray.md | 3 +- docs/api/clojure/symbol.md | 18 - docs/api/index.md | 2 + docs/api/java/index.md | 15 + docs/api/python/gluon/loss.md | 1 + docs/api/python/gluon/model_zoo.md | 2 + docs/api/python/image/image.md | 8 +- docs/api/python/io/io.md | 39 +- docs/api/python/module/module.md | 2 +- docs/api/python/ndarray/contrib.md | 11 + docs/api/python/ndarray/ndarray.md | 28 +- docs/api/python/ndarray/random.md | 7 +- 
docs/api/python/ndarray/sparse.md | 3 +- docs/api/python/symbol/contrib.md | 8 + docs/api/python/symbol/random.md | 6 +- docs/api/python/symbol/symbol.md | 26 +- docs/api/scala/index.md | 18 +- docs/api/scala/kvstore.md | 98 +- docs/api/scala/ndarray.md | 186 +- docs/api/scala/symbol.md | 66 +- docs/architecture/note_memory.md | 15 +- docs/architecture/overview.md | 5 +- docs/build_version_doc/artifacts/.htaccess | 41 +- docs/build_version_doc/build_all_version.sh | 9 +- docs/community/contribute.md | 25 +- docs/community/ecosystem.md | 2 +- docs/community/mxnet_channels.md | 6 +- docs/conf.py | 3 +- docs/faq/add_op_in_backend.md | 24 +- docs/faq/develop_and_hack.md | 4 +- docs/faq/env_var.md | 50 +- docs/faq/index.md | 44 +- docs/gluon/index.md | 11 +- docs/install/build_from_source.md | 249 +- docs/install/c_plus_plus.md | 3 +- docs/install/centos_setup.md | 62 +- docs/install/download.md | 3 +- docs/install/index.md | 252 +- docs/install/java_setup.md | 123 + docs/install/osx_setup.md | 11 +- docs/install/scala_setup.md | 4 +- docs/install/ubuntu_setup.md | 58 +- docs/install/windows_setup.md | 2 +- docs/model_zoo/index.md | 2 +- docs/mxdoc.py | 53 +- docs/settings.ini | 39 + .../vision}/cnn_visualization/gradcam.py | 4 +- docs/tutorials/basic/module.md | 148 +- docs/tutorials/basic/reshape_transpose.md | 197 ++ docs/tutorials/basic/symbol.md | 8 +- docs/tutorials/c++/basics.md | 12 +- .../control_flow/ControlFlowTutorial.md | 12 +- docs/tutorials/gluon/hybrid.md | 110 +- docs/tutorials/gluon/info_gan.md | 437 ++++ docs/tutorials/gluon/learning_rate_finder.md | 28 +- .../gluon/learning_rate_schedules.md | 4 +- docs/tutorials/gluon/mnist.md | 554 +++-- docs/tutorials/gluon/save_load_params.md | 2 +- docs/tutorials/index.md | 11 + docs/tutorials/java/index.md | 8 + docs/tutorials/java/mxnet_java_on_intellij.md | 180 ++ docs/tutorials/java/ssd_inference.md | 186 ++ docs/tutorials/onnx/export_mxnet_to_onnx.md | 2 +- docs/tutorials/onnx/fine_tuning_gluon.md | 45 +- docs/tutorials/python/linear-regression.md | 2 +- docs/tutorials/python/predict_image.md | 1 + docs/tutorials/python/profiler.md | 8 +- docs/tutorials/r/fiveMinutesNeuralNetwork.md | 4 +- docs/tutorials/scala/char_lstm.md | 2 +- .../scala/mxnet_scala_on_intellij.md | 64 +- docs/tutorials/sparse/row_sparse.md | 2 +- docs/tutorials/unsupervised_learning/gan.md | 9 +- docs/tutorials/vision/cnn_visualization.md | 21 +- example/MXNetTutorialTemplate.ipynb | 6 +- example/adversary/README.md | 4 +- example/adversary/adversary_generation.ipynb | 343 +-- .../variational_autoencoder}/README.md | 0 .../variational_autoencoder}/VAE.py | 0 .../variational_autoencoder/VAE_example.ipynb | 1204 +++++++++ example/bayesian-methods/README.md | 24 + example/bayesian-methods/bdk_demo.py | 73 +- example/bi-lstm-sort/README.md | 28 +- example/bi-lstm-sort/bi-lstm-sort.ipynb | 607 +++++ example/bi-lstm-sort/infer_sort.py | 80 - example/bi-lstm-sort/lstm.py | 175 -- example/bi-lstm-sort/lstm_sort.py | 142 -- example/bi-lstm-sort/rnn_model.py | 73 - example/bi-lstm-sort/sort_io.py | 255 -- example/capsnet/README.md | 132 +- example/capsnet/capsulenet.py | 695 +++--- example/cnn_visualization/README.md | 17 - example/cnn_visualization/gradcam_demo.py | 110 - example/cnn_visualization/vgg.py | 84 - example/deep-embedded-clustering/README.md | 11 +- example/deep-embedded-clustering/data.py | 15 +- example/deep-embedded-clustering/dec.py | 25 +- example/fcn-xs/README.md | 27 +- example/fcn-xs/fcn_xs.py | 4 +- .../gluon/{ => actor_critic}/actor_critic.py | 0 
example/gluon/audio/transforms.py | 205 ++ example/gluon/audio/urban_sounds/README.md | 100 + example/gluon/audio/urban_sounds/datasets.py | 179 ++ example/gluon/audio/urban_sounds/model.py | 33 + example/gluon/audio/urban_sounds/predict.py | 92 + .../gluon/audio/urban_sounds/requirements.txt | 2 + example/gluon/audio/urban_sounds/train.py | 157 ++ example/gluon/{DCGAN => dc_gan}/README.md | 0 example/gluon/{DCGAN => dc_gan}/__init__.py | 0 example/gluon/{DCGAN => dc_gan}/dcgan.py | 0 .../{DCGAN => dc_gan}/inception_score.py | 0 .../kaggle_k_fold_cross_validation.py | 0 example/gluon/learning_rate_manipulation.py | 63 - example/gluon/{ => lstm_crf}/lstm_crf.py | 10 +- example/gluon/{ => mnist}/mnist.py | 0 example/gluon/sn_gan/data.py | 2 +- example/gluon/sn_gan/model.py | 2 +- example/gluon/sn_gan/train.py | 2 +- example/gluon/sn_gan/utils.py | 2 +- .../super_resolution.py | 0 example/gluon/tree_lstm/README.md | 29 + example/gluon/tree_lstm/dataset.py | 4 +- .../gluon/tree_lstm/fetch_and_preprocess.sh | 4 +- example/gluon/tree_lstm/main.py | 1 - example/gluon/tree_lstm/scripts/download.py | 1 - example/image-classification/train_mnist.py | 1 + example/memcost/Makefile | 38 - example/memcost/README.md | 30 - example/memcost/inception_memcost.py | 107 - example/{ => module}/utils/__init__.py | 0 example/{ => module}/utils/get_data.py | 6 +- example/multi-task/README.md | 13 +- example/multi-task/example_multi_task.py | 159 -- example/multi-task/multi-task-learning.ipynb | 454 ++++ example/multivariate_time_series/README.md | 4 +- example/named_entity_recognition/README.md | 11 +- .../named_entity_recognition/src/metrics.py | 2 +- example/named_entity_recognition/src/ner.py | 2 +- example/nce-loss/README.md | 2 +- example/notebooks/README.md | 4 - example/numpy-ops/numpy_softmax.py | 84 - example/onnx/super_resolution.py | 86 - example/python-howto/README.md | 37 - example/python-howto/data_iter.py | 76 - example/python-howto/monitor_weights.py | 46 - example/python-howto/multiple_outputs.py | 38 - example/quantization/README.md | 43 +- .../quantization/imagenet_gen_qsym_mkldnn.py | 13 +- example/quantization/imagenet_inference.py | 7 +- example/rcnn/README.md | 3 +- example/rcnn/demo.py | 2 +- example/rcnn/test.py | 4 +- example/rcnn/train.py | 4 +- example/recommenders/README.md | 38 +- example/recommenders/crossentropy.py | 141 -- example/recommenders/demo1-MF.ipynb | 1694 ++++++++++++- example/recommenders/demo1-MF2-fancy.ipynb | 295 --- example/recommenders/demo2-binary.ipynb | 137 - example/recommenders/demo2-dssm.ipynb | 2210 +++++++++++++++++ example/recommenders/demo3-dssm.ipynb | 113 - example/recommenders/matrix_fact.py | 85 +- example/recommenders/movielens_data.py | 12 +- example/recommenders/negativesample.py | 180 -- example/recommenders/randomproj.py | 150 -- example/recommenders/recotools.py | 45 - example/recommenders/symbol_alexnet.py | 62 - example/reinforcement-learning/ddpg/README.md | 2 + example/reinforcement-learning/dqn/setup.sh | 6 +- .../restricted-boltzmann-machine/README.md | 52 + example/rnn-time-major/bucket_io.py | 264 -- .../rnn-time-major/get_sherlockholmes_data.sh | 43 - example/rnn-time-major/readme.md | 24 - example/rnn-time-major/rnn_cell_demo.py | 189 -- example/rnn/README.md | 5 + example/rnn/bucketing/README.md | 13 +- example/rnn/large_word_lm/data.py | 2 +- example/rnn/large_word_lm/readme.md | 66 - example/rnn/word_lm/README.md | 2 +- example/sparse/linear_classification/data.py | 5 +- example/speech_recognition/README.md | 4 +- 
example/speech_recognition/label_util.py | 2 +- example/speech_recognition/log_util.py | 74 +- example/speech_recognition/main.py | 6 +- example/speech_recognition/singleton.py | 32 +- .../speech_recognition/stt_datagenerator.py | 6 +- example/speech_recognition/stt_io_iter.py | 6 +- example/speech_recognition/stt_metric.py | 2 +- example/speech_recognition/stt_utils.py | 6 +- example/speech_recognition/train.py | 2 +- example/ssd/README.md | 47 + example/ssd/quantization.py | 11 +- example/stochastic-depth/sd_cifar10.py | 31 +- example/stochastic-depth/sd_mnist.py | 6 +- example/svm_mnist/svm_mnist.py | 110 +- example/svrg_module/README.md | 6 +- .../benchmarks/svrg_benchmark.ipynb | 111 +- .../linear_regression/data_reader.py | 24 +- .../svrg_module/linear_regression/train.py | 4 +- .../README.md | 0 .../convert_data.py | 0 .../vaegan_mxnet.py | 0 example/vae/VAE_example.ipynb | 1204 --------- include/dlpack | 1 + include/dmlc | 1 + include/mshadow | 1 + include/mxnet/base.h | 18 +- include/mxnet/c_api.h | 10 + include/mxnet/c_api_error.h | 56 + include/mxnet/engine.h | 1 - include/mxnet/ndarray.h | 36 +- include/mxnet/random_generator.h | 16 +- include/nnvm | 1 + julia/.gitignore | 3 - julia/README.md | 4 +- julia/REQUIRE | 1 - julia/deps/build.jl | 74 +- julia/docs/.gitignore | 6 + julia/docs/Makefile | 13 +- julia/docs/Project.toml | 7 + julia/docs/make.jl | 16 +- julia/docs/mkdocs.yml | 2 +- julia/docs/src/tutorial/mnist.md | 2 +- julia/examples/char-lstm/lstm.jl | 11 +- julia/examples/char-lstm/seq-data.jl | 13 +- julia/examples/char-lstm/train.jl | 2 +- julia/examples/char-lstm/visualize.jl | 4 +- .../Prediction with Pre-trained Model.ipynb | 2 +- julia/examples/mnist/mlp-test.jl | 26 +- julia/examples/mnist/mlp.jl | 2 +- julia/src/MXNet.jl | 20 +- julia/src/autograd.jl | 18 +- julia/src/base.jl | 66 +- julia/src/broadcast.jl | 29 +- julia/src/callback.jl | 2 +- julia/src/context.jl | 21 +- julia/src/deprecated.jl | 61 +- julia/src/executor.jl | 20 +- julia/src/initializer.jl | 8 +- julia/src/io.jl | 47 +- julia/src/kvstore.jl | 14 +- julia/src/metric.jl | 60 +- julia/src/model.jl | 72 +- julia/src/ndarray.jl | 341 ++- julia/src/nn-factory.jl | 2 +- julia/src/optimizer.jl | 10 +- julia/src/optimizers/adadelta.jl | 2 +- julia/src/optimizers/adagrad.jl | 2 +- julia/src/optimizers/nadam.jl | 2 +- julia/src/optimizers/rmsprop.jl | 2 +- julia/src/optimizers/sgd.jl | 4 +- julia/src/random.jl | 9 +- julia/src/symbolic-node.jl | 242 +- julia/src/util.jl | 34 +- julia/src/visualize.jl | 8 +- julia/test/runtests.jl | 18 +- julia/test/unittest/autograd.jl | 79 +- julia/test/unittest/bind.jl | 6 +- julia/test/unittest/io.jl | 8 +- julia/test/unittest/kvstore.jl | 14 +- julia/test/unittest/metric.jl | 6 +- julia/test/unittest/model.jl | 6 +- julia/test/unittest/name.jl | 6 +- julia/test/unittest/ndarray.jl | 463 ++-- julia/test/unittest/operator.jl | 6 +- julia/test/unittest/optimizer.jl | 10 +- julia/test/unittest/random.jl | 15 +- julia/test/unittest/symbolic-node.jl | 120 +- julia/test/unittest/util.jl | 9 +- julia/test/unittest/visualize.jl | 4 +- make/config.mk | 6 +- make/config/libmxnet.sym | 15 + make/config/libmxnet.ver | 19 + make/crosscompile.jetson.mk | 4 +- make/maven/maven_darwin_cpu.mk | 182 ++ make/maven/maven_linux_cpu.mk | 182 ++ make/maven/maven_linux_cu90.mk | 185 ++ make/pip/pip_darwin_cpu.mk | 182 ++ make/pip/pip_darwin_mkl.mk | 166 ++ make/pip/pip_linux_cpu.mk | 182 ++ make/pip/pip_linux_cu100.mk | 185 ++ make/pip/pip_linux_cu100mkl.mk | 169 ++ make/pip/pip_linux_cu75.mk 
| 182 ++ make/pip/pip_linux_cu75mkl.mk | 166 ++ make/pip/pip_linux_cu80.mk | 185 ++ make/pip/pip_linux_cu80mkl.mk | 169 ++ make/pip/pip_linux_cu90.mk | 185 ++ make/pip/pip_linux_cu90mkl.mk | 169 ++ make/pip/pip_linux_cu91.mk | 185 ++ make/pip/pip_linux_cu91mkl.mk | 169 ++ make/pip/pip_linux_cu92.mk | 185 ++ make/pip/pip_linux_cu92mkl.mk | 169 ++ make/pip/pip_linux_mkl.mk | 166 ++ perl-package/AI-MXNet-Gluon-Contrib/META.yml | 17 + perl-package/AI-MXNet-Gluon-ModelZoo/META.yml | 17 + perl-package/AI-MXNet/META.yml | 17 + perl-package/AI-MXNetCAPI/META.yml | 17 + perl-package/AI-MXNetCAPI/mxnet.i | 10 + perl-package/AI-NNVMCAPI/META.yml | 17 + python/mxnet/__init__.py | 9 +- python/mxnet/autograd.py | 6 +- python/mxnet/base.py | 12 +- python/mxnet/callback.py | 4 +- python/mxnet/context.py | 24 + python/mxnet/contrib/io.py | 4 +- .../contrib/onnx/mx2onnx/_op_translations.py | 1544 +++++------- .../contrib/onnx/mx2onnx/export_model.py | 5 + .../mxnet/contrib/onnx/mx2onnx/export_onnx.py | 135 +- .../contrib/onnx/onnx2mx/_import_helper.py | 14 +- .../contrib/onnx/onnx2mx/_op_translations.py | 79 +- .../onnx/onnx2mx/_translation_utils.py | 6 +- .../contrib/onnx/onnx2mx/import_model.py | 19 +- .../mxnet/contrib/onnx/onnx2mx/import_onnx.py | 9 +- .../contrib/onnx/onnx2mx/import_to_gluon.py | 5 + python/mxnet/contrib/quantization.py | 6 +- .../contrib/svrg_optimization/svrg_module.py | 17 +- python/mxnet/contrib/text/embedding.py | 84 +- python/mxnet/contrib/text/vocab.py | 12 +- python/mxnet/gluon/block.py | 6 +- python/mxnet/gluon/contrib/nn/basic_layers.py | 4 +- .../mxnet/gluon/contrib/rnn/conv_rnn_cell.py | 18 +- python/mxnet/gluon/contrib/rnn/rnn_cell.py | 13 +- python/mxnet/gluon/data/dataloader.py | 275 +- python/mxnet/gluon/data/dataset.py | 16 +- python/mxnet/gluon/data/vision/datasets.py | 41 +- python/mxnet/gluon/data/vision/transforms.py | 4 +- python/mxnet/gluon/loss.py | 72 +- .../mxnet/gluon/model_zoo/vision/mobilenet.py | 14 +- python/mxnet/gluon/nn/basic_layers.py | 19 +- python/mxnet/gluon/parameter.py | 10 +- python/mxnet/gluon/rnn/rnn_cell.py | 43 +- python/mxnet/gluon/rnn/rnn_layer.py | 130 +- python/mxnet/gluon/trainer.py | 130 +- python/mxnet/image/detection.py | 66 +- python/mxnet/image/image.py | 140 +- python/mxnet/initializer.py | 9 +- python/mxnet/io/io.py | 8 +- python/mxnet/libinfo.py | 35 +- python/mxnet/metric.py | 305 ++- python/mxnet/model.py | 5 + python/mxnet/module/base_module.py | 13 +- python/mxnet/module/sequential_module.py | 15 +- python/mxnet/ndarray/contrib.py | 108 +- python/mxnet/ndarray/ndarray.py | 77 +- python/mxnet/ndarray/random.py | 136 +- python/mxnet/ndarray/sparse.py | 92 +- python/mxnet/optimizer/optimizer.py | 106 +- python/mxnet/recordio.py | 49 +- python/mxnet/rnn/rnn.py | 6 +- python/mxnet/rnn/rnn_cell.py | 23 +- python/mxnet/symbol/contrib.py | 18 +- python/mxnet/symbol/random.py | 82 +- python/mxnet/symbol/symbol.py | 2 +- python/mxnet/symbol_doc.py | 13 - python/mxnet/test_utils.py | 83 +- python/mxnet/torch.py | 2 +- python/mxnet/visualization.py | 37 +- python/setup.py | 2 +- readthedocs.yml | 17 + scala-package/.gitignore | 5 + scala-package/README.md | 281 ++- .../assembly/linux-x86_64-cpu/pom.xml | 127 - .../src/main/assembly/assembly.xml | 28 - .../assembly/linux-x86_64-gpu/pom.xml | 127 - .../src/main/assembly/assembly.xml | 28 - .../osx-x86_64-cpu/main/assembly/assembly.xml | 30 - scala-package/assembly/osx-x86_64-cpu/pom.xml | 123 - .../src/main/assembly/assembly.xml | 28 - scala-package/assembly/pom.xml | 157 +- 
scala-package/assembly/src/javadoc.xml | 24 - .../assembly/src/main/assembly/assembly.xml | 57 + .../assembly/src/main/assembly/javadoc.xml | 15 + .../assembly/src/main/assembly/source.xml | 19 + scala-package/assembly/src/source.xml | 32 - scala-package/core/pom.xml | 142 +- .../main/scala/org/apache/mxnet/Base.scala | 58 +- .../main/scala/org/apache/mxnet/Context.scala | 2 + .../scala/org/apache/mxnet/Executor.scala | 9 +- .../src/main/scala/org/apache/mxnet/IO.scala | 7 +- .../main/scala/org/apache/mxnet/Image.scala | 185 ++ .../main/scala/org/apache/mxnet/KVStore.scala | 2 +- .../main/scala/org/apache/mxnet/LibInfo.scala | 3 + .../org/apache/mxnet/MX_PRIMITIVES.scala | 85 + .../main/scala/org/apache/mxnet/NDArray.scala | 232 +- .../scala/org/apache/mxnet/NDArrayAPI.scala | 13 +- .../scala/org/apache/mxnet/Optimizer.scala | 2 +- .../org/apache/mxnet/ResourceScope.scala | 6 +- .../main/scala/org/apache/mxnet/Symbol.scala | 2 + .../scala/org/apache/mxnet/SymbolAPI.scala | 12 +- .../org/apache/mxnet/Visualization.scala | 1 + .../org/apache/mxnet/io/MXDataIter.scala | 10 +- .../org/apache/mxnet/io/NDArrayIter.scala | 11 +- .../org/apache/mxnet/io/PrefetchingIter.scala | 4 +- .../org/apache/mxnet/io/ResizeIter.scala | 4 +- .../org/apache/mxnet/javaapi/Context.scala | 16 +- .../scala/org/apache/mxnet/javaapi/IO.scala | 11 +- .../org/apache/mxnet/javaapi/NDArray.scala | 463 ++++ .../org/apache/mxnet/javaapi/Shape.scala | 3 +- .../org/apache/mxnet/module/BaseModule.scala | 6 +- .../apache/mxnet/module/BucketingModule.scala | 4 +- .../module/DataParallelExecutorGroup.scala | 4 +- .../org/apache/mxnet/module/Module.scala | 4 +- .../mxnet/module/SequentialModule.scala | 4 +- .../mxnet/util/NativeLibraryLoader.scala | 82 +- .../apache/mxnet/util/OptionConversion.scala | 22 + .../org/apache/mxnet/javaapi/NDArrayTest.java | 100 + .../mxnet/javaapi/ResourceScopeTestSuite.java | 110 + .../test/scala/org/apache/mxnet/IOSuite.scala | 27 + .../scala/org/apache/mxnet/ImageSuite.scala | 100 + .../scala/org/apache/mxnet/NDArraySuite.scala | 422 +++- .../scala/org/apache/mxnet/SymbolSuite.scala | 22 + scala-package/deploy/pom.xml | 140 ++ .../deploy/src/main/deploy/deploy.xml | 33 + scala-package/dev/compile-mxnet-backend.sh | 7 +- scala-package/examples/pom.xml | 79 +- .../benchmark/run_image_inference_bm.sh | 17 +- .../benchmark/run_java_inference_bm.sh | 27 + .../scripts/benchmark/run_text_charrnn_bm.sh | 18 +- .../examples/scripts/customop/run_customop.sh | 2 +- .../scripts/customop/run_customopwithrtc.sh | 2 +- .../imageclassifier/get_resnet_18_data.sh | 8 +- .../infer/imageclassifier/get_resnet_data.sh | 8 +- .../imageclassifier/run_classifier_example.sh | 2 +- .../infer/objectdetector/get_ssd_data.sh | 13 +- .../infer/objectdetector/run_ssd_example.sh | 3 +- .../objectdetector/run_ssd_java_example.sh | 34 + .../predictor/run_predictor_java_example.sh | 31 + .../examples/scripts/module/mnist_mlp.sh | 2 +- .../scripts/module/run_sequential_module.sh | 2 +- .../neuralstyle_end2end/run_test_end2end.sh | 2 +- .../neuralstyle_end2end/run_train_end2end.sh | 2 +- .../scripts/profiler/run_profiler_matmul.sh | 2 +- .../scripts/profiler/run_profiler_ndarray.sh | 2 +- .../scripts/rnn/run_lstm_bucketing.sh | 7 +- .../examples/scripts/rnn/run_test_charrnn.sh | 9 +- .../examples/scripts/rnn/run_train_charrnn.sh | 10 +- .../scripts/run_cnntextclassification.sh | 2 +- .../examples/scripts/run_gan_mnist.sh | 2 +- .../examples/scripts/run_multitask.sh | 2 +- .../examples/scripts/run_neuralstyle.sh | 2 +- 
.../examples/scripts/run_train_mnist.sh | 10 +- .../examples/scripts/run_visualization.sh | 2 +- .../javaapi/benchmark/InferBase.java | 35 + .../javaapi/benchmark/JavaBenchmark.java | 129 + .../benchmark/ObjectDetectionBenchmark.java | 64 + .../javaapi/infer/objectdetector/README.md | 97 + .../objectdetector/SSDClassifierExample.java | 199 ++ .../infer/predictor/PredictorExample.java | 200 ++ .../javaapi/infer/predictor/README.md | 61 + .../imclassification/TrainModel.scala | 24 +- .../datasets/SyntheticDataIter.scala | 8 +- .../imclassification/models/Lenet.scala | 5 +- .../models/MultiLayerPerceptron.scala | 5 +- .../imclassification/models/Resnet.scala | 16 +- .../infer/objectdetector/README.md | 20 +- .../objectdetector/SSDClassifierExample.scala | 4 +- .../infer/predictor/PredictorExample.scala | 92 + .../CNNClassifierExampleSuite.scala | 3 +- .../IMClassificationExampleSuite.scala | 10 +- .../ImageClassifierExampleSuite.scala | 6 +- .../ObjectDetectorExampleSuite.scala | 5 +- .../predictor/PredictorExampleSuite.scala | 87 + .../multitask/MultiTaskSuite.scala | 3 +- .../profiler/ProfilerSuite.scala | 3 +- scala-package/infer/pom.xml | 165 +- .../org/apache/mxnet/infer/Classifier.scala | 54 +- .../apache/mxnet/infer/ImageClassifier.scala | 48 +- .../apache/mxnet/infer/ObjectDetector.scala | 31 +- .../org/apache/mxnet/infer/Predictor.scala | 77 +- .../mxnet/infer/javaapi/ObjectDetector.scala | 129 + .../infer/javaapi/ObjectDetectorOutput.scala | 71 + .../mxnet/infer/javaapi/Predictor.scala | 134 + .../javaapi/ObjectDetectorOutputTest.java | 59 + .../infer/javaapi/ObjectDetectorTest.java | 106 + .../mxnet/infer/javaapi/PredictorTest.java | 100 + .../apache/mxnet/infer/ClassifierSuite.scala | 47 +- .../mxnet/infer/ImageClassifierSuite.scala | 7 + .../apache/mxnet/infer/PredictorSuite.scala | 32 +- .../init-native/linux-x86_64/pom.xml | 106 - scala-package/init-native/osx-x86_64/pom.xml | 105 - scala-package/init-native/pom.xml | 174 +- .../org_apache_mxnet_init_native_c_api.h | 45 + scala-package/init/pom.xml | 107 +- .../scala/org/apache/mxnet/init/Base.scala | 20 +- scala-package/macros/pom.xml | 88 +- .../org/apache/mxnet/APIDocGenerator.scala | 346 ++- .../org/apache/mxnet/GeneratorBase.scala | 232 ++ .../scala/org/apache/mxnet/NDArrayMacro.scala | 352 ++- .../scala/org/apache/mxnet/SymbolMacro.scala | 325 ++- .../mxnet/javaapi/JavaNDArrayMacro.scala | 125 + .../apache/mxnet/utils/CToScalaUtils.scala | 39 +- .../scala/org/apache/mxnet/MacrosSuite.scala | 3 +- scala-package/memory-management.md | 118 + scala-package/mxnet-demo/java-demo/Makefile | 52 + scala-package/mxnet-demo/java-demo/README.md | 90 + .../mxnet-demo/java-demo/bin/java_sample.sh | 8 +- .../mxnet-demo/java-demo/bin/run_od.sh | 8 +- scala-package/mxnet-demo/java-demo/pom.xml | 71 + .../src/main/java/mxnet/HelloWorld.java | 32 + .../src/main/java/mxnet/ObjectDetection.java | 101 + .../mxnet-demo/{ => scala-demo}/Makefile | 8 +- .../mxnet-demo/{ => scala-demo}/README.md | 15 +- .../mxnet-demo/{ => scala-demo}/bin/demo.sh | 0 .../mxnet-demo/{ => scala-demo}/bin/run_im.sh | 0 .../mxnet-demo/{ => scala-demo}/pom.xml | 21 +- .../src/main/scala/sample/HelloWorld.scala | 0 .../sample/ImageClassificationExample.scala | 0 scala-package/native/README.md | 67 + scala-package/native/linux-x86_64-cpu/pom.xml | 106 - scala-package/native/linux-x86_64-gpu/pom.xml | 106 - scala-package/native/osx-x86_64-cpu/pom.xml | 106 - scala-package/native/pom.xml | 169 +- .../native/org_apache_mxnet_native_c_api.cc | 10 + 
.../native/org_apache_mxnet_native_c_api.h | 861 +++++++ scala-package/packageTest/Makefile | 91 + scala-package/packageTest/README.md | 72 + scala-package/packageTest/core/pom.xml | 39 + scala-package/packageTest/core/scripts | 1 + scala-package/packageTest/examples/pom.xml | 48 + scala-package/packageTest/examples/scripts | 1 + scala-package/packageTest/infer/pom.xml | 38 + scala-package/packageTest/pom.xml | 196 ++ scala-package/pom.xml | 204 +- scala-package/spark/README.md | 3 +- scala-package/spark/bin/run-mnist-example.sh | 4 +- scala-package/spark/pom.xml | 33 +- snapcraft.yaml | 19 +- src/c_api/c_api.cc | 11 + src/c_api/c_api_common.h | 42 +- src/c_api/c_api_executor.cc | 12 +- src/c_api/c_api_symbolic.cc | 8 +- src/c_api/c_predict_api.cc | 14 +- src/common/cuda_utils.h | 19 +- src/common/exec_utils.h | 8 +- src/common/utils.h | 21 + src/engine/engine.cc | 2 +- src/engine/stream_manager.h | 10 +- src/engine/threaded_engine.h | 2 +- src/engine/threaded_engine_pooled.cc | 8 +- src/executor/attach_op_execs_pass.cc | 6 +- src/executor/graph_executor.cc | 13 +- src/executor/tensorrt_pass.cc | 8 +- src/executor/trt_graph_executor.cc | 9 +- src/imperative/cached_op.cc | 57 +- src/imperative/imperative.cc | 57 +- src/imperative/imperative_utils.cc | 6 +- src/imperative/imperative_utils.h | 17 +- src/initialize.cc | 8 +- src/io/image_aug_default.cc | 17 +- src/io/image_det_aug_default.cc | 20 +- src/io/image_io.cc | 9 +- src/io/iter_image_recordio_2.cc | 4 +- src/kvstore/comm.h | 31 +- src/kvstore/comm_tree.h | 37 +- src/kvstore/gpu_topology.h | 89 +- src/ndarray/ndarray.cc | 31 +- src/ndarray/ndarray_function.cc | 14 +- src/nnvm/graph_editor.cc | 13 +- src/nnvm/legacy_json_util.cc | 8 +- src/nnvm/legacy_op_util.cc | 6 +- src/operator/bilinear_sampler-inl.h | 8 +- src/operator/c_lapack_api.cc | 72 + src/operator/c_lapack_api.h | 35 +- src/operator/contrib/adamw-inl.h | 125 + src/operator/contrib/adamw.cc | 71 + src/operator/contrib/adamw.cu | 35 + src/operator/contrib/adaptive_avg_pooling.cc | 8 +- src/operator/contrib/boolean_mask-inl.h | 134 + src/operator/contrib/boolean_mask.cc | 114 + src/operator/contrib/bounding_box-inl.h | 2 +- src/operator/contrib/bounding_box.cc | 27 +- src/operator/contrib/dgl_graph-inl.h | 68 + src/operator/contrib/dgl_graph.cc | 1631 ++++++++++++ src/operator/contrib/dgl_graph.cu | 29 + src/operator/contrib/index_copy-inl.h | 14 +- src/operator/contrib/index_copy.cc | 36 +- src/operator/contrib/multibox_detection.cc | 1 - src/operator/contrib/nnvm_to_onnx-inl.h | 42 +- src/operator/contrib/nnvm_to_onnx.cc | 34 +- src/operator/contrib/nnz.cc | 189 ++ src/operator/contrib/quadratic_op-inl.h | 2 +- src/operator/contrib/quadratic_op.cc | 6 +- src/operator/contrib/roi_align-inl.h | 8 +- src/operator/contrib/roi_align.cc | 63 +- src/operator/contrib/roi_align.cu | 28 +- src/operator/contrib/tensorrt-inl.h | 38 +- src/operator/contrib/tensorrt.cc | 28 +- src/operator/contrib/tensorrt.cu | 2 +- src/operator/cudnn_rnn-inl.h | 305 ++- src/operator/custom/custom.cc | 14 +- src/operator/custom/native_op-inl.h | 12 +- src/operator/custom/ndarray_op-inl.h | 6 +- src/operator/elemwise_op_common.h | 28 +- src/operator/grid_generator-inl.h | 8 +- src/operator/l2_normalization-inl.h | 2 +- src/operator/l2_normalization.cc | 102 +- src/operator/linalg_impl.h | 52 +- src/operator/math_functions-inl.h | 2 + src/operator/mshadow_op.h | 4 + src/operator/mxnet_op.h | 117 +- src/operator/nn/activation-inl.h | 12 +- src/operator/nn/activation.cc | 84 +- src/operator/nn/activation.cu | 
30 +- src/operator/nn/batch_norm.cc | 7 +- src/operator/nn/concat.cc | 29 +- src/operator/nn/ctc_loss.cc | 3 + src/operator/nn/cudnn/cudnn_algoreg-inl.h | 66 +- src/operator/nn/cudnn/cudnn_convolution-inl.h | 484 ++-- .../nn/cudnn/cudnn_deconvolution-inl.h | 506 ++-- src/operator/nn/cudnn/cudnn_pooling-inl.h | 5 + src/operator/nn/dropout-inl.h | 5 +- src/operator/nn/mkldnn/mkldnn_act.cc | 19 +- src/operator/nn/mkldnn/mkldnn_base-inl.h | 55 +- src/operator/nn/mkldnn/mkldnn_base.cc | 105 +- .../nn/mkldnn/mkldnn_batch_norm-inl.h | 10 +- src/operator/nn/mkldnn/mkldnn_concat-inl.h | 83 + src/operator/nn/mkldnn/mkldnn_concat.cc | 83 +- src/operator/nn/mkldnn/mkldnn_convolution.cc | 291 ++- .../nn/mkldnn/mkldnn_deconvolution.cc | 121 +- src/operator/nn/mkldnn/mkldnn_lrn-inl.h | 10 +- src/operator/nn/mkldnn/mkldnn_pooling-inl.h | 18 +- src/operator/nn/mkldnn/mkldnn_pooling.cc | 50 +- src/operator/operator_common.h | 4 +- src/operator/operator_tune-inl.h | 5 +- src/operator/operator_tune.cc | 2 + src/operator/operator_util.cc | 10 +- .../mkldnn/mkldnn_quantized_concat.cc | 119 + .../quantization/quantization_utils.h | 2 + .../quantization/quantize_graph_pass.cc | 100 +- src/operator/quantization/quantized_concat.cc | 149 ++ .../quantization/quantized_fully_connected.cc | 159 +- .../quantization/quantized_pooling.cc | 27 +- src/operator/random/multisample_op.h | 8 +- src/operator/random/sample_op.cc | 195 +- src/operator/random/sample_op.cu | 48 +- src/operator/random/sample_op.h | 610 +++-- src/operator/random/sampler.h | 73 +- src/operator/rnn-inl.h | 84 +- src/operator/rnn.cc | 5 + src/operator/rnn.cu | 2 +- src/operator/spatial_transformer-inl.h | 8 +- src/operator/spatial_transformer.cu | 43 +- .../subgraph/mkldnn/mkldnn_conv-inl.h | 3 - src/operator/subgraph/mkldnn/mkldnn_conv.cc | 14 +- src/operator/subgraph/partition_graph.cc | 26 +- src/operator/tensor/broadcast_reduce-inl.h | 98 +- src/operator/tensor/broadcast_reduce_op.h | 4 +- .../tensor/broadcast_reduce_op_index.cc | 2 +- .../tensor/broadcast_reduce_op_index.cu | 4 +- .../tensor/broadcast_reduce_op_value.cc | 4 +- .../tensor/broadcast_reduce_op_value.cu | 4 +- src/operator/tensor/cast_storage-inl.cuh | 2 +- src/operator/tensor/cast_storage-inl.h | 2 +- src/operator/tensor/control_flow_op.cu | 2 +- src/operator/tensor/control_flow_op.h | 18 +- src/operator/tensor/diag_op-inl.h | 2 +- src/operator/tensor/diag_op.cc | 2 +- src/operator/tensor/diag_op.cu | 2 +- src/operator/tensor/dot-inl.cuh | 2 +- src/operator/tensor/dot-inl.h | 23 +- .../elemwise_binary_broadcast_op-inl.cuh | 4 + .../tensor/elemwise_binary_broadcast_op.h | 18 +- .../elemwise_binary_broadcast_op_basic.cc | 4 +- .../elemwise_binary_broadcast_op_basic.cu | 4 +- .../elemwise_binary_broadcast_op_extended.cc | 2 +- .../elemwise_binary_broadcast_op_extended.cu | 4 +- .../elemwise_binary_broadcast_op_logic.cc | 2 +- .../elemwise_binary_broadcast_op_logic.cu | 4 +- src/operator/tensor/elemwise_binary_op-inl.h | 2 +- src/operator/tensor/elemwise_binary_op.cc | 7 + .../tensor/elemwise_binary_op_basic.cc | 4 +- .../tensor/elemwise_binary_op_basic.cu | 4 +- .../tensor/elemwise_binary_op_extended.cc | 4 +- .../tensor/elemwise_binary_op_extended.cu | 2 +- .../tensor/elemwise_binary_op_logic.cc | 2 +- .../tensor/elemwise_binary_op_logic.cu | 2 +- .../tensor/elemwise_binary_scalar_op_basic.cc | 4 +- .../tensor/elemwise_binary_scalar_op_basic.cu | 4 +- .../elemwise_binary_scalar_op_extended.cc | 4 +- .../elemwise_binary_scalar_op_extended.cu | 4 +- 
.../tensor/elemwise_binary_scalar_op_logic.cc | 4 +- .../tensor/elemwise_binary_scalar_op_logic.cu | 4 +- src/operator/tensor/elemwise_scatter_op.cc | 5 + src/operator/tensor/elemwise_scatter_op.cu | 5 + src/operator/tensor/elemwise_scatter_op.h | 2 +- src/operator/tensor/elemwise_sum.cc | 2 +- src/operator/tensor/elemwise_sum.cu | 2 +- src/operator/tensor/elemwise_sum.h | 2 +- src/operator/tensor/elemwise_unary_op.h | 41 + .../tensor/elemwise_unary_op_basic.cc | 41 +- .../tensor/elemwise_unary_op_basic.cu | 16 +- src/operator/tensor/elemwise_unary_op_trig.cc | 2 +- src/operator/tensor/elemwise_unary_op_trig.cu | 4 +- src/operator/tensor/histogram-inl.h | 4 + src/operator/tensor/histogram.cc | 4 + src/operator/tensor/histogram.cu | 4 + src/operator/tensor/indexing_op.cc | 146 +- src/operator/tensor/indexing_op.cu | 127 +- src/operator/tensor/indexing_op.h | 158 +- src/operator/tensor/init_op.h | 6 +- .../tensor/{la_op_inline.h => la_op-inl.h} | 244 +- src/operator/tensor/la_op.cc | 35 +- src/operator/tensor/la_op.cu | 4 +- src/operator/tensor/la_op.h | 18 +- src/operator/tensor/matrix_op-inl.h | 229 +- src/operator/tensor/matrix_op.cc | 86 +- src/operator/tensor/ordering_op-inl.h | 2 +- src/operator/tensor/ordering_op.cc | 2 +- src/operator/tensor/ordering_op.cu | 4 +- src/operator/tensor/ravel.cc | 2 +- src/operator/tensor/ravel.cu | 2 +- src/operator/tensor/sparse_retain-inl.h | 2 +- src/operator/tensor/sparse_retain.cc | 2 +- src/operator/tensor/sparse_retain.cu | 2 +- src/profiler/aggregate_stats.cc | 13 +- src/storage/cpu_device_storage.h | 10 +- src/storage/gpu_device_storage.h | 12 +- src/storage/naive_storage_manager.h | 6 +- src/storage/pinned_memory_storage.h | 12 +- src/storage/pooled_storage_manager.h | 34 +- src/storage/storage.cc | 50 +- tests/cpp/include/test_mkldnn.h | 625 +++++ tests/cpp/kvstore/gpu_topology_test.cc | 36 +- tests/cpp/operator/activation_perf.cc | 26 +- tests/cpp/operator/krprod_test.cc | 3 + tests/cpp/operator/mkldnn.cc | 1646 ------------ tests/cpp/operator/mkldnn_operator_test.cc | 1356 ++++++++++ tests/cpp/operator/mkldnn_test.cc | 420 ++++ tests/jenkins/run_test.sh | 4 +- tests/jenkins/run_test_amzn_linux_gpu.sh | 4 +- tests/jenkins/run_test_ubuntu.sh | 4 +- tests/nightly/Jenkinsfile | 20 +- tests/nightly/JenkinsfileForBinaries | 20 +- .../apache_rat_license_check/rat-excludes | 14 +- .../JenkinsfileForBLC | 4 +- tests/nightly/dist_async_kvstore.py | 18 +- tests/nightly/dist_device_sync_kvstore.py | 19 + tests/nightly/dist_sync_kvstore.py | 26 +- .../JenkinsfileForMBCC | 4 +- .../train_mxnet_legacy_models.sh | 4 +- tests/nightly/test_large_array.py | 139 +- tests/python-pytest/onnx/README.md | 33 + .../onnx/{export => }/backend.py | 66 +- .../mxnet_backend_rep.py => backend_rep.py} | 93 +- tests/python-pytest/onnx/backend_test.py | 89 + .../python-pytest/onnx/export/backend_rep.py | 84 - .../onnx/export/mxnet_export_test.py | 251 -- .../onnx/export/onnx_backend_test.py | 147 -- .../onnx/import/gluon_backend.py | 75 - .../onnx/import/gluon_backend_rep.py | 71 - .../onnx/import/gluon_backend_test.py | 55 - .../onnx/import/mxnet_backend.py | 67 - .../onnx/import/mxnet_backend_test.py | 55 - .../onnx/import/onnx_import_test.py | 290 --- tests/python-pytest/onnx/import/test_cases.py | 120 - tests/python-pytest/onnx/mxnet_export_test.py | 121 + tests/python-pytest/onnx/test_cases.py | 131 + tests/python-pytest/onnx/test_models.py | 178 ++ tests/python-pytest/onnx/test_node.py | 278 +++ tests/python/gpu/test_device.py | 46 +- 
tests/python/gpu/test_gluon_gpu.py | 184 +- tests/python/gpu/test_kvstore_gpu.py | 18 +- tests/python/gpu/test_operator_gpu.py | 55 +- tests/python/mkl/test_mkldnn.py | 46 + tests/python/mkl/test_subgraph.py | 23 +- .../python/quantization/test_quantization.py | 239 +- tests/python/unittest/common.py | 11 +- tests/python/unittest/test_autograd.py | 17 +- .../python/unittest/test_contrib_operator.py | 1 + tests/python/unittest/test_dgl_graph.py | 245 ++ tests/python/unittest/test_gluon.py | 21 +- tests/python/unittest/test_gluon_data.py | 30 +- tests/python/unittest/test_gluon_rnn.py | 44 +- tests/python/unittest/test_gluon_trainer.py | 92 +- tests/python/unittest/test_image.py | 254 +- .../python/unittest/test_libinfo.py | 15 +- tests/python/unittest/test_loss.py | 18 + tests/python/unittest/test_metric.py | 63 + tests/python/unittest/test_module.py | 59 + tests/python/unittest/test_ndarray.py | 53 + tests/python/unittest/test_operator.py | 560 +++-- tests/python/unittest/test_optimizer.py | 151 +- tests/python/unittest/test_random.py | 243 +- tests/python/unittest/test_sparse_ndarray.py | 23 +- tests/python/unittest/test_sparse_operator.py | 42 + tests/requirements.txt | 1 + tests/tutorials/test_sanity_tutorials.py | 7 +- tests/tutorials/test_tutorials.py | 14 +- tools/build/build_lib.sh | 80 + tools/build/build_wheel.sh | 31 + tools/dependencies/README.md | 14 + tools/dependencies/cityhash.sh | 32 + tools/dependencies/curl.sh | 64 + tools/dependencies/eigen.sh | 34 + tools/dependencies/libpng.sh | 40 + tools/dependencies/libtiff.sh | 32 + tools/dependencies/libturbojpeg.sh | 47 + .../dependencies/libz.sh | 50 +- tools/dependencies/lz4.sh | 31 + .../dependencies/make_shared_dependencies.sh | 39 +- tools/dependencies/openblas.sh | 35 + tools/dependencies/opencv.sh | 192 ++ tools/dependencies/openssl.sh | 38 + tools/dependencies/patch/opencv_lapack.h | 23 + tools/dependencies/protobuf.sh | 43 + tools/dependencies/zmq.sh | 38 + tools/license_header.py | 173 +- tools/pip/MANIFEST.in | 28 + tools/pip/sanity_test.py | 32 + tools/pip/setup.py | 195 ++ tools/pip_package/README.md | 9 - tools/pip_package/make_pip_package.sh | 179 -- tools/pip_package/setup.py | 60 - tools/setup_gpu_build_tools.sh | 254 ++ 1056 files changed, 50819 insertions(+), 20101 deletions(-) rename README.md => NGRAPH_README.md (100%) create mode 100644 R-package/dummy.NAMESPACE create mode 100644 ci/docker/Dockerfile.publish.test.centos7_cpu create mode 100644 ci/docker/Dockerfile.publish.test.centos7_gpu create mode 100644 ci/docker/Dockerfile.publish.test.ubuntu1404_cpu create mode 100644 ci/docker/Dockerfile.publish.test.ubuntu1404_gpu create mode 100644 ci/docker/Dockerfile.publish.test.ubuntu1604_cpu create mode 100644 ci/docker/Dockerfile.publish.test.ubuntu1604_gpu rename example/bi-lstm-sort/gen_data.py => ci/docker/Dockerfile.publish.test.ubuntu1804_cpu (61%) create mode 100644 ci/docker/Dockerfile.publish.test.ubuntu1804_gpu create mode 100644 ci/docker/Dockerfile.publish.ubuntu1404_cpu create mode 100644 ci/docker/Dockerfile.publish.ubuntu1404_gpu create mode 100755 ci/docker/install/centos7_base.sh create mode 100755 ci/docker/install/centos7_scala.sh create mode 100755 ci/docker/install/ubuntu_base.sh create mode 100755 ci/docker/install/ubuntu_mkl.sh create mode 100644 ci/docker/install/ubuntu_publish.sh rename Jenkinsfile => ci/jenkins/Jenkins_steps.groovy (71%) create mode 100644 ci/jenkins/Jenkinsfile_centos_cpu create mode 100644 ci/jenkins/Jenkinsfile_centos_gpu create mode 100644 
ci/jenkins/Jenkinsfile_clang create mode 100644 ci/jenkins/Jenkinsfile_edge create mode 100644 ci/jenkins/Jenkinsfile_miscellaneous create mode 100644 ci/jenkins/Jenkinsfile_sanity create mode 100644 ci/jenkins/Jenkinsfile_unix_cpu create mode 100644 ci/jenkins/Jenkinsfile_unix_gpu create mode 100644 ci/jenkins/Jenkinsfile_website create mode 100644 ci/jenkins/Jenkinsfile_windows_cpu create mode 100644 ci/jenkins/Jenkinsfile_windows_gpu create mode 100644 ci/publish/Jenkinsfile create mode 100755 ci/publish/scala/build.sh create mode 100644 ci/publish/scala/buildkey.py create mode 100755 ci/publish/scala/deploy.sh create mode 100644 ci/publish/scala/fullDeploy.sh create mode 100755 ci/publish/scala/test.sh create mode 100644 ci/requirements.txt create mode 100644 cmake/AutoDetectF16C.cmake create mode 100644 cmake/DownloadMKLML.cmake delete mode 100644 cmake/MklDnn.cmake create mode 100644 cmake/cmake_options.yml create mode 100644 contrib/clojure-package/examples/captcha/.gitignore create mode 100644 contrib/clojure-package/examples/captcha/README.md create mode 100644 contrib/clojure-package/examples/captcha/captcha_example.png create mode 100755 contrib/clojure-package/examples/captcha/gen_captcha.py create mode 100755 contrib/clojure-package/examples/captcha/get_data.sh create mode 100644 contrib/clojure-package/examples/captcha/project.clj create mode 100644 contrib/clojure-package/examples/captcha/src/captcha/consts.clj create mode 100644 contrib/clojure-package/examples/captcha/src/captcha/infer_ocr.clj create mode 100644 contrib/clojure-package/examples/captcha/src/captcha/train_ocr.clj create mode 100644 contrib/clojure-package/examples/captcha/test/captcha/train_ocr_test.clj create mode 100644 contrib/clojure-package/examples/cnn-text-classification/test/cnn_text_classification/classifier_test.clj create mode 100644 contrib/clojure-package/examples/gan/test/gan/gan_test.clj create mode 100644 contrib/clojure-package/examples/imclassification/test/imclassification/train_mnist_test.clj create mode 100644 contrib/clojure-package/examples/imclassification/test/test-symbol.json.ref create mode 100644 contrib/clojure-package/examples/infer/imageclassifier/.gitignore create mode 100644 contrib/clojure-package/examples/infer/imageclassifier/README.md create mode 100644 contrib/clojure-package/examples/infer/imageclassifier/project.clj create mode 100755 contrib/clojure-package/examples/infer/imageclassifier/scripts/get_resnet_18_data.sh create mode 100755 contrib/clojure-package/examples/infer/imageclassifier/scripts/get_resnet_data.sh create mode 100644 contrib/clojure-package/examples/infer/imageclassifier/src/infer/imageclassifier_example.clj create mode 100644 contrib/clojure-package/examples/infer/imageclassifier/test/infer/imageclassifier_example_test.clj create mode 100644 contrib/clojure-package/examples/infer/objectdetector/.gitignore create mode 100644 contrib/clojure-package/examples/infer/objectdetector/README.md create mode 100644 contrib/clojure-package/examples/infer/objectdetector/project.clj create mode 100755 contrib/clojure-package/examples/infer/objectdetector/scripts/get_ssd_data.sh create mode 100644 contrib/clojure-package/examples/infer/objectdetector/src/infer/objectdetector_example.clj create mode 100644 contrib/clojure-package/examples/infer/objectdetector/test/infer/objectdetector_example_test.clj create mode 100644 contrib/clojure-package/examples/infer/predictor/.gitignore create mode 100644 contrib/clojure-package/examples/infer/predictor/README.md create 
mode 100644 contrib/clojure-package/examples/infer/predictor/project.clj create mode 100755 contrib/clojure-package/examples/infer/predictor/scripts/get_resnet_18_data.sh create mode 100755 contrib/clojure-package/examples/infer/predictor/scripts/get_resnet_data.sh create mode 100644 contrib/clojure-package/examples/infer/predictor/src/infer/predictor_example.clj create mode 100644 contrib/clojure-package/examples/infer/predictor/test/infer/predictor_example_test.clj create mode 100644 contrib/clojure-package/examples/module/test/mnist_mlp_test.clj create mode 100644 contrib/clojure-package/examples/multi-label/test/multi_label_test.clj create mode 100644 contrib/clojure-package/examples/neural-style/test/neural_style/vgg_19_test.clj create mode 100644 contrib/clojure-package/examples/profiler/test/core_test.clj create mode 100644 contrib/clojure-package/examples/profiler/test/profile-matmul-20iter.json.ref create mode 100644 contrib/clojure-package/examples/rnn/test/rnn/core_test.clj create mode 100644 contrib/clojure-package/examples/tutorial/test/tutorial/core_test.clj create mode 100644 contrib/clojure-package/examples/visualization/test/visualization/core_test.clj create mode 100755 contrib/clojure-package/integration-tests.sh create mode 100755 contrib/clojure-package/scripts/infer/get_resnet_18_data.sh create mode 100755 contrib/clojure-package/scripts/infer/get_ssd_data.sh rename ci/docker/qemu/qemu_run.sh => contrib/clojure-package/scripts/update_versions.sh (70%) create mode 100644 contrib/clojure-package/src/org/apache/clojure_mxnet/image.clj create mode 100644 contrib/clojure-package/src/org/apache/clojure_mxnet/infer.clj create mode 100644 contrib/clojure-package/src/org/apache/clojure_mxnet/primitives.clj create mode 100644 contrib/clojure-package/test/org/apache/clojure_mxnet/image_test.clj create mode 100644 contrib/clojure-package/test/org/apache/clojure_mxnet/infer/imageclassifier_test.clj create mode 100644 contrib/clojure-package/test/org/apache/clojure_mxnet/infer/objectdetector_test.clj create mode 100644 contrib/clojure-package/test/org/apache/clojure_mxnet/infer/predictor_test.clj create mode 100644 contrib/clojure-package/test/org/apache/clojure_mxnet/primitives_test.clj create mode 100644 contrib/clojure-package/test/test-images/Pug-Cookie.jpg create mode 100644 contrib/clojure-package/test/test-images/kitten.jpg create mode 100644 cpp-package/example/inference/Makefile create mode 100644 cpp-package/example/inference/README.md create mode 100644 cpp-package/example/inference/inception_inference.cpp create mode 100755 cpp-package/example/inference/unit_test_inception_inference.sh create mode 100755 dev_menu.py create mode 100644 docs/_static/js/clipboard.js create mode 100644 docs/api/java/index.md create mode 100644 docs/install/java_setup.md rename {example => docs/tutorial_utils/vision}/cnn_visualization/gradcam.py (98%) create mode 100644 docs/tutorials/basic/reshape_transpose.md create mode 100644 docs/tutorials/gluon/info_gan.md create mode 100644 docs/tutorials/java/index.md create mode 100644 docs/tutorials/java/mxnet_java_on_intellij.md create mode 100644 docs/tutorials/java/ssd_inference.md rename example/{vae => autoencoder/variational_autoencoder}/README.md (100%) rename example/{vae => autoencoder/variational_autoencoder}/VAE.py (100%) create mode 100755 example/autoencoder/variational_autoencoder/VAE_example.ipynb create mode 100644 example/bi-lstm-sort/bi-lstm-sort.ipynb delete mode 100644 example/bi-lstm-sort/infer_sort.py delete mode 100644 
example/bi-lstm-sort/lstm.py delete mode 100644 example/bi-lstm-sort/lstm_sort.py delete mode 100644 example/bi-lstm-sort/rnn_model.py delete mode 100644 example/bi-lstm-sort/sort_io.py delete mode 100644 example/cnn_visualization/README.md delete mode 100644 example/cnn_visualization/gradcam_demo.py delete mode 100644 example/cnn_visualization/vgg.py rename example/gluon/{ => actor_critic}/actor_critic.py (100%) create mode 100644 example/gluon/audio/transforms.py create mode 100644 example/gluon/audio/urban_sounds/README.md create mode 100644 example/gluon/audio/urban_sounds/datasets.py create mode 100644 example/gluon/audio/urban_sounds/model.py create mode 100644 example/gluon/audio/urban_sounds/predict.py create mode 100644 example/gluon/audio/urban_sounds/requirements.txt create mode 100644 example/gluon/audio/urban_sounds/train.py rename example/gluon/{DCGAN => dc_gan}/README.md (100%) rename example/gluon/{DCGAN => dc_gan}/__init__.py (100%) rename example/gluon/{DCGAN => dc_gan}/dcgan.py (100%) rename example/gluon/{DCGAN => dc_gan}/inception_score.py (100%) rename example/gluon/{ => house_prices}/kaggle_k_fold_cross_validation.py (100%) delete mode 100644 example/gluon/learning_rate_manipulation.py rename example/gluon/{ => lstm_crf}/lstm_crf.py (95%) rename example/gluon/{ => mnist}/mnist.py (100%) rename example/gluon/{ => super_resolution}/super_resolution.py (100%) create mode 100644 example/gluon/tree_lstm/README.md delete mode 100644 example/memcost/Makefile delete mode 100644 example/memcost/README.md delete mode 100644 example/memcost/inception_memcost.py rename example/{ => module}/utils/__init__.py (100%) rename example/{ => module}/utils/get_data.py (94%) delete mode 100644 example/multi-task/example_multi_task.py create mode 100644 example/multi-task/multi-task-learning.ipynb delete mode 100644 example/notebooks/README.md delete mode 100644 example/numpy-ops/numpy_softmax.py delete mode 100644 example/onnx/super_resolution.py delete mode 100644 example/python-howto/README.md delete mode 100644 example/python-howto/data_iter.py delete mode 100644 example/python-howto/monitor_weights.py delete mode 100644 example/python-howto/multiple_outputs.py delete mode 100644 example/recommenders/crossentropy.py delete mode 100644 example/recommenders/demo1-MF2-fancy.ipynb delete mode 100644 example/recommenders/demo2-binary.ipynb create mode 100644 example/recommenders/demo2-dssm.ipynb delete mode 100644 example/recommenders/demo3-dssm.ipynb delete mode 100644 example/recommenders/negativesample.py delete mode 100644 example/recommenders/randomproj.py delete mode 100644 example/recommenders/recotools.py delete mode 100644 example/recommenders/symbol_alexnet.py delete mode 100644 example/rnn-time-major/bucket_io.py delete mode 100755 example/rnn-time-major/get_sherlockholmes_data.sh delete mode 100644 example/rnn-time-major/readme.md delete mode 100644 example/rnn-time-major/rnn_cell_demo.py delete mode 100644 example/rnn/large_word_lm/readme.md rename example/{mxnet_adversarial_vae => vae-gan}/README.md (100%) rename example/{mxnet_adversarial_vae => vae-gan}/convert_data.py (100%) rename example/{mxnet_adversarial_vae => vae-gan}/vaegan_mxnet.py (100%) delete mode 100755 example/vae/VAE_example.ipynb create mode 120000 include/dlpack create mode 120000 include/dmlc create mode 120000 include/mshadow create mode 100644 include/mxnet/c_api_error.h create mode 120000 include/nnvm create mode 100644 julia/docs/.gitignore create mode 100644 julia/docs/Project.toml create mode 100644 
make/config/libmxnet.sym create mode 100644 make/config/libmxnet.ver create mode 100644 make/maven/maven_darwin_cpu.mk create mode 100644 make/maven/maven_linux_cpu.mk create mode 100644 make/maven/maven_linux_cu90.mk create mode 100644 make/pip/pip_darwin_cpu.mk create mode 100644 make/pip/pip_darwin_mkl.mk create mode 100644 make/pip/pip_linux_cpu.mk create mode 100644 make/pip/pip_linux_cu100.mk create mode 100644 make/pip/pip_linux_cu100mkl.mk create mode 100644 make/pip/pip_linux_cu75.mk create mode 100644 make/pip/pip_linux_cu75mkl.mk create mode 100644 make/pip/pip_linux_cu80.mk create mode 100644 make/pip/pip_linux_cu80mkl.mk create mode 100644 make/pip/pip_linux_cu90.mk create mode 100644 make/pip/pip_linux_cu90mkl.mk create mode 100644 make/pip/pip_linux_cu91.mk create mode 100644 make/pip/pip_linux_cu91mkl.mk create mode 100644 make/pip/pip_linux_cu92.mk create mode 100644 make/pip/pip_linux_cu92mkl.mk create mode 100644 make/pip/pip_linux_mkl.mk delete mode 100644 scala-package/assembly/linux-x86_64-cpu/pom.xml delete mode 100644 scala-package/assembly/linux-x86_64-cpu/src/main/assembly/assembly.xml delete mode 100644 scala-package/assembly/linux-x86_64-gpu/pom.xml delete mode 100644 scala-package/assembly/linux-x86_64-gpu/src/main/assembly/assembly.xml delete mode 100644 scala-package/assembly/osx-x86_64-cpu/main/assembly/assembly.xml delete mode 100644 scala-package/assembly/osx-x86_64-cpu/pom.xml delete mode 100644 scala-package/assembly/osx-x86_64-cpu/src/main/assembly/assembly.xml delete mode 100644 scala-package/assembly/src/javadoc.xml create mode 100644 scala-package/assembly/src/main/assembly/assembly.xml create mode 100644 scala-package/assembly/src/main/assembly/javadoc.xml create mode 100644 scala-package/assembly/src/main/assembly/source.xml delete mode 100644 scala-package/assembly/src/source.xml create mode 100644 scala-package/core/src/main/scala/org/apache/mxnet/Image.scala create mode 100644 scala-package/core/src/main/scala/org/apache/mxnet/MX_PRIMITIVES.scala create mode 100644 scala-package/core/src/main/scala/org/apache/mxnet/javaapi/NDArray.scala create mode 100644 scala-package/core/src/main/scala/org/apache/mxnet/util/OptionConversion.scala create mode 100644 scala-package/core/src/test/java/org/apache/mxnet/javaapi/NDArrayTest.java create mode 100644 scala-package/core/src/test/java/org/apache/mxnet/javaapi/ResourceScopeTestSuite.java create mode 100644 scala-package/core/src/test/scala/org/apache/mxnet/ImageSuite.scala create mode 100644 scala-package/deploy/pom.xml create mode 100644 scala-package/deploy/src/main/deploy/deploy.xml create mode 100644 scala-package/examples/scripts/benchmark/run_java_inference_bm.sh create mode 100755 scala-package/examples/scripts/infer/objectdetector/run_ssd_java_example.sh create mode 100755 scala-package/examples/scripts/infer/predictor/run_predictor_java_example.sh create mode 100644 scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/benchmark/InferBase.java create mode 100644 scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/benchmark/JavaBenchmark.java create mode 100644 scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/benchmark/ObjectDetectionBenchmark.java create mode 100644 scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/objectdetector/README.md create mode 100644 scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/objectdetector/SSDClassifierExample.java create mode 100644 
scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/predictor/PredictorExample.java create mode 100644 scala-package/examples/src/main/java/org/apache/mxnetexamples/javaapi/infer/predictor/README.md create mode 100644 scala-package/examples/src/main/scala/org/apache/mxnetexamples/infer/predictor/PredictorExample.scala create mode 100644 scala-package/examples/src/test/scala/org/apache/mxnetexamples/infer/predictor/PredictorExampleSuite.scala create mode 100644 scala-package/infer/src/main/scala/org/apache/mxnet/infer/javaapi/ObjectDetector.scala create mode 100644 scala-package/infer/src/main/scala/org/apache/mxnet/infer/javaapi/ObjectDetectorOutput.scala create mode 100644 scala-package/infer/src/main/scala/org/apache/mxnet/infer/javaapi/Predictor.scala create mode 100644 scala-package/infer/src/test/java/org/apache/mxnet/infer/javaapi/ObjectDetectorOutputTest.java create mode 100644 scala-package/infer/src/test/java/org/apache/mxnet/infer/javaapi/ObjectDetectorTest.java create mode 100644 scala-package/infer/src/test/java/org/apache/mxnet/infer/javaapi/PredictorTest.java delete mode 100644 scala-package/init-native/linux-x86_64/pom.xml delete mode 100644 scala-package/init-native/osx-x86_64/pom.xml create mode 100644 scala-package/init-native/src/main/native/org_apache_mxnet_init_native_c_api.h create mode 100644 scala-package/macros/src/main/scala/org/apache/mxnet/GeneratorBase.scala create mode 100644 scala-package/macros/src/main/scala/org/apache/mxnet/javaapi/JavaNDArrayMacro.scala create mode 100644 scala-package/memory-management.md create mode 100644 scala-package/mxnet-demo/java-demo/Makefile create mode 100644 scala-package/mxnet-demo/java-demo/README.md rename tools/pip_package/MANIFEST.in => scala-package/mxnet-demo/java-demo/bin/java_sample.sh (81%) mode change 100644 => 100755 rename ci/docker/qemu/ansible.cfg => scala-package/mxnet-demo/java-demo/bin/run_od.sh (81%) mode change 100644 => 100755 create mode 100644 scala-package/mxnet-demo/java-demo/pom.xml create mode 100644 scala-package/mxnet-demo/java-demo/src/main/java/mxnet/HelloWorld.java create mode 100644 scala-package/mxnet-demo/java-demo/src/main/java/mxnet/ObjectDetection.java rename scala-package/mxnet-demo/{ => scala-demo}/Makefile (90%) rename scala-package/mxnet-demo/{ => scala-demo}/README.md (85%) rename scala-package/mxnet-demo/{ => scala-demo}/bin/demo.sh (100%) rename scala-package/mxnet-demo/{ => scala-demo}/bin/run_im.sh (100%) rename scala-package/mxnet-demo/{ => scala-demo}/pom.xml (87%) rename scala-package/mxnet-demo/{ => scala-demo}/src/main/scala/sample/HelloWorld.scala (100%) rename scala-package/mxnet-demo/{ => scala-demo}/src/main/scala/sample/ImageClassificationExample.scala (100%) create mode 100644 scala-package/native/README.md delete mode 100644 scala-package/native/linux-x86_64-cpu/pom.xml delete mode 100644 scala-package/native/linux-x86_64-gpu/pom.xml delete mode 100644 scala-package/native/osx-x86_64-cpu/pom.xml create mode 100644 scala-package/native/src/main/native/org_apache_mxnet_native_c_api.h create mode 100644 scala-package/packageTest/Makefile create mode 100644 scala-package/packageTest/README.md create mode 100644 scala-package/packageTest/core/pom.xml create mode 120000 scala-package/packageTest/core/scripts create mode 100644 scala-package/packageTest/examples/pom.xml create mode 120000 scala-package/packageTest/examples/scripts create mode 100644 scala-package/packageTest/infer/pom.xml create mode 100644 scala-package/packageTest/pom.xml create mode 
100644 src/operator/c_lapack_api.cc create mode 100644 src/operator/contrib/adamw-inl.h create mode 100644 src/operator/contrib/adamw.cc create mode 100644 src/operator/contrib/adamw.cu create mode 100644 src/operator/contrib/boolean_mask-inl.h create mode 100644 src/operator/contrib/boolean_mask.cc create mode 100644 src/operator/contrib/dgl_graph-inl.h create mode 100644 src/operator/contrib/dgl_graph.cc create mode 100644 src/operator/contrib/dgl_graph.cu create mode 100644 src/operator/contrib/nnz.cc create mode 100644 src/operator/nn/mkldnn/mkldnn_concat-inl.h create mode 100644 src/operator/quantization/mkldnn/mkldnn_quantized_concat.cc create mode 100644 src/operator/quantization/quantized_concat.cc rename src/operator/tensor/{la_op_inline.h => la_op-inl.h} (77%) create mode 100644 tests/cpp/include/test_mkldnn.h delete mode 100644 tests/cpp/operator/mkldnn.cc create mode 100644 tests/cpp/operator/mkldnn_operator_test.cc create mode 100644 tests/cpp/operator/mkldnn_test.cc create mode 100644 tests/python-pytest/onnx/README.md rename tests/python-pytest/onnx/{export => }/backend.py (57%) rename tests/python-pytest/onnx/{import/mxnet_backend_rep.py => backend_rep.py} (56%) create mode 100644 tests/python-pytest/onnx/backend_test.py delete mode 100644 tests/python-pytest/onnx/export/backend_rep.py delete mode 100644 tests/python-pytest/onnx/export/mxnet_export_test.py delete mode 100644 tests/python-pytest/onnx/export/onnx_backend_test.py delete mode 100644 tests/python-pytest/onnx/import/gluon_backend.py delete mode 100644 tests/python-pytest/onnx/import/gluon_backend_rep.py delete mode 100644 tests/python-pytest/onnx/import/gluon_backend_test.py delete mode 100644 tests/python-pytest/onnx/import/mxnet_backend.py delete mode 100644 tests/python-pytest/onnx/import/mxnet_backend_test.py delete mode 100644 tests/python-pytest/onnx/import/onnx_import_test.py delete mode 100644 tests/python-pytest/onnx/import/test_cases.py create mode 100644 tests/python-pytest/onnx/mxnet_export_test.py create mode 100644 tests/python-pytest/onnx/test_cases.py create mode 100644 tests/python-pytest/onnx/test_models.py create mode 100644 tests/python-pytest/onnx/test_node.py create mode 100644 tests/python/unittest/test_dgl_graph.py rename julia/deps/cpcblas.sh => tests/python/unittest/test_libinfo.py (75%) mode change 100755 => 100644 create mode 100755 tools/build/build_lib.sh create mode 100755 tools/build/build_wheel.sh create mode 100644 tools/dependencies/README.md create mode 100755 tools/dependencies/cityhash.sh create mode 100755 tools/dependencies/curl.sh create mode 100755 tools/dependencies/eigen.sh create mode 100755 tools/dependencies/libpng.sh create mode 100755 tools/dependencies/libtiff.sh create mode 100755 tools/dependencies/libturbojpeg.sh rename ci/docker/qemu/playbook.yml => tools/dependencies/libz.sh (55%) mode change 100644 => 100755 create mode 100755 tools/dependencies/lz4.sh rename example/python-howto/debug_conv.py => tools/dependencies/make_shared_dependencies.sh (58%) mode change 100644 => 100755 create mode 100755 tools/dependencies/openblas.sh create mode 100755 tools/dependencies/opencv.sh create mode 100755 tools/dependencies/openssl.sh create mode 100644 tools/dependencies/patch/opencv_lapack.h create mode 100755 tools/dependencies/protobuf.sh create mode 100755 tools/dependencies/zmq.sh create mode 100644 tools/pip/MANIFEST.in create mode 100644 tools/pip/sanity_test.py create mode 100644 tools/pip/setup.py delete mode 100644 tools/pip_package/README.md delete mode 100755 
tools/pip_package/make_pip_package.sh delete mode 100644 tools/pip_package/setup.py create mode 100755 tools/setup_gpu_build_tools.sh diff --git a/.gitignore b/.gitignore index c8a813649..7eb8e7d6e 100644 --- a/.gitignore +++ b/.gitignore @@ -167,13 +167,11 @@ python/.eggs tests/Makefile tests/mxnet_unit_tests -# generated wrappers for ccache -cc -cxx - # Code coverage related .coverage *.gcov *.gcno coverage.xml +# Local CMake build config +cmake_options.yml diff --git a/.travis.yml b/.travis.yml index b5c323f3a..079293059 100644 --- a/.travis.yml +++ b/.travis.yml @@ -35,8 +35,9 @@ script: - mv make/osx.mk config.mk - make -j 2 + # Temporarily disabled due to https://github.com/apache/incubator-mxnet/issues/13136 # We ignore several tests to avoid possible timeouts on large PRs. # This lowers our test coverage, but is required for consistent Travis runs. # These tests will be tested in a variety of environments in Jenkins based tests. - - python -m nose --with-timer --exclude-test=test_sparse_operator.test_elemwise_binary_ops --exclude-test=test_gluon_model_zoo.test_models --exclude-test=test_random.test_shuffle --exclude-test=test_operator.test_broadcast_binary_op --exclude-test=test_operator.test_pick --exclude-test=test_profiler.test_continuous_profile_and_instant_marker --exclude-test=test_metric_perf.test_metric_performance --exclude-test=test_operator.test_order --verbose tests/python/unittest/ - - python2 -m nose --verbose tools/coreml/test --exclude-test=test_mxnet_image +# - python -m nose --with-timer --exclude-test=test_sparse_operator.test_elemwise_binary_ops --exclude-test=test_gluon_model_zoo.test_models --exclude-test=test_random.test_shuffle --exclude-test=test_operator.test_broadcast_binary_op --exclude-test=test_operator.test_pick --exclude-test=test_profiler.test_continuous_profile_and_instant_marker --exclude-test=test_metric_perf.test_metric_performance --exclude-test=test_operator.test_order --verbose tests/python/unittest/ +# - python2 -m nose --verbose tools/coreml/test --exclude-test=test_mxnet_image diff --git a/3rdparty/mkldnn b/3rdparty/mkldnn index 0e7ca7388..a7c5f5383 160000 --- a/3rdparty/mkldnn +++ b/3rdparty/mkldnn @@ -1 +1 @@ -Subproject commit 0e7ca738866d22cc700aa33b8de120b938f910d0 +Subproject commit a7c5f53832acabade6e5086e72c960adedb3c38a diff --git a/3rdparty/mshadow b/3rdparty/mshadow index 696803bd7..6dc04f7c7 160000 --- a/3rdparty/mshadow +++ b/3rdparty/mshadow @@ -1 +1 @@ -Subproject commit 696803bd7723ade8230af878460d96c68a550fbc +Subproject commit 6dc04f7c729cd5c6c6210d5d4d2026a26ce0bfbf diff --git a/CMakeLists.txt b/CMakeLists.txt index d4bda9f40..45ab12d14 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -21,7 +21,7 @@ mxnet_option(USE_LAPACK "Build with lapack support" ON) mxnet_option(USE_NGRAPH "Build with nGraph support" ON) mxnet_option(USE_MKL_IF_AVAILABLE "Use MKL if found" ON) mxnet_option(USE_MKLML_MKL "Use MKLDNN variant of MKL (if MKL found)" ON IF USE_MKL_IF_AVAILABLE AND (NOT APPLE)) -mxnet_option(USE_MKLDNN "Use MKLDNN variant of MKL (if MKL found)" ON IF USE_MKL_IF_AVAILABLE AND (NOT APPLE) AND (NOT USE_NGRAPH)) +mxnet_option(USE_MKLDNN "Use MKLDNN variant of MKL (if MKL found)" ON IF USE_MKL_IF_AVAILABLE AND (NOT APPLE) AND (NOT MSVC) AND (CMAKE_SYSTEM_PROCESSOR MATCHES x86_64) AND (NOT USE_NGRAPH)) mxnet_option(USE_OPERATOR_TUNING "Enable auto-tuning of operators" ON IF NOT MSVC) mxnet_option(USE_GPERFTOOLS "Build with GPerfTools support (if found)" ON) mxnet_option(USE_JEMALLOC "Build with Jemalloc support" ON) @@ 
-113,27 +113,17 @@ else(MSVC) endif() # For cross complication, turn off flag if target device does not support it if(USE_F16C) - check_cxx_compiler_flag("-mf16c" COMPILER_SUPPORT_MF16C) - if(CMAKE_SYSTEM_NAME STREQUAL "Linux") - execute_process(COMMAND cat /proc/cpuinfo - COMMAND grep flags - COMMAND grep f16c - OUTPUT_VARIABLE CPU_SUPPORT_F16C) - elseif(CMAKE_SYSTEM_NAME STREQUAL "Darwin") - execute_process(COMMAND sysctl -a - COMMAND grep machdep.cpu.features - COMMAND grep F16C - OUTPUT_VARIABLE CPU_SUPPORT_F16C) - endif() - if(NOT CPU_SUPPORT_F16C) - message("CPU does not support F16C instructions") - endif() - if(CPU_SUPPORT_F16C AND COMPILER_SUPPORT_MF16C) - set(SUPPORT_F16C TRUE) - endif() + # Determine if hardware supports F16C instruction set + message(STATUS "Determining F16C support") + include(cmake/AutoDetectF16C.cmake) else() set(SUPPORT_F16C FALSE) endif() + if(SUPPORT_F16C) + set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mf16c") + else() + add_definitions(-DMSHADOW_USE_F16C=0) + endif() set(CMAKE_POSITION_INDEPENDENT_CODE ON) set(CMAKE_C_FLAGS "-Wall -Wno-unknown-pragmas -Wno-sign-compare") if ("${CMAKE_CXX_COMPILER_ID}" MATCHES ".*Clang$") @@ -304,13 +294,17 @@ if(ENABLE_TESTCOVERAGE) endif() if(USE_MKLDNN) - include(cmake/MklDnn.cmake) + include(cmake/DownloadMKLML.cmake) # CPU architecture (e.g., C5) can't run on another architecture (e.g., g3). if(NOT MSVC) set(ARCH_OPT_FLAGS "-mtune=generic") + else() + set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /EHsc") + set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /EHsc /Gy") endif() - set(WITH_TEST OFF) - set(WITH_EXAMPLE OFF) + set(WITH_TEST OFF CACHE INTERNAL "" FORCE) + set(WITH_EXAMPLE OFF CACHE INTERNAL "" FORCE) + set(ARCH_OPT_FLAGS "" CACHE INTERNAL "" FORCE) add_subdirectory(3rdparty/mkldnn) include_directories(3rdparty/mkldnn/include) @@ -498,6 +492,7 @@ endif() # ---[ LAPack if(USE_LAPACK) + message("USE_LAPACK is ON") add_definitions(-DMXNET_USE_LAPACK=1) if (NOT MSVC) list(APPEND mxnet_LINKER_LIBS lapack) @@ -816,8 +811,12 @@ install(TARGETS ${MXNET_INSTALL_TARGETS} LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR} ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR} ) +# NOTE: Public headers will be installed into ${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_INCLUDEDIR}, see +# https://cmake.org/cmake/help/v3.0/variable/CMAKE_INSTALL_PREFIX.html +# https://cmake.org/cmake/help/v3.0/module/GNUInstallDirs.html install(DIRECTORY include/ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}) +install(DIRECTORY 3rdparty/tvm/nnvm/include/ DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}) if (INSTALL_EXAMPLES) install(DIRECTORY example DESTINATION ${CMAKE_INSTALL_DATADIR}/${PROJECT_NAME}) endif() diff --git a/CODEOWNERS b/CODEOWNERS index 5a88e89df..a9655ecf0 100644 --- a/CODEOWNERS +++ b/CODEOWNERS @@ -12,15 +12,19 @@ * @apache/mxnet-committers # Language bindings -/R-package/ @thirdwing -/scala-package/ @yzhliu @nswamy -/perl-package/ @sergeykolychev -/python/ @szha -/contrib/clojure-package/ @gigasquid +/R-package/ @thirdwing +/scala-package/ @yzhliu @nswamy @pllarroy +/perl-package/ @sergeykolychev +/python/ @szha @pllarroy +/python/mxnet/kvstore.py @eric-haibin-lin +/python/mxnet/optimizer/ @eric-haibin-lin +/python/mxnet/gluon/trainer.py @eric-haibin-lin +/contrib/clojure-package/ @gigasquid +/julia/ @iblis17 # C++ base /src/kvstore/ @rahul003 @anirudh2290 -/include/ @anirudh2290 +/include/ @anirudh2290 @pllarroy /src/c_api/ @anirudh2290 /src/common/ @anirudh2290 /src/engine/ @anirudh2290 @@ -31,15 +35,20 @@ /src/nnvm/ @anirudh2290 /src/operator/ 
@anirudh2290 /src/profiler/ @anirudh2290 +/src/kvstore/ @eric-haibin-lin /src/storage/ @anirudh2290 /tests/cpp/ @anirudh2290 -/cpp-package/ @nswamy +/cpp-package/ @nswamy @pllarroy +/src/ @pllarroy +/plugin/ @pllarroy # CMake -CMakeLists.txt @szha @rahul003 -/cmake/ @szha @rahul003 +CMakeLists.txt @szha @rahul003 @pllarroy +/cmake/ @szha @rahul003 @pllarroy # MXNet CI +dev_menu.py @pllarroy +/ci/ @pllarroy /tests/ci_build/ @marcoabreu Jenkinsfile @marcoabreu .travis.yml @marcoabreu @@ -50,16 +59,16 @@ Makefile @szha prepare_mkl.sh @szha # Docs -/docs/ @szha +/docs/ @szha @pllarroy # Submodules .gitmodules @szha # Examples -/example/ @szha +/example/ @szha @pllarroy # Tools -/tools/ @szha +/tools/ @szha @pllarroy # Github templates /.github/ @szha diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md index 5e5e76d52..5b5fdce71 100644 --- a/CONTRIBUTORS.md +++ b/CONTRIBUTORS.md @@ -58,6 +58,7 @@ who are willing to help maintaining and leading the project. Committers come fro New committers will be proposed by current committers, with support from more than two of current committers. + List of Contributors -------------------- * [Full List of Contributors](https://github.com/apache/incubator-mxnet/graphs/contributors) @@ -187,3 +188,22 @@ List of Contributors * [LuckyPigeon](https://github.com/LuckyPigeon) * [Anton Chernov](https://github.com/lebeg) * [Denisa Roberts](https://github.com/D-Roberts) +* [Dick Carter](https://github.com/DickJC123) +* [Rahul Padmanabhan](https://github.com/rahul3) +* [Yuxi Hu](https://github.com/yuxihu) +* [Harsh Patel](https://github.com/harshp8l) +* [Xiao Wang](https://github.com/BeyonderXX) +* [Piyush Ghai](https://github.com/piyushghai) + +Label Bot +--------- +* [mxnet-label-bot](https://github.com/mxnet-label-bot) + - mxnet-label-bot provides users with the functionality to manage labels for Issues/Pull Requests on the repository + - To use me, comment: + - @mxnet-label-bot add [specify comma separated labels here] + - @mxnet-label-bot remove [specify comma separated labels here] + - @mxnet-label-bot update [specify comma separated labels here] + (i.e. @mxnet-label-bot update [Bug, Python]) + + - Available label names which are supported: [Labels](https://github.com/apache/incubator-mxnet/labels) + - For further details: [My Wiki Page](https://cwiki.apache.org/confluence/display/MXNET/Machine+Learning+Based+GitHub+Bot) diff --git a/LICENSE b/LICENSE index a8b57e583..b73ba3740 100644 --- a/LICENSE +++ b/LICENSE @@ -216,18 +216,65 @@ The following components are provided under an Apache 2.0 license. 1. MXNet Cpp-package - For details, /cpp-package/LICENSE + Copyright (c) 2015-2016 by Contributors 2. MXNet rcnn - For details, see, example/rcnn/LICENSE - 3. scala-package - For details, see, scala-package/LICENSE - 4. Warp-CTC - For details, see, src/operator/contrib/ctc_include/LICENSE + Copyright (c) 2014, 2015, The Regents of the University of California (Regents) + 3. MXNet scala-package - For details, see, scala-package/LICENSE + Copyright (c) 2014, 2015, the respective contributors + 4. Warp-CTC - For details, see, 3rdparty/ctc_include/LICENSE + Copyright 2015-2016, Baidu USA LLC. 5. 3rdparty/dlpack - For details, see, 3rdparty/dlpack/LICENSE + Copyright 2017 by Contributors 6. 3rdparty/dmlc-core - For details, see, 3rdparty/dmlc-core/LICENSE + Copyright (c) 2015 by Contributors + Copyright 2015 by dmlc-core developers + Copyright by Contributors 7. 
3rdparty/mshadow - For details, see, 3rdparty/mshadow/LICENSE + Copyright (c) 2014-2016 by Contributors + Copyright by Contributors 8. 3rdparty/tvm - For details, see, 3rdparty/tvm/LICENSE - 9. 3rdparty/tvm/dmlc-core - For details, see, 3rdparty/tvm/dmlc-core/LICENSE - 10. 3rdparty/tvm/nnvm - For details, see, 3rdparty/tvm/nnvm/LICENSE + Copyright (c) 2016-2018 by Contributors + Copyright 2018 by Contributors + Copyright (c) 2018 by Xilinx, Contributors + 9. 3rdparty/tvm/dmlc-core - For details, see, 3rdparty/tvm/3rdparty/dmlc-core/LICENSE + Copyright (c) 2015 by Contributors + 10. 3rdparty/tvm/dlpack - For details, see, 3rdparty/tvm/3rdparty/dlpack/LICENSE + Copyright (c) 2015-2017 by Contributors + Copyright by Contributors 11. 3rdparty/ps-lite - For details, see, 3rdparty/ps-lite/LICENSE + Copyright 2015 Carnegie Mellon University + Copyright 2016, ps-lite developers + Copyright (c) 2015-2016 by Contributors + Copyright by Contributors 12. 3rdparty/mkldnn - For details, see, 3rdparty/mkldnn/LICENSE + Copyright (c) 2017-2018 Intel Corporation + Copyright 2016-2018 Intel Corporation + Copyright 2018 YANDEX LLC 13. googlemock scripts/generator - For details, see, 3rdparty/googletest/googlemock/scripts/generator/LICENSE + Copyright [2007-2009] Neal Norwitz + Portions Copyright [2007-2009] Google Inc. + 14. MXNet clojure-package - For details, see, contrib/clojure-package/LICENSE + Copyright 2018 by Contributors + 15. MXNet R-package - For details, see, R-package/LICENSE + Copyright (c) 2015 by Contributors + 16. ONNX-TensorRT benchmark package - For details, see, 3rdparty/onnx-tensorrt/third_party/onnx/third_party/benchmark/LICENSE + Copyright 2015 Google Inc. All rights reserved. + Copyright 2016 Ismael Jimenez Martinez. All rights reserved. + Copyright 2017 Roman Lebedev. All rights reserved. + 17. Dockerfiles - For details, see docker/Dockerfiles/License.md + 18. MXNet Julia Package - For details, see julia/LICENSE.md + Copyright (c) 2015-2018 by Chiyuan Zhang + 19. Benchdnn - For details, see 3rdparty/mkldnn/tests/benchdnn/README.md + Copyright 2017-2018 Intel Corporation + 20. MXNet perl-package - For details, see perl-package/README + 21. MXNet perl-package AI-MXNET - For details, see perl-package/AI-MXNet/README + 22. MXNet perl-package AI-MXNET Gluon Contrib - For details, see perl-package/AI-MXNet-Gluon-Contrib/README + 23. MXNet perl-package AI-MXNET Gluon ModelZoo - For details, see perl-package/AI-MXNet-Gluon-ModelZoo/README + 24. MXNet perl-package AI-MXNETCAPI - For details, see perl-package/AI-MXNetCAPI/README + 25. MXNet perl-package AI-NNVMCAPI - For details, see perl-package/AI-NNVMCAPI/README + 26. Cephes Library Functions - For details, see src/operator/special_functions-inl.h + Copyright (c) 2015 by Contributors + Copyright 1984, 1987, 1992 by Stephen L. Moshier ======================================================================================= @@ -235,75 +282,111 @@ ======================================================================================= 1. Fast R-CNN - For details, see example/rcnn/LICENSE + Copyright (c) Microsoft Corporation 2. Faster R-CNN - For details, see example/rcnn/LICENSE + Copyright (c) 2015 Microsoft Corporation 3. tree_lstm - For details, see example/gluon/tree_lstm/LICENSE + Copyright (c) 2017 Riddhiman Dasgupta, Sheng Zha 4. OpenMP - For details, see 3rdparty/openmp/LICENSE.txt - 5. HalideIR - For details, see nnvm/tvm/HalideIR/LICENSE + Copyright (c) 1997-2016 Intel Corporation + 6. 
HalideIR - For details, see 3rdparty/tvm/3rdparty/HalideIR/LICENSE + Copyright (c) 2016 HalideIR contributors + Copyright (c) 2012-2014 MIT CSAIL, Google Inc., and other contributors + Copyright (c) 2016-2018 by Contributors + 7. ONNX-TensorRT - For details, see 3rdparty/onnx-tensorrt/LICENSE + Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. + Copyright (c) 2018 Open Neural Network Exchange + 8. ONNX-TensorRT - For details, see 3rdparty/onnx-tensorrt/third_party/onnx/LICENSE + Copyright (c) Facebook, Inc. and Microsoft Corporation. + 9. clipboard.js - Refer to https://zenorocha.github.io/clipboard.js + Licensed MIT © Zeno Rocha + 10. clipboard.min.js - Refer to https://zenorocha.github.io/clipboard.js + Licensed MIT © Zeno Rocha ======================================================================================= - NVIDIA Licenses + 3-clause BSD licenses ======================================================================================= - 1. Moderngpu - For details, see, src/operator/contrib/ctc_include/contrib/moderngpu/LICENSE - - /****************************************************************************** - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. - * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ - - 2. CUB Library - For details, see, 3rdparty/cub/LICENSE.TXT + 1. Xbyak - For details, see 3rdparty/mkldnn/src/cpu/xbyak/COPYRIGHT + Copyright (c) 2007 MITSUNARI Shigeo + Copyright 2016-2018 Intel Corporation + 2. gtest - For details, see, 3rdparty/mkldnn/tests/gtests/gtest/LICENSE + Copyright 2005-2008, Google Inc. + 3. Moderngpu - For details, see, 3rdparty/ctc_include/contrib/moderngpu/LICENSE + Copyright (c) 2013, NVIDIA CORPORATION. All rights reserved. + 4. CUB Library - For details, see, 3rdparty/cub/LICENSE.TXT + Copyright (c) 2010-2011, Duane Merrill. All rights reserved. + Copyright (c) 2011-2016, NVIDIA CORPORATION. All rights reserved. + 5. CUB mersenne.h - For details, see 3rdparty/cub/test/mersenne.h + Copyright (C) 1997 - 2002, Makoto Matsumoto and Takuji Nishimura, + 6. 
Googlemock - For details, see, 3rdparty/googletest/googlemock/LICENSE + Copyright 2006-2015, Google Inc. + 7. Googletest - For details, see, 3rdparty/googletest/googletest/LICENSE + Copyright 2005-2015, Google Inc. + 8. OpenMP Testsuite - For details, see, 3rdparty/openmp/testsuite/LICENSE + Copyright (c) 2011, 2012 University of Houston System + + + ======================================================================================= + 2-clause BSD licenses + ======================================================================================= + + 1. Sphinx JavaScript utilties for the full-text search - For details, see, docs/_static/searchtools_custom.js + Copyright (c) 2007-2017 by the Sphinx team + 2. blockingconcurrentqueue.h - For details, see, 3rdparty/dmlc-core/include/dmlc/blockingconcurrentqueue.h + ©2015-2016 Cameron Desrochers + 3. concurrentqueue.h - For details, see, 3rdparty/dmlc-core/include/dmlc/concurrentqueue.h + Copyright (c) 2013-2016, Cameron Desrochers. + 4. MSCOCO Toolbox - For details, see, example/ssd/dataset/pycocotools/coco.py + Code written by Piotr Dollar and Tsung-Yi Lin, 2014. + 5. PyBind11 FindEigen3.cmake - For details, see 3rdparty/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/FindEigen3.cmake + Copyright (c) 2006, 2007 Montel Laurent, + Copyright (c) 2008, 2009 Gael Guennebaud, + Copyright (c) 2009 Benoit Jacob + 6. PyBind11 FindPythonLibsNew.cmake - For details, see 3rdparty/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/FindPythonLibsNew.cmake + Copyright 2001-2009 Kitware, Inc. + Copyright 2012 Continuum Analytics, Inc. - Redistribution and use in source and binary forms, with or without - modification, are permitted provided that the following conditions are met: - * Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - * Neither the name of the NVIDIA CORPORATION nor the - names of its contributors may be used to endorse or promote products - derived from this software without specific prior written permission. - THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND - ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED - WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE - DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ======================================================================================= Other Licenses ======================================================================================= - 1. Caffe - For details, see, example/rcnn/LICENSE + 1. Caffe - For details, see, example/rcnn/LICENSE + Copyright (c) 2014, 2015, The Regents of the University of California (Regents) + Copyright (c) 2014, 2015, the respective contributors + 2. 
pool.h - For details, see, src/operator/nn/pool.h + Copyright (c) 2014-2017 The Regents of the University of California (Regents) + Copyright (c) 2014-2017, the respective contributors + 3. pool.cuh - For details, see, src/operator/nn/pool.cuh + Copyright (c) 2014-2017 The Regents of the University of California (Regents) + Copyright (c) 2014-2017, the respective contributors + 4. im2col.h - For details, see, src/operator/nn/im2col.h + Copyright (c) 2014-2017 The Regents of the University of California (Regents) + Copyright (c) 2014-2017, the respective contributors + 5. im2col.cuh - For details, see, src/operator/nn/im2col.cuh + Copyright (c) 2014-2017 The Regents of the University of California (Regents) + Copyright (c) 2014-2017, the respective contributors + + 6. deformable_im2col.h - For details, see, src/operator/contrib/nn/deformable_im2col.h + Copyright (c) 2014-2017 The Regents of the University of California (Regents) + Copyright (c) 2014-2017, the respective contributors + + 7. deformable_im2col.cuh - For details, see, src/operator/contrib/nn/deformable_im2col.cuh + Copyright (c) 2014-2017 The Regents of the University of California (Regents) + Copyright (c) 2014-2017, the respective contributors + + + COPYRIGHT + + Caffe uses a shared copyright model: each contributor holds copyright over + their contributions to Caffe. The project versioning records all such + contribution and copyright details. If a contributor wants to further mark + their specific copyright on a particular contribution, they should indicate + their copyright solely in the commit message of the change when it is + committed. LICENSE @@ -335,8 +418,9 @@ ======================================================================================= - 2. MS COCO API + 8. MS COCO API For details, see, example/rcnn/LICENSE + Copyright (c) 2014, Piotr Dollar and Tsung-Yi Lin Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: @@ -364,155 +448,15 @@ ======================================================================================= - 3. Sphinx JavaScript utilties for the full-text search - For details, see, docs/_static/searchtools_custom.js - - Redistribution and use in source and binary forms, with or without - modification, are permitted provided that the following conditions are - met: - - * Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - - * Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- - ======================================================================================= - - 4. FindCrypto.cmake - For details, see, 3rdparty/dmlc-core/cmake/Modules/FindCrypto.cmake, - Redistribution and use is allowed according to the terms of the BSD license. - - ======================================================================================= - - 5. Googlemock - For details, see, 3rdparty/googletest/googlemock/LICENSE - - Redistribution and use in source and binary forms, with or without - modification, are permitted provided that the following conditions are - met: - - * Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above - copyright notice, this list of conditions and the following disclaimer - in the documentation and/or other materials provided with the - distribution. - * Neither the name of Google Inc. nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - - THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - - ======================================================================================= - - 6. Googletest - For details, see, 3rdparty/googletest/googletest/LICENSE - - Redistribution and use in source and binary forms, with or without - modification, are permitted provided that the following conditions are - met: - - * Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above - copyright notice, this list of conditions and the following disclaimer - in the documentation and/or other materials provided with the - distribution. - * Neither the name of Google Inc. nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - - THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- - ======================================================================================= - - 7. OpenMP Testsuite - For details, see, 3rdparty/openmp/testsuite/LICENSE - - Redistribution and use in source and binary forms, with or without - modification, are permitted provided that the following conditions - are met: - - o Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - - o Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - - o Neither the name of the University of Houston System nor the names of its - contributors may be used to - endorse or promote products derived from this software without specific - prior written permission. - - THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED - TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR - PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF - LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING - NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - - ======================================================================================= - - 8. Semaphore implementation in blockingconcurrentqueue.h + 9. Semaphore implementation in blockingconcurrentqueue.h This file uses a semaphore implementation under the terms of its separate zlib license. For details, see, 3rdparty/dmlc-core/include/dmlc/blockingconcurrentqueue.h + Copyright Jeff Preshing ======================================================================================= - 9. blockingconcurrentqueue.h - This file is Distributed under the terms of the simplified BSD license. - For details, see, 3rdparty/dmlc-core/include/dmlc/blockingconcurrentqueue.h - - ======================================================================================= - - 10. concurrentqueue.h - This file is Distributed under the terms of the simplified BSD license. - For details, see, 3rdparty/dmlc-core/include/dmlc/concurrentqueue.h - - ======================================================================================= - - 11. ONNX Export module - For details, see, python/mxnet/contrib/onnx/_export/LICENSE + 10. ONNX Export module + For details, see, python/mxnet/contrib/onnx/mx2onnx/LICENSE # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file @@ -559,4 +503,206 @@ # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + ======================================================================================= + + 11. ONNX python bindings + For details, see, 3rdparty/onnx-tensorrt/third_party/onnx/third_party/pybind11/LICENSE + Copyright (c) 2015-2017 Wenzel Jakob , All rights reserved. 
+ Copyright (c) 2016 Trent Houliston and Wenzel Jakob + Copyright (c) 2016-2017 Jason Rhinelander + Copyright (c) 2016 Klemens Morgenstern and Wenzel Jakob + Copyright (c) 2017 Henry F. Schreiner + Copyright (c) 2016 Sergey Lyskov and Wenzel Jakob + Copyright (c) 2016 Ben North + Copyright (c) 2016 Klemens D. Morgenstern + Copyright (c) 2016 Pim Schellart + Copyright (c) 2016 Ivan Smirnov + Copyright (c) 2016 Sergey Lyskov + + Redistribution and use in source and binary forms, with or without + modification, are permitted provided that the following conditions are met: + + 1. Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + + 2. Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + + 3. Neither the name of the copyright holder nor the names of its contributors + may be used to endorse or promote products derived from this software + without specific prior written permission. + + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND + ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED + WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE + DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE + FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR + SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER + CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, + OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + You are under no obligation whatsoever to provide any bug fixes, patches, or + upgrades to the features, functionality or performance of the source code + ("Enhancements") to anyone; however, if you choose to make your Enhancements + available either publicly, or directly to the author of this software, without + imposing a separate written license agreement for such Enhancements, then you + hereby grant the following license: a non-exclusive, royalty-free perpetual + license to install, use, modify, prepare derivative works, incorporate into + other computer software, distribute, and sublicense such enhancements or + derivative works thereof, in binary and source code form. + + ======================================================================================= + + 12. Clang + For details, see, 3rdparty/onnx-tensorrt/third_party/onnx/third_party/pybind11/tools/clang/LICENSE.TXT + + LLVM Release License + University of Illinois/NCSA + Open Source License + + Copyright (c) 2007-2012 University of Illinois at Urbana-Champaign. + All rights reserved. 
+ + Developed by: + + LLVM Team + + University of Illinois at Urbana-Champaign + + http://llvm.org + + Permission is hereby granted, free of charge, to any person obtaining a copy of + this software and associated documentation files (the "Software"), to deal with + the Software without restriction, including without limitation the rights to + use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies + of the Software, and to permit persons to whom the Software is furnished to do + so, subject to the following conditions: + + * Redistributions of source code must retain the above copyright notice, + this list of conditions and the following disclaimers. + + * Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimers in the + documentation and/or other materials provided with the distribution. + + * Neither the names of the LLVM Team, University of Illinois at + Urbana-Champaign, nor the names of its contributors may be used to + endorse or promote products derived from this Software without specific + prior written permission. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS + FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS WITH THE + SOFTWARE. + + The LLVM software contains code written by third parties. Such software will + have its own individual LICENSE.TXT file in the directory in which it appears. + This file will describe the copyrights, license, and restrictions which apply + to that code. + + The disclaimer of warranty in the University of Illinois Open Source License + applies to all code in the LLVM Distribution, and nothing in any of the + other licenses gives permission to use the names of the LLVM Team or the + University of Illinois to endorse or promote products derived from this + Software. + + The following pieces of software have additional or alternate copyrights, + licenses, and/or restrictions: + + Program Directory + ------- --------- + + + ======================================================================================= + + 13. MKL BLAS + For details, see, [Intel® Simplified license](https://software.intel.com/en-us/license/intel-simplified-software-license) and MKLDNN_README.md + + Copyright (c) 2018 Intel Corporation. + + Use and Redistribution. You may use and redistribute the software (the “Software”), without modification, provided the following conditions are met: + + * Redistributions must reproduce the above copyright notice and the following terms of use in the Software and in the documentation and/or other materials provided with the distribution. + + * Neither the name of Intel nor the names of its suppliers may be used to endorse or promote products derived from this Software without specific prior written permission. + + * No reverse engineering, decompilation, or disassembly of this Software is permitted. + + Limited patent license. 
Intel grants you a world-wide, royalty-free, non-exclusive license under patents it now or hereafter owns or controls to make, have made, use, import, offer to sell and sell (“Utilize”) this Software, but solely to the extent that any such patent is necessary to Utilize the Software alone. The patent license shall not apply to any combinations which include this software. No hardware per se is licensed hereunder. + + Third party and other Intel programs. “Third Party Programs” are the files listed in the “third-party-programs.txt” text file that is included with the Software and may include Intel programs under separate license terms. Third Party Programs, even if included with the distribution of the Materials, are governed by separate license terms and those license terms solely govern your use of those programs. + + DISCLAIMER. THIS SOFTWARE IS PROVIDED "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT ARE DISCLAIMED. THIS SOFTWARE IS NOT INTENDED FOR USE IN SYSTEMS OR APPLICATIONS WHERE FAILURE OF THE SOFTWARE MAY CAUSE PERSONAL INJURY OR DEATH AND YOU AGREE THAT YOU ARE FULLY RESPONSIBLE FOR ANY CLAIMS, COSTS, DAMAGES, EXPENSES, AND ATTORNEYS’ FEES ARISING OUT OF ANY SUCH USE, EVEN IF ANY CLAIM ALLEGES THAT INTEL WAS NEGLIGENT REGARDING THE DESIGN OR MANUFACTURE OF THE MATERIALS. + + LIMITATION OF LIABILITY. IN NO EVENT WILL INTEL BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. YOU AGREE TO INDEMNIFY AND HOLD INTEL HARMLESS AGAINST ANY CLAIMS AND EXPENSES RESULTING FROM YOUR USE OR UNAUTHORIZED USE OF THE SOFTWARE. + + No support. Intel may make changes to the Software, at any time without notice, and is not obligated to support, update or provide training for the Software. + + Termination. Intel may terminate your right to use the Software in the event of your breach of this Agreement and you fail to cure the breach within a reasonable period of time. + + Feedback. Should you provide Intel with comments, modifications, corrections, enhancements or other input (“Feedback”) related to the Software Intel will be free to use, disclose, reproduce, license or otherwise distribute or exploit the Feedback in its sole discretion without any obligations or restrictions of any kind, including without limitation, intellectual property rights or licensing obligations. + + Compliance with laws. You agree to comply with all relevant laws and regulations governing your use, transfer, import or export (or prohibition thereof) of the Software. + + Governing law. All disputes will be governed by the laws of the United States of America and the State of Delaware without reference to conflict of law principles and subject to the exclusive jurisdiction of the state or federal courts sitting in the State of Delaware, and each party agrees that it submits to the personal jurisdiction and venue of those courts and waives any objections. The United Nations Convention on Contracts for the International Sale of Goods (1980) is specifically excluded and will not apply to the Software. 
+ + *Other names and brands may be claimed as the property of others. + + ======================================================================================= + 14. FindJeMalloc.cmake + For details, see, cmake/Modules/FindJeMalloc.cmake + + Licensed to the Apache Software Foundation (ASF) under one + or more contributor license agreements. See the NOTICE file + distributed with this work for additional information + regarding copyright ownership. The ASF licenses this file + to you under the Apache License, Version 2.0 (the + "License"); you may not use this file except in compliance + with the License. You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, + software distributed under the License is distributed on an + "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + KIND, either express or implied. See the License for the + specific language governing permissions and limitations + under the License. + + + Copyright (c) 2014 Thomas Heller + Copyright (c) 2007-2012 Hartmut Kaiser + Copyright (c) 2010-2011 Matt Anderson + Copyright (c) 2011 Bryce Lelbach + + Distributed under the Boost Software License, Version 1.0. + Boost Software License - Version 1.0 - August 17th, 2003 + + Permission is hereby granted, free of charge, to any person or organization + obtaining a copy of the software and accompanying documentation covered by + this license (the "Software") to use, reproduce, display, distribute, + execute, and transmit the Software, and to prepare derivative works of the + Software, and to permit third-parties to whom the Software is furnished to + do so, all subject to the following: + + The copyright notices in the Software and this entire statement, including + the above license grant, this restriction and the following disclaimer, + must be included in all copies of the Software, in whole or in part, and + all derivative works of the Software, unless such copies or derivative + works are solely in the form of machine-executable object code generated by + a source language processor. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT + SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE + FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE, + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + DEALINGS IN THE SOFTWARE. diff --git a/MKLDNN_README.md b/MKLDNN_README.md index 2618d2338..ecb721e5f 100644 --- a/MKLDNN_README.md +++ b/MKLDNN_README.md @@ -1,9 +1,9 @@ # Build/Install MXNet with MKL-DNN -A better training and inference perforamce are expected to achieved on Intel-Architecture CPUs with MXNET built with [Intel MKL-DNN](https://github.com/intel/mkl-dnn) on multiple operating system, including Linux, Windows and MacOS. -In the following sections, you will find building instructions for MXNET with Intel MKL-DNN on Linux, MacOS and Windows. +A better training and inference performance is expected to be achieved on Intel-Architecture CPUs with MXNet built with [Intel MKL-DNN](https://github.com/intel/mkl-dnn) on multiple operating systems, including Linux, Windows and MacOS. +In the following sections, you will find build instructions for MXNet with Intel MKL-DNN on Linux, MacOS and Windows.
-The detailed performance data collected on Intel Xeon CPU with MXNET built with Intel MKL-DNN can be found at [here](https://mxnet.incubator.apache.org/faq/perf.html#intel-cpu). +The detailed performance data collected on Intel Xeon CPU with MXNet built with Intel MKL-DNN can be found [here](https://mxnet.incubator.apache.org/faq/perf.html#intel-cpu).

Contents

@@ -78,12 +78,12 @@ cd incubator-mxnet ### Build MXNet with MKL-DNN ``` -LIBRARY_PATH=$(brew --prefix llvm)/lib/ make -j $(sysctl -n hw.ncpu) CC=$(brew --prefix llvm)/bin/clang++ CXX=$(brew --prefix llvm)/bin/clang++ USE_OPENCV=1 USE_OPENMP=1 USE_MKLDNN=1 USE_BLAS=apple USE_PROFILER=1 +LIBRARY_PATH=$(brew --prefix llvm)/lib/ make -j $(sysctl -n hw.ncpu) CC=$(brew --prefix llvm)/bin/clang CXX=$(brew --prefix llvm)/bin/clang++ USE_OPENCV=1 USE_OPENMP=1 USE_MKLDNN=1 USE_BLAS=apple USE_PROFILER=1 ```

Windows

-On Windows, you can use [Micrsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/) and [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/) to compile MXNET with Intel MKL-DNN. +On Windows, you can use [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/) and [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/) to compile MXNet with Intel MKL-DNN. [Micrsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/) is recommended. **Visual Studio 2015** @@ -123,7 +123,7 @@ cmake -G "Visual Studio 14 Win64" .. -DUSE_CUDA=0 -DUSE_CUDNN=0 -DUSE_NVRTC=0 -D These commands produce a library called ```libmxnet.dll``` in the ```./build/Release/``` or ```./build/Debug``` folder. Also ```libmkldnn.dll``` with be in the ```./build/3rdparty/mkldnn/src/Release/``` -6. Make sure that all the dll files used above(such as `libmkldnn.dll`, `libmklml.dll`, `libiomp5.dll`, `libopenblas.dll`, etc) are added to the system PATH. For convinence, you can put all of them to ```\windows\system32```. Or you will come across `Not Found Dependencies` when loading mxnet. +6. Make sure that all the dll files used above (such as `libmkldnn.dll`, `libmklml.dll`, `libiomp5.dll`, `libopenblas.dll`, etc.) are added to the system PATH. For convenience, you can put all of them into ```\windows\system32```. Otherwise, you will come across `Not Found Dependencies` when loading MXNet. **Visual Studio 2017** @@ -177,7 +177,7 @@ cmake -G "Visual Studio 15 2017 Win64" .. -T host=x64 -DUSE_CUDA=0 -DUSE_CUDNN=0 msbuild mxnet.sln /p:Configuration=Release;Platform=x64 /maxcpucount ``` -9. Make sure that all the dll files used above(such as `libmkldnn.dll`, `libmklml.dll`, `libiomp5.dll`, `libopenblas.dll`, etc) are added to the system PATH. For convinence, you can put all of them to ```\windows\system32```. Or you will come across `Not Found Dependencies` when loading mxnet. +9. Make sure that all the dll files used above (such as `libmkldnn.dll`, `libmklml.dll`, `libiomp5.dll`, `libopenblas.dll`, etc.) are added to the system PATH. For convenience, you can put all of them into ```\windows\system32```. Otherwise, you will come across `Not Found Dependencies` when loading MXNet.

Verify MXNet with python
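A quick way to confirm the build is a short Python session that loads the library and runs a small CPU convolution, which exercises an MKL-DNN kernel on a `USE_MKLDNN=1` build. This is a minimal sketch rather than the full verification procedure; in particular, `MKLDNN_VERBOSE` is MKL-DNN's own logging switch and is assumed here to be honored by the bundled MKL-DNN version.

```
import os
os.environ["MKLDNN_VERBOSE"] = "1"  # assumption: MKL-DNN's verbose flag; prints primitive calls when the backend is active

import mxnet as mx

# A small convolution on CPU should go through an MKL-DNN primitive in a USE_MKLDNN=1 build.
data = mx.nd.random.uniform(shape=(1, 3, 224, 224), ctx=mx.cpu())
weight = mx.nd.random.uniform(shape=(8, 3, 3, 3), ctx=mx.cpu())
out = mx.nd.Convolution(data=data, weight=weight, kernel=(3, 3), num_filter=8, no_bias=True)
print(out.shape)  # expected: (1, 8, 222, 222)
```

If MKL-DNN is active, the verbose flag should print `mkldnn_verbose` lines for the convolution primitive; if nothing is printed, the build is likely falling back to the default CPU implementation.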

diff --git a/MXNET_README.md b/MXNET_README.md index 23b9d329d..369df9b64 100644 --- a/MXNET_README.md +++ b/MXNET_README.md @@ -33,6 +33,7 @@ How to Contribute What's New ---------- +* [Version 1.3.1 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.3.1) - MXNet 1.3.1 Patch Release. * [Version 1.3.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.3.0) - MXNet 1.3.0 Release. * [Version 1.2.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.2.0) - MXNet 1.2.0 Release. * [Version 1.1.0 Release](https://github.com/apache/incubator-mxnet/releases/tag/1.1.0) - MXNet 1.1.0 Release. diff --git a/Makefile b/Makefile index 50639cebf..cf05a0035 100644 --- a/Makefile +++ b/Makefile @@ -23,16 +23,14 @@ print-%: @echo '$*=$($*)' .DEFAULT_GOAL := all - ROOTDIR = $(CURDIR) TPARTYDIR = $(ROOTDIR)/3rdparty -SCALA_VERSION_PROFILE := scala-2.11 - ifeq ($(OS),Windows_NT) UNAME_S := Windows else UNAME_S := $(shell uname -s) + UNAME_P := $(shell uname -p) endif ifndef config @@ -69,6 +67,15 @@ endif # use customized config file include $(config) +ifndef USE_MKLDNN +ifneq ($(UNAME_S), Darwin) +ifneq ($(UNAME_S), Windows) +ifeq ($(UNAME_P), x86_64) + USE_MKLDNN=1 +endif +endif +endif +endif ifeq ($(USE_MKL2017), 1) $(warning "USE_MKL2017 is deprecated. We will switch to USE_MKLDNN.") USE_MKLDNN=1 @@ -80,12 +87,10 @@ ifeq ($(USE_MKLDNN), 1) export USE_MKLML = 1 endif - include $(TPARTYDIR)/mshadow/make/mshadow.mk include $(DMLC_CORE)/make/dmlc.mk include 3rdparty/ngraph-mxnet-bridge/ngraph.mk - # all tge possible warning tread WARNFLAGS= -Wall -Wsign-compare -Wno-comment CFLAGS = -DMSHADOW_FORCE_STREAM $(WARNFLAGS) @@ -123,6 +128,7 @@ ifeq ($(USE_TENSORRT), 1) CFLAGS += -I$(ROOTDIR) -I$(TPARTYDIR) -DONNX_NAMESPACE=$(ONNX_NAMESPACE) -DMXNET_USE_TENSORRT=1 LDFLAGS += -lprotobuf -pthread -lonnx -lonnx_proto -lnvonnxparser -lnvonnxparser_runtime -lnvinfer -lnvinfer_plugin endif +# -L/usr/local/lib ifeq ($(DEBUG), 1) NVCCFLAGS += -std=c++11 -Xcompiler -D_FORCE_INLINES -g -G -O0 -ccbin $(CXX) $(MSHADOW_NVCCFLAGS) @@ -325,7 +331,6 @@ ifeq ($(USE_ASAN), 1) CFLAGS += -fsanitize=address -fno-omit-frame-pointer LDFLAGS += -lasan endif - ifneq ($(ADD_CFLAGS), NONE) CFLAGS += $(ADD_CFLAGS) endif @@ -367,7 +372,7 @@ endif # be JIT-compiled by the updated driver from the included PTX. ifeq ($(USE_CUDA), 1) ifeq ($(CUDA_ARCH),) - KNOWN_CUDA_ARCHS := 30 35 50 52 60 61 70 + KNOWN_CUDA_ARCHS := 30 35 50 52 60 61 70 75 # Run nvcc on a zero-length file to check architecture-level support. # Create args to include SASS in the fat binary for supported levels. 
CUDA_ARCH := $(foreach arch,$(KNOWN_CUDA_ARCHS), \ @@ -427,18 +432,13 @@ PLUGIN_OBJ = PLUGIN_CUOBJ = include $(MXNET_PLUGINS) -ifeq ($(UNAME_S), Windows) - # TODO(yizhi) currently scala package does not support windows - SCALA_PKG_PROFILE := windows -else +ifneq ($(UNAME_S), Windows) ifeq ($(UNAME_S), Darwin) WHOLE_ARCH= -all_load NO_WHOLE_ARCH= -noall_load - SCALA_PKG_PROFILE := osx-x86_64 else WHOLE_ARCH= --whole-archive NO_WHOLE_ARCH= --no-whole-archive - SCALA_PKG_PROFILE := linux-x86_64 endif endif @@ -457,7 +457,6 @@ ifeq ($(USE_CUDA), 1) # Make sure to add stubs as fallback in order to be able to build # without full CUDA install (especially if run without nvidia-docker) LDFLAGS += -L/usr/local/cuda/lib64/stubs - SCALA_PKG_PROFILE := $(SCALA_PKG_PROFILE)-gpu ifeq ($(USE_NCCL), 1) ifneq ($(USE_NCCL_PATH), NONE) CFLAGS += -I$(USE_NCCL_PATH)/include @@ -469,7 +468,6 @@ ifeq ($(USE_CUDA), 1) CFLAGS += -DMXNET_USE_NCCL=0 endif else - SCALA_PKG_PROFILE := $(SCALA_PKG_PROFILE)-cpu CFLAGS += -DMXNET_USE_NCCL=0 endif @@ -484,6 +482,9 @@ else CFLAGS += -DMXNET_USE_LIBJPEG_TURBO=0 endif +ifeq ($(CI), 1) + MAVEN_ARGS := -B +endif # For quick compile test, used smaller subset ALLX_DEP= $(ALL_DEP) @@ -509,13 +510,17 @@ build/plugin/%.o: plugin/%.cc | ngraph %_gpu.o: %.cu @mkdir -p $(@D) - $(NVCC) $(NVCCFLAGS) $(CUDA_ARCH) -Xcompiler "$(CFLAGS) -Isrc/operator" -M -MT $*_gpu.o $< >$*_gpu.d + $(NVCC) $(NVCCFLAGS) $(CUDA_ARCH) -Xcompiler "$(CFLAGS) -Isrc/operator" --generate-dependencies -MT $*_gpu.o $< >$*_gpu.d $(NVCC) -c -o $@ $(NVCCFLAGS) $(CUDA_ARCH) -Xcompiler "$(CFLAGS) -Isrc/operator" $< %.o: %.cc $(CORE_INC) @mkdir -p $(@D) $(CXX) -std=c++11 -c $(CFLAGS) -MMD -Isrc/operator -c $< -o $@ +# Set install path for libmxnet.so on Mac OS +ifeq ($(UNAME_S), Darwin) + LDFLAGS += -Wl,-install_name,@rpath/libmxnet.so +endif # NOTE: to statically link libmxnet.a we need the option # --Wl,--whole-archive -lmxnet --Wl,--no-whole-archive lib/libmxnet.a: $(ALLX_DEP) @@ -554,7 +559,6 @@ $(NNVM_PATH)/lib/libnnvm.a: $(NNVM_INC) $(NNVM_SRC) bin/im2rec: tools/im2rec.cc $(ALLX_DEP) MXNET_RELATIVE_PATH_TO_RUNTIME_LIB_DIR := "../lib" - $(BIN) : @mkdir -p $(@D) $(CXX) $(CFLAGS) -std=c++11 -o $@ $(filter %.cpp %.o %.c %.a %.cc, $^) \ @@ -585,7 +589,6 @@ pylint: python_clean: $(RM) -r python/build $(RM) -r python/dist - doc: docs docs: @@ -620,87 +623,50 @@ rpkg: cp -rf lib/libmxnet.so R-package/inst/libs mkdir -p R-package/inst/include cp -rf include/* R-package/inst/include + rm R-package/inst/include/dmlc + rm R-package/inst/include/nnvm cp -rf 3rdparty/dmlc-core/include/* R-package/inst/include/ cp -rf 3rdparty/tvm/nnvm/include/* R-package/inst/include Rscript -e "if(!require(devtools)){install.packages('devtools', repo = 'https://cloud.r-project.org/')}" + Rscript -e "if(!require(devtools)||packageVersion('roxygen2') < '6.1.1'){install.packages('roxygen2', repo = 'https://cloud.r-project.org/')}" Rscript -e "library(devtools); library(methods); options(repos=c(CRAN='https://cloud.r-project.org/')); install_deps(pkg='R-package', dependencies = TRUE)" - echo "import(Rcpp)" > R-package/NAMESPACE - echo "import(methods)" >> R-package/NAMESPACE + cp R-package/dummy.NAMESPACE R-package/NAMESPACE + echo "import(Rcpp)" >> R-package/NAMESPACE R CMD INSTALL R-package - Rscript -e "require(mxnet); mxnet:::mxnet.export('R-package')" - Rscript -e "if (!require('roxygen2')||packageVersion('roxygen2')!= '5.0.1'){\ - devtools::install_version('roxygen2',version='5.0.1',\ - repo='https://cloud.r-project.org/',quiet=TRUE)}" - 
Rscript -e "require(roxygen2); roxygen2::roxygenise('R-package')" + Rscript -e "require(mxnet); mxnet:::mxnet.export('R-package'); warnings()" + rm R-package/NAMESPACE + Rscript -e "devtools::document('R-package'); warnings()" R CMD INSTALL R-package rpkgtest: Rscript -e 'require(testthat);res<-test_dir("R-package/tests/testthat");if(!testthat:::all_passed(res)){stop("Test failures", call. = FALSE)}' - Rscript -e 'res<-covr:::package_coverage("R-package");fileConn<-file("r-package_coverage.json");writeLines(covr:::to_codecov(res), fileConn);close(fileConn)' + Rscript -e 'res<-covr:::package_coverage("R-package");fileConn<-file(paste("r-package_coverage_",toString(runif(1)),".json"));writeLines(covr:::to_codecov(res), fileConn);close(fileConn)' scalaclean: - (cd $(ROOTDIR)/scala-package; \ - mvn clean -P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE)) + (cd $(ROOTDIR)/scala-package && mvn clean) scalapkg: - (cd $(ROOTDIR)/scala-package; \ - mvn package -P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) -Dcxx="$(CXX)" \ - -Dbuild.platform="$(SCALA_PKG_PROFILE)" \ - -Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \ - -Dcurrent_libdir="$(ROOTDIR)/lib" \ - -Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a") + (cd $(ROOTDIR)/scala-package && mvn install -DskipTests) + +scalainstall: + (cd $(ROOTDIR)/scala-package && mvn install) scalaunittest: - (cd $(ROOTDIR)/scala-package; \ - mvn integration-test -P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE),unittest -Dcxx="$(CXX)" \ - -Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \ - -Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a" $(SCALA_TEST_ARGS)) + (cd $(ROOTDIR)/scala-package && mvn install) scalaintegrationtest: - (cd $(ROOTDIR)/scala-package; \ - mvn integration-test -P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE),integrationtest -Dcxx="$(CXX)" \ - -Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \ - -Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a" $(SCALA_TEST_ARGS)) - -scalainstall: - (cd $(ROOTDIR)/scala-package; \ - mvn install -P$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) -DskipTests=true -Dcxx="$(CXX)" \ - -Dbuild.platform="$(SCALA_PKG_PROFILE)" \ - -Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \ - -Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a") - -scalarelease-dryrun: - (cd $(ROOTDIR)/scala-package; \ - mvn release:clean release:prepare -DdryRun=true -DautoVersionSubmodules=true \ - -Papache-release,$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) \ - -Darguments=""-Dbuild\.platform=\""$(SCALA_PKG_PROFILE)\""\ -DskipTests=true\ -Dcflags=\""$(CFLAGS)\""\ -Dcxx=\""$(CXX)\""\ -Dldflags=\""$(LDFLAGS)\""\ -Dlddeps=\""$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a\"""") - -scalarelease-prepare: - (cd $(ROOTDIR)/scala-package; \ - mvn release:clean release:prepare -DautoVersionSubmodules=true \ - -Papache-release,$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) \ - -Darguments=""-Dbuild\.platform=\""$(SCALA_PKG_PROFILE)\""\ -DskipTests=true\ -Dcflags=\""$(CFLAGS)\""\ -Dcxx=\""$(CXX)\""\ -Dldflags=\""$(LDFLAGS)\""\ -Dlddeps=\""$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a\"""") - -scalarelease-perform: - (cd $(ROOTDIR)/scala-package; \ - mvn release:perform -DautoVersionSubmodules=true \ - -Papache-release,$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) \ - -Darguments=""-Dbuild\.platform=\""$(SCALA_PKG_PROFILE)\""\ -DskipTests=true\ -Dcflags=\""$(CFLAGS)\""\ -Dcxx=\""$(CXX)\""\ -Dldflags=\""$(LDFLAGS)\""\ -Dlddeps=\""$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a\"""") - -scaladeploy: - (cd $(ROOTDIR)/scala-package; \ - mvn deploy -Papache-release,$(SCALA_PKG_PROFILE),$(SCALA_VERSION_PROFILE) 
\-DskipTests=true -Dcxx="$(CXX)" \ - -Dbuild.platform="$(SCALA_PKG_PROFILE)" \ - -Dcflags="$(CFLAGS)" -Dldflags="$(LDFLAGS)" \ - -Dlddeps="$(LIB_DEP) $(ROOTDIR)/lib/libmxnet.a") + (cd $(ROOTDIR)/scala-package && mvn integration-test -DskipTests=false) jnilint: - 3rdparty/dmlc-core/scripts/lint.py mxnet-jnicpp cpp scala-package/native/src + 3rdparty/dmlc-core/scripts/lint.py mxnet-jnicpp cpp scala-package/native/src --exclude_path scala-package/native/src/main/native/org_apache_mxnet_native_c_api.h -ifneq ($(EXTRA_OPERATORS),) -clean: cyclean $(EXTRA_PACKAGES_CLEAN) - $(RM) -r build lib bin *~ */*~ */*/*~ */*/*/*~ R-package/NAMESPACE R-package/man R-package/R/mxnet_generated.R \ +rclean: + $(RM) -r R-package/src/image_recordio.h R-package/NAMESPACE R-package/man R-package/R/mxnet_generated.R \ R-package/inst R-package/src/*.o R-package/src/*.so mxnet_*.tar.gz + +ifneq ($(EXTRA_OPERATORS),) +clean: rclean cyclean $(EXTRA_PACKAGES_CLEAN) + $(RM) -r build lib bin deps *~ */*~ */*/*~ */*/*/*~ cd $(DMLC_CORE); $(MAKE) clean; cd - cd $(PS_PATH); $(MAKE) clean; cd - cd $(NNVM_PATH); $(MAKE) clean; cd - @@ -708,7 +674,7 @@ clean: cyclean $(EXTRA_PACKAGES_CLEAN) $(RM) -r $(patsubst %, %/*.d, $(EXTRA_OPERATORS)) $(patsubst %, %/*/*.d, $(EXTRA_OPERATORS)) $(RM) -r $(patsubst %, %/*.o, $(EXTRA_OPERATORS)) $(patsubst %, %/*/*.o, $(EXTRA_OPERATORS)) else -clean: ngraph_clean mkldnn_clean cyclean testclean $(EXTRA_PACKAGES_CLEAN) +clean: ngraph_clean rclean mkldnn_clean cyclean testclean $(EXTRA_PACKAGES_CLEAN) $(RM) -r build lib bin *~ */*~ */*/*~ */*/*/*~ R-package/NAMESPACE R-package/man R-package/R/mxnet_generated.R \ R-package/inst R-package/src/image_recordio.h R-package/src/*.o R-package/src/*.so mxnet_*.tar.gz cd $(DMLC_CORE); $(MAKE) clean; cd - diff --git a/NEWS.md b/NEWS.md index 4dcf8bf0c..f06cc35d8 100644 --- a/NEWS.md +++ b/NEWS.md @@ -1,5 +1,675 @@ MXNet Change Log ================ + +## 1.4.0 + +- [New Features](#new-features) + * [Java Inference API](#java-inference-api) + * [Julia API](#julia-api) + * [Control Flow Operators (experimental)](#control-flow-operators--experimental-) + * [SVRG Optimization](#svrg-optimization) + * [Subgraph API (experimental)](#subgraph-api--experimental-) + * [JVM Memory Management](#jvm-memory-management) + * [Topology-aware AllReduce (experimental)](#topology-aware-allreduce--experimental-) + * [MKLDNN backend: Graph optimization and Quantization (experimental)](#mkldnn-backend--graph-optimization-and-quantization--experimental-) + + [Graph Optimization](#graph-optimization) + + [Quantization](#quantization) +- [New Operators](#new-operators) +- [Feature improvements](#feature-improvements) + * [Operator](#operator) + * [Optimizer](#optimizer) + * [Sparse](#sparse) + * [ONNX](#onnx) + * [MKLDNN](#mkldnn) + * [Inference](#inference) + * [Other](#other) +- [Frontend API updates](#frontend-api-updates) + * [Gluon](#gluon) + * [Symbol](#symbol) +- [Language API updates](#language-api-updates) + * [Java](#java) + * [R](#r) + * [Scala](#scala) + * [Clojure](#clojure) + * [Perl](#perl) + * [Julia](#julia) +- [Performance benchmarks and improvements](#performance-benchmarks-and-improvements) +- [Bug fixes](#bug-fixes) +- [Licensing updates](#licensing-updates) +- [Improvements](#improvements) + * [Tutorial](#tutorial) + * [Example](#example) + * [Documentation](#documentation) + * [Website](#website) + * [MXNet Distributions](#mxnet-distributions) + * [Installation](#installation) + * [Build and CI](#build-and-ci) + * [3rd party](#3rd-party) + + [TVM:](#tvm-) + + 
[CUDNN:](#cudnn-) + + [Horovod:](#horovod-) +- [Deprications](#deprications) +- [Other](#other-1) +- [How to build MXNet](#how-to-build-mxnet) +- [List of submodules used by Apache MXNet (Incubating) and when they were updated last](#list-of-submodules-used-by-apache-mxnet--incubating--and-when-they-were-updated-last) +### New Features +#### Java Inference API + +Model inference is often managed in a production ecosystem using primarily Java/Scala tools and frameworks. This release seeks to alleviate the need for software engineers to write custom MXNet wrappers to fit their production environment. + +Inference on a trained model has a couple of common use cases: + + 1. Real-time or Online Inference - tasks that require immediate feedback, such as fraud detection + 2. Batch or Offline Inference - tasks that don't require immediate feedback; these are use cases where you have massive amounts of data and want to run inference or pre-compute inference results +Real-time Inference is often performed and deployed on popular web frameworks such as Tomcat, Netty, Jetty, etc., all of which use Java. +Batch Inference is often performed on big data platforms such as Spark using Scala or Java. + +With this project, we had the following goals: +* Build a new set of APIs that are Java friendly, compatible with Java 7+, and easy to use for inference. +* Lower the barrier to entry of consuming MXNet for production use cases. + +More details can be found at the [Java Inference API document](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Java+Inference+API). + +#### Julia API + +MXNet.jl is the Julia package of Apache MXNet. MXNet.jl brings flexible and efficient GPU computing and state-of-the-art deep learning to Julia. Some highlights of features include: + + * Efficient tensor/matrix computation across multiple devices, including multiple CPUs, GPUs and distributed server nodes. + * Flexible symbolic manipulation for composing state-of-the-art deep learning models. + +#### Control Flow Operators (experimental) + +Today we observe more and more dynamic neural network models, especially in the fields of natural language processing and graph analysis. The dynamics in these models come from multiple sources, including: + + * Models are expressed with control flow, such as conditions and loops; + * NDArrays in a model may have dynamic shapes, meaning the NDArrays of a model or some of the NDArrays have different shapes for different batches; + * Models may want to use more dynamic data structures, such as lists or dictionaries. +It's natural to express dynamic models in frameworks with an imperative programming interface (e.g., Gluon, PyTorch, TensorFlow Eager). In this kind of interface, developers can use Python control flow, or NDArrays with any shape at any moment, or use Python lists and dictionaries to store data as they want. The problem with this approach is that it is highly dependent on the originating front-end programming language (mainly Python). A model implemented in one language can only run in the same language. + +A common use case is that machine learning scientists want to develop their models in Python, whereas engineers who deploy the models usually have to use a different "production" language (e.g., Java or C). Gluon tries to close the gap between model development and production deployment.
Machine learning scientists design and implement their models in Python with the imperative interface, and then Gluon converts the implementations from imperative to symbolic by invoking `hybridize()` for model exporting. + +The goal of this project is to enhance Gluon to turn a dynamic neural network into a static computation graph. The dynamic control flows are expressed by control flow operators with Gluon hybridization, and these are exported for deployment. + +More information can be found at [Optimize dynamic neural network models with control flow operators](https://cwiki.apache.org/confluence/display/MXNET/Optimize+dynamic+neural+network+models+with+control+flow+operators) + +#### SVRG Optimization + +SVRG stands for Stochastic Variance Reduced Gradient, which was first introduced in the paper [Accelerating Stochastic Gradient Descent using Predictive Variance Reduction in 2013](https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf). It is an optimization technique that complements SGD. + +SGD is known for large scale optimization, but it suffers from slow convergence asymptotically due to the inherent variance. SGD approximates the full gradient using a small batch of samples, which introduces variance. In order to converge faster, SGD often needs to start with a smaller learning rate. + +SVRG remedies the slow convergence problem by keeping a version of the estimated weights that is close to the optimal parameters and maintaining the average of the full gradient over the full pass of data. The average of the full gradients of all data is calculated w.r.t. the parameters of the last m-th epoch. It has provable guarantees for strongly convex smooth functions; a detailed proof can be found in section 3 of the [paper](https://papers.nips.cc/paper/4937-accelerating-stochastic-gradient-descent-using-predictive-variance-reduction.pdf). SVRG uses a different update rule than SGD: gradients w.r.t. the current parameters minus gradients w.r.t. the parameters from the last m-th epoch, plus the average of gradients over all data. + +Key Characteristics of SVRG: + + * Explicit variance reduction + * Ability to use a relatively large learning rate compared to SGD, which leads to faster convergence. +More details can be found at [SVRG Optimization in MXNet Python Module](https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+backend+libraries) + +#### Subgraph API (experimental) + +MXNet can integrate with many different kinds of backend libraries, including TVM, MKLDNN, TensorRT, Intel nGraph and more. In general, these backends support a limited number of operators, so running computation in a model usually involves an interaction between backend-supported operators and MXNet operators. These backend libraries share some common requirements: + +TVM, MKLDNN and nGraph use customized data formats. Interaction between these backends and MXNet requires data format conversion. +TVM, MKLDNN, TensorRT and nGraph fuse operators. +Integration with these backends should happen at the granularity of subgraphs instead of at the granularity of operators. To fuse operators, it's obvious that we need to divide a graph into subgraphs so that the operators in a subgraph can be fused into a single operator. To handle customized data formats, we should partition a computation graph into subgraphs as well. Each subgraph contains only TVM, MKLDNN or nGraph operators.
In this way, MXNet converts data formats only when entering such a subgraph, and the operators inside a subgraph handle format conversion themselves if necessary. This makes interaction of TVM and MKLDNN with MXNet much easier. Neither the MXNet executor nor the MXNet operators need to deal with customized data formats. Even though invoking these libraries from MXNet requires similar steps, the partitioning rule and the subgraph execution of these backends can be different. As such, we define the following interface for backends to customize graph partitioning and subgraph execution inside an operator. More details can be found at PR 12157 and [Subgraph API](https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+backend+libraries). + +#### JVM Memory Management + +The MXNet Scala and Java APIs use native memory to manage NDArray, Symbol, Executor and DataIterator objects through MXNet's internal C APIs. The C APIs provide appropriate interfaces to create, access and free these objects. MXNet Scala has corresponding wrappers and APIs that have pointer references to the native memory. Before this project, JVM users (e.g. Scala, Clojure, or Java) of MXNet had to manage MXNet objects manually using the dispose pattern. There are a few usability problems with this approach: + +* Users have to track the MXNet objects manually and remember to call `dispose`. This is not Java idiomatic and not user friendly. Quoting a user: "this feels like I am writing C++ code which I stopped ages ago". +* Leads to memory leaks if `dispose` is not called. +* Many objects in MXNet-Scala are managed in native memory, needing to use `dispose` on them as well. +* Bloated code with `dispose()` methods. +* Hard to debug memory leaks. +The goals of the project are: +* Provide MXNet JVM users with automated memory management that can release native memory when there are no references to JVM objects. +* Provide automated memory management for both GPU and CPU memory without performance degradation. More details can be found here: [JVM Memory Management](https://cwiki.apache.org/confluence/display/MXNET/JVM+Memory+Management) + +#### Topology-aware AllReduce (experimental) +For distributed training, the `Reduce` communication patterns used by NCCL and MXNet are not optimal for small batch sizes. The `Topology-aware AllReduce` approach is based on the idea of using trees to perform the `Reduce` and `Broadcast` operations. We can use the idea of minimum spanning trees to do a binary tree `Reduce` communication pattern to improve distributed training following this paper by Wang, Li, Edo and Smola [1]. Our strategy is to use: + + * a single tree (latency-optimal for small messages) to handle `Reduce` on small messages + * multiple trees (bandwidth-optimal for large messages) to handle `Reduce` on large messages + +More details can be found here: [Topology-aware AllReduce](https://cwiki.apache.org/confluence/display/MXNET/Single+machine+All+Reduce+Topology-aware+Communication) +Note: This is an experimental feature and has known problems - see [13341](https://github.com/apache/incubator-mxnet/issues/13341). Please help contribute to improve the robustness of this feature.
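As a rough sketch of how the feature is exercised on a single machine with multiple GPUs (the `MXNET_KVSTORE_USETREE` opt-in switch below is an assumption based on the feature discussion, not a guaranteed stable interface), tree-based reduction plugs into the existing `device` kvstore path:

```
import os
os.environ["MXNET_KVSTORE_USETREE"] = "1"  # assumption: opt-in flag for the tree-based Reduce/Broadcast path

import mxnet as mx

# Two-GPU device kvstore: push sums the per-device gradients, pull retrieves the aggregate.
kv = mx.kvstore.create("device")
shape = (1024, 1024)
kv.init(0, mx.nd.zeros(shape))
grads = [mx.nd.ones(shape, ctx=mx.gpu(i)) for i in range(2)]
kv.push(0, grads)
out = mx.nd.zeros(shape, ctx=mx.gpu(0))
kv.pull(0, out=out)
print(out.mean().asscalar())  # expected: 2.0 after aggregating both device gradients
```

The rest of a training script is unchanged; whether the single-tree or multiple-tree strategy is applied for a given message size is decided internally by the kvstore.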
+ +#### MKLDNN backend: Graph optimization and Quantization (experimental) + +Two advanced features, graph optimization (operator fusion) and reduced-precision (INT8) computation, are introduced to the MKLDNN backend in this release ([#12530](https://github.com/apache/incubator-mxnet/pull/12530), [#13297](https://github.com/apache/incubator-mxnet/pull/13297), [#13260](https://github.com/apache/incubator-mxnet/pull/13260)). +These features significantly boost the inference performance on CPU (up to 4X) for a broad range of deep learning topologies. Currently, this feature is only available for inference on platforms with [supported Intel CPUs](https://github.com/intel/mkl-dnn#system-requirements). + +##### Graph Optimization +The MKLDNN backend takes advantage of the MXNet subgraph API to implement most of the possible operator fusions for inference, such as Convolution + ReLU, Batch Normalization folding, etc. When using the mxnet-mkl package, users can easily enable this feature by setting `export MXNET_SUBGRAPH_BACKEND=MKLDNN`. + +##### Quantization +Performance of reduced-precision (INT8) computation is also dramatically improved after the graph optimization feature is applied on CPU platforms. Various models are supported and can benefit from reduced-precision computation, including symbolic models, Gluon models and even custom models. Users can run most of the pre-trained models with only a few commands and the new quantization script `imagenet_gen_qsym_mkldnn.py`. The observed accuracy loss is less than 0.5% for popular CNN networks, like ResNet-50, Inception-BN, MobileNet, etc. + +Please find detailed information and performance/accuracy numbers here: [MKLDNN README](https://github.com/apache/incubator-mxnet/blob/master/MKLDNN_README.md), [quantization README](https://github.com/apache/incubator-mxnet/tree/master/example/quantization#1) and [design proposal](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optimization+and+Quantization+based+on+subgraph+and+MKL-DNN) + +### New Operators + +* Add trigonometric operators (#12424) +* [MXNET-807] Support integer label type in ctc_loss operator (#12468) +* [MXNET-876] make CachedOp a normal operator (#11641) +* Add index_copy() operator (#12810) +* Fix getnnz operator for CSR matrix (#12908) - issue #12872 +* [MXNET-1173] Debug operators - isfinite, isinf and isnan (#12967) +* Add sample_like operators (#13034) +* Add gauss err function operator (#13229) +* [MXNET-1030] Enhanced Cosine Embedding Loss (#12750) +* Add bytearray support back to imdecode (#12855, #12868) (#12912) +* Add Psroipooling CPU implementation (#12738) + +### Feature improvements +#### Operator +* [MXNET-912] Refactoring ctc loss operator (#12637) +* Refactor L2_normalization (#13059) +* Customized and faster `TakeOpForward` operator on CPU (#12997) +* Allow stop of arange operator to be inferred from dims. (#12064) +* Make check_isfinite, check_scale optional in clip_global_norm (#12042) +* Add FListInputNames attribute to softmax_cross_entropy (#12701) +* [MXNET-867] Pooling1D with same padding (#12594) +* Add support for more req patterns for bilinear sampler backward (#12386) +* [MXNET-882] Support for N-d arrays added to diag op.
(#12430) + +#### Optimizer +* Add a special version of Adagrad optimizer with row-wise learning rate (#12365) +* Add a Python SVRGModule for performing SVRG Optimization Logic (#12376) + +#### Sparse + +* Fall back when sparse arrays are passed to MKLDNN-enabled operators (#11664) +* Add Sparse support for logic operators (#12860) +* Add Sparse support for take(csr, axis=0) (#12889) + +#### ONNX + +* ONNX export - Clip operator (#12457) +* ONNX version update from 1.2.1 to 1.3 in CI (#12633) +* Use modern ONNX API to load a model from file (#12777) +* [MXNET-892] ONNX export/import: DepthToSpace, SpaceToDepth operators (#12731) +* ONNX export: Fully connected operator w/o bias, ReduceSum, Square (#12646) +* ONNX export/import: Selu (#12785) +* ONNX export: Cleanup (#12878) +* [MXNET-892] ONNX export/import: DepthToSpace, SpaceToDepth operators (#12731) +* ONNX export: Scalar, Reshape - Set appropriate tensor type (#13067) +* [MXNET-886] ONNX export: HardSigmoid, Less, Greater, Equal (#12812) + +#### MKLDNN + +* MKLDNN Forward FullyConnected op cache (#11611) +* [MXNET-753] Fallback when using non-MKLDNN supported operators (#12019) +* MKLDNN Backward op cache (#11301) +* Implement mkldnn convolution fusion and quantization. (#12530) +* Improve mkldnn fallback. (#12663) +* Update MKL-DNN dependency (#12953) +* Update MKLML dependency (#13181) +* [MXNET-33] Enhance mkldnn pooling to support full convention (#11047) + +#### Inference +* [MXNET-910] Multithreading inference. (#12456) +* Tweaked the copy in c_predict_api.h (#12600) + +#### Other +* support for upper triangular matrices in linalg (#12904) +* Introduce Random module / Refactor code generation (#13038) +* [MXNET-779]Add DLPack Transformation API (#12047) +* Draw label name next to corresponding bounding boxes when the mapping of id to names is specified (#9496) +* Track epoch metric separately (#12182) +* Set correct update on kvstore flag in dist_device_sync mode (#12786) + +### Frontend API updates + +#### Gluon + +* Update basic_layers.py (#13299) +* Gluon LSTM Projection and Clipping Support (#13056) +* Make Gluon download function to be atomic (#12572) +* [MXNET -1004] Poisson NegativeLog Likelihood loss (#12697) +* Add activation information for `mxnet.gluon.nn._Conv` (#12354) +* Gluon DataLoader: avoid recursionlimit error (#12622) + +#### Symbol +* Addressed dumplicate object reference issues (#13214) +* Throw exception if MXSymbolInferShape fails (#12733) +* Infer dtype in SymbolBlock import from input symbol (#12412) + +### Language API updates +#### Java +* [MXNET-1198] MXNet Java API (#13162) + +#### R +* Refactor R Optimizers to fix memory leak - 11374 +* Add new Vignettes to the R package + * Char-level Language modeling - 12670 + * Multidimensional Time series forecasting - 12664 +* Fix broken Examples and tutorials + * Tutorial on neural network introduction - 12117 + * CGAN example - 12283 + * Test classification with LSTMs - 12263 + +#### Scala +* Explain the details for Scala Experimental (#12348) +* [MXNET-716] Adding Scala Inference Benchmarks (#12721) +* [MXNET-716][MIRROR #12723] Scala Benchmark Extension pack (#12758) +* NativeResource Management in Scala (#12647) +* Ignore generated Scala files (#12928) +* Use ResourceScope in Model/Trainer/FeedForward.scala (#12882) +* [MXNET-1180] Scala Image API (#12995) +* Update log4j version of Scala package (#13131) +* Review require() usages to add meaningful messages (#12570) +* Fix Scala readme (#13082) + +#### Clojure +* Introduction to Clojure-MXNet video link 
(#12754) +* Improve the Clojure Package README to Make it Easier to Get Started (#12881) +* MXNET-873 - Bring Clojure Package Inline with New DataDesc and Layout in Scala Package (#12387) +* Port of Scala Image API to Clojure (#13107) + +#### Perl +* [MXNET-1026] [Perl] Sync with recent changes in Python's API (#12739) + +#### Julia +* Import Julia binding (#10149), how to use is available at https://github.com/apache/incubator-mxnet/tree/master/julia + +### Performance benchmarks and improvements +* Update mshadow for omp acceleration when nvcc is not present (#12674) +* [MXNET-860] Avoid implicit double conversions (#12361) +* Add more models to benchmark_score (#12780) +* Add resnet50-v1 to benchmark_score (#12595) + +### Bug fixes +* Fix for #10920 - increase tolerance for sparse dot (#12527) +* [MXNET-1234] Fix shape inference problems in Activation backward (#13409) +* Fix a bug in `where` op with 1-D input (#12325) +* [MXNET-825] Fix CGAN R Example with MNIST dataset (#12283) +* [MXNET-535] Fix bugs in LR Schedulers and add warmup (#11234) +* Fix speech recognition example (#12291) +* Fix bug in 'device' type kvstore (#12350) +* fix search result 404s (#12414) +* Fix help in imread (#12420) +* Fix render issue on < and > (#12482) +* [MXNET-853] Fix for smooth_l1 operator scalar default value (#12284) +* Fix subscribe links, remove disabled icons (#12474) +* Fix broken URLs (#12508) +* Fix/public internal header (#12374) +* Fix lazy record io when used with dataloader and multi_worker > 0 (#12554) +* Fix error in try/finally block for blc (#12561) +* Add cudnn_off parameter to SpatialTransformer Op and fix the inconsistency between CPU & GPU code (#12557) +* [MXNET-798] Fix the dtype cast from non float32 in Gradient computation (#12290) +* Fix CodeCovs proper commit detection (#12551) +* Add TensorRT tutorial to index and fix ToC (#12587) +* Fixed typo in c_predict_api.cc (#12601) +* Fix typo in profiler.h (#12599) +* Fixed NoSuchMethodError for Jenkins Job for MBCC (#12618) +* [MXNET-922] Fix memleak in profiler (#12499) +* [MXNET-969] Fix buffer overflow in RNNOp (#12603) +* Fixed param coercion of clojure executor/forward (#12627) (#12630) +* Fix version dropdown behavior (#12632) +* Fix reference to wrong function (#12644) +* Fix the location of the tutorial of control flow operators (#12638) +* Fix issue 12613 (#12614) +* [MXNET-780] Fix exception handling bug (#12051) +* Fix bug in prelu, issue 12061 (#12660) +* [MXNET-833] [R] Char-level RNN tutorial fix (#12670) +* Fix static / dynamic linking of gperftools and jemalloc (#12714) +* Fix #12672, importing numpy scalars (zero-dimensional arrays) (#12678) +* [MXNET-623] Fixing an integer overflow bug in large NDArray (#11742) +* Fix benchmark on control flow operators (#12693) +* Fix regression in MKLDNN caused by PR 12019 (#12740) +* Fixed broken link for Baidu's WARP CTC (#12774) +* Fix CNN visualization tutorial (#12719) +* [MXNET-979] Add fix_beta support in BatchNorm (#12625) +* R fix metric shape (#12776) +* Revert [MXNET-979] Add fix_beta support in BatchNorm (#12625) (#12789) +* Fix mismatch shapes (#12793) +* Fixed symbols naming in RNNCell, LSTMCell, GRUCell (#12794) +* Fixed __setattr__ method of _MXClassPropertyMetaClass (#12811) +* Fixed regex for matching platform type in Scala Benchmark scripts (#12826) +* Fix broken links (#12856) +* Fix Flaky Topk (#12798) +* [MXNET-1033] Fix a bug in MultiboxTarget GPU implementation (#12840) +* [MXNET-1107] Fix CPUPinned unexpected behaviour (#12031) +* Fix __all__ in 
optimizer/optimizer.py (#12886) +* Fix Batch input issue with Scala Benchmark (#12848) +* fix type inference in index_copy. (#12890) +* Fix the paths issue for downloading script (#12913) +* Fix indpt[0] for take(csr) (#12927) +* Fix the bug of assigning large integer to NDArray (#12921) +* Fix Sphinx errors for tutorials and install ToCs (#12945) +* Fix variable name in tutorial code snippet (#13052) +* Fix example for mxnet.nd.contrib.cond and fix typo in src/engine (#12954) +* Fix a typo in operator guide (#13115) +* Fix variational autoencoder example (#12880) +* Fix problem with some OSX not handling the cast on imDecode (#13207) +* [MXNET-953] Fix oob memory read (#12631) +* Fix Sphinx error in ONNX file (#13251) +* [Example] Fixing Gradcam implementation (#13196) +* Fix train mnist for inception-bn and resnet (#13239) +* Fix a bug in index_copy (#13218) +* Fix Sphinx errors in box_nms (#13261) +* Fix Sphinx errors (#13252) +* Fix the cpp example compiler flag (#13293) +* Made fixes to sparse.py and sparse.md (#13305) +* [Example] Gradcam- Fixing a link (#13307) +* Manually track num_max_thread (#12380) +* [Issue #11912] throw mxnet exceptions when decoding invalid images. (#12999) +* Undefined name: load_model() --> utils.load_model() (#12867) +* Change the way NDArrayIter handle the last batch (#12545) +* Add embedding to print_summary (#12796) +* Allow foreach on input with 0 length (#12471) +* [MXNET-360]auto convert str to bytes in img.imdecode when py3 (#10697) +* Fix unpicklable transform_first on windows (#13686) + +### Licensing updates +* Add license headers to R-package (#12559) +* License header (#13178) +* add url and license to clojure package project (#13304) + +### Improvements +#### Tutorial +* [MXNET-422] Distributed training tutorial (#10955) +* Add a tutorial for control flow operators. (#12340) +* Add tutorial Gotchas using NumPy (#12007) +* Updated Symbol tutorial with Gluon (#12190) +* Improve tutorial redirection (#12607) +* Include missing import in TensorRT tutorial (#12609) +* Update Operator Implementation Tutorial (#12230) +* Add a tutorial for the subgraph API. (#12698) +* Improve clojure tutorial (#12974) +* Update scala intellij tutorial (#12827) +* [Example] Gradcam consolidation in tutorial (#13255) +* [MXNET-1203] Tutorial infogan (#13144) +* [MXNET-703] Add a TensorRT walkthrough (#12548) + +#### Example +* Update C++ example so it is easier to run (#12397) +* [MXNET-580] Add SN-GAN example (#12419) +* [MXNET-637] Multidimensional LSTM example for MXNetR (#12664) +* [MXNET-982] Provide example to illustrate usage of CSVIter in C++ API (#12636) +* [MXNET-947] Expand scala imclassification example with resnet (#12639) +* MKL-DNN Quantization Examples and README (#12808) +* Extending the DCGAN example implemented by gluon API to provide a more straight-forward evaluation on the generated image (#12790) +* [MXNET-1017] Updating the readme file for cpp-package and adding readme file for example directory. 
(#12773) +* Update tree lstm example (#12960) +* Update bilstm integer array sorting example (#12929) +* Updated / Deleted some examples (#12968) +* Update module example (#12961) +* Update adversary attack generation example (#12918) +* Update Gluon example folder (#12951) +* Update dec example (#12950) +* Updated capsnet example (#12934) +* Updates to several examples (#13068) +* Update multi-task learning example (#12964) +* Remove obsolete memory cost example (#13235) +* [Example] Update cpp example README (#13280) +* [Example]update NER example readme on module prediction (#13184) +* Update proposal_target.py (#12709) +* Removing the re-size for validation data, which breaking the validation accuracy of CIFAR training (#12362) +* Update the README with instruction to redirect the user to gluon-cv (#13186) + +#### Documentation +* Update ONNX API docs references (#12317) +* Documentation update related to sparse support (#12367) +* Edit shape.array doc and some style improvements (#12162) +* Fixed docs/website build checkout bug (#12413) +* Add Python API docs for test_utils and visualization (#12455) +* Fix the installation doc for MKL-DNN backend (#12534) +* Added comment to docs regarding ToTensor transform (#12186) +* Pinned dockcross to a tag with fixed ABI for RPi (#12588) +* Refine the documentation of im2rec (#12606) +* Update and modify Windows docs (#12620) +* update docs to list cmake required for build from source page (#12592) +* update the distributed_training document (#12626) +* Add docstring in im2rec.py (#12621) +* [Doc] Change the description for pip packages (#12584) +* Change dependencies documentation opencv2-->opencv (#12654) +* Add documents for two new environment variables for memory pool. (#12668) +* Scala Docs - Replace old Symbol api usages (#12759) +* add/update infer_range docs (#12879) +* Fix typo in formula in docstring for GRU cell and layer and add clarification to description (gluon.rnn) (#12896) +* Fix the operator API documentation (#12942) +* fix broken docs (#12871) +* fix mac r install and windows python build from source docs (#12919) +* Document the newly added env variable (#13049) +* Add documentation on GPU performance on Quantization example (#13145) +* Fix Sphinx python docstring formatting error. (#13177) +* [Doc] Fix repo paths in Ubuntu build doc (#13101) +* Fix Sphinx document parsing error. (#13195) +* Fix #13090, Add image.imread to python API doc. (#13176) +* Fix Sphinx docstring formatting error. (#13004, #13005, #13006) (#13175) +* Fix #12944, Fix Sphinx python docstring formatting error. (#13174) +* Fix #13013, Fix Sphinx python docstring error. (#13173) +* Fixed Sparse astype doc string formatting error (#13171) +* Fixed Documentation issues (#13215) +* update the doc (#13205) +* Fix Sphinx doc errors (#13170) +* Fix Sphinx python docstring error: initializer.InitDesc (#12939) (#13148) +* Fix Sphinx python docstring error: text contrib module (#12949) (#13149) +* Fix Sphinx python docstrings (#13160) +* Add Java API docs generation (#13071) +* Fix scaladoc build errors (#13189) +* Add missing documentations for getnnz (#13128) +* Addressed ONNX module documentation warnings and added notes for short-form representation (#13259) +* Doc fixes (#13256) +* Addressed doc issues (#13165) +* stop gap fix to let website builds through; scaladoc fix pending (#13298) +* Fix Sphinx python docstring formatting error. (#13194) +* Visualization doc fix. 
Added notes for shortform (#13291) +* [Example] Add docstring for test optimizer and test score (#13286) +* Fix descriptions in scaladocs for macro ndarray/sybmol APIs (#13210) +* Sphinx error reduction (#12323) +* Sphinx errors in Gluon (#13275) +* Update env_var.md (#12702) +* Updated the Instructions for use of the label bot (#13192) +* Added/changed file_name, brief description comments in some files (#13033) + +#### Website +* adding apache conf promo to home page (#12347) +* Consistent website theme and custom 404 (#12426) +* update apachecon links to https (#12521) +* [HOLD] 1.3.0 release website updates (#12509) +* add mentions of the gluon toolkits and links to resources (#12667) +* remove apachecon promo (#12695) +* [MXNet-1002] Add GluonCV and NLP tookits, Keras, and developer wiki to navigation (#12704) + +#### MXNet Distributions +* Make the output of ci/docker/install/ubuntu_mklml.sh less verbose (#12422) +* Fix tvm dependency for docker (#12479) +* [MXNET-703] Add TensorRT runtime Dockerfile (#12549) +* [MXNET-951] Python dockerfiles built on pip binaries and build/release script (#12556) +* Change numpy version to 1.15.2 in python and docker install requirements (#12711) +* Add mkl-dnn to docker install method (#12643) +* Fix docker cleanup race condition (#13092) +* Bugfix in ci/docker_cache.py (#13249) +* Update PyPI version number (#11773) +* update download links to apache distros (#12617) + +#### Installation +* Installation instructions consolidation (#12388) +* Refine mxnet python installation (#12696) +* R install instructions update for macOS (#12832) +* remove legacy installation of Roxygen2 5.0 and add R-specific clean target (#12993) (#12998) +* Force APT cache update before executing install (#13285) +* Make the Ubuntu scripts executable after download. (#12180) +* replacing windows setup with newer instructions (#12504) +* Updated download links and verification instructions (#12651) +* Remove pip overwrites (#12604) + +#### Build and CI +* [MXNET-908] Enable minimal OSX Travis build (#12462) +* Use jom for parallel Windows builds (#12533) +* [MXNET-950] Enable parallel R dep builds in CI (#12552) +* Speed up CI Windows builds (#12563) +* [MXNET-908] Speed up travis builds to avoid timeouts (#12706) +* Simplify mac MKLDNN build (#12724) +* [MXNET-674] Speed up GPU builds in CI (#12782) +* Improved git reset for CI builds (#12784) +* Improve cpp-package example project build files. 
(#13093) +* Add --no-cache option to build.py when building containers (#13182) +* Addressed sphinx build issue (#13246) +* Tighten up PyLint directives again (#12322) +* [MXNET-859] Add a clang-tidy stage to CI (#12282) +* A solution to prevent zombie containers locally and in CI (#12381) +* [MXNET-696][PYTHON][UNDEFINED NAME] import logging in ci/util.py (#12488) +* [MXNET-703] Static linking for libprotobuf with TensorRT (#12475) +* Remove regression checks for website links (#12507) +* [MXNET-953] - Add ASAN sanitizer, Enable in CI (#12370) +* Allow custom path and static linking for custom mallocs in make (#12645) +* Correct PR branch detection in code coverage (#12615) +* Update osx.mk - Added apple to USE_BLAS comment (#12819) +* [MXNET-953] Correct ASAN cflags flag (#12659) +* [MXNET-1025] Add Jetpack 3.3 support to Jetson (#12735) +* Fail the broken link job when broken links are found (#12905) +* Removed unused header (#13066) +* Maven Surefire bug workaround (#13081) +* Add Turing and Volta support to arch_name (#13168) +* Moves f16c autodetection to its own cmake module (#12331) +* la_op_inline.h to la_op-inl.h for consistency (#13045) +* [MXNET-793] Virtualized ARMv7 with Qemu CI integration (#13203) +* Remove unused variable `rotateM_` (#10803) +* Separate refactoring from #12276 in a prior PR (#12296) +* [MXNET-860] Remove std::moves that have no affect (#12730) +* [MXNET-860] Use emplace where helpful (#12694) +* Enable C++ coverage (#12642) +* [MXNET-860] Update to modern nullptr usage (#12352) +* [MXNET-860] Reduce redundant copies, check for regressions with clang-tidy (#12355) + + +#### 3rd party +##### TVM: +* Updated tvm submodule head (#12764) +* Updated tvm submodule head (#12448) +##### CUDNN: +* [MXNET-1179] Enforce deterministic algorithms in convolution layers (#12992) +* CudnnFind() usage improvements (#12804) +* Add option for automatic downcasting dtype for cudnn to allow using Tensorcore for fp32 (#12722) +##### Horovod: +* [MXNET-1111] Remove CPUPinned in ImageRecordIter (#12666) + +### Deprications +* Add a deprecate message (#13042) contrib_CTCLoss is deprecated. Added a message in command +### Other +* Updating news, readme files and bumping master version to 1.3.1 (#12525) +* Add new name to CONTRIBUTORS.md (#12763) +* Update contribute.md (#12685) +* Updated CONTRIBUTORS.md to include lebeg and gigasquid, moved mabreu to committers section (#12766) +* Update CONTRIBUTORS.md (#12996) +* Updated CONTRIBUTORS.md to include mxnet-label-bot (#13048) + +### How to build MXNet +Please follow the instructions at https://mxnet.incubator.apache.org/install/index.html + +### List of submodules used by Apache MXNet (Incubating) and when they were updated last +Submodule@commit ID::Last updated by MXNet:: Last update in submodule + +* cub@05eb57f::Jul 31, 2017 :: Jul 31, 2017 +* dlpack@10892ac:: Oct 30, 2017 :: Aug 23, 2018 +* dmlc-core@0a0e8ad:: Aug 15, 2018 :: Nov 15, 2018 +* googletest@ec44c6c:: July 14, 2016 :: July 14, 2016 +* mkldnn@a7c5f53:: Nov 7, 2018 :: Nov 5, 2018 +* mshadow@696803b:: Sep 28, 2018 :: Nov 7, 2018 +* onnx-tensorrt@3d8ee04:: Aug 22, 2018 :: Nov 10, 2018 +* openmp@37c7212: Nov 22, 2017 :: Nov 13, 2018 +* ps-lite@8a76389: April 25, 2018 :: Oct 9, 2018 +* tvm@0f053c8: Oct 10, 2018 :: Oct 8, 2018 + +## 1.3.1 + +### Bug fixes + +* [MXNET-953] Fix oob memory read (v1.3.x) / [#13118](https://github.com/apache/incubator-mxnet/pull/13118) +Simple bugfix addressing an out-of-bounds memory read. 
+ + +* [MXNET-969] Fix buffer overflow in RNNOp (v1.3.x) / [#13119](https://github.com/apache/incubator-mxnet/pull/13119) +This fixes a buffer overflow detected by ASAN. + + +* CudnnFind() usage improvements (v1.3.x) / [#13123](https://github.com/apache/incubator-mxnet/pull/13123) + This PR improves MXNet's use of cudnnFind() to address a few issues: + 1. With the gluon imperative style, cudnnFind() is called during forward(), and so might have its timings perturbed by other GPU activity (including potentially other cudnnFind() calls). + 2. With some CUDA driver versions, care is needed to ensure that the large I/O and workspace cudaMallocs() performed by cudnnFind() are immediately released and available to MXNet. + 3. cudnnFind() makes both conv I/O and workspace allocations that must be covered by the GPU global memory headroom defined by MXNET_GPU_MEM_POOL_RESERVE. Per issue #12662, large convolutions can result in out-of-memory errors, even when MXNet's storage allocator has free memory in its pool. + + This PR addresses these issues, providing the following benefits: + 1. Consistent algo choice for a given convolution type in a model, both for instances in the same GPU and in other GPUs in a multi-GPU training setting. + 2. Consistent algo choice from run to run, based on eliminating sources of interference of the cudnnFind() timing process. + 3. Consistent model global memory footprint, both because of the consistent algo choice (algos can have markedly different workspace requirements) and changes to MXNet's use of cudaMalloc. + 4. Increased training performance based on being able to consistently run with models that approach the GPU's full global memory footprint. + 5. Adds a unittest for and solves issue #12662. + +* [MXNET-922] Fix memleak in profiler (v1.3.x) / [#13120](https://github.com/apache/incubator-mxnet/pull/13120) + Fix a memleak reported locally by ASAN during a normal inference test. + +* Fix lazy record io when used with dataloader and multi_worker > 0 (v1.3.x) / [#13124](https://github.com/apache/incubator-mxnet/pull/13124) + Fixes multi_worker data loader when record file is used. The MXRecordIO instance needs to acquire a new file handle after fork to be safely manipulated simultaneously. + + This fix also safely supersedes the previous temporary fixes #12093 #11370. + +* fixed symbols naming in RNNCell, LSTMCell, GRUCell (v1.3.x) / [#13158](https://github.com/apache/incubator-mxnet/pull/13158) + This fixes #12783, by assigning all nodes in hybrid_forward a unique name. Some operations were in fact performed without attaching the appropriate (time) prefix to the name, which makes serialized graphs non-deserializable. + +* Fixed `__setattr__` method of `_MXClassPropertyMetaClass` (v1.3.x) / [#13157](https://github.com/apache/incubator-mxnet/pull/13157) + Fixed `__setattr__` method + +* allow foreach on input with 0 length (v1.3.x) / [#13151](https://github.com/apache/incubator-mxnet/pull/13151) + Fix #12470. With this change, outs shape can be inferred correctly. + +* Infer dtype in SymbolBlock import from input symbol (v1.3.x) / [#13117](https://github.com/apache/incubator-mxnet/pull/13117) + Fix for the issue - #11849 + Currently, Gluon symbol block cannot import any symbol with type other than fp32. All the parameters are created as FP32, leading to failure in importing the params when they are of type fp16, fp64, etc. + In this PR, we infer the type of the symbol being imported and create the Symbol Block Parameters with that inferred type.
+ Added the tests + +### Documentation fixes + +* Document the newly added env variable (v1.3.x) / [#13156](https://github.com/apache/incubator-mxnet/pull/13156) + Document the env variable: MXNET_ENFORCE_DETERMINISM added in PR: [#12992](https://github.com/apache/incubator-mxnet/pull/12992) + +* fix broken links (v1.3.x) / [#13155](https://github.com/apache/incubator-mxnet/pull/13155) + This PR fixes broken links on the website. + +* fix broken Python IO API docs (v1.3.x) / [#13154](https://github.com/apache/incubator-mxnet/pull/13154) + Fixes [#12854: Data Iterators documentation is broken](https://github.com/apache/incubator-mxnet/issues/12854) + + This PR manually specifies members of the IO module so that the docs will render as expected. This is workaround in the docs to deal with a bug introduced in the Python code/structure since v1.3.0. See the comments for more info. + + This PR also fixes another issue that may or may not be related. Cross references to same-named entities like name, shape, or type are confusing Sphinx and it seems to just link to whatever it last dealt with that has the same name, and not the current module. To fix this you have to be very specific. Don't use type, use np.type if that's what you want. Otherwise you might end up with mxnet.kvstore.KVStore.type. This is a known Sphinx issue, so it might be something we have to deal with for the time being. + + This is important for any future modules - that they recognize this issue and make efforts to map the params and other elements. + +* add/update infer_range docs (v1.3.x) / [#13153](https://github.com/apache/incubator-mxnet/pull/13153) + This PR adds or updates the docs for the infer_range feature. + + Clarifies the param in the C op docs + Clarifies the param in the the Scala symbol docs + Adds the param for the the Scala ndarray docs + Adds the param for the Python symbol docs + Adds the param for the Python ndarray docs + +### Other Improvements + +* [MXNET-1179] Enforce deterministic algorithms in convolution layers (v1.3.x) / [#13152](https://github.com/apache/incubator-mxnet/pull/13152) + Some of the CUDNN convolution algorithms are non-deterministic (see issue #11341). This PR adds an env variable to enforce determinism in the convolution operators. If set to true, only deterministic CUDNN algorithms will be used. If no deterministic algorithm is available, MXNet will error out. + + +### Submodule updates + +* update mshadow (v1.3.x) / [#13122](https://github.com/apache/incubator-mxnet/pull/13122) + Update mshadow for omp acceleration when nvcc is not present + +### Known issues + +The test test_operator.test_dropout has issues and has been disabled on the branch: + +* Disable flaky test test_operator.test_dropout (v1.3.x) / [#13200](https://github.com/apache/incubator-mxnet/pull/13200) + + + +For more information and examples, see [full release notes](https://cwiki.apache.org/confluence/x/eZGzBQ) + + ## 1.3.0 ### New Features - Gluon RNN layers are now HybridBlocks @@ -211,7 +881,7 @@ For more information and examples, see [full release notes](https://cwiki.apache - Added support for distributed mixed precision training with FP16. It supports storing of master copy of weights in float32 with the multi_precision mode of optimizers (#10183). Improved speed of float16 operations on x86 CPU by 8 times through F16C instruction set. Added support for more operators to work with FP16 inputs (#10125, #10078, #10169). Added a tutorial on using mixed precision with FP16 (#10391). 
### New Features - Added Profiling Enhancements -- Enhanced built-in profiler to support native Intel:registered: VTune:tm: Amplifier objects such as Task, Frame, Event, Counter and Marker from both C++ and Python -- which is also visible in the Chrome tracing view(#8972). Added Runtime tracking of symbolic and imperative operators as well as memory and API calls. Added Tracking and dumping of aggregate profiling data. Profiler also no longer affects runtime performance when not in use. +- Enhanced built-in profiler to support native Intel:registered: VTune:tm: Amplifier objects such as Task, Frame, Event, Counter and Marker from both C++ and Python -- which is also visible in the Chrome tracing view(#8972). Added Runtime tracking of symbolic and imperative operators as well as memory and API calls. Added Tracking and dumping of aggregate profiling data. Profiler also no longer affects runtime performance when not in use. ### Breaking Changes - Changed Namespace for MXNet scala from `ml.dmlc.mxnet` to `org.apache.mxnet` (#10284). @@ -242,7 +912,7 @@ For more information and examples, see [full release notes](https://cwiki.apache - Fixed a bug that was causing training metrics to be printed as NaN sometimes (#10437). - Fixed a crash with non positive reps for tile ops (#10417). -### Performance Improvements +### Performance Improvements - On average, after the MKL-DNN change, the inference speed of MXNet + MKLDNN outperforms MXNet + OpenBLAS by a factor of 32, outperforms MXNet + MKLML by 82% and outperforms MXNet + MKLML with the experimental flag by 8%. The experiments were run for the image classifcation example, for different networks and different batch sizes. - Improved sparse SGD, sparse AdaGrad and sparse Adam optimizer speed on GPU by 30x (#9561, #10312, #10293, #10062). - Improved `sparse.retain` performance on CPU by 2.5x (#9722) @@ -347,7 +1017,7 @@ For more information and examples, see [full release notes](https://cwiki.apache - Added `axis` argument to `SequenceLast`, `SequenceMask` and `SequenceReverse` operators (#9306) - Added `lazy_update` option for standard `SGD` & `Adam` optimizer with `row_sparse` gradients (#9468, #9189) - Added `select` option in `Block.collect_params` to support regex (#9348) -- Added support for (one-to-one and sequence-to-one) inference on explicit unrolled RNN models in R (#9022) +- Added support for (one-to-one and sequence-to-one) inference on explicit unrolled RNN models in R (#9022) ### Deprecations - The Scala API name space is still called `ml.dmlc`. The name space is likely be changed in a future release to `org.apache` and might brake existing applications and scripts (#9579, #9324) ### Performance Improvements @@ -393,10 +1063,10 @@ For more information and examples, see [full release notes](https://cwiki.apache - MXNet now compiles and runs on NVIDIA Jetson TX2 boards with GPU acceleration. - You can install the python MXNet package on a Jetson board by running - `$ pip install mxnet-jetson-tx2`. ### New Features - Sparse Tensor Support [General Availability] - - Added more sparse operators: `contrib.SparseEmbedding`, `sparse.sum` and `sparse.mean`. + - Added more sparse operators: `contrib.SparseEmbedding`, `sparse.sum` and `sparse.mean`. - Added `asscipy()` for easier conversion to scipy. - Added `check_format()` for sparse ndarrays to check if the array format is valid. -### Bug-fixes +### Bug-fixes - Fixed a[-1] indexing doesn't work on `NDArray`. - Fixed `expand_dims` if axis < 0. 
- Fixed a bug that causes topk to produce incorrect result on large arrays. @@ -408,9 +1078,9 @@ For more information and examples, see [full release notes](https://cwiki.apache ### Doc Updates - Added a security best practices document under FAQ section. - Fixed License Headers including restoring copyright attributions. - - Documentation updates. + - Documentation updates. - Links for viewing source. - + For more information and examples, see [full release notes](https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.0+Release+Notes) @@ -418,15 +1088,15 @@ For more information and examples, see [full release notes](https://cwiki.apache ### Bug-fixes - Added GPU support for the `syevd` operator which ensures that there is GPU support for all linalg-operators. - Bugfix for `syevd` on CPU such that it works for `float32`. - - Fixed API call when `OMP_NUM_THREADS` environment variable is set. + - Fixed API call when `OMP_NUM_THREADS` environment variable is set. - Fixed `MakeNonlossGradNode` bug. - - Fixed bug related to passing `dtype` to `array()`. + - Fixed bug related to passing `dtype` to `array()`. - Fixed some minor bugs for sparse distributed training. - - Fixed a bug on `Slice` accessing uninitialized memory in `param.begin` in the file `matrix_op-inl.h`. + - Fixed a bug on `Slice` accessing uninitialized memory in `param.begin` in the file `matrix_op-inl.h`. - Fixed `gluon.data.RecordFileDataset`. - Fixed a bug that caused `autograd` to crash on some networks. - - + + ## 0.12.0 ### Performance - Added full support for NVIDIA Volta GPU Architecture and CUDA 9. Training CNNs is up to 3.5x faster than Pascal when using float16 precision. @@ -434,7 +1104,7 @@ For more information and examples, see [full release notes](https://cwiki.apache - Improved ImageRecordIO image loading performance and added indexed RecordIO support. - Added better openmp thread management to improve CPU performance. ### New Features - Gluon - - Added enhancements to the Gluon package, a high-level interface designed to be easy to use while keeping most of the flexibility of low level API. Gluon supports both imperative and symbolic programming, making it easy to train complex models imperatively with minimal impact on performance. Neural networks (and other machine learning models) can be defined and trained with `gluon.nn` and `gluon.rnn` packages. + - Added enhancements to the Gluon package, a high-level interface designed to be easy to use while keeping most of the flexibility of low level API. Gluon supports both imperative and symbolic programming, making it easy to train complex models imperatively with minimal impact on performance. Neural networks (and other machine learning models) can be defined and trained with `gluon.nn` and `gluon.rnn` packages. - Added new loss functions - `SigmoidBinaryCrossEntropyLoss`, `CTCLoss`, `HuberLoss`, `HingeLoss`, `SquaredHingeLoss`, `LogisticLoss`, `TripletLoss`. - `gluon.Trainer` now allows reading and setting learning rate with `trainer.learning_rate` property. - Added API `HybridBlock.export` for exporting gluon models to MXNet format. @@ -447,7 +1117,7 @@ For more information and examples, see [full release notes](https://cwiki.apache - Added `mx.autograd.grad` and experimental second order gradient support (most operators don't support second order gradient yet). - Autograd now supports cross-device graphs. Use `x.copyto(mx.gpu(i))` and `x.copyto(mx.cpu())` to do computation on multiple devices. 
### New Features - Sparse Tensor Support - - Added support for sparse matrices. + - Added support for sparse matrices. - Added limited cpu support for two sparse formats in `Symbol` and `NDArray` - `CSRNDArray` and `RowSparseNDArray`. - Added a sparse dot product operator and many element-wise sparse operators. - Added a data iterator for sparse data input - `LibSVMIter`. @@ -457,7 +1127,7 @@ For more information and examples, see [full release notes](https://cwiki.apache - Added limited support for fancy indexing, which allows you to very quickly access and modify complicated subsets of an array's values. `x[idx_arr0, idx_arr1, ..., idx_arrn]` is now supported. Features such as combining and slicing are planned for the next release. Checkout master to get a preview. - Random number generators in `mx.nd.random.*` and `mx.sym.random.*` now support both CPU and GPU. - `NDArray` and `Symbol` now supports "fluent" methods. You can now use `x.exp()` etc instead of `mx.nd.exp(x)` or `mx.sym.exp(x)`. - - Added `mx.rtc.CudaModule` for writing and running CUDA kernels from python. + - Added `mx.rtc.CudaModule` for writing and running CUDA kernels from python. - Added `multi_precision` option to optimizer for easier float16 training. - Better support for IDE auto-completion. IDEs like PyCharm can now correctly parse mxnet operators. ### API Changes @@ -505,14 +1175,14 @@ For more information and examples, see [full release notes](https://cwiki.apache ## 0.10.0 -- Overhauled documentation for commonly used Python APIs, Installation instructions, Tutorials, HowTos and MXNet Architecture. -- Updated mxnet.io for improved readability. -- Pad operator now support reflection padding. -- Fixed a memory corruption error in threadedengine. -- Added CTC loss layer to contrib package. See mx.contrib.sym.ctc_loss. -- Added new sampling operators for several distributions (normal,uniform,gamma,exponential,negative binomial). +- Overhauled documentation for commonly used Python APIs, Installation instructions, Tutorials, HowTos and MXNet Architecture. +- Updated mxnet.io for improved readability. +- Pad operator now support reflection padding. +- Fixed a memory corruption error in threadedengine. +- Added CTC loss layer to contrib package. See mx.contrib.sym.ctc_loss. +- Added new sampling operators for several distributions (normal,uniform,gamma,exponential,negative binomial). - Added documentation for experimental RNN APIs. - + ## 0.9.3 - Move symbolic API to NNVM @tqchen - Most front-end C API are backward compatible diff --git a/README.md b/NGRAPH_README.md similarity index 100% rename from README.md rename to NGRAPH_README.md diff --git a/R-package/.Rbuildignore b/R-package/.Rbuildignore index 423105a79..d0da835e2 100644 --- a/R-package/.Rbuildignore +++ b/R-package/.Rbuildignore @@ -3,5 +3,6 @@ src/*.so$ \.dll$ ^.*\.Rproj$ ^\.Rproj\.user$ +dummy.NAMESPACE README.md diff --git a/R-package/DESCRIPTION b/R-package/DESCRIPTION index 0990c63e0..70aa66e36 100644 --- a/R-package/DESCRIPTION +++ b/R-package/DESCRIPTION @@ -1,17 +1,17 @@ Package: mxnet Type: Package Title: MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems -Version: 1.3.1 +Version: 1.5.0 Date: 2017-06-27 -Author: Tianqi Chen, Qiang Kou, Tong He +Author: Tianqi Chen, Qiang Kou, Tong He, Anirudh Acharya Maintainer: Qiang Kou -Repository: DMLC +Repository: apache/incubator-mxnet Description: MXNet is a deep learning framework designed for both efficiency and flexibility. 
It allows you to mix the flavours of deep learning programs together to maximize the efficiency and your productivity. License: Apache License (== 2.0) -URL: https://github.com/dmlc/mxnet/tree/master/R-package -BugReports: https://github.com/dmlc/mxnet/issues +URL: https://github.com/apache/incubator-mxnet/tree/master/R-package +BugReports: https://github.com/apache/incubator-mxnet/issues Imports: methods, Rcpp (>= 0.12.1), @@ -29,7 +29,7 @@ Suggests: imager, covr Depends: - R (>= 3.3.0) + R (>= 3.4.4) LinkingTo: Rcpp VignetteBuilder: knitr -RoxygenNote: 5.0.1 +RoxygenNote: 6.1.1 diff --git a/R-package/LICENSE b/R-package/LICENSE index 8f71f43fe..0ebc218b3 100644 --- a/R-package/LICENSE +++ b/R-package/LICENSE @@ -186,7 +186,7 @@ same "printed page" as the copyright notice for easier identification within third-party archives. - Copyright {yyyy} {name of copyright owner} + Copyright (c) 2015 by Contributors Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/R-package/R/zzz.R b/R-package/R/zzz.R index 6c63081af..db7573ab2 100644 --- a/R-package/R/zzz.R +++ b/R-package/R/zzz.R @@ -34,8 +34,8 @@ NULL .onLoad <- function(libname, pkgname) { # Require methods for older versions of R require(methods) - library.dynam("libmxnet", pkgname, libname, local=FALSE) - library.dynam("mxnet", pkgname, libname) + tryCatch(library.dynam("libmxnet", pkgname, libname, local=FALSE), error = function(e) { print('Loading local: inst/libs/libmxnet.so'); dyn.load("R-package/inst/libs/libmxnet.so", local=FALSE) }) + tryCatch(library.dynam("mxnet", pkgname, libname), error = function(e) { print('Loading local: src/mxnet.so'); dyn.load("R-package/src/mxnet.so") }) loadModule("mxnet", TRUE) init.symbol.methods() init.context.default() diff --git a/R-package/dummy.NAMESPACE b/R-package/dummy.NAMESPACE new file mode 100644 index 000000000..6225fbf70 --- /dev/null +++ b/R-package/dummy.NAMESPACE @@ -0,0 +1,16 @@ +# Generated by roxygen2: do not edit by hand + +import(Rcpp) +import(methods) +importFrom(DiagrammeR,add_global_graph_attrs) +importFrom(DiagrammeR,create_edge_df) +importFrom(DiagrammeR,create_graph) +importFrom(DiagrammeR,create_node_df) +importFrom(DiagrammeR,render_graph) +importFrom(jsonlite,fromJSON) +importFrom(magrittr,"%>%") +importFrom(stringr,str_extract_all) +importFrom(stringr,str_replace_all) +importFrom(stringr,str_replace_na) +importFrom(stringr,str_trim) +importFrom(visNetwork,visHierarchicalLayout) diff --git a/ci/Jenkinsfile_docker_cache b/ci/Jenkinsfile_docker_cache index 77f0122f9..35e6ff9e7 100644 --- a/ci/Jenkinsfile_docker_cache +++ b/ci/Jenkinsfile_docker_cache @@ -23,12 +23,12 @@ // timeout in minutes total_timeout = 300 -node('restricted-mxnetlinux-cpu') { +node('restricted-utility') { // Loading the utilities requires a node context unfortunately checkout scm utils = load('ci/Jenkinsfile_utils.groovy') } -utils.assign_node_labels(linux_cpu: 'restricted-mxnetlinux-cpu', linux_gpu: 'restricted-mxnetlinux-gpu', linux_gpu_p3: 'restricted-mxnetlinux-gpu-p3', windows_cpu: 'restricted-mxnetwindows-cpu', windows_gpu: 'restricted-mxnetwindows-gpu') +utils.assign_node_labels(utility: 'restricted-utility', linux_cpu: 'restricted-mxnetlinux-cpu', linux_gpu: 'restricted-mxnetlinux-gpu', linux_gpu_p3: 'restricted-mxnetlinux-gpu-p3', windows_cpu: 'restricted-mxnetwindows-cpu', windows_gpu: 'restricted-mxnetwindows-gpu') utils.main_wrapper( core_logic: { diff --git a/ci/Jenkinsfile_utils.groovy 
b/ci/Jenkinsfile_utils.groovy index e2b08555d..8291bae1f 100644 --- a/ci/Jenkinsfile_utils.groovy +++ b/ci/Jenkinsfile_utils.groovy @@ -26,8 +26,11 @@ def init_git() { // retries as this will increase the amount of requests and worsen the throttling timeout(time: 15, unit: 'MINUTES') { checkout scm - sh 'git submodule update --init --recursive' sh 'git clean -xdff' + sh 'git reset --hard' + sh 'git submodule update --init --recursive' + sh 'git submodule foreach --recursive git clean -ffxd' + sh 'git submodule foreach --recursive git reset --hard' } } catch (exc) { deleteDir() @@ -45,8 +48,11 @@ def init_git_win() { // retries as this will increase the amount of requests and worsen the throttling timeout(time: 15, unit: 'MINUTES') { checkout scm - bat 'git submodule update --init --recursive' bat 'git clean -xdff' + bat 'git reset --hard' + bat 'git submodule update --init --recursive' + bat 'git submodule foreach --recursive git clean -ffxd' + bat 'git submodule foreach --recursive git reset --hard' } } catch (exc) { deleteDir() @@ -58,9 +64,11 @@ def init_git_win() { // pack libraries for later use def pack_lib(name, libs, include_gcov_data = false) { - sh """ + sh returnStatus: true, script: """ +set +e echo "Packing ${libs} into ${name}" echo ${libs} | sed -e 's/,/ /g' | xargs md5sum +return 0 """ stash includes: libs, name: name @@ -75,9 +83,11 @@ echo ${libs} | sed -e 's/,/ /g' | xargs md5sum def unpack_and_init(name, libs, include_gcov_data = false) { init_git() unstash name - sh """ + sh returnStatus: true, script: """ +set +e echo "Unpacked ${libs} from ${name}" echo ${libs} | sed -e 's/,/ /g' | xargs md5sum +return 0 """ if (include_gcov_data) { // Restore GCNO files that are required for GCOV to operate during runtime @@ -85,23 +95,32 @@ echo ${libs} | sed -e 's/,/ /g' | xargs md5sum } } +def get_jenkins_master_url() { + return env.BUILD_URL.split('/')[2].split(':')[0] +} + +def get_git_commit_hash() { + lastCommitMessage = sh (script: "git log -1 --pretty=%B", returnStdout: true) + lastCommitMessage = lastCommitMessage.trim() + if (lastCommitMessage.startsWith("Merge commit '") && lastCommitMessage.endsWith("' into HEAD")) { + // Merge commit applied by Jenkins, skip that commit + git_commit_hash = sh (script: "git rev-parse @~", returnStdout: true) + } else { + git_commit_hash = sh (script: "git rev-parse @", returnStdout: true) + } + return git_commit_hash +} + def publish_test_coverage() { // CodeCovs auto detection has trouble with our CIs PR validation due the merging strategy - lastCommitMessage = sh (script: "git log -1 --pretty=%B", returnStdout: true) - lastCommitMessage = lastCommitMessage.trim() - if (lastCommitMessage.startsWith("Merge commit '") && lastCommitMessage.endsWith("' into HEAD")) { - // Merge commit applied by Jenkins, skip that commit - GIT_COMMIT_HASH = sh (script: "git rev-parse @~", returnStdout: true) - } else { - GIT_COMMIT_HASH = sh (script: "git rev-parse @", returnStdout: true) - } + git_commit_hash = get_git_commit_hash() if (env.CHANGE_ID) { // PR execution - codecovArgs = "-B ${env.CHANGE_TARGET} -C ${GIT_COMMIT_HASH} -P ${env.CHANGE_ID}" + codecovArgs = "-B ${env.CHANGE_TARGET} -C ${git_commit_hash} -P ${env.CHANGE_ID}" } else { // Branch execution - codecovArgs = "-B ${env.BRANCH_NAME} -C ${GIT_COMMIT_HASH}" + codecovArgs = "-B ${env.BRANCH_NAME} -C ${git_commit_hash}" } // To make sure we never fail because test coverage reporting is not available @@ -128,8 +147,9 @@ def collect_test_results_windows(original_file_name, new_file_name) { } 
-def docker_run(platform, function_name, use_nvidia, shared_mem = '500m') { - def command = "ci/build.py --docker-registry ${env.DOCKER_CACHE_REGISTRY} %USE_NVIDIA% --platform %PLATFORM% --docker-build-retries 3 --shm-size %SHARED_MEM% /work/runtime_functions.sh %FUNCTION_NAME%" +def docker_run(platform, function_name, use_nvidia, shared_mem = '500m', env_vars = "") { + def command = "ci/build.py %ENV_VARS% --docker-registry ${env.DOCKER_CACHE_REGISTRY} %USE_NVIDIA% --platform %PLATFORM% --docker-build-retries 3 --shm-size %SHARED_MEM% /work/runtime_functions.sh %FUNCTION_NAME%" + command = command.replaceAll('%ENV_VARS%', env_vars.length() > 0 ? "-e ${env_vars}" : '') command = command.replaceAll('%USE_NVIDIA%', use_nvidia ? '--nvidiadocker' : '') command = command.replaceAll('%PLATFORM%', platform) command = command.replaceAll('%FUNCTION_NAME%', function_name) @@ -138,14 +158,83 @@ def docker_run(platform, function_name, use_nvidia, shared_mem = '500m') { sh command } +// Allow publishing to GitHub with a custom context (the status shown under a PR) +// Credit to https://plugins.jenkins.io/github +def get_repo_url() { + checkout scm + return sh(returnStdout: true, script: "git config --get remote.origin.url").trim() +} +def update_github_commit_status(state, message) { + node(NODE_UTILITY) { + // NOTE: https://issues.jenkins-ci.org/browse/JENKINS-39482 + //The GitHubCommitStatusSetter requires that the Git Server is defined under + //*Manage Jenkins > Configure System > GitHub > GitHub Servers*. + //Otherwise the GitHubCommitStatusSetter is not able to resolve the repository name + //properly and you would see an empty list of repos: + //[Set GitHub commit status (universal)] PENDING on repos [] (sha:xxxxxxx) with context:test/mycontext + //See https://cwiki.apache.org/confluence/display/MXNET/Troubleshooting#Troubleshooting-GitHubcommit/PRstatusdoesnotgetpublished + repoUrl = get_repo_url() + commitSha = get_git_commit_hash() + context = get_github_context() + + step([ + $class: 'GitHubCommitStatusSetter', + reposSource: [$class: "ManuallyEnteredRepositorySource", url: repoUrl], + contextSource: [$class: "ManuallyEnteredCommitContextSource", context: context], + commitShaSource: [$class: "ManuallyEnteredShaSource", sha: commitSha], + statusBackrefSource: [$class: "ManuallyEnteredBackrefSource", backref: "${env.RUN_DISPLAY_URL}"], + errorHandlers: [[$class: 'ShallowAnyErrorHandler']], + statusResultSource: [ + $class: 'ConditionalStatusResultSource', + results: [[$class: "AnyBuildResult", message: message, state: state]] + ] + ]) + } +} + +def get_github_context() { + // Since we use multi-branch pipelines, Jenkins appends the branch name to the job name + if (env.BRANCH_NAME) { + short_job_name = JOB_NAME.substring(0, JOB_NAME.lastIndexOf('/')) + } else { + short_job_name = JOB_NAME + } + + return "ci/jenkins/${short_job_name}" +} + +def parallel_stage(stage_name, steps) { + // Allow to pass an array of steps that will be executed in parallel in a stage + new_map = [:] + + for (def step in steps) { + new_map = new_map << step + } + + stage(stage_name) { + parallel new_map + } +} def assign_node_labels(args) { + // This function allows to assign instance labels to the generalized placeholders. + // This serves two purposes: + // 1. Allow generalized placeholders (e.g. NODE_WINDOWS_CPU) in the job definition + // in order to abstract away the underlying node label. This allows to schedule a job + // onto a different node for testing or security reasons. 
This could be, for example, + // when you want to test a new set of slaves on separate labels or when a job should + // only be run on restricted slaves + // 2. Restrict the allowed job types within a Jenkinsfile. For example, a UNIX-CPU-only + // Jenkinsfile should not allowed access to Windows or GPU instances. This prevents + // users from just copy&pasting something into an existing Jenkinsfile without + // knowing about the limitations. NODE_LINUX_CPU = args.linux_cpu NODE_LINUX_GPU = args.linux_gpu NODE_LINUX_GPU_P3 = args.linux_gpu_p3 NODE_WINDOWS_CPU = args.windows_cpu NODE_WINDOWS_GPU = args.windows_gpu + NODE_UTILITY = args.utility } def main_wrapper(args) { @@ -158,21 +247,27 @@ def main_wrapper(args) { // assign any caught errors here err = null try { + update_github_commit_status('PENDING', 'Job has been enqueued') args['core_logic']() // set build status to success at the end currentBuild.result = "SUCCESS" + update_github_commit_status('SUCCESS', 'Job succeeded') } catch (caughtError) { - node(NODE_LINUX_CPU) { + node(NODE_UTILITY) { sh "echo caught ${caughtError}" err = caughtError currentBuild.result = "FAILURE" + update_github_commit_status('FAILURE', 'Job failed') } } finally { - node(NODE_LINUX_CPU) { + node(NODE_UTILITY) { // Call failure handler args['failure_handler']() - + + // Clean workspace to reduce space requirements + cleanWs() + // Remember to rethrow so the build is marked as failing if (err) { throw err diff --git a/ci/README.md b/ci/README.md index 3737fc740..f56a6f6a7 100644 --- a/ci/README.md +++ b/ci/README.md @@ -5,6 +5,8 @@ Docker containers You need docker and nvidia docker if you have a GPU. +Also you need to run `pip3 install docker` as it uses the [docker python module](https://docker-py.readthedocs.io/en/stable/containers.html#) + If you are in ubuntu an easy way to install Docker CE is executing the following script: @@ -86,9 +88,9 @@ source 3rdparty folder of the parent mxnet sources concurrent builds of different platforms is NOT SUPPORTED. ## ccache -For all builds a directory from the host system is mapped where ccache will store cached -compiled object files (defaults to /tmp/ci_ccache). This will speed up rebuilds -significantly. You can set this directory explicitly by setting CCACHE_DIR environment +For all builds a directory from the host system is mapped where ccache will store cached +compiled object files (defaults to /tmp/ci_ccache). This will speed up rebuilds +significantly. You can set this directory explicitly by setting CCACHE_DIR environment variable. All ccache instances are currently set to be 10 Gigabytes max in size. @@ -98,5 +100,84 @@ To run the unit tests under qemu: ./build.py -p armv7 && ./build.py -p test.arm_qemu ./runtime_functions.py run_ut_py3_qemu ``` -To get a shell on the container and debug issues with the emulator itself: -Run the output of `./build.py -p test.arm_qemu --print-docker-run` +To get a shell on the container and debug issues with the emulator itself, we build the container +and then execute it interactively. We can afterwards use port 2222 on the host to connect with SSH. + + +``` +ci/build.py -p test.arm_qemu -b && docker run -p2222:2222 -ti mxnetci/build.test.arm_qemu +``` + +Then from another terminal: + +``` +ssh -o StrictHostKeyChecking=no -p 2222 qemu@localhost +``` + +There are two pre-configured users: `root` and `qemu` both without passwords. 
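The `pip3 install docker` requirement noted above exists because `ci/build.py` drives these containers through the docker Python SDK. A rough sketch of the pattern it follows — start a container detached, stream its logs, wait for the exit status, then stop and remove it — using a placeholder image tag and command rather than the exact build.py code:

```python
import docker

client = docker.from_env()

# Placeholder tag/command: build.py derives the real tag from the platform name
# and the --docker-registry argument, and the command from runtime_functions.sh.
container = client.containers.run(
    "mxnet_local/build.armv7",
    "/work/runtime_functions.sh build_armv7",
    detach=True,
    shm_size="500m",
    environment={"CCACHE_DIR": "/work/ccache"},
)

status = 1
try:
    # Stream build output as it is produced, then collect the exit status.
    for chunk in container.logs(stream=True, stdout=True, stderr=True):
        print(chunk.decode(), end="")
    status = container.wait(timeout=600).get("StatusCode", 1)
finally:
    container.stop()
    container.remove()

print("container exit status:", status)
```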
+ + +### Example of reproducing a test result with QEMU on ARM + + +You might want to enable a debug build first: + +``` +$ git diff +diff --git a/ci/docker/runtime_functions.sh b/ci/docker/runtime_functions.sh +index 39631f9..666ceea 100755 +--- a/ci/docker/runtime_functions.sh ++++ b/ci/docker/runtime_functions.sh +@@ -172,6 +172,7 @@ build_armv7() { + -DUSE_LAPACK=OFF \ + -DBUILD_CPP_EXAMPLES=OFF \ + -Dmxnet_LINKER_LIBS=-lgfortran \ ++ -DCMAKE_BUILD_TYPE=Debug \ + -G Ninja /work/mxnet + + ninja -v + +``` + +Then we build the project for armv7, the test container and start QEMU inside docker: + +``` +ci/build.py -p armv7 +ci/build.py -p test.arm_qemu -b && docker run -p2222:2222 -ti mxnetci/build.test.arm_qemu +``` + + + +At this point we copy artifacts and sources to the VM, in another terminal (host) do the following: + +``` +# Copy mxnet sources to the VM +rsync --delete -e 'ssh -p2222' --exclude='.git/' -zvaP ./ qemu@localhost:mxnet + + +# Ssh into the vm +ssh -p2222 qemu@localhost + +cd mxnet + +# Execute a single failing C++ test +build/tests/mxnet_unit_tests --gtest_filter="ACTIVATION_PERF.ExecuteBidirectional" + +# To install MXNet: +sudo pip3 install --upgrade --force-reinstall build/mxnet-1.3.1-py2.py3-none-any.whl + +# Execute a single python test: + +nosetests-3.4 -v -s tests/python/unittest/test_ndarray.py + + +# Debug with cgdb +sudo apt install -y libstdc++6-6-dbg +cgdb build/tests/mxnet_unit_tests + +(gdb) !pwd +/home/qemu/mxnet +(gdb) set substitute-path /work /home/qemu +(gdb) set substitute-path /build/gcc-6-6mK9AW/gcc-6-6.3.0/build/arm-linux-gnueabihf/libstdc++-v3/include/ /usr/include/c++/6/ +(gdb) r --gtest_filter="ACTIVATION_PERF.ExecuteBidirectional" +``` diff --git a/ci/build.py b/ci/build.py index 238b89ae7..3f420d5d3 100755 --- a/ci/build.py +++ b/ci/build.py @@ -92,34 +92,38 @@ def get_dockerfiles_path(): def get_platforms(path: str = get_dockerfiles_path()) -> List[str]: """Get a list of architectures given our dockerfiles""" - dockerfiles = glob.glob(os.path.join(path, "Dockerfile.build.*")) + dockerfiles = glob.glob(os.path.join(path, "Dockerfile.*")) dockerfiles = list(filter(lambda x: x[-1] != '~', dockerfiles)) - files = list(map(lambda x: re.sub(r"Dockerfile.build.(.*)", r"\1", x), dockerfiles)) + files = list(map(lambda x: re.sub(r"Dockerfile.(.*)", r"\1", x), dockerfiles)) platforms = list(map(lambda x: os.path.split(x)[1], sorted(files))) return platforms def get_docker_tag(platform: str, registry: str) -> str: """:return: docker tag to be used for the container""" - return "{0}/build.{1}".format(registry, platform) + platform = platform if any(x in platform for x in ['build.', 'publish.']) else 'build.{}'.format(platform) + if not registry: + registry = "mxnet_local" + return "{0}/{1}".format(registry, platform) def get_dockerfile(platform: str, path=get_dockerfiles_path()) -> str: - return os.path.join(path, "Dockerfile.build.{0}".format(platform)) + platform = platform if any(x in platform for x in ['build.', 'publish.']) else 'build.{}'.format(platform) + return os.path.join(path, "Dockerfile.{0}".format(platform)) def get_docker_binary(use_nvidia_docker: bool) -> str: return "nvidia-docker" if use_nvidia_docker else "docker" -def build_docker(platform: str, docker_binary: str, registry: str, num_retries: int, use_cache: bool) -> str: +def build_docker(platform: str, docker_binary: str, registry: str, num_retries: int, no_cache: bool) -> str: """ Build a container for the given platform :param platform: Platform :param docker_binary: docker 
binary to use (docker/nvidia-docker) :param registry: Dockerhub registry name :param num_retries: Number of retries to build the docker image - :param use_cache: will pass cache_from to docker to use the previously pulled tag + :param no_cache: pass no-cache to docker to rebuild the images :return: Id of the top level image """ @@ -145,7 +149,9 @@ def build_docker(platform: str, docker_binary: str, registry: str, num_retries: "-f", get_dockerfile(platform), "--build-arg", "USER_ID={}".format(os.getuid()), "--build-arg", "GROUP_ID={}".format(os.getgid())] - if use_cache: + if no_cache: + cmd.append("--no-cache") + elif registry: cmd.extend(["--cache-from", tag]) cmd.extend(["-t", tag, get_dockerfiles_path()]) @@ -209,7 +215,7 @@ def default_ccache_dir() -> str: ccache_dir = "/tmp/_mxnet_ccache" os.makedirs(ccache_dir, exist_ok=True) return ccache_dir - return os.path.join(tempfile.gettempdir(), "ci_ccache") + return os.path.join(os.path.expanduser("~"), ".ccache") def trim_container_id(cid): @@ -224,20 +230,21 @@ def container_run(platform: str, local_ccache_dir: str, command: List[str], cleanup: Cleanup, + environment: Dict[str, str], dry_run: bool = False) -> int: """Run command in a container""" container_wait_s = 600 # # Environment setup # - environment = { + environment.update({ 'CCACHE_MAXSIZE': '500G', 'CCACHE_TEMPDIR': '/tmp/ccache', # temp dir should be local and not shared 'CCACHE_DIR': '/work/ccache', # this path is inside the container as /work/ccache is # mounted 'CCACHE_LOGFILE': '/tmp/ccache.log', # a container-scoped log, useful for ccache # verification. - } + }) # These variables are passed to the container to the process tree killer can find runaway # process inside the container # https://wiki.jenkins.io/display/JENKINS/ProcessTreeKiller @@ -294,7 +301,6 @@ def container_run(platform: str, # noinspection PyShadowingNames # runc is default (docker info | grep -i runtime) runtime = 'nvidia' - container = docker_client.containers.run( tag, runtime=runtime, @@ -312,52 +318,59 @@ def container_run(platform: str, {'bind': '/work/ccache', 'mode': 'rw'}, }, environment=environment) - logging.info("Started container: %s", trim_container_id(container.id)) - # Race condition: - # If the previous call is interrupted then it's possible that the container is not cleaned up - # We avoid by masking the signals temporarily - cleanup.add_container(container) - signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGINT, signal.SIGTERM}) - # - ############################# - - stream = container.logs(stream=True, stdout=True, stderr=True) - sys.stdout.flush() - for chunk in stream: - sys.stdout.buffer.write(chunk) - sys.stdout.buffer.flush() - sys.stdout.flush() - stream.close() try: - logging.info("Waiting for status of container %s for %d s.", - trim_container_id(container.id), - container_wait_s) - wait_result = container.wait(timeout=container_wait_s) - logging.info("Container exit status: %s", wait_result) - ret = wait_result.get('StatusCode', 200) - except Exception as e: - logging.exception(e) - ret = 150 - - # Stop - try: - logging.info("Stopping container: %s", trim_container_id(container.id)) - container.stop() - except Exception as e: - logging.exception(e) - ret = 151 + logging.info("Started container: %s", trim_container_id(container.id)) + # Race condition: + # If the previous call is interrupted then it's possible that the container is not cleaned up + # We avoid by masking the signals temporarily + cleanup.add_container(container) + signal.pthread_sigmask(signal.SIG_UNBLOCK, 
{signal.SIGINT, signal.SIGTERM}) + # + ############################# + + stream = container.logs(stream=True, stdout=True, stderr=True) + sys.stdout.flush() + for chunk in stream: + sys.stdout.buffer.write(chunk) + sys.stdout.buffer.flush() + sys.stdout.flush() + stream.close() + try: + logging.info("Waiting for status of container %s for %d s.", + trim_container_id(container.id), + container_wait_s) + wait_result = container.wait(timeout=container_wait_s) + logging.info("Container exit status: %s", wait_result) + ret = wait_result.get('StatusCode', 200) + if ret != 0: + logging.error("Container exited with an error 😞") + else: + logging.info("Container exited with success 👍") + except Exception as e: + logging.exception(e) + ret = 150 - # Remove - try: - logging.info("Removing container: %s", trim_container_id(container.id)) - container.remove() - except Exception as e: - logging.exception(e) - ret = 152 - cleanup.remove_container(container) - containers = docker_client.containers.list() - if containers: - logging.info("Other running containers: %s", [trim_container_id(x.id) for x in containers]) + # Stop + try: + logging.info("Stopping container: %s", trim_container_id(container.id)) + container.stop() + except Exception as e: + logging.exception(e) + ret = 151 + + # Remove + try: + logging.info("Removing container: %s", trim_container_id(container.id)) + container.remove() + except Exception as e: + logging.exception(e) + ret = 152 + cleanup.remove_container(container) + containers = docker_client.containers.list() + if containers: + logging.info("Other running containers: %s", [trim_container_id(x.id) for x in containers]) + except docker.errors.NotFound as e: + logging.info("Container was stopped before cleanup started: %s", e) return ret @@ -391,11 +404,15 @@ def script_name() -> str: """:returns: script name with leading paths removed""" return os.path.split(sys.argv[0])[1] - -def main() -> int: +def config_logging(): + import time logging.getLogger().setLevel(logging.INFO) logging.getLogger("requests").setLevel(logging.WARNING) - logging.basicConfig(format='{}: %(asctime)-15s %(message)s'.format(script_name())) + logging.basicConfig(format='{}: %(asctime)sZ %(levelname)s %(message)s'.format(script_name())) + logging.Formatter.converter = time.gmtime + +def main() -> int: + config_logging() logging.info("MXNet container based build tool.") log_environment() @@ -433,7 +450,7 @@ def main() -> int: action='store_true') parser.add_argument("-d", "--docker-registry", - help="Dockerhub registry name to retrieve cache from. Default is 'mxnetci'", + help="Dockerhub registry name to retrieve cache from.", default='mxnetci', type=str) @@ -442,10 +459,12 @@ def main() -> int: default=1, type=int) - parser.add_argument("-c", "--no-dockerhub-cache", action="store_true", - help="Disables use of --cache-from option on docker build, allowing docker" - " to use local layers for caching. If absent, we use the cache from dockerhub" - " which is the default.") + parser.add_argument("--no-cache", action="store_true", + help="passes --no-cache to docker build") + + parser.add_argument("-e", "--environment", nargs="*", default=[], + help="Environment variables for the docker container. 
" + "Specify with a list containing either names or name=value") parser.add_argument("command", help="command to run in the container", @@ -458,9 +477,6 @@ def main() -> int: args = parser.parse_args() - def use_cache(): - return not args.no_dockerhub_cache or under_ci() - command = list(chain(*args.command)) docker_binary = get_docker_binary(args.nvidiadocker) @@ -478,15 +494,18 @@ def signal_handler(signum, _): signal.signal(signal.SIGTERM, signal_handler) signal.signal(signal.SIGINT, signal_handler) + environment = dict([(e.split('=')[:2] if '=' in e else (e, os.environ[e])) + for e in args.environment]) + if args.list: print(list_platforms()) elif args.platform: platform = args.platform tag = get_docker_tag(platform=platform, registry=args.docker_registry) - if use_cache(): + if args.docker_registry: load_docker_cache(tag=tag, docker_registry=args.docker_registry) build_docker(platform=platform, docker_binary=docker_binary, registry=args.docker_registry, - num_retries=args.docker_build_retries, use_cache=use_cache()) + num_retries=args.docker_build_retries, no_cache=args.no_cache) if args.build_only: logging.warning("Container was just built. Exiting due to build-only.") return 0 @@ -497,13 +516,13 @@ def signal_handler(signum, _): ret = container_run( platform=platform, nvidia_runtime=args.nvidiadocker, shared_memory_size=args.shared_memory_size, command=command, docker_registry=args.docker_registry, - local_ccache_dir=args.ccache_dir, cleanup=cleanup) + local_ccache_dir=args.ccache_dir, cleanup=cleanup, environment=environment) elif args.print_docker_run: command = [] ret = container_run( platform=platform, nvidia_runtime=args.nvidiadocker, shared_memory_size=args.shared_memory_size, command=command, docker_registry=args.docker_registry, - local_ccache_dir=args.ccache_dir, dry_run=True, cleanup=cleanup) + local_ccache_dir=args.ccache_dir, dry_run=True, cleanup=cleanup, environment=environment) else: # With no commands, execute a build function for the target platform command = ["/work/mxnet/ci/docker/runtime_functions.sh", "build_{}".format(platform)] @@ -511,7 +530,7 @@ def signal_handler(signum, _): ret = container_run( platform=platform, nvidia_runtime=args.nvidiadocker, shared_memory_size=args.shared_memory_size, command=command, docker_registry=args.docker_registry, - local_ccache_dir=args.ccache_dir, cleanup=cleanup) + local_ccache_dir=args.ccache_dir, cleanup=cleanup, environment=environment) if ret != 0: logging.critical("Execution of %s failed with status: %d", command, ret) @@ -519,14 +538,14 @@ def signal_handler(signum, _): elif args.all: platforms = get_platforms() + platforms = [platform for platform in platforms if 'build.' 
in platform] logging.info("Building for all architectures: %s", platforms) logging.info("Artifacts will be produced in the build/ directory.") for platform in platforms: tag = get_docker_tag(platform=platform, registry=args.docker_registry) - if use_cache(): - load_docker_cache(tag=tag, docker_registry=args.docker_registry) + load_docker_cache(tag=tag, docker_registry=args.docker_registry) build_docker(platform, docker_binary=docker_binary, registry=args.docker_registry, - num_retries=args.docker_build_retries, use_cache=use_cache()) + num_retries=args.docker_build_retries, no_cache=args.no_cache) if args.build_only: continue shutil.rmtree(buildir(), ignore_errors=True) @@ -540,7 +559,7 @@ def signal_handler(signum, _): container_run( platform=platform, nvidia_runtime=args.nvidiadocker, shared_memory_size=args.shared_memory_size, command=command, docker_registry=args.docker_registry, - local_ccache_dir=args.ccache_dir, cleanup=cleanup) + local_ccache_dir=args.ccache_dir, cleanup=cleanup, environment=environment) shutil.move(buildir(), plat_buildir) logging.info("Built files left in: %s", plat_buildir) diff --git a/ci/build_windows.py b/ci/build_windows.py index 56769f7cd..b7d47fb1f 100755 --- a/ci/build_windows.py +++ b/ci/build_windows.py @@ -160,9 +160,6 @@ def windows_package(args): copy_tree('python', j(pkgdir, 'python')) logging.info('packing headers') copy_tree('include', j(pkgdir, 'include')) - copy_tree(j('3rdparty','dmlc-core','include'), j(pkgdir, 'include')) - copy_tree(j('3rdparty','mshadow', 'mshadow'), j(pkgdir, 'include', 'mshadow')) - copy_tree(j('3rdparty','tvm','nnvm', 'include'), j(pkgdir,'include', 'nnvm', 'include')) logging.info("Compressing package: %s", pkgfile) check_call(['7z', 'a', pkgfile, pkgdir]) diff --git a/ci/docker/Dockerfile.build.android_armv7 b/ci/docker/Dockerfile.build.android_armv7 index 799e29c99..a2e98cd2e 100644 --- a/ci/docker/Dockerfile.build.android_armv7 +++ b/ci/docker/Dockerfile.build.android_armv7 @@ -18,7 +18,7 @@ # # Dockerfile to build MXNet for Android ARMv7 -FROM mxnetci/dockcross-linux-base:08212018 +FROM mxnetcipinned/dockcross-base:11262018 MAINTAINER Pedro Larroy "pllarroy@amazon.com" # The cross-compiling emulator @@ -75,6 +75,11 @@ ENV OpenBLAS_DIR=${CROSS_ROOT} WORKDIR /work +ARG USER_ID=0 +ARG GROUP_ID=0 +COPY install/ubuntu_adduser.sh /work/ +RUN /work/ubuntu_adduser.sh + COPY runtime_functions.sh /work/ WORKDIR /work/mxnet diff --git a/ci/docker/Dockerfile.build.android_armv8 b/ci/docker/Dockerfile.build.android_armv8 index 2c2c71c00..f7de86763 100644 --- a/ci/docker/Dockerfile.build.android_armv8 +++ b/ci/docker/Dockerfile.build.android_armv8 @@ -18,7 +18,7 @@ # # Dockerfile to build MXNet for Android ARM64/ARMv8 -FROM mxnetci/dockcross-linux-base:08212018 +FROM mxnetcipinned/dockcross-base:11262018 MAINTAINER Pedro Larroy "pllarroy@amazon.com" RUN apt-get update && apt-get install -y \ @@ -74,6 +74,12 @@ ENV CXX=${CROSS_ROOT}/bin/${CROSS_TRIPLE}-clang++ COPY install/android_arm64_openblas.sh /work/ RUN /work/android_arm64_openblas.sh ENV CPLUS_INCLUDE_PATH /work/deps/OpenBLAS -WORKDIR /work/build + +ARG USER_ID=0 +ARG GROUP_ID=0 +COPY install/ubuntu_adduser.sh /work/ +RUN /work/ubuntu_adduser.sh COPY runtime_functions.sh /work/ + +WORKDIR /work/build \ No newline at end of file diff --git a/ci/docker/Dockerfile.build.armv6 b/ci/docker/Dockerfile.build.armv6 index 78071fa33..60e223b7a 100644 --- a/ci/docker/Dockerfile.build.armv6 +++ b/ci/docker/Dockerfile.build.armv6 @@ -18,7 +18,7 @@ # # Dockerfile to build MXNet for ARMv6 
-FROM mxnetci/dockcross-linux-armv6:08212018 +FROM mxnetcipinned/dockcross-linux-armv6:11262018 ENV ARCH armv6l ENV HOSTCC gcc @@ -38,5 +38,10 @@ ENV OpenBLAS_DIR=${CROSS_ROOT} COPY install/deb_ubuntu_ccache.sh /work/ RUN /work/deb_ubuntu_ccache.sh +ARG USER_ID=0 +ARG GROUP_ID=0 +COPY install/ubuntu_adduser.sh /work/ +RUN /work/ubuntu_adduser.sh + COPY runtime_functions.sh /work/ WORKDIR /work/mxnet diff --git a/ci/docker/Dockerfile.build.armv7 b/ci/docker/Dockerfile.build.armv7 index 9a23a5dbe..0b557d583 100644 --- a/ci/docker/Dockerfile.build.armv7 +++ b/ci/docker/Dockerfile.build.armv7 @@ -18,7 +18,7 @@ # # Dockerfile to build MXNet for Android ARMv7 -FROM mxnetci/dockcross-linux-armv7:09182018 +FROM mxnetcipinned/dockcross-linux-armv7:11262018 ENV ARCH armv7l ENV HOSTCC gcc @@ -38,5 +38,10 @@ ENV OpenBLAS_DIR=${CROSS_ROOT} COPY install/deb_ubuntu_ccache.sh /work/ RUN /work/deb_ubuntu_ccache.sh +ARG USER_ID=0 +ARG GROUP_ID=0 +COPY install/ubuntu_adduser.sh /work/ +RUN /work/ubuntu_adduser.sh + COPY runtime_functions.sh /work/ WORKDIR /work/mxnet diff --git a/ci/docker/Dockerfile.build.armv8 b/ci/docker/Dockerfile.build.armv8 index 46cc229d5..ef9c95865 100644 --- a/ci/docker/Dockerfile.build.armv8 +++ b/ci/docker/Dockerfile.build.armv8 @@ -18,7 +18,7 @@ # # Dockerfile to build MXNet for ARM64/ARMv8 -FROM mxnetci/dockcross-linux-arm64:08212018 +FROM mxnetcipinned/dockcross-linux-arm64:11262018 ENV ARCH aarch64 ENV HOSTCC gcc @@ -27,8 +27,8 @@ ENV TARGET ARMV8 WORKDIR /work/deps # gh issue #11567 https://github.com/apache/incubator-mxnet/issues/11567 -RUN sed -i '\#deb http://cdn-fastly.deb.debian.org/debian-security jessie/updates main#d' /etc/apt/sources.list -RUN sed -i 's/cdn-fastly.//' /etc/apt/sources.list +#RUN sed -i '\#deb http://cdn-fastly.deb.debian.org/debian-security jessie/updates main#d' /etc/apt/sources.list +#RUN sed -i 's/cdn-fastly.//' /etc/apt/sources.list COPY install/ubuntu_arm.sh /work/ RUN /work/ubuntu_arm.sh @@ -42,5 +42,10 @@ ENV OpenBLAS_DIR=${CROSS_ROOT} COPY install/deb_ubuntu_ccache.sh /work/ RUN /work/deb_ubuntu_ccache.sh +ARG USER_ID=0 +ARG GROUP_ID=0 +COPY install/ubuntu_adduser.sh /work/ +RUN /work/ubuntu_adduser.sh + COPY runtime_functions.sh /work/ WORKDIR /work/build diff --git a/ci/docker/Dockerfile.build.centos7_cpu b/ci/docker/Dockerfile.build.centos7_cpu index 076ef5df9..e2802aa2f 100644 --- a/ci/docker/Dockerfile.build.centos7_cpu +++ b/ci/docker/Dockerfile.build.centos7_cpu @@ -28,6 +28,8 @@ COPY install/centos7_ccache.sh /work/ RUN /work/centos7_ccache.sh COPY install/centos7_python.sh /work/ RUN /work/centos7_python.sh +COPY install/centos7_scala.sh /work/ +RUN /work/centos7_scala.sh COPY install/ubuntu_mklml.sh /work/ RUN /work/ubuntu_mklml.sh diff --git a/ci/docker/Dockerfile.build.jetson b/ci/docker/Dockerfile.build.jetson index 15518cd6f..07097887f 100644 --- a/ci/docker/Dockerfile.build.jetson +++ b/ci/docker/Dockerfile.build.jetson @@ -22,15 +22,15 @@ FROM nvidia/cuda:9.0-cudnn7-devel as cudabuilder -FROM mxnetci/dockcross-linux-arm64:05082018 +FROM mxnetcipinned/dockcross-linux-arm64:11262018 ENV ARCH aarch64 ENV HOSTCC gcc ENV TARGET ARMV8 # gh issue #11567 https://github.com/apache/incubator-mxnet/issues/11567 -RUN sed -i '\#deb http://cdn-fastly.deb.debian.org/debian-security jessie/updates main#d' /etc/apt/sources.list -RUN sed -i 's/cdn-fastly.//' /etc/apt/sources.list +#RUN sed -i '\#deb http://cdn-fastly.deb.debian.org/debian-security jessie/updates main#d' /etc/apt/sources.list +#RUN sed -i 's/cdn-fastly.//' /etc/apt/sources.list 
WORKDIR /work/deps @@ -77,10 +77,16 @@ RUN JETPACK_DOWNLOAD_PREFIX=https://developer.download.nvidia.com/devzone/devcen dpkg -i --force-architecture $ARM_NVINFER_INSTALLER_PACKAGE && \ dpkg -i --force-architecture $ARM_NVINFER_DEV_INSTALLER_PACKAGE && \ apt update -y || true && apt install -y cuda-libraries-dev-9-0 libcudnn7-dev libnvinfer-dev +RUN ln -s /usr/include/aarch64-linux-gnu/cudnn_v7.h /usr/include/aarch64-linux-gnu/cudnn.h ENV PATH $PATH:/usr/local/cuda/bin ENV NVCCFLAGS "-m64" ENV CUDA_ARCH "-gencode arch=compute_53,code=sm_53 -gencode arch=compute_62,code=sm_62" ENV NVCC /usr/local/cuda/bin/nvcc +ARG USER_ID=0 +ARG GROUP_ID=0 +COPY install/ubuntu_adduser.sh /work/ +RUN /work/ubuntu_adduser.sh + COPY runtime_functions.sh /work/ WORKDIR /work/mxnet diff --git a/ci/docker/Dockerfile.build.test.arm_qemu b/ci/docker/Dockerfile.build.test.arm_qemu index fde105c3f..68891a72c 100644 --- a/ci/docker/Dockerfile.build.test.arm_qemu +++ b/ci/docker/Dockerfile.build.test.arm_qemu @@ -39,6 +39,8 @@ RUN /work/ubuntu_adduser.sh COPY runtime_functions.sh /work/ COPY qemu/* /work/ -COPY qemu/ansible.cfg /etc/ansible/ansible.cfg -CMD ["./runtime_functions.py","run_ut_py3_qemu"] +# SSH to the Qemu VM +EXPOSE 2222/tcp + +CMD ["./runtime_functions.py","run_qemu_interactive"] diff --git a/ci/docker/Dockerfile.build.ubuntu_cpu b/ci/docker/Dockerfile.build.ubuntu_cpu index 7c7e2240e..2df9f5887 100644 --- a/ci/docker/Dockerfile.build.ubuntu_cpu +++ b/ci/docker/Dockerfile.build.ubuntu_cpu @@ -54,6 +54,9 @@ RUN /work/ubuntu_clang.sh COPY install/ubuntu_gcc8.sh /work/ RUN /work/ubuntu_gcc8.sh +COPY install/ubuntu_mkl.sh /work/ +RUN /work/ubuntu_mkl.sh + COPY install/ubuntu_mklml.sh /work/ RUN /work/ubuntu_mklml.sh diff --git a/ci/docker/Dockerfile.publish.test.centos7_cpu b/ci/docker/Dockerfile.publish.test.centos7_cpu new file mode 100644 index 000000000..7d2844529 --- /dev/null +++ b/ci/docker/Dockerfile.publish.test.centos7_cpu @@ -0,0 +1,38 @@ +# -*- mode: dockerfile -*- +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# +# Dockerfile to build and run MXNet on CentOS 7 for CPU + +FROM centos:7 + +WORKDIR /work/deps + +COPY install/centos7_base.sh /work/ +RUN /work/centos7_base.sh + +COPY install/centos7_scala.sh /work/ +RUN /work/centos7_scala.sh + +ARG USER_ID=0 +COPY install/centos7_adduser.sh /work/ +RUN /work/centos7_adduser.sh + +ENV PYTHONPATH=./python/ +WORKDIR /work/mxnet + +COPY runtime_functions.sh /work/ diff --git a/ci/docker/Dockerfile.publish.test.centos7_gpu b/ci/docker/Dockerfile.publish.test.centos7_gpu new file mode 100644 index 000000000..e7f584683 --- /dev/null +++ b/ci/docker/Dockerfile.publish.test.centos7_gpu @@ -0,0 +1,38 @@ +# -*- mode: dockerfile -*- +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# +# Dockerfile to build and run MXNet on CentOS 7 for CPU + +FROM nvidia/cuda:9.2-cudnn7-devel-centos7 + +WORKDIR /work/deps + +COPY install/centos7_base.sh /work/ +RUN /work/centos7_base.sh + +COPY install/centos7_scala.sh /work/ +RUN /work/centos7_scala.sh + +ARG USER_ID=0 +COPY install/centos7_adduser.sh /work/ +RUN /work/centos7_adduser.sh + +ENV PYTHONPATH=./python/ +WORKDIR /work/mxnet + +COPY runtime_functions.sh /work/ diff --git a/ci/docker/Dockerfile.publish.test.ubuntu1404_cpu b/ci/docker/Dockerfile.publish.test.ubuntu1404_cpu new file mode 100644 index 000000000..035837686 --- /dev/null +++ b/ci/docker/Dockerfile.publish.test.ubuntu1404_cpu @@ -0,0 +1,39 @@ +# -*- mode: dockerfile -*- +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# +# Dockerfile to build and run MXNet on Ubuntu 14.04 for CPU + +FROM ubuntu:14.04 + +WORKDIR /work/deps + +COPY install/ubuntu_base.sh /work/ +RUN /work/ubuntu_base.sh + +COPY install/ubuntu_scala.sh /work/ +RUN /work/ubuntu_scala.sh + +ARG USER_ID=0 +ARG GROUP_ID=0 +COPY install/ubuntu_adduser.sh /work/ +RUN /work/ubuntu_adduser.sh + +COPY runtime_functions.sh /work/ + +WORKDIR /work/mxnet +ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib diff --git a/ci/docker/Dockerfile.publish.test.ubuntu1404_gpu b/ci/docker/Dockerfile.publish.test.ubuntu1404_gpu new file mode 100644 index 000000000..854dd68a6 --- /dev/null +++ b/ci/docker/Dockerfile.publish.test.ubuntu1404_gpu @@ -0,0 +1,40 @@ +# -*- mode: dockerfile -*- +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# +# Dockerfile to run MXNet on Ubuntu 14.04 for GPU + +# Use CPU with setup_gpu script +FROM ubuntu:14.04 + +WORKDIR /work/deps + +COPY install/ubuntu_base.sh /work/ +RUN /work/ubuntu_base.sh + +COPY install/ubuntu_scala.sh /work/ +RUN /work/ubuntu_scala.sh + +ARG USER_ID=0 +ARG GROUP_ID=0 +COPY install/ubuntu_adduser.sh /work/ +RUN /work/ubuntu_adduser.sh + +COPY runtime_functions.sh /work/ + +WORKDIR /work/mxnet +ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib diff --git a/ci/docker/Dockerfile.publish.test.ubuntu1604_cpu b/ci/docker/Dockerfile.publish.test.ubuntu1604_cpu new file mode 100644 index 000000000..bbb7b6a0d --- /dev/null +++ b/ci/docker/Dockerfile.publish.test.ubuntu1604_cpu @@ -0,0 +1,39 @@ +# -*- mode: dockerfile -*- +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+# +# Dockerfile to build and run MXNet on Ubuntu 16.04 for CPU + +FROM ubuntu:16.04 + +WORKDIR /work/deps + +COPY install/ubuntu_base.sh /work/ +RUN /work/ubuntu_base.sh + +COPY install/ubuntu_scala.sh /work/ +RUN /work/ubuntu_scala.sh + +ARG USER_ID=0 +ARG GROUP_ID=0 +COPY install/ubuntu_adduser.sh /work/ +RUN /work/ubuntu_adduser.sh + +COPY runtime_functions.sh /work/ + +WORKDIR /work/mxnet +ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib diff --git a/ci/docker/Dockerfile.publish.test.ubuntu1604_gpu b/ci/docker/Dockerfile.publish.test.ubuntu1604_gpu new file mode 100644 index 000000000..660461dc0 --- /dev/null +++ b/ci/docker/Dockerfile.publish.test.ubuntu1604_gpu @@ -0,0 +1,39 @@ +# -*- mode: dockerfile -*- +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# +# Dockerfile to run MXNet on Ubuntu 16.04 for GPU + +FROM nvidia/cuda:9.2-cudnn7-devel-ubuntu16.04 + +WORKDIR /work/deps + +COPY install/ubuntu_base.sh /work/ +RUN /work/ubuntu_base.sh + +COPY install/ubuntu_scala.sh /work/ +RUN /work/ubuntu_scala.sh + +ARG USER_ID=0 +ARG GROUP_ID=0 +COPY install/ubuntu_adduser.sh /work/ +RUN /work/ubuntu_adduser.sh + +COPY runtime_functions.sh /work/ + +WORKDIR /work/mxnet +ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib diff --git a/example/bi-lstm-sort/gen_data.py b/ci/docker/Dockerfile.publish.test.ubuntu1804_cpu similarity index 61% rename from example/bi-lstm-sort/gen_data.py rename to ci/docker/Dockerfile.publish.test.ubuntu1804_cpu index 55af1b455..e3a8c193f 100644 --- a/example/bi-lstm-sort/gen_data.py +++ b/ci/docker/Dockerfile.publish.test.ubuntu1804_cpu @@ -1,3 +1,4 @@ +# -*- mode: dockerfile -*- # Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information @@ -14,24 +15,27 @@ # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. 
+# +# Dockerfile to build and run MXNet on Ubuntu 18.04 for CPU + +FROM ubuntu:18.04 + +WORKDIR /work/deps + +ENV DEBIAN_FRONTEND noninteractive + +COPY install/ubuntu_base.sh /work/ +RUN /work/ubuntu_base.sh + +COPY install/ubuntu_scala.sh /work/ +RUN /work/ubuntu_scala.sh + +ARG USER_ID=0 +ARG GROUP_ID=0 +COPY install/ubuntu_adduser.sh /work/ +RUN /work/ubuntu_adduser.sh + +COPY runtime_functions.sh /work/ -import random - -vocab = [str(x) for x in range(100, 1000)] -sw_train = open("sort.train.txt", "w") -sw_test = open("sort.test.txt", "w") -sw_valid = open("sort.valid.txt", "w") - -for i in range(1000000): - seq = " ".join([vocab[random.randint(0, len(vocab) - 1)] for j in range(5)]) - k = i % 50 - if k == 0: - sw_test.write(seq + "\n") - elif k == 1: - sw_valid.write(seq + "\n") - else: - sw_train.write(seq + "\n") - -sw_train.close() -sw_test.close() -sw_valid.close() +WORKDIR /work/mxnet +ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib diff --git a/ci/docker/Dockerfile.publish.test.ubuntu1804_gpu b/ci/docker/Dockerfile.publish.test.ubuntu1804_gpu new file mode 100644 index 000000000..99f7e0d3e --- /dev/null +++ b/ci/docker/Dockerfile.publish.test.ubuntu1804_gpu @@ -0,0 +1,41 @@ +# -*- mode: dockerfile -*- +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# +# Dockerfile to run MXNet on Ubuntu 18.04 for GPU + +FROM nvidia/cuda:9.2-cudnn7-devel-ubuntu18.04 + +WORKDIR /work/deps + +ENV DEBIAN_FRONTEND noninteractive + +COPY install/ubuntu_base.sh /work/ +RUN /work/ubuntu_base.sh + +COPY install/ubuntu_scala.sh /work/ +RUN /work/ubuntu_scala.sh + +ARG USER_ID=0 +ARG GROUP_ID=0 +COPY install/ubuntu_adduser.sh /work/ +RUN /work/ubuntu_adduser.sh + +COPY runtime_functions.sh /work/ + +WORKDIR /work/mxnet +ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib diff --git a/ci/docker/Dockerfile.publish.ubuntu1404_cpu b/ci/docker/Dockerfile.publish.ubuntu1404_cpu new file mode 100644 index 000000000..04ce94f95 --- /dev/null +++ b/ci/docker/Dockerfile.publish.ubuntu1404_cpu @@ -0,0 +1,36 @@ +# -*- mode: dockerfile -*- +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. 
See the License for the +# specific language governing permissions and limitations +# under the License. +# +# Dockerfile to build and run MXNet on Ubuntu 14.04 for CPU + +FROM ubuntu:14.04 + +WORKDIR /work/deps + +COPY install/ubuntu_publish.sh /work/ +RUN /work/ubuntu_publish.sh + +ARG USER_ID=0 +ARG GROUP_ID=0 +COPY install/ubuntu_adduser.sh /work/ +RUN /work/ubuntu_adduser.sh + +COPY runtime_functions.sh /work/ + +WORKDIR /work/mxnet +ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib diff --git a/ci/docker/Dockerfile.publish.ubuntu1404_gpu b/ci/docker/Dockerfile.publish.ubuntu1404_gpu new file mode 100644 index 000000000..9855986a2 --- /dev/null +++ b/ci/docker/Dockerfile.publish.ubuntu1404_gpu @@ -0,0 +1,36 @@ +# -*- mode: dockerfile -*- +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. +# +# Dockerfile to run MXNet on Ubuntu 14.04 for GPU + +FROM ubuntu:14.04 + +WORKDIR /work/deps + +COPY install/ubuntu_publish.sh /work/ +RUN /work/ubuntu_publish.sh + +ARG USER_ID=0 +ARG GROUP_ID=0 +COPY install/ubuntu_adduser.sh /work/ +RUN /work/ubuntu_adduser.sh + +COPY runtime_functions.sh /work/ + +WORKDIR /work/mxnet +ENV LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib diff --git a/ci/docker/install/centos7_adduser.sh b/ci/docker/install/centos7_adduser.sh index ba72c9b92..f9d2402c9 100755 --- a/ci/docker/install/centos7_adduser.sh +++ b/ci/docker/install/centos7_adduser.sh @@ -34,4 +34,9 @@ then mkdir /work/mxnet mkdir /work/build chown -R jenkins_slave /work/ + + # Later on, we have to override the links because underlying build systems ignore our compiler settings. Thus, + # we have to give the process the proper permission to these files. This is hacky, but unfortunately + # there's no better way to do this without patching all our submodules. + chown -R jenkins_slave /usr/local/bin fi diff --git a/ci/docker/install/centos7_base.sh b/ci/docker/install/centos7_base.sh new file mode 100755 index 000000000..3b84aeb57 --- /dev/null +++ b/ci/docker/install/centos7_base.sh @@ -0,0 +1,33 @@ +#!/usr/bin/env bash + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. 
See the License for the +# specific language governing permissions and limitations +# under the License. + +# build and install are separated so changes to build don't invalidate +# the whole docker cache for the image + +set -ex + +# Multipackage installation does not fail in yum +yum -y install epel-release +yum -y install git +yum -y install wget +yum -y install make +yum -y install cmake +yum -y install unzip +yum -y install ninja-build +yum -y install gcc-gfortran diff --git a/ci/docker/install/centos7_scala.sh b/ci/docker/install/centos7_scala.sh new file mode 100755 index 000000000..5c43f011c --- /dev/null +++ b/ci/docker/install/centos7_scala.sh @@ -0,0 +1,39 @@ +#!/usr/bin/env bash + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +# build and install are separated so changes to build don't invalidate +# the whole docker cache for the image + +set -ex + +yum install -y java-1.8.0-openjdk-devel +export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk +export PATH=$JAVA_HOME/bin:$PATH +# Build from source with Maven +wget -q http://www.eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz +tar xzf apache-maven-3.3.9-bin.tar.gz +mkdir /usr/local/maven +mv apache-maven-3.3.9/ /usr/local/maven/ +alternatives --install /usr/bin/mvn mvn /usr/local/maven/apache-maven-3.3.9/bin/mvn 1 + +echo "export JAVA_HOME=/usr/lib/jvm/jre-1.8.0-openjdk" >> /etc/profile.d/maven.sh +echo "export M3_HOME=/usr/local/src/apache-maven" >> /etc/profile.d/maven.sh +echo "export PATH=$M3_HOME/bin:$JAVA_HOME/bin:$PATH" >> /etc/profile.d/maven.sh +chmod +x /etc/profile.d/maven.sh +source /etc/profile.d/maven.sh diff --git a/ci/docker/install/ubuntu_adduser.sh b/ci/docker/install/ubuntu_adduser.sh index 515a80f63..a7668bac2 100755 --- a/ci/docker/install/ubuntu_adduser.sh +++ b/ci/docker/install/ubuntu_adduser.sh @@ -40,4 +40,9 @@ then mkdir /work/mxnet mkdir /work/build chown -R jenkins_slave /work/ + + # Later on, we have to override the links because underlying build systems ignore our compiler settings. Thus, + # we have to give the process the proper permission to these files. This is hacky, but unfortunately + # there's no better way to do this without patching all our submodules. 
+ chown -R jenkins_slave /usr/local/bin fi diff --git a/ci/docker/install/ubuntu_arm_qemu.sh b/ci/docker/install/ubuntu_arm_qemu.sh index c30dc4f13..79ab67bfd 100755 --- a/ci/docker/install/ubuntu_arm_qemu.sh +++ b/ci/docker/install/ubuntu_arm_qemu.sh @@ -31,6 +31,7 @@ apt-get install -y \ qemu-system-arm \ unzip \ bzip2 \ - vim-nox + vim-nox \ + toilet -pip3 install ansible ipython +pip3 install ipython diff --git a/ci/docker/install/ubuntu_base.sh b/ci/docker/install/ubuntu_base.sh new file mode 100755 index 000000000..b34c0b3e1 --- /dev/null +++ b/ci/docker/install/ubuntu_base.sh @@ -0,0 +1,36 @@ +#!/usr/bin/env bash + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +# build and install are separated so changes to build don't invalidate +# the whole docker cache for the image + +set -ex +apt-get update || true +apt-get install -y \ + build-essential \ + ca-certificates \ + cmake \ + curl \ + git \ + ninja-build \ + libgfortran3 \ + software-properties-common \ + sudo \ + unzip \ + wget diff --git a/ci/docker/install/ubuntu_caffe.sh b/ci/docker/install/ubuntu_caffe.sh index 8c1361211..eaa8ab9b1 100755 --- a/ci/docker/install/ubuntu_caffe.sh +++ b/ci/docker/install/ubuntu_caffe.sh @@ -18,6 +18,7 @@ # under the License. set -ex +apt-get update || true apt-get install -y \ libgflags-dev \ libgoogle-glog-dev \ diff --git a/ci/docker/install/ubuntu_clang.sh b/ci/docker/install/ubuntu_clang.sh index cb0f234a1..19aada9b3 100755 --- a/ci/docker/install/ubuntu_clang.sh +++ b/ci/docker/install/ubuntu_clang.sh @@ -21,6 +21,8 @@ # the whole docker cache for the image set -ex + +apt-get update || true # Install clang 3.9 (the same version as in XCode 8.*) and 6.0 (latest major release) wget -O - http://apt.llvm.org/llvm-snapshot.gpg.key | apt-key add - && \ apt-add-repository "deb http://apt.llvm.org/xenial/ llvm-toolchain-xenial-3.9 main" && \ diff --git a/ci/docker/install/ubuntu_core.sh b/ci/docker/install/ubuntu_core.sh index 8a2ea30cd..4382aa6ae 100755 --- a/ci/docker/install/ubuntu_core.sh +++ b/ci/docker/install/ubuntu_core.sh @@ -21,12 +21,11 @@ # the whole docker cache for the image set -ex -apt-get update +apt-get update || true apt-get install -y \ apt-transport-https \ build-essential \ ca-certificates \ - cmake \ curl \ git \ libatlas-base-dev \ @@ -41,3 +40,11 @@ apt-get install -y \ sudo \ unzip \ wget + + +# Ubuntu 14.04 +if [[ $(lsb_release -r | grep 14.04) ]]; then + apt-get install -y cmake3 +else + apt-get install -y cmake +fi diff --git a/ci/docker/install/ubuntu_docs.sh b/ci/docker/install/ubuntu_docs.sh index a709b3de7..5dc201c15 100755 --- a/ci/docker/install/ubuntu_docs.sh +++ b/ci/docker/install/ubuntu_docs.sh @@ -23,6 +23,7 @@ set -ex # Install dependencies echo 'Installing dependencies...' 
+apt-get update || true apt-get install -y \ doxygen \ pandoc diff --git a/ci/docker/install/ubuntu_emscripten.sh b/ci/docker/install/ubuntu_emscripten.sh index e3d72caf0..28ede755f 100755 --- a/ci/docker/install/ubuntu_emscripten.sh +++ b/ci/docker/install/ubuntu_emscripten.sh @@ -25,6 +25,7 @@ set -ex +apt-get update || true apt-get -y install nodejs git clone -b 1.38.6 https://github.com/kripken/emscripten.git diff --git a/ci/docker/install/ubuntu_gcc8.sh b/ci/docker/install/ubuntu_gcc8.sh index 0a9dc6057..cd31f8213 100755 --- a/ci/docker/install/ubuntu_gcc8.sh +++ b/ci/docker/install/ubuntu_gcc8.sh @@ -19,5 +19,5 @@ sudo add-apt-repository ppa:jonathonf/gcc-8.0 sudo add-apt-repository ppa:jonathonf/gcc-7.3 -sudo apt-get update +sudo apt-get update || true sudo apt-get install -y gcc-8 g++-8 diff --git a/ci/docker/install/ubuntu_julia.sh b/ci/docker/install/ubuntu_julia.sh index 62013e36d..13093acc4 100755 --- a/ci/docker/install/ubuntu_julia.sh +++ b/ci/docker/install/ubuntu_julia.sh @@ -22,16 +22,22 @@ set -ex -export JLBINARY='julia.tar.gz' -export JULIADIR='/work/julia' -export JULIA="${JULIADIR}/bin/julia" +function install_julia() { + local suffix=`echo $1 | sed 's/\.//'` # 0.7 -> 07; 1.0 -> 10 + local JLBINARY="julia-$1.tar.gz" + local JULIADIR="/work/julia$suffix" + local JULIA="${JULIADIR}/bin/julia" -mkdir -p $JULIADIR -# The julia version in Ubuntu repo is too old -# We download the tarball from the official link: -# https://julialang.org/downloads/ -wget -O $JLBINARY https://julialang-s3.julialang.org/bin/linux/x64/0.6/julia-0.6.2-linux-x86_64.tar.gz -tar xzvf $JLBINARY -C $JULIADIR --strip 1 -rm $JLBINARY + mkdir -p $JULIADIR + # The julia version in Ubuntu repo is too old + # We download the tarball from the official link: + # https://julialang.org/downloads/ + wget -O $JLBINARY https://julialang-s3.julialang.org/bin/linux/x64/$1/julia-$2-linux-x86_64.tar.gz + tar xzvf $JLBINARY -C $JULIADIR --strip 1 + rm $JLBINARY -$JULIA -e 'versioninfo()' + $JULIA -e 'using InteractiveUtils; versioninfo()' +} + +install_julia 0.7 0.7.0 +install_julia 1.0 1.0.3 diff --git a/ci/docker/install/ubuntu_llvm.sh b/ci/docker/install/ubuntu_llvm.sh index 09e13d3d1..afd881eae 100755 --- a/ci/docker/install/ubuntu_llvm.sh +++ b/ci/docker/install/ubuntu_llvm.sh @@ -23,4 +23,5 @@ echo deb-src http://apt.llvm.org/xenial/ llvm-toolchain-xenial-5.0 main\ >> /etc/apt/sources.list.d/llvm.list wget -O - http://apt.llvm.org/llvm-snapshot.gpg.key|sudo apt-key add - -apt-get update && apt-get install -y --force-yes llvm-5.0 \ No newline at end of file +apt-get update || true +apt-get install -y --force-yes llvm-5.0 diff --git a/ci/docker/install/ubuntu_mkl.sh b/ci/docker/install/ubuntu_mkl.sh new file mode 100755 index 000000000..36fc7b07f --- /dev/null +++ b/ci/docker/install/ubuntu_mkl.sh @@ -0,0 +1,31 @@ +#!/usr/bin/env bash + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. 
See the License for the +# specific language governing permissions and limitations +# under the License. + +# build and install are separated so changes to build don't invalidate +# the whole docker cache for the image + +set -ex + +apt-get update || true +# Install Intel Math Kernel Library (latest major release) +# https://software.intel.com/en-us/articles/installing-intel-free-libs-and-python-apt-repo +wget -O - https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB | apt-key add - && \ + sh -c 'echo deb https://apt.repos.intel.com/mkl all main > /etc/apt/sources.list.d/intel-mkl.list' && \ + apt-get update && \ + apt-get install -y intel-mkl-2019.1-053 diff --git a/ci/docker/install/ubuntu_mklml.sh b/ci/docker/install/ubuntu_mklml.sh index 7e17295f4..862e28464 100755 --- a/ci/docker/install/ubuntu_mklml.sh +++ b/ci/docker/install/ubuntu_mklml.sh @@ -21,5 +21,5 @@ # the whole docker cache for the image set -ex -wget -q --no-check-certificate -O /tmp/mklml.tgz https://github.com/intel/mkl-dnn/releases/download/v0.14/mklml_lnx_2018.0.3.20180406.tgz +wget -q --no-check-certificate -O /tmp/mklml.tgz https://github.com/intel/mkl-dnn/releases/download/v0.17-rc/mklml_lnx_2019.0.1.20180928.tgz tar -zxf /tmp/mklml.tgz && cp -rf mklml_*/* /usr/local/ && rm -rf mklml_* diff --git a/ci/docker/install/ubuntu_nightly_tests.sh b/ci/docker/install/ubuntu_nightly_tests.sh index 406985ea3..80028d87c 100755 --- a/ci/docker/install/ubuntu_nightly_tests.sh +++ b/ci/docker/install/ubuntu_nightly_tests.sh @@ -25,7 +25,7 @@ set -ex # Adding ppas frequently fails due to busy gpg servers, retry 5 times with 5 minute delays. for i in 1 2 3 4 5; do add-apt-repository -y ppa:ubuntu-toolchain-r/test && break || sleep 300; done -apt-get update +apt-get update || true apt-get -y install time # Install for RAT License Check Nightly Test diff --git a/ci/docker/install/ubuntu_npm_blc.sh b/ci/docker/install/ubuntu_npm_blc.sh index 30fcb5a1b..59caa9f88 100755 --- a/ci/docker/install/ubuntu_npm_blc.sh +++ b/ci/docker/install/ubuntu_npm_blc.sh @@ -22,7 +22,7 @@ set -ex echo 'Installing npm...' -apt-get update +apt-get update || true apt-get install -y npm echo "Obtaining NodeJS version 8.x" diff --git a/ci/docker/install/ubuntu_nvidia.sh b/ci/docker/install/ubuntu_nvidia.sh index 7b16ed16f..7012b897f 100755 --- a/ci/docker/install/ubuntu_nvidia.sh +++ b/ci/docker/install/ubuntu_nvidia.sh @@ -18,10 +18,6 @@ # under the License. set -ex -apt install -y software-properties-common - -# Adding ppas frequently fails due to busy gpg servers, retry 5 times with 5 minute delays. -for i in 1 2 3 4 5; do add-apt-repository -y ppa:graphics-drivers && break || sleep 300; done # Retrieve ppa:graphics-drivers and install nvidia-drivers. # Note: DEBIAN_FRONTEND required to skip the interactive setup steps diff --git a/ci/docker/install/ubuntu_onnx.sh b/ci/docker/install/ubuntu_onnx.sh index 329352efd..0dad3f9ee 100755 --- a/ci/docker/install/ubuntu_onnx.sh +++ b/ci/docker/install/ubuntu_onnx.sh @@ -27,6 +27,7 @@ set -e set -x echo "Installing libprotobuf-dev and protobuf-compiler ..." +apt-get update || true apt-get install -y libprotobuf-dev protobuf-compiler echo "Installing pytest, pytest-cov, protobuf, Pillow, ONNX and tabulate ..."
diff --git a/ci/docker/install/ubuntu_perl.sh b/ci/docker/install/ubuntu_perl.sh index 4d868f7d5..e04141eee 100755 --- a/ci/docker/install/ubuntu_perl.sh +++ b/ci/docker/install/ubuntu_perl.sh @@ -22,5 +22,6 @@ set -ex # install libraries for mxnet's perl package on ubuntu +apt-get update || true apt-get install -y libmouse-perl pdl cpanminus swig libgraphviz-perl cpanm -q Function::Parameters Hash::Ordered PDL::CCS diff --git a/ci/docker/install/ubuntu_publish.sh b/ci/docker/install/ubuntu_publish.sh new file mode 100644 index 000000000..1ad6ab947 --- /dev/null +++ b/ci/docker/install/ubuntu_publish.sh @@ -0,0 +1,65 @@ +#!/usr/bin/env bash + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +# Build on Ubuntu 14.04 LTS for LINUX CPU/GPU +apt-get update +apt-get install -y software-properties-common +add-apt-repository ppa:ubuntu-toolchain-r/test -y +add-apt-repository ppa:openjdk-r/ppa -y # Java lib +apt-get update +apt-get install -y git \ + cmake3 \ + libcurl4-openssl-dev \ + unzip \ + gcc-4.8 \ + g++-4.8 \ + gfortran \ + gfortran-4.8 \ + binutils \ + nasm \ + libtool \ + curl \ + wget \ + sudo \ + gnupg \ + gnupg2 \ + gnupg-agent \ + pandoc \ + python3-pip \ + automake \ + pkg-config \ + openjdk-8-jdk +curl -o apache-maven-3.3.9-bin.tar.gz http://www.eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz +tar xzf apache-maven-3.3.9-bin.tar.gz +mkdir /usr/local/maven +mv apache-maven-3.3.9/ /usr/local/maven/ +update-alternatives --install /usr/bin/mvn mvn /usr/local/maven/apache-maven-3.3.9/bin/mvn 1 +update-ca-certificates -f + +apt-get install -y python python3 + +# the version of pip shipped with ubuntu may be too old, install a recent version here +wget -nv https://bootstrap.pypa.io/get-pip.py +python3 get-pip.py +python2 get-pip.py + +apt-get remove -y python3-urllib3 + +pip2 install nose cpplint==1.3.0 pylint==1.9.3 'numpy<=1.15.2,>=1.8.2' nose-timer 'requests<2.19.0,>=2.18.4' h5py==2.8.0rc1 scipy==1.0.1 boto3 +pip3 install nose cpplint==1.3.0 pylint==2.1.1 'numpy<=1.15.2,>=1.8.2' nose-timer 'requests<2.19.0,>=2.18.4' h5py==2.8.0rc1 scipy==1.0.1 boto3 diff --git a/ci/docker/install/ubuntu_python.sh b/ci/docker/install/ubuntu_python.sh index d6e66aa45..ee05058f2 100755 --- a/ci/docker/install/ubuntu_python.sh +++ b/ci/docker/install/ubuntu_python.sh @@ -22,6 +22,7 @@ set -ex # install libraries for mxnet's python package on ubuntu +apt-get update || true apt-get install -y python-dev python3-dev virtualenv wget # the version of pip shipped with ubuntu may be too old, install a recent version here diff --git a/ci/docker/install/ubuntu_r.sh b/ci/docker/install/ubuntu_r.sh index 0e95601ea..cefc4172f 100755 --- a/ci/docker/install/ubuntu_r.sh +++ b/ci/docker/install/ubuntu_r.sh @@ -34,7 +34,7 @@ apt-key add r.gpg #
Installing the latest version (3.3+) that is compatible with MXNet add-apt-repository 'deb [arch=amd64,i386] https://cran.rstudio.com/bin/linux/ubuntu xenial/' -apt-get update +apt-get update || true apt-get install -y --allow-unauthenticated \ libcairo2-dev \ libssl-dev \ diff --git a/ci/docker/install/ubuntu_rat.sh b/ci/docker/install/ubuntu_rat.sh index b131a0bb5..2c905fc27 100755 --- a/ci/docker/install/ubuntu_rat.sh +++ b/ci/docker/install/ubuntu_rat.sh @@ -20,7 +20,7 @@ set -ex echo "Install dependencies" -apt-get update +apt-get update || true apt-get install -y subversion maven openjdk-8-jdk openjdk-8-jre echo "download RAT" diff --git a/ci/docker/install/ubuntu_scala.sh b/ci/docker/install/ubuntu_scala.sh index bee0e6bba..5bade4746 100755 --- a/ci/docker/install/ubuntu_scala.sh +++ b/ci/docker/install/ubuntu_scala.sh @@ -24,17 +24,31 @@ set -ex cd "$(dirname "$0")" # install libraries for mxnet's scala package on ubuntu echo 'Installing Scala...' -apt-get install -y software-properties-common -apt-get update -apt-get install -y openjdk-8-jdk -apt-get install -y openjdk-8-jre -echo "deb https://dl.bintray.com/sbt/debian /" | tee -a /etc/apt/sources.list.d/sbt.list -# ubuntu keyserver is very flaky -#apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2EE0EA64E40A89B84B2DF73499E82A75642AC823 -#apt-key adv --keyserver keys.gnupg.net --recv 2EE0EA64E40A89B84B2DF73499E82A75642AC823 -apt-key add sbt.gpg -apt-get update && apt-get install -y \ - maven \ - sbt \ +# Ubuntu 14.04 +if [[ $(lsb_release -r | grep 14.04) ]]; then + add-apt-repository -y ppa:openjdk-r/ppa +fi + +# All Ubuntu +apt-get update || true +apt-get install -y \ + openjdk-8-jdk \ + openjdk-8-jre \ + software-properties-common \ + gnupg \ + gnupg2 \ + gnupg-agent \ scala + +# Ubuntu 14.04 +if [[ $(lsb_release -r | grep 14.04) ]]; then + curl -o apache-maven-3.3.9-bin.tar.gz http://www.eu.apache.org/dist/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz + tar xzf apache-maven-3.3.9-bin.tar.gz + mkdir /usr/local/maven + mv apache-maven-3.3.9/ /usr/local/maven/ + update-alternatives --install /usr/bin/mvn mvn /usr/local/maven/apache-maven-3.3.9/bin/mvn 1 + update-ca-certificates -f +else + apt-get install -y maven +fi diff --git a/ci/docker/install/ubuntu_tutorials.sh b/ci/docker/install/ubuntu_tutorials.sh index 9a236bbf4..404d4bbf6 100755 --- a/ci/docker/install/ubuntu_tutorials.sh +++ b/ci/docker/install/ubuntu_tutorials.sh @@ -21,6 +21,7 @@ # the whole docker cache for the image set -ex +apt-get update || true apt-get install graphviz python-opencv -pip2 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz tqdm -pip3 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz tqdm +pip2 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz tqdm mxboard +pip3 install jupyter matplotlib Pillow opencv-python scikit-learn graphviz tqdm mxboard diff --git a/ci/docker/qemu/runtime_functions.py b/ci/docker/qemu/runtime_functions.py index 6cf01b6f9..8b8e5acb5 100755 --- a/ci/docker/qemu/runtime_functions.py +++ b/ci/docker/qemu/runtime_functions.py @@ -33,6 +33,8 @@ import sys import types import glob +import vmcontrol +from vmcontrol import qemu_ssh, qemu_provision, qemu_rsync_to_host, VM def activate_this(base): import site @@ -53,21 +55,24 @@ def activate_this(base): sys.path.remove(item) sys.path[:0] = new_sys_path + + + def run_ut_py3_qemu(): + """Run unit tests in the emulator and copy the results back to the host through the mounted + volume in /mxnet""" from 
vmcontrol import VM with VM() as vm: - logging.info("VM provisioning with ansible") - check_call(["ansible-playbook", "-v", "-u", "qemu", "-i", "localhost:{},".format(vm.ssh_port), "playbook.yml"]) - logging.info("VM provisioned successfully.") - logging.info("sync tests") - check_call(['rsync', '-e', 'ssh -p{}'.format(vm.ssh_port), '-a', 'mxnet/tests', 'qemu@localhost:mxnet']) + qemu_provision(vm.ssh_port) logging.info("execute tests") - check_call(["ssh", "-o", "ServerAliveInterval=5", "-p{}".format(vm.ssh_port), "qemu@localhost", "./runtime_functions.py", "run_ut_python3_qemu_internal"]) + qemu_ssh(vm.ssh_port, "./runtime_functions.py", "run_ut_python3_qemu_internal") + qemu_rsync_to_host(vm.ssh_port, "*.xml", "mxnet") + logging.info("copied to host") logging.info("tests finished, vm shutdown.") vm.shutdown() def run_ut_python3_qemu_internal(): - """this runs inside the vm, it's run by the playbook above by ansible""" + """this runs inside the vm""" pkg = glob.glob('mxnet_dist/*.whl')[0] logging.info("=== NOW Running inside QEMU ===") logging.info("PIP Installing %s", pkg) @@ -75,8 +80,20 @@ def run_ut_python3_qemu_internal(): logging.info("PIP Installing mxnet/tests/requirements.txt") check_call(['sudo', 'pip3', 'install', '-r', 'mxnet/tests/requirements.txt']) logging.info("Running tests in mxnet/tests/python/unittest/") - check_call(['nosetests', '--with-timer', '--with-xunit', '--xunit-file', 'nosetests_unittest.xml', '--verbose', 'mxnet/tests/python/unittest/test_ndarray.py:test_ndarray_fluent']) + check_call(['nosetests', '--with-timer', '--with-xunit', '--xunit-file', 'nosetests_unittest.xml', '--verbose', 'mxnet/tests/python/unittest/test_engine.py']) + # Example to run a single unit test: + # check_call(['nosetests', '--with-timer', '--with-xunit', '--xunit-file', 'nosetests_unittest.xml', '--verbose', 'mxnet/tests/python/unittest/test_ndarray.py:test_ndarray_fluent']) + + + +def run_qemu_interactive(): + vm = VM(interactive=True) + vm.detach() + vm.start() + vm.wait() + logging.info("QEMU finished") +################################ def parsed_args(): parser = argparse.ArgumentParser(description="""python runtime functions""", epilog="") @@ -95,7 +112,7 @@ def chdir_to_script_directory(): os.chdir(base) def main(): - logging.getLogger().setLevel(logging.DEBUG) + logging.getLogger().setLevel(logging.INFO) logging.basicConfig(format='{}: %(asctime)-15s %(message)s'.format(script_name())) chdir_to_script_directory() diff --git a/ci/docker/qemu/vmcontrol.py b/ci/docker/qemu/vmcontrol.py index 2262bc77b..d80e22b1d 100644 --- a/ci/docker/qemu/vmcontrol.py +++ b/ci/docker/qemu/vmcontrol.py @@ -42,6 +42,8 @@ # # The VMs are provisioned after boot, tests are run and then they are stopped # +QEMU_SSH_PORT=2222 +QEMU_RAM=4096 QEMU_RUN=""" qemu-system-arm -M virt -m {ram} \ @@ -55,17 +57,72 @@ -display none -nographic """ +QEMU_RUN_INTERACTIVE=""" +qemu-system-arm -M virt -m {ram} \ + -kernel vmlinuz \ + -initrd initrd.img \ + -append 'root=/dev/vda1' \ + -drive if=none,file=vda.qcow2,format=qcow2,id=hd \ + -device virtio-blk-device,drive=hd \ + -netdev user,id=mynet,hostfwd=tcp::{ssh_port}-:22 \ + -device virtio-net-device,netdev=mynet \ + -nographic +""" + +def retry(target_exception, tries=4, delay_s=1, backoff=2): + """Retry calling the decorated function using an exponential backoff. 
+ + http://www.saltycrane.com/blog/2009/11/trying-out-retry-decorator-python/ + original from: http://wiki.python.org/moin/PythonDecoratorLibrary#Retry + + :param target_exception: the exception to check. may be a tuple of + exceptions to check + :type target_exception: Exception or tuple + :param tries: number of times to try (not retry) before giving up + :type tries: int + :param delay_s: initial delay between retries in seconds + :type delay_s: int + :param backoff: backoff multiplier e.g. value of 2 will double the delay + each retry + :type backoff: int + """ + import time + from functools import wraps + + def decorated_retry(f): + @wraps(f) + def f_retry(*args, **kwargs): + mtries, mdelay = tries, delay_s + while mtries > 1: + try: + return f(*args, **kwargs) + except target_exception as e: + logging.warning("Exception: %s, Retrying in %d seconds...", str(e), mdelay) + time.sleep(mdelay) + mtries -= 1 + mdelay *= backoff + return f(*args, **kwargs) + + return f_retry # true decorator + + return decorated_retry + + + + class VMError(RuntimeError): pass class VM: """Control of the virtual machine""" - def __init__(self, ssh_port=2222): + def __init__(self, ssh_port=QEMU_SSH_PORT, ram=QEMU_RAM, interactive=False): self.log = logging.getLogger(VM.__name__) self.ssh_port = ssh_port self.timeout_s = 300 self.qemu_process = None self._detach = False + self._interactive = interactive + self.ram = ram def __enter__(self): self.start() @@ -77,13 +134,22 @@ def __exit__(self, exc_type, exc_value, traceback): self.terminate() def start(self): - self.log.info("Starting VM, ssh port redirected to localhost:%s", self.ssh_port) + sys.stderr.flush() + call(['toilet', '-f', 'smbraille', 'Starting QEMU']) + sys.stdout.flush() + self.log.info("Starting VM, ssh port redirected to localhost:%s (inside docker, not exposed by default)", self.ssh_port) if self.is_running(): raise VMError("VM is running, shutdown first") - self.qemu_process = run_qemu(self.ssh_port) + if self._interactive: + self.qemu_process = Popen(shlex.split(QEMU_RUN_INTERACTIVE.format(ssh_port=self.ssh_port, ram=self.ram))) + return + else: + self.log.info("Starting in non-interactive mode. 
Terminal output is disabled.") + self.qemu_process = Popen(shlex.split(QEMU_RUN.format(ssh_port=self.ssh_port, ram=self.ram)), stdout=DEVNULL, stdin=DEVNULL, stderr=PIPE) def keep_waiting(): return self.is_running() + logging.info("waiting for ssh to be open in the VM (timeout {}s)".format(self.timeout_s)) ssh_working = wait_ssh_open('127.0.0.1', self.ssh_port, keep_waiting, self.timeout_s) if not self.is_running(): @@ -140,11 +206,30 @@ def __del__(self): logging.info("VM destructor hit") self.terminate() -def run_qemu(ssh_port=2222): - cmd = QEMU_RUN.format(ssh_port=ssh_port, ram=4096) - logging.info("QEMU command: %s", cmd) - qemu_process = Popen(shlex.split(cmd), stdout=DEVNULL, stdin=DEVNULL, stderr=PIPE) - return qemu_process + +def qemu_ssh(ssh_port=QEMU_SSH_PORT, *args): + check_call(["ssh", "-o", "ServerAliveInterval=5", "-o", "StrictHostKeyChecking=no", "-p{}".format(ssh_port), "qemu@localhost", *args]) + + +def qemu_rsync(ssh_port, local_path, remote_path): + check_call(['rsync', '-e', 'ssh -o StrictHostKeyChecking=no -p{}'.format(ssh_port), '-a', local_path, 'qemu@localhost:{}'.format(remote_path)]) + +def qemu_rsync_to_host(ssh_port, remote_path, local_path): + check_call(['rsync', '-e', 'ssh -o StrictHostKeyChecking=no -p{}'.format(ssh_port), '-va', 'qemu@localhost:{}'.format(remote_path), local_path]) + + +@retry(subprocess.CalledProcessError) +def qemu_provision(ssh_port=QEMU_SSH_PORT): + import glob + logging.info("Provisioning the VM with artifacts and sources") + + artifact = glob.glob('/work/mxnet/build/*.whl') + for x in artifact: + qemu_rsync(ssh_port, x, 'mxnet_dist/') + qemu_rsync(ssh_port, '/work/runtime_functions.py','') + qemu_rsync(ssh_port, '/work/vmcontrol.py','') + qemu_rsync(ssh_port, 'mxnet/tests', 'mxnet') + logging.info("Provisioning completed successfully.") def wait_ssh_open(server, port, keep_waiting=None, timeout=None): @@ -159,7 +244,7 @@ def wait_ssh_open(server, port, keep_waiting=None, timeout=None): import errno import time log = logging.getLogger('wait_ssh_open') - sleep_s = 0 + sleep_s = 1 if timeout: from time import time as now # time module is needed to calc timeout shared between two exceptions @@ -183,7 +268,7 @@ def wait_ssh_open(server, port, keep_waiting=None, timeout=None): log.debug("connect timeout %d s", next_timeout) s.settimeout(next_timeout) - log.info("connect %s:%d", server, port) + log.debug("connect %s:%d", server, port) s.connect((server, port)) ret = s.recv(1024).decode() if ret and ret.startswith('SSH'): diff --git a/ci/docker/runtime_functions.sh b/ci/docker/runtime_functions.sh index 43006f239..6c3f99948 100755 --- a/ci/docker/runtime_functions.sh +++ b/ci/docker/runtime_functions.sh @@ -23,6 +23,7 @@ set -ex NOSE_COVERAGE_ARGUMENTS="--with-coverage --cover-inclusive --cover-xml --cover-branches --cover-package=mxnet" +NOSE_TIMER_ARGUMENTS="--with-timer --timer-ok 1 --timer-warning 15 --timer-filter warning,error" CI_CUDA_COMPUTE_CAPABILITIES="-gencode=arch=compute_52,code=sm_52 -gencode=arch=compute_70,code=sm_70" CI_CMAKE_CUDA_ARCH_BIN="52,70" @@ -35,35 +36,67 @@ clean_repo() { git submodule update --init --recursive } +scala_prepare() { + # Clean up maven logs + export MAVEN_OPTS="-Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn" +} + build_ccache_wrappers() { set -ex - rm -f cc - rm -f cxx - - touch cc - touch cxx - if [ -z ${CC+x} ]; then echo "No \$CC set, defaulting to gcc"; export CC=gcc fi - - if [ -z ${CXX+x} ]; then + if [ -z ${CXX+x} ]; then echo "No \$CXX set, 
defaulting to g++"; export CXX=g++ fi - # this function is nessesary for cuda enabled make based builds, since nvcc needs just an executable for -ccbin - - echo -e "#!/bin/sh\n/usr/local/bin/ccache ${CC} \"\$@\"\n" >> cc - echo -e "#!/bin/sh\n/usr/local/bin/ccache ${CXX} \"\$@\"\n" >> cxx - - chmod +x cc - chmod +x cxx - - export CC=`pwd`/cc - export CXX=`pwd`/cxx + # Recommended by CCache: https://ccache.samba.org/manual.html#_run_modes + # Add to the beginning of PATH to ensure this redirection is picked up instead + # of the original ones. Especially CUDA/NVCC appends itself to the beginning of the + # path and thus this redirect is ignored. This change fixes this problem. + # This hacky approach with symbolic links is required because underlying build + # systems of our submodules ignore our CMake settings. If they use Makefile, + # we can't influence them at all in general and NVCC also prefers to hardcode its + # compiler instead of respecting the settings. Thus, we take this brutal approach + # and just redirect everything once this installer has been called. + # In future, we could do these links during image build time of the container. + # But in the beginning, we'll make this opt-in. In future, loads of processes like + # the scala make step or numpy compilation and other pip package generations + # could be heavily sped up by using ccache as well. + mkdir /tmp/ccache-redirects + export PATH=/tmp/ccache-redirects:$PATH + ln -s ccache /tmp/ccache-redirects/gcc + ln -s ccache /tmp/ccache-redirects/gcc-8 + ln -s ccache /tmp/ccache-redirects/g++ + ln -s ccache /tmp/ccache-redirects/g++-8 + ln -s ccache /tmp/ccache-redirects/nvcc + ln -s ccache /tmp/ccache-redirects/clang++-3.9 + ln -s ccache /tmp/ccache-redirects/clang-3.9 + ln -s ccache /tmp/ccache-redirects/clang++-5.0 + ln -s ccache /tmp/ccache-redirects/clang-5.0 + ln -s ccache /tmp/ccache-redirects/clang++-6.0 + ln -s ccache /tmp/ccache-redirects/clang-6.0 + ln -s ccache /usr/local/bin/gcc + ln -s ccache /usr/local/bin/gcc-8 + ln -s ccache /usr/local/bin/g++ + ln -s ccache /usr/local/bin/g++-8 + ln -s ccache /usr/local/bin/nvcc + ln -s ccache /usr/local/bin/clang++-3.9 + ln -s ccache /usr/local/bin/clang-3.9 + ln -s ccache /usr/local/bin/clang++-5.0 + ln -s ccache /usr/local/bin/clang-5.0 + ln -s ccache /usr/local/bin/clang++-6.0 + ln -s ccache /usr/local/bin/clang-6.0 + + export NVCC=ccache + + # Uncomment if you would like to debug CCache hit rates. + # You can monitor using tail -f ccache-log + # export CCACHE_LOGFILE=/work/mxnet/ccache-log + # export CCACHE_DEBUG=1 } build_wheel() { @@ -105,6 +138,8 @@ build_jetson() { set -ex pushd . + #build_ccache_wrappers + cp make/crosscompile.jetson.mk ./config.mk make -j$(nproc) @@ -128,6 +163,7 @@ build_armv6() { # We do not need OpenMP, since most armv6 systems have only 1 core + build_ccache_wrappers cmake \ -DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE} \ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \ @@ -158,6 +194,7 @@ build_armv7() { # file tries to add -llapack. Lapack functionality though, requires -lgfortran # to be linked additionally.
+ build_ccache_wrappers cmake \ -DCMAKE_TOOLCHAIN_FILE=${CMAKE_TOOLCHAIN_FILE} \ -DCMAKE_CROSSCOMPILING=ON \ @@ -180,6 +217,7 @@ build_armv7() { } build_armv8() { + build_ccache_wrappers cmake \ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \ -DCMAKE_C_COMPILER_LAUNCHER=ccache \ @@ -204,6 +242,7 @@ build_armv8() { build_android_armv7() { set -ex cd /work/build + build_ccache_wrappers cmake \ -DANDROID=ON\ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \ @@ -224,6 +263,7 @@ build_android_armv7() { build_android_armv8() { set -ex cd /work/build + build_ccache_wrappers cmake\ -DANDROID=ON \ -DUSE_CUDA=OFF\ @@ -243,19 +283,21 @@ build_centos7_cpu() { cd /work/mxnet export CC="ccache gcc" export CXX="ccache g++" - + build_ccache_wrappers make \ DEV=1 \ USE_LAPACK=1 \ ENABLE_TESTCOVERAGE=1 \ USE_LAPACK_PATH=/usr/lib64/liblapack.so \ USE_BLAS=openblas \ + USE_MKLDNN=0 \ USE_DIST_KVSTORE=1 \ -j$(nproc) } build_amzn_linux_cpu() { cd /work/build + build_ccache_wrappers cmake \ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \ -DCMAKE_C_COMPILER_LAUNCHER=ccache \ @@ -272,19 +314,17 @@ build_amzn_linux_cpu() { ninja -v } - build_centos7_mkldnn() { set -ex cd /work/mxnet export CC="ccache gcc" export CXX="ccache g++" - + build_ccache_wrappers make \ DEV=1 \ ENABLE_TESTCOVERAGE=1 \ USE_LAPACK=1 \ USE_LAPACK_PATH=/usr/lib64/liblapack.so \ - USE_MKLDNN=1 \ USE_BLAS=openblas \ -j$(nproc) } @@ -293,13 +333,14 @@ build_centos7_gpu() { set -ex cd /work/mxnet # unfortunately this build has problems in 3rdparty dependencies with ccache and make - # build_ccache_wrappers + build_ccache_wrappers make \ DEV=1 \ ENABLE_TESTCOVERAGE=1 \ USE_LAPACK=1 \ USE_LAPACK_PATH=/usr/lib64/liblapack.so \ USE_BLAS=openblas \ + USE_MKLDNN=0 \ USE_CUDA=1 \ USE_CUDA_PATH=/usr/local/cuda \ USE_CUDNN=1 \ @@ -313,6 +354,21 @@ build_ubuntu_cpu() { } build_ubuntu_cpu_openblas() { + set -ex + export CC="gcc" + export CXX="g++" + build_ccache_wrappers + make \ + DEV=1 \ + ENABLE_TESTCOVERAGE=1 \ + USE_CPP_PACKAGE=1 \ + USE_BLAS=openblas \ + USE_MKLDNN=0 \ + USE_DIST_KVSTORE=1 \ + -j$(nproc) +} + +build_ubuntu_cpu_mkl() { set -ex export CC="ccache gcc" export CXX="ccache g++" @@ -320,7 +376,9 @@ build_ubuntu_cpu_openblas() { DEV=1 \ ENABLE_TESTCOVERAGE=1 \ USE_CPP_PACKAGE=1 \ - USE_BLAS=openblas \ + USE_BLAS=mkl \ + USE_MKLDNN=0 \ + USE_INTEL_PATH=/opt/intel \ USE_DIST_KVSTORE=1 \ -j$(nproc) } @@ -329,6 +387,7 @@ build_ubuntu_cpu_cmake_debug() { set -ex pushd . cd /work/build + build_ccache_wrappers cmake \ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \ -DCMAKE_C_COMPILER_LAUNCHER=ccache \ @@ -337,6 +396,7 @@ build_ubuntu_cpu_cmake_debug() { -DUSE_MKL_IF_AVAILABLE=OFF \ -DUSE_OPENMP=OFF \ -DUSE_OPENCV=ON \ + -DUSE_SIGNAL_HANDLER=ON \ -DCMAKE_BUILD_TYPE=Debug \ -G Ninja \ /work/mxnet @@ -350,13 +410,15 @@ build_ubuntu_cpu_cmake_asan() { pushd . 
cd /work/build + export CXX=g++-8 + export CC=gcc-8 + build_ccache_wrappers cmake \ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \ -DCMAKE_C_COMPILER_LAUNCHER=ccache \ - -DCMAKE_CXX_COMPILER=/usr/bin/g++-8 \ - -DCMAKE_C_COMPILER=/usr/bin/gcc-8 \ -DUSE_CUDA=OFF \ -DUSE_MKL_IF_AVAILABLE=OFF \ + -DUSE_MKLDNN=OFF \ -DUSE_OPENMP=OFF \ -DUSE_OPENCV=OFF \ -DCMAKE_BUILD_TYPE=Debug \ @@ -376,13 +438,14 @@ build_ubuntu_cpu_cmake_asan() { build_ubuntu_cpu_clang39() { set -ex - export CXX=clang++-3.9 + export CXX=clang++-3.9 export CC=clang-3.9 - build_ccache_wrappers - make \ + build_ccache_wrappers + make \ ENABLE_TESTCOVERAGE=1 \ USE_CPP_PACKAGE=1 \ USE_BLAS=openblas \ + USE_MKLDNN=0 \ USE_OPENMP=0 \ USE_DIST_KVSTORE=1 \ -j$(nproc) @@ -400,6 +463,7 @@ build_ubuntu_cpu_clang60() { ENABLE_TESTCOVERAGE=1 \ USE_CPP_PACKAGE=1 \ USE_BLAS=openblas \ + USE_MKLDNN=0 \ USE_OPENMP=1 \ USE_DIST_KVSTORE=1 \ -j$(nproc) @@ -414,10 +478,12 @@ build_ubuntu_cpu_clang_tidy() { pushd . cd /work/build + build_ccache_wrappers cmake \ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \ -DCMAKE_C_COMPILER_LAUNCHER=ccache \ -DUSE_CUDA=OFF \ + -DUSE_MKLDNN=OFF \ -DUSE_MKL_IF_AVAILABLE=OFF \ -DUSE_OPENCV=ON \ -DCMAKE_BUILD_TYPE=Debug \ @@ -443,7 +509,6 @@ build_ubuntu_cpu_clang39_mkldnn() { ENABLE_TESTCOVERAGE=1 \ USE_CPP_PACKAGE=1 \ USE_BLAS=openblas \ - USE_MKLDNN=1 \ USE_OPENMP=0 \ -j$(nproc) } @@ -460,7 +525,6 @@ build_ubuntu_cpu_clang60_mkldnn() { ENABLE_TESTCOVERAGE=1 \ USE_CPP_PACKAGE=1 \ USE_BLAS=openblas \ - USE_MKLDNN=1 \ USE_OPENMP=1 \ -j$(nproc) } @@ -475,7 +539,19 @@ build_ubuntu_cpu_mkldnn() { ENABLE_TESTCOVERAGE=1 \ USE_CPP_PACKAGE=1 \ USE_BLAS=openblas \ - USE_MKLDNN=1 \ + -j$(nproc) +} + +build_ubuntu_cpu_mkldnn_mkl() { + set -ex + + build_ccache_wrappers + + make \ + DEV=1 \ + ENABLE_TESTCOVERAGE=1 \ + USE_CPP_PACKAGE=1 \ + USE_BLAS=mkl \ -j$(nproc) } @@ -497,6 +573,8 @@ build_ubuntu_gpu_tensorrt() { mkdir -p build cd build cmake \ + -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \ + -DCMAKE_C_COMPILER_LAUNCHER=ccache \ -DCMAKE_CXX_FLAGS=-I/usr/include/python${PYVER}\ -DBUILD_SHARED_LIBS=ON ..\ -G Ninja @@ -511,7 +589,10 @@ build_ubuntu_gpu_tensorrt() { cd 3rdparty/onnx-tensorrt/ mkdir -p build cd build - cmake .. + cmake \ + -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \ + -DCMAKE_C_COMPILER_LAUNCHER=ccache \ + .. 
make -j$(nproc) export LIBRARY_PATH=`pwd`:$LIBRARY_PATH popd @@ -530,6 +611,7 @@ build_ubuntu_gpu_tensorrt() { USE_CUDA_PATH=/usr/local/cuda \ USE_CUDNN=1 \ USE_OPENCV=0 \ + USE_MKLDNN=0 \ USE_DIST_KVSTORE=0 \ USE_TENSORRT=1 \ USE_JEMALLOC=0 \ @@ -549,7 +631,6 @@ build_ubuntu_gpu_mkldnn() { ENABLE_TESTCOVERAGE=1 \ USE_CPP_PACKAGE=1 \ USE_BLAS=openblas \ - USE_MKLDNN=1 \ USE_CUDA=1 \ USE_CUDA_PATH=/usr/local/cuda \ USE_CUDNN=1 \ @@ -566,7 +647,6 @@ build_ubuntu_gpu_mkldnn_nocudnn() { DEV=1 \ ENABLE_TESTCOVERAGE=1 \ USE_BLAS=openblas \ - USE_MKLDNN=1 \ USE_CUDA=1 \ USE_CUDA_PATH=/usr/local/cuda \ USE_CUDNN=0 \ @@ -582,6 +662,7 @@ build_ubuntu_gpu_cuda91_cudnn7() { DEV=1 \ ENABLE_TESTCOVERAGE=1 \ USE_BLAS=openblas \ + USE_MKLDNN=0 \ USE_CUDA=1 \ USE_CUDA_PATH=/usr/local/cuda \ USE_CUDNN=1 \ @@ -594,6 +675,7 @@ build_ubuntu_gpu_cuda91_cudnn7() { build_ubuntu_amalgamation() { set -ex # Amalgamation can not be run with -j nproc + build_ccache_wrappers make -C amalgamation/ clean make -C amalgamation/ \ USE_BLAS=openblas \ @@ -603,6 +685,7 @@ build_ubuntu_amalgamation() { build_ubuntu_amalgamation_min() { set -ex # Amalgamation can not be run with -j nproc + build_ccache_wrappers make -C amalgamation/ clean make -C amalgamation/ \ USE_BLAS=openblas \ @@ -613,14 +696,16 @@ build_ubuntu_amalgamation_min() { build_ubuntu_gpu_cmake_mkldnn() { set -ex cd /work/build + build_ccache_wrappers cmake \ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \ -DCMAKE_C_COMPILER_LAUNCHER=ccache \ + -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \ + -DUSE_SIGNAL_HANDLER=ON \ -DENABLE_TESTCOVERAGE=ON \ -DUSE_CUDA=1 \ -DUSE_CUDNN=1 \ -DUSE_MKLML_MKL=1 \ - -DUSE_MKLDNN=1 \ -DCMAKE_BUILD_TYPE=Release \ -DCUDA_ARCH_NAME=Manual \ -DCUDA_ARCH_BIN=$CI_CMAKE_CUDA_ARCH_BIN \ @@ -636,15 +721,19 @@ build_ubuntu_gpu_cmake_mkldnn() { build_ubuntu_gpu_cmake() { set -ex cd /work/build + build_ccache_wrappers cmake \ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \ -DCMAKE_C_COMPILER_LAUNCHER=ccache \ + -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \ + -DUSE_SIGNAL_HANDLER=ON \ -DENABLE_TESTCOVERAGE=ON \ - -DUSE_CUDA=1 \ - -DUSE_CUDNN=1 \ - -DUSE_MKLML_MKL=0 \ - -DUSE_MKLDNN=0 \ - -DUSE_DIST_KVSTORE=1 \ + -DUSE_CUDA=ON \ + -DUSE_CUDNN=ON \ + -DUSE_MKL_IF_AVAILABLE=OFF \ + -DUSE_MKLML_MKL=OFF \ + -DUSE_MKLDNN=OFF \ + -DUSE_DIST_KVSTORE=ON \ -DCMAKE_BUILD_TYPE=Release \ -DCUDA_ARCH_NAME=Manual \ -DCUDA_ARCH_BIN=$CI_CMAKE_CUDA_ARCH_BIN \ @@ -674,9 +763,9 @@ unittest_ubuntu_python2_cpu() { export PYTHONPATH=./python/ export MXNET_MKLDNN_DEBUG=1 export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 - nosetests-2.7 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_unittest.xml --verbose tests/python/unittest - nosetests-2.7 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_train.xml --verbose tests/python/train - nosetests-2.7 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_quantization.xml --verbose tests/python/quantization + nosetests-2.7 $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_unittest.xml --verbose tests/python/unittest + nosetests-2.7 $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_train.xml --verbose tests/python/train + nosetests-2.7 $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_quantization.xml --verbose tests/python/quantization } unittest_ubuntu_python3_cpu() { @@ -684,8 +773,8 @@ unittest_ubuntu_python3_cpu() { export PYTHONPATH=./python/ export MXNET_MKLDNN_DEBUG=1 # Ignored if not present export 
MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 - nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_unittest.xml --verbose tests/python/unittest - nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_quantization.xml --verbose tests/python/quantization + nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_unittest.xml --verbose tests/python/unittest + nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_quantization.xml --verbose tests/python/quantization } unittest_ubuntu_python3_cpu_mkldnn() { @@ -693,8 +782,8 @@ unittest_ubuntu_python3_cpu_mkldnn() { export PYTHONPATH=./python/ export MXNET_MKLDNN_DEBUG=1 # Ignored if not present export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 - nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_unittest.xml --verbose tests/python/unittest - nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_mkl.xml --verbose tests/python/mkl + nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_unittest.xml --verbose tests/python/unittest + nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_mkl.xml --verbose tests/python/mkl } unittest_ubuntu_python2_gpu() { @@ -702,31 +791,8 @@ unittest_ubuntu_python2_gpu() { export PYTHONPATH=./python/ export MXNET_MKLDNN_DEBUG=1 # Ignored if not present export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 - nosetests-2.7 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_gpu.xml --verbose tests/python/gpu -} - -tutorialtest_ubuntu_python3_gpu() { - set -ex - cd /work/mxnet/docs - export MXNET_DOCS_BUILD_MXNET=0 - make html - export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 - export PYTHONPATH=/work/mxnet/python/ - export MXNET_TUTORIAL_TEST_KERNEL=python3 - cd /work/mxnet/tests/tutorials - nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_tutorials.xml test_tutorials.py --nologcapture -} - -tutorialtest_ubuntu_python2_gpu() { - set -ex - cd /work/mxnet/docs - export MXNET_DOCS_BUILD_MXNET=0 - make html - export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 - export PYTHONPATH=/work/mxnet/python/ - export MXNET_TUTORIAL_TEST_KERNEL=python2 - cd /work/mxnet/tests/tutorials - nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_tutorials.xml test_tutorials.py --nologcapture + export CUDNN_VERSION=7.0.3 + nosetests-2.7 $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_gpu.xml --verbose tests/python/gpu } unittest_ubuntu_python3_gpu() { @@ -734,7 +800,8 @@ unittest_ubuntu_python3_gpu() { export PYTHONPATH=./python/ export MXNET_MKLDNN_DEBUG=1 # Ignored if not present export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 - nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_gpu.xml --verbose tests/python/gpu + export CUDNN_VERSION=7.0.3 + nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_gpu.xml --verbose tests/python/gpu } unittest_ubuntu_python3_gpu_nocudnn() { @@ -742,7 +809,7 @@ unittest_ubuntu_python3_gpu_nocudnn() { export PYTHONPATH=./python/ export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 export CUDNN_OFF_TEST_ONLY=true - nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_gpu.xml --verbose tests/python/gpu + nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_gpu.xml --verbose 
tests/python/gpu } unittest_ubuntu_tensorrt_gpu() { @@ -750,8 +817,9 @@ unittest_ubuntu_tensorrt_gpu() { export PYTHONPATH=./python/ export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 export LD_LIBRARY_PATH=/work/mxnet/lib:$LD_LIBRARY_PATH + export CUDNN_VERSION=7.0.3 python tests/python/tensorrt/lenet5_train.py - nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_trt_gpu.xml --verbose --nocapture tests/python/tensorrt/ + nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_trt_gpu.xml --verbose --nocapture tests/python/tensorrt/ } # quantization gpu currently only runs on P3 instances @@ -761,7 +829,8 @@ unittest_ubuntu_python2_quantization_gpu() { export PYTHONPATH=./python/ export MXNET_MKLDNN_DEBUG=1 # Ignored if not present export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 - nosetests-2.7 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_quantization_gpu.xml --verbose tests/python/quantization_gpu + export CUDNN_VERSION=7.0.3 + nosetests-2.7 $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_quantization_gpu.xml --verbose tests/python/quantization_gpu } # quantization gpu currently only runs on P3 instances @@ -771,19 +840,31 @@ unittest_ubuntu_python3_quantization_gpu() { export PYTHONPATH=./python/ export MXNET_MKLDNN_DEBUG=1 # Ignored if not present export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 - nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_quantization_gpu.xml --verbose tests/python/quantization_gpu + export CUDNN_VERSION=7.0.3 + nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_quantization_gpu.xml --verbose tests/python/quantization_gpu } unittest_ubuntu_cpu_scala() { set -ex - make scalapkg USE_BLAS=openblas USE_DIST_KVSTORE=1 ENABLE_TESTCOVERAGE=1 - make scalaunittest USE_BLAS=openblas USE_DIST_KVSTORE=1 ENABLE_TESTCOVERAGE=1 + scala_prepare + cd scala-package + mvn -B integration-test +} + +unittest_centos7_cpu_scala() { + set -ex + cd /work/mxnet + scala_prepare + cd scala-package + mvn -B integration-test } unittest_ubuntu_cpu_clojure() { set -ex - make scalapkg USE_OPENCV=1 USE_BLAS=openblas USE_DIST_KVSTORE=1 ENABLE_TESTCOVERAGE=1 - make scalainstall USE_OPENCV=1 USE_BLAS=openblas USE_DIST_KVSTORE=1 ENABLE_TESTCOVERAGE=1 + scala_prepare + cd scala-package + mvn -B install + cd .. 
./contrib/clojure-package/ci-test.sh } @@ -792,7 +873,7 @@ unittest_ubuntu_cpugpu_perl() { ./perl-package/test.sh } -unittest_ubuntu_gpu_cpp() { +unittest_cpp() { set -ex build/tests/mxnet_unit_tests } @@ -826,57 +907,67 @@ unittest_ubuntu_gpu_R() { make rpkgtest R_LIBS=/tmp/r-site-library R_GPU_ENABLE=1 } -unittest_ubuntu_cpu_julia06() { +unittest_ubuntu_cpu_julia() { set -ex - export PATH="/work/julia/bin:$PATH" + export PATH="$1/bin:$PATH" export MXNET_HOME='/work/mxnet' - export JULIA_PKGDIR='/work/julia-pkg' - export DEPDIR=`julia -e 'print(Pkg.dir())'` + export JULIA_DEPOT_PATH='/work/julia-depot' + export DEVDIR="$JULIA_DEPOT_PATH/dev" - julia -e 'versioninfo()' - julia -e 'Pkg.init()' + julia -e 'using InteractiveUtils; versioninfo()' # install package - ln -sf ${MXNET_HOME}/julia ${DEPDIR}/MXNet + mkdir -p $DEVDIR + ln -sf ${MXNET_HOME}/julia ${DEVDIR}/MXNet - # install dependencies - julia -e 'Pkg.resolve()' + # register MXNet.jl and dependencies + julia -e 'using Pkg; Pkg.develop("MXNet")' # FIXME export LD_PRELOAD='/usr/lib/x86_64-linux-gnu/libjemalloc.so' + export LD_LIBRARY_PATH=/work/mxnet/lib:$LD_LIBRARY_PATH # use the prebuilt binary from $MXNET_HOME/lib - julia -e 'Pkg.build("MXNet")' + julia -e 'using Pkg; Pkg.build("MXNet")' # run the script `julia/test/runtests.jl` - julia -e 'Pkg.test("MXNet")' + julia -e 'using Pkg; Pkg.test("MXNet")' # See https://github.com/dmlc/MXNet.jl/pull/303#issuecomment-341171774 julia -e 'using MXNet; mx._sig_checker()' } +unittest_ubuntu_cpu_julia07() { + set -ex + unittest_ubuntu_cpu_julia /work/julia07 +} + +unittest_ubuntu_cpu_julia10() { + set -ex + unittest_ubuntu_cpu_julia /work/julia10 +} + unittest_centos7_cpu() { set -ex cd /work/mxnet - python3.6 -m "nose" $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_unittest.xml --verbose tests/python/unittest - python3.6 -m "nose" $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_train.xml --verbose tests/python/train + python3.6 -m "nose" $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_unittest.xml --verbose tests/python/unittest + python3.6 -m "nose" $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_train.xml --verbose tests/python/train } unittest_centos7_gpu() { set -ex cd /work/mxnet - python3.6 -m "nose" $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_gpu.xml --verbose tests/python/gpu + export CUDNN_VERSION=7.0.3 + python3.6 -m "nose" $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_gpu.xml --verbose tests/python/gpu } integrationtest_ubuntu_cpu_onnx() { set -ex export PYTHONPATH=./python/ - python example/onnx/super_resolution.py - pytest tests/python-pytest/onnx/import/mxnet_backend_test.py - pytest tests/python-pytest/onnx/import/onnx_import_test.py - pytest tests/python-pytest/onnx/import/gluon_backend_test.py - pytest tests/python-pytest/onnx/export/onnx_backend_test.py - python tests/python-pytest/onnx/export/mxnet_export_test.py + python tests/python-pytest/onnx/backend_test.py + pytest tests/python-pytest/onnx/mxnet_export_test.py + pytest tests/python-pytest/onnx/test_models.py + pytest tests/python-pytest/onnx/test_node.py } integrationtest_ubuntu_gpu_python() { @@ -927,8 +1018,10 @@ integrationtest_ubuntu_cpu_dist_kvstore() { integrationtest_ubuntu_gpu_scala() { set -ex - make scalapkg USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1 USE_DIST_KVSTORE=1 SCALA_ON_GPU=1 ENABLE_TESTCOVERAGE=1 - make 
scalaintegrationtest USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=1 SCALA_TEST_ON_GPU=1 USE_DIST_KVSTORE=1 ENABLE_TESTCOVERAGE=1 + scala_prepare + cd scala-package + export SCALA_TEST_ON_GPU=1 + mvn -B integration-test -DskipTests=false } integrationtest_ubuntu_gpu_dist_kvstore() { @@ -953,7 +1046,7 @@ test_ubuntu_cpu_python2() { cd /work/mxnet/python pip install -e . cd /work/mxnet - python -m "nose" $NOSE_COVERAGE_ARGUMENTS --with-timer --verbose tests/python/unittest + python -m "nose" $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --verbose tests/python/unittest popd } @@ -969,7 +1062,7 @@ test_ubuntu_cpu_python3() { pip3 install nose nose-timer pip3 install -e . cd /work/mxnet - python3 -m "nose" $NOSE_COVERAGE_ARGUMENTS --with-timer --verbose tests/python/unittest + python3 -m "nose" $NOSE_COVERAGE_ARGUMENTS $NOSE_TIMER_ARGUMENTS --verbose tests/python/unittest popd } @@ -992,7 +1085,6 @@ build_docs() { popd } - # Functions that run the nightly Tests: #Runs Apache RAT Check on MXNet Source for License Headers @@ -1086,7 +1178,7 @@ nightly_straight_dope_python2_single_gpu_tests() { cd /work/mxnet/tests/nightly/straight_dope export PYTHONPATH=/work/mxnet/python/ export MXNET_TEST_KERNEL=python2 - nosetests-2.7 --with-xunit --xunit-file nosetests_straight_dope_python2_single_gpu.xml \ + nosetests-2.7 $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_straight_dope_python2_single_gpu.xml \ test_notebooks_single_gpu.py --nologcapture } @@ -1095,7 +1187,7 @@ nightly_straight_dope_python3_single_gpu_tests() { cd /work/mxnet/tests/nightly/straight_dope export PYTHONPATH=/work/mxnet/python/ export MXNET_TEST_KERNEL=python3 - nosetests-3.4 --with-xunit --xunit-file nosetests_straight_dope_python3_single_gpu.xml \ + nosetests-3.4 $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_straight_dope_python3_single_gpu.xml \ test_notebooks_single_gpu.py --nologcapture } @@ -1105,7 +1197,7 @@ nightly_straight_dope_python2_multi_gpu_tests() { cd /work/mxnet/tests/nightly/straight_dope export PYTHONPATH=/work/mxnet/python/ export MXNET_TEST_KERNEL=python2 - nosetests-2.7 --with-xunit --xunit-file nosetests_straight_dope_python2_multi_gpu.xml \ + nosetests-2.7 $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_straight_dope_python2_multi_gpu.xml \ test_notebooks_multi_gpu.py --nologcapture } @@ -1114,46 +1206,107 @@ nightly_straight_dope_python3_multi_gpu_tests() { cd /work/mxnet/tests/nightly/straight_dope export PYTHONPATH=/work/mxnet/python/ export MXNET_TEST_KERNEL=python3 - nosetests-3.4 --with-xunit --xunit-file nosetests_straight_dope_python3_multi_gpu.xml \ + nosetests-3.4 $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_straight_dope_python3_multi_gpu.xml \ test_notebooks_multi_gpu.py --nologcapture } +nightly_tutorial_test_ubuntu_python3_gpu() { + set -ex + cd /work/mxnet/docs + export BUILD_VER=tutorial + export MXNET_DOCS_BUILD_MXNET=0 + make html + export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 + export PYTHONPATH=/work/mxnet/python/ + export MXNET_TUTORIAL_TEST_KERNEL=python3 + cd /work/mxnet/tests/tutorials + nosetests-3.4 $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_tutorials.xml test_tutorials.py --nologcapture +} + +nightly_tutorial_test_ubuntu_python2_gpu() { + set -ex + cd /work/mxnet/docs + export BUILD_VER=tutorial + export MXNET_DOCS_BUILD_MXNET=0 + make html + export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 + export PYTHONPATH=/work/mxnet/python/ + export MXNET_TUTORIAL_TEST_KERNEL=python2 + cd 
/work/mxnet/tests/tutorials + nosetests-3.4 $NOSE_TIMER_ARGUMENTS --with-xunit --xunit-file nosetests_tutorials.xml test_tutorials.py --nologcapture +} + +nightly_java_demo_test_cpu() { + set -ex + cd /work/mxnet/scala-package/mxnet-demo/java-demo + make java_ci_demo + bash bin/java_sample.sh + bash bin/run_od.sh +} + +nightly_scala_demo_test_cpu() { + set -ex + cd /work/mxnet/scala-package/mxnet-demo/scala-demo + make scala_ci_demo + bash bin/demo.sh + bash bin/run_im.sh +} + # Deploy deploy_docs() { set -ex pushd . - make docs + make docs SPHINXOPTS=-W popd } deploy_jl_docs() { set -ex - export PATH="/work/julia/bin:$PATH" + export PATH="/work/julia10/bin:$PATH" export MXNET_HOME='/work/mxnet' - export JULIA_PKGDIR='/work/julia-pkg' - export DEPDIR=`julia -e 'print(Pkg.dir())'` + export JULIA_DEPOT_PATH='/work/julia-depot' + export DEVDIR="$JULIA_DEPOT_PATH/dev" - julia -e 'versioninfo()' - julia -e 'Pkg.init()' - ln -sf ${MXNET_HOME}/julia ${DEPDIR}/MXNet - julia -e 'Pkg.resolve()' + julia -e 'using InteractiveUtils; versioninfo()' + mkdir -p $DEVDIR # FIXME export LD_PRELOAD='/usr/lib/x86_64-linux-gnu/libjemalloc.so' + export LD_LIBRARY_PATH=/work/mxnet/lib:$LD_LIBRARY_PATH - # use the prebuilt binary from $MXNET_HOME/lib - julia -e 'Pkg.build("MXNet")' - # build docs - julia -e 'Pkg.add("Documenter")' - julia -e 'cd(Pkg.dir("MXNet")); include(joinpath("docs", "make.jl"))' + make -C julia/docs # TODO: make Jenkins worker push to MXNet.jl ph-pages branch if master build # ... } +publish_scala_build() { + set -ex + pushd . + scala_prepare + ./ci/publish/scala/build.sh + popd +} + +publish_scala_test() { + set -ex + pushd . + scala_prepare + ./ci/publish/scala/test.sh + popd +} + +publish_scala_deploy() { + set -ex + pushd . + scala_prepare + ./ci/publish/scala/deploy.sh + popd +} + # broken_link_checker broken_link_checker() { @@ -1180,5 +1333,3 @@ EOF declare -F | cut -d' ' -f3 echo fi - - diff --git a/ci/docker_cache.py b/ci/docker_cache.py index bebcb25fb..f906b0eba 100755 --- a/ci/docker_cache.py +++ b/ci/docker_cache.py @@ -32,7 +32,13 @@ import json from typing import * import build as build_util +from util import retry +DOCKERHUB_LOGIN_NUM_RETRIES = 5 +DOCKERHUB_RETRY_SECONDS = 5 +DOCKER_CACHE_NUM_RETRIES = 3 +DOCKER_CACHE_TIMEOUT_MINS = 15 +PARALLEL_BUILDS = 10 def build_save_containers(platforms, registry, load_cache) -> int: @@ -47,7 +53,7 @@ def build_save_containers(platforms, registry, load_cache) -> int: if len(platforms) == 0: return 0 - platform_results = Parallel(n_jobs=len(platforms), backend="multiprocessing")( + platform_results = Parallel(n_jobs=PARALLEL_BUILDS, backend="multiprocessing")( delayed(_build_save_container)(platform, registry, load_cache) for platform in platforms) @@ -78,7 +84,7 @@ def _build_save_container(platform, registry, load_cache) -> Optional[str]: logging.debug('Building %s as %s', platform, docker_tag) try: # Increase the number of retries for building the cache. 
- image_id = build_util.build_docker(docker_binary='docker', platform=platform, registry=registry, num_retries=10, use_cache=True) + image_id = build_util.build_docker(docker_binary='docker', platform=platform, registry=registry, num_retries=10, no_cache=False) logging.info('Built %s as %s', docker_tag, image_id) # Push cache to registry @@ -99,24 +105,47 @@ def _upload_image(registry, docker_tag, image_id) -> None: :param image_id: Image id :return: None """ - _login_dockerhub() # We don't have to retag the image since it is already in the right format logging.info('Uploading %s (%s) to %s', docker_tag, image_id, registry) push_cmd = ['docker', 'push', docker_tag] subprocess.check_call(push_cmd) +@retry(target_exception=subprocess.CalledProcessError, tries=DOCKERHUB_LOGIN_NUM_RETRIES, + delay_s=DOCKERHUB_RETRY_SECONDS) def _login_dockerhub(): """ Login to the Docker Hub account :return: None """ dockerhub_credentials = _get_dockerhub_credentials() - login_cmd = ['docker', 'login', '--username', dockerhub_credentials['username'], '--password', - dockerhub_credentials['password']] - subprocess.check_call(login_cmd) + + logging.info('Logging in to DockerHub') + # We use password-stdin instead of --password to avoid leaking passwords in case of an error. + # This method will produce the following output: + # > WARNING! Your password will be stored unencrypted in /home/jenkins_slave/.docker/config.json. + # > Configure a credential helper to remove this warning. See + # > https://docs.docker.com/engine/reference/commandline/login/#credentials-store + # Since we consider the restricted slaves a secure environment, that's fine. Also, using this will require + # third party applications which would need a review first as well. + p = subprocess.run(['docker', 'login', '--username', dockerhub_credentials['username'], '--password-stdin'], + stdout=subprocess.PIPE, input=str.encode(dockerhub_credentials['password'])) + logging.info(p.stdout) + logging.info('Successfully logged in to DockerHub') + + +def _logout_dockerhub(): + """ + Log out of DockerHub to delete local credentials + :return: None + """ + logging.info('Logging out of DockerHub') + subprocess.call(['docker', 'logout']) + logging.info('Successfully logged out of DockerHub') +@retry(target_exception=subprocess.TimeoutExpired, tries=DOCKER_CACHE_NUM_RETRIES, + delay_s=DOCKERHUB_RETRY_SECONDS) def load_docker_cache(registry, docker_tag) -> None: """ Load the precompiled docker cache from the registry @@ -125,9 +154,16 @@ def load_docker_cache(registry, docker_tag) -> None: :return: None """ # We don't have to retag the image since it's already in the right format + if not registry: + return + assert docker_tag + logging.info('Loading Docker cache for %s from %s', docker_tag, registry) pull_cmd = ['docker', 'pull', docker_tag] - subprocess.call(pull_cmd) # Don't throw an error if the image does not exist + + # Don't throw an error if the image does not exist + subprocess.run(pull_cmd, timeout=DOCKER_CACHE_TIMEOUT_MINS*60) + logging.info('Successfully pulled docker cache') def delete_local_docker_cache(docker_tag): @@ -175,8 +211,7 @@ def _get_dockerhub_credentials(): # pragma: no cover logging.exception("The request was invalid due to:") elif client_error.response['Error']['Code'] == 'InvalidParameterException': logging.exception("The request had invalid params:") - else: - raise + raise else: secret = get_secret_value_response['SecretString'] secret_dict = json.loads(secret) @@ -213,7 +248,11 @@ def script_name() -> str: args = 
parser.parse_args() platforms = build_util.get_platforms() - return build_save_containers(platforms=platforms, registry=args.docker_registry, load_cache=True) + try: + _login_dockerhub() + return build_save_containers(platforms=platforms, registry=args.docker_registry, load_cache=True) + finally: + _logout_dockerhub() if __name__ == '__main__': diff --git a/Jenkinsfile b/ci/jenkins/Jenkins_steps.groovy similarity index 71% rename from Jenkinsfile rename to ci/jenkins/Jenkins_steps.groovy index 64ef1d982..e812c4e24 100644 --- a/Jenkinsfile +++ b/ci/jenkins/Jenkins_steps.groovy @@ -17,13 +17,17 @@ // specific language governing permissions and limitations // under the License. // -// Jenkins pipeline -// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/ +// This file contains the steps that will be used in the +// Jenkins pipelines + +utils = load('ci/Jenkinsfile_utils.groovy') // mxnet libraries mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a' -// for scala build, need to pass extra libs when run with dist_kvstore -mx_dist_lib = 'lib/libmxnet.so, lib/libmxnet.a, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, 3rdparty/ps-lite/build/libps.a, deps/lib/libprotobuf-lite.a, deps/lib/libzmq.a' + +// Python wheels +mx_pip = 'build/*.whl' + // mxnet cmake libraries, in cmake builds we do not produce a libnvvm static library by default. mx_cmake_lib = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so' // mxnet cmake libraries, in cmake builds we do not produce a libnvvm static library by default. @@ -31,12 +35,8 @@ mx_cmake_lib_debug = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/dmlc-c mx_cmake_mkldnn_lib = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so, build/3rdparty/mkldnn/src/libmkldnn.so.0' mx_mkldnn_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libiomp5.so, lib/libmkldnn.so.0, lib/libmklml_intel.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a' mx_tensorrt_lib = 'lib/libmxnet.so, lib/libnvonnxparser_runtime.so.0, lib/libnvonnxparser.so.0, lib/libonnx_proto.so, lib/libonnx.so' -mx_lib_cpp_examples = 'lib/libmxnet.so, lib/libmxnet.a, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, 3rdparty/ps-lite/build/libps.a, deps/lib/libprotobuf-lite.a, deps/lib/libzmq.a, build/cpp-package/example/lenet, build/cpp-package/example/alexnet, build/cpp-package/example/googlenet, build/cpp-package/example/lenet_with_mxdataiter, build/cpp-package/example/resnet, build/cpp-package/example/mlp, build/cpp-package/example/mlp_cpu, build/cpp-package/example/mlp_gpu, build/cpp-package/example/test_score, build/cpp-package/example/test_optimizer' -mx_lib_cpp_examples_cpu = 'build/libmxnet.so, build/cpp-package/example/mlp_cpu' - -// timeout in minutes -max_time = 120 - +mx_lib_cpp_examples = 'lib/libmxnet.so, lib/libmxnet.a, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, 3rdparty/ps-lite/build/libps.a, deps/lib/libprotobuf-lite.a, deps/lib/libzmq.a, build/cpp-package/example/*' +mx_lib_cpp_examples_cpu = 'build/libmxnet.so, build/cpp-package/example/*' // Python unittest for CPU // Python 2 @@ -89,126 +89,206 @@ def python3_gpu_ut_nocudnn(docker_container_name) { } } -def deploy_docs() { - parallel 'Docs': { - node(NODE_LINUX_CPU) { - ws('workspace/docs') { - timeout(time: max_time, 
unit: 'MINUTES') { - utils.init_git() - utils.docker_run('ubuntu_cpu', 'deploy_docs', false) - sh "ci/other/ci_deploy_doc.sh ${env.BRANCH_NAME} ${env.BUILD_NUMBER}" +//------------------------------------------------------------------------------------------ + +def compile_unix_cpu_openblas() { + return ['CPU: Openblas': { + node(NODE_LINUX_CPU) { + ws('workspace/build-cpu-openblas') { + timeout(time: max_time, unit: 'MINUTES') { + utils.init_git() + utils.docker_run('ubuntu_cpu', 'build_ubuntu_cpu_openblas', false) + utils.pack_lib('cpu', mx_lib, true) + } } } - } - }, - 'Julia docs': { - node(NODE_LINUX_CPU) { - ws('workspace/julia-docs') { - timeout(time: max_time, unit: 'MINUTES') { - utils.unpack_and_init('cpu', mx_lib) - utils.docker_run('ubuntu_cpu', 'deploy_jl_docs', false) + }] +} + +def compile_unix_openblas_debug_cpu() { + return ['CPU: Openblas, cmake, debug': { + node(NODE_LINUX_CPU) { + ws('workspace/build-cpu-openblas') { + timeout(time: max_time, unit: 'MINUTES') { + utils.init_git() + utils.docker_run('ubuntu_cpu', 'build_ubuntu_cpu_cmake_debug', false) + utils.pack_lib('cpu_debug', mx_cmake_lib_debug, true) + } } } - } - } + }] } -node('mxnetlinux-cpu') { - // Loading the utilities requires a node context unfortunately - checkout scm - utils = load('ci/Jenkinsfile_utils.groovy') +def compile_unix_mkl_cpu() { + return ['CPU: MKL': { + node(NODE_LINUX_CPU) { + ws('workspace/build-cpu-mkl') { + timeout(time: max_time, unit: 'MINUTES') { + utils.init_git() + utils.docker_run('ubuntu_cpu', 'build_ubuntu_cpu_mkl', false) + utils.pack_lib('cpu_mkl', mx_mkldnn_lib, true) + } + } + } + }] } -utils.assign_node_labels(linux_cpu: 'mxnetlinux-cpu', linux_gpu: 'mxnetlinux-gpu', linux_gpu_p3: 'mxnetlinux-gpu-p3', windows_cpu: 'mxnetwindows-cpu', windows_gpu: 'mxnetwindows-gpu') -utils.main_wrapper( -core_logic: { - stage('Sanity Check') { - parallel 'Lint': { +def compile_unix_mkldnn_cpu() { + return ['CPU: MKLDNN': { node(NODE_LINUX_CPU) { - ws('workspace/sanity-lint') { - utils.init_git() - utils.docker_run('ubuntu_cpu', 'sanity_check', false) + ws('workspace/build-mkldnn-cpu') { + timeout(time: max_time, unit: 'MINUTES') { + utils.init_git() + utils.docker_run('ubuntu_cpu', 'build_ubuntu_cpu_mkldnn', false) + utils.pack_lib('mkldnn_cpu', mx_mkldnn_lib, true) + } } } - }, - 'RAT License': { + }] +} + +def compile_unix_mkldnn_mkl_cpu() { + return ['CPU: MKLDNN_MKL': { node(NODE_LINUX_CPU) { - ws('workspace/sanity-rat') { - utils.init_git() - utils.docker_run('ubuntu_rat', 'nightly_test_rat_check', false) + ws('workspace/build-mkldnn-cpu') { + timeout(time: max_time, unit: 'MINUTES') { + utils.init_git() + utils.docker_run('ubuntu_cpu', 'build_ubuntu_cpu_mkldnn_mkl', false) + utils.pack_lib('mkldnn_mkl_cpu', mx_mkldnn_lib, true) + } } } - } - } + }] +} - stage('Build') { - parallel 'CPU: CentOS 7': { +def compile_unix_mkldnn_gpu() { + return ['GPU: MKLDNN': { node(NODE_LINUX_CPU) { - ws('workspace/build-centos7-cpu') { + ws('workspace/build-mkldnn-gpu') { timeout(time: max_time, unit: 'MINUTES') { utils.init_git() - utils.docker_run('centos7_cpu', 'build_centos7_cpu', false) - utils.pack_lib('centos7_cpu', mx_lib, true) + utils.docker_run('ubuntu_build_cuda', 'build_ubuntu_gpu_mkldnn', false) + utils.pack_lib('mkldnn_gpu', mx_mkldnn_lib, true) + } + } + } + }] +} + +def compile_unix_mkldnn_nocudnn_gpu() { + return ['GPU: MKLDNN_CUDNNOFF': { + node(NODE_LINUX_CPU) { + ws('workspace/build-mkldnn-gpu-nocudnn') { + timeout(time: max_time, unit: 'MINUTES') { + utils.init_git() + 
utils.docker_run('ubuntu_build_cuda', 'build_ubuntu_gpu_mkldnn_nocudnn', false) + utils.pack_lib('mkldnn_gpu_nocudnn', mx_mkldnn_lib, true) + } + } + } + }] +} + +def compile_unix_full_gpu() { + return ['GPU: CUDA9.1+cuDNN7': { + node(NODE_LINUX_CPU) { + ws('workspace/build-gpu') { + timeout(time: max_time, unit: 'MINUTES') { + utils.init_git() + utils.docker_run('ubuntu_build_cuda', 'build_ubuntu_gpu_cuda91_cudnn7', false) + utils.pack_lib('gpu', mx_lib_cpp_examples, true) } } } - }, - 'CPU: CentOS 7 MKLDNN': { + }] +} + +def compile_unix_cmake_mkldnn_gpu() { + return ['GPU: CMake MKLDNN': { node(NODE_LINUX_CPU) { - ws('workspace/build-centos7-mkldnn') { + ws('workspace/build-cmake-mkldnn-gpu') { timeout(time: max_time, unit: 'MINUTES') { utils.init_git() - utils.docker_run('centos7_cpu', 'build_centos7_mkldnn', false) - utils.pack_lib('centos7_mkldnn', mx_lib, true) + utils.docker_run('ubuntu_gpu', 'build_ubuntu_gpu_cmake_mkldnn', false) + utils.pack_lib('cmake_mkldnn_gpu', mx_cmake_mkldnn_lib, true) } } } - }, - 'GPU: CentOS 7': { + }] +} + +def compile_unix_cmake_gpu() { + return ['GPU: CMake': { node(NODE_LINUX_CPU) { - ws('workspace/build-centos7-gpu') { + ws('workspace/build-cmake-gpu') { timeout(time: max_time, unit: 'MINUTES') { utils.init_git() - utils.docker_run('centos7_gpu', 'build_centos7_gpu', false) - utils.pack_lib('centos7_gpu', mx_lib, true) + utils.docker_run('ubuntu_gpu', 'build_ubuntu_gpu_cmake', false) + utils.pack_lib('cmake_gpu', mx_cmake_lib, true) + } + } + } + }] +} + +def compile_unix_tensorrt_gpu() { + return ['TensorRT': { + node(NODE_LINUX_CPU) { + ws('workspace/build-tensorrt') { + timeout(time: max_time, unit: 'MINUTES') { + utils.init_git() + utils.docker_run('ubuntu_gpu_tensorrt', 'build_ubuntu_gpu_tensorrt', false) + utils.pack_lib('tensorrt', mx_tensorrt_lib, true) } } } - }, - 'CPU: Openblas': { + }] +} + +def compile_centos7_cpu() { + return ['CPU: CentOS 7': { node(NODE_LINUX_CPU) { - ws('workspace/build-cpu-openblas') { + ws('workspace/build-centos7-cpu') { timeout(time: max_time, unit: 'MINUTES') { utils.init_git() - utils.docker_run('ubuntu_cpu', 'build_ubuntu_cpu_openblas', false) - utils.pack_lib('cpu', mx_dist_lib, true) + utils.docker_run('centos7_cpu', 'build_centos7_cpu', false) + utils.pack_lib('centos7_cpu', mx_lib, true) } } } - }, - 'CPU: ASAN': { + }] +} + +def compile_centos7_cpu_mkldnn() { + return ['CPU: CentOS 7 MKLDNN': { node(NODE_LINUX_CPU) { - ws('workspace/build-cpu-asan') { + ws('workspace/build-centos7-mkldnn') { timeout(time: max_time, unit: 'MINUTES') { utils.init_git() - utils.docker_run('ubuntu_cpu', 'build_ubuntu_cpu_cmake_asan', false) - utils.pack_lib('cpu_asan', mx_lib_cpp_examples_cpu) + utils.docker_run('centos7_cpu', 'build_centos7_mkldnn', false) + utils.pack_lib('centos7_mkldnn', mx_mkldnn_lib, true) } } } - }, - 'CPU: Openblas, debug': { + }] +} + +def compile_centos7_gpu() { + return ['GPU: CentOS 7': { node(NODE_LINUX_CPU) { - ws('workspace/build-cpu-openblas') { + ws('workspace/build-centos7-gpu') { timeout(time: max_time, unit: 'MINUTES') { utils.init_git() - utils.docker_run('ubuntu_cpu', 'build_ubuntu_cpu_cmake_debug', false) - utils.pack_lib('cpu_debug', mx_cmake_lib_debug, true) + utils.docker_run('centos7_gpu', 'build_centos7_gpu', false) + utils.pack_lib('centos7_gpu', mx_lib, true) } } } - }, - 'CPU: Clang 3.9': { + }] +} + +def compile_unix_clang_3_9_cpu() { + return ['CPU: Clang 3.9': { node(NODE_LINUX_CPU) { ws('workspace/build-cpu-clang39') { timeout(time: max_time, unit: 'MINUTES') { @@ -217,8 
+297,11 @@ core_logic: { } } } - }, - 'CPU: Clang 6': { + }] +} + +def compile_unix_clang_6_cpu() { + return ['CPU: Clang 6': { node(NODE_LINUX_CPU) { ws('workspace/build-cpu-clang60') { timeout(time: max_time, unit: 'MINUTES') { @@ -227,8 +310,11 @@ core_logic: { } } } - }, - 'CPU: Clang Tidy': { + }] +} + +def compile_unix_clang_tidy_cpu() { + return ['CPU: Clang Tidy': { node(NODE_LINUX_CPU) { ws('workspace/build-cpu-clang60_tidy') { timeout(time: max_time, unit: 'MINUTES') { @@ -237,8 +323,11 @@ core_logic: { } } } - }, - 'CPU: Clang 3.9 MKLDNN': { + }] +} + +def compile_unix_clang_3_9_mkldnn_cpu() { + return ['CPU: Clang 3.9 MKLDNN': { node(NODE_LINUX_CPU) { ws('workspace/build-cpu-mkldnn-clang39') { timeout(time: max_time, unit: 'MINUTES') { @@ -248,8 +337,11 @@ core_logic: { } } } - }, - 'CPU: Clang 6 MKLDNN': { + }] +} + +def compile_unix_clang_6_mkldnn_cpu() { + return ['CPU: Clang 6 MKLDNN': { node(NODE_LINUX_CPU) { ws('workspace/build-cpu-mkldnn-clang60') { timeout(time: max_time, unit: 'MINUTES') { @@ -259,106 +351,130 @@ core_logic: { } } } - }, - 'CPU: MKLDNN': { + }] +} + +def compile_armv8_jetson_gpu() { + return ['NVidia Jetson / ARMv8':{ node(NODE_LINUX_CPU) { - ws('workspace/build-mkldnn-cpu') { + ws('workspace/build-jetson-armv8') { timeout(time: max_time, unit: 'MINUTES') { utils.init_git() - utils.docker_run('ubuntu_cpu', 'build_ubuntu_cpu_mkldnn', false) - utils.pack_lib('mkldnn_cpu', mx_mkldnn_lib, true) + utils.docker_run('jetson', 'build_jetson', false) } } } - }, - 'GPU: MKLDNN': { + }] +} + +def compile_armv7_cpu() { + return ['ARMv7':{ node(NODE_LINUX_CPU) { - ws('workspace/build-mkldnn-gpu') { + ws('workspace/build-ARMv7') { timeout(time: max_time, unit: 'MINUTES') { utils.init_git() - utils.docker_run('ubuntu_build_cuda', 'build_ubuntu_gpu_mkldnn', false) - utils.pack_lib('mkldnn_gpu', mx_mkldnn_lib, true) + utils.docker_run('armv7', 'build_armv7', false) + utils.pack_lib('armv7', mx_pip) } } } - }, - 'GPU: MKLDNN_CUDNNOFF': { - node(NODE_LINUX_CPU) { - ws('workspace/build-mkldnn-gpu-nocudnn') { - timeout(time: max_time, unit: 'MINUTES') { - utils.init_git() - utils.docker_run('ubuntu_build_cuda', 'build_ubuntu_gpu_mkldnn_nocudnn', false) - utils.pack_lib('mkldnn_gpu_nocudnn', mx_mkldnn_lib, true) - } - } - } - }, - 'GPU: CUDA9.1+cuDNN7': { + }] +} + +def compile_armv6_cpu() { + return ['ARMv6':{ node(NODE_LINUX_CPU) { - ws('workspace/build-gpu') { + ws('workspace/build-ARMv6') { timeout(time: max_time, unit: 'MINUTES') { utils.init_git() - utils.docker_run('ubuntu_build_cuda', 'build_ubuntu_gpu_cuda91_cudnn7', false) - utils.pack_lib('gpu', mx_lib_cpp_examples, true) + utils.docker_run('armv6', 'build_armv6', false) } } } - }, - 'Amalgamation MIN': { + }] +} + +def compile_armv8_cpu() { + return ['ARMv8':{ node(NODE_LINUX_CPU) { - ws('workspace/amalgamationmin') { + ws('workspace/build-ARMv8') { timeout(time: max_time, unit: 'MINUTES') { utils.init_git() - utils.docker_run('ubuntu_cpu', 'build_ubuntu_amalgamation_min', false) + utils.docker_run('armv8', 'build_armv8', false) } } } - }, - 'Amalgamation': { + }] +} + +def compile_armv8_android_cpu() { + return ['Android / ARMv8':{ node(NODE_LINUX_CPU) { - ws('workspace/amalgamation') { + ws('workspace/android64') { timeout(time: max_time, unit: 'MINUTES') { utils.init_git() - utils.docker_run('ubuntu_cpu', 'build_ubuntu_amalgamation', false) + utils.docker_run('android_armv8', 'build_android_armv8', false) } } } - }, + }] +} - 'GPU: CMake MKLDNN': { +def compile_armv7_android_cpu() { + return ['Android / 
ARMv7':{ node(NODE_LINUX_CPU) { - ws('workspace/build-cmake-mkldnn-gpu') { + ws('workspace/androidv7') { timeout(time: max_time, unit: 'MINUTES') { utils.init_git() - utils.docker_run('ubuntu_gpu', 'build_ubuntu_gpu_cmake_mkldnn', false) - utils.pack_lib('cmake_mkldnn_gpu', mx_cmake_mkldnn_lib, true) + utils.docker_run('android_armv7', 'build_android_armv7', false) } } } - }, - 'GPU: CMake': { + }] +} + +def compile_unix_asan_cpu() { + return ['CPU: ASAN': { node(NODE_LINUX_CPU) { - ws('workspace/build-cmake-gpu') { + ws('workspace/build-cpu-asan') { timeout(time: max_time, unit: 'MINUTES') { utils.init_git() - utils.docker_run('ubuntu_gpu', 'build_ubuntu_gpu_cmake', false) - utils.pack_lib('cmake_gpu', mx_cmake_lib, true) + utils.docker_run('ubuntu_cpu', 'build_ubuntu_cpu_cmake_asan', false) + utils.pack_lib('cpu_asan', mx_lib_cpp_examples_cpu) + } + } + } + }] +} + +def compile_unix_amalgamation_min() { + return ['Amalgamation MIN': { + node(NODE_LINUX_CPU) { + ws('workspace/amalgamationmin') { + timeout(time: max_time, unit: 'MINUTES') { + utils.init_git() + utils.docker_run('ubuntu_cpu', 'build_ubuntu_amalgamation_min', false) } } } - }, - 'TensorRT': { + }] +} + +def compile_unix_amalgamation() { + return ['Amalgamation': { node(NODE_LINUX_CPU) { - ws('workspace/build-tensorrt') { + ws('workspace/amalgamation') { timeout(time: max_time, unit: 'MINUTES') { utils.init_git() - utils.docker_run('ubuntu_gpu_tensorrt', 'build_ubuntu_gpu_tensorrt', false) - utils.pack_lib('tensorrt', mx_tensorrt_lib, true) + utils.docker_run('ubuntu_cpu', 'build_ubuntu_amalgamation', false) } } } - }, - 'Build CPU windows':{ + }] +} + +def compile_windows_cpu() { + return ['Build CPU windows':{ node(NODE_WINDOWS_CPU) { timeout(time: max_time, unit: 'MINUTES') { ws('workspace/build-cpu') { @@ -370,9 +486,11 @@ core_logic: { } } } - }, + }] +} - 'Build GPU windows':{ +def compile_windows_gpu() { + return ['Build GPU windows':{ node(NODE_WINDOWS_CPU) { timeout(time: max_time, unit: 'MINUTES') { ws('workspace/build-gpu') { @@ -384,8 +502,11 @@ core_logic: { } } } - }, - 'Build GPU MKLDNN windows':{ + }] +} + +def compile_windows_gpu_mkldnn() { + return ['Build GPU MKLDNN windows':{ node(NODE_WINDOWS_CPU) { timeout(time: max_time, unit: 'MINUTES') { ws('workspace/build-gpu') { @@ -397,72 +518,11 @@ core_logic: { } } } - }, - 'NVidia Jetson / ARMv8':{ - node(NODE_LINUX_CPU) { - ws('workspace/build-jetson-armv8') { - timeout(time: max_time, unit: 'MINUTES') { - utils.init_git() - utils.docker_run('jetson', 'build_jetson', false) - } - } - } - }, - // 'ARMv7':{ - // node(NODE_LINUX_CPU) { - // ws('workspace/build-ARMv7') { - // timeout(time: max_time, unit: 'MINUTES') { - // utils.init_git() - // utils.docker_run('armv7', 'build_armv7', false) - // } - // } - // } - // }, - 'ARMv6':{ - node(NODE_LINUX_CPU) { - ws('workspace/build-ARMv6') { - timeout(time: max_time, unit: 'MINUTES') { - utils.init_git() - utils.docker_run('armv6', 'build_armv6', false) - } - } - } - }, - 'ARMv8':{ - node(NODE_LINUX_CPU) { - ws('workspace/build-ARMv8') { - timeout(time: max_time, unit: 'MINUTES') { - utils.init_git() - utils.docker_run('armv8', 'build_armv8', false) - } - } - } - }, - 'Android / ARMv8':{ - node(NODE_LINUX_CPU) { - ws('workspace/android64') { - timeout(time: max_time, unit: 'MINUTES') { - utils.init_git() - utils.docker_run('android_armv8', 'build_android_armv8', false) - } - } - } - }, - 'Android / ARMv7':{ - node(NODE_LINUX_CPU) { - ws('workspace/androidv7') { - timeout(time: max_time, unit: 'MINUTES') { - 
utils.init_git() - utils.docker_run('android_armv7', 'build_android_armv7', false) - } - } - } - } - - } // End of stage('Build') + }] +} - stage('Tests') { - parallel 'Python2: CPU': { +def test_unix_python2_cpu() { + return ['Python2: CPU': { node(NODE_LINUX_CPU) { ws('workspace/ut-python2-cpu') { try { @@ -476,56 +536,95 @@ core_logic: { } } } - }, - 'Python3: CPU': { - node(NODE_LINUX_CPU) { - ws('workspace/ut-python3-cpu') { + }] +} + +def test_unix_python2_gpu() { + return ['Python2: GPU': { + node(NODE_LINUX_GPU) { + ws('workspace/ut-python2-gpu') { try { - utils.unpack_and_init('cpu', mx_lib, true) - python3_ut('ubuntu_cpu') + utils.unpack_and_init('gpu', mx_lib, true) + python2_gpu_ut('ubuntu_gpu') utils.publish_test_coverage() } finally { - utils.collect_test_results_unix('nosetests_unittest.xml', 'nosetests_python3_cpu_unittest.xml') - utils.collect_test_results_unix('nosetests_quantization.xml', 'nosetests_python3_cpu_quantization.xml') + utils.collect_test_results_unix('nosetests_gpu.xml', 'nosetests_python2_gpu.xml') } } } - }, - 'CPU ASAN': { - node(NODE_LINUX_CPU) { - ws('workspace/ut-python3-cpu-asan') { - utils.unpack_and_init('cpu_asan', mx_lib_cpp_examples_cpu) - utils.docker_run('ubuntu_cpu', 'integrationtest_ubuntu_cpu_asan', false) + }] +} + +def test_unix_python2_quantize_gpu() { + return ['Python2: Quantize GPU': { + node(NODE_LINUX_GPU_P3) { + ws('workspace/ut-python2-quantize-gpu') { + timeout(time: max_time, unit: 'MINUTES') { + try { + utils.unpack_and_init('gpu', mx_lib, true) + utils.docker_run('ubuntu_gpu', 'unittest_ubuntu_python2_quantization_gpu', true) + utils.publish_test_coverage() + } finally { + utils.collect_test_results_unix('nosetests_quantization_gpu.xml', 'nosetests_python2_quantize_gpu.xml') + } + } + } + } + }] +} + +def test_unix_python2_mkldnn_gpu() { + return ['Python2: MKLDNN-GPU': { + node(NODE_LINUX_GPU) { + ws('workspace/ut-python2-mkldnn-gpu') { + try { + utils.unpack_and_init('mkldnn_gpu', mx_mkldnn_lib, true) + python2_gpu_ut('ubuntu_gpu') + utils.publish_test_coverage() + } finally { + utils.collect_test_results_unix('nosetests_gpu.xml', 'nosetests_python2_mkldnn_gpu.xml') + } } } - }, - 'Python3: CPU debug': { + }] +} + +def test_unix_python3_cpu() { + return ['Python3: CPU': { node(NODE_LINUX_CPU) { - ws('workspace/ut-python3-cpu-debug') { + ws('workspace/ut-python3-cpu') { try { - utils.unpack_and_init('cpu_debug', mx_cmake_lib_debug, true) + utils.unpack_and_init('cpu', mx_lib, true) python3_ut('ubuntu_cpu') + utils.publish_test_coverage() } finally { - utils.collect_test_results_unix('nosetests_unittest.xml', 'nosetests_python3_cpu_debug_unittest.xml') - utils.collect_test_results_unix('nosetests_quantization.xml', 'nosetests_python3_cpu_debug_quantization.xml') + utils.collect_test_results_unix('nosetests_unittest.xml', 'nosetests_python3_cpu_unittest.xml') + utils.collect_test_results_unix('nosetests_quantization.xml', 'nosetests_python3_cpu_quantization.xml') } } } - }, - 'Python2: GPU': { - node(NODE_LINUX_GPU) { - ws('workspace/ut-python2-gpu') { + }] +} + +def test_unix_python3_mkl_cpu() { + return ['Python3: MKL-CPU': { + node(NODE_LINUX_CPU) { + ws('workspace/ut-python3-cpu') { try { - utils.unpack_and_init('gpu', mx_lib, true) - python2_gpu_ut('ubuntu_gpu') + utils.unpack_and_init('cpu_mkl', mx_lib, true) + python3_ut('ubuntu_cpu') utils.publish_test_coverage() } finally { - utils.collect_test_results_unix('nosetests_gpu.xml', 'nosetests_python2_gpu.xml') + utils.collect_test_results_unix('nosetests_unittest.xml', 
'nosetests_python3_cpu_unittest.xml') + utils.collect_test_results_unix('nosetests_quantization.xml', 'nosetests_python3_cpu_quantization.xml') } } } - }, - 'Python3: GPU': { + }] +} + +def test_unix_python3_gpu() { + return ['Python3: GPU': { node(NODE_LINUX_GPU) { ws('workspace/ut-python3-gpu') { try { @@ -537,23 +636,11 @@ core_logic: { } } } - }, - 'Python2: Quantize GPU': { - node(NODE_LINUX_GPU_P3) { - ws('workspace/ut-python2-quantize-gpu') { - timeout(time: max_time, unit: 'MINUTES') { - try { - utils.unpack_and_init('gpu', mx_lib, true) - utils.docker_run('ubuntu_gpu', 'unittest_ubuntu_python2_quantization_gpu', true) - utils.publish_test_coverage() - } finally { - utils.collect_test_results_unix('nosetests_quantization_gpu.xml', 'nosetests_python2_quantize_gpu.xml') - } - } - } - } - }, - 'Python3: Quantize GPU': { + }] +} + +def test_unix_python3_quantize_gpu() { + return ['Python3: Quantize GPU': { node(NODE_LINUX_GPU_P3) { ws('workspace/ut-python3-quantize-gpu') { timeout(time: max_time, unit: 'MINUTES') { @@ -567,8 +654,27 @@ core_logic: { } } } - }, - 'Python2: MKLDNN-CPU': { + }] +} + +def test_unix_python3_debug_cpu() { + return ['Python3: CPU debug': { + node(NODE_LINUX_CPU) { + ws('workspace/ut-python3-cpu-debug') { + try { + utils.unpack_and_init('cpu_debug', mx_cmake_lib_debug, true) + python3_ut('ubuntu_cpu') + } finally { + utils.collect_test_results_unix('nosetests_unittest.xml', 'nosetests_python3_cpu_debug_unittest.xml') + utils.collect_test_results_unix('nosetests_quantization.xml', 'nosetests_python3_cpu_debug_quantization.xml') + } + } + } + }] +} + +def test_unix_python2_mkldnn_cpu() { + return ['Python2: MKLDNN-CPU': { node(NODE_LINUX_CPU) { ws('workspace/ut-python2-mkldnn-cpu') { try { @@ -582,25 +688,32 @@ core_logic: { } } } - }, - 'Python2: MKLDNN-GPU': { - node(NODE_LINUX_GPU) { - ws('workspace/ut-python2-mkldnn-gpu') { + }] +} + +def test_unix_python3_mkldnn_cpu() { + return ['Python3: MKLDNN-CPU': { + node(NODE_LINUX_CPU) { + ws('workspace/ut-python3-mkldnn-cpu') { try { - utils.unpack_and_init('mkldnn_gpu', mx_mkldnn_lib, true) - python2_gpu_ut('ubuntu_gpu') + utils.unpack_and_init('mkldnn_cpu', mx_mkldnn_lib, true) + python3_ut_mkldnn('ubuntu_cpu') utils.publish_test_coverage() } finally { - utils.collect_test_results_unix('nosetests_gpu.xml', 'nosetests_python2_mkldnn_gpu.xml') + utils.collect_test_results_unix('nosetests_unittest.xml', 'nosetests_python3_mkldnn_cpu_unittest.xml') + utils.collect_test_results_unix('nosetests_mkl.xml', 'nosetests_python3_mkldnn_cpu_mkl.xml') } } } - }, - 'Python3: MKLDNN-CPU': { + }] +} + +def test_unix_python3_mkldnn_mkl_cpu() { + return ['Python3: MKLDNN-MKL-CPU': { node(NODE_LINUX_CPU) { - ws('workspace/ut-python3-mkldnn-cpu') { + ws('workspace/ut-python3-mkldnn-mkl-cpu') { try { - utils.unpack_and_init('mkldnn_cpu', mx_mkldnn_lib, true) + utils.unpack_and_init('mkldnn_mkl_cpu', mx_mkldnn_lib, true) python3_ut_mkldnn('ubuntu_cpu') utils.publish_test_coverage() } finally { @@ -609,8 +722,11 @@ core_logic: { } } } - }, - 'Python3: MKLDNN-GPU': { + }] +} + +def test_unix_python3_mkldnn_gpu() { + return ['Python3: MKLDNN-GPU': { node(NODE_LINUX_GPU) { ws('workspace/ut-python3-mkldnn-gpu') { try { @@ -622,8 +738,11 @@ core_logic: { } } } - }, - 'Python3: MKLDNN-GPU-NOCUDNN': { + }] +} + +def test_unix_python3_mkldnn_nocudnn_gpu() { + return ['Python3: MKLDNN-GPU-NOCUDNN': { node(NODE_LINUX_GPU) { ws('workspace/ut-python3-mkldnn-gpu-nocudnn') { try { @@ -635,76 +754,128 @@ core_logic: { } } } - }, - 'Python3: CentOS 7 
CPU': { - node(NODE_LINUX_CPU) { - ws('workspace/build-centos7-cpu') { + }] +} + +def test_unix_python3_tensorrt_gpu() { + return ['Python3: TensorRT GPU': { + node(NODE_LINUX_GPU_P3) { + ws('workspace/build-tensorrt') { timeout(time: max_time, unit: 'MINUTES') { try { - utils.unpack_and_init('centos7_cpu', mx_lib, true) - utils.docker_run('centos7_cpu', 'unittest_centos7_cpu', false) + utils.unpack_and_init('tensorrt', mx_tensorrt_lib, true) + utils.docker_run('ubuntu_gpu_tensorrt', 'unittest_ubuntu_tensorrt_gpu', true) utils.publish_test_coverage() } finally { - utils.collect_test_results_unix('nosetests_unittest.xml', 'nosetests_python3_centos7_cpu_unittest.xml') - utils.collect_test_results_unix('nosetests_train.xml', 'nosetests_python3_centos7_cpu_train.xml') + utils.collect_test_results_unix('nosetests_tensorrt.xml', 'nosetests_python3_tensorrt_gpu.xml') } } } } - }, - 'Python3: CentOS 7 GPU': { + }] +} + +def test_unix_python3_integration_gpu() { + return ['Python Integration GPU': { node(NODE_LINUX_GPU) { - ws('workspace/build-centos7-gpu') { + ws('workspace/it-python-gpu') { timeout(time: max_time, unit: 'MINUTES') { - try { - utils.unpack_and_init('centos7_gpu', mx_lib, true) - utils.docker_run('centos7_gpu', 'unittest_centos7_gpu', true) - utils.publish_test_coverage() - } finally { - utils.collect_test_results_unix('nosetests_gpu.xml', 'nosetests_python3_centos7_gpu.xml') - } + utils.unpack_and_init('gpu', mx_lib, true) + utils.docker_run('ubuntu_gpu', 'integrationtest_ubuntu_gpu_python', true) + utils.publish_test_coverage() } } } - }, - 'Python3: TensorRT GPU': { - node(NODE_LINUX_GPU_P3) { - ws('workspace/build-tensorrt') { - timeout(time: max_time, unit: 'MINUTES') { - try { - utils.unpack_and_init('tensorrt', mx_tensorrt_lib, true) - utils.docker_run('ubuntu_gpu_tensorrt', 'unittest_ubuntu_tensorrt_gpu', true) - utils.publish_test_coverage() - } finally { - utils.collect_test_results_unix('nosetests_tensorrt.xml', 'nosetests_python3_tensorrt_gpu.xml') + }] +} + +def test_unix_caffe_gpu() { + return ['Caffe GPU': { + node(NODE_LINUX_GPU) { + ws('workspace/it-caffe') { + timeout(time: max_time, unit: 'MINUTES') { + utils.init_git() + utils.unpack_lib('gpu', mx_lib) + utils.docker_run('ubuntu_gpu', 'integrationtest_ubuntu_gpu_caffe', true) + utils.publish_test_coverage() + } } + } + }] +} + +def test_unix_cpp_package_gpu() { + return ['cpp-package GPU': { + node(NODE_LINUX_GPU) { + ws('workspace/it-cpp-package') { + timeout(time: max_time, unit: 'MINUTES') { + utils.unpack_and_init('gpu', mx_lib_cpp_examples, true) + utils.docker_run('ubuntu_gpu', 'integrationtest_ubuntu_gpu_cpp_package', true) + utils.publish_test_coverage() } } } - }, - 'Scala: CPU': { + }] +} + +def test_unix_scala_cpu() { + return ['Scala: CPU': { node(NODE_LINUX_CPU) { ws('workspace/ut-scala-cpu') { timeout(time: max_time, unit: 'MINUTES') { - utils.unpack_and_init('cpu', mx_dist_lib, true) + utils.unpack_and_init('cpu', mx_lib, true) + utils.docker_run('ubuntu_cpu', 'unittest_ubuntu_cpu_scala', false) + utils.publish_test_coverage() + } + } + } + }] +} + +def test_unix_scala_mkldnn_cpu(){ + return ['Scala: MKLDNN-CPU': { + node(NODE_LINUX_CPU) { + ws('workspace/ut-scala-mkldnn-cpu') { + timeout(time: max_time, unit: 'MINUTES') { + utils.unpack_and_init('mkldnn_cpu', mx_mkldnn_lib, true) utils.docker_run('ubuntu_cpu', 'unittest_ubuntu_cpu_scala', false) utils.publish_test_coverage() } } } - }, - 'Clojure: CPU': { + }] +} + +def test_unix_scala_gpu() { + return ['Scala: GPU': { + node(NODE_LINUX_GPU) { + 
ws('workspace/ut-scala-gpu') { + timeout(time: max_time, unit: 'MINUTES') { + utils.unpack_and_init('gpu', mx_lib, true) + utils.docker_run('ubuntu_gpu', 'integrationtest_ubuntu_gpu_scala', true) + utils.publish_test_coverage() + } + } + } + }] +} + +def test_unix_clojure_cpu() { + return ['Clojure: CPU': { node(NODE_LINUX_CPU) { ws('workspace/ut-clojure-cpu') { timeout(time: max_time, unit: 'MINUTES') { - utils.unpack_and_init('cpu', mx_dist_lib, true) + utils.unpack_and_init('cpu', mx_lib, true) utils.docker_run('ubuntu_cpu', 'unittest_ubuntu_cpu_clojure', false) utils.publish_test_coverage() } } } - }, - 'Perl: CPU': { + }] +} + +def test_unix_r_cpu() { + return ['Perl: CPU': { node(NODE_LINUX_CPU) { ws('workspace/ut-perl-cpu') { timeout(time: max_time, unit: 'MINUTES') { @@ -714,104 +885,204 @@ core_logic: { } } } - }, - 'Perl: GPU': { + }] +} + +def test_unix_cpp_gpu() { + return ['Cpp: GPU': { node(NODE_LINUX_GPU) { - ws('workspace/ut-perl-gpu') { + ws('workspace/ut-cpp-gpu') { timeout(time: max_time, unit: 'MINUTES') { - utils.unpack_and_init('gpu', mx_lib, true) - utils.docker_run('ubuntu_gpu', 'unittest_ubuntu_cpugpu_perl', true) + utils.unpack_and_init('cmake_gpu', mx_cmake_lib, true) + utils.docker_run('ubuntu_gpu', 'unittest_cpp', true) utils.publish_test_coverage() } } } - }, - 'Cpp: GPU': { + }] +} + +def test_unix_cpp_mkldnn_gpu() { + return ['Cpp: MKLDNN+GPU': { node(NODE_LINUX_GPU) { - ws('workspace/ut-cpp-gpu') { + ws('workspace/ut-cpp-mkldnn-gpu') { timeout(time: max_time, unit: 'MINUTES') { - utils.unpack_and_init('cmake_gpu', mx_cmake_lib, true) - utils.docker_run('ubuntu_gpu', 'unittest_ubuntu_gpu_cpp', true) + utils.unpack_and_init('cmake_mkldnn_gpu', mx_cmake_mkldnn_lib, true) + utils.docker_run('ubuntu_gpu', 'unittest_cpp', true) + utils.publish_test_coverage() + } + } + } + }] +} + +def test_unix_cpp_cpu() { + return ['Cpp: CPU': { + node(NODE_LINUX_CPU) { + ws('workspace/ut-cpp-cpu') { + timeout(time: max_time, unit: 'MINUTES') { + utils.unpack_and_init('cpu_debug', mx_cmake_lib_debug, true) + utils.docker_run('ubuntu_cpu', 'unittest_cpp', false) utils.publish_test_coverage() } } } - }, - 'Cpp: MKLDNN+GPU': { + }] +} + +def test_unix_r_gpu() { + return ['Perl: GPU': { node(NODE_LINUX_GPU) { - ws('workspace/ut-cpp-mkldnn-gpu') { + ws('workspace/ut-perl-gpu') { timeout(time: max_time, unit: 'MINUTES') { - utils.unpack_and_init('cmake_mkldnn_gpu', mx_cmake_mkldnn_lib, true) - utils.docker_run('ubuntu_gpu', 'unittest_ubuntu_gpu_cpp', true) + utils.unpack_and_init('gpu', mx_lib, true) + utils.docker_run('ubuntu_gpu', 'unittest_ubuntu_cpugpu_perl', true) utils.publish_test_coverage() } } } - }, - 'R: CPU': { + }] +} + +def test_unix_julia07_cpu() { + return ['Julia 0.7: CPU': { + node(NODE_LINUX_CPU) { + ws('workspace/ut-julia07-cpu') { + timeout(time: max_time, unit: 'MINUTES') { + utils.unpack_and_init('cpu', mx_lib) + utils.docker_run('ubuntu_cpu', 'unittest_ubuntu_cpu_julia07', false) + } + } + } + }] +} + +def test_unix_julia10_cpu() { + return ['Julia 1.0: CPU': { + node(NODE_LINUX_CPU) { + ws('workspace/ut-julia10-cpu') { + timeout(time: max_time, unit: 'MINUTES') { + utils.unpack_and_init('cpu', mx_lib) + utils.docker_run('ubuntu_cpu', 'unittest_ubuntu_cpu_julia10', false) + } + } + } + }] +} + +def test_unix_onnx_cpu() { + return ['Onnx CPU': { node(NODE_LINUX_CPU) { - ws('workspace/ut-r-cpu') { + ws('workspace/it-onnx-cpu') { timeout(time: max_time, unit: 'MINUTES') { utils.unpack_and_init('cpu', mx_lib, true) - utils.docker_run('ubuntu_cpu', 
'unittest_ubuntu_cpu_R', false) + utils.docker_run('ubuntu_cpu', 'integrationtest_ubuntu_cpu_onnx', false) + utils.publish_test_coverage() + } + } + } + }] +} + +def test_unix_distributed_kvstore_cpu() { + return ['dist-kvstore tests CPU': { + node(NODE_LINUX_CPU) { + ws('workspace/it-dist-kvstore') { + timeout(time: max_time, unit: 'MINUTES') { + utils.unpack_and_init('cpu', mx_lib, true) + utils.docker_run('ubuntu_cpu', 'integrationtest_ubuntu_cpu_dist_kvstore', false) utils.publish_test_coverage() } } } - }, - 'R: GPU': { + }] +} + +def test_unix_distributed_kvstore_gpu() { + return ['dist-kvstore tests GPU': { node(NODE_LINUX_GPU) { - ws('workspace/ut-r-gpu') { + ws('workspace/it-dist-kvstore') { timeout(time: max_time, unit: 'MINUTES') { utils.unpack_and_init('gpu', mx_lib, true) - utils.docker_run('ubuntu_gpu', 'unittest_ubuntu_gpu_R', true) + utils.docker_run('ubuntu_gpu', 'integrationtest_ubuntu_gpu_dist_kvstore', true) utils.publish_test_coverage() } } } - }, - 'Julia 0.6: CPU': { + }] +} + +def test_centos7_python3_cpu() { + return ['Python3: CentOS 7 CPU': { node(NODE_LINUX_CPU) { - ws('workspace/ut-julia06-cpu') { + ws('workspace/build-centos7-cpu') { timeout(time: max_time, unit: 'MINUTES') { - utils.unpack_and_init('cpu', mx_lib) - utils.docker_run('ubuntu_cpu', 'unittest_ubuntu_cpu_julia06', false) + try { + utils.unpack_and_init('centos7_cpu', mx_lib, true) + utils.docker_run('centos7_cpu', 'unittest_centos7_cpu', false) + utils.publish_test_coverage() + } finally { + utils.collect_test_results_unix('nosetests_unittest.xml', 'nosetests_python3_centos7_cpu_unittest.xml') + utils.collect_test_results_unix('nosetests_train.xml', 'nosetests_python3_centos7_cpu_train.xml') + } } } } - }, + }] +} - 'Python 2: CPU Win':{ - node(NODE_WINDOWS_CPU) { - timeout(time: max_time, unit: 'MINUTES') { - ws('workspace/ut-python-cpu') { +def test_centos7_python3_gpu() { + return ['Python3: CentOS 7 GPU': { + node(NODE_LINUX_GPU) { + ws('workspace/build-centos7-gpu') { + timeout(time: max_time, unit: 'MINUTES') { try { - utils.init_git_win() - unstash 'windows_package_cpu' - powershell 'ci/windows/test_py2_cpu.ps1' + utils.unpack_and_init('centos7_gpu', mx_lib, true) + utils.docker_run('centos7_gpu', 'unittest_centos7_gpu', true) + utils.publish_test_coverage() } finally { - utils.collect_test_results_windows('nosetests_unittest.xml', 'nosetests_unittest_windows_python2_cpu.xml') + utils.collect_test_results_unix('nosetests_gpu.xml', 'nosetests_python3_centos7_gpu.xml') } } } } - }, - 'Python 3: CPU Win': { + }] +} + +def test_centos7_scala_cpu() { + return ['Scala: CentOS CPU': { + node(NODE_LINUX_CPU) { + ws('workspace/ut-scala-centos7-cpu') { + timeout(time: max_time, unit: 'MINUTES') { + utils.unpack_and_init('centos7_cpu', mx_lib, true) + utils.docker_run('centos7_cpu', 'unittest_centos7_cpu_scala', false) + utils.publish_test_coverage() + } + } + } + }] +} + +def test_windows_python2_cpu() { + return ['Python 2: CPU Win':{ node(NODE_WINDOWS_CPU) { timeout(time: max_time, unit: 'MINUTES') { ws('workspace/ut-python-cpu') { try { utils.init_git_win() unstash 'windows_package_cpu' - powershell 'ci/windows/test_py3_cpu.ps1' + powershell 'ci/windows/test_py2_cpu.ps1' } finally { - utils.collect_test_results_windows('nosetests_unittest.xml', 'nosetests_unittest_windows_python3_cpu.xml') + utils.collect_test_results_windows('nosetests_unittest.xml', 'nosetests_unittest_windows_python2_cpu.xml') } } } } - }, - 'Python 2: GPU Win':{ + }] +} + +def test_windows_python2_gpu() { + return ['Python 2: 
GPU Win':{ node(NODE_WINDOWS_GPU) { timeout(time: max_time, unit: 'MINUTES') { ws('workspace/ut-python-gpu') { @@ -826,8 +1097,11 @@ core_logic: { } } } - }, - 'Python 3: GPU Win':{ + }] +} + +def test_windows_python3_gpu() { + return ['Python 3: GPU Win':{ node(NODE_WINDOWS_GPU) { timeout(time: max_time, unit: 'MINUTES') { ws('workspace/ut-python-gpu') { @@ -842,8 +1116,11 @@ core_logic: { } } } - }, - 'Python 3: MKLDNN-GPU Win':{ + }] +} + +def test_windows_python3_gpu_mkldnn() { + return ['Python 3: MKLDNN-GPU Win':{ node(NODE_WINDOWS_GPU) { timeout(time: max_time, unit: 'MINUTES') { ws('workspace/ut-python-gpu') { @@ -858,97 +1135,104 @@ core_logic: { } } } - }, - 'Onnx CPU': { - node(NODE_LINUX_CPU) { - ws('workspace/it-onnx-cpu') { - timeout(time: max_time, unit: 'MINUTES') { - utils.unpack_and_init('cpu', mx_lib, true) - utils.docker_run('ubuntu_cpu', 'integrationtest_ubuntu_cpu_onnx', false) - utils.publish_test_coverage() + }] +} + +def test_windows_python3_cpu() { + return ['Python 3: CPU Win': { + node(NODE_WINDOWS_CPU) { + timeout(time: max_time, unit: 'MINUTES') { + ws('workspace/ut-python-cpu') { + try { + utils.init_git_win() + unstash 'windows_package_cpu' + powershell 'ci/windows/test_py3_cpu.ps1' + } finally { + utils.collect_test_results_windows('nosetests_unittest.xml', 'nosetests_unittest_windows_python3_cpu.xml') + } } } } - }, - 'Python GPU': { - node(NODE_LINUX_GPU) { - ws('workspace/it-python-gpu') { + }] +} + +def test_qemu_armv7_cpu() { + return ['ARMv7 QEMU': { + node(NODE_LINUX_CPU) { + ws('workspace/ut-armv7-qemu') { timeout(time: max_time, unit: 'MINUTES') { - utils.unpack_and_init('gpu', mx_lib, true) - utils.docker_run('ubuntu_gpu', 'integrationtest_ubuntu_gpu_python', true) - utils.publish_test_coverage() + utils.unpack_and_init('armv7', mx_pip) + sh "ci/build.py --docker-registry ${env.DOCKER_CACHE_REGISTRY} -p test.arm_qemu ./runtime_functions.py run_ut_py3_qemu" } } } - }, - 'cpp-package GPU': { - node(NODE_LINUX_GPU) { - ws('workspace/it-cpp-package') { + }] +} + +def docs_website() { + return ['Docs': { + node(NODE_LINUX_CPU) { + ws('workspace/docs') { timeout(time: max_time, unit: 'MINUTES') { - utils.unpack_and_init('gpu', mx_lib_cpp_examples, true) - utils.docker_run('ubuntu_gpu', 'integrationtest_ubuntu_gpu_cpp_package', true) - utils.publish_test_coverage() + utils.init_git() + utils.docker_run('ubuntu_cpu', 'deploy_docs', false) + + master_url = utils.get_jenkins_master_url() + if ( master_url == 'jenkins.mxnet-ci.amazon-ml.com') { + sh "ci/other/ci_deploy_doc.sh ${env.BRANCH_NAME} ${env.BUILD_NUMBER}" + } else { + print "Skipping staging documentation publishing since we are not running in prod. 
Host: {$master_url}" + } } } } - }, - // Disabled due to: https://github.com/apache/incubator-mxnet/issues/11407 - // 'Caffe GPU': { - // node(NODE_LINUX_GPU) { - // ws('workspace/it-caffe') { - // timeout(time: max_time, unit: 'MINUTES') { - // utils.init_git() - // utils.unpack_lib('gpu', mx_lib) - // utils.docker_run('ubuntu_gpu', 'integrationtest_ubuntu_gpu_caffe', true) - // utils.publish_test_coverage() - // } - // } - // } - // }, - 'dist-kvstore tests GPU': { - node(NODE_LINUX_GPU) { - ws('workspace/it-dist-kvstore') { + }] +} + +def docs_julia() { + return ['Julia docs': { + node(NODE_LINUX_CPU) { + ws('workspace/julia-docs') { timeout(time: max_time, unit: 'MINUTES') { - utils.unpack_and_init('gpu', mx_lib, true) - utils.docker_run('ubuntu_gpu', 'integrationtest_ubuntu_gpu_dist_kvstore', true) - utils.publish_test_coverage() + utils.unpack_and_init('cpu', mx_lib) + utils.docker_run('ubuntu_cpu', 'deploy_jl_docs', false) } } } - }, - 'dist-kvstore tests CPU': { + }] +} + +def misc_asan_cpu() { + return ['CPU ASAN': { node(NODE_LINUX_CPU) { - ws('workspace/it-dist-kvstore') { - timeout(time: max_time, unit: 'MINUTES') { - utils.unpack_and_init('cpu', mx_lib, true) - utils.docker_run('ubuntu_cpu', 'integrationtest_ubuntu_cpu_dist_kvstore', false) - utils.publish_test_coverage() - } + ws('workspace/ut-python3-cpu-asan') { + utils.unpack_and_init('cpu_asan', mx_lib_cpp_examples_cpu) + utils.docker_run('ubuntu_cpu', 'integrationtest_ubuntu_cpu_asan', false) } } - }, - 'Scala: GPU': { - node(NODE_LINUX_GPU) { - ws('workspace/ut-scala-gpu') { - timeout(time: max_time, unit: 'MINUTES') { - utils.unpack_and_init('gpu', mx_dist_lib, true) - utils.docker_run('ubuntu_gpu', 'integrationtest_ubuntu_gpu_scala', true) - utils.publish_test_coverage() - } + }] +} + +def sanity_lint() { + return ['Lint': { + node(NODE_LINUX_CPU) { + ws('workspace/sanity-lint') { + utils.init_git() + utils.docker_run('ubuntu_cpu', 'sanity_check', false) } } - } - } - - stage('Deploy') { - deploy_docs() - } + }] } -, -failure_handler: { - // Only send email if master or release branches failed - if (currentBuild.result == "FAILURE" && (env.BRANCH_NAME == "master" || env.BRANCH_NAME.startsWith("v"))) { - emailext body: 'Build for MXNet branch ${BRANCH_NAME} has broken. Please view the build at ${BUILD_URL}', replyTo: '${EMAIL}', subject: '[BUILD FAILED] Branch ${BRANCH_NAME} build ${BUILD_NUMBER}', to: '${EMAIL}' - } + +def sanity_rat_license() { + return ['RAT License': { + node(NODE_LINUX_CPU) { + ws('workspace/sanity-rat') { + utils.init_git() + utils.docker_run('ubuntu_rat', 'nightly_test_rat_check', false) + } + } + }] } -) + +return this diff --git a/ci/jenkins/Jenkinsfile_centos_cpu b/ci/jenkins/Jenkinsfile_centos_cpu new file mode 100644 index 000000000..a47ab3de7 --- /dev/null +++ b/ci/jenkins/Jenkinsfile_centos_cpu @@ -0,0 +1,53 @@ +// -*- mode: groovy -*- + +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. 
You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. +// +// Jenkins pipeline +// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/ + +// timeout in minutes +max_time = 180 + +node('utility') { + // Loading the utilities requires a node context unfortunately + checkout scm + utils = load('ci/Jenkinsfile_utils.groovy') + custom_steps = load('ci/jenkins/Jenkins_steps.groovy') +} +utils.assign_node_labels(utility: 'utility', linux_cpu: 'mxnetlinux-cpu') + +utils.main_wrapper( +core_logic: { + utils.parallel_stage('Build', [ + custom_steps.compile_centos7_cpu(), + custom_steps.compile_centos7_cpu_mkldnn() + ]) + + utils.parallel_stage('Tests', [ + custom_steps.test_centos7_python3_cpu(), + custom_steps.test_centos7_scala_cpu() + ]) +} +, +failure_handler: { + // Only send email if master or release branches failed + if (currentBuild.result == "FAILURE" && (env.BRANCH_NAME == "master" || env.BRANCH_NAME.startsWith("v"))) { + emailext body: 'Build for MXNet branch ${BRANCH_NAME} has broken. Please view the build at ${BUILD_URL}', replyTo: '${EMAIL}', subject: '[BUILD FAILED] Branch ${BRANCH_NAME} build ${BUILD_NUMBER}', to: '${EMAIL}' + } +} +) diff --git a/ci/jenkins/Jenkinsfile_centos_gpu b/ci/jenkins/Jenkinsfile_centos_gpu new file mode 100644 index 000000000..cad77a9a7 --- /dev/null +++ b/ci/jenkins/Jenkinsfile_centos_gpu @@ -0,0 +1,51 @@ +// -*- mode: groovy -*- + +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. +// +// Jenkins pipeline +// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/ + +// timeout in minutes +max_time = 180 + +node('utility') { + // Loading the utilities requires a node context unfortunately + checkout scm + utils = load('ci/Jenkinsfile_utils.groovy') + custom_steps = load('ci/jenkins/Jenkins_steps.groovy') +} +utils.assign_node_labels(utility: 'utility', linux_cpu: 'mxnetlinux-cpu', linux_gpu: 'mxnetlinux-gpu', linux_gpu_p3: 'mxnetlinux-gpu-p3') + +utils.main_wrapper( +core_logic: { + utils.parallel_stage('Build', [ + custom_steps.compile_centos7_gpu() + ]) + + utils.parallel_stage('Tests', [ + custom_steps.test_centos7_python3_gpu() + ]) +} +, +failure_handler: { + // Only send email if master or release branches failed + if (currentBuild.result == "FAILURE" && (env.BRANCH_NAME == "master" || env.BRANCH_NAME.startsWith("v"))) { + emailext body: 'Build for MXNet branch ${BRANCH_NAME} has broken. 
Please view the build at ${BUILD_URL}', replyTo: '${EMAIL}', subject: '[BUILD FAILED] Branch ${BRANCH_NAME} build ${BUILD_NUMBER}', to: '${EMAIL}' + } +} +) diff --git a/ci/jenkins/Jenkinsfile_clang b/ci/jenkins/Jenkinsfile_clang new file mode 100644 index 000000000..029c72081 --- /dev/null +++ b/ci/jenkins/Jenkinsfile_clang @@ -0,0 +1,51 @@ +// -*- mode: groovy -*- + +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. +// +// Jenkins pipeline +// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/ + +// timeout in minutes +max_time = 180 + +node('utility') { + // Loading the utilities requires a node context unfortunately + checkout scm + utils = load('ci/Jenkinsfile_utils.groovy') + custom_steps = load('ci/jenkins/Jenkins_steps.groovy') +} +utils.assign_node_labels(utility: 'utility', linux_cpu: 'mxnetlinux-cpu', linux_gpu: 'mxnetlinux-gpu', linux_gpu_p3: 'mxnetlinux-gpu-p3') + +utils.main_wrapper( +core_logic: { + utils.parallel_stage('Build', [ + custom_steps.compile_unix_clang_3_9_cpu(), + custom_steps.compile_unix_clang_6_cpu(), + custom_steps.compile_unix_clang_tidy_cpu(), + custom_steps.compile_unix_clang_3_9_mkldnn_cpu(), + custom_steps.compile_unix_clang_6_mkldnn_cpu() + ]) +} +, +failure_handler: { + // Only send email if master or release branches failed + if (currentBuild.result == "FAILURE" && (env.BRANCH_NAME == "master" || env.BRANCH_NAME.startsWith("v"))) { + emailext body: 'Build for MXNet branch ${BRANCH_NAME} has broken. Please view the build at ${BUILD_URL}', replyTo: '${EMAIL}', subject: '[BUILD FAILED] Branch ${BRANCH_NAME} build ${BUILD_NUMBER}', to: '${EMAIL}' + } +} +) diff --git a/ci/jenkins/Jenkinsfile_edge b/ci/jenkins/Jenkinsfile_edge new file mode 100644 index 000000000..9d8e01399 --- /dev/null +++ b/ci/jenkins/Jenkinsfile_edge @@ -0,0 +1,56 @@ +// -*- mode: groovy -*- + +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. 
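(The new per-pipeline Jenkinsfiles all share one shape: a lightweight 'utility' node checks out the repo, loads ci/Jenkinsfile_utils.groovy and ci/jenkins/Jenkins_steps.groovy, and then passes lists of named step maps to utils.parallel_stage inside utils.main_wrapper. Purely as a loose analogy, and in Python rather than the Groovy the CI actually uses, the composition idea looks roughly like this; every name below is hypothetical and not part of the patch.)

# Hypothetical analogy of the Jenkins_steps.groovy pattern: step factories
# return a mapping of display name -> callable, and a stage merges the maps
# and runs every entry concurrently.
from concurrent.futures import ThreadPoolExecutor

def compile_centos7_cpu():
    return {'CPU: CentOS 7': lambda: print('build_centos7_cpu')}

def test_centos7_python3_cpu():
    return {'Python3: CentOS 7 CPU': lambda: print('unittest_centos7_cpu')}

def parallel_stage(stage_name, step_maps):
    steps = {}
    for step_map in step_maps:       # merge the per-step maps, as the Groovy code does
        steps.update(step_map)
    print('Stage:', stage_name)
    with ThreadPoolExecutor(max_workers=len(steps)) as pool:
        for fn in steps.values():
            pool.submit(fn)          # each named step runs in parallel

parallel_stage('Build', [compile_centos7_cpu()])
parallel_stage('Tests', [test_centos7_python3_cpu()])

Keeping the step definitions in one shared file and only the composition in each Jenkinsfile means a given pipeline (CentOS, clang, edge, unix CPU/GPU, website, Windows) triggers exactly the subset of build and test steps it needs.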
+// +// Jenkins pipeline +// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/ + +// timeout in minutes +max_time = 180 + +node('utility') { + // Loading the utilities requires a node context unfortunately + checkout scm + utils = load('ci/Jenkinsfile_utils.groovy') + custom_steps = load('ci/jenkins/Jenkins_steps.groovy') +} +utils.assign_node_labels(utility: 'utility', linux_cpu: 'mxnetlinux-cpu', linux_gpu: 'mxnetlinux-gpu', linux_gpu_p3: 'mxnetlinux-gpu-p3') + +utils.main_wrapper( +core_logic: { + utils.parallel_stage('Build', [ + custom_steps.compile_armv8_jetson_gpu(), + custom_steps.compile_armv7_cpu(), + custom_steps.compile_armv6_cpu(), + custom_steps.compile_armv8_cpu(), + custom_steps.compile_armv8_android_cpu(), + custom_steps.compile_armv7_android_cpu() + ]) + + utils.parallel_stage('Tests', [ + custom_steps.test_qemu_armv7_cpu() + ]) +} +, +failure_handler: { + // Only send email if master or release branches failed + if (currentBuild.result == "FAILURE" && (env.BRANCH_NAME == "master" || env.BRANCH_NAME.startsWith("v"))) { + emailext body: 'Build for MXNet branch ${BRANCH_NAME} has broken. Please view the build at ${BUILD_URL}', replyTo: '${EMAIL}', subject: '[BUILD FAILED] Branch ${BRANCH_NAME} build ${BUILD_NUMBER}', to: '${EMAIL}' + } +} +) diff --git a/ci/jenkins/Jenkinsfile_miscellaneous b/ci/jenkins/Jenkinsfile_miscellaneous new file mode 100644 index 000000000..dbf2a9e41 --- /dev/null +++ b/ci/jenkins/Jenkinsfile_miscellaneous @@ -0,0 +1,54 @@ +// -*- mode: groovy -*- + +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. +// +// Jenkins pipeline +// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/ + +// timeout in minutes +max_time = 180 + + +node('utility') { + // Loading the utilities requires a node context unfortunately + checkout scm + utils = load('ci/Jenkinsfile_utils.groovy') + custom_steps = load('ci/jenkins/Jenkins_steps.groovy') +} +utils.assign_node_labels(utility: 'utility', linux_cpu: 'mxnetlinux-cpu', linux_gpu: 'mxnetlinux-gpu', linux_gpu_p3: 'mxnetlinux-gpu-p3', windows_cpu: 'mxnetwindows-cpu', windows_gpu: 'mxnetwindows-gpu') + +utils.main_wrapper( +core_logic: { + utils.parallel_stage('Build', [ + custom_steps.compile_unix_asan_cpu(), + custom_steps.compile_unix_amalgamation_min(), + custom_steps.compile_unix_amalgamation() + ]) + + utils.parallel_stage('Tests', [ + custom_steps.misc_asan_cpu() + ]) +} +, +failure_handler: { + // Only send email if master or release branches failed + if (currentBuild.result == "FAILURE" && (env.BRANCH_NAME == "master" || env.BRANCH_NAME.startsWith("v"))) { + emailext body: 'Build for MXNet branch ${BRANCH_NAME} has broken. 
Please view the build at ${BUILD_URL}', replyTo: '${EMAIL}', subject: '[BUILD FAILED] Branch ${BRANCH_NAME} build ${BUILD_NUMBER}', to: '${EMAIL}' + } +} +) diff --git a/ci/jenkins/Jenkinsfile_sanity b/ci/jenkins/Jenkinsfile_sanity new file mode 100644 index 000000000..ed4d16ec4 --- /dev/null +++ b/ci/jenkins/Jenkinsfile_sanity @@ -0,0 +1,48 @@ +// -*- mode: groovy -*- + +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. +// +// Jenkins pipeline +// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/ + +// timeout in minutes +max_time = 180 + +node('utility') { + // Loading the utilities requires a node context unfortunately + checkout scm + utils = load('ci/Jenkinsfile_utils.groovy') + custom_steps = load('ci/jenkins/Jenkins_steps.groovy') +} +utils.assign_node_labels(utility: 'utility', linux_cpu: 'mxnetlinux-cpu', linux_gpu: 'mxnetlinux-gpu', linux_gpu_p3: 'mxnetlinux-gpu-p3', windows_cpu: 'mxnetwindows-cpu', windows_gpu: 'mxnetwindows-gpu') + +utils.main_wrapper( +core_logic: { + utils.parallel_stage('Sanity Check', [ + custom_steps.sanity_lint(), + custom_steps.sanity_rat_license() + ]) +} +, +failure_handler: { + // Only send email if master or release branches failed + if (currentBuild.result == "FAILURE" && (env.BRANCH_NAME == "master" || env.BRANCH_NAME.startsWith("v"))) { + emailext body: 'Build for MXNet branch ${BRANCH_NAME} has broken. Please view the build at ${BUILD_URL}', replyTo: '${EMAIL}', subject: '[BUILD FAILED] Branch ${BRANCH_NAME} build ${BUILD_NUMBER}', to: '${EMAIL}' + } +} +) diff --git a/ci/jenkins/Jenkinsfile_unix_cpu b/ci/jenkins/Jenkinsfile_unix_cpu new file mode 100644 index 000000000..ea3c06175 --- /dev/null +++ b/ci/jenkins/Jenkinsfile_unix_cpu @@ -0,0 +1,74 @@ +// -*- mode: groovy -*- + +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. 
+// +// Jenkins pipeline +// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/ + +// timeout in minutes +max_time = 180 + +node('utility') { + // Loading the utilities requires a node context unfortunately + checkout scm + utils = load('ci/Jenkinsfile_utils.groovy') + custom_steps = load('ci/jenkins/Jenkins_steps.groovy') +} +utils.assign_node_labels(utility: 'utility', linux_cpu: 'mxnetlinux-cpu') + +utils.main_wrapper( +core_logic: { + utils.parallel_stage('Build', [ + custom_steps.compile_unix_cpu_openblas(), + custom_steps.compile_unix_openblas_debug_cpu(), + custom_steps.compile_unix_mkl_cpu(), + custom_steps.compile_unix_mkldnn_cpu(), + custom_steps.compile_unix_mkldnn_mkl_cpu() + ]) + + utils.parallel_stage('Tests', [ + custom_steps.test_unix_python2_cpu(), + custom_steps.test_unix_python3_cpu(), + custom_steps.test_unix_python3_debug_cpu(), + custom_steps.test_unix_python3_mkl_cpu(), + custom_steps.test_unix_python2_mkldnn_cpu(), + custom_steps.test_unix_python3_mkldnn_cpu(), + custom_steps.test_unix_python3_mkldnn_mkl_cpu(), + custom_steps.test_unix_scala_cpu(), + custom_steps.test_unix_scala_mkldnn_cpu(), + custom_steps.test_unix_clojure_cpu(), + custom_steps.test_unix_r_cpu(), + custom_steps.test_unix_julia07_cpu(), + custom_steps.test_unix_julia10_cpu(), + custom_steps.test_unix_onnx_cpu(), + custom_steps.test_unix_cpp_cpu(), + /* Disabled due to master build failure: + * http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/master/1221/pipeline/ + * https://github.com/apache/incubator-mxnet/issues/11801 + custom_steps.test_unix_distributed_kvstore_cpu() + */ + ]) +} +, +failure_handler: { + // Only send email if master or release branches failed + if (currentBuild.result == "FAILURE" && (env.BRANCH_NAME == "master" || env.BRANCH_NAME.startsWith("v"))) { + emailext body: 'Build for MXNet branch ${BRANCH_NAME} has broken. Please view the build at ${BUILD_URL}', replyTo: '${EMAIL}', subject: '[BUILD FAILED] Branch ${BRANCH_NAME} build ${BUILD_NUMBER}', to: '${EMAIL}' + } +} +) diff --git a/ci/jenkins/Jenkinsfile_unix_gpu b/ci/jenkins/Jenkinsfile_unix_gpu new file mode 100644 index 000000000..bd884904d --- /dev/null +++ b/ci/jenkins/Jenkinsfile_unix_gpu @@ -0,0 +1,73 @@ +// -*- mode: groovy -*- + +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. 
+// +// Jenkins pipeline +// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/ + +// timeout in minutes +max_time = 180 + +node('utility') { + // Loading the utilities requires a node context unfortunately + checkout scm + utils = load('ci/Jenkinsfile_utils.groovy') + custom_steps = load('ci/jenkins/Jenkins_steps.groovy') +} +utils.assign_node_labels(utility: 'utility', linux_cpu: 'mxnetlinux-cpu', linux_gpu: 'mxnetlinux-gpu', linux_gpu_p3: 'mxnetlinux-gpu-p3') + +utils.main_wrapper( +core_logic: { + utils.parallel_stage('Build', [ + custom_steps.compile_unix_mkldnn_gpu(), + custom_steps.compile_unix_mkldnn_nocudnn_gpu(), + custom_steps.compile_unix_full_gpu(), + custom_steps.compile_unix_cmake_mkldnn_gpu(), + custom_steps.compile_unix_cmake_gpu(), + custom_steps.compile_unix_tensorrt_gpu(), + ]) + + utils.parallel_stage('Tests', [ + custom_steps.test_unix_python2_gpu(), + custom_steps.test_unix_python3_gpu(), + custom_steps.test_unix_python2_quantize_gpu(), + custom_steps.test_unix_python3_quantize_gpu(), + custom_steps.test_unix_python2_mkldnn_gpu(), + custom_steps.test_unix_python3_mkldnn_gpu(), + custom_steps.test_unix_python3_mkldnn_nocudnn_gpu(), + custom_steps.test_unix_python3_tensorrt_gpu(), + custom_steps.test_unix_r_gpu(), + custom_steps.test_unix_cpp_gpu(), + custom_steps.test_unix_cpp_mkldnn_gpu(), + custom_steps.test_unix_python3_integration_gpu(), + custom_steps.test_unix_cpp_package_gpu(), + custom_steps.test_unix_scala_gpu(), + custom_steps.test_unix_distributed_kvstore_gpu() + + // Disabled due to: https://github.com/apache/incubator-mxnet/issues/11407 + //custom_steps.test_unix_caffe_gpu() + ]) +} +, +failure_handler: { + // Only send email if master or release branches failed + if (currentBuild.result == "FAILURE" && (env.BRANCH_NAME == "master" || env.BRANCH_NAME.startsWith("v"))) { + emailext body: 'Build for MXNet branch ${BRANCH_NAME} has broken. Please view the build at ${BUILD_URL}', replyTo: '${EMAIL}', subject: '[BUILD FAILED] Branch ${BRANCH_NAME} build ${BUILD_NUMBER}', to: '${EMAIL}' + } +} +) diff --git a/ci/jenkins/Jenkinsfile_website b/ci/jenkins/Jenkinsfile_website new file mode 100644 index 000000000..acdd2be4d --- /dev/null +++ b/ci/jenkins/Jenkinsfile_website @@ -0,0 +1,52 @@ +// -*- mode: groovy -*- + +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. 
+// +// Jenkins pipeline +// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/ + +// timeout in minutes +max_time = 180 + +node('utility') { + // Loading the utilities requires a node context unfortunately + checkout scm + utils = load('ci/Jenkinsfile_utils.groovy') + custom_steps = load('ci/jenkins/Jenkins_steps.groovy') +} +utils.assign_node_labels(utility: 'utility', linux_cpu: 'mxnetlinux-cpu') + +utils.main_wrapper( +core_logic: { + utils.parallel_stage('Build', [ + custom_steps.compile_unix_cpu_openblas() + ]) + + utils.parallel_stage('Deploy', [ + custom_steps.docs_website(), + custom_steps.docs_julia() + ]) +} +, +failure_handler: { + // Only send email if master or release branches failed + if (currentBuild.result == "FAILURE" && (env.BRANCH_NAME == "master" || env.BRANCH_NAME.startsWith("v"))) { + emailext body: 'Build for MXNet branch ${BRANCH_NAME} has broken. Please view the build at ${BUILD_URL}', replyTo: '${EMAIL}', subject: '[BUILD FAILED] Branch ${BRANCH_NAME} build ${BUILD_NUMBER}', to: '${EMAIL}' + } +} +) diff --git a/ci/jenkins/Jenkinsfile_windows_cpu b/ci/jenkins/Jenkinsfile_windows_cpu new file mode 100644 index 000000000..a8746db73 --- /dev/null +++ b/ci/jenkins/Jenkinsfile_windows_cpu @@ -0,0 +1,52 @@ +// -*- mode: groovy -*- + +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. +// +// Jenkins pipeline +// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/ + +// timeout in minutes +max_time = 180 + +node('utility') { + // Loading the utilities requires a node context unfortunately + checkout scm + utils = load('ci/Jenkinsfile_utils.groovy') + custom_steps = load('ci/jenkins/Jenkins_steps.groovy') +} +utils.assign_node_labels(utility: 'utility', windows_cpu: 'mxnetwindows-cpu') + +utils.main_wrapper( +core_logic: { + utils.parallel_stage('Build', [ + custom_steps.compile_windows_cpu() + ]) + + utils.parallel_stage('Tests', [ + custom_steps.test_windows_python2_cpu(), + custom_steps.test_windows_python3_cpu() + ]) +} +, +failure_handler: { + // Only send email if master or release branches failed + if (currentBuild.result == "FAILURE" && (env.BRANCH_NAME == "master" || env.BRANCH_NAME.startsWith("v"))) { + emailext body: 'Build for MXNet branch ${BRANCH_NAME} has broken. Please view the build at ${BUILD_URL}', replyTo: '${EMAIL}', subject: '[BUILD FAILED] Branch ${BRANCH_NAME} build ${BUILD_NUMBER}', to: '${EMAIL}' + } +} +) diff --git a/ci/jenkins/Jenkinsfile_windows_gpu b/ci/jenkins/Jenkinsfile_windows_gpu new file mode 100644 index 000000000..2319f2594 --- /dev/null +++ b/ci/jenkins/Jenkinsfile_windows_gpu @@ -0,0 +1,54 @@ +// -*- mode: groovy -*- + +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. 
See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. +// +// Jenkins pipeline +// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/ + +// timeout in minutes +max_time = 180 + +node('utility') { + // Loading the utilities requires a node context unfortunately + checkout scm + utils = load('ci/Jenkinsfile_utils.groovy') + custom_steps = load('ci/jenkins/Jenkins_steps.groovy') +} +utils.assign_node_labels(utility: 'utility', windows_cpu: 'mxnetwindows-cpu', windows_gpu: 'mxnetwindows-gpu') + +utils.main_wrapper( +core_logic: { + utils.parallel_stage('Build', [ + custom_steps.compile_windows_gpu(), + custom_steps.compile_windows_gpu_mkldnn() + ]) + + utils.parallel_stage('Tests', [ + custom_steps.test_windows_python2_gpu(), + custom_steps.test_windows_python3_gpu(), + custom_steps.test_windows_python3_gpu_mkldnn() + ]) +} +, +failure_handler: { + // Only send email if master or release branches failed + if (currentBuild.result == "FAILURE" && (env.BRANCH_NAME == "master" || env.BRANCH_NAME.startsWith("v"))) { + emailext body: 'Build for MXNet branch ${BRANCH_NAME} has broken. Please view the build at ${BUILD_URL}', replyTo: '${EMAIL}', subject: '[BUILD FAILED] Branch ${BRANCH_NAME} build ${BUILD_NUMBER}', to: '${EMAIL}' + } +} +) diff --git a/ci/publish/Jenkinsfile b/ci/publish/Jenkinsfile new file mode 100644 index 000000000..9a360c6b5 --- /dev/null +++ b/ci/publish/Jenkinsfile @@ -0,0 +1,107 @@ +// -*- mode: groovy -*- + +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. 
+ +// Jenkins pipeline +// See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/ + +//mxnet libraries +mx_scala_pub = 'lib/libmxnet.so, lib/libmxnet.a, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, 3rdparty/ps-lite/build/libps.a, deps/lib/libprotobuf-lite.a, deps/lib/libzmq.a, config.mk, scala-package/pom.xml, scala-package/**/pom.xml, scala-package/*/target/**, scala-package/local-snapshot/**' + +// timeout in minutes +max_time = 120 + +node('restricted-utility') { + // Loading the utilities requires a node context unfortunately + checkout scm + utils = load('ci/Jenkinsfile_utils.groovy') +} +utils.assign_node_labels(utility: 'restricted-utility', linux_cpu: 'restricted-mxnetlinux-cpu', linux_gpu: 'restricted-mxnetlinux-gpu', linux_gpu_p3: 'restricted-mxnetlinux-gpu-p3', windows_cpu: 'restricted-mxnetwindows-cpu', windows_gpu: 'restricted-mxnetwindows-gpu') + +// CPU and GPU. OSX nodes are not currently supported by Jenkins +def nodeMap = ['cpu': NODE_LINUX_CPU, 'gpu': NODE_LINUX_GPU] +def scalaOSMap = ['cpu': 'linux-x86_64-cpu', 'gpu': 'linux-x86_64-gpu'] + +def wrapStep(nodeToRun, workspaceName, step) { + return { + node(nodeToRun) { + ws("workspace/${workspaceName}") { + timeout(time: max_time, unit: 'MINUTES') { + step() + } + } + } + } +} + +def toBuild = [:] +def labels = ['cpu'] // , 'gpu'] +for (x in labels) { + def label = x // Required due to language + toBuild["Scala Build ${label}"] = wrapStep(nodeMap[label], "build-scala-${label}") { + withEnv(["MAVEN_PUBLISH_OS_TYPE=${scalaOSMap[label]}"]) { + utils.init_git() + utils.docker_run("ubuntu_${label}", 'publish_scala_build', label == 'gpu', '500m', 'MAVEN_PUBLISH_OS_TYPE') + utils.pack_lib("scala_${label}", mx_scala_pub, false) + } + } +} + +def toTest = [:] +def systems = ['ubuntu1604', 'ubuntu1804', 'centos7'] +for (x in labels) { + def label = x // Required due to language + for (y in systems) { + def system = y // Required due to language + toTest["Scala Test ${system} ${label}"] = wrapStep(nodeMap[label], "test-scala-${system}-${label}") { + utils.unpack_and_init("scala_${label}", mx_scala_pub, false) + utils.docker_run("publish.test.${system}_${label}", 'publish_scala_test', label == 'gpu') + } + } +} + +def toDeploy = [:] +for (x in labels) { + def label = x // Required due to language + toDeploy["Scala Deploy ${label}"] = wrapStep(nodeMap[label], "deploy-scala-${label}") { + withEnv(["MAVEN_PUBLISH_OS_TYPE=${scalaOSMap[label]}"]) { + utils.unpack_and_init("scala_${label}", mx_scala_pub, false) + utils.docker_run("ubuntu_${label}", 'publish_scala_deploy', label == 'gpu', '500m', 'MAVEN_PUBLISH_OS_TYPE MAVEN_PUBLISH_SECRET_ENDPOINT_URL MAVEN_PUBLISH_SECRET_NAME_CREDENTIALS MAVEN_PUBLISH_SECRET_NAME_GPG DOCKERHUB_SECRET_ENDPOINT_REGION') + } + } +} + +utils.main_wrapper( +core_logic: { + stage('Build Packages') { + parallel toBuild + } + stage('Test Packages') { + parallel toTest + } + stage('Deploy Packages') { + parallel toDeploy + } +} +, +failure_handler: { + if (currentBuild.result == "FAILURE") { + // emailext body: 'Generating the nightly maven has failed. 
Please view the build at ${BUILD_URL}', replyTo: '${EMAIL}', subject: '[NIGHTLY MAVEN FAILED] Build ${BUILD_NUMBER}', to: '${EMAIL}' + } +} +) diff --git a/ci/publish/scala/build.sh b/ci/publish/scala/build.sh new file mode 100755 index 000000000..17f969afe --- /dev/null +++ b/ci/publish/scala/build.sh @@ -0,0 +1,29 @@ +#!/usr/bin/env bash +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +set -ex + +# Setup Environment Variables +# MAVEN_PUBLISH_OS_TYPE: linux-x86_64-cpu|linux-x86_64-gpu|osx-x86_64-cpu +# export MAVEN_PUBLISH_OS_TYPE=linux-x86_64-cpu + +bash scala-package/dev/compile-mxnet-backend.sh $MAVEN_PUBLISH_OS_TYPE ./ + +# Compile tests for discovery later +cd scala-package +mvn -B deploy diff --git a/ci/publish/scala/buildkey.py b/ci/publish/scala/buildkey.py new file mode 100644 index 000000000..8a1b7bf63 --- /dev/null +++ b/ci/publish/scala/buildkey.py @@ -0,0 +1,165 @@ +#!/usr/bin/env python3 +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import os +import json +import logging +import subprocess + +HOME = os.environ['HOME'] +KEY_PATH = os.path.join(HOME, ".m2") + + +''' +This file would do the following items: + Import keys from AWS Credential services + Create settings.xml in .m2 with pass phrase + Create security-settings.xml in .m2 with master password + Import keys.asc the encrypted keys in gpg +''' + + +def getCredentials(): + import boto3 + import botocore + endpoint_url = os.environ['MAVEN_PUBLISH_SECRET_ENDPOINT_URL'] + secret_creds_name = os.environ['MAVEN_PUBLISH_SECRET_NAME_CREDENTIALS'] + secret_key_name = os.environ['MAVEN_PUBLISH_SECRET_NAME_GPG'] + region_name = os.environ['DOCKERHUB_SECRET_ENDPOINT_REGION'] + + session = boto3.Session() + client = session.client( + service_name='secretsmanager', + region_name=region_name, + endpoint_url=endpoint_url + ) + try: + get_secret_value_response = client.get_secret_value( + SecretId=secret_creds_name + ) + get_secret_key_response = client.get_secret_value( + SecretId=secret_key_name + ) + except botocore.exceptions.ClientError as client_error: + if client_error.response['Error']['Code'] == 'ResourceNotFoundException': + name = (secret_key_name if get_secret_value_response + else secret_creds_name) + logging.exception("The requested secret %s was not found", name) + elif client_error.response['Error']['Code'] == 'InvalidRequestException': + logging.exception("The request was invalid due to:") + elif client_error.response['Error']['Code'] == 'InvalidParameterException': + logging.exception("The request had invalid params:") + raise + else: + secret = get_secret_value_response['SecretString'] + secret_dict = json.loads(secret) + secret_key = get_secret_key_response['SecretString'] + return secret_dict, secret_key + + +def importASC(key, gpgPassphrase): + filename = os.path.join(KEY_PATH, "key.asc") + with open(filename, 'w') as f: + f.write(key) + subprocess.check_output(['gpg2', '--batch', '--yes', + '--passphrase-fd', '0', + "--import", "{}".format(filename)], + input=str.encode(gpgPassphrase)) + + +def encryptMasterPSW(password): + filename = os.path.join(KEY_PATH, "encryptMasterPassword.exp") + with open(filename, 'w') as f: + f.write(''' + spawn mvn --encrypt-master-password + expect -exact "Master password: " + send -- "{}\r" + expect eof + '''.format(password)) + result = subprocess.check_output(['expect', filename]) + return str(result).split('\r\n')[-1][2:-3] + + +def encryptPSW(password): + filename = os.path.join(KEY_PATH, "encryptPassword.exp") + with open(filename, 'w') as f: + f.write(''' + spawn mvn --encrypt-password + expect -exact "Password: " + send -- "{}\r" + expect eof + '''.format(password)) + result = subprocess.check_output(['expect', filename]) + return str(result).split('\r\n')[-1][2:-3] + + +def masterPSW(password): + with open(os.path.join(KEY_PATH, "settings-security.xml"), "w") as f: + f.write("\n {}\n" + .format(password)) + + +def serverPSW(username, password, gpgPassphrase): + with open(os.path.join(KEY_PATH, "settings.xml"), "w") as f: + settingsString = ''' + + + + + + apache.snapshots.https + {} + {} + + + + apache.releases.https + {} + {} + + + + + + gpg + + gpg2 + {} + true + + + + + gpg + + '''.format(username, password, username, password, gpgPassphrase) + f.write(settingsString) + + +if __name__ == "__main__": + if not os.path.exists(KEY_PATH): + os.makedirs(KEY_PATH) + credentials, gpgKey = getCredentials() + masterPass = encryptMasterPSW(credentials['masterpass']) + masterPSW(masterPass) + passwordEncrypted = 
encryptPSW(credentials['password']) + serverPSW(credentials['user'], passwordEncrypted, + credentials['gpgPassphrase']) + importASC(gpgKey, credentials['gpgPassphrase']) diff --git a/ci/publish/scala/deploy.sh b/ci/publish/scala/deploy.sh new file mode 100755 index 000000000..4eb33907e --- /dev/null +++ b/ci/publish/scala/deploy.sh @@ -0,0 +1,41 @@ +#!/usr/bin/env bash +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +set -ex + +# Setup Environment Variables +# MAVEN_PUBLISH_OS_TYPE: linux-x86_64-cpu|linux-x86_64-gpu|osx-x86_64-cpu +# export MAVEN_PUBLISH_OS_TYPE=linux-x86_64-cpu + +# Run python to configure keys +python3 ci/publish/scala/buildkey.py + +# Updating cache +mkdir -p ~/.gnupg +echo "default-cache-ttl 14400" > ~/.gnupg/gpg-agent.conf +echo "max-cache-ttl 14400" >> ~/.gnupg/gpg-agent.conf +echo "allow-loopback-pinentry" >> ~/.gnupg/gpg-agent.conf +echo "pinentry-mode loopback" >> ~/.gnupg/gpg-agent.conf +export GPG_TTY=$(tty) + +cd scala-package + +mvn -B deploy -Pnightly + +# Clear all password .xml files, exp files, and gpg key files +rm -rf ~/.m2/*.xml ~/.m2/key.asc ~/.m2/*.exp diff --git a/ci/publish/scala/fullDeploy.sh b/ci/publish/scala/fullDeploy.sh new file mode 100644 index 000000000..69d674a97 --- /dev/null +++ b/ci/publish/scala/fullDeploy.sh @@ -0,0 +1,23 @@ +#!/usr/bin/env bash +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +set -ex + +./ci/publish/scala/build.sh +./ci/publish/scala/test.sh +./ci/publish/scala/deploy.sh diff --git a/ci/publish/scala/test.sh b/ci/publish/scala/test.sh new file mode 100755 index 000000000..5cef35ca3 --- /dev/null +++ b/ci/publish/scala/test.sh @@ -0,0 +1,28 @@ +#!/usr/bin/env bash +# +# Licensed to the Apache Software Foundation (ASF) under one or more +# contributor license agreements. See the NOTICE file distributed with +# this work for additional information regarding copyright ownership. +# The ASF licenses this file to You under the Apache License, Version 2.0 +# (the "License"); you may not use this file except in compliance with +# the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +set -ex + +if [ -z "$JAVA_HOME" ]; then + source /etc/profile +fi + +# Test +cd scala-package/packageTest +# make testlocal CI=1 +make testsnapshot UNIT=1 CI=1 diff --git a/ci/requirements.txt b/ci/requirements.txt new file mode 100644 index 000000000..8f21ead27 --- /dev/null +++ b/ci/requirements.txt @@ -0,0 +1 @@ +docker==3.5.0 diff --git a/ci/test_docker_cache.py b/ci/test_docker_cache.py index 358d54985..0a3bc4640 100644 --- a/ci/test_docker_cache.py +++ b/ci/test_docker_cache.py @@ -135,7 +135,7 @@ def test_full_cache(self): """ platform = 'test_full_cache' docker_tag = build_util.get_docker_tag(platform=platform, registry=DOCKER_REGISTRY_PATH) - dockerfile_path = os.path.join(DOCKERFILE_DIR, 'Dockerfile.build.' + platform) + dockerfile_path = os.path.join(DOCKERFILE_DIR, 'Dockerfile.' + platform) try: with open(dockerfile_path, 'w') as dockerfile_handle: dockerfile_handle.write(dockerfile_content) @@ -196,7 +196,7 @@ def test_partial_cache(self): """ platform = 'test_partial_cache' docker_tag = build_util.get_docker_tag(platform=platform, registry=DOCKER_REGISTRY_PATH) - dockerfile_path = os.path.join(DOCKERFILE_DIR, 'Dockerfile.build.' + platform) + dockerfile_path = os.path.join(DOCKERFILE_DIR, 'Dockerfile.' + platform) try: # Write initial Dockerfile with open(dockerfile_path, 'w') as dockerfile_handle: diff --git a/ci/util.py b/ci/util.py index 4d68b57a3..9a8d52eb1 100644 --- a/ci/util.py +++ b/ci/util.py @@ -18,7 +18,6 @@ import os import contextlib import logging -import requests def get_mxnet_root() -> str: curpath = os.path.abspath(os.path.dirname(__file__)) @@ -89,6 +88,7 @@ def under_ci() -> bool: def ec2_instance_id_hostname() -> str: + import requests if under_ci(): result = [] try: diff --git a/ci/windows/test_py2_cpu.ps1 b/ci/windows/test_py2_cpu.ps1 index aa38b81e3..46e49baea 100644 --- a/ci/windows/test_py2_cpu.ps1 +++ b/ci/windows/test_py2_cpu.ps1 @@ -16,11 +16,14 @@ # under the License. 7z x -y windows_package.7z + $env:MXNET_LIBRARY_PATH=join-path $pwd.Path windows_package\lib\libmxnet.dll $env:PYTHONPATH=join-path $pwd.Path windows_package\python $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 +$env:MXNET_HOME=[io.path]::combine($PSScriptRoot, 'mxnet_home') + c:\Anaconda3\envs\py2\Scripts\pip install -r tests\requirements.txt -c:\Anaconda3\envs\py2\python.exe -m nose -v --with-xunit --xunit-file nosetests_unittest.xml tests\python\unittest +c:\Anaconda3\envs\py2\python.exe -m nose -v --with-timer --timer-ok 1 --timer-warning 15 --timer-filter warning,error --with-xunit --xunit-file nosetests_unittest.xml tests\python\unittest if (! $?) { Throw ("Error running unittest") } -c:\Anaconda3\envs\py2\python.exe -m nose -v --with-xunit --xunit-file nosetests_train.xml tests\python\train +c:\Anaconda3\envs\py2\python.exe -m nose -v --with-timer --timer-ok 1 --timer-warning 15 --timer-filter warning,error --with-xunit --xunit-file nosetests_train.xml tests\python\train if (! $?) 
{ Throw ("Error running train tests") } diff --git a/ci/windows/test_py2_gpu.ps1 b/ci/windows/test_py2_gpu.ps1 index 5f8de5ac4..d362c61da 100644 --- a/ci/windows/test_py2_gpu.ps1 +++ b/ci/windows/test_py2_gpu.ps1 @@ -16,15 +16,18 @@ # under the License. 7z x -y windows_package.7z + $env:MXNET_LIBRARY_PATH=join-path $pwd.Path windows_package\lib\libmxnet.dll $env:PYTHONPATH=join-path $pwd.Path windows_package\python $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 +$env:MXNET_HOME=[io.path]::combine($PSScriptRoot, 'mxnet_home') + c:\Anaconda3\envs\py2\Scripts\pip install -r tests\requirements.txt -c:\Anaconda3\envs\py2\python.exe -m nose -v --with-xunit --xunit-file nosetests_unittest.xml tests\python\unittest +c:\Anaconda3\envs\py2\python.exe -m nose -v --with-timer --timer-ok 1 --timer-warning 15 --timer-filter warning,error --with-xunit --xunit-file nosetests_unittest.xml tests\python\unittest if (! $?) { Throw ("Error running unittest") } -c:\Anaconda3\envs\py2\python.exe -m nose -v --with-xunit --xunit-file nosetests_operator.xml tests\python\gpu\test_operator_gpu.py +c:\Anaconda3\envs\py2\python.exe -m nose -v --with-timer --timer-ok 1 --timer-warning 15 --timer-filter warning,error --with-xunit --xunit-file nosetests_operator.xml tests\python\gpu\test_operator_gpu.py if (! $?) { Throw ("Error running tests") } -c:\Anaconda3\envs\py2\python.exe -m nose -v --with-xunit --xunit-file nosetests_forward.xml tests\python\gpu\test_forward.py +c:\Anaconda3\envs\py2\python.exe -m nose -v --with-timer --timer-ok 1 --timer-warning 15 --timer-filter warning,error --with-xunit --xunit-file nosetests_forward.xml tests\python\gpu\test_forward.py if (! $?) { Throw ("Error running tests") } -c:\Anaconda3\envs\py2\python.exe -m nose -v tests\python\train +c:\Anaconda3\envs\py2\python.exe -m nose -v --with-timer --timer-ok 1 --timer-warning 15 --timer-filter warning,error tests\python\train if (! $?) { Throw ("Error running tests") } diff --git a/ci/windows/test_py3_cpu.ps1 b/ci/windows/test_py3_cpu.ps1 index 0dd48de26..32da4885f 100644 --- a/ci/windows/test_py3_cpu.ps1 +++ b/ci/windows/test_py3_cpu.ps1 @@ -16,11 +16,14 @@ # under the License. 7z x -y windows_package.7z + $env:MXNET_LIBRARY_PATH=join-path $pwd.Path windows_package\lib\libmxnet.dll $env:PYTHONPATH=join-path $pwd.Path windows_package\python $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 +$env:MXNET_HOME=[io.path]::combine($PSScriptRoot, 'mxnet_home') + c:\Anaconda3\envs\py3\Scripts\pip install -r tests\requirements.txt -c:\Anaconda3\envs\py3\python.exe -m nose -v --with-xunit --xunit-file nosetests_unittest.xml tests\python\unittest +c:\Anaconda3\envs\py3\python.exe -m nose -v --with-timer --timer-ok 1 --timer-warning 15 --timer-filter warning,error --with-xunit --xunit-file nosetests_unittest.xml tests\python\unittest if (! $?) { Throw ("Error running unittest") } -c:\Anaconda3\envs\py3\python.exe -m nose -v --with-xunit --xunit-file nosetests_train.xml tests\python\train +c:\Anaconda3\envs\py3\python.exe -m nose -v --with-timer --timer-ok 1 --timer-warning 15 --timer-filter warning,error --with-xunit --xunit-file nosetests_train.xml tests\python\train if (! $?) { Throw ("Error running train tests") } diff --git a/ci/windows/test_py3_gpu.ps1 b/ci/windows/test_py3_gpu.ps1 index 4a0feb1ed..b30b22ae9 100644 --- a/ci/windows/test_py3_gpu.ps1 +++ b/ci/windows/test_py3_gpu.ps1 @@ -16,15 +16,18 @@ # under the License. 
7z x -y windows_package.7z + $env:MXNET_LIBRARY_PATH=join-path $pwd.Path windows_package\lib\libmxnet.dll $env:PYTHONPATH=join-path $pwd.Path windows_package\python $env:MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0 +$env:MXNET_HOME=[io.path]::combine($PSScriptRoot, 'mxnet_home') + c:\Anaconda3\envs\py3\Scripts\pip install -r tests\requirements.txt -c:\Anaconda3\envs\py3\python.exe -m nose -v --with-xunit --xunit-file nosetests_unittest.xml tests\python\unittest +c:\Anaconda3\envs\py3\python.exe -m nose -v --with-timer --timer-ok 1 --timer-warning 15 --timer-filter warning,error --with-xunit --xunit-file nosetests_unittest.xml tests\python\unittest if (! $?) { Throw ("Error running unittest") } -c:\Anaconda3\envs\py3\python.exe -m nose -v --with-xunit --xunit-file nosetests_operator.xml tests\python\gpu\test_operator_gpu.py +c:\Anaconda3\envs\py3\python.exe -m nose -v --with-timer --timer-ok 1 --timer-warning 15 --timer-filter warning,error --with-xunit --xunit-file nosetests_operator.xml tests\python\gpu\test_operator_gpu.py if (! $?) { Throw ("Error running tests") } -c:\Anaconda3\envs\py3\python.exe -m nose -v --with-xunit --xunit-file nosetests_forward.xml tests\python\gpu\test_forward.py +c:\Anaconda3\envs\py3\python.exe -m nose -v --with-timer --timer-ok 1 --timer-warning 15 --timer-filter warning,error --with-xunit --xunit-file nosetests_forward.xml tests\python\gpu\test_forward.py if (! $?) { Throw ("Error running tests") } -c:\Anaconda3\envs\py3\python.exe -m nose -v --with-xunit --xunit-file nosetests_train.xml tests\python\train +c:\Anaconda3\envs\py3\python.exe -m nose -v --with-timer --timer-ok 1 --timer-warning 15 --timer-filter warning,error --with-xunit --xunit-file nosetests_train.xml tests\python\train if (! $?) { Throw ("Error running tests") } diff --git a/cmake/AutoDetectF16C.cmake b/cmake/AutoDetectF16C.cmake new file mode 100644 index 000000000..04331c3ac --- /dev/null +++ b/cmake/AutoDetectF16C.cmake @@ -0,0 +1,53 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+ +# Determines whether hardware and compiler support F16C +# instruction set +# +# The following are set after configuration is done: +# SUPPORT_F16C + +if(AUTO_DETECT_F16_CMAKE_INCLUDED) + return() +endif() +set(AUTO_DETECT_F16_CMAKE_INCLUDED True) + +set(SUPPORT_F16C False) +if(MSVC) + message("F16C instruction set is not yet supported for MSVC") + return() +endif() +include(CheckCXXCompilerFlag) +check_cxx_compiler_flag("-mf16c" COMPILER_SUPPORT_MF16C) +if(CMAKE_SYSTEM_NAME STREQUAL "Linux") + execute_process(COMMAND cat /proc/cpuinfo + COMMAND grep flags + COMMAND grep f16c + OUTPUT_VARIABLE CPU_SUPPORT_F16C) +elseif(CMAKE_SYSTEM_NAME STREQUAL "Darwin") + execute_process(COMMAND sysctl -a + COMMAND grep machdep.cpu.features + COMMAND grep F16C + OUTPUT_VARIABLE CPU_SUPPORT_F16C) +endif() +if(NOT CPU_SUPPORT_F16C) + message("CPU does not support F16C instructions") + return() +endif() +if(CPU_SUPPORT_F16C AND COMPILER_SUPPORT_MF16C) + set(SUPPORT_F16C TRUE) +endif() diff --git a/cmake/DownloadMKLML.cmake b/cmake/DownloadMKLML.cmake new file mode 100644 index 000000000..eabf861a4 --- /dev/null +++ b/cmake/DownloadMKLML.cmake @@ -0,0 +1,74 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+ +# This script will download MKLML + +message(STATUS "Downloading MKLML...") + +set(MKLDNN_RELEASE v0.17-rc) +set(MKLML_RELEASE_FILE_SUFFIX 2019.0.1.20180928) + +if(MSVC) + set(MKL_NAME "mklml_win_${MKLML_RELEASE_FILE_SUFFIX}") + + file(DOWNLOAD "https://github.com/intel/mkl-dnn/releases/download/${MKLDNN_RELEASE}/${MKL_NAME}.zip" + "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}.zip" + EXPECTED_MD5 "443e661bdfd32dbbc99b460b43afceee" SHOW_PROGRESS) + file(DOWNLOAD "https://github.com/apache/incubator-mxnet/releases/download/utils/7z.exe" + "${CMAKE_CURRENT_BINARY_DIR}/mklml/7z2.exe" + EXPECTED_MD5 "E1CF766CF358F368EC97662D06EA5A4C" SHOW_PROGRESS) + + execute_process(COMMAND "${CMAKE_CURRENT_BINARY_DIR}/mklml/7z2.exe" "-o${CMAKE_CURRENT_BINARY_DIR}/mklml/" "-y") + execute_process(COMMAND "${CMAKE_CURRENT_BINARY_DIR}/mklml/7z.exe" + "x" "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}.zip" "-o${CMAKE_CURRENT_BINARY_DIR}/mklml/" "-y") + + set(MKLROOT "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}") + + message(STATUS "Setting MKLROOT path to ${MKLROOT}") + + include_directories(${MKLROOT}/include) + +elseif(APPLE) + set(MKL_NAME "mklml_mac_${MKLML_RELEASE_FILE_SUFFIX}") + + file(DOWNLOAD "https://github.com/intel/mkl-dnn/releases/download/${MKLDNN_RELEASE}/${MKL_NAME}.tgz" + "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}.tgz" + EXPECTED_MD5 "95f887af332205b1d15b392260003952" SHOW_PROGRESS) + execute_process(COMMAND "tar" "-xzf" "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}.tgz" + "-C" "${CMAKE_CURRENT_BINARY_DIR}/mklml/") + + set(MKLROOT "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}") + + message(STATUS "Setting MKLROOT path to ${MKLROOT}") + include_directories(${MKLROOT}/include) + +elseif(UNIX) + set(MKL_NAME "mklml_lnx_${MKLML_RELEASE_FILE_SUFFIX}") + + file(DOWNLOAD "https://github.com/intel/mkl-dnn/releases/download/${MKLDNN_RELEASE}/${MKL_NAME}.tgz" + "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}.tgz" + EXPECTED_MD5 "a63abf155361322b9c03f8fc50f4f317" SHOW_PROGRESS) + execute_process(COMMAND "tar" "-xzf" "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}.tgz" + "-C" "${CMAKE_CURRENT_BINARY_DIR}/mklml/") + + set(MKLROOT "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}") + message(STATUS "Setting MKLROOT path to ${MKLROOT}") + include_directories(${MKLROOT}/include) + +else() + message(FATAL_ERROR "Wrong platform") +endif() diff --git a/cmake/FirstClassLangCuda.cmake b/cmake/FirstClassLangCuda.cmake index 4b44077d2..8d79c2b63 100644 --- a/cmake/FirstClassLangCuda.cmake +++ b/cmake/FirstClassLangCuda.cmake @@ -120,6 +120,16 @@ else() list(APPEND CUDA_COMMON_GPU_ARCHITECTURES "5.2+PTX") endif () +if (CUDA_TOOLSET VERSION_GREATER "9.0") + list(APPEND CUDA_KNOWN_GPU_ARCHITECTURES "Volta") + list(APPEND CUDA_COMMON_GPU_ARCHITECTURES "7.0") +endif() + +if (CUDA_TOOLSET VERSION_GREATER "10.0") + list(APPEND CUDA_KNOWN_GPU_ARCHITECTURES "Turing") + list(APPEND CUDA_COMMON_GPU_ARCHITECTURES "7.5") +endif() + ################################################################################################ # Function for selecting GPU arch flags for nvcc based on CUDA_ARCH_NAME # Usage: @@ -185,6 +195,12 @@ function(mshadow_select_nvcc_arch_flags out_variable) elseif(${arch_name} STREQUAL "Pascal") set(arch_bin 6.0 6.1) set(arch_ptx 6.1) + elseif(${arch_name} STREQUAL "Volta") + set(arch_bin 7.0) + set(arch_ptx 7.0) + elseif(${arch_name} STREQUAL "Turing") + set(arch_bin 7.5) + set(arch_ptx 7.5) else() message(SEND_ERROR "Unknown CUDA Architecture Name ${arch_name} in CUDA_SELECT_NVCC_ARCH_FLAGS") 
endif() diff --git a/cmake/MklDnn.cmake b/cmake/MklDnn.cmake deleted file mode 100644 index acaf878b2..000000000 --- a/cmake/MklDnn.cmake +++ /dev/null @@ -1,44 +0,0 @@ -# Licensed to the Apache Software Foundation (ASF) under one -# or more contributor license agreements. See the NOTICE file -# distributed with this work for additional information -# regarding copyright ownership. The ASF licenses this file -# to you under the Apache License, Version 2.0 (the -# "License"); you may not use this file except in compliance -# with the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an -# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -# KIND, either express or implied. See the License for the -# specific language governing permissions and limitations -# under the License. - -#this file download mklml - -message(STATUS "download mklml") -if(MSVC) - set(MKL_NAME "mklml_win_2018.0.3.20180406") - file(DOWNLOAD "https://github.com/intel/mkl-dnn/releases/download/v0.14/${MKL_NAME}.zip" "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}.zip" EXPECTED_MD5 "8DD73E7D3F19F004551809824C4E8970" SHOW_PROGRESS) - file(DOWNLOAD "https://github.com/apache/incubator-mxnet/releases/download/utils/7z.exe" "${CMAKE_CURRENT_BINARY_DIR}/mklml/7z2.exe" EXPECTED_MD5 "E1CF766CF358F368EC97662D06EA5A4C" SHOW_PROGRESS) - - execute_process(COMMAND "${CMAKE_CURRENT_BINARY_DIR}/mklml/7z2.exe" "-o${CMAKE_CURRENT_BINARY_DIR}/mklml/" "-y") - execute_process(COMMAND "${CMAKE_CURRENT_BINARY_DIR}/mklml/7z.exe" "x" "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}.zip" "-o${CMAKE_CURRENT_BINARY_DIR}/mklml/" "-y") - set(MKLROOT "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}") - include_directories(${MKLROOT}/include) - file(COPY ${MKLROOT}/lib/libiomp5md.dll DESTINATION ${CMAKE_CURRENT_BINARY_DIR}) - file(COPY ${MKLROOT}/lib/mklml.dll DESTINATION ${CMAKE_CURRENT_BINARY_DIR}) - file(COPY ${CMAKE_SOURCE_DIR}/3rdparty/mkldnn/config_template.vcxproj.user DESTINATION ${CMAKE_SOURCE_DIR}) -elseif(UNIX) - set(MKL_NAME "mklml_lnx_2018.0.3.20180406") - file(DOWNLOAD "https://github.com/intel/mkl-dnn/releases/download/v0.14/${MKL_NAME}.tgz" "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}.tgz" EXPECTED_MD5 "DAF7EFC3C1C0036B447213004467A8AE" SHOW_PROGRESS) - execute_process(COMMAND "tar" "-xzf" "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}.tgz" "-C" "${CMAKE_CURRENT_BINARY_DIR}/mklml/") - set(MKLROOT "${CMAKE_CURRENT_BINARY_DIR}/mklml/${MKL_NAME}") - include_directories(${MKLROOT}/include) - file(COPY ${MKLROOT}/lib/libiomp5.so DESTINATION ${CMAKE_CURRENT_BINARY_DIR}) - file(COPY ${MKLROOT}/lib/libmklml_gnu.so DESTINATION ${CMAKE_CURRENT_BINARY_DIR}) - file(COPY ${MKLROOT}/lib/libmklml_intel.so DESTINATION ${CMAKE_CURRENT_BINARY_DIR}) -else() - message(FATAL_ERROR "not support now") -endif() diff --git a/cmake/cmake_options.yml b/cmake/cmake_options.yml new file mode 100644 index 000000000..a4323feb9 --- /dev/null +++ b/cmake/cmake_options.yml @@ -0,0 +1,53 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. 
You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +--- # CMake configuration +USE_CUDA: "OFF" # Build with CUDA support +USE_OLDCMAKECUDA: "OFF" # Build with old cmake cuda +USE_NCCL: "OFF" # Use NVidia NCCL with CUDA +USE_OPENCV: "ON" # Build with OpenCV support +USE_OPENMP: "ON" # Build with Openmp support +USE_CUDNN: "ON" # Build with cudnn support) # one could set CUDNN_ROOT for search path +USE_SSE: "ON" # Build with x86 SSE instruction support IF NOT ARM +USE_F16C: "ON" # Build with x86 F16C instruction support) # autodetects support if "ON" +USE_LAPACK: "ON" # Build with lapack support +USE_MKL_IF_AVAILABLE: "ON" # Use MKL if found +USE_MKLML_MKL: "ON" # Use MKLDNN variant of MKL (if MKL found) IF USE_MKL_IF_AVAILABLE AND (NOT APPLE) +USE_MKLDNN: "ON" # Use MKLDNN variant of MKL (if MKL found) IF USE_MKL_IF_AVAILABLE AND (NOT APPLE) +USE_OPERATOR_TUNING: "ON" # Enable auto-tuning of operators IF NOT MSVC +USE_GPERFTOOLS: "ON" # Build with GPerfTools support (if found) +USE_JEMALLOC: "ON" # Build with Jemalloc support +USE_PROFILER: "ON" # Build with Profiler support +USE_DIST_KVSTORE: "OFF" # Build with DIST_KVSTORE support +USE_PLUGINS_WARPCTC: "OFF" # Use WARPCTC Plugins +USE_PLUGIN_CAFFE: "OFF" # Use Caffe Plugin +USE_CPP_PACKAGE: "OFF" # Build C++ Package +USE_MXNET_LIB_NAMING: "ON" # Use MXNet library naming conventions. +USE_GPROF: "OFF" # Compile with gprof (profiling) flag +USE_CXX14_IF_AVAILABLE: "OFF" # Build with C++14 if the compiler supports it +USE_VTUNE: "OFF" # Enable use of Intel Amplifier XE (VTune)) # one could set VTUNE_ROOT for search path +ENABLE_CUDA_RTC: "ON" # Build with CUDA runtime compilation support +BUILD_CPP_EXAMPLES: "ON" # Build cpp examples +INSTALL_EXAMPLES: "OFF" # Install the example source files. +USE_SIGNAL_HANDLER: "ON" # Print stack traces on segfaults. +USE_TENSORRT: "OFF" # Enable infeference optimization with TensorRT. +USE_ASAN: "OFF" # Enable Clang/GCC ASAN sanitizers. +ENABLE_TESTCOVERAGE: "OFF" # Enable compilation with test coverage metric output +CMAKE_BUILD_TYPE: "Debug" +CMAKE_CUDA_COMPILER_LAUNCHER: "ccache" +CMAKE_C_COMPILER_LAUNCHER: "ccache" +CMAKE_CXX_COMPILER_LAUNCHER: "ccache" diff --git a/contrib/clojure-package/LICENSE b/contrib/clojure-package/LICENSE index 8f71f43fe..26c4e047b 100644 --- a/contrib/clojure-package/LICENSE +++ b/contrib/clojure-package/LICENSE @@ -186,7 +186,7 @@ same "printed page" as the copyright notice for easier identification within third-party archives. - Copyright {yyyy} {name of copyright owner} + Copyright 2018 by Contributors Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/contrib/clojure-package/README.md b/contrib/clojure-package/README.md index 8c71224a3..ba6160aed 100644 --- a/contrib/clojure-package/README.md +++ b/contrib/clojure-package/README.md @@ -1,154 +1,227 @@ # Clojure MXNet -A clojure package to the MXNet Deep Learning library +A Clojure Package Built on the MXNet Deep Learning Library ## Introduction -MXNet is a first class, modern deep learning library. 
It supports multiple languages on a first class basis and is incubating as an Apache project. +MXNet is a flexible and efficient deep learning library. While its core is built in C++ for maximum performance, support for multiple programming languages in intermediate and high-level APIs is a first-class feature. MXNet is currently an incubating Apache project. -The motivation for creating a Clojure package is to be able to open the deep learning library to the Clojure ecosystem and build bridges for future development and innovation for the community. It provides all the needed tools including low level and high level apis, dynamic graphs, and things like GAN and natural language support. +The motivation for creating a Clojure package was to give Clojurians access to a world-class deep learning platform, thereby building bridges for future development and innovation in the community. The Clojure package provides all the essential tools, including low-level and high-level APIs, dynamic graphs, etc., and enables building advanced architectures like GANs or LSTM to tackle challenging applications such as image recognition or natural language processing. -For high leverage, the Clojure package has been built on the existing Scala package using interop. This has allowed rapid development and close parity with the Scala functionality. This also leaves the door open to directly developing code against the jni-bindings with Clojure in the future in an incremental fashion, using the test suites as a refactoring guide. +To maximize leverage, the Clojure package has been built on the existing Scala package using [Java Interop](https://clojure.org/reference/java_interop). This approach has allowed rapid initial development and close parity with the Scala package functionality. It also leaves the door open to incrementally developing Clojure code that directly interfaces MXNet core using [JNI](https://en.wikipedia.org/wiki/Java_Native_Interface). -For a **video introduction**, see [Clojure MXNet with Carin Meier - Clojure Virtual Meetup](https://www.crowdcast.io/e/clojure-mxnet-with-carin) (setup instructions from 20:49) +For a **video introduction**, see [Clojure MXNet with Carin Meier - Clojure Virtual Meetup](https://www.crowdcast.io/e/clojure-mxnet-with-carin) (setup instructions from 20:49). ## Current State and Plans -The Clojure package is nearing the end of its first development milestone which is to achieve a close parity with the Scala package. +The Clojure MXNet package is currently treated as *user-contributed code* within MXNet, as can be seen from its placement under `contrib` in the source tree. This means that it should first undergo a stabilization period and receive feedback from users before it can graduate to a fully integrated and supported part of MXNet. -Help is needed testing and generally making the package better. A list of the pacakge status and contribution needs can be found here [Clojure Package Contribution Needs](https://cwiki.apache.org/confluence/display/MXNET/Clojure+Package+Contribution+Needs). Please get involved :) +That said, because it closely tracks the Scala package, Clojure MXNet can be expected to have a similar level of maturity and stability regarding the low-level functionality. It is mostly in the hand-written Java interop part of the Clojure wrapper where bugs are more likely to be encountered. Such bugs tend to be fixed rather quickly once they are known and their origin is clear (see also [Getting Involved](#getting-involved)). 
-Testing instructions can be found in the testing.md. +For an overview of the development status and open problems, please refer to [Clojure Package Contribution Needs](https://cwiki.apache.org/confluence/display/MXNET/Clojure+Package+Contribution+Needs). -## Getting Started +## Getting Involved + +By far the best way to get involved with this project is to install the Clojure MXNet package, run the examples, play around, build new things with it, and get back to the development team with feedback! Your input can not only help to identify current issues, but also guide the future development of the Clojure package by pointing out must-have features that are currently missing, or by identifying usability or performance problems of high impact. -The following systems are supported: +There are two main ways of reaching out to other users and the package maintainers: -- OSX cpu -- Linux cpu -- Linux gpu +- If you have a question or general feedback, or you encountered a problem but are not sure if it's a bug or a misunderstanding, then the *Apache Slack* (channels `#mxnet` and `#mxnet-scala`) is the best place to turn to. To join, [ask for an invitation](https://mxnet.apache.org/community/contribute.html#slack) at `dev@mxnet.apache.org`. +- If you found a bug, miss an important feature or want to give feedback directly relevant for development, please head over to the MXNet [GitHub issue page](https://github.com/apache/incubator-mxnet/issues) and create a new issue. If the issue is specific to the Clojure package, consider using a title starting with `[Clojure]` to make it easily discoverable among the many other, mostly generic issues. -There are two ways of getting going. The first way is the easiest and that is to use the pre-built jars from Maven. The second way is to build from source. In both cases, you will need to load the prereqs and dependencies, (like opencv). +Of course, contributions to code or documentation are also more than welcome! Please check out the [Clojure Package Contribution Needs](https://cwiki.apache.org/confluence/display/MXNET/Clojure+Package+Contribution+Needs) to get an idea about where and how to contribute code. +For a more comprehensive overview of different ways to contribute, see [Contributing to MXNet](https://mxnet.apache.org/community/contribute.html). +## Getting Started -### Prerequisites +The Clojure MXNet framework consists of a core C library, a Scala API that talks to the core through [JNI (Java Native Interface)](https://en.wikipedia.org/wiki/Java_Native_Interface) bindings, and finally a Clojure wrapper around the Scala API. +Since the core contains native (compiled) code and is bundled with the language bindings, your hardware and OS matter to the choices to be made during installation. The following combinations of operating system and compute device are supported: -Follow the instructions from https://mxnet.incubator.apache.org/install/osx_setup.html or https://mxnet.incubator.apache.org/install/ubuntu_setup.html -about _Prepare Environment for GPU Installation_ -and _Install MXNet dependencies_ +- Linux CPU +- Linux GPU +- OSX CPU +There are three ways of getting started: -#### Cloning the repo and running from source +1. [Install prebuilt Clojure jars](#option-1-clojure-package-from-prebuilt-jar) with the native dependencies baked in. This is the quickest way to get going. +2. [Install the Clojure package from source, but use prebuilt jars for the native dependencies](#option-2-clojure-package-from-source-scala-package-from-jar).
Choose this option if you want pre-release features of the Clojure package but don't want to build (compile) native dependencies yourself. +3. [Build everything from source](#option-3-everything-from-source). This option is for developers or advanced users who want cutting-edge features in all parts of the dependency chain. -To use the prebuilt jars (easiest), you will need to replace the native version of the line in the project dependencies with your configuration. +**Note:** This guide assumes that you are familiar with the basics of creating Clojure projects and managing dependencies. See [here](https://github.com/technomancy/leiningen/blob/stable/doc/TUTORIAL.md) for the official Leiningen tutorial. -`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-gpu "1.3.0"]` -or -`[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-cpu "1.3.0"]` -or -`[org.apache.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.3.0"]` +### Option 1: Clojure Package from Prebuilt Jar -If you are using the prebuilt jars they may have a slightly different dependencies then building from source: +If you are new to MXNet and just want to try things out, this option is the best way to get started. You will install release versions of MXNet core, MXNet Scala and MXNet Clojure. -*For OSX you will need:* +For reference, the Clojure MXNet jars can be found on [maven.org](https://search.maven.org/search?q=clojure%20mxnet). -`brew install opencv` +#### Installing additional dependencies -*For Ubuntu Linux you will need:* +Depending on your operating system, you will need a couple of packages that are not distributed through Maven: -``` -sudo add-apt-repository ppa:timsc/opencv-3.4 +- [OpenCV](https://opencv.org/) version 3.4 +- [OpenBLAS](https://www.openblas.net/) +- [ATLAS](http://math-atlas.sourceforge.net/) +- [cURL](https://curl.haxx.se/) library version 3 + +##### Linux (Ubuntu) + +As of writing this, OpenCV 3.4 is not available in the default repositories. Therefore, a third-party repository is needed. + +```bash +sudo add-apt-repository ppa:timsc/opencv sudo apt-get update -sudo apt install libopencv-imgcodecs3.4 +sudo apt install libopencv-imgcodecs3.4 libopenblas-base libatlas3-base libcurl3 ``` -*For Arch Linux you will need:* +Note: `libcurl3` may conflict with other packages on your system. [Here](https://github.com/apache/incubator-mxnet/issues/12822) is a possible workaround. 
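If the Ubuntu packages above installed cleanly, a quick way to double-check is to ask the dynamic linker for the libraries. This is only a sanity-check sketch; the library names in the comments are assumptions based on the packages listed above and may differ across Ubuntu releases.

```bash
# Each command should print at least one matching entry from the linker cache.
ldconfig -p | grep -i opencv      # e.g. libopencv_imgcodecs.so.3.4
ldconfig -p | grep -i openblas    # e.g. libopenblas.so.0
ldconfig -p | grep -i libcurl     # e.g. libcurl.so.3
```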
-_CPU_ +##### Linux (Arch) -``` +```bash yaourt -S openblas-lapack yaourt -S libcurl-compat export LD_PRELOAD=libcurl.so.3 ``` -_GPU_ -``` +To enable GPU support, you will additionally need the CUDA toolkit: + +```bash wget https://archive.archlinux.org/packages/c/cuda/cuda-9.0.176-4-x86_64.pkg.tar.xz sudo pacman -U cuda-9.0.176-4-x86_64.pkg.tar.xz ``` -If you want to see the exact versions and flags that the jars were built with, look here: -[Scala Release Process](https://cwiki.apache.org/confluence/display/MXNET/MXNet-Scala+Release+Process) +##### OSX + +```bash +brew install wget +brew install opencv +``` + +#### Installing the Clojure package + +- Create a new project with `lein new my-mxnet` +- Edit your `project.clj` and add one of the following entries to `:dependencies`, based on your system and the compute device you want to use: + + + - `[org.apache.mxnet.contrib.clojure/clojure-mxnet-linux-cpu <latest-version>]` + - `[org.apache.mxnet.contrib.clojure/clojure-mxnet-linux-gpu <latest-version>]` + - `[org.apache.mxnet.contrib.clojure/clojure-mxnet-osx-cpu <latest-version>]` +You can find the latest version on [Maven Central - clojure-mxnet latest](https://search.maven.org/search?q=clojure-mxnet). -Check your installation with `lein test`. If that works alright then, you can try some code! +After making this change and running `lein deps`, you should be able to run example code like this [NDArray Tutorial](https://github.com/apache/incubator-mxnet/blob/master/contrib/clojure-package/examples/tutorial/src/tutorial/ndarray.clj). -```clojure +### Option 2: Clojure Package from Source, Scala Package from Jar -(ns tutorial.ndarray - (:require [org.apache.clojure-mxnet.ndarray :as ndarray] - [org.apache.clojure-mxnet.context :as context])) +With this option, you will install a Git revision of the Clojure package source and a [Scala package jar from Maven](https://search.maven.org/search?q=g:org.apache.mxnet) with native dependencies baked in. -;;Create NDArray -(def a (ndarray/zeros [100 50])) ;;all zero arrray of dimension 100 x 50 -(def b (ndarray/ones [256 32 128 1])) ;; all one array of dimension -(def c (ndarray/array [1 2 3 4 5 6] [2 3])) ;; array with contents of a shape 2 x 3 +- Install additional dependencies as described in [the corresponding section for Option 1](#installing-additional-dependencies). -;;; There are also ways to convert to a vec or get the shape as an object or vec -(ndarray/->vec c) ;=> [1.0 2.0 3.0 4.0 5.0 6.0] +- Recursively clone the MXNet repository and check out the desired version (for example, 1.3.1). You should use the latest [version](https://search.maven.org/search?q=clojure-mxnet), and clone into the `~/mxnet` directory: + + ```bash + git clone --recursive https://github.com/apache/incubator-mxnet.git ~/mxnet + cd ~/mxnet + git tag --list # Find the tag that matches the Scala package version + + git checkout tags/<version> -b my_mxnet + git submodule update --init --recursive + cd contrib/clojure + ``` + +- Edit `project.clj` to include the desired Scala jar from Maven: + + + [org.apache.mxnet/mxnet-full_2.11-linux-x86_64-cpu <latest-version>] + +- Run `lein test`. All the tests should run without error. +- At this point you can run `lein install` to build and install the Clojure jar locally. + +To run examples, you can now use `lein run` in any of the example directories, e.g., `examples/imclassification`. You can also specify the compute device, e.g., `lein run :cpu 2` (for 2 CPUs) or `lein run :gpu` (for 1 GPU).
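To make the example-running step concrete, here is a minimal usage sketch. The paths follow the clone instructions above (`~/mxnet` and the `contrib/clojure` package directory used in this guide); adjust them if your checkout is laid out differently.

```bash
# Run the bundled image-classification example from its directory.
cd ~/mxnet/contrib/clojure/examples/imclassification

# Two CPUs ...
lein run :cpu 2
# ... or a single GPU.
# lein run :gpu
```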
+
+#### Experimental: Using Scala Snapshot Jars
+**Note:** Instead of a release tag, you can also use a development version of the Clojure package, e.g., Git `master`, together with the prebuilt Scala jar. There is a repo of nightly built snapshots of Scala jars. You can use them in your `project.clj` by adding a repository:
+
+```
+["snapshots" {:url "https://repository.apache.org/content/repositories/snapshots"
+              :snapshots true
+              :sign-releases false
+              :checksum :fail
+              :update :always
+              :releases {:checksum :fail :update :always}}]
 ```
-See the examples/tutorial section for more.
+Then you should be able to run with your dependency:
+
+    [org.apache.mxnet/mxnet-full_2.11-osx-x86_64-cpu "latest-version-SNAPSHOT"]
-The jars from maven with the needed MXNet native binaries in it. On startup, the native libraries are extracted from the jar and copied into a temporary location on your path. On termination, they are deleted.
+In that case, however, breakage can happen at any point, for instance when the Scala development version adds, changes or removes an interface and the Clojure development version moves along. If you really need the most recent version, you should consider [installation option 3](#option-3-everything-from-source).
-### Build from MXNET Source
+### Option 3: Everything from Source
-First, ensure you have JDK 8 on your system. Later versions may produce cryptic build errors mentioning `scala.reflect.internal.MissingRequirementError`.
+With this option, you will compile the core MXNet C++ package and the jars for both the Scala and Clojure language bindings from source. If you intend to make changes to the code in any of these parts, or if you simply want the latest and greatest features, this choice is for you.
-Checkout the latest SHA from the main package:
+The first step is to recursively clone the MXNet repository and check out the desired version (example: 1.3.1; you should use the latest [version](https://search.maven.org/search?q=clojure-mxnet)), and clone into the `~/mxnet` directory:
-`git clone --recursive https://github.com/apache/incubator-mxnet.git ~/mxnet`
-`cd ~/mxnet`
+
+  ```bash
+  git clone --recursive https://github.com/apache/incubator-mxnet.git ~/mxnet
+  cd ~/mxnet
+  git checkout tags/version -b my_mxnet # this is optional
+  git submodule update --init --recursive
+  ```
-If you need to checkout a particular release you can do it with:
+If you have previous builds and other unwanted files lying around in the working directory and would like to clean up, [here](https://gist.github.com/nicktoumpelis/11214362) is a useful script for that task. However, be aware that this recipe will remove any untracked files and reset uncommitted changes in the working directory.
-`git checkout tags/1.3.0 -b release-1.3.0`
+#### Building the core library
-`git submodule update --init --recursive`
+Detailed instructions for building MXNet core from source can be found [in the MXNet installation documentation](https://mxnet.incubator.apache.org/install/index.html).
The relevant sections are: -Sometimes it useful to use this script to clean hard -https://gist.github.com/nicktoumpelis/11214362 +- For Ubuntu Linux: [CUDA Dependencies](https://mxnet.incubator.apache.org/install/ubuntu_setup.html#cuda-dependencies) and [Building MXNet from Source](https://mxnet.incubator.apache.org/install/ubuntu_setup.html#build-mxnet-from-source) +- For Mac OSX: [Build the Shared Library](https://mxnet.incubator.apache.org/install/osx_setup.html#build-the-shared-library) +In particular, ignore all of the language-interface-specific sections. -Go here to do the base package installation https://mxnet.incubator.apache.org/install/index.html +The outcome of this step will be a shared library `lib/libmxnet.so` that is used in the next step. - Run `make scalapkg` then `make scalainstall` +#### Building the Scala jar -then replace the correct jar for your architecture in the project.clj, example `[org.apache.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.3.0-SNAPSHOT"]` +- Ensure you have JDK 8 on your system. Later versions may produce cryptic build errors mentioning `scala.reflect.internal.MissingRequirementError`. +- Build and install the Scala package in your local Maven directory using the following commands: -#### Test your installation + ```bash + cd scala-package + mvn install + ``` -To test your installation, you should run `lein test`. This will run the test suite (CPU) for the clojure package. +#### Building the Clojure jar + +- Enter the `contrib/clojure` directory and edit the `project.clj` file. Add the Scala jar that was just created and installed, e.g., `[org.apache.mxnet/mxnet-full_2.11-osx-x86_64-cpu "latest-version-SNAPSHOT"]`, to the `:dependencies`. +- Run `lein test`. All the tests should run without an error. +- Run `lein install` to build and install the Clojure jar locally. +To run examples, you can now use `lein run` in any of the example directories, e.g., `examples/imclassification`. You can also specify the compute device, e.g., `lein run :cpu 2` (for 2 CPUs) or `lein run :gpu` (for 1 GPU). -#### Generation of NDArray and Symbol apis +## Docker Files -The bulk of the ndarray and symbol apis are generated via java reflection into the Scala classes. The files are generated as a compile time step (AOT) in the `dev.generator` namespace. +There are Dockerfiles available as well. -You may also run this manually with the repl functions: +- [Community Provided by Magnet](https://hub.docker.com/u/magnetcoop/) +- [MXNet CI](https://github.com/apache/incubator-mxnet/blob/master/ci/docker/Dockerfile.build.ubuntu_cpu) and the install scripts + - [Ubuntu core](https://github.com/apache/incubator-mxnet/blob/master/ci/docker/install/ubuntu_core.sh) + - [Ubuntu Scala](https://github.com/apache/incubator-mxnet/blob/master/ci/docker/install/ubuntu_scala.sh) + - [Ubuntu Clojure](https://github.com/apache/incubator-mxnet/blob/master/ci/docker/install/ubuntu_clojure.sh) -`(generate-ndarray-file)` -and -`(generate-symbol-file)` +## Need Help? +If you are having trouble getting started or have a question, feel free to reach out at: -These will generate the files under `src/org.apache.clojure-mxnet/gen/` that are loaded by the `src/org.apache.clojure-mxnet/ndarray.clj` and `src/org.apache.clojure-mxnet/symbol.clj` files. +- Clojurian Slack #mxnet channel. To join, go to [http://clojurians.net/](http://clojurians.net/). +- Apache Slack #mxnet and #mxnet-scala channel. To join this slack send an email to dev@mxnet.apache.org. 
+- Create an Issue on [https://github.com/apache/incubator-mxnet/issues](https://github.com/apache/incubator-mxnet/issues).

 ## Examples
@@ -160,11 +233,11 @@ There are quite a few examples in the examples directory. There is a README in every directory outlining instructions.

 A good place to get started is the module example.
-Do `lein run` for the cpu version or `lein run :gpu` for gpu.
+Do `lein run` for the CPU version or `lein run :gpu` for GPU.

 ## Generating documentation

-To generate api docs, run `lein codox`. The html docs will be generated in the target/docs directory.
+To generate API docs, run `lein codox`. The HTML docs will be generated in the target/docs directory.

 ## Code Coverage
@@ -182,12 +255,11 @@ before you submit a new pull request so we can keep the style consistent through

 ## FAQ
-
 **Why build on the Scala package?**

 The motivation section addresses this, but the main reason is the high leverage of reusing the great work that the Scala package has already done.

-**How can I tell if the gpu is being used?**
+**How can I tell if the GPU is being used?**

 CUDA is finding the best algorithm... As long as a Context.gpu() is passed in the code as a context, the GPU should be used.
@@ -197,30 +269,17 @@ This command can be very handy too

 timestamp, name, utilization.gpu [%], utilization.memory [%], memory.total [MiB], memory.free [MiB], memory.used [MiB]`

 **Supported APIs**
-There are 3 high level apis supported in MXNet: (Model/FeedForward), Module, and Gluon. The Module api is supported in the Clojure package because of the existing support for it in the Scala package. The Module api is very similar to the Gluon api and examples of the usage can be found in the examples directory. The Model/FeedForward Api is deprected.
+There are 3 high-level APIs supported in MXNet: (Model/FeedForward), Module, and Gluon. The Module API is supported in the Clojure package because of the existing support for it in the Scala package. The Module API is very similar to the Gluon API, and examples of its usage can be found in the examples directory. The Model/FeedForward API is deprecated.

-Gluon support will come later and may or may not be built on the Scala gluon api (when it lands there)
+Gluon support will come later and may or may not be built on the Scala Gluon API (when it lands there).

 ## Architecture & Design

 See the Confluence page: https://cwiki.apache.org/confluence/display/MXNET/MXNet+Clojure

 ## Building and Deploying Jars
-The process to build and deploy the jars currently is a manual process using the `lein` build tool and `Clojars`, the Clojure dependency hosting platform.
-
-There is one jar for every system supported.
-
-- Comment out the line in the `project.clj` for the system that you are targeting, (example OSX cpu you would uncomment out ` [org.apache.mxnet/mxnet-full_2.11-osx-x86_64-cpu "1.2.0"]` but leave the linux deps commented)
-- Change the `defproject org.apache.mxnet.contrib.clojure/clojure-mxnet "0.1.1-SNAPSHOT"` in the project to reference the correct version number and jar description. For example changing the line to be `org.apache.mxnet.contrib.clojure/mxnet-osx-cpu "0.1.2"` would create a jar with the group id of `org.apache.mxnet.contrib.clojure` and the artifact name of `mxnet-osx-cpu` and the version of `0.1.2`
-- Run `lein clean`
-- Run `lein jar` to create the jar
-- Check that the jar looks alright in the `/target` directory.
-
-To deploy the jar to Clojars, you do `lein deploy clojars` and it will prompt you for your username and password.
- -_Note: Integration with deployment to Nexus can be enabled too for the future [https://help.sonatype.com/repomanager2/maven-and-other-build-tools/leiningen](https://help.sonatype.com/repomanager2/maven-and-other-build-tools/leiningen)_ -You would repeat this process for all the build system types. +The release process for deploying the Clojure jars is on the [Apache MXNet developer wiki](https://cwiki.apache.org/confluence/display/MXNET/Clojure+Release+Process). ## Special Thanks diff --git a/contrib/clojure-package/examples/captcha/.gitignore b/contrib/clojure-package/examples/captcha/.gitignore new file mode 100644 index 000000000..e1569bd89 --- /dev/null +++ b/contrib/clojure-package/examples/captcha/.gitignore @@ -0,0 +1,3 @@ +/.lein-* +/.nrepl-port +images/* diff --git a/contrib/clojure-package/examples/captcha/README.md b/contrib/clojure-package/examples/captcha/README.md new file mode 100644 index 000000000..6b593b2f1 --- /dev/null +++ b/contrib/clojure-package/examples/captcha/README.md @@ -0,0 +1,61 @@ +# Captcha + +This is the clojure version of [captcha recognition](https://github.com/xlvector/learning-dl/tree/master/mxnet/ocr) +example by xlvector and mirrors the R captcha example. It can be used as an +example of multi-label training. For the following captcha example, we consider it as an +image with 4 labels and train a CNN over the data set. + +![captcha example](captcha_example.png) + +## Installation + +Before you run this example, make sure that you have the clojure package +installed. In the main clojure package directory, do `lein install`. +Then you can run `lein install` in this directory. + +## Usage + +### Training + +First the OCR model needs to be trained based on [labeled data](https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/R/data/captcha_example.zip). +The training can be started using the following: +``` +$ lein train [:cpu|:gpu] [num-devices] +``` +This downloads the training/evaluation data using the `get_data.sh` script +before starting training. + +It is possible that you will encounter some out-of-memory issues while training using :gpu on Ubuntu +linux (18.04). However, the command `lein train` (training on one CPU) may resolve the issue. + +The training runs for 10 iterations by default and saves the model with the +prefix `ocr-`. The model achieved an exact match accuracy of ~0.954 and +~0.628 on training and validation data respectively. + +### Inference + +Once the model has been saved, it can be used for prediction. This can be done +by running: +``` +$ lein infer +INFO MXNetJVM: Try loading mxnet-scala from native path. +INFO MXNetJVM: Try loading mxnet-scala-linux-x86_64-gpu from native path. +INFO MXNetJVM: Try loading mxnet-scala-linux-x86_64-cpu from native path. +WARN MXNetJVM: MXNet Scala native library not found in path. Copying native library from the archive. Consider installing the library somewhere in the path (for Windows: PATH, for Linux: LD_LIBRARY_PATH), or specifying by Java cmd option -Djava.library.path=[lib path]. 
+WARN MXNetJVM: MXNet Scala native library not found in path. Copying native library from the archive. Consider installing the library somewhere in the path (for Windows: PATH, for Linux: LD_LIBRARY_PATH), or specifying by Java cmd option -Djava.library.path=[lib path].
+WARN org.apache.mxnet.DataDesc: Found Undefined Layout, will use default index 0 for batch axis
+INFO org.apache.mxnet.infer.Predictor: Latency increased due to batchSize mismatch 8 vs 1
+WARN org.apache.mxnet.DataDesc: Found Undefined Layout, will use default index 0 for batch axis
+WARN org.apache.mxnet.DataDesc: Found Undefined Layout, will use default index 0 for batch axis
+CAPTCHA output: 6643
+INFO org.apache.mxnet.util.NativeLibraryLoader: Deleting /tmp/mxnet6045308279291774865/libmxnet.so
+INFO org.apache.mxnet.util.NativeLibraryLoader: Deleting /tmp/mxnet6045308279291774865/mxnet-scala
+INFO org.apache.mxnet.util.NativeLibraryLoader: Deleting /tmp/mxnet6045308279291774865
+```
+The model runs on `captcha_example.png` by default.
+
+It can be run on other generated captcha images as well. The script
+`gen_captcha.py` generates random captcha images of length 4.
+Before running the Python script, you will need to install the [captcha](https://pypi.org/project/captcha/)
+library using `pip3 install --user captcha`. The captcha images are generated
+in the `images/` folder, and you can then run prediction on them using,
+e.g., `lein infer images/7534.png`.
diff --git a/contrib/clojure-package/examples/captcha/captcha_example.png b/contrib/clojure-package/examples/captcha/captcha_example.png
new file mode 100644
index 0000000000000000000000000000000000000000..09b84f7190fab4cb391f8a3927b10f44d609aaaf
GIT binary patch
[9762 bytes of binary PNG data for captcha_example.png omitted]
diff --git a/contrib/clojure-package/examples/captcha/gen_captcha.py b/contrib/clojure-package/examples/captcha/gen_captcha.py
new file mode 100755
index 000000000..43e0d26fb
--- /dev/null
+++ b/contrib/clojure-package/examples/captcha/gen_captcha.py
@@ -0,0 +1,40 @@
+#!/usr/bin/env python3
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
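+# Generates a single random 4-digit captcha image with the `captcha` library
+# and writes it to the images/ directory as <digits>.png.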
+ +from captcha.image import ImageCaptcha +import os +import random + +length = 4 +width = 160 +height = 60 +IMAGE_DIR = "images" + + +def random_text(): + return ''.join(str(random.randint(0, 9)) + for _ in range(length)) + + +if __name__ == '__main__': + image = ImageCaptcha(width=width, height=height) + captcha_text = random_text() + if not os.path.exists(IMAGE_DIR): + os.makedirs(IMAGE_DIR) + image.write(captcha_text, os.path.join(IMAGE_DIR, captcha_text + ".png")) diff --git a/contrib/clojure-package/examples/captcha/get_data.sh b/contrib/clojure-package/examples/captcha/get_data.sh new file mode 100755 index 000000000..baa7f9eb8 --- /dev/null +++ b/contrib/clojure-package/examples/captcha/get_data.sh @@ -0,0 +1,32 @@ +#!/bin/bash + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +set -evx + +EXAMPLE_ROOT=$(cd "$(dirname $0)"; pwd) + +data_path=$EXAMPLE_ROOT + +if [ ! -f "$data_path/captcha_example.zip" ]; then + wget https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/R/data/captcha_example.zip -P $data_path +fi + +if [ ! -f "$data_path/captcha_example/captcha_train.rec" ]; then + unzip $data_path/captcha_example.zip -d $data_path +fi diff --git a/contrib/clojure-package/examples/captcha/project.clj b/contrib/clojure-package/examples/captcha/project.clj new file mode 100644 index 000000000..fa37fecbe --- /dev/null +++ b/contrib/clojure-package/examples/captcha/project.clj @@ -0,0 +1,28 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. 
+;; + +(defproject captcha "0.1.0-SNAPSHOT" + :description "Captcha recognition via multi-label classification" + :plugins [[lein-cljfmt "0.5.7"]] + :dependencies [[org.clojure/clojure "1.9.0"] + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"]] + :main ^:skip-aot captcha.train-ocr + :profiles {:train {:main captcha.train-ocr} + :infer {:main captcha.infer-ocr} + :uberjar {:aot :all}} + :aliases {"train" ["with-profile" "train" "run"] + "infer" ["with-profile" "infer" "run"]}) diff --git a/contrib/clojure-package/examples/captcha/src/captcha/consts.clj b/contrib/clojure-package/examples/captcha/src/captcha/consts.clj new file mode 100644 index 000000000..318e0d806 --- /dev/null +++ b/contrib/clojure-package/examples/captcha/src/captcha/consts.clj @@ -0,0 +1,27 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns captcha.consts) + +(def batch-size 8) +(def channels 3) +(def height 30) +(def width 80) +(def data-shape [channels height width]) +(def num-labels 10) +(def label-width 4) +(def model-prefix "ocr") diff --git a/contrib/clojure-package/examples/captcha/src/captcha/infer_ocr.clj b/contrib/clojure-package/examples/captcha/src/captcha/infer_ocr.clj new file mode 100644 index 000000000..f6a648e98 --- /dev/null +++ b/contrib/clojure-package/examples/captcha/src/captcha/infer_ocr.clj @@ -0,0 +1,56 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. 
+;; + +(ns captcha.infer-ocr + (:require [captcha.consts :refer :all] + [org.apache.clojure-mxnet.dtype :as dtype] + [org.apache.clojure-mxnet.infer :as infer] + [org.apache.clojure-mxnet.layout :as layout] + [org.apache.clojure-mxnet.ndarray :as ndarray])) + +(defn create-predictor + [] + (let [data-desc {:name "data" + :shape [batch-size channels height width] + :layout layout/NCHW + :dtype dtype/FLOAT32} + label-desc {:name "label" + :shape [batch-size label-width] + :layout layout/NT + :dtype dtype/FLOAT32} + factory (infer/model-factory model-prefix + [data-desc label-desc])] + (infer/create-predictor factory))) + +(defn -main + [& args] + (let [[filename] args + image-fname (or filename "captcha_example.png") + image-ndarray (-> image-fname + infer/load-image-from-file + (infer/reshape-image width height) + (infer/buffered-image-to-pixels [channels height width]) + (ndarray/expand-dims 0)) + label-ndarray (ndarray/zeros [1 label-width]) + predictor (create-predictor) + predictions (-> (infer/predict-with-ndarray + predictor + [image-ndarray label-ndarray]) + first + (ndarray/argmax 1) + ndarray/->vec)] + (println "CAPTCHA output:" (apply str (mapv int predictions))))) diff --git a/contrib/clojure-package/examples/captcha/src/captcha/train_ocr.clj b/contrib/clojure-package/examples/captcha/src/captcha/train_ocr.clj new file mode 100644 index 000000000..91ec2fff3 --- /dev/null +++ b/contrib/clojure-package/examples/captcha/src/captcha/train_ocr.clj @@ -0,0 +1,156 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. 
+;; + +(ns captcha.train-ocr + (:require [captcha.consts :refer :all] + [clojure.java.io :as io] + [clojure.java.shell :refer [sh]] + [org.apache.clojure-mxnet.callback :as callback] + [org.apache.clojure-mxnet.context :as context] + [org.apache.clojure-mxnet.eval-metric :as eval-metric] + [org.apache.clojure-mxnet.initializer :as initializer] + [org.apache.clojure-mxnet.io :as mx-io] + [org.apache.clojure-mxnet.module :as m] + [org.apache.clojure-mxnet.ndarray :as ndarray] + [org.apache.clojure-mxnet.optimizer :as optimizer] + [org.apache.clojure-mxnet.symbol :as sym]) + (:gen-class)) + +(when-not (.exists (io/file "captcha_example/captcha_train.lst")) + (sh "./get_data.sh")) + +(defonce train-data + (mx-io/image-record-iter {:path-imgrec "captcha_example/captcha_train.rec" + :path-imglist "captcha_example/captcha_train.lst" + :batch-size batch-size + :label-width label-width + :data-shape data-shape + :shuffle true + :seed 42})) + +(defonce eval-data + (mx-io/image-record-iter {:path-imgrec "captcha_example/captcha_test.rec" + :path-imglist "captcha_example/captcha_test.lst" + :batch-size batch-size + :label-width label-width + :data-shape data-shape})) + +(defn accuracy + [label pred & {:keys [by-character] + :or {by-character false} :as opts}] + (let [[nr nc] (ndarray/shape-vec label) + pred-context (ndarray/context pred) + label-t (-> label + ndarray/transpose + (ndarray/reshape [-1]) + (ndarray/as-in-context pred-context)) + pred-label (ndarray/argmax pred 1) + matches (ndarray/equal label-t pred-label) + [digit-matches] (-> matches + ndarray/sum + ndarray/->vec) + [complete-matches] (-> matches + (ndarray/reshape [nc nr]) + (ndarray/sum 0) + (ndarray/equal label-width) + ndarray/sum + ndarray/->vec)] + (if by-character + (float (/ digit-matches nr nc)) + (float (/ complete-matches nr))))) + +(defn get-data-symbol + [] + (let [data (sym/variable "data") + ;; normalize the input pixels + scaled (sym/div (sym/- data 127) 128) + + conv1 (sym/convolution {:data scaled :kernel [5 5] :num-filter 32}) + pool1 (sym/pooling {:data conv1 :pool-type "max" :kernel [2 2] :stride [1 1]}) + relu1 (sym/activation {:data pool1 :act-type "relu"}) + + conv2 (sym/convolution {:data relu1 :kernel [5 5] :num-filter 32}) + pool2 (sym/pooling {:data conv2 :pool-type "avg" :kernel [2 2] :stride [1 1]}) + relu2 (sym/activation {:data pool2 :act-type "relu"}) + + conv3 (sym/convolution {:data relu2 :kernel [3 3] :num-filter 32}) + pool3 (sym/pooling {:data conv3 :pool-type "avg" :kernel [2 2] :stride [1 1]}) + relu3 (sym/activation {:data pool3 :act-type "relu"}) + + conv4 (sym/convolution {:data relu3 :kernel [3 3] :num-filter 32}) + pool4 (sym/pooling {:data conv4 :pool-type "avg" :kernel [2 2] :stride [1 1]}) + relu4 (sym/activation {:data pool4 :act-type "relu"}) + + flattened (sym/flatten {:data relu4}) + fc1 (sym/fully-connected {:data flattened :num-hidden 256}) + fc21 (sym/fully-connected {:data fc1 :num-hidden num-labels}) + fc22 (sym/fully-connected {:data fc1 :num-hidden num-labels}) + fc23 (sym/fully-connected {:data fc1 :num-hidden num-labels}) + fc24 (sym/fully-connected {:data fc1 :num-hidden num-labels})] + (sym/concat "concat" nil [fc21 fc22 fc23 fc24] {:dim 0}))) + +(defn get-label-symbol + [] + (as-> (sym/variable "label") label + (sym/transpose {:data label}) + (sym/reshape {:data label :shape [-1]}))) + +(defn create-captcha-net + [] + (let [scores (get-data-symbol) + labels (get-label-symbol)] + (sym/softmax-output {:data scores :label labels}))) + +(def optimizer + (optimizer/adam + 
{:learning-rate 0.0002 + :wd 0.00001 + :clip-gradient 10})) + +(defn train-ocr + [devs] + (println "Starting the captcha training ...") + (let [model (m/module + (create-captcha-net) + {:data-names ["data"] :label-names ["label"] + :contexts devs})] + (m/fit model {:train-data train-data + :eval-data eval-data + :num-epoch 10 + :fit-params (m/fit-params + {:kvstore "local" + :batch-end-callback + (callback/speedometer batch-size 100) + :initializer + (initializer/xavier {:factor-type "in" + :magnitude 2.34}) + :optimizer optimizer + :eval-metric (eval-metric/custom-metric + #(accuracy %1 %2) + "accuracy")})}) + (println "Finished the fit") + model)) + +(defn -main + [& args] + (let [[dev dev-num] args + num-devices (Integer/parseInt (or dev-num "1")) + devs (if (= dev ":gpu") + (mapv #(context/gpu %) (range num-devices)) + (mapv #(context/cpu %) (range num-devices))) + model (train-ocr devs)] + (m/save-checkpoint model {:prefix model-prefix :epoch 0}))) diff --git a/contrib/clojure-package/examples/captcha/test/captcha/train_ocr_test.clj b/contrib/clojure-package/examples/captcha/test/captcha/train_ocr_test.clj new file mode 100644 index 000000000..ab785f7fe --- /dev/null +++ b/contrib/clojure-package/examples/captcha/test/captcha/train_ocr_test.clj @@ -0,0 +1,119 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns captcha.train-ocr-test + (:require [clojure.test :refer :all] + [captcha.consts :refer :all] + [captcha.train-ocr :refer :all] + [org.apache.clojure-mxnet.io :as mx-io] + [org.apache.clojure-mxnet.module :as m] + [org.apache.clojure-mxnet.ndarray :as ndarray] + [org.apache.clojure-mxnet.shape :as shape] + [org.apache.clojure-mxnet.util :as util])) + +(deftest test-consts + (is (= 8 batch-size)) + (is (= [3 30 80] data-shape)) + (is (= 4 label-width)) + (is (= 10 num-labels))) + +(deftest test-labeled-data + (let [train-batch (mx-io/next train-data) + eval-batch (mx-io/next eval-data) + allowed-labels (into #{} (map float (range 10)))] + (is (= 8 (-> train-batch mx-io/batch-index count))) + (is (= 8 (-> eval-batch mx-io/batch-index count))) + (is (= [8 3 30 80] (-> train-batch + mx-io/batch-data + first + ndarray/shape-vec))) + (is (= [8 3 30 80] (-> eval-batch + mx-io/batch-data + first + ndarray/shape-vec))) + (is (every? #(<= 0 % 255) (-> train-batch + mx-io/batch-data + first + ndarray/->vec))) + (is (every? #(<= 0 % 255) (-> eval-batch + mx-io/batch-data + first + ndarray/->vec))) + (is (= [8 4] (-> train-batch + mx-io/batch-label + first + ndarray/shape-vec))) + (is (= [8 4] (-> eval-batch + mx-io/batch-label + first + ndarray/shape-vec))) + (is (every? allowed-labels (-> train-batch + mx-io/batch-label + first + ndarray/->vec))) + (is (every? 
allowed-labels (-> eval-batch + mx-io/batch-label + first + ndarray/->vec))))) + +(deftest test-model + (let [batch (mx-io/next train-data) + model (m/module (create-captcha-net) + {:data-names ["data"] :label-names ["label"]}) + _ (m/bind model + {:data-shapes (mx-io/provide-data-desc train-data) + :label-shapes (mx-io/provide-label-desc train-data)}) + _ (m/init-params model) + _ (m/forward-backward model batch) + output-shapes (-> model + m/output-shapes + util/coerce-return-recursive) + outputs (-> model + m/outputs-merged + first) + grads (->> model m/grad-arrays (map first))] + (is (= [["softmaxoutput0_output" (shape/->shape [8 10])]] + output-shapes)) + (is (= [32 10] (-> outputs ndarray/shape-vec))) + (is (every? #(<= 0.0 % 1.0) (-> outputs ndarray/->vec))) + (is (= [[32 3 5 5] [32] ; convolution1 weights+bias + [32 32 5 5] [32] ; convolution2 weights+bias + [32 32 3 3] [32] ; convolution3 weights+bias + [32 32 3 3] [32] ; convolution4 weights+bias + [256 28672] [256] ; fully-connected1 weights+bias + [10 256] [10] ; 1st label scores + [10 256] [10] ; 2nd label scores + [10 256] [10] ; 3rd label scores + [10 256] [10]] ; 4th label scores + (map ndarray/shape-vec grads))))) + +(deftest test-accuracy + (let [labels (ndarray/array [1 2 3 4, + 5 6 7 8] + [2 4]) + pred-labels (ndarray/array [1 0, + 2 6, + 3 0, + 4 8] + [8]) + preds (ndarray/one-hot pred-labels 10)] + (is (float? (accuracy labels preds))) + (is (float? (accuracy labels preds :by-character false))) + (is (float? (accuracy labels preds :by-character true))) + (is (= 0.5 (accuracy labels preds))) + (is (= 0.5 (accuracy labels preds :by-character false))) + (is (= 0.75 (accuracy labels preds :by-character true))))) diff --git a/contrib/clojure-package/examples/cnn-text-classification/README.md b/contrib/clojure-package/examples/cnn-text-classification/README.md index 86a8abb06..19bb91373 100644 --- a/contrib/clojure-package/examples/cnn-text-classification/README.md +++ b/contrib/clojure-package/examples/cnn-text-classification/README.md @@ -3,19 +3,19 @@ An example of text classification using CNN To use you must download the MR polarity dataset and put it in the path specified in the mr-dataset-path -The dataset can be obtained here: [https://github.com/yoonkim/CNN_sentence](https://github.com/yoonkim/CNN_sentence). The two files `rt-polarity.neg` +The dataset can be obtained here: [CNN_sentence](https://github.com/yoonkim/CNN_sentence). The two files `rt-polarity.neg` and `rt-polarity.pos` must be put in a directory. For example, `data/mr-data/rt-polarity.neg`. You also must download the glove word embeddings. 
The suggested one to use is the smaller 50 dimension one -`glove.6B.50d.txt` which is contained in the download file here [https://nlp.stanford.edu/projects/glove/](https://nlp.stanford.edu/projects/glove/) +`glove.6B.50d.txt` which is contained in the download file here: [GloVe](https://nlp.stanford.edu/projects/glove/) ## Usage You can run through the repl with -`(train-convnet {:embedding-size 50 :batch-size 100 :test-size 100 :num-epoch 10 :max-examples 1000})` +`(train-convnet {:embedding-size 50 :batch-size 100 :test-size 100 :num-epoch 10 :max-examples 1000 :pretrained-embedding :glove})` or -`JVM_OPTS="Xmx1g" lein run` (cpu) +`JVM_OPTS="-Xmx1g" lein run` (cpu) You can control the devices you run on by doing: @@ -24,10 +24,36 @@ You can control the devices you run on by doing: `lein run :gpu 2` - This will run on 2 gpu devices -The max-examples only loads 1000 each of the dataset to keep the time and memory down. To run all the examples, -change the main to be (train-convnet {:embedding-size 50 :batch-size 100 :test-size 1000 :num-epoch 10) +The max-examples only loads 1000 each of the dataset to keep the time and memory down. To run all the examples, +change the main to be (train-convnet {:embedding-size 50 :batch-size 100 :test-size 1000 :num-epoch 10 :pretrained-embedding :glove}) and then run - `lein uberjar` - `java -Xms1024m -Xmx2048m -jar target/cnn-text-classification-0.1.0-SNAPSHOT-standalone.jar` + +## Usage with word2vec + +You can also use word2vec embeddings in order to train the text classification model. +Before training, you will need to download [GoogleNews-vectors-negative300.bin](https://drive.google.com/file/d/0B7XkCwpI5KDYNlNUTTlSS21pQmM/edit?usp=sharing) first. +Once you've downloaded the embeddings (which are in a gzipped format), +you'll need to unzip them and place them in the `contrib/clojure-package/data` directory. + +Then you can run training on a subset of examples through the repl using: +``` +(train-convnet {:embedding-size 300 :batch-size 100 :test-size 100 :num-epoch 10 :max-examples 1000 :pretrained-embedding :word2vec}) +``` +Note that loading word2vec embeddings consumes memory and takes some time. + +You can also train them using `JVM_OPTS="-Xmx8g" lein run` once you've modified +the parameters to `train-convnet` (see above) in `src/cnn_text_classification/classifier.clj`. +In order to run training with word2vec on the complete data set, you will need to run: +``` +(train-convnet {:embedding-size 300 :batch-size 100 :test-size 1000 :num-epoch 10 :pretrained-embedding :word2vec}) +``` +You should be able to achieve an accuracy of `~0.78` using the parameters above. + +## Usage with learned embeddings + +Lastly, similar to the python CNN text classification example, you can learn the embeddings based on training data. +This can be achieved by setting `:pretrained-embedding nil` (or omitting that parameter altogether). diff --git a/contrib/clojure-package/examples/cnn-text-classification/project.clj b/contrib/clojure-package/examples/cnn-text-classification/project.clj index f44144942..29ebefe5d 100644 --- a/contrib/clojure-package/examples/cnn-text-classification/project.clj +++ b/contrib/clojure-package/examples/cnn-text-classification/project.clj @@ -19,6 +19,6 @@ :description "CNN text classification with MXNet" :plugins [[lein-cljfmt "0.5.7"]] :dependencies [[org.clojure/clojure "1.9.0"] - [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.3.1-SNAPSHOT"]] + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"]] :pedantic? 
:skip :main cnn-text-classification.classifier) diff --git a/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj b/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj index 29ff36fe1..3c0288c9c 100644 --- a/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj +++ b/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/classifier.clj @@ -16,7 +16,9 @@ ;; (ns cnn-text-classification.classifier - (:require [cnn-text-classification.data-helper :as data-helper] + (:require [clojure.java.io :as io] + [clojure.java.shell :refer [sh]] + [cnn-text-classification.data-helper :as data-helper] [org.apache.clojure-mxnet.eval-metric :as eval-metric] [org.apache.clojure-mxnet.io :as mx-io] [org.apache.clojure-mxnet.module :as m] @@ -26,30 +28,50 @@ [org.apache.clojure-mxnet.context :as context]) (:gen-class)) +(def data-dir "data/") (def mr-dataset-path "data/mr-data") ;; the MR polarity dataset path -(def glove-file-path "data/glove/glove.6B.50d.txt") (def num-filter 100) (def num-label 2) (def dropout 0.5) -(defn shuffle-data [test-num {:keys [data label sentence-count sentence-size embedding-size]}] +(when-not (.exists (io/file (str data-dir))) + (do (println "Retrieving data for cnn text classification...") (sh "./get_data.sh"))) + +(defn shuffle-data [test-num {:keys [data label sentence-count sentence-size vocab-size embedding-size pretrained-embedding]}] (println "Shuffling the data and splitting into training and test sets") (println {:sentence-count sentence-count :sentence-size sentence-size - :embedding-size embedding-size}) + :vocab-size vocab-size + :embedding-size embedding-size + :pretrained-embedding pretrained-embedding}) (let [shuffled (shuffle (map #(vector %1 %2) data label)) train-num (- (count shuffled) test-num) training (into [] (take train-num shuffled)) - test (into [] (drop train-num shuffled))] + test (into [] (drop train-num shuffled)) + ;; has to be channel x y + train-data-shape (if pretrained-embedding + [train-num 1 sentence-size embedding-size] + [train-num 1 sentence-size]) + ;; has to be channel x y + test-data-shape (if pretrained-embedding + [test-num 1 sentence-size embedding-size] + [test-num 1 sentence-size])] {:training {:data (ndarray/array (into [] (flatten (mapv first training))) - [train-num 1 sentence-size embedding-size]) ;; has to be channel x y + train-data-shape) :label (ndarray/array (into [] (flatten (mapv last training))) [train-num])} :test {:data (ndarray/array (into [] (flatten (mapv first test))) - [test-num 1 sentence-size embedding-size]) ;; has to be channel x y + test-data-shape) :label (ndarray/array (into [] (flatten (mapv last test))) [test-num])}})) +(defn get-data-symbol [num-embed sentence-size batch-size vocab-size pretrained-embedding] + (if pretrained-embedding + (sym/variable "data") + (as-> (sym/variable "data") data + (sym/embedding "vocab_embed" {:data data :input-dim vocab-size :output-dim num-embed}) + (sym/reshape {:data data :target-shape [batch-size 1 sentence-size num-embed]})))) + (defn make-filter-layers [{:keys [input-x num-embed sentence-size] :as config} filter-size] (as-> (sym/convolution {:data input-x @@ -63,9 +85,9 @@ ;;; convnet with multiple filter sizes ;; from Convolutional Neural Networks for Sentence Classification by Yoon Kim -(defn get-multi-filter-convnet [num-embed sentence-size batch-size] +(defn get-multi-filter-convnet [num-embed 
sentence-size batch-size vocab-size pretrained-embedding] (let [filter-list [3 4 5] - input-x (sym/variable "data") + input-x (get-data-symbol num-embed sentence-size batch-size vocab-size pretrained-embedding) polled-outputs (mapv #(make-filter-layers {:input-x input-x :num-embed num-embed :sentence-size sentence-size} %) filter-list) total-filters (* num-filter (count filter-list)) concat (sym/concat "concat" nil polled-outputs {:dim 1}) @@ -74,10 +96,11 @@ fc (sym/fully-connected "fc1" {:data hdrop :num-hidden num-label})] (sym/softmax-output "softmax" {:data fc}))) -(defn train-convnet [{:keys [devs embedding-size batch-size test-size num-epoch max-examples]}] - (let [glove (data-helper/load-glove glove-file-path) ;; you can also use word2vec - ms-dataset (data-helper/load-ms-with-embeddings mr-dataset-path embedding-size glove max-examples) +(defn train-convnet [{:keys [devs embedding-size batch-size test-size + num-epoch max-examples pretrained-embedding]}] + (let [ms-dataset (data-helper/load-ms-with-embeddings mr-dataset-path max-examples embedding-size {:pretrained-embedding pretrained-embedding}) sentence-size (:sentence-size ms-dataset) + vocab-size (:vocab-size ms-dataset) shuffled (shuffle-data test-size ms-dataset) train-data (mx-io/ndarray-iter [(get-in shuffled [:training :data])] {:label [(get-in shuffled [:training :label])] @@ -89,7 +112,7 @@ :label-name "softmax_label" :data-batch-size batch-size :last-batch-handle "pad"})] - (let [mod (m/module (get-multi-filter-convnet embedding-size sentence-size batch-size) {:contexts devs})] + (let [mod (m/module (get-multi-filter-convnet embedding-size sentence-size batch-size vocab-size pretrained-embedding) {:contexts devs})] (println "Getting ready to train for " num-epoch " epochs") (println "===========") (m/fit mod {:train-data train-data :eval-data test-data :num-epoch num-epoch @@ -103,10 +126,10 @@ ;;; omit max-examples if you want to run all the examples in the movie review dataset ;; to limit mem consumption set to something like 1000 and adjust test size to 100 (println "Running with context devices of" devs) - (train-convnet {:devs [(context/cpu)] :embedding-size 50 :batch-size 10 :test-size 100 :num-epoch 10 :max-examples 1000}) + (train-convnet {:devs devs :embedding-size 50 :batch-size 10 :test-size 100 :num-epoch 10 :max-examples 1000 :pretrained-embedding :glove}) ;; runs all the examples #_(train-convnet {:embedding-size 50 :batch-size 100 :test-size 1000 :num-epoch 10}))) (comment - (train-convnet {:devs [(context/cpu)] :embedding-size 50 :batch-size 10 :test-size 100 :num-epoch 10 :max-examples 1000})) + (train-convnet {:devs devs :embedding-size 50 :batch-size 10 :test-size 100 :num-epoch 10 :max-examples 1000})) diff --git a/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/data_helper.clj b/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/data_helper.clj index 796652177..82ba13087 100644 --- a/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/data_helper.clj +++ b/contrib/clojure-package/examples/cnn-text-classification/src/cnn_text_classification/data_helper.clj @@ -21,53 +21,84 @@ [org.apache.clojure-mxnet.context :as context] [org.apache.clojure-mxnet.ndarray :as ndarray] [org.apache.clojure-mxnet.random :as random]) - (:import (java.io DataInputStream)) + (:import (java.io DataInputStream) + (java.nio ByteBuffer ByteOrder)) (:gen-class)) (def w2v-file-path 
"../../data/GoogleNews-vectors-negative300.bin") ;; the word2vec file path -(def max-vectors 100) ;; If you are using word2vec embeddings and you want to only load part of them - -(defn r-string [dis] - (let [max-size 50 - bs (byte-array max-size) - sb (new StringBuilder)] - (loop [b (.readByte dis) - i 0] - (if (and (not= 32 b) (not= 10 b)) - (do (aset bs i b) - (if (= 49 i) - (do (.append sb (new String bs)) - (recur (.readByte dis) 0)) - (recur (.readByte dis) (inc i)))) - (.append sb (new String bs 0 i)))) - (.toString sb))) - -(defn get-float [b] - (-> 0 - (bit-or (bit-shift-left (bit-and (aget b 0) 0xff) 0)) - (bit-or (bit-shift-left (bit-and (aget b 1) 0xff) 8)) - (bit-or (bit-shift-left (bit-and (aget b 2) 0xff) 16)) - (bit-or (bit-shift-left (bit-and (aget b 3) 0xff) 24)))) +(def EOS "") ;; end of sentence word + +(defn glove-file-path + "Returns the file path to GloVe embedding of the input size" + [embedding-size] + (format "data/glove/glove.6B.%dd.txt" embedding-size)) + +(defn r-string + "Reads a string from the given DataInputStream `dis` until a space or newline is reached." + [dis] + (loop [b (.readByte dis) + bs []] + (if (and (not= 32 b) (not= 10 b)) + (recur (.readByte dis) (conj bs b)) + (new String (byte-array bs))))) + +(defn get-float [bs] + (-> (ByteBuffer/wrap bs) + (.order ByteOrder/LITTLE_ENDIAN) + (.getFloat))) (defn read-float [is] (let [bs (byte-array 4)] (do (.read is bs) (get-float bs)))) -(defn load-google-model [path] - (println "Loading the word2vec model from binary ...") - (with-open [bis (io/input-stream path) - dis (new DataInputStream bis)] - (let [word-size (Integer/parseInt (r-string dis)) - dim (Integer/parseInt (r-string dis)) - _ (println "Processing with " {:dim dim :word-size word-size} " loading max vectors " max-vectors) - word2vec (reduce (fn [r _] - (assoc r (r-string dis) - (mapv (fn [_] (read-float dis)) (range dim)))) - {} - (range max-vectors))] - (println "Finished") - {:num-embed dim :word2vec word2vec}))) +(defn- load-w2v-vectors + "Lazily loads the word2vec vectors given a data input stream `dis`, + number of words `nwords` and dimensionality `embedding-size`." + [dis embedding-size num-vectors] + (if (= 0 num-vectors) + (list) + (let [word (r-string dis) + vect (mapv (fn [_] (read-float dis)) (range embedding-size))] + (cons [word vect] (lazy-seq (load-w2v-vectors dis embedding-size (dec num-vectors))))))) + +(defn load-word2vec-model + "Loads the word2vec model stored in a binary format from the given `path`. + By default only the first 100 embeddings are loaded." + ([path embedding-size opts] + (println "Loading the word2vec model from binary ...") + (with-open [bis (io/input-stream path) + dis (new DataInputStream bis)] + (let [word-size (Integer/parseInt (r-string dis)) + dim (Integer/parseInt (r-string dis)) + {:keys [max-vectors vocab] :or {max-vectors word-size}} opts + _ (println "Processing with " {:dim dim :word-size word-size} " loading max vectors " max-vectors) + _ (if (not= embedding-size dim) + (throw (ex-info "Mismatch in embedding size" + {:input-embedding-size embedding-size + :word2vec-embedding-size dim}))) + vectors (load-w2v-vectors dis dim max-vectors) + word2vec (if vocab + (->> vectors + (filter (fn [[w _]] (contains? 
vocab w))) + (into {})) + (->> vectors + (take max-vectors) + (into {})))] + (println "Finished") + {:num-embed dim :word2vec word2vec}))) + ([path embedding-size] + (load-word2vec-model path embedding-size {:max-vectors 100}))) + +(defn read-text-embedding-pairs [rdr] + (for [^String line (line-seq rdr) + :let [fields (.split line " ")]] + [(aget fields 0) + (mapv #(Float/parseFloat ^String %) (rest fields))])) + +(defn load-glove [glove-file-path] + (println "Loading the glove pre-trained word embeddings from " glove-file-path) + (into {} (read-text-embedding-pairs (io/reader glove-file-path)))) (defn clean-str [s] (-> s @@ -84,9 +115,12 @@ (string/replace #"\)" " ) ") (string/replace #"\?" " ? ") (string/replace #" {2,}" " ") - (string/trim)));; Loads MR polarity data from files, splits the data into words and generates labels. - ;; Returns split sentences and labels. -(defn load-mr-data-and-labels [path max-examples] + (string/trim))) + +(defn load-mr-data-and-labels + "Loads MR polarity data from files, splits the data into words and generates labels. + Returns split sentences and labels." + [path max-examples] (println "Loading all the movie reviews from " path) (let [positive-examples (mapv #(string/trim %) (-> (slurp (str path "/rt-polarity.pos")) (string/split #"\n"))) @@ -104,41 +138,68 @@ negative-labels (mapv (constantly 0) negative-examples)] {:sentences x-text :labels (into positive-labels negative-labels)})) -;; Pads all sentences to the same length. The length is defined by the longest sentence. -;; Returns padded sentences. -(defn pad-sentences [sentences] - (let [padding-word "" +(defn pad-sentences + "Pads all sentences to the same length where the length is defined by the longest sentence. Returns padded sentences." + [sentences] + (let [padding-word EOS sequence-len (apply max (mapv count sentences))] (mapv (fn [s] (let [diff (- sequence-len (count s))] (if (pos? diff) (into s (repeat diff padding-word)) s))) - sentences)));; Map sentences and labels to vectors based on a pretrained embeddings -(defn build-input-data-with-embeddings [sentences embedding-size embeddings] - (mapv (fn [sent] - (mapv (fn [word] (or (get embeddings word) - (ndarray/->vec (random/uniform -0.25 0.25 [embedding-size])))) - sent)) - sentences)) - -(defn load-ms-with-embeddings [path embedding-size embeddings max-examples] - (println "Translating the movie review words into the embeddings") + sentences))) + +(defn build-vocab-embeddings + "Returns the subset of `embeddings` for words from the `vocab`. + Embeddings for words not in the vocabulary are initialized randomly + from a uniform distribution." + [vocab embedding-size embeddings] + (into {} + (mapv (fn [[word _]] + [word (or (get embeddings word) + (ndarray/->vec (random/uniform -0.25 0.25 [embedding-size])))]) + vocab))) + +(defn build-input-data-with-embeddings + "Map sentences and labels to vectors based on a pretrained embeddings." + [sentences embeddings] + (mapv (fn [sent] (mapv #(embeddings %) sent)) sentences)) + +(defn build-vocab + "Creates a vocabulary for the data set based on frequency of words. + Returns a map from words to unique indices." + [sentences] + (let [words (flatten sentences) + wc (reduce + (fn [m w] (update-in m [w] (fnil inc 0))) + {} + words) + sorted-wc (sort-by second > wc) + sorted-w (map first sorted-wc)] + (into {} (map vector sorted-w (range (count sorted-w)))))) + +(defn load-ms-with-embeddings + "Loads the movie review sentences data set for the given + `:pretrained-embedding` (e.g. 
`nil`, `:glove` or `:word2vec`)" + [path max-examples embedding-size {:keys [pretrained-embedding] + :or {pretrained-embedding nil} + :as opts}] (let [{:keys [sentences labels]} (load-mr-data-and-labels path max-examples) sentences-padded (pad-sentences sentences) - data (build-input-data-with-embeddings sentences-padded embedding-size embeddings)] + vocab (build-vocab sentences-padded) + vocab-embeddings (case pretrained-embedding + :glove (->> (load-glove (glove-file-path embedding-size)) + (build-vocab-embeddings vocab embedding-size)) + :word2vec (->> (load-word2vec-model w2v-file-path embedding-size {:vocab vocab}) + (:word2vec) + (build-vocab-embeddings vocab embedding-size)) + vocab) + data (build-input-data-with-embeddings sentences-padded vocab-embeddings)] {:data data :label labels :sentence-count (count data) :sentence-size (count (first data)) - :embedding-size embedding-size})) - -(defn read-text-embedding-pairs [rdr] - (for [^String line (line-seq rdr) - :let [fields (.split line " ")]] - [(aget fields 0) - (mapv #(Double/parseDouble ^String %) (rest fields))])) - -(defn load-glove [glove-file-path] - (println "Loading the glove pre-trained word embeddings from " glove-file-path) - (into {} (read-text-embedding-pairs (io/reader glove-file-path)))) + :embedding-size embedding-size + :vocab-size (count vocab) + :pretrained-embedding pretrained-embedding})) diff --git a/contrib/clojure-package/examples/cnn-text-classification/test/cnn_text_classification/classifier_test.clj b/contrib/clojure-package/examples/cnn-text-classification/test/cnn_text_classification/classifier_test.clj new file mode 100644 index 000000000..744307e3e --- /dev/null +++ b/contrib/clojure-package/examples/cnn-text-classification/test/cnn_text_classification/classifier_test.clj @@ -0,0 +1,48 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. 
+;; + +(ns cnn-text-classification.classifier-test + (:require [clojure.test :refer :all] + [org.apache.clojure-mxnet.module :as module] + [org.apache.clojure-mxnet.ndarray :as ndarray] + [org.apache.clojure-mxnet.util :as util] + [org.apache.clojure-mxnet.context :as context] + [cnn-text-classification.classifier :as classifier])) + +(deftest classifier-with-embeddings-test + (let [train (classifier/train-convnet + {:devs [(context/default-context)] + :embedding-size 50 + :batch-size 10 + :test-size 100 + :num-epoch 1 + :max-examples 1000 + :pretrained-embedding :glove})] + (is (= ["data"] (util/scala-vector->vec (module/data-names train)))) + (is (= 20 (count (ndarray/->vec (-> train module/outputs ffirst))))))) + +(deftest classifier-without-embeddings-test + (let [train (classifier/train-convnet + {:devs [(context/default-context)] + :embedding-size 50 + :batch-size 10 + :test-size 100 + :num-epoch 1 + :max-examples 1000 + :pretrained-embedding nil})] + (is (= ["data"] (util/scala-vector->vec (module/data-names train)))) + (is (= 20 (count (ndarray/->vec (-> train module/outputs ffirst))))))) diff --git a/contrib/clojure-package/examples/gan/project.clj b/contrib/clojure-package/examples/gan/project.clj index 06f623227..a326f7a56 100644 --- a/contrib/clojure-package/examples/gan/project.clj +++ b/contrib/clojure-package/examples/gan/project.clj @@ -19,6 +19,7 @@ :description "GAN MNIST with MXNet" :plugins [[lein-cljfmt "0.5.7"]] :dependencies [[org.clojure/clojure "1.9.0"] - [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.3.1-SNAPSHOT"] - [nu.pattern/opencv "2.4.9-7"]] + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"] + [org.openpnp/opencv "3.4.2-1"] + ] :main gan.gan-mnist) diff --git a/contrib/clojure-package/examples/gan/src/gan/gan_mnist.clj b/contrib/clojure-package/examples/gan/src/gan/gan_mnist.clj index e2e336453..944791bce 100644 --- a/contrib/clojure-package/examples/gan/src/gan/gan_mnist.clj +++ b/contrib/clojure-package/examples/gan/src/gan/gan_mnist.clj @@ -157,7 +157,9 @@ (save-img-diff i n calc-diff)))) -(defn train [devs] +(defn train + ([devs] (train devs num-epoch)) + ([devs num-epoch] (let [mod-d (-> (m/module (discriminator) {:contexts devs :data-names ["data"] :label-names ["label"]}) (m/bind {:data-shapes (mx-io/provide-data-desc mnist-iter) :label-shapes (mx-io/provide-label-desc mnist-iter) @@ -203,7 +205,7 @@ (save-img-gout i n (ndarray/copy (ffirst out-g))) (save-img-data i n batch) (calc-diff i n (ffirst diff-d))) - (inc n))))))) + (inc n)))))))) (defn -main [& args] (let [[dev dev-num] args diff --git a/contrib/clojure-package/examples/gan/src/gan/viz.clj b/contrib/clojure-package/examples/gan/src/gan/viz.clj index 8b57b9432..67f78806d 100644 --- a/contrib/clojure-package/examples/gan/src/gan/viz.clj +++ b/contrib/clojure-package/examples/gan/src/gan/viz.clj @@ -22,7 +22,7 @@ (:import (nu.pattern OpenCV) (org.opencv.core Core CvType Mat Size) (org.opencv.imgproc Imgproc) - (org.opencv.highgui Highgui))) + (org.opencv.imgcodecs Imgcodecs))) ;;; Viz stuff (OpenCV/loadShared) @@ -83,5 +83,5 @@ _ (Core/vconcat (java.util.ArrayList. 
line-mats) result)] (do (Imgproc/resize result resized-img (new Size (* (.width result) 1.5) (* (.height result) 1.5))) - (Highgui/imwrite (str output-path title ".jpg") resized-img) + (Imgcodecs/imwrite (str output-path title ".jpg") resized-img) (Thread/sleep 1000)))) diff --git a/contrib/clojure-package/examples/gan/test/gan/gan_test.clj b/contrib/clojure-package/examples/gan/test/gan/gan_test.clj new file mode 100644 index 000000000..71b9126ca --- /dev/null +++ b/contrib/clojure-package/examples/gan/test/gan/gan_test.clj @@ -0,0 +1,25 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns gan.gan_test + (:require + [gan.gan-mnist :refer :all] + [org.apache.clojure-mxnet.context :as context] + [clojure.test :refer :all])) + +(deftest check-pdf + (train [(context/cpu)] 1)) \ No newline at end of file diff --git a/contrib/clojure-package/examples/imclassification/project.clj b/contrib/clojure-package/examples/imclassification/project.clj index ad0a28a76..5f77cf55c 100644 --- a/contrib/clojure-package/examples/imclassification/project.clj +++ b/contrib/clojure-package/examples/imclassification/project.clj @@ -19,6 +19,6 @@ :description "Clojure examples for image classification" :plugins [[lein-cljfmt "0.5.7"]] :dependencies [[org.clojure/clojure "1.9.0"] - [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.3.1-SNAPSHOT"]] + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"]] :pedantic? 
:skip :main imclassification.train-mnist) diff --git a/contrib/clojure-package/examples/imclassification/src/imclassification/train_mnist.clj b/contrib/clojure-package/examples/imclassification/src/imclassification/train_mnist.clj index a43dc3b69..e61e9ebf6 100644 --- a/contrib/clojure-package/examples/imclassification/src/imclassification/train_mnist.clj +++ b/contrib/clojure-package/examples/imclassification/src/imclassification/train_mnist.clj @@ -32,7 +32,7 @@ (def batch-size 10) ;; the batch size (def optimizer (optimizer/sgd {:learning-rate 0.01 :momentum 0.0})) (def eval-metric (eval-metric/accuracy)) -(def num-epoch 5) ;; the number of training epochs +(def num-epoch 1) ;; the number of training epochs (def kvstore "local") ;; the kvstore type ;;; Note to run distributed you might need to complile the engine with an option set (def role "worker") ;; scheduler/server/worker @@ -82,7 +82,9 @@ (sym/fully-connected "fc3" {:data data :num-hidden 10}) (sym/softmax-output "softmax" {:data data}))) -(defn start [devs] +(defn start + ([devs] (start devs num-epoch)) + ([devs _num-epoch] (when scheduler-host (println "Initing PS enviornments with " envs) (kvstore-server/init envs)) @@ -94,14 +96,18 @@ (do (println "Starting Training of MNIST ....") (println "Running with context devices of" devs) - (let [mod (m/module (get-symbol) {:contexts devs})] - (m/fit mod {:train-data train-data + (let [_mod (m/module (get-symbol) {:contexts devs})] + (m/fit _mod {:train-data train-data :eval-data test-data - :num-epoch num-epoch + :num-epoch _num-epoch :fit-params (m/fit-params {:kvstore kvstore :optimizer optimizer - :eval-metric eval-metric})})) - (println "Finish fit")))) + :eval-metric eval-metric})}) + (println "Finish fit") + _mod + ) + + )))) (defn -main [& args] (let [[dev dev-num] args diff --git a/contrib/clojure-package/examples/imclassification/test/imclassification/train_mnist_test.clj b/contrib/clojure-package/examples/imclassification/test/imclassification/train_mnist_test.clj new file mode 100644 index 000000000..2ebefc2fc --- /dev/null +++ b/contrib/clojure-package/examples/imclassification/test/imclassification/train_mnist_test.clj @@ -0,0 +1,39 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns imclassification.train-mnist-test + (:require + [clojure.test :refer :all] + [clojure.java.io :as io] + [clojure.string :as s] + [org.apache.clojure-mxnet.context :as context] + [org.apache.clojure-mxnet.module :as module] + [imclassification.train-mnist :as mnist])) + +(defn- file-to-filtered-seq [file] + (->> + file + (io/file) + (io/reader) + (line-seq) + (filter #(not (s/includes? 
% "mxnet_version"))))) + +(deftest mnist-two-epochs-test + (module/save-checkpoint (mnist/start [(context/cpu)] 2) {:prefix "target/test" :epoch 2}) + (is (= + (file-to-filtered-seq "test/test-symbol.json.ref") + (file-to-filtered-seq "target/test-symbol.json")))) \ No newline at end of file diff --git a/contrib/clojure-package/examples/imclassification/test/test-symbol.json.ref b/contrib/clojure-package/examples/imclassification/test/test-symbol.json.ref new file mode 100644 index 000000000..ba1d2fad3 --- /dev/null +++ b/contrib/clojure-package/examples/imclassification/test/test-symbol.json.ref @@ -0,0 +1,105 @@ +{ + "nodes": [ + { + "op": "null", + "name": "data", + "inputs": [] + }, + { + "op": "null", + "name": "fc1_weight", + "attrs": {"num_hidden": "128"}, + "inputs": [] + }, + { + "op": "null", + "name": "fc1_bias", + "attrs": {"num_hidden": "128"}, + "inputs": [] + }, + { + "op": "FullyConnected", + "name": "fc1", + "attrs": {"num_hidden": "128"}, + "inputs": [[0, 0, 0], [1, 0, 0], [2, 0, 0]] + }, + { + "op": "Activation", + "name": "relu1", + "attrs": {"act_type": "relu"}, + "inputs": [[3, 0, 0]] + }, + { + "op": "null", + "name": "fc2_weight", + "attrs": {"num_hidden": "64"}, + "inputs": [] + }, + { + "op": "null", + "name": "fc2_bias", + "attrs": {"num_hidden": "64"}, + "inputs": [] + }, + { + "op": "FullyConnected", + "name": "fc2", + "attrs": {"num_hidden": "64"}, + "inputs": [[4, 0, 0], [5, 0, 0], [6, 0, 0]] + }, + { + "op": "Activation", + "name": "relu2", + "attrs": {"act_type": "relu"}, + "inputs": [[7, 0, 0]] + }, + { + "op": "null", + "name": "fc3_weight", + "attrs": {"num_hidden": "10"}, + "inputs": [] + }, + { + "op": "null", + "name": "fc3_bias", + "attrs": {"num_hidden": "10"}, + "inputs": [] + }, + { + "op": "FullyConnected", + "name": "fc3", + "attrs": {"num_hidden": "10"}, + "inputs": [[8, 0, 0], [9, 0, 0], [10, 0, 0]] + }, + { + "op": "null", + "name": "softmax_label", + "inputs": [] + }, + { + "op": "SoftmaxOutput", + "name": "softmax", + "inputs": [[11, 0, 0], [12, 0, 0]] + } + ], + "arg_nodes": [0, 1, 2, 5, 6, 9, 10, 12], + "node_row_ptr": [ + 0, + 1, + 2, + 3, + 4, + 5, + 6, + 7, + 8, + 9, + 10, + 11, + 12, + 13, + 14 + ], + "heads": [[13, 0, 0]], + "attrs": {"mxnet_version": ["int", 10400]} +} \ No newline at end of file diff --git a/contrib/clojure-package/examples/infer/imageclassifier/.gitignore b/contrib/clojure-package/examples/infer/imageclassifier/.gitignore new file mode 100644 index 000000000..35491f1a0 --- /dev/null +++ b/contrib/clojure-package/examples/infer/imageclassifier/.gitignore @@ -0,0 +1,12 @@ +/target +/classes +/checkouts +/images +pom.xml +pom.xml.asc +*.jar +*.class +/.lein-* +/.nrepl-port +.hgignore +.hg/ diff --git a/contrib/clojure-package/examples/infer/imageclassifier/README.md b/contrib/clojure-package/examples/infer/imageclassifier/README.md new file mode 100644 index 000000000..a8328607c --- /dev/null +++ b/contrib/clojure-package/examples/infer/imageclassifier/README.md @@ -0,0 +1,24 @@ +# imageclassifier + +Run image classification using clojure infer package. + +## Installation + +Before you run this example, make sure that you have the clojure package installed. +In the main clojure package directory, do `lein install`. Then you can run +`lein install` in this directory. 
+ +## Usage + +``` +$ chmod +x scripts/get_resnet_18_data.sh +$ ./scripts/get_resnet_18_data.sh +$ +$ lein run -- --help +$ lein run -- -m models/resnet-18/resnet-18 -i images/kitten.jpg -d images/ +$ +$ lein uberjar +$ java -jar target/imageclassifier-0.1.0-SNAPSHOT-standalone.jar --help +$ java -jar target/imageclassifier-0.1.0-SNAPSHOT-standalone.jar \ + -m models/resnet-18/resnet-18 -i images/kitten.jpg -d images/ +``` diff --git a/contrib/clojure-package/examples/infer/imageclassifier/project.clj b/contrib/clojure-package/examples/infer/imageclassifier/project.clj new file mode 100644 index 000000000..2d5b171d9 --- /dev/null +++ b/contrib/clojure-package/examples/infer/imageclassifier/project.clj @@ -0,0 +1,25 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(defproject imageclassifier "0.1.0-SNAPSHOT" + :description "Image classification using infer with MXNet" + :plugins [[lein-cljfmt "0.5.7"]] + :dependencies [[org.clojure/clojure "1.9.0"] + [org.clojure/tools.cli "0.4.1"] + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"]] + :main ^:skip-aot infer.imageclassifier-example + :profiles {:uberjar {:aot :all}}) diff --git a/contrib/clojure-package/examples/infer/imageclassifier/scripts/get_resnet_18_data.sh b/contrib/clojure-package/examples/infer/imageclassifier/scripts/get_resnet_18_data.sh new file mode 100755 index 000000000..1a142e8ed --- /dev/null +++ b/contrib/clojure-package/examples/infer/imageclassifier/scripts/get_resnet_18_data.sh @@ -0,0 +1,45 @@ +#!/bin/bash + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +set -evx + +MXNET_ROOT=$(cd "$(dirname $0)/.."; pwd) + +data_path=$MXNET_ROOT/models/resnet-18/ + +image_path=$MXNET_ROOT/images/ + +if [ ! -d "$data_path" ]; then + mkdir -p "$data_path" +fi + +if [ ! -d "$image_path" ]; then + mkdir -p "$image_path" +fi + +if [ ! 
-f "$data_path/resnet-18-0000.params" ]; then + wget https://s3.us-east-2.amazonaws.com/scala-infer-models/resnet-18/resnet-18-symbol.json -P $data_path + wget https://s3.us-east-2.amazonaws.com/scala-infer-models/resnet-18/resnet-18-0000.params -P $data_path + wget https://s3.us-east-2.amazonaws.com/scala-infer-models/resnet-18/synset.txt -P $data_path +fi + +if [ ! -f "$image_path/kitten.jpg" ]; then + wget https://s3.us-east-2.amazonaws.com/mxnet-scala/scala-example-ci/resnet152/kitten.jpg -P $image_path + wget https://s3.amazonaws.com/model-server/inputs/Pug-Cookie.jpg -P $image_path +fi diff --git a/contrib/clojure-package/examples/infer/imageclassifier/scripts/get_resnet_data.sh b/contrib/clojure-package/examples/infer/imageclassifier/scripts/get_resnet_data.sh new file mode 100755 index 000000000..fcef59bac --- /dev/null +++ b/contrib/clojure-package/examples/infer/imageclassifier/scripts/get_resnet_data.sh @@ -0,0 +1,44 @@ +#!/bin/bash + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +set -e + +MXNET_ROOT=$(cd "$(dirname $0)/.."; pwd) + +data_path=$MXNET_ROOT/models/resnet-152/ + +image_path=$MXNET_ROOT/images/ + +if [ ! -d "$data_path" ]; then + mkdir -p "$data_path" +fi + +if [ ! -d "$image_path" ]; then + mkdir -p "$image_path" +fi + +if [ ! -f "$data_path/resnet-152-0000.params" ]; then + wget https://s3.us-east-2.amazonaws.com/mxnet-scala/scala-example-ci/resnet152/resnet-152-0000.params -P $data_path + wget https://s3.us-east-2.amazonaws.com/mxnet-scala/scala-example-ci/resnet152/resnet-152-symbol.json -P $data_path + wget https://s3.us-east-2.amazonaws.com/mxnet-scala/scala-example-ci/resnet152/synset.txt -P $data_path +fi + +if [ ! -f "$image_path/kitten.jpg" ]; then + wget https://s3.us-east-2.amazonaws.com/mxnet-scala/scala-example-ci/resnet152/kitten.jpg -P $image_path +fi diff --git a/contrib/clojure-package/examples/infer/imageclassifier/src/infer/imageclassifier_example.clj b/contrib/clojure-package/examples/infer/imageclassifier/src/infer/imageclassifier_example.clj new file mode 100644 index 000000000..6994b4fad --- /dev/null +++ b/contrib/clojure-package/examples/infer/imageclassifier/src/infer/imageclassifier_example.clj @@ -0,0 +1,111 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. 
You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns infer.imageclassifier-example + (:require [org.apache.clojure-mxnet.context :as context] + [org.apache.clojure-mxnet.dtype :as dtype] + [org.apache.clojure-mxnet.infer :as infer] + [org.apache.clojure-mxnet.layout :as layout] + [clojure.java.io :as io] + [clojure.string :refer [join]] + [clojure.tools.cli :refer [parse-opts]]) + (:gen-class)) + +(defn check-valid-dir + "Check that the input directory exists" + [input-dir] + (let [dir (io/file input-dir)] + (and + (.exists dir) + (.isDirectory dir)))) + +(defn check-valid-file + "Check that the file exists" + [input-file] + (.exists (io/file input-file))) + +(def cli-options + [["-m" "--model-path-prefix PREFIX" "Model path prefix" + :default "models/resnet-18/resnet-18" + :validate [#(check-valid-file (str % "-symbol.json")) + "Model path prefix is invalid"]] + ["-i" "--input-image IMAGE" "Input image" + :default "images/kitten.jpg" + :validate [check-valid-file "Input file not found"]] + ["-d" "--input-dir IMAGE_DIR" "Input directory" + :default "images/" + :validate [check-valid-dir "Input directory not found"]] + ["-h" "--help"]]) + +(defn print-predictions + "Print image classifier predictions for the given input file" + [predictions] + (println (apply str (repeat 80 "="))) + (doseq [p predictions] + (println p)) + (println (apply str (repeat 80 "=")))) + +(defn classify-single-image + "Classify a single image and print top-5 predictions" + [classifier input-image] + (let [image (infer/load-image-from-file input-image) + topk 5 + predictions (infer/classify-image classifier image topk)] + [predictions])) + +(defn classify-images-in-dir + "Classify all jpg images in the directory" + [classifier input-dir] + (let [batch-size 20 + image-file-batches (->> input-dir + io/file + file-seq + (filter #(.isFile %)) + (filter #(re-matches #".*\.jpg$" (.getPath %))) + (mapv #(.getPath %)) + (partition-all batch-size))] + (apply concat (for [image-files image-file-batches] + (let [image-batch (infer/load-image-paths image-files) + topk 5] + (infer/classify-image-batch classifier image-batch topk)))))) + +(defn run-classifier + "Runs an image classifier based on options provided" + [options] + (let [{:keys [model-path-prefix input-image input-dir]} options + descriptors [{:name "data" + :shape [1 3 224 224] + :layout layout/NCHW + :dtype dtype/FLOAT32}] + factory (infer/model-factory model-path-prefix descriptors) + classifier (infer/create-image-classifier + factory {:contexts [(context/default-context)]})] + (println "Classifying a single image") + (print-predictions (classify-single-image classifier input-image)) + (println "\n") + (println "Classifying images in a directory") + (doseq [predictions (classify-images-in-dir classifier input-dir)] + (print-predictions predictions)))) + +(defn -main + [& args] + (let [{:keys [options summary errors] :as opts} + (parse-opts args cli-options)] + (cond + (:help options) (println summary) + (some? 
errors) (println (join "\n" errors)) + :else (run-classifier options)))) diff --git a/contrib/clojure-package/examples/infer/imageclassifier/test/infer/imageclassifier_example_test.clj b/contrib/clojure-package/examples/infer/imageclassifier/test/infer/imageclassifier_example_test.clj new file mode 100644 index 000000000..4b71f845d --- /dev/null +++ b/contrib/clojure-package/examples/infer/imageclassifier/test/infer/imageclassifier_example_test.clj @@ -0,0 +1,58 @@ +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns infer.imageclassifier-example-test + (:require [infer.imageclassifier-example :refer [classify-single-image + classify-images-in-dir]] + [org.apache.clojure-mxnet.context :as context] + [org.apache.clojure-mxnet.dtype :as dtype] + [org.apache.clojure-mxnet.infer :as infer] + [org.apache.clojure-mxnet.layout :as layout] + [clojure.java.io :as io] + [clojure.java.shell :refer [sh]] + [clojure.test :refer :all])) + +(def model-dir "models/") +(def image-dir "images/") +(def model-path-prefix (str model-dir "resnet-18/resnet-18")) +(def image-file (str image-dir "kitten.jpg")) + +(when-not (.exists (io/file (str model-path-prefix "-symbol.json"))) + (sh "./scripts/get_resnet_18_data.sh")) + +(defn create-classifier [] + (let [descriptors [{:name "data" + :shape [1 3 224 224] + :layout layout/NCHW + :dtype dtype/FLOAT32}] + factory (infer/model-factory model-path-prefix descriptors)] + (infer/create-image-classifier factory))) + +(deftest test-single-classification + (let [classifier (create-classifier) + [[predictions]] (classify-single-image classifier image-file)] + (is (some? predictions)) + (is (= 5 (count predictions))) + (is (= "n02123159 tiger cat" (:class (first predictions)))) + (is (= (< 0 (:prob (first predictions)) 1))))) + +(deftest test-batch-classification + (let [classifier (create-classifier) + predictions (first (classify-images-in-dir classifier image-dir))] + (is (some? 
predictions)) + (is (= 5 (count predictions))) + (is (= "n02123159 tiger cat" (:class (first predictions)))) + (is (= (< 0 (:prob (first predictions)) 1))))) diff --git a/contrib/clojure-package/examples/infer/objectdetector/.gitignore b/contrib/clojure-package/examples/infer/objectdetector/.gitignore new file mode 100644 index 000000000..35491f1a0 --- /dev/null +++ b/contrib/clojure-package/examples/infer/objectdetector/.gitignore @@ -0,0 +1,12 @@ +/target +/classes +/checkouts +/images +pom.xml +pom.xml.asc +*.jar +*.class +/.lein-* +/.nrepl-port +.hgignore +.hg/ diff --git a/contrib/clojure-package/examples/infer/objectdetector/README.md b/contrib/clojure-package/examples/infer/objectdetector/README.md new file mode 100644 index 000000000..921c53e04 --- /dev/null +++ b/contrib/clojure-package/examples/infer/objectdetector/README.md @@ -0,0 +1,24 @@ +# objectdetector + +Run object detection on images using clojure infer package. + +## Installation + +Before you run this example, make sure that you have the clojure package installed. +In the main clojure package directory, do `lein install`. Then you can run +`lein install` in this directory. + +## Usage + +``` +$ chmod +x scripts/get_ssd_data.sh +$ ./scripts/get_ssd_data.sh +$ +$ lein run -- --help +$ lein run -- -m models/resnet50_ssd/resnet50_ssd_model -i images/dog.jpg -d images/ +$ +$ lein uberjar +$ java -jar target/objectdetector-0.1.0-SNAPSHOT-standalone.jar --help +$ java -jar target/objectdetector-0.1.0-SNAPSHOT-standalone.jar \ + -m models/resnet50_ssd/resnet50_ssd_model -i images/dog.jpg -d images/ +``` diff --git a/contrib/clojure-package/examples/infer/objectdetector/project.clj b/contrib/clojure-package/examples/infer/objectdetector/project.clj new file mode 100644 index 000000000..4501f14a3 --- /dev/null +++ b/contrib/clojure-package/examples/infer/objectdetector/project.clj @@ -0,0 +1,25 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(defproject objectdetector "0.1.0-SNAPSHOT" + :description "Object detection using infer with MXNet" + :plugins [[lein-cljfmt "0.5.7"]] + :dependencies [[org.clojure/clojure "1.9.0"] + [org.clojure/tools.cli "0.4.1"] + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"]] + :main ^:skip-aot infer.objectdetector-example + :profiles {:uberjar {:aot :all}}) diff --git a/contrib/clojure-package/examples/infer/objectdetector/scripts/get_ssd_data.sh b/contrib/clojure-package/examples/infer/objectdetector/scripts/get_ssd_data.sh new file mode 100755 index 000000000..06440a284 --- /dev/null +++ b/contrib/clojure-package/examples/infer/objectdetector/scripts/get_ssd_data.sh @@ -0,0 +1,49 @@ +#!/bin/bash + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. 
See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + + +set -e + +MXNET_ROOT=$(cd "$(dirname $0)/.."; pwd) + +data_path=$MXNET_ROOT/models/resnet50_ssd + +image_path=$MXNET_ROOT/images + +if [ ! -d "$data_path" ]; then + mkdir -p "$data_path" +fi + +if [ ! -d "$image_path" ]; then + mkdir -p "$image_path" +fi + +if [ ! -f "$data_path/resnet50_ssd_model-0000.params" ]; then + wget https://s3.amazonaws.com/model-server/models/resnet50_ssd/resnet50_ssd_model-symbol.json -P $data_path + wget https://s3.amazonaws.com/model-server/models/resnet50_ssd/resnet50_ssd_model-0000.params -P $data_path + wget https://s3.amazonaws.com/model-server/models/resnet50_ssd/synset.txt -P $data_path +fi + +if [ ! -f "$image_path/000001.jpg" ]; then + cd $image_path + wget https://cloud.githubusercontent.com/assets/3307514/20012566/cbb53c76-a27d-11e6-9aaa-91939c9a1cd5.jpg -O 000001.jpg + wget https://cloud.githubusercontent.com/assets/3307514/20012567/cbb60336-a27d-11e6-93ff-cbc3f09f5c9e.jpg -O dog.jpg + wget https://cloud.githubusercontent.com/assets/3307514/20012563/cbb41382-a27d-11e6-92a9-18dab4fd1ad3.jpg -O person.jpg +fi + diff --git a/contrib/clojure-package/examples/infer/objectdetector/src/infer/objectdetector_example.clj b/contrib/clojure-package/examples/infer/objectdetector/src/infer/objectdetector_example.clj new file mode 100644 index 000000000..5c30e5db6 --- /dev/null +++ b/contrib/clojure-package/examples/infer/objectdetector/src/infer/objectdetector_example.clj @@ -0,0 +1,120 @@ +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. 
+;; + +(ns infer.objectdetector-example + (:require [org.apache.clojure-mxnet.context :as context] + [org.apache.clojure-mxnet.dtype :as dtype] + [org.apache.clojure-mxnet.infer :as infer] + [org.apache.clojure-mxnet.layout :as layout] + [clojure.java.io :as io] + [clojure.string :refer [join]] + [clojure.tools.cli :refer [parse-opts]]) + (:gen-class)) + +(defn check-valid-dir + "Check that the input directory exists" + [input-dir] + (let [dir (io/file input-dir)] + (and + (.exists dir) + (.isDirectory dir)))) + +(defn check-valid-file + "Check that the file exists" + [input-file] + (.exists (io/file input-file))) + +(def cli-options + [["-m" "--model-path-prefix PREFIX" "Model path prefix" + :default "models/resnet50_ssd/resnet50_ssd_model" + :validate [#(check-valid-file (str % "-symbol.json")) + "Model path prefix is invalid"]] + ["-i" "--input-image IMAGE" "Input image" + :default "images/dog.jpg" + :validate [check-valid-file "Input file not found"]] + ["-d" "--input-dir IMAGE_DIR" "Input directory" + :default "images/" + :validate [check-valid-dir "Input directory not found"]] + ["-h" "--help"]]) + +(defn print-predictions + "Print image detector predictions for the given input file" + [predictions width height] + (println (apply str (repeat 80 "="))) + (doseq [{:keys [class prob x-min y-min x-max y-max]} predictions] + (println (format + "Class: %s Prob=%.5f Coords=(%.3f, %.3f, %.3f, %.3f)" + class + prob + (* x-min width) + (* y-min height) + (* x-max width) + (* y-max height)))) + (println (apply str (repeat 80 "=")))) + +(defn detect-single-image + "Detect objects in a single image and print top-5 predictions" + [detector input-image] + (let [image (infer/load-image-from-file input-image) + topk 5 + [predictions] (infer/detect-objects detector image topk)] + predictions)) + +(defn detect-images-in-dir + "Detect objects in all jpg images in the directory" + [detector input-dir] + (let [batch-size 20 + image-file-batches (->> input-dir + io/file + file-seq + (filter #(.isFile %)) + (filter #(re-matches #".*\.jpg$" (.getPath %))) + (mapv #(.getPath %)) + (partition-all batch-size))] + (apply concat (for [image-files image-file-batches] + (let [image-batch (infer/load-image-paths image-files) + topk 5] + (infer/detect-objects-batch detector image-batch topk)))))) + +(defn run-detector + "Runs an image detector based on options provided" + [options] + (let [{:keys [model-path-prefix input-image input-dir + device device-id]} options + width 512 height 512 + descriptors [{:name "data" + :shape [1 3 height width] + :layout layout/NCHW + :dtype dtype/FLOAT32}] + factory (infer/model-factory model-path-prefix descriptors) + detector (infer/create-object-detector + factory + {:contexts [(context/default-context)]})] + (println "Object detection on a single image") + (print-predictions (detect-single-image detector input-image) width height) + (println "\n") + (println "Object detection on images in a directory") + (doseq [predictions (detect-images-in-dir detector input-dir)] + (print-predictions predictions width height)))) + +(defn -main + [& args] + (let [{:keys [options summary errors] :as opts} + (parse-opts args cli-options)] + (cond + (:help options) (println summary) + (some? 
errors) (println (join "\n" errors)) + :else (run-detector options)))) diff --git a/contrib/clojure-package/examples/infer/objectdetector/test/infer/objectdetector_example_test.clj b/contrib/clojure-package/examples/infer/objectdetector/test/infer/objectdetector_example_test.clj new file mode 100644 index 000000000..2b8ad951a --- /dev/null +++ b/contrib/clojure-package/examples/infer/objectdetector/test/infer/objectdetector_example_test.clj @@ -0,0 +1,65 @@ +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns infer.objectdetector-example-test + (:require [infer.objectdetector-example :refer [detect-single-image + detect-images-in-dir]] + [org.apache.clojure-mxnet.context :as context] + [org.apache.clojure-mxnet.dtype :as dtype] + [org.apache.clojure-mxnet.infer :as infer] + [org.apache.clojure-mxnet.layout :as layout] + [clojure.java.io :as io] + [clojure.java.shell :refer [sh]] + [clojure.test :refer :all])) + +(def model-dir "models/") +(def image-dir "images/") +(def model-path-prefix (str model-dir "resnet50_ssd/resnet50_ssd_model")) +(def image-file (str image-dir "dog.jpg")) + +(when-not (.exists (io/file (str model-path-prefix "-symbol.json"))) + (sh "./scripts/get_ssd_data.sh")) + +(defn create-detector [] + (let [descriptors [{:name "data" + :shape [1 3 512 512] + :layout layout/NCHW + :dtype dtype/FLOAT32}] + factory (infer/model-factory model-path-prefix descriptors)] + (infer/create-object-detector factory))) + +(deftest test-single-detection + (let [detector (create-detector) + predictions (detect-single-image detector image-file) + {:keys [class prob x-min x-max y-min y-max] :as pred} (first predictions)] + (is (some? predictions)) + (is (= 5 (count predictions))) + (is (string? class)) + (is (< 0.8 prob)) + (is (every? #(< 0 % 1) [x-min x-max y-min y-max])) + (is (= #{"dog" "person" "bicycle" "car"} (set (mapv :class predictions)))))) + +(deftest test-batch-detection + (let [detector (create-detector) + batch-predictions (detect-images-in-dir detector image-dir) + predictions (first batch-predictions) + {:keys [class prob x-min x-max y-min y-max] :as pred} (first predictions)] + (is (some? batch-predictions)) + (is (= 5 (count predictions))) + (is (string? class)) + (is (< 0.8 prob)) + (every? 
#(< 0 % 1) [x-min x-max y-min y-max]) + (is (= #{"dog" "person" "bicycle" "car"} (set (mapv :class predictions)))))) diff --git a/contrib/clojure-package/examples/infer/predictor/.gitignore b/contrib/clojure-package/examples/infer/predictor/.gitignore new file mode 100644 index 000000000..35491f1a0 --- /dev/null +++ b/contrib/clojure-package/examples/infer/predictor/.gitignore @@ -0,0 +1,12 @@ +/target +/classes +/checkouts +/images +pom.xml +pom.xml.asc +*.jar +*.class +/.lein-* +/.nrepl-port +.hgignore +.hg/ diff --git a/contrib/clojure-package/examples/infer/predictor/README.md b/contrib/clojure-package/examples/infer/predictor/README.md new file mode 100644 index 000000000..9ca71cf46 --- /dev/null +++ b/contrib/clojure-package/examples/infer/predictor/README.md @@ -0,0 +1,24 @@ +# predictor + +Run model prediction using clojure infer package. + +## Installation + +Before you run this example, make sure that you have the clojure package installed. +In the main clojure package directory, do `lein install`. Then you can run +`lein install` in this directory. + +## Usage + +``` +$ chmod +x scripts/get_resnet_18_data.sh +$ ./scripts/get_resnet_18_data.sh +$ +$ lein run -- --help +$ lein run -- -m models/resnet-18/resnet-18 -i images/kitten.jpg +$ +$ lein uberjar +$ java -jar target/predictor-0.1.0-SNAPSHOT-standalone.jar --help +$ java -jar target/predictor-0.1.0-SNAPSHOT-standalone.jar \ + -m models/resnet-18/resnet-18 -i images/kitten.jpg +``` diff --git a/contrib/clojure-package/examples/infer/predictor/project.clj b/contrib/clojure-package/examples/infer/predictor/project.clj new file mode 100644 index 000000000..0bd1eaee6 --- /dev/null +++ b/contrib/clojure-package/examples/infer/predictor/project.clj @@ -0,0 +1,25 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(defproject predictor "0.1.0-SNAPSHOT" + :description "Model prediction using infer with MXNet" + :plugins [[lein-cljfmt "0.5.7"]] + :dependencies [[org.clojure/clojure "1.9.0"] + [org.clojure/tools.cli "0.4.1"] + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"]] + :main ^:skip-aot infer.predictor-example + :profiles {:uberjar {:aot :all}}) diff --git a/contrib/clojure-package/examples/infer/predictor/scripts/get_resnet_18_data.sh b/contrib/clojure-package/examples/infer/predictor/scripts/get_resnet_18_data.sh new file mode 100755 index 000000000..cf85355fa --- /dev/null +++ b/contrib/clojure-package/examples/infer/predictor/scripts/get_resnet_18_data.sh @@ -0,0 +1,44 @@ +#!/bin/bash + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. 
The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +set -evx + +MXNET_ROOT=$(cd "$(dirname $0)/.."; pwd) + +data_path=$MXNET_ROOT/models/resnet-18/ + +image_path=$MXNET_ROOT/images/ + +if [ ! -d "$data_path" ]; then + mkdir -p "$data_path" +fi + +if [ ! -d "$image_path" ]; then + mkdir -p "$image_path" +fi + +if [ ! -f "$data_path/resnet-18-0000.params" ]; then + wget https://s3.us-east-2.amazonaws.com/scala-infer-models/resnet-18/resnet-18-symbol.json -P $data_path + wget https://s3.us-east-2.amazonaws.com/scala-infer-models/resnet-18/resnet-18-0000.params -P $data_path + wget https://s3.us-east-2.amazonaws.com/scala-infer-models/resnet-18/synset.txt -P $data_path +fi + +if [ ! -f "$image_path/kitten.jpg" ]; then + wget https://s3.us-east-2.amazonaws.com/mxnet-scala/scala-example-ci/resnet152/kitten.jpg -P $image_path +fi diff --git a/contrib/clojure-package/examples/infer/predictor/scripts/get_resnet_data.sh b/contrib/clojure-package/examples/infer/predictor/scripts/get_resnet_data.sh new file mode 100755 index 000000000..fcef59bac --- /dev/null +++ b/contrib/clojure-package/examples/infer/predictor/scripts/get_resnet_data.sh @@ -0,0 +1,44 @@ +#!/bin/bash + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +set -e + +MXNET_ROOT=$(cd "$(dirname $0)/.."; pwd) + +data_path=$MXNET_ROOT/models/resnet-152/ + +image_path=$MXNET_ROOT/images/ + +if [ ! -d "$data_path" ]; then + mkdir -p "$data_path" +fi + +if [ ! -d "$image_path" ]; then + mkdir -p "$image_path" +fi + +if [ ! -f "$data_path/resnet-152-0000.params" ]; then + wget https://s3.us-east-2.amazonaws.com/mxnet-scala/scala-example-ci/resnet152/resnet-152-0000.params -P $data_path + wget https://s3.us-east-2.amazonaws.com/mxnet-scala/scala-example-ci/resnet152/resnet-152-symbol.json -P $data_path + wget https://s3.us-east-2.amazonaws.com/mxnet-scala/scala-example-ci/resnet152/synset.txt -P $data_path +fi + +if [ ! 
-f "$image_path/kitten.jpg" ]; then + wget https://s3.us-east-2.amazonaws.com/mxnet-scala/scala-example-ci/resnet152/kitten.jpg -P $image_path +fi diff --git a/contrib/clojure-package/examples/infer/predictor/src/infer/predictor_example.clj b/contrib/clojure-package/examples/infer/predictor/src/infer/predictor_example.clj new file mode 100644 index 000000000..05eb0add3 --- /dev/null +++ b/contrib/clojure-package/examples/infer/predictor/src/infer/predictor_example.clj @@ -0,0 +1,101 @@ +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns infer.predictor-example + (:require [org.apache.clojure-mxnet.context :as context] + [org.apache.clojure-mxnet.dtype :as dtype] + [org.apache.clojure-mxnet.image :as image] + [org.apache.clojure-mxnet.infer :as infer] + [org.apache.clojure-mxnet.layout :as layout] + [org.apache.clojure-mxnet.ndarray :as ndarray] + [clojure.java.io :as io] + [clojure.string :refer [join split]] + [clojure.tools.cli :refer [parse-opts]]) + (:gen-class)) + +(defn check-valid-file + "Check that the file exists" + [input-file] + (.exists (io/file input-file))) + +(def cli-options + [["-m" "--model-path-prefix PREFIX" "Model path prefix" + :default "models/resnet-18/resnet-18" + :validate [#(check-valid-file (str % "-symbol.json")) + "Model path prefix is invalid"]] + ["-i" "--input-image IMAGE" "Image path" + :default "images/kitten.jpg" + :validate [check-valid-file "Input image path not found"]] + ["-h" "--help"]]) + +(defn print-prediction + [prediction] + (println (apply str (repeat 80 "="))) + (println prediction) + (println (apply str (repeat 80 "=")))) + +(defn preprocess + "Preprocesses image to make it ready for prediction" + [image-path width height] + (-> image-path + infer/load-image-from-file + (infer/reshape-image width height) + (infer/buffered-image-to-pixels [3 width height]) + (ndarray/expand-dims 0))) + +(defn do-inference + "Run inference using given predictor" + [predictor image] + (let [predictions (infer/predict-with-ndarray predictor [image])] + (first predictions))) + +(defn postprocess + [model-path-prefix predictions] + (let [synset-file (-> model-path-prefix + io/file + (.getParent) + (io/file "synset.txt")) + synset-names (split (slurp synset-file) #"\n") + [max-idx] (ndarray/->int-vec (ndarray/argmax predictions 1))] + (synset-names max-idx))) + +(defn run-predictor + "Runs an image classifier based on options provided" + [options] + (let [{:keys [model-path-prefix input-image]} options + width 224 + height 224 + descriptors [{:name "data" + :shape [1 3 height width] + :layout layout/NCHW + :dtype dtype/FLOAT32}] + factory (infer/model-factory model-path-prefix descriptors) + predictor (infer/create-predictor + factory + {:contexts [(context/default-context)]}) + image-ndarray (preprocess input-image 
width height) + predictions (do-inference predictor image-ndarray) + best-prediction (postprocess model-path-prefix predictions)] + (print-prediction best-prediction))) + +(defn -main + [& args] + (let [{:keys [options summary errors] :as opts} + (parse-opts args cli-options)] + (cond + (:help options) (println summary) + (some? errors) (println (join "\n" errors)) + :else (run-predictor options)))) diff --git a/contrib/clojure-package/examples/infer/predictor/test/infer/predictor_example_test.clj b/contrib/clojure-package/examples/infer/predictor/test/infer/predictor_example_test.clj new file mode 100644 index 000000000..02f826fbb --- /dev/null +++ b/contrib/clojure-package/examples/infer/predictor/test/infer/predictor_example_test.clj @@ -0,0 +1,51 @@ +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns infer.predictor-example-test + (:require [infer.predictor-example :refer [preprocess + do-inference + postprocess]] + [org.apache.clojure-mxnet.context :as context] + [org.apache.clojure-mxnet.dtype :as dtype] + [org.apache.clojure-mxnet.infer :as infer] + [org.apache.clojure-mxnet.layout :as layout] + [clojure.java.io :as io] + [clojure.java.shell :refer [sh]] + [clojure.test :refer :all])) + +(def model-dir "models/") +(def image-file "images/kitten.jpg") +(def model-path-prefix (str model-dir "resnet-18/resnet-18")) +(def width 224) +(def height 224) + +(when-not (.exists (io/file (str model-path-prefix "-symbol.json"))) + (sh "./scripts/get_resnet_18_data.sh")) + +(defn create-predictor [] + (let [descriptors [{:name "data" + :shape [1 3 height width] + :layout layout/NCHW + :dtype dtype/FLOAT32}] + factory (infer/model-factory model-path-prefix descriptors)] + (infer/create-predictor factory))) + +(deftest predictor-test + (let [predictor (create-predictor) + image-ndarray (preprocess image-file width height) + predictions (do-inference predictor image-ndarray) + best-prediction (postprocess model-path-prefix predictions)] + (is (= "n02123159 tiger cat" best-prediction)))) diff --git a/contrib/clojure-package/examples/module/project.clj b/contrib/clojure-package/examples/module/project.clj index 487ceece9..b667a2a4e 100644 --- a/contrib/clojure-package/examples/module/project.clj +++ b/contrib/clojure-package/examples/module/project.clj @@ -19,7 +19,7 @@ :description "Clojure examples for module" :plugins [[lein-cljfmt "0.5.7"]] :dependencies [[org.clojure/clojure "1.9.0"] - [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.3.1-SNAPSHOT"]] + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"]] :pedantic? 
:skip :main mnist-mlp) diff --git a/contrib/clojure-package/examples/module/test/mnist_mlp_test.clj b/contrib/clojure-package/examples/module/test/mnist_mlp_test.clj new file mode 100644 index 000000000..5fbcdd3c0 --- /dev/null +++ b/contrib/clojure-package/examples/module/test/mnist_mlp_test.clj @@ -0,0 +1,29 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; +(ns mnist-mlp-test + (:require + [mnist-mlp :refer :all] + [org.apache.clojure-mxnet.context :as context] + [clojure.test :refer :all])) + +(deftest run-those-tests + (let [devs [(context/cpu)]] + (run-intermediate-level-api :devs devs) + (run-intermediate-level-api :devs devs :load-model-epoch (dec num-epoch)) + (run-high-level-api devs) + (run-prediction-iterator-api devs) + (run-predication-and-calc-accuracy-manually devs))) \ No newline at end of file diff --git a/contrib/clojure-package/examples/multi-label/project.clj b/contrib/clojure-package/examples/multi-label/project.clj index a41c7fd7b..6e6a14340 100644 --- a/contrib/clojure-package/examples/multi-label/project.clj +++ b/contrib/clojure-package/examples/multi-label/project.clj @@ -19,5 +19,5 @@ :description "Example of multi-label classification" :plugins [[lein-cljfmt "0.5.7"]] :dependencies [[org.clojure/clojure "1.9.0"] - [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.3.1-SNAPSHOT"]] + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"]] :main multi-label.core) diff --git a/contrib/clojure-package/examples/multi-label/test/multi_label_test.clj b/contrib/clojure-package/examples/multi-label/test/multi_label_test.clj new file mode 100644 index 000000000..446a84626 --- /dev/null +++ b/contrib/clojure-package/examples/multi-label/test/multi_label_test.clj @@ -0,0 +1,26 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. 
+;; + +(ns multi_label_test + (:require + [multi-label.core :as label] + [clojure.java.io :as io] + [org.apache.clojure-mxnet.context :as context] + [clojure.test :refer :all])) + +(deftest run-multi-label + (label/train [(context/cpu)])) \ No newline at end of file diff --git a/contrib/clojure-package/examples/neural-style/project.clj b/contrib/clojure-package/examples/neural-style/project.clj index f8d6e28de..b6d29f7c0 100644 --- a/contrib/clojure-package/examples/neural-style/project.clj +++ b/contrib/clojure-package/examples/neural-style/project.clj @@ -19,7 +19,7 @@ :description "Neural Style Transfer with MXNet" :plugins [[lein-cljfmt "0.5.7"]] :dependencies [[org.clojure/clojure "1.9.0"] - [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.3.1-SNAPSHOT"] + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"] [net.mikera/imagez "0.12.0"] [thinktopic/think.image "0.4.16"]] :main neural-style.core) diff --git a/contrib/clojure-package/examples/neural-style/src/neural_style/core.clj b/contrib/clojure-package/examples/neural-style/src/neural_style/core.clj index 50f95c975..ac1f537f1 100644 --- a/contrib/clojure-package/examples/neural-style/src/neural_style/core.clj +++ b/contrib/clojure-package/examples/neural-style/src/neural_style/core.clj @@ -24,6 +24,8 @@ [org.apache.clojure-mxnet.random :as random] [org.apache.clojure-mxnet.shape :as mx-shape] [org.apache.clojure-mxnet.symbol :as sym] + [clojure.java.io :as io] + [clojure.java.shell :refer [sh]] [mikera.image.core :as img] [mikera.image.filters :as img-filter] [think.image.pixel :as pixel] @@ -31,6 +33,9 @@ (:gen-class));; An Implementation of the paper A Neural Algorithm of Artistic Style ;;by Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge +(when-not (.exists (io/file "input")) + (do (println "Retrieving data...") (sh "./download.sh"))) + (def content-image "input/IMG_4343.jpg") (def style-image "input/starry_night.jpg") (def model-path "model/vgg19.params") @@ -39,7 +44,7 @@ (def content-weight 5) ;; the weight for the content image (def blur-radius 1) ;; the blur filter radius (def output-dir "output") -(def lr 10) ;; the learning rate +(def lr 10.0) ;; the learning rate (def tv-weight 0.01) ;; the magnitude on the tv loss (def num-epochs 1000) (def num-channels 3) @@ -157,9 +162,10 @@ out (ndarray/* out tv-weight)] (sym/bind out ctx {"img" img "kernel" kernel})))) -(defn train [devs] - - (let [dev (first devs) +(defn train + ([devs] (train devs 20)) + ([devs n-epochs] + (let [dev (first devs) content-np (preprocess-content-image content-image max-long-edge) content-np-shape (mx-shape/->vec (ndarray/shape content-np)) style-np (preprocess-style-image style-image content-np-shape) @@ -193,7 +199,7 @@ ;;;train ;;initialize with random noise - img (ndarray/- (random/uniform 0 255 content-np-shape dev) 128) + img (ndarray/- (random/uniform 0 255 content-np-shape {:ctx dev}) 128) ;;; img (random/uniform -0.1 0.1 content-np-shape dev) ;; img content-np lr-sched (lr-scheduler/factor-scheduler 10 0.9) @@ -212,7 +218,7 @@ tv-grad-executor (get-tv-grad-executor img dev tv-weight) eps 0.0 e 0] - (doseq [i (range 20)] + (doseq [i (range n-epochs)] (ndarray/set (:data model-executor) img) (-> (:executor model-executor) (executor/forward) @@ -237,8 +243,10 @@ (println "Epoch " i "relative change " eps) (when (zero? 
(mod i 2)) (save-image (ndarray/copy img) (str output-dir "/out_" i ".png") blur-radius true))) - - (ndarray/set old-img img)))) + (ndarray/set old-img img)) + ; (save-image (ndarray/copy img) (str output-dir "/final.png") 0 false) + ; (postprocess-image img) + ))) (defn -main [& args] ;;; Note this only works on cpu right now diff --git a/contrib/clojure-package/examples/neural-style/test/neural_style/vgg_19_test.clj b/contrib/clojure-package/examples/neural-style/test/neural_style/vgg_19_test.clj new file mode 100644 index 000000000..a7c978607 --- /dev/null +++ b/contrib/clojure-package/examples/neural-style/test/neural_style/vgg_19_test.clj @@ -0,0 +1,53 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns neural-style.vgg-19-test + (:require + [clojure.test :refer :all] + [mikera.image.core :as img] + [clojure.java.io :as io] + [org.apache.clojure-mxnet.ndarray :as ndarray] + [org.apache.clojure-mxnet.context :as context] + [neural-style.core :as neural])) + +(defn pic-to-ndarray-vec[path] + (-> path + img/load-image + neural/image->ndarray + ndarray/->vec)) + +(defn last-modified-check[x] + (let [t (- (System/currentTimeMillis) (.lastModified x)) ] + (if (> 10000 t) ; 10 seconds + x + (throw (Exception. (str "Generated File Too Old: (" t " ms) [" x "]")))))) + +(defn latest-pic-to-ndarray-vec[folder] + (->> folder + io/as-file + (.listFiles) + (sort-by #(.lastModified %)) + last + (last-modified-check) + (.getPath) + pic-to-ndarray-vec)) + +(deftest vgg-19-test + (neural/train [(context/cpu)] 3) + (is (not (nil? 
(latest-pic-to-ndarray-vec "output"))))) +; generated file different depending on the platform :/ +; (pic-to-ndarray-vec "test/ref_out_2.png")))) \ No newline at end of file diff --git a/contrib/clojure-package/examples/pre-trained-models/project.clj b/contrib/clojure-package/examples/pre-trained-models/project.clj index c61e9a6f8..11e002503 100644 --- a/contrib/clojure-package/examples/pre-trained-models/project.clj +++ b/contrib/clojure-package/examples/pre-trained-models/project.clj @@ -19,7 +19,7 @@ :description "Example of using pre-trained models with MXNet" :plugins [[lein-cljfmt "0.5.7"]] :dependencies [[org.clojure/clojure "1.9.0"] - [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.3.1-SNAPSHOT"] + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"] [net.mikera/imagez "0.12.0"] [thinktopic/think.image "0.4.16"]] :main pre-trained-models.fine-tune) diff --git a/contrib/clojure-package/examples/profiler/project.clj b/contrib/clojure-package/examples/profiler/project.clj index dfcfea7c2..cc50482d0 100644 --- a/contrib/clojure-package/examples/profiler/project.clj +++ b/contrib/clojure-package/examples/profiler/project.clj @@ -18,5 +18,5 @@ (defproject profiler "0.1.0-SNAPSHOT" :plugins [[lein-cljfmt "0.5.7"]] :dependencies [[org.clojure/clojure "1.9.0"] - [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.3.1-SNAPSHOT"]] + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"]] :main profiler.core) diff --git a/contrib/clojure-package/examples/profiler/src/profiler/core.clj b/contrib/clojure-package/examples/profiler/src/profiler/core.clj index e366c578c..67ba0feb8 100644 --- a/contrib/clojure-package/examples/profiler/src/profiler/core.clj +++ b/contrib/clojure-package/examples/profiler/src/profiler/core.clj @@ -27,9 +27,9 @@ (def profiler-mode "symbolic") ;; can be symbolic, imperative, api, mem (def output-path ".") ;; the profile file output directory (def profiler-name "profile-matmul-20iter.json") -(def iter-num 100) -(def begin-profiling-iter 50) -(def end-profiling-iter 70) +(def iter-num 5) +(def begin-profiling-iter 0) +(def end-profiling-iter 1) (def gpu? false) (defn run [] diff --git a/contrib/clojure-package/examples/profiler/test/core_test.clj b/contrib/clojure-package/examples/profiler/test/core_test.clj new file mode 100644 index 000000000..1173f0755 --- /dev/null +++ b/contrib/clojure-package/examples/profiler/test/core_test.clj @@ -0,0 +1,31 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. 
+;; + +(ns core_test + (:require + [profiler.core :as profiler] + [clojure.java.io :as io] + [clojure.test :refer :all])) + +(defn count-lines[file] + (count (line-seq (io/reader (io/as-file file))))) + +(deftest run-profiler + (profiler/run) + (let [new-file (clojure.java.io/as-file profiler/profiler-name)] + (is (.exists new-file)) + (is (> 10000 (- (System/currentTimeMillis) (.lastModified new-file)))))) \ No newline at end of file diff --git a/contrib/clojure-package/examples/profiler/test/profile-matmul-20iter.json.ref b/contrib/clojure-package/examples/profiler/test/profile-matmul-20iter.json.ref new file mode 100644 index 000000000..d6baa4211 --- /dev/null +++ b/contrib/clojure-package/examples/profiler/test/profile-matmul-20iter.json.ref @@ -0,0 +1,271 @@ +{ + "traceEvents": [ + { + "ph": "M", + "args": { + "name": "cpu/0" + }, + "pid": 0, + "name": "process_name" + }, + { + "ph": "M", + "args": { + "name": "cpu/1" + }, + "pid": 1, + "name": "process_name" + }, + { + "ph": "M", + "args": { + "name": "cpu/2" + }, + "pid": 2, + "name": "process_name" + }, + { + "ph": "M", + "args": { + "name": "cpu/3" + }, + "pid": 3, + "name": "process_name" + }, + { + "ph": "M", + "args": { + "name": "cpu pinned/" + }, + "pid": 4, + "name": "process_name" + }, + { + "ph": "M", + "args": { + "name": "cpu shared/" + }, + "pid": 5, + "name": "process_name" + }, { + "ph": "M", + "args": { + "name": "MXNET_C_API" + }, + "pid": 13841910479334118176, + "name": "process_name" + }, + + { + "name": "MXNet C API Calls", + "cat": "MXNET_C_API", + "ph": "C", + "ts": 51195258331, + "args": { "MXNet C API Calls": 1 }, + "pid": 13841910479334118176, + "tid": 6902988396839073221 + } +, + { + "name": "MXNet C API Concurrency", + "cat": "MXNET_C_API", + "ph": "C", + "ts": 51195258338, + "args": { "MXNet C API Concurrency": 1 }, + "pid": 13841910479334118176, + "tid": 6902988396839073221 + } +, + { + "name": "MXExecutorForward", + "cat": "MXNET_C_API", + "ph": "b", + "ts": 51195258348, + "id": 6902988396839073221, + "pid": 13841910479334118176, + "tid": 6902988396839073221 + } +, + { + "name": "MXExecutorForward", + "cat": "MXNET_C_API", + "ph": "e", + "ts": 51195258357, + "id": 6902988396839073221, + "pid": 13841910479334118176, + "tid": 6902988396839073221 + } +, + { + "name": "MXNet C API Concurrency", + "cat": "MXNET_C_API", + "ph": "C", + "ts": 51195258358, + "args": { "MXNet C API Concurrency": 0 }, + "pid": 13841910479334118176, + "tid": 6902988396839073221 + } +, { + "ph": "M", + "args": { + "name": "Device Storage" + }, + "pid": 13545698322897290393, + "name": "process_name" + }, + + { + "name": "Memory: cpu/0", + "cat": "Device Storage", + "ph": "C", + "ts": 51195543378, + "args": { "Memory: cpu/0": 8 }, + "pid": 13545698322897290393, + "tid": 5603937861270119161 + } +, + { + "name": "MXNet C API Calls", + "cat": "MXNET_C_API", + "ph": "C", + "ts": 51195258559, + "args": { "MXNet C API Calls": 2 }, + "pid": 13841910479334118176, + "tid": 6902988396839073221 + } +, + { + "name": "Memory: cpu/0", + "cat": "Device Storage", + "ph": "C", + "ts": 51195857697, + "args": { "Memory: cpu/0": 67108872 }, + "pid": 13545698322897290393, + "tid": 5603937861270119161 + } +, + { + "name": "MXNet C API Concurrency", + "cat": "MXNET_C_API", + "ph": "C", + "ts": 51195258560, + "args": { "MXNet C API Concurrency": 1 }, + "pid": 13841910479334118176, + "tid": 6902988396839073221 + } +, + + { + "name": "[dot]", + "cat": "operator", + "ph": "B", + "ts": 51195857671, + "pid": 0, + "tid": 5603937861270119161 + } +, + { + "name": 
"[dot]", + "cat": "operator", + "ph": "E", + "ts": 51196931353, + "pid": 0, + "tid": 5603937861270119161 + } +, + + { + "name": "WaitForVar", + "cat": "operator", + "ph": "B", + "ts": 51196931369, + "pid": 0, + "tid": 5603937861270119161 + } +, + { + "name": "WaitForVar", + "cat": "operator", + "ph": "E", + "ts": 51196931376, + "pid": 0, + "tid": 5603937861270119161 + } +, { + "ph": "M", + "args": { + "name": "operator" + }, + "pid": 10847949044720084585, + "name": "process_name" + }, + + { + "name": "[dot]", + "cat": "operator", + "ph": "b", + "ts": 51195857671, + "id": 5603937861270119161, + "pid": 10847949044720084585, + "tid": 5603937861270119161 + } +, + { + "name": "[dot]", + "cat": "operator", + "ph": "e", + "ts": 51196931350, + "id": 5603937861270119161, + "pid": 10847949044720084585, + "tid": 5603937861270119161 + } +, + { + "name": "MXNDArrayWaitToRead", + "cat": "MXNET_C_API", + "ph": "b", + "ts": 51195258561, + "id": 6902988396839073221, + "pid": 13841910479334118176, + "tid": 6902988396839073221 + } +, + { + "name": "MXNDArrayWaitToRead", + "cat": "MXNET_C_API", + "ph": "e", + "ts": 51196931386, + "id": 6902988396839073221, + "pid": 13841910479334118176, + "tid": 6902988396839073221 + } +, + { + "name": "WaitForVar", + "cat": "operator", + "ph": "b", + "ts": 51196931369, + "id": 5603937861270119161, + "pid": 10847949044720084585, + "tid": 5603937861270119161 + } +, + { + "name": "WaitForVar", + "cat": "operator", + "ph": "e", + "ts": 51196931376, + "id": 5603937861270119161, + "pid": 10847949044720084585, + "tid": 5603937861270119161 + } +, + { + "name": "MXNet C API Concurrency", + "cat": "MXNET_C_API", + "ph": "C", + "ts": 51196931391, + "args": { "MXNet C API Concurrency": 0 }, + "pid": 13841910479334118176, + "tid": 6902988396839073221 + } diff --git a/contrib/clojure-package/examples/rnn/project.clj b/contrib/clojure-package/examples/rnn/project.clj index f411a5458..64f4c2907 100644 --- a/contrib/clojure-package/examples/rnn/project.clj +++ b/contrib/clojure-package/examples/rnn/project.clj @@ -19,5 +19,5 @@ :description "RNN example" :plugins [[lein-cljfmt "0.5.7"]] :dependencies [[org.clojure/clojure "1.9.0"] - [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.3.1-SNAPSHOT"]] + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"]] :main rnn.train-char-rnn) diff --git a/contrib/clojure-package/examples/rnn/src/rnn/test_char_rnn.clj b/contrib/clojure-package/examples/rnn/src/rnn/test_char_rnn.clj index d03b1a6b3..22a2982f2 100644 --- a/contrib/clojure-package/examples/rnn/src/rnn/test_char_rnn.clj +++ b/contrib/clojure-package/examples/rnn/src/rnn/test_char_rnn.clj @@ -17,6 +17,7 @@ (ns rnn.test-char-rnn (:require [clojure.string :as string] + [clojure.java.shell :refer [sh]] [rnn.util :as util] [rnn.lstm :as lstm] [org.apache.clojure-mxnet.context :as context] @@ -24,6 +25,9 @@ [org.apache.clojure-mxnet.module :as m] [org.apache.clojure-mxnet.ndarray :as ndarray])) +(when-not (.exists (clojure.java.io/file "data")) + (do (println "Retrieving data...") (sh "./get_data.sh"))) + (def data-path "data/obama.txt") (def model-prefix) (def start-sentence "The joke ") diff --git a/contrib/clojure-package/examples/rnn/src/rnn/train_char_rnn.clj b/contrib/clojure-package/examples/rnn/src/rnn/train_char_rnn.clj index 150cd94e6..41a764f7a 100644 --- a/contrib/clojure-package/examples/rnn/src/rnn/train_char_rnn.clj +++ b/contrib/clojure-package/examples/rnn/src/rnn/train_char_rnn.clj @@ -17,6 +17,7 @@ (ns rnn.train-char-rnn (:require [clojure.string :as string] + 
[clojure.java.shell :refer [sh]] [rnn.util :as util] [rnn.lstm :as lstm] [rnn.test-char-rnn :as test-rnn] @@ -34,6 +35,9 @@ ;;https://github.com/apache/incubator-mxnet/blob/master/example/rnn/old/char-rnn.ipynb +(when-not (.exists (clojure.java.io/file "data")) + (do (println "Retrieving data...") (sh "./get_data.sh"))) + ;; batch size for training (def batch-size 32) ;; we can support various length input diff --git a/contrib/clojure-package/examples/rnn/test/rnn/core_test.clj b/contrib/clojure-package/examples/rnn/test/rnn/core_test.clj new file mode 100644 index 000000000..b19857724 --- /dev/null +++ b/contrib/clojure-package/examples/rnn/test/rnn/core_test.clj @@ -0,0 +1,26 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns rnn.core_test + (:require + [rnn.test-char-rnn :as rnn] + [clojure.test :refer :all])) + +(deftest check-trained-network + (is (= + "The joke that we can start by the challenges of the American people. The American people have been talking about how to compete with the streets of San Antonio who the courage to come together as one " + (rnn/rnn-test "data/obama" 75 200 false)))) \ No newline at end of file diff --git a/contrib/clojure-package/examples/tutorial/.gitignore b/contrib/clojure-package/examples/tutorial/.gitignore index c53038ec0..338927e78 100644 --- a/contrib/clojure-package/examples/tutorial/.gitignore +++ b/contrib/clojure-package/examples/tutorial/.gitignore @@ -9,3 +9,4 @@ pom.xml.asc /.nrepl-port .hgignore .hg/ +filename \ No newline at end of file diff --git a/contrib/clojure-package/examples/tutorial/project.clj b/contrib/clojure-package/examples/tutorial/project.clj index 4910886ca..58a10f04f 100644 --- a/contrib/clojure-package/examples/tutorial/project.clj +++ b/contrib/clojure-package/examples/tutorial/project.clj @@ -19,4 +19,9 @@ :description "MXNET tutorials" :plugins [[lein-cljfmt "0.5.7"]] :dependencies [[org.clojure/clojure "1.9.0"] - [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.3.1-SNAPSHOT"]]) + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"] + + ;; Uncomment the one appropriate for your machine & configuration: + #_[org.apache.mxnet.contrib.clojure/clojure-mxnet-linux-cpu "1.4.0"] + #_[org.apache.mxnet.contrib.clojure/clojure-mxnet-linux-gpu "1.4.0"] + #_[org.apache.mxnet.contrib.clojure/clojure-mxnet-osx-cpu "1.4.0"]]) diff --git a/contrib/clojure-package/examples/tutorial/src/tutorial/kvstore.clj b/contrib/clojure-package/examples/tutorial/src/tutorial/kvstore.clj index 558b21f0a..f35d4a069 100644 --- a/contrib/clojure-package/examples/tutorial/src/tutorial/kvstore.clj +++ b/contrib/clojure-package/examples/tutorial/src/tutorial/kvstore.clj @@ -16,35 +16,44 @@ ;; (ns tutorial.kvstore + "A REPL tutorial of the MXNet Clojure API for KVStore, based on + 
https://mxnet.incubator.apache.org/api/clojure/kvstore.html" (:require [org.apache.clojure-mxnet.kvstore :as kvstore] [org.apache.clojure-mxnet.ndarray :as ndarray] [org.apache.clojure-mxnet.context :as context])) -;;Basic Push and Pull -;;Provides basic operation over multiple devices (GPUs or CPUs) on a single device. -;; Initialization -;; Let’s consider a simple example. It initializes a (int, NDArray) pair into the store, and then pulls the value out. +;;;; Basic Push and Pull -(def kv (kvstore/create "local")) ;; create a local kvstore +;; Provides basic operation over multiple devices (GPUs or CPUs) on a +;; single device. + +;;; Initialization +;; Let’s consider a simple example. It initializes a (`int`, +;; `NDArray`) pair into the store, and then pulls the value out. + +(def kv (kvstore/create "local")) ; create a local kvstore (def shape [2 3]) -;;; init the kvstore with a vector of keys (strings) and ndarrays +;; init the kvstore with a vector of keys (strings) and ndarrays (kvstore/init kv ["3"] [(ndarray/* (ndarray/ones shape) 2)]) (def a (ndarray/zeros shape)) (kvstore/pull kv ["3"] [a]) (ndarray/->vec a) ;=> [2.0 2.0 2.0 2.0 2.0 2.0] -;;Push, Aggregation, and Updater -;;For any key that’s been initialized, you can push a new value with the same shape to the key, as follows: - +;;; Push, Aggregation, and Updater +;; For any key that’s been initialized, you can push a new value with +;; the same shape to the key, as follows: (kvstore/push kv ["3"] [(ndarray/* (ndarray/ones shape) 8)]) (kvstore/pull kv ["3"] [a]) (ndarray/->vec a);=>[8.0 8.0 8.0 8.0 8.0 8.0] -;;The data that you want to push can be stored on any device. Furthermore, you can push multiple values into the same key, where KVStore first sums all of these values, and then pushes the aggregated value, as follows: +;; The data that you want to push can be stored on any +;; device. Furthermore, you can push multiple values into the same +;; key, where KVStore first sums all of these values, and then pushes +;; the aggregated value, as follows: -;; using multiple cpus instead of gpus +;; (Here we use multiple CPUs.) (def cpus [(context/cpu 0) (context/cpu 1) (context/cpu 2)]) (def b [(ndarray/ones shape {:ctx (nth cpus 0)}) (ndarray/ones shape {:ctx (nth cpus 1)}) @@ -53,22 +62,33 @@ (kvstore/pull kv "3" a) (ndarray/->vec a) ;=> [3.0 3.0 3.0 3.0 3.0 3.0] - -;;Pull -;;You’ve already seen how to pull a single key-value pair. Similar to the way that you use the push command, you can pull the value into several devices with a single call. +;;; Pull +;; You’ve already seen how to pull a single key-value pair. Similar to +;; the way that you use the push command, you can pull the value into +;; several devices with a single call. (def b [(ndarray/ones shape {:ctx (context/cpu 0)}) (ndarray/ones shape {:ctx (context/cpu 1)})]) (kvstore/pull kv ["3" "3"] b) (map ndarray/->vec b) ;=> ([3.0 3.0 3.0 3.0 3.0 3.0] [3.0 3.0 3.0 3.0 3.0 3.0]) -;;List Key-Value Pairs -;;All of the operations that we’ve discussed so far are performed on a single key. KVStore also provides the interface for generating a list of key-value pairs. For a single device, use the following: + +;;;; List Key-Value Pairs + +;; All of the operations that we’ve discussed so far are performed on +;; a single key. KVStore also provides the interface for generating a +;; list of key-value pairs. 
For a single device, use the following: (def ks ["5" "7" "9"]) -(kvstore/init kv ks [(ndarray/ones shape) (ndarray/ones shape) (ndarray/ones shape)]) -(kvstore/push kv ks [(ndarray/ones shape) (ndarray/ones shape) (ndarray/ones shape)]) -(def b [(ndarray/zeros shape) (ndarray/zeros shape) (ndarray/zeros shape)]) +(kvstore/init kv ks [(ndarray/ones shape) + (ndarray/ones shape) + (ndarray/ones shape)]) +(kvstore/push kv ks [(ndarray/ones shape) + (ndarray/ones shape) + (ndarray/ones shape)]) +(def b [(ndarray/zeros shape) + (ndarray/zeros shape) + (ndarray/zeros shape)]) (kvstore/pull kv ks b) -(map ndarray/->vec b);=> ([1.0 1.0 1.0 1.0 1.0 1.0] [1.0 1.0 1.0 1.0 1.0 1.0] [1.0 1.0 1.0 1.0 1.0 1.0]) +(map ndarray/->vec b) ;=> ([1.0 1.0 1.0 1.0 1.0 1.0] [1.0 1.0 1.0 1.0 1.0 1.0] [1.0 1.0 1.0 1.0 1.0 1.0]) diff --git a/contrib/clojure-package/examples/tutorial/src/tutorial/module.clj b/contrib/clojure-package/examples/tutorial/src/tutorial/module.clj index 3cef342f0..e19498111 100644 --- a/contrib/clojure-package/examples/tutorial/src/tutorial/module.clj +++ b/contrib/clojure-package/examples/tutorial/src/tutorial/module.clj @@ -16,6 +16,8 @@ ;; (ns tutorial.module + "A REPL tutorial of the MXNet Clojure API for Module, based on + https://mxnet.incubator.apache.org/api/clojure/module.html" (:require [clojure.java.io :as io] [clojure.java.shell :refer [sh]] [org.apache.clojure-mxnet.eval-metric :as eval-metric] @@ -24,12 +26,26 @@ [org.apache.clojure-mxnet.symbol :as sym] [org.apache.clojure-mxnet.ndarray :as ndarray])) + +;; The Module API provides an intermediate and high-level interface +;; for performing computation with neural networks in MXNet. Module +;; wraps a Symbol and one or more Executors. It has both a high level +;; and intermediate level API. + + +;;;; Prepare the Data + +;; In this example, we are going to use the MNIST data set. If you +;; start, we can run some helper scripts to download the data for us. + (def data-dir "data/") (when-not (.exists (io/file (str data-dir "train-images-idx3-ubyte"))) (sh "../../scripts/get_mnist_data.sh")) -;;; Load the MNIST datasets +;; MXNet provides function in the `io` namespace to load the MNIST +;; datasets into training and test data iterators that we can use with +;; our module. (def train-data (mx-io/mnist-iter {:image (str data-dir "train-images-idx3-ubyte") :label (str data-dir "train-labels-idx1-ubyte") :label-name "softmax_label" @@ -47,11 +63,13 @@ :flat true :silent false})) -;; The module API provides an intermediate and high-level interface for performing computation with neural networks in MXNet. Module wraps a Symbol and one or more Executors. It has both a high level and intermediate level api -;; Preparing a module for Computation +;;;; Preparing a module for Computation -;; construct a module +;; To construct a module, we need to have a symbol as input. This +;; symbol takes input data in the first layer and then has subsequent +;; layers of fully connected and relu activation layers, ending up in +;; a softmax layer for output. (let [data (sym/variable "data") fc1 (sym/fully-connected "fc1" {:data data :num-hidden 128}) @@ -62,7 +80,7 @@ out (sym/softmax-output "softmax" {:data fc3})] out) ;=>#object[org.apache.mxnet.Symbol 0x1f43a406 "org.apache.mxnet.Symbol@1f43a406"] -;; You can also use as-> for easier threading +;; You can also write this with the `as->` threading macro. (def out (as-> (sym/variable "data") data @@ -75,10 +93,15 @@ ;=> #'tutorial.module/out -;; By default, context is the CPU. 
If you need data parallelization, you can specify a GPU context or an array of GPU contexts. -;; like this (m/module out {:contexts [(context/gpu)]}) +;; By default, context is the CPU. If you need data parallelization, +;; you can specify a GPU context or an array of GPU contexts, like +;; this: `(m/module out {:contexts [(context/gpu)]})` -;; Before you can compute with a module, you need to call `bind` to allocate the device memory and `initParams` or `set-params` to initialize the parameters. If you simply want to fit a module, you don’t need to call `bind` and `init-params` explicitly, because the `fit` function automatically calls them if they are needed. +;; Before you can compute with a module, you need to call `bind` to +;; allocate the device memory and `initParams` or `set-params` to +;; initialize the parameters. If you simply want to fit a module, you +;; don’t need to call `bind` and `init-params` explicitly, because the +;; `fit` function automatically calls them if they are needed. (let [mod (m/module out)] (-> mod @@ -86,29 +109,46 @@ :label-shapes (mx-io/provide-label train-data)}) (m/init-params))) -;; Now you can compute with the module using functions like `forward`, `backward`, etc. +;; Now you can compute with the module using functions like `forward`, +;; `backward`, etc. -;; Training, Predicting, and Evaluating +;;;; Training and Predicting -;;Modules provide high-level APIs for training, predicting, and evaluating. To fit a module, call the `fit` function with some DataIters: +;; Modules provide high-level APIs for training, predicting, and +;; evaluating. To fit a module, call the `fit` function with some data +;; iterators: -(def mod (m/fit (m/module out) {:train-data train-data :eval-data test-data :num-epoch 1})) +(def mod + (m/fit (m/module out) {:train-data train-data + :eval-data test-data + :num-epoch 1})) +;; => ;; Epoch 0 Train- [accuracy 0.12521666] ;; Epoch 0 Time cost- 8392 ;; Epoch 0 Validation- [accuracy 0.2227] -;; You can pass in batch-end callbacks using batch-end-callback and epoch-end callbacks using epoch-end-callback in the `fit-params`. You can also set parameters using functions like in the fit-params like optimizer and eval-metric. To learn more about the fit-params, see the fit-param function options. To predict with a module, call `predict` with a DataIter: +;; You can pass in batch-end callbacks using batch-end-callback and +;; epoch-end callbacks using epoch-end-callback in the +;; `fit-params`. You can also set parameters using functions like in +;; the fit-params like optimizer and eval-metric. To learn more about +;; the fit-params, see the fit-param function options. To predict with +;; a module, call `predict` with a DataIter: + +(def results + (m/predict mod {:eval-data test-data})) -(def results (m/predict mod {:eval-data test-data})) (first results) ;=>#object[org.apache.mxnet.NDArray 0x3540b6d3 "org.apache.mxnet.NDArray@a48686ec"] (first (ndarray/->vec (first results))) ;=>0.08261358 -;;The module collects and returns all of the prediction results. For more details about the format of the return values, see the documentation for the `predict` function. +;; The module collects and returns all of the prediction results. For +;; more details about the format of the return values, see the +;; documentation for the `predict` function. 
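+
+;; (A quick REPL sketch: the `ndarray` helpers shown elsewhere in these
+;; tutorials can be used to inspect the shape and the first few raw
+;; probabilities of the returned NDArray.)
+(comment
+  (ndarray/shape-vec (first results))
+  (take 5 (ndarray/->vec (first results))))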
-;;When prediction results might be too large to fit in memory, use the `predict-every-batch` API +;; When prediction results might be too large to fit in memory, use +;; the `predict-every-batch` API. (let [preds (m/predict-every-batch mod {:eval-data test-data})] (mx-io/reduce-batches test-data @@ -118,23 +158,33 @@ ;;; do something (inc i)))) -;;If you need to evaluate on a test set and don’t need the prediction output, call the `score` function with a DataIter and an EvalMetric: +;; If you need to evaluate on a test set and don’t need the prediction +;; output, call the `score` function with a data iterator and an eval +;; metric: -(m/score mod {:eval-data test-data :eval-metric (eval-metric/accuracy)}) ;=>["accuracy" 0.2227] +(m/score mod {:eval-data test-data + :eval-metric (eval-metric/accuracy)}) ;=>["accuracy" 0.2227] -;;This runs predictions on each batch in the provided DataIter and computes the evaluation score using the provided EvalMetric. The evaluation results are stored in metric so that you can query later. +;; This runs predictions on each batch in the provided DataIter and +;; computes the evaluation score using the provided EvalMetric. The +;; evaluation results are stored in metric so that you can query +;; later. -;;Saving and Loading Module Parameters -;;To save the module parameters in each training epoch, use a `checkpoint` function +;;;; Saving and Loading + +;; To save the module parameters in each training epoch, use the +;; `save-checkpoint` function: (let [save-prefix "my-model"] (doseq [epoch-num (range 3)] (mx-io/do-batches train-data (fn [batch ;; do something -])) - (m/save-checkpoint mod {:prefix save-prefix :epoch epoch-num :save-opt-states true}))) + ])) + (m/save-checkpoint mod {:prefix save-prefix + :epoch epoch-num + :save-opt-states true}))) ;; INFO org.apache.mxnet.module.Module: Saved checkpoint to my-model-0000.params ;; INFO org.apache.mxnet.module.Module: Saved optimizer state to my-model-0000.states @@ -144,20 +194,22 @@ ;; INFO org.apache.mxnet.module.Module: Saved optimizer state to my-model-0002.states -;;To load the saved module parameters, call the `load-checkpoint` function: +;; To load the saved module parameters, call the `load-checkpoint` +;; function: (def new-mod (m/load-checkpoint {:prefix "my-model" :epoch 1 :load-optimizer-states true})) new-mod ;=> #object[org.apache.mxnet.module.Module 0x5304d0f4 "org.apache.mxnet.module.Module@5304d0f4"] -;;To initialize parameters, Bind the symbols to construct executors first with bind function. Then, initialize the parameters and auxiliary states by calling `init-params` function. - +;; To initialize parameters, bind the symbols to construct executors +;; first with the `bind` function. Then, initialize the parameters and +;; auxiliary states by calling the `init-params` function.\ (-> new-mod - (m/bind {:data-shapes (mx-io/provide-data train-data) :label-shapes (mx-io/provide-label train-data)}) + (m/bind {:data-shapes (mx-io/provide-data train-data) + :label-shapes (mx-io/provide-label train-data)}) (m/init-params)) -;;To get current parameters, use `params` - +;; To get current parameters, use `params` (let [[arg-params aux-params] (m/params new-mod)] {:arg-params arg-params :aux-params aux-params}) @@ -178,20 +230,57 @@ new-mod ;=> #object[org.apache.mxnet.module.Module 0x5304d0f4 "org.apache.mxnet. ;; :aux-params {}} -;;To assign parameter and aux state values, use `set-params` function. 
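+
+;; (A short sketch, assuming the two values returned by `m/params`
+;; behave as ordinary Clojure maps keyed by parameter name:)
+(comment
+  (let [[arg-params _aux-params] (m/params new-mod)]
+    (keys arg-params)))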
+;; To assign parameter and aux state values, use the `set-params` +;; function: +(m/set-params new-mod {:arg-params (m/arg-params new-mod) + :aux-params (m/aux-params new-mod)}) -(m/set-params new-mod {:arg-params (m/arg-params new-mod) :aux-params (m/aux-params new-mod)}) -;=> #object[org.apache.mxnet.module.Module 0x5304d0f4 "org.apache.mxnet.module.Module@5304d0f4"] -;;To resume training from a saved checkpoint, instead of calling `set-params`, directly call `fit`, passing the loaded parameters, so that `fit` knows to start from those parameters instead of initializing randomly: +;; To resume training from a saved checkpoint, pass the loaded +;; parameters to the `fit` function. This will prevent `fit` from +;; initializing randomly. -;; reset the training data before calling fit or you will get an error +;; (First, reset the training data before calling `fit` or you will +;; get an error) (mx-io/reset train-data) (mx-io/reset test-data) -(m/fit new-mod {:train-data train-data :eval-data test-data :num-epoch 2 - :fit-params (-> (m/fit-params {:begin-epoch 1}))}) - -;;Create fit-params, and then use it to set `begin-epoch` so that fit() knows to resume from a saved epoch. - - +;; Create `fit-params` and then use it to set `begin-epoch` so that +;; `fit` knows to resume from a saved epoch. + + +(comment +;; FIXME +; Caused by: java.io.EOFException +; at java.io.DataInputStream.readInt(DataInputStream.java:392) +; at java.io.ObjectInputStream$BlockDataInputStream.readInt(ObjectInputStream.java:3182) +; at java.io.ObjectInputStream.readInt(ObjectInputStream.java:1032) +; at org.apache.mxnet.Optimizer$$anon$1$$anonfun$deserializeState$1.apply$mcVI$sp(Optimizer.scala:84) +; at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160) +; at org.apache.mxnet.Optimizer$$anon$1.deserializeState(Optimizer.scala:83) +; at org.apache.mxnet.module.Module$$anonfun$loadOptimizerStates$3.apply(Module.scala:594) +; at org.apache.mxnet.module.Module$$anonfun$loadOptimizerStates$3.apply(Module.scala:589) +; at scala.Option.foreach(Option.scala:257) +; at org.apache.mxnet.module.Module.loadOptimizerStates(Module.scala:589) +; at org.apache.mxnet.module.Module$$anonfun$initOptimizer$4.apply(Module.scala:407) +; at org.apache.mxnet.module.Module$$anonfun$initOptimizer$4.apply(Module.scala:406) +; at scala.Option.foreach(Option.scala:257) +; at org.apache.mxnet.module.Module.initOptimizer(Module.scala:406) +; at org.apache.mxnet.module.BaseModule.fit(BaseModule.scala:407) +; at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) +; at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) +; at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) +; at java.lang.reflect.Method.invoke(Method.java:498) +; at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93) +; at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28) +; at org.apache.clojure_mxnet.module$fit.invokeStatic(module.clj:551) +; at org.apache.clojure_mxnet.module$fit.invoke(module.clj:538) +; at tutorial.module$eval1787.invokeStatic(module.clj:250) +; at tutorial.module$eval1787.invoke(module.clj:250) + +(m/fit new-mod {:train-data train-data + :eval-data test-data + :num-epoch 2 + :fit-params (m/fit-params {:begin-epoch 1})}) + +) \ No newline at end of file diff --git a/contrib/clojure-package/examples/tutorial/src/tutorial/ndarray.clj b/contrib/clojure-package/examples/tutorial/src/tutorial/ndarray.clj index 858316eef..d18bb53da 100644 --- 
a/contrib/clojure-package/examples/tutorial/src/tutorial/ndarray.clj +++ b/contrib/clojure-package/examples/tutorial/src/tutorial/ndarray.clj @@ -16,42 +16,53 @@ ;; (ns tutorial.ndarray + "A REPL tutorial of the MXNet Clojure API for NDArray, based on + https://mxnet.incubator.apache.org/api/clojure/ndarray.html" (:require [org.apache.clojure-mxnet.ndarray :as ndarray] [org.apache.clojure-mxnet.context :as context])) -;;The NDArray package (mxnet.ndarray) contains tensor operations similar to numpy.ndarray. The syntax is also similar, except for some additional calls for dealing with I/O and multiple devices. +;; The NDArray API contains tensor operations similar to +;; `numpy.ndarray`. The syntax is also similar, except for some +;; additional calls for dealing with I/O and multiple devices. -;;Create NDArray -;;Create mxnet.ndarray as follows: -(def a (ndarray/zeros [100 50])) ;;all zero arrray of dimension 100 x 50 -(def b (ndarray/ones [256 32 128 1])) ;; all one array of dimension -(def c (ndarray/array [1 2 3 4 5 6] [2 3])) ;; array with contents of a shape 2 x 3 +;;;; Create NDArray -;;; There are also ways to convert to a vec or get the shape as an object or vec +;; Create an MXNet NDArray as follows: +(def a (ndarray/zeros [100 50])) ; all-zero array of dimension 100 x 50 +(def b (ndarray/ones [256 32 128 1])) ; all-one array of given dimensions +(def c (ndarray/array [1 2 3 4 5 6] [2 3])) ; array with given contents in shape 2 x 3 + +;;; There are also ways to convert an NDArray to a vec or to get the +;;; shape as an object or vec: (ndarray/->vec c) ;=> [1.0 2.0 3.0 4.0 5.0 6.0] (ndarray/shape c) ;=> #object[org.apache.mxnet.Shape 0x583c865 "(2,3)"] (ndarray/shape-vec c) ;=> [2 3] -;; NDArray Operations -;; Arithmtic Operations +;; There are some basic NDArray operations, like arithmetic and slice +;; operations. + + +;;;; NDArray Operations: Arithmetic + (def a (ndarray/ones [1 5])) (def b (ndarray/ones [1 5])) -(-> (ndarray/+ a b) (ndarray/->vec)) ;=> [2.0 2.0 2.0 2.0 2.0] +(ndarray/->vec (ndarray/+ a b)) ;=> [2.0 2.0 2.0 2.0 2.0] ;; original ndarrays are unchanged (ndarray/->vec a) ;=> [1.0 1.0 1.0 1.0 1.0] (ndarray/->vec b) ;=> [1.0 1.0 1.0 1.0 1.0] -;;inplace operators +;; inplace operators (ndarray/+= a b) (ndarray/->vec a) ;=> [2.0 2.0 2.0 2.0 2.0] -;; other arthimetic operations are similar +;; Other arthimetic operations are similar. 
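+
+;; (For example, a brief sketch reusing `a` and `b` from above; after
+;; the `+=` call `a` holds 2.0s and `b` still holds 1.0s:)
+(comment
+  (ndarray/->vec (ndarray/- a b)) ;=> [1.0 1.0 1.0 1.0 1.0]
+  (ndarray/->vec (ndarray/* a b)) ;=> [2.0 2.0 2.0 2.0 2.0]
+)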
+ -;; Slice operations +;;;; NDArray Operations: Slice (def a (ndarray/array [1 2 3 4 5 6] [3 2])) (def a1 (ndarray/slice a 1)) @@ -62,7 +73,8 @@ (ndarray/shape-vec a2) ;=>[2 2] (ndarray/->vec a2) ;=> [3.0 4.0 5.0 6.0] -;; Dot Product + +;;;; NDArray Operations: Dot Product (def arr1 (ndarray/array [1 2] [1 2])) (def arr2 (ndarray/array [3 4] [2 1])) @@ -70,23 +82,42 @@ (ndarray/shape-vec res) ;=> [1 1] (ndarray/->vec res) ;=> [11.0] -;;Save and Load NDArray -;;You can use MXNet functions to save and load a map of NDArrays from file systems, as follows: + +;;;; Save and Load NDArray + +;; You can use MXNet functions to save and load a map of NDArrays from +;; file systems, as follows: (ndarray/save "filename" {"arr1" arr1 "arr2" arr2}) -;; you can also do "s3://path" or "hdfs" +;; (you can also do "s3://path" or "hdfs") + +;; (ndarray/save "/Users/daveliepmann/src/coursework/mxnet-clj-tutorials/abc" +;; {"arr1" arr1 "arr2" arr2}) -;; to load +;; To load: (def from-file (ndarray/load "filename")) + from-file ;=>{"arr1" #object[org.apache.mxnet.NDArray 0x6115ba61 "org.apache.mxnet.NDArray@43d85753"], "arr2" #object[org.apache.mxnet.NDArray 0x374b5eff "org.apache.mxnet.NDArray@5c93def4"]} -;;Multi-Device Support +;; The good thing about using the `save` and `load` interface is that +;; you can use the format across all `mxnet` language bindings. They +;; also already support Amazon S3 and HDFS. + + +;;;; Multi-Device Support -;;Device information is stored in the mxnet.Context structure. When creating NDArray in MXNet, you can use the context argument (the default is the CPU context) to create arrays on specific devices as follows: +;; Device information is stored in the `mxnet.Context` structure. When +;; creating NDArray in MXNet, you can use the context argument (the +;; default is the CPU context) to create arrays on specific devices as +;; follows: (def cpu-a (ndarray/zeros [100 200])) (ndarray/context cpu-a) ;=> #object[org.apache.mxnet.Context 0x3f376123 "cpu(0)"] -(def gpu-b (ndarray/zeros [100 200] {:ctx (context/gpu 0)})) ;; to use with gpu +(comment + (def gpu-b (ndarray/zeros [100 200] {:ctx (context/gpu 0)})) ;; to use with gpu +) -;;Currently, we do not allow operations among arrays from different contexts. To manually enable this, use the copyto function to copy the content to different devices, and continue computation: +;; Currently, we do not allow operations among arrays from different +;; contexts. To manually enable this, use the `copy-to` function to +;; copy the content to different devices, and continue computation. diff --git a/contrib/clojure-package/examples/tutorial/src/tutorial/symbol.clj b/contrib/clojure-package/examples/tutorial/src/tutorial/symbol.clj index bec71dee8..e88260069 100644 --- a/contrib/clojure-package/examples/tutorial/src/tutorial/symbol.clj +++ b/contrib/clojure-package/examples/tutorial/src/tutorial/symbol.clj @@ -16,79 +16,66 @@ ;; (ns tutorial.symbol + "A REPL tutorial of the MXNet Clojure Symbolic API, based on + https://mxnet.incubator.apache.org/api/clojure/symbol.html" (:require [org.apache.clojure-mxnet.executor :as executor] [org.apache.clojure-mxnet.ndarray :as ndarray] [org.apache.clojure-mxnet.symbol :as sym] [org.apache.clojure-mxnet.context :as context])) -;; How to compose symbols -;;The symbolic API provides a way to configure computation graphs. You can configure the graphs either at the level of neural network layer operations or as fine-grained operations. -;;The following example configures a two-layer neural network. 
+;;;; How to Compose Symbols +;; The symbolic API provides a way to configure computation +;; graphs. You can configure the graphs either at the level of neural +;; network layer operations or as fine-grained operations. + +;; The following example configures a two-layer neural network. (def data (sym/variable "data")) (def fc1 (sym/fully-connected "fc1" {:data data :num-hidden 128})) (def act1 (sym/activation "act1" {:data fc1 :act-type "relu"})) (def fc2 (sym/fully-connected "fc2" {:data act1 :num-hidden 64})) (def net (sym/softmax-output "out" {:data fc2})) -;; you could also combine this more dynamically with +;; This can also be combined more dynamically with the `as->` Clojure +;; threading form. (as-> (sym/variable "data") data (sym/fully-connected "fc1" {:data data :num-hidden 128}) - (sym/activation "act1" {:data data :act-type "relu"}) + (sym/activation "act1" {:data data :act-type "relu"}) (sym/fully-connected "fc2" {:data data :num-hidden 64}) - (sym/softmax-output "out" {:data data})) + (sym/softmax-output "out" {:data data})) net ;=> #object[org.apache.mxnet.Symbol 0x5c78c8c2 "org.apache.mxnet.Symbol@5c78c8c2"] - -;;The basic arithmetic operators (plus, minus, div, multiplication) - -;;The following example creates a computation graph that adds two inputs together. +;; The basic arithmetic operators (plus, minus, div, multiplication) +;; work as expected. The following example creates a computation graph +;; that adds two inputs together. (def a (sym/variable "a")) (def b (sym/variable "b")) (def c (sym/+ a b)) -;; Each symbol takes a (unique) string name. NDArray and Symbol both represent a single tensor. Operators represent the computation between tensors. Operators take symbol (or NDArray) as inputs and might also additionally accept other hyperparameters such as the number of hidden neurons (num_hidden) or the activation type (act_type) and produce the output. - -;; We can view a symbol simply as a function taking several arguments. And we can retrieve those arguments with the following method call: - -;;We can view a symbol simply as a function taking several arguments. And we can retrieve those arguments with the following method call: - -(sym/list-arguments net) - ;=> ["data" "fc1_weight" "fc1_bias" "fc2_weight" "fc2_bias" "out_label"] - -;; These arguments are the parameters and inputs needed by each symbol: - -;; data: Input data needed by the variable data. -;; fc1_weight and fc1_bias: The weight and bias for the first fully connected layer fc1. -;; fc2_weight and fc2_bias: The weight and bias for the second fully connected layer fc2. -;; out_label: The label needed by the loss. - -;;We can also specify the names explicitly: -(def net (sym/variable "data")) -(def w (sym/variable "myweight")) -(def net (sym/fully-connected "fc1" {:data net :weight w :num-hidden 128})) -(sym/list-arguments net) - ;=> ["data" "fc1_weight" "fc1_bias" "fc2_weight" "fc2_bias" "out_label" "myweight" "fc1_bias"] +;;;; More Complicated Compositions - -;;In the above example, FullyConnected layer has 3 inputs: data, weight, bias. When any input is not specified, a variable will be automatically generated for it. - - -;; More complicated composition - -;;MXNet provides well-optimized symbols for layers commonly used in deep learning (see src/operator). We can also define new operators in Python. 
The following example first performs an element-wise add between two symbols, then feeds them to the fully connected operator: +;; MXNet provides well-optimized symbols for layers commonly used in +;; deep learning (see src/operator). We can also define new operators +;; in Python. The following example first performs an element-wise add +;; between two symbols, then feeds them to the fully connected +;; operator: (def lhs (sym/variable "data1")) (def rhs (sym/variable "data2")) -(def net (sym/fully-connected "fc1" {:data (sym/+ lhs rhs) :num-hidden 128})) +(def net (sym/fully-connected "fc1" {:data (sym/+ lhs rhs) + :num-hidden 128})) (sym/list-arguments net) ;=> ["data1" "data2" "fc1_weight" "fc1_bias"] -;; Group Multiple Symbols -;;To construct neural networks with multiple loss layers, we can use mxnet.sym.Group to group multiple symbols together. The following example groups two outputs: + +;;;; Group Multiple Symbols + +;; To construct neural networks with multiple loss layers, we can use +;; `group` to group multiple symbols together. The following example +;; groups two outputs: (def net (sym/variable "data")) (def fc1 (sym/fully-connected {:data net :num-hidden 128})) @@ -96,56 +83,51 @@ net ;=> #object[org.apache.mxnet.Symbol 0x5c78c8c2 "org.apache.mxnet.Symbol@5c78 (def out1 (sym/softmax-output {:data net2})) (def out2 (sym/linear-regression-output {:data net2})) (def group (sym/group [out1 out2])) -(sym/list-outputs group);=> ["softmaxoutput0_output" "linearregressionoutput0_output"] +(sym/list-outputs group) ;=> ["softmaxoutput0_output" "linearregressionoutput0_output"] -;; Symbol Manipulation -;; One important difference of Symbol compared to NDArray is that we first declare the computation and then bind the computation with data to run. +;;;; Serialization -;; In this section, we introduce the functions to manipulate a symbol directly. But note that, most of them are wrapped by the module package. +;; You can use the `save` and `load` functions to serialize Symbol +;; objects as JSON. These functions have the advantage of being +;; language-agnostic and cloud-friendly. You can also get a JSON +;; string directly using `to-json`. -;; Shape and Type Inference -;; For each symbol, we can query its arguments, auxiliary states and outputs. We can also infer the output shape and type of the symbol given the known input shape or type of some arguments, which facilitates memory allocation. -(sym/list-arguments fc1) ;=> ["data" "fullyconnected1_weight" "fullyconnected1_bias"] -(sym/list-outputs fc1) ;=> ["fullyconnected1_output"] +;; The following example shows how to save a symbol to a file, load it +;; back, and compare two symbols using a JSON string. You can also +;; save to S3 as well. -;; infer the shapes given the shape of the input arguments -(let [[arg-shapes out-shapes] (sym/infer-shape fc1 {:data [2 1]})] - {:arg-shapes arg-shapes - :out-shapes out-shapes}) ;=> {:arg-shapes ([2 1] [128 1] [128]), :out-shapes ([2 128])} +(def a (sym/variable "a")) +(def b (sym/variable "b")) +(def c (sym/+ a b)) +(sym/save c "symbol-c.json") +(def c2 (sym/load "symbol-c.json")) +(= (sym/to-json c) (sym/to-json c2)) ;=>true -;; Bind with Data and Evaluate -;; The symbol c constructed above declares what computation should be run. To evaluate it, we first need to feed the arguments, namely free variables, with data. -;; We can do it by using the bind method, which accepts device context and a dict mapping free variable names to NDArrays as arguments and returns an executor. 
The executor provides forward method for evaluation and an attribute outputs to get all the results. +;;;; Executing Symbols + +;; To execute symbols, first we need to define the data that they +;; should run on. We can do this with the `bind` function, which +;; returns an executor. We then use `forward` to evaluate and +;; `outputs` to get the results. (def a (sym/variable "a")) (def b (sym/variable "b")) (def c (sym/+ a b)) -(def ex (sym/bind c {"a" (ndarray/ones [2 2]) "b" (ndarray/ones [2 2])})) +(def ex + (sym/bind c {"a" (ndarray/ones [2 2]) + "b" (ndarray/ones [2 2])})) + (-> (executor/forward ex) (executor/outputs) (first) (ndarray/->vec));=> [2.0 2.0 2.0 2.0] -;;We can evaluate the same symbol on GPU with different data. -;; To do this you must have the correct native library jar defined as a dependency - -;;Note In order to execute the following section on a cpu set gpu_device to (cpu). - - -(def ex (sym/bind c (context/gpu 0) {"a" (ndarray/ones [2 2]) "b" (ndarray/ones [2 2])})) - -;; Serialization -;; There are two ways to save and load the symbols. You can use the mxnet.Symbol.save and mxnet.Symbol.load functions to serialize the Symbol objects. The advantage of using save and load functions is that it is language agnostic and cloud friendly. The symbol is saved in JSON format. You can also get a JSON string directly using mxnet.Symbol.toJson. Refer to API documentation for more details. - -;; The following example shows how to save a symbol to a file, load it back, and compare two symbols using a JSON string. You can also save to S3 as well - -(def a (sym/variable "a")) -(def b (sym/variable "b")) -(def c (sym/+ a b)) -(sym/save c "symbol-c.json") -(def c2 (sym/load "symbol-c.json")) -(= (sym/to-json c) (sym/to-json c2)) ;=>true - +(comment + ;; We can evaluate the same symbol on GPU with different data. + ;; (To do this you must have the correct native library jar defined as a dependency.) + (def ex (sym/bind c (context/gpu 0) {"a" (ndarray/ones [2 2]) + "b" (ndarray/ones [2 2])})) +) diff --git a/contrib/clojure-package/examples/tutorial/test/tutorial/core_test.clj b/contrib/clojure-package/examples/tutorial/test/tutorial/core_test.clj new file mode 100644 index 000000000..0e5169c5c --- /dev/null +++ b/contrib/clojure-package/examples/tutorial/test/tutorial/core_test.clj @@ -0,0 +1,27 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. 
+;; + +(ns tutorial.core_test + (:require [clojure.test :refer :all]) + (:require + [tutorial.introduction] + [tutorial.kvstore] + [tutorial.module] + [tutorial.ndarray] + [tutorial.symbol])) + +(deftest if-this-goes-here-then-tutorials-have-loaded-properly (is true)) \ No newline at end of file diff --git a/contrib/clojure-package/examples/visualization/project.clj b/contrib/clojure-package/examples/visualization/project.clj index 18882d4ff..d91ace318 100644 --- a/contrib/clojure-package/examples/visualization/project.clj +++ b/contrib/clojure-package/examples/visualization/project.clj @@ -19,5 +19,5 @@ :description "Visualization example" :plugins [[lein-cljfmt "0.5.7"]] :dependencies [[org.clojure/clojure "1.9.0"] - [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.3.1-SNAPSHOT"]] + [org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT"]] :main visualization.core) diff --git a/contrib/clojure-package/examples/visualization/test/visualization/core_test.clj b/contrib/clojure-package/examples/visualization/test/visualization/core_test.clj new file mode 100644 index 000000000..1b10695cb --- /dev/null +++ b/contrib/clojure-package/examples/visualization/test/visualization/core_test.clj @@ -0,0 +1,28 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns visualization.core_test + (:require + [visualization.core :as visualization] + [clojure.test :refer :all])) + +(deftest check-pdf + (visualization/test-viz) + (let [new-pdf (clojure.java.io/as-file "testviz.pdf")] + (is (.exists new-pdf)) + (is (> 10000 (- (System/currentTimeMillis) (.lastModified new-pdf)))))) + \ No newline at end of file diff --git a/contrib/clojure-package/integration-tests.sh b/contrib/clojure-package/integration-tests.sh new file mode 100755 index 000000000..6e5868712 --- /dev/null +++ b/contrib/clojure-package/integration-tests.sh @@ -0,0 +1,28 @@ +#!/bin/bash +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
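+
+# For each directory named `test` under the Clojure example projects,
+# run `lein test` from inside it, one project after another.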
+ +set -evx + +MXNET_HOME=$(cd "$(dirname $0)/../.."; pwd) +EXAMPLES_HOME=${MXNET_HOME}/contrib/clojure-package/examples +#cd ${MXNET_HOME}/contrib/clojure-package +#lein test +#lein cloverage --codecov +for test_dir in `find ${EXAMPLES_HOME} -name test` ; do + cd ${test_dir} && lein test +done diff --git a/contrib/clojure-package/project.clj b/contrib/clojure-package/project.clj index 3779c82cf..c4428ce6e 100644 --- a/contrib/clojure-package/project.clj +++ b/contrib/clojure-package/project.clj @@ -15,8 +15,11 @@ ;; limitations under the License. ;; -(defproject org.apache.mxnet.contrib.clojure/clojure-mxnet "1.3.1-SNAPSHOT" +(defproject org.apache.mxnet.contrib.clojure/clojure-mxnet "1.5.0-SNAPSHOT" :description "Clojure package for MXNet" + :url "https://github.com/apache/incubator-mxnet" + :license {:name "Apache License" + :url "http://www.apache.org/licenses/LICENSE-2.0"} :dependencies [[org.clojure/clojure "1.9.0"] [t6/from-scala "0.3.0"] @@ -26,7 +29,7 @@ ;[org.apache.mxnet/mxnet-full_2.11-linux-x86_64-gpu "1.2.1"] ;;; CI - [org.apache.mxnet/mxnet-full_2.11-linux-x86_64-cpu "1.3.1-SNAPSHOT"] + [org.apache.mxnet/mxnet-full_2.11 "INTERNAL"] [org.clojure/tools.logging "0.4.0"] [org.apache.logging.log4j/log4j-core "2.8.1"] diff --git a/contrib/clojure-package/scripts/infer/get_resnet_18_data.sh b/contrib/clojure-package/scripts/infer/get_resnet_18_data.sh new file mode 100755 index 000000000..601f362c4 --- /dev/null +++ b/contrib/clojure-package/scripts/infer/get_resnet_18_data.sh @@ -0,0 +1,38 @@ +#!/bin/bash + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +set -evx + +if [ ! -z "$MXNET_HOME" ]; then + data_path="$MXNET_HOME/data" +else + MXNET_ROOT=$(cd "$(dirname $0)/../.."; pwd) + data_path="$MXNET_ROOT/data" +fi + +if [ ! -d "$data_path" ]; then + mkdir -p "$data_path" +fi + +resnet_18_data_path="$data_path/resnet-18" +if [ ! -f "$resnet_18_data_path/resnet-18-0000.params" ]; then + wget https://s3.us-east-2.amazonaws.com/scala-infer-models/resnet-18/resnet-18-symbol.json -P $resnet_18_data_path + wget https://s3.us-east-2.amazonaws.com/scala-infer-models/resnet-18/resnet-18-0000.params -P $resnet_18_data_path + wget https://s3.us-east-2.amazonaws.com/scala-infer-models/resnet-18/synset.txt -P $resnet_18_data_path +fi diff --git a/contrib/clojure-package/scripts/infer/get_ssd_data.sh b/contrib/clojure-package/scripts/infer/get_ssd_data.sh new file mode 100755 index 000000000..96e27a12d --- /dev/null +++ b/contrib/clojure-package/scripts/infer/get_ssd_data.sh @@ -0,0 +1,39 @@ +#!/bin/bash + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. 
The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + + +set -evx + +if [ ! -z "$MXNET_HOME" ]; then + data_path="$MXNET_HOME/data" +else + MXNET_ROOT=$(cd "$(dirname $0)/../.."; pwd) + data_path="$MXNET_ROOT/data" +fi + +if [ ! -d "$data_path" ]; then + mkdir -p "$data_path" +fi + +resnet50_ssd_data_path="$data_path/resnet50_ssd" +if [ ! -f "$resnet50_ssd_data_path/resnet50_ssd_model-0000.params" ]; then + wget https://s3.amazonaws.com/model-server/models/resnet50_ssd/resnet50_ssd_model-symbol.json -P $resnet50_ssd_data_path + wget https://s3.amazonaws.com/model-server/models/resnet50_ssd/resnet50_ssd_model-0000.params -P $resnet50_ssd_data_path + wget https://s3.amazonaws.com/model-server/models/resnet50_ssd/synset.txt -P $resnet50_ssd_data_path +fi diff --git a/ci/docker/qemu/qemu_run.sh b/contrib/clojure-package/scripts/update_versions.sh similarity index 70% rename from ci/docker/qemu/qemu_run.sh rename to contrib/clojure-package/scripts/update_versions.sh index 53a6487aa..607e3f357 100755 --- a/ci/docker/qemu/qemu_run.sh +++ b/contrib/clojure-package/scripts/update_versions.sh @@ -17,15 +17,11 @@ # specific language governing permissions and limitations # under the License. -set -exuo pipefail +# Run this from the main Clojure project directory with 2 arguments: +# old-version and new-version +# Ex: scripts/update_versions.sh 1.3.1-SNAPSHOT 1.5.0-SNAPSHOT -qemu-system-arm -M virt -m 1024 \ - -kernel vmlinuz \ - -initrd initrd.img \ - -append 'root=/dev/vda1' \ - -drive if=none,file=vda.qcow2,format=qcow2,id=hd \ - -device virtio-blk-device,drive=hd \ - -netdev user,id=mynet,hostfwd=tcp::2222-:22 \ - -device virtio-net-device,netdev=mynet \ - -nographic \ - -display none +set -evx +echo "Replacing $1 with $2 in the directory $PWD" +find ./ -type f -exec sed -i '' -e "s/$1/$2/g" {} \; +echo "Done!
Check the changed files" diff --git a/contrib/clojure-package/src/dev/generator.clj b/contrib/clojure-package/src/dev/generator.clj index 1f0418951..ca93c3421 100644 --- a/contrib/clojure-package/src/dev/generator.clj +++ b/contrib/clojure-package/src/dev/generator.clj @@ -212,11 +212,11 @@ (.write w "\n")))) (def symbol-gen-ns "(ns org.apache.clojure-mxnet.symbol - (:refer-clojure :exclude [* - + > >= < <= / cast concat identity flatten load max - min repeat reverse set sort take to-array empty sin - get apply shuffle]) - (:require [org.apache.clojure-mxnet.util :as util]) - (:import (org.apache.mxnet Symbol)))") + (:refer-clojure :exclude [* - + > >= < <= / cast concat identity flatten load max + min repeat reverse set sort take to-array empty sin + get apply shuffle ref]) + (:require [org.apache.clojure-mxnet.util :as util]) + (:import (org.apache.mxnet Symbol)))") (defn generate-symbol-file [] @@ -307,7 +307,8 @@ (def ndarray-gen-ns "(ns org.apache.clojure-mxnet.ndarray (:refer-clojure :exclude [* - + > >= < <= / cast concat flatten identity load max - min repeat reverse set sort take to-array empty shuffle]) + min repeat reverse set sort take to-array empty shuffle + ref]) (:import (org.apache.mxnet NDArray Shape)))") diff --git a/contrib/clojure-package/src/org/apache/clojure_mxnet/image.clj b/contrib/clojure-package/src/org/apache/clojure_mxnet/image.clj new file mode 100644 index 000000000..e2e87ed47 --- /dev/null +++ b/contrib/clojure-package/src/org/apache/clojure_mxnet/image.clj @@ -0,0 +1,139 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns org.apache.clojure-mxnet.image + (:require [t6.from-scala.core :refer [$ $$] :as $] + [org.apache.clojure-mxnet.dtype :as dtype] + [org.apache.clojure-mxnet.ndarray :as ndarray] + [org.apache.clojure-mxnet.util :as util] + [clojure.spec.alpha :as s]) + (:import (org.apache.mxnet Image NDArray) + (java.io InputStream))) + +;; Flags for conversion of images +(def GRAYSCALE 0) +(def COLOR 1) + +(s/def ::input-stream #(instance? InputStream %)) +(s/def ::color-flag #{GRAYSCALE COLOR}) +(s/def ::to-rgb boolean?) +(s/def ::ndarray #(instance? NDArray %)) +(s/def ::output (s/or :empty nil? :ndarray ::ndarray)) +(s/def ::decode-image-opts + (s/keys :opt-un [::color-flag ::to-rgb ::output])) + +(defn decode-image + "Decodes an image from an input stream" + ([input-stream {:keys [color-flag to-rgb output] + :or {color-flag COLOR to-rgb true output nil} + :as opts}] + (util/validate! ::input-stream input-stream "Invalid input stream") + (util/validate! ::decode-image-opts opts "Invalid options for decoding") + (Image/imDecode input-stream color-flag to-rgb ($/option output))) + ([input-stream] + (decode-image input-stream {}))) + +(s/def ::filename string?) 
+(s/def ::optional-color-flag + (s/or :none nil? :some ::color-flag)) +(s/def ::optional-to-rgb + (s/or :none nil? :some ::to-rgb)) + +(defn read-image + "Reads an image file and returns an ndarray" + ([filename {:keys [color-flag to-rgb output] + :or {color-flag nil to-rgb nil output nil} + :as opts}] + (util/validate! ::filename filename "Invalid filename") + (util/validate! ::optional-color-flag color-flag "Invalid color flag") + (util/validate! ::optional-to-rgb to-rgb "Invalid conversion flag") + (util/validate! ::output output "Invalid output") + (Image/imRead + filename + ($/option color-flag) + ($/option to-rgb) + ($/option output))) + ([filename] + (read-image filename {}))) + +(s/def ::int int?) +(s/def ::optional-int (s/or :none nil? :some int?)) + +(defn resize-image + "Resizes the image array to (width, height)" + ([input w h {:keys [interpolation output] + :or {interpolation nil output nil} + :as opts}] + (util/validate! ::ndarray input "Invalid input array") + (util/validate! ::int w "Invalid width") + (util/validate! ::int h "Invalid height") + (util/validate! ::optional-int interpolation "Invalid interpolation") + (util/validate! ::output output "Invalid output") + (Image/imResize input w h ($/option interpolation) ($/option output))) + ([input w h] + (resize-image input w h {}))) + +(defn apply-border + "Pad image border" + ([input top bottom left right + {:keys [fill-type value values output] + :or {fill-type nil value nil values nil output nil} + :as opts}] + (util/validate! ::ndarray input "Invalid input array") + (util/validate! ::int top "Invalid top margin") + (util/validate! ::int bottom "Invalid bottom margin") + (util/validate! ::int left "Invalid left margin") + (util/validate! ::int right "Invalid right margin") + (util/validate! ::optional-int fill-type "Invalid fill type") + (util/validate! ::output output "Invalid output") + (Image/copyMakeBorder input top bottom left right + ($/option fill-type) + ($/option value) + ($/option values) + ($/option output))) + ([input top bottom left right] + (apply-border input top bottom left right {}))) + +(defn fixed-crop + "Return a fixed crop of the image" + [input x0 y0 w h] + (util/validate! ::ndarray input "Invalid input array") + (util/validate! ::int x0 "Invalid starting x coordinate") + (util/validate! ::int y0 "Invalid starting y coordinate") + (util/validate! ::int w "Invalid width") + (util/validate! ::int h "Invalid height") + (Image/fixedCrop input x0 y0 w h)) + +(defn rgb-array? + "Returns whether the ndarray is in the RGB format" + [input] + (util/validate! ::ndarray input "Invalid input array") + (let [shape (ndarray/shape-vec input)] + (and + (= 3 (count shape)) + (= 3 (shape 2))))) + +(s/def ::all-bytes #(= dtype/UINT8 (ndarray/dtype %))) +(s/def ::rgb-array rgb-array?) +(s/def ::to-image-ndarray + (s/and ::ndarray ::all-bytes ::rgb-array)) + +(defn to-image + "Convert a NDArray image in RGB format to a real image" + [input] + (util/validate! ::to-image-ndarray input "Invalid input array") + (Image/toImage input)) diff --git a/contrib/clojure-package/src/org/apache/clojure_mxnet/infer.clj b/contrib/clojure-package/src/org/apache/clojure_mxnet/infer.clj new file mode 100644 index 000000000..09edf15b4 --- /dev/null +++ b/contrib/clojure-package/src/org/apache/clojure_mxnet/infer.clj @@ -0,0 +1,372 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. 
See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns org.apache.clojure-mxnet.infer + (:refer-clojure :exclude [type]) + (:require [org.apache.clojure-mxnet.context :as context] + [org.apache.clojure-mxnet.dtype :as dtype] + [org.apache.clojure-mxnet.io :as mx-io] + [org.apache.clojure-mxnet.shape :as shape] + [org.apache.clojure-mxnet.util :as util] + [clojure.spec.alpha :as s] + [org.apache.clojure-mxnet.shape :as mx-shape]) + (:import (java.awt.image BufferedImage) + (org.apache.mxnet NDArray) + (org.apache.mxnet.infer Classifier ImageClassifier + ObjectDetector Predictor))) + +(s/def ::predictor #(instance? Predictor %)) +(s/def ::classifier #(instance? Classifier %)) +(s/def ::image-classifier #(instance? ImageClassifier %)) +(s/def ::object-detector #(instance? ObjectDetector %)) + +(defrecord WrappedPredictor [predictor]) +(defrecord WrappedClassifier [classifier]) +(defrecord WrappedImageClassifier [image-classifier]) +(defrecord WrappedObjectDetector [object-detector]) + +(s/def ::ndarray #(instance? NDArray %)) +(s/def ::number-array (s/coll-of number? :kind vector?)) +(s/def ::vvec-of-numbers (s/coll-of ::number-array :kind vector?)) +(s/def ::vec-of-ndarrays (s/coll-of ::ndarray :kind vector?)) +(s/def ::image #(instance? 
BufferedImage %)) +(s/def ::batch-images (s/coll-of ::image :kind vector?)) + +(s/def ::wrapped-predictor (s/keys :req-un [::predictor])) +(s/def ::wrapped-classifier (s/keys :req-un [::classifier])) +(s/def ::wrapped-image-classifier (s/keys :req-un [::image-classifier])) +(s/def ::wrapped-detector (s/keys :req-un [::object-detector])) + +(defn- format-detection-predictions [predictions] + (mapv (fn [[c p]] + (let [[prob xmin ymin xmax ymax] (mapv float p)] + {:class c :prob prob :x-min xmin :y-min ymin :x-max xmax :y-max ymax})) + predictions)) + +(defn- format-classification-predictions [predictions] + (mapv (fn [[c p]] {:class c :prob p}) predictions)) + +(defprotocol APredictor + (predict [wrapped-predictor inputs]) + (predict-with-ndarray [wrapped-predictor input-arrays])) + +(defprotocol AClassifier + (classify + [wrapped-classifier inputs] + [wrapped-classifier inputs topk]) + (classify-with-ndarray + [wrapped-classifier inputs] + [wrapped-classifier inputs topk])) + +(defprotocol AImageClassifier + (classify-image + [wrapped-image-classifier image] + [wrapped-image-classifier image topk] + [wrapped-image-classifier image topk dtype]) + (classify-image-batch + [wrapped-image-classifier images] + [wrapped-image-classifier images topk] + [wrapped-image-classifier images topk dtype])) + +(defprotocol AObjectDetector + (detect-objects + [wrapped-detector image] + [wrapped-detector image topk]) + (detect-objects-batch + [wrapped-detector images] + [wrapped-detector images topk]) + (detect-objects-with-ndarrays + [wrapped-detector input-arrays] + [wrapped-detector input-arrays topk])) + +(extend-protocol APredictor + WrappedPredictor + (predict + [wrapped-predictor inputs] + (util/validate! ::wrapped-predictor wrapped-predictor + "Invalid predictor") + (util/validate! ::vvec-of-numbers inputs + "Invalid inputs") + (->> (.predict (:predictor wrapped-predictor) + (util/vec->indexed-seq (mapv float-array inputs))) + (util/coerce-return-recursive) + (mapv #(mapv float %)))) + (predict-with-ndarray [wrapped-predictor input-arrays] + (util/validate! ::wrapped-predictor wrapped-predictor + "Invalid predictor") + (util/validate! ::vec-of-ndarrays input-arrays + "Invalid input arrays") + (-> (.predictWithNDArray (:predictor wrapped-predictor) + (util/vec->indexed-seq input-arrays)) + (util/coerce-return-recursive)))) + +(s/def ::nil-or-int (s/nilable int?)) + +(extend-protocol AClassifier + WrappedClassifier + (classify + ([wrapped-classifier inputs] + (classify wrapped-classifier inputs nil)) + ([wrapped-classifier inputs topk] + (util/validate! ::wrapped-classifier wrapped-classifier + "Invalid classifier") + (util/validate! ::vvec-of-numbers inputs + "Invalid inputs") + (util/validate! ::nil-or-int topk "Invalid top-K") + (->> (.classify (:classifier wrapped-classifier) + (util/vec->indexed-seq (mapv float-array inputs)) + (util/->int-option topk)) + (util/coerce-return-recursive) + (format-classification-predictions)))) + (classify-with-ndarray + ([wrapped-classifier inputs] + (classify-with-ndarray wrapped-classifier inputs nil)) + ([wrapped-classifier inputs topk] + (util/validate! ::wrapped-classifier wrapped-classifier + "Invalid classifier") + (util/validate! ::vec-of-ndarrays inputs + "Invalid inputs") + (util/validate! 
::nil-or-int topk "Invalid top-K") + (->> (.classifyWithNDArray (:classifier wrapped-classifier) + (util/vec->indexed-seq inputs) + (util/->int-option topk)) + (util/coerce-return-recursive) + (mapv format-classification-predictions)))) + WrappedImageClassifier + (classify + ([wrapped-image-classifier inputs] + (classify wrapped-image-classifier inputs nil)) + ([wrapped-image-classifier inputs topk] + (util/validate! ::wrapped-image-classifier wrapped-image-classifier + "Invalid classifier") + (util/validate! ::vvec-of-numbers inputs + "Invalid inputs") + (util/validate! ::nil-or-int topk "Invalid top-K") + (->> (.classify (:image-classifier wrapped-image-classifier) + (util/vec->indexed-seq (mapv float-array inputs)) + (util/->int-option topk)) + (util/coerce-return-recursive) + (format-classification-predictions)))) + (classify-with-ndarray + ([wrapped-image-classifier inputs] + (classify-with-ndarray wrapped-image-classifier inputs nil)) + ([wrapped-image-classifier inputs topk] + (util/validate! ::wrapped-image-classifier wrapped-image-classifier + "Invalid classifier") + (util/validate! ::vec-of-ndarrays inputs + "Invalid inputs") + (util/validate! ::nil-or-int topk "Invalid top-K") + (->> (.classifyWithNDArray (:image-classifier wrapped-image-classifier) + (util/vec->indexed-seq inputs) + (util/->int-option topk)) + (util/coerce-return-recursive) + (mapv format-classification-predictions))))) + +(s/def ::image #(instance? BufferedImage %)) +(s/def ::dtype #{dtype/UINT8 dtype/INT32 dtype/FLOAT16 dtype/FLOAT32 dtype/FLOAT64}) + +(extend-protocol AImageClassifier + WrappedImageClassifier + (classify-image + ([wrapped-image-classifier image] + (classify-image wrapped-image-classifier image nil dtype/FLOAT32)) + ([wrapped-image-classifier image topk] + (classify-image wrapped-image-classifier image topk dtype/FLOAT32)) + ([wrapped-image-classifier image topk dtype] + (util/validate! ::wrapped-image-classifier wrapped-image-classifier + "Invalid classifier") + (util/validate! ::image image "Invalid image") + (util/validate! ::nil-or-int topk "Invalid top-K") + (util/validate! ::dtype dtype "Invalid dtype") + (->> (.classifyImage (:image-classifier wrapped-image-classifier) + image + (util/->int-option topk) + dtype) + (util/coerce-return-recursive) + (mapv format-classification-predictions)))) + (classify-image-batch + ([wrapped-image-classifier images] + (classify-image-batch wrapped-image-classifier images nil dtype/FLOAT32)) + ([wrapped-image-classifier images topk] + (classify-image-batch wrapped-image-classifier images topk dtype/FLOAT32)) + ([wrapped-image-classifier images topk dtype] + (util/validate! ::wrapped-image-classifier wrapped-image-classifier + "Invalid classifier") + (util/validate! ::batch-images images "Invalid Batch Images") + (util/validate! ::nil-or-int topk "Invalid top-K") + (util/validate! ::dtype dtype "Invalid dtype") + (->> (.classifyImageBatch (:image-classifier wrapped-image-classifier) + (util/vec->indexed-seq images) + (util/->int-option topk) + dtype) + (util/coerce-return-recursive) + (mapv format-classification-predictions))))) + +(extend-protocol AObjectDetector + WrappedObjectDetector + (detect-objects + ([wrapped-detector image] + (detect-objects wrapped-detector image nil)) + ([wrapped-detector image topk] + (util/validate! ::wrapped-detector wrapped-detector + "Invalid object detector") + (util/validate! ::image image "Invalid image") + (util/validate! 
::nil-or-int topk "Invalid top-K") + (->> (.imageObjectDetect (:object-detector wrapped-detector) + image + (util/->int-option topk)) + (util/coerce-return-recursive) + (mapv format-detection-predictions)))) + (detect-objects-batch + ([wrapped-detector images] + (detect-objects-batch wrapped-detector images nil)) + ([wrapped-detector images topk] + (util/validate! ::wrapped-detector wrapped-detector + "Invalid object detector") + (util/validate! ::nil-or-int topk "Invalid top-K") + (util/validate! ::batch-images images "Invalid Batch Images") + (->> (.imageBatchObjectDetect (:object-detector wrapped-detector) + (util/vec->indexed-seq images) + (util/->int-option topk)) + (util/coerce-return-recursive) + (mapv format-detection-predictions)))) + (detect-objects-with-ndarrays + ([wrapped-detector input-arrays] + (detect-objects-with-ndarrays wrapped-detector input-arrays nil)) + ([wrapped-detector input-arrays topk] + (util/validate! ::wrapped-detector wrapped-detector + "Invalid object detector") + (util/validate! ::vec-of-ndarrays input-arrays + "Invalid inputs") + (util/validate! ::nil-or-int topk "Invalid top-K") + (->> (.objectDetectWithNDArray (:object-detector wrapped-detector) + (util/vec->indexed-seq input-arrays) + (util/->int-option topk)) + (util/coerce-return-recursive) + (mapv format-detection-predictions))))) + +(defprotocol AInferenceFactory + (create-predictor [factory] [factory opts]) + (create-classifier [factory] [factory opts]) + (create-image-classifier [factory] [factory opts]) + (create-object-detector [factory] [factory opts])) + +(defn convert-descriptors + [descriptors] + (util/vec->indexed-seq + (into [] (map mx-io/data-desc descriptors)))) + +(defrecord InferenceFactory [model-path-prefix input-descriptors] + AInferenceFactory + (create-predictor + [factory] + (create-predictor factory {})) + (create-predictor + [factory opts] + (let [{:keys [contexts epoch] + :or {contexts [(context/cpu)] epoch 0}} opts] + (->WrappedPredictor + (new Predictor + model-path-prefix + (convert-descriptors input-descriptors) + (into-array contexts) + (util/->int-option epoch))))) + (create-classifier + [factory] + (create-classifier factory {})) + (create-classifier + [factory opts] + (let [{:keys [contexts epoch] + :or {contexts [(context/cpu)] epoch 0}} opts] + (->WrappedClassifier + (new Classifier + model-path-prefix + (convert-descriptors input-descriptors) + (into-array contexts) + (util/->int-option epoch))))) + (create-image-classifier + [factory] + (create-image-classifier factory {})) + (create-image-classifier + [factory opts] + (let [{:keys [contexts epoch] + :or {contexts [(context/cpu)] epoch 0}} opts] + (->WrappedImageClassifier + (new ImageClassifier + model-path-prefix + (convert-descriptors input-descriptors) + (into-array contexts) + (util/->int-option epoch))))) + (create-object-detector + [factory] + (create-object-detector factory {})) + (create-object-detector + [factory opts] + (let [{:keys [contexts epoch] + :or {contexts [(context/cpu)] epoch 0}} opts] + (->WrappedObjectDetector + (new ObjectDetector + model-path-prefix + (convert-descriptors input-descriptors) + (into-array contexts) + (util/->int-option epoch)))))) + +(s/def ::model-path-prefix string?) +(s/def ::input-descriptors (s/coll-of ::mx-io/data-desc)) + +(defn model-factory + "Creates a factory that can be used to instantiate an image classifier + predictor or object detector" + [model-path-prefix input-descriptors] + (util/validate! 
::model-path-prefix model-path-prefix + "Invalid model path prefix") + (util/validate! ::input-descriptors input-descriptors + "Invalid input descriptors") + (->InferenceFactory model-path-prefix input-descriptors)) + +(defn reshape-image + "Reshape an image to a new shape" + [image width height] + (util/validate! ::image image "Invalid image") + (util/validate! int? width "Invalid width") + (util/validate! int? height "Invalid height") + (ImageClassifier/reshapeImage image width height)) + +(defn buffered-image-to-pixels + "Convert input BufferedImage to NDArray of input shape" + ([image input-shape-vec] + (buffered-image-to-pixels image input-shape-vec dtype/FLOAT32)) + ([image input-shape-vec dtype] + (util/validate! ::image image "Invalid image") + (util/validate! (s/coll-of int?) input-shape-vec "Invalid shape vector") + (ImageClassifier/bufferedImageToPixels image (shape/->shape input-shape-vec) dtype))) + +(s/def ::image-path string?) +(s/def ::image-paths (s/coll-of ::image-path)) + +(defn load-image-from-file + "Loads an input image given a file name" + [image-path] + (util/validate! ::image-path image-path "Invalid image path") + (ImageClassifier/loadImageFromFile image-path)) + +(defn load-image-paths + "Loads images from a list of file names" + [image-paths] + (util/validate! ::image-paths image-paths "Invalid image paths") + (util/scala-vector->vec + (ImageClassifier/loadInputBatch (util/convert-vector image-paths)))) diff --git a/contrib/clojure-package/src/org/apache/clojure_mxnet/ndarray.clj b/contrib/clojure-package/src/org/apache/clojure_mxnet/ndarray.clj index e37a8bc8c..651bdcb3f 100644 --- a/contrib/clojure-package/src/org/apache/clojure_mxnet/ndarray.clj +++ b/contrib/clojure-package/src/org/apache/clojure_mxnet/ndarray.clj @@ -17,7 +17,8 @@ (ns org.apache.clojure-mxnet.ndarray (:refer-clojure :exclude [* - + > >= < <= / cast concat flatten identity load max - min repeat reverse set sort take to-array empty shuffle]) + min repeat reverse set sort take to-array empty shuffle + ref]) (:require [org.apache.clojure-mxnet.base :as base] [org.apache.clojure-mxnet.context :as mx-context] [org.apache.clojure-mxnet.shape :as mx-shape] diff --git a/contrib/clojure-package/src/org/apache/clojure_mxnet/optimizer.clj b/contrib/clojure-package/src/org/apache/clojure_mxnet/optimizer.clj index f18ff40f5..672090a89 100644 --- a/contrib/clojure-package/src/org/apache/clojure_mxnet/optimizer.clj +++ b/contrib/clojure-package/src/org/apache/clojure_mxnet/optimizer.clj @@ -17,7 +17,19 @@ (ns org.apache.clojure-mxnet.optimizer (:refer-clojure :exclude [update]) - (:import (org.apache.mxnet.optimizer SGD DCASGD NAG AdaDelta RMSProp AdaGrad Adam SGLD))) + (:require + [clojure.spec.alpha :as s] + [org.apache.clojure-mxnet.util :as util]) + (:import + (org.apache.mxnet.optimizer SGD DCASGD NAG AdaDelta RMSProp AdaGrad Adam SGLD) + (org.apache.mxnet FactorScheduler))) + +(s/def ::learning-rate number?) +(s/def ::momentum number?) +(s/def ::wd number?) +(s/def ::clip-gradient number?) +(s/def ::lr-scheduler #(instance? FactorScheduler %)) +(s/def ::sgd-opts (s/keys :opt-un [::learning-rate ::momentum ::wd ::clip-gradient ::lr-scheduler])) (defn sgd "A very simple SGD optimizer with momentum and weight regularization." @@ -26,10 +38,14 @@ momentum 0.0 wd 0.0001 clip-gradient 0}}] + (util/validate! ::sgd-opts opts "Incorrect sgd optimizer options") (new SGD (float learning-rate) (float momentum) (float wd) (float clip-gradient) lr-scheduler)) ([] (sgd {}))) +(s/def ::lambda number?) 
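A short sketch of what the optimizer option specs above add in practice (assumed REPL usage; the option values are illustrative, and the failing call mirrors the optimizer tests later in this patch):

(require '[org.apache.clojure-mxnet.optimizer :as optimizer])

;; Well-formed options pass ::sgd-opts validation and build an SGD optimizer.
(optimizer/sgd {:learning-rate 0.01 :momentum 0.9})

;; A non-numeric option now fails fast in util/validate! with a spec error,
;; rather than as a ClassCastException at the (float ...) coercion.
(optimizer/sgd {:wd 'a}) ; => throws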
+(s/def ::dcasgd-opts (s/keys :opt-un [::learning-rate ::momentum ::lambda ::wd ::clip-gradient ::lr-scheduler])) + (defn dcasgd "DCASGD optimizer with momentum and weight regularization. Implementation of paper 'Asynchronous Stochastic Gradient Descent with @@ -40,10 +56,13 @@ lambda 0.04 wd 0.0 clip-gradient 0}}] + (util/validate! ::dcasgd-opts opts "Incorrect dcasgd optimizer options") (new DCASGD (float learning-rate) (float lambda) (float momentum) (float wd) (float clip-gradient) lr-scheduler)) ([] (dcasgd {}))) +(s/def ::nag-opts (s/keys :opt-un [::learning-rate ::momentum ::wd ::clip-gradient ::lr-scheduler])) + (defn nag "SGD with nesterov. It is implemented according to @@ -53,10 +72,16 @@ momentum 0.0 wd 0.0001 clip-gradient 0}}] + (util/validate! ::nag-opts opts "Incorrect nag optimizer options") (new NAG (float learning-rate) (float momentum) (float wd) (float clip-gradient) lr-scheduler)) ([] (nag {}))) +(s/def ::rho number?) +(s/def ::rescale-gradient number?) +(s/def ::epsilon number?) +(s/def ::ada-delta-opts (s/keys :opt-un [::rho ::rescale-gradient ::epsilon ::wd ::clip-gradient])) + (defn ada-delta "AdaDelta optimizer as described in Matthew D. Zeiler, 2012. http://arxiv.org/abs/1212.5701" @@ -66,10 +91,15 @@ epsilon 1e-8 wd 0.0 clip-gradient 0}}] + (util/validate! ::ada-delta-opts opts "Incorrect ada-delta optimizer options") (new AdaDelta (float rho) (float rescale-gradient) (float epsilon) (float wd) (float clip-gradient))) ([] (ada-delta {}))) +(s/def ::gamma1 number?) +(s/def ::gamma2 number?) +(s/def ::rms-prop-opts (s/keys :opt-un [::learning-rate ::rescale-gradient ::gamma1 ::gamma2 ::wd ::clip-gradient])) + (defn rms-prop "RMSProp optimizer as described in Tieleman & Hinton, 2012. http://arxiv.org/pdf/1308.0850v5.pdf Eq(38) - Eq(45) by Alex Graves, 2013. @@ -80,18 +110,21 @@ - wd L2 regularization coefficient add to all the weights - clip-gradient clip gradient in range [-clip_gradient, clip_gradient] - lr-scheduler The learning rate scheduler" - ([{:keys [learning-rate rescale-gradient gamma1 gamma2 wd lr-scheduler clip-gradient] + ([{:keys [learning-rate rescale-gradient gamma1 gamma2 wd lr-scheduler clip-gradient] :as opts :or {learning-rate 0.002 rescale-gradient 1.0 gamma1 0.95 gamma2 0.9 wd 0.0 clip-gradient 0}}] + (util/validate! ::rms-prop-opts opts "Incorrect rms-prop optimizer options") (new RMSProp (float learning-rate) (float rescale-gradient) (float gamma1) (float gamma2) (float wd) lr-scheduler (float clip-gradient))) ([] (rms-prop {}))) +(s/def ::ada-grad-opts (s/keys :opt-un [::learning-rate ::rescale-gradient ::epsilon ::wd])) + (defn ada-grad " AdaGrad optimizer as described in Duchi, Hazan and Singer, 2011. http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf @@ -101,15 +134,20 @@ Default value is set to 1e-7. - rescale-gradient rescaling factor of gradient. - wd L2 regularization coefficient add to all the weights" - ([{:keys [learning-rate rescale-gradient epsilon wd] + ([{:keys [learning-rate rescale-gradient epsilon wd] :as opts :or {learning-rate 0.05 rescale-gradient 1.0 epsilon 1e-7 wd 0.0}}] + (util/validate! ::ada-grad-opts opts "Incorrect ada-grad optimizer options") (new AdaGrad (float learning-rate) (float rescale-gradient) (float epsilon) (float wd))) ([] (ada-grad {}))) +(s/def ::beta1 number?) +(s/def ::beta2 number?)
+(s/def ::adam-opts (s/keys :opt-un [::learning-rate ::beta1 ::beta2 ::epsilon ::decay-factor ::wd ::clip-gradient ::lr-scheduler])) + (defn adam "Adam optimizer as described in [King2014] @@ -125,7 +163,7 @@ - wd L2 regularization coefficient add to all the weights - clip-gradient clip gradient in range [-clip_gradient, clip_gradient] - lr-scheduler The learning rate scheduler" - ([{:keys [learning-rate beta1 beta2 epsilon decay-factor wd clip-gradient lr-scheduler] + ([{:keys [learning-rate beta1 beta2 epsilon decay-factor wd clip-gradient lr-scheduler] :as opts :or {learning-rate 0.002 beta1 0.9 beta2 0.999 @@ -133,11 +171,14 @@ decay-factor (- 1 1e-8) wd 0 clip-gradient 0}}] + (util/validate! ::adam-opts opts "Incorrect adam optimizer options") (new Adam (float learning-rate) (float beta1) (float beta2) (float epsilon) (float decay-factor) (float wd) (float clip-gradient) lr-scheduler)) ([] (adam {}))) +(s/def ::sgld-opts (s/keys :opt-un [::learning-rate ::rescale-gradient ::wd ::clip-gradient ::lr-scheduler])) + (defn sgld "Stochastic Langevin Dynamics Updater to sample from a distribution. @@ -146,11 +187,12 @@ - wd L2 regularization coefficient add to all the weights - clip-gradient Float, clip gradient in range [-clip_gradient, clip_gradient] - lr-scheduler The learning rate scheduler" - ([{:keys [learning-rate rescale-gradient wd clip-gradient lr-scheduler] + ([{:keys [learning-rate rescale-gradient wd clip-gradient lr-scheduler] :as opts :or {learning-rate 0.01 rescale-gradient 1 wd 0.0001 clip-gradient 0}}] + (util/validate! ::sgld-opts opts "Incorrect sgld optimizer options") (new SGLD (float learning-rate) (float rescale-gradient) (float wd) (float clip-gradient) lr-scheduler)) ([] diff --git a/contrib/clojure-package/src/org/apache/clojure_mxnet/primitives.clj b/contrib/clojure-package/src/org/apache/clojure_mxnet/primitives.clj new file mode 100644 index 000000000..0967df228 --- /dev/null +++ b/contrib/clojure-package/src/org/apache/clojure_mxnet/primitives.clj @@ -0,0 +1,46 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns org.apache.clojure-mxnet.primitives + (:import (org.apache.mxnet MX_PRIMITIVES$MX_FLOAT MX_PRIMITIVES$MX_Double + MX_PRIMITIVES$MX_PRIMITIVE_TYPE))) + + +;;; Defines customer mx primitives that can be used for mathematical computations +;;; in NDArrays to control precision. 
Currently Float and Double are supported + +;;; For purposes of automatic conversion in ndarray functions, doubles are default +;; to specify using floats you must use a Float + +(defn mx-float + "Creates a MXNet float primitive" + [num] + (new MX_PRIMITIVES$MX_FLOAT num)) + +(defn mx-double + "Creates a MXNet double primitive" + [num] + (new MX_PRIMITIVES$MX_Double num)) + +(defn ->num + "Returns the underlying number value" + [primitive] + (.data primitive)) + +(defn primitive? [x] + (instance? MX_PRIMITIVES$MX_PRIMITIVE_TYPE x)) + diff --git a/contrib/clojure-package/src/org/apache/clojure_mxnet/random.clj b/contrib/clojure-package/src/org/apache/clojure_mxnet/random.clj index d6e33789a..0ec2039ba 100644 --- a/contrib/clojure-package/src/org/apache/clojure_mxnet/random.clj +++ b/contrib/clojure-package/src/org/apache/clojure_mxnet/random.clj @@ -16,8 +16,18 @@ ;; (ns org.apache.clojure-mxnet.random - (:require [org.apache.clojure-mxnet.shape :as mx-shape]) - (:import (org.apache.mxnet Random))) + (:require + [org.apache.clojure-mxnet.shape :as mx-shape] + [org.apache.clojure-mxnet.context :as context] + [clojure.spec.alpha :as s] + [org.apache.clojure-mxnet.util :as util]) + (:import (org.apache.mxnet Context Random))) + +(s/def ::low number?) +(s/def ::high number?) +(s/def ::shape-vec (s/coll-of pos-int? :kind vector?)) +(s/def ::ctx #(instance? Context %)) +(s/def ::uniform-opts (s/keys :opt-un [::ctx])) (defn uniform "Generate uniform distribution in [low, high) with shape. @@ -29,10 +39,18 @@ out: Output place holder} returns: The result ndarray with generated result./" ([low high shape-vec {:keys [ctx out] :as opts}] + (util/validate! ::uniform-opts opts "Incorrect random uniform parameters") + (util/validate! ::low low "Incorrect random uniform parameter") + (util/validate! ::high high "Incorrect random uniform parameters") + (util/validate! ::shape-vec shape-vec "Incorrect random uniform parameters") (Random/uniform (float low) (float high) (mx-shape/->shape shape-vec) ctx out)) ([low high shape-vec] (uniform low high shape-vec {}))) +(s/def ::loc number?) +(s/def ::scale number?) +(s/def ::normal-opts (s/keys :opt-un [::ctx])) + (defn normal "Generate normal(Gaussian) distribution N(mean, stdvar^^2) with shape. loc: The standard deviation of the normal distribution @@ -43,10 +61,15 @@ out: Output place holder} returns: The result ndarray with generated result./" ([loc scale shape-vec {:keys [ctx out] :as opts}] + (util/validate! ::normal-opts opts "Incorrect random normal parameters") + (util/validate! ::loc loc "Incorrect random normal parameters") + (util/validate! ::scale scale "Incorrect random normal parameters") + (util/validate! ::shape-vec shape-vec "Incorrect random uniform parameters") (Random/normal (float loc) (float scale) (mx-shape/->shape shape-vec) ctx out)) ([loc scale shape-vec] (normal loc scale shape-vec {}))) +(s/def ::seed-state number?) (defn seed " Seed the random number generators in mxnet. This seed will affect behavior of functions in this module, @@ -58,4 +81,5 @@ This means if you set the same seed, the random number sequence generated from GPU0 can be different from CPU." [seed-state] - (Random/seed (int seed-state))) + (util/validate! 
::seed-state seed-state "Incorrect seed parameters") + (Random/seed (int seed-state))) \ No newline at end of file diff --git a/contrib/clojure-package/src/org/apache/clojure_mxnet/symbol.clj b/contrib/clojure-package/src/org/apache/clojure_mxnet/symbol.clj index 58b1d6d49..095165582 100644 --- a/contrib/clojure-package/src/org/apache/clojure_mxnet/symbol.clj +++ b/contrib/clojure-package/src/org/apache/clojure_mxnet/symbol.clj @@ -18,7 +18,7 @@ (ns org.apache.clojure-mxnet.symbol (:refer-clojure :exclude [* - + > >= < <= / cast concat identity flatten load max min repeat reverse set sort take to-array empty sin - get apply shuffle]) + get apply shuffle ref]) (:require [org.apache.clojure-mxnet.base :as base] [org.apache.clojure-mxnet.context :as mx-context] [org.apache.clojure-mxnet.executor :as ex] diff --git a/contrib/clojure-package/src/org/apache/clojure_mxnet/util.clj b/contrib/clojure-package/src/org/apache/clojure_mxnet/util.clj index 6f22b0eb3..43970c0ab 100644 --- a/contrib/clojure-package/src/org/apache/clojure_mxnet/util.clj +++ b/contrib/clojure-package/src/org/apache/clojure_mxnet/util.clj @@ -19,6 +19,7 @@ (:require [clojure.spec.alpha :as s] [t6.from-scala.core :refer [$ $$] :as $] [clojure.string :as string] + [org.apache.clojure-mxnet.primitives :as primitives] [org.apache.clojure-mxnet.shape :as mx-shape]) (:import (org.apache.mxnet NDArray) (scala Product Tuple2 Tuple3) @@ -36,7 +37,8 @@ "byte<>" "byte-array" "java.lang.String<>" "vec-or-strings" "org.apache.mxnet.NDArray" "ndarray" - "org.apache.mxnet.Symbol" "sym"}) + "org.apache.mxnet.Symbol" "sym" + "org.apache.mxnet.MX_PRIMITIVES$MX_PRIMITIVE_TYPE" "double-or-float"}) (def symbol-param-coerce {"java.lang.String" "sym-name" "float" "num" @@ -66,6 +68,9 @@ (defn ->option [v] ($ Option v)) +(defn ->int-option [v] + (->option (when v (int v)))) + (defn option->value [opt] ($/view opt)) @@ -141,6 +146,8 @@ (and (get targets "int<>") (vector? param)) (int-array param) (and (get targets "float<>") (vector? param)) (float-array param) (and (get targets "java.lang.String<>") (vector? param)) (into-array param) + (and (get targets "org.apache.mxnet.MX_PRIMITIVES$MX_PRIMITIVE_TYPE") (instance? Float param)) (primitives/mx-float param) + (and (get targets "org.apache.mxnet.MX_PRIMITIVES$MX_PRIMITIVE_TYPE") (number? param)) (primitives/mx-double param) :else param)) (defn nil-or-coerce-param [param targets] @@ -174,8 +181,15 @@ (instance? Map return-val) (scala-map->map return-val) (instance? Tuple2 return-val) (tuple->vec return-val) (instance? Tuple3 return-val) (tuple->vec return-val) + (primitives/primitive? return-val) (primitives/->num return-val) :else return-val)) +(defn coerce-return-recursive [return-val] + (let [coerced-val (coerce-return return-val)] + (if (vector? 
coerced-val) + (into [] (map coerce-return-recursive coerced-val)) + coerced-val))) + (defmacro scala-fn "Creates a scala fn from an anonymous clojure fn of the form (fn [x] body)" [f] diff --git a/contrib/clojure-package/test/good-test-ndarray.clj b/contrib/clojure-package/test/good-test-ndarray.clj index 8cdfce76f..b048a819c 100644 --- a/contrib/clojure-package/test/good-test-ndarray.clj +++ b/contrib/clojure-package/test/good-test-ndarray.clj @@ -1,6 +1,7 @@ (ns org.apache.clojure-mxnet.ndarray (:refer-clojure :exclude [* - + > >= < <= / cast concat flatten identity load max - min repeat reverse set sort take to-array empty shuffle]) + min repeat reverse set sort take to-array empty shuffle + ref]) (:import (org.apache.mxnet NDArray Shape))) ;; Do not edit - this is auto-generated @@ -26,11 +27,12 @@ (defn div - ([ndarray num-or-ndarray] + ([ndarray ndarray-or-double-or-float] (util/coerce-return (.$div ndarray (util/coerce-param - num-or-ndarray - #{"float" "org.apache.mxnet.NDArray"}))))) + ndarray-or-double-or-float + #{"org.apache.mxnet.MX_PRIMITIVES$MX_PRIMITIVE_TYPE" + "org.apache.mxnet.NDArray"}))))) diff --git a/contrib/clojure-package/test/good-test-symbol.clj b/contrib/clojure-package/test/good-test-symbol.clj index 0f7479ad4..947d9262d 100644 --- a/contrib/clojure-package/test/good-test-symbol.clj +++ b/contrib/clojure-package/test/good-test-symbol.clj @@ -1,9 +1,9 @@ (ns org.apache.clojure-mxnet.symbol - (:refer-clojure :exclude [* - + > >= < <= / cast concat identity flatten load max - min repeat reverse set sort take to-array empty sin - get apply shuffle]) - (:require [org.apache.clojure-mxnet.util :as util]) - (:import (org.apache.mxnet Symbol))) + (:refer-clojure :exclude [* - + > >= < <= / cast concat identity flatten load max + min repeat reverse set sort take to-array empty sin + get apply shuffle ref]) + (:require [org.apache.clojure-mxnet.util :as util]) + (:import (org.apache.mxnet Symbol))) ;; Do not edit - this is auto-generated diff --git a/contrib/clojure-package/test/org/apache/clojure_mxnet/image_test.clj b/contrib/clojure-package/test/org/apache/clojure_mxnet/image_test.clj new file mode 100644 index 000000000..38ab11c86 --- /dev/null +++ b/contrib/clojure-package/test/org/apache/clojure_mxnet/image_test.clj @@ -0,0 +1,79 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. 
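A small sketch of how the MX_PRIMITIVES coercion added to util.clj surfaces through the generated ndarray API (assumed REPL usage; ndarray/ones and ndarray/->vec are taken to be existing helpers in this package, and the numbers are illustrative):

(require '[org.apache.clojure-mxnet.ndarray :as ndarray])

;; Generated ops such as div now accept a plain Clojure number: a Float is
;; coerced to MX_PRIMITIVES$MX_FLOAT, any other number to MX_PRIMITIVES$MX_Double.
(-> (ndarray/ones [2 2])
    (ndarray/div 2.0)   ; double -> MX_Double under the hood
    (ndarray/->vec))    ; => [0.5 0.5 0.5 0.5]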
+;; + +(ns org.apache.clojure-mxnet.image-test + (:require [org.apache.clojure-mxnet.image :as image] + [org.apache.clojure-mxnet.ndarray :as ndarray] + [clojure.java.io :as io] + [clojure.test :refer :all]) + (:import (javax.imageio ImageIO))) + +(def tmp-dir (System/getProperty "java.io.tmpdir")) +(def image-path (.getAbsolutePath (io/file tmp-dir "Pug-Cookie.jpg"))) + +(defn download-image [] + (with-open [in (io/input-stream "https://s3.amazonaws.com/model-server/inputs/Pug-Cookie.jpg") + out (io/output-stream (io/file image-path))] + (io/copy in out))) + +(defn delete-image [] + (io/delete-file image-path)) + +(defn with-downloaded-image [f] + (download-image) + (f) + (delete-image)) + +(use-fixtures :once with-downloaded-image) + +(deftest test-decode-image + (let [img-arr (image/decode-image + (io/input-stream image-path)) + img-arr-2 (image/decode-image + (io/input-stream image-path) + {:color-flag image/GRAYSCALE})] + (is (= [576 1024 3] (ndarray/shape-vec img-arr))) + (is (= [576 1024 1] (ndarray/shape-vec img-arr-2))))) + +(deftest test-read-image + (let [img-arr (image/read-image image-path) + img-arr-2 (image/read-image + image-path + {:color-flag image/GRAYSCALE})] + (is (= [576 1024 3] (ndarray/shape-vec img-arr))) + (is (= [576 1024 1] (ndarray/shape-vec img-arr-2))))) + +(deftest test-resize-image + (let [img-arr (image/read-image image-path) + resized-arr (image/resize-image img-arr 224 224)] + (is (= [224 224 3] (ndarray/shape-vec resized-arr))))) + +(deftest test-crop-image + (let [img-arr (image/read-image image-path) + cropped-arr (image/fixed-crop img-arr 0 0 224 224)] + (is (= [224 224 3] (ndarray/shape-vec cropped-arr))))) + +(deftest test-apply-border + (let [img-arr (image/read-image image-path) + padded-arr (image/apply-border img-arr 1 1 1 1)] + (is (= [578 1026 3] (ndarray/shape-vec padded-arr))))) + +(deftest test-to-image + (let [img-arr (image/read-image image-path) + resized-arr (image/resize-image img-arr 224 224) + new-img (image/to-image resized-arr)] + (is (= true (ImageIO/write new-img "png" (io/file tmp-dir "out.png")))))) diff --git a/contrib/clojure-package/test/org/apache/clojure_mxnet/infer/imageclassifier_test.clj b/contrib/clojure-package/test/org/apache/clojure_mxnet/infer/imageclassifier_test.clj new file mode 100644 index 000000000..e3935c31e --- /dev/null +++ b/contrib/clojure-package/test/org/apache/clojure_mxnet/infer/imageclassifier_test.clj @@ -0,0 +1,128 @@ +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. 
+;; + +(ns org.apache.clojure-mxnet.infer.imageclassifier-test + (:require [org.apache.clojure-mxnet.context :as context] + [org.apache.clojure-mxnet.dtype :as dtype] + [org.apache.clojure-mxnet.infer :as infer] + [org.apache.clojure-mxnet.layout :as layout] + [org.apache.clojure-mxnet.ndarray :as ndarray] + [clojure.java.io :as io] + [clojure.java.shell :refer [sh]] + [clojure.test :refer :all])) + +(def model-dir "data/") +(def model-path-prefix (str model-dir "resnet-18/resnet-18")) + +(when-not (.exists (io/file (str model-path-prefix "-symbol.json"))) + (sh "./scripts/infer/get_resnet_18_data.sh")) + +(defn create-classifier [] + (let [descriptors [{:name "data" + :shape [1 3 224 224] + :layout layout/NCHW + :dtype dtype/FLOAT32}] + factory (infer/model-factory model-path-prefix descriptors)] + (infer/create-image-classifier factory))) + +(deftest test-single-classification + (let [classifier (create-classifier) + image (infer/load-image-from-file "test/test-images/kitten.jpg") + [predictions-all] (infer/classify-image classifier image) + [predictions-with-default-dtype] (infer/classify-image classifier image 10) + [predictions] (infer/classify-image classifier image 5 dtype/FLOAT32)] + (is (= 1000 (count predictions-all))) + (is (= 10 (count predictions-with-default-dtype))) + (is (= 5 (count predictions))) + (is (= "n02123159 tiger cat" (:class (first predictions)))) + (is (= (< 0 (:prob (first predictions)) 1))))) + +(deftest test-batch-classification + (let [classifier (create-classifier) + image-batch (infer/load-image-paths ["test/test-images/kitten.jpg" + "test/test-images/Pug-Cookie.jpg"]) + [batch-predictions-all] (infer/classify-image-batch classifier image-batch) + [batch-predictions-with-default-dtype] (infer/classify-image-batch classifier image-batch 10) + [predictions] (infer/classify-image-batch classifier image-batch 5 dtype/FLOAT32)] + (is (= 1000 (count batch-predictions-all))) + (is (= 10 (count batch-predictions-with-default-dtype))) + (is (= 5 (count predictions))) + (is (= "n02123159 tiger cat" (:class (first predictions)))) + (is (= (< 0 (:prob (first predictions)) 1))))) + +(deftest test-single-classification-with-ndarray + (let [classifier (create-classifier) + image (-> (infer/load-image-from-file "test/test-images/kitten.jpg") + (infer/reshape-image 224 224) + (infer/buffered-image-to-pixels [3 224 224] dtype/FLOAT32) + (ndarray/expand-dims 0)) + [predictions-all] (infer/classify-with-ndarray classifier [image]) + [predictions] (infer/classify-with-ndarray classifier [image] 5)] + (is (= 1000 (count predictions-all))) + (is (= 5 (count predictions))) + (is (= "n02123159 tiger cat" (:class (first predictions)))) + (is (= (< 0 (:prob (first predictions)) 1))))) + +(deftest test-single-classify + (let [classifier (create-classifier) + image (-> (infer/load-image-from-file "test/test-images/kitten.jpg") + (infer/reshape-image 224 224) + (infer/buffered-image-to-pixels [3 224 224] dtype/FLOAT32) + (ndarray/expand-dims 0)) + predictions-all (infer/classify classifier [(ndarray/->vec image)]) + predictions (infer/classify classifier [(ndarray/->vec image)] 5)] + (is (= 1000 (count predictions-all))) + (is (= 5 (count predictions))) + (is (= "n02123159 tiger cat" (:class (first predictions)))) + (is (= (< 0 (:prob (first predictions)) 1))))) + +(deftest test-base-classification-with-ndarray + (let [descriptors [{:name "data" + :shape [1 3 224 224] + :layout layout/NCHW + :dtype dtype/FLOAT32}] + factory (infer/model-factory model-path-prefix descriptors) + 
classifier (infer/create-classifier factory) + image (-> (infer/load-image-from-file "test/test-images/kitten.jpg") + (infer/reshape-image 224 224) + (infer/buffered-image-to-pixels [3 224 224] dtype/FLOAT32) + (ndarray/expand-dims 0)) + [predictions-all] (infer/classify-with-ndarray classifier [image]) + [predictions] (infer/classify-with-ndarray classifier [image] 5)] + (is (= 1000 (count predictions-all))) + (is (= 5 (count predictions))) + (is (= "n02123159 tiger cat" (:class (first predictions)))) + (is (= (< 0 (:prob (first predictions)) 1))))) + +(deftest test-base-single-classify + (let [descriptors [{:name "data" + :shape [1 3 224 224] + :layout layout/NCHW + :dtype dtype/FLOAT32}] + factory (infer/model-factory model-path-prefix descriptors) + classifier (infer/create-classifier factory) + image (-> (infer/load-image-from-file "test/test-images/kitten.jpg") + (infer/reshape-image 224 224) + (infer/buffered-image-to-pixels [3 224 224] dtype/FLOAT32) + (ndarray/expand-dims 0)) + predictions-all (infer/classify classifier [(ndarray/->vec image)]) + predictions (infer/classify classifier [(ndarray/->vec image)] 5)] + (is (= 1000 (count predictions-all))) + (is (= 5 (count predictions))) + (is (= "n02123159 tiger cat" (:class (first predictions)))) + (is (= (< 0 (:prob (first predictions)) 1))))) + + diff --git a/contrib/clojure-package/test/org/apache/clojure_mxnet/infer/objectdetector_test.clj b/contrib/clojure-package/test/org/apache/clojure_mxnet/infer/objectdetector_test.clj new file mode 100644 index 000000000..e2b9579c7 --- /dev/null +++ b/contrib/clojure-package/test/org/apache/clojure_mxnet/infer/objectdetector_test.clj @@ -0,0 +1,82 @@ +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. 
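For reference, a sketch of the value shape the classifier tests above destructure (the class labels and probabilities here are illustrative, not actual model output):

(comment
  (infer/classify-image classifier image 5)
  ;; => one inner vector per input image, each entry built by
  ;;    format-classification-predictions:
  ;; [[{:class "n02123159 tiger cat" :prob 0.69}
  ;;   {:class "n02123045 tabby, tabby cat" :prob 0.18}
  ;;   ...]]
  )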
+;; + +(ns org.apache.clojure-mxnet.infer.objectdetector-test + (:require [org.apache.clojure-mxnet.context :as context] + [org.apache.clojure-mxnet.dtype :as dtype] + [org.apache.clojure-mxnet.infer :as infer] + [org.apache.clojure-mxnet.layout :as layout] + [clojure.java.io :as io] + [clojure.java.shell :refer [sh]] + [clojure.test :refer :all] + [org.apache.clojure-mxnet.ndarray :as ndarray])) + +(def model-dir "data/") +(def model-path-prefix (str model-dir "resnet50_ssd/resnet50_ssd_model")) + +(when-not (.exists (io/file (str model-path-prefix "-symbol.json"))) + (sh "./scripts/infer/get_ssd_data.sh")) + +(defn create-detector [] + (let [descriptors [{:name "data" + :shape [1 3 512 512] + :layout layout/NCHW + :dtype dtype/FLOAT32}] + factory (infer/model-factory model-path-prefix descriptors)] + (infer/create-object-detector factory))) + +(deftest test-single-detection + (let [detector (create-detector) + image (infer/load-image-from-file "test/test-images/kitten.jpg") + [predictions-all] (infer/detect-objects detector image) + [predictions] (infer/detect-objects detector image 5) + {:keys [class prob x-min x-max y-min y-max] :as pred} (first predictions)] + (is (some? predictions)) + (is (= 5 (count predictions))) + (is (= 13 (count predictions-all))) + (is (= "cat" class)) + (is (< 0.8 prob)) + (every? #(< 0 % 1) [x-min x-max y-min y-max]))) + +(deftest test-batch-detection + (let [detector (create-detector) + image-batch (infer/load-image-paths ["test/test-images/kitten.jpg" + "test/test-images/Pug-Cookie.jpg"]) + [batch-predictions-all] (infer/detect-objects-batch detector image-batch) + [predictions] (infer/detect-objects-batch detector image-batch 5) + {:keys [class prob x-min x-max y-min y-max] :as pred} (first predictions)] + (is (some? predictions)) + (is (= 13 (count batch-predictions-all))) + (is (= 5 (count predictions))) + (is (= "cat" class)) + (is (< 0.8 prob)) + (every? #(< 0 % 1) [x-min x-max y-min y-max]))) + +(deftest test-detection-with-ndarrays + (let [detector (create-detector) + image (-> (infer/load-image-from-file "test/test-images/kitten.jpg") + (infer/reshape-image 512 512) + (infer/buffered-image-to-pixels [3 512 512] dtype/FLOAT32) + (ndarray/expand-dims 0)) + [predictions-all] (infer/detect-objects-with-ndarrays detector [image]) + [predictions] (infer/detect-objects-with-ndarrays detector [image] 1) + {:keys [class prob x-min x-max y-min y-max] :as pred} (first predictions)] + (is (some? predictions-all)) + (is (= 1 (count predictions))) + (is (= "cat" class)) + (is (< 0.8 prob)) + (every? #(< 0 % 1) [x-min x-max y-min y-max]))) + diff --git a/contrib/clojure-package/test/org/apache/clojure_mxnet/infer/predictor_test.clj b/contrib/clojure-package/test/org/apache/clojure_mxnet/infer/predictor_test.clj new file mode 100644 index 000000000..e1526be61 --- /dev/null +++ b/contrib/clojure-package/test/org/apache/clojure_mxnet/infer/predictor_test.clj @@ -0,0 +1,77 @@ +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. 
You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns org.apache.clojure-mxnet.infer.predictor-test + (:require [org.apache.clojure-mxnet.context :as context] + [org.apache.clojure-mxnet.dtype :as dtype] + [org.apache.clojure-mxnet.infer :as infer] + [org.apache.clojure-mxnet.layout :as layout] + [org.apache.clojure-mxnet.ndarray :as ndarray] + [org.apache.clojure-mxnet.shape :as shape] + [clojure.java.io :as io] + [clojure.java.shell :refer [sh]] + [clojure.string :refer [split]] + [clojure.test :refer :all] + [org.apache.clojure-mxnet.util :as util])) + +(def model-dir "data/") +(def model-path-prefix (str model-dir "resnet-18/resnet-18")) +(def width 224) +(def height 224) + +(when-not (.exists (io/file (str model-path-prefix "-symbol.json"))) + (sh "./scripts/infer/get_resnet_18_data.sh")) + +(defn create-predictor [] + (let [descriptors [{:name "data" + :shape [1 3 height width] + :layout layout/NCHW + :dtype dtype/FLOAT32}] + factory (infer/model-factory model-path-prefix descriptors)] + (infer/create-predictor factory))) + +(deftest predictor-test-with-ndarray + (let [predictor (create-predictor) + image-ndarray (-> "test/test-images/kitten.jpg" + infer/load-image-from-file + (infer/reshape-image width height) + (infer/buffered-image-to-pixels [3 width height]) + (ndarray/expand-dims 0)) + predictions (infer/predict-with-ndarray predictor [image-ndarray]) + synset-file (-> (io/file model-path-prefix) + (.getParent) + (io/file "synset.txt")) + synset-names (split (slurp synset-file) #"\n") + [best-index] (ndarray/->int-vec (ndarray/argmax (first predictions) 1)) + best-prediction (synset-names best-index)] + (is (= "n02123159 tiger cat" best-prediction)))) + +(deftest predictor-test + (let [predictor (create-predictor) + image-ndarray (-> "test/test-images/kitten.jpg" + infer/load-image-from-file + (infer/reshape-image width height) + (infer/buffered-image-to-pixels [3 width height]) + (ndarray/expand-dims 0)) + predictions (infer/predict predictor [(ndarray/->vec image-ndarray)]) + synset-file (-> (io/file model-path-prefix) + (.getParent) + (io/file "synset.txt")) + synset-names (split (slurp synset-file) #"\n") + ndarray-preds (ndarray/array (first predictions) [1 1000]) + [best-index] (ndarray/->int-vec (ndarray/argmax ndarray-preds 1)) + best-prediction (synset-names best-index)] + (is (= "n02123159 tiger cat" best-prediction)))) diff --git a/contrib/clojure-package/test/org/apache/clojure_mxnet/ndarray_test.clj b/contrib/clojure-package/test/org/apache/clojure_mxnet/ndarray_test.clj index 79e94412d..9ffd3abed 100644 --- a/contrib/clojure-package/test/org/apache/clojure_mxnet/ndarray_test.clj +++ b/contrib/clojure-package/test/org/apache/clojure_mxnet/ndarray_test.clj @@ -97,7 +97,7 @@ (is (= [1.0 1.0] (->vec ndhalves))))) (deftest test-full - (let [nda (full [1 2] 3)] + (let [nda (full [1 2] 3.0)] (is (= (shape nda) (mx-shape/->shape [1 2]))) (is (= [3.0 3.0] (->vec nda))))) diff --git a/contrib/clojure-package/test/org/apache/clojure_mxnet/operator_test.clj b/contrib/clojure-package/test/org/apache/clojure_mxnet/operator_test.clj index 1b4b2ea2f..c97711b5f 100644 --- 
a/contrib/clojure-package/test/org/apache/clojure_mxnet/operator_test.clj +++ b/contrib/clojure-package/test/org/apache/clojure_mxnet/operator_test.clj @@ -462,7 +462,7 @@ test (sym/transpose data) shape-vec [3 4] ctx (context/default-context) - arr-data (random/uniform 0 100 shape-vec ctx) + arr-data (random/uniform 0 100 shape-vec {:ctx ctx}) trans (ndarray/transpose (ndarray/copy arr-data)) exec-test (sym/bind test ctx {"data" arr-data}) out (-> (executor/forward exec-test) diff --git a/contrib/clojure-package/test/org/apache/clojure_mxnet/optimizer_test.clj b/contrib/clojure-package/test/org/apache/clojure_mxnet/optimizer_test.clj index f6461b10f..599a0672b 100644 --- a/contrib/clojure-package/test/org/apache/clojure_mxnet/optimizer_test.clj +++ b/contrib/clojure-package/test/org/apache/clojure_mxnet/optimizer_test.clj @@ -44,3 +44,13 @@ ["sgld" optimizer/sgld]]] (doseq [opt opts] (test-optimizer opt)))) + +(deftest test-optimizers-parameters-specs + (is (thrown? Exception (optimizer/sgd {:wd 'a}))) + (is (thrown? Exception (optimizer/dcasgd {:lambda 'a}))) + (is (thrown? Exception (optimizer/nag {:momentum 'a}))) + (is (thrown? Exception (optimizer/ada-delta {:epsilon 'a}))) + (is (thrown? Exception (optimizer/rms-prop {:gamma1 'a}))) + (is (thrown? Exception (optimizer/ada-grad {:rescale-gradient 'a}))) + (is (thrown? Exception (optimizer/adam {:beta1 'a}))) + (is (thrown? Exception (optimizer/sgld {:lr-scheduler 0.1})))) \ No newline at end of file diff --git a/contrib/clojure-package/test/org/apache/clojure_mxnet/primitives_test.clj b/contrib/clojure-package/test/org/apache/clojure_mxnet/primitives_test.clj new file mode 100644 index 000000000..1a538e537 --- /dev/null +++ b/contrib/clojure-package/test/org/apache/clojure_mxnet/primitives_test.clj @@ -0,0 +1,45 @@ +;; +;; Licensed to the Apache Software Foundation (ASF) under one or more +;; contributor license agreements. See the NOTICE file distributed with +;; this work for additional information regarding copyright ownership. +;; The ASF licenses this file to You under the Apache License, Version 2.0 +;; (the "License"); you may not use this file except in compliance with +;; the License. You may obtain a copy of the License at +;; +;; http://www.apache.org/licenses/LICENSE-2.0 +;; +;; Unless required by applicable law or agreed to in writing, software +;; distributed under the License is distributed on an "AS IS" BASIS, +;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +;; See the License for the specific language governing permissions and +;; limitations under the License. +;; + +(ns org.apache.clojure-mxnet.primitives-test + (:require [org.apache.clojure-mxnet.primitives :as primitives] + [clojure.test :refer :all]) + (:import (org.apache.mxnet MX_PRIMITIVES$MX_PRIMITIVE_TYPE + MX_PRIMITIVES$MX_FLOAT + MX_PRIMITIVES$MX_Double))) + +(deftest test-primitive-types + (is (not (primitives/primitive? 3))) + (is (primitives/primitive? (primitives/mx-float 3))) + (is (primitives/primitive? (primitives/mx-double 3)))) + +(deftest test-float-primitives + (is (instance? MX_PRIMITIVES$MX_PRIMITIVE_TYPE (primitives/mx-float 3))) + (is (instance? MX_PRIMITIVES$MX_FLOAT (primitives/mx-float 3))) + (is (instance? Float (-> (primitives/mx-float 3) + (primitives/->num)))) + (is (= 3.0 (-> (primitives/mx-float 3) + (primitives/->num))))) + +(deftest test-double-primitives + (is (instance? MX_PRIMITIVES$MX_PRIMITIVE_TYPE (primitives/mx-double 2))) + (is (instance? 
+  (is (instance? Double (-> (primitives/mx-double 2)
+                             (primitives/->num))))
+  (is (= 2.0 (-> (primitives/mx-double 2)
+                 (primitives/->num)))))
+
diff --git a/contrib/clojure-package/test/org/apache/clojure_mxnet/random_test.clj b/contrib/clojure-package/test/org/apache/clojure_mxnet/random_test.clj
index c4e919807..6952335c1 100644
--- a/contrib/clojure-package/test/org/apache/clojure_mxnet/random_test.clj
+++ b/contrib/clojure-package/test/org/apache/clojure_mxnet/random_test.clj
@@ -26,9 +26,9 @@
   (let [[a b] [-10 10]
         shape [100 100]
         _ (random/seed 128)
-        un1 (random/uniform a b shape {:context ctx})
+        un1 (random/uniform a b shape {:ctx ctx})
         _ (random/seed 128)
-        un2 (random/uniform a b shape {:context ctx})]
+        un2 (random/uniform a b shape {:ctx ctx})]
     (is (= un1 un2))
     (is (< (Math/abs (/ (/ (apply + (ndarray/->vec un1))
@@ -52,3 +52,16 @@
     (is (< (Math/abs (- mean mu)) 0.1))
     (is (< (Math/abs (- stddev sigma)) 0.1)))))
 
+(defn random-or-normal [fn_]
+  (is (thrown? Exception (fn_ 'a 2 [])))
+  (is (thrown? Exception (fn_ 1 'b [])))
+  (is (thrown? Exception (fn_ 1 2 [-1])))
+  (is (thrown? Exception (fn_ 1 2 [2 3 0])))
+  (is (thrown? Exception (fn_ 1 2 [10 10] {:ctx "a"})))
+  (let [ctx (context/default-context)]
+    (is (not (nil? (fn_ 1 1 [100 100] {:ctx ctx}))))))
+
+(deftest test-random-parameters-specs
+  (random-or-normal random/normal)
+  (random-or-normal random/uniform)
+  (is (thrown? Exception (random/seed "a"))))
\ No newline at end of file
diff --git a/contrib/clojure-package/test/org/apache/clojure_mxnet/util_test.clj b/contrib/clojure-package/test/org/apache/clojure_mxnet/util_test.clj
index ee7710317..c26f83d5a 100644
--- a/contrib/clojure-package/test/org/apache/clojure_mxnet/util_test.clj
+++ b/contrib/clojure-package/test/org/apache/clojure_mxnet/util_test.clj
@@ -20,6 +20,7 @@
             [org.apache.clojure-mxnet.shape :as mx-shape]
             [org.apache.clojure-mxnet.util :as util]
             [org.apache.clojure-mxnet.ndarray :as ndarray]
+            [org.apache.clojure-mxnet.primitives :as primitives]
            [org.apache.clojure-mxnet.symbol :as sym]
            [org.apache.clojure-mxnet.test-util :as test-util]
            [clojure.spec.alpha :as s])
@@ -54,6 +55,16 @@
     (is (instance? Option x))
     (is (= 1 (.get x)))))
 
+(deftest test->int-option
+  (let [x (util/->int-option 4.5)]
+    (is (instance? Option x))
+    (is (= 4 (.get x)))))
+
+(deftest test-empty->int-option
+  (let [x (util/->int-option nil)]
+    (is (instance? Option x))
+    (is (.isEmpty x))))
+
 (deftest test-option->value
   (is (= 2 (-> (util/->option 2)
                (util/option->value)))))
@@ -123,6 +134,9 @@
   (is (= "[F" (->> (util/coerce-param [1 2] #{"float<>"}) str (take 2) (apply str))))
   (is (= "[L" (->> (util/coerce-param [1 2] #{"java.lang.String<>"}) str (take 2) (apply str))))
+  (is (primitives/primitive? (util/coerce-param 1.0 #{"org.apache.mxnet.MX_PRIMITIVES$MX_PRIMITIVE_TYPE"})))
+  (is (primitives/primitive? (util/coerce-param (float 1.0) #{"org.apache.mxnet.MX_PRIMITIVES$MX_PRIMITIVE_TYPE"})))
+
   (is (= 1 (util/coerce-param 1 #{"unknown"}))))
 
 (deftest test-nil-or-coerce-param
@@ -161,6 +175,12 @@
                 (util/convert-tuple [1 2]))))
   (is (= [1 2 3] (util/coerce-return
                   (util/convert-tuple [1 2 3]))))
+
+  (is (instance? Double (util/coerce-return (primitives/mx-double 3))))
+  (is (= 3.0 (util/coerce-return (primitives/mx-double 3))))
+  (is (instance? Float (util/coerce-return (primitives/mx-float 2))))
+  (is (= 2.0 (util/coerce-return (primitives/mx-float 2))))
+
   (is (= "foo" (util/coerce-return "foo"))))
 
 (deftest test-translate-keyword-shape
diff --git a/contrib/clojure-package/test/test-images/Pug-Cookie.jpg b/contrib/clojure-package/test/test-images/Pug-Cookie.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..56f5dc16ed7a8b5e3c5168906c67be5bd2a02463
GIT binary patch
literal 104323
zttF$%vcj+(p~(QA#D?R)^{QR=UXewn*c+?K1fRl;sb8k zVEP5CSuNP=Gg&I`p4%bYKZ~CK0AEUe!$@EbhEv+TX@`CJKBw$lNd)fCaq)&0kGyFErJbW`ack zJagyHydU-LO4(@9+};Qp>P)KhBJuzukLr6>l)q$gOy-yU8!figUEneDu^wO6sy;$n zX(UO33=ap;*I;W>A)ei`_R$d&6&=oUPCs61gQ#>=43a=rM0}N8=VFgvwOusK-apZz z#Uz%9PEQ;YOvR`+;&)*7$f)Z2YumX`H$0zeLOtg`Q<3dTMt0MtF>@b_c^M77anH4F z7MJl`__4YNjp5mkWA9q3xFB)OSy}24L1{MiCQ>%5XOGg1(Q9LOa4xB62mUe-DeaM( zq_mQKBKj%9hIrd{))@w@p}O!?7E)Uugp6cRZWvx%>`{g6qN+n_gS*xmM_1tIB!THq z>AL#u@Xz8PfE%#ls;9A1j?2McLrMr6AYiFHfPHFe$x>yF<=J*TocfA(!fR7%1HxsH zcTvHr6XGjjff%NuM;*O>wG~8Jo9mTHN~)9|fQoBKk_MO=)G}unZr{>|G{G~gZpl7` zin6n`vay*&v%bb2W{Vjgk>J%`1Z)p)E~Tf4tTgqNBR(seNLiVEGUwaV+O?G4;-MPSVg}sX zP70_Xas!O!k82_MxY=HbIPV!2Q3$&bAWS<=Qyr|r|CESAoE>x_N@1ou7sJQ zg_b!YI95p%f)M1WWI0C2!yJs^B}Y_Us%J*hemrX$owPbn#EG>Iq0%p-!VOL1x=_Kk zmr*J?g~vUxc;JlWS!wo?SqGn|^|&sy@h}Z!OCzLYfN-%+WK?a~+t`3k1}mA|X>Y3N z(IAUX)X)aU%SyJlma?79L&W&sp4sR5Kwxkv9WB(idJu}t%+~Oh@mXe8KnsD%Rsi7o z_fx^D%GxaoS~>;Jhj?u6&D3^xaJ0fiySuwK=oOR(DdTY9fN%y5N3JW@hp4YLyIYpj z=am!)VSBF$Vz!7d+GB|Z$iM@zZ?Dptk5tk1xR&bbD|SnP{CfdzL~uxRiI7HjCmV7A zBO|f&zt%2nb&YDnPB#{PLW>H<5@|0aw}1i8c9uEpFbAh^ZYjz1cSRgpDDJwd8V1pY;i z!aY9LCeG8g=25>SU~a;%UeVOX*B&o?-B)79N-m@qWXE1cd9tL4f{5` z{a)Js^{rCo>KWWOFd|4DhJ6`MdGEB2)y{g_H-7+G4L?!0jFBTGDw6~16rV95o<>h$ z#Sd2MZ4KlN1Xi=e(qYyMm@WdgFr-Nk5&_5aC>)M^)O9YSU1}L+-6it?s%_(T3gMp(Z6u3LQ?Q35 zXFiqNt>?RYZwW@|@qx#sU&NmoJ1ONt&<`8SpL&6lbq?v78y=Nu>y)7eW zG{lDjl#taB0_K;H_oGQmNN^264I6!Fh|MP(Q!%wl1F;%q4I>%kQ*pLvz{=AR8dVQ8 z$}j*_w`dHZrlWdA4KR)EMc5xhOh$R6;(X$njk8+;)HK2#)SPD^QxTph2N2QJ(y^T4 zj*UPthaA%=MJT42M&f{BO%Y86PH2LFGSOCoJJC{rVLuHOO$I)~h_nHe@kAb!C8B|v z0E<#uii(&Rpd_ZIy{aH_Sy}+8+KLY!S{imVp0lmm=oViwbkFB;98koirJ6(dQZ*d2 zA7S@60Ga~ENmAY?mRDG!MJ(>@6;ux7lh{_7s{BTt<4n7~>FXt*PPU1qlUl#rmW4pi z@Kkx{C$QUvC;94gX)CL=cimw%ycWnq!=lMD{JclFX?-Sf_3|m~3G)>Cm#p-?XHwF1 zt0tdSK{&j*O9)zAtVA*{dvhU=*Rq;gU6poF$?&VKI=aHv{{X3a$Hu*M4=#ZAIMX91 z{{Ri+jzoDe*p_lK4h3u)Mz^heIJHYl-hzksRDG_xDlsZA)Y=fTuh(aZR}$;pMRN?2 zTWEILZMrFTxz+B%1X0OmRL%e#oRNyup}yM)5-+FiAhe~yXcR#wd>lvoD1l-!(N#I;tf#1c6@aPBB3g(?l@z0|!^+_x=D=0dT zQzyguy9p3Lhqs#L@Pi-nvmd2fOZzf(uTspneQPEE0AFgG3JZ;Db}1M>k;H@qAF2A* zyVjixW$>R)llM1Y-}H^D^Btb4bq(ddf3mAM@j&_qtg_pQY%ahR?6@2EpIcZ>>^V{9Ni- zX7D2#UtL7bF*Fg}M91}66n*nd^{-#+x=ox&dOW0)k2HQP1xK5MoQ5CybK1UsP{^f9 zk?z89t7i=B-xe9m_-e!t55-Tz$6?0+asL3Vb4#FOnL?8^nI2%-=iA=5zlfbhb*Kgx zDweJW9tA=PIrSs)dt}!*o=v3faJlSH73pG4Mw#bjlD!Muw3EoifdC%Ix#!lCmCArJ zap_FnT$rNp(Xy?Uz~d~$i~(I|6JU24o40>jZUPkyWpR#sa(?s(2)A*S zz~i{>Q+tJLfK+*XhjUgmh{Mks&Zh^%1Y{}a9<&*zY19_y+b6L0p~$3>k6zR!46GH2 z@cl^Ul|;P?Ok`H`4WQ7BW?2G7=PgsZo5Um34hCuM9ukB$%*fni`cTa)Wv+{(Z@5&E z$!_JBlUog(N|y|wM6H~z(UAWDl_;*3mca%aa`}rOJ5M|VQL?;PhFOXeoE&yFlWm;&m<*lG)%Nzo%NdEw_RMs|0d8urXmIUyb*9QZDNd14# zr*%6)rQ5P?@zn0%e=ZMyPhQmtTP$yvxB4a5P9RZp``J{;*kuv6JB*)R+z(?~cAJ@Q z-2nv1!)a0V?fx}!(|V`R*vkuivmJ;8l12u=KBu;OW~=OVrH1O^zFnnI{xV342LprO zoPDU(5(diXL9fjv!ZpNuyfDSF$7Sc!irj8?t&+r&8I|zGu#E5du!u zC2&1gCcUE1Z)Y7NqR9nOAx*4x{msc*#AhSk-MlT6SCj zM|>VDdDhlOm`EG~2H!6O>G!RE?!-!2oaB9ZC;ck%bjnU9sS_5Jfyfxm729%!=Bo8pVq7GE+mgC ztU|T}0B|_%ny_|ePZM(-a(VQnlA3*tT>QPSf))s39F5?XargHXS11XYK3#A}4bDFF z^scw}2#R2fBX8l@KQegy>bCv_f>u>BD&w;s^s3Pf7C*h3{dU|*6lkfi1Hix^L6h6o zgQx2&r>m@X_H%fg6LD@F{zbce#ZpA#GI$*N;;i1z*q}DoMo%a&9m0r@{XTCsL1?ji zqZ4kCSrG2xBn{;r`83v_sL8owB$p&@Z(59zNd$36G;v209l^|M#EMOf$pt2U$dWXzC zv1p){>D45P1Q|&DZ^I|{S2*+>WD3&cn)Y>VHs&epH7gfpW|l@mzyLNhMBrl`*Xl4u z4#TVCI!Lv;ia8>VAd2f4Mr8z@k}%}=;rSzvYBr73)_OLraIc`QW&=WGkKlMnbtO>%(K|TQB=a^4-fF z^`*1vv9ydD-X=}$k0~gu6$bA6M^SzW0H-}t~%P%@I{MTH=ZN0U2U!rz~G(ax0tCM zWVaP(($QLKx^9~#)}^I~VCQVI+#|%G<7=o~F!oS$$n9L#4PwsX?A+XVmlyI4;3@fa 
zR?ny)miOR}^xWYixbbcl8J*(F#}n;U=Yk1tcRTIkgeKzN#S5Fp@^}eG$phi}* z5v{Y5aD1WI@=pMjYz$;JGg=kSq0;&kjQVz)s@S}aFb(_%Yxn;ElyT^GkD#u5qkL29 z9-4-IKTOp17_|8Zp3WiZwj^v*Hzm<0VBd+?dM7M(Q>$exWZ0bZZ ztj>uiV0Niw$?xuY<2Ca>m#*sZT8nKyEge=6SLbE1w~^WV7{$bKu_OD$b6rEAej{p{ zW}YR8=}fK!cN%r1f-}MVV`v#X_u9vSp4H2e*GEh#=Pmtr_Au&_*kew!7ZL~m0E*Fq%6p7s{QGy}meq8-ty1JiW-cxzW+bd?;fVAb za!(zybM!UV`aAY{>dj9@mF}&hwsl?c+1>e4VUFdOEZF|Z!u!=_B(%a#!t`=UYp0@0 z=`4p~k<;*J9Dsd$_Z)reu-ULnBzA1G4C5V#YNqM$gBp&Vrd!E%e3v&2;pd5?aO&T1 z2;#Rl_+<4Bk-WP<*6!R|OV77_#SJY!ZH-TCukHdSrBY z+_jm<@lz=2$MX;LqG-n0%WR)8VZGdbUg;5C8=M%Nb|Cw3N1?6nQ0Z+-H?^Kw);nf9 z<%AQp6moxF)R&VvFN4uVh7o&5KELR@dlMv)T&(f9AyvrSr#yY>3A%O&i?c}4p}|v= z)pqeIaR@)B=OH1sUywWgvO-J-h|Ct8JH*Zs1sf$pvHjlIH0rS zR$aX)M9>jqLt|27nj0DbY-$W~Qf|~XGytS-XbxzP>q2^11m=M!YBSREA4&i+Qu3Y+ z9z#n?*chM}l9rc%X!06fW5J*ol9iW^)We+6T!1Q$KvC!qMsv<+E_kZQ0GF|)Bs9#Y zidIhb2mqFqo3W-`($iHzn2j?TrD7Qr)O=KdggK_;G@Nw86UVPLEDfVk)Y8$@3s8W< z8ZtI%WnytmL^E1NP#l2MXlYm$nv7Mz%tY;`kmiq<8|4L z3}m<3tNvaO{n3B%vw!NsDB_9*CN&8A1%3|FHugToi{jir5!?6mb^^L%B`w9-&eM-* zXmN^{ttM^nq~OgQL;gj{{Y<+fr<4iTVMFs5ypLsbNW)lUq>JA zwzoHDwquVA`kV@C*G#sx$AxbipX~+b^{0_=J5UDJk|6U&51p@$|$C(v&%{{WO#HnY|p zMSY~*UFdp1(yY=lQ&86tmE}0Xi0y_SlYl%zo%oFQq!1 zyWrfU1q%SgN;k+`J4+pdB=bO4#qk$Q_>FKbR>xB6tvV+UXQ1g8hG-b6%FB5(5f1Le zL|+NU;9KzSiFffAU;hAuXQFhSyAB@qN7Sxy>E(~Zj34@#R_Pju(IY{%5Cv5ofE|FN zAm)J4Pr`eRYySZ5KCS=~^dO z+UhEj71FG3?cdBUyj0vUF~JN@JMe!>=${sC{zG4qSzIJ%J`54A@7LS<*D&jidg}W~ zQ1QVMzDt#b18x5Ros{G#9*2SLn)%F;`LuloIVX)DK72^HOUGEGxIjmjhA+1qascH0 ztC;T%<;Y?i$~NSh-g>LSj(dl93Rs^86dq(xChp>>&vS@ zoNtv!@Ru8WPzy2j&N!;wKUn@54RGD%d-LA63%5(nhBhV1>_#XFq_n{ARa2h%HDEt7{KqD zt>i4jCnWGW^`?c#hjin1(%*;<(357Aa~DtFf&9}L73(%*|^*> z?Nw^aav^bW(N04Sf0!OAS+10VGI9FQZf+V$36PTfccstLt3!})7!$#!BbR}ZkUc8D z80=_?F5dm>IF?MXNt!2TfsA0}V|O?sx7M3&g(H$UjCdb_Hh(ed2lS*OaIU1b-hC*y zmouir;*%Qkjs2guwNSc|a~0L4?WA_iaOF;1i6JDTXO71Yk?sXi1d~sgEUL|KCz}I| zue#=%wz##sx&HurPYZHw4+z_a#|%ef^`qTbM;wu}TnS^4f5bt`pJVU$1m>WJvDdkZ zduZ;hd@(9TiB>#<*yDKzu_M2xdG+DjOY2DFwR@{{zmYjbnVW3z{uYPmK;#k1pUWEc zyhhi>ag{1g*2e_qoxih+50f%X%WZVZ>d3h$joFtZXR|3Ba)f%+qRK;>owcpi(3ee; zTnE^O?qJO(1~_a5IO8}zqtmsXg!*46SF^qJ?SMcdBi5@WeHTpK zEOkw8<4=Q6(k$kdK&%?HV)5>#;RkYTU(rK+z4o4!c zwRMrTS!TDfk_hiu;}Ur8p}h>N#HfGg8-_8K^yJERLvKaho~V{mOKk)a*h;6vds!lB zS_b~k$K{tf!Q_nbire}};dY_amoX*P^i~6Y=4)l~3C|yd3HIIfL3bh%ptmL~! zWE>{tBYJyczM9f{GHYu%;Ao>!xGJvPg%#sw@;H7crH>yj&vcJW>7NcXRQU;#E2&if z03H}oS-(Z~_RcGN>78m5YNG0OEZmO~vPrF8Zkif!A>{-Cg&y@;Do+dU@d6?YDv!N* zbMrEC+20=;4mxLB0@}_3L?FigVow=ATIQbrd2Uyy|<~7fZP%F~Q@-;cwf+IZnmBH&;fw)E`iuHcbx9~dsd0nJy3Kh(pXuf@jRt5_(!Sx`&Rg}OA}2P&L49} zT)m3NSh;&SZB;FzQX+i$i6HJH+tU@m`hrfM>5FYiVuIRK)8kk~b7U4f@aNxlNI4bJ zw7#r?5nUZ*iV&vYmLJ0mxexUNkIueM{i8a<9Y+4@eJfU)-Xz3RJSZ8v22eh%#FO+Q zy|n)FIN=A4(PORky|kD1mNv}Ibw20*ArFiLfyZIT-mmnnUJX{&C5LgICB(p#Z{5+xn008n7dSbqo_&Er@c1e;r*5WbeV~jA*A79G3)bjXp z<&B+FhA$!&G(Ke@kjhnuq2i)kFwJjs=mHr|=s zV?1pK+P&W&>Z>J3BBppb{+{8yZqYzmhRu(?F+Hrd!Bjd>KprFuB=OeAMLs1HV%^rjIrUP%6w%0zvtK?IARdS)p8oYdlk=e08wRcN~e zqD%qBGa3_KiV{z5DcEhv9jf+$7~`hmkKcn*ir*c@IFRGGs<{Ah!;^tqE|b#NSBM=6 zj1iNOjE`J%Q?ZW6rhZ!IQY$U&%f1U-a5%^=Pi*^A-w}Q~>0KJ{Zu0==D2bIunCH09 zryic;HRE|cYmdVpXRYISoW5BL`Lt_2HFGkVBnJQ!$u*>PWboNY%_^!z*zV3NSznHN zRPjL`oqD>J+o3HgKGaX`Hhbfo4_eeN`pWyO^;L?(B%P4$E)}rN^yjdzp3N-TKPG$l zp^hCRxqLVl+O*#(I;7UqM}s0inm+h%$l|u!?}iu9BbzxA@zCU8?cTpk`j67M9+lM> zJvXV^E};Y2-9sX}Bu~Vt>$g1T+#juX4w3lFYlxIg7`pEnz|S}w`V9V=!St@r6E<8? 
zt2q3u5X$W%QPy1Psf(asJ|@gC$2rITm1)xUOLKN)lY~Ux0UwJ#ziL0lj;hl2OCOzs z!|`M&#T##xk9?{0K3}m1-nEW}>I=JzYoT{0%eZ5}VBV#3f&F>=R~MftXOfh?9bOa` zC~-$#w~E5v-F|9N#xcvdAJf{aG~GQl-}l62TUl9*PT(gaw;rCqY*!QbiSfHj(YkhI z)HOeov}uVg9GRAIfCC(O;ZkLbc;M%p z4*2a_Hox%$P0~7GY;PPjs&2r`gd2QA{o>djb68uJssjag#u6TD(6!s%g4-9z}C`FdeqWM$!4XQ~v--qF*So*R<=V;u%rw*}WBs z44M0tEB7Pbo6v2(a`RBsPa;R~BL4vG$C3S8ajScsJ{!wr&M2bRLYfhgsHb+rzuPnTW8{o7!hyUP1o=%1<};ijHVj znae+DjVDaf>{C;}z80)sY*2>F&fWH+4qKl<57xd?>Wy0SRO$KFXZ_P;SZ#qL{ye9! z-1y_a;RoyVuetRfg!Z~VpQ&oPo1UoZS>Y=tsi-RCDH!nKjBy%gvM+uzNw18)C)~b| zCB~O`97}wpvRg+G%w>lGus9%gUrg7Lnmi|@dcHg58f^LJT!tG-wj-7~#vAc>#!uAu z`w?2cqMIe;j(w|O)zy<#o;FYdwj1e;XZo7g@7zet!;#omOg$LnqPA!7+atSnp`?X< zYZnp*BzGc*9Ow~)gS*hzLPsF%(}s)@fJg(1kBPTw2XcG+A8JR$&uW`o9|T9~yLDMB z1A_igPSz`&@+dPSNpJ`m6d2#VGLr;7`_-5%ZRIF-v2O^GxG#FJn)W#Dj4A}m8*;ed z_Wf$27aN0jUc`HfCVWK!TN&@oWl8MOJqDgwWK=P$ZX?WFwtke{DzQMN(Xdx5jsZVf zuaZ9kATjv3%|aHL)s$qXf3+wkhc^K-hdIx;dS*zNEJqTAR1T$oG4-mb z?cA@Bf(Ybht5!%gnd8n66_Yar8SU>wWGnBE+|fg*0Z8VihyyI_F@iv-$#~8eocmEH ziCusk8Wc8U%oK7>D5cS1rOeMVuH(t=n%%A7eJmu*=+Ux<9OZ}HS3ZVgZ6IOJ4Oz#m z$#i3doQ2ODkF8P0&Eqk>cp@>xo-A%NfPHIExP&S7t%p-*GO%&6*UDC^XHa|9)+f51J1U?hBq-K7gKn{r&2$?&%%`J|5XYJER2pzJwoNuk@;Bi94iQcW{m1>6fb7|0#PS)#Vf?JKD6w3}(I;${f$va2JNWRZUhkEr+H z`eui1EzR7fHQOxG40)Au-oB&1^`L1wEykd609SF(hFEStVOvI*ph*Rvjdf^cg`@~0 zAcC#zeMjVYtlO^5;G0I%@Yd_)Y~*X*J?|_oM}amru}`v)Mn*}&!S8}Ot;0>}X(!a+ zI(5Ch;ed$(!bQTAUqYuZ8{@j?9N_vIy7)^Q>26ZzP-V8ew%K-(?Zwn;MlcS4iy7n+ z0l@VH*7J9Bp=$!l?@5U?uBd>kqeMJ7@7MWcMqGGuelAx4^N_rXN{sv=^m06mn-Cr?woIt$ovcF2eoX|+sGtnuy+H4$Lm+kEslq0G|W|2 zAUsJi$v%Vs08gz3JILmY%FE$MSIwT_o^kiDFE%7^42|s6^~~E^$(0SqZ(7jx%jscI zgrYiNxjYQ}3f{Vo`{XUBxpm&(h9vcKTnnl%?{!^b8+(#tMqu(s0M60sbMN2nUOe#J zk53%*SM>*6SV5>N$*CU~Bp(kM81%*ut#!|a_Or>QM=Hu9idHU`$rxN``IwIFUp3w9 znogtDcG^ahr$K39CCbLP3cICYo=GRv_x}K~Us81QE|;fSgajTCsU3!R{{SlMlsF|6 zQOlOi>r2w(vyMp}ce7yR(>4fq$((K=wtQUnr^A|7FW~|(KxP0D{cEXwLb+MA$?c&b zQB~qpfNaM-tk!a*Mx~P<9tw5>NMt;Cl>J&W+WdDStK8(Uz63 z#278)JZ%~F=YvG}pP@%ClYe<-YdT-s;pC0H#4ZTmf!qx6YrhsXh3w9A=CpH(7XGcHT?8K=1(0z#XyVXOrj;)}s=&va;9WYg(^O zT^S$e!rexpb2M{f1pfdo)7bra&1rY`2FmhVbCOi>ap_%(-&)hP_3-7A-dR}z+DGGF z;QdE6q;;-{`egqAFfHVoIf-#PWQ6DNW7EIdy{{L@cx6wc%<^!otNApxgV50008iGf z#i(`4ki+RilGg(wzMe8tNu%dnTwOAq;Gn!DRl8f9_Mo++=e20v5LI|2R2n>0n?gr% zK$0J&WZke=v~(7a2A_DFnj2fW8479vTO@hrmvlX<$?e=9!$5myK9o%`R=jkikaB9} ztmHkGmy*@jy3>;M3P~WPnggEIc=l}MfGK$_Tpr(QaweHd@qH;-Vk0$oHG={8O(!Lk zjgheal}<=gDCCbP6s)lQD*5!ubNCGbwDujrry>d$gFF#SJ-0jx*k4aMFtix zhyug0r^ z!L)?^L8h(#Hr%QI0NkQs^vJ2==pAk=KYDfx5C%xDPjB(f#y4r{`88O<@!~ZjcANryZj4flWnik5B1b${&wbryu(@fd2J3pB^kR&Ydsh)TR!S5n?DCjpu6TjGt=P z`irY=G)s?`eeXG3szfu4=Oq1rC-SXga`RB?E1Qdn zZ-U7pG)klSV>}Q~^AbC;?_9Z~DJn+zoMS7Ix@%oSsM?{uzj-1y;zrzj_?N?M{{VZF z{VV6*po>!Yg>`v@-;|SA|2n7BZ$8Ow{T|206FLk)&jU`B9V1Y+ITp(Qi_$TwN zY7UCHAOmw50D3KOL&ua|wX@cKMmZ#`)4EHipwcxc?xVPA=7{blETs;!D3E&Mulif1EOc0vt~Xsa<0qZ+K_@KP zJeEDP=qo&)H0JH2Tz*CjiXy!>~r)y*2mO1*V&|L3{puKg(QQy8#ex( zjy>xPz7*uuEgHkk#+KQ|wK(m%>hP=jm-Pr|qq8 zEg<<^_04WwT71CY^y5VO+ToLGIoG{26Agq>m6g|warRsw#fHUczc-V!jC=v zjKl-^R{os>WdLF?CG!9aW6*kisTuwmX*07Q8e&-PVq5Vab5@6qD7!iGc`%ZsYrhNn zzfse5nBLamBl&WSmYaVn~+Dai@~3~|^G zuodv~d^pnWBw1p4!zvW~To8NLL+L)fpQ3(K|f}$^9}dyC)Cpx8!f_+0rKRZxjpKPe$m})YZ5_gdmIp;z(F5xdh3Tb`6HT2 zEAeO9qR%{kl_#sDmU?v0AN*oAMm^l{ztq>q6MoWNc-^&4zuk;gE&l-R2iMn(f%S$S z8;4z}1_K%L4TD-7Hf34rnU~w#4<|HGI%m-jcy+J4FH_PjP6ET;&=}wbz zo=JmK)EDd7t)oAbgdgPf-?SYUNa>v_-%Zltw9~9D*JM`K;ZYEM0RI3U=ef;(UB6y+ zUCSKn`izA4&BJH>X*TMfpmshcuc#CCxPSGaKUDQ-?QY=0-N77-pEF7ZKRpY1j6%hAbmv^a8s3=?yv^z$qQ5YoP2M zvr!ywD4EZfqlQama0w0dPw7vS2ND`R`R#2cf4fG9N4;VT 
z!0Y!F*W$C@agXU)YXxt_p#D^L4py3V%Ax)feK@09XxDQ%Q5jL{#BeL3K9~y}E)UwI zG~fmY>rp`GlWA8fa8$4v?NCiN>o{1S-nuerJm7_4p43TnatJLKhQ9Rx=O<0O-VcJ= zJm3mq8%vasG?6j%5?Mxz z2LSt0j}dtZDnSR2t#lJ@7&zEJN`qKN&7FhawJDP&?9oJ9yx75RaqV7^cu51PNW!xZsi8~JD4-Dd`sSkl z0ASPdYsJlNjnQXsk+&bc0s1av&R?3INE-AWX2n?Hql^$v7-Iw5+MJK2ZWQHt`_YDh zc9Cr;4iCLifk=TwBytDh9^##r@8@@J{{WAiz6Jp&`G?$h`ijJQP27AKi3DP$6@|<$ zhr#n>Yh-s6EUk7_-GZ{Ro-xN1y9|$itzBPf*H;hXYTG#4AEvq_^W z%d%2HIUT_ri1a)I=Eg}C3CwJL52k(nkFT|KSZ21gv{Vs7(Kg~O=0M&F?ST9->@dNy zG1%5Jf1`Lsdp0@xA);|5_NAuJ6P1x}5wQ)x3zE)9;d~5aFeLXFsJ4}70!aQmaahWu z!-h#W#N-}VcsS2~FbMk^rcG37@yz;Ly|J5i`*~qTkCBWKj1l(-(}RvHt&uneogOw}6X06$+ErheGeS|TpxPdwEK~! zO(y9)Sj%owtQ#EU5%=}?u31Id(4?8P+UhdNcMKNou_TZtGtND8k8@vnbR)yldTeP7 zxiG}G%EN^m0@xm$;8()m4lXp^L#o7@tjlX{Yafh+oNjP;cJ;}{b)L2O=-Nh&Wn-ht zCS$ruBtw(I?f#X^k~Z{?oOLf}(VF*OTlf*$$$cD=#W9vgW039+Pv~$5+PAKh*S>$J z8tIrxEzFGIFFRK_1Xt$0$5m^ZxUK_uiercn249&3#z(oTfpL3e~Lrc6GpH8_(#HCwmgV~00UmScq z(%(+%iFI<|!)*>>iM+)S@DI?pEnNfR2ghjs80Znbp6x>R=1<|;j?7ygziia`82RN% z6OGhcXLhvsk#D8gTD9%U8I~g)yPE(%;m&=@t=~u1V!LQ8t>jVRMzXRg=N{+tugcc= zztkO3)nemQki}&80(Q4v*hk!|Kf=D)_&e5jJzdnguCHkC9i7Fjb`ZzAJ4VvX+4tua z(+nSPvmBCAR*#@M+ToL`HSLY2S(^!QJ&~s%|58D5pTz^ zU=n{Ye_Hx`sEwwH(NOKgmrrbw$GH9?ybS#_&3t3=(^85)p?hYXQq>?q6l=-IZOq^G z6J628$7i0hGm~pN-Lzb)0(yn2*tY_7%!y&NhA zVlv-M@P3B5Pf}aG-&bmPFw7)f{vlzUo->N9v%b5v(e5B*F78Pg&PIK|Fe_Jq*9S7Y>|V(KTd1l`MS?h z)bxuTJ)DcGnR}J0rUyZi+v@Bz}Nr?_8eyQ-Ub^i&F$fBvz5oNtoRLZOR9>(7F1PTC9H; ze|*PyvM_=Ng<*>a8%aKI{{WX2%yqt`l4qV++^M-OZjlBdQXq0k9_G4m`14((o_V0U zX6K=NT^$puy`{>$x<(bFnkEiaSY$H&21Z49TMoOo)p|l}$fS<))_BoayM6%Q%s*b< z)$rF+$9M9}1&RiZwu6Sjj~<}b+3?Ha7Om0t*EV-bFKlHSd^c=u9^?Hhu3jv=iaCGD zgLK*Tj260ex*?6(Nl}=;kaOG`v2LGWr^*l8n)%mG_@xJh95*)-;YkT80F_+!`qxCz zKWGl0(Ci|CvnwV*au{*-_pZMhS#kU%v~l?n#Wl(=XF`)fW7$8Y3Hp7{bU6D}44<^$ zNGwW7oW40wwOc>!SJF|b2?fKqpxa)pGt1Gn9-Kad2iMC&ZkZ=12l86c?mue%2p{2T z(;R!8RXykJYi6eqXvM$0^$Ig@LrCd|rZP#|e@cX%FwQs_9@WRMe``yo2f)$ckLt~u zq`v*Db?H}ZR+h1o>$a-lE7^4f+B&Dc`2PUa z)2`F0l=`B9Qn9!^Bu-#hg?QIGzhs!P9!eJ^eZZB_=y_Iaa88ZZ^;l zi8;*~XZEVqpCc4SmmahQs@$;0`O(toU}u9=!sF8>h_ht!XbZ@vE66-kNqK?K6|bzN zdodJrxhEN*FQdHNcx)PBZr?q{QAKdL#W`y{aj`B74`65ug}p2QClp1vE7_0hRQ~`n zrH%nn>6#&!pIQRtH5pvvaTue;cJep@^8VFB71q;(+uoaseTQ+qflu8$>yeJV!w!=&Z;AnhR=E{PRG|21cO7h=0$iN``&<%LJ{wfW? 
zS3)z*0qy|Dy#&f1oX`_)2Hx~P0w>9D@vJ0-=OZ zQb_|80z%k#;MN|>HwP#W+L$~Fz$ZUS0TW=k7|5YDf~e#M6#FnO#Y#JQXKY0qNPU!M zp|raTeH6mM41lH9Ciuc=6)-%gpy_1lWQkKpfA++%HtL+s6p=a-X; zE}WPBn@0v&bry4TXaU1<_NcX&sQ$In?!F22MWwq-Et>86HfvgLjMf#(wTy#l#YB9Z zGwMtJ$(9VU?kv?lMOy;{CcHGzoMRQT+;o{QVHMOP#O-%0~nHhCB;iiDbWRA6vDt7$f>RE9zM ziV(F_QUcH#>S^fQD&ecsTS^AkUBa)8s3$nAHMV_u`&LxYHPLFP>GrIhIYMM^VtuP( zlE8R8uS*ZWIKW5-la8=X5ip`YU*1Z>{~d`wGC{Sej6lI&^7Os z0FXsttQFXnVm`vQC(}?5Am{5@OCr6%%~%@ZSo!doZ>g!ioCqO+&%JCW&O)T350F!3#v*X#G8^OJ*G8j1Ow9gIZlN6^RHlUY^*f2X4_@9QH&IV^NP&n#6W%-c|&A zW}?BZjXTGPf&kOGfmQ;Nuw zw;cT{_*xCP0A`}FM-ES|2z``orIwRBe_AW)J{kfX6Y4QnAyA$HB-CEn0M9x4)Tc$0 zNgC|>c~%4vR)W@BvBq=nTOqene-9P!mQ&nOyC`d`WI(_mA8McO!bV68-+I3`v5w1= zMT*^+{v3U30js`~FZMsJMRd2_%Hp>NwZQM1h_qJVf=yj?StnMFRdl9ffG`aiG+sR8 z?Zs^3z$K4punNYZU{URRitOgqI+Qx;Gd;VGE`&Qu1$zk zCusYcQQ$KiDuMMB;)fw|AzWb7G2gl1v8)?yoOi4Xd^SL;xS~{;(%woK9}$P=OfIu> zhaz_UYPF*yIPF4U-)gS|U%1P-dam{d1#TFT>P>F_HTy$#cT3xwYh6-Bw>ZpfatixCx-ga^$(0EsFF0fv{-gW;pGpZJpTZv z+PGkrL_ShQ8VjjGIbv%+@u7D`*ONcl-gJ+ORvKjY(@hTZJE2I_@DvOYT9;Gn7W&q$ zJ@9f7n{-jLXTtzsV!7weU<{L*jF-;i8;(BplkpM8zP+C1ugiPj>qX-K-@lfVg(Lq4!1df6w zRz*eZ0T?6x{p*-({{RtW>Rm;zZr?MI6dYgxJu8|Cq#WdmlE_{A^I9w@=6R&exiy%Q zR$xKh$>x|E%x91k{p(Ls;epO6xXEBK-nLkhGv_94E3Tcx$`Dg$UzFpJSLs@!O}GU= zikpBi_2RTpiro}#vuFMFG63Eo$KIwjNt|T4Z2Q)_@;1N)Hw>-ejQ6WjAh%6ISkDfA z)iu47k2O^W<0hvJ$2`=~ETp++Jd>J(+&l2jNcx&toSsEM+-_jB118~sz+;MtW8BbL zz`*sa8Nd_)o>DlWw3HT?amlEeLG3^lrarVLkdwtkxHaOtK{=oz(zm@pv2ID`qVq3d z%>n1d09SG-B<6%>&w7NDg%klKd{9}a5>8KQ3(iLr0h(M=@;N-v%()%tQe0wyleRr* zCG$gR_fX8hde9SjG0g>>oYWdj`q1QjiUZ7e&S)<6hkvaFnt;N~kF^v|DK&bUoSFgE zl$s!H(x}*?tI!6{Dr)A24P;OZBh>eCkwA=}dM@!mH4LEed(pRnNDSvAieV+n;k(cc zq?x8-pBx|7tD|BM(wGviFlY-EVBCr#$&TidjzNwND4sHp<`e_U!G>yLLOpn-(q1s8 zh^-+!;}ipFHh9l8L+2RX#Ufk--dpOX_`3cCc8)&y)DDaam-P zV!27^v|{AyC>XynTKGXh8O~@iytgt1jy7}X6cb(Vp}%EaOH^p&)FmN!C2@>bZ_$5e z9Sf(TjI>Jf0h8;KUq9sibH%~3Q=0T0T6}J|Ankyo#|{$N}dpF~x5B5A5}< zYEA{lVHnPFoc>kzR)NrV77%6|WI^_)F0_bkLrb+*zFWxu08#l9{{Y1rN56;Z9y~wt zJ7>yvf3p;eaEU8!2f41f(BHF;nWpVBM65@vb6pvFhfT6w&g#s*^qBa!r&`EkMpAR$ zyiC7W@-nK)lm3rOAAtyO90zpXg46U#fLmC&{{WYPPhV-#M1j&cln+3lSHIe;YwJ%C zI1b$LTCTUR+yy=xWeN1JCVBaP%Ff7QmF*T>cCXS+NTuzY3Di2U!3Mr^_|foY-ulvQQ%*o8P+Q15 z{SAG$(?8TJEdI}t{{Uyrd7nt*AGwGAkD5agDl7Q2nj0k1Na-Lb>Tp!il`=D5cNUt_ z^6gfcGRj2=GcRsyBx0lo=T{){9w#k9WB}31)C%)JLu!$Vi$fo`KYE!$4;Y?~bVRFcW0J5(CD7P-b~$r8jcDNGfx>8TjQASdfnI(LbHfGGa}3bGLy zeuL{qNPrkMEoc>MrNpPtmZ^(njl75{*#;XJqDJT!k%LmY0)jCj;gnDs%|WKzcfrL7 zeQKoL05OCZ0-++Tia{LvifqTYa4Ba}jmX7800blsf29>>ju`OR`%y4T@HwT{*D^D| zZ)%jb0GDg2I0P@XOT!2-Ii^!AcB1lqs&8yyOt<1RYjg_u;xax0@t#Sh9uNiqrfs5Q z-+}e16dTVusd4=Q0cDYpwFj^y@G(}Qc&`Y_G~&TV_C#kRB9wbyrEIcW{&XX4$19Oj z#rp=g^37h@SJ2h-*^K@XLN+<zkoov~TafPEtYM_IT)oRxJ1_w3ew$G;(U{?!#w`yuQLXOnm z%j43Bh(9_37%jhxxbsh{;BiwBlTs=`=791hK_qiojj(w*6@?%Q6pp#TGy`oecv{Rd zfC%-VM%Y^MlLt7dOrt|jHo>2I93-s?0Bh1KGBMTQlY7508|!T)U1-Qk-#)2kPv7IJfLWO4yw0Q6pk?EV z1I%b_YsLM|L8KUe-XiaK#h14T(F29*0$ z!)_F?dLp=>8bgm-DCgFb%}pf50N!e9+;c!l0}(`^8I1AGNVh*q1a0X=$@iccS$w?I z!{?k*@js?0!GNF`j!;fJQFt`6xb&hdn4k=$knnk?qM4X-X)z|@Swpq5C zRKDKSXxUvk1R4ViH#o->c@Q$L(n&s)xMf$xZoLiC&8*!d&Zodrn&$E|$H#-5`bOAr z%RJN}?ASgPd?K5x?xT|BSzyRNO8T>=ejaIk9cYVZ68MR^Ft+eyx;K!P( z)vn8@hY`zqk?BcTIJZ`5AA#y?z;oXdy+L+vS6xSMrdbxqI2F(J-mJRSgQN$KaaBE4 z)QKT6BOGUw^r+~{t!KRmB%Td(Pb+SYyjXayOhIkrVBRYe_c^NTy&e{A`||nstBb2! 
zw^uQ#EPE5{TJq=`tdp4~+#?4hXBEvAyFCQ|0FviQlHp`Xqmh6o7|#{HX}V&vug!VI za{E13%ezjRPn63+3jIXe*V_LYl zGVBlAxT_AE)_e#IQnp*ttmPJ{!-GoES*}}LSckZUSTgcfsdP@czQ5E(ywNmdd-krG zZ#A_0gZ>lfMQVE2?)xj!@d3CV_~xvnlA}np^wEEC#%l})BP*UNn%c%I$R)O%?ncP( zgG#ql+pW2bzc}EY^{HxIRQ6X}l#(;vqA%sk*+&M|9Ao2`!teHW`LAHWjgC$@;<)fT z0k5GxE%hPSG@~1m#~C^Pwe!;`jo1;`;=g4250{h3@t53tN9Uhd@#BLv#db0ANFdis=j0T}b%qk&M2?(b9DJ%vaz04B2MVB(|OkSZ=%<062VbBcq$ zYanH4Og5bQP#NBFijMLB0HsM>H>Fq`xK~avn6l(!x39ecSUpIra!-0Hrbo44pkG=7 z4IWoL%@1Kb3L!nQQ$e(IKmnr1dWb=eMHQPZ^sJqraX7j8FhE@9$Ia!0l7TPB`;G4PXT1)bIxYVvIlnh7^-P zc?Z2q0MynpN7As-?G;w&8Bw%xQeQ1adsLNs5^8|SZ2C|flj({(xQt?e#JTN2VMrbM ztTKXf1rSG3nm~gXpdpY4)|Qo8Mxv{Z^WeFe(kcbXXt$8EkH6x#XD&WS_PXdb??fcYLOfE6a zWfYT|i$L=Xn!#Vvro#rJP6bE?W6z~|q5yZSm-ypsxfTwK4?_i9M(c?hD2$ zDmNc$@m8#ifj~=v-n>{=7B*wp)QWLHM}OyDJbmf~N}kmvLj8pSF*ySiCXtD$8*;qX z6*nAE9ujdwMe~wq;t-sk^eG$AP!f(@J~5h{YbbB+lW0C3!z0z_a8)EJ*i%#Hx>Kxv824-^3k`kEPVoz5{sX?FKCtkP~d&S(QY z&gSjK4oMX9C_#V}7nCue591DKGE7e%lrttfP>h6{0Nb!Rp*+`D;Z8Uh_w=@G=vFpg9M=ohX{LvAI=Ri0OO-(8%z@kMdZv0RTXEo&+tSJ?iZ)yU0 zRUFn9eDSyX)J6|_CKXqKKu##k(kKHZh#Xhb9}hHJdmfZVbpl3QAM~%9^y`lX>NauZ z1RDA~qcyvG{VFJxhnMSLA^xFrcvNH2^j}6x?l#Tarf%Z0icnYspL)L1`hn-U@Wh^6 z_6D^XbcZK&Ex32Z7f91Ceq;s8{{VKsFq|!)ZyB#=LpL%cysB1JP8a();;A*> zqidVhLg#2cweR+tFo_y4=YlAs&DuH_D5_@frD|cEB#J(SRqgaiEsJAnmLB!Wk5U_E z5(XHjk6*mOPz}T1J*qfLF6cGOrdofoK;8?I;~?jn+wAUO)2!Q^f#2&|b+=JTaR$xZ zyVAOkQpaZlc#;9ZtlPJ;^xm0ecdU&=P<2&1PvJDYw}VfaUCst-zo+#pIfn5%O;Jp2+d~17=soKi_fe>ZD9Ag{;^Vy|2U0B81jvH~!I6bHc z$94rx+=>9=YziE$zp$aJ`_#NaKwv9YB#f^lb6EwCAlI0d??6RHK9%f~#bgjG-n0fb zp7j&OWt``Vjtd;n8Njb%vPLW0@-Qd}5$2*4;-j~m)cq&`n9g{wcxTp)#y+*+Ge84o zc&vCB1m>991w;(c5JmKGQY%rO&Ad$dRRyl z))|@3eJVg^muqBlD04T9=~!0xsTSSXP##T!JPc(^0r*A+v41KC# zkiF=CIN1=A`p^u(h50 zyNUw~m*W*Si)S=8lf(Qqly-5+pe4VIR5p^;mRwYKpGpF52G6YyId+cKl~zB9@j{b6 zlmyP{y>Kc?AB1vh5wTkIjLpRXiKkXL6%e`Ru&{09;;>)ypeE#P#Q-o`%Q-xHRB*nu z228o2mL&06b{>MDJ#)nY={&2N0za71q+QEP5w|#?F*AK=IK@mUW0GqQI5YsAfCht| z)Iy`ygv>oC4>kvWC=VtN6($9{)Czzzo@fXXO+buROJ<&2pm+76dDO^ zngTfYuNTK^@$c=QLGuj`ZvYjE`RQVJX+Ki_oicGL0ikx$@ zoF3EzU8-nzeW_NPH6@%UuQUTL!7Bd%qk7S_YxGBA5S(|dT1$<~-n9Os*B-_~W8jZc zYlF=3*O3PXbolw@rS`D)z;7LA#LHhC1`_R-elDQLl>hhr| zJ${sm-bZeQ%i!Z9K9#y^9Y?22q`pdcn|K5s3wb}CI_O5wj@e!=ifzAi%Szf*xwEIsO?UPkla(6`@NdC@=Wv-@} zySJS7;)~byiFByIKQn$|d9>t=3g?#nAi-5R;$ z9qP*0uWdSVHanLi)Q;86^}mQV`fZ~rbynxgyEVhQ-{QXD=?tZu@eQ4MT+-PZ##sc? 
zP;}wQ(cmiiNx81}=zN=0Zwn3vnyaX#!D8a>pF|RfePmii#FZmQ3ng+!B{U z(lYa!qb-_KYU?=n^8fkkyr1{H=l!1dJkOuF_g~LP4$cpWSG(H0tuf2j4d;IAyZuaF z4ny~v%|he;wMVo^Ah=%_+PXP2k11h5jwW0w9?f0yLC+wuNu8+g=P%6pEBkvL7JrYb z{K-Bt>L&BzhvVw_b371>cz1B|NwH&5w%*#!@!m)a>fR6$p}y46E3wFpMFDrKa@yq3 z9BcW0GmZy4d85f`{v*|xS)yj~08o>*Wct3+l9O!UGmB`Hv{U$G%SjkJX|>I3P>4n& z9W6E(Hff5oQ*>kvUi$U2u{(8AOH9Mq(qdMBONs#&d;2sfG7l(De;ZBE#0BAngxY?b zRw~_5CuNS$J+n#-+)_D;(gphmpasg**A9xf2RhK>s9oFGKJSP-+ZuJR?q?J&Syx5* zCX*j3ju88C7CTNrnsf){!KV6+UrhrG9y!O@^;(g@dw;+q*g<*mm&^#O(LqyhiN2NG z<>bDta-(T^X6o*1OrmrCvg%0LcyJ)u|J3Fb63C@y(UkgE6C<+#x9J(XTevm7csGY< zY9TVZ#|jM6{@B%5q7R~L7ATf2Z5 zz_WeL4ZxUYYHrJk9+2sYr-qxV1T%LY(Uk$A{Nn5}oz>9y z$e+gDo0;=FI%KQ)3OOD<#v0KNmyrQb)qs*or2J4P zMt5^b_!Xj2sc#t=2S2A!>CaEQV35UZsmJ_ZaQ#4#c7v-tEdHt1*lI|sRITd9u8s3V#P?g`$6uLdh{0#-^>l#M1AQc%szj{IPRZ8}&w|?llbx`Gr z7X7-eNSJOJO9jqThFJlKC(>HWm~~Mz zc&3`-5tlu|c_)vOV@mlR#|_hoZsl(SR>F8Hqk|+w3Evs(7U8B4>pG$cWXj=GUr?Po z?jQ*^LNB(*Bsj=Jb}RiUP{HqA2G8}{6S0@=-UorB!uI)4mpGqq_VfyN29(@BRO|6_jNY^h$;y>qjL~8s`>q> ztIJAs=&h1(hb}B8)Bd^iI`nz)O~dBQvMcM2ZOSOK?YG|B{doDNa{apeDf;@RvheT@#ey{}raawz!J*BMu> zLJ;}EkLZ-3xqW?{jbjp*g`gk2gUxRYmhaSW9L){Z9BoNF7^rox2s)?Rm?U0W+ZcV_ z8oY{(9~#s=XxiblT&~btl&pt!pPk<8Q<$}*-s6j%#koUgYDODQw_S31VH;b!qc!Ws zI->6k-e_Oao2>Fs-0;}7u zezWR~vz>9&sk~?P_co8nsOQO04NZ$4MUs*r<}+cNnNV!Qd~R z^#EJkGr+>8-aoPIRe;e4tr^FMcryi~)PtVrF$)v+1=Gyh+V1UdPyOMAB5<%^sI9b> zx*(z&)0_6TGJJJIH~SZUlMc+!$5*+*>Bs|ac^cnMuB3`<8(5o61?V(LV}+!E^;}6K z5n$UtrLD=LShYZpj=cMeys}%Ol}=6S{2u+r1O!~#r3AwF^a~+=L7WXWnFnGqOB^NJB6NZ1lEyY4bTCKO*JYS@FZP0u^vc2k=$t*2yBV zI%=rXRK2_9s*nB+K2-G~k5sF%C>Gaj&>>);)@rOrYdNVW$JN6n(3o&U!Vp`tE-n_r@<}#XfpE>{b!K zKk=>S-kyiZiR~|eJ6j%CuA{$Z#?NCUE{jjUQ!K&Dlt+%E+3~>hYZam!fA*|NT4cEN zPg6^<&|~^|+rS(-R8I4y_>C7!F6}gYqFvN^>`n18L3cAwZgcW3b!q;eo*hime1x8F z7Eg6(auSby!3=tOSNh-SUxM0NQXcEXNvAK7kBHt%;}Y(h&r+6eHuW2ff19wsfw-W@ zQy!!Wwr#v?clvw@6+S3dON+y{s)hJf7Tvh>=liLyZ-h9i`$PLXJbrI0TjCD#vQmq(hit6s))!OB(@Yf&V<+HOAw~JDm^0l{TauSJ zBuD7dECA5u?U(~6F)<-Yptx*nw-4fGb~{kEx1e-iN;WYb;{OJ;z=$$ywXes;RnB}m=$iPG1a8DNb zte3b*nR&A59=IxNO54n5T31`_@|vrEsB>6A9l;Ww$rwpOzfd1>p30~iL`UG3GkORB&x&N8PIBh=;FrlBa9y!U0^<@1KBw^4k3rye zbHuC|eP1ifTEaw4cS>NmrQSGL0_TUL1_^#>FH;>^cRv@7)^!fQqyG96T^ej6#gH(V^l;unSb$Azk&{Fo}Ig0LH9!I4X8rwUmfTk&9(3d z%+`bCmAB>A%+kI4vR;Ux<(e+K^=^bBls2%ysqn%%(Z*PUfEZ=m1NbTT1nm{qYm_mX zsv;S`Sr(KUeO;l%==d!Jh84u7yhP6YbiL2)*JlcsB=w>2D_pJUMipXL#! 
zKVn}snDp#=zrx?k6!F;eiBMYG32M>F{$|*EN8vq&<>j0AcGisrpx( z@sEe&DdrLzKuq>M`#2r9J7asHuCD_QWSi{_-uCayc<{3S!JqcYHyL*l(lzbJyUfHP z%2M{O-AYhGslh(|)ae}xc7ziwQw!TCO#+QyTB4i40L@ zUMjGFoK6+jD8Utmx$6O)ZHUD>46-V#dqMUDr1dtiy$HEg+-&Fo^|<%*l=R9ay`m3{ z5Ix#lem_u|I#aad8Z$LD0o_{qd0EvMd{B+)`ymppQnBPJSEyuS}q9 z3~QDBV6wT_AZG(}80)e1(vp9G2 zWqwwQ1Z}5?*bJ(fJ%@R`#iwvPaajq;PoPQh6Rr;T(DK^J9|2ytMuHhzX zCt;#cW~f3#K1Kw)lpqbTVNnfk{Rq^;$UZqta2{7vRTNxwE^DesTr?=DfYGn#AL#8+ zaY=y|D+$cu^biR6GNc$751c`JY;0z8ws6mr=3~SDD@^;c-a^ z#`TB$0Zeyd(QKbgd$OJmC`QO>2Z%dmS{(Y(k(8FhE;UM`*B5=3a+L?*=%0bY@HXJ) z9@9LMxfir<&Az8C^K`h742W&E;%vF3mjO@@#3dR9)w$3$1gfw5n*>Kw+(be{>j`76 zBtG-`&HL>OHQ$n>wu~yilcf-NlBK@VVt>>(Je6(wtIah}hIKMbY$=ZT+nFIf+MgfV7wB@jOl8 z7R5Nkh-3F^f)nUVUi|7^EJmI}zMV(~b$IUr@Yim60s zBSp}#lo^{`pJSY+M4wRzg!}HQDE# e(@Bh>$@f_EXT|k*NaoIb6M1N<)-L{kQ~w8%epme-{AEaF`wpKypRmB;o+T z-))jgBNuOfZ$B4rA3mAek^qQ?o&h;I;EG6+fh1M{GA=uj~Ml`T+!8{QZ3t@7(dU3-8h z{zoFI%UEPz8L7bZSc` zmSbApr)s;zbVXyiE1kDM91J%TuHXL{3(26dP%uz`P`Gd{IlLP2%DGe6u!Y^SZJG^T zO#?M;)PjBaIdLeSw|#OI>0G(pkwX`0+DM{cCEl_3F9Kd z2_jFPyfMp=0x4a%L}yYJ%E^6?&dHyena;(dM%XkHyFgLk8r$86Mx6YRqcW)ieM z=|3!Uv}IMUYPWzEoSWmEaT;cNSYs$=S5LUzr}7%c9BTBQ^g5$_AE7X4KG(yqSKQVp zjJmd6)oKm@fyu7dKMWwVt@|yFJoWrBTCEQNDUEGbo5iP2Da^&h$oPkm5g&5yZraSA z9M_R-8k&mP@2>}*jYZiy;*s*(4d*6_^M0H6;=P86GaPSDZGY(-woW0Vx>oiPe1nKJ z4|63I)rkT=zuur8fU&ecKe&P{g!`gO&m*M)QO0F^v0gaU<#BTP!CXU~3bRZ8yK2Z`AD5xc45F}G2BIHMvH>Z_#CNLhhz$7ziXAQs#(o|Xo!Tac-6th3m z?#2*P6UsE4hlJhbGG-KTh8&kaQ|MFOwvd;qfXsw0ApQbQDs)HORxzO_MKvBIn}{Ol z$(!irS!7mJK&T0S7K5X0xXAjcfa8Ln@z*;k&FuWZH7Z$+Hlali5(9%C{}|E~^Tj)g ze$C3S-9-0-YQ6_IMu=U!J+6A`tJd|^%p!=K;WZ3;I_03WRN=(-+*%sP=nBu245}3R zS*7ywr0k8InYuu<^}T>w>b?m9OvCT1tTNS9vo-&8i3(hz_ktbvF9YuCdO1HbG6t#M z8%+h7Q|7e9B=SGVh1H~_h345Scz8r*sDF6!=~q6%?P<_g#w7&x%(jVdqi^7|wYLQ* zT@LQbgiNe8Eq5>+!L+H{toNQOE1tIQD_9KbJm!{*UgTRXa{QeWRAS~gWJbEMVzj{l zH1_fwCDPw z1!TsP)XQ8K;6`{F8_w1%_zfeZ)R#G15NgPh@g}@tZFtW>PklG38WCf zdcv^}EOr2uEeCoj8v}jHsE2y9L8aI7=dF-IN)y#n;y$FxVB4_p>O_EdJ4D%~l`8BcNZ)d(zc3q; z%$_vxd@Q8`=fh-&6K9-BgUbr03OINY)k5(ky@6OjfXcj^`An`SEeA(mWEtcxq=v*5 zune%raM9ygrACPGJ{NMM`Oro6K0;|J_wh*?GwQO*GUqAv2(O#fwRRD{Y%egOCABX~ zTM$I%jgtUVHIxZ{sIWU6s9>EsntuB&y*qaR`Z(9b0-C&h_U%2TOe&Zb76e7k++4q= z!(E+=&^s(vQa+vgz_N{VqvZr~l(JyY{j}JG)7(6-EE$6o$4n_^Wnqb#7#{FX>WP z`k(S6tx$J(t@0v;em0s8%mbEXn?7b_li{?B{v7xrf)}JC-2X^>P>t)Z7tmrt#)^;ov=q#(aKYze(8Wm>V?nma4bs^S+2rhGEy0FJl zHHdOeHe_Hohd?`ET|`(!cCH-e{L*?7iP=sI_2K)Kw4pXbQ%ZKNLN*Kg7a+a-AbB{DKD@7HwZ%4DUw8boC?v{X#YQPoYnqEYN* zt9^By&higmQgfFYl9Ec!0M}|%s4TkZ;qTcl_E=SI(w*ZwX?*Ae^^aR`XrE<5X4u3= zM@c&CM$y_HBRkhKV$sRt(!ZT(X>Y>(A?N&3hQ_yiG^VmC~xuuI5>hMgu0H@r=)O@{ELH1B6rE zoQ|8;+)_f?FSI#cJKR#%4BZ2bIp|xd|G97#p!YQA1Vx=(D7(*kh*d$LlYzshRRgRV zlDeEqp74HKSL?uSJ*8j9IiJ#-wjgDWEJw@jf}k0;RhiBzy^M6}Zz-FVG zi^@I?ELEf%N7AdUlENj<(?u*Om6ZDk^^;u2 zJRHCzBQ-KilB1aIB`@@*vFwn3;QCE4rE(^5f)*vB{X`0 zF|e>yvdbHql{`u3knCrevDwQP1s${eKo2g2Mg(-v&Cv2OLw9#|jj14Oc-$+J8lce* z#usXw3J4;N4G|rl&o=hp+njGG&&dqAa9zrGf3NKNo?1R8sMk-I?JVzTd)FYnXa+Q$ zu*FsMw!T+bPuOoR2Nv|EOyGDfcU?t%({?*oAv%gGl$~t;S234>52WTRr!imm{_MGN zO{=Ix8PB1$7}Oz2oseky9*RF2HF&q16e2c{6XTRjU*vj>)&ImjOb}}5bMKI+Qu5R7 ztDoltd7!4>JJS0Ze~#olIV%6WmhR>uuYZ@1p>uf-!%CtcRGjx@JF6YS3yQ_*&Bs3K z5^!AhxHKJ_qsq#qA0SY!=9$d+ipbK@TK5YJ9~I6;a+c4AfoIwoaha;aN%tE)ldOBS z#%|YrEFbR<+tx$Ay`RLAEWV5RwQa!NKY{fNb;P!dR&gvEL5}L&t^y#&K)D}EN#nZx zlp9p|XPyd6c>w2JiKnSL9(Wmx`nexpgJo3+4yeI{rTlw{sVaF=2#7SZd?-~})WBGN zS)uevJyoopKj_wTu4T@{y6fyjrC$EUU1%$h}JhN)K(jT`}m65=;!JvoD&32)G1R)0?ppk7RJ-l7CkDKuP-VMsviIP zWi7OPj+$TRj35pBrd9A*pDn#<22J zUX#Ci)8k=*Qi8%3q4S2($+;}vu$(%Fk~rjiMmEyKwnj>fIE;;Bh5I$M~O;h7Dwzq 
z(GoW^vBH^RLOi#ash*f@XEXqB+L4))$!d_e>KQ5Q$jndPdL|N1HM-+UI6o;3(fCj@ z6vS%?52*)R7;Wm2FN|%sv`(kvp(KaJqm)yZ+49}Pf*eyx!FSUsr5>tLdF&dJ$Ihr) zVWQH9qR7(lnLd#ITBX(bi98amG&h=Ld7TcFLG`-G#BxTfVM9oqK$(;gO;mSm?^W0sCrJ$m!nUr+7C%-2HpX~MmKF^Q^yroQ0ppanQz(ec$%*5u{~Qw%9JJ> z8x%71%9o=V2m20xg^&TmtHx+m;g={;ww{A)o4smUZTt@VNe>b`*CInNq!alEGoyx& z$mGq`Iak}{7o_#fRgNBEQB#Yi9LOo)_Ha%O=9qUz?roy)Qz>i1E%752&|Ei`)i5 z=jioTRt)O*Bzdt7*ywZh%!P<>cDk?YBpd8|5m0GE7N`Cq&r)fNa{Bc z1ICw(qG!*URXf3-ra}}W;Yxs|FT}uXR0-SAHk`NHUH5(h)%i#KOTUU=e|WRKNcSIm zwOO6wbw*9>Cst26zq)w&6Gu+`zY%96duls}g+kl>qHIY;0y=%X3%9L6&5W0vhtsj4 z-8z3{T3SMtEIzoD!V);%8_SJ1Whzzg;mU;nsD(7_Nb3|;UH7PVI+XomDqgvvuFJ^( zX>Fe`Mow5MdBt?P2Dsr7yFuv&`|9*^Iq~Ty?>tX^h@FVlcugvjl{Timb3Qi1QpZWE z{D*YIoidD8j6-wzL7>LlmeJ)jqS`z6%x@?QB$Q!Q+xe z?At@LH$viL+PXX%)ZA4Q(-xh8rCL96xZBRp`mRu=nv8|_(|N{x3{k4ReS{f#&%Wk? zVVKlatmU$pe}n2?Y-$3lFH#rEvLTY*{WdOnc?M_L;vj67oTMTb?QX=w*>${avM}`l z-iCT8?3kOG;DtH>T;;YqJ!)qx z+fFho)axoI<+%-@VJ9eb%#QXxtkN@snz^WGrWawdPKxvV=5td$wXT*B{J?Z}Zcj|= z3Nj$0`6}eaB_mDS-}V}mSnNM?)AR9_`i-P*su(SJ1c!adpGC%n^>$68Ur0(uS~;NB z%wWpeD6MoFvkks5{CK<~-DKs);nKUT*+{H(P045Mul_O#F%Mn^Vczb7N40lYYK)J7 z=IR2=N@pheD63~k|8sM?1die?vg=E2`?;K6Bm-pa zZW1McLykt$pO^g&6A@{^7bT)wIZG;PkTV?aMo*fae4ui;VJX~9_+>!vOm0k4a}x%* zLq>3r#eeS$th&L~H`l&y&I;3T)v5+PbZ%IRv9?q~A>Wihm*Pzva@|V8pY?MJ3me|b zGjY6ji4L8fdu2tr*6vHq+AwqWA(@HI zq2uW6QR~aOnG$Vm0GWc>yj9(U+ovOWs-bZ&2sA!!ny&^bX%+~Ce$fl}96t#TirF2-XvMz(tBQw$8{@6JgB8L* z{0ijEooD6P?W`w-V3x4x3caDkxgtk#c0WVCOd8OwC~NHbxw&|f$hv0I@b{tzARol( zE&}3*{ngG`tF=Inm*Ib*I-H&|K*ftyEFV

J66l>z7T20v&BrMUK^shcTlD$)2!& z5`D6QGPrTQv=zN=2yoX7`b0JZ<7eEw>PtzYQ0@7e|0=C)kk-{g3ZpRi)C^3!!<8%g z!&_L7FU-EiMfl4VX6cSGvAF_JKmka$3~V94p7gGvDe+H2qRvIIESKyy_tU1ewy`ntT`-A zm_UBg*vY8U?MMe_7Uy?s^ax1=NK1B&p>4w-mJ22v;h` zu6QF1;AMlTUPtg=DTq5Q4ilqNvW=7qt*aJlMz>MUgUQ|C^x8!=q3pL5qt;qSa6ad= zO~+w8y<)J~iAeegu7H(*wqRl~a#k}@5H;HBI|}R4%EV;+88jwmJneP3u)fGM!qqW~9qqvtrMC&5c@Vt-e><%2E{hc8W2j9nXpE zSeH}W-kCKTdV}0yR39>}Kj3j%KR=+&h_lT)O_fwROc@d*hRVb_S5>QM+7Fd`d0WkU zYwk~22hfs(={SaTx6~nW4$)&#akaWUu{^j#??-=lh&%1B9Tmi?Z=fS_4Vsrt>)O59 z(LXb<{S*b=v_Zhs##=bP*@r*%*c99$tFP!umy=wW6}C$c1}V&X+N4!1zOXU8d#7Sy zTY_c3cEAD5At+4#?VD99TXwoMbh4~9#SLl7Sa{=TE3&XMF^Dhvyb&h<=CsQ!QKXo7 z)18YnNKVe3!*{@?Mh=0pG? z`$qW`Q?+c*<3ErSX;!QKZb{JbFFErH3D zj84IjTP*^;F8_m-4kk8NGcMkh@6x~Jv1Z5Z5QTXzljP=KpH2 zs+M92lq;2S`>B4&>(!q4eW~aEBGP3_XWh&_3}83oV_ovBHkN(0P8GDi2Ap?$kAmne7&I#_)dNHkkzC{`1%QPk&O1rsd zUbhYT{U*5;6!oAJ38VB$mkvCFoSE$Dd6qQtqX7PHDcW&uOp6fXlPXNO43~`fl_`%8p2gNG+Bi8I#aoU!$;gt*+9#YT8Lt%Mj$frL@qyQWp1$eUI&FU ze1rAjve}byr0*pzW%4Rg-!i*vJm2ifV=Dd0aPS=|A}iy!sbiv(CI2I=Hs=uX-PPz$ zlFdIxZX*5aE14im+3%Ez0I_ zg_#CFg@mxU5*9Wi5TY>-3vrm)vKN*JUX+L;;71CMeL>KL)bn(n~$&c=3jD za3-?Ms&--&a5M0zm(DpMlZ5kVx^`mSa_lGDkSwh-?F@$Abk^E-=GOCA05hX#pa3%p zrC8^9ivn)qmU@PNx)%1^kQ#kxY^DyTVh)(y_=At=1RqduHofQ-E?SZ1P>V4A(w84*aYEy*c3EQAqM&)!X&O#eqbu6~>NBPS*(f#gxmnZonDAlT>-6WKU(Vsw zpi^(1Q-Y}}%XdLX4<=j6FeZ3bGdK_Er}*3P@eI(R$3$O9W8~BtiH-}avO?vrx$W_7 zUkHC|-A`Jvve^`S+&tYujkF4Sy2850t0hR+Sv=)jdh64}?q0c<)G*EY_ePG4zP0Iq z6S8g1Pu?ZGF+Ya)etxd7N-wN`67WW|q(LC++VKoN=9NRkgE#kv8=fdCYjT3c>9Tt& zM@w7ghcd7N1wTn1S$&*166H7Ewi2vq4~S@ld|7??Jm9T-nf&Z?daOPf2GUI^tp@%! zV_#=tFS2<_e3bO?4#$s2_X5z>D|eC$%Z!hhcINds;Q=J*b4MR-2boaks@af1Whr~& zfH#1-vPG*=u9`2oYT^1vdp2t5QrY`QRiV{ok1LK(4AhzovP+e3&Py@gg2}{0*OX?; zpxM^GX)zSf)^`{;@rd23Wy|}+SV&YU_N!3Lj*|i39F6C$nBa zjtTIE#>iX-9NsXR2@P6$ged;3@Rk)mnoyfL>%o+jp6T;_1WRMqrK>;=EZt7}eXz|# z#AjAm*Di{jKz64%R#cmv9p+$quw6B-RPfQ!=O2hZtEm5h7ZE!VYLH1uj=%4SM&$i2F09$2^!%e zl&?%SzAd>eqtriRrXM%I0*iWEOrSA96CpoD^X9$E-;7<>z;Ke{+&wqN?e!v2b+g}7 zqg!)1wd(NQ=8MjU(<+Om8Gi0o?=gN99wG>UgAvz9*JpOa0y)-y^A3#QCbFSYc_T7{=A+G4@;e?D5J zxRJ*n9^@=^$3}-?_Y4pN&n1q(p!;ayjnjwQF+S>lHhA`CC$=JQkA30}HIy1l=cT+{B?K5AHkwM0C(Fxu;Uvm{ZT)>Dr_!t&ny4`v-_d;l%`#h$k z(YI9UeOilLnMf0;Z%E+f;v~ApgDKQ~MSZp+ZPU!stbnAcNN2dcPsH_`MNG`}^=$Fv zETYL^qMl2^O)+1iFHbOTW~3%Za(5iw-r!=Y;jMyHE}(1omIm;#^Y5x@1y#PliW{+n zqHG7kzD=Nql3A>l?}DHgn0hOyXhof)FRi}notWEA73*y0;kx^wNKkhmiZCN)gZ)q` z%$toGlJJmof;l&gJ^I*|6efEdpQ z(|O%PXVZWqVjC0u`+Vz2q=4*@uNp9GovPL=lH@O-5%Z>_rD2lW>!3DVX&q%iQQ5Q= zSDG#xL(&CZ8x5PpPYkrv?B&c!Tu2h={NB!Z)*o0~<)g}v87iZz4QWBPdh^1eI@vlU zZB&IT?+nj4cGV&s5Lpdz_fES)GDIyQZX?bpQ7cmglO5YKhyn=+dLg!MlSAyl64MEY zM51nBpp_o1s8Ff&WN1LlQeigh-L9!#0ecPo3(uB_=D;7EdH|iMG5T6ixFHprOeR9v z5Br+uJAs#>toh^AH}2N=v^fgbXAEFXnNi$#7SrV#k8u|r8ae#jR~O#SpQbpocUmZ(m+J5 zps3yn=iyuxxKB}+jxD#@@-G0_uVLkFrtkcTF^RY^^=xDLL4NXg8u)cgg-B=rOO|v6 zNFCaOLXQH3z05wL(ucPkebf$6)&9Z8_Ci8GyvZO0$>G_jlN8UF48edo`V65!&qr** ztj;Qn`iUkkBwk8&qOAG7k5&}6b&yFezm(ljnI5jxNPA^rq?a{D$KKD#j^6fn^!$cGmB1SQBA0;IBVtO@mRTyK&}sj z1ejVmIoSId3SLz!DLuCMC>R;PTUmiIdQY0B3vWqTqAs2ohoT zO8UVZAHR=3HS0XqaznX zjj3E$t^Kqitc?nKUg1L2E5Q^+Yh)wxbbPa&;jpXNs;mDEO3l1*l^PT|YMfFcyf@k~ zz5J_WT5l;yYek?!_gJ~X2BA^)7r>y?BDd0KWX`6SdUus8>*{Pea{9xJ1>33G;fdT! zac3&X_NJkZ3;64=<5Ki-37>#_e<=+d>ra@}#QKh!;TvA<3KKw>?Tp0^M? 
zPpZk4Apnx3VvYK8dq%a2PsV-V3AKK%PhO5r z9t*!O?=kZ9#A5FUs2y1$G;T0_Zi%+Q_dK^e!D2z?|1ifgq>&*BjtT=&$IwA9Ki_P zklm8QjZtK4-!b?&Y@sD9Qdy7xSkOaGMcuVx>mUGNcybKx(D^n@mzv_%^@R94r`hX= z_E^0aD=(rPYyH}KlN()p#%dTosPes7FhKkDpCwD}n7y--W+dAut=iUgVEV6I=IBH7 zxrTj0TGe6sBbzTqw6xiZaDtmR?log za$=*l(hJ!Od1!{x-iU-82TO`G^$@*cTUryzYp0&2#}~~z=bR;-JLn;fglS~&LgtRi zM*jksz0H1$UE}}$hUJNY{tM6aCqjfbtecK;6~dnCiZX+jqN8UY%W4bQ-)q3_)k6h` zL`@dUoJgER1)?!nHB6NHRJ_3C$N|VkLfXvw)NU+rW%+`L?nmLCrmj%HR1osUh9Exf zUW+rCCFRw*&o}gy%HLF_{slZ82r0WOm|k}|!p964R;vC%$HkmYk{TMEe+CxKZ9O*I zX$LP#fUb-ct|Ev+T0I-kfz#Ykt( z`#?|X3~!uaii`*9b`N|%f9t>1>b~{Zuxc}|fk(!%wJDx4=r3S`5Jh;Ih5t0LQ-5Bj zqFuo`-gTkv#ymw_^nLp$(^7*sZsNKr<^3{I$jLzl@-Kj05&Y2Z{BEU$FXyutp2QGM zkdcWqObxhjqUyHOvssc&1ilIv>#G@DQ48z}Ot^)Fl|Q5i+d?())`DM3wEH|9Y~sJy zifcAvq=lw@REoHNSFGY^ME5Jy`_F`z?)o;8ikU6!b&%cDOw3=vFgWi?zynY{8Zv`K zk38=fGV8poQuwpkFsO`~wcgV?Tn0DK{?x<_fGt&66N;beY+ks|tguPMS5Rd)>aUyC;{(Ln?hWj7HDc=ZuN_-mQU+wx-f2z_I}bhU61tdcC1XqF8H zwp1IPWUkSk$>8L}tlAtj5RA``#Mr1`yV5ZRt~iaS8#9iEW5vXyyD!<2+^~W&4I8rF{lS^Bb8iFU>0M7|U!-|5Ekl z0qaw1WJYYY?!W->5;Mwr`|S%+(O!tZ+9E=ff=3hCn`UD zgbkQyZZK=tuXSVEE0nNQ*49bW2|E0$QlPf9R5?6X07f*G#pXnD_T$Ku*WfuN)uMWX zhqV#~31j25nne?I5$1`Oivk%JcPn2hD<(=!%uvG#bA|VZ5KOA@QURF$O3{qn0BT|; zfE;9GED$!Hzp?^x6LXHFyBYNJ;RsRJcTz=6y*S3SG~G0#leSMR^>BL$ZoS9fx`2#A zq$6NcmrpAou!mWwj%UBO-&dXGq!5;puFY(OzqX&s7IZUBz)-t#3W|Za9#<5sS2w-W z^Oe29YQm6_nJHUkIe50^o--G6gs@#^)YN z9lWT?3;7ctFNXFPj`#AHEbYqiW;h8gK)Qn5;frpjt;w;6oZinTc&{&r8qBhu%eMf; z>(ig)j_ivN$CCrOQZ_#CjV#vb{h?{PO9)!&$4;cyI|#a^vwo4(LmqBiiUZixglS0l zR&u|nNc^$Aao%>U(C#5Y(OaY79b7qty7_EkX_{P;n(J->b9Ay$tF$W1pNqeMuJ6P> zkq*OO$Fc>9yd0rcdcG4sjtKlTI#2W5l*~;5eeo3@g2Fy=Q3Xf?hj8e%()ZJ+8`ur~ zw#GS8_kL?pXAPZ2jXaB?7OpsPN8X0=ilSSe^Lu{`{M`nQ{Kg_JW! zAQhoTrY9_k(1Oc}P#XtNwO8IbQ@@IGUH~NsYdK?OB#YSkX@ZUqBO(8X3Q}>~oa3!& z!RTFmC4+GOps4d9-Te~%H2&rXeY1kKx@Nx8u&jHC!CKc#`gMjO8ILaLlm1s)8PsBKHh+^7ey)}GC!d06-kUGNBF)fQC^g!SSzVtvD znPpsnZ?>V)FNwGu3H&TOvz+{-XD~xvEPd8<3+)Q#7YHJErm_wC`XpuFZ4yPiYu8V4b!IjVv!YD+v#z1E@apxts32jk4fim5j!oB=(FZO?ou!UZY}d

hPAkbbTr(^R-tMK2 zKN0>_fp0py#L1`FRr&*5;-DDr4LLD|sj;tZQ3zKN@*cwl?-923bSUpcacn1JL{nYv zZtft-cac}hZN}n*gHxCHDFv?0CxyQfV9AWc=f(?@7if+U5rSJOQNp>2UTVfCY}SbF zqaxj|>0{M`iOe$Rt9u3RW`~8f`k|HzD zF^%opSyX9+jYUn z&%KiTR$^Vi!830A>~toy)LJOz!?S86`uLBg2Ua4rSk58ug@-Kly(2<+7h z$4T(;B=Ewob?78uEe=6);xm2g<$@*#FTG96J_{Tve(iRE2dF!0pqX(Mqorx|Jm0x? zn;B>Y0=^EuFo+Rx8l(n?tgXnM=cN6S#$h3#tgPlvKPyb0qr#omugkC3hSZIPlx`Qm zvZDfran~yU0tQVc^qq>L(4W@B9>`>is0P||d53}ZJl!nJr>ead>u!vlnM1|`e*Xms zx4$4%27eqmmG(+M4H<{0|CsP_*Ls={{hFIIzEq{!#hDC7*rt6dNA6FgHCE_&kKXSgyWf0T6hkPnzXgD zq4eo@6Lk4bK|I1#K+_75c0QPlwJo>K*xk=|54;NMt!uuNsV{Bfxmf#738k{MdPf2n z`^po}=l7!Cwd+$7`6niNYFh^%l#vXpwrONCp6VgnR?Ol9Ha@A@J)5MHD>i}aKH!!% z8#3+sK|2b=6#N{XdVRy>j|Qf&G6?aq;T^Z7iC#t(IdC_9tV-0pTutTGQtR4vtZYDU zbIrRyLZ|hq*bmh6^7hgK0&R>QPR;_(NVA&k=7q^Y0SYmUS;0Q<;zexTndS5N4(}la z?oNwrZo7FJt+-SD9b^q4b~Arrav(?Sd4|`mhi?prte4`S9DS(ncm7Ez)ZbnkPRDnN z0ZZ*Hz~hg!|1fTZ(Cb@S&`1mgxM1n)e3!5{XI)|f`M`@)BCM(`Rb!v<;VKU$ z$y09BH;tcS-v<~`(RqLOVg;Ue;R4Tq$+Q0{8%JPe!goy?$K<(N@mC(*`}#p zCa7nmM7qGa9Ny3?c8=&X=-Bn#I2G@oWbv&XBjuz)PFBW!xPRCuGn6O?cdqhk+}9xLsyJI|)dcQ7%PI=(?BoP&&tt2<9Udg<0YlLFU&$)A<=b{O_@mAR-C8?(@HZo= z(<%gUV5$Ho8siaRU9!-@v!|ZZgpF6OhegI_&2r5?H0q;?3?zPIjLab8y=$MZJVo$*EQ>u`JXy`1j@@{YN;(mL1~|&H)xKF6gTz{uF;2> zlL#8K8u5TZ0Y4w1R;%MP2d{s-sr_9T9M0c8=4u2ZH8R6T&m+xfBR;|_pJ)&h*J^~O zA3t*oxnV-%*;pn5=I7j(4pyT8EBv%daJZN26^m<-IHEHPgRlr}1a6>nI=lTzgH>Xc zOdpn?p1n9%*+jR1DYQF-!RDYza>CX={rqF7ujf0siB^p)mLvf$KbC5e62j9Ry>T1; z08;FyJIURCr|gsXCQG-TRi;$ESu*DyDUV6fUXMNCGxnuI~dAUJzO8#6NUx;9&QfT9Mg;f}KN> znh8dNpshkl)u9(Ip%;NKf)ibmGjBCzt7M008mx$5cwqO_a4>(~d%E{CXU4r{SJDYB zyeL}rZmBX_5Md{4x5A6TX2(EleG_d{KzzFdYm@zcX9t%pt~iB6`_&7G0X4U-Bv*V^6#fu1z zWdqkq}J(~TaA&G>H&zxkVtNSv~L%nAXbjFuN~x6wrrARi#y%DRM#cF&C30@RoX-OezPAI8PYw!62fhHh{vM*0a-Kb=qX zru|7^P<>Cusc1?9(gQ3jLoUl~7(+VlXa_QMZEj86vN^jo;`*-J{PSPH2ki$=5Nw1a zxE`6k2HIl{8h{;DTdr`ywHuEG-ZJ!*${#kbS(nO&=2@%7i8@Bp?AW?WM5r$g9`gnW;+R)l-b*$*?EEt+alshA9LgB)?M*w zf9@(tQX6ps&?#fF19D!_^*p!4%}b#H#$9h;Mb!u2&V;1zypoCKfEJ% zbO@y~wwB;gtTT%a*8C!&i{$GVDoP8d_7>nFTB&ozHZwfTzEOOZ_zVk;%)UjjsvD}@ zU{1Z*C8Ij0eMcHyrmyJBykxXmnjFunS85YV>3CPW`wQ&ppe3s(g7m@?rp) zS)Xb0-APx0`>mn4LACDGYF7Du?;nN)X?{)DQ~EDXrByui*{ z2sB27i8Tm!H%;3&KAn269Dy&8Y;MXN<=io=eRRnX@*BPLYTbS*yZC7Dt7}SVi-J!! 
zN}z)fG-JmBMF7`0w^N%u7UNsM8&kDkO=Zo4LA(tk&5q zy;B30AVhVuPE+@ur8Kji8QTsPMLWa+-2QmOXlZf69?_@PS#jb8~fa*B$!H1=7l9ZCcOK6$(h$eWkcsgxh z>1*)FVwr#6)D-kDU^;ur)hW^$JX%hAP5vK$`%)zg`bvwjA_aefac zvCVT?nnH>hhRyjjTdq&XTer1AP=FXePj_*vm4%#BO9!`FpUnN7EFzM5MIxRw>mHW+ zvj&&&6O*DsSdX_A4|#Ke?$v>(n>xQr)5gnOauaC2wsd*Jj5iTU$FY3j5`?6}N!E~j z%D_Z44H*9)0HHu$ze|FNdv0)OCl!N`MkyMhdM2Q}Mlw$IYPCvGBL=9DW zloCM%Qku;!(K}8ut;c$~+O9fqb$ixXY@P-@!_ZZ#-Et!XIQ!8-HH%%$5Io-+2c}*TocSG^Q>?ABv4?;0`?1MU8g{0INTGkZS4({6ik1m{yCD zXa4|;^{10K9rX{+stCGw3a8O?2YlABQEx=;@D)y?aQGa%(byl!ZY7>X8 zDO3Ld#Z`~mDIBVXe)5LFA<7mSEEfx)Z3eTRtG^u!4V)`}4Lfw|6(|&?!}c{xyGn9M zQ6%7ZpxDV2`?S}j&CsPSNm93MESEL?vN83|YaiLxS4mrMS4)dUah{KEac-m>rvs3s zg(wk?@!GN(jK+Dwi7|qi2ltEluM2GZ_63r!VG|aWhbR{V{ z3IPGWGmiP7*U~yv#{Mt$E6aFJ>6`p(a8OIBQd%oZs5=pm2`AVL8lkZw`EhMcLFya4 zr{re}jKi*!H~e}^lcW>4{J`R^eHnS_nYNPTE|tVMsv)I4DW^3k^0ctWh_=tVmXqz; ztq=Tn=a`jQWC9?09WDr-pxp$6sDQ>HrT)gLmTKaY=UZ)t9*o zMzv^za%)n6;W`*gWAj<9DZwwKon&w7C3=cSS)wP($La}fg%A<*YA8YyJ7KpFQ}G0E^UDb;imX{gB&>N>Vq|hcu*toazZ2@ljl* zx+z?ATkW$S>n0ty+~l&kPpwWxkbs{ROZ-c3$_h6bQa{96HljQsVd0okMgh+wvq)5U*02a9kg>+#xoX0b$qefu*59d5o2Dl)!R z=9aG$!cjW~9DWiBKwog64i3kB_;nuBZtCl0_SFo=L$m%yU@2M{OJ+H3dONb>Tqq3aXaOvB#sJFXxgqLH=@eT$Qq@=A!a)hjG;M^qPK!I0FO{L~9Qf4I_{#AaQh+H+ zu$&Rx6(t~UHaXnySZBjjT`tmBYLwaehHwV5b+}Rq(~+E=haJLwYw2%_HydA5baI>$ zVvB@CONjh-jx#siuwNlr!6;&4_3u+H0HNdRNBBg0+OKSMov!S zoPVts`dtSfdo@m-wxzoqj*Gd(L^a%GG^R^R%Sw!t7YYi21T4Ad83S~Y-l`o+r+R?e z;kLscwB%e~>3Odqpnz0`1x8xH<1DF1ZgihYyxAQV+x+{4>1E*!r^FN}1f|DR;$KT> zC!gLWzz3ltgIVWNNtGY3t<&v64@S5_i+qk2bdx?i%2-Q@7(2DCvY*5Pxvt+!CsuRj zNwZ=2UkW|<;cdp@c9@w@+?$^I8Cgj1U5zCLwPYNEqUNj|WOg-_b#3Y+vfMCU`PE0) z4Kf@9QcsShX0qI{V zLYr}!bdO?f(1|l8w$c!_X$ez{01|$R&2=e0$wgAOT9I=y%c3}OPfG#9p^wA~)wiJp z{RiHqrgc@h_dZ?R2HJ=V2xVWpTGF(Flb_+xSApE(p!!bnYJ=34%VegO!qhL>PQv7Pw9Ay=o$8w8f`sA2mRVBI!}hnk>Uit0#7Mfkfno+D1dY7 z7fpOTvUE>VUEj1xTUPsWx@{4h8dSGlh?%cGgFehW$;z(w_c!| zVYKuevGlZ=n%3DZig6$#OI%ta#gkjLlrA^ z3v@Dbp*oaGQJzLqfPIa5`La1WM`kD`vZ}tV7p|ph!PCllw&oUKsh|p!Ji1O*?USEx zTKWdlDbAbtTuq`u`N{9bw0`#**6K<;RRj2Pp^#OObfEV&iF`HbbMz(SiD>&1oDM3mzwxuZJ%UN6toO{Ht z+A?Ake$RCdmKP~3$B31jl>@@xJ83EBggcKl!UAqEUWhiHsj81;(30 zUSirXe|mgkfPaLkWR&PNgohcQaFWavKy8SByDC^KZ7W*Bm6QBSP)W+WBm}3qt7C*# zJ6(~sM^x7oUyF}X10+g;6tsSY(R)>)Sw zc^Pf?T2>YnjX}m1q@Bt^&XEbxy;R%@ETY>c-7`XplO4GaLbwLAf)X7{!=oI8sc75y zM;f!e8PneoVy6E9Ze^)UbssG1SzByO0RALvLW@gQMpKmX2mIC9iN;zvWpH--GI@>l zEQpdmc`_wPN{+da-;T6gLNHU|$w>)JgbZN5&?g(@f@(hN5*JvgmZrSVQ6WKUYCuE< z&RWsu3oYtw7g9XMAOf&R!Agp)mmYxlzta;-{SVc#CdG6Tt9AO?xKjd~-xwez)cXYm zf=-nbkSUig&visODZBLi2p701E=aYh_L56^8Fn+SfTZC;Ckq;P2O#Wd<61+)OSE8< zEH7GRWxI{lO)DX@G1SUV`2__@C0Pe4K3>$jb-vTd zHt!ADw%90s_YjjRJ4^`ET^P!P@rfvLc)`@1>c)H3KFqe_hRcj4hnD74*wv>TZj-tX z({F0>Jx)Jrc`8mG^JN1EIRxYU>*s&%P}f?^_^)@B z)Sx*TE}xTIZwFEt&J&Z8QhkZz7_X*&70A8V`fhJfD%T~}BSUfa(-jGj>eY?6Dgzkx zP_K;orYrYL^!wY*mV~#~@)IG%m$Z|R2+2EDeKTv*?)G>;>dkxHSAw13t2+Z;4G>15 zQ-2}b6oUi4nihEt5ka0dz@p>CQq%^V`cd)iOK^-Nnw+=T)N$neg(RBrG-G5`dt%lx zq|xnJ)u(jTrX7gj5|7TMnXzDF!^rwl6;LQf#(B|Axn*8IyZhFZn+KQheqxaAgkY#Q z`cX)37txkIXIeoTXeVJTa+=d-SjY#+Dgt$!VE*qPYBVc>NO6KTl?=j;=6h_E*8lsl9-dVC3(;NU}B=%|tNgn0{3#$)C;~!d=alCe*-ETQ3*1n+jlOmjKjj4qo=QSeg1q$PShZL4F0x31% zlby3f7QyZBUhs(oID zLU1xFh+77cs1kouK{`pvsTWbUJMw4;k%CThSXRcGbxo^0VARAp;UEm)gHTRwk_Bk({mXCb|4>sRg$0PL_+xN987pXWNCPgv< zuD5$n^PkR%YW19DRhBW-VB-#cw0k{6fIRryn#R$mvw0X8P{-DbMbr_EXZ-6jV*Uxo zh)2?-qu+K1#LWQqUsrWd8y-FC8s~Ue1Ov@Oi_!{nuOfw!a%s`A$@ihrg)Zu%?gF_K z6#2lQk2dt$NK%UNkbd-uEEgLpIZ35#bgZad*0ngIr00FJS_k%ZhULeuoVP3~EXso+ zl>r$}f>eT>5&;Msoyi~NR9mV;e~TI4nwL)|%RB197BsZXiM~!%-s}ZJPUAuDTvYjk}ATJ0S>Ehh2Tv+*TWN#86Qw#zK7AI5|#9^{w^WEH>SGz&1)&E8AF 
z8hk`UVND@m<0(*az!W=qnQ*T}~p@rPsp zwf$1q9`$I)f~GvAvO8tH{-eJK-=EQN>@Gn_D{$J(ha|EO1=inMo$SznNK;GZGV zq-zN|@iwl77&U1e^P5^IRw(^8!pq~R?TtuuDJoMkS1G}Plj0>hqpSY_mg2wat=IcJ zU;1Lg^~%9}hQk(X>e-0>&MbzP9yG8?W#z<5))10+Iv3dCKAGQ9bQGIo&0OtmH+YQG z{tA9ou^ky!OKrTDKvPOr;Q?Mns{smGHDvfoi)>4q-A~g!0V+Mt4rBElqGH{7^}tXI zIuPQD7C;5W;1I1F5TK@&MUG4MIh^9w%}e6P!-zd04Od>Ev8E(Yr_`U9I^#0pP9JZ^ z8>J4mpyO&7NlKNe0Zr9*Erq*7@NJXtGFn&4i-bFXT3l{qc#&Kzc%RIqvb2IfiLkH< z(Tdw$l-=W@jkW1Au2La3WlX2DoOLC)+2K1bXa3vtQ{=k z1ARRvRQn&Cywi}@*06>#f;4!_C9~zilt!GK%4sj98ZEuD)zgpIZ#J0n9JL^!W9R%` zO8xsK#D|mS+$lOi0bGYwQj(y$ZmwI@l!?;})VS+yfTWm_)Jk-y?p2{fZ$M2^EsK$L zd2m@vFrivTQietpRii$N-_oyt6K~ebC8}ih zBRa#*-eU*6OzPB?6@{xA&IE-3PrtQ%+QstWH{wA@%)1iohmf`(Nb^QpCsK|F7;Ply zw})Mr<58qdzE0V_{{Uz<=g8^%W$VXGseM5aH#trr1NW}7*NZ9TD}oAS0kFrD9O9_I z55HwxI$x-IfY_4t#SxoC2RTqu+(RxnpK+9s2L*ohru;y*C#bqAP0nzNi<8!;8MLys z97Hcd4=F0ZAvr=pDFCO)S04}_3 zx|TgFDLF^(Z6-#>Tqk<^H-jU=Oje7YAzrMy%VA+4o&3v2Gs(`>&&3uW>D%mUqM;@@ zx-Cz%>OsSaZKj(fpI;RZZGd?vIjHwwy=&=bZgODn1=ue$Icw76?z%sEfI9#ZoO`Q1 z>f9r<@i(RHod?@cPVO0Kxx>nI8wmS}QP%bO7-Uz~3ffm{yZD9%E7as^G9HeJ(1$^=F zR^blk))v+;&M7wOueTCv3xPR~>d+U39{Att1xt;_*`n2-T-z^DM^m$4xJhxxWkYf4 zO(VrPk}?TD;oNGf?^X02 z(Q;d#5@4BeOqW6nSD&Dl2aK;We&EaE5ld!E;fZFu%b8fHUh9enC;%9 zde5nthiQe?-9Tc;xX8JvIVELIGadsB68h9vooXl!0Ca(^cmSMIX7#PHH<_2o*C&j7 zW39|^vW-boib53Qji)#&Q#oxTI|c2Hs*`qRFIZh<^%d&XA|FU}i|$<~q4x-CT*W$xdixZQnbS&~Ev|6Y2eB1v3S4#MM5!rFrw%U-?+k&HnDldwV zw+O+&KfEDW8+d3)8nNMnTX^zsAt(5A<2^{D7Ql0k} z(`@*X{pRN?+~g~5&Bmd|F`FTGSM()r6el{LC<-G=e1T4HsEwh%DRmRQJq_Q6{{Rha z;@;aTDZAVnPBgl-oW}}l%;TgFF~|e}k%Z+&-q!R-+;v7|!zZK`s3>{qGJyX8ew51! zR-ZTb0ci?I1BGeIlz){w84G_~^fOm5i;|-?af$#ANQzSIbd$mqk_g}UicU5gRSqG4 z!<5HNPFK#P_)_Ow<0?mhjD8fYg`l!de-gRutZrdniPaFnRqk!oFj!7om1~w?2qa)> zFCkSrwc){Yd@hNqoqc*!fr|I8`-8FTFyPORz9l$4~n0UWofLp&sGnwpPs>#mud@)sn`a%>jD)o+m6jjiMydE&`R zlR}YPn2@ga_{mrAV7!Sj7poyjTw|}n$mc@9--`Iprhm1^#7lIvS~}~cZC7}0BgoA5 zi^;}tmA2_6dyHg-xGO}n{Ac)0)IqDFpmkSB-qeL4Na>62QH{s`?)J)1&dDTa81rpi zD3jlAcggRL-R6Ter=k zcZUk#!a!M;!CGHd1^`kV30~*;Navqgwb{DYr25X-!)y45XzIsetL2d@d_~!>ER1C# zdJY^p1QHTA^{#clP2FC!1;K7r`4PW-BCABlkPi=g&zIRR!kGXT+luQ;XTnm#zk~u#kaJ%h77`Rf5E46A(|v1nzjUXgQcF3_D5sz<5d6+&Sz#m5dZYay< z*vInr^s3F$&7)V6wfWYxfN+n`imF}hX~8-`$m12bf>82ex4T&xP)Qi)ZR(#oI-6cF zd98Z=Z2>Dfd-_&+dV&EY3Ur7{1yyP6GAX4ed}ALii+)ij`5x&4u zGKqo|0qASG2cY7fDDU}VxKKtxAFUv4V*xwq#X8=@Z>=R`JbKrf=bB(K^m-bRia6VP zk8cG@+r3IfP6B-?tU%WG98-x%$l{p?1A0(&0x6IvK0XF0v5a%IMkVl`4HVP*)11{% zjach|VBk;^5;KegMk_eT=7xA@YHAUYj*d?M07?zscI`*V8b|f0_3m&^$F&M1*~Ca& zV>vWqS3-fwpb7`lnRg*42OmmFAX`kD{j-6)nETU=-}Ec-ezj40LXnY@2fYC0$!)9g zdsMEOOEn1nMZfo)d(rXwlS$cilQVoHZAbH?n7$r-Pu8fNg&O>RrpO+By>rb;vDAZM z;(lharZ41U_mw9XsJO$2erdyI#>Y^ed`+rJi`v}b%zf)3jnqjv^X_V5CsDW*07+Q( zJWys~yHC-{cjzQVI5_x*bfjeYd?a zJ|hgsN|urU^5=d*{*)EP=F9m5JMU69xoSg*TS)UHg#*|hZ%VI$6Lx)d)KeLrmSkJo z@mTOig=tX)5rQ_x-)vXT9}v+ccRAlGsID$SIk((e3qb&RwC7?;0D!EIL&a$w9n}{J z$qSs*h*57spDu?QQ6tdqKEzcot!>dIDnyICHm0nVArcvP3qo*kr6e4jdvEKJUp?xR zSr1PIBz%+A##hdn%aeFZZjs`{@my&PHh=KI!Af!2I}CA0o|A4(lr8q;LW43}P?IT5 z6$Twn=~IVep=2X-woY@tOZ{~Lgvoihw|@Y5o@Ki|LO4NANYSL69&iV9wS5)+p6)ie zeJs_VtzC?N){=#UoL~?Xl_y|DYRr5;=y2;7--vxRGAzjDUq0evpq5=) zbQjg5`^HoO1!&czoTvC<;lBs7MKUTCS%l$gQcO(quSGdnhRhc}D{n)Cvh8WO1oE+Nzqz(Fsaf9k45>z`Jvn6#l}L%g7C< zrWD?$QnfS`vH?nx(aJ(qry)o>c2FP@Sx;T{ z!?zpKE>;QGG&gaG&yNpKC|8bD60IkP8&YwoY_I}Vns0e@Bymdi4}bW`__xB3Wwyq) z^*mV3++j4*RAa3H#Vz)8&Im5Jlq>?31%bY#(jwlVS*`+mEJLGFqP9qG(Q5qg( z9Oo%?v~kad;z-I>q)khHwbLFX8mX}}qD_lr|)T=V4 z+TzEOZP6h)9uwuvw&aJ~@mT->Hp)VO=5a%{Uu+3w-jqj8rtamOSbCD!_1X6x327{} z#DADbKf;B!9S4vy6Tztdn3&z?qHQ+|laB`Nw@2SWQ#eXAy8NJAN#6}R(ndkpI0s>q 
z`o7g>wI&^w$8C9N>lR|fej!ha7EsvwZfYc;PhLA%%CZ5A+@Q^hBBlnDv!Q7Q1v3J+J-;e3+jE74L0b`e9B7XCm278 zC$&!WUrW8$cl(vn@UPjkEmk=%E^d(L%e2xMOK8$g>se@nz5?7=(zR}ZgKv{}wavB| ztXuAOFZOvZqspk!(IVnnZ6IX$g;CL|Kx6lS5;>>IJ}IOuV%xHj@#n5OcdI&r>(P;| zZNZNMBJHu-OPFq3Sq``pqi-^aM#Ex|dsoi=A^RpLTiUKJvK)gh0a0x$C=JSCK?+82 z2+EL?#@;CU)#Lj_^p)^l6VijHM&8n@V+igr3z^AEQEDI%nP&E~&Iy-nZW!nwF%^jTzYu z6`^jn>KR+wRtdsgB09kYlAU!_z2DxE8o|`PM{kxIgK=rqLx_@3%wVkzA$Z*&A)=GQ zd^3P|6&=&<*}CFOjzV=Xo4#L@F{tBCrkx8)T2gRw3fxK)jtU6pQKx#I@p@IED}g1Z z=Ank1@S?V%8T6_|h-?rOpW-1)^yiA_;F$K(q7SGnuE(@Yi6OKpl4|0I`cBV3tU7J!@-J5vTq9aguo0=ZLroV#kC7>Gy!y(r1`=15 z6QsL*Yn+>`X9Q(8G)`r!dns^)FykSl$du}pokU>1oRTmUui+V;>DZkW8L3@QbcEE1 zw&gi)>HA!l6j0<=;s+ZJl@JzEg!qe3a^qxzNfhwV`yLFW7 zYfgjbOj|XfASmfS@S9LcIsWm|Qb`HiR-tq=-w~dXV}o>JJ(?s%E%l*~4Wx&OQr&m( zB_+h<=qgi%v>+`&Y6c$H)l=pOPxT~8jH>LnZa3c%M^w2AD8R_UEVkE*tvqRM9o zpBe2oOEi-z(jT?NWyuVil9!pU6{L9>NZlZ52g(AKfH56HDF_8=0R#^=3u&|bMCo}-g51T2<;ZCU+=dvqC&D>W zl=hLM%ruQWl6TF0hMukKkBe}jbj&M=wZf5YYDC$qj*?t))2l;qf|M50H2_PeO4c&3 z6954vbQev+YJHcY`lI$+ExZ*zU&x{P4?e7FenYyDTy;uNT3aMG+DJYb+bzm6Ku%FU zS-R%cX-;qm&IM^d8onK?sqPl8fwB}r>FY#9Ou9}eNpX}P z?((#R;lkQhu#lXRuymZ7%x~Qta7BP$8b!w2V}P{Bv_ewWkmAltQGvhV%9N$#lB}F$ zR*Fdv7Tcn}j;FgL#CqS=x0!b39e=YfvKHw|iOKNZ*4VB zP*Eg>IKqh8D+6T?Frl)&0m!9Iha%wPCRyc#k>4Z{w|d5Y7JN917I1N^QSf3k9Z*E1 zdCouLx!8Y&Cw?pH9*Ik@!C~n#nr)_(l%ce2pTawBx1qoT70(NEq)XIIaVQ_!XQ>%( z&td7O7R0Fxu%8g2B@ZJB0H~9cC>(K&``67YQ=TC>UjG2i>00OZfW4-7TR$1cK4DRl zp(-c|DMukW0B7s(St0QpR&YQV{cBnLHMvG0N5QH4!BvYPyG$QCeNJ0r+XC;?JO@=V719xTOk0 zm&QFi)Y;D@Iz4?U=8taP;+R3tU@7*8eg3qplM6`$1M;Gpa~q5us5A}vr`iNyjke83 zNESu90sjCh4b=C?){k{K&P53FK^)i7Gu4YD+)3Xw!jLnZedxzi*kYL}89Q@G(MYgW zCw<0g3P#*fD@Q+{(vVTMPTbNA*dmdiT1s0W=ZY^t+PG1;q6Tq>RHbJdpK6wj%7FV& zkkSe7QtaTQ4slG7sA#sV9gQ;LPI=mnKcS{pfJxe&L?bO6jFK~)QSnqb%>n>P-`9ap zvf@I48~tjL><%P}$QzyLXR;1+k9v4{hH00G!5QP$s*sL91bm#})N`csQV)`H1q8O# zPShFCIE@7XkbkW-?mJV9LWbj-X+c=r;EG1YS|a8k?gteIEwTy5)n$;zpiZ7DB3xoK zw-p&tM9~IRq3SEil;TGx=}x-N2_RDoY=O8m6oX^QxhXu8MX`7w{vtm04zvTcAR%cv z6-zeDqzkZF8CM?lY_iVQ;2xEg#apO22N|m!nfab70uD~xR@hO$bj#75)33;VNUOv| zwyoP(AbM73(a|LqQny|uKg-^)uvu}Hr+^P}No7*eYHTOL$ymUsJFK$abSUrdQZ6Yi zM$}dLnyOsqqs9YYC-bI`#+45@48500B>Eb;biJ7ql9V9;IgEPMe!>wTgt#{6aaw0g zL20$6*1}VcO*?cV?381pt~R(0-yL6skQ5T6x$le=t8ROWqV?;-U8Pc^v`f$zt8aIwjD zIU(g3uOVTTXUtQPuYW=^2-wxfr}_usUs7A2LM6^>#e$?^p2OD{{TW?BPLC@7*n?EqjE5aq+!Og5L|4d@iwqYNE_|7 zVUaqA;v$gZBf9K}6ZA}N#@udW4wNWiTAL+$c?C(_9D)XLXF7)E)K@h^YpCx9*|w)r zyI)c=+LJL2qjvmk>RM7pbqu(Jm1D|jR_l{q1D!v9k#39?bJZ0{9u#KWaVa{Gw#iFM z)T9p)wFD?Qm47m5@<{@NXWTlS`;-9KVf!@ThupPJXvjlsD32Jn=5msS6=VW3k`J+R zOIGA{1-@%9NsiQ~T(`ds)hTId+@nu|<}#3=Tv9z zbIhsv9{vC)i~8=4Op5A$LFvwyLJ1xkl(N`q#o&2P_#EytxT{t3rsj1sFEMVBF!efC z%QM?b^s4t6rj+l_wd(RPmR`Zin@DtIrF9u^izBEYe1-ikMsXyv)9QapWU}c?-UGQF zVt|vS!RAh!`GML_Me5yQHduGq_c)iHl4TRwkje_)Ub5?He<|#etb>v=Qa2lhjJUe( zeo(D8j`ch#SBVxm4&~bqQpJO6>Iiz!!VOwECeM-)xcsh@YkTx2WG?F&SNynPE zPMNv2)v}}8rJ+iO4&jN!ytlvYvRv@*T&Dpp#td@*->uI zy>FJwYe+B0aXXJO8B$76v|!{Z0~yVIukg`}H^b{ao>PnYrpo?(_X<4tURE4H-1$pO zLbI?gBaP`ZW8%kz?G?_rVZM;~WoYX-j4k?@?eN0A%Wyea@mdaF?HLE)j zkoO!NWaAj22Nr_cx+`A|?bjAMC*p5Xbk|cdtK#FO$d46%?ZiIv-+8n*z!_6$!5bY| zHQt~2gK@N8Q4-rZw=GaFjzwkFq$Mj2$Y?m?fZM|HR4oG{{ZH9-lw`7;o2>|OwUr@5~r<=!I>Hju}B5K0u*JH zzO@06ubFX50Y@iEC$%%8lc+QyvLVi78$Z zSPwsq&ZpCoPD%2o03ZS*CbbZ>p4n3PASQLBewmAHakvf(t>oMqVUG>hm}Qn0G&pu1 z8Au5~@YX@^R~BKwhQ)^S9z4e4;7oB3Ct6vz{L7A{g%gFLBq_nGB>3nX5m8?WM@@A2 zf`4r|wW;TA<=E>_gjFC&aqzK_NC-}~{{R-CgpI}t*1nsa)*U46rgYntzDHEtU!3#u zn^P{^`hF?UrrT=+!3%Y!nQgJUu-L%P*=|+i{{W)POJeQwFZOiLyeYul-h9U)*i3>% zcA^9&Z%kTF{u6QAN;0fuH2(l5t5;k7rtc=-)3zBh-eOcXQ)FT1JWRHyb(dG14Os=( 
z3Xa6AZ~zvMh!*)bXm-w>vATvEx})Xix|hnt3lt@xpxJlvWzxfnQ=278ZPfrno7evU zXa3H%*`>D5FBb@3B!p!6%8ulR6qKZ3k>VsQ>c(`Se>G9HLM7zo=c*f+^nRPRnJ99= z(Q)EN9G1(gTFx*J_lnY`e+X=)CYfD^={7qkk#hNKbI458!)c%r>rE*DscQUKQrF~4 zyt&+rBnmI1m__zw=ch|;N4i`SHQ92Jr&3yEYdXlrkV(K#a@o$wz=%bbW6)>XS95k$ zsS}w>173Ga5!9dEB?Ek^Wwhk^Q;;+GjZHn$9<*3`FROkbvuSkOuaXwK%0$PI%H1oJ z2_c4*r2t!Pv@0Yhk^G>OT9-|~WVb+y%>Jih;`?l-NJ~U07j&q&@k%mMl0m|p#z4s6 zDznF-TP+qrV7JXBnT*PiyOXip2$ZA{(NOav{oTBTpdarvq~HQG2R`Dv1V-#_B6Foq zh^T=*%?>!0LP=0z8j5#L$s;4WhXknd6G-|`r&zYhj_tbW-L1(=Sep7KyMkU&3PxKQ z*e4&4QqBQ57y`ZVzR_Tn)9aHcyK-2I;l@L4t%%W*4p!nD1R!sp-nvK_*yjm3=fiJ{ zms=-=G{&;DWtB4?)XOPfjUb#Y=@0pwBO`>AgQZ^7N5=mEjonVgzr?qA_jjf&G?gX8 z9s>^ny1y1zft(}~Kjq$xewRk$Zqf6nP(IbF?DSYDGi6P{FG>ngKs;c%q@A|}gzh*S zRlniIg5>L;v?jUZ&bK^SM071Jq1`;IK4mC@u>m_1&1P3y@?_q`xw4&pV+Cn&tbwV) zp8N#)j=*-}weN%yXI?H5Zf-_mL1ivA9mbThl9EyswQiJxZ~)YOcFjZ7N!25-9VNcc zq`nzJyWAUH-u9^{=A7}Ff*B(K0urYZqp}A2QMVa2KWDkR$5cs6w?pBCFc|lKW~0xw)WQQLXqOQ>1-)rdIPdS*lpUg6#PVqV`6RIOHCQ~Ql$8K@c>XK zAm;}aEL~`v_V>{V@e$9P>np4O05n#WK1vsWay>TZ>)xuQYIG#H7me~pKRWaB-kS2B zWhp>$MByN5-?!43LxB$Nq?4cF?N_6cTXtEk2uLXy`qk#n+D?;}tH)4eLuRN`KiVv z4LsG5loaOTqUK{5qt^Jr-l6U?01DoTh)a^v0nZgny||JU?^YYMu$<)8BJ(>|q7M}| z_BB~h>LrXT2D1L4SOsA3S_S$}vafp1@0d_l{Blh+Xh|9M-9);g2Hw>SK_ru$b5=W? zZEI1+NT~A&o)Sf87e_SXWJ-qnn&y=dFe+7WGq@P0%oV4EoK$2&Q3GowTmFPUevrl>h)0BEws6`qY~yR~-Y}y$Iq+Ab>_F z*E{-9%NgdrjP#-g;&~X&Fi?KfV$qMTDZK|fQZlK6sk7;dMvPHq2apXUwtYFI88A&9 zx%*cN4mjqD@W~?pd)JW4iN-4e&cS;qIxEwBm~MVxLQzoZ&S-idh7v$TW}uASh98vOypwhEr-Csnmj){B$6kovO)a zb+(;KjtCpoYB*3k=B91r_^BSdaZH%$(YSO+Q^C@f9?}jrC;e+ljU}lmNolZhIUTDK z==hj<3P2o;1s$h1wE^l z80m1GK#+X}JQ<`*bmg)2AQp(uz>E-aSG!~jvod2fZkn$Z;t*T;ojV?RPx;b%TG*ct z0U;UOX1jHIMYc*7r&vR2BL@q~KWZ;#iK=x)<#tY;jT+?L%3O3ZH6*wkPDbZZ0YHFw z&0`mSpoMar<7i>Tpn?{qqab8{6@+CV@A$UItzNpS^k>*3&TihPVg!U2SaHOy2OE_E zl%IUn8Vsv_!SvdgmV`)cUPnsG0*5+D$DbpyA46VGsBO-Uyh$Zw0JO`t%WOls8HJ@J zswxl?^KE1CC&M6+KbRekGwD{zy+L+Gp~Xw-L3w5?XXTe7D@l~n$R2jzK+hfbJPksf zYR?!rBHQjV7<9N@Ojyka769<{_UuWCyvn^#QLgBjChWiO8PtM0)>HujW#HS?$ z9YaY5t+t+(xgzYjJu>uLf}uxTt6Ygpr6FiY0c?kW!jZVtf;8amgUyGcA-i|zSn#d) z*jJWe$Wsv(^hCD%ZRjd#w;n;%Qlpfi%{}xeCz_Ip(qXhFS4PH+W8BoQoawo83f9zu z4!;bb6samstQ28sP~#esLwC`ZyD0H`j>US8^c#v-CRL{G8RKhkCAmgA(&9@2Ux2!d zl%sT&DYWTP){#^8Xx9_g_GBdtbpv)7NL)LJK#r-f-!3b`aU>0GsYxK^AhhsC!z38_ zjP#UQk*gEOjdpCmYiJ;;ECnURyvh*NfD|}!9s{@s8B2f+ zqzbup2TVwlW1ZAChoVQ4-O4S-Gj2PTq0p^sV{l3wNiGyPPvHTzgsCLg&$Ip@l)o9f z-O4Mdi25vd0`4;PzEad#dBlu>rX&!R4gz!(k&3E=m(ov??G+w|_)XK-JFvFJY1GbC zhW=J-MJ_BlhJ&oEdJh!r!5b;}sDmq`skjK{X1Na-z z;?;xbJw;r;2lT`{z2Z!jxa4tZMxceDt#M^G;PF@_l1`F^Cnxw->5N(aIrJpnsp!6< zm2FO|q_>!Y(*eK03^c}ZEY@!_KIUVruGz+fV~ePjRhrmRu=b&G$Y;C?`46PI<4UQD(f{V0ARF z4l?3Hfc&d`xz85-1-#o~Lv<1Rv@K&%{{V^N;~P^+H1sBsj|+_Ze_vRxvykGXwCI&2 zvciI;uv|-MQC0~+R+p8Xu#!i5)_UIeaocmD2k02#UXcVbyDfH19V|)!xD+6;`_r))amrjF$ zeCeBGNQ~t9483V}nBE#trM%%wq~irCZ)6U|YF;vPLJ7}H!KBhB>>YnD4_QmFT;7ry zxCB69E;w8vO*XfWjilr{mzUxxqH-0dSUN}$-`ZQLA8V)jYTX7&ZdL9Sc?>dMEw>6H zJj<5&In#|^8+x}SKIY{}HE8$|@KLY!H>P^7?h$ffyPV0Aknm70NJ7b8H_6CJBpr$t zu5&{Cc=&w=&ueneHhFP{Om_J#*H_{=`whnb0CR}92nPw>eiC*bKmo=>rwB%68uRfpc#CT^7o#!WfqV z;^pJ1UV@^ez=a+HtQTm z5|uOar3KgBP{v&F)Z$cj0I5L($_A~1&C^mkL!~Sq4_TiP9YWywi>$dP#aga!l_<1Q zjo)zyTjxnu3c}o3N)nKx2)g9_nr&YcReE-O85b_5oqLGemYbv~p~w-`W47j}63j>q ztw1a#O~z7A;I|YwwH<2|CF8Et)Aufd(+mr-8}TWLs8Q0k%@9Nyr|8iQtpyf<4}UYMJ}mph}wY-hLV7WThzB4eO@3_i#k^3%1AR-_xplBU>Pk!==HVJ zy)d^CbDq>e!EL-{)ta^H47j%*%G?X}*O*yN> zLr8HskEvQqgso#E1xr?}fB^<$tDtz@MQ^${bdCE~ET0?{f{VyJSu)2Z;$-b!hN8b)l2zgLEM$)V`87d)eq%RttJE&{Y5(!X~*F!Dn 
zA>hhRwzV&G9^iXYa#If;9!YCN%RQz~&!PgN>p*2*Fw}F%KGb6RvAd8=1Vc-W$Wp>y zX$J#I-%;(J)B>P2p~ob;>w^HODCaxWi;K0zjM7_dG?i{bM{lKCH_<2ZD%RDRFTZX~ z2G^taiv>El?~~0#KW%9;*CNisQtPCnLh>3`HyIwHr(wBnN!HMD)e-zX^IQvn+=f|w zm9nKP0Vn#=tx^}wkv9m}$&E&C5sTuof=aRgKj9yKCMt&>SgGky`SS zpP=L3qpsIkv2F>QaDAmjcm~8kfC@ny9QssM-s>{ZhZ>UVFb-Ql-9S_X%k7foi4se$ zx`d2?qB4*-7@@YbGF7%+=HKGPf90Bt;uKN?j-^A(>ACi$T`m)%MO51qS0;SQ0+r&s z`@gCG0JUeAA#Cc1iD9SZ#f;hW1T2D~{HMMuwIoiky7jq>thS&?1R(+9cRjtUe&MBB zHr|=I&zm7hmda4#8&8O>I0W_y&m$C+N}F?@^|_ZCEAc$Z5y(cLbHP61unAXgyXhAp zT&?pJAv(iu1v*u&8QUP#>;C`}di@A)Q)@&xs&Hybhwn4ok5N=$*N^VmjC&ONq z1N^|$bO>EV@a+t#cP|~4CqjWonS1h2uJrbhotBQT)76$mTr945&mjs4XsD#2c*b+= zM}9%C&a0x`jc}67>C)qGEh|XdN`j6?KDGKiW$FtYxc>k)&X}lH6r`Z;Ro`y)@E^w* z@gaOs>IR?@^6QkR5rKi`+~8LauHH2M27{|9tCBe1Gz;(JB<8wTf%QJLE5}*;*O1d? z5^`fMKdoEs=})zdhS@YPhM=qfcRZSXuEQJsDmf!TaqvvRR*{^Fm1X?nd}5%OBmqYu z>;OqQ_ol}Nwo(zIWw|4a?N5gQ?@;YLLBXQK!%sP@ts?87)jl!K1u)#82?rczgj?Cj z-v?~d0q~QJTl!F)Bu05SlFE^*kN*HQH{%kWhQ@&sPIo() folder. +The examples in this folder are built while building the MXNet library and cpp-package from source . However, they can be built manually as follows From cpp-package/examples directory @@ -18,24 +19,24 @@ The examples that are built to be run on GPU may not work on the non-GPU machine The makefile will also download the necessary data files and store in a data folder. (The download will take couple of minutes, but will be done only once on a fresh installation.) -## Examples +## Examples demonstrating training workflow -This directory contains following examples. In order to run the examples, ensure that the path to the MXNet shared library is added to the OS specific environment variable viz. **LD\_LIBRARY\_PATH** for Linux, Mac and Ubuntu OS and **PATH** for Windows OS. +This directory contains following examples. In order to run the examples, ensure that the path to the MXNet shared library is added to the OS specific environment variable viz. **LD\_LIBRARY\_PATH** for Linux, Mac and Ubuntu OS and **PATH** for Windows OS. For example `export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/home/ubuntu/incubator-mxnet/lib` on ubuntu using gpu. ### [alexnet.cpp]() The example implements the C++ version of AlexNet. The networks trains on MNIST data. The number of epochs can be specified as a command line argument. For example to train with 10 epochs use the following: - ``` - ./alexnet 10 - ``` +``` +build/alexnet 10 +``` ### [googlenet.cpp]() The code implements a GoogLeNet/Inception network using the C++ API. The example uses MNIST data to train the network. By default, the example trains the model for 100 epochs. The number of epochs can also be specified in the command line. For example, to train the model for 10 epochs use the following: ``` -./googlenet 10 +build/googlenet 10 ``` ### [mlp.cpp]() @@ -44,7 +45,7 @@ The code implements a multilayer perceptron from scratch. The example creates it To run the example use the following command: ``` -./mlp +build/mlp ``` ### [mlp_cpu.cpp]() @@ -53,7 +54,7 @@ The code implements a multilayer perceptron to train the MNIST data. The code de To run the example use the following command: ``` -./mlp_cpu +build/mlp_cpu ``` ### [mlp_gpu.cpp]() @@ -61,7 +62,7 @@ To run the example use the following command: The code implements a multilayer perceptron to train the MNIST data. The code demonstrates the use of the "SimpleBind" C++ API and MNISTIter. The example is designed to work on GPU. The example does not require command line arguments. 
### [mlp_csv.cpp]()

The code implements a multilayer perceptron to train the MNIST data. The code demonstrates the use of the "SimpleBind" C++ API and CSVIter. The CSVIter can iterate data that is in CSV format. The example can be run on CPU or GPU. The example usage is as follows:

```
-mlp_csv --train mnist_training_set.csv --test mnist_test_set.csv --epochs 10 --batch_size 100 --hidden_units "128,64,64 [--gpu]"
+build/mlp_csv --train data/mnist_data/mnist_train.csv --test data/mnist_data/mnist_test.csv --epochs 10 --batch_size 100 --hidden_units "128 64 64" --gpu
+```
+* To get the `mnist_train.csv` and `mnist_test.csv` files, please run the following commands:
+```python
+# in incubator-mxnet/cpp-package/example directory
+python mnist_to_csv.py ./data/mnist_data/train-images-idx3-ubyte ./data/mnist_data/train-labels-idx1-ubyte ./data/mnist_data/mnist_train.csv 60000
+python mnist_to_csv.py ./data/mnist_data/t10k-images-idx3-ubyte ./data/mnist_data/t10k-labels-idx1-ubyte ./data/mnist_data/mnist_test.csv 10000
 ```

### [resnet.cpp]()

The code implements a resnet model using the C++ API. The model is used to train MNIST data. The number of epochs for training the model can be specified on the command line. By default, the model is trained for 100 epochs. For example, to train with 10 epochs use the following command:

```
-./resnet 10
+build/resnet 10
```

### [lenet.cpp]()

The code implements a lenet model using the C++ API. It uses MNIST training data in CSV format to train the network. The example does not use the built-in CSVIter to read the data from the CSV file. The number of epochs can be specified on the command line. By default, the model is trained for 100,000 epochs. For example, to train with 10 epochs use the following command:

```
-./lenet 10
+build/lenet 10
```

### [lenet\_with\_mxdataiter.cpp]()

The code implements a lenet model using the C++ API. It uses MNIST training data to train the network. The example uses the built-in MNISTIter to read the data. The number of epochs can be specified on the command line. By default, the model is trained for 100 epochs. For example, to train with 10 epochs use the following command:

```
-./lenet\_with\_mxdataiter 10
+build/lenet_with_mxdataiter 10
```

In addition, there is `run_lenet_with_mxdataiter.sh` that downloads the mnist data and runs the `lenet_with_mxdataiter` example.

-###[inception_bn.cpp]()
+### [inception_bn.cpp]()

The code implements an Inception network using the C++ API with batch normalization. The example uses MNIST data to train the network. The model trains for 100 epochs.
The example can be run by executing the following command: ``` -./inception_bn +build/inception_bn ``` diff --git a/cpp-package/example/alexnet.cpp b/cpp-package/example/alexnet.cpp index 3d6e6855b..7564d4361 100644 --- a/cpp-package/example/alexnet.cpp +++ b/cpp-package/example/alexnet.cpp @@ -234,15 +234,6 @@ int main(int argc, char const *argv[]) { * initializer to call*/ xavier(arg.first, &arg.second); } - /*print out to check the shape of the net*/ - for (const auto &s : Net.ListArguments()) { - LG << s; - const auto &k = args_map[s].GetShape(); - for (const auto &i : k) { - std::cout << i << " "; - } - std::cout << std::endl; - } /*these binary files should be generated using im2rc tools, which can be found * in mxnet/bin*/ @@ -258,7 +249,7 @@ int main(int argc, char const *argv[]) { auto val_iter = MXDataIter("MNISTIter"); setDataIter(&val_iter, "Label", data_files, batch_size); - Optimizer* opt = OptimizerRegistry::Find("ccsgd"); + Optimizer* opt = OptimizerRegistry::Find("sgd"); opt->SetParam("momentum", 0.9) ->SetParam("rescale_grad", 1.0 / batch_size) ->SetParam("clip_gradient", 10) @@ -275,7 +266,6 @@ int main(int argc, char const *argv[]) { train_iter.Reset(); while (train_iter.Next()) { auto batch = train_iter.GetDataBatch(); - LG << train_iter.GetDataBatch().index.size(); /*use copyto to feed new data and label to the executor*/ batch.data.CopyTo(&args_map["data"]); batch.label.CopyTo(&args_map["label"]); diff --git a/cpp-package/example/charRNN.cpp b/cpp-package/example/charRNN.cpp index ad564f665..54b8eea7a 100644 --- a/cpp-package/example/charRNN.cpp +++ b/cpp-package/example/charRNN.cpp @@ -465,7 +465,7 @@ void train(const std::string file, int batch_size, int max_epoch, int start_epoc mx_float learning_rate = 0.0002; mx_float weight_decay = 0.000002; - Optimizer* opt = OptimizerRegistry::Find("ccsgd"); + Optimizer* opt = OptimizerRegistry::Find("sgd"); opt->SetParam("lr", learning_rate) ->SetParam("wd", weight_decay); // opt->SetParam("momentum", 0.9)->SetParam("rescale_grad", 1.0 / batch_size) diff --git a/cpp-package/example/example.mk b/cpp-package/example/example.mk index 4914b31ba..ef99d7426 100644 --- a/cpp-package/example/example.mk +++ b/cpp-package/example/example.mk @@ -18,7 +18,7 @@ CPPEX_SRC = $(wildcard cpp-package/example/*.cpp) CPPEX_EXE = $(patsubst cpp-package/example/%.cpp, build/cpp-package/example/%, $(CPPEX_SRC)) -CPPEX_CFLAGS += -Icpp-package/include -Ibuild/cpp-package/include +CPPEX_CFLAGS += -Icpp-package/include CPPEX_EXTRA_LDFLAGS := -L$(ROOTDIR)/lib -lmxnet EXTRA_PACKAGES += cpp-package-example-all @@ -30,8 +30,8 @@ cpp-package-example-all: cpp-package-all $(CPPEX_EXE) build/cpp-package/example/% : cpp-package/example/%.cpp lib/libmxnet.so $(CPP_PACKAGE_OP_H_FILE) @mkdir -p $(@D) - $(CXX) -std=c++0x $(CFLAGS) $(CPPEX_CFLAGS) -MM -MT cpp-package/example/$* $< >build/cpp-package/example//$*.d - $(CXX) -std=c++0x $(CFLAGS) $(CPPEX_CFLAGS) -o $@ $(filter %.cpp %.a, $^) $(LDFLAGS) $(CPPEX_EXTRA_LDFLAGS) + $(CXX) -std=c++11 $(CFLAGS) $(CPPEX_CFLAGS) -MM -MT cpp-package/example/$* $< >build/cpp-package/example//$*.d + $(CXX) -std=c++11 $(CFLAGS) $(CPPEX_CFLAGS) -o $@ $(filter %.cpp %.a, $^) $(LDFLAGS) $(CPPEX_EXTRA_LDFLAGS) cpp-package-example-clean: rm -rf build/cpp-package/example/* diff --git a/cpp-package/example/feature_extract/Makefile b/cpp-package/example/feature_extract/Makefile index f598183bd..193eaa7e8 100644 --- a/cpp-package/example/feature_extract/Makefile +++ b/cpp-package/example/feature_extract/Makefile @@ -27,12 +27,12 @@ 
LDFLAGS=$(COMMFLAGS) -L ../../../lib -lmxnet $(BLAS) $(CUDA) -lgomp -pthread all: feature_extract prepare_data_with_opencv feature_extract: ./feature_extract.cpp - $(CXX) -c -std=c++0x $(CFLAGS) $^ + $(CXX) -c -std=c++11 $(CFLAGS) $^ $(CXX) $(basename $@).o -o $@ $(LDFLAGS) -rm -f $(basename $@).o prepare_data_with_opencv: ./prepare_data_with_opencv.cpp - $(CXX) -c -std=c++0x $(OPENCV_CFLAGS) $^ + $(CXX) -c -std=c++11 $(OPENCV_CFLAGS) $^ $(CXX) $(basename $@).o -o $@ $(OPENCV_LDFLAGS) -rm -f $(basename $@).o diff --git a/cpp-package/example/googlenet.cpp b/cpp-package/example/googlenet.cpp index ad9212c75..4bd3be27f 100644 --- a/cpp-package/example/googlenet.cpp +++ b/cpp-package/example/googlenet.cpp @@ -144,7 +144,7 @@ int main(int argc, char const *argv[]) { auto val_iter = MXDataIter("MNISTIter"); setDataIter(&val_iter, "Label", data_files, batch_size); - Optimizer* opt = OptimizerRegistry::Find("ccsgd"); + Optimizer* opt = OptimizerRegistry::Find("sgd"); opt->SetParam("momentum", 0.9) ->SetParam("rescale_grad", 1.0 / batch_size) ->SetParam("clip_gradient", 10) diff --git a/cpp-package/example/inception_bn.cpp b/cpp-package/example/inception_bn.cpp index c499df77e..5b444e467 100644 --- a/cpp-package/example/inception_bn.cpp +++ b/cpp-package/example/inception_bn.cpp @@ -91,7 +91,8 @@ Symbol InceptionFactoryB(Symbol data, int num_3x3red, int num_3x3, Shape(1, 1), name + "_double_3x3_1"); Symbol pooling = Pooling("max_pool_" + name + "_pool", data, Shape(3, 3), PoolingPoolType::kMax, - false, false, PoolingPoolingConvention::kValid, Shape(2, 2)); + false, false, PoolingPoolingConvention::kValid, + Shape(2, 2), Shape(1, 1)); std::vector lst; lst.push_back(c3x3); lst.push_back(cd3x3); @@ -143,8 +144,8 @@ Symbol InceptionSymbol(int num_classes) { int main(int argc, char const *argv[]) { int batch_size = 40; - int max_epoch = 100; - float learning_rate = 1e-4; + int max_epoch = argc > 1 ? 
strtol(argv[1], NULL, 10) : 100; + float learning_rate = 1e-2; float weight_decay = 1e-4; auto ctx = Context::gpu(); @@ -172,7 +173,13 @@ int main(int argc, char const *argv[]) { auto val_iter = MXDataIter("MNISTIter"); setDataIter(&val_iter, "Label", data_files, batch_size); - Optimizer* opt = OptimizerRegistry::Find("ccsgd"); + // initialize parameters + Xavier xavier = Xavier(Xavier::gaussian, Xavier::in, 2); + for (auto &arg : args_map) { + xavier(arg.first, &arg.second); + } + + Optimizer* opt = OptimizerRegistry::Find("sgd"); opt->SetParam("momentum", 0.9) ->SetParam("rescale_grad", 1.0 / batch_size) ->SetParam("clip_gradient", 10) @@ -182,9 +189,12 @@ int main(int argc, char const *argv[]) { auto *exec = inception_bn_net.SimpleBind(ctx, args_map); auto arg_names = inception_bn_net.ListArguments(); + // Create metrics + Accuracy train_acc, val_acc; for (int iter = 0; iter < max_epoch; ++iter) { LG << "Epoch: " << iter; train_iter.Reset(); + train_acc.Reset(); while (train_iter.Next()) { auto data_batch = train_iter.GetDataBatch(); data_batch.data.CopyTo(&args_map["data"]); @@ -200,10 +210,11 @@ int main(int argc, char const *argv[]) { } NDArray::WaitAll(); + train_acc.Update(data_batch.label, exec->outputs[0]); } - Accuracy acu; val_iter.Reset(); + val_acc.Reset(); while (val_iter.Next()) { auto data_batch = val_iter.GetDataBatch(); data_batch.data.CopyTo(&args_map["data"]); @@ -211,9 +222,10 @@ int main(int argc, char const *argv[]) { NDArray::WaitAll(); exec->Forward(false); NDArray::WaitAll(); - acu.Update(data_batch.label, exec->outputs[0]); + val_acc.Update(data_batch.label, exec->outputs[0]); } - LG << "Accuracy: " << acu.Get(); + LG << "Train Accuracy: " << train_acc.Get(); + LG << "Validation Accuracy: " << val_acc.Get(); } delete exec; MXNotifyShutdown(); diff --git a/cpp-package/example/inference/Makefile b/cpp-package/example/inference/Makefile new file mode 100644 index 000000000..5efe6cfb6 --- /dev/null +++ b/cpp-package/example/inference/Makefile @@ -0,0 +1,40 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. 
+ + +CPPEX_SRC = $(wildcard *.cpp) +CPPEX_EXE = $(patsubst %.cpp, %, $(CPPEX_SRC)) +OPENCV_CFLAGS=`pkg-config --cflags opencv` +OPENCV_LDFLAGS=`pkg-config --libs opencv` + +CXX=g++ + + +CFLAGS=$(COMMFLAGS) -I../../../3rdparty/tvm/nnvm/include -I../../../3rdparty/dmlc-core/include -I ../../include -I ../../../include -Wall -O3 -msse3 -funroll-loops -Wno-unused-parameter -Wno-unknown-pragmas +CPPEX_EXTRA_LDFLAGS := -L../../../lib -lmxnet $(OPENCV_LDFLAGS) + +all: $(CPPEX_EXE) + +debug: CPPEX_CFLAGS += -DDEBUG -g +debug: all + + +$(CPPEX_EXE):% : %.cpp + $(CXX) -std=c++0x $(CFLAGS) $(CPPEX_CFLAGS) -o $@ $(filter %.cpp %.a, $^) $(CPPEX_EXTRA_LDFLAGS) + +clean: + rm -f $(CPPEX_EXE) diff --git a/cpp-package/example/inference/README.md b/cpp-package/example/inference/README.md new file mode 100644 index 000000000..79831b40b --- /dev/null +++ b/cpp-package/example/inference/README.md @@ -0,0 +1,41 @@ +# MXNet C++ Package Inference Workflow Examples + +## Building C++ Inference examples + +The examples in this folder demonstrate the **inference** workflow. +To build examples use following commands: + +- Release: **make all** +- Debug: **make debug all** + + +## Examples demonstrating inference workflow + +This directory contains following examples. In order to run the examples, ensure that the path to the MXNet shared library is added to the OS specific environment variable viz. **LD\_LIBRARY\_PATH** for Linux, Mac and Ubuntu OS and **PATH** for Windows OS. + +### [inception_inference.cpp]() + +This example demonstrates image classification workflow with pre-trained models using MXNet C++ API. The command line parameters the example can accept are as shown below: + +``` +./inception_inference --help +Usage: +inception_inference --symbol + --params + --image ) downloads the pre-trained **Inception** model and a test image. The users can invoke this script as follows: + +``` +./unit_test_inception_inference.sh +``` diff --git a/cpp-package/example/inference/inception_inference.cpp b/cpp-package/example/inference/inception_inference.cpp new file mode 100644 index 000000000..7005e745b --- /dev/null +++ b/cpp-package/example/inference/inception_inference.cpp @@ -0,0 +1,446 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/* + * This example demonstrates image classification workflow with pre-trained models using MXNet C++ API. + * The example performs following tasks. + * 1. Load the pre-trained model. + * 2. Load the parameters of pre-trained model. + * 3. Load the image to be classified in to NDArray. + * 4. Normalize the image using the mean of images that were used for training. + * 5. Run the forward pass and predict the input image. 
+ */ + +#include +#include +#include +#include +#include +#include +#include "mxnet-cpp/MxNetCpp.h" +#include + +using namespace mxnet::cpp; + +static mx_float DEFAULT_MEAN_R = 123.675; +static mx_float DEFAULT_MEAN_G = 116.28; +static mx_float DEFAULT_MEAN_B = 103.53; +/* + * class Predictor + * + * This class encapsulates the functionality to load the model, process input image and run the forward pass. + */ + +class Predictor { + public: + Predictor() {} + Predictor(const std::string& model_json_file, + const std::string& model_params_file, + const Shape& input_shape, + bool gpu_context_type = false, + const std::string& synset_file = "", + const std::string& mean_image_file = ""); + void PredictImage(const std::string& image_file); + ~Predictor(); + + private: + void LoadModel(const std::string& model_json_file); + void LoadParameters(const std::string& model_parameters_file); + void LoadSynset(const std::string& synset_file); + NDArray LoadInputImage(const std::string& image_file); + void LoadMeanImageData(); + void LoadDefaultMeanImageData(); + void NormalizeInput(const std::string& mean_image_file); + inline bool FileExists(const std::string& name) { + struct stat buffer; + return (stat(name.c_str(), &buffer) == 0); + } + NDArray mean_img; + std::map args_map; + std::map aux_map; + std::vector output_labels; + Symbol net; + Executor *executor; + Shape input_shape; + NDArray mean_image_data; + NDArray std_dev_image_data; + Context global_ctx = Context::cpu(); + std::string mean_image_file; +}; + + +/* + * The constructor takes following parameters as input: + * 1. model_json_file: The model in json formatted file. + * 2. model_params_file: File containing model parameters + * 3. synset_file: File containing the list of image labels + * 4. input_shape: Shape of input data to the model. Since this class will be running one inference at a time, + * the input shape is required to be in format Shape(1, number_of_channels, height, width) + * The input image will be resized to (height x width) size before running the inference. + * The constructor will: + * 1. Load the model and parameter files. + * 2. Load the synset file. + * 3. Invoke the SimpleBind to bind the input argument to the model and create an executor. + * + * The SimpleBind is expected to be invoked only once. + */ +Predictor::Predictor(const std::string& model_json_file, + const std::string& model_params_file, + const Shape& input_shape, + bool gpu_context_type, + const std::string& synset_file, + const std::string& mean_image_file): + input_shape(input_shape), + mean_image_file(mean_image_file) { + if (gpu_context_type) { + global_ctx = Context::gpu(); + } + // Load the model + LoadModel(model_json_file); + + // Load the model parameters. + LoadParameters(model_params_file); + + /* + * The data will be used to output the exact label that matches highest output of the model. + */ + LoadSynset(synset_file); + + /* + * Load the mean image data if specified. + */ + if (!mean_image_file.empty()) { + LoadMeanImageData(); + } else { + LG << "Mean image file for normalizing the input is not provide." + << " We will use the default mean values for R,G and B channels."; + LoadDefaultMeanImageData(); + } + + // Create an executor after binding the model to input parameters. + args_map["data"] = NDArray(input_shape, global_ctx, false); + executor = net.SimpleBind(global_ctx, args_map, std::map(), + std::map(), aux_map); +} + +/* + * The following function loads the model from json file. 
+ */ +void Predictor::LoadModel(const std::string& model_json_file) { + if (!FileExists(model_json_file)) { + LG << "Model file " << model_json_file << " does not exist"; + throw std::runtime_error("Model file does not exist"); + } + LG << "Loading the model from " << model_json_file << std::endl; + net = Symbol::Load(model_json_file); +} + + +/* + * The following function loads the model parameters. + */ +void Predictor::LoadParameters(const std::string& model_parameters_file) { + if (!FileExists(model_parameters_file)) { + LG << "Parameter file " << model_parameters_file << " does not exist"; + throw std::runtime_error("Model parameters does not exist"); + } + LG << "Loading the model parameters from " << model_parameters_file << std::endl; + std::map parameters; + NDArray::Load(model_parameters_file, 0, ¶meters); + for (const auto &k : parameters) { + if (k.first.substr(0, 4) == "aux:") { + auto name = k.first.substr(4, k.first.size() - 4); + aux_map[name] = k.second.Copy(global_ctx); + } + if (k.first.substr(0, 4) == "arg:") { + auto name = k.first.substr(4, k.first.size() - 4); + args_map[name] = k.second.Copy(global_ctx); + } + } + /*WaitAll is need when we copy data between GPU and the main memory*/ + NDArray::WaitAll(); +} + + +/* + * The following function loads the synset file. + * This information will be used later to report the label of input image. + */ +void Predictor::LoadSynset(const std::string& synset_file) { + if (!FileExists(synset_file)) { + LG << "Synset file " << synset_file << " does not exist"; + throw std::runtime_error("Synset file does not exist"); + } + LG << "Loading the synset file."; + std::ifstream fi(synset_file.c_str()); + if (!fi.is_open()) { + std::cerr << "Error opening synset file " << synset_file << std::endl; + throw std::runtime_error("Error in opening the synset file."); + } + std::string synset, lemma; + while (fi >> synset) { + getline(fi, lemma); + output_labels.push_back(lemma); + } + fi.close(); +} + + +/* + * The following function loads the mean data from mean image file. + * This data will be used for normalizing the image before running the forward + * pass. + * The output data has the same shape as that of the input image data. + */ +void Predictor::LoadMeanImageData() { + LG << "Load the mean image data that will be used to normalize " + << "the image before running forward pass."; + mean_image_data = NDArray(input_shape, global_ctx, false); + mean_image_data.SyncCopyFromCPU( + NDArray::LoadToMap(mean_image_file)["mean_img"].GetData(), + input_shape.Size()); + NDArray::WaitAll(); +} + + +/* + * The following function loads the default mean values for + * R, G and B channels into NDArray that has the same shape as that of + * input image. 
+ */ +void Predictor::LoadDefaultMeanImageData() { + LG << "Loading the default mean image data"; + std::vector array; + /*resize pictures to (224, 224) according to the pretrained model*/ + int height = input_shape[2]; + int width = input_shape[3]; + int channels = input_shape[1]; + std::vector default_means; + default_means.push_back(DEFAULT_MEAN_R); + default_means.push_back(DEFAULT_MEAN_G); + default_means.push_back(DEFAULT_MEAN_B); + for (int c = 0; c < channels; ++c) { + for (int i = 0; i < height; ++i) { + for (int j = 0; j < width; ++j) { + array.push_back(default_means[c]); + } + } + } + mean_image_data = NDArray(input_shape, global_ctx, false); + mean_image_data.SyncCopyFromCPU(array.data(), input_shape.Size()); + NDArray::WaitAll(); +} + + +/* + * The following function loads the input image into NDArray. + */ +NDArray Predictor::LoadInputImage(const std::string& image_file) { + if (!FileExists(image_file)) { + LG << "Image file " << image_file << " does not exist"; + throw std::runtime_error("Image file does not exist"); + } + LG << "Loading the image " << image_file << std::endl; + std::vector array; + cv::Mat mat = cv::imread(image_file); + /*resize pictures to (224, 224) according to the pretrained model*/ + int height = input_shape[2]; + int width = input_shape[3]; + int channels = input_shape[1]; + cv::resize(mat, mat, cv::Size(height, width)); + for (int c = 0; c < channels; ++c) { + for (int i = 0; i < height; ++i) { + for (int j = 0; j < width; ++j) { + array.push_back(static_cast(mat.data[(i * height + j) * 3 + c])); + } + } + } + NDArray image_data = NDArray(input_shape, global_ctx, false); + image_data.SyncCopyFromCPU(array.data(), input_shape.Size()); + NDArray::WaitAll(); + return image_data; +} + + +/* + * The following function runs the forward pass on the model. + * The executor is created in the constructor. + * + */ +void Predictor::PredictImage(const std::string& image_file) { + // Load the input image + NDArray image_data = LoadInputImage(image_file); + + // Normalize the image + image_data.Slice(0, 1) -= mean_image_data; + + LG << "Running the forward pass on model to predict the image"; + /* + * The executor->arg_arrays represent the arguments to the model. + * + * Copying the image_data that contains the NDArray of input image + * to the arg map of the executor. The input is stored with the key "data" in the map. + * + */ + image_data.CopyTo(&(executor->arg_dict()["data"])); + NDArray::WaitAll(); + + // Run the forward pass. + executor->Forward(false); + + // The output is available in executor->outputs. + auto array = executor->outputs[0].Copy(global_ctx); + NDArray::WaitAll(); + + /* + * Find out the maximum accuracy and the index associated with that accuracy. + * This is done by using the argmax operator on NDArray. + */ + auto predicted = array.ArgmaxChannel(); + NDArray::WaitAll(); + + int best_idx = predicted.At(0, 0); + float best_accuracy = array.At(0, best_idx); + + if (output_labels.empty()) { + LG << "The model predicts the highest accuracy of " << best_accuracy << " at index " + << best_idx; + } else { + LG << "The model predicts the input image to be a [" << output_labels[best_idx] + << " ] with Accuracy = " << best_accuracy << std::endl; + } +} + + +Predictor::~Predictor() { + if (executor) { + delete executor; + } + MXNotifyShutdown(); +} + + +/* + * Convert the input string of number of hidden units into the vector of integers. 
+ */ +std::vector getShapeDimensions(const std::string& hidden_units_string) { + std::vector dimensions; + char *p_next; + int num_unit = strtol(hidden_units_string.c_str(), &p_next, 10); + dimensions.push_back(num_unit); + while (*p_next) { + num_unit = strtol(p_next, &p_next, 10); + dimensions.push_back(num_unit); + } + return dimensions; +} + +void printUsage() { + std::cout << "Usage:" << std::endl; + std::cout << "inception_inference --symbol " << std::endl + << "--params " << std::endl + << "--image " << std::endl + << "--synset " << std::endl + << "[--input_shape ] " << std::endl + << "[--mean ] " + << std::endl + << "[--gpu ]" + << std::endl; +} + +int main(int argc, char** argv) { + std::string model_file_json; + std::string model_file_params; + std::string synset_file = ""; + std::string mean_image = ""; + std::string input_image = ""; + bool gpu_context_type = false; + + std::string input_shape = "3 224 224"; + int index = 1; + while (index < argc) { + if (strcmp("--symbol", argv[index]) == 0) { + index++; + model_file_json = (index < argc ? argv[index]:""); + } else if (strcmp("--params", argv[index]) == 0) { + index++; + model_file_params = (index < argc ? argv[index]:""); + } else if (strcmp("--synset", argv[index]) == 0) { + index++; + synset_file = (index < argc ? argv[index]:""); + } else if (strcmp("--mean", argv[index]) == 0) { + index++; + mean_image = (index < argc ? argv[index]:""); + } else if (strcmp("--image", argv[index]) == 0) { + index++; + input_image = (index < argc ? argv[index]:""); + } else if (strcmp("--input_shape", argv[index]) == 0) { + index++; + input_shape = (index < argc ? argv[index]:input_shape); + } else if (strcmp("--gpu", argv[index]) == 0) { + gpu_context_type = true; + } else if (strcmp("--help", argv[index]) == 0) { + printUsage(); + return 0; + } + index++; + } + + if (model_file_json.empty() || model_file_params.empty() || synset_file.empty()) { + LG << "ERROR: Model details such as symbol, param and/or synset files are not specified"; + printUsage(); + return 1; + } + + if (input_image.empty()) { + LG << "ERROR: Path to the input image is not specified."; + printUsage(); + return 1; + } + + std::vector input_dimensions = getShapeDimensions(input_shape); + + /* + * Since we are running inference for 1 image, add 1 to the input_dimensions so that + * the shape of input data for the model will be + * {no. of images, channels, height, width} + */ + input_dimensions.insert(input_dimensions.begin(), 1); + + Shape input_data_shape(input_dimensions); + + try { + // Initialize the predictor object + Predictor predict(model_file_json, model_file_params, input_data_shape, gpu_context_type, + synset_file, mean_image); + + // Run the forward pass to predict the image. + predict.PredictImage(input_image); + } catch (std::runtime_error &error) { + LG << "Execution failed with ERROR: " << error.what(); + } catch (...) { + /* + * If underlying MXNet code has thrown an exception the error message is + * accessible through MXGetLastError() function. + */ + LG << "Execution failed with following MXNet error"; + LG << MXGetLastError(); + } + return 0; +} diff --git a/cpp-package/example/inference/unit_test_inception_inference.sh b/cpp-package/example/inference/unit_test_inception_inference.sh new file mode 100755 index 000000000..4f40b496b --- /dev/null +++ b/cpp-package/example/inference/unit_test_inception_inference.sh @@ -0,0 +1,43 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. 
See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +# Downloading the data and model +mkdir -p model +wget -nc http://data.dmlc.ml/mxnet/models/imagenet/inception-bn.tar.gz +wget -nc -O model/dog.jpg https://github.com/dmlc/web-data/blob/master/mxnet/doc/tutorials/python/predict_image/dog.jpg?raw=true +wget -nc -O model/mean_224.nd https://github.com/dmlc/web-data/raw/master/mxnet/example/feature_extract/mean_224.nd +tar -xvzf inception-bn.tar.gz -C model + +# Building +make all + + +# Running the example with dog image. +if [ "$(uname)" == "Darwin" ]; then + DYLD_LIBRARY_PATH=${DYLD_LIBRARY_PATH}:../../../lib ./inception_inference --symbol "./model/Inception-BN-symbol.json" --params "./model/Inception-BN-0126.params" --synset "./model/synset.txt" --mean "./model/mean_224.nd" --image "./model/dog.jpg" 2&> inception_inference.log +else + LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:../../../lib ./inception_inference --symbol "./model/Inception-BN-symbol.json" --params "./model/Inception-BN-0126.params" --synset "./model/synset.txt" --mean "./model/mean_224.nd" --image "./model/dog.jpg" 2&> inception_inference.log +fi +result=`grep -c "pug-dog" inception_inference.log` +if [ $result == 1 ]; +then + echo "PASS: inception_inference correctly identified the image." + exit 0 +else + echo "FAIL: inception_inference FAILED to identify the image." 
+ exit 1 +fi diff --git a/cpp-package/example/lenet_with_mxdataiter.cpp b/cpp-package/example/lenet_with_mxdataiter.cpp index 9869356be..4df6fbee9 100644 --- a/cpp-package/example/lenet_with_mxdataiter.cpp +++ b/cpp-package/example/lenet_with_mxdataiter.cpp @@ -102,7 +102,7 @@ int main(int argc, char const *argv[]) { auto val_iter = MXDataIter("MNISTIter"); setDataIter(&val_iter, "Label", data_files, batch_size); - Optimizer* opt = OptimizerRegistry::Find("ccsgd"); + Optimizer* opt = OptimizerRegistry::Find("sgd"); opt->SetParam("momentum", 0.9) ->SetParam("rescale_grad", 1.0) ->SetParam("clip_gradient", 10) diff --git a/cpp-package/example/mlp.cpp b/cpp-package/example/mlp.cpp index 595d75c67..cc16f53cf 100644 --- a/cpp-package/example/mlp.cpp +++ b/cpp-package/example/mlp.cpp @@ -144,13 +144,13 @@ void MLP() { grad_req_type, aux_states); std::cout << "Training" << std::endl; - int max_iters = 20000; + int max_epoch = 15000; mx_float learning_rate = 0.0001; - for (int iter = 0; iter < max_iters; ++iter) { + for (int epoch_num = 0; epoch_num < max_epoch; ++epoch_num) { exe->Forward(true); - - if (iter % 100 == 0) { - std::cout << "epoch " << iter << std::endl; + // print accuracy every 100 epoch + if (epoch_num % 100 == 0) { + std::cout << "epoch " << epoch_num << std::endl; std::vector& out = exe->outputs; float* cptr = new float[128 * 10]; out[0].SyncCopyToCPU(cptr, 128 * 10); diff --git a/cpp-package/example/mlp_csv.cpp b/cpp-package/example/mlp_csv.cpp index 8aec4b76d..43a14c84e 100644 --- a/cpp-package/example/mlp_csv.cpp +++ b/cpp-package/example/mlp_csv.cpp @@ -72,7 +72,7 @@ std::vector getLayers(const std::string& hidden_units_string) { void printUsage() { std::cout << "Usage:" << std::endl; std::cout << "mlp_csv --train mnist_training_set.csv --test mnist_test_set.csv --epochs 10 " - << "--batch_size 100 --hidden_units \"128 64 64\" [--gpu]" << std::endl; + << "--batch_size 100 --hidden_units \"128 64 64\" --gpu" << std::endl; std::cout << "The example uses mnist data in CSV format. The MNIST data in CSV format assumes " << "the column 0 to be label and the rest 784 column to be data." << std::endl; std::cout << "By default, the example uses 'cpu' context. 
If '--gpu' is specified, " diff --git a/cpp-package/example/resnet.cpp b/cpp-package/example/resnet.cpp index bc86c0b66..0bb77a1a1 100644 --- a/cpp-package/example/resnet.cpp +++ b/cpp-package/example/resnet.cpp @@ -184,7 +184,13 @@ int main(int argc, char const *argv[]) { auto val_iter = MXDataIter("MNISTIter"); setDataIter(&val_iter, "Label", data_files, batch_size); - Optimizer* opt = OptimizerRegistry::Find("ccsgd"); + // initialize parameters + Xavier xavier = Xavier(Xavier::gaussian, Xavier::in, 2); + for (auto &arg : args_map) { + xavier(arg.first, &arg.second); + } + + Optimizer* opt = OptimizerRegistry::Find("sgd"); opt->SetParam("lr", learning_rate) ->SetParam("wd", weight_decay) ->SetParam("momentum", 0.9) @@ -194,9 +200,12 @@ int main(int argc, char const *argv[]) { auto *exec = resnet.SimpleBind(ctx, args_map); auto arg_names = resnet.ListArguments(); + // Create metrics + Accuracy train_acc, val_acc; for (int iter = 0; iter < max_epoch; ++iter) { LG << "Epoch: " << iter; train_iter.Reset(); + train_acc.Reset(); while (train_iter.Next()) { auto data_batch = train_iter.GetDataBatch(); data_batch.data.CopyTo(&args_map["data"]); @@ -211,10 +220,11 @@ int main(int argc, char const *argv[]) { opt->Update(i, exec->arg_arrays[i], exec->grad_arrays[i]); } NDArray::WaitAll(); + train_acc.Update(data_batch.label, exec->outputs[0]); } - Accuracy acu; val_iter.Reset(); + val_acc.Reset(); while (val_iter.Next()) { auto data_batch = val_iter.GetDataBatch(); data_batch.data.CopyTo(&args_map["data"]); @@ -222,9 +232,10 @@ int main(int argc, char const *argv[]) { NDArray::WaitAll(); exec->Forward(false); NDArray::WaitAll(); - acu.Update(data_batch.label, exec->outputs[0]); + val_acc.Update(data_batch.label, exec->outputs[0]); } - LG << "Accuracy: " << acu.Get(); + LG << "Train Accuracy: " << train_acc.Get(); + LG << "Validation Accuracy: " << val_acc.Get(); } delete exec; MXNotifyShutdown(); diff --git a/cpp-package/example/test_optimizer.cpp b/cpp-package/example/test_optimizer.cpp index ee1201228..42547ea6c 100644 --- a/cpp-package/example/test_optimizer.cpp +++ b/cpp-package/example/test_optimizer.cpp @@ -15,6 +15,10 @@ * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. + * + * The file is used for testing if the optimizer could be created more than 1. + * By running: build/test_optimizer + * It return 0(means no error) if it succeed otherwise 1(error). */ #include "mxnet-cpp/MxNetCpp.h" diff --git a/cpp-package/example/test_score.cpp b/cpp-package/example/test_score.cpp index f92560fe8..9252701e2 100644 --- a/cpp-package/example/test_score.cpp +++ b/cpp-package/example/test_score.cpp @@ -19,6 +19,11 @@ /*! * Xin Li yakumolx@gmail.com + * The file is used for testing if the score(accurary) we get + * is better than the threshold we set using mlp model. 
+ * By running: build/test_score 0.75 + * 0.75 here means the threshold score + * It return 0 if we can achieve higher score than threshold, otherwise 1 */ #include #include "utils.h" diff --git a/cpp-package/example/utils.h b/cpp-package/example/utils.h index 98b647268..2ed5c4c11 100644 --- a/cpp-package/example/utils.h +++ b/cpp-package/example/utils.h @@ -42,7 +42,7 @@ bool check_datafiles(const std::vector &data_files) { return true; } -bool setDataIter(MXDataIter *iter , std::string useType, +bool setDataIter(MXDataIter *iter , const std::string &useType, const std::vector &data_files, int batch_size) { if (!check_datafiles(data_files)) return false; diff --git a/cpp-package/include/mxnet-cpp/operator.h b/cpp-package/include/mxnet-cpp/operator.h index 4d4bedac8..9f289f0e2 100644 --- a/cpp-package/include/mxnet-cpp/operator.h +++ b/cpp-package/include/mxnet-cpp/operator.h @@ -86,7 +86,7 @@ class Operator { * \param symbol the input symbol * \return reference of self */ - Operator &SetInput(const std::string &name, Symbol symbol); + Operator &SetInput(const std::string &name, const Symbol &symbol); /*! * \brief add an input symbol * \param symbol the input symbol @@ -133,7 +133,7 @@ class Operator { * \param ndarray the input ndarray * \return reference of self */ - Operator &SetInput(const std::string &name, NDArray ndarray); + Operator &SetInput(const std::string &name, const NDArray &ndarray); /*! * \brief add an input ndarray * \param ndarray the input ndarray diff --git a/cpp-package/include/mxnet-cpp/operator.hpp b/cpp-package/include/mxnet-cpp/operator.hpp index f4ce43d58..edc396f14 100644 --- a/cpp-package/include/mxnet-cpp/operator.hpp +++ b/cpp-package/include/mxnet-cpp/operator.hpp @@ -158,7 +158,7 @@ inline void Operator::Invoke(NDArray &output) { Invoke(outputs); } -inline Operator &Operator::SetInput(const std::string &name, Symbol symbol) { +inline Operator &Operator::SetInput(const std::string &name, const Symbol &symbol) { if (symbol.GetHandle()) { input_keys_.push_back(name.c_str()); input_symbols_.push_back(symbol.GetHandle()); @@ -166,7 +166,7 @@ inline Operator &Operator::SetInput(const std::string &name, Symbol symbol) { return *this; } -inline Operator &Operator::SetInput(const std::string &name, NDArray ndarray) { +inline Operator &Operator::SetInput(const std::string &name, const NDArray &ndarray) { input_keys_.push_back(name.c_str()); input_ndarrays_.push_back(ndarray.GetHandle()); return *this; diff --git a/cpp-package/include/mxnet-cpp/symbol.hpp b/cpp-package/include/mxnet-cpp/symbol.hpp index b82e060ca..aed963949 100644 --- a/cpp-package/include/mxnet-cpp/symbol.hpp +++ b/cpp-package/include/mxnet-cpp/symbol.hpp @@ -281,8 +281,8 @@ inline void Symbol::InferExecutorArrays( auto iter_req = grad_req_type.find(arg_name); if (iter_req != grad_req_type.end()) { grad_reqs->push_back(iter_req->second); - } else if (arg_name.rfind("data") == arg_name.length() - 4 - || arg_name.rfind("label") == arg_name.length() - 5) { + } else if (arg_name.rfind("data") != std::string::npos + || arg_name.rfind("label") != std::string::npos) { grad_reqs->push_back(OpReqType::kNullOp); } else { grad_reqs->push_back(OpReqType::kWriteTo); diff --git a/cpp-package/scripts/OpWrapperGenerator.py b/cpp-package/scripts/OpWrapperGenerator.py index 1b5f8b56b..ca430ec99 100644 --- a/cpp-package/scripts/OpWrapperGenerator.py +++ b/cpp-package/scripts/OpWrapperGenerator.py @@ -138,6 +138,8 @@ def __init__(self, opName = '', argName = '', typeString = '', descString = ''): self.defaultString = 
'Shape(' + self.defaultString[1:-1] + ")" elif self.type == 'dmlc::optional': self.defaultString = self.type + '(' + self.defaultString + ')' + elif self.type == 'dmlc::optional': + self.defaultString = self.type + '(' + self.defaultString + ')' elif typeString.startswith('caffe-layer-parameter'): self.defaultString = 'textToCaffeLayerParameter(' + self.MakeCString(self.defaultString) + ')' hasCaffe = True @@ -221,14 +223,14 @@ def GetOpDefinitionString(self, use_name, indent=0): if arg.isEnum and use_name: # comments ret = ret + self.GenDescription(arg.description, \ - '/*! \\breif ', \ + '/*! \\brief ', \ ' * ') ret = ret + " */\n" # definition ret = ret + arg.enum.GetDefinitionString(indent) + '\n' # create function comments ret = ret + self.GenDescription(self.description, \ - '/*!\n * \\breif ', \ + '/*!\n * \\brief ', \ ' * ') for arg in self.args: if arg.name != 'symbol_name' or use_name: diff --git a/cpp-package/tests/ci_test.sh b/cpp-package/tests/ci_test.sh index 57007f3a8..4a17d8d34 100755 --- a/cpp-package/tests/ci_test.sh +++ b/cpp-package/tests/ci_test.sh @@ -36,6 +36,9 @@ cp ../../build/cpp-package/example/lenet_with_mxdataiter . cp ../../build/cpp-package/example/resnet . ./resnet 5 +cp ../../build/cpp-package/example/inception_bn . +./inception_bn 5 + cp ../../build/cpp-package/example/mlp . ./mlp @@ -50,3 +53,5 @@ cp ../../build/cpp-package/example/mlp_gpu . cp ../../build/cpp-package/example/test_score . ./test_score 0.93 + +sh unittests/unit_test_mlp_csv.sh diff --git a/dev_menu.py b/dev_menu.py new file mode 100755 index 000000000..7329b446e --- /dev/null +++ b/dev_menu.py @@ -0,0 +1,239 @@ +#!/usr/bin/env python3 + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an +# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +# KIND, either express or implied. See the License for the +# specific language governing permissions and limitations +# under the License. + +# -*- coding: utf-8 -*- +"""Tool to ease working with the build system and reproducing test results""" + +import argparse +import os +import sys +from subprocess import check_call +import shlex +from ci.util import retry, remember_cwd +from typing import List +from collections import OrderedDict +import logging +import yaml +import shutil + +DEFAULT_PYENV=os.environ.get('DEFAULT_PYENV','py3_venv') +DEFAULT_PYTHON=os.environ.get('DEFAULT_PYTHON','python3') +DEFAULT_CMAKE_OPTIONS=os.environ.get('DEFAULT_CMAKE_OPTIONS','cmake_options.yml') + +class Confirm(object): + def __init__(self, cmds): + self.cmds = cmds + + def __call__(self): + resp = input("This will run the following command(s) '{}' are you sure? 
yes / no: ".format(self.cmds)) + while True: + if resp.lower() == 'yes': + handle_commands(self.cmds) + return + elif resp.lower() == 'no': + return + else: + resp = input("Please answer yes or no: ") + +class CMake(object): + def __init__(self, cmake_options_yaml=DEFAULT_CMAKE_OPTIONS, cmake_options_yaml_default='cmake/cmake_options.yml'): + if os.path.exists(cmake_options_yaml): + self.cmake_options_yaml = cmake_options_yaml + else: + self.cmake_options_yaml = cmake_options_yaml_default + logging.info('Using {} for CMake configuration'.format(self.cmake_options_yaml)) + self.cmake_options = None + self.read_config() + + def read_config(self): + assert os.path.isfile(self.cmake_options_yaml) + with open(self.cmake_options_yaml, 'r') as f: + self.cmake_options = yaml.load(f) + + def _cmdlineflags(self): + res = [] + for opt,v in self.cmake_options.items(): + res.append('-D{}={}'.format(opt,v)) + return res + + def cmake_command(self) -> str: + """ + :return: Cmake command to run given the options + """ + cmd_lst = ['cmake'] + cmd_lst.extend(self._cmdlineflags()) + return cmd_lst + + def __call__(self, build_dir='build', generator='Ninja', build_cmd='ninja'): + logging.info("CMake / {} build in directory {}".format( + generator, os.path.abspath(build_dir))) + cmd_lst = self.cmake_command() + os.makedirs(build_dir, exist_ok=True) + with remember_cwd(): + os.chdir(build_dir) + cmd_lst.extend(['-G{}'.format(generator), '..']) + logging.info('Executing: {}'.format('\t\n'.join(cmd_lst))) + check_call(cmd_lst) + logging.info('Now building') + check_call(shlex.split(build_cmd)) + +def create_virtualenv(venv_exe, pyexe, venv) -> None: + logging.info("Creating virtualenv in %s with python %s", venv, pyexe) + if not (venv_exe and pyexe and venv): + logging.warn("Skipping creation of virtualenv") + return + check_call([venv_exe, '-p', pyexe, venv]) + activate_this_py = os.path.join(venv, 'bin', 'activate_this.py') + # Activate virtualenv in this interpreter + exec(open(activate_this_py).read(), dict(__file__=activate_this_py)) + check_call(['pip', 'install', '--upgrade','--force-reinstall', '-e', 'python']) + check_call(['pip', 'install', '-r', 'tests/requirements.txt']) + +def create_virtualenv_default(): + create_virtualenv('virtualenv', DEFAULT_PYTHON, DEFAULT_PYENV) + logging.info("You can use the virtualenv by executing 'source %s/bin/activate'", DEFAULT_PYENV) + +COMMANDS = OrderedDict([ + ('[Local] BUILD CMake/Ninja (using cmake_options.yaml (cp cmake/cmake_options.yml .) and edit) ({} virtualenv in "{}")'.format(DEFAULT_PYTHON, DEFAULT_PYENV), + [ + CMake(), + create_virtualenv_default, + ]), + ('[Local] Python Unit tests', + "./py3_venv/bin/nosetests -v tests/python/unittest/" + ), + ('[Website and docs build] Will build to docs/_build/html/', + "ci/docker/runtime_functions.sh deploy_docs"), + ('[Docker] sanity_check. 
Check for linting and code formatting.', + "ci/build.py --platform ubuntu_cpu /work/runtime_functions.sh sanity_check"), + ('[Docker] Python3 CPU unittests', + [ + "ci/build.py --platform ubuntu_cpu /work/runtime_functions.sh build_ubuntu_cpu_openblas", + "ci/build.py --platform ubuntu_cpu /work/runtime_functions.sh unittest_ubuntu_python3_cpu", + ]), + ('[Docker] Python3 GPU unittests', + [ + "ci/build.py --platform ubuntu_gpu /work/runtime_functions.sh build_ubuntu_gpu", + "ci/build.py --nvidiadocker --platform ubuntu_gpu /work/runtime_functions.sh unittest_ubuntu_python3_gpu", + ]), + ('[Docker] Python3 GPU+MKLDNN unittests', + [ + "ci/build.py --platform ubuntu_gpu /work/runtime_functions.sh build_ubuntu_gpu_cmake_mkldnn", + "ci/build.py --nvidiadocker --platform ubuntu_gpu /work/runtime_functions.sh unittest_ubuntu_python3_gpu", + ]), + ('[Docker] Python3 CPU Intel MKLDNN unittests', + [ + "ci/build.py --platform ubuntu_cpu /work/runtime_functions.sh build_ubuntu_cpu_mkldnn", + "ci/build.py --platform ubuntu_cpu /work/runtime_functions.sh unittest_ubuntu_python3_cpu", + ]), + ('[Docker] Python3 ARMv7 unittests (QEMU)', + [ + "ci/build.py -p armv7", + "ci/build.py -p test.arm_qemu ./runtime_functions.py run_ut_py3_qemu" + ]), + ('Clean (RESET HARD) repository (Warning! erases local changes / DATA LOSS)', + Confirm("ci/docker/runtime_functions.sh clean_repo")) +]) + +def clip(x, mini, maxi): + return min(max(x,mini), maxi) + +@retry((ValueError, RuntimeError), 3, delay_s = 0) +def show_menu(items: List[str], header=None) -> int: + print('\n-- MXNet dev menu --\n') + def hr(): + print(''.join(['-']*30)) + if header: + print(header) + hr() + for i,x in enumerate(items,1): + print('{}. {}'.format(i,x)) + hr() + choice = int(input('Choose option> ')) - 1 + if choice < 0 or choice >= len(items): + raise RuntimeError('Choice must be between {} and {}'.format(1, len(items))) + return choice + +def handle_commands(cmds) -> None: + def handle_command(cmd): + logging.info("Executing command: %s",cmd) + check_call(shlex.split(cmd)) + + if type(cmds) is list: + for cmd in cmds: + handle_commands(cmd) + elif type(cmds) is str: + handle_command(cmds) + elif callable(cmds): + cmds() + else: + raise RuntimeError("handle_commands(cmds): argument should be str or List[str] but is %s", type(cmds)) + +def use_menu_ui(args) -> None: + command_list = list(COMMANDS.keys()) + if hasattr(args, 'choice') and args.choice and args.choice[0].isdigit(): + choice = int(args.choice[0]) - 1 + else: + choice = show_menu(command_list, 'Available actions') + handle_commands(COMMANDS[command_list[choice]]) + +def build(args) -> None: + """Build using CMake""" + venv_exe = shutil.which('virtualenv') + pyexe = shutil.which(args.pyexe) + if not venv_exe: + logging.warn("virtualenv wasn't found in path, it's recommended to install virtualenv to manage python environments") + if not pyexe: + logging.warn("Python executable %s not found in path", args.pyexe) + if args.cmake_options: + cmake = CMake(args.cmake_options) + else: + cmake = CMake() + cmake() + create_virtualenv(venv_exe, pyexe, args.venv) + +def main(): + logging.getLogger().setLevel(logging.INFO) + parser = argparse.ArgumentParser(description="""Utility for compiling and testing MXNet easily""") + parser.set_defaults(command='use_menu_ui') + + subparsers = parser.add_subparsers(help='sub-command help') + build_parser = subparsers.add_parser('build', help='build with the specified flags from file') + build_parser.add_argument('cmake_options', nargs='?', + help='File 
containing CMake options in YAML') + build_parser.add_argument('-v', '--venv', + type=str, + default=DEFAULT_PYENV, + help='virtualenv dir') + build_parser.add_argument('-p', '--pyexe', + type=str, + default=DEFAULT_PYTHON, + help='python executable') + build_parser.set_defaults(command='build') + + menu_parser = subparsers.add_parser('menu', help='jump to menu option #') + menu_parser.set_defaults(command='use_menu_ui') + menu_parser.add_argument('choice', nargs=1) + + args = parser.parse_args() + globals()[args.command](args) + return 0 + +if __name__ == '__main__': + sys.exit(main()) diff --git a/docs/Doxyfile b/docs/Doxyfile index cee3942bc..bf6344d4f 100644 --- a/docs/Doxyfile +++ b/docs/Doxyfile @@ -770,7 +770,7 @@ WARN_LOGFILE = # spaces. # Note: If this tag is empty the current directory is searched. -INPUT = include src/common +INPUT = include src/common cpp-package/include/mxnet-cpp # This tag can be used to specify the character encoding of the source files # that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses @@ -805,7 +805,7 @@ RECURSIVE = YES # Note that relative paths are relative to the directory from which doxygen is # run. -EXCLUDE = +EXCLUDE = 3rdparty # The EXCLUDE_SYMLINKS tag can be used to select whether or not files or # directories that are symbolic links (a Unix file system feature) are excluded diff --git a/docs/Jenkinsfile b/docs/Jenkinsfile index e20d984f0..676204291 100644 --- a/docs/Jenkinsfile +++ b/docs/Jenkinsfile @@ -21,14 +21,14 @@ // See documents at https://jenkins.io/doc/book/pipeline/jenkinsfile/ // timeout in minutes -max_time = 120 +max_time = 180 -node('restricted-mxnetlinux-cpu') { +node('restricted-utility') { // Loading the utilities requires a node context unfortunately checkout scm utils = load('ci/Jenkinsfile_utils.groovy') } -utils.assign_node_labels(linux_cpu: 'restricted-mxnetlinux-cpu', linux_gpu: 'restricted-mxnetlinux-gpu', linux_gpu_p3: 'restricted-mxnetlinux-gpu-p3', windows_cpu: 'restricted-mxnetwindows-cpu', windows_gpu: 'restricted-mxnetwindows-gpu') +utils.assign_node_labels(utility: 'restricted-utility', linux_cpu: 'restricted-mxnetlinux-cpu') utils.main_wrapper( core_logic: { diff --git a/docs/Jenkinsfile-dev b/docs/Jenkinsfile-dev index 169ebe13e..760a2f953 100644 --- a/docs/Jenkinsfile-dev +++ b/docs/Jenkinsfile-dev @@ -23,12 +23,12 @@ // timeout in minutes max_time = 120 -node('mxnetlinux-cpu') { +node('utility') { // Loading the utilities requires a node context unfortunately checkout scm utils = load('ci/Jenkinsfile_utils.groovy') } -utils.assign_node_labels(linux_cpu: 'mxnetlinux-cpu', linux_gpu: 'mxnetlinux-gpu', linux_gpu_p3: 'mxnetlinux-gpu-p3', windows_cpu: 'mxnetwindows-cpu', windows_gpu: 'mxnetwindows-gpu') +utils.assign_node_labels(utility: 'utility', linux_cpu: 'mxnetlinux-cpu') utils.main_wrapper( core_logic: { diff --git a/docs/README.md b/docs/README.md index c21836edd..80463cc68 100644 --- a/docs/README.md +++ b/docs/README.md @@ -17,9 +17,13 @@ git clone --recursive https://github.com/apache/incubator-mxnet.git mxnet cd mxnet/docs/build_version_doc ./setup_docs_ubuntu.sh cd ../../ -make docs USE_OPENMP=1 +make docs USE_OPENMP=1 SPHINXOPTS=-W ``` +OpenMP speeds things up and will work on Ubuntu if you used the `setup_docs_ubuntu.sh` script. +The `-W` Sphinx option enforces "warnings as errors". This will help you debug your builds and get them through CI. 
+**CI will not let a PR through if it breaks the website.** Refer to the [MXNet Developer wiki's documentation guide](https://cwiki.apache.org/confluence/display/MXNET/Documentation+Guide) for troubleshooting tips. + For more information on each API's documentation dependencies, how to serve the docs, or how to build the full website with each legacy MXNet version, refer to the following links: * [Dependencies](https://github.com/apache/incubator-mxnet/tree/master/docs/build_version_doc#dependencies) - required before you build the docs diff --git a/docs/_static/js/auto_module_index.js b/docs/_static/js/auto_module_index.js index 7f4e18565..8527a71ed 100644 --- a/docs/_static/js/auto_module_index.js +++ b/docs/_static/js/auto_module_index.js @@ -1,3 +1,23 @@ +/*! + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/* Customizations to the Sphinx auto module plugin output */ function auto_index(module) { $(document).ready(function () { // find all classes or functions @@ -10,15 +30,17 @@ function auto_index(module) { var html = "

    "; for (var i = 0; i < targets.length; ++i) { - var id = $(targets[i]).attr('id'); - // remove 'mxnet.' prefix to make menus shorter - var id_simple = id.replace(/^mxnet\./, ''); - html += "
  • " + id_simple + "
  • "; + var id = $(targets[i]).attr('id'); + if ( id ) { + // remove 'mxnet.' prefix to make menus shorter + var id_simple = id.replace(/^mxnet\./, ''); + html += "
  • " + id_simple + "
  • "; + } } html += "
"; li_node.append(html); }); -} \ No newline at end of file +} diff --git a/docs/_static/js/clipboard.js b/docs/_static/js/clipboard.js new file mode 100644 index 000000000..75b6af353 --- /dev/null +++ b/docs/_static/js/clipboard.js @@ -0,0 +1,778 @@ +/*! + * clipboard.js v1.6.1 + * https://zenorocha.github.io/clipboard.js + * + * Licensed MIT © Zeno Rocha + */ +(function(f){if(typeof exports==="object"&&typeof module!=="undefined"){module.exports=f()}else if(typeof define==="function"&&define.amd){define([],f)}else{var g;if(typeof window!=="undefined"){g=window}else if(typeof global!=="undefined"){g=global}else if(typeof self!=="undefined"){g=self}else{g=this}g.Clipboard = f()}})(function(){var define,module,exports;return (function e(t,n,r){function s(o,u){if(!n[o]){if(!t[o]){var a=typeof require=="function"&&require;if(!u&&a)return a(o,!0);if(i)return i(o,!0);var f=new Error("Cannot find module '"+o+"'");throw f.code="MODULE_NOT_FOUND",f}var l=n[o]={exports:{}};t[o][0].call(l.exports,function(e){var n=t[o][1][e];return s(n?n:e)},l,l.exports,e,t,n,r)}return n[o].exports}var i=typeof require=="function"&&require;for(var o=0;o 0 && arguments[0] !== undefined ? arguments[0] : {}; + + this.action = options.action; + this.emitter = options.emitter; + this.target = options.target; + this.text = options.text; + this.trigger = options.trigger; + + this.selectedText = ''; + } + }, { + key: 'initSelection', + value: function initSelection() { + if (this.text) { + this.selectFake(); + } else if (this.target) { + this.selectTarget(); + } + } + }, { + key: 'selectFake', + value: function selectFake() { + var _this = this; + + var isRTL = document.documentElement.getAttribute('dir') == 'rtl'; + + this.removeFake(); + + this.fakeHandlerCallback = function () { + return _this.removeFake(); + }; + this.fakeHandler = document.body.addEventListener('click', this.fakeHandlerCallback) || true; + + this.fakeElem = document.createElement('textarea'); + // Prevent zooming on iOS + this.fakeElem.style.fontSize = '12pt'; + // Reset box model + this.fakeElem.style.border = '0'; + this.fakeElem.style.padding = '0'; + this.fakeElem.style.margin = '0'; + // Move element out of screen horizontally + this.fakeElem.style.position = 'absolute'; + this.fakeElem.style[isRTL ? 'right' : 'left'] = '-9999px'; + // Move element to the same position vertically + var yPosition = window.pageYOffset || document.documentElement.scrollTop; + this.fakeElem.style.top = yPosition + 'px'; + + this.fakeElem.setAttribute('readonly', ''); + this.fakeElem.value = this.text; + + document.body.appendChild(this.fakeElem); + + this.selectedText = (0, _select2.default)(this.fakeElem); + this.copyText(); + } + }, { + key: 'removeFake', + value: function removeFake() { + if (this.fakeHandler) { + document.body.removeEventListener('click', this.fakeHandlerCallback); + this.fakeHandler = null; + this.fakeHandlerCallback = null; + } + + if (this.fakeElem) { + document.body.removeChild(this.fakeElem); + this.fakeElem = null; + } + } + }, { + key: 'selectTarget', + value: function selectTarget() { + this.selectedText = (0, _select2.default)(this.target); + this.copyText(); + } + }, { + key: 'copyText', + value: function copyText() { + var succeeded = void 0; + + try { + succeeded = document.execCommand(this.action); + } catch (err) { + succeeded = false; + } + + this.handleResult(succeeded); + } + }, { + key: 'handleResult', + value: function handleResult(succeeded) { + this.emitter.emit(succeeded ? 
'success' : 'error', { + action: this.action, + text: this.selectedText, + trigger: this.trigger, + clearSelection: this.clearSelection.bind(this) + }); + } + }, { + key: 'clearSelection', + value: function clearSelection() { + if (this.target) { + this.target.blur(); + } + + window.getSelection().removeAllRanges(); + } + }, { + key: 'destroy', + value: function destroy() { + this.removeFake(); + } + }, { + key: 'action', + set: function set() { + var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : 'copy'; + + this._action = action; + + if (this._action !== 'copy' && this._action !== 'cut') { + throw new Error('Invalid "action" value, use either "copy" or "cut"'); + } + }, + get: function get() { + return this._action; + } + }, { + key: 'target', + set: function set(target) { + if (target !== undefined) { + if (target && (typeof target === 'undefined' ? 'undefined' : _typeof(target)) === 'object' && target.nodeType === 1) { + if (this.action === 'copy' && target.hasAttribute('disabled')) { + throw new Error('Invalid "target" attribute. Please use "readonly" instead of "disabled" attribute'); + } + + if (this.action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) { + throw new Error('Invalid "target" attribute. You can\'t cut text from elements with "readonly" or "disabled" attributes'); + } + + this._target = target; + } else { + throw new Error('Invalid "target" value, use a valid Element'); + } + } + }, + get: function get() { + return this._target; + } + }]); + + return ClipboardAction; + }(); + + module.exports = ClipboardAction; +}); + +},{"select":5}],8:[function(require,module,exports){ +(function (global, factory) { + if (typeof define === "function" && define.amd) { + define(['module', './clipboard-action', 'tiny-emitter', 'good-listener'], factory); + } else if (typeof exports !== "undefined") { + factory(module, require('./clipboard-action'), require('tiny-emitter'), require('good-listener')); + } else { + var mod = { + exports: {} + }; + factory(mod, global.clipboardAction, global.tinyEmitter, global.goodListener); + global.clipboard = mod.exports; + } +})(this, function (module, _clipboardAction, _tinyEmitter, _goodListener) { + 'use strict'; + + var _clipboardAction2 = _interopRequireDefault(_clipboardAction); + + var _tinyEmitter2 = _interopRequireDefault(_tinyEmitter); + + var _goodListener2 = _interopRequireDefault(_goodListener); + + function _interopRequireDefault(obj) { + return obj && obj.__esModule ? 
obj : { + default: obj + }; + } + + function _classCallCheck(instance, Constructor) { + if (!(instance instanceof Constructor)) { + throw new TypeError("Cannot call a class as a function"); + } + } + + var _createClass = function () { + function defineProperties(target, props) { + for (var i = 0; i < props.length; i++) { + var descriptor = props[i]; + descriptor.enumerable = descriptor.enumerable || false; + descriptor.configurable = true; + if ("value" in descriptor) descriptor.writable = true; + Object.defineProperty(target, descriptor.key, descriptor); + } + } + + return function (Constructor, protoProps, staticProps) { + if (protoProps) defineProperties(Constructor.prototype, protoProps); + if (staticProps) defineProperties(Constructor, staticProps); + return Constructor; + }; + }(); + + function _possibleConstructorReturn(self, call) { + if (!self) { + throw new ReferenceError("this hasn't been initialised - super() hasn't been called"); + } + + return call && (typeof call === "object" || typeof call === "function") ? call : self; + } + + function _inherits(subClass, superClass) { + if (typeof superClass !== "function" && superClass !== null) { + throw new TypeError("Super expression must either be null or a function, not " + typeof superClass); + } + + subClass.prototype = Object.create(superClass && superClass.prototype, { + constructor: { + value: subClass, + enumerable: false, + writable: true, + configurable: true + } + }); + if (superClass) Object.setPrototypeOf ? Object.setPrototypeOf(subClass, superClass) : subClass.__proto__ = superClass; + } + + var Clipboard = function (_Emitter) { + _inherits(Clipboard, _Emitter); + + /** + * @param {String|HTMLElement|HTMLCollection|NodeList} trigger + * @param {Object} options + */ + function Clipboard(trigger, options) { + _classCallCheck(this, Clipboard); + + var _this = _possibleConstructorReturn(this, (Clipboard.__proto__ || Object.getPrototypeOf(Clipboard)).call(this)); + + _this.resolveOptions(options); + _this.listenClick(trigger); + return _this; + } + + /** + * Defines if attributes would be resolved using internal setter functions + * or custom functions that were passed in the constructor. + * @param {Object} options + */ + + + _createClass(Clipboard, [{ + key: 'resolveOptions', + value: function resolveOptions() { + var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {}; + + this.action = typeof options.action === 'function' ? options.action : this.defaultAction; + this.target = typeof options.target === 'function' ? options.target : this.defaultTarget; + this.text = typeof options.text === 'function' ? 
options.text : this.defaultText; + } + }, { + key: 'listenClick', + value: function listenClick(trigger) { + var _this2 = this; + + this.listener = (0, _goodListener2.default)(trigger, 'click', function (e) { + return _this2.onClick(e); + }); + } + }, { + key: 'onClick', + value: function onClick(e) { + var trigger = e.delegateTarget || e.currentTarget; + + if (this.clipboardAction) { + this.clipboardAction = null; + } + + this.clipboardAction = new _clipboardAction2.default({ + action: this.action(trigger), + target: this.target(trigger), + text: this.text(trigger), + trigger: trigger, + emitter: this + }); + } + }, { + key: 'defaultAction', + value: function defaultAction(trigger) { + return getAttributeValue('action', trigger); + } + }, { + key: 'defaultTarget', + value: function defaultTarget(trigger) { + var selector = getAttributeValue('target', trigger); + + if (selector) { + return document.querySelector(selector); + } + } + }, { + key: 'defaultText', + value: function defaultText(trigger) { + return getAttributeValue('text', trigger); + } + }, { + key: 'destroy', + value: function destroy() { + this.listener.destroy(); + + if (this.clipboardAction) { + this.clipboardAction.destroy(); + this.clipboardAction = null; + } + } + }], [{ + key: 'isSupported', + value: function isSupported() { + var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut']; + + var actions = typeof action === 'string' ? [action] : action; + var support = !!document.queryCommandSupported; + + actions.forEach(function (action) { + support = support && !!document.queryCommandSupported(action); + }); + + return support; + } + }]); + + return Clipboard; + }(_tinyEmitter2.default); + + /** + * Helper function to retrieve attribute value. + * @param {String} suffix + * @param {Element} element + */ + function getAttributeValue(suffix, element) { + var attribute = 'data-clipboard-' + suffix; + + if (!element.hasAttribute(attribute)) { + return; + } + + return element.getAttribute(attribute); + } + + module.exports = Clipboard; +}); + +},{"./clipboard-action":7,"good-listener":4,"tiny-emitter":6}]},{},[8])(8) +}); \ No newline at end of file diff --git a/docs/_static/js/copycode.js b/docs/_static/js/copycode.js index f9ebd64ab..d42e99277 100644 --- a/docs/_static/js/copycode.js +++ b/docs/_static/js/copycode.js @@ -1,4 +1,23 @@ -/*Copy code to clipboard*/ +/*! + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +/* Copy code to clipboard */ LANG_GP = {'default':'>>> ', 'python':'>>> ' , 'scala':'scala>', 'julia':'julia> ', 'r':'> ', 'perl':'pdl>' , 'cpp':'', 'bash':'$ '}; function addBtn() { diff --git a/docs/_static/js/docversion.js b/docs/_static/js/docversion.js index f87c4587b..1119f4ec1 100644 --- a/docs/_static/js/docversion.js +++ b/docs/_static/js/docversion.js @@ -1,3 +1,23 @@ +/*! + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/* Set the version of the website */ function setVersion(){ let doc = window.location.pathname.match(/^\/(api\/.*)$/) || window.location.pathname.match(/^\/versions\/[^*]+\/(api\/.*)$/); if (doc) { @@ -11,7 +31,7 @@ function setVersion(){ $( el ).attr('href', versionedDoc); } }); - } + } } } diff --git a/docs/_static/js/navbar.js b/docs/_static/js/navbar.js index 0384194fa..5dde7d8ff 100644 --- a/docs/_static/js/navbar.js +++ b/docs/_static/js/navbar.js @@ -1,3 +1,23 @@ +/*! + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/* Custom navigation bar formatting */ var searchBox = $("#search-input-wrap"); var TITLE = ['/install/', '/gluon/', '/api/', '/docs/', '/community/' ]; var DOC_TITLE = ['/faq/', '/tutorials/', '/architecture/', '/model_zoo/']; @@ -25,7 +45,7 @@ function navbar() { $(this).hide; } else rightPos = $(this).offset().left + $(this).width(); - + if(isCovered) { plusMenuList.push($(this).clone()); $(this).hide(); @@ -38,7 +58,7 @@ function navbar() { } else $(this).show(); }); - + if(plusMenuList.length == 0) { $(".plusIcon").first().hide(); return; diff --git a/docs/_static/js/options.js b/docs/_static/js/options.js index e9225aafc..f4fde4e1f 100644 --- a/docs/_static/js/options.js +++ b/docs/_static/js/options.js @@ -1,4 +1,24 @@ -var versionSelect = defaultVersion = 'v1.3.0'; +/*! + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. 
The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/* Installation page display functions for install selector */ +var versionSelect = defaultVersion = 'v1.3.1'; var platformSelect = 'Linux'; var languageSelect = 'Python'; var processorSelect = 'CPU'; diff --git a/docs/_static/js/page.js b/docs/_static/js/page.js index caba7dd1b..425998d6d 100644 --- a/docs/_static/js/page.js +++ b/docs/_static/js/page.js @@ -1,3 +1,22 @@ +/*! + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + /* Generate url tracking for each page */ var protocol = location.protocol.concat("//"); var host = protocol.concat(window.location.host); diff --git a/docs/_static/js/search.js b/docs/_static/js/search.js index 9df970222..6a70b4ef5 100644 --- a/docs/_static/js/search.js +++ b/docs/_static/js/search.js @@ -1,9 +1,29 @@ +/*! + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/* Display functionality for the search feature */ $(document).ready(function () { var searchForm = $("#search-input-wrap").children("form").first(); searchForm.append('
'); searchForm.children("div").first().addClass("searchBox"); $(".searchBox").addClass("searchBoxNorm"); - + $('#searchIcon').click(function () { if($('#search-input-wrap').is(':hidden')) { $('#search-input-wrap').show(); @@ -16,4 +36,4 @@ $(document).ready(function () { $('#searchIcon span').addClass('glyphicon-search'); } }); -}); \ No newline at end of file +}); diff --git a/docs/_static/js/sidebar.js b/docs/_static/js/sidebar.js index 3e7ad41bf..65899d5dd 100644 --- a/docs/_static/js/sidebar.js +++ b/docs/_static/js/sidebar.js @@ -1,5 +1,24 @@ -/*Preprocess*/ -var LANG = ['python', 'c++', 'clojure', 'julia', 'perl', 'r', 'scala']; +/*! + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/* Customizations to the sphinx theme */ +var LANG = ['python', 'c++', 'clojure', 'julia', 'perl', 'r', 'scala', 'java']; var TITLE_WITH_LANG = ['/get_started/', '/tutorials/', '/faq/', '/architecture/', '/community/']; for(var i = 0; i < LANG.length; ++i) { TITLE_WITH_LANG.push('/api/' + LANG[i] + '/'); diff --git a/docs/_static/mxnet-theme/index.html b/docs/_static/mxnet-theme/index.html index a6cae4de0..302b17322 100644 --- a/docs/_static/mxnet-theme/index.html +++ b/docs/_static/mxnet-theme/index.html @@ -23,9 +23,9 @@
-          MXNet 1.3.0 Released
-          We're excited to announce the release of MXNet 1.3.0! This release includes Clojure bindings, Gluon package enhancements, ONNX export, TensorRT integration, and more!
-          Learn More
+          MXNet 1.3.1 Released
+          This release includes bug fixes, performance improvements, and documentation updates.
+          Learn More

A 60-minute Gluon Crash Course
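For context, copycode.js pairs the bundled Clipboard library above with its per-language prompt table (LANG_GP) so that copied snippets drop the interpreter prompt. A minimal sketch of that wiring, assuming a hypothetical ".copy-btn" trigger class and prompt list rather than the selectors the theme actually generates:

/* Illustrative sketch only; ".copy-btn" and the prompt list below are assumptions. */
var clipboard = new Clipboard('.copy-btn', {
  // Copy the neighbouring <pre> block, stripping any leading prompt
  // such as ">>> " (python) or "$ " (bash) from each line.
  text: function (trigger) {
    var code = trigger.parentNode.querySelector('pre').innerText;
    return code.replace(/^(>>> |\$ |scala> |julia> |> |pdl> )/gm, '');
  }
});

clipboard.on('success', function (e) {
  e.clearSelection();   // drop the temporary selection once the copy succeeds
});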

diff --git a/docs/_static/mxnet-theme/navbar.html b/docs/_static/mxnet-theme/navbar.html index 977d005d9..50c3debba 100644 --- a/docs/_static/mxnet-theme/navbar.html +++ b/docs/_static/mxnet-theme/navbar.html @@ -27,6 +27,7 @@

  • Perl
  • R
  • Scala
+ • Java
@@ -80,6 +81,7 @@
  • Perl
  • R
  • Scala
+ • Java
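Since document.execCommand support varies by browser, the copy buttons are only worth injecting when the command is actually available. A possible guard, assuming addBtn() from copycode.js is the entry point that creates the buttons (the theme may wire this differently):

/* Assumed guard; not part of this patch. */
if (typeof Clipboard !== 'undefined' && Clipboard.isSupported()) {
  addBtn();   // inject "copy" buttons next to code blocks (defined in copycode.js)
}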